Update on Harvesting Gradebook

In a Chronicle story about the release of the NILOA report “More Than You Think, Less Than We Need,” George Kuh says, “What we want is for assessment to become a public, shared responsibility, so there should be departmental leadership.” Kuh, Director of the National Institute for Learning Outcomes Assessment, goes on, “So we’re going to spend some time looking at the impact of the Voluntary System of Accountability. It’s one thing for schools to sign up, it’s another to post the information and to show that they’re actually doing something with it. It’s not about posting a score on a Web site; it’s about doing something with the data.”

He doesn’t take the next step and ask whether it is even possible for schools to actually do anything with the data collected from the VSA (or its Collegiate Learning Assessment (CLA) component on learning outcomes), or who has access to the criteria used in the assessment and could unpack the meaning of the CLA numbers: Students? Faculty? Anyone?

Cathy Davidson made a stir in the Chronicle last summer with her experiment in crowd-sourcing grading. Her subsequent reflection (10/30) on the utility (or not) of grades as feedback is a microcosm of Kuh’s comments: a single number is a poor feedback mechanism. Davidson’s efforts emphasize the importance of feedback that is richer than what a letter grade can provide.

We have previously commented on the inability of grades and of the CLA to provide feedback that can lead to learning or meaningful change. Since then, we have been piloting mechanisms that give rich feedback to learners, gather that feedback from multiple perspectives, and open the criteria to discussion (and revision) by a community of practice that extends beyond the university walls. Our first report of that work, “Harvesting Gradebook,” was presented at AAC&U in January 2009. Below we show results from the second iteration of those experiments, in progress now.

WSU’s newly created Office of Assessment and Innovation (OAI) has been extending this Harvesting concept to include gathering feedback to improve academic programs’ learning outcomes assessment. The goal is to use harvesting feedback as a central component of the university’s “system of assessment.” Having a robust system is a requirement of WSU’s NWCCU accreditation.

The data below come from a junior-senior level design course in which students work in teams on a semester-long project, an extension of last year’s work. The course uses a rubric derived from WSU’s critical thinking rubric to provide formative peer feedback on the projects. The same tools will be used later in the semester to “harvest” feedback and grades from a professional community. (More detail is available on the 2008 version of the course and its outcomes.)

2009 Results to Date

One of the concerns expressed in some of the replies to Cathy Davidson’s work was the unreliability of student (peer) ratings. We find that students are un-normed and their judgments vary, but with well-expressed criteria in a rubric, students are able to provide valuable textual feedback and _on average_ their judgments are useful.
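
To make the “on average” point concrete, here is a minimal sketch of aggregating un-normed peer ratings per rubric criterion. The criteria, scale, and scores are invented for illustration; this is not the actual harvesting tool, just the arithmetic behind the claim that the mean of varying raters is a usable signal.

```python
# Aggregate un-normed peer ratings per rubric criterion (hypothetical data).
from statistics import mean, stdev

peer_ratings = {  # criterion -> scores from five peer raters on a 1-6 scale
    "Problem identification": [4, 5, 3, 4, 5],
    "Evidence": [3, 4, 4, 2, 4],
    "Conclusions": [5, 5, 4, 5, 3],
}

for criterion, scores in peer_ratings.items():
    # Individual raters vary (see the spread), but the mean is more stable.
    print(f"{criterion}: mean={mean(scores):.2f} "
          f"(spread={stdev(scores):.2f}, n={len(scores)})")
```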

Explore the interactive site used to report harvested feedback (updated 12/15) to better appreciate what is illustrated in the static images below. (From time to time the Google Gadgets fail to update; try refreshing the page.)

Figure 1. Comments written by students. You can scroll around to see more of the comments at the online site that records the results. Students are able to provide rich feedback when the criteria are structured.

Figure 2. Radar graph of the numeric scores provided by the student peer evaluators. You can interact with the graph (hiding and showing data sets) at the online site that records the results.

Figure 3. Comparison of averages of self-evaluation and peer-evaluation. In our experience, Self is more generous than any other reviewer group. The order we have seen, from most to least generous, is: Self, Peer, Instructor, Industry. See http://wsuctlt.wordpress.com/2009/01/20/rich-assessment-from-a-harvesting-grade-book/

Figure 4. Perceptions of Readiness for Employment. This question asks for an overall appraisal of how ready the authors of the project are for employment. We have found that over the course of a term, students’ readiness for employment increases in the eyes of employers. See http://wsuctlt.wordpress.com/2009/01/20/evidence-for-the-harvesting-gradebook%E2%80%99s-impact-on-learning/

Figure 5. Using a formula that takes the instructor’s grading curve and applies it to the rubric scores, peers can assign a grade to the work. This, like the employment-readiness question in Figure 4, is an overall estimation of the quality of the work (the coins of two different realms). Importantly, as we and Davidson note, this grade by itself does not give much feedback, but in this context the implicit meaning of the grade can be readily unpacked by the student.
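
To illustrate the kind of formula Figure 5 describes, here is a minimal sketch, assuming the instructor’s “curve” can be expressed as score cutoffs that map an average rubric score to a letter grade. The 1-6 scale, the cutoffs, and the function name are all hypothetical, not WSU’s actual formula.

```python
# Map an average rubric score to a letter grade via instructor-supplied
# cutoffs (all values here are hypothetical).
def grade_from_rubric(scores, curve):
    """Average the rubric scores, then look up the grade in the curve."""
    avg = sum(scores) / len(scores)
    for cutoff, grade in curve:  # curve is sorted from highest cutoff down
        if avg >= cutoff:
            return grade
    return "F"

instructor_curve = [(5.0, "A"), (4.0, "B"), (3.0, "C"), (2.0, "D")]
print(grade_from_rubric([4.5, 5.0, 4.0, 4.5], instructor_curve))  # prints B
```

Because the grade is derived from the criterion scores, a student can trace it back to the rubric rows that produced it, which is exactly the “unpacking” described above.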

On TAs and Their Role in Assessment

A liaison reports that the graduate school has suggested it may be inappropriate for TAs to be asked to participate in program assessment.

OAI addressed that issue with this note:

Ultimately we need to communicate the understanding that teaching, learning, and assessment cannot be disaggregated. I think it is a shared understanding that a college has responsibility for preparing its graduate students for the profession, and that might include preparing them to teach. It is also clear that in many colleges, graduate students are teaching. Helping graduate students understand issues of assessment in that context is then an obligation to both the graduate students and the undergrads they teach. I don’t think it is clear that a college needs to fully delegate responsibility for the professional development of its graduate students to the graduate school on this issue since, after all, the college has a better understanding of the requirements of the profession.

Recap of the Planning Request

The Plan We Need

The goal of the plan is to establish the team and strategy to be implemented in the spring.

We are working to be sure ALL programs and ALL teaching faculty, adjuncts, and TAs can report some level of involvement in the program outcomes work, including, minimally, identification of program outcomes on their syllabi, participation in the analysis of results, and awareness of the action plan (how the program will close the loop).

It is reasonable to expect that the spring assessment will be a pilot, and that the action plan might therefore focus on ways to make the assessment more robust and valid, to encourage greater participation, and to make better use of embedded activities faculty have already been doing in their courses.

So specifically, we hope the plan will include the development of a team and system (rubric criterion #1) and the beginnings of the outcomes and measurement strategy (rubric criterion #2):

1. Identification of WHO in the program will be involved. (#1)
2. Identification of WHICH core courses and WHAT representative activities students will be doing that will be assessed. (#2)
3. How that approach to direct measurement will be aligned with other existing indirect measures, such as student evaluations, or, if it is not yet aligned, how that alignment will be addressed in the future (the plan to plan). (#2)

The goal of this plan to plan is to make sure logistics are in place so that we can increase the likelihood that assessment will take place next spring. The spring deadline is necessary so that programs can also do the analysis and complete a report with action items for fall 2010, when the updated report to the Northwest Commission is due.

Other Items
  • The rubric revisions are underway, thanks to excellent feedback from this group and staff in OAI.
  • We will have the rubric revisions ready along with two mock model self-studies. The models will be aligned with the rubric and should help ongoing deliberations about the process and the template.
  • You can glimpse the rating of the Honors self-study along with associated comments at this link.

Planning for Spring 2010 University-wide Assessment Activity

Attached are two images of the white board created during this meeting.

—— Forwarded Message
From: Joshua Yeidel
Date: Thu, 29 Oct 2009 12:22:11 -0700
To: ctltmessages
Subject: Planning for Spring 2010 University-wide Assessment Activity

Our 11AM “Rain King Design” meeting reviewed the actions needed to be ready for the university-wide assessment activity (for want of a better name at this point), which we expect to complete in May 2010. The following action points came up:

  • Gary to update RK project list to combine Rubric Revision, Template, Instructions, and Model Self-Study (“Rocket Science”)
  • Participant Support should include a self-help resource for liaisons and OAI — Ashley to create project in RK project list.
  • OAI staff will need to be connected to Program Points when most of them have been appointed — part of Program Liaisons project
  • Dashboards will be needed for various parts — Joshua (deferred)
  • Timeline of milestones for UWAA will need to be developed as a consensus with liaisons — expand “Program Liaisons” project to include timeline and status and information-sharing with liaisons
  • Need a plan for public information sharing (what and why of UWAA)
  • Need a way to capture feedback about the Fall pilot
  • Need to clone “2008-2009” tab to do a second pilot with the “Rocket Science” model for usability, and should include linkup to report results — Joshua, Theron
  • Keep the Thursday 11AM time, but meet only as needed — Joshua

Please update relevant listings in the Rain King Project Tasks list for tomorrow morning’s staff meeting.

— Joshua

—— End of Forwarded Message

October 27 Agenda, Planning Guide, Template Draft

Handouts for the October 27 meeting. We decided to revisit the template. More summary will follow. (Agenda 1 is Larry’s; agenda 2 includes revisions following discussions with OAI leadership.)

From: James, Larry G
Sent: Tuesday, October 27, 2009 10:59 AM
To: Brown, Gary
Subject: Agenda

After Beta Rain King, Liaison asks Purpose of Initiative

A liaison asks in the online forum:

“With this experience behind us, I would ask for a quick rehash of the goals for the assessment of assessments. We may be better prepared to comment at this point.”

I suggest this response:

1. Establish an institutional system of assessment aligned with NWCCU principles.
2. NWCCU further requires us to assess our assessment.
3. Establish a system that meets 1 and 2 in a way that is responsive to program diversity but is unified by principles of good assessment.

Academic Effectiveness Liaison Council Meeting, Tuesday, October 27

Academic Effectiveness Liaison Council
Date: October 27, 2009
Start Time: 2:00 PM
End Time: 3:00 PM
Dialing Instructions: 5707955
Origin: Pullman (French 442)
Location: Spokane (SHSB 260), Tri-Cities (TEST 228), Vancouver (VCLS 308J)
Details: Academic Effectiveness Liaison Council
Special Request: Room reserved from 12:00 – 12:30 for setup.
Event Contact:

Donna Cofield     5-4854

Sneak Preview

1. Liaison identification update
  • About half of our liaisons have helped identify the point people for EACH of the programs in their college. These point people will be critical in our overall communication strategy. Remember, even if you plan on serving as the point person for each of the programs in your college, we need to be sure we know what and how many programs need to be identified.
2. Forum feedback on assessment process and rubric
  • We have already received good feedback on the rubrics and learned a great deal from it. We will have a short version very shortly to complement the longer version, and there are other notable refinements gained from liaisons’ critical vantage. Keep the input coming!
3. Conversations and support for process
  • We have already received positive responses to this effort from Old Dominion, Mount Royal, the University of Kentucky, and Penn State, who have heard of this work through the mock-up we presented with the TLT Group and are very interested in partnering with us (an external review exchange).
  • A conversation with NWCCU leadership has been scheduled for later this week.
  • The Western Cooperative for Educational Technologies has already requested a presentation on this work for next year in La Jolla. (Who wants to go?)
4. We still need your participation in identifying program contacts and in doing the pilot Honors Self-Study assessment.

The task again:
1. Go to https://universityportfolio.wsu.edu/20082009/Pages/default.aspx
2. Scroll down to the ‘Honors Review Rubric’ link near the bottom, which opens an online survey.
3. The first page of that link is instructions, at the bottom of which is a ‘begin’ button.

Remember, when we have worked through refinements, this work should provide a new template for reporting that streamlines the rating. By writing reports in template ‘chunks,’ we will be able to concatenate them into various formats to address the different reporting requests we get from professional accreditors, the HEC Board, OFM, and anybody else who might appreciate WSU’s commitment to improving student learning.
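
As a loose illustration of the chunking idea (the chunk names, audiences, and formats below are hypothetical, not a description of any existing WSU template or tool), report sections stored once could be concatenated into different documents for different requesters:

```python
# Assemble different reports from shared self-study "chunks" (hypothetical).
chunks = {
    "outcomes": "Program learning outcomes ...",
    "measures": "Direct and indirect measures ...",
    "results": "Spring assessment results ...",
    "actions": "Action items for closing the loop ...",
}

reports = {  # audience -> which chunks that report concatenates, in order
    "professional accreditor": ["outcomes", "measures", "results", "actions"],
    "HEC Board": ["outcomes", "results"],
}

for audience, keys in reports.items():
    body = "\n\n".join(chunks[k] for k in keys)
    print(f"--- Report for {audience} ---\n{body}\n")
```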

Recommended approach:

  • Print out a hard copy of the rubric (already being revised thanks to feedback at our forums).
  • Read through it to get the flavor.
  • Read the Honors Self-Study.
  • Rate the study using the online survey/rubric.  (You can cut and paste language from the rubric into the comment box on the rating form online, and that will help Honors understand the criteria you selected as important to your review of their self-study, and it will help us refine the rubric.)

In the news today:
October 26, 2009, 02:53 PM ET
Most Colleges Try to Assess Student Learning, Survey Finds
A large majority of American colleges make at least some formal effort to assess their students’ learning, but most have few or no staff members dedicated to doing so. Those are among the findings of a survey report released Monday by the National Institute for Learning Outcomes Assessment, a year-old project based at Indiana University and the University of Illinois. Of more than 1,500 provosts’ offices that responded to the survey, nearly two-thirds said their institutions had two or fewer employees assigned to student assessment. Among large research universities, almost 80 percent cited a lack of faculty engagement as the most serious barrier to student-assessment projects.