In a Chronicle story about the release of the NILOA report “More than you think, Less than we need,” George Kuh says, “What we want is for assessment to become a public, shared responsibility, so there should be departmental leadership.” Kuh, Director of the National Institute for Learning Outcomes Assessment, goes on: “So we’re going to spend some time looking at the impact of the Voluntary System of Accountability. It’s one thing for schools to sign up; it’s another to post the information and to show that they’re actually doing something with it. It’s not about posting a score on a Web site; it’s about doing something with the data.”
He doesn’t take the next step and ask whether it is even possible for schools to actually do anything with the data collected from the VSA (or from its CLA component on learning outcomes), or who has access to the criteria used in the assessment and could therefore unpack the meaning of the CLA numbers: students? Faculty? Anyone?
Cathy Davidson made a stir in the Chronicle last summer with her experiment in crowd-sourcing grading. Her subsequent reflection (10/30) on the utility (or not) of grades as feedback is a microcosm of Kuh’s comments: a single number is a poor feedback mechanism. Davidson’s efforts place an emphasis on feedback that is richer than what a letter grade can provide.
We have previously commented on the weakness of grades and of the Collegiate Learning Assessment (CLA) as feedback that can lead to learning or meaningful change. Since then, we have been piloting mechanisms to give rich feedback to learners, gathering that feedback from multiple perspectives, and opening up the criteria to discussion (and revision) by a community of practice that extends beyond the university walls. Our first report of that work, “Harvesting Gradebook,” was presented at AAC&U in January 2009. Below we show results from the second iteration of those experiments, in progress now.
WSU’s newly created Office of Assessment and Innovation has been extending this Harvesting concept to include gathering feedback to improve academic programs’ learning outcomes assessment. The goal is to use harvesting feedback as a central component of the university’s “system of assessment.” Having a robust system is a requirement of WSU’s NWCCU accreditation.
The data below come from a junior-senior level design course where students are working in teams on a semester-long project, an extension of last year’s work. The course uses a rubric derived from WSU’s critical thinking rubric to provide formative peer feedback to the projects. The same tools will be used later in the semester to “harvest” feedback and grades from a professional community. (more detail on the 2008 version of the course and its outcomes.)
2009 results to date
One of the concerns expressed in some of the replies to Cathy Davidson’s work was the unreliability of student (peer) ratings. We find that students are un-normed and their judgments vary, but with well-expressed criteria in a rubric, students are able to provide valuable textual feedback, and _on average_ their judgments are useful.
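The aggregation step is simple to sketch. The snippet below is a minimal illustration, not the Harvesting Gradebook’s actual implementation; the criterion names and the rating scale are assumed for the example.

```python
# Hypothetical sketch: although individual (un-normed) peer reviewers
# vary, averaging their ratings per rubric criterion yields a more
# stable signal. Criterion names and scale are illustrative only.
from statistics import mean

# Each reviewer rates every criterion; individual judgments differ.
ratings = {
    "reviewer_1": {"evidence": 4, "context": 3, "conclusions": 5},
    "reviewer_2": {"evidence": 5, "context": 4, "conclusions": 3},
    "reviewer_3": {"evidence": 3, "context": 5, "conclusions": 4},
}

criteria = ratings["reviewer_1"].keys()
averages = {c: mean(r[c] for r in ratings.values()) for c in criteria}
print(averages)  # the per-criterion means smooth out reviewer variance
```

Here the three reviewers disagree on every single criterion, yet the per-criterion averages converge, which is the sense in which un-normed judgments are “useful on average.”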
Explore the interactive site used to report harvested feedback (updated 12/15) to better appreciate what is illustrated in the static images below. (From time to time the Google Gadgets fail to update; try refreshing the page.)
Figure 1. Comments written by students. You can scroll around to see more of the comments at the online site that records the results. Students are able to provide rich feedback when the criteria are structured.
Figure 2. Radar graph of the numeric scores provided by the student peer evaluators. You can interact with the graph (hiding and showing data sets) at the online site that records the results.
Figure 3. Comparison of averages of self-evaluation and peer-evaluation. In our experience, self-evaluation is more generous than that of any other reviewer group. The order we have seen, from most to least generous, is: self, peer, instructor, industry. http://wsuctlt.wordpress.com/2009/01/20/rich-assessment-from-a-harvesting-grade-book/
Figure 4. Perceptions of Readiness for Employment. This question asks for an overall appraisal of how ready the authors of the project are for employment. We have found that over the course of a term, students’ readiness for employment increases in the eyes of employers. http://wsuctlt.wordpress.com/2009/01/20/evidence-for-the-harvesting-gradebook%E2%80%99s-impact-on-learning/
Figure 5. Using a formula that takes the instructor’s grading curve and applies it to the rubric scores, peers can assign a grade to the work. This, like the employment readiness in Figure 4, is an overall estimation of the quality of the work (the coins of two different realms). Importantly, as we and Davidson note, this grade by itself does not give much feedback, but in this context the implicit meaning of the grade can be readily unpacked by the student.
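The post does not spell out the curve formula, but one plausible minimal version maps the average rubric score through instructor-supplied grade thresholds. Everything below (the threshold values, the 6-point scale, the function names) is an assumed sketch, not the actual Harvesting Gradebook code.

```python
# Hypothetical sketch: translate peers' rubric scores into a letter
# grade via instructor-defined curve thresholds. All values here are
# illustrative assumptions; the real formula is not given in the post.

def average_rubric_score(ratings):
    """Mean of the numeric rubric ratings a project received."""
    return sum(ratings) / len(ratings)

def apply_curve(score, curve):
    """curve: (min_score, grade) pairs sorted from highest cutoff down."""
    for threshold, grade in curve:
        if score >= threshold:
            return grade
    return curve[-1][1]  # fall through to the lowest grade

# Assumed instructor's curve on a 6-point rubric scale
curve = [(5.0, "A"), (4.0, "B"), (3.0, "C"), (2.0, "D"), (0.0, "F")]

peer_ratings = [4.5, 4.0, 5.0, 3.5]  # one team's peer reviewer scores
print(apply_curve(average_rubric_score(peer_ratings), curve))
```

Because the grade is derived from the rubric scores, a student can trace it back to the per-criterion ratings and comments, which is the sense in which its implicit meaning can be unpacked.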