How a rubric can communicate

OAI has been finishing up its 2009-10 cycle of reviews of program-level assessment; see the University’s portfolio for details about the process and the results.

One of the responses to a program, regarding communication with stakeholders, summarized the utility of a rubric as a communication tool:

Under “Communication” the report states: “Program Objectives and Outcomes will be more extensively discussed with the students in classes to encourage more participation in the assessment and improvement process.”

A programmatic assessment rubric could be a very useful tool to encourage students and other stakeholders to participate in the assessment and improvement process. For example, a rubric:

  • Provides a reference point for students to consult repeatedly as they monitor their own learning and develop the skills of self-assessment. Students are supported in becoming better judges of quality in their own and others’ work.
  • Supports the development of a sense of shared expectations among students, faculty, staff, and external stakeholders.
  • Provides evaluators and those whose work is being evaluated with rich, detailed descriptions of what is and is not being learned, by breaking outcomes down into dimensions and dimensions into criteria.
  • Provides criteria to shape and guide students’ engagement with one another and with course content.
  • Promotes a shared understanding among faculty, students, and stakeholders of the program outcomes.

Chart of raw scores on 3 related engineering assessment reports, Fall 2010


Kimberly,
Thanks for making this graph; it’s a helpful visualization of the raters’ differences. You, Jayme, and I have confirmed that the three reports are substantially identical. The other observation you made the other day was that Raters 1 & 2 were together on one program, which has led to that program having a different rating from the other two programs.

I re-made your chart, adding a solid black line for the average of the 6 ratings (as if all raters read one report, which in effect they did). I added two dashed black lines at 1/2 point above and below the average; points between the dashed lines are ratings within 1/2 point of the average.

I see 3 blue, 1 red, and 1 orange rating outside the 1/2-point agreement band.
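
(For anyone who wants to reproduce the band-and-outlier check, here is a minimal Python sketch of the calculation. The rating values in it are hypothetical placeholders, since the raw scores live in the chart and are not reproduced in this message.)

# Minimal sketch of the average / agreement-band calculation.
# The ratings below are hypothetical placeholders (6 raters x 4 dimensions),
# not the actual scores from the three engineering reports.
ratings = {
    "Dim 1": [2.0, 3.0, 2.5, 2.5, 2.0, 2.5],
    "Dim 2": [3.0, 2.5, 2.5, 3.0, 2.5, 2.5],
    "Dim 3": [1.5, 2.5, 2.0, 2.0, 2.0, 2.0],
    "Dim 4": [1.5, 2.5, 2.0, 2.0, 2.5, 2.0],
}

TOLERANCE = 0.5  # the 1/2-point band around the average

for dim, scores in ratings.items():
    avg = sum(scores) / len(scores)                # solid black line
    low, high = avg - TOLERANCE, avg + TOLERANCE   # dashed black lines
    outside = [(rater + 1, s) for rater, s in enumerate(scores)
               if s < low or s > high]
    print(f"{dim}: average {avg:.2f}, band [{low:.2f}, {high:.2f}], "
          f"outside band (rater, score): {outside}")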

One conversation could be about the quality of our internal norming (this was an inadvertent test of that).

More pressing now, perhaps, is how we represent this data back to the college and the programs involved.

Gary & I chatted about my first graph and so I removed Rater 1, the most significant outlier and re-made the chart. In this 2nd chart, only one rating of 20 is outside the 1/2 point tolerance now. I conclude we should update Process Actions with the average scores for all 3 programs. The resulting averages of 5 raters are:

Dim 1: 2.40
Dim 2: 2.70
Dim 3: 2.00
Dim 4: 2.10
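
(The same sketch extends to the second chart: exclude Rater 1 and re-average the remaining five ratings per dimension. Again, the values in the sketch are placeholders; the 2.40 / 2.70 / 2.00 / 2.10 figures above come from the actual reports, not from the toy data.)

# Continuing the hypothetical `ratings` dict from the sketch above:
# drop Rater 1 (index 0) and re-average the remaining 5 ratings per dimension.
averages_without_rater1 = {
    dim: round(sum(scores[1:]) / len(scores[1:]), 2)
    for dim, scores in ratings.items()
}
print(averages_without_rater1)  # one average per dimension, Rater 1 excluded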


CHEA 2011 Award Submitted

CHEA has an annual awards competition (http://chea.org/2011_CHEA_Award.html) for innovative assessment efforts. Attached is the WSU 2011 application, submitted last Friday, describing our pilot year of institutional assessment.

WSU CHEA 2011 Award Application