Re: chart of raw scores on 3 related engineering assessment reports, Fall 2010
Thanks for making this graph; it’s a helpful visualization of the raters’ differences. You, Jayme, and I have confirmed that the three reports are substantially identical. The other observation you made the other day was that Raters 1 & 2 were together on one program, which has led to that program receiving a different rating from the other two.
I remade your chart, adding a solid black line for the average of all 6 ratings (as if all raters had read one report, which in effect they did). I added two dashed black lines at 1/2 point above and below the average; points between the dashed lines are ratings within 1/2 point of the average.
I see 3 blue, 1 red, and 1 orange rating outside the 1/2-point agreement band.
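For concreteness, here is a minimal sketch in Python of the band check the chart encodes: average the six ratings, set the dashed lines at 1/2 point above and below, and flag anything outside. The scores are hypothetical placeholders, not our actual Fall 2010 ratings.

    # Minimal sketch of the agreement-band check; scores here are
    # hypothetical placeholders, not the actual Fall 2010 ratings.
    TOLERANCE = 0.5  # the 1/2-point agreement band

    scores = {  # rater -> raw score (placeholder values)
        "Rater 1": 2.0,
        "Rater 2": 3.5,
        "Rater 3": 3.0,
        "Rater 4": 3.25,
        "Rater 5": 3.0,
        "Rater 6": 2.75,
    }

    mean = sum(scores.values()) / len(scores)       # solid black line
    low, high = mean - TOLERANCE, mean + TOLERANCE  # dashed black lines

    # Ratings outside [mean - 1/2, mean + 1/2] fall outside the band.
    outside = {r: s for r, s in scores.items() if not (low <= s <= high)}
    print(f"average = {mean:.2f}, band = [{low:.2f}, {high:.2f}]")
    print("outside the band:", outside)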
One conversation could be about the quality of our internal norming (this was an inadvertent test of that).
More pressing now, perhaps, is how we report this data back to the college and the programs involved.
Gary & I chatted about my first graph and so I removed Rater 1, the most significant outlier and re-made the chart. In this 2nd chart, only one rating of 20 is outside the 1/2 point tolerance now. I conclude we should update Process Actions with the average scores for all 3 programs. The resulting averages of 5 raters are: