Examining the quality of our assessment system


With most of the 59 programs rated for this round, we are beginning an analysis of our assessment system.

To recap, we drafted a rubric about a year ago and tested it with the Honors College self-study and a made-up Dept of Rocket Science self-study. We revised the rubric through discussions among staff and some external input. In December we used the rubric to rate reports on 3 of the 4 dimensions (leaving off the action plan dimension in the first round). Based on observations from the December round, we revised the rubric again in mid-spring 2010.

We tested the new rubric at a state-wide assessment conference workshop in late April, using a program’s report from December. The group’s ratings agreed pretty well with our staff’s (data previously blogged).

The May and August versions of the rubric are nearly identical, with only minor changes in nuance based on the May experience.

The figure below examines the OAI staff ratings on each of the four rubric dimensions. It reports the absolute value of the difference in ratings for each pair of raters, a measure of inter-rater agreement. We conclude that our ratings are in high agreement: 54% of rater pairs agree within 0.5 point (85/156), and 83% agree within 1.0 point. We also observe that the distribution of agreement is similar in character across all four rubric dimensions.
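For readers who want to reproduce this kind of summary with their own ratings, here is a minimal sketch of the pairwise absolute-difference calculation described above. It is not the analysis code we used; the function name, the example ratings, and the assumption that ratings for one dimension sit in a simple list are all illustrative.

from itertools import combinations

def pairwise_agreement(ratings, thresholds=(0.5, 1.0)):
    # Absolute difference in rating for every pair of raters,
    # then the fraction of pairs at or under each threshold.
    diffs = [abs(a - b) for a, b in combinations(ratings, 2)]
    return {t: sum(d <= t for d in diffs) / len(diffs) for t in thresholds}

# Hypothetical example: four raters scoring one program on one dimension.
print(pairwise_agreement([3.0, 3.5, 2.5, 3.0]))
# -> {0.5: 0.8333..., 1.0: 1.0}

Applied across all programs and dimensions, the same tallies yield the percentages reported above.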
