Developing WSU’s Guide to Effective Student Learning Outcomes Assessment Rubric

The outline below charts the steps that Washington State University’s Office of Assessment and Innovation (OAI) and its stakeholders went through to develop the Guide to Effective Program Assessment rubric used in the WSU System of Student Learning Outcomes Assessment.

Each step below gives a description, a generalized model, and a specific example of its application at WSU’s Office of Assessment and Innovation (OAI).

Step: Determine the scope of the program assessment initiative.
Generalized model: Solicit stakeholder input about the purpose of the assessment. Determine which aspects of program assessment will be most useful for the institution to assess.
At WSU: Gary met with provosts and assistant deans to frame the general parameters.

Step: Develop a framework for assessing assessment at the meta-level.
Generalized model: Research existing literature (and tools) to begin delineating rubric dimensions.
At WSU: Developed a framework for the rubric based on the WASC Evidence Guide, writings from Peter Ewell and others, and the Transformative Assessment Rubric, an EDUCAUSE Learning Initiative project to evaluate the responsiveness of assessment plans.

Step: Flesh out criteria for each dimension (October 2009).
Generalized model: Begin drafting rubric criteria, periodically reviewing with stakeholders and revising as needed.
At WSU: Fleshed out rubric dimensions with specific performance criteria and shared them with the Assessment Liaison Council and external reviewers, gathering feedback about what was clear, what was not, and how much detail was useful. (Guide to Assessment Rubric (Oct 2009))

Step: Test a draft rubric (November 2009).
Generalized model: Solicit program assessment plans for low-stakes review.
At WSU: Solicited a program’s report written for WSU’s 2009 NWCCU accreditation self-study and tested the rubric with OAI and external reviewers.

Step: Pilot an initial assessment cycle (December 2009; ratings done December 2009–February 2010).
Generalized model: Solicit program assessment plans for formative review. Norm raters and launch an initial review. Return rubric-based feedback and scores to programs and report program scores to the institution.
At WSU: Via program liaisons, all WSU undergraduate programs were required to submit a first draft of an assessment self-study by December 18, 2009. Programs were given a template with areas for each rubric dimension. In the first cycle, only three of the four dimensions were required. Reviewers participated in a norming session, but in the initial phase all scores were reconciled if they were more than a point apart or if there was a split at the “competency” level; occasionally, a third rater was required. Assessment plans were scored with the rubric, but it was emphasized that this was an initial and provisional rating. (Guide to Assessment (Dec 2009))

Step: Revise the rubric and assessment process (February–March 2010).
Generalized model: Solicit feedback from programs as well as reviewers about what worked and what didn’t. Revise the rubric and assessment process to be more useful and efficient.
At WSU: The rubric was revised based on feedback captured from notes reviewers made as they used the rubric, feedback from programs via an online survey, and informal observations programs made to their OAI contacts. The four dimensions remained essentially the same, but the number of levels, the level names, and the wording of the rubric criteria changed considerably. All wording was recast in a positive format (what was happening versus what was missing), and a definition of terms was added as a cover page. The six-point scale remained the same.

Step: Test the rubric (April–May 2010).
Generalized model: Solicit feedback from stakeholders in the process of using the rubric on a sample report.
At WSU: The revised rubric was tested internally by rating a report written for the December 2009 cycle. The rubric and report were also used by an audience at a regional assessment conference. Despite not norming together, the internal and external reviewers agreed fairly closely.

Step: Launch a second assessment cycle (May–August 2010).
Generalized model: Solicit program assessment plans. Norm raters on the revised rubric and begin the review process. Return rubric-based feedback and scores to programs and report program scores to the institution.
At WSU: Programs were required to submit a second draft of an assessment self-study by May 17, 2010 (with an option to delay to August 20), using a similar template with areas for each rubric dimension. In the second cycle, all four dimensions were required. Reviewers participated in an extensive norming session over several days, and the rubric was tweaked slightly. (Guide to Effective Program Assessment Rubric (May 2010) and the slightly revised Guide to Effective Program Learning Outcomes Assessment Rubric (August 2010))

Step: Review quantitative and qualitative evidence of the review process (October–December 2010).
At WSU: Studies of interrater agreement were conducted, along with collection of observations from using the rubric (the scoring tool had a place for reviewers to make comments about the rubric in the context of a specific rating effort). These were used to begin framing the next revision of the rubric and template.
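The reconciliation rule from the first pilot cycle (reconcile when two ratings are more than a point apart, or when they split at the “competency” level) can be sketched as follows. This is a hypothetical helper, not OAI’s actual tooling, and the position of the competency cut on the six-point scale is an assumption:

```python
# Hypothetical sketch of the reconciliation rule; the actual OAI review
# was a manual process. Scores are on the rubric's 6-point scale.
# COMPETENCY_CUT is an assumption: the boundary at which two ratings
# count as "split at the competency level".

COMPETENCY_CUT = 4  # assumed position of the competency boundary

def needs_reconciliation(score_a: float, score_b: float) -> bool:
    """Reconcile (possibly with a third rater) when two ratings are more
    than a point apart, or when they straddle the competency boundary."""
    more_than_a_point_apart = abs(score_a - score_b) > 1
    split_at_competency = (score_a >= COMPETENCY_CUT) != (score_b >= COMPETENCY_CUT)
    return more_than_a_point_apart or split_at_competency
```

For example, ratings of 3 and 5 would be reconciled (more than a point apart), as would 3.5 and 4 (a split at the assumed boundary), while 5 and 5.5 would stand.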

OAI Interrater analysis (version 1) excel datasheet

Step: Draft the next revision of the rubric and report template (halted December 2010).
Generalized model: Review literature and data from previous uses of the rubric; look for patterns of rater disagreement.
At WSU: OAI staff began examining their experiences and the kinds of inter-rater disagreements, reviewed the literature for key performance criteria, and examined notes left by reviewers as they used the rubric May–September. The resulting notes were intended as input to the next revision. (Guide to Effective Program Learning Outcomes Assessment Rubric (Dec 2010 notes)) In addition, the template programs used to complete the report was revised to better prompt the writing. (Program Learning Outcomes Assessment Template Revision (Dec 2010))

CHEA 2011 Award Submitted

CHEA has an annual awards competition for innovative assessment efforts. Attached is the WSU 2011 application, submitted last Friday, describing our pilot year of institutional assessment.

WSU CHEA 2011 Award Application

Analysis of Inter-rater agreement 2009-10


Thanks for telling me that you completed rating Honors also.

Our average ratings for that program were 5, 5.5, 4.5, and 5, so we are a little lower than you, but in the same category (“Integrating”) in all but one dimension.

You can see all our results here:

We are exploring two measures of inter-rater reliability: scores within 1 point of each other, and scores within the same category.

In terms of scores, see the graph, which we think is good: 83% of our scores are within 1 point of each other.

Regarding being in the same category, we are not doing as well; it seems that we often come close but straddle the category lines.

What is valuable about your rating two programs (one high and one low) is that we can begin to get a sense that you see our measure in the same way that we do. Another kind of test we need to do is to see whether outsiders agree with us in the messy middle.

We have more work like this to do with external stakeholders to see how well our tool plays in wider arenas.
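The two measures described above could be computed along these lines. This is an illustrative sketch; the category cut points and score pairs are hypothetical, not the rubric’s actual levels or our data:

```python
# Illustrative sketch of the two agreement measures: the fraction of
# paired scores within 1 point of each other, and the fraction falling
# in the same rubric category. CATEGORY_CUTS is hypothetical, not the
# rubric's actual level boundaries.

CATEGORY_CUTS = [2, 4, 5]  # assumed category boundaries on the 6-point scale

def category(score: float) -> int:
    """Index of the category bin a score falls into."""
    return sum(score >= cut for cut in CATEGORY_CUTS)

def agreement(pairs):
    """Return (fraction within 1 point, fraction in the same category)
    for a list of (rater_a, rater_b) score pairs."""
    within_1 = sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)
    same_cat = sum(category(a) == category(b) for a, b in pairs) / len(pairs)
    return within_1, same_cat

# Made-up pairs showing how two scores can be within a point of each
# other yet straddle a category line (the problem noted above).
pairs = [(5, 5.5), (4.5, 5), (3.9, 4.1), (2, 4)]
w1, sc = agreement(pairs)  # w1 = 0.75, sc = 0.25
```

Note how the second and third pairs agree on the within-1-point measure but disagree on category, which is exactly the straddling pattern we are seeing.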


On 10/13/10 4:40 PM, “Lee” wrote:

> Hi Nils,
> I sent in my review of Honors.  I gave them all top marks.  Was I right?  They
> struck me as being the Bar we’re all trying to reach!  It’s almost like you
> wrote the review rubric to FIT what they’ve done!?
> Lee
> ________________________________
> From: Nils Peterson []
> Sent: Tuesday, September 28, 2010 3:47 PM
> To: Lee
> Subject: Another WSU program to review
> Lee,
> Rather than Business, I’m giving you our Honors program. This is a program
> that has worked with our unit for several years and we know them.
> I think you will find it contrasts from Biology’s report in ways that may help
> you exercise more of the rubric’s scale.
> Thanks for your interest and help

Norming and workflow for May 17 self study reviews

This is another post linked to the notes for the OAI Team.

There are three images:

Results of norming on the Sociology self-study from December, using the May 17 rubric. This chart compares OAI results with the results Gary & Nils brought back from the Assessment, Teaching and Learning conference at the end of April. In that setting about 40 conference attendees rated Sociology, with about 10 people rating each dimension; no person rated all four dimensions. This session was done without norming, as part of sharing the WSU system.

A key agreement in the norming process was how to treat the self-studies.
1) the whole document is being read to inform each dimension, even if the particular section of the report does not describe the features sought in the rubric.
2) no “benefit of the doubt” or “inferring;” if the program did not describe a feature of its process, then it is to be treated as missing — an attempt to simulate the practice of outside accreditors. It was hoped that the feedback would be used by raters to communicate to programs in constructive ways when the score did not reflect what was known to exist.

Work flow for rating reports walks through the rater and OAI Contact roles and the flow of the documents and data. This process is somewhat changed from the December process, with more focus on how the raters reconcile and provide feedback to the OAI contact, and more emphasis on how to provide feedback to the program.

Plan for first week is a sketch of the timeline and scheduling to help OAI think about how to coordinate among various raters and reviewers.

Norming on the Guide to Assessment Rubric

Results of norming on the Guide to Assessment rubric: this image captures the data from OAI’s norming work on the first two dimensions of the May 17 Guide to Assessment rubric. The sample being assessed is the Sociology self-study from December 2009.

The red data are from Gary & Nils’ presentation to the Assessment, Teaching and Learning conference (“Compared to what?”) at the end of April in Vancouver. ATL is a statewide conference hosted by the State Board of Community and Technical Colleges. We shared the rubric and the Sociology self-study, divided the audience of 40 into 4 groups, and had each group rate the self-study using the rubric, without norming. We captured their data and displayed it in real time as a radar plot. From that effort we recruited several audience members to serve as external raters of the May 17 self-studies.
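A radar plot of this kind can be sketched with matplotlib. The dimension names and ratings below are illustrative placeholders, not the data collected at ATL:

```python
# Sketch of a radar (polar) plot of rubric ratings across four
# dimensions, similar to the real-time display described above.
# Dimension names and ratings are illustrative placeholders.
import math

import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

dimensions = ["Dimension 1", "Dimension 2", "Dimension 3", "Dimension 4"]
ratings = [5, 5.5, 4.5, 5]  # one average rating per dimension, 6-point scale

# One angle per dimension; repeat the first point to close the polygon.
angles = [2 * math.pi * i / len(dimensions) for i in range(len(dimensions))]
angles.append(angles[0])
values = ratings + ratings[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
ax.set_ylim(0, 6)
fig.savefig("radar.png")
```

Plotting each rating group as a separate polygon on the same axes gives the live comparison view used in the session.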

Awards Ceremony 3/13- we will be recognizing exceptional leadership in college and program level assessment

From: Ater-Kranov, Ashley
Sent: Thursday, April 08, 2010 1:55 PM
To: OAI.Personnel
Subject: Awards Ceremony 3/13- we will be recognizing exceptional leadership in college and program level assessment

Please see the attached invitation. The awards list is below.

Office of Assessment & Innovation
Berat Hakan Gurocak – Assessment Leadership for Mechanical Engineering, WSU Vancouver
Douglas Jasmer – Assessment Leadership for the Veterinary Medicine Program
Roberta Kelly – Assessment Leadership for the Murrow College of Communication
Kimberlee Kidwell – Assessment Leadership for the Agricultural and Food Sciences Program
Kimberlee Kidwell – Assessment Leadership for the College of Agriculture, Human and Natural Resources
Kimberlee Kidwell – Assessment Leadership for the Integrated Plant Sciences Program
Lisa McIntyre – Assessment Leadership for the Department of Sociology
John Turpin – Assessment Leadership for the Department of Interior Design at WSU Spokane
Phillip Waite – Assessment Leadership for the Department of Landscape Architecture
Libby Walker – Assessment Leadership for the Honors College

An Invitation to the University College Awards Ceremony
Further information about these awards appeared in these blog posts:

HEC Board Report (Feb 25, 2010)

Sent to and accepted by WSU administration.
Attached report: HEC Board Outcomes Report (March 2010 FINAL)

Updated OAI web presence

The OAI website has been updated, along with a new landing page for UniversityPortfolio/2009-2010 (the ‘dirdash’ page) that provides summary data from the Dec 18 self-study effort. In addition, preparations are in place for the ‘May Folder’, with materials for writing the May self-study, and a more general ‘Resources’ folder for Outcomes Assessment resources not related to writing a particular self-study. Finally, conversations are open with the Provost’s office about revisions to the University’s Accreditation page. Still missing is public access to the root of UniversityPortfolio, with its timeline history of the assessment activities.

Status of reports for 2009


Hi Larry,
Here is where we are at.

1. “Ready” means we have it and are reviewing it now (or have reviewed and rated it).
2. “Not ready” means we are either talking with or working with programs to get their planning documents (and plans) in shape.
3. “Not received” means just that, though the overall response is complicated by the limbo status of professional programs. We will review the few we have received and work with their interested people.

At this point, we will “score” them not using the full scale of the rubric, but categorically into three bins which will be sorted on Monday.  All documents that have been shared with us will receive written feedback.  The three bins:

1. On Target and going well (high)
2. On Target with work
3. Doubtful that adequate progress will be made this spring

We will press to meet zero week with category #2.  I suspect category #3 represents the meetings with deans that you may want to initiate.

I’m taking my hardworking crew (those that have lasted through this adventure) to Riccos for some Nogg of the Egg at 2:00.  If you are free, please consider joining us.

And have a great holiday!

Attached: a spreadsheet with details, by program, on the status of 2009 self-study reports.

Recognition from VP Quality, Curriculum, and Assessment at AAC&U

From a 12/15/2009 webcast, Terry Rhodes, Vice President for Quality, Curriculum, and Assessment at AAC&U:

Questioner: “How is VALUE & the power of rubrics to assess learning playing in the VSA [Voluntary System of Accountability] sphere?”

Rhodes: “[VSA is] very concerned about comparability among institutions, but they have indicated they would love campuses to use rubrics and to report on them; they want to have some way to provide comparability. I think again the work that is going on at Washington State begins to provide a way to do that. It’s not necessarily a score, but it is a wonderful, rich way to convey the multiplicity and multiple dimensions of learning in a graphic way that is easily represented and easily communicated.”

Questioner: “Are there any accreditor responses to the use of rubrics (vs. VSA test scores) to share?”

Rhodes: “All of the accrediting workshops, at SACS and at Middle States, are very heavy into that. Northwest is one area that has lagged a little behind on this, but I think with Washington State pushing them they are going to get more enthusiastic. All of the accreditors have actually viewed rubrics, the use of them, and the reporting of learning using rubrics as much more useful for campuses than a single test score.”