The table below charts the steps that Washington State University’s Office of Assessment and Innovation (OAI) and its stakeholders went through to develop the Guide to Effective Program Assessment rubric used in the WSU System of Student Learning Outcomes Assessment.
| Description | Generalized Model | Specific example of application at WSU’s Office of Assessment and Innovation |
| --- | --- | --- |
| Determine the scope of the program assessment initiative. | Solicit stakeholder input about the purpose of the assessment. Determine which aspects of program assessment will be most useful for the institution to assess. | Gary met with provosts and assistant deans to frame the general parameters. |
| Develop a framework for assessing assessment at the meta-level. | Research existing literature (and tools) to begin delineating rubric dimensions. | Developed a framework for the rubric based on the WASC Evidence Guide, writings from Peter Ewell and others, and the Transformative Assessment Rubric, an EDUCAUSE Learning Initiative project to evaluate the responsiveness of assessment plans. |
| Flesh out criteria for each dimension (October 2009). | Begin drafting rubric criteria, periodically reviewing with stakeholders and revising as needed. | Fleshed out rubric dimensions with specific performance criteria and shared them with the Assessment Liaison Council and external reviewers, gathering feedback about what was clear, what was not, and how much detail was useful. (Guide to Assessment Rubric (Oct 2009)) |
| Test a draft rubric (November 2009). | Solicit program assessment plans for low-stakes review. | Solicited a program’s report written for WSU’s 2009 NWCCU accreditation self-study and tested the rubric with OAI and external reviewers. |
| Pilot an initial assessment cycle (December 2009; ratings done December 2009–February 2010). | Solicit program assessment plans for formative review. Norm raters and launch an initial review. Return rubric-based feedback and scores to programs and report program scores to the institution. | Via program liaisons, all WSU undergraduate programs were required to submit a first draft of an assessment self-study by December 18, 2009. Programs were given a template with areas for each rubric dimension. In the first cycle, only three of the four dimensions were required. Reviewers participated in a norming session, but in the initial phase all scores were reconciled if they were more than a point apart or if there was a split at the “competency” level (a minimal sketch of this rule appears after the table); occasionally, a third rater was required. Assessment plans were scored with the rubric, but it was emphasized that this was an initial and provisional rating. (Guide to Assessment (Dec 2009)) |
| Revise rubric and assessment process (February–March 2010). | Solicit feedback from programs as well as reviewers about what worked and what didn’t. Revise the rubric and assessment process to be more useful and efficient. | The rubric was revised based on feedback captured from notes reviewers made while using the rubric, feedback from programs via an online survey, and informal observations programs shared with their OAI contacts. The four dimensions remained essentially the same, but the number of levels, the level names, and the wording of the rubric criteria changed considerably. All wording was recast in a positive form (describing what was happening rather than what was missing), and a definition of terms was added as a cover page. The six-point scale remained the same. |
| Test the revised rubric (April–May 2010). | Solicit feedback from stakeholders in the process of using the rubric on a sample report. | The revised rubric was tested internally by rating a report written for the December 2009 cycle. The rubric and report were also used by an audience at a regional assessment conference. Despite not norming together, the internal and external reviewers agreed fairly closely. |
| Launch a second assessment cycle (May–August 2010). | Solicit program assessment plans. Norm raters on the revised rubric and begin the review process. Return rubric-based feedback and scores to programs and report program scores to the institution. | Programs were required to submit a second draft of an assessment self-study by May 17, 2010 (with an option to delay to August 20). They used a similar template with areas for each rubric dimension. In the second cycle, all four dimensions were required. Reviewers participated in an extensive norming session over several days, and the rubric was tweaked slightly. (Guide to Effective Program Assessment Rubric (May 2010) and the slightly revised Guide to Effective Program Learning Outcomes Assessment Rubric (August 2010)) |
| October–December 2010 | Review quantitative and qualitative evidence of the review process. | Studies of interrater agreement were conducted (an illustrative agreement computation appears after the table), along with collection of observations from using the rubric (the scoring tool had a place for reviewers to make comments about the rubric in the context of a specific rating effort). These were used to begin framing the next revision of the rubric and template. (OAI Interrater Analysis (version 1) Excel datasheet) |
| Draft the next revision of the rubric and report template (halted December 2010). | Review literature and data from previous uses of the rubric; look for patterns of rater disagreement. | OAI staff began examining their experiences and the kinds of inter-rater disagreements, reviewed the literature for key performance criteria, and examined notes left by reviewers as they used the rubric May–September. The resulting notes were intended as input to the next revision. (Guide to Effective Program Learning Outcomes Assessment Rubric (Dec 2010 notes)) In addition, the template that programs used to complete the report was revised to better prompt the writing. (Program Learning Outcomes Assessment Template Revision (Dec 2010)) |
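The reconciliation rule from the pilot cycle can be stated compactly. The sketch below is a hypothetical Python rendering, not OAI's actual procedure: the 1–6 scale orientation and the placement of the "competency" cut between 3 and 4 are assumptions for illustration.

```python
# A minimal sketch of the pilot-cycle reconciliation rule, assuming the
# six-point scale runs 1-6 and the "competency" cut falls between 3 and 4
# (the rubric's actual cut point is an assumption here, not documented).

COMPETENT = 4  # assumed lowest score counted as "competent"

def needs_reconciliation(score_a: int, score_b: int) -> bool:
    """True when two raters' scores must be reconciled (possibly by a
    third rater): more than a point apart, or split at competency."""
    far_apart = abs(score_a - score_b) > 1
    split = (score_a >= COMPETENT) != (score_b >= COMPETENT)
    return far_apart or split

# A 3/5 pair triggers both conditions; a 4/5 pair triggers neither.
assert needs_reconciliation(3, 5)
assert not needs_reconciliation(4, 5)
```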
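The October–December 2010 work is described only as "studies of interrater agreement," without naming a statistic. As one common approach on a six-point rubric, the sketch below reports exact and adjacent (within one point) agreement; the rating pairs are invented for illustration, not WSU data.

```python
# A hypothetical sketch of an interrater-agreement summary: exact
# agreement and adjacent agreement (within one point) across paired
# ratings on the six-point scale. The pairs below are invented data.

pairs = [(4, 4), (3, 5), (2, 3), (5, 5), (1, 2), (4, 3), (6, 6), (2, 4)]

exact = sum(a == b for a, b in pairs) / len(pairs)
adjacent = sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)

print(f"Exact agreement:    {exact:.0%}")     # raters gave identical scores
print(f"Adjacent agreement: {adjacent:.0%}")  # raters within one point
```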