Developing WSU’s Guide to Effective Student Learning Outcomes Assessment Rubric

The steps below chart the process that Washington State University’s Office of Assessment and Innovation (OAI) and its stakeholders went through to develop the Guide to Effective Program Assessment rubric used in the WSU System of Student Learning Outcomes Assessment.

Each step below lists a description, the generalized model, and a specific example of its application at WSU’s Office of Assessment and Innovation.

Step: Determine the scope of the program assessment initiative.
Generalized model: Solicit stakeholder input about the purpose of the assessment. Determine which aspects of program assessment will be most useful for the institution to assess.
At WSU: Gary met with provosts and assistant deans to frame the general parameters.

Step: Develop a framework for assessing assessment at the meta-level.
Generalized model: Research existing literature (and tools) to begin delineating rubric dimensions.
At WSU: OAI developed a framework for the rubric based on the WASC Evidence Guide, writings from Peter Ewell and others, and the Transformative Assessment Rubric, an EDUCAUSE Learning Initiative project to evaluate the responsiveness of assessment plans.

Step: Flesh out criteria for each dimension (October 2009).
Generalized model: Begin drafting rubric criteria, periodically reviewing with stakeholders and revising as needed.
At WSU: OAI fleshed out the rubric dimensions with specific performance criteria and shared them with the Assessment Liaison Council and external reviewers, getting feedback about what was clear, what was not, and how much detail was useful. (Guide to Assessment Rubric (Oct 2009))

Step: Test a draft rubric (November 2009).
Generalized model: Solicit program assessment plans for low-stakes review.
At WSU: OAI solicited a program’s report written for WSU’s 2009 NWCCU accreditation self-study and tested the rubric with OAI and external reviewers.

Step: Pilot an initial assessment cycle (December 2009; ratings done December 2009–February 2010).
Generalized model: Solicit program assessment plans for formative review. Norm raters and launch an initial review. Return rubric-based feedback and scores to programs and report program scores to the institution.
At WSU: Via program liaisons, all WSU undergraduate programs were required to submit a first draft of an assessment self-study by December 18, 2009. Programs were given a template with areas for each rubric dimension. In the first cycle, only three of the four dimensions were required. Reviewers participated in a norming session; in this initial phase, scores were reconciled if they were more than a point apart or if there was a split at the “competency” level, and occasionally a third rater was required (see the scoring sketch after these steps). Assessment plans were scored with the rubric, but it was emphasized that this was an initial and provisional rating. (Guide to Assessment (Dec 2009))

Step: Revise the rubric and assessment process (February–March 2010).
Generalized model: Solicit feedback from programs as well as reviewers about what worked and what didn’t. Revise the rubric and assessment process to be more useful and efficient.
At WSU: The rubric was revised based on feedback captured from notes reviewers made as they used the rubric, feedback from programs via an online survey, and informal observations programs shared with their OAI contacts. The four dimensions remained essentially the same, but the number of levels, the level names, and the wording of the rubric criteria changed considerably. All wording was recast in a positive format (what was happening versus what was missing), and a definition of terms was added as a cover page. The six-point scale remained the same.

Step: Test the revised rubric (April–May 2010).
Generalized model: Solicit feedback from stakeholders in the process of using the rubric on a sample report.
At WSU: The revised rubric was tested internally by rating a report written for the December 2009 cycle. The rubric and report were also used by an audience at a regional assessment conference. Despite not norming together, the internal and external reviewers agreed fairly closely.

Step: Launch a second assessment cycle (May–August 2010).
Generalized model: Solicit program assessment plans. Norm raters on the revised rubric and begin the review process. Return rubric-based feedback and scores to programs and report program scores to the institution.
At WSU: Programs were required to submit a second draft of an assessment self-study by May 17, 2010 (with an option to delay to August 20). They used a similar template with areas for each rubric dimension. In the second cycle, all four dimensions were required. Reviewers participated in an extensive norming session over several days, and the rubric was tweaked slightly. (Guide to Effective Program Assessment Rubric (May 2010) and the slightly revised Guide to Effective Program Learning Outcomes Assessment Rubric (August 2010))

Step: Review evidence of the review process (October–December 2010).
Generalized model: Review quantitative and qualitative evidence of the review process.
At WSU: Studies of interrater agreement were conducted, along with collection of observations from using the rubric (the scoring tool had a place for reviewers to make comments about the rubric in the context of a specific rating effort). These were used to begin framing the next revision of the rubric and template; the agreement calculation is also sketched after these steps. (OAI Interrater analysis (version 1) Excel datasheet)

Step: Draft the next revision of the rubric and report template (halted December 2010).
Generalized model: Review literature and data from previous uses of the rubric; look for patterns of rater disagreement.
At WSU: OAI staff began examining their experiences and the kinds of inter-rater disagreements, reviewed the literature for key performance criteria, and examined notes left by reviewers as they used the rubric from May to September. The resulting notes were intended as input to the next revision. (Guide to Effective Program Learning Outcomes Assessment Rubric (Dec 2010 notes)) In addition, the template programs used to complete the report was revised to better prompt the writing. (Program Learning Outcomes Assessment Template Revision, Dec 2010)
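To make the scoring rules above concrete, here is a minimal sketch in Python of the two-rater reconciliation check and a simple interrater-agreement calculation. It is illustrative only: the six-point scale matches the rubric described above, but the “competency” cut point, the within-one-point agreement statistic, and the sample ratings are assumptions, not details taken from the OAI scoring tool or the interrater analysis datasheet.

```python
# Hypothetical sketch of the two-rater scoring rules described above.
# Assumptions (not from the source): a 6-point scale (1-6) and a
# "competency" cut line between levels 3 and 4.

COMPETENCY_CUT = 4  # assumed: scores >= 4 count as reaching "competency"


def needs_reconciliation(score_a: int, score_b: int) -> bool:
    """A pair of ratings is reconciled (sometimes by a third rater) if the
    scores are more than one point apart or split across the competency line."""
    more_than_a_point_apart = abs(score_a - score_b) > 1
    split_at_competency = (score_a >= COMPETENCY_CUT) != (score_b >= COMPETENCY_CUT)
    return more_than_a_point_apart or split_at_competency


def within_one_point_agreement(pairs: list[tuple[int, int]]) -> float:
    """Simple interrater-agreement statistic: the share of rating pairs
    that fall within one point of each other."""
    agreeing = sum(1 for a, b in pairs if abs(a - b) <= 1)
    return agreeing / len(pairs) if pairs else 0.0


if __name__ == "__main__":
    # Invented ratings for one rubric dimension across several programs.
    ratings = [(4, 5), (3, 4), (2, 5), (6, 6), (3, 3)]
    for a, b in ratings:
        print(a, b, "reconcile" if needs_reconciliation(a, b) else "ok")
    print("within-one-point agreement:", within_one_point_agreement(ratings))
```

The separate check for a split at the competency line reflects the rationale in the pilot cycle: two ratings only one point apart can still disagree on the substantive judgment of whether a program has reached competency.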

How a rubric can communicate

OAI has been finishing up its 2009–10 cycle of reviews of program-level assessment; see the University’s portfolio for details about the process and the results.

One of the responses to a program regarding communication with stakeholders had a summary of the utility of a rubric as a communication tool:

Under “Communication” the report states: “Program Objectives and Outcomes will be more extensively discussed with the students in classes to encourage more participation in the assessment and improvement process.”

A programmatic assessment rubric could be a very useful tool to encourage students, and other stakeholders, to participate in the assessment and improvement process. For example, a rubric:

  • Provides a reference point for students to consult repeatedly as they monitor their own learning and develop the skills of self-assessment. Students are supported in becoming better judges of quality in their own and others’ work.
  • Supports the development of a sense of shared expectations among students, faculty, staff, and external stakeholders.
  • Provides evaluators and those whose work is being evaluated with rich, detailed descriptions of what is and is not being learned, by breaking outcomes down into dimensions and dimensions into criteria (see the sketch after this list).
  • Provides criteria to shape and guide students’ engagement with one another and with course content.
  • Promotes a shared understanding among faculty, students, and stakeholders of the program outcomes.
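
As a sketch of the structural point in the third bullet, a rubric can be represented as outcomes broken into dimensions and dimensions into criteria at named performance levels. The Python below is a hypothetical illustration; the outcome, dimension, level names, and criterion text are invented and are not taken from WSU’s rubric.

```python
from dataclasses import dataclass, field

# Hypothetical structure: an outcome is broken into dimensions, and each
# dimension into criteria described at named performance levels.


@dataclass
class Dimension:
    name: str
    criteria: dict[str, str] = field(default_factory=dict)  # level -> criterion text


@dataclass
class Outcome:
    name: str
    dimensions: list[Dimension] = field(default_factory=list)


# Invented example content, for illustration only.
communication = Outcome(
    name="Communicate effectively with stakeholders",
    dimensions=[
        Dimension(
            name="Clarity of written argument",
            criteria={
                "Emerging": "Main claim is present but support is thin.",
                "Developing": "Claim is supported, though organization wanders.",
                "Competent": "Claim, evidence, and organization work together.",
            },
        ),
    ],
)

for dim in communication.dimensions:
    print(dim.name)
    for level, criterion in dim.criteria.items():
        print(f"  {level}: {criterion}")
```

Making the structure explicit in this way is what allows feedback to be reported dimension by dimension rather than as a single holistic score.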

Awards Ceremony 3/13- we will be recognizing exceptional leadership in college and program level assessment

From: Ater-Kranov, Ashley
Sent: Thursday, April 08, 2010 1:55 PM
To: OAI.Personnel
Subject: Awards Ceremony 3/13- we will be recognizing exceptional leadership in college and program level assessment

Please see the attached invitation. The awards list is below.

  • Office of Assessment & Innovation – Berat Hakan Gurocak (hgurocak@wsu.edu) – Assessment Leadership for Mechanical Engineering, WSU Vancouver
  • Office of Assessment & Innovation – Douglas Jasmer (djasmer@wsu.edu) – Assessment Leadership for the Veterinary Medicine Program
  • Office of Assessment & Innovation – Roberta Kelly (rkelly@wsu.edu) – Assessment Leadership for the Murrow College of Communication
  • Office of Assessment & Innovation – Kimberlee Kidwell (kidwell@wsu.edu) – Assessment Leadership for the Agricultural and Food Sciences Program
  • Office of Assessment & Innovation – Kimberlee Kidwell (kidwell@wsu.edu) – Assessment Leadership for the College of Agriculture, Human and Natural Resources
  • Office of Assessment & Innovation – Kimberlee Kidwell (kidwell@wsu.edu) – Assessment Leadership for the Integrated Plant Sciences Program
  • Office of Assessment & Innovation – Lisa McIntyre (ljmcint@wsu.edu) – Assessment Leadership for the Department of Sociology
  • Office of Assessment & Innovation – John Turpin (jturpin@wsu.edu) – Assessment Leadership for the Department of Interior Design at WSU Spokane
  • Office of Assessment & Innovation – Phillip Waite (pswaite@wsu.edu) – Assessment Leadership for the Department of Landscape Architecture
  • Office of Assessment & Innovation – Libby Walker (walkerl@wsu.edu) – Assessment Leadership for the Honors College

An Invitation to the University College Awards Ceremony
Further information about these awards appeared in these blog posts:

Engaging Employers and Other Community Stakeholders

Do you have ideas or examples of good practice of working with employers to promote workforce development? UK universities and colleges are under pressure to do “employer engagement” and some are finding it really difficult. This is sometimes due to the university administrative systems not welcoming non-traditional students, and sometimes because we use “university speak” rather than “employer speak”.
— a UK Colleague

Washington State University’s Office of Assessment and Innovation has been working on this question for several years. We presented this spectrum diagram to think about how the more traditional Institution-centric learning differs from Community-based learning. It may point to some of the places your programs get stuck thinking about this question.

We have also been exploring methods to gather assessments from stakeholders (employers as well as others) about aspects of academic programs. This example shows the twinned assessment of student work using a program rubric and assessment of the faculty’s assignment that prompted the work. We invite stakeholders to engage in both assessments. In other implementations of this process, we have asked stakeholders about the utility of the rubric itself.

We are also finding differences in the language used by faculty, students, and employers. When we asked about the most important things to learn in a business program, we got this feedback.

Another example of different groups using different language is this one, where industry and faculty used different language, with different foci, to give feedback to students. In particular, we saw industry use “problem” as in “problem statement,” while faculty used “problems” as a synonym for “confused” and “incorrect.”

Our method for learning about both language and values is simple surveys of stakeholders as they are engaged with us in assessment activities. For example, here (In Class Norming Survey) we asked people who had just assessed student work using a program rubric about the importance of the rubric itself.

In this survey (AMDT Stakeholder Survey) a fashion design and marketing program is asking industry partners about language and criteria, as a precursor to building a program-wide assessment rubric. All these activities help programs understand the wider context in which they operate.

More on this work can be found in this article: Brown, G., DesRosier, T., Peterson, N., Chida, M., & Lagier, R. (2009). Engaging Employers in Assessment. About Campus, 14(5), Nov–Dec 2009. (NUTN award for best essay, 2009.)

It may help to understand that we define stakeholders broadly to account for the variation among academic programs: employers, alumni, students themselves, professional and graduate school admissions officers, audiences (as in performance arts), etc.

We have now developed a rubric to guide the assessment of the self-studies that our academic programs are doing as part of our University-wide system of assessment, a component of our institution’s regional accreditation activities. You can see a snapshot of how our Colleges are doing here.

Assessment design is messy: challenges to coordinating flexible and useful assessment for a college and its programs

Planning, designing, and implementing assessment is a messy process, as are most authentic, responsive endeavors that involve and engage many people in a variety of roles and vantage points. Different people and programs have different concerns and genuine questions.

What’s the relationship between college and program assessment? Student learning goals identified by a college and by its programs are at the heart of assessing student learning and planning useful and coordinated assessment activities. There are many challenges to designing a coordinated yet flexible assessment plan that balances multiple considerations.

OAI is addressing this question now with one WSU college, Spring 2010:

From OAI contact to a program head:

I’ll briefly summarize a) assessment efforts at WSU, b) what I understand of the assessment efforts in your college, c) what I understand is on the table now for next steps, and d) my role as your OAI contact, as well as OAI’s role.

Assessment efforts at WSU: college-level and program-level

The current efforts for systematic assessment at WSU include two different approaches.

In some colleges, efforts are starting primarily at the program level, including developing an assessment plan, student learning goals, rubrics/tools, measures, and processes. This approach can create assessment that’s immediately meaningful and useful to a program, but it also brings significant challenges in terms of “rolling up” results into something coherent at the college level. With piecemeal efforts and dozens of unaligned rubrics and measures, a great deal of work is needed to make the results useful at the college (and institutional) level.

In other colleges, assessment efforts have been coordinated at the school or college level.  This approach identifies common student learning goals across programs, and seeks to develop common rubrics/tools, measures and processes for assessment.  This approach can provide useful assessment results at the college level, with aligned measures (such as course evaluations, senior exit surveys, alumni surveys, etc.) providing complementary data.  It also brings the opposite challenge — how to build in flexibility and give adequate agency at the program level so that programs can identify and focus on gathering useful assessment data that interests and informs them to guide ongoing improvement in learning and teaching.

Summary of your college’s assessment efforts to date (as I understand them, primarily those involving OAI/CTLT)

For several years … the college has invested time and effort in developing, piloting, and refining a college-wide rubric, with input from faculty in the various areas, in collaboration with OAI/CTLT. The assessment team, in consultation with OAI/CTLT, adapted WSU’s Critical and Integrated Thinking Rubric and revised it with input from various faculty. Over two semesters, the rubric has been piloted and refined by assessing student work from five different courses in the college.

The assessment team has also developed and piloted a senior exit survey which aligns with the rubric and has started drafting an alumni survey that is similarly aligned.

Last semester, the Dean asked two faculty members to draft student learning goals for the college, in alignment with WSU’s Six Learning Goals of the Baccalaureate. This was done independently of the assessment team’s work.

In February 2010, OAI helped map the college rubric to these new learning goals and WSU’s Big 6, including adding another dimension for specialty (to be developed by each program).

Next steps

In March 2010, faculty in one program brought up questions about the purpose of assessment in their area, the rubric, and the college’s new student learning goals and performance criteria created last semester. These raised important questions about how to balance program agency and assessment needs with college-level needs, including:

  • What is the relationship between the new student learning goals and performance criteria and the rubric developed over the past two years, along with its complementary measures and data?
  • In the future, how does the college want to balance college level assessment and program level assessment for a coherent and useful overall assessment effort?

These questions and their answers represent an important conversation among key players in the college, faculty and leadership.

From an assessment standpoint, there is no fixed path ahead.  Rather, those involved should identify goals, needs, constraints, and priorities. The college should probably outline a coordinated, yet flexible assessment plan that balances those considerations; it’s possible to try different approaches in different programs, to decide to focus on shared goals, or to do something in between.

As a note, some programs at WSU are in the process of developing staggered assessment plans, in which they assess only some goals each year.
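
As a minimal illustration of what such a staggered plan might look like, the sketch below rotates a program’s goals so that each year covers only a subset while every goal is assessed across the cycle. The goal names, cycle length, and goals-per-year value are invented for the example, not drawn from any WSU program’s plan.

```python
from itertools import cycle

# Hypothetical staggered assessment schedule: each year a program assesses
# only a subset of its goals, rotating so every goal is covered across the
# cycle. Goal names and cycle length are invented.

goals = [
    "Critical and integrative thinking",
    "Written communication",
    "Quantitative reasoning",
    "Disciplinary specialty",
    "Information literacy",
    "Ethical reasoning",
]


def staggered_plan(goals: list[str], years: list[int], per_year: int) -> dict[int, list[str]]:
    """Assign `per_year` goals to each year, cycling through the goal list."""
    rotation = cycle(goals)
    return {year: [next(rotation) for _ in range(per_year)] for year in years}


plan = staggered_plan(goals, years=[2011, 2012, 2013], per_year=2)
for year, assessed in plan.items():
    print(year, "->", ", ".join(assessed))
```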

OAI’s role

OAI has been working with this college’s assessment team for two years to develop, pilot, and refine a college-wide rubric and other complementary measures, including a senior exit survey and alumni survey.  OAI is also available to work with programs on their assessment plan, tools, and measures.

However, in the interest of making the best use of our collective time and efforts, before beginning to develop any program’s assessment plan, I suggest that the college consider the questions raised above.

If invited, OAI is available to consult with the college about different paths you might choose to take for useful, quality assessment.

HEC Board Report (Feb 25, 2010)

Sent and accepted by WSU administration.
Attached report: HEC Board Outcomes Report (March 2010 FINAL)

Update for Program Points on Assessment Status

A sample reminder for programs falling behind in the assessment work:

Just a quick note to catch up. As I recall, you were last working on an assessment planning report, and we had talked about what we thought was a pretty good plan. Let us know if there is more we can do to help you with this work.

FYI, making sure everybody is establishing an assessment plan and system is increasingly important. We’re now completing an update report to the HEC Board regarding the status of WSU assessment. We included the status of colleges as well as the aggregate status of WSU; individual program-level ratings were not included.

In the upcoming May report, results by program will be made available for public viewing. I am confident that when you implement your plan, your program will demonstrate its own and WSU’s commitment to continuous improvement.