Institutional Self-Assessment Rubric

This post supports a TLT Group webinar in the series titled “Power of Rubrics.”
[Archive of session]
Gary Brown, Theron DesRosier, Jayme Jacobson & Nils Peterson, Washington State University

Introduction and Background on the Problem

Washington State University is in the process of responding to changes in standards made by its accrediting body, NWCCU. The response includes the transformation of the former Center for Teaching, Learning and Technology (CTLT) into the Office of Assessment and Innovation (OAI).

The University is developing its response to NWCCU’s changed standards, and OAI is helping move institutional thinking toward an approach that embeds assessment in ways that help faculty think about student learning outcomes, and about the processes programs are using to assess their work on improving those outcomes.

This work builds on work of the former CTLT known as the “Harvesting Gradebook.” Previous reports provide context on using the Harvesting Gradebook with students: AAC&U report (Jan 2009), Update (Fall 2009). This report also links to a webinar archive that paints a picture of how to roll harvesting up: From Student Work to University Accreditation.
Using Harvesting Feedback with Academic Programs

In the previous webinar (From Student Work to University Accreditation) we described a vision for how harvesting could be used to move data from the level of an individual piece of student work up through levels of assessment and reflection to a university-level accreditation report. Presently OAI is deploying a middle-level piece of this vision: the assessment of program-level self-studies with an “Assessment of Assessment” rubric. The most current version of the rubric and other materials for the process are linked from the grey portion of the OAI website banner.

Figure 1. The process involves the academic program collecting evidence, writing a self-study, and having the self-study assessed with the University’s rubric (called the Guide to Assessment on the OAI website, formerly the Assessment of Assessment rubric). The image shows the process from data sources (upper left) to self-study, to rubric-based assessment, to a radar graph of results. This diagram represents work at the level of an academic program, a “middle tier” in the vision presented in From Student Work to University Accreditation.
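To make the radar-graph step concrete, here is a minimal sketch of averaging rubric ratings by dimension and plotting them on polar axes. The dimension names, scale, and scores are invented for illustration; they are not the actual Guide to Assessment dimensions.

```python
# Minimal sketch: average rubric ratings per dimension, then draw the kind of
# radar graph shown in Figure 1. Dimensions and scores are hypothetical.
import math
import matplotlib.pyplot as plt

# Each entry is one reviewer's scores across the rubric's dimensions (1-6 scale assumed).
ratings = [
    {"Goals": 4, "Evidence": 3, "Use of Results": 2, "Stakeholders": 5},
    {"Goals": 5, "Evidence": 4, "Use of Results": 3, "Stakeholders": 4},
    {"Goals": 3, "Evidence": 3, "Use of Results": 2, "Stakeholders": 4},
]

dimensions = list(ratings[0])
means = [sum(r[d] for r in ratings) / len(ratings) for d in dimensions]

# Close the polygon by repeating the first point, then plot on polar axes.
angles = [2 * math.pi * i / len(dimensions) for i in range(len(dimensions))]
angles += angles[:1]
values = means + means[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
plt.show()
```

The shape of the polygon makes a program’s weaker dimensions visible at a glance, which is what makes the radar graph a useful summary for reviewers.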

Readers interested in trying the process are invited to do so at the WSU University Portfolio site for 2009-10. The Department of Rocket Science (on the site as of 12/7/09) was created as a sample. Other programs appearing on the site (beginning in January 2010) are actual WSU programs seeking formative feedback. Contact us if you would like to participate.

A Prezi visual of the WSU assessment calendar provides an interactive picture of the assessment cycle and calendar, and will serve as a “Dashboard” for monitoring progress.

Guide to Assessment – Rubric
Because of the wide diversity of programs in the WSU four-campus system, a one-size approach to learning outcomes assessment will not fit all. Consequently, WSU is developing a rubric to assess the self-study plans (short form and long form). Like the AAC&U VALUE project, the WSU rubric assumes that “to achieve a high-quality education for all students, valid assessment data are needed to guide planning, teaching, and improvement.”

The Guide to Assessment is the tool OAI is creating to help programs assess the quality of their student learning outcomes assessment activities. Using the Harvesting mechanism, programs will be able to gather evidence from stakeholders outside the university — a requirement of the accreditor — as well as gathering self-, peer- and OAI reviews.

Short form of the Rubric


External Interest in Rain King from TLT Group

Program review rubric

From: Stephen C. Ehrmann [mailto:ehrmann@tltgroup.org]
Sent: Monday, October 26, 2009 12:06 PM
To: Larry Ragan; Abdous, M’Hammed; Jim Zimmer
Cc: Gary Brown
Subject: Program review rubric

Hi,
I mentioned to each of you that Gary Brown and his colleagues were in the early stages of using Flashlight Online [TLTGroup’s re-branding of the WSU online survey tool Skylight] to deploy an interesting set of rubrics for program review/evaluation.  Programs would get the rubrics in advance and use those ideas to document their performance. Their reports and a Flashlight form with the rubrics could then be sent to reviewers; their responses to the rubric could then be easily summarized and displayed. I’ve seen the rough draft of their rubric and it seems quite promising to me. It’s designed for the review of academic departments, but I think the idea could be adapted for use with faculty support/development units.
When the material is ready for a wider look in a few weeks, Gary will send me a URL and I can pass that along. Or you could contact Gary directly if you like. His email address is BrownG@wsu.edu
Steve
**********
Stephen C. Ehrmann, Ph.D.
Director of the Flashlight Program for the Study and Improvement of Educational Uses of Technology;
Vice President, The Teaching, Learning, and Technology Group, a not-for-profit organization
Mobile: +1 240-606-7102
Skype: steveehrmann

The TLT Group: http://www.tltgroup.org
The Flashlight Program: http://www.tltgroup.org/flashlightP.htm
Blog: http://tlt-swg.blogspot.com/

I was talking with folks from Mount Royal (Calgary) at ISSOTL; Old Dominion and Penn State are both thinking about how to design comprehensive evaluations of faculty support. Your rubric for program review seems like it could be adapted to their purposes.

From Student Feedback to University Accreditation

This post supports a free TLT Group webinar
The Harvesting Gradebook: From Student Feedback to University Accreditation

The event took place Friday, September 25, 2009, at 2:00 pm (ET) and is now available as an archive.

Theron DesRosier, Jayme Jacobson, Nils Peterson, & Gary Brown
Office of Assessment and Innovation, Washington State University

This webinar is an extension of our previous thinking, “Learning from the Transformative Gradebook.” Prior to (or following) the session, participants are invited to review a previous session and demonstration of these techniques being applied at the course level.

During the session, participants will be invited to pilot our Assessment of Assessment rubric on a program-level accreditation report and to discuss the broader implications of the strategies proposed.

This hour-long session will:

1. Review WSU’s model implementations of the Harvesting Gradebook that can be used to gather feedback on student work and the assignments that prompted it. (Background on Harvesting Gradebook)

2. Show how data from harvested assessments at the course level can flow up, providing direct evidence of learning outcomes at the program, college, and university levels (see the roll-up sketch after this list).

3. Demonstrate a systemic assessment process that applies a common assessment-of-assessment rubric across all of the university’s assessment activities.

4. Invite the audience to provide feedback on the Assessment of Assessment rubric by assessing an accreditation report. The goals of the hands-on activity are to:

  1. Gather feedback on the rubric
  2. Demonstrate a time-effective means of gathering feedback from a diverse community on assessment activities

5. Offer a new perspective on Curricular Mapping, using harvested data.
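The roll-up in item 2 can be pictured with a short sketch: course-level rubric means aggregate mechanically to program, college, and university levels. The hierarchy, course names, and scores below are invented for illustration.

```python
# Hypothetical sketch of rolling harvested rubric scores up from courses
# to program, college, and university levels.
from statistics import mean

# (college, program, course) -> rubric scores harvested for that course
course_scores = {
    ("CAHNRS", "Horticulture", "HORT 310"): [3.2, 3.8, 4.1],
    ("CAHNRS", "Horticulture", "HORT 420"): [4.0, 4.4],
    ("CAHNRS", "Soil Science", "SOILS 201"): [2.9, 3.1, 3.5],
    ("Engineering", "Civil", "CE 215"): [3.6, 3.9],
}

def roll_up(scores, key_len):
    """Average course-level means upward, grouping on the first key_len key parts."""
    groups = {}
    for key, vals in scores.items():
        groups.setdefault(key[:key_len], []).append(mean(vals))
    return {k: mean(v) for k, v in groups.items()}

program_level = roll_up(course_scores, 2)    # (college, program) -> mean
college_level = roll_up(course_scores, 1)    # (college,) -> mean
university_level = mean(mean(v) for v in course_scores.values())

print(program_level)
print(college_level)
print(round(university_level, 2))
```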

Further Reading

The Prezi document used in the session (requires Adobe Flash).

Harvesting Gradebook in Production: We have been investigating the issues on the WSU campus surrounding taking the Harvesting Gradebook into production. While not all the integrations with WSU Student Information Systems are in place yet, we can see a path that automates moving student enrollments from the registrar to create a harvesting survey, and moving the numeric scores from the survey back to the instructor, where they can be combined with other course scores to create the final grade that is uploaded to the registrar. A mostly automated pilot is being implemented Fall 2009.
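As a sketch of that automated path, the skeleton below mirrors the flow described above: roster in from the registrar, harvested rubric scores back from the survey, blended with the instructor’s other scores, and a final grade uploaded. Every function is a hypothetical stand-in; none of WSU’s actual Student Information Systems or Skylight integrations are public APIs.

```python
# Hypothetical skeleton of the harvesting pipeline; all functions are stand-ins.

def fetch_enrollments(course_id):
    # stand-in for a registrar / Student Information Systems query
    return ["student_a", "student_b"]

def collect_harvested_scores(course_id, students):
    # stand-in for numeric scores returned by the harvesting survey (0-100 assumed)
    return {s: 85.0 for s in students}

def instructor_scores(course_id, students):
    # stand-in for the instructor's other course scores
    return {s: 78.0 for s in students}

def upload_grades(course_id, grades):
    # stand-in for posting final grades back to the registrar
    print(course_id, grades)

def run_harvesting_cycle(course_id, harvest_weight=0.3):
    students = fetch_enrollments(course_id)
    harvested = collect_harvested_scores(course_id, students)
    other = instructor_scores(course_id, students)
    # blend harvested feedback with other course scores into a final grade
    finals = {s: (1 - harvest_weight) * other[s] + harvest_weight * harvested[s]
              for s in students}
    upload_grades(course_id, finals)

run_harvesting_cycle("HORT 310")
```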

Student Evaluations of Program Outcomes: The presentation references the idea of using student course evaluations to gather indirect evidence of a course’s contribution to the program’s learning outcomes. For several years, WSU’s College of Agricultural, Human, and Natural Resource Sciences (CAHNRS) has used a college-wide course evaluation that asks students how much the course helped them develop skills in thinking critically, writing, speaking, working on a team, and other dimensions that align with university learning goals. We have explored gathering the faculty goals related to these skills and comparing them to the student perceptions. To date, these data have not been systematically rolled up or used as evidence in accreditation self-studies.
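The comparison described above is simple to compute once both data sets exist. In this sketch, faculty state a goal level per university learning goal and students report perceived gains; all numbers and the 1-5 scale are invented for illustration.

```python
# Hypothetical sketch: faculty goals vs. students' perceived gains per learning goal.
from statistics import mean

student_perceptions = {
    "thinking critically": [4, 3, 5, 4],
    "writing": [3, 3, 4, 2],
    "speaking": [2, 3, 3, 3],
    "working on a team": [5, 4, 4, 5],
}
faculty_goals = {"thinking critically": 5, "writing": 4,
                 "speaking": 2, "working on a team": 4}

for goal, ratings in student_perceptions.items():
    gap = mean(ratings) - faculty_goals[goal]
    print(f"{goal}: students {mean(ratings):.1f}, "
          f"faculty goal {faculty_goals[goal]}, gap {gap:+.1f}")
```

A positive gap suggests students perceive more emphasis than faculty intended; a negative gap flags a goal faculty value but students do not see the course serving.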

Harvesting feedback on a course assignment

This post demonstrates harvesting rubric-based feedback in a course, and how the feedback can be used by instructors and programs, as well as students. It was prepared for a webinar hosted by the TLT Group. (Update 7/28: Webinar archive here. Minutes 16-36 are our portion. Minutes 24-31 are music while participants work on the online task. This is followed by Terry Rhodes of AAC&U with some kind comments about how the WSU work illustrates ideas in the AAC&U VALUE initiative. Minutes 52-54 are Rhodes’ summary of VALUE and the goal of rolling up assessment from course to program level. This process demonstrates that capability.)

Webinar Activity (for the session on July 28). The activity should work before and after the session; see below.

  1. Visit this page (opens in a new window)
  2. On the new page, complete a rubric rating of either the student work or the assignment that prompted the work.

Pre/Post Webinar

If you found this page, but are not in the webinar, you can still participate.

  • Visit the page above and rate either the student work or the assignment using the rubric. Data will be captured but will not update for you in real time.
  • Explore the three tabs across the top of the page to see the data reported from previous raters.
  • Links to review:

Discussion of the activity
The online session is constrained in time, so we invite you to discuss the ideas in the comment section below. There is also a TLT Group “Friday Live” session planned for Friday, Sept 25, 2009, where you can join in a discussion of these ideas.

In the event above, we demonstrated using an online rubric-based survey to assess an assignment and the student work created in response to it. The student work, the assignment, and the rubric were all used together in a course at WSU. Other courses we have worked with have assignments and student products that are longer and richer; we chose these abbreviated pieces for pragmatic reasons, to facilitate a rapid process of scoring and reporting data during a short webinar.

The process we are exploring allows feedback to be gathered on work in situ on the Internet (e.g., in a learner’s ePortfolio), without requiring that the work first be collected into an institutional repository. Gary Brown coined the term “Harvesting Gradebook” to describe the concept, but we have come to understand that the technique can “harvest” more than grades, so a better term might be “harvesting feedback.”
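One way to picture the “harvest in place” idea: the gradebook stores a pointer to where the work lives plus the feedback gathered about it, never a copy of the work itself. The record structure and field names below are our own invention, not the actual Harvesting Gradebook schema.

```python
# Hypothetical record for harvesting feedback on work that stays in situ.
from dataclasses import dataclass, field

@dataclass
class HarvestEntry:
    student: str
    work_url: str                                  # work stays where it lives (e.g., an ePortfolio)
    ratings: list = field(default_factory=list)    # rubric feedback from any reviewer

piece = HarvestEntry(student="student_a",
                     work_url="https://example.edu/eportfolio/student_a/essay")
piece.ratings.append({"reviewer": "community_mentor", "critical_thinking": 4})
```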

[Figure: Harvesting Gradebook diagram]

This harvesting idea provides a mechanism to support community-based learning (see Institutional-Community Learning Spectrum). As we have been piloting community-based learning activities from within a university context, we have come to understand that it is important to assess student work, the assignments, and the assessment instruments.

Importance of focusing assessments on Student Work

Gathering input on student projects provides students with authentic experiences, maintains ways to engage students in authentic communities, helps the community consider new hires, and gives employers the kind of interaction with students that the university can capitalize on when asking for money. But we have also come to understand that assessing student learning alone often yields little change in course design or learning outcomes (Figure 1). (See also http://chronicle.com/news/article/6791/many-colleges-assess-learning-but-may-not-use-data-to-improve-survey-finds?utm_source=at&utm_medium=en)

[Figure: graph of five years of outcomes data]

Figure 1. In the period 2003-2008 the program assessed student papers using the rubric above; scores for the rubric dimensions are averaged in this graph. The work represented in this figure is different from the work scored in the activity above. The “4” level on the rubric was determined by the program to represent competency for a student graduating from the program.

The data in Figure 1 come from the efforts of a program that has been collaborating with CTLT for five years. The project has been assessing student papers using a version of the Critical Thinking Rubric tailored for the program’s needs.

Those efforts, measuring student work alone, did not produce any demonstrable change in the quality of the student work (Figure 1). In the figure, note that:

  • Student performance does not improve with increasing course level, e.g., across 200-, 300-, and 400-level courses within a given year
  • Only once were students judged to meet the competency level set by the program itself (2005, 500-level)
  • Across the years studied, student performance within a course level did not improve, e.g., compare the 300-level course in 2003, 2006, 2007, and 2008
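The analysis behind those observations is a straightforward grouping of scores by year and course level. The scores below are invented stand-ins shaped to match the bullets above; the real data came from scored student papers.

```python
# Hypothetical sketch of the Figure 1 analysis: mean rubric score per
# (year, course level), checked against the program's competency bar.
from statistics import mean

scores = {
    (2003, 300): [3.1, 3.4, 2.9],
    (2005, 500): [4.2, 4.0],      # the one cohort judged to meet the bar
    (2006, 300): [3.0, 3.3],
    (2007, 300): [3.2, 2.8, 3.1],
    (2008, 300): [3.0, 3.2],
}

COMPETENCY = 4.0  # level the program set for a graduating student

for (year, level), vals in sorted(scores.items()):
    m = mean(vals)
    flag = "meets" if m >= COMPETENCY else "below"
    print(f"{year} {level}-level: {m:.2f} ({flag} competency)")
```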

Importance of focusing assessments on Assignments

Assignments are important places for the wider community to give input, because the effort the community spends assessing assignments can be leveraged across a large group of students. Additionally, if faculty lack mental models of alternative pedagogies, assignment assessment helps focus faculty attention on very concrete strategies they can actually use to help students improve.

The importance of assessing more than just student work can be seen in Figure 1. As these results unfolded, we suggested to the program that it focus attention on assignment design. The program did not follow through on reflecting on and revising the assignments, nor on suggestions to improve communication of the rubric criteria to students.

Figure 2 shows the inter-rater reliability from the same program. Note that the inter-rater reliability is 70+% and is consistent year to year.

Figure 2. Graph of inter-rater reliability data

This inter-rater reliability is borderline and problematic: extrapolated to high-stakes testing, or even grades, such marginal agreement speaks disconcertingly to the coherence (or lack thereof) of the program.
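For readers who want to check such numbers themselves, the simplest inter-rater reliability statistic is percent agreement: the share of papers on which two raters give the same rubric score. The ratings below are invented, and we do not know exactly which statistic underlies Figures 2 and 3.

```python
# Percent agreement between two raters; these invented ratings land at 70%.
def percent_agreement(rater_a, rater_b):
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)

rater_a = [4, 3, 4, 5, 3, 4, 2, 4, 3, 4]
rater_b = [4, 3, 3, 5, 3, 4, 3, 4, 2, 4]
print(f"{percent_agreement(rater_a, rater_b):.0f}% agreement")  # 70%
```

Percent agreement does not correct for chance; a statistic such as Cohen's kappa is more conservative, but the borderline pattern discussed above would read the same way.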

Figure 3 comes from a different program. It shows faculty ratings (inter-rater reliability) on a 101-level assignment and provides a picture of the maze, or obstacle course, of faculty expectations that students must navigate. Higher inter-rater reliability would be indicative of greater program coherence and should lead to higher student success.

[Figure 3: inter-rater reliability detail]

Importance of focusing assessments on Assessment Instruments

Our own work, and Allen and Knight (Table 4), have found that faculty and professionals place different emphasis on the criteria used to assess student work. Assessing the instrument in a variety of communities offers the chance to have conversations about the criteria and to address questions of the program’s relevance to the community.

Summary

The intention of the triangulated assessment demonstrated above (assignment, student work, and assessment instrument) is to keep the conversation about all parts of the process open, in order to develop and test action plans that have the potential to enhance learning outcomes. We are moving from pilot experiments with this idea to strategies for using the information to inform program-wide learning outcomes and to feed the data into ongoing accreditation work.