WSU System for Student Learning Outcomes Assessment

WSU’s Federated System of Student Learning Outcomes Assessment
2009-10

The WSU system is a federated one, premised on the need for programs to tailor assessment to their particular circumstances and contexts while coordinating with the central effort. The first fruits of the system appear in the University’s 2009-10 portfolio of its assessment and accreditation work.

During the first cycle of the System, the Office of Assessment and Innovation (OAI) asked each undergraduate degree-granting program to write an annual report on its student learning outcomes assessment work. The report summarizes how each undergraduate program:

  1. Chooses and defines assessment questions of interest and verifies these questions in dialog with its relevant stakeholder group.
  2. Defines goals, learning outcomes and direct measures of student learning that are well articulated and credible to stakeholders. Mature assessment efforts include direct measures of student learning that are mapped by the program to WSU’s institutional goals.
  3. Presents evidence collected in its assessment and cogently analyzes that data to suggest next action steps, which are verified with stakeholders.
  4. Motivates, facilitates and provides supportive leadership with concrete policies for the activities above.

The Provost’s office, through the Executive Council:

Chooses and defines assessment questions of interest to the university, verifies these questions in dialog with stakeholders (e.g., the HEC Board, OFM, NWCCU, the Washington Legislature) and in the national conversation about assessment and accreditation (AACU, AEA, etc.), and, working with OAI, reports annually on how the university:

  1. Articulates goals and learning outcomes that are credible to internal and external stakeholders (e.g., review and revise WSU’s six learning goals).
  2. Develops university-wide assessment questions (e.g., purpose for assessment beyond accreditation needs)
  3. Presents evidence collected in institution-wide assessment activities and cogently analyzes that data to suggest next action steps for the assessment system and the university, which are verified with WSU’s stakeholders (e.g., see annual updates at UniversityPortfolio.wsu.edu).
  4. Implements policies that motivate, facilitate and provide a supportive context for the activities above.

The Office of Assessment and Innovation facilitates the work of the Provost by:

  • Consulting with programs about their assessment activities, including providing feedback to programs on their reports.
  • Collecting annual program reports and facilitating evaluation of the reports using the Guide to Effective Student Learning Outcomes Assessment rubric.
  • Collecting data from programs on their direct measures of student learning outcomes.
  • Reporting to the public and WSU constituents on the efficacy of university- and program-level assessment activities and the attainment of learning outcomes.

Bibliography:

CHEA 2011 Award Submitted

CHEA has an annual awards competition (http://chea.org/2011_CHEA_Award.html) for innovative assessment efforts. Attached is the WSU 2011 application, submitted last Friday, describing our pilot year of institutional assessment.

WSU CHEA 2011 Award Application

Planning the responses to College of Engineering and Architecture

Meeting notes from today, organizing efforts to send feedback to the college and its programs.

Analysis of Inter-rater agreement 2009-10

Lee,

Thanks for telling me that you completed rating Honors also.

Our average ratings for that program were 5, 5.5, 4.5, and 5, so we are a little lower than you, but in the same category (“Integrating”) on all but one dimension.

You can see all our results here: https://universityportfolio.wsu.edu/2009-2010/Pages/default.aspx

We are exploring two measures of inter-rater reliability: within 1 point, and within the same category.

In terms of scores, see the graph, which we think looks good: 83% of our scores are within 1 point of each other.

Regarding being in the same category, we are not doing as well; it seems that we often come close but straddle the lines.

What is valuable about your rating two programs (one high and one low) is that we can begin to get a sense that you see our measure the same way we do. Another kind of test we need to do is to see whether outsiders agree with us in the messy middle.

We have more work like this to do with external stakeholders to see how well our tool plays in wider arenas.

Nils

On 10/13/10 4:40 PM, “Lee” wrote:

> Hi Nils,
>
> I sent in my review of Honors.  I gave them all top marks.  Was I right?  They
> struck me as being the Bar we’re all trying to reach!  It’s almost like you
> wrote the review rubric to FIT what they’ve done!?
>
> Lee
> ________________________________
> From: Nils Peterson [nils_peterson@wsu.edu]
> Sent: Tuesday, September 28, 2010 3:47 PM
> To: Lee
> Subject: Another WSU program to review
>
> Lee,
>
> Rather than Business, I’m giving you our Honors program. This is a program
> that has worked with our unit for several years and we know them.
>
> I think you will find it contrasts from Biology’s report in ways that may help
> you exercise more of the rubric’s scale.
>
> Thanks for your interest and help
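
As a rough sketch of the two agreement measures discussed in this exchange (within 1 point, and within the same rubric category), the following assumes ratings on a numeric 1-6 scale. The category bands and example scores below are illustrative only, not our actual cut points or data.

    # Illustrative sketch (not OAI's production code): two inter-rater agreement
    # measures for a pair of raters. The category bands are hypothetical; only
    # "Integrating" is named in the exchange above.

    def within_one_point(rating_a, rating_b):
        """Measure 1: the two ratings differ by no more than 1 point."""
        return abs(rating_a - rating_b) <= 1.0

    def category_of(rating, bands):
        """Map a numeric rating to the first category band that contains it."""
        for name, low, high in bands:
            if low <= rating <= high:
                return name
        return None

    def same_category(rating_a, rating_b, bands):
        """Measure 2: both ratings fall in the same rubric category."""
        return category_of(rating_a, bands) == category_of(rating_b, bands)

    # Hypothetical bands for a 1-6 scale, for illustration only.
    BANDS = [("Emerging", 1.0, 2.5), ("Developing", 2.75, 4.25), ("Integrating", 4.5, 6.0)]

    print(within_one_point(4.5, 5.5))      # True: within 1 point
    print(same_category(4.5, 5.5, BANDS))  # True: both fall in "Integrating"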

Continued work to develop UnivPort website

Here is a calendar of our end-of-year 2009-10 work to finish reports to NWCCU, along with discussion of the data and representations needed to get the website developed.

—— Forwarded Message
From: Nils Peterson
Date: Thu, 14 Oct 2010 10:20:35 -0700
To: Corinna Lo, Gary Brown, Joshua Yeidel, Peg Collins, Jayme Jacobson
Conversation: Ratings Page on UP/2009-2010
Subject: Re: Ratings Page on UP/2009-2010

Corinna, it’s helpful to understand your intention. It seems to me that this may also be a discussion about summative vs. formative assessment. That is, ready comparison could encourage programs to find where they are in the community, to quickly find programs that performed higher on a given dimension, etc. The current implementation, which makes comparison difficult, makes it feel to me like our intention is summative.

On 10/13/10 9:16 PM, “Corinna Lo” wrote:

I think technically it would be possible in Mathematica. My concern initially when I made the chart was that checkboxes are too inviting for comparison… For this particular use case, people will naturally be drawn to compare programs across colleges. This is what made me create the chart with a dropdown menu instead. I can envision other use cases, such as comparing a program’s targets against direct assessment of student work; then a checkbox would be a good option to have.

– corinna

On 10/13/10 8:27 PM, “Brown, Gary” wrote:

I like the one with small charts to the right. It works fine. I will also think about trimming the prose. I imagine, however, we will be asked to provide the checkbox comparisons we had in the Google Docs approach. Will this be possible to do with Mathematica?

From: Yeidel, Joshua
Sent: Wednesday, October 13, 2010 7:32 PM
To: Lo, Corinna; Collins, Peggy; Peterson, Nils; Brown, Gary; Jacobson, Jayme K
Subject: Ratings Page on UP/2009-2010

I was dissatisfied with the layout of the Ratings page because the copy is so long it pushed the charts “below the fold”, as they say in the newspaper business.

I put together a test page called Ratings1 (not published, available only by logging in):
https://universityportfolio.wsu.edu/2009-2010/Pages/Ratings1.aspx

The College chart on this page comes from a test Mathematica page I made based on Corinna’s chart page, but somewhat smaller. [I didn’t bother with the other charts until I get some feedback on this.] I don’t think the chart can be much smaller, because the dropdown has to fit the full name of CAHNRS.

If you look at this in a 1024×768 window, the copy column is narrow, but not unbearably so, IMO.  In wider windows, it works fine.

What do you think?

— Joshua

—— End of Forwarded Message

Examining the quality of our assessment system

With most of the 59 programs rated for this round, we are beginning an analysis of our system of assessment.

To recap, we drafted a rubric about a year ago and tested it with the Honors College self-study and a made-up Department of Rocket Science self-study. We revised the rubric through discussions among staff and some external input. In December we used the rubric to rate reports on 3 of 4 dimensions (leaving off the action plan dimension in the first round). Based on observations from the December round, the rubric was revised in mid-spring 2010.

We tested the new rubric at a state-wide assessment conference workshop in late April, using a program’s report from December. The group’s ratings agreed pretty well with our staff’s (data previously blogged).

The May-August versions of the rubric are nearly identical, with only some nuance changes based on the May experiences.

The figure below is a study of the ratings of OAI staff on each of the 4 rubric dimensions. It reports the absolute value of the difference between the ratings for each pair of raters, a measure of inter-rater agreement. We conclude that our ratings are in high agreement: 54% of rater pairs agree within 0.5 point (85/156), and 83% agree within 1.0 point. We also observe that the character of the distribution of agreement is similar across all four rubric dimensions.
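
For readers who want to reproduce this kind of summary, here is a minimal sketch of the calculation, assuming each program-by-dimension item has ratings from several OAI staff. The rating values in the example are made up for illustration; they are not our 2009-10 data.

    # Sketch of the pairwise agreement summary described above (illustrative only).
    from itertools import combinations

    def pairwise_differences(ratings):
        """Absolute difference in ratings for every pair of raters on one item."""
        return [abs(a - b) for a, b in combinations(ratings, 2)]

    def agreement_summary(rating_sets):
        """Share of rater pairs whose ratings are within 0.5 and within 1.0 point."""
        diffs = [d for ratings in rating_sets for d in pairwise_differences(ratings)]
        n = len(diffs)
        return {
            "pairs": n,
            "within_0.5": sum(d <= 0.5 for d in diffs) / n,
            "within_1.0": sum(d <= 1.0 for d in diffs) / n,
        }

    # One list of staff ratings per (program, rubric dimension) item; values are made up.
    example = [[5.0, 5.5, 4.5], [3.0, 3.5, 3.0], [4.0, 5.0, 4.5]]
    print(agreement_summary(example))  # pairs=9, within_0.5 is about 0.78, within_1.0 = 1.0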

Calendar for the last month before NWCCU Report

This image shows more details of our understanding of the last 30 days before the NWCCU report on 10.15.2010. A year ago we “guessed” the date at 10.10.10, which still shows in the figure.

Late reports have come in around the 9.17 deadline, driven by the President’s and Provost’s goal of getting to 100% reporting. The report (or at least the executive summary) is going through the Provost to the Regents, hence the 9.17 deadline before the September Regents meeting. Another post in this chronicle gives more details of the web reporting that needs to be accomplished in the remaining weeks.