The RainKing Chronicles

On Oct 16, 2009, at an OAI staff meeting, this whiteboard image was created to describe our vision of the RainKing Chronicles. The Chronicles were implemented in SharePoint, allowing email to be cc:ed to them (an idea we learned from Margo Tamez’s portfolio work). Limitations of SharePoint meant that we needed to go back and tag items after posting.

With the disbanding of the team that did the work chronicled, the materials were (mostly) exported to the RKChronicle tag in this blog.

Shot of whiteboard discussion of RainKing Chronicle


Transparency, institutional self-presentation, and public interest

From the November/December 2010 issue of Change:

“Here’s looking at you:  Transparency, institutional self-presentation, and the public interest”
by Alexander C. McCormick (Director of NSSE at Indiana, Bloomington), pp. 35-43.


“But transparency can be about more than consumer information.  It can provide an opportunity for a college or university to proclaim its success while acknowledging that it needs to improve in some areas” (36).


“For internal audiences, this kind of information focuses attention and signals priorities for improvement, while for external observers it offers evidence that the academy takes its education mission seriously and practices what it preaches regarding the use of evidence to support assertions, interpretations, conclusions and prescriptions for action” (36).


“Such openness is risky for several reasons, though.  Revealing shortcomings invites negative consequences, whether from a legislature that may be seeking ways to cut budgets or to demonstrate hard-nosed commitment to quality, from competitors seeking to exploit vulnerabilities, from alumni or other constituents on the lookout for evidence of a decline in standards, and so on” (36).


“As a result, ‘transparency’ is sometimes a euphemism for what might be more accurately described as strategic communication or image management, in which information is carefully selected and presented so as to portray a successful and effective institution” (36).


“This is a time of great sensitivity about inter-institutional comparisons that rises in some cases to comparison phobia” (42).


“Now, let me propose two important ways that the transparency movement needs to be refocused.  A significant challenge that lies ahead for transparency efforts, and indeed for the entire accountability movement, is the exclusive and potentially misleading focus on institution-level measures of central tendency” (43).


“The sooner we come to grips with the fact that variation in the student experience is far greater within institutions than between them and devise ways to represent and talk about this internal variability, the sooner we will focus on the real quality questions.  In light of unacceptably low graduation rates and a new national focus on improving college completion, we need to shift much of the energy presently focused on comparing institutions to finding ways to improve the engagement and success of the least engaged students at every institution” (43).


“And finally, another imperative for accountability and transparency is to move from performance snapshots—point-estimates of student engagement and learning outcomes—to plans for improvement and results.  Policy makers and the news media should be less concerned with where an institution falls in the performance distribution than with what the results signify and what is to be done about them.  Rather than pressuring colleges and universities to disclose assessment scores, the emphasis should be on transparency regarding a different part of the assessment cycle: action plans and interventions, followed by careful evaluation of whether those interventions achieve the desired results” (43).

Migrating a SharePoint blog to Hosted WordPress

I needed to migrate a SharePoint blog to hosted WordPress (this blog, category RKChronicle). After a bit of searching, I assembled this solution: SharePoint to Excel, Excel to a CSV file, CSV imported into WP on a 3rd-party host, then WP export/import into hosted WP.


SharePoint to MS Excel. Using Internet Explorer under Windows, in the SharePoint blog, use View All Site Content to access the Posts (a SharePoint list).

Create a SP List View for export. The importer expects certain column headings, and you can make your view align (mostly) with what it expects. The view I created (using SP naming) was:

Title, Body, Category, Created, Created By

When viewing the list use the Actions dropdown to Export to Excel. Save the resulting file.

MS Excel to CSV. The importer I found requires Comma Separated Values, so open the file created above in Excel and “Save As” into CSV format.

Import CSV to WP. I found the CSV Importer ( ) by Hagerty, Pedro & Loeffler. (You cannot install the plugin on the free version of WP hosted at . I used a 3rd-party hosted version of WP to install the plugin.)

The importer is smart: you can omit columns and it will set default values. SharePoint didn’t have some of the data the importer expected, so I omitted post_type, post_excerpt, post_tags, and post_slug. I renamed the column headings from the Excel export (step above).

The default columns expected by the importer and the ones I used (bold).


I set the option “import posts as draft” so I could work with them privately. When the import worked, it worked quickly; when it didn’t, it gave no useful error messages. I found that I needed to be very careful with the CSV file (the first couple of times I edited it on a Macintosh, and that seemed to make it not work). There is a sample file; play with it until you get your tool set right.

Export from 3rd party WP and import into Hosted-WP. This was simple, 163 posts in 25 seconds. (Look under the Tools menu.)

The free hosted WP does not allow you to install plugins, so I needed to work via a 3rd-party installation of WP, version 3.0.2.

I tried going via RSS, but was unable to see how to get a SharePoint blog RSS feed that contained the bodies (as opposed to linking to them). WP wants to import an XML file, not a feed, and I could not find a way to get the necessary file from a SP feed.

I mapped SharePoint Categories to WP tags for my own reasons.

If you touch the CSV file by opening and saving it in Excel for Macintosh, you will screw it up somehow. All work must be done on the PC.

I found that I needed to convert some characters in the CSV file. I think SP must have used Unicode for smart quotes and for carriage returns. I did a bulk replace on the CSV file to deal with most of the issues.
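For anyone repeating this, the bulk replace can be scripted. Here is a minimal Python sketch of the cleanup described above; the exact set of characters SharePoint emits is an assumption, so extend the table to match what your export actually contains.

```python
# Sketch of the bulk replace described above: normalize Unicode smart
# quotes and stray carriage returns that the SharePoint export may have
# left in the CSV. The character table is an assumption -- adjust it
# to whatever your own export contains.

REPLACEMENTS = {
    "\u2018": "'",   # left single smart quote
    "\u2019": "'",   # right single smart quote
    "\u201c": '"',   # left double smart quote
    "\u201d": '"',   # right double smart quote
}

def clean_text(text: str) -> str:
    """Normalize line endings, then swap out problem characters."""
    text = text.replace("\r\n", "\n")  # Windows line endings -> newline
    text = text.replace("\r", " ")     # stray carriage returns inside fields
    for bad, good in REPLACEMENTS.items():
        text = text.replace(bad, good)
    return text

example = "Body with \u201csmart quotes\u201d and a stray\rreturn."
print(clean_text(example))  # Body with "smart quotes" and a stray return.
```

Read the exported CSV as UTF-8, run the whole file through `clean_text`, and write the result back before handing it to the importer.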

The process did not move attachments and images. I did this by hand and also made minor clean up to the posts.

Biggest annoyance: the process brought over the posted date, but when I converted the drafts to published, the date changed to the current date. Were I doing this again, I’d consider importing in published format (not draft) into an obscure instance of WP, doing the editing there, then exporting to the production WP.

Developing WSU’s Guide to Effective Student Learning Outcomes Assessment Rubric

The table below charts the steps that Washington State University’s Office of Assessment and Innovation (OAI) and its stakeholders went through to develop the Guide to Effective Program Assessment rubric used in the WSU System of Student Learning Outcomes Assessment.

Each step below gives a description, the generalized model, and the specific example of application at WSU’s Office of Assessment and Innovation (OAI).

Determine the scope of the program assessment initiative.
Generalized model: Solicit stakeholder input about the purpose of the assessment; determine which aspects of program assessment will be most useful for the institution to assess.
At WSU: Gary met with provosts and assistant deans to frame the general parameters.

Develop a framework for assessing assessment at the meta-level.
Generalized model: Research existing literature (and tools) to begin delineating rubric dimensions.
At WSU: Developed a framework for the rubric based on the WASC Evidence Guide, writings from Peter Ewell and others, and the Transformative Assessment Rubric, an EDUCAUSE Learning Initiative project to evaluate the responsiveness of assessment plans.

Flesh out criteria for each dimension (October 2009).
Generalized model: Begin drafting rubric criteria, periodically reviewing with stakeholders and revising as needed.
At WSU: Fleshed out rubric dimensions with specific performance criteria; shared them with the Assessment Liaison Council and external reviewers, getting feedback about what was clear, what was not, and how much detail was useful. (Guide to Assessment Rubric (Oct 2009))

Test a draft rubric (November 2009).
Generalized model: Solicit program assessment plans for low-stakes review.
At WSU: Solicited a program’s report written for WSU’s 2009 NWCCU accreditation self-study and tested the rubric with OAI and external reviewers.

Pilot an initial assessment cycle (Dec 2009; ratings done Dec 2009-Feb 2010).
Generalized model: Solicit program assessment plans for formative review. Norm raters and launch an initial review. Return rubric-based feedback and scores to programs and report program scores to the institution.
At WSU: Via program liaisons, all WSU undergraduate programs were required to submit a first draft of an assessment self-study by Dec 18, 2009. Programs were given a template with areas for each rubric dimension. In the first cycle, only three of the four dimensions were required. Reviewers participated in a norming session, but in the initial phase all scores were reconciled if they were more than a point apart or if there was a split at the “competency” level. Occasionally, a 3rd rater was required. Assessment plans were scored with the rubric, but it was emphasized that this was an initial and provisional rating. (Guide to Assessment (Dec 2009))

Revise rubric/assessment process (February-March 2010).
Generalized model: Solicit feedback from programs as well as reviewers about what worked and what didn’t. Revise rubric and assessment process to be more useful and efficient.
At WSU: The rubric was revised based on feedback captured from notes reviewers made as they were using the rubric, as well as feedback from programs via an online survey. Informal observations from programs to their OAI contacts were also included. The 4 dimensions remained essentially the same, but the number of levels, level names, and wording of rubric criteria changed considerably. All wording was changed to a positive format (what was happening versus what was missing), and a definition of terms was added as a cover page. The 6-point scale remained the same.

Test rubric (April-May 2010).
Generalized model: Solicit feedback from stakeholders in the process of using the rubric on a sample report.
At WSU: The revised rubric was tested internally by rating a report written for the December 2009 cycle. The rubric and report were also used by an audience at a regional assessment conference. Despite not norming together, the internal and external reviewers agreed fairly closely.

Launch a second assessment cycle (May-August 2010).
Generalized model: Solicit program assessment plans. Norm raters on the revised rubric and begin the review process. Return rubric-based feedback and scores to programs and report program scores to the institution.
At WSU: Programs were required to submit a second draft of an assessment self-study by May 17, 2010 (with an option to delay to August 20). They used a similar template with areas for each rubric dimension. In the second cycle, all four dimensions were required. Reviewers participated in an extensive norming session over several days, and the rubric was tweaked slightly. (Guide to Effective Program Assessment Rubric (May 2010) and the slightly revised version, Guide to Effective Program Learning Outcomes Assessment Rubric (August 2010))

Review evidence from the review process (October-December 2010).
Generalized model: Review quantitative and qualitative evidence of the review process.
At WSU: Studies of interrater agreement were conducted, along with collection of observations from using the rubric (the scoring tool had a place for reviewers to make comments about the rubric in the context of a specific rating effort). These were used to begin framing the next revision of the rubric and template.

OAI Interrater analysis (version 1) excel datasheet

Draft next revision of rubric and report template (halted Dec 2010).
Generalized model: Review literature and data from previous uses of the rubric; look for patterns of rater disagreement.
At WSU: OAI staff began examining their experiences and the kinds of inter-rater disagreements, reviewed literature for key performance criteria, and examined notes left by reviewers as they used the rubric May-Sept. The resulting notes were intended as input to the next revision. (Guide to Effective Program Learning Outcomes Assessment Rubric (Dec 2010 notes)) In addition, the template programs used to complete the report was revised to better prompt the writing. (Program Learning Outcomes Assessment Template Revision Dec 2010)

WSU System for Student Learning Outcomes Assessment

WSU’s Federated System of Student Learning Outcomes Assessment

The WSU system is a federated one, premised on the need for programs to tailor assessment to their particular circumstances and context, yet coordinate with the central effort. The first fruits of the system are in the University’s 2009-10 portfolio of its assessment and accreditation work.

During the first cycle of the System, the Office of Assessment and Innovation (OAI) asked each Undergraduate Degree-granting Program to write an annual report on its student learning outcomes assessment work. The report summarizes how each undergraduate program:

  1. Chooses and defines assessment questions of interest and verifies these questions in dialog with its relevant stakeholder group.
  2. Defines goals, learning outcomes and direct measures of student learning that are well articulated and credible to stakeholders. Mature assessment efforts include direct measures of student learning that are mapped by the program to WSU’s institutional goals.
  3. Presents evidence collected in its assessment and cogently analyzes that data to suggest next action steps, which are verified with stakeholders.
  4. Motivates, facilitates and provides supportive leadership with concrete policies for the activities above.

The Provost’s office, through the Executive Council:

Chooses and defines assessment questions of interest to the university and verifies these questions in dialog with stakeholders (e.g., HECBoard, OFM, NWCCU, Washington Legislature) and in the national conversation about assessment and accreditation (AACU, AEA, etc.). Working with OAI, it reports annually how the university:

  1. Articulates goals and learning outcomes that are credible to internal and external stakeholders. (e.g., review and revise WSU’s 6 learning goals)
  2. Develops university-wide assessment questions (e.g., purpose for assessment beyond accreditation needs)
  3. Presents evidence collected in institution-wide assessment activities and cogently analyzes that data to suggest next action steps for the assessment system and the university, which are verified with WSU’s stakeholders. (e.g., see annual updates in
  4. Implements policies that motivate, facilitate and provide a supportive context for the activities above.

The Office of Assessment and Innovation facilitates the work of the Provost by:

  • Consulting with programs about their assessment activities, including providing feedback to programs on their reports.
  • Collecting annual program reports and facilitating evaluation of the reports using the Guide to Effective Student Learning Outcomes Assessment rubric.
  • Collecting data from programs on their direct measures of student learning outcomes
  • Reporting to the public and WSU constituents on the efficacy of university and program-level assessment activities and attainment of learning outcomes.


Rubric revision process

With the NWCCU report filed, we turned to some assessment of our inter-rater reliability, viewing that as an opportunity to understand possible differences in our shared understandings. These may be rubric elements or interpretations of rubric language.

This Google doc is the space where we are looking at that rating data and recording results of a read-aloud norming protocol.
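As a rough sketch of the kind of agreement check involved (the scores below are hypothetical, not the actual rating data), one simple statistic is the fraction of paired ratings where two raters land within a point of each other:

```python
# Fraction of paired ratings where two raters land within `tolerance`
# points of each other on the rubric scale. All scores are hypothetical.

def agreement_rate(rater_a, rater_b, tolerance=1.0):
    """Share of reports where the two ratings differ by <= tolerance."""
    pairs = list(zip(rater_a, rater_b))
    agree = sum(1 for a, b in pairs if abs(a - b) <= tolerance)
    return agree / len(pairs)

rater_1 = [3, 4, 2, 5, 3]  # hypothetical scores on five reports
rater_2 = [3, 5, 4, 5, 2]

print(agreement_rate(rater_1, rater_2))  # 0.8 (one pair differs by 2)
```

Reports falling below whatever threshold the team chooses would then be candidates for the read-aloud norming protocol.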

How a rubric can communicate

OAI has been finishing up its 2009-10 cycle of reviews of program-level assessment; see the University’s portfolio for details about the process and the results.

One of the responses to a program regarding communication with stakeholders included a summary of the utility of a rubric as a communication tool:

Under “Communication” the report states: “Program Objectives and Outcomes will be more extensively discussed with the students in classes to encourage more participation in the assessment and improvement process.”

A programmatic assessment rubric could be a very useful tool to encourage students, and other stakeholders, to participate in the assessment and improvement process. For example, a rubric:

  • Provides a reference point for students to consult repeatedly as they monitor their own learning and develop the skills of self-assessment. Students are supported in becoming better judges of quality in their own and others’ work.
  • Supports the development of a sense of shared expectations among students, faculty, staff, and external stakeholders.
  • Provides evaluators and those whose work is being evaluated with rich and detailed descriptions of what is being learned and what is not by facilitating a breaking down of outcomes into dimensions and of dimensions into criteria.
  • Provides criteria to shape and guide students’ engagement with one another and with course content.
  • Promotes a shared understanding among faculty, students, and stakeholders of the program outcomes.

Chart of raw scores on 3 related engineering assessment reports, Fall 2010

Re: chart of raw scores on 3 related engineering assessment reports, Fall 2010

Thanks for making this graph; it’s a helpful visualization of the raters’ differences. You, Jayme, and I have confirmed that the three reports are substantially identical. The other observation you made the other day was that Raters 1 & 2 were together on one program, which has led to that program having a different rating from the other two programs.

I re-made your chart, adding a solid black line for the average of 6 ratings (as if all raters read one report, which in effect they did). I added two dashed black lines at 1/2 point above and below the average. Points between the dashed lines are ratings within 1/2 point agreement of the average.

I see 3 blue, 1 red and 1 orange rating that are outside the 1/2 point agreement band.

One conversation could be about the quality of our internal norming (this was an inadvertent test of that).

More pressing now perhaps is how we represent this data back to the college and programs involved.

Gary and I chatted about my first graph, so I removed Rater 1, the most significant outlier, and re-made the chart. In this 2nd chart, only one rating of 20 is outside the 1/2 point tolerance. I conclude we should update Process Actions with the average scores for all 3 programs. The resulting averages of 5 raters are:

Dim 1: 2.40
Dim 2: 2.70
Dim 3: 2.00
Dim 4: 2.10
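The averaging and half-point band described above can be sketched as follows; the scores in the example are illustrative, not the actual ratings.

```python
# Flag individual ratings outside a half-point band around the mean
# across raters, mirroring the dashed-line band on the chart.
# The scores below are illustrative, not the actual OAI data.

def flag_outliers(ratings, band=0.5):
    """Return (mean, indices of ratings more than `band` from the mean)."""
    mean = sum(ratings) / len(ratings)
    outliers = [i for i, r in enumerate(ratings) if abs(r - mean) > band]
    return mean, outliers

# One rubric dimension, six raters (illustrative scores):
dim_scores = [2.0, 2.5, 2.5, 3.0, 2.5, 4.0]
mean, outliers = flag_outliers(dim_scores)
print(round(mean, 2), outliers)  # 2.75 [0, 5]
```

In this example, ratings 0 and 5 fall outside the band, which is the same idea as counting the points outside the dashed lines on the chart.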

On 11/8/10 2:18 PM, “Kimberly Green” wrote:



Kimberly Green
Educational Designer / Assessment Specialist
Office of Assessment and Innovation
Washington State University
Pullman, Washington
(509) 335-5675

As you enter a classroom ask yourself this question: If there were no students in the room, could I do what I’m planning to do?  If your answer is yes, don’t do it.     -Ruben Cubero

CHEA 2011 Award Submitted

CHEA has an annual awards competition ( ) for innovative assessment efforts. Attached is the WSU 2011 application, submitted last Friday, describing our pilot year of institutional assessment.

WSU CHEA 2011 Award Application

Planning the responses to College of Engineering and Architecture

Meeting notes from today to organize efforts to send feedback to the college and its programs.