From: Brown, Gary
Sent: Wednesday, October 14, 2009 1:00 PM
To: Anelli, Carol A; Baker, Danial; Bray, Brenda; Brown, Gary; Chenoweth, Candace; Cote, Jane M; Helmstetter, Edwin; Jasmer, Doug; Kelly, Roberta; Kidwell, Kim; Lanier, Mary Sanchez; Olsen, Robert G; ‘Pratt, Dick’; Probst, Tahira; Roll, John; Whidbee, David; Ivory, Carol S
Cc: Brown, Gary; Rodriguez-Vivaldi, Ana Maria; Byington, Tori C; James, Larry G
Subject: 2 of 2–the pilot assessment of the Honors self-study
Hello again, assessment pioneers.
Here is the task. If you log into this site, you will find the directions to guide you through the pilot assessment process:
As an overview, you will be asked to:
1. Read over the Assessment Criteria (more about this in a minute).
2. Read the Honors self-study.
3. Go through the online rubric and assign the Honors self-study scores on each of the four dimensions of the rubric.
A note about this process (also reiterated on the assessment site and the home of the WSU Assessment Portfolio).
The goal of this work is to put into place an institution-wide system for reflecting on our assessment work. That system is intended to be formative, focused on principles of good assessment, and designed to make visible WSU's commitment to engaging our community in a collaborative process of continuous improvement. It is our belief that a good process will yield good outcomes and demonstrate, as stated in WSU's strategic plan, that 'We are committed to being ethical and responsible stewards of University resources and to being accountable for upholding the full scope of these values.'
Clearly your feedback on the process, all aspects, is essential. The rubric used in this context is perhaps the first of its kind, and though it is derived from similar work we've adapted or done with other professional organizations, every new context is unique. The criteria are framed to reflect the principles of good assessment, and we are hoping the process will both promote rigor and afford flexibility, a key principle endorsed by the Council for Higher Education Accreditation (CHEA) and more recently by the U.S. Department of Education. Your feedback on the Honors assessment will help us underscore the goal of attaining flexibility, in particular. Your comments in that regard will probably be even more valuable than the scores you assign, as that language, along with your direct feedback on the rubric (also invited on the site), will be useful for the next iteration of the criteria or language we use to renew our culture of evidence.
You will note the assessment rubric has two parts. The first page is a short form, to introduce readers to the four dimensions of the rubric and provide a holistic overview. This page is followed by a longer form of the same rubric; long and dense, this version provides specific criteria describing each dimension at three different levels. This length is not an accident. We have done assessment of rubrics and learned that more sophisticated criteria correlate with better outcomes. Use of a sophisticated or dense rubric is perhaps slow and challenging at first, but raters internalize criteria quickly, and it ultimately can result in more efficient and more accurate assessment. The rubric is also long because an additional goal of this rubric, this effort, and formative assessment in general is to be educative. The detailed descriptive criteria are our first attempt to address many of the key bottlenecks we have encountered in our work with about 45 WSU programs here and several elsewhere we have reviewed. Not the least of those bottlenecks is the language of assessment, and so we certainly anticipate deepening our own understanding of the ways the language we use is variously perceived.
As part of the practice assessment you will do using this rubric, you’ll also be asked to give feedback on the rubric and your experience using it.
You will note, too, that nowhere in the DRAFT criteria is there a place to assess the level of student performance or gains resulting from the assessment. It is our view that, at this stage of implementing the process, that kind of focus would be a distraction.
We will also want to consider our levels of agreement on the Honors assessment, as we hope to move, over time, toward greater agreement, or inter-rater reliability. If we are to reasonably claim WSU has a system, then an indicator of that system will eventually need to be consensus. Some reasonable measure of inter-rater reliability (roughly 80% agreement), not incidentally, is essential when establishing the validity of an assessment that values, as we advocate, the expertise of the faculty in our own programs. It is that last point, the expertise of our own community, that we hope this process will help us promote. It is my view that establishing that kind of expertise is essential if we are to maintain the autonomy as an institution, as a university, that our work requires.
Obviously there is much to talk about and work through, but for now a couple more notes on logistics:
We will close the online rubric rating form a couple of days before our next meeting. At that point, when you log on you will be able to see the results.
Meanwhile, we will be holding a couple of open forums where you can visit us and work through this task with help from OAI staff. Those forums are not limited to technical concerns or procuring technical support. We welcome additional feedback or discussion on any of the points touched on above, or others that occur to you.
Dr. Gary R. Brown, Director
The Office of Assessment and Innovation
Washington State University
509 335-1362 (fax)
Attachment (the referenced rubric): A of A Rubric Beta (Oct 15, 2009)