Coordinating on Glossary Terms

Folks,
Just an update on the meeting yesterday with Larry.  The Goals groups are meeting and defining terms for WSU’s Strategic Goals (Core Themes).  The implication as I read it is that we need to hold off on these terms:

Goals
Outcomes
Objectives

So, in the context of the assignment Ashley shared, the language you find that elaborates on these concepts (or translates them effectively, as suggested) may have to be reworked to align with the efforts of the four WSU Goal groups.  Meanwhile, I am shipping AEA and NWCC&U definitions to the Goal Groups, as Larry confirmed and suggested.

There remain a number of terms and conceptual bottlenecks related to the language of assessment that will no doubt keep us busy.

FYI

Howard Grimes
Mary Wack
Muriel Oaks
Melynda Husky

Each chairs one of the four groups, in order.

Institutional Self-Assessment Rubric

This post is in support of a TLT webinar in the series titled “Power of Rubrics.”
[Archive of session]
Gary Brown, Theron DesRosier, Jayme Jacobson & Nils Peterson, Washington State University

Introduction and Background on the Problem

Washington State University is in the process of responding to changes in standards by its accrediting body, NWCCU. The response includes the transformation of the former Center for Teaching Learning and Technology (CTLT) into the Office of Assessment and Innovation (OAI).

The University is developing its response to NWCCU’s changed standards, and OAI is helping move institutional thinking toward a model that embeds assessment in ways that help faculty think about student learning outcomes, and about the processes that programs are using to assess their work on improving those outcomes.

This work builds on work of the former CTLT known as the “Harvesting Gradebook.” Previous reports provide context on using the Harvesting Gradebook with students: an AAC&U report (Jan 2009) and an update (Fall 2009). This report links to a webinar archive that paints a picture of how to roll harvesting up: From Student Work to University Accreditation.
Using Harvesting Feedback with Academic Programs

In the previous webinar (From Student Work to University Accreditation) we described a vision for how harvesting could be used to move data from the level of an individual piece of student work up through levels of assessment and reflection to a university-level accreditation report. Presently OAI is engaged in deploying a middle-level piece of this vision, the assessment of program-level self studies with an “Assessment of Assessment” rubric. The most current version of the rubric and other materials for the process are linked from the grey portion of the OAI website banner.

Figure 1. The process involves the academic program collecting evidence, writing a self study, and having the self study assessed with the University’s rubric (called the Guide to Assessment on the OAI website, formerly the Assessment of Assessment rubric). The image shows the process from data sources (upper left), to self study, to rubric-based assessment, to radar graph of results. This diagram represents work at the level of an academic program, a “middle tier” in the vision presented in From Student Work to University Accreditation.
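To make the rubric-to-radar step concrete, here is a minimal sketch of the arithmetic behind the radar graph: several raters score the self study against the rubric, and the mean score on each criterion becomes one axis of the graph. The criterion names and scores below are invented for illustration, not taken from the actual Guide to Assessment.

    # Hypothetical rubric ratings of one program self study (1-6 scale).
    ratings = [
        {"Goals": 4, "Measures": 3, "Findings": 5, "Use of Results": 2},
        {"Goals": 5, "Measures": 3, "Findings": 4, "Use of Results": 3},
        {"Goals": 4, "Measures": 2, "Findings": 5, "Use of Results": 2},
    ]

    criteria = ratings[0].keys()
    radar = {c: sum(r[c] for r in ratings) / len(ratings) for c in criteria}
    for criterion, mean_score in radar.items():
        print(f"{criterion}: {mean_score:.1f}")  # one axis of the radar graph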

Readers interested in trying the process are invited to do so at the WSU University Portfolio site for 2009-10. The Department of Rocket Science (on the site as of 12/7/09) was created as a sample. Other programs appearing on the site (beginning in January 2010, since revised to March 2010) are actual WSU programs seeking formative feedback. Contact us if you want to participate.

A Prezi visual of the WSU assessment calendar provides an interactive picture of the assessment cycle and calendar and will serve as a “Dashboard” for monitoring progress.

Guide to Assessment – Rubric
Because of the wide diversity of programs in the WSU 4-campus system, a one-size approach to learning outcomes assessment will not fit all. Consequently, WSU is developing a rubric to assess the self-study plans (short form and long form). Like the AAC&U VALUE project, the WSU rubric assumes that “to achieve a high-quality education for all students, valid assessment data are needed to guide planning, teaching, and improvement.”

The Guide to Assessment is the tool OAI is creating to help programs assess the quality of their student learning outcomes assessment activities. Using the Harvesting mechanism, programs will be able to gather evidence from stakeholders outside the university (a requirement of the accreditor) as well as self-, peer-, and OAI reviews.

Short form of the Rubric

Building a learning community online

I have been thinking about dissemination and adoption of knowledge as our organization (formerly WSU’s Center for Teaching Learning and Technology) is re-organized to become the Office of Assessment and Innovation (OAI). Our unit’s new challenge is to help the university develop a “system of learning outcomes assessment” in response to new requirements from our accrediting body, NWCCU.

We have captured a discussion about how our unit’s web presence might be changed in a series of notes and whiteboard shots attached to this blog post.

One part of our ideas for this “accreditation system” can be found in this presentation for the TLT Friday Live on harvesting feedback across multiple levels of the university.  We have a prototype of one of the middle tiers running now to test and refine the rubric.

Our system depends on OAI staff working with “Liaisons” for each College and their “Points” in each academic program, and developing skills among the Liaisons and Points so that they can provide useful feedback to programs on the assessment activities that the programs are undertaking. Because of the diversity of WSU, the specific learning outcomes assessment that programs undertake will need to vary by program. What the university seeks is “robust” assessment of student learning. Our method (links above) involves a meta-assessment of the assessment practices of the programs. For programs to understand and develop robust assessment strategies, I believe that the OAI’s, the Liaisons’, and the Points’ professional development needs to be a key component of the “system of assessment.” [That is, professional learning in the discipline.]

The challenge is to provide professional development in the context of a multi-campus university, with programs and learners at diverse places in their own learning.
We have advocated that learners find their community of practice, join it, and work on their problem in the context of that community. However, that assumes the community of practice exists in an organized way that can be joined. Presently, the CoPs I’m aware of are loose-knit collections of bloggers who have developed the skills of tracking one another. Novices would need to learn these skills as an entry requirement to their own participation. That barrier to entry and participation is probably too high for the WSU community we need to reach. I previously wrote a manifesto describing how our unit should change its web strategy. That proposal also included the concept of finding or building the community of practice online, but it did not solve the problem of how to build that community.

In 2006, Dave Cormier proposed the idea of a “feedbook” of readings that was based on an RSS feed, rather than on traditional paper media. The book was more dynamic because it could be based on blogs or other contemporaneous sources.  In Dave’s later reflections he points to the interesting perspective that the feedbook is (can be) a collaborative effort among a community of learners.

“In addition to the freshness of the material, the multiplicity of voice and perspective and the fact that your textbook will never be out of date, one of the first things that would happen is a decentralization of the instructor. While the instructor would usually be responsible for the basic set of links…gone will be the rabbit out of a hat magic that comes from controlling the flow of knowledge. Students will actually be able to add to that flow of knowledge as their research brings up new sources of course material.”
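Mechanically, a feedbook is little more than an aggregation of the community’s feeds, re-assembled on demand. Here is a minimal sketch, assuming the third-party feedparser library and invented feed URLs; a real feedbook would also carry the instructor’s and the students’ curated link sets.

    import time
    import feedparser  # third-party library: pip install feedparser

    # Invented community feeds; any RSS/Atom URLs would work here.
    feed_urls = [
        "https://example.edu/assessment-blog/feed",
        "https://example.org/cop-member/rss",
    ]

    entries = []
    for url in feed_urls:
        for e in feedparser.parse(url).entries:
            # published_parsed may be missing; fall back to the epoch
            entries.append((e.get("published_parsed") or time.gmtime(0),
                            e.get("title", "untitled"), e.get("link", "")))

    # Newest first: the "book" is rebuilt on every run, so it is never
    # out of date, and adding a student's feed adds their voice to it.
    for published, title, link in sorted(entries, reverse=True):
        print(time.strftime("%Y-%m-%d", published), title, link)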

Dave’s thoughts about multiplicity of voice and perspective seem to fit with Lave and Wenger’s Situated Learning: Legitimate Peripheral Participation (Cambridge University Press, 1991).

John Seely Brown summarizes several of Lave and Wenger’s writings in a piece written for Xerox PARC:

“This work unfolds a rich, complex picture of what a situated view of learning needs to account for and emphasizes, in particular the social, rather than merely physical nature of situatedness…

“Next, a few clarifications are probably helpful. First, as Lave (1991) herself notes, the situation is not simply another term for the immediate, physical context. If it is to carry any significant conceptual import, it has to be explored in social and historical terms. Two people together in a room are not inevitably identically situated, and the situated constraints on practice do not simply arise in and through such isolated interactions. The people and the constraints importantly have social and historical trajectories. These also need to be understood in any situated account.

“Second, community of practice denotes a locus for understanding coherent social practice. Thus it does not necessarily align with established communities or established ideas about what communities are. Community in Lave & Wenger’s view is not a “warmly persuasive term for an existing set of relations” (Williams, 1977). Communities can be, and often are, diffuse, fragmented, and contentious. We suspect, however, that it may be this very connotation of warm persuasiveness that has made the concept so attractive to some.

“Third, legitimate peripheral participation (LPP) is not an academic synonym for apprenticeship. Apprenticeship can offer a useful metaphor for the way people learn. In the end, however, in part because of the way apprenticeship has historically been “operationalized,” the metaphor can be seriously misleading, as LPP has occasionally been located somewhere between indentured servitude and conscription.

“As Lave and Wenger put it:
‘Legitimate peripheral participation is not itself an educational form, much less a pedagogical strategy or a teaching technique. It is an analytic viewpoint on learning, a way of understanding learning. We hope to make it clear that learning through legitimate peripheral participation takes place no matter which educational form provides a context for learning, or whether there is any intentional educational form at all. Indeed, this viewpoint makes a fundamental distinction between learning and intentional instruction.’ [1991: 40]

This quote above is one that I’m still trying to fully absorb [np]

JSB continues:

“One of the powerful implications of this view is that the best way to support learning is from the demand side rather than the supply side. That is, rather than deciding ahead of time what a learner needs to know and making this explicitly available to the exclusion of everything else, designers and instructors need to make available as much as possible of the whole rich web of practice, explicit and implicit, allowing the learner to call upon aspects of practice, latent in the periphery, as they are needed.”

“… The workplace, where our work has been concentrated, is perhaps the easiest place to design [for legitimate peripheral participation] because, despite the inevitable contradictions and conflict, it is rich with inherently authentic practice, with a social periphery that, as Orr’s (1990) or Shaiken’s (1990) work shows, can even supersede attempts to impoverish understanding. Consequently, people often learn complex work skills despite didactic practices that are deliberately designed to deskill. Workplace designers (and managers) should be developing technology to honor that learning ability, not to circumvent it.”

Applying these ideas to OAI/WSU

In the process of becoming the OAI we are re-vamping our website and proposing that it contain several elements:

  • a branded page that provides a basic OAI presence within the university
  • an archive of the former CTLT site, with its various linked resources (many of which retain some value, and the URLs have some reputation in search engines)
  • a university portfolio space (a showcase) where we assist the university in mounting its publicly viewable and assessable evidence for accreditation (the system demonstrated above)
  • an assessment workspace for collaborations on assessment activities with academic units (these collaborations may require managed authorizations)
  • a “social” space for collaboration around professional development related to the problems we are working on

It is the last space that presents interesting design challenges, an opportunity to facilitate LPP, and is the cause of this reflection.

Given that the social space will be closely linked to the OAI’s main university presence, we have a requirement that it be “professional.” Dave’s comment above about multiplicity of voice and perspective is the opportunity, and the concern, to manage. It’s possible that in a community of learners, some members (perhaps more novice ones) will make contributions at a lower level of professionalism or with less insight. The community, and the visitor, should have ways of both hearing, and not over-valuing, these comments, and the community should have ways of responding to them that can facilitate learning for multiple players in multiple ways.

In face to face environments, there are various protocols for making contributions, and cues that give an observer orientation to the hierarchy of expertise and authority within the community. An observer can use these cues to conclude which contributions carry the greatest authority in the group. Protocols for responding to contributions can also help organize the group dynamic. In online settings, these cues may be absent and a visitor may sense immaturity or cacophony (e.g., in a feedbook’s content) where in fact what is happening is that novices are exploring their (partial) understanding by sharing within the community.

The problems as I see them:

  • How can an open online community that embraces legitimate peripheral participation maintain a coherence that allows members and visitors to appreciate its professionalism and maturity while still making space for novices’ participation?
  • What mechanisms can be employed to create “emergent” authority in a web-based learning community without a central oligarchy? (A toy sketch follows this list.)
  • Much of the literature on LPP was developed in the context of face-to-face communities. How can an Internet-based community exploit “incidental” (e.g., drop in) participation by experts, who would not have appeared in face-to-face settings? This question is a recognition that via a feedbook mechanism an item from an expert outside the community can be routed into the stream of the community’s readings (thus ‘incidental’).
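One toy answer, offered only as a sketch of what “emergent” might mean computationally (the member names and endorsements are invented, and this is my own illustration, not an established design): let each member’s authority come from peer endorsements, weighted by the authority the endorsers have themselves accrued, and iterate until the weights settle.

    # Toy sketch of emergent authority: an endorsement counts for more
    # when the endorser is endorsed, so standing emerges from the
    # community itself rather than from a central oligarchy.
    endorsements = {  # invented data: member -> members who endorsed them
        "ann": ["bob", "cam"],
        "bob": ["ann"],
        "cam": ["ann", "bob", "dee"],
        "dee": [],
    }

    authority = {m: 1.0 for m in endorsements}
    for _ in range(20):  # simple power iteration with a small floor
        new = {m: 0.15 + 0.85 * sum(authority[e] for e in endorsers)
               for m, endorsers in endorsements.items()}
        total = sum(new.values())
        authority = {m: v * len(new) / total for m, v in new.items()}

    for member, weight in sorted(authority.items(), key=lambda kv: -kv[1]):
        print(f"{member}: {weight:.2f}")

Even this toy exposes the design questions: should a newcomer start at the floor weight, and how quickly should authority decay if the community stops endorsing someone?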

A of A Template and Rubric post-NWCC&U Meeting in Seattle (11.6)

From: Brown, Gary
Sent: Sunday, November 08, 2009 1:14 PM
To: Peterson, Nils; Ater-Kranov, Ashley
Cc: Green, Kimberly; Desrosier, Theron; Jacobson, Jayme K; ‘Jane Sherman’
Subject: a of a beta 2

Folks,

Attached is a working version of A of A Beta 2.  I’m trying to build into our Assessment of Assessment rubric and template some of Friday’s language from NWCC&U, focused so far mostly in the template (Nils, for Rocket Science mock model report update).  I’m working mostly on the digest form right now, so more alignment with expanded criteria will be necessary in the ongoing iterations.  I’ll be working on squeezing in more language from NWCC&U, though they are deliberate in the level of abstraction they use; a very clear position was taken that we operationalize it ourselves, as one size will not fit all universities.  Our authority, such as it is, will come from the stance the WSU Executive Council takes and our ability to convey the principles of assessment as useful.

The meeting was literally a reading and Ron Baker’s interpretation of the new standards, which are somewhat different from what was recently posted.  What is key will be clarifying for ourselves the language of:

Mission
Core Themes
Goals
Objectives
Outcomes

Used everywhere and in different ways, we will want our language to align as much as possible with the way NWCC&U is using the terminology.  Ron Baker appears to have been on point to revise and explain the standards, so if we have questions we can go to him.  Jane will be here in December and we hope to confirm our approach to the language at that time, but the working clarification I have so far come to understand (awaiting confirmation) is that Core Themes equate to WSU’s four strategic “goals”: http://www.strategicplan.wsu.edu/ (Yes, we have lots to operationalize in the four-goal language in ways that programs and faculty can really make use of in practice.)  Our institutional goals for meeting the core themes in the learning realm are still the six Goals of the Baccalaureate.  Programs will have objectives, which are generally measurable (though they function more like discrete program goals in this scheme), and outcomes, which are what is actually measured.  We may have many objectives, for instance, but focus at any given time on actually doing the measurement that determines whether we have achieved our outcomes.

(Jane, please weigh in if this corresponds to your understanding.  Needless to say, terminology can be a real bottleneck, is debated among experts, and we have little time to count dancing pin-headed angels, or something like that…..)

Anybody want to join me in the CLA point meeting Tuesday 10:30-12:00?   CLA has a mix of chairs and good folks identified to make assessment happen in their programs.

I also want to get the revised A of A and template out to all Liaisons this week, sooner rather than later, and start setting up meetings to walk them through the Rocket Science mock report.

Gary

Dr. Gary R. Brown, Director
The Office of Assessment and Innovation
Washington State University
509 335-1352
509 335-1362 (fax)
browng@wsu.edu
https://mysite.wsu.edu/personal/browng/GRBWorld/
Attached: draft reporting template and rubric, “a of a set beta 2”

Assessing Capstone Courses and Internships–Where is the Value?

A faculty member on point for her program’s assessment asks:

One of the group discussions at the last retreat was on choosing the capstone course. I understand that the major/core assignment in the capstone course needs to address all the program goals. That will be the course we assess (along with a 200-level course). The degree really has a natural capstone course built into its core, so they are all set.

The group also talked about using the internship experience as their capstone course. Since the majors in the program are fairly diverse, the teaching faculty thought the unique internship experiences would address that issue. For assessment purposes, it seems most efficient to have one 400 level course per degree program that we assess (as opposed to multiple ones, possibly a capstone course for each major).

What are your thoughts about 1) using the internship course as a capstone course and 2) having multiple 400-level courses that we assess?

Finally, are there other programs on campus that are using their internship as a capstone?

Thanks –
——————————————–

Combing through the new NWCC&U standards, prepping for a meeting with accreditors on Friday, November 6, 2009, I note Standard 2.C.5, which states:

“Teaching faculty take collective responsibility for fostering and assessing student achievement of identified learning outcomes.”

That to me means that a capstone assessment gains formal utility when it involves many faculty in a program, and a much better reason than the rule is that program improvements are most potent when the faculty in the program are engaged in the assessment.  When faculty actually assess the performance and debate the performance (inter-rater reliability), a better handle on what it takes to improve the student learning experience emerges.

Internships are also invaluable targets, especially when all or part of the same rubric might be used. We might engage internship supervisors and coordinators to also provide feedback on students’ performance during the internship.  That process helps corroborate or validate the capstone assessment and provides independent review that verifies our claimed outcomes. We’ve found that internship supervisors in other programs, in spite of some initial skepticism, were both willing and able to apply rubric adaptations, and open to input from program faculty when they collaborate in the norming process.

Comment added to the original of this post

Capstone and Internships

There are some terrific internships going on for WSU students, and some not-terrific ones.  From my conversations with faculty in a couple of different programs, the internship experience can be one of the best things, but in reality it’s all over the map.

I think a great step, one that a number of programs may choose as a pressing need, is to better assess the internship experience and how students get feedback.  This could yield rich assessment data.

However, because the internships are so varied in quality, I don’t see them as a good substitute for assessment of a capstone project (which should address all the learning goals, for example, while an internship may not).

Green, Kimberly at 12/18/2009 12:29 PM

After Beta Rain King, Liaison asks Purpose of Initiative

A liaison asks in the online forum:

“With this experience behind us, I would ask for a quick rehash of the goals for the assessment of assessments. We may be better prepared to comment at this point.”

I suggest this response:

1. Establish an institutional system of assessment aligned with NWCC&U principles.
2. Assess our own assessment, which NWCC&U further requires.
3. Establish a system that meets 1 and 2 in a way that is responsive to program diversity but is unified by principles of good assessment.

Seeking NWCCU Feedback on Rain King Initiative

From: Karen Houmiel [mailto:khoumiel@nwccu.org]
Sent: Tuesday, October 20, 2009 2:04 PM
To: Brown, Gary
Subject: RE: Seeking feedback

Hi Gary,

I have forwarded your email to Dr. Baker who is the person with whom you will want to speak.  If you need anything else, please don’t hesitate to let us know!  Have a great day!

Thank you,
Karen

From: Brown, Gary [mailto:browng@wsu.edu]
Sent: Tuesday, October 20, 2009 1:52 PM
To: Karen Houmiel
Subject: Seeking feedback

Hi Karen,
I have recently been appointed to lead our new Office of Assessment and Innovation at Washington State University.  We are developing a process for helping our programs at WSU assess their assessment, and in that process we are developing criteria and a template, doing our best to align them with principles of good practice and the new NWCCU standards.  It would be a tremendous help to talk about the process and invite some early feedback on our efforts now, and perhaps, if we are successful, to gain an endorsement in principle.

I recognize that improved learning outcomes and reaccreditation are the endorsements that matter most, but I am seeking some guidance and support to help WSU programs help the institution (and our students!) achieve those ends.  Is it possible to speak with somebody in advance of the gathering in Seattle in November?

Gary Brown

Dr. Gary R. Brown, Director
The Office of Assessment and Innovation
Washington State University
509 335-1352
509 335-1362 (fax)
browng@wsu.edu
https://mysite.wsu.edu/personal/browng/GRBWorld/

From Student Feedback to University Accreditation

This post supports a free TLT Group webinar
The Harvesting Gradebook: From Student Feedback to University Accreditation

The event, held Friday, September 25, 2009, 2:00 pm (ET), is now available as an archive.

Theron DesRosier, Jayme Jacobson, Nils Peterson, & Gary Brown
Office of Assessment and Innovation, Washington State University

This webinar is an extension of our previous thinking, “Learning from the Transformative Gradebook.” Prior to (or following) the session, participants are invited to review a previous session and demonstration of these techniques being applied at the course level.

During the session, participants will be invited to pilot our Assessment of Assessment rubric on a program-level accreditation report and to discuss the broader implications of the strategies proposed.

This hour-long session will:

1. Review WSU’s model implementations of the Harvesting Gradebook that can be used to gather feedback on student work and the assignments that prompted it. (Background on Harvesting Gradebook)

2. Show how data from harvested assessments at the course level can flow up, providing direct evidence of learning outcomes at the program, college and university levels.

3. Demonstrate a systemic assessment process that applies a common assessment of assessment rubric across all of the university’s assessment activities.

4. Invite the audience to provide feedback on the Assessment of Assessment rubric by assessing an accreditation report. The goals of the hands-on activity are to:

  1. Gather feedback on the rubric
  2. Demonstrate time effective means of gathering feedback from a diverse community on assessment activities

5. Offer a new perspective on curricular mapping, using harvested data.

Further Reading

The Prezi document used in the session (requires Adobe Flash).

Harvesting Gradebook in Production: We have been investigating the issues on the WSU campus surrounding taking the Harvesting Gradebook into production. While not all the integrations with WSU Student Information Systems are in place yet, we can see a path that automates moving student enrollments from the Registrar to create a harvesting survey, and moving the numeric scores from the survey back to the instructor, where they might combine with other course scores to create the final grade that can be uploaded to the Registrar. A mostly automated pilot is being implemented Fall 2009.
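In pseudo-concrete terms, that automated path might look like the sketch below. The file names, column names, and grade weighting are all assumptions made for illustration; the actual Student Information Systems interfaces are not described here.

    import csv

    # Hypothetical Registrar export of enrollments for one course section.
    with open("enrollments.csv", newline="") as f:
        students = [row["student_id"] for row in csv.DictReader(f)]

    # Step 1: one harvesting-survey invitation per enrolled student
    # (a stand-in for the survey tool's real import format).
    with open("survey_invites.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["student_id", "survey"])
        for sid in students:
            writer.writerow([sid, "harvest-fall09"])

    # Step 2: pull numeric survey scores back and fold them into the
    # course grade, here weighted 30% harvest, 70% other course work.
    scores = {}
    with open("survey_scores.csv", newline="") as f:
        for row in csv.DictReader(f):
            scores[row["student_id"]] = float(row["score"])

    other_work = {sid: 82.0 for sid in students}  # placeholder scores
    final = {sid: 0.3 * scores.get(sid, 0.0) + 0.7 * other_work[sid]
             for sid in students}
    # `final` is what would flow back to the Registrar as course grades.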

Student Evaluations of Program Outcomes: The presentation references the idea of using student course evaluations to gather indirect evidence on the course’s achievement of the program’s learning outcomes. For several years, WSU’s College of Agricultural, Human, and Natural Resource Sciences (CAHNRS) has used a course evaluation college-wide that asks students their perception of how much the course helped them develop skills in thinking critically, writing, speaking, working on a team, and other dimensions that align with university learning goals. We have explored gathering the faculty goals related to these skills and comparing them to the student perceptions.  To date, this data has not been systematically rolled up or used as evidence in accreditation self-studies.
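The faculty-goal-versus-student-perception comparison is arithmetically trivial, which is part of its appeal. A sketch, with invented dimension names and ratings on a shared 1-5 scale:

    # Invented 1-5 ratings: how much the course was meant to develop
    # each skill (faculty goal) vs. how much it helped (student mean).
    faculty_goals = {"critical thinking": 5, "writing": 4,
                     "speaking": 2, "teamwork": 3}
    student_means = {"critical thinking": 3.6, "writing": 3.9,
                     "speaking": 2.2, "teamwork": 4.1}

    # Positive gap: faculty aimed higher than students perceived.
    gaps = {d: faculty_goals[d] - student_means[d] for d in faculty_goals}
    for dimension, gap in sorted(gaps.items(), key=lambda kv: -abs(kv[1])):
        print(f"{dimension}: gap {gap:+.1f}")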

Getting started with transformative assessment university-wide

Moving a university from a compliance mode of accreditation assessment to a transformative mode is a complex task, yet it is the task brought on by changing requirements of accrediting bodies. To get there, the university (viewed as a collection of learners) needs some scaffolding and some easy place to get started.

Background to the problem

Washington State University is accredited by the Northwest Commission on Colleges and Universities (NWCCU). The NWCCU is engaged in a process to review its standards. The process includes drafting some new standards, converting the review from a decennial to a septennial schedule, and adding a new catalog of types of reports that institutions must produce.

Regarding these new standards, NWCCU says:

“Standard Three requires evidence of strategic institutional planning that guides planning for the institution as a whole as well as planning for its core themes. Much like the current accreditation model, Standard Four requires assessment of effectiveness and use of results for improvement. However, unlike the current accreditation model, assessments are conducted with respect to the institution’s core themes, rather than its major functions.” [emphasis added]

It goes on to say:

“Goals and intended outcomes, with assessable indicators of achievement, are identified and published for [the institution’s] mission and for each of [its] core themes…

“A core theme is a major concentration, focus, or area of emphasis within the institution’s mission. Examples include, but are not limited to: Developmental education; workforce preparation; lower division transfer education; baccalaureate education; graduate preparation for professional practice; graduate preparation for scholarship and research; service; spiritual formation; student life; preservation of values and culture; personal enrichment; continuing education; academic scholarship; and research to discover, expand, or apply knowledge.”

We can assume that WSU will develop several core themes related to student learning, such as “undergraduate preparation for advanced study and professional careers” and “graduate and professional preparation for scholarship and research.”

NWCCU’s calendar would appear to require the University to provide a Year 1 report in 2011 that answers these points:

Section II: Core Themes

For each Core Theme: [Maximum of three (3) pages per theme]

a. Descriptive Title
b. Goals or Intended Outcomes for the Core Theme
c. Indicators of Achievement of the Core Theme’s Goals or Intended Outcomes
d. Rationale as to Why the Indicators are Assessable and Meaningful Measures of Achievement of the Core Theme’s Goals or Intended Outcomes

For the WSU core themes tied to student learning outcomes, the university will need assessments conducted with respect to the goals of its themes – implemented in ways that can be “rolled up” from program to college to university levels. WSU may elect to use something like its Six Learning Goals as the intended outcomes for its core themes that deal with student learning.
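The “roll up” is, at bottom, repeated averaging with the grouping changed at each level. A minimal sketch, assuming each program reports a mean rubric score per university goal (all program names and numbers below are invented):

    from statistics import mean

    # Invented program-level results: (college, program) -> goal -> score.
    programs = {
        ("CAS", "History"):  {"Critical Thinking": 4.2, "Communication": 3.8},
        ("CAS", "English"):  {"Critical Thinking": 4.5, "Communication": 4.4},
        ("Engr", "Civil"):   {"Critical Thinking": 3.9, "Communication": 3.1},
    }

    def roll_up(results):
        """Average per-goal scores across the units passed in."""
        goals = {g for scores in results for g in scores}
        return {g: mean(s[g] for s in results if g in s) for g in goals}

    # College level: group programs by college, then average.
    by_college = {}
    for (college, _), scores in programs.items():
        by_college.setdefault(college, []).append(scores)
    college_results = {c: roll_up(v) for c, v in by_college.items()}

    # University level: average across all programs.
    university_results = roll_up(list(programs.values()))
    print(college_results)
    print(university_results)

A real roll-up would likely weight by enrollment or by the number of ratings behind each mean; the point here is only that the same per-goal structure repeats at every level, which is what lets program data flow up to a university report.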

The Problem Statement

The challenge is to help programs move toward having and using indicators of achievement of WSU’s chosen outcomes, and to do so in a way that helps programs and colleges use the data to improve, rather than just performing a compliance activity (that is, developing a transformative approach to their assessment). The further challenge is to accomplish this in a resource-constrained environment.

Theron and I have been looking for a place that all ~100 undergraduate programs at WSU could start working toward meeting these requirements in time for a 2011 delivery date of a Year 1 report.

What we describe here is based around WSU’s goals, but the concept will likely work equally well for another campus with a different set of institutional learning goals.

The Strategy

Figure 1 is our whiteboard of the concept. It starts with the idea of collecting sample assignments from each program (along with some metadata about each assignment) in order to provide feedback about the assignments to instructors and the program from their community of “critical friends.” This process is intended to provide us baseline information. The baseline is in two forms: an assessment practice that programs can build on, and data about the university’s teaching practices, such as the types of assignments used in programs and the kind of feedback (beyond grades) that the assignments can provide to learners, along with some demographics about the assignments. (If programs want to do something else, or something more, they can; see below.)

Figure 1. A brainstormed diagram of the kinds of metadata that would need to be collected with each assignment and the graphical analysis that could be done with the data.


Here are some benefits that we see:

  • A simple message to communicate.

“Give us 3 assignments; we’ll help you get feedback to improve the assignments AND to meet NWCCU requirements.” (Provost to deans, deans to chairs, chairs to faculty, CTLT staff to WSU community.) A simple message is less likely to get confused and become a “telephone game” nightmare.

  • A manageable and understandable process.

Every program has assignments. They are easily collected and can be assessed online using Harvesting techniques. It’s not everything that might be included in program-level outcomes assessment, but it’s a starting point.

  • Feedback from communities.

Specifically, communities important to faculty and instructors.

  • A common reference point.

With most programs doing the same thing, WSU can develop opportunities for shared models, resources and interdisciplinary partnerships.

  • Feedback from a broad range of stakeholders.

Assignments are artifacts that impact many people (downstream faculty, community, industry).  Those stakeholders can answer questions such as, “Does this assignment prepare students for your course? For your workplace? For life outside of the university? How would you improve it to meet your context?”

WSU’s Center for Teaching Learning and Technology has previously done work to map assignments to WSU’s Six Learning Goals using this proposed form (developed with the WSU Honors College) and in this case study with an academic program.

Or the mapping to WSU’s goals could be implemented more indirectly by scoring the assignment with WSU’s Critical and Integrative Thinking Rubric and then mapping that rubric to WSU’s six goals. If the program already has a rubric that it has been using, that rubric could be incorporated into the process and mapped to the WSU goals (see Figure 2).


Figure 2. Diagram of the mapping process from the Food Science rubric to the WSU six goals and the University of Idaho (UI) five goals. If the University changes its goals, the mapping can be readily changed, as illustrated by this joint WSU-UI program that is mapping its rubric to WSU's 6 goals and UI's 5 goals simultaneously.
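The mapping itself can be expressed as a simple lookup from each rubric dimension to the goals it evidences, one lookup per institution; remapping to a new set of goals means editing the lookup, not the rubric. The dimensions, goal names, and scores below are invented for illustration and are not Food Science’s actual rubric.

    from statistics import mean

    # Invented rubric dimensions -> institutional goals they map to.
    to_wsu_goals = {
        "experimental design":  ["Critical Thinking", "Depth of Learning"],
        "technical writing":    ["Communication"],
        "food safety analysis": ["Critical Thinking", "Specialty Knowledge"],
    }
    to_ui_goals = {
        "experimental design":  ["Learn and Integrate"],
        "technical writing":    ["Communicate"],
        "food safety analysis": ["Learn and Integrate", "Practice Citizenship"],
    }

    scores = {"experimental design": 4.1, "technical writing": 3.5,
              "food safety analysis": 4.4}

    def map_scores(mapping, scores):
        """Mean of all rubric dimensions that feed each goal."""
        per_goal = {}
        for dimension, goals in mapping.items():
            for goal in goals:
                per_goal.setdefault(goal, []).append(scores[dimension])
        return {goal: mean(vals) for goal, vals in per_goal.items()}

    print(map_scores(to_wsu_goals, scores))  # same ratings, WSU language
    print(map_scores(to_ui_goals, scores))   # re-mapped to UI's goals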

The rating itself could be managed in a manner similar to the one we demonstrated to harvest feedback on an assignment (figure 3). Unlike that demonstration, in the pilot year of this proposed assessment plan, programs might be asked only to provide sample assignments and not the associated student work. In subsequent years, programs may elect to use the full harvesting model, or may elect to use other assessments to provide triangulation of this approach.

Figure 3. Harvesting Feedback process gives results to the instructor (about the assignment) and to the program (about the assignments in aggregate and about the utility of the rubric used).


Summary of the Process

1. Assessment for improving, not just proving:

  1. Academic programs deposit 3 sample assignments and fill in a form describing the courses where the assignments are given.
  2. The assignments are scored with a rubric (based on WSU’s Goals, the Critical Thinking Rubric, or a rubric that the program already uses to assess student work; the program can choose the rubric). The program can help recruit the assessors from communities that have an interest in the program’s success.
  3. The rubric is mapped to WSU’s chosen goals so the results can be rolled up from Program to College to University levels.
  4. Programs examine the feedback from the assessments, engage in conversations about it and choose action plans for the coming year; instructors can get specific feedback about their assignment to engage in SOTL or other process improvement.
  5. The whole process is assessed with a transformative assessment rubric to judge the quality of the assessment activity and suggest refinements in the assessment practice.

2. Or, if a program already has an assessment procedure (perhaps required by another accrediting body):

  1. The results of that assessment can be submitted, along with a mapping of the relevant data to WSU’s Goals.
  2. The assessment process is assessed, as above, to judge the quality of the assessment activity and to benchmark the program against all other WSU programs.

By using the Harvesting process to implement both the assessment of the assignment and the assessment of the assessments, it should be fairly simple to gather input about the program from a community that is relevant to, and interested in, the academic program’s success and to document the success of the assessment efforts.

Once programs have begun to engage in the discussions that we think these activities will trigger, they may elect to proceed in many directions, including: broadening the scope of their assessment activities, refining the assignments or the criteria for assessing them, expanding the communities giving feedback, or assessing student work along with the assignments to gauge how effective the assignments really are in impacting student learning.

Conclusion

We have proposed that programs get started toward the new accreditation standards by harvesting assessments of their assignments, but this is not the only place to put a toe into the water. Syllabi or student work would be other places to start. Ultimately, we think programs should have direct measures of both student learning and faculty learning, and be able to talk about action plans related to improving that learning and/or improving the assessment of that learning.