RE: Recap of the Planning Request


Good questions, liaison council member.

I think we’ve talked a bit about this, but it remains an interesting issue. We have suggested the plan based on our experience, which seems to be echoed at least once a week these days.

For instance, last week you probably saw it in the Chronicle of Higher Education (October 27, 2009) in the comments of George Kuh, director of the National Institute for Learning Outcomes Assessment.  He makes several observations that align with our experience, reading, and thinking.  He notes, for instance, that “what we want is for assessment to become a public, shared responsibility, so there should be departmental leadership” (paragraph 14).

But to your question: he also notes that while lots of places have developed outcomes, the follow-through is lacking:

“What’s still disconcerting is that I don’t see a lot of evidence of closing the loop. There’s a lot of data around, there’s some evidence it’s being used in a variety of ways, but we still don’t know if that information is being transferred in such a way as to change practices for the better. That’s still the place where we’re falling short” (paragraph 6).

http://chronicle.com/article/An-Expert-Surveys-the/48945/?key=HWsgcl03ZSJNZHs2K3UTKScFaHt6JkJ4Y34WMHYabFlW

Part of the reason closing the loop is so difficult is that outcomes assessment remains removed from what faculty do in their classrooms. (There’s a nice piece in Inside Higher Ed today on this, but this email is already too long.) So what we’ve learned tends to work better, and is generally most practical, is to put the focus on what faculty are already doing. Peter Ewell, vice president of the National Center for Higher Education Management Systems, came to a similar conclusion, one that resonates with our experience and suggests a strategy for making outcomes assessment truly ‘practical’ or ‘functional’ for closing the loop. Lamenting the failure of more than ten years of assessment reform to help institutions and faculty close the loop, Ewell says:

“I have learned two additional lessons about the slippery matter of articulating abilities. First, it’s more useful to start with the actual practice of the ability than with the stated outcome. Phrases like ‘intellectual agility’ have great charm, but mean little in the absence of an actual student performance that might demonstrate them. To construct assessment techniques, formal assessment design, as described in the textbooks, demands ever more detailed verbal specifications of the outcomes or competencies to be developed. But it is often more helpful to go the other way. Given a broad general descriptor like ‘intellectual agility,’ can you imagine a very concrete situation in which somebody might display this ability, and how they might actually behave? Better still, can you quickly specify the parameters of an assignment or problem that might demand a particular level of ability for access? The performance that the student exhibits on the assessment is the definition of the ability itself; the ability has no independent existence” (General Education and the Assessment Reform Agenda, 2004, p. 6).

We’ve worked with a couple of dozen programs here at WSU (and more than a few elsewhere) and found that starting with the real and embedded assignments faculty use is an effective way to approach outcomes assessment.  It helps programs refine and make concrete their understanding of outcomes in the context of their own teaching.  It helps them close the loop as reflected in their own assignment design.  So that’s the recommended plan.

Given the frighteningly short timeline we all face, and in the interest of building outcomes assessment into the day-to-day work in which faculty are already engaged, the proposed approach is the one we think will give us the best, well, outcome ;-)

From: Liaison Council Member
Subject: RE: Recap of the Planning Request

Hi Gary,
I tend to look at things from a ‘practical’ standpoint (and I realize that I operate from my own professional education world, which may or may not be reality for the rest of the university), just so you know where my comments are coming from…

An important step for My College was the development of curriculum outcomes (they are far from perfect, and in some cases we have discovered they are not very functional, but they have at least provided a foundation to start from). I don’t have an informed understanding of how many university programs/colleges/units are operating with curricular outcomes now; I sense that some of them may not have accomplished that step yet.

So… if there are programs without developed outcomes, and outcomes are not integrated into their current courses, how can steps 2 and 3 outlined below be achieved in a short time frame? Would it be better for this process to include a selection of alternative approaches to meet the programs where they are? For example, implement the plan below for the units that are ready, then assist the others as they develop outcomes for their programs, providing support and guidance along the way.

Thanks,

Liaison Council Member

From: Brown, Gary
Subject: Recap of the Planning Request

The Plan We Need

The goal of the plan is to establish the team and strategy to be implemented in the spring.

We are working to be sure ALL programs and ALL teaching faculty, adjuncts, and TAs can report some level of involvement in the program outcomes work, including, at a minimum, identifying program outcomes on their syllabi, participating in the analysis of results, and being aware of the action plan (how the program will close the loop).

It is reasonable to expect that the spring assessment will be a pilot, and that the action plan might therefore focus on ways to make the assessment more robust and valid, and on ways to encourage greater participation and greater use of the embedded activities faculty have already been doing in their courses.

So specifically, we hope the plan will include the development of a team and system (rubric criterion #1) and the beginnings of the outcomes and measurement strategy (rubric criterion #2):

1. Identification of WHO in the program will be involved. (#1)
2. Identification of WHICH core courses and WHAT representative activities students will be doing that will be assessed. (#2)
3. How that approach to direct measurement will be aligned with other existing indirect measures, such as student evaluations, or, if it is not yet aligned, how that alignment will be addressed in the future (the plan to plan). (#2)
The goal of this plan to plan is to make sure the logistics are in place, increasing the likelihood that assessment will take place next spring. The spring deadline is necessary so that programs can also do the analysis and complete a report with action items for fall 2010, when the updated report to the Northwest Commission is due.

Other Items
· The rubric revisions are underway, thanks to excellent feedback from this group and staff in OAI.
· We will have the rubric revisions ready along with two mock model self-studies. The models will be aligned with the rubric and should help ongoing deliberations about the process and the template.
· You can glimpse the rating of the Honors self-study along with associated comments at this link.
