Re-thinking Course Evals for Prog Eval purposes / Turmoil in Colleges

A conversation on the bus home from work with a College Liaison led to this exchange.

On 4/6/10 9:18 AM, “Nils Peterson” wrote:
Further to our conversation on the bus: collecting actionable data, and acting on it, are the things that we believe are likely to engage students and therefore to increase response rates on course evals. How about we get together and talk about some ways to re-think the course eval process? I'm looking for a partner who is in a position to take a bit of an adventurous view of the topic.

(I want to see if this Liaison would venture into something like paired assessment, where faculty declare their goals for the course and students say what they perceived happened. We'd get curricular maps, which the program could then use to help students navigate course selection.)

Liaison replies:
I would be interested in doing this.  However, let's put off the conversation for several weeks.  Amongst other things, I still do not know where I am going to land beginning July 1.  That is to say, I do not know what my position will be or in what college.

Moving evaluations online

We are getting more requests to move course evaluations from paper-based to online.  In most cases, the opportunity to explore greater alignment with outcomes assessment is subordinate to technology efficiencies.  The issue of response rates emerges, as always, and we have resources and a rationale for thinking about response rates differently.  The note below reflects a response to the issue, the first of mine, in light of recent research, to recast the question of response rates as one of response bias.

On the migration issue, what we encounter are many concerns about response rates, which for online evaluations typically run, on average, at about 50% (in-class paper evaluations run at roughly 70%, which is itself a bit low, considering).  We have read a great deal of research on this, and done more than a little of our own, and conclude that the online aspect of the concern is largely misplaced. Different instruments evince different response rates, and the rate varies dramatically from class to class, suggesting that something else is going on. The expected bias (that only malcontents and the deliriously happy respond) is something we simply cannot document. (We have come to wonder to what extent coercion in the classroom introduces bias, and there is now research suggesting that it does.)  Extra credit adds no more than a 5% boost, and who knows what kind of attendant bias.

Response rates rise when faculty, their colleagues, and their leadership demonstrate that student opinions matter to them.

And of course all of this is, in part, contingent upon what faculty and programs do with the results.

I look forward to conversations about this and anything else in the not too distant future…

It would be great if we could get the course evaluations online for this semester, and it would also be great to pick Gary's brain about other things to do with the evaluation form itself and/or the reporting process in the long run, to make the process more efficient and useful for all.

That said, I think we are really looking (for now) at simply migrating our existing evaluation form to the online platform.  The Curriculum Committee hasn't discussed any changes to the evaluation form, and we have a reporting system in place that the Skylight data will simply feed into.  There's a fair amount of skepticism about whether moving to online evaluations is workable for us, so the idea here (at least my idea; the Assoc. Dean may have something else in mind) is to take this in steps, to either show that it can work or figure out what might need to be done if it doesn't work so well.