Crowdsourcing to support learning at all levels of the university


We are developing a response here to the article "Duke Professor uses 'Crowdsourcing' to Grade" by Erica Hendry in the Chronicle.

In our response (which for some reason did not appear as a comment to the Chronicle article but is reproduced in Cathy’s blog) we offered a survey that implements Gary Brown’s Harvesting Gradebook concept. Erica’s article is the object of the review; Cathy’s criteria are the basis of the instrument.

The demonstration we whipped up is a variant of an earlier demonstration of harvesting feedback for programmatic assessment that we did in a webinar hosted by the TLT Group. The link is to David Eubank’s insightful review of the demo.

The basic concept is to link a survey to an object on the Internet and invite feedback from viewers, with criteria more elaborate than “it’s hot / it’s not.” The student gets a visualization of the quantitative data and a listing of the qualitative comments.
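As a rough illustration of that loop, here is a minimal Python sketch; the class, field names, placeholder URL, and example scores are our own invention for this post, not the survey tool behind the demonstration:

```python
# A toy model of the harvesting idea: a survey tied to a URL collects
# criterion ratings and free-text comments, then summarizes both for the
# student (averages for visualization, comments listed verbatim).
from collections import defaultdict
from statistics import mean

class HarvestedSurvey:
    def __init__(self, object_url, criteria):
        self.object_url = object_url      # the work under review
        self.criteria = criteria          # the criteria define the survey form
        self.ratings = defaultdict(list)  # criterion -> list of scores
        self.comments = []                # qualitative feedback

    def submit(self, scores, comment=""):
        """Record one reviewer's response (scores: criterion -> rating)."""
        for criterion, rating in scores.items():
            self.ratings[criterion].append(rating)
        if comment:
            self.comments.append(comment)

    def summary(self):
        """Mean rating per criterion, plus the list of comments."""
        return ({c: round(mean(r), 2) for c, r in self.ratings.items()},
                self.comments)

survey = HarvestedSurvey("http://chronicle.com/...", ["clarity", "evidence"])
survey.submit({"clarity": 4, "evidence": 5}, "Strong sourcing; tighten the lead.")
survey.submit({"clarity": 3, "evidence": 4})
print(survey.summary())  # ({'clarity': 3.5, 'evidence': 4.5}, [one comment])
```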

If you have not tried it already, read Erica’s article and review it here. The end of the review will take you to a page with the results (it’s not real time; we’ll update it periodically).

Some of the angst in the comments on the Chronicle article seems to come from the energy surrounding high-stakes grading activities, and perhaps from a recognition that grading does not advance the student’s learning (nor the faculty member’s).

A gradebook is traditionally a one-way reporting mechanism: it reports to students their performance as assessed by the instructor who designed the activity. Learning from grades in this impoverished but pervasive model is likewise one-way: the student learns, presumably, from the professor’s grade. What does a student really learn from a letter or number grade? What does the faculty member learn from this transaction that will help him or her improve? What does a program or institution learn? We are exploring ways to do better.

We are exploring ways to learn from grading by transforming the gradebook, and part of that transformation is to let others into the process. For example, Alex Halavais rates his experiment with crowdsourced grading as “revise and resubmit”: his students gamed the system, competing for points. The approach we are exploring has a couple of key differences. First, the scales we advocate (such as WSU’s Critical Thinking Rubric) are absolute: we expect that graduating seniors will not have reached the top level, and faculty map the rubric scale to grades in age-appropriate ways. Second, we imagine outsiders providing feedback, not just peers. When we ran a pilot in an Apparel Merchandising capstone course in the fall of 2008, a group of industry professionals, faculty from the program, and students all took part in the assessment. The results were rich in formative feedback and let the students see how their work would be judged by other faculty in the program and by the people who might hire them.
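To sketch that second difference, here is a toy aggregation (again our own illustration, not the instrument used in the pilot) that breaks feedback out by rater group, so a student can compare how peers, program faculty, and industry professionals judged the same work on a shared absolute scale (assumed here to run 1 to 6):

```python
# Group multi-rater rubric responses by who gave them, then average per
# criterion, so the three perspectives can be compared side by side.
from collections import defaultdict
from statistics import mean

def summarize_by_group(responses):
    """responses: (rater_group, criterion, score) tuples on a fixed
    absolute scale. Returns {group: {criterion: mean score}}."""
    buckets = defaultdict(lambda: defaultdict(list))
    for group, criterion, score in responses:
        buckets[group][criterion].append(score)
    return {g: {c: round(mean(s), 2) for c, s in crits.items()}
            for g, crits in buckets.items()}

responses = [
    ("industry", "presentation", 5), ("industry", "analysis", 4),
    ("faculty",  "presentation", 4), ("faculty",  "analysis", 4),
    ("student",  "presentation", 5), ("student",  "analysis", 5),
]
print(summarize_by_group(responses))
```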

Further, we have called this a “transformative” approach because the data can be used by the student, by the instructor, by the program, and by the university, each to improve their own practice.
