Crowd-sourcing feedback


David Eubanks commented on our recent Harvesting Feedback demo. I'll save a reply about inter-rater reliability for later and focus here on his suggestion of using Mechanical Turk and his very insightful comment about the end of "enclosed garden" portfolios.

I think David correctly infers that Mechanical Turk is a potential mechanism to crowd-source the Harvesting Feedback process we are demonstrating. It's an Amazon marketplace that brokers human expertise. The tasks, "HITs" (Human Intelligence Tasks), are ones not well suited to machine intelligence; in fact, the site bills itself as "artificial artificial intelligence."

To explore Mechanical Turk, I joined as a "Worker" and discovered that "Requesters" (the sources of HITs) can pre-qualify Workers with competency exams. I'm now qualified as a "Headshot" Image Qualifier, a skill for identifying images that meet specific criteria important to requester Fred Graver. I also learned that Workers earn (or maintain) a HIT approval rate, a measure of how well they have performed on past tasks. One might think of this as how well the Worker is normed with the criteria of the task, though the criteria in this case are not explicit, which we consider a weakness. Being qualified for a task might be analogous to initiation into a community of practice; but one would then need to practice "in community," which Mechanical Turk does not seem to support.
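
To make the Requester side of this concrete, here is a minimal present-day sketch (not from the original demo) of how someone might publish a feedback-harvesting HIT through the Mechanical Turk API using the boto3 client, screening for Workers with a high lifetime approval rate. The sandbox endpoint, the feedback-form URL, the reward, and the 95% threshold are all illustrative assumptions.

```python
# Sketch: post a feedback-harvesting HIT restricted to Workers with a >= 95% approval rate.
# Assumes the boto3 MTurk client and the Requester sandbox; all specifics are placeholders.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion points Workers at a form we host (hypothetical URL).
question_xml = """<ExternalQuestion
  xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/feedback-form?portfolio=123</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

response = mturk.create_hit(
    Title="Give feedback on a student portfolio entry",
    Description="Read a short portfolio entry and rate it against the supplied rubric.",
    Reward="0.50",                     # per assignment, in US dollars
    MaxAssignments=3,                  # collect three independent reviews
    LifetimeInSeconds=3 * 24 * 3600,   # HIT stays available for three days
    AssignmentDurationInSeconds=1800,  # each Worker gets 30 minutes
    Question=question_xml,
    QualificationRequirements=[{
        # Built-in "percent assignments approved" qualification type.
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
        "ActionsGuarded": "Accept",
    }],
)
print("HIT created:", response["HIT"]["HITId"])
```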

We’ve also been exploring a couple of other crowd-sourced feedback sites that help flesh out the character of this landscape: Slashdot and Leaf Trombone (website and video). Slashdot is a technology-related news website that features user-submitted, editor-evaluated current-affairs news with a "nerdy" slant. Leaf Trombone is an iPhone game that lets you play a slide trombone to a world audience.

The three systems are summarized in this table:

| | Mechanical Turk | Leaf Trombone | Slashdot |
|---|---|---|---|
| Goal of site / developer's reason for using reputation in the site | Distributed processing of non-computable tasks / sort for suitable Workers | Selling an iPhone app / use ego to encourage players | Building a reliable source of information / screen for editors who can take on high-level tasks |
| Type of reputation / participant's purpose for having a good reputation | Private reputation / to secure future employment and earn more income | Public reputation / status in the community as player and judge; ongoing participation | Public reputation / enhanced opportunity to contribute to the common good (as opposed to being seen as a clever fellow) |
| Type of reward / motivation for participant | Money / personal gain | Personal access to perform on a world stage / learning & fun | "Karma" to enable future roles in the community / improve the information in the community |
| Performance space / durability of the performance | Private space (enclosed garden) / durability is unknown; access to the performance is available only to the Requester | Public stage, synchronous / a new playback feature makes performances durable, but private to the artist | Public stage, asynchronous / permanent performance visible to a public audience |
| Kind of feedback to the participant / durability of the feedback | Binary (yes/no) per piece of work completed / assessments accumulate into a lifetime "approval rate" score | Rating scale & text comment per performance / assessments are stored for the performer | Rating scale per posting / assessments are durable and public for individual items and accumulate into the user's "Karma" level |
| Assessment to improve the system | Could be implemented by an individual Requester if desired | ? | High-"Karma" users engage in meta-assessments of the assessors |
| Kind of learning represented | Traditional employer authority sets a task and is the arbiter of success; the goal is to weed out unsuccessful workers | Synchronous, collaborative individual learning: judge as learner, performer as learner | Asynchronous, collaborative community learning |
| Type of crowd-sourcing | Industrial model applied to a crowd of workers | Ad hoc judges gathered as needed for a performance | Content and expertise are openly shared |
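
A toy sketch (mine, not part of the original comparison) of the two reputation mechanics in the feedback row above: Mechanical Turk's binary approve/reject decisions rolled into a lifetime approval rate, versus Slashdot-style per-item ratings accumulated into a public karma score. The example data are invented.

```python
# Illustrative contrast between an approval-rate reputation and an accumulated karma score.
def approval_rate(decisions: list[bool]) -> float:
    """Mechanical Turk style: share of submitted work that was approved."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def karma(ratings: list[int]) -> int:
    """Slashdot style: signed ratings on public contributions add up over time."""
    return sum(ratings)

# A Worker with mostly approved HITs; a contributor with mixed moderation.
print(f"approval rate: {approval_rate([True, True, False, True, True]):.0%}")  # 80%
print(f"karma:         {karma([+1, +1, -1, +1, +1, +1])}")                     # 4
```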

The three systems represent an interesting spectrum, and each might be applied to our challenge of crowd-sourcing feedback, but the different models would have very different impacts on the process. I believe that only Slashdot's model could be sustained by a community over an extended period, because it is the only one with the potential to inform the community and build capital for all the participants.

The table above got me thinking about another table we made, diagramming four models for delivering higher education. At one end of the chart is the industrial, closed, traditional institution. It progresses through MIT's OpenCourseWare and Western Governors University's student-collected content and credit for work experience to the other end of the chart, which we called Community-based Learning.

Three rows in our chart addressed the nature of access to expertise, the assessment criteria, and what happens to student work. The table above informs my thinking on those dimensions. As I've charted it, in the Slashdot model expertise is open and assessment is open (while the assessment criteria are obscure, meta-assessment helps the community maintain a collective sense of them), and the contributor's (learner's) work remains permanently as a contribution to the community. This is what I think David is referring to when he applauds the demise of the "enclosed garden" portfolio.

One reason to work in public is to take advantage of an open-source, crowd-wisdom strategy. David illustrated the power of "we smarter than me" when he called our attention to Mechanical Turk.

Another reason is the low cost to implement the model. Recently the UN Global Alliance for Information and Communication Technology and Development (GAID) announced the newly formed University of the People, a non-profit institution offering higher education to the masses. In the press briefing, University of the People founder Shai Reshef said that “this University opened the gate to these [economically disenfranchised] people to continue their studies from home and at minimal cost by using open-source technology, open course materials, e-learning methods and peer-to-peer teaching.” [emphasis added]

We propose that to be successful the University of the People must implement its peer-to-peer teaching as community-based learning and include a community-centric, non-monetary mechanism to crowd-source both assessment and credentialing.
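
As one hypothetical illustration of what such a community-centric, non-monetary mechanism could look like (this is my sketch, not anything University of the People has described), peer assessments might be weighted by each reviewer's own community reputation, with a credential granted once enough reputable peers agree the work meets the shared criteria. All names, weights, and thresholds below are invented.

```python
# Hypothetical: reputation-weighted peer assessment feeding a credentialing decision.
from dataclasses import dataclass

@dataclass
class PeerAssessment:
    reviewer_reputation: float  # 0.0-1.0, earned through the reviewer's own public contributions
    meets_criteria: bool        # the reviewer's judgment against the shared, open criteria

def credential_earned(assessments: list[PeerAssessment], threshold: float = 2.0) -> bool:
    """Grant the credential when reputation-weighted agreement clears the threshold."""
    support = sum(a.reviewer_reputation for a in assessments if a.meets_criteria)
    return support >= threshold

reviews = [PeerAssessment(0.9, True), PeerAssessment(0.7, True),
           PeerAssessment(0.4, False), PeerAssessment(0.8, True)]
print(credential_earned(reviews))  # True: 0.9 + 0.7 + 0.8 = 2.4 >= 2.0
```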


One Response

  1. Here is an interesting addition to this idea: a company in the US, virtual-TA, is outsourcing assessment of and feedback on student papers to India. See the article in the Chronicle, April 2010. The cost might be in the range of US$12 per paper. Rubric-based assessment is offered.

    http://chronicle.com/article/Outsourced-Grading-With/64954/?sid=at&utm_source=at&utm_medium=en
