A proposal to get external review/test of rubric

I sent this to a program contact in early April to help with the problem that the program was unwilling or unable to devote resources to testing their rubric against a small collection of student theses.

No reply in 2 weeks.


I just came across this resource. It looks like they will do a free demo. Are you interested in seeing if they can apply your rubric effectively? I’d suggest we send one paper and the rubric as an experiment.


Engaging Employers and Other Community Stakeholders

Do you have ideas or examples of good practice of working with employers to promote workforce development? UK universities and colleges are under pressure to do “employer engagement” and some are finding it really difficult. This is sometimes due to the university administrative systems not welcoming non-traditional students, and sometimes because we use “university speak” rather than “employer speak”.
— a UK Colleague

Washington State University’s Office of Assessment and Innovation has been working on this question for several years. We presented this spectrum diagram to think about how more traditional institution-centric learning differs from community-based learning. It may point to some of the places your programs get stuck when thinking about this question.

We have also been exploring methods to gather assessments from stakeholders (employers as well as others) about aspects of academic programs. This example shows the twinned assessment of student work using a program rubric and assessment of the faculty’s assignment that prompted the work. We invite stakeholders to engage in both assessments. In other implementations of this process, we have asked stakeholders about the utility of the rubric itself.

We are also finding differences in the language used by faculty, students, and employers. When we asked about the most important things to learn in a business program, we got this feedback.

Another example of different groups using different language is this one, where industry and faculty used different language, with different foci, to give feedback to students. In particular, we saw industry use “problem” as in “problem statement,” while faculty used “problems” as a synonym for “confused” and “incorrect.”
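A simple way to surface this kind of language difference (a toy sketch with invented comments, not our actual analysis) is to tally word frequencies in each group’s feedback:

```python
from collections import Counter
import re

def word_counts(comments):
    """Tally word frequencies across one group's feedback comments."""
    words = re.findall(r"[a-z]+", " ".join(comments).lower())
    return Counter(words)

# Invented examples of the two vocabularies described above.
industry = ["strong problem statement", "problem statement needs data"]
faculty = ["problems throughout", "confused and incorrect problems"]

print(word_counts(industry)["problem"])   # industry's "problem statement" usage
print(word_counts(faculty)["problems"])   # faculty's "problems" = errors usage
```

The resulting counts are exactly the kind of data a tag cloud visualizes, one cloud per stakeholder group.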

Our method for learning about both language and values is simple surveys of stakeholders as they engage with us in assessment activities. For example, here (In Class Norming Survey) we asked people who had just assessed student work with a program rubric about the importance of the rubric itself.

In this survey (AMDT Stakeholder Survey) a fashion design and marketing program is asking industry partners about language and criteria, as a precursor to building a program-wide assessment rubric. All these activities help programs understand the wider context in which they operate.

More on this work can be found in this article: Brown, G., DesRosier, T., Peterson, N., Chida, M., & Lagier, R. (2009). Engaging Employers in Assessment. About Campus, 14(5). NUTN award for best essay, 2009.

It may help to understand that we define stakeholders broadly to account for the variation among academic programs: employers, alumni, students themselves, professional and graduate school admissions officers, audiences (as in performance arts), etc.

Presently we have developed a rubric to guide the assessment of the self-studies our academic programs are doing as part of our university-wide system of assessment, a component of our institution’s regional accreditation activities. You can see a snapshot of how our Colleges are doing here.

Key features for implementing a Harvesting Gradebook

In working on our DML competition entry, I found myself enumerating the features we’ve found important to our Harvesting Gradebook work. The Harvesting Gradebook consists of a web-based survey that can be embedded in or linked from a piece of work (or can itself link to or embed the work), which a reviewer evaluates using the survey in conjunction with various forms of data visualization, such as radar charts and tag clouds.
The first proofs of concept were in the summer of 2008. We first used the tool with students in Fall 2008. That work, through April 2009, can be found here. Our explorations have branched in several directions since then, including elaborating the idea to university-wide, program-level learning outcomes assessment. There is some overlap among these three categories:

These implementations have used several tools: paper and pencil, Google Docs Forms/Spreadsheet, Microsoft SharePoint surveys, and most recently, Diigo and Google Sidewiki. Production implementations have been done with WSU’s Skylight Matrix Survey System.
We keep gravitating back to Skylight because it has features that make it particularly well suited to implementing the Harvesting Gradebook:

  1. multiple “Respondent Pools,”  which allows multiple surveys to use a common data store and shared reporting mechanism;
  2. “Respondent Pool Metadata” to store additional data elements that describe a Respondent Pool alongside that pool’s data;
  3. a method for embedding the metadata in both an email and a survey when delivered to a respondent;
  4. the “Rubric Question Type” (rubrics are a commonly accepted assessment tool);
  5. the “Dashboard” for automatic, real-time web-based reporting of data from one or more Respondent Pools to one or more authorized viewers;
  6. tabular reports of data, available for the survey overall, for an individual Respondent Pool, or for selected groups of Respondent Pools;
  7. a mechanism for securing report data without a password, including sharing the results when a rater finishes a rating session;
  8. a mechanism to get the tabular data from the Skylight Matrix Survey System to a third party system via a uniform resource locator (URL);
  9. a mechanism to download all data and metadata associated with a survey and/or Respondent Pool, including URLs for taking the survey, in a format readily used by spreadsheets.
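Items 8 and 9 can be made concrete with a small sketch (Python; the column names and sample rows are invented, not Skylight’s actual export format): once the tabular data has been retrieved from the survey system’s URL, it can be rolled up into the per-criterion summary a radar chart needs.

```python
import csv
import io
from statistics import mean

def summarize_ratings(csv_text):
    """Aggregate harvested rubric ratings into per-criterion means,
    the shape of data a radar chart plots."""
    reader = csv.DictReader(io.StringIO(csv_text))
    by_criterion = {}
    for row in reader:
        by_criterion.setdefault(row["criterion"], []).append(int(row["rating"]))
    return {c: mean(v) for c, v in by_criterion.items()}

# Invented sample of a tabular export; a real one would be fetched
# from the survey system via its reporting URL.
sample = """respondent_pool,criterion,rating
employers,critical_thinking,4
employers,communication,3
faculty,critical_thinking,5
faculty,communication,4
"""

print(summarize_ratings(sample))
# {'critical_thinking': 4.5, 'communication': 3.5}
```

Because the export also carries the Respondent Pool column, the same roll-up can be repeated per pool to compare, say, employer and faculty ratings side by side.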

DML Competition entries

I realized we were at risk of losing our DML Competition entries as their process progresses. The competition is interesting in that it runs in rounds and accepts crowd-sourced as well as expert feedback along the way. In January we submitted an initial proposal (below). We then used the opportunity of making a revision to mock up the tool and, at the same time, to focus our thinking on how to improve our submission.

Here is the February 2010 version (below the earlier one):

Brief Project Description (50 word max):

Getting critical feedback is a collaborative social learning strategy that helps learners improve and is educative for givers too. Feedback shared in communities can be educative for bystanders. Feedback complements other habits of mind like critical thinking. Learners embed Back@U into their work to gather feedback from their community.

Project Description (300 word max):

The whole web is a learning space. Back@U is an instrument embedded on any webpage for gathering critical feedback. Back@U learners solve diverse, multi-faceted problems requiring collaboration among different disciplines and skills within communities invested in those problems.

Learners and peers provide rich and informative feedback leading to improvement. Back@U could help an NGO get feedback on the design of an irrigation system, while allowing the participating engineering intern to get feedback from all participants: the NGO, local residents, faculty advisors, peers, professional engineers, etc.

People freely engage in learning required to master games: attempting, getting feedback, trying new approaches. To reach a genuine achievement, learners need lots of trials, errors, and adjustments based on feedback (http://www.edutopia.org/healthier-testing-made-easy). These are the same skills life-long-learners use; they approach learning as a challenge, a game.

Back@U is a collaborative and social mechanism allowing learners to gather feedback about their work from multiple sources. It can be tailored by learning communities to address habits of mind from critical thinking to creativity, persistence, curiosity, storytelling, tinkering, improvisation.

Back@U structures feedback to help new learners get and contribute high-quality peer reviews in global “pro-am” communities. Giving quality feedback is a mechanism to ascend to leadership positions in complicated multiplayer teams.

Learners post their work anywhere and embed Back@U, where “judges” give feedback, similar to the iPhone app, Leaf Trombone World Stage (http://www.youtube.com/watch?v=0R5OVX6EKWg). Judges enter the community by having their work judged. Judges improve in expertise using a mechanism similar to the ESP Game (http://en.wikipedia.org/wiki/ESP_game) where agreement earns status. Back@U players agree on terms and phrases to describe the work using language their learning community values. Back@U also provides a mechanism for the community to refine the review criteria.

Back@U takes our Harvesting Gradebook ideas (http://wsuctlt.wordpress.com/harvesting_gradebook) to the wider world.

Back@U will collaborate with other teams who provide rich test-beds.

Original version from January 2010:

Back@U: Giving and Getting Structured Feedback; Growing in a Learning Community

Brief Project Description (50 word max):

Getting feedback guides learners to improve their work.  Giving feedback is educative for the givers also. And when feedback is given in learning communities, it can be educative for bystanders. Back@U can be used to build learning communities by letting users embed feedback mechanisms next to their work.

Project Description (300 word max):

Back@U turns Learning Labs inside-out: the WWW is the lab, Back@U is an instrument. It’s a mechanism that allows learners to gather feedback about their work from multiple sources. John Seely Brown illustrates Lave & Wenger’s concept of “legitimate peripheral participation” among copier repairmen to show how storytelling in communities of practice creates effective training, even for novices.

Back@U’s feedback is more structured; learners post on the WWW and embed Back@U, where “judges” give feedback, similar to the iPhone app, Leaf Trombone World Stage. Judges enter the community by having their work judged. Judges improve in expertise using a mechanism similar to the ESP Game (Games With a Purpose) where agreement on tags earns points. Rather than metadata tags, Back@U players agree on descriptive terms and phrases to describe the work using language the learning community values. Back@U also provides a mechanism for community refinement of the criteria.

People freely engage in learning required to master games: attempting, getting feedback, trying new approaches. To reach a genuine achievement, learners need lots of trials, errors, and adjustments based on feedback. These are the same skills life-long-learners use; they approach learning as a challenge, a game.
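The ESP-Game-style agreement mechanic described in both versions could be sketched like this (a toy illustration with invented terms, not part of the proposal text): two judges describe the same work independently, and only the terms they agree on earn credit.

```python
def agreement_score(terms_a, terms_b):
    """ESP-Game-style scoring: judges earn credit only for the
    descriptive terms they independently agree on."""
    agreed = set(terms_a) & set(terms_b)
    return len(agreed), sorted(agreed)

# Two hypothetical judges describing the same piece of work.
points, shared = agreement_score(
    ["creative", "persistent", "unclear thesis"],
    ["creative", "tinkering", "unclear thesis"],
)
print(points, shared)  # 2 ['creative', 'unclear thesis']
```

Accumulating those points over many rating sessions is one plausible way a judge’s status could grow within the community.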

DML Competition Partnering Possibilities

The MacArthur-funded, HASTAC-managed DML competition has entered its next round. We found several potential partners in the January round; there are other potential partners among the first-round entrants, and perhaps some new ones among the round-two entrants. To help would-be partners visualize how our project might interact with theirs, we created this example. We’ve since been turned on to a couple of other models, which are noted here.

Toward the end of last week we cooked up a scheme to partner with three projects that have the potential to serve as test-beds for our Back@U concept. We will request some additional funds to support the additional costs our partners incur in serving as test contexts.

We are inviting would-be partners to contact us (Nils Peterson) to discuss ideas offline, and then post a comment in our DML entry about your interest in partnering and what your test-bed offers. Let us know you did, so we can post in your entry as well. When the public comment period closes at DML, we can keep the public conversation going as comments on this post.

As of now, it seems it would be useful for us to know:

  • nature of your platform (website, blog, wiki, course management system, etc.), so we can think about how our tool can interface
  • nature of your participants (numbers, age group)
  • availability and nature of mentors to provide feedback

The judges, following your comment on our page back to your entry, will probably want to know:

  • your project name
  • the URL of your page in DML

Status Update: From Harvesting Gradebook to Learning Outcomes Assessment

The Cal State University system has been holding an internal series of webinars on ways to integrate and assess general education, including the use of ePortfolios, the VALUE rubrics, and themes like “sustainability.” By invitation, we just presented a summary of the Harvesting Gradebook and beyond.

Thursday, February 11, 2010
10:00-11:00 a.m.

Draft abstract

WSU does not have a centrally endorsed ePortfolio tool, and has been moving to view the web itself as an ePortfolio. You can read a summary of our (somewhat radical) thinking here: https://communitylearning.wordpress.com/2009/09/27/not-your-fathers-portfolio/

One of the challenges inherent in the strategy is how to manage assessment when student work is scattered across the Internet. To meet that challenge, we have been developing a tool called the “Harvesting Gradebook” that allows multiple reviewers, both inside and outside the institution, to give rubric-based feedback to learners wherever the learners’ work resides.

This “embedded assessment” approach has the advantage that it can be rolled up from program to college to university level for meaningful program review. WSU is piloting such a system for use in its NWCCU accreditation reporting. The concept piece can be found here: https://communitylearning.wordpress.com/2009/09/21/from-student-feedback-to-university-accreditation/

In the effort to balance the tension between accountability and assessment, WSU is currently refining a rubric ( https://universityportfolio.wsu.edu/2009-2010/December%20Packet/Guide%20to%20Assessment%20(Expanded).pdf ) to provide formative feedback to academic programs about their assessment work.

WSU’s work is attempting to articulate, coordinate and be accountable for student learning outcomes across many scales within a diverse university setting.

Back@U DML Competition Mockup

The Digital Media and Learning Competition has reached the stage where authors revise. I’ve been seeking out projects that fit with our thoughts for Reimagining Learning and saving them here. Now we have a couple of interesting comments left by other entrants.

The question in my mind is how to better show those commentators what we are proposing, and perhaps use the demo to self-reflect and make our entry better.

First Attempt – Diigo

Following a brainstorming session with Theron, I created a Diigo.com group “DML Competition” so that I could begin exploring how to give Back@U type feedback (or at least an approximation of it) to myself and other DML competitors, using the criteria of the competition itself. I wanted to do my exploration in a public way that could become understood by the judges and the audience as well.

I found four criteria in the DML call for submissions and highlighted them in 4 colors with Diigo’s tools. After you install Diigo, check out the Digital Media and Learning/reimagining_learning.php entry in the DML Competition or see screenshot below.


Based on what I found in their call for proposals, here are four suggested categories for structuring feedback:

Yellow=Rich problems. Diverse, multi-faceted problems. New and emerging problems requiring a collaboration among different disciplines and skills to address.

Blue=habits of mind, including critical thinking, but extending to dispositions leading to innovation: creativity, persistence, curiosity, storytelling, tinkering, improvisation, collaboration

Green=Social and collaborative learning; new learning resources, approaches and skills that augment traditional ones

Pink=Learning setting/activity: tangible, creative activities that are open and discovery-based, involve tinkering and play, and are not highly prescriptive.

Next I went to our Back@U entry and attempted to place the four colored highlights, and some comments about how each criterion is met, into our entry. This exercise was very instructive for thinking about our revisions…


Why do this in Diigo rather than as comments?

1) I’ve found that comments in the DML system have a limited length.
2) The color highlight allows me to point at the relevant place for my comment.

Get a Diigo account. Join the DML Competition group. Use the color codes above and begin highlighting and commenting.

Steve Spaeth has jumped on the Diigo idea and is trying it in a project he has going.

Second Attempt – Google Sidewiki

My second attempt was with Google’s Sidewiki. It’s an IE and Firefox plugin; go to Google to get it. Sidewiki allows comments by multiple authors on the whole page and/or on selections within the page. Sidewiki does not support color coding.


What’s still missing?

Each of these tools captures some of our thinking, and perhaps enough to help us provide critiques for improvement.

Community-agreed dimensions. I started by pointing to (conjectured) dimensions for assessing this work. The tools (Diigo and Sidewiki) don’t support posting the dimensions themselves.

Rating scale. While the tools let us point at parts of a text, we can’t use rubric criteria in the tool to provide a measure.

User control. It would be nice for the author to be able to embed a rating widget, preset with the dimensions and rating scale, and invite feedback in more explicit ways.

Here is a hypothetical screen shot with widget embedded.
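One way such an embed might work (purely hypothetical; the endpoint, parameter names, and markup below are all invented for illustration) is a small snippet the author generates, preset with the community’s dimensions and scale, and pastes next to the work:

```python
def embed_snippet(work_url, dimensions, scale_max=5):
    """Build the kind of HTML snippet an author might paste next to
    their work to invite structured feedback. The widget host and
    query parameters are invented, not a real service."""
    dims = ",".join(dimensions)
    return (
        '<iframe src="https://backatu.example/widget'
        f'?work={work_url}&dims={dims}&scale=1-{scale_max}">'
        "</iframe>"
    )

print(embed_snippet(
    "https://example.org/my-essay",
    ["rich_problems", "habits_of_mind", "social_learning", "setting"],
))
```

The point of the sketch is that the author, not the reviewer, chooses the dimensions and rating scale, which is exactly the control missing from Diigo and Sidewiki.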