Key features for implementing a Harvesting Gradebook

In working on our DML competition entry, I found myself enumerating the features we’ve found important to our Harvesting Gradebook work. The Harvesting Gradebook consists of a web-based survey that can be embedded in or linked from a piece of work to be evaluated, or that can itself link to or embed that work; a reviewer evaluates the work using the survey in conjunction with various forms of data visualization, such as radar charts and tag clouds.
The first proofs of concept were in the summer of 2008. We first used the tool with students in Fall 2008. That work, through April 2009, can be found here. Our explorations have branched in several directions since then, including elaborating the idea to university-wide, program-level learning outcomes assessment. There is some overlap among these three categories:

These implementations have used several tools: paper and pencil, Google Docs Forms/Spreadsheet, Microsoft SharePoint surveys, and most recently, Diigo and Google Sidewiki. Production implementations have been done with WSU’s Skylight Matrix Survey System.
We keep gravitating back to Skylight because it has features that make it particularly well suited to implementing the Harvesting Gradebook. They are:

  1. multiple “Respondent Pools,” which allow multiple surveys to use a common data store and shared reporting mechanism;
  2. “Respondent Pool Metadata” to store additional data elements that describe a Respondent Pool alongside that pool’s data;
  3. a method for embedding the metadata in both an email and a survey when delivered to a respondent;
  4. the “Rubric Question Type,” since rubrics are a commonly accepted assessment tool;
  5. the “Dashboard” for automatic, real-time, web-based reporting of data from one or more Respondent Pools to one or more authorized viewers;
  6. tabular reports of data, available for the survey overall, for an individual Respondent Pool, or for selected groups of Respondent Pools;
  7. a mechanism for securing report data without a password, including sharing the results when a rater finishes a rating session;
  8. a mechanism to get the tabular data from the Skylight Matrix Survey System to a third-party system via a uniform resource locator (URL) (a minimal sketch of such a fetch appears after this list);
  9. a mechanism to download all data and metadata associated with a survey and/or Respondent Pool, including URLs for taking the survey, in a format readily used by spreadsheets.
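
For illustration, here is a minimal sketch of what features 8 and 9 enable: pulling a Respondent Pool’s tabular data out of the survey system via a URL so a third-party tool can work with it. The endpoint and parameter names are assumptions invented for this sketch, not Skylight’s documented interface.

```python
# Hypothetical sketch: fetch a tabular report for one Respondent Pool as CSV.
# The REPORT_URL endpoint and its query parameters are invented for
# illustration; the real Skylight Matrix Survey System URLs will differ.
import csv
import io
import urllib.request

REPORT_URL = "https://skylight.example.wsu.edu/report"  # hypothetical endpoint

def fetch_report(survey_id: str, pool_id: str) -> list[dict]:
    """Download one Respondent Pool's tabular report and parse it as CSV rows."""
    url = f"{REPORT_URL}?survey={survey_id}&pool={pool_id}&format=csv"
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))

if __name__ == "__main__":
    for row in fetch_report("harvesting-demo", "student-042"):
        print(row)
```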

More Back@U mockups and an example application

We just discovered that Jason B. Jones has a proof-of-concept mashup of a Google form with Diigo, using a Firefox extension to put both on a split screen. The emphasis seems to be more on grading, but it has a nice place to give suggestions for future work. We had previously tried making a mashup with Google; Jones’ Firefox extension looks more promising. I think I like the Google Sidewiki approach (middle of this page) best among all of these, because it can pull out in a drawer and then get out of the way again.

While it’s easy (in our University setting) to think about these ideas in terms of grading (which is certainly where we started), our DML entry is trying to push toward a wider learning-community perspective.

Example: Learner with a problem

In the video, William Kamkwamba sets out a problem statement, including his context, which many of us might find challenging. Imagine if William could have posted his problem and invited help at a site (perhaps facilitated by an NGO or WSU’s Ripple Effect) where others could have given feedback on the problem statement. Further, imagine that the site had ways of collecting proposed solutions and gathering feedback from William and others in the audience on the quality and utility of the solutions. The criteria in the rubric might have included ideas like critical thinking, but also habits of mind at the heart of innovation and achievement – creativity, persistence, imagination, curiosity, storytelling, tinkering, improvisation, passion, and risk-taking.

William and his collaborators could have learned about the problem and its solution. Bystanders could have learned as well, either about the ingenious solution or about William’s approach to problem solving.

Status Update: From Harvesting Gradebook to Learning Outcomes Assessment

The Cal State University system has been holding an internal series of webinars on ways to integrate and assess general education, including the use of ePortfolios, the VALUE rubrics, and themes like “sustainability.” By invitation, we just presented a summary of the Harvesting Gradebook and beyond.

Thursday, February 11, 2010
10:00-11:00 a.m.

Draft abstract

WSU does not have a centrally endorsed ePortfolio tool, and has been moving to view the web itself as an ePortfolio. You can read a summary of our (somewhat radical) thinking here: https://communitylearning.wordpress.com/2009/09/27/not-your-fathers-portfolio/

One of the challenges inherent in the strategy is how to manage assessment when student work is scattered across the Internet. To meet that challenge, we have been developing a tool called the “Harvesting Gradebook” that allows multiple reviewers, both inside and outside the institution, to give rubric-based feedback to learners wherever the learners’ work resides.

This “embedded assessment” approach has the advantage that it can be rolled up from program to college to university level for meaningful program review. WSU is piloting such a system for use in its NWCCU accreditation reporting. The concept piece can be found here: https://communitylearning.wordpress.com/2009/09/21/from-student-feedback-to-university-accreditation/

In the effort to balance the tension between accountability and assessment, WSU is currently refining a rubric ( https://universityportfolio.wsu.edu/2009-2010/December%20Packet/Guide%20to%20Assessment%20(Expanded).pdf ) to provide formative feedback to academic programs about their assessment work.

WSU’s work is attempting to articulate, coordinate and be accountable for student learning outcomes across many scales within a diverse university setting.

POD 2009 Innovation Award Application

The item below is a nice synthesis of our thinking on the last 18 months of work in the Harvesting Gradebook. It was developed as a Professional and Organizational Development Network in Higher Education (POD) Award application for the 2009 competition.

Author Contact Information
Gary Brown, Theron DesRosier, Jayme Jacobson, Corinna Lo, Nils Peterson
Center for Teaching Learning and Technology
Washington State University
Pullman, WA 99164
browng@wsu.edu
509/335-1355
http://ctlt.wsu.edu

Innovation Award Description
Title:
Harvesting Gradebook

Category of Innovation Award
Teaching and Learning

Abstract
A gradebook traditionally is a one-way reporting mechanism. It reports to students their performance as assessed by the instructor. This model assumes and implies that students learn primarily from the professor and the professor’s grade.

The Center for Teaching, Learning, & Technology at WSU has developed, implemented, and assessed an enriched gradebook that affords multiple stakeholders the opportunity to assess student work and provide quantitative and qualitative feedback to students and faculty.


Project Description
Originality

Nationally, efforts to integrate active learning and critical thinking, and to assess the outcomes, have proven elusive. Faculty experience with, and resources for, providing students with feedback that contains multiple perspectives have been limited.

In our work with faculty at a land-grant research institution, preconceptions tend to hold that undergraduates do not have the wherewithal to engage in constructive peer and self-assessment. Many doubt that rich feedback about authentic problems can be offered in more than a few select courses.

This innovation counters these assumptions.  At heart is the WSU Guide to Critical and Integrative Thinking (CITR), an internationally recognized instrument and assessment process.  We have previously shown that students appreciate and can provide rich peer assessments with the CITR, and that those assessments mirror judgments by faculty. We have further shown that online tools help facilitate the process.

In this phase of the innovation we extend the process to a distributed audience of students, peers, faculty in the program, and industry professionals around the globe. The resulting feedback and ratings from each of these groups provide invaluable insight and a rich resource, breaking down the barrier between educational practice and the “real world.” For instance, reviewers provided insights into:
  1. their perception of the value of the rubric’s dimensions
  2. the changes in the employability of the students based on the work

Scope and Results
We have piloted this Harvesting Gradebook technique in an undergraduate-level market forecasting class and a similar Honors class. In the forecasting class, student teams received ratings and textual feedback from peers, faculty in the program, and industry professionals at mid-term and again at end of term.

We observed that:

Transferability
CTLT is presently scaling up its capacity to offer this grading/feedback approach University-wide and we are generalizing the idea of Harvesting Feedback to apply it to the Learning Outcomes aspects of WSU’s University Accreditation activities.

Effectiveness
This process is an effective and efficient way to gather rich feedback for students working on authentic problems. Students can work in any space on the Internet and post in their workspace a link to an online rubric for their purposes.

They can also send a request for feedback through email. Faculty can easily recruit industry participants and because the rating process is fast, the professionals can contribute without significant time cost. Results can be centrally harvested and reported to students and faculty in real time.

Crowd sourcing to support learning at all levels of the university

We are developing a response here to the article Duke Professor Uses ‘Crowdsourcing’ to Grade by Erica Hendry in the Chronicle.

In our response (which for some reason did not appear as a comment to the Chronicle article but is reproduced in Cathy’s blog) we offered a survey that implements Gary Brown’s Harvesting Gradebook concept. Erica’s article is the object of the review, Cathy’s criteria are the basis of the instrument.

The demonstration we whipped up is a variant of an earlier demonstration of harvesting feedback for programmatic assessment that we did in a webinar hosted by the TLT Group. The link is to David Eubanks’s insightful review of the demo.

The basic concept is to link a survey to an object on the Internet and invite feedback from viewers, but with criteria more elaborate than “it’s hot/it’s not.” The student gets a visualization of the quantitative data and a listing of the qualitative comments.

If you have not tried it already, read Erica’s article and review it here. The end of the review will take you to a page with the results (it’s not real time; we’ll update periodically).

Some of the angst in the comments to the Chronicle article seems to come from the energy surrounding high-stakes grading activities, and perhaps a recognition that grading does not advance the student’s learning (nor the faculty member’s).

A gradebook traditionally is a one-way reporting mechanism: it reports to students their performance as assessed by the instructor who designed the activity. Learning from grades in this impoverished but pervasive model is largely one way: the student learns, presumably, from the professor’s grade. What does a student really learn from a letter or number grade? What does the faculty member learn from this transaction that will help him or her improve? What does a program or institution learn? We are exploring ways to do better.

We are exploring ways to learn from grading by transforming the gradebook, and part of that transformation is to allow others into the process. For example, Alex Halavais rates his experiment with crowdsourcing as “revise and resubmit.” In Halavais’ example, students gamed the system, competing for points. The approach we are exploring has a couple of key differences. First, the scales we’d advocate (such as WSU’s Critical Thinking Rubric) are absolute (we expect graduating seniors have not reached the top level; faculty map the rubric scale to grades in age-appropriate ways). Second, we imagine outsiders providing feedback, not just peers. When we did a pilot in an Apparel Merchandising course in the Fall of 2008, a group of industry professionals, faculty from the program, and students were all involved in assessing work in a capstone course. The results were rich in formative feedback and let the students see how their work would be judged by other faculty in the program and by the people who might hire them.

Further, we have called this a “transformative” approach, because the data can be used by the student, by the instructor, by the program, and by the university, each for purposes of improving on practice.

Taking the Harvesting Gradebook to production

Last summer we took Gary’s Harvesting Gradebook idea from concept to implementation, and during the school year we conducted two pilots in classes. Figure 1 is a diagram of the concept, showing how data can serve the needs of student, instructor, and academic program. Here is a live demo you can try. One goal of the demo is to illustrate that reviewers can give rubric-based feedback in 5-10 minutes – a level of time commitment we think is reasonable to ask of outside reviewers.

Figure 1. Diagram of harvesting feedback on an assignment and on student work, then feeding it to student, instructor and academic program dashboards

This summer our goal is to take the concept to production and integrate it with Washington State University’s systems. Figure 2 is our whiteboard analysis of the life cycle of the process. The key element is being able to produce a spreadsheet with two columns: the student identifier and a letter grade summarized from the harvesting process.

Figure 2. Harvesting Gradebook scale-up (whiteboard analysis of the process life cycle)

Step 1. Create surveys for students to embed within their work

The life cycle of the process begins with the Registrar, where students select their classes. The data are extracted from there, either directly by faculty as they do to create their class lists, or from a shadow system (CTLT’s Enrollment Web). After massaging in Excel, the data are uploaded into the Skylight Matrix Survey System to create the Respondent Pools that are the individual student surveys. Metadata about the student (importantly, the WSU ID number) can be included in the upload process so that it will be available in the reporting process to link students with their grades.
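
As an illustration of the “massaging” step, here is a minimal sketch, in Python rather than Excel, of turning a registrar class-list extract into a Respondent Pool upload file that carries the WSU ID along as metadata. The column names and upload layout are assumptions for this sketch; the actual Skylight import format will differ.

```python
# Minimal sketch of Step 1's data massaging: registrar roster -> upload file.
# All column names (roster and upload) are assumed for illustration.
import csv

def build_respondent_pool_upload(roster_path: str, upload_path: str) -> None:
    """Turn a class-list CSV into a Respondent Pool upload file, carrying
    the WSU ID as pool metadata so reports can link students to grades."""
    with open(roster_path, newline="") as src, open(upload_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=["pool_name", "email", "meta_wsu_id"])
        writer.writeheader()
        for student in reader:
            writer.writerow({
                "pool_name": f"{student['last_name']}, {student['first_name']}",
                "email": student["email"],
                "meta_wsu_id": student["wsu_id"],  # metadata carried through to reporting
            })
```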

Step 2. URLs for linking the survey to the student work

Students are assumed to be working in a variety of media and locations on the Internet. The ideal situation is to embed a survey in situ; less ideal (but more practical) is to place a URL to the survey with the work. There are two mechanisms for getting the URLs (created in Step 1) distributed to students.

  1. In Step 1, students can be authorized to the Skylight Dashboard (this tool was created for faculty for course evaluations, but it works the same for any person who is the subject of a survey). In the FAQ of the page above is information about getting the URL to the survey.

  2. Alternately, an instructor can download an Excel report for the whole survey and obtain the URLs from it.

Step 3. Student access to the Harvested Feedback

Students can see summary data in their survey using the Skylight Dashboard. They can also download the raw data and/or can make a customized report (or use a template provided to them to get a customized report). Customized reports have great potential to reformat and visualize the data. For example, in the live demo this Custom report was created in Google Docs, and the resulting graphs were displayed in a SharePoint space (which could be a student’s portfolio).
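
As one example of the kind of visualization a customized report can produce, here is a minimal sketch of a radar chart of average rubric scores, in the spirit of the radar charts mentioned at the top of this post. The dimension names and scores below are invented for illustration, not data from a pilot.

```python
# Sketch: radar chart of average rubric scores using matplotlib.
# Dimension names and scores are invented example data.
import math
import matplotlib.pyplot as plt

dimensions = ["Problem", "Evidence", "Context", "Perspective", "Conclusions"]
scores = [3.2, 2.8, 3.5, 2.9, 3.1]  # example averages on a 6-point rubric scale

# Spread the dimensions evenly around the circle and close the polygon.
angles = [2 * math.pi * i / len(dimensions) for i in range(len(dimensions))]
angles += angles[:1]
closed_scores = scores + scores[:1]

ax = plt.subplot(projection="polar")
ax.plot(angles, closed_scores)
ax.fill(angles, closed_scores, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
ax.set_ylim(0, 6)
ax.set_title("Average rubric scores (example data)")
plt.show()
```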

Step 4. Converting data for use in traditional gradebook

The instructor can use a Standard Report from Skylight to get all the data for the class and then, in another Excel sheet, have it automatically processed into letter grades using formulas chosen by the instructor. A final Excel sheet is then created to merge student identity information (downloaded from Skylight, e.g., name, WSU ID) with the letter grade. This latter sheet is a key part of the following strategy.
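
Here is a minimal sketch of the Step 4 logic expressed in Python rather than Excel formulas. The grade cutoffs and the 6-point scale are placeholders; in practice the instructor chooses the mapping.

```python
# Sketch of Step 4: rubric averages -> letter grades, merged with identity.
# Cutoffs below are placeholder values, not a recommended scheme.
def to_letter(avg_score: float) -> str:
    """Instructor-chosen mapping from rubric average to letter grade."""
    if avg_score >= 5.0:
        return "A"
    if avg_score >= 4.0:
        return "B"
    if avg_score >= 3.0:
        return "C"
    return "F"

def grade_sheet(averages: dict[str, float], names: dict[str, str]) -> list[tuple[str, str, str]]:
    """Merge per-student rubric averages with identity information into
    rows of (WSU ID, name, letter grade) for the final Excel sheet."""
    return [(wsu_id, names[wsu_id], to_letter(avg)) for wsu_id, avg in averages.items()]

# Example: two students' harvested averages become gradebook-ready rows.
print(grade_sheet({"11111111": 4.6, "22222222": 3.4},
                  {"11111111": "Doe, Jane", "22222222": "Roe, Sam"}))
```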

Step 5. Reporting letter grades to students

While students can see their feedback and rubric scores in the Skylight Dashboard, they do not have access to the resulting letter grade (which may be a function of several instructor-set parameters). Using the Excel sheet discussed at the end of Step 4, the instructor can upload the grade information as a column in a gradebook within a course management system (CMS). Different CMSs have slightly different requirements for this transaction, but all generally support it. Within the CMS gradebook, the grade from the harvested activity can be combined with scores from other sources (e.g., quizzes). The instructor can calculate a cumulative grade for the course from these multiple sources.

Step 6. Reporting the final grade to the Registrar

The instructor can download the scores, and the final grade, from the CMS and, with suitable adjustment of columns, be prepared to upload the final results to the Registrar at the end of the term. This activity, while an important productivity enhancement for the instructor, is a general one that may be beyond the scope of this work.

Harvesting feedback on a course assignment

This post demonstrates harvesting rubric-based feedback in a course, and how the feedback can be used by instructors and programs, as well as students. It is being prepared for a Webinar hosted by the TLT Group. (Update 7/28: Webinar archive here. Minutes 16-36 are our portion. Minutes 24-31 are music while participants work on the online task. This is followed by Terry Rhodes of AAC&U with some kind comments about how the WSU work illustrates ideas in the AAC&U VALUE initiative. Minutes 52-54 of the session are Rhodes’ summary of VALUE and the goal of rolling up assessment from course to program level. This process demonstrates that capability.)

Webinar Activity (for the session on July 28). This should work before and after the session; see below.

  1. Visit this page (opens in a new window)
  2. On the new page, complete a rubric rating of either the student work or the assignment that prompted the work.

Pre/Post Webinar

If you found this page, but are not in the webinar, you can still participate.

  • Visit the page above and rate either the student work or the assignment using the rubric. Data will be captured but not updated for you in real time.
  • Explore the three tabs across the top of the page to see the data reported from previous raters.
  • Links to review:

Discussion of the activity
The online session is constrained for time, so we invite you to discuss the ideas in the comment section below. There is also a TLT Group “Friday Live” session being planned for Friday, Sept 25, 2009, where you can join in a discussion of these ideas.

In the event above, we demonstrated using an online rubric-based survey to assess an assignment and to assess the student work created in response to the assignment. The student work, the assignment, and the rubric were all used together in a course at WSU. Other courses we have worked with have assignments and student products that are longer and richer; we chose these abbreviated pieces for pragmatic reasons, to facilitate a rapid process of scoring and reporting data during a short webinar.

The process we are exploring allows feedback to be gathered from work in situ on the Internet (e.g., a learner’s ePortfolio), without requiring that work first be collected into an institutional repository. Gary Brown coined the term “Harvesting Gradebook” to describe the concept, but we have come to understand that the technique can “harvest” more than grades, so a better term might be “harvesting feedback.”


This harvesting idea provides a mechanism to support community-based learning (see Institutional-Community Learning Spectrum). As we have been piloting community-based learning activities from within a university context, we are coming to understand that it is important to assess student work, assignments, and the assessment instruments.

Importance of focusing assessments on Student Work

Gathering input on student projects provides the students with authentic experiences, maintains ways to engage students in authentic communities, helps the community consider new hires, and gives employers the kind of interaction with students that the university can capitalize on when asking for money. But we also have come to understand that assessing student learning often yields little change in course design or learning outcomes (Figure 1). (See also http://chronicle.com/news/article/6791/many-colleges-assess-learning-but-may-not-use-data-to-improve-survey-finds?utm_source=at&utm_medium=en )


Figure 1. In the period 2003-2008 the program assessed student papers using the rubric above. Scores for the rubric dimensions are averaged in this graph. The work represented in this figure is different from the work being scored in the activity above. The “4” level on the rubric was determined by the program to be competency for a student graduating from the program.

The data in Figure 1 come from the efforts of a program that has been collaborating with CTLT for five years. The project has been assessing student papers using a version of the Critical Thinking Rubric tailored for the program’s needs.

Those efforts, measuring student work alone, did not produce any demonstrable change in the quality of the student work (Figure 1). In the figure, note that:

  • Student performance does not improve with increasing course level, e.g., 200-, 300-, and 400-level within a given year
  • Only once were students judged to meet the competency level set by the program itself (2005, 500-level)
  • Across the years studied, student performance within a course level did not improve; e.g., examine the 300-level course in 2003, 2006, 2007, and 2008

Importance of focusing assessments on Assignments

Assignments are important places for the wider community to give input, because the effort the community spends assessing assignments can be leveraged across a large group of students. Additionally, if faculty lack mental models of alternative pedagogies, assignment assessment helps focus faculty attention on very concrete strategies they can actually use to help students improve.

The importance of assessing more than just student work can be seen in Figure 1. As these results unfolded, we suggested to the program that it focus attention on assignment design. The program did not follow through with reflecting on and revising the assignments, nor with suggestions to improve communication of the rubric criteria to students.

Figure 2 shows the inter-rater reliability from the same program. Note that the inter-rater reliability is 70+% and is consistent year to year.

Figure 2. Graph of inter-rater reliability data

This inter-rater reliability is borderline and problematic because, when extrapolated to high-stakes testing, or even grades, this marginal agreement speaks disconcertingly to the coherence (or lack thereof) of the program.
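
For readers unfamiliar with the statistic, here is a minimal sketch of the simple percent-agreement calculation behind a figure like Figure 2. (Chance-corrected measures such as Cohen’s kappa are often preferred; this shows only the naive agreement rate, with invented scores.)

```python
# Sketch: naive percent agreement between two raters scoring the same papers.
def percent_agreement(rater_a: list[int], rater_b: list[int]) -> float:
    """Fraction of papers on which the two raters gave the same rubric score."""
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)

# Invented example: 7 of 10 papers scored identically -> 0.70,
# the "70+%" range discussed above.
print(percent_agreement([3, 4, 4, 2, 5, 3, 4, 3, 2, 4],
                        [3, 4, 3, 2, 5, 3, 4, 2, 1, 4]))
```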

Figure 3 comes from a different program. It shows faculty ratings (inter-rater reliability) on a 101-level assignment and provides a picture of the maze, or obstacle course, of faculty expectations that students must navigate. Higher inter-rater reliability would be indicative of greater program coherence and should lead to higher student success.

Figure 3. Inter-rater reliability detail

Importance of focusing assessments on Assessment Instruments

Our own work, and that of Allen and Knight (Table 4), has found that faculty and professionals place different emphasis on the importance of criteria used to assess student work. Assessing the instrument in a variety of communities offers the chance to have conversations about the criteria and to address questions of the relevance of the program to the community.

Summary

The intention of the triangulated assessment demonstrated above (assignment, student work, and assessment instrument) is to keep the conversation about all parts of the process open, so as to develop and test action plans that have the potential to enhance learning outcomes. We are moving from pilot experiments with this idea to strategies for using the information to inform program-wide learning outcomes and to feed that data into ongoing accreditation work.

Google gadgets for presenting data

One of the issues in learning in an era of information abundance is the need for tools to help visualize data. Examples of this are emerging, including Blaise Aguera y Arcas’ demo of Photosynth and Hans Rosling’s “the best stats you’ve ever seen.” Our experiments are more modest, using Google to graph data and share it, below, and in a gadget being built by Corinna Lo in support of our Harvesting Gradebook work. In both the graph below and Lo’s work, data in a Google Docs spreadsheet is fed to a dynamic graphing tool that can then be mashed up into another presentation.

Contrast the current data above with the idealized trajectory in this WHO/CDC composite.

Graph of levels of infection vs stages in flu pandemic
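
As a sketch of the pattern described above, here is one way data kept in a Google Docs spreadsheet could be fetched by URL and re-graphed elsewhere. It assumes the sheet has been published to the web with a CSV export link; the URL and column names are placeholders invented for this sketch.

```python
# Sketch: re-graph data published from a shared spreadsheet as CSV.
# CSV_URL and the "week"/"cases" column names are placeholders.
import csv
import io
import urllib.request
import matplotlib.pyplot as plt

CSV_URL = "https://spreadsheets.google.com/pub?key=EXAMPLE_KEY&output=csv"  # hypothetical

with urllib.request.urlopen(CSV_URL) as resp:
    rows = list(csv.DictReader(io.StringIO(resp.read().decode("utf-8"))))

weeks = [row["week"] for row in rows]
cases = [float(row["cases"]) for row in rows]

plt.plot(weeks, cases)
plt.xlabel("week")
plt.ylabel("cases")
plt.title("Data re-graphed from a shared spreadsheet")
plt.show()
```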

Earning credentials in a learning community

Recently David Eubanks has posted some thoughts on assessment and Gary Brown and I each followed up with comments. That led David to make this post summarizing (and re-broadcasting) the thinking we have been doing around a Harvesting Gradebook.

David’s is a smaller and more personal example of an idea I’ve been exploring: learning communities organizing around problems, providing critiques and credentials to members, and doing all this outside the walls of the university. In this case, David is getting his head around some work that I have been involved in, so I have a different perspective than in the Lisi example I wrote about before.

In a little email to Gary, David writes, “I’m still wrapping my mind around it, and hope I didn’t get any details wrong. If I need to correct something, please let me know.” David, I think you got it about right, especially because you note “I hope the registrar has a defibrillator in the office,” which tells me you recognize the magnitude of the disruption this idea proposes for the current institutional structures.

The meta analysis

By linking to the work and providing some reflection, David is both announcing his interest in this Harvesting Gradebook community and extending some of his social capital to it. By linking back to David, and giving some assessment, I’m offering both a welcome into the community of practice and some social capital in return. Without pushing these analogies too far, this post is offering a credential to David, a certification that he “gets it.” And here is where things get dicey for the traditional university. Were a community of practice robust enough, and the accumulated credentials understood to carry substantial social capital, the community (and not the university) would be able to offer credentials. The income stream of the university would be threatened. Hence David’s concern for our Registrar’s cardiac health. (In the Lisi example above, time on a particle accelerator to test his theories would be certification of Lisi’s credentials in the high-energy physics world.)