Taking the Harvesting Gradebook to production

Last summer we took Gary's Harvesting Gradebook idea from concept to implementation, and during the school year we conducted two pilots in classes. Figure 1 is a diagram of the concept, showing how data can serve the needs of student, instructor, and academic program. Here is a live demo you can try. One goal of the demo is to illustrate that reviewers can give rubric-based feedback in 5-10 minutes, a level of time commitment we think is reasonable to ask of outside reviewers.

Figure 1. Diagram of harvesting feedback on an assignment and on student work, then feeding it to student, instructor, and academic program dashboards

This summer our goal is to take the concept to production and integrate it with Washington State University's systems. Figure 2 is our whiteboard analysis of the life cycle of the process. The key output is a spreadsheet with two columns: the student identifier and a letter grade summarized from the harvesting process.

Figure 2. Whiteboard analysis of the harvesting gradebook scale-up

Step 1. Create surveys for students to embed within their work

The life cycle of the process begins with the Registrar, where students select their classes. The data are extracted from there, either directly by faculty (as they do to create their class lists) or from a shadow system (CTLT's Enrollment Web). After massaging in Excel, the data are uploaded into the Skylight Matrix Survey System to create the Respondent Pools that become the individual student surveys. Metadata about the student (importantly, the WSU ID number) can be included in the upload so that it is available in the reporting process to link students with their grades.
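
As a rough illustration of the Excel massaging step, the sketch below converts a registrar class-list export into an upload file that carries the WSU ID along as metadata. The column names on both sides are placeholders, not Skylight's actual upload format.

```python
import csv

def build_respondent_pool(registrar_csv, upload_csv):
    """Turn a registrar class-list export into a respondent-pool upload.

    Column names here are hypothetical; adjust to the actual exports.
    """
    with open(registrar_csv, newline="") as src, \
         open(upload_csv, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(
            dst, fieldnames=["last_name", "first_name", "email", "wsu_id"])
        writer.writeheader()
        for row in reader:
            writer.writerow({
                "last_name": row["Last Name"].strip(),
                "first_name": row["First Name"].strip(),
                "email": row["Email"].strip(),
                # The WSU ID travels with each row so later reports can
                # link students with their grades.
                "wsu_id": row["WSU ID"].strip(),
            })

build_respondent_pool("registrar_export.csv", "respondent_pool_upload.csv")
```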

Step 2. URLs for linking the survey to the student work

Students are assumed to be working in a variety of media and locations on the Internet. The ideal situation is to embed a survey in situ; less ideal (but more practical) is to place a URL to the survey alongside the work. There are two mechanisms for distributing the URLs (created in Step 1) to students.

  1. In Step 1, students can be authorized to use the Skylight Dashboard (the tool was created for faculty course evaluations, but it works the same for any person who is the subject of a survey). The FAQ on the page above explains how to get the URL to the survey.

  2. Alternatively, an instructor can download an Excel report for the whole survey and obtain the URLs from it, as in the sketch below.
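
For the second mechanism, a small script can pair each student's survey URL (pulled from the whole-survey report, saved as CSV) with an email address for a mail merge. The column names are assumptions about that report's layout, not a documented format.

```python
import csv

def urls_for_mail_merge(report_csv, merge_csv):
    """Pair each student's survey URL with an email address for a mail merge.

    Source column names are assumptions, not Skylight's documented layout.
    """
    with open(report_csv, newline="") as src, \
         open(merge_csv, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.writer(dst)
        writer.writerow(["email", "survey_url"])
        for row in reader:
            writer.writerow([row["Email"], row["Survey URL"]])

urls_for_mail_merge("whole_survey_report.csv", "mail_merge.csv")
```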

Step 3. Student access to the harvested feedback

Students can see summary data for their survey using the Skylight Dashboard. They can also download the raw data and/or make a customized report (or use a provided template to get one). Customized reports have great potential to reformat and visualize the data. For example, in the live demo this custom report was created in Google Docs, and the resulting graphs were displayed in a SharePoint space (which could be a student's portfolio).
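
As a sketch of what such a customized report might do, the snippet below averages each rubric dimension from a raw-data download and charts the result. The file name and dimension column names are hypothetical.

```python
import csv
from collections import defaultdict
import matplotlib.pyplot as plt

# Collect scores per rubric dimension from the raw download.
scores = defaultdict(list)
with open("raw_survey_data.csv", newline="") as f:
    for row in csv.DictReader(f):
        for dim in ("Analysis", "Evidence", "Communication"):  # hypothetical dimensions
            if row[dim]:
                scores[dim].append(float(row[dim]))

# Chart the mean score for each dimension.
dims = sorted(scores)
means = [sum(scores[d]) / len(scores[d]) for d in dims]
plt.bar(dims, means)
plt.ylabel("Mean rubric score")
plt.title("Harvested feedback by rubric dimension")
plt.show()
```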

Step 4. Converting data for use in a traditional gradebook

The instructor can use a Standard Report from Skylight to get all the data for the class and then, in another Excel sheet, have it automatically processed into letter grades using formulas chosen by the instructor. A final Excel sheet then merges the student identity information (downloaded from Skylight, e.g., name, WSU ID) with the letter grade. This latter sheet is a key part of the following strategy.
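
A minimal sketch of Step 4, assuming the instructor's formula is a set of cutoffs on the mean rubric score (real formulas will vary) and that the report marks rubric columns with a "rubric_" prefix; both are illustrative assumptions.

```python
import csv

CUTOFFS = [(3.5, "A"), (3.0, "B"), (2.5, "C"), (2.0, "D")]  # illustrative only

def letter_grade(mean_score):
    """Map a mean rubric score to a letter grade via instructor-set cutoffs."""
    for cutoff, grade in CUTOFFS:
        if mean_score >= cutoff:
            return grade
    return "F"

# Merge identity information with the computed letter grade, producing the
# identifier-plus-grade sheet described above.
with open("skylight_standard_report.csv", newline="") as src, \
     open("grades_by_wsu_id.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.writer(dst)
    writer.writerow(["wsu_id", "name", "letter_grade"])
    for row in reader:
        ratings = [float(row[c]) for c in reader.fieldnames
                   if c.startswith("rubric_")]  # assumed column naming
        mean = sum(ratings) / len(ratings)
        writer.writerow([row["wsu_id"], row["name"], letter_grade(mean)])
```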

Step 5. Reporting letter grades to students

While students can see their feedback and rubric scores in the Skylight Dashboard, they do not have access to the resulting letter grade (which may be a function of several instructor-set parameters). Using the Excel sheet discussed at the end of Step 4, the instructor can upload the grade information as a column in a gradebook within a course management system. Different CMSs have slightly different requirements for this transaction, but all generally support it. Within the CMS gradebook, the grade from the harvested activity can be included with scores from other sources (e.g., quizzes), and the instructor can calculate a cumulative grade for the course from these multiple sources.
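
The cumulative-grade arithmetic inside the CMS gradebook amounts to a weighted average across sources. A toy sketch, with made-up source names and weights:

```python
# Illustrative weights only; each instructor sets their own.
WEIGHTS = {"harvested_project": 0.5, "quizzes": 0.3, "participation": 0.2}

def cumulative_percent(scores):
    """scores: dict mapping source name -> percent earned (0-100)."""
    return sum(WEIGHTS[source] * pct for source, pct in scores.items())

print(cumulative_percent(
    {"harvested_project": 88, "quizzes": 92, "participation": 100}))  # -> 91.6
```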

Step 6. Reporting the final grade to the Registrar

The instructor can download the scores, and the final grade, from the CMS and, with suitable adjustment of columns, be prepared to upload the final results to the Registrar at the end of the term. This activity, while an important productivity enhancement for the instructor, is a general one that may be beyond the scope of this work.
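
The "suitable adjustment of columns" could be as small as the sketch below, which keeps only the two fields assumed for a Registrar upload. Both sets of column names are assumptions.

```python
import csv

# Reduce a CMS gradebook export to the two columns assumed for the
# Registrar upload; the source column names are hypothetical.
with open("cms_gradebook_export.csv", newline="") as src, \
     open("registrar_upload.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.writer(dst)
    writer.writerow(["wsu_id", "final_grade"])
    for row in reader:
        writer.writerow([row["Student ID"], row["Final Grade"]])
```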

Enhance course evaluations with faculty data

Previously we have explored assessing assignments along with the student work the assignment prompted. We have also experimented with a parallel idea in course evaluations. The work was done in the College of Agricultural, Human, and Natural Resource Sciences, with their college-wide course evaluation, between 2008 and 2009. The course evaluation instrument contains several Likert blocks that ask about constructs: Skills Development, Learning Environment, Critical Engagement, and Disciplinary Knowledge.

The data from the course evaluation were reported to faculty in a multi-tabbed Excel workbook. The first tab contained a survey-like structure where faculty were asked to rate their intentions for the course, using the same measures as were included in the student course evaluation; see Figure 1.

Figure 1. Screen shot of a portion of the Excel workbook where faculty were prompted to rate their intentions for the course. Averages are calculated from the several ratings in the construct and shown to the right; the averages are also used in Figure 2.

The second tab of the workbook provided a means for faculty to compare their goals with the students' perception of how successful the course was at meeting each goal, and also to compare the course against a programmatic average; see Figure 2.

Figure 2. Screen shot of a portion of the Excel workbook where faculty can compare their goal (pink) against their students' perceptions of the course (blue) and against the academic program's average rating (white).

To date, we have not worked with faculty using these data, nor have the faculty intentions been collected centrally and rolled up to the program level to assess the intentions of courses against the goals of the program.
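
Were the intentions collected centrally, the roll-up would be straightforward. The sketch below mimics the workbook's logic with made-up construct names and ratings: average the Likert items in each construct for the faculty member, the students, and the program, then compare them side by side.

```python
import statistics

# Hypothetical ratings; in practice these would come from the collected
# workbooks and the course-evaluation data.
constructs = {
    "Skills Development":   {"faculty": [5, 4, 5], "students": [3.8, 4.1, 3.9], "program": [4.0, 4.2, 4.1]},
    "Learning Environment": {"faculty": [4, 4, 3], "students": [4.2, 4.0, 4.3], "program": [4.1, 4.0, 4.0]},
}

for name, ratings in constructs.items():
    means = {who: statistics.mean(vals) for who, vals in ratings.items()}
    print(f"{name}: faculty {means['faculty']:.2f}, "
          f"students {means['students']:.2f}, program {means['program']:.2f}")
```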

Harvesting feedback on a course assignment

This post demonstrates harvesting rubric-based feedback in a course, and how the feedback can be used by instructors and programs, as well as students. It is being prepared for a Webinar hosted by the TLT Group. (Update 7/28: the Webinar archive is here. Minutes 16-36 are our portion; minutes 24-31 are music while participants work on the online task. This is followed by Terry Rhodes of AAC&U with some kind comments about how the WSU work illustrates ideas in the AAC&U VALUE initiative. Minutes 52-54 are Rhodes' summary of VALUE and the goal of rolling up assessment from the course to the program level; this process demonstrates that capability.)

Webinar Activity (for the session on July 28). This should work before and after the session; see below.

  1. Visit this page (opens in a new window)
  2. On the new page, complete a rubric rating of either the student work or the assignment that prompted the work.

Pre/Post Webinar

If you found this page, but are not in the webinar, you can still participate.

  • Visit the page above and rate either the student work or the assignment using the rubric. Data will be captured but will not update for you in real time.
  • Explore the three tabs across the top of the page to see the data reported from previous raters.
  • Links to review:

Discussion of the activity

The online session is constrained for time, so we invite you to discuss the ideas in the comment section below. There is also a TLT Group “Friday Live” session being planned for Friday, Sept 25, 2009, where you can join in a discussion of these ideas.

In the event above, we demonstrated using an online rubric-based survey to assess an assignment and to assess the student work created in response to the assignment. The student work, the assignment, and the rubric were all used together in a course at WSU. Other courses we have worked with have assignments and student products that are longer and richer; we chose these abbreviated pieces for pragmatic reasons, to facilitate a rapid process of scoring and reporting data during a short webinar.

The process we are exploring allows feedback to be gathered from work in situ on the Internet (e.g., a learner’s ePortfolio), without requiring that work first be collected into an institutional repository. Gary Brown coined the term “Harvesting Gradebook” to describe the concept, but we have come to understand that the technique can “harvest” more than grades, so a better term might be “harvesting feedback.”

Figure: diagram of the Harvesting Gradebook concept

This harvesting idea provides a mechanism to support community-based learning (see Institutional-Community Learning Spectrum). As we have been piloting community-based learning activities from within a university context, we are coming to understand that it is important to assess student work, the assignments, and the assessment instruments.

Importance of focusing assessments on Student Work

Gathering input on student projects provides the students with authentic experiences, maintains ways to engage students in authentic communities, helps the community consider new hires, and gives employers the kind of interaction with students that the university can capitalize on when asking for money. But we have also come to understand that assessing student learning often yields little change in course design or learning outcomes, Figure 1. (See also http://chronicle.com/news/article/6791/many-colleges-assess-learning-but-may-not-use-data-to-improve-survey-finds?utm_source=at&utm_medium=en)

Figure 1. In the period 2003-2008 the program assessed student papers using the rubric above. Scores for the rubric dimensions are averaged in this graph. The work represented in this figure is different from the work being scored in the activity above. The “4” level on the rubric was determined by the program to be competency for a student graduating from the program.

The data in Figure 1 come from the efforts of a program that has been collaborating with CTLT for five years. The project has been assessing student papers using a version of the Critical Thinking Rubric tailored for the program’s needs.

Those efforts, measuring student work alone, did not produce any demonstrable change in the quality of the student work (Figure 1). In the figure, note that:

  • Student performance does not improve with increasing course level (e.g., 200-, 300-, and 400-level within a given year)
  • Only once were students judged to meet the competency level set by the program itself (2005, 500-level)
  • Across the years studied, student performance within a course level did not improve; e.g., examine the 300-level course in 2003, 2006, 2007, and 2008 (a sketch of this roll-up follows the list)
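
The roll-up behind Figure 1 can be sketched as follows, assuming the scored papers live in a CSV with year, course_level, and score columns (an assumed layout, not the program's actual files):

```python
import csv
from collections import defaultdict

COMPETENCY = 4.0  # the program's bar for a graduating student

# Group rubric scores by (year, course level) and report the means.
cells = defaultdict(list)
with open("rubric_scores_2003_2008.csv", newline="") as f:
    for row in csv.DictReader(f):
        cells[(row["year"], row["course_level"])].append(float(row["score"]))

for (year, level), scores in sorted(cells.items()):
    mean = sum(scores) / len(scores)
    note = "" if mean >= COMPETENCY else "  (below competency)"
    print(f"{year} {level}-level: mean {mean:.2f}{note}")
```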

Importance of focusing assessments on Assignments

Assignments are important places for the wider community to give input, because the effort the community spends assessing assignments can be leveraged across a large group of students. Additionally, if faculty lack mental models of alternative pedagogies, assignment assessment helps focus faculty attention on very concrete strategies they can actually use to help students improve.

The importance of assessing more than just student work can be seen in Figure 1. As these results unfolded, we suggested that the program focus attention on assignment design. However, the program did not follow through with reflecting on and revising the assignments, nor with suggestions to improve communication of the rubric criteria to students.

Figure 2 shows the inter-rater reliability from the same program. Note that the inter-rater reliability is 70+% and is consistent from year to year.

Figure 2. Graph of inter-rater reliability data

This inter-rater reliability is borderline and problematic because, when extrapolated to high-stakes testing, or even grades, this marginal agreement speaks disconcertingly to the coherence (or lack thereof) of the program.

Figure 3 comes from a different program. It shows faculty ratings (inter-rater reliability) on a 101-level assignment and provides a picture of the maze, or obstacle course, of faculty expectations that students must navigate. Higher inter-rater reliability would be indicative of greater program coherence and should lead to higher student success.

Figure 3. Inter-rater reliability detail for a 101-level assignment
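
The inter-rater reliability figures quoted above appear to be simple percent agreement: the share of artifacts on which two raters gave the same rubric score. A minimal sketch with made-up scores (more robust statistics, such as Cohen's kappa, correct for chance agreement):

```python
def percent_agreement(rater_a, rater_b):
    """Share of artifacts on which two raters gave identical scores."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)

# Illustrative data only: two raters scoring the same ten artifacts.
a = [3, 4, 4, 2, 5, 3, 3, 4, 2, 4]
b = [3, 4, 3, 2, 5, 3, 4, 4, 2, 4]
print(f"{percent_agreement(a, b):.0f}% agreement")  # -> 80% agreement
```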

Importance of focusing assessments on Assessment Instruments

Our own work and Allen and Knight (Table 4) have found that faculty and professionals place different emphases on the importance of the criteria used to assess student work. Assessing the instrument in a variety of communities offers the chance to have conversations about the criteria and to address questions about the relevance of the program to the community.

Summary

The intention of the triangulated assessment demonstrated above (assignment, student work, and assessment instrument) is to keep the conversation about all parts of the process open, in order to develop and test action plans that have the potential to enhance learning outcomes. We are moving from pilot experiments with this idea toward strategies that use the information to inform program-wide learning outcomes and to feed those data into ongoing accreditation work.

Mockups of Rain King Pages

FYI, the partial mockup I demo’ed last Thursday is available here.

The mockup of a college page is here.

The mockup of a college-year page is here.

The mockup of a college-topic page is here.

All of this is highly subject to change.
— Joshua