Not your father’s Portfolio

We were working with a writer on an article about ePortfolios to appear in Campus Technology (it's here, 11.2009). One of our examples to illustrate our thinking about ePortfolios was Margo Tamez's El Calaboz Portfolio. Our writer got back with this:

“The editor for my article about eportfolios had a question about my coverage of Margo Tamez’s eportfolio usage. She had expressed concern that the eportfolio have a home beyond the duration of the court case. Does Washington State have any kind of official policy or practices specifically about the life of its student eportfolios? Is there any kind of guarantee that it will live on after the student has left the institution? Anything you can say about that?”

There is a short answer and a long answer to the question.

Short answer: WSU has no policy or procedure in place to delete a student's SharePoint mySite (where Tamez's portfolio lives) after graduation, but after 12 months the site becomes read-only unless the graduate makes a specific request to have management access restored.

Long answer: The problem with the short answer is that it focuses on the technical survival of a specific thing at a specific URL. Thinking about a specific collection of artifacts, in a specific system, at a specific URL is too narrow a focus for our understanding of an ePortfolio.

At the risk of insulting the Campus Technology editor by paraphrasing an Oldsmobile ad, an ePortfolio "is not your father's Portfolio," by which I mean that in our view an ePortfolio is not at all an electronic counterpart of the paper portfolio.

An electronic portfolio is both more durable and more tenuous than its paper predecessor. It's also more powerful. It's not a thing or a place; it's a practice.

Googling Margo Tamez (she is lucky to have a distinctive name) shows that she built her electronic reputation in many places; that is, her ePortfolio is not in one place. Rather, it is the sum of artifacts embedded in the contexts of the communities where she was working. The image below is a TouchGraph (link to live TouchGraph) of a search for Margo. It shows her portfolio as a collection of the Web 2.0 places where she is working.

TouchGraph rendering of Google results for Margo Tamez

Due to the nature of the problem she was working on, Margo intentionally built her portfolio in a distributed fashion. Many of the key documents were emailed to a list of readers, where the body of the email served as description and context for the document. Her WSU ePortfolio was the recipient of a cc: of those emails. Other pieces of her work were created in wikis or as guest posts in blogs. She worked in her community, keeping the artifacts of her work (her ePortfolio) in the places that were best suited to them.

As part of our ePortfolio case study work, we interviewed Dennis Haarsager, now Senior Vice President for System Resources and Technology at National Public Radio, about blogging and building portfolios in public places. In our reflection we said:

“In our interview, Haarsager argued for the public lectures he gives on his chosen problem. The lecture is a showcase portfolio of Haarsager’s current, best thinking. The medium is mostly broadcast, but he feels it allows him to reach new audiences, and to get kinds of feedback about his ideas that he does not get in comments on his blog.

“Tamez is also creating showcase “mini-portfolios” in the form of printed fliers and media interviews. These productions may have some of the risk-related prestige that Tenner ascribes to printed books, while at the same time having the new audience-reaching and immediacy values that Haarsager associates with his lectures. In her learning portfolio, these mini-portfolios document where Tamez’ thinking was at points in her learning trajectory.”

Thus, our thinking is that ePortfolios are created as by-products of work, and are scattered across the venues and contexts in which the work is conducted. An ePortfolio is continually dissipating as systems storing the work go away, and continually growing as new work is added.

I have been struggling for a while with the problem of describing a 21st century resume. It, too, is not like its 20th century counterpart. In that 2007 post I did not yet fully recognize the obvious, which I'm coming to see here: my blog(s) and the other places I post online are my ePortfolio (and my resume).

Rather than focusing on the durability of an ePortfolio system or URL, the most important things we see about an ePortfolio (and the ePortfolio as 21st century resume) are the abilities to:

  1. find your work when you need it for reflection or repurposing,
  2. establish that you are indeed the author (possibly under multiple identities) of the works you wish to claim, and
  3. leverage the Google Juice of your work so that it helps you be found by people who share your interests and can help you in your work.

The first of these requirements is most likely met with a hybrid of several Web 2.0 tools. It could be supplemented with a social bookmark service where you track yourself.
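
To make the first requirement concrete, here is a minimal sketch (not part of the original workflow) of what "tracking yourself" with Web 2.0 tools might look like: pulling your own posts from several feeds into one locally searchable list, so you can find your work when you need it for reflection or repurposing. The feed URLs are placeholders, and the feedparser dependency is an assumption chosen for illustration.

```python
# Sketch: aggregate your own distributed work from several feeds so it can
# be searched when you need it for reflection or repurposing.
# Assumes the third-party `feedparser` package (pip install feedparser);
# the feed URLs below are hypothetical placeholders.
import feedparser

MY_FEEDS = [
    "https://example.wordpress.com/feed/",      # personal blog (placeholder)
    "https://www.diigo.com/rss/user/example",   # social bookmarks (placeholder)
    "https://example.edu/portfolio/feed",       # institutional site (placeholder)
]

def collect_my_work(feed_urls):
    """Return a list of (date, title, link) tuples from all feeds."""
    items = []
    for url in feed_urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            items.append((entry.get("published", ""),
                          entry.get("title", ""),
                          entry.get("link", "")))
    return items

def find_work(items, keyword):
    """Simple keyword search over titles: 'find your work when you need it'."""
    return [item for item in items if keyword.lower() in item[1].lower()]

if __name__ == "__main__":
    work = collect_my_work(MY_FEEDS)
    for date, title, link in find_work(work, "portfolio"):
        print(date, title, link)
```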

The second challenge, proving that the work is "yours," is probably met by making a claim to a corpus of works rather than to a single piece, and by making an appeal to a community and context in which the work was done. (Unlike Catherine Howell's thought (ca. 2005) that "universities have a role in 'authenticating' individuals [and endowing]… them with certain attributes," we think an ePortfolio world that enables community-based learning and community-based credentials breaks those assumptions about the university; see a recent piece for AAC&U.)

The third requirement is met by working in public, working in venues where your community of practice is likely to congregate, and then linking from those contexts to works you created in other contexts that contribute to the conversation.

This third point can be illustrated if you Google me (Nils Peterson). You will discover that there are two people using that name, with different career trajectories. Nils Peterson the Poet is in the Bay Area and worked at San Jose State. His identity is authenticated by a variety of news stories (that is, a community of other writers know that he is who he claims to be, and they are in agreement in their accounts of him).

I claim to be the other Nils Peterson, the one who is (currently) prominent in Google: the Nils Peterson who publishes in Campus Technology, as well as the author here and the blogger at nilspeterson.com. I have made a consistent effort to create user identities using Nils+Peterson in many systems and to link from one system to another. This strengthens my claim to be the Nils Peterson who is saying all those things. I don't depend on my employer or the universities that educated me to substantiate my claims, but I do depend on the corroboration of the communities in which I work.

But the claim is circumstantial, like solving a jigsaw puzzle by inferring which pieces fit together. Following the notions of Helen Barrett, and because I work online in public, my ePortfolio (and resume) is a lifelong and life-wide web of the works Google associates with me, wherever they exist.


From Student Feedback to University Accreditation

This post supports a free TLT Group webinar
The Harvesting Gradebook: From Student Feedback to University Accreditation

The event is now available as an archive, recorded Friday, September 25, 2009, 2:00 pm (ET).

Theron DesRosier, Jayme Jacobson, Nils Peterson, & Gary Brown
Office of Assessment and Innovation, Washington State University

This webinar is an extension of our previous thinking, "Learning from the Transformative Gradebook." Prior to (or following) the session, participants are invited to review a previous session and demonstration of these techniques being applied at the course level.

During the session, participants will be invited to pilot our Assessment of Assessment rubric on a program-level accreditation report and to discuss the broader implications of the strategies proposed.

This hour-long session will:

1. Review WSU’s model implementations of the Harvesting Gradebook that can be used to gather feedback on student work and the assignments that prompted it. (Background on Harvesting Gradebook)

2. Show how data from harvested assessments at the course level can flow up, providing direct evidence of learning outcomes at the program, college and university levels.

3. Demonstrate a systemic assessment process that applies a common assessment-of-assessment rubric across all of the university's assessment activities.

4. Invite the audience to provide feedback on the Assessment of Assessment rubric by assessing an accreditation report. The goals of the hands-on activity are to:

  1. Gather feedback on the rubric
  2. Demonstrate a time-effective means of gathering feedback from a diverse community on assessment activities

5. Offer a new perspective on curricular mapping, using harvested data.

Further Reading

The Prezi document used in the session (requires Adobe Flash).

Harvesting Gradebook in Production: We have been investigating the issues on the WSU campus surrounding taking the Harvesting Gradebook into production. While not all the integrations with WSU's Student Information Systems are in place yet, we can see a path that automates moving student enrollments from the Registrar to create a harvesting survey, and moving the numeric scores from the survey back to the instructor, where they might combine with other course scores to create the final grade that can be uploaded to the Registrar. A mostly automated pilot is being implemented Fall 2009.
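
Below is a hedged sketch of that data flow. None of the file formats, function names, or the weighting come from WSU's actual Student Information System integrations; they are placeholders meant only to show the shape of the automation: enrollments in, harvested scores back to the instructor, final grades out to the Registrar.

```python
# Sketch of the enrollment -> harvesting survey -> final grade flow described above.
# All file names, formats, and the 40% harvest weighting are illustrative
# assumptions, not WSU's actual integration.
import csv

def load_enrollments(path):
    """Read a registrar roster (student_id, name, email) from a CSV file."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def create_harvest_survey(enrollments):
    """Build one survey slot per student; a real system would call the survey tool's API."""
    return {row["student_id"]: {"email": row["email"], "score": None}
            for row in enrollments}

def record_harvest_scores(survey, harvested):
    """Copy numeric rubric scores returned by raters into the survey records."""
    for student_id, score in harvested.items():
        if student_id in survey:
            survey[student_id]["score"] = score

def final_grades(survey, other_scores, harvest_weight=0.4):
    """Combine harvested scores with other course scores into a final grade."""
    grades = {}
    for student_id, rec in survey.items():
        harvest = rec["score"] or 0
        other = other_scores.get(student_id, 0)
        grades[student_id] = harvest_weight * harvest + (1 - harvest_weight) * other
    return grades

if __name__ == "__main__":
    roster = load_enrollments("roster.csv")                    # from the registrar (placeholder file)
    survey = create_harvest_survey(roster)
    record_harvest_scores(survey, {"1001": 85, "1002": 92})    # hypothetical harvested scores
    print(final_grades(survey, {"1001": 78, "1002": 88}))      # upload-ready grades (placeholder)
```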

Student Evaluations of Program Outcomes: The presentation references the idea of using student course evaluations to gather indirect evidence of a course's achievement of the program's learning outcomes. For several years, WSU's College of Agricultural, Human, and Natural Resource Sciences (CAHNRS) has used a course evaluation college-wide that asks students their perception of how much the course helped them develop skills in thinking critically, writing, speaking, working on a team, and other dimensions that align with university learning goals. We have explored gathering the faculty goals related to these skills and comparing them to the student perceptions. To date, this data has not been systematically rolled up or used as evidence in accreditation self-studies.

Crowd-sourcing feedback

David Eubanks commented on our recent Harvesting Feedback demo. I'll save a reply about inter-rater reliability for later and focus now on his suggestion of using Mechanical Turk and his very insightful comment about the end of "enclosed garden" portfolios.

I think David correctly infers that Mechanical Turk is a potential mechanism to crowd-source the Harvesting Feedback process we are demonstrating. It's an Amazon marketplace to broker human expertise. The tasks, "HITs" (Human Intelligence Tasks), are ones that are not well suited to machine intelligence; in fact, the site bills itself as "artificial artificial intelligence."

To explore Mechanical Turk, I joined as a "Worker" and discovered that "Requesters" (sources of HITs) can pre-qualify Workers with competency exams. I'm now qualified as a '"Headshot" Image Qualifier,' a skill to identify images that meet certain specific criteria important to requester Fred Graver. I also learned that Workers earn (or maintain) a HIT approval rate, which is a measure of how well the Worker has performed on past tasks. One might think of this as how well the Worker is normed with the criteria of the task (though the criteria in this case are not explicit, which is a weakness in our view). Being qualified for a task might be analogous to initiation into a community of practice, but one would then need to practice "in community," which Mechanical Turk does not seem to support.

We've also been exploring a couple of other crowd-sourced feedback sites that help flesh out the character of this landscape: Slashdot and Leaf Trombone (website and video). Slashdot is a technology-related news website that features user-submitted and editor-evaluated current affairs news with a "nerdy" slant. Leaf Trombone is a game that lets you use your iPhone to play a slide trombone for a world audience.

The three systems are summarized in this table:

| | Mechanical Turk | Leaf Trombone | Slashdot |
| --- | --- | --- | --- |
| Goal of the site / developer's reason for using reputation in the site | Distributed processing of non-computable tasks / sort for suitable workers | Selling an iPhone app / use ego to encourage players | Building a reliable source of information / screen for editors who can take on high-level tasks |
| Type of reputation / participant's purpose for having a good reputation | Private reputation / to secure future employment and earn more income | Public reputation / status in the community as player and judge; ongoing participation | Public reputation / enhanced opportunity to contribute to the common good (as opposed to being seen as a clever fellow) |
| Type of reward / motivation for the participant | Money / personal gain | Personal access to perform on a world stage / learning & fun | "Karma" to enable future roles in the community / improve the information in the community |
| Performance space / durability of the performance | Private space (enclosed garden) / durability is unknown; access to the performance is available only to the Requester | Public stage, synchronous / a new playback feature makes performances durable, but private to the artist | Public stage, asynchronous / permanent performance visible to a public audience |
| Kind of feedback to the participant / durability of the feedback | Binary (yes/no) per piece of work completed / assessments accumulate as a lifetime "approval rate" score | Rating scale & text comment per performance / assessments are stored for the performer | Rating scale per posting / assessments are durable and public for individual items and accumulate into the user's "Karma" level |
| Assessment to improve the system | Could be implemented by an individual "Requester" if desired | ? | High-"Karma" users engage in meta-assessments of the assessors |
| Kind of learning represented | Traditional: an employer authority sets a task and is arbiter of success; the goal is to weed out unsuccessful workers | Synchronous, collaborative, individual learning: judge as learner, performer as learner | Asynchronous, collaborative community learning |
| Type of crowd-sourcing | Industrial model applied to a crowd of workers | Ad hoc judges gathered as needed for a performance | Content and expertise are openly shared |

The three systems represent an interesting spectrum, and each might be applied to our challenge of crowd-sourcing feedback. But looking at the different models, each would have a very different impact on the process. I believe that only Slashdot's model could be sustained by a community over an extended period of time, because it is the only one that has the potential to inform the community and build capital for all the participants.
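
To make the comparison concrete, here is a small sketch of how the three reputation mechanisms in the table might accumulate feedback. The scales and thresholds are invented for illustration; only the general shapes (a binary lifetime approval rate, per-performance ratings, karma from community moderation) are taken from the descriptions above.

```python
# Sketch of the three reputation-accumulation models compared in the table.
# Scales and thresholds are invented for illustration.

def mturk_approval_rate(outcomes):
    """Mechanical Turk style: binary yes/no per task, kept as a lifetime approval rate."""
    if not outcomes:
        return 0.0
    return sum(1 for ok in outcomes if ok) / len(outcomes)

def leaf_trombone_profile(ratings):
    """Leaf Trombone style: per-performance ratings stored for the performer."""
    return {"performances": len(ratings),
            "average_rating": sum(ratings) / len(ratings) if ratings else None}

def slashdot_karma(moderations):
    """Slashdot style: public up/down moderations accumulate into 'karma',
    which unlocks future roles (e.g., moderating others)."""
    karma = sum(moderations)
    return {"karma": karma, "can_moderate": karma >= 5}   # threshold is hypothetical

if __name__ == "__main__":
    print(mturk_approval_rate([True, True, False, True]))    # 0.75 lifetime approval
    print(leaf_trombone_profile([4, 5, 3]))                  # stored per-performance feedback
    print(slashdot_karma([+1, +1, +1, -1, +1, +1, +1]))      # durable, public karma
```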

The table above got me thinking about another table we made, diagramming four models for delivering higher education. At one side of that chart is the industrial, closed, traditional institution. It progresses through MIT's OpenCourseWare and Western Governors University's student-collected content and credit for work experience to the other end of the chart, which we called Community-based Learning.

Three rows in our chart addressed the nature of access to expertise, the assessment criteria, and what happens to student work. The table above informs my thinking on those dimensions. As I've charted it, in the Slashdot model expertise is open and assessment is open (while the assessment criteria are obscure, the meta-assessment helps the community maintain a collective sense of the criteria), and the contributor's (learner's) work remains permanently as a contribution to the community. This is what I think David is referring to when he applauds the demise of the "enclosed garden" portfolio.

A reason to work in public is to take advantage of an open-source, crowd-wisdom strategy. David illustrated the power of "We smarter than me" when he called our attention to Mechanical Turk.

Another reason is the low cost to implement the model. Recently the UN Global Alliance for Information and Communication Technology and Development (GAID) announced the newly formed University of the People, a non-profit institution offering higher education to the masses. In the press briefing, University of the People founder Shai Reshef said that “this University opened the gate to these [economically disenfranchised] people to continue their studies from home and at minimal cost by using open-source technology, open course materials, e-learning methods and peer-to-peer teaching.” [emphasis added]

We propose that to be successful the University of the People must implement its peer-to-peer teaching as community-based learning and include a community-centric, non-monetary mechanism to crowd-source both assessment and credentialing.

Harvesting feedback on a course assignment

This post demonstrates harvesting rubric-based feedback in a course and shows how the feedback can be used by instructors and programs as well as by students. It is being prepared for a webinar hosted by the TLT Group. (Update 7/28: Webinar archive here. Minutes 16-36 are our portion. Minutes 24-31 are music while participants work on the online task. This is followed by Terry Rhodes of AAC&U with some kind comments about how the WSU work illustrates ideas in the AAC&U VALUE initiative. Minutes 52-54 of the session are Rhodes' summary about VALUE and the goal of rolling up assessment from course to program level; this process demonstrates that capability.)

Webinar Activity (for the session on July 28). This should work before and after the session; see below.

  1. Visit this page (opens in a new window)
  2. On the new page, complete a rubric rating of either the student work or the assignment that prompted the work.

Pre/Post Webinar

If you found this page, but are not in the webinar, you can still participate.

  • Visit the page above and rate either the student work or the assignment using the rubric. Data will be captured but will not update for you in real time.
  • Explore the three tabs across the top of the page to see the data reported from previous raters.
  • Links to review:

Discussion of the activity
The online session is constrained for time, so we invite you to discuss the ideas in the comment section below. There is also a TLT Group "Friday Live" session being planned for Friday, Sept 25, 2009, where you can join in a discussion of these ideas.

In the event above, we demonstrated using an online rubric-based survey to assess an assignment and to assess the student work created in response to the assignment. The student work, the assignment, and the rubric were all used together in a course at WSU. Other courses we have worked with have assignments and student products that are longer and richer; we chose these abbreviated pieces for pragmatic reasons, to facilitate a rapid process of scoring and reporting data during a short webinar.

The process we are exploring allows feedback to be gathered from work in situ on the Internet (e.g., in a learner's ePortfolio), without requiring that the work first be collected into an institutional repository. Gary Brown coined the term "Harvesting Gradebook" to describe the concept, but we have come to understand that the technique can "harvest" more than grades, so a better term might be "harvesting feedback."
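
A minimal sketch of the harvesting idea follows, under the assumption of an invented rubric and data structure (this is not the WSU survey tool): feedback is gathered against the URL where the work already lives, rather than against a copy in an institutional repository, and the same record can report averages to the student, the instructor, or the program.

```python
# Sketch: harvest rubric-based feedback against work in situ on the web.
# Rubric dimensions and scores are placeholders, not WSU's actual rubric.
from collections import defaultdict
from statistics import mean

RUBRIC_DIMENSIONS = ["problem_identification", "evidence", "conclusions"]

class HarvestedItem:
    """One piece of work, identified only by the URL where it already lives."""
    def __init__(self, work_url, assignment_url):
        self.work_url = work_url
        self.assignment_url = assignment_url
        self.ratings = defaultdict(list)     # dimension -> list of scores

    def add_rating(self, dimension, score):
        if dimension not in RUBRIC_DIMENSIONS:
            raise ValueError(f"unknown rubric dimension: {dimension}")
        self.ratings[dimension].append(score)

    def report(self):
        """Average score per dimension, usable by student, instructor, or program."""
        return {dim: mean(scores) for dim, scores in self.ratings.items()}

if __name__ == "__main__":
    item = HarvestedItem("https://example.edu/student/essay",        # placeholder URLs
                         "https://example.edu/course/assignment1")
    item.add_rating("evidence", 4)
    item.add_rating("evidence", 5)
    item.add_rating("conclusions", 3)
    print(item.report())
```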

Harvesting Gradebook diagram

This harvesting idea provides a mechanism to support community-based learning (see the Institutional-Community Learning Spectrum). As we have been piloting community-based learning activities from within a university context, we are coming to understand that it is important to assess student work, the assignments, and the assessment instruments.

Importance of focusing assessments on Student Work

Gathering input on student projects gives students authentic experiences, maintains ways to engage students in authentic communities, helps the community consider new hires, and gives employers the kind of interaction with students that the university can capitalize on when asking for money. But we have also come to understand that assessing student learning alone often yields little change in course design or learning outcomes (Figure 1). (See also http://chronicle.com/news/article/6791/many-colleges-assess-learning-but-may-not-use-data-to-improve-survey-finds?utm_source=at&utm_medium=en )

graph of 5 years of outcomes data

Figure 1. In the period 2003-2008 the program assessed student papers using the rubric above. Scores for the rubric dimensions are averaged in this graph. The work represented in this figure is different from the work being scored in the activity above. The "4" level on the rubric was determined by the program to be competency for a student graduating from the program.

The data in Figure 1 come from the efforts of a program that has been collaborating with CTLT for five years. The project has been assessing student papers using a version of the Critical Thinking Rubric tailored for the program’s needs.

Those efforts, measuring student work alone, did not produce any demonstrable change in the quality of the student work (Figure 1). In the figure, note that:

  • Student performance does not improve with increasing course level (e.g., 200-, 300-, 400-level) within a given year.
  • Only once were students judged to meet the competency level set by the program itself (2005, 500-level).
  • Across the years studied, student performance within a course level did not improve; e.g., examine the 300-level course in 2003, 2006, 2007, and 2008.

Importance of focusing assessments on Assignments

Assignments are important places for the wider community to give input, because the effort the community spends assessing assignments can be leveraged across a large group of students. Additionally, if faculty lack mental models of alternative pedagogies, assignment assessment helps focus faculty attention on very concrete strategies they can actually use to help students improve.

The importance of assessing more than just student work can be seen in Figure 1. As these results unfolded, we suggested to the program that it focus attention on assignment design, but the program did not follow through on reflecting on and revising the assignments, nor did it follow through with suggestions to improve communication of the rubric criteria to students.

Figure 2 shows the inter-rater reliability from the same program. Note that the inter-rater reliability is 70+% and is consistent year to year.

Figure 2. Graph of inter-rater reliability data

This inter-rater reliability is borderline and problematic because, when extrapolated to high-stakes testing, or even grades, this marginal agreement speaks disconcertingly to the coherence (or lack thereof) of the program.
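
For readers who want the arithmetic behind a figure like "70+%," here is a simple sketch of percent agreement between two raters. Whether the program counted exact matches or matches within one rubric point is not stated above, so both measures are shown; the scores are hypothetical.

```python
# Sketch: simple percent-agreement measures of inter-rater reliability.
# Whether exact or adjacent (within-one-point) agreement was counted is an
# assumption here; both are shown. Scores are hypothetical.
def percent_agreement(rater_a, rater_b, tolerance=0):
    """Share of papers where two raters' rubric scores differ by at most `tolerance`."""
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must score the same set of papers")
    agree = sum(1 for a, b in zip(rater_a, rater_b) if abs(a - b) <= tolerance)
    return 100.0 * agree / len(rater_a)

if __name__ == "__main__":
    rater_a = [4, 3, 5, 4, 2, 4, 3, 5, 4, 3]   # hypothetical rubric scores
    rater_b = [4, 4, 5, 3, 2, 4, 3, 4, 5, 3]
    print(percent_agreement(rater_a, rater_b))               # exact agreement
    print(percent_agreement(rater_a, rater_b, tolerance=1))  # adjacent agreement
```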

Figure 3 comes from a different program. It shows faculty ratings (inter-rater reliability) on a 101-level assignment and provides a picture of the maze, or obstacle course, of faculty expectations that students must navigate. Higher inter-rater reliability would be indicative of greater program coherence and should lead to higher student success.

Figure 3. Inter-rater reliability detail

Importance of focusing assessments on Assessment Instruments

Our own work, and Allen and Knight (Table 4), have found that faculty and professionals place different emphasis on the importance of the criteria used to assess student work. Assessing the instrument in a variety of communities offers the chance to have conversations about the criteria and to address questions of the relevance of the program to the community.

Summary

The intention of the triangulated assessment demonstrated above (assignment, student work, and assessment instrument) is to keep the conversation about all parts of the process open, in order to develop and test action plans that have the potential to enhance learning outcomes. We are moving from pilot experiments with this idea to strategies that use the information to inform program-wide learning outcomes and to feed that data into ongoing accreditation work.
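
As a closing illustration of feeding course-level data into program-level evidence, here is a hedged sketch of rolling harvested rubric scores up from courses to a program. The course names, scores, and mapping are invented; only the idea of aggregating a common rubric across levels comes from the text above.

```python
# Sketch: roll harvested course-level rubric averages up to the program level,
# as direct evidence for accreditation. Course names and scores are invented.
from statistics import mean

# course -> list of harvested rubric scores for one program outcome (placeholders)
course_scores = {
    "BIO 200": [3.1, 3.4, 2.9],
    "BIO 300": [3.3, 3.0, 3.5],
    "BIO 400": [3.8, 4.1, 3.9],
}

def roll_up(scores_by_course):
    """Average per course, then an overall program-level average on the same rubric."""
    per_course = {course: mean(scores) for course, scores in scores_by_course.items()}
    program = mean(per_course.values())
    return per_course, program

if __name__ == "__main__":
    per_course, program = roll_up(course_scores)
    print(per_course)          # course-level direct evidence
    print(round(program, 2))   # program-level figure for the accreditation self-study
```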

Google Wave unifies Workspace and Showcase Portfolios

Google Wave was just announced at the Google I/O conference; video here. At minutes 27:00 to 31:00 you will see a segment with important implications for the conversation we have been having with Helen Barrett about showcase vs. workspace ePortfolios (our original, and her graphic). What seems important about Google Wave is the way it is a PLE and, by adding new reviewers to some works, a showcase as well. This could address the challenge of making showcases, which might otherwise have been an afterthought to the actual work.

The ability to embed a Wave in another environment, such as a blog, could be used to quickly publish work items and invite wider communities to engage with them.

The survey tool shown later in the demo appears able to integrate with other aspects of a document; if so, and if the survey tool is rich enough, it might provide the embedded feedback, at least for the author's own use, as a harvesting gradebook.