Engaging Employers and Other Community Stakeholders

Do you have ideas or examples of good practice of working with employers to promote workforce development? UK universities and colleges are under pressure to do “employer engagement” and some are finding it really difficult. This is sometimes due to the university administrative systems not welcoming non-traditional students, and sometimes because we use “university speak” rather than “employer speak”.
— a UK Colleague

Washington State University's Office of Assessment and Innovation has been working on this question for several years. We presented this spectrum diagram as a way to think about how the more traditional Institution-centric learning differs from Community-based learning. It may point to some of the places your programs get stuck when thinking about this question.

We have also been exploring methods to gather assessments from stakeholders (employers as well as others) about aspects of academic programs. This example shows the twinned assessment of student work using a program rubric and assessment of the faculty’s assignment that prompted the work. We invite stakeholders to engage in both assessments. In other implementations of this process, we have asked stakeholders about the utility of the rubric itself.

We are also finding differences in the language used by faculty, students, and employers. When we asked about the most important things to learn in a business program, we got this feedback.

Another example of different groups using different language is this one, where industry and faculty gave feedback to students using different language with different foci. In particular, we saw industry use "problem" as in "problem statement," while faculty used "problems" as a synonym for "confused" and "incorrect."

Our method for learning about both language and values is simple surveys of stakeholders while they are engaged with us in assessment activities. For example, here (In Class Norming Survey) we asked people who had just assessed student work using a program rubric about the importance of the rubric itself.

In this survey (AMDT Stakeholder Survey) a fashion design and marketing program is asking industry partners about language and criteria, as a precursor to building a program-wide assessment rubric. All these activities help programs understand the wider context in which they operate.

More on this work can be found in this article: Brown, G., DesRosier, T., Peterson, N., Chida, M., & Lagier, R. (2009). Engaging Employers in Assessment. About Campus, 14(5), Nov-Dec 2009. (NUTN award for best essay, 2009.)

It may help to understand that we define stakeholders broadly to account for the variation among academic programs: employers, alumni, students themselves, professional and graduate school admissions officers, audiences (as in performance arts), etc.

Presently we have developed a rubric to guide the assessment of the self-studies that our academic programs are doing as part of our university-wide system of assessment, a component of our institution's regional accreditation activities. You can see a snapshot of how our Colleges are doing here.

Building a learning community online

I have been thinking about dissemination and adoption of knowledge as our organization (formerly WSU’s Center for Teaching Learning and Technology) is re-organized to become the Office of Assessment and Innovation (OAI). Our unit’s new challenge is to help the university develop a “system of learning outcomes assessment” in response to new requirements from our accrediting body, NWCCU.

We have captured a discussion about how our unit’s web presence might be changed in a series of notes and whiteboard shots attached to this blog post.

One part of our ideas for this “accreditation system” can be found in this presentation for the TLT Friday Live on harvesting feedback across multiple levels of the university.  We have a prototype of one of the middle tiers running now to test and refine the rubric.

Our system depends on OAI staff working with "Liaisons" for each College and their "Points" in each academic program, and on developing skills among the Liaisons and Points so that they can provide useful feedback to programs on the assessment activities the programs are undertaking. Because of the diversity of WSU, the specific learning outcomes assessment that programs undertake will need to vary by program. What the university seeks is "robust" assessment of student learning. Our method (links above) involves a meta-assessment of the assessment practices of the programs. For programs to understand and develop robust assessment strategies, I believe that the professional development of the OAI staff, the Liaisons, and the Points needs to be a key component of the "system of assessment." [That is, professional learning in the discipline.]

The challenge is to provide professional development in the context of a multi-campus university, with programs and learners at diverse places in their own learning.
We have advocated that learners find their community of practice, join it, and work on their problem in the context of that community. However, that assumes the community of practice exists in an organized way that can be joined. Presently, the CoPs I'm aware of are loose-knit collections of bloggers who have developed the skills of tracking one another. Novices would need to learn these skills as an entry requirement for their own participation. That barrier to entry and participation is probably too high for the WSU community we need to reach. I previously wrote a manifesto describing how our unit should change its web strategy. That proposal also included the concept of finding or building the community of practice online, but it did not solve the problem of how to build that community.

In 2006, Dave Cormier proposed the idea of a "feedbook" of readings based on an RSS feed rather than on traditional paper media. The book was more dynamic because it could be based on blogs or other contemporaneous sources. In Dave's later reflections he points to the interesting perspective that the feedbook is (or can be) a collaborative effort among a community of learners.

“In addition to the freshness of the material, the multiplicity of voice and perspective and the fact that your textbook will never be out of date, one of the first things that would happen is a decentralization of the instructor. While the instructor would usually be responsible for the basic set of links…gone will be the rabbit out of a hat magic that comes from controlling the flow of knowledge. Students will actually be able to add to that flow of knowledge as their research brings up new sources of course material.”
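As a concrete (if simplistic) sketch of the mechanics, a feedbook is little more than an aggregator that merges a community's feeds into one date-sorted reading stream. The sketch below is my illustration, not Dave's implementation; the feed URLs are placeholders.

```python
import time
import feedparser  # third-party RSS/Atom parser: pip install feedparser

# Placeholder feeds standing in for a community of learners' blogs.
FEEDS = [
    "http://example.org/instructor/feed/",
    "http://example.org/student-one/feed/",
    "http://example.org/student-two/feed/",
]

def build_feedbook(feed_urls, max_items=20):
    """Merge several feeds into one newest-first reading list."""
    entries = []
    for url in feed_urls:
        for e in feedparser.parse(url).entries:
            published = e.get("published_parsed")
            if published:  # skip entries without a parseable date
                entries.append((published, e.get("title", "untitled"), e.get("link", "")))
    entries.sort(reverse=True)  # newest first, like a course reading stream
    return entries[:max_items]

if __name__ == "__main__":
    for published, title, link in build_feedbook(FEEDS):
        print(time.strftime("%Y-%m-%d", published), title, link)
```

Because students' own feeds can be added to the list, the "decentralization of the instructor" falls out of the data structure: anyone's post can enter the shared stream.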

Dave's thoughts about multiplicity of voice and perspective seem to fit with Lave and Wenger's Situated Learning: Legitimate Peripheral Participation (Cambridge University Press, 1991).

John Seely Brown summarizes several of Lave and Wenger’s writings in a piece written for Xerox PARC:

"This work unfolds a rich, complex picture of what a situated view of learning needs to account for and emphasizes, in particular, the social rather than merely physical nature of situatedness…

“Next, a few clarifications are probably helpful. First, as Lave (1991) herself notes, the situation is not simply another term for the immediate, physical context. If it is to carry any significant conceptual import, it has to be explored in social and historical terms. Two people together in a room are not inevitably identically situated, and the situated constraints on practice do not simply arise in and through such isolated interactions. The people and the constraints importantly have social and historical trajectories. These also need to be understood in any situated account.

"Second, community of practice denotes a locus for understanding coherent social practice. Thus it does not necessarily align with established communities or established ideas about what communities are. Community in Lave & Wenger's view is not a "warmly persuasive term for an existing set of relations" (Williams, 1977). Communities can be, and often are, diffuse, fragmented, and contentious. We suspect, however, that it may be this very connotation of warm persuasiveness that has made the concept so attractive to some.

“Third, legitimate peripheral participation (LPP) is not an academic synonym for apprenticeship. Apprenticeship can offer a useful metaphor for the way people learn. In the end, however, in part because of the way apprenticeship has historically been “operationalized,” the metaphor can be seriously misleading, as LPP has occasionally been located somewhere between indentured servitude and conscription.

“As Lave and Wenger put it:
‘Legitimate peripheral participation is not itself an educational form, much less a pedagogical strategy or a teaching technique. It is an analytic viewpoint on learning, a way of understanding learning. We hope to make it clear that learning through legitimate peripheral participation takes place no matter which educational form provides a context for learning, or whether there is any intentional educational form at all. Indeed, this viewpoint makes a fundamental distinction between learning and intentional instruction.’ [1991: 40]

This quote is one that I'm still trying to fully absorb. [np]

JSB continues:

"One of the powerful implications of this view is that the best way to support learning is from the demand side rather than the supply side. That is, rather than deciding ahead of time what a learner needs to know and making this explicitly available to the exclusion of everything else, designers and instructors need to make available as much as possible of the whole rich web of practice, explicit and implicit, allowing the learner to call upon aspects of practice, latent in the periphery, as they are needed."

"… The workplace, where our work has been concentrated, is perhaps the easiest place to design [for legitimate peripheral participation] because, despite the inevitable contradictions and conflict, it is rich with inherently authentic practice, with a social periphery that, as Orr's (1990) or Shaiken's (1990) work shows, can even supersede attempts to impoverish understanding. Consequently, people often learn complex work skills despite didactic practices that are deliberately designed to deskill. Workplace designers (and managers) should be developing technology to honor that learning ability, not to circumvent it."

Applying these ideas to OAI/WSU

In the process of becoming the OAI we are re-vamping our website and proposing that it contain several elements:

  • a branded page that provides a basic OAI presence within the university
  • an archive of the former CTLT site, with its various linked resources (many of which retain some value and the URL has some reputation in search engines)
  • a university portfolio space (a showcase) where we assist the university in mounting its publicly viewable and assessable evidence for accreditation (the system demonstrated above)
  • an assessment workspace for collaborations on assessment activities with academic units (these collaborations may require managed authorizations)
  • a “social” space for collaboration around professional development related to the problems we are working on

It is the last space that presents interesting design challenges, an opportunity to facilitate LPP, and is the cause of this reflection.

Given that the social space will be closely linked to the OAI's main university presence, we have a requirement that it be "professional." Dave's comment above about multiplicity of voice and perspective is the opportunity, and the concern, to manage. It's possible that in a community of learners some members (perhaps more novice ones) will make contributions at a lower level of professionalism or with less insight. The community and its visitors should have ways of hearing, without over-valuing, these comments, and the community should have ways of responding that can facilitate learning for multiple players in multiple ways.

In face-to-face environments, there are various protocols for making contributions, and cues that orient an observer to the hierarchy of expertise and authority within the community. An observer can use these cues to conclude which contributions carry the greatest authority in the group. Protocols for responding to contributions can also help organize the group dynamic. In online settings, these cues may be absent, and a visitor may sense immaturity or cacophony (e.g., in a feedbook's content) where in fact novices are exploring their (partial) understanding by sharing it within the community.

The problems as I see them:

  • How can an open online community that embraces legitimate peripheral participation maintain a coherence that allows members and visitors to appreciate its professionalism and maturity while still making space for novices' participation?
  • What mechanisms can be employed to create “emergent” authority in a web-based learning community without a central oligarchy?
  • Much of the literature on LPP was developed in the context of face-to-face communities. How can an Internet-based community exploit "incidental" (e.g., drop-in) participation by experts who would not have appeared in face-to-face settings? This question recognizes that, via a feedbook mechanism, an item from an expert outside the community can be routed into the stream of the community's readings (hence "incidental").

External Interest in Rain King from TLT Group

Program review rubric

From: Stephen C. Ehrmann [mailto:ehrmann@tltgroup.org]
Sent: Monday, October 26, 2009 12:06 PM
To: Larry Ragan; Abdous, M’Hammed; Jim Zimmer
Cc: Gary Brown
Subject: Program review rubric

Hi,
I mentioned to each of you that Gary Brown and his colleagues were in the early stages of using Flashlight Online [TLT Group's re-branding of the WSU online survey tool Skylight] to deploy an interesting set of rubrics for program review/evaluation. Programs would get the rubrics in advance and use those ideas to document their performance. Their reports and a Flashlight form with the rubrics could then be sent to reviewers; their responses to the rubric could then be easily summarized and displayed. I've seen the rough draft of their rubric and it seems quite promising to me. It's designed for the review of academic departments, but I think the idea could be adapted for use with faculty support/development units.
When the material is ready for a wider look in a few weeks, Gary will send me a URL and I can pass that along. Or you could contact Gary directly if you like. His email address is BrownG@wsu.edu
Steve
**********
Stephen C. Ehrmann, Ph.D.
Director of the Flashlight Program for the Study and Improvement of Educational Uses of Technology;
Vice President, The Teaching, Learning, and Technology Group, a not-for-profit organization
Mobile: +1 240-606-7102
Skype: steveehrmann

The TLT Group: http://www.tltgroup.org
The Flashlight Program: http://www.tltgroup.org/flashlightP.htm
Blog: http://tlt-swg.blogspot.com/

Old Dominion and Penn State are both thinking about how to design comprehensive evaluations of faculty support. Your rubric for program review seems like it could be adapted to their purposes. I was also talking with folks from Mount Royal (Calgary) at ISSOTL.

Whiteboards of the OAI web strategy

I say "web strategy" in the title rather than "web site" because the concept is a "red" section (photo of left portion of whiteboard) that is the branded OAI website hosted by the UnivPubs CMS. The Red Section, nominally a one-page site, provides links off-site to the Assessment Innovation "blue" "social" site, which has a membership feature, with the goal of developing a community space. A "green" section (photo of right portion of whiteboard) is a public read space. Its content is muddled in this drawing, including both third-party blogs and RainKing Chronicles and other fully public-read OAI content. Added later in our thinking, an "orange" section below the green one has read-restricted OAI content (e.g., projects being worked on with programs).

The other three images are from conversations with Theron and Joshua about the MRG reading of John Seely Brown's "Minds on Fire."

The challenge is to figure out how to have JSB’s “legitimate peripheral participation” and a “social” site (a la Educause’s Ning site) and still have the “professional” qualities of a site linked to OAI. (See Data Flows photo)

Data flows to red and blue zones

The issue is how to get the "authority" voices to take more central places without squashing the "novice" voices on the periphery.

Audiences

Contexts (groups)

I'm still framing the reason to work on this, but it involves developing a "system of assessment" for NWCCU, which I think requires a "system" of professional development for OAI and the Liaisons, as well as a "system" for sharing WSU work on the problem of learning outcomes assessment in ways that can gather help from distributed collaborators.

The model that we were looking at in these drawings is Huffington Post more than the old CTLT Common Underground page.

Later, Nils wrote this blog post trying to think through the "Blue Zone" part of this strategy discussion.

Beyond The University

“More than one-third of the world’s population is under 20. There are over 30 million people today qualified to enter a university who have no place to go. During the next decade, this 30 million will grow to 100 million. To meet this staggering demand, a major university needs to be created each week.” —Sir John Daniel, 1996 in John Seely Brown Minds on Fire: Open Education, the Long Tail, and Learning 2.0 (EDUCAUSE Review) | EDUCAUSE

In Can "The Least Of Us" Disrupt and Change Education for "The Rest Of Us?" Rob Jacobs writes: "Disruption will come when the poor of the world figure out ways to educate themselves and their neighbors via the Internet. Of course this education won't match the focus, rigor, and quality of Western schools, but nevertheless, the drive and need to learn will create a youth movement in these developing countries for using the Internet as a tool to educate themselves and others."

By "disruption" I gather Jacobs means it in the Innovator's Dilemma sense of Clayton Christensen, where a new business model (or new technology) disrupts the existing businesses using the existing model.

John Brockman quotes Don Tapscott: "Universities are finally losing their monopoly on higher learning… There is a fundamental challenge to the foundational modus operandi of the University — the model of pedagogy. Specifically, there is a widening gap between the model of learning offered by many big universities and the natural way that young people who have grown up digital best learn."

There seem to be two intertwined ideas here: technology access and pedagogy.

So, if a large unmet demand for education exists at a price point well below that of a university degree, and the Internet infrastructure is available to deliver both the content and the collaboration, will the higher education model in the developed world succumb to Jacobs' disruptive revolution from below? And if so, when? And which players will be the facilitators?

Let's take the time dimension out of the problem. Will the revolution be substantially underway by 2020 (10 years out)? I pick 10 years because it is about one cycle of accreditation review for most US universities and it is at the outer edge of most corporate planning. Here is why I conclude that the preconditions for Jacobs' bottom-up revolution, the technical infrastructure, will exist within 10 years if they don't already exist today. The question is: will learners develop a reflective practice within an optimistic, constructivist, and collaborative pedagogy to realize the potential, and if they do, will this self-organizing revolution be sufficient to engulf the developed world's educational systems?

The Technical Infrastructure Exists

First, a conversation between Charlie Rose and Eric Schmidt, CEO of Google, points to the direction for the technology: continued exponential improvement of hardware (mobile devices, Google's servers, and network bandwidth) following Moore's law, with video recording and GPS-aware applications standard on phones. As an illustration, my cell phone in 2000 had a small B&W screen, could store phone numbers, and could make calls. My cell phone today is an iPhone. In another decade my cell phone will be equally unrecognizably different from my iPhone of today.

Second, Seb Paquet proposed the idea of making group-forming ridiculously easy and postulated an extension to Reed’s Law: “The value of a group-forming network increases exponentially with the number of people in the network, and in inverse proportion to the effort required to start a group.”
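One way to write Paquet's extension formally (my notation, not his): Reed's law values a group-forming network at V(N) ∝ 2^N for N members, counting the possible subgroups; dividing by the effort E required to start a group gives

    V(N, E) ∝ 2^N / E

so, in this stylized model, halving the effort to form a group doubles the value of the network.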

Mechanisms for finding, forming, or joining groups can be expected to improve. Amazon and Google demonstrate the impact of personalized recommendations based on data about you. Their capacity to recommend will grow, and those recommendations could help learners discover and expand niche communities.

Third, people younger than I am seem to think nothing of making video to communicate their ideas. The rate of production of user content on YouTube will increase beyond the current 1,000 hours of user-created video uploaded per hour. Michael Wesch describes how media are becoming environments that change our conversations.

Fourth, Internet World Stats provides information on the level of Internet access worldwide by region. On a planetary basis, 24% of the world's population has access today, and the growth rate is 362% over the period 2000-2009. In another decade it seems credible that almost anyone wanting Internet access (e.g., wanting an education) will have it.
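A back-of-the-envelope check of that claim (my arithmetic, not Internet World Stats'):

```python
import math

# "24% penetration in 2009, 362% growth over 2000-2009" (Internet World Stats)
penetration_2009 = 0.24
growth_factor = 1 + 3.62                  # users multiplied ~4.6x over ~9 years
annual_factor = growth_factor ** (1 / 9)  # ~1.19, i.e. ~19% growth per year

# Years until nominal 100% penetration at that (unsustainably constant) rate:
years = math.log(1 / penetration_2009) / math.log(annual_factor)
print(f"~{annual_factor - 1:.0%}/yr -> nominal saturation in ~{years:.0f} years")
```

Growth will certainly slow as penetration rises, but even at a much lower rate the arithmetic lands within the decade for most of the connectable population.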

I contend that the technology is in place (MediaWiki alone is adequate to the task) and that there is a sufficiently large user base to start the revolution: over a billion people outside North America and Europe are connected to the Internet (Internet World Stats, 9/2009).

The Pedagogy Exists (and is being implemented outside of universities)

In her review of George Hillocks Jr.'s Ways of Thinking, Ways of Teaching (New York: Teachers College Press, 1999), Carol Rutz (Writing Program Administration 23.3, Summer 2000, 127-129) describes Hillocks' analysis of his subjects' teaching approaches:

“[Their] four ways of thinking are derived from two epistemological stances, the objectivist—knowledge is “out there” to be apprehended and understood—and the constructivist—knowledge is constructed actively by learners as they interact with the world. These positions are modified by the teacher’s attitude toward students. The pessimist views students as defective creatures unable to learn without close supervision, whereas the optimist views students as capable and eager to learn.”

Hillocks constructs a 2×2 matrix of these stances and uses them to analyze the behaviors of English composition teachers.
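Laying the matrix out explicitly may help; the quadrant glosses below are my inference from Rutz's description, not Hillocks' own labels:

| | Optimistic view of students | Pessimistic view of students |
| --- | --- | --- |
| Objectivist (knowledge is "out there") | Present knowledge; trust students to apprehend it | Transmit knowledge under close supervision |
| Constructivist (knowledge is built by learners) | Design experiences; students actively construct understanding | Structure activity tightly, doubting students' capacity |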

Near the end of her review, Rutz notes and laments, “Had Hillocks interviewed students as well as teachers, his case for the effects of reflective practice within an optimistic, constructivist pedagogy would likely become even stronger.”

Unspoken in Rutz's review is that this is an analysis of classroom teachers, published in 1999, before the emergence of Web 2.0. What happens if one relaxes the assumption that teachers are meeting students in classrooms and explores the "effects of reflective practice within an optimistic, constructivist pedagogy" on learners in Internet communities? That is a key phrase, "reflective practice within an optimistic, constructivist pedagogy," and a key observation: it is not held by all teachers.

In a 2007 study, we found a similar spectrum of teachers' beliefs about teaching, ranging from teacher-centered, to learning-centered, to learner-centered, and as part of our Harvesting Gradebook work we found a spectrum of learning beliefs among students, ranging from teacher-as-authority to learner-as-agent.

John Seely Brown begins to give us an idea of how widely this practice/pedagogy is held by learners (outside the university).

“The most profound impact of the Internet, an impact that has yet to be fully realized, is its ability to support and expand the various aspects of social learning. What do we mean by “social learning”? Perhaps the simplest way to explain this concept is to note that social learning is based on the premise that our understanding of content is socially constructed through conversations about that content and through grounded interactions, especially with others, around problems or actions. The focus is not so much on what we are learning but on how we are learning.”

“[I]nstead of starting from the Cartesian premise of “I think, therefore I am,” and from the assumption that knowledge is something that is transferred to the student via various pedagogical strategies, the social view of learning says, “We participate, therefore we are.” “

“A contemporary model that exemplifies the power of this type of social learning is provided by the distributed virtual communities of practice in which people work together voluntarily to develop and maintain open source software. The open source movement has produced software such as the Linux operating system and the Apache web server, which have offered surprisingly robust alternatives to commercial products. …

“Open source communities have developed a well-established path by which newcomers can “learn the ropes” and become trusted members of the community through a process of legitimate peripheral participation. [‘legitimate peripheral participation’ is another key phrase for this new learning model. np]  New members typically begin participating in an open source community by working on relatively simple, noncritical development projects … As they demonstrate their ability to make useful contributions and to work in the distinctive style and sensibilities/taste of that community, they are invited to take on more central projects. Those who become the most proficient may be asked to join the inner circle … Today, there are about one million people engaged in developing and refining open source products, and nearly all are improving their skills by participating in and contributing to these networked communities of practice.”

The “hole in the wall experiment” (below) is the low tech version of this idea.

The Long Tail in Learning (learning without the University)

Seely Brown continues…

“Chris Anderson, the editor of Wired, has shown that Internet-based e-commerce differs from commerce in the physical world. In the world of physical retailing, and particularly in areas of selling goods like books, music, and movies, sales are usually dominated by best-sellers. Typically, 20 percent of titles generate 80 percent of all sales, [b]ut Anderson notes that e-commerce sites such as Amazon.com, Netflix, and Rhapsody don’t follow this pattern. They are able to maintain inventories of products—books, movies, and music—that are many times greater than can be offered by any conventional store. The result is an economic equation very different from what has prevailed in the physical world … the bulk of their sales comes from their vast catalogs of less-popular titles, which collectively sell more than the most popular items. … From the customers’ standpoint, online enterprises offering unprecedented choice are able to cater much more efficiently to individual tastes and interests than any brick-and-mortar store.”

“As more of learning becomes Internet-based, a similar pattern seems to be occurring. Whereas traditional schools offer a finite number of courses of study, the “catalog” of subjects that can be learned online is almost unlimited. There are already several thousand sets of course materials and modules online, and more are being added regularly. Furthermore, for any topic that a student is passionate about, there is likely to be an online niche community of practice of others who share that passion. … The Faulkes Telescope Project and the Decameron Web are just two of scores of research and scholarly portals that provide access to both educational resources and a community of experts in a given domain. The web offers innumerable opportunities for students to find and join niche communities where they can benefit from the opportunities for distributed cognitive apprenticeship.”

Seely Brown gives an example of this change:

"A very different sort of initiative that is using technology to leverage social learning is Digital StudyHall (DSH), which is designed to improve education for students in schools in rural areas and urban slums in India. The project is described by its developers as "the educational equivalent of Netflix + YouTube + Kazaa." Lectures from model teachers are recorded on video and are then physically distributed via DVD to schools that typically lack well-trained instructors (as well as Internet connections). While the lectures are being played on a monitor, a "mediator" periodically pauses the video and encourages engagement among the students by asking questions or initiating discussions about the material they are watching. The recorded lectures provide the educational content, and the local mediators stimulate the interaction that actively engages the students and increases the likelihood that they will develop a real understanding of the lecture material through focused conversation."

The biggest example of all may be Wikipedia, whose goal is to get a free encyclopedia to everyone in the world. It averaged 379 million page views per day (all languages) in September 2009. More important than being a top global website, Wikipedia demonstrates how to create a self-organizing learning organization using volunteers.

While the Seely Brown and Wikipedia examples point to some sophisticated (and already educated) communities, Sugata Mitra gives an example of third-world children using similar strategies to teach themselves to use computers and the Internet. In his Hole in the Wall project, young kids figured out how to use a PC on their own, and then taught other kids. He asks, "what else can children teach themselves?"

Conclusion

Extending Clay Shirky's analysis of the problems facing newspapers, we might conclude that universities may be denying the impending revolution.

“When reality is labeled unthinkable, it creates a kind of sickness in an industry. Leadership becomes faith-based, while employees who have the temerity to suggest that what seems to be happening is in fact happening are herded into Innovation Departments, where they can be ignored en masse. This shunting aside of the realists in favor of the fabulists has different effects on different industries at different times. One of the effects on the newspapers is that many of their most passionate defenders are unable, even now, to plan for a world in which the industry they knew is visibly going away.

“The curious thing about the various plans hatched in the ’90s is that they were, at base, all the same plan: “Here’s how we’re going to preserve the old forms of organization in a world of cheap perfect copies!” The details differed, but the core assumption behind all imagined outcomes (save the unthinkable one) was that the organizational form of the newspaper, as a general-purpose vehicle for publishing a variety of news and opinion, was basically sound, and only needed a digital facelift.

“With the old economics [of newspaper publishing] destroyed, organizational forms perfected for industrial production have to be replaced with structures optimized for digital data. It makes increasingly less sense even to talk about a publishing industry, because the core problem publishing solves — the incredible difficulty, complexity, and expense of making something available to the public — has stopped being a problem.”

I think there is evidence to believe a “reflective, self-organizing, constructivist and collaborative pedagogy” is already practiced by many people organized into communities that make use of “legitimate peripheral participation.” So, to paraphrase Shirky: It makes increasingly less sense even to talk about a University industry, because the core problem Universities solve — the incredible difficulty, complexity, and expense of building knowledge and making it available to the public — has stopped being a problem.

Dry tinder is in the box; all that is needed is a hot spark.

What will the spark be, and where would it most effectively land?

Not your father’s Portfolio

We were working with a writer on an article about ePortfolios to appear in Campus Technology (it's here, 11.2009). One of our examples to illustrate our thinking about ePortfolios was Margo Tamez' El Calaboz Portfolio. Our writer got back to us with this:

“The editor for my article about eportfolios had a question about my coverage of Margo Tamez’s eportfolio usage. She had expressed concern that the eportfolio have a home beyond the duration of the court case. Does Washington State have any kind of official policy or practices specifically about the life of its student eportfolios? Is there any kind of guarantee that it will live on after the student has left the institution? Anything you can say about that?”

There is a short answer and a long answer to the question.

Short answer: WSU has no policy or procedure in place to delete a student's SharePoint mySite (where Tamez' portfolio is) after graduation, but after 12 months the site becomes read-only unless the graduate makes a specific request to have management access restored.

Long answer: The problem with the short answer is that it focuses on the technical survival of a specific thing at a specific URL. Thinking about a specific collection of artifacts in a specific system at the specific URL is too narrow a focus for our understanding of an ePortfolio.

At the risk of insulting the Campus Technology editor by paraphrasing an Oldsmobile ad, an ePortfolio "is not your father's portfolio," by which I mean that in our view an ePortfolio is not at all an electronic counterpart of the paper portfolio.

An electronic portfolio is both more durable and more tenuous than its paper predecessor. It's also more powerful. It's not a thing or a place; it's a practice.

Googling Margo Tamez (she is lucky to have a fairly distinctive name) illustrates that she built her electronic reputation in many places; that is, her ePortfolio is not in one place. Rather, it is the sum of artifacts embedded in the contexts of the communities where she was working. This image is a Touchgraph (link to live Touchgraph) of a search for Margo. It shows her portfolio as a collection of the Web 2.0 places she is working.

TouchGraph rendering of Google results for Margo Tamez

Due to the nature of the problem she was working on, Margo intentionally built her portfolio in a distributed fashion. Many of the key documents were emailed to a list of readers, where the body of the email served as description and context for the document. Her WSU ePortfolio was the recipient of a cc: of those emails. Other pieces of her work were created in wikis or as guest posts in blogs. She worked in her community, keeping the artifacts of her work (her ePortfolio) in the places that were best suited to them.

As part of our ePortfolio case study work, we interviewed Dennis Haarsager, now Senior Vice President for System Resources and Technology at National Public Radio, about blogging and building portfolios in public places. In our reflection we said:

“In our interview, Haarsager argued for the public lectures he gives on his chosen problem. The lecture is a showcase portfolio of Haarsager’s current, best thinking. The medium is mostly broadcast, but he feels it allows him to reach new audiences, and to get kinds of feedback about his ideas that he does not get in comments on his blog.

“Tamez is also creating showcase “mini-portfolios” in the form of printed fliers and media interviews. These productions may have some of the risk-related prestige that Tenner ascribes to printed books, while at the same time having the new audience-reaching and immediacy values that Haarsager associates with his lectures. In her learning portfolio, these mini-portfolios document where Tamez’ thinking was at points in her learning trajectory.”

Thus, our thinking is that ePortfolios are created as by-products of work, and are scattered across the venues and contexts in which the work is conducted. An ePortfolio is continually dissipating as systems storing the work go away, and continually growing as new work is added.

I have been struggling for a while with the problem of describing a 21st century resume. It, too, is not like its 20th century counterpart. In that 2007 post I did not yet fully recognize the obvious, which I'm coming to see here: my blog(s) and the other places I post online are my ePortfolio (and my resume).

Rather than focusing on the durability of an ePortfolio system or URL, the most important things we see about an ePortfolio (and the ePortfolio as 21st century resume) are the abilities to:

  1. find your work when you need it for reflection or repurposing,
  2. establish that you are indeed the author (possibly under multiple identities) of the works you wish to claim, and
  3. leverage the Google Juice of your work so that it helps you be found by people who share your interests and can help you in your work.

The first of these requirements is most likely met with a hybrid of several Web 2.0 tools. It could be supplemented with a social bookmark service where you track yourself.
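As a small illustration of what "tracking yourself" could mean in practice (a sketch of my own with placeholder URLs, not a tool we have built), a learner could keep a list of the distributed pieces of the portfolio and periodically check which still resolve, since the portfolio is "continually dissipating":

```python
import urllib.request

# Placeholder list of the distributed works that make up one ePortfolio.
MY_WORKS = [
    "https://example.edu/students/jdoe/capstone-post",
    "https://example.org/community-wiki/jdoe-contributions",
]

def still_alive(url, timeout=10):
    """HEAD-request a URL; True if it still resolves without an error status."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.getcode() < 400
    except Exception:
        return False

for url in MY_WORKS:
    print("ok  " if still_alive(url) else "GONE", url)
```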

The second challenge, proving that the work is “yours” is probably done by making a claim to a corpus of works rather than to a single piece, and by making an appeal to a community and context in which the work was done. (Unlike Catherine Howell‘s thought (ca 2005)  that “universities have a role in ‘authenticating’ individuals [and endowing]… them with certain attributes,” we think an ePortfolio world that enables community-based learning and community-based credentials breaks those assumptions about the university, see a recent piece for AAC&U.)

The third requirement is met by working in public and working in venues where your community of practice will likely congregate and then linking from those contexts to works you created in other contexts that contribute to the conversation.

This third point can be illustrated if you Google me (Nils Peterson). You will discover that there are two people using that name, with different career trajectories. Nils Peterson the Poet is in the Bay Area and worked at San Jose State. His identity is authenticated by a variety of news stories (that is, a community of other writers know that he is who he claims, and they agree in their accounts of him).

I claim to be the other Nils Peterson, the one (currently) prominent in Google: the Nils Peterson who publishes in Campus Technology, the author here, and the blogger at nilspeterson.com. I have made a consistent effort to create user identities using Nils+Peterson in many systems and to link from one system to another. This strengthens my claim to be the Nils Peterson who is saying all those things. I don't depend on my employer or the universities that educated me to substantiate my claims, but I do depend on the corroboration of the communities in which I work.

But the claim is circumstantial, like solving a jigsaw puzzle by inferring which pieces fit together. Following the notions of Helen Barrett, and because I work online in public, my ePortfolio (and resume) is a lifelong and life-wide web of the works Google associates with me, wherever they exist.

Crowd-sourcing feedback

David Eubanks commented on our recent Harvesting Feedback demo. I'll save replying about inter-rater reliability and focus now on his suggestion of using Mechanical Turk and his very insightful comment about the end of "enclosed garden" portfolios.

I think David correctly infers that Mechanical Turk is a potential mechanism to crowd-source the Harvesting Feedback process we are demonstrating. It's an Amazon marketplace for brokering human expertise. The tasks, "HITs" (Human Intelligence Tasks), are ones not well suited to machine intelligence; in fact, the site bills itself as "artificial artificial intelligence."

To explore Mechanical Turk, I joined as a "Worker" and discovered that "Requesters" (sources of HITs) can pre-qualify Workers with competency exams. I'm now qualified as a '"Headshot" Image Qualifier,' a skill for identifying images that meet certain specific criteria important to requester Fred Graver. I also learned that workers earn (or maintain) a HIT approval rate, a measure of how well the worker has performed on past tasks. One might think of this as how well the worker is normed with the criteria of the task, though the criteria in this case are not explicit (a weakness, in our view). Being qualified for a task might be analogous to initiation into a community of practice; but one would then need to practice "in community," which Mechanical Turk does not seem to support.
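To make the brokering concrete: were we to route a Harvesting Feedback rubric through Mechanical Turk today, the request might look roughly like the sketch below. This is my assumption using Amazon's current boto3 client; the rubric URL, reward, and qualification threshold are all placeholders, and we have not built this.

```python
import boto3  # AWS SDK for Python, which includes an MTurk client

mturk = boto3.client("mturk", region_name="us-east-1")

# An ExternalQuestion shows our own rubric form inside the Turk interface.
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.edu/rubric?work_id=1234</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

response = mturk.create_hit(
    Title="Rate a student work sample against a program rubric",
    Description="Read a short student paper and score each rubric dimension.",
    Keywords="assessment, rubric, education",
    Reward="0.50",                        # placeholder payment per rating
    MaxAssignments=5,                     # gather five independent raters
    LifetimeInSeconds=7 * 24 * 3600,
    AssignmentDurationInSeconds=30 * 60,
    Question=question_xml,
    QualificationRequirements=[{
        # Built-in "HIT approval rate" qualification: a crude stand-in
        # for the norming a community of practice would provide.
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    }],
)
print("HIT created:", response["HIT"]["HITId"])
```

Note what the sketch cannot express: the raters never see one another's ratings, so no norming conversation, and no community, can form around the criteria.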

We've also been exploring two other crowd-sourced feedback sites that help flesh out the character of this landscape: Slashdot and Leaf Trombone (website and video). Slashdot is a technology-related news website that features user-submitted, editor-evaluated current affairs news with a "nerdy" slant. Leaf Trombone is an iPhone game that lets you play a slide trombone for a world audience.

The three systems are summarized in this table:

| | Mechanical Turk | Leaf Trombone | Slashdot |
| --- | --- | --- | --- |
| Goal of site / developer's reason for using reputation in the site | Distributed processing of non-computable tasks / sort for suitable workers | Selling an iPhone app / use ego to encourage players | Building a reliable source of information / screen for editors who can take high-level tasks |
| Type of reputation / participant's purpose for having a good reputation | Private reputation / to secure future employment; earn more income | Public reputation / status in the community as player and judge; ongoing participation | Public reputation / enhanced opportunity to contribute to the common good (as opposed to being seen as a clever fellow) |
| Type of reward / motivation for participant | Money / personal gain | Personal access to perform on a world stage / learning & fun | "Karma" to enable future roles in the community / improve the information in the community |
| Performance space / durability of the performance | Private space (enclosed garden) / durability is unknown; access to the performance is available only to the Requester | Public stage & synchronous / a new playback feature makes performances durable, but private to the artist | Public stage & asynchronous / permanent performance visible to a public audience |
| Kind of feedback to the participant / durability of the feedback | Binary (yes/no) per piece of work completed / assessments are accumulated as a lifetime "approval rate" score | Rating scale & text comment per performance / assessments are stored for the performer | Rating scale per posting / assessments are durable and public for individual items and are accumulated into the user's "Karma" level |
| Assessment to improve the system | Could be implemented by the individual Requester, if desired | ? | High-"Karma" users engage in meta-assessments of the assessors |
| Kind of learning represented | Traditional employer authority sets a task and is arbiter of success; the goal is to weed out unsuccessful workers | Synchronous, collaborative individual learning; judge as learner, performer as learner | Asynchronous, collaborative community learning |
| Type of crowd-sourcing | Industrial model applied to a crowd of workers | Ad hoc judges gathered as needed for a performance | Content and expertise are openly shared |

The three systems represent an interesting spectrum, and each might be applied to our challenge of crowd-sourcing feedback. But the different models would have very different impacts on the process. I believe that only Slashdot's model could be sustained by a community over an extended period, because it is the only one that has the potential to inform the community and build capital for all the participants.

The table above got me thinking about another table we made, diagramming four models for delivering higher education. At one side of the chart is the industrial, closed, traditional institution. It progresses through MIT's open courseware and Western Governors University's student-collected content and credit for work experience to the other end of the chart, which we called Community-based Learning.

Three rows in our chart addressed the nature of access to expertise, the assessment criteria, and what happens to student work. The table above informs my thinking on those dimensions. As I've charted it, in the Slashdot model expertise is open and assessment is open (while the assessment criteria are obscure, the meta-assessment helps the community maintain a collective sense of them), and the contributor's (learner's) work remains permanently as a contribution to the community. This is what I think David is referring to when he applauds the demise of the "enclosed garden" portfolio.

A reason to work in public is to take advantage of an open-source, crowd-wisdom strategy. David illustrated the power of "we smarter than me" when he called our attention to Mechanical Turk.

Another reason is the low cost to implement the model. Recently the UN Global Alliance for Information and Communication Technology and Development (GAID) announced the newly formed University of the People, a non-profit institution offering higher education to the masses. In the press briefing, University of the People founder Shai Reshef said that “this University opened the gate to these [economically disenfranchised] people to continue their studies from home and at minimal cost by using open-source technology, open course materials, e-learning methods and peer-to-peer teaching.” [emphasis added]

We propose that to be successful the University of the People must implement its peer-to-peer teaching as community-based learning and include a community-centric, non-monetary mechanism to crowd-source both assessment and credentialing.

Harvesting feedback on a course assignment

This post demonstrates harvesting rubric-based feedback in a course, and how the feedback can be used by instructors and programs, as well as students. It is being prepared for a Webinar hosted by the TLT group. (Update 7/28: Webinar archive here. Minutes 16-36 are our portion. Minutes 24-31 are music while participants work on the online task. This is followed by Terry Rhodes of AAC&U with some kind comments about how the WSU work illustrates ideas in the AAC&U VALUE initiative. Min 52-54 of the session is Rhodes’ summary about VALUE and the goal of rolling up assessment from course to program level. This process demonstrates that capability.)

Webinar Activity (for session on July 28): this should work before and after the session; see below.

  1. Visit this page (opens in a new window)
  2. On the new page, complete a rubric rating of either the student work or the assignment that prompted the work.

Pre/Post Webinar

If you found this page, but are not in the webinar, you can still participate.

  • Visit the page above and rate either the student work or the assignment using the rubric. Data will be captured but not updated for you in real time.
  • Explore the three tabs across the top of the page to see the data reported from previous raters.
  • Links to review:

Discussion of the activity
The online session is constrained for time, so we invite you to discuss the ideas in the comment section below. There is also a TLT Group "Friday Live" session planned for Friday, Sept 25, 2009, where you can join in a discussion of these ideas.

In the event above, we demonstrated using an online rubric-based survey to assess an assignment and to assess the student work created in response to the assignment. The student work, the assignment, and the rubric were all used together in a course at WSU. Other courses we have worked with have longer and richer assignments and student products; we chose these abbreviated pieces for pragmatic reasons, to facilitate a rapid process of scoring and reporting data during a short webinar.

The process we are exploring allows feedback to be gathered on work in situ on the Internet (e.g., in a learner's ePortfolio), without requiring that the work first be collected into an institutional repository. Gary Brown coined the term "Harvesting Gradebook" to describe the concept, but we have come to understand that the technique can "harvest" more than grades, so a better term might be "harvesting feedback."

[Figure: Harvesting Gradebook diagram]

This harvesting idea provides a mechanism to support community-based learning (see the Institutional-Community Learning Spectrum). As we have been piloting community-based learning activities from within a university context, we are coming to understand that it is important to assess student work, the assignments, and the assessment instruments.
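A minimal sketch of the data such a harvest collects (the field names are illustrative, not our actual schema): the rubric ratings are keyed to the URL where the work lives, rather than to a copy in an institutional repository.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class HarvestedRating:
    """One rater's rubric-based feedback on a piece of work, in situ."""
    work_url: str            # where the work lives (e.g., an ePortfolio post)
    rater_role: str          # "faculty", "student", "employer", ...
    scores: dict             # rubric dimension -> score on the rubric scale
    comment: str = ""
    rated_on: date = field(default_factory=date.today)

rating = HarvestedRating(
    work_url="https://example.edu/students/jdoe/capstone-post",
    rater_role="employer",
    scores={"problem_statement": 4, "evidence": 3, "communication": 5},
    comment="Clear problem statement; the evidence section needs sources.",
)
```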

Importance of focusing assessments on Student Work

Gathering input on student projects provides students with authentic experiences, maintains ways to engage students in authentic communities, helps the community consider new hires, and gives employers the kind of interaction with students that the university can capitalize on when asking for money. But we have also come to understand that assessing student learning alone often yields little change in course design or learning outcomes (Figure 1; see also http://chronicle.com/news/article/6791/many-colleges-assess-learning-but-may-not-use-data-to-improve-survey-finds?utm_source=at&utm_medium=en ).


Figure 1. In the period 2003-2008 the program assessed student papers using the rubric above. Scores for the rubric dimensions are averaged in this graph. The work represented in this figure is different from the work being scored in the activity above. The "4" level on the rubric was determined by the program to be competency for a student graduating from the program.

The data in Figure 1 come from the efforts of a program that has been collaborating with CTLT for five years. The project has been assessing student papers using a version of the Critical Thinking Rubric tailored for the program’s needs.

Those efforts, measuring student work alone, did not produce any demonstrable change in the quality of the student work (Figure 1). In the figure, note that:

  • Student performance does not improve with increasing course level (e.g., 200-, 300-, 400-level) within a given year
  • Only once were students judged to meet the competency level set by the program itself (2005, 500-level)
  • Across the years studied, student performance within a course level did not improve, e.g., examine the 300-level course in 2003, 2006, 2007, 2008
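The roll-up behind Figure 1 is simple averaging by year and course level; here is a sketch with invented numbers (the real data are the program's):

```python
from collections import defaultdict
from statistics import mean

COMPETENCY = 4  # the program's own bar for a graduating student

# (year, course_level, averaged paper score) -- invented illustrative values
observations = [
    (2003, 300, 3.1), (2003, 300, 2.8),
    (2006, 300, 3.0), (2007, 300, 2.9), (2008, 300, 3.2),
    (2005, 500, 4.1),
]

by_cell = defaultdict(list)
for year, level, score in observations:
    by_cell[(year, level)].append(score)

for (year, level), scores in sorted(by_cell.items()):
    avg = mean(scores)
    flag = "meets" if avg >= COMPETENCY else "below"
    print(f"{year} {level}-level: {avg:.2f} ({flag} competency)")
```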

Importance of focusing assessments on Assignments

Assignments are important places for the wider community to give input, because the effort the community spends assessing assignments can be leveraged across a large group of students. Additionally, if faculty lack mental models of alternative pedagogies, assignment assessment helps focus faculty attention on very concrete strategies they can actually use to help students improve.

The importance of assessing more than just student work can be seen in Figure 1. As these results unfolded, we suggested to the program that it focus attention on assignment design. But the program did not follow through in reflecting on and revising the assignments, nor did it follow through with suggestions to improve communication of the rubric criteria to students.

Figure 2 shows the inter-rater reliability from the same program. Note that the inter-rater reliability is 70+% and is consistent year to year.

[Figure 2: inter-rater reliability data]

This inter-rater reliability is borderline and problematic because, when extrapolated to high-stakes testing, or even grades, this marginal agreement speaks disconcertingly to the coherence (or lack thereof) of the program.

Figure 3 comes from a different program. It shows faculty ratings (inter-rater reliability) on a 101-level assignment and provides a picture of the maze, or obstacle course, of faculty expectations that students must navigate. Higher inter-rater reliability would indicate greater program coherence and should lead to higher student success.

[Figure 3: inter-rater reliability detail]
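For readers who want the statistic behind Figures 2 and 3 made concrete, here is a minimal sketch of percent agreement. It assumes "agreement" means two raters scoring within one rubric level of each other, a common convention in rubric work; the program's actual calculation may differ, and the scores below are invented.

```python
from itertools import combinations

def percent_agreement(ratings_by_rater, tolerance=1):
    """ratings_by_rater: one list of scores per rater, aligned by paper."""
    agree = total = 0
    for r1, r2 in combinations(ratings_by_rater, 2):
        for a, b in zip(r1, r2):
            total += 1
            agree += abs(a - b) <= tolerance
    return 100.0 * agree / total

# Three raters scoring the same five papers on a 6-point rubric:
raters = [
    [4, 3, 5, 2, 4],
    [3, 3, 4, 4, 4],
    [5, 1, 4, 2, 3],
]
print(f"{percent_agreement(raters):.0f}% agreement")  # ~67%, the borderline range
```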

Importance of focusing assessments on Assessment Instruments

Our own work, and Allen and Knight (Table 4), have found that faculty and professionals place different emphasis on the importance of the criteria used to assess student work. Assessing the instrument in a variety of communities offers the chance to have conversations about the criteria and to address questions of the program's relevance to the community.

Summary

The intention of the triangulated assessment demonstrated above (assignment, student work, and assessment instrument) is to keep the conversation about all parts of the process open, in order to develop and test action plans that have the potential to enhance learning outcomes. We are moving from pilot experiments to strategies that use the information to inform program-wide learning outcomes and feed that data into ongoing accreditation work.

Critical Thinking Skills Upgrade and Prezi

WSUCTLT has published in multiple places on its critical thinking rubric. Our work on ePortfolios and the Harvesting Gradebook has us thinking that the networked learner needs some skills not well represented in that rubric. We hesitate to use the "21st Century" moniker, but we need to update the rubric for a new era.

A while back, WSUCTLT created a "critical thinking action figure," a tongue-in-cheek symbol for the skills we hope to develop in students. A desire to re-think the rubric, and a desire to use play as part of the vehicle for that work, has led to a plan for co-creating the 21st Century Global Action Figure based on some of the ideas from Jenkins, Pink, and Rheingold. We've got a tentative date of Friday, June 5, sometime in the afternoon, to run the first virtual back-of-the-napkin session. For fun, JaymeJ created an invitation in Prezi, which is an interesting tool with a lot of possibilities if you're not familiar with it (think SeaDragon). Here is the link to the invitation:

http://prezi.com/52605/

It takes a few minutes to load and then you can click through the sequence by using the right arrow in the lower right corner or you can zoom in or out of objects by clicking on them.

The tool we were planning to use for the back-of-the-napkin session is Twiddla, possibly combined with another tool for the audio. We are still working out the kinks.

We’ll try to capture the sessions and post them.

Blackboard + Angel = reason for open learning

In response to the Wired Campus article about Blackboard's acquisition of Angel Learning, Scott Leslie commented about moving beyond the LMS to networked learning options. His comment led me to this 2005 post, where he saw the social-software light, and a later post looking for help making the case for "fully open" content.

I think it would be useful to go beyond open content to other aspects of openness, such as Downes' open assessment. Beyond open content and assessment lie open problems worked in community. One of the problems, if students are working in the cloud, is how to manage the assessment of the distributed student work. WSUCTLT has been exploring how to gather assessments from the cloud with an idea called the Harvesting Gradebook. This has led them to look at teaching/learning practices along a spectrum from institution-based to community-based (PDF). That work began with perspectives shaped by institutional contexts, but is now branching out to examine open learning outside the university's walls.

Scott, as you suggest, part of making the case for open is the potential for greater scalability. When the institution locks up the content and locks up the expertise in a course, it's hard to scale. MIT's open content takes one step. Western Governors University takes another step with its "bring your own content and experience" strategies. Community-based learning unlocks the content, the experts, and the assessment, making for the most scalable solution. (Diagram of WSUCTLT's thinking on these four models (PDF).)