Notes from October exchange with College Liaison:
The College Assessment Committee meets this Thursday and I would like to begin addressing the program report that you’d like by the end of the semester.
Should we limit our focus to degree learning outcomes, or should we include other outcomes as well? For example, the tables that describe our program assessment systems cross-reference the learning outcomes, conceptual framework elements, professional accreditation standards, and State accreditation standards.
Attached is an example. Learning Outcomes (LO) referred to in the last column are listed at the bottom of the table.
[No files were attached to the original of this post. Ed]
From: Brown, Gary
Sent: Wed 10/28/2009 8:38 PM
Subject: RE: Report at end of semester
Much is not completely clear to me looking at the matrix, but what I wonder about at first pass is how faculty judgments in the courses are validated (or, more simply, how the assessment differs from the way the work is graded). I see that a rubric “was considered,” but I don’t see what process of review will validate the rubric. I suspect some approach to corroboration is planned, but I am also recalling the new press that calls for independent or external review. I have a few other questions and would like to understand the planned approach more fully, and then perhaps use our rubric to help reflect on both the assessment plan and the rubric itself. Is it possible to engage your team in that activity?
Thanks for getting back to me on this. Actually, my question was whether we should limit the information to Learning Outcomes only. Most of our documents include references to [professional accreditation], State, Conceptual Framework, and Learning Outcomes. If we only need to report on LOs, I can simplify our materials to reflect LOs only.
Regarding your question about the validity of class assignments scored by the instructor: most of these involve a rubric developed by the faculty. Oftentimes the rubric is piloted and revised, and revisions continue as needed if the rubric doesn’t appear to give the type of information needed for program improvement. That’s part of the regular review of the assessment system (usually on an annual basis). I don’t think an external/independent review of the rubrics is feasible or appropriate; our [professional] accreditation doesn’t require it. However, we do conduct “studies” (mostly informal ones) of our assessments and assessment processes to assure they’re valid, fair, and unbiased.