The new issue of Peer Review, published quarterly by the Association of American Colleges and Universities (AAC&U), focuses on the Association's "VALUE Project," an effort to develop national standards for assessing essential learning outcomes without resort to standardized tests.
VALUE stands for Valid Assessment of Learning in Undergraduate Education. As explained in this overview of the VALUE Project, the "essential outcomes" for which AAC&U seeks to develop valid assessment tools are those of its LEAP initiative. (LEAP is an acronym for Liberal Education and America's Promise.) The outcomes, listed here, are broad, in accordance with the broad effect on students that the best liberal education, taken as a whole, is meant to yield. In other words, they're outcomes not of this or that degree program, nor even of a general education curriculum, but of the student's entire undergraduate experience.
Two noteworthy features of the VALUE project are, first, the effort to build "metarubrics" based on the accumulation and study of rubrics developed at various individual institutions, and, second, the promotion of e-portfolios as a method of storing and documenting student performances.
As AAC&U notes,
There are no standardized tests for many of the essential outcomes of an undergraduate education. Existing tests are based on typically nonrandom samples of students at one or two points in time, are of limited use to faculty and programs for improving their practices, and are of no use to students for assessing their own learning strengths and weaknesses. VALUE argues that, as an academic community, we possess a set of shared expectations for learning for all of the essential outcomes, general agreement on what the basic criteria are, and a shared understanding of what progressively more sophisticated demonstration of student learning looks like.
Metarubrics aren't simply a compilation and distillation of best practice at various institutions; for campuses that adopt them, they move the entire assessment process in the direction of shared expectations and standards, thereby increasing the validity of learning measurements.
E-portfolios benefit both institutions and students. For the former, they constitute a repository of performances useful for evaluating and tracking institutional effectiveness; for the latter, they represent an archive of accomplishments that can be shared with graduate institutions and prospective employers.
Copies of Peer Review Vol. 11, No. 1 (Winter 2009) are available from AAC&U for $8 to members. Geneseo is a member institution.
Here's a nice web resource on rubrics courtesy of Howard University. H/t Bruce Spear.
At insidehighered.com, there's a good article today about AAC&U's most recent statement on assessment (pdf).
AAC&U is holding the line against "assessment for accountability," insisting that the real imperative in higher education is to conduct "assessment for improvement," and maintaining, as we've done at Geneseo, that the latter is itself an accountability measure that should strengthen public confidence even if (because?) it does not produce numbers purporting to make possible cross-institutional comparisons of effectiveness.
At Academe Online, James Berger, professor of English at Hofstra University, has posted A Mission Counterstatement, which he characterizes as "an intellectual defense against the mission statement-outcomes assessment ideology." While I share Berger's distaste for the way higher education has adopted various forms of corporate-speak in its efforts to communicate its purposes internally and to the public, I find his argument anything but "intellectual." In fact, it's anti-intellectual not only in form but in spirit.
What I mean by calling it anti-intellectual in form is just that it's a bad argument. Berger believes that the "current emphasis on mission statements and outcomes assessment is part of a political struggle over the status of the humanities. It's part of an effort to denigrate our values and methods." The methods of social science, he goes on to explain, are fundamentally different from those of the humanities. Whereas "the social scientist stands (or believes he or she stands) outside his or her data sample," in literary analysis the "scholar is always and necessarily implicated in the thing he or she studies." Setting aside for the moment the question whether all social scientists would recognize themselves in this characterization, consider the conclusion to which Berger's distinction leads. It isn't, as he seems to suppose, that outcomes assessment is a fraud, only that it can't be applied to the humanities. That leaves a considerable portion of the curriculum - well, most of it, in fact - where assessment might still be supposed to have some relevance. The inapplicability of assessment to the humanities - accepting for the moment that it's indeed inapplicable - isn't an argument against the validity or usefulness of assessment, much less an argument that assessment is part of a nefarious plot to turn the academy into Microsoft with dorms.
But Berger's argument is bad for other, perhaps more interesting reasons, too. "The knowledge conveyed by literature does not employ abstract models," he writes. This would come as a surprise to novelists, poets, dramatists, screenwriters, and others interested in abstractions, whether moral, political, or scientific. It's also neither here nor there with respect to whether abstract models might be of some use in understanding what and how students learn - about literature as well as other things. But from asserting that literature itself conveys no knowledge of abstract models, he goes on (it appears) to argue that abstract knowledge of literature is unattainable. Narrative in particular is proof against modeling (don't quit your day jobs, narratologists!). But it's odd that this hostility to abstraction finds expression in so many abstract claims about the nature of literature and literary study. ("Literary study tries to understand what literature is and does...Literature imagines alternatives to the world as it is...Even the result of randomness in a literary text is the result of a decision by an author...Literature depicts lived experience.")
What's even more odd is the feeling one may have, reading Berger, of being transported back in time to the theory-wars of the 1970s and 80s, when impressionist and formalist literary critics who imagined themselves to be practicing an art neither requiring nor informed by theory inveighed against structuralist, feminist, Marxist, deconstructionist, and other systematic efforts to think in abstract terms about texts, readers, and the relationship between them. In practicing the occult art of "sensitive" close reading, all those traditionalist professors of English had turned themselves into a kind of literary priesthood. The theorists threatened to rob their practice of its mystery. It's Berger's similar attempt to protect the mystery of humanistic expression, scholarship, and learning that I have in mind when I say that his argument is anti-intellectual in spirit.
At the end of the day, though, all this is beside the point because Berger is working with an understanding of outcomes assessment apparently derived from nothing more than the particular assessment initiative on his own campus - which, for all one knows, he may not have fully understood.
Good outcomes assessment in literature involves doing what we've always done in evaluating student work but doing it in ways that make our thinking more explicit to our students and ourselves. Like the anti-theorists of a quarter-century ago, we're only deceiving ourselves if we believe that our evaluation of our students' work isn't informed by a theory of what constitutes good, mediocre, and poor performance. Assessment simply asks us to (1) spell out the theory in some simple descriptions (e.g., "uses appropriate evidence to support conclusions," "provides necessary transitions between ideas") so that students understand what we expect of them, (2) check our students' work regularly against these descriptions so that we can see just where they're succeeding or failing to implement the theory of good performance we're teaching them, and (3) feed the information we get from this into discussions among ourselves about how to improve the likelihood that more students will succeed more of the time.
It's easy to advertise that our academic programs - whether in the humanities, the social sciences, or elsewhere - do this, that, or the other thing for our students. Be an English major! You too can learn to unweave the woven object that is the text! You too can learn to read critically!
Advertising makes the corporate world go round.
By contrast, what makes the academic world go round is theorizing practice and making practical decisions based on evidence.
Assessment is the opposite of advertising.
On the Modern Language Association's website, Gerald Graff, 2008 President of the MLA, explains that he has "become a believer in the potential of learning outcomes assessment, which challenges the elitism of the Best-Student Fetish by asking us to articulate what we expect our students to learn - all of them, not just the high-achieving few - and then holds us accountable for helping them learn it." He goes on to assert that "By bringing us out from behind the walls of our classrooms, outcomes assessment deprivatizes teaching, making it not only less of a solo performance but more of a public activity."
Update: Graff's essay is also available at insidehighered.com. It will be interesting to watch reader reaction as reflected in the page comments. Why not record your own reactions right here? Just click "Add Comment" to attach your thoughts about Graff's essay to this blogpost. (You must be logged in to add a comment.)
Higher education accreditation agencies play an important role in setting expectations for student learning-outcomes assessment. As this December 14 article from insidehighered.com makes clear, the U.S. Department of Education can exert pressure on regional accrediting agencies as one way to force colleges and universities to extend and standardize their assessment efforts. Next week, NACIQI - the National Advisory Committee on Institutional Quality and Integrity, which advises the Secretary of Education and oversees the accrediting agencies - will be meeting in Washington. As insidehighered.com points out,
In its last several meetings, dating to late 2006, the advisory panel has aggressively challenged accreditors to insist - arguably as never before - that colleges measure how well their students learn, and threatened to rebuke agencies that are perceived as failing to hold member colleges accountable enough, and to set minimum levels of quality.
Looks like this will be an important meeting to watch.
Insidehighered.com claims to have reviewed a leaked version of a new document on assessment arising from a joint effort by AAC&U and CHEA. According to the Nov. 20 article,
The aim of the draft was to have numerous college associations sign on to the framework outlined, with the idea of then encouraging their members to join in "a compact" to commit to the document. Some higher education leaders have strongly backed the efforts, arguing that the best way to fend off government intrusion is for academe to set its own standards.
But parts of the document are controversial. While education groups agree that colleges should have goals and that they should consider how to improve the education they offer, many fear that moves to measure student learning will inevitably lead to the use of standardized testing and to facile comparisons of institutions.
The U.S. House of Representatives has passed a bill re-authorizing the Higher Education Act. It includes an amendment from Rep. Robert E. Andrews (D-NJ) removing language that would have given campuses primary responsibility for determining how to measure student learning outcomes.
From the Chronicle of Higher Education: "Accreditors ambushed colleges and universities with the Andrews amendment last night, which unravels months of hard work to get language into the Higher Education Act acknowledging the right of institutions to establish their own student-learning-outcome measures," said Becky Timmons, assistant vice president for government relations at the American Council on Education.
The most recent development in SUNY-wide assessment is the participation of some SUNY campuses in a national project known as the Voluntary System of Accountability. The project is a joint undertaking of the National Association of State Universities and Land-Grant Colleges (NASULGC) and the American Association of State Colleges and Universities (AASCU).
VSA has been covered by the Chronicle of Higher Education and insidehighered.com. As the latter reports here, the project became public in the wake of news that Education Secretary Margaret Spellings' Commission on the Future of Higher Education was discussing the imposition of a nationwide standardized testing regime designed to assure quality in higher education.
Campuses that participate in VSA will post information about themselves using a common template, called the "College Portrait." Each campus will post its College Portrait on its own website; the portraits will not be collected in a central location. However, a list of participating campuses is to be made available at the VSA website. (The list doesn't appear to be up yet, but the site does have additional information about the project.)
One section of the College Portrait template is for information related to the assessment of student learning, and the requirement for participating campuses to complete this section using standardized test scores is generating some understandable controversy.
VSA campuses must measure students' critical thinking and written communication skills using one of the following standardized instruments: CAAP (Collegiate Assessment of Academic Proficiency), CLA (Collegiate Learning Assessment), or MAPP (Measure of Academic Proficiency and Progress).
Currently, there appears to be no plan on the part of SUNY system administration to require state campuses to participate in VSA. However, a number of SUNY campuses have independently signed on to the project. Geneseo has not joined them, and according to Provost Conway-Turner, there is no prospect of our doing so.
At the October 2007 Plenary Session of the University Faculty Senate, there was concern that over time SUNY-wide participation could become mandatory or expected. The following resolution was therefore proposed and approved:
Resolution on the State University and the "Voluntary" System of Accountability

- Whereas the University Faculty Senate has indicated through a number of different resolutions that it opposes the collection and public distribution of standardized measures assessing student learning outcomes that would allow for invidious and inappropriate comparisons among SUNY units, and
- Whereas each campus of the State University has an assessment process that is the result of agreements between that campus and the System Administration, the singular purpose of which is the improvement of undergraduate education, and
- Whereas the Voluntary Assessment System recently fostered by AASCU and other educational organizations inappropriately uses such data as marketing tools rather than for the improvement of undergraduate education, and
- Whereas eight State University campuses have "volunteered" to pilot the Voluntary System of Accountability with little or no consultation with local faculty governance bodies,

Therefore,

- Be It Resolved that the University Faculty Senate strongly opposes any move to implement the Voluntary System of Accountability as a State University-wide requirement.
- Be It Further Resolved that the University Faculty Senate urges a prohibition of additional campus involvement in the pilot process without explicit and meaningful consultation with local governance bodies.
For information on the history of SUNY-wide assessment, see the page on Campus-Based Gen Ed Assessment for SUNY and SUNY-wide Assessment - Timeline and Documents.
I just came across this article from the August 14 edition of insidehighered.com by Donna Engelmann of Alverno College. It's a thoughtful description of how learning outcomes assessment works at a campus that has earned a national reputation for effective assessment, and a spirited defense of the benefits of locally developed assessment processes as opposed to nationally normed standardized testing.
The New York Times reports today on a common website, being developed jointly by private and public institutions of higher learning, "that would enable easy comparison on everything from class size to what students do after graduation."
NB:
The public universities have proposed going further than the private ones to make public data sought by the federal education secretary that shows whether their students are actually learning and developing in college.
One section would offer data on student engagement, including survey results on matters like students' overall satisfaction and participation in group learning experiences. The other part would offer statistics on student learning outcomes, based on standardized tests that measure things like critical thinking and analytic writing.
Here's an article from the New York Times on the pros and cons of the "growth model" of tracking students' educational progress. Though the focus is not on higher education, there are obvious similarities here to the issues surrounding "value-added" assessment at the college level.
Jeremy Penn, Assessment Associate for PEARL (Program Excellence through Assessment, Research and Learning) at the University of Nebraska at Lincoln, distinguishes between "assessment for us" and "assessment for them" - aka "assessment for improvement" and "assessment for accountability" - in this article at insidehighered.com. Penn uses the SUNY Assessment Initiative to illustrate the point that "assessment for accountability" needn't require one-size-fits-all tests designed to enable cross-institutional comparisons; it can take the form of "demonstrating a commitment to student learning and being accountable for a process" rather than being accountable for specific results.
Insidehighered.com published this article today on the value of discipline-specific versus institution-wide assessment.
Alexander C. McCormick, Senior Scholar at the Carnegie Foundation for the Advancement of Teaching, examines how the benefits of assessment can be undone by ill-considered use of the results in this essay in the Foundation's Perspectives series.