Here's the latest update on assessment from the SUNY Office of the Provost, presented at the AIRPO winter conference and containing the "latest news from System Administration about the University's assessment policies and practices, resources for campuses, and questions for campus assessment leaders to consider."
Robert Connor, president of the Teagle Foundation, on how he started down the slippery slope from, as it were, litotes to learning outcomes:
When I left a research center for the humanities and started work in a philanthropic foundation over five years ago, I wanted to know if a foundation could make a difference to the extent and depth of student learning in the liberal arts. To answer that question, I had to learn as much as I could about how students learn and how we know about their learning. Before long, I was studying reports such as the one produced by the Association of American Colleges and Universities’ Liberal Education and America’s Promise initiative (LEAP) that argued that liberal education ought to be understood not as exposing students to certain fields of knowledge, but as helping them to develop long-lasting cognitive and personal capacities. When I started using that phrase, I was on a slippery slope.
The next thing I knew, I was asking whether colleges and universities were translating that understanding of liberal education into clear learning outcomes.
Read the rest of Connor's story here.
From Nancy Willie-Schiff, Assistant Provost for Undergraduate Education, Office of Academic Affairs, State University of New York:
TO: Chief Academic Officers and Assessment Contacts
RE: Administrative Changes to Streamline the SUNY Assessment Initiative
I am pleased to inform you about administrative changes designed to streamline the SUNY Assessment Initiative, including updated web pages and simplified reporting procedures and forms. These changes respond to the Board of Trustees' Re-engineering SUNY initiative and to preliminary feedback from an audit conducted by the Office of the State Comptroller. They were developed in consultation with assessment coordinators on campuses in every sector as well as with representatives of the University Faculty Senate and the Faculty Council of Community Colleges.
Updated Web Pages
The Provost's web site for assessment has been updated to serve as a one-stop location for the SUNY Assessment Initiative. It has links to policy and procedures pages, new reporting forms, email addresses and external web sites. The URL is http://www.suny.edu/provost/Assessmentinit.cfm?navLevel=5.
Pages on SUNY's online policy and procedure library have been updated to clarify policies and procedures, explain new reporting procedures and provide links to new reporting forms and background information. You can reach them from the Provost's page (above) or http://www.suny.edu/sunypp/, where you can search for them using the term "assessment."
A new survey out from AAC&U shows that "nearly 80% of colleges now have a broad set of learning outcomes for all students and more than 70% now assess outcomes across the curriculum beyond the use of course grades."
Finger Lakes Community College hosted an assessment workshop today, led by Linda Suskie, a vice president with Middle States. She has written a book (Assessing Student Learning: A Common Sense Guide; the first edition is available in Milne, and the second is on sale) that I am going to check out. I believe it will help me make assessment more practical in a number of ways: improving how we assess our outcomes at the classroom level while connecting that work to our broader departmental goals, and helping me create more effective rubrics for grading papers and essays.
Mostly, I liked the opening conversation we had about assessment. OK folks, let's not make a big deal out of this! Assessment is simply "deciding what we want our students to learn - and making sure they learn it!"
If anyone wants to talk more about how we might be able to do that more effectively and with less work, drop me an email.
The new issue of Peer Review, published quarterly by the Association of American Colleges and Universities (AAC&U), focuses on the Association's "VALUE Project," an effort to develop national standards for assessing essential learning outcomes without resort to standardized tests.
VALUE stands for Valid Assessment of Learning in Undergraduate Education. As explained in this overview of the VALUE Project, the "essential outcomes" for which AAC&U seeks to develop valid assessment tools are those of its LEAP initiative. (LEAP is an acronym for Liberal Education and America's Promise.) The outcomes, listed here, are broad, in accordance with the broad effect on students that the best liberal education, taken as a whole, is meant to yield. In other words, they're outcomes not of this or that degree program, nor even of a general education curriculum, but of the student's entire undergraduate experience.
Two noteworthy features of the VALUE project are, first, the effort to build "metarubrics" based on the accumulation and study of rubrics developed at various individual institutions, and, second, the promotion of e-portfolios as a method of storing and documenting student performances.
As AAC&U notes,
There are no standardized tests for many of the essential outcomes of an undergraduate education. Existing tests are based on typically nonrandom samples of students at one or two points in time, are of limited use to faculty and programs for improving their practices, and are of no use to students for assessing their own learning strengths and weaknesses. VALUE argues that, as an academic community, we possess a set of shared expectations for learning for all of the essential outcomes, general agreement on what the basic criteria are, and a shared understanding of what progressively more sophisticated demonstration of student learning looks like.
Metarubrics aren't simply a compilation and distillation of best practice at various institutions; for campuses that adopt them, they move the entire assessment process in the direction of shared expectations and standards, thereby increasing the validity of learning measurements.
E-portfolios benefit both institutions and students. For the former, they constitute a repository of performances useful for evaluating and tracking institutional effectiveness; for the latter, they represent an archive of accomplishments that can be shared with graduate institutions and prospective employers.
Copies of Peer Review Vol. 11, No. 1 (Winter 2009) are available from AAC&U for $8 to members. Geneseo is a member institution.
AAC&U is holding the line against "assessment for accountability," insisting that the real imperative in higher education is to conduct "assessment for improvement," and maintaining, as we've done at Geneseo, that the latter is itself an accountability measure that should strengthen public confidence even if (because?) it does not produce numbers purporting to make possible cross-institutional comparisons of effectiveness.
At Academe Online, James Berger, professor of English at Hofstra University, has posted A Mission Counterstatement, which he characterizes as "an intellectual defense against the mission statement-outcomes assessment ideology." While I share Berger's distaste for the way higher education has adopted various forms of corporate-speak in its efforts to communicate its purposes internally and to the public, I find his argument anything but "intellectual." In fact, it's anti-intellectual not only in form but in spirit.
What I mean by calling it anti-intellectual in form is just that it's a bad argument. Berger believes that the "current emphasis on mission statements and outcomes assessment is part of a political struggle over the status of the humanities. It's part of an effort to denigrate our values and methods." The methods of social science, he goes on to explain, are fundamentally different from those of the humanities. Whereas "the social scientist stands (or believes he or she stands) outside his or her data sample," in literary analysis the "scholar is always and necessarily implicated in the thing he or she studies." Setting aside for the moment the question whether all social scientists would recognize themselves in this characterization, consider the conclusion to which Berger's distinction leads. It isn't, as he seems to suppose, that outcomes assessment is a fraud, only that it can't be applied to the humanities. That leaves a considerable portion of the curriculum - well, most of it, in fact - where assessment might still be supposed to have some relevance. The inapplicability of assessment to the humanities - accepting for the moment that it's indeed inapplicable - isn't an argument against the validity or usefulness of assessment, much less an argument that assessment is part of a nefarious plot to turn the academy into Microsoft with dorms.
But Berger's argument is bad for other, perhaps more interesting reasons, too. "The knowledge conveyed by literature does not employ abstract models," he writes. This would come as a surprise to novelists, poets, dramatists, screenwriters, and so on interested in abstractions, whether moral, political, or scientific. It's also neither here nor there with respect to whether abstract models might be of some use in understanding what and how students learn - about literature as well as other things. But from asserting that literature itself conveys no knowledge of abstract models, he goes on (it appears) to argue that abstract knowledge of literature is unattainable. Narrative in particular is proof against modeling (don't quit your day jobs, narratologists!). But it's odd that this hostility to abstraction finds expression in so many abstract claims about the nature of literature and literary study. ("Literary study tries to understand what literature is and does...Literature imagines alternatives to the world as it is...Even the result of randomness in a literary text is the result of a decision by an author...Literature depicts lived experience.")
What's even more odd is the feeling one may have, reading Berger, of being transported back in time to the theory-wars of the 1970s and 80s, when impressionist and formalist literary critics who imagined themselves to be practicing an art neither requiring nor informed by theory inveighed against structuralist, feminist, Marxist, deconstructionist, and other systematic efforts to think in abstract terms about texts, readers, and the relationship between them. In practicing the occult art of "sensitive" close reading, all those traditionalist professors of English had turned themselves into a kind of literary priesthood. The theorists threatened to rob their practice of its mystery. It's Berger's similar attempt to protect the mystery of humanistic expression, scholarship, and learning that I have in mind when I say that his argument is anti-intellectual in spirit.
At the end of the day, though, all this is beside the point, because Berger is working with an understanding of outcomes assessment apparently derived from nothing more than the particular assessment initiative on his own campus - which, for all one knows, he may not have fully understood.
Good outcomes assessment in literature involves doing what we've always done in evaluating student work but doing it in ways that make our thinking more explicit to our students and ourselves. Like the anti-theorists of a quarter-century ago, we're only deceiving ourselves if we believe that our evaluation of our students' work isn't informed by a theory of what constitutes good, mediocre, and poor performance. Assessment simply asks us to (1) spell out the theory in some simple descriptions (e.g., "uses appropriate evidence to support conclusions," "provides necessary transitions between ideas") so that students understand what we expect of them, (2) check our students' work regularly against these descriptions so that we can see just where they're succeeding or failing to implement the theory of good performance we're teaching them, and (3) feed the information we get from this into discussions among ourselves about how to improve the likelihood that more students will succeed more of the time.
It's easy to advertise that our academic programs - whether in the humanities, the social sciences, or elsewhere - do this, that, or the other thing for our students. Be an English major! You too can learn to unweave the woven object that is the text! You too can learn to read critically!
Advertising makes the corporate world go round.
By contrast, what makes the academic world go round is theorizing practice and making practical decisions based on evidence.
Assessment is the opposite of advertising.
On the Modern Language Association's website, Gerald Graff, 2008 President of the MLA, explains that he has "become a believer in the potential of learning outcomes assessment, which challenges the elitism of the Best-Student Fetish by asking us to articulate what we expect our students to learn - all of them, not just the high-achieving few - and then holds us accountable for helping them learn it." He goes on to assert that "By bringing us out from behind the walls of our classrooms, outcomes assessment deprivatizes teaching, making it not only less of a solo performance but more of a public activity."
Update: Graff's essay is also available at insidehighered.com. It will be interesting to watch reader reaction as reflected in the page comments. Why not record your own reactions right here? Just click "Add Comment" to attach your thoughts about Graff's essay to this blogpost. (You must be logged in to add a comment.)
Higher education accreditation agencies play an important role in setting expectations for student learning-outcomes assessment. As this December 14 article from Insidehighered.com makes clear, the U.S. Department of Education can exert pressure on regional accrediting agencies as one way to force colleges and universities to extend and standardize their assessment efforts. Next week, NACIQI - the National Advisory Committee on Institutional Quality and Integrity, which advises the Secretary of Education and oversees the accrediting agencies - will be meeting in Washington. As Insidehighered.com points out,
In its last several meetings, dating to late 2006, the advisory panel has aggressively challenged accreditors to insist - arguably as never before - that colleges measure how well their students learn, and threatened to rebuke agencies that are perceived as failing to hold member colleges accountable enough, and to set minimum levels of quality.
Looks like this will be an important meeting to watch.
The aim of the draft was to have numerous college associations sign on to the framework outlined, with the idea of then encouraging their members to join in "a compact" to commit to the document. Some higher education leaders have strongly backed the efforts, arguing that the best way to fend off government intrusion is for academe to set its own standards.
But parts of the document are controversial. While education groups agree that colleges should have goals and that they should consider how to improve the education they offer, many fear that moves to measure student learning will inevitably lead to the use of standardized testing and to facile comparisons of institutions.
The U.S. House of Representatives has passed a bill re-authorizing the Higher Education Act. It includes an amendment from Rep. Robert E. Andrews (D-NJ) removing language that would have given campuses primary responsibility for determining how to measure student learning outcomes.
From the Chronicle of Higher Education: "Accreditors ambushed colleges and universities with the Andrews amendment last night, which unravels months of hard work to get language into the Higher Education Act acknowledging the right of institutions to establish their own student-learning-outcome measures," said Becky Timmons, assistant vice president for government relations at the American Council on Education.
The most recent development in SUNY-wide assessment is the participation of some SUNY campuses in a national project known as the Voluntary System of Accountability. The project is a joint undertaking of the National Association of State Universities and Land-Grant Colleges (NASULGC) and the American Association of State Colleges and Universities (AASCU).
VSA has been covered by the Chronicle of Higher Education and insidehighered.com. As the latter reports here, the project became public in the wake of news that Education Secretary Margaret Spellings' Commission on the Future of Higher Education was discussing the imposition of a nationwide standardized testing regime designed to assure quality in higher education.
Campuses that participate in VSA will post information about themselves using a common template, called the "College Portrait." Each campus will post its College Portrait on its own website; the portraits will not be collected in a central location. However, a list of participating campuses is to be made available at the VSA website. (The list doesn't appear to be up yet, but the site does have additional information about the project.)
One section of the College Portrait template is for information related to the assessment of student learning, and the requirement for participating campuses to complete this section using standardized test scores is generating some understandable controversy.
VSA campuses must measure students' critical thinking and written communication skills using one of the following standardized instruments: CAAP (Collegiate Assessment of Academic Proficiency), CLA (Collegiate Learning Assessment), or MAPP (Measure of Academic Proficiency and Progress).
Currently, there appears to be no plan on the part of SUNY system administration to require state campuses to participate in VSA. However, a number of SUNY campuses have independently signed on to the project. Geneseo has not joined them, and according to Provost Conway-Turner, there is no prospect of our doing so.
At the October, 2007 Plenary Session of the University Faculty Senate, there was concern that over time SUNY-wide participation could become mandatory or expected. The following resolution was therefore proposed and approved:
Resolution on the State University and the "Voluntary" System of Accountability

- Whereas the University Faculty Senate has indicated through a number of different resolutions that it opposes the collection and public distribution of standardized measures assessing student learning outcomes that would allow for invidious and inappropriate comparisons among SUNY campuses, and
- Whereas each campus of the State University has an assessment process that is the result of agreements between that campus and the System Administration, the singular purpose of which is the improvement of undergraduate education, and
- Whereas the Voluntary Assessment System recently fostered by AASCU and other educational organizations inappropriately uses such data as marketing tools rather than for the improvement of undergraduate education, and
- Whereas eight State University campuses have "volunteered" to pilot the Voluntary System of Accountability with little or no consultation with local faculty governance bodies,
- Be It Resolved that the University Faculty Senate strongly opposes any move to implement the Voluntary System of Accountability as a State University requirement, and
- Be It Further Resolved that the University Faculty Senate urges a prohibition of additional campus involvement in the pilot process without explicit and meaningful consultation with local governance bodies.