Yesterday I read this piece in the New York Times by Molly Worthen. Then I made the mistake of reading the comments thread, which included a fair amount of vitriol against supposedly lazy and irresponsible college teachers. Overall the comments were a mixed bag of support and critique—understandably, since questions about the purposes and standards in our colleges are multi-leveled and efforts at critical evaluation are not all wrong. Also, I’m glad to report that when I later returned to the thread, Worthen’s supporters had significantly outpaced her haters, and there is now a wealth of testimony that echoes my own, including the observation that assessment is designed more for things like predicting the lifespan [read: planned obsolescence] of circuits in microwave ovens than for high-level learning tasks.
I started this reflection thinking it might be a quick response to a hater on this thread, but my words kept pouring out.
I’m a college professor. I’m far from opposed to teaching and measuring quality. The point is that the available forms of “assessing” at my school—especially in cookie-cutter quantified forms—make almost everything worse and almost nothing better. These supposedly objective and clear measures bear almost zero relation to measuring quality, much less to solving any real problems that they occasionally flag. Overall the results are overwhelmingly negative—above all, because this is probably the single greatest factor in dumbing down college education.
That last sentence may be hyperbolic as it stands, unless we consider it in the context that Worthen suggests—a lack of public support for most students, so that most are working long hours of wage labor and racking up unconscionable debt at the same time that they study. Bogus assessment becomes a sort of alibi to pretend that something is being done to address the resulting gap, in which it becomes increasingly futile for teachers to attempt to enforce expectations for substantive study outside the classroom. Meanwhile students are understandably stressed because neoliberal policies are building them a future that lacks decent jobs, health care, or capacity to address world-historical crises like climate change. Obviously doing away with assessment would not solve all these problems by itself—it is just that assessment is a leading edge of the failure of universities to address the problems.
Surely our bean-counters could generate, if they cared to try, a quantifiable correlation between how often teachers have to think about assessment—and/or about the many legitimate teaching jobs being eliminated to fund it—and outcomes like despair, rage, crushed morale, productive time wasted on fantasies of revenge or early retirement, shrunken creativity, numbed affect, distorted collegiality, and cynicism. It would be amazing if this did not translate into quantifiably low morale, debased standards, and unhappiness in classrooms.
It makes sense, logically and intuitively, that we would wish to measure what is being accomplished in our universities. I absolutely understand this—even while I insist that current methods are decisively counter-productive, and that we would be better off assessing the (lack of) accomplishments by administrators, measured in ways attuned to the needs of faculty, students, and the public good.
So let me try to articulate what is wrong with assessment, as it exists on the ground. This is the time to note a book that—alongside thirty years’ experience of teaching—informs what I am about to say and brilliantly theorizes it in a far more sustained way than an essay like this can do. Mark Fisher’s Capitalist Realism: Is There No Alternative? includes a lucid account of how assessment regimes transform university culture into what he memorably calls “Market Stalinism”—a hybrid of the worst features of the free-market reduction of everything to a commodity form, the Kafkaesque opacity and thuggishness of bureaucrats in a command economy, and the aesthetic ham-handedness of Socialist Realism at its worst. By no means does Fisher theorize this in a vacuum—for starters he builds on Fredric Jameson’s work and resonates with Fred Moten and Stefano Harney’s The Undercommons—and by no means does he see Market Stalinism as confined to the universities. He simply does an exemplary job of showing how education is a paradigmatic site from which this regime spreads throughout our culture.
Imagine a machine that could quantify whether you can swing a bat, but not whether you can hit the ball; or whether you can hit the ball but not whether you can place it on the field where it counts as a hit; or whether you can run fast to first base but not whether you are a smart enough base-runner to ever score; or whether you can press down a piano key with your finger but not whether you can play something that anyone would consider music. Yet your career depends on engaging with this machine continually and defending your worth in terms of those four criteria.
This is what assessment feels like on the ground!
Worthen notes how assessors declare themselves to be far more sophisticated than this, quantifying outcomes like “truthseeking and analyticity” [sic]. Perhaps one day they will even quantify how many poll-takers lie about truth-seeking. Let’s agree that we might complexify the above measures a bit more—say, to assess how often we make stupid base-running errors or whether we can play with four fingers and keep pace with a metronome—but that this still leaves us 99% sure that, in practice, we could never get more than 10% of the way to quantifying our actual goal, even after huge amounts of trouble.
Suppose this goal is composing worthy music and playing it well, using all ten fingers and a rubato style, plus making an appropriate choice from a repertoire of seventy pieces, plus improvising on this selection in a jazz ensemble with aesthetic intelligence. Imagine that a demand to break this goal into quantifiable mini-components, in order to demonstrate your worth (“Hmm, I suppose I could count the exact number of songs in my repertoire”) was a continual distraction from the work that you actually needed to do to succeed. Suppose this led you to “cover” 70 songs in the sloppiest possible way and to attain an absurdly fast metronome setting for ripping through “Stella By Starlight”—and thus you failed to play even one piece well.
This is an apt analogy for what is happening to reading skills. It would not surprise me at all to learn that skimming through a dumbed-down cheat guide (SparkNotes, Wikipedia synopses, etc.) to pass a multiple-choice test is assessed more favorably than reading a complex book and writing a thoughtful essay about it. Literally, some students might go all the way through college without digesting one difficult book in depth—this is not always true but is now far more likely than it was a generation ago. Today I routinely teach college seniors who have never been assigned a term paper—sometimes no paper at all longer than 10 pages. And this is absolutely linked to our assessment regimes. Teachers may despair of finding measures for the needed projects (rubrics that are neither straitjackets to strangle creativity nor full of loopholes for a growing cohort of students who simply will not do the work); or fear low student popularity ratings; or predict that they would be downgraded for supposedly inappropriate time management (trading valuable research time for non-quantifiable teaching efforts). It is not that the logic of assessment must pull in this direction, in the abstract—it just does pull in this direction, and trying to work for improvements within its logic is very likely to pull one further downward.
Imagine now that a phrase I used earlier—“work that you actually needed to do to succeed”—gradually loses meaning. It comes to lack any practical application other than “succeeding” in terms of the quantitative measures. More precisely, the result of retranslation into these measures is a massive slippage, lived as a transmutation of priorities. This is the place where everyone should read Fisher’s book. But consider, as a pale illustration of the process, the gap between my current gym practices and those from years ago when I was a fairly decent basketball player. Playing well involved many physical, mental, and intuitive skills—each of them more complex than factors one could straightforwardly measure, such as whether I knew the rules, how high I could jump, or what percentage of free throws I could make. It is thrilling to play basketball at a high level, with a team improvising together in dialogue with opponents also improvising. This is not at all reducible to won/loss records and a few statistics, especially if one plays pickup basketball. When I consider how far I’ve fallen to merely getting my sixty-year-old knees onto a treadmill, then trying to reach a target number of calories burned in thirty minutes, it is hard to overstate the gap in richness. But I am extremely well-assessed numerically! I had an objectively “successful outcome” yesterday, with 367 actual calories burned versus a target of 360. Meanwhile I guess I “objectively lost” in many ballgames that ended as near-ties after engaging me at levels that were orders of magnitude deeper. But these days I rarely think about that—I think about counting calories instead—and this despite the fact that I began my treadmill routine specifically to get my knees in shape for basketball.
“Assessment” transforms education like that. It recalibrates goals and expectations (at many more levels than simply classrooms) from something like high-level basketball—collaborative creativity not at all measurable by one letter, W or L—to something more like recording numbers from a treadmill. (367 calories in 30 minutes and 20 seconds!) Do we really want this? If so, have we forgotten even how to imagine what we are losing?
Suppose that a whole generation loses track of the distinction between pressing a piano key and music. Suppose schools socialize students to expect that approaching a piano to get music out of it would be a waste of time. Maybe it would even be wrong, since it would be an opportunity cost in a war of all against all to build personal brands—or at least it would be disturbingly open-ended, thus creating anxiety (mainly quantified as a bad thing). Imagine it no longer occurs to people to sit at a piano except to complete an assignment of pressing down a key a certain number of times, within a structure that rewards doing so “efficiently,” which in practice means as quickly as possible while cutting the maximum number of corners. Imagine that the assessed “quality” of teaching (boiled down to a single digit derived from a consumer satisfaction survey) is a rating that correlates negatively with creating anxiety and positively with the lowest number of repetitions that a teacher requires before s/he certifies that students have “covered” the task of “learning to play.”
This is the world of assessment.
Now imagine a critic responding: Fuck books! How do they generate measurable profits? The universities are full of leftists and dilettantes. Don’t fund them—punish them! (Start with Worthen for saying that “the value of universities… depends on their ability to resist capitalism, to carve out space for intellectual endeavors that don’t have obvious metrics or market value.”) And fuck Hulsether’s tedious music examples! A computer ringtone version of Beethoven fits my business model—and I like it better, it’s relaxing—and only a sucker would be a musician anyway—and why are we even talking about music when I just want to calibrate the planned obsolescence of this microwave? Also, fuck team sports that aren’t for profit! (Oh, you say that sports cost a lot of money and we fund them anyway as a public good? Well, fuck that too, you’re confusing me!) Just tell me: can my employee push the right button—one that orders a cheeseburger—the correct number of times, or not? Can the university measure whether students learn to follow orders like that? Fuck everything else!
This is actually not the world of assessment—that world is ever so much less crass and more uplifting. (It does have gulags, but always for self-improvement.) However, this is the logic of the regime of assessment. It may be the single worst thing about higher education today, and that’s a tough competition to “win.”