...the ballooning assessment industry — including the tech companies and consulting firms that profit from assessment — is a symptom of higher education’s crisis, not a solution to it. It preys especially on less prestigious schools and contributes to the system’s deepening divide into a narrow tier of elite institutions primarily serving the rich and a vast landscape of glorified trade schools for everyone else.
...Mr. Gilbert became an outspoken assessment skeptic after years of watching the process fail to capture what happens in his classes — and seeing it miss the real reasons students struggle. “Maybe all your students have full-time jobs, but that’s something you can’t fix, even though that’s really the core problem,” he said. “Instead, you’re expected to find some small problem, like students don’t understand historical chronology, so you might add a reading to address that. You’re supposed to make something up every semester, then write up a narrative” explaining your solution to administrators.
Here is the second irony: Learning assessment has not spurred discussion of the deep structural problems that send so many students to college unprepared to succeed. Instead, it lets politicians and accreditors ignore these problems as long as bureaucratic mechanisms appear to be holding someone — usually a professor — accountable for student performance...
...Without thoughtful reconsideration, learning assessment will continue to devour a lot of money for meager results. The movement’s focus on quantifying classroom experience makes it easy to shift blame for student failure wholly onto universities, ignoring deeper socio-economic reasons that cause many students to struggle with college-level work. Worse, when the effort to reduce learning to a list of job-ready skills goes too far, it misses the point of a university education. (emphasis mine)
Worthen goes on to express what many of us say in the hallways of CSU or on the way to our commanded attendance at the semester assessment meetings, though rarely do the cowed CSU faculty voice these sentiments in public meetings. Did I hear that a CSU administrator was sent somewhere in Asia to attend an international assessment conference a few years ago? Give me a break. New ways to waste more time and money.
But wait. CSU made its way indirectly into Worthen's discussion. Oh, do not fear, ye who blame the media for reporting "only the bad stuff" about poor benighted Chicago State: the university is unnamed. But mirabile dictu! One of our very own, Dr. Eric Lief Peters, who retired from CSU last year, wrote one of the three letters to the editor that the NY Times chose to run in its daily letters column. I'm posting his letter here.
Laugh (and cry) as required. Molly Worthen's article follows.
To the Editor:
I spent the last several years at my university on this nonsense as an “assessment coordinator.” It was a total waste of more than 40 percent of my time and left me with no time to do research. It was bureaucracy at its worst. It was impossible to implement in any tangible way that would yield meaningful data, and nobody would or could provide guidance. It was a very large reason I retired early.
Here is a general summary:
Me: “Well, here is what we are thinking about for an assessment plan.”
Them: “You should really come up with a plan that assesses student performance.”
Me (gritting teeth): “Yes, we have that in all our classes. They are called graded assignments and exams.”
Them: “Those are great! But they should be aligned with the goals of the courses.”
Me (grinding teeth): “Yes, these are assignments that are based on evaluating the students’ grasp of the course content.”
Them: “But they should instead reflect other things that the students gain in the course.”
Me: “Like what?”
Them: “We can’t tell you, but you will know it when you see it.”
Me: “Can you give us a hint?”
Them: “No, these should be your assessments of what is important.”
Me: “Why don’t you just shoot me and get it over with?”
Them: “Your assessment reports will be due on LiveText by …”
ERIC L. PETERS, GLENWOOD, ILL.
Molly Worthen, "The Misguided Drive to Measure 'Learning Outcomes,'" The New York Times, Feb. 23, 2018.
I teach at a big state university, and I often receive emails from software companies offering to help me do a basic part of my job: figuring out what my students have learned.
If you thought this task required only low-tech materials like a pile of final exams and a red pen, you’re stuck in the 20th century. In 2018, more and more university administrators want campuswide, quantifiable data that reveal what skills students are learning. Their desire has fed a bureaucratic behemoth known as learning outcomes assessment. This elaborate, expensive, supposedly data-driven analysis seeks to translate the subtleties of the classroom into PowerPoint slides packed with statistics — in the hope of deflecting the charge that students pay too much for degrees that mean too little.
It’s true that old-fashioned course grades, skewed by grade inflation and inconsistency among schools and disciplines, can’t tell us everything about what students have learned. But the ballooning assessment industry — including the tech companies and consulting firms that profit from assessment — is a symptom of higher education’s crisis, not a solution to it. It preys especially on less prestigious schools and contributes to the system’s deepening divide into a narrow tier of elite institutions primarily serving the rich and a vast landscape of glorified trade schools for everyone else.
Without thoughtful reconsideration, learning assessment will continue to devour a lot of money for meager results. The movement’s focus on quantifying classroom experience makes it easy to shift blame for student failure wholly onto universities, ignoring deeper socio-economic reasons that cause many students to struggle with college-level work. Worse, when the effort to reduce learning to a list of job-ready skills goes too far, it misses the point of a university education.
The regional accrediting agencies that certify the quality of education an institution provides — and its fitness to receive federal student financial aid — now require some form of student learning assessment. That means most American colleges and universities have to do it. According to a recent survey, schools deploy an average of four methods for evaluating learning, which include testing software and rubrics to standardize examinations, e-portfolio platforms to display student projects, surveys and other tools.
No intellectual characteristic is too ineffable for assessment. Some schools use lengthy surveys like the California Critical Thinking Disposition Inventory, which claims to test for qualities like “truthseeking” and “analyticity.” The Global Perspective Inventory, administered and sold by Iowa State University, asks students to rate their agreement with statements like “I do not feel threatened emotionally when presented with multiple perspectives” and scores them on metrics like the “intrapersonal affect scale.”
Surveys can’t tell you everything. So universities assemble committees of faculty members, arm them with rubrics and assign them piles of student essays culled from across the school (often called “student products,” as if they are tubes of undergraduate Soylent Green). Assessment has invaded the classroom, too: On many campuses, professors must include a list of skills-based “learning outcomes” on every syllabus and assess them throughout the semester.
All this assessing requires a lot of labor, time and cash. Yet even its proponents have struggled to produce much evidence — beyond occasional anecdotes — that it improves student learning. “I think assessment practices are ripe for re-examining,” said David Eubanks, assistant vice president for assessment and institutional effectiveness at Furman University in Greenville, S.C., who has worked in assessment for years and now speaks out about its problems. “It has forced academic departments to use data that’s not very good,” he added. “And the process of getting this data that’s not very good can be very painful.”
The push to quantify undergraduate learning is about a century old, but the movement really took off in the 1980s. The assessment boom coincided — not, I think, by accident — with the decision of state legislatures all over the country to reduce spending on public universities and other social services. That divestment continued, moving more of the cost of higher education onto students. (These students are often graduates of underfunded high schools that can’t prepare them for college in the first place.) It was politically convenient to hold universities accountable for all this, rather than to scrutinize neoliberal austerity measures.
In 2006, the Commission on the Future of Higher Education, convened by Margaret Spellings, the secretary of education at the time, issued a scathing critique of universities. “Employers report repeatedly that many new graduates they hire are not prepared to work, lacking the critical thinking, writing and problem-solving skills needed in today’s workplaces,” the commission’s report complained.
Educators scrambled to ensure that students graduate with these skills — and to prove it with data. The obsession with testing that dominates primary education invaded universities, bringing with it a large support staff. Here is the first irony of learning assessment: Faced with outrage over the high cost of higher education, universities responded by encouraging expensive administrative bloat.
Many of the professionals who work in learning assessment are former faculty members who care deeply about access to quality education. Pat Hutchings, a senior scholar at the National Institute for Learning Outcomes Assessment (and former English professor), told me: “Good assessment begins with real, genuine questions that educators have about their students, and right now for many educators those are questions about equity. We’re doing pretty well with 18- to 22-year-olds from upper-middle-class families, but what about — well, fill in the blank.”
It seems that the pressure to assess student learning outcomes has grown most quickly at poorly funded regional universities that have absorbed a large proportion of financially disadvantaged students, where profound deficits in preparation and resources hamper achievement. Research indicates that the more selective a university, the less likely it is to embrace assessment. Learning outcomes assessment has become one way to answer the question, “If you get unprepared students in your class and they don’t do well, how does that get explained?” Mr. Eubanks at Furman University told me.
When Erik Gilbert, a professor of history at Arkansas State University, reached the end of his World Civilization course last fall, he dutifully imposed the required assessment: an extra question on the final exam that asked students to read a document about Samurai culture and answer questions using knowledge of Japanese history. Yet his course focused on “cross-cultural connections, trade, travel, empire, migration and bigger-scale questions, rather than area studies,” Mr. Gilbert told me. His students had not studied Japanese domestic history. “We do it this way because it satisfies what the assessment office wants, not because it addresses concerns that we as a department have.”
Mr. Gilbert became an outspoken assessment skeptic after years of watching the process fail to capture what happens in his classes — and seeing it miss the real reasons students struggle. “Maybe all your students have full-time jobs, but that’s something you can’t fix, even though that’s really the core problem,” he said. “Instead, you’re expected to find some small problem, like students don’t understand historical chronology, so you might add a reading to address that. You’re supposed to make something up every semester, then write up a narrative” explaining your solution to administrators.
Here is the second irony: Learning assessment has not spurred discussion of the deep structural problems that send so many students to college unprepared to succeed. Instead, it lets politicians and accreditors ignore these problems as long as bureaucratic mechanisms appear to be holding someone — usually a professor — accountable for student performance.
All professors could benefit from serious conversations about what is and is not working in their classes. But instead they end up preoccupied with feeding the bureaucratic beast. “It’s a bit like the old Soviet Union. You speak two languages,” said Frank Furedi, an emeritus professor of sociology at the University of Kent in Britain, which has a booming assessment culture. “You do a performance for the sake of the auditors, but in reality, you carry on.”
Yet bureaucratic jargon subtly shapes the expectations of students and teachers alike. On the first day of class, my colleagues and I — especially in the humanities, where professors are perpetually anxious about falling enrollment — find ourselves rattling off the skills our courses offer (“Critical thinking! Clear writing!”), hyping our products like Apple Store clerks.
I teach intellectual history. Of course that includes skills: learning to read a historical source, interpret evidence and build an argument. But cultivating historical consciousness is more than that: It means helping students immerse themselves in a body of knowledge, question assumptions about memory and orient themselves toward current events in a new way.
If we describe college courses as mainly delivery mechanisms for skills to please a future employer, if we imply that history, literature and linguistics are more or less interchangeable “content” that convey the same mental tools, we oversimplify the intellectual complexity that makes a university education worthwhile in the first place. We end up using the language of the capitalist marketplace and speak to our students as customers rather than fellow thinkers. They deserve better.
“When kids come from backgrounds where they’re the first in their families to go to college, we have to take them seriously, and not flatter them and give them third-rate ideas,” Mr. Furedi told me. “They need to be challenged and inspired by the idea of our disciplines.” Assessment culture is dumbing down universities, he said: “One of the horrible things is that many universities think that giving access to nontraditional students means turning a university into a high school. That’s not giving them access to higher education.”
Here is the third irony: The value of universities to a capitalist society depends on their ability to resist capitalism, to carve out space for intellectual endeavors that don’t have obvious metrics or market value.
Consider that holy grail of learning outcomes, critical thinking — what the philosopher John Dewey called the ability “to maintain the state of doubt and to carry on systematic and protracted inquiry.” Teaching it is not a cheap or efficient process. It does not come from trying to educate the most students at the lowest possible cost or from emphasizing short, quantifiable, standardized assignments at the expense of meandering, creative and difficult investigation.
Producing thoughtful, talented graduates is not a matter of focusing on market-ready skills. It’s about giving students an opportunity that most of them will never have again in their lives: the chance for serious exploration of complicated intellectual problems, the gift of time in an institution where curiosity and discovery are the source of meaning.
That’s how we produce the critical thinkers American employers want to hire. And there’s just no app for that.
Molly Worthen (@MollyWorthen) is the author, most recently, of “Apostles of Reason: The Crisis of Authority in American Evangelicalism,” an assistant professor of history at the University of North Carolina, Chapel Hill, and a contributing opinion writer.
And for the assessment mavens in the house, here is an opposing point of view from an assessment bureaucrat (note the Ph.D. in educational psychology). Be sure to read the comments: most are negative, but those in favor of assessment say the problem is not assessment itself but its implementation (which seems to throw the blame back onto the faculty). My favorite comment, posted by a reader calling himself Yiddishist:
"For all our new vocabularies, which have been trying to replace the feeling of twilight melancholy with apt psychological and neuro-chemical terms, there is still something that eludes us." Andrei Codrescu
What eludes us is that which, in education, cannot be captured through "rigorous standards for ascertaining student learning." That elusive element in education, that engagement of the mind, will always flourish despite all efforts to render it thoroughly explicit. When we recognize that this elusive realm ultimately sustains education, we may begin to understand the words of the great philosopher who said that human beings cannot know truth, but they can embody it.
http://www.insidehighered.com/views/2018/03/01/assessment-isnt-about-bureaucracy-about-teaching-and-learning-opinion#disqus_thread
It should be noted that the writer of the first letter published in the Times is the President of the "American Council on Education", which is:
"...the major coordinating body for the nation’s colleges and universities. We represent nearly 1,800 college and university presidents and the executives at related associations [emphasis added], and are the only major higher education association to represent all types of U.S. accredited, degree-granting institutions: two-year and four-year, public and private. Our strength lies in our loyal and diverse base of member institutions, 75 percent of which have been with ACE for over 10 years. That loyalty stands as a testament to the value derived from membership. We convene representatives from all sectors to collectively tackle the toughest higher education challenges, with a focus on improving access and preparing every student to succeed."
Just a lobbying group for administrators. Nothing about teaching anyone (other than teaching administrators how to milk the system).
To the Editor:
Re “No Way to Measure Students,” by Molly Worthen (Sunday Review, Feb. 23), criticizing “a bureaucratic behemoth known as learning outcomes assessment”:
Learning assessment in higher education is simply an effort to document that students have indeed learned something. More work for faculty? You bet. It’s a lot harder than giving out the As, Bs and Cs that have been the traditional measure of student success. But it’s also far more meaningful for students, parents, policymakers and employers.
As higher education costs climb and student borrowing increases, it should come as no surprise that colleges and universities are under more pressure to demonstrate what students have gained. Thanks to the work of many dedicated faculty members and accreditors, colleges and universities are providing a richer and more complete picture of student learning than in the past. This is important and worthwhile.
Sure, we can always do better. But the demand that colleges assess learning will not slacken. One hopes faculty members will lend a hand to these efforts.
TED MITCHELL, WASHINGTON
And one more assault on the Assessment Industry that holds us hostage. See the Chronicle of Higher Ed:
An Insider’s Take on Assessment: It May Be Worse Than You Thought
By Erik Gilbert, January 12, 2018
https://www.chronicle.com/article/An-Insider-s-Take-on/242235