When I was in college at the University of Illinois (class of ’92), no one seemed to care all that much about what I was learning, including me.

My gen eds were heavy on large (like 1,500-student large) lecture courses I selected with maximum ease in mind. I do not remember anything from my biological science requirement, EEE 105, except what those E's stood for: Ecology, Ethology, and Entomology.

I just had to look up what “ethology” means.

No doubt, I was at a relative extreme when it came to crafting a degree plan that provided minimal resistance, but I certainly wasn’t alone in taking those 1500-person courses, and not only did the university not seem overly bothered about what we may or may not be learning, those courses were full semester after semester.

Things have changed. We now have something called the Multi-State Collaborative to Advance Quality Student Learning, “a large-scale project, involving 900 faculty members at 80 public two- and four-year institutions in 13 states.”

The goal is to find the "common understandings and measurements of some of the most important outcomes of a college education." The process utilizes what are known as "Value" rubrics (Valid Assessments of Learning in Undergraduate Education), examining real-life student artifacts like homework, essays, or problem sets and rating them according to how well students demonstrate critical skills.

George D. Kuh, a legendary figure in assessment, declared, "In terms of trying to assess authentic student learning, it's the most ambitious effort."

An effort that is needed, according to Professor Kuh, because, "We know less about what our students know and are able to do than virtually any other aspect of the enterprise. It's a national embarrassment."

For sure, looking at student work completed as part of a course sounds much better than trying to create standardized assessments that will somehow apply across all categories of student and institution. This type of assessment is much closer to the kind of self-assessment and reflection that I believe many instructors engage in, where we consider what we've asked students to do, and then what they've actually done.

I have worries, though, and I hesitate to even express them as I am not a legendary figure in student assessment, but here goes.

My first concern is that the massification and standardization of this kind of assessment seems likely to carry many bad unintended, but entirely foreseeable, consequences. Where there be metrics, there be rankings, and there be administrative meddling. While I am always eager to hear new insights and perspectives on teaching, this sort of initiative has the distinctive whiff of the administrative university, even if this particular inquiry is faculty-driven.

Part of this is that, when it comes to teaching, I firmly believe there is no such thing as a "best practice." There are "best principles," but the shape those principles take in the classroom is dependent on instructor, course, and students. It's why some professors can be enormously effective teachers even in a 1,500-person lecture (a rare, but real, skill), while others are maestros of the flipped classroom.

Rather than narrowing the discussion of students and teaching, we should be keeping it as diverse and exploratory as possible. This diversity is the most significant strength of the US system of higher education.

This particular inquiry could be equally effective at developing possible approaches to effective teaching at a much smaller scale, and it wouldn't run the risk of further commoditizing student learning.

My second concern is that I don’t believe that those student artifacts are necessarily meaningful reflections of student learning.

For one, what percentage of college assignments do we believe are constructed in such a way as to provoke the kind of work we believe is most important according to these Value rubrics?

Put another way, to find meaning in this student work, students must first find meaning in the assignment. In an era where students routinely express sentiments like "I love learning; I hate school," we're putting a lot of faith in the idea that those homework assignments are a reflection of students at their potential best.

For two, one thing I've learned, both as a student and as a teacher of writing, is that much of learning is invisible and reveals itself only with hindsight. It seems possible to me that we don't know how to "measure" learning because the most meaningful parts of learning aren't measurable.

Over and over again I've read student writing that has manifest problems on the page, but also demonstrates clear evidence that the struggle to express an idea to an audience has been engaged, and that it's only a matter of time and practice for the student's potential to be realized.

Do we have a rubric to capture the moment a student switches themselves into the “on” position?

I have no particular fondness for the educational benign neglect that I experienced as a student. My college education was low friction, but as I now know, friction means energy, and learning requires the generation of a little heat.

But there’s something to be said for the kind of freedom we used to take for granted.

Should the quality of my undergraduate education be measured by the writing I was producing upon graduation, the semester I became besotted with the woman who would become my wife, when, between senioritis and my desperate wooing, I could barely be bothered to spend any time or mental energy on my work?

Or should it be measured by the quality of my submission packet to graduate schools in creative writing two years post-graduation, when I took objectively terrible undergraduate stories and turned them into work less terrible enough to be admitted to a fully funded program?

Should the quality of my education be measured by this blog, or the books I’ve published?

Maybe it should be measured by the books I’ve written that aren’t publishable, or the half-finished, half-baked essays and articles that litter my hard drive.

This is why, rather than assessing student artifacts, I am a believer in orienting our institutions towards creating atmospheres and educational experiences that correlate to future well-being, such as those studied by the Gallup-Purdue index.

One of the questions the index asks is whether or not a student had “at least one professor in college who makes me excited about learning.”

Substitute "writing" for "learning," and I had two: Steve Davenport and Philip Graham. In their classes, they made me believe writing matters, and that it's something people like me could try to do.

I can’t imagine life without that education, and it sure wouldn’t have been captured in a rubric.
