Measuring Teaching Quality

The Government of Ontario, in its ongoing quest to reform its funding formula, continues to insist that one element of the formula needs to relate to "teaching quality" or the "quality of the undergraduate experience".  Figuring out how to do this is, of course, a genuine puzzle.

There are some, of course, who believe that quality can only be measured in terms of inputs (i.e. funding) and not through outputs (hi, OCUFA!).  Some like the idea of sticking with existing instruments like the National Survey of Student Engagement (NSSE); others want to measure quality through "hard numbers" on post-graduate outcomes like employment rates, average salaries and the like.  Still others are banging away at solutions involving the testing of graduates; HEQCO's Essential Adult Skills Initiative seems like an interesting experiment in this respect.

But there are obvious defects with each of these approaches.  The problem with the “let’s-measure-inputs-not-outputs” approach is that it’s bollocks.  The problem with the “hard numbers” approach is that unemployment and income among graduates are largely functions of location and program offerings (a pathetic medical school in Toronto would always do better than a kick-ass Arts school in Thunder Bay).  And while the testing approach is interesting, all that testing is a bit on the clunky side, and it’s not entirely clear how well the data from such exercises would actually help institutions improve themselves.

That leaves the old survey stalwarts like NSSE and CUSC.  These, to be honest, don't tell us much about quality or paths to improvement.  They did when they were first introduced, 15-20 years ago, but each successive survey adds less and less.  Pretty much the only reason we still use them is that nobody wants to break up the time-series.  But that's an argument against particular surveys rather than against surveys in general.  Surveys are good because they are cheap and easily replicable.  We just need a better survey, one that measures quality more directly.

Here's my suggestion.  What we really need to know is how many students are being exposed to good teaching practices, and at what frequency.  We know from various types of research what good teaching practices are (e.g. Chickering & Gamson's classic Seven Principles for Good Practice).  Why not ask students whether they see those practices in the classroom?  Why not ask students how instructional time is used in practice (e.g. presenting content vs. discussion vs. group work), or what they are asked to do outside of class?  And not just in a general way across all classes, the way NSSE does it (which ends up resembling a satisfaction-measurement exercise and doesn't give Deans or departmental chairs a whole lot to work with): why not do it for every single class a student takes, and link those responses to each student's academic record?
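
To make that concrete, here is a minimal sketch of what a single per-class response record might look like.  This is purely illustrative: the field names and question wordings are my own inventions, not part of NSSE or any existing instrument.

```python
from dataclasses import dataclass

# Hypothetical record of one student's report on one class.
# All field names and question wordings are illustrative only.
@dataclass
class ClassExperienceResponse:
    student_id: str          # key linking to the student's academic record
    course_id: str           # e.g. "HIST-200", including section
    term: str
    # Perceived shares of class time, as rough percentages
    pct_presenting: int      # instructor presenting content
    pct_discussion: int      # whole-class or small-group discussion
    pct_group_work: int      # structured collaborative tasks
    # Presence of practices in the spirit of Chickering & Gamson
    prompt_feedback: bool    # "I received timely feedback on my work"
    faculty_contact: bool    # "The instructor was available outside class"
    out_of_class_hrs: float  # weekly hours of work assigned outside class
```

The design choice that matters is the pair of keys at the top: because every record carries both a student identifier and a course identifier, responses can later be joined to grades and completion data class by class.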

Think about it: at an aggregate faculty or institutional level – which is all you would need to report publicly or to government – the results of such a survey would instantly become a credible source of data on teaching quality.  But more importantly, they would provide institutions with incredible data on what's going on inside their own classrooms.  Are certain teaching practices associated with elevated levels of dropping out, or with an upward shift in grades?  By tying the survey to individual student records on a class-by-class basis, you could answer exactly those questions.  A Dean could ask intelligent questions about why one department in her faculty seems to be less likely to use group work or interactive discussion than others, and see how that plays into student completion or choice of majors.  Or one could see how teaching patterns vary by age (are blended learning classes only the preserve of younger profs?).  Or, by matching descriptions of classes to more satisfaction-based instruments like course evaluations, it would be possible to see whether certain modes of teaching or types of assignment result in higher or lower student satisfaction – and whether the relationship between practices and satisfaction holds true across different disciplines (my guess is it wouldn't in some cases, but there's only one way to find out!).
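
As a sketch of the kind of analysis this linkage would enable (with synthetic data and invented column names, since no such survey yet exists), one could join responses to outcome records and compare completion rates across teaching practices:

```python
import pandas as pd

# Synthetic stand-ins for the two linked files (all values invented).
survey = pd.DataFrame({
    "student_id": ["s1", "s1", "s2", "s2", "s3"],
    "course_id":  ["HIST-200", "BIO-150", "HIST-200", "BIO-150", "BIO-150"],
    "pct_group_work": [0, 30, 0, 25, 20],  # student-reported share of class time
})
records = pd.DataFrame({
    "student_id": ["s1", "s1", "s2", "s2", "s3"],
    "course_id":  ["HIST-200", "BIO-150", "HIST-200", "BIO-150", "BIO-150"],
    "completed":  [True, True, False, True, True],
})

# Link each survey response to the matching course outcome.
linked = survey.merge(records, on=["student_id", "course_id"])

# Bucket classes by reported group-work share, then compare completion rates.
linked["group_work"] = pd.cut(
    linked["pct_group_work"], bins=[-1, 10, 100], labels=["little/none", "some"]
)
print(linked.groupby("group_work", observed=True)["completed"].mean())
```

A real analysis would obviously need proper controls (program, class size, entering grades), but the mechanics are just this: one join, then whatever cross-tabulation a Dean cares to ask for.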

So there you go: a student-record-linked survey focused on classroom experiences, administered on a class-by-class basis, could conceivably get us a system which a) provides reliable data for accountability purposes on "learning experiences" and b) provides institutions with vast amounts of new, appropriately granular data to help them improve their own performance.  And it could be done much more cheaply, and less intrusively, than wide-scale testing.

Worth a try, surely.

4 responses to "Measuring Teaching Quality"

  1. I am doing exactly this in my faculty. Got a small teaching grant to ask my colleagues across the faculty how they spend class time, and now we will ask students themselves what they experience/what they think we do in class. Should be very interesting. Results will be kinda like asking married couples, separately, how much sex they have.

    Not a perfect measure of quality, but a much better measure than the ones we use now.

    (Previous dean suggested I drop this line of inquiry. I obeyed; now, we have a new dean… and I have tenure.)

  2. How would a survey like this account for extrinsic constraints on the choice of pedagogical methods? Not all pedagogical methods are feasible in large classes, for instance, especially not in the absence of substantial TA support.

    Also, are students really going to be any good at estimating how much time their class spends on pedagogical technique X? In general, people who aren't formally tracking their time are terrible at estimating their time allocation. And we know from surveys of *faculty* that they routinely mis-estimate how they allocate class time to different pedagogical techniques (https://bioscience.oxfordjournals.org/content/61/7/550.full.pdf). Why should students be any better at estimating how class time is spent? And I wouldn't assume that those estimation errors will be random (unbiased) and so cancel out in aggregate. I suppose one way to deal with this is to ask students for only very rough estimates of how class time is allocated (2/3 or something). Or just ask whether particular pedagogical techniques were used at all.

  3. Why is quality of experience solely related to quality of teaching? What about ease of institutional use: navigating institutional systems, and getting expedient, timely bureaucratic responses to student concerns instead of the "runaround" (unanswered emails and voicemail messages, students bounced between different offices instead of getting the help they need to switch programs), and so on? These kinds of stresses influence a student's experience and academic performance as well. Does it make sense for the government and taxpayers' money to support institutions that make higher education more difficult and burdensome for our students?

  4. Not sure that the 'large class excuse' holds anymore (thank goodness!), but I do have one tiny concern. It certainly doesn't take away from the merit of your proposal, it just adds a small factor that would need consideration: my research supports the notion that students tend to rate courses in comparison to the others they are taking at the time. Therefore, a course that employs a tiny bit of active learning would score higher if the student is taking a suite of courses in which active learning does not appear.

    Just a tiny thought.
