Every attempt to manage academia makes it worse

March 17, 2017

I’ve been on Twitter since April 2011 — nearly six years. A few weeks ago, for the first time, something I tweeted broke the thousand-retweets barrier. And I am really unhappy about it. For two reasons.

First, it’s not my own content — it’s a screen-shot of Table 1 from Edwards and Roy (2017):

[Screenshot of Table 1 from Edwards and Roy (2017)]

And second, it’s so darned depressing.

The problem is a well-known one, and indeed one we have discussed here before: as soon as you try to measure how well people are doing, they will switch to optimising for whatever you’re measuring, rather than putting their best efforts into actually doing good work.

In fact, this phenomenon is so very well known and understood that it’s been given at least three different names by different people:

  • Goodhart’s Law is most succinct: “When a measure becomes a target, it ceases to be a good measure.”
  • Campbell’s Law is the most explicit: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
  • The Cobra Effect refers to the way that measures taken to improve a situation can directly make it worse.

As I say, this is well known. There’s even a term for it in social theory: reflexivity. And yet we persist in doing idiot things that can only possibly have this result:

  • Assessing school-teachers on the improvement their kids show in tests between the start and end of the year (which obviously results in their doing all they can to depress the start-of-year tests).
  • Assessing researchers by the number of their papers (which can only result in slicing into minimal publishable units).
  • Assessing them — heaven help us — on the impact factors of the journals their papers appear in (which feeds the brand-name fetish that is crippling scholarly communication).
  • Assessing researchers on whether their experiments are “successful”, i.e. whether they find statistically significant results (which inevitably results in p-hacking and HARKing; see the sketch below).
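
As a minimal illustration of that last point (my own sketch, not anything from Edwards and Roy), here is a short Python simulation of one common form of p-hacking, optional stopping: a researcher studies an effect that does not exist, peeks at the p-value after every batch of data, and stops as soon as it dips below 0.05. The function names and the numbers are invented purely for illustration.

```python
# Sketch: why rewarding "significant" results invites p-hacking.
# If there is no real effect but the researcher peeks after every batch
# and stops as soon as p < 0.05, false positives far exceed the nominal 5%.
import math
import random
import statistics

def one_sample_p(xs):
    """Crude two-sided one-sample test of mean zero (normal approximation)."""
    n = len(xs)
    t = statistics.fmean(xs) / (statistics.stdev(xs) / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def experiment(peek, batches=10, batch_size=10):
    """Simulate a study of a non-existent effect (true mean = 0)."""
    data = []
    for _ in range(batches):
        data.extend(random.gauss(0, 1) for _ in range(batch_size))
        if peek and one_sample_p(data) < 0.05:
            return True  # stop early and declare "success"
    return one_sample_p(data) < 0.05

random.seed(1)
trials = 2000
honest = sum(experiment(peek=False) for _ in range(trials)) / trials
hacked = sum(experiment(peek=True) for _ in range(trials)) / trials
print(f"False positives, analysing once at the end: {honest:.1%}")   # near 5%
print(f"False positives, stopping as soon as p < .05: {hacked:.1%}")  # far higher
```

Run it and the honest analysis hovers near the nominal 5% false-positive rate, while the peeking version comes out far higher. Nobody needs to intend fraud for this to happen; the incentive to report a “success” is enough, which is exactly why rewarding statistical significance is so corrosive.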

What’s the solution, then?

I’ve been reading the excellent blog of economist Tim Harford for a while. That arose from reading his even more excellent book The Undercover Economist (Harford 2007), which gave me a crash-course in the basics of how economies work, how markets help, how they can go wrong, and much more. I really can’t say enough good things about this book: it’s one of those that I feel everyone should read, because the issues are so important and pervasive, and Harford’s explanations are so clear.

In a recent post, Why central bankers shouldn’t have skin in the game, he makes this point:

The basic principle for any incentive scheme is this: can you measure everything that matters? If you can’t, then high-powered financial incentives will simply produce short-sightedness, narrow-mindedness or outright fraud. If a job is complex, multifaceted and involves subtle trade-offs, the best approach is to hire good people, pay them the going rate and tell them to do the job to the best of their ability.

I think that last part is pretty much how academia used to be run a few decades ago. Now I don’t want to get all misty-eyed and rose-tinted and nostalgic — especially since I wasn’t even involved in academia back then, and don’t know from experience what it was like. But could it be … could it possibly be … that the best way to get good research and publications out of scholars is to hire good people, pay them the going rate and tell them to do the job to the best of their ability?

[Read on to Why do we manage academia so badly?]

References

  • Edwards, Marc A., and Siddhartha Roy. 2017. Academic research in the 21st century: maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science 34(1):51–61.
  • Harford, Tim. 2007. The Undercover Economist. Abacus.

Bonus

Here is a nicely formatted full-page version of the Edwards and Roy table, for you to print out and stick on all the walls of your university. My thanks to David Roberts for preparing it.

77 Responses to “Every attempt to manage academia makes it worse”

  1. Bryan Riolo Says:

    Simply put: hire good people, pay them well, and get out of the way. Works virtually all the time.

  2. Fair Miles Says:

    Can’t be done, and you certainly understand why. I’m pretty sure you can illustrate this for us with tons of examples of very useful features that have become useless after the environment has changed. So they disappeared [and, luckily, remained in the fossil record ;) ]

    The problem lies in “good people” and “hire them to do the job”. Who’s to say? How? Are you able to sustain an “every-good-one-in” policy? How much time, and how many jobs, will you hire the good ones for? How do you evaluate whether their job is done? Can you spend without periodic checks?

    I think there is still too much fear of returning to the “problems of the old subjective paradigms (e.g., old-boys’ networks)” [Edwards & Roy 2017]. Quantitative evaluations claimed “objectivity”, with the ultimate goal of establishing a ranking algorithm that can replace experts’ opinion. Scrutiny under anonymous peer review commanded by prestigious journal names points in that same direction (“it’s not us judging you: it’s the scientific system providing objective truth out there”). Of course that is in part an illusion (e.g., I.F. algorithms are obscure, institutional/local rules are set by appointed experts anyway, citations grow within old-boys’ networks), but it has somehow provided *openness* (something we agree to support on related themes) to the hard, difficult task of evaluating people and their performance.

    Managed science [see “Solari et al. 2017. La ciencia administrada. Sociología y tecnociencia 2:30–55” if you can grasp some Spanish] forces you to make such decisions. Now add academic capitalism, scientific acceleration and hypercompetition (knowledge is cheaper, there are more people, resources are finite). *This* is the environment. “Good people doing a good job” just can’t survive anymore…

    If you agree on this evolutionary scenario, would you bet on a slow decay, on a mass extinction or on a sudden change in environmental conditions?

    [And you already used the title “It’s all just too awful”. Ha!]

  3. stpiamce Says:

    Miles, I laughed at your reply. It’s all very clever…

    What are your suggestions for moving forward and solving the problems that currently exist? Seems to me that we do kinda want some good people doing a good job.

  4. David Whitlock Says:

    The “problem” with the “hire good people” idea is that you have to be able to tell who is good and who isn’t. The only way you can do that is by actually understanding what the person is supposed to do, and then evaluating how they go about doing it.

    If you don’t understand what they are supposed to do, you will never be able to figure out who is good and who isn’t.

    This is the problem with non-scientists trying to evaluate the “quality” of scientific work. If you don’t understand the science, you can’t evaluate how good it is.

    The problem is the people in charge of science funding are not good at picking who is a good scientist because they don’t understand what the scientists they are funding are doing. They don’t understand the science, so they try to come up with all kinds of heuristic metrics that don’t involve understanding the science.

    These will always fail because they don’t measure the quality of the science. You will get a non-scientist’s idea of what a scientist is.

  5. Mike Taylor Says:

    The “problem” with the “hire good people” idea is that you have to be able to tell who is good and who isn’t. The only way you can do that is by actually understanding what the person is supposed to do, and then evaluating how they go about doing it.

    I am not convinced that is a problem.

    If you don’t understand what they are supposed to do, you will never be able to figure out who is good and who isn’t.

    That is why you want to use people who do understand. Trying to come up with numbers to use instead of understanding is surely doomed.

  6. ferniglab Says:

    There is no problem with ‘hire good people’; the larger problem is that those (= politicians and senior management) who make judgements and demand accountability are often those who are most corrupt and untalented.

  7. Pat Corry Says:

    But who (or WHAT, these days) ‘judges’, perceives, or reckons about the fates of others? The bottom line only appears to be ability and associated talents, when it is actually more about whether or not the interviewer ‘feels’ you are right, or likes you, or whether you are sympathetic to their ‘minority’ stake. When certain professions are filling up with ‘the bad’ (e.g. traumatised and yet ‘healed’ types getting jobs in counselling, or “forgiven not forgotten” criminal types getting bureaucratic decision-making roles in “welfare”, etc.), who are they going to hire? People like themselves is the politically incorrect answer. Regulate and test the regulators while they regulate themselves regularly, I say!!!!


  8. The managerialism of academia is very evident on this side of the Atlantic. We’ve clearly noticed more and more focus on ‘objective’ metrics, metrics and more metrics… Who will be the first to introduce Six Sigma into academia…


  9. Reblogged this on Gestión y Estudios Organizacionales/Management and Organization Studies and commented:
    This is truly important! We have been talking about this for a long time. We need to move on ASAP!

  10. mrobmsu Says:

    The problem comes when those doing the hiring (or policy making, or grant awarding, or tenure deciding, or…) don’t have the background, education, or experience to determine who is “good” at doing what you are hiring them to do, and then try to “evaluate” what they do based on “metrics,” or “improvement,” or some other technocratic idea.

  11. Greg Flanagan Says:

    “… could it possibly be … that the best way to get good research and publications out of scholars is to hire good people, pay them the going rate and tell them to do the job to the best of their ability?” This is true in all endeavours. One thing needs to be added: “the job” – the organization needs a clear mission, and the person added needs to be motivated by this mission. So with a clear mission, spend careful time in recruitment, treat the new person well (i.e. full-time, secure positions with appropriate benefits), and then let them get on with the task. The results are phenomenal.
    Unfortunately, this was lost in academe decades ago. The field is rife with part-time insecure positions, poorly paid full-timers carrying more load, an unclear mission, and micro-managing by a new and vastly increased administrative structure.

  12. Erika Says:

    Reblogged this on Erika Gisela Abad and commented:
    I’ve been thinking about this for quite some time. I had to force myself to stop checking emails and reassess my writing goals because I felt I was losing the heart in my work. I will read Edwards and Roy’s work on academic research given similar concerns I have about the integrity in my own research production.

  13. Jon Says:

    The drawback of that option is that hidden biases (such as discrimination against women) have an easier time hiding and perpetuating.

  14. Mike Taylor Says:

    That is certainly true, Jon, and whatever an ideal approach looks like, it will certainly include some way of mitigating such biases.

  15. Simon Tanner Says:

    Define “good people” in a saturated market, and then define “going rate”, and further what the “job” is or what “best ability” means, and then we can have a meaningful debate. The problem is all those are subjective terms that have tangible effects on whether money should go to STEM or Humanities, experimental or theoretical research etc etc. Of course the moment we start defining the terms we set measures and targets and back around we go…

  16. Mike Taylor Says:

    All legitimate questions, Simon. Here are some stabs at some answers:

    1. “Good people”: I have a fair idea who the “good people” are in my field. Those more deeply embedded in it have a better idea. I think you could make a good first approximation by seeing what newly minted Ph.Ds’ supervisors think about them. Obviously you’d want some checks and balances here.

    2. “Going rate”: I don’t believe that would be too hard to determine. I’m not even completely sure that this is as real an issue in academia as it is in banking (the field that Tim Harford was commenting on in the quoted sentence).

    3. “Best ability”: I think most of us know when we’re doing our jobs properly and when we are not.


  17. You cannot count on good academics just to do their jobs well. This would not justify hiring multiple layers of administrators to measure and manage them. :-)

  18. DrKP Says:

    Reblogged this on karenpriceblog and commented:
    Great questions for complex systems. Whilst this references mostly performance metrics, this pervasive buffoonery is also applied to health.
    It lacks understanding of the sector and is misapplied.
    Good to ponder, and to rebut the bean counters, who are necessary but should never be in charge. IMHO of course. You may have a different view. Enjoy.


  19. […] deal from the third chapter, about the inefficiency of merit raises, whose points are supported by this recent article on metrics. And I am fascinated with the poor behavior of some professors I know in the 1995 Yale […]

  20. Glen Cook Says:

    One thing that is not being discussed is the Affirmative Action Output failure in Academia. I have never had, in forty years, a positive response in education to the following: You have to have an extremely delicate neurosurgery type of operation. You have two choices: a white doctor who is brilliant and slaved to become so, and a black doctor who used Affirmative Action to get where he is. Which doctor do you choose for your operation to try to save your life? This directly affects testing and management quality in education.

  21. Yvonne Says:

    What is the problem? Why should I teach people who are not customized with academic principles and values to take over my job! Let them pay the college ‘money’ to follow the academic curriculum. If this proposal is a trial to solve the problem of underestimating ‘non-academic’ courses, then the proposal implies a non-valued presupposition.

  22. Marko Attila Hoare Says:

    I entirely agree with this article. What you need is enlightened, intellectually sophisticated university management hiring good academics for their own intrinsic worth, as opposed to having the hiring dictated by metrics, league tables, market considerations etc. Yes, of course these are subjective qualities, but that doesn’t matter: let universities hire on the basis of their own respective subjective, arbitrary evaluations of what ‘good’ is. Those that do it well will earn their own reputations, and you’ll generate diversity in place of standardisation, conformism and obsession over targets. Academics will no longer be forced to adhere to a single, bureaucratic, narrowly defined standard of what ‘good’ is, and will be more free to be themselves, which will mean a more vibrant academic culture.


  23. […] Every attempt to manage academia makes it worse […]

  24. Mike Taylor Says:

    DrKP, I think you are quite right that the root of this problem is that people responsible for finances, who should be working for academia, have somehow ended up in charge of it.

    Glen, I hope you won’t object if I say that I’d prefer that this thread not get hijacked by the complex and emotive subject of affirmative action. I’m not saying it isn’t important; but it’s really a quite different subject from the present one.

    Yvonne, I am afraid I can’t quite follow what point you are making.

    Marko, what you say makes sense to me. I suppose this is pretty much how universities were run before the present cult of managerialism took over. The challenge would be figuring out how to return to the best parts of the old system without also bringing back the sexism, racism, old-boy network and suchlike. (I don’t want to give the impression that, just because I see horrible flaws in the present approach, I think just going back to How It Was In The Good Old Days is a good answer.)

  25. Allan Rasmusson Says:

    There is an interesting consequence of the incentive “Researchers rewarded for increased grant funding”, in that the size of grants ends up on CVs. This has outrageous consequences. Imagine two PIs doing projects leading to exactly the same conclusions, with the same strength of stats. One uses 1 million in funding, the other 2 million. Then the second one gets promoted as the better researcher, with more grant money on the CV.

  26. tmlutas Says:

    Every attempt to uniformly manage academics on one metric is doomed to failure, but if you measure on all fronts and allow the manager to pick the weighting formula, it’s very hard to tune performance.

    Who is the customer of the University? Are there enough customers to confuse those who would tune their behavior to artificially please and therefore get ahead at the expense of actual quality?


  27. yes, the answer is yes
    :)

  28. Another Mike T. Says:

    Isn’t the problem here that both sides have a legitimate point? On the one hand, assessing quality in a field requires judgment, fueled by expertise in the field. On the other hand, relying on the judgment of experts allows for abuses (as in “The Good Old Days”) by those who attain the status of experts, and also decreases accountability to the non-experts (who are often funding the endeavor). I’m not thrilled with metrics either, but to do better one has to first understand the problems they were trying to solve.

  29. jilly Says:

    Reblogged this on fluffysciences and commented:
    Very important stuff when it comes to measuring academia.

  30. David Marjanović Says:

    1. “Good people”: I have a fair idea who the “good people” are in my field. Those more deeply embedded in it have a better idea. I think you could make a good first approximation by seeing what newly minted Ph.Ds’ supervisors think about them. Obviously you’d want some checks and balances here.

    Those more deeply embedded in it have, more often than not, contradictory ideas about who the good people are. These ideas tend to align with whom they like or who they’re most impressed by for reasons that may or may not have anything to do with science.

    It’s like a job interview: a job interview rewards rhetoric, social skills, and how much the interviewer likes you, not so much what the job is about.

    2. “Going rate”: I don’t believe that would be too hard to determine. I’m not even completely sure that this is as real an issue in academia as it is in banking (the field that Tim Harford was commenting on in the quoted sentence).

    Where I come from, academic salaries are set by federal law. :-| Professors are not paid by universities, but out of the federal budget; universities cannot offer more money to people as an incentive.

    (At least that’s true of the publicly owned universities. There are so few private ones that I don’t even know.)

    One thing that is not being discussed is the Affirmative Action Output failure in Academia.

    Affirmative action is an American thing. This thread is about global issues, not local ones.

    (Besides… somehow I doubt that affirmative action alone can make anyone a neurosurgeon.)

    Who is the customer of the University?

    This question only makes any sense for private for-profit universities, which are common in the US but rare elsewhere.


  31. Hmm. Metrics can be gamed, but metrics can also be used to make sure people like Glen Cook don’t make bad decisions based on bad ideas. I don’t know what the answer is to any of the problems in this post, but I think the existence of that comment alone speaks to the need for some kind of measures in academia, so that groups of people who’ve had to fight really hard to carve out a space in research and teaching that other people don’t want them to have, for no good reason, don’t have to keep fighting so hard forever. Ugh.

  32. stevepostrel Says:

    There was a relevant systems-level difference in the past that had to do with institutional vs. professional orientation of academia and how funding worked. As I understand it, in the Old Days individual evaluation was left up to the members of one’s institution to carry out in whatever nuanced and/or prejudiced/political manner they chose. Even outside letters weren’t that important for tenure, etc. But government or foundation funders used various metrics to evaluate institutions–what kind of publications were they generating, what discoveries had they made, etc., and the people in charge of making these evaluations knew something about the areas they were evaluating. Information overload was minimized because the outside funders didn’t have to rank and grade each person, just the salient “best of” output of the whole institution. In turn, that put pressure on the institutional leaders to retain/promote/facilitate the work of the people whom their intimate knowledge suggested were most likely to help the institution as a whole produce valuable knowledge, even if they weren’t publishing scads of papers.

    That setup would have its own manifold dysfunctions, from old-boyism to seniors whose feelings of inferiority, insecurity or narrow-mindedness blighted the careers of talented investigators who happened not to “fit” into the situation. But if there were enough competition among institutions and a modicum of mobility between them, these problems could be mitigated.

    Here’s an article about the Medical Research Council’s Laboratory for Molecular Biology that seems to show they’ve been able to retain this model with some success:

    Another reason for LMB’s success may be the risky, hard-to-solve problems the researchers are encouraged to tackle. “At the LMB, you can approach big questions, like how is gene expression controlled?” says Lori Passmore, an LMB group leader studying the function and assembly of protein complexes. Passmore was a postdoc in Ramakrishnan’s lab.

    LMB researchers can afford to ask big questions: They don’t have to teach, and they are free to do whatever research interests them. “There’s a tradition of trying to hire smart people and then basically leaving them to it,” says Leo James, a group leader in the Protein and Nucleic Acid Chemistry Division. There’s less pressure to constantly publish papers. “It’s an environment where you’re encouraged to go after the big thing instead of having to have a publication every 3 months,” James says.

    LMB’s funding model also encourages risky, long-term research and the lab’s collaborative culture. It receives stable core funding, which is shared by all research groups and reviewed every 5 years. The divisions each get a share of space and the budget, which they distribute at their discretion, Pelham says. Group leaders can expect a couple of MRC-funded positions for postdoctoral researchers or technicians and an average of two Ph.D. students. Additional postdoctoral researchers may be funded from personal fellowships. Many group leaders also have external grants that allow them to hire more people, but they must get permission before applying to ensure there’s enough space for expansion. Research projects are funded from the core MRC budget, subject to decisions by Pelham and the heads of the laboratory’s four divisions. The process is faster than making a grant application, James says.

    Ramakrishnan says LMB’s secure, long-term funding allowed him to focus on his Nobel Prize–winning research, which he began at the University of Utah in Salt Lake City. “I didn’t put all my energy into that project until I had the security of the LMB,” he says. Otherwise, he’d have spread his efforts over several projects to reduce the risk, and “probably missed the bus.”

    The success of every researcher contributes to the lab’s funding prospects, which encourages cooperation, Ramakrishnan says. “One thing I’ve noticed here is, if someone does well, there’s no jealousy. We’re all in the same boat.”
    http://www.sciencemag.org/careers/2011/07/nobel-prize-winning-culture

  33. Juan Says:

    There will always be ‘good’ hires and ‘bad’ hires. The good hires don’t need to be managed. A good manager will make the best of the bad ones.

  34. Dar Orko Says:

    I think quality doesn’t really need an explicit measure. It turns out by itself, by fulfilling the expectations of those who otherwise would assess it by coming up with measures.

  35. Antony B Says:

    The real challenge with assessing research is that when you’re at the cutting edge there are no established guidelines to say what’s good and what’s bad. Universities build their reputations over the long term where the full impact of their researchers’ work can be evaluated in the fullness of time. This is deeply disconcerting to managers who want their next promotion based on short-term success, or administrators and bureaucrats who run whole departments devoted to “performance measurement”… but they currently have the balance of power at many unis.

  36. Mike Taylor Says:

    Thanks, Victoria, for an opposing (or at least not wholly agreeing) perspective. The last part of your comment suggests that you’ve personally experienced the upsides of metric-based evaluation — or at least, the downside of non-metric-based evaluation. Have you ever written about this in more detail? I’d be interested.

    And thank you, stevepostrel, for some interesting historical background and a fascinating and encouraging glimpse at an alternative approach. It’s good to know that places like LMB exist, and are doing well, and I particularly like the insight that (in addition to all the unwelcome effects documented in the table) metric-based approaches tend to produce short-term, safety-first approaches.

    Juan: I agree; and what’s more, a good manager will also (probably regretfully) move the bad hires on if they don’t improve over time.

    Dar, I am not sure whether to take your comment at face value, as an endorsement of non-metric-based evaluation, or as a gently sarcastic critique of such approaches. Certainly, aiming at “fulfilling the expectations” of assessors can have its own unwanted consequences.

    Antony, I think you nailed the core issue: everything of importance that happens in research is done over the long term; yet every metric is measured over the short term — sometimes at the behest of people who have come in from outside academia and simply do not understand how it works.

  37. I have a doctorate Says:

    I believe that these current metrics are bad and do not work, too. However, the presented “solution” is not a solution. It is actually the problem statement (as pointed out by many here already): “How do we hire good people into academia?”.

    Now, back in the recent old days (1970s onward), the system worked pretty much like this: you graduated with a masters, and continued as a PhD candidate. You got some supervisor. You likely did a decent job, which was not much noticed. If you were lucky, you found some good contacts around your field. After the PhD, you basically had two possible tracks: 1) your home uni hires you to continue, mostly based on your supervisor’s suggestion or money, or 2) you used your contact network and got hold of a position at some other uni. Basically, at this point the “good” evaluation was done by your network. Your supervisor was probably the only one who had some idea how good you actually were. In the vast majority of cases you then continued to grow your network; you got a lot of friends, and maybe a couple of enemies, too.

    An associate professorship came open, either in your current uni or some other. In both cases, your research was evaluated. Suggestions from others played a major role in the evaluation; after all, they should know how you work. If you had done at least some work, you probably had your name on many publications with varying contributions, too. You looked good on paper, you had endorsements from three of your colleagues (read: “best mates”), you were “a good person(ality)”. You got the position. And so on.

    The problem is that you were never actually evaluated in this process on your skills and ability to produce science. The only thing that mattered was who you knew and whether they were willing to endorse you. Even the publications were (for the majority) more tied to who you knew than to what you actually did. As a result, academia had a lot of these people, who pushed through the whole pipe without ever really producing any advancement for science by their own input. OK, the job is collaborative. We can (I think) accept these kinds of paths, too. But the problem arises when this kind of person actually fills the position in favour of that other, more brilliant person, who does not have the huge network and has worked maybe even in isolation to produce maybe the biggest revolution for that field ever (this has happened multiple times in history) but is just lacking funding for the final metres to push his/her research out to the public. That is a tragedy.

    Now, there are plenty of opposing examples of this, from various people who have managed to actually do a lot of good work and pushed through the pipe. But (in my experience) for every good professor, there was at least one professor who had not really contributed anything to the science in decades. They had just been lucky with good networks at the right time. They might still be considered good ones by their network, if they continued to collaborate. Or they might have got depressed due to a change in the field or some other issue and just hung around. And obviously there are hundreds of other unique stories and paths to add to this.

    I argue that anyone who has spent long enough in academia knows this problem. And the more they have contributed themselves, the more likely they are to be aware of it. This is just basic human nature. You get sad/mad/envious/frustrated when you realize you are the one doing the work and someone else who is just hanging around scores the points too.

    Back to the original problem: obviously the current metrics do not solve this issue at all. You can still exploit the system to gather yourself author positions on publications and get good endorsements. In addition, those metrics may indeed even worsen the quality, partly due to the fact that now these “network only persons” also need to start producing more. And what do you know (here’s the climax and the core point), some of them are actually very bad at their work.

    Unfortunately, I do not have a solution for how to hire good people into academia. However, I do believe in science, and I believe that science as a self-fixing process can figure this out. We just need to apply that process to this problem (if we consider these network persons a problem; I am not sure myself whether that really is a problem… but it is costly to society nevertheless).

  38. Fair Miles Says:

    Excellent description of the “old-boys’ networks”.
    Contrary to some comments above, I don’t think the main points worth discussing today are to measure vs. not to measure, or managers yes vs. managers no. To return to my analogy above, that is just the evolutionary path we are on.
    If we agree that a change of scientific environment would be beneficial for the survival of some features we miss (or don’t want to keep losing), I think it is more interesting to (be able to) assess what to measure and how to measure it.
    Our current criteria (with, apparently, notable exceptions) tend to be extremely unidimensional (written technical productivity in English) and our indicators poor. Academia is (was / should be) much more than that, and almost anyone with some time in it knows it.
    From my point of view, the combination of simplistic evaluation criteria and poor/corrupted indicators (if only of that single axis of assessment) erodes the system itself, because it devalues other activities and abilities that are needed and assumed (by the system itself) to be working perfectly. E.g., peer review, education, other forms of scientific communication, assessment, outreach. No surprise that you end up with groups of people (researchers, information crunchers, editors, managers, teachers, policy makers) unaware of the realities and complexities of the others.
    Any evaluation/prestige scenario will favour some institutions, researchers and situations over others. That is inevitable. Now, is it reasonable to expect that the same ones that reached their favourable positions with these criteria will feel an urge to change them? What kind of bleak future do you have to paint for them to realise the path we are on? Can analyses such as Edwards & Roy’s above be disregarded by The Scientific System as ‘fake news’ or ‘alternative facts’?

  39. Musgrove Says:

    “as soon as you try to measure how well people are doing, they will switch to optimising for whatever you’re measuring, rather than putting their best efforts into actually doing good work.”

    OK, so what’s the problem, if every single human behaves this way (which of course we don’t, which is another central problem of the argument)? Aren’t we measuring what they’re supposed to be doing? Which we’d like optimized? And for them to put their best efforts into? And do good work?

  40. mtrasel Says:

    But then the managers would be out of their bullshit jobs!

  41. Nima Says:

    When the top people in academia are not “good people”, there will be very little hiring of “good people”, as they are not wanted. Fish rots from the head down. I can only speak from my own experience: in my university I would estimate 60% of the admin staff were utterly useless and irrelevant freeloaders clogging things up and MISmanaging funds, and the quality of the education would be no worse (and possibly even improve) if they got the Mussolini treatment. So much dead wood. They build a $15 million Law School building when there’s no law program! They triple the campus police budget and stick intrusive cameras everywhere when there have been only two major violent crimes in the past 15 years, both by non-students. They let the once respected geology department shrivel to dust while siphoning $5 million to “politically correct” special interest programs that benefit fewer than 20 students on the whole campus – all to boost their USA Today metrics.

    The problem with nearly all of these metric incentives is that they in reality reward academic institutions for being more stringent herders and social engineers, whatever the cost or waste, not for actually finding talented and dedicated people. They only care for quantity and pleasing big newspapers and lobby groups, hence departments will cheat any way they can to compete for “who can be the best sheep herder” funding carrots. Standards fall, the job market is flooded with inferior applicants for research positions, the PhD is cheapened as a result, and yet tuition per student keeps jacking up despite increases in scale. It incentivizes cheating the system, not finding “good people”.

    I would rather have a few very passionate PhDs than millions of mediocre, robotic, “cram for the test” bean counter PhDs who don’t know what they want to do, are burdened by piles of debt, who simply try to tell the boss what they think he wants to hear, and don’t do any useful work.

  42. William Miller Says:

    “The challenge would be figuring out how to return to the best parts of the old system without also bringing back the sexism, racism, old-boy network and suchlike.”

    Hmm. Isn’t it likely that old networks would be sufficiently disrupted by time (retirement of people previously in hiring positions, etc.) that they wouldn’t just start up again? Especially given that any (new) change wouldn’t happen immediately.

    But I think this is just one facet of a wider problem – ‘degree inflation’ for jobs (you now need a Master’s degree for jobs that used to only require a Bachelor’s, etc.), standardized testing in public education, etc. Pressure to make decisions in a provably objective way leads to focus on basic metrics (or, in the case of jobs, just filters) that may not actually have much to do with the desired goal.


  43. […] previous post (Every attempt to manage academia makes it worse) has been a surprise hit, and is now by far the most-read post in this blog’s nearly-ten-year […]

  44. David Marjanović Says:

    tuition per student keeps jacking up

    Ah, America.


  45. […] * Every attempt to manage academia makes it worse. […]


  46. […] tip to Bronwen (thanks!): Mike Taylor “ Every attempt to manage academia makes it worse” I love the takeaway at the end of the post; because it’s true, and it […]


  47. “Simply put: hire good people, pay them well, and get out of the way. Works virtually all the time.”

    This is how we have been doing things in Germany for decades, if not for centuries. Micro-supervision is a waste of time; invite your researchers for cake and tea 4 to 8 times a year and make sure that they stay motivated.

  48. Hannah Says:

    “Simply put: hire good people, pay them well, and get out of the way. Works virtually all the time.”

    The trick is actually getting the right person. Too often the right person might be rubbish at interviewing, or your own bias and pre-conceptions get in the way. Full strength & challenge personality assessments have come so far that they can get rid of all that, making a great hire that fits your team brilliantly a risk-free process… it’s so frustrating not to see them being used more by startups.

  49. Michael Says:

    A commenter stated: (Besides… somehow I doubt that affirmative action alone can make anyone a neurosurgeon.)

    Maybe not a traditionally successful neurosurgeon, although simply obtaining an MD certainly happens. I know for a fact Affirmative Action produces black female PhDs all. the. time. And I suspect our last president’s path through academia was very similar, based on his own account and evidence.

    [I wavered before letting this one through moderation, as it seems to be all assertion and no evidence. Michael, if you do this again I’m blocking the comment. If you want to impugn Obama’s credentials, bring evidence. Same if you want to criticise Trump, for that matter. — Mike.]


  50. He doesn’t mention that the problem he is talking about hinges on the fallacy of misplaced concreteness, which Gregory Bateson defined simply as ‘eating the menu instead of the dinner.’

    As for his prescription, “hire good people,” how is one to determine who is ‘good’? Trying to answer that often leads right back to standards, metrics, indicators, and proxies. It also leaves unaddressed the question of what the mission of faculty is: research or teaching or….?

    Paying ‘the going rate’ has similar problems. How do you determine what the going rate is? Usually it is gauged by surveying a sample of the pertinent population. But the methods of sampling and statistical analysis again create proxy indicators for the totality of the thing that is sought.

    And how do you know if someone is doing the best they can? How do you relate observed performance with potential? Effective managers, or at least apparently effective managers, seem to focus rather on encouraging continuous improvement — on the theory that the only way to get to best is through ‘better.’

    But that runs into the barrier of tenure — which removes most incentive for improvement other than personal pride or ambition.

    Finally, if every effort to manage higher education makes it worse, is there any evidence that the absence of any management makes it better? Is it even possible for a large enterprise to be unmanaged?

  51. Mike Taylor Says:

    Haha, yes — “eating the menu instead of the dinner” is very good.

    You rightly point out that “Hire good people” is not a trivial problem. It’s not. It is, however, one that many institutions have found good solutions to, and which the application of metrics is in many cases making harder to get right rather than easier.

    The one good thing about metrics is that they are in some sense blind to racism and sexism, so they should help departments to make choices untainted by those factors. Unfortunately, as has been well documented, the metrics themselves are affected by all the same -isms: for example, papers submitted by lead authors with female names have a harder time in peer review than those with male lead authors. So it’s possible that the use of metrics is in fact impeding departments from making race- and sex-free recruitment choices.

    Finally: “And how do you know if someone is doing the best they can?” You don’t. But that’s OK, because you don’t want people to do “the best they can”. You want them to do good work but also retain an actual life, so that they don’t burn out eighteen months in. Again, it comes back to what you are optimising for: an impressive annual departmental output, or real progress made over the next decade or two.


  52. […] a great blog post doing the rounds today, titled “Every attempt to manage academia makes it worse“. Going through a number of examples of metric-based assessment, the conclusion is that […]


  53. Great food for thought.

    It makes me wonder what would be included if a few more columns were added to the chart.

    i.e., before “incentive” and “intended effect” one might presume there is an “identified problem” that leads us to search for solutions/incentives.

    For example, if the incentive is that “teachers are rewarded for increased student evaluation scores” one might presume that the identified problem is that enough students aren’t learning, or students aren’t learning enough (or simply put, that the aspiration is that students would learn more).

    One might also presume (based on the incentive) that teachers could do better at teaching, if only they were more motivated, and that money would motivate them more.

    If these truly are the assumptions, I think that is where the error lies: research shows that teacher autonomy is essential for creativity, and, if one presumes that creative teaching strategies might lead to greater learning AND autonomy promotes motivation, then a more fitting incentive might be to provide greater teacher autonomy and support teacher creativity.

    The measurement/assessment of student learning then, might be used to provide formative feedback to teachers, whom one presumes to be both talented and motivated without the additional financial reward.

    I wonder if sometimes we try to solve the right problem with the wrong incentive, and/or if we sometimes try to solve the wrong problem.

  54. Lee Hc Says:

    But who will judge the judges?


  55. […] This mentions the gap, in systems designed for education, between the effects they are expected to bring about and the effects they actually bring about: “Every attempt to manage academia makes it worse”. […]


  56. […] becomes the goal of a company, the greater the chance that the results are biased. A very good article (in English) lays out the basis of those biased results (called reflexivity by sociologists […]


  57. […] of a company, the probability is high that the results become biased. A very good article explains the origin of this bias (which sociologists call reflexivity) and mentions […]


  58. […] Every Attempt to Manage Academia Makes it Worse – Mike […]


  59. […] problem of “scooping” is a systematic one that is symptomatic of a larger problem within academia; one that I don’t feel thoroughly prepared to get into in this article. But I will offer […]


  60. […] or perish” model will always favour scientists that find the most effective ways to boost their own metrics and patently ignores many important criteria that should factor into a professor’s career […]


  61. […] Every attempt to manage academia makes it worse, Mike Taylor […]

  62. Jim Says:

    Instead of using one measure, use all the measures in aggregation. It’s a lot harder to game them all, I imagine.

  63. Mike Taylor Says:

    That is the goal I was aiming for in this paper.

  64. sexmama Says:

    I think it is dumb to just tell people to do the best job they can… Even with deans and administrators present, professors love to abuse their position and purposefully screw off on grading, even at “teaching oriented” universities!

  65. Mike Taylor Says:

    Well, sexmama, no-one argues that the “Hire good people, let them get on with it” approach is perfect. The question is whether it’s less imperfect than the present scheme. I suspect (but of course can’t prove) that it is. Suppose 10% of professors slack off and do only half as much work as they ought — then we lose 5% of the output we’d like to have. Will anyone argue that the administrative and bureaucratic burden of the present system costs us less than 5% of our effort? I doubt it.


  66. […] most-read post at the time of writing is Every attempt to manage academia makes it worse (with 214,438 views), followed by Elsevier is taking down papers from Academia.edu (62,695), […]


  67. […] “Once you start measuring people’s job performance, they will switch to optimizing for wh…” — is there a lesson in this for academia (via Pekka Väyrynen) […]


  68. […] of the profession. Don’t reflect fondly on the days of yore when philosophers could “hire good people” without the help of objective criteria, without emphasizing that those were days when hiring […]


  69. […] noticed a spike recently in people tweeting about the post Every attempt to manage academia makes it worse, and it made me wonder how it ranks among the most-viewed posts on this blog. Turns out […]


  70. […] try to measure so many things, but we are aware that measuring things in academia can incentivize perverse behavior and lead to weird side-effects. So, I’ve decided to measure something that isn’t commonly cared about in education and […]

