Half an Hour,
Mar 27, 2026
I'd like to offer a rejoinder to Junhong Xiao and David CL Lim's paper Is AI the solution to the problems that make higher education "ill" in the first place? Towards a technology-agnostic, future-proof approach.
Near the end of the paper is a section titled "Key takeaways for policymakers and institutional leaders". This post addresses those takeaways specifically.
1. AI is not a cure for structural problems in higher education. The two persistent "illnesses" of higher education remain uneven quality and inequitable access. These challenges are rooted primarily in structural underfunding rather than technological absence. AI cannot compensate for chronic investment gaps in infrastructure, staffing, and institutional capacity.
There's a lot going on here (and to be fair, in this point as with the others a lot of this is explained in more detail and argued for in the main part of the paper). Let's analyze a bit.
Let's first consider the "ills" that frame the paper as a whole: uneven quality and inequitable access.
Uneven quality
What do the authors mean by this? There isn't a lot of detail in the paper, though a few things are mentioned: "existing curricula and/or courses are outmoded and need updating," and "inefficient administration, ineffective pedagogy and learning assessment, the disempowerment of teachers, limited flexibility, and a failure to cultivate skills deemed relevant for the future."
The use of the word "uneven" suggests that the authors believe that in some cases these are addressed, and in others they are not. The overall argument of the paper is that addressing these is mainly a matter of cost. They argue that AI is not needed in order to address issues of quality.
Let's break down the concept of 'quality' to mean the following:
- Currency - the content and skills taught are up-to-date and take into account recent developments in the field
- Relevance - the content and skills taught will offer future benefits to students (and/or society), for example, by improving employment prospects
- Pedagogy - the method and process of instruction employed actually advance student knowledge and skills in the field
- Assessment - evaluation of student knowledge and skills is both accurate (measures actual knowledge and skills being taught) and reliable (is fair and consistent in this assessment)
I think it is fair to say there is probably no definition of 'quality' today that would require the intervention of AI to succeed. This might not always be the case, though. We can imagine future cases where humans are not able to offer the same standard of currency, relevance, pedagogy and assessment that their AI counterparts can provide.
Moreover, it might not be the case that AI is required so much as that it is desired. We could imagine a fairly short-term future in which AI is better able to support improved quality across all four dimensions listed, and very possibly, at a lower cost than what it costs today.
Inequitable Access
The bulk of the paper is dedicated to the question of inequitable access. What do the authors mean by this? They mention factors such as "global higher education enrolment rates" and location of "the world's leading universities." They also reference Sustainable Development Goal Four (SDG 4), which defines a set of indicators pointing to things like "free, equitable and quality primary and secondary education," number of teachers, access to scholarships, and investment (as a proportion of GDP) in education.
Most discussions of the subject focus on the meaning of 'equitable'. UNESCO, for example, speaks of "fair access to quality learning opportunities, regardless of their background or circumstances" and "ensuring that resources and support are allocated based on students' diverse needs." So the questions here are "how many students?" and "which students?" But we should also ask, "access to what?" Usually we think it means something like 'formal enrollment in a college or university' (my phrasing).
When we interpret the question this way, the argument being offered seems almost circular: equitable access is defined in terms of investment in operations and facilities, and so increasing equitable access requires increasing investment in operations and facilities.
There's a disconnect here, though. When we consider 'quality', we are looking at specific outcomes: currency, relevance, pedagogy and assessment. When we look at 'access', however, we are looking at something like 'enrollment in colleges or universities.' Does the first require the second? This breaks down into two separate questions: do colleges and universities have to continue doing the same things they do now to provide currency, relevance, pedagogy and assessment? And second, do we even need colleges and universities to provide currency, relevance, pedagogy and assessment?
Clearly, we don't need AI to continue to support colleges and universities in doing what they are already doing; they have managed to do this for centuries without AI, so suggesting AI is required seems absurd. Moreover, it seems clear AI is not needed in order to increase investment in operations and facilities. It could be used for this, but the same money could also be used to build campuses and hire instructors.
So long as you define 'quality' and 'access' in terms of "infrastructure, staffing, and institutional capacity," the only way to increase these is to increase investment. But it's not at all clear that 'quality' and 'access' must be defined this way.
2. The "iron triangle" remains unbroken. There is currently no robust evidence that AI simultaneously improves quality, expands access, and reduces cost at scale. Claims that AI can break the iron triangle of access, quality, and cost remain largely aspirational rather than empirically demonstrated.
In some cases, equity could be viewed as a zero-sum equation (that is, increasing access to one group requires reducing access to another group) but there's no need to view it this way, and the phrase 'expands access' suggests an overall expansion. Similarly, 'increasing quality' could be viewed the same way. Improving any aspect of quality could be seen as requiring additional expenditure.
Let's take on the question of the 'iron triangle', then. We'll look at it generally, then we'll look at it specifically with respect to artificial intelligence, and then we'll look at it from the perspective of AI supporting quality and access.
The existence of this 'iron triangle' is itself a myth. It really needs to be prefaced with a ceteris paribus clause: all else being the same, it is not possible to expand access, reduce cost and increase quality at the same time. But not all else is the same, of course. We have expanded all three many times in history.
I'll offer one simple example to make the point: the arrival of effective antibiotics. Before antibiotics, health care was awful. It was poor quality, often impossible to access, and expensive. Antibiotics dramatically improved quality, greatly enhanced access to much better health outcomes, and were cost-effective. Antibiotics didn't just cure infected wounds and injuries; they made possible a range of treatments that would otherwise have been risky and rare because of the danger of infection.
Examples like this abound through history. If new technologies did not in fact break the iron triangle, there would be no reason to develop and use them. The smelting of iron, the invention of the printing press, refrigeration, the automobile - all of these simultaneously increased access, reduced cost and enhanced quality.
We now turn more specifically to AI. Does this new technology break the iron triangle? It is arguable that in many cases it already has. Again, there are many examples. Using computer vision for product inspection instead of humans, for example, catches defects faster, reduces waste and improves product quality, making the resulting product better, cheaper and more accessible. Using AI to detect skin cancer allows more people to be tested, increases the accuracy of detection, and lowers the overall cost of skin cancer screening.
Now we turn to the question of whether AI addresses the iron triangle in education. One area that has undergone considerable study is the field of automated essay assessment. We read, for example, "Automated Essay Scoring (AES) systems have surfaced as transformative instruments, redefining the paradigm of writing assessment. Incorporating AES into pedagogical contexts meets a pivotal demand for efficiency and impartiality in evaluating written responses, particularly in large-scale testing scenarios." Even before the AI boom of 2022, automated essay scoring was widely studied.
Even if it is true today that "claims that AI can break the iron triangle of access, quality, and cost remain largely aspirational rather than empirically demonstrated," it becomes less and less true by the day, and probably isn't true as readers read these words (which, though written by a human, could probably have been written more effectively, more accessibly, and at a lower cost by a machine).
3. Personalization is not synonymous with educational quality. AI-enabled personalization amounts largely to mass customization based on data patterns, not genuine relational tutoring. Evidence that one-on-one AI tutoring produces durable learning gains in higher education is inconclusive. Policymakers should avoid equating algorithmic optimization with meaningful learning.
This 'takeaway' begins discussion of what the article posits are the proposed AI "cures" for higher education's illness: (AI-enabled) personalization, automation, virtual learning environments, and cost.
Personalization in this paper is presented as personal tutoring. "One-to-one tutoring 'is too costly for most societies to bear on a large scale' (Bloom, 1984, p. 4), giving rise to what later became known as the '2 sigma' problem. Today, AI-driven personalization is frequently promoted as a solution to this problem." However, citing Hattie, the authors point out that "individualized instruction is not among the most effective approaches." This speaks to the 'learning styles' approach, but most applications of AI are not (to my knowledge) based on learning styles.
The authors also argue that human-supported 'personal tutoring' is different from that offered by an AI tutor. "Genuine personalization, as exemplified by human tutoring, is inherently idiosyncratic and involves bespoke support tailored to individual learners. AI-enabled personalization, by contrast, amounts at best to mass customization or what has been described (in Chinese) as generic personalization."
This depiction of AI-based personalization may exist in some circles, but it feels to me dated and out of vogue. I've contrasted it in the past under the rubric of 'personalized learning' with an alternative, 'personal learning'. The distinction sets instructor-led, content-driven education against learner-driven, task-based creation and exploration. It's the difference between using AI to give you a list of 'personally recommended' videos to watch and using AI to author and create your own video of last year's holiday trip.
While much of the (so-called) 'AI-based educational technology' may still look like a recommender system, most of what appear to be the actual uses of AI (even by students enrolled in formal academic institutions) appear to be constructive and creative.
So, is this model of AI-based learning "synonymous" with educational quality? It depends a lot on what we think "educational quality" looks like. It's not synonymous with "enrollment in a college or university" but it might be synonymous with currency, relevance, pedagogy and assessment. Used as a 'personalized tutor' AI may be unlikely to improve any of these dimensions, but that's probably because personalized tutoring itself doesn't sufficiently address any of these dimensions. The problem isn't AI, but how AI is being employed.
4. Efficiency should not be mistaken for effectiveness. Automation may increase short-term productivity, but efficiency does not automatically enhance educational quality. In some cases, automation risks: cognitive offloading and long-term "cognitive debt" in students; over-reliance on algorithmic outputs; erosion of teachers' professional expertise and judgement.
This speaks to the second posited 'cure', automation. Here, the push to automation is represented as a push for efficiency. The concept of 'efficiency' speaks to 'cost per unit of output' and is (by definition) not the same as 'quality'. Otherwise there would be no 'iron triangle' to speak of. But that's not the real argument here. The real argument is something like 'lowering costs degrades quality'. The degraded quality in the 'takeaway' is identified as cognitive offloading, over-reliance and erosion of teachers' professional expertise and judgement.
It should be pointed out here that increased efficiency does not always degrade quality. For example, it costs less to hire a backhoe to dig a hole than it does to hire manual laborers for the same work, yet there is no loss in quality of the more efficiently dug hole. It costs less effort per hamburger to locate the hamburger freezer closer to the kitchen (so the cooks don't have to walk as far) but the quality of the hamburger will not be lowered. Efficiency does not always reduce quality.
It remains true that the use of AI in education could reduce quality, and so the question needs to be asked. But it's important to keep a stable definition of quality throughout, and here we seem to have a shifting one. Consider the three examples offered by the authors:
Cognitive offloading and long-term "cognitive debt" in students
This has been a topic of discussion over the last few months. It shows up in assertions that people using AI demonstrate less brain effort and a drop-off in memory retention. This is an unsurprising result of using a tool for something previously accomplished manually. A person using a backhoe to dig a hole will also be less skilled at digging one manually, will display less effort, and will not build up digging muscles.
At the same time, though, they will be better hole diggers. And this is an important distinction. The problem with cognitive debt is a problem only if the manual skills remain necessary. And this is a matter of degree, not an absolute. A person using a calculator doesn't need to be able to perform complex calculations in their head any more, but it remains useful to have some mathematical sense so they can detect machine malfunctions or input errors. Similarly, both manual and mechanical hole diggers need to know which way is down.
Clearly, simply showing that less mental effort is required to use AI doesn't demonstrate harm to the degree suggested here. It's an obvious byproduct of the use of the AI being more efficient. Whether degraded performance results remains a hypothetical (especially in the long term, and especially with continued availability of the tool).
Over-reliance on algorithmic outputs
It's not clear what is meant by 'over-reliance'. Presumably the algorithmic output would be the same, if not better, than the human output. The authors refer to "long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, [and] decreased creativity." They warn "AI systems remain vulnerable to misinformation, error, and bias." But this isn't a feature specific to reliance on AI, it is a property of any sort of reliance on external sources at all, including teachers and other authority figures, as well as on external media such as books, newspapers, radio and television.
It is arguable - and worth arguing here - that the reason these are such significant concerns is that students (and people in general) are vulnerable, and that they are vulnerable because they have only their personal mental capacities to rely on. A person who uses a calculator to check the figures in a spreadsheet will be much better able to spot errors than one checking manually (if only because it would be a lot faster). Being able to look up facts in Wikipedia helps people fact-check assertions. Logical and other analysis tools can help them identify fallacious reasoning.
Erosion of teachers' professional expertise and judgement
The authors ask many questions about teachers. "How frequently do teachers face student numbers so large that assessment and feedback become unmanageable without automation? Where such conditions exist, why should investment in AI be preferred over the employment of additional teaching staff?" My quick answer here is: frequently, and because it's cheaper.
The authors also argue that automation creates more work for teachers and that "marking and commenting on assignments play a crucial role in helping teachers understand their students and informing subsequent lesson planning. Without such engagement, it becomes unclear how teachers can judge whether AI-generated lesson plans are fit for purpose."
It is worth recalling at this juncture that the term 'calculator' used to refer to people and their profession of performing mathematical calculations. They were professionals and had skills not accessible to most people. The movie Hidden Figures highlights their early role at NASA and how they took on more challenging tasks as computer operators and programmers as automation overtook their profession. In my own life the work I undertook as a computer operator for Texas Instruments was overtaken by the availability of personal computers and the new-found ability of geophysicists to perform complex operations for themselves. Today we are seeing computer programmers overtaken by applications such as Claude Code.
Some skills remain important, some skills evolve, and some skills become redundant. It seems absurd to protest against AI on the basis that teachers' skills will no longer be the same. Characterizing this, without an empirical basis, as 'erosion' is a rhetorical strategy, not an effective argument.
Indeed: if AI were not effective in supporting improved access to quality education, it would have no impact on the teaching profession. Teachers would still be needed to do the same things they have already done. The fear that these skills will be (shall we say) devalued is in itself evidence that there is genuine concern (and belief) that AI will be able to perform these same functions more efficiently and effectively.
5. Automation may deprofessionalize teaching. Tasks often targeted for automation, such as assessment and lesson planning, are integral to teachers' professional growth and pedagogical insight. Removing these functions without careful evaluation risks undermining academic professionalism and educational quality.
This is a restatement of part of takeaway 4, so I'll move on.
6. Virtual environments show promise but are resource-intensive. AI-enabled virtual learning environments can enhance experiential learning, particularly where physical access is constrained. However, their effectiveness varies significantly by discipline and implementation, and they require substantial upfront and ongoing investment.
By 'virtual learning environment' we should be clear that the authors are not using the term VLE synonymously with learning management system (LMS) as is common practice, but are (probably) referring to immersive and simulated environments such as virtual and augmented reality.
This is suggested in the following phrasing: "such environments enable students to engage in activities that would otherwise be too dangerous, too costly, or impractical to undertake in physical settings." This covers a lot of learning environments, and we should be clear about what is being argued here. The authors suggest, "learning gains vary considerably depending on the specific environment or technology deployed (and) these potential benefits are contingent upon a critical precondition, namely, whether universities are able to afford the substantial costs associated with developing, implementing, and maintaining such environments."
So let's be clear: despite their costs, these environments are typically deployed where the cost of real-world hands-on training would be much higher. Compare the cost of practicing brain surgery on an actual brain with practicing (as I did) on a neuro-touch simulator. Compare the cost of learning to fly an actual helicopter with my experience practicing (and crashing) in a helicopter simulator. Consider the cost of practicing actual hazardous material spill containment against the simulation we developed recently.
Indeed, these simulations are an excellent example of new technology smashing the iron triangle described above. They make hands-on learning in these dangerous and costly environments a lot more effective, more accessible, and higher quality. An aspiring pilot, for example, can train a hundred times in a simulator for a situation that might come up only once in their entire career.
And the comment about whether universities are able to afford the substantial costs raises again the question I posed above, specifically, the question of whether we even need colleges and universities to provide education and training in many instances. For the most part, more complex simulations are operated outside the institutional environment, at specialized training centres offered by government or industry.
At this juncture it's worth noting that we haven't actually addressed AI with respect to any of these virtual learning environments. But when we pose the question of whether AI could augment or improve learning simulations, the answer is a resounding 'yes'. This result is typical: "the modality of AI-powered virtual agents, and learning environment, appeared to be universally effective among the studies of AI-powered virtual agents in computer-based simulations for learning." AI-driven agents can create far more realistic case studies, mock interviews, medical situations (etc., etc. etc.) than any human or programmed instruction possibly could.
7. AI deployment is rarely cost-neutral. Contrary to policy rhetoric, advanced AI systems are expensive to customize, maintain, and scale. Costs include: software development and licensing; infrastructure and technical support; continuous updates and maintenance; student access to devices and stable connectivity.
In lower-income contexts, subscription fees alone may constitute a barrier to access.
This 'takeaway' points to the fourth 'cure' offered by AI: cost.
Nobody would deny that the deployment of AI, especially at the institutional scale, costs money (indeed, the cost of an institutional deployment makes it a questionable acquisition for universities, though many have already entered into partnership with vendors. But I digress.)
But when discussing cost it is always important to keep in mind a variety of factors. What would doing the same thing the old way cost? Are we generating any new opportunities by doing things the new way?
In many cases, AI has already demonstrated it can perform the same function as the human-based equivalent at lower cost (which is why it is being deployed in so many areas). From software development to financial analysis to long distance trucking, an AI-based approach has proven to be more cost effective.
Additionally, AI provides capabilities (including reduced costs) that were not available without it. I often think of adaptive cruise control, which is now available in cars. It makes driving more pleasant and safer. It doesn't make the distances shorter or the speed faster, but it does result in a reduction in insurance premiums.
The question is, would an AI-based infrastructure reduce the cost of equitable access to higher quality higher education?
Here we must pose the question from a society-wide perspective, and not merely an institutional perspective. This is because, while AI might produce some savings within an institutional context, it is most likely to provide the most significant savings outside that context.
For example: an institution might use AI to author a textbook and course curriculum. There is some evidence that some savings are possible here, as well as affordances not available in traditional textbooks (such as the ability to chat with the textbook). No doubt this is something students can appreciate.
But that very same textbook is now much more useful outside the university context. There's no real need to sign up for a class or attend the institution in person. The textbook can be available to anyone, can be downloaded to a computer or mobile phone, and interacted with on an as-needed basis. This saves time and tuition costs, and greatly increases accessibility to the very same learning resource.
Indeed, the question of what we mean by the 'same quality' becomes relevant here. A person using the text outside the institution has given something up here - they are no longer taking classes or getting a degree. So access isn't measured as 'enrollment'. But across the four dimensions of quality - currency, relevance, pedagogy and assessment - they are at least arguably getting equivalent results at a fraction of the cost.
8. Technological hype risks policy capture. Higher education repeatedly encounters waves of technological determinism in which each new innovation is framed as transformative. Policymakers should resist "digital resignation", namely the belief that institutions must adapt uncritically to every emerging technology.
Having worked with higher education administrators and professors my entire career, I would say that the risk of their 'uncritical acceptance' of anything is doubtful. It feels to me like 'technological determinism' is a perception of a perception as opposed to anything real. I'm pretty sure nobody believes "institutions must adapt uncritically to every emerging technology."
The reference here is to Selwyn (2025) (which I note I was able to access in less than a minute in PDF on my internet-supported computer) and the argument is that "Many mainstream discussions convey a sense of digital resignation – i.e. that there are no feasible alternatives to current dominant forms of digitisation, and that education simply needs to respond to any new technology (such as Generative AI) as best as possible. In contrast, it is important for CSET scholarship to promote the belief that other forms of ed-tech are possible … and find ways of supporting people to imagine what these new forms of ed-tech might be."
I might respond - perhaps a little intemperately - that I have spent an entire career doing that. There is no requirement that anyone respond to "mainstream discussions" of anything, and other forms of ed-tech are always possible. That is one (major) reason why I offered a note of resistance to the most mainstream discussion of the most dominant form of ed-tech that is out there: the traditional university.
While on the one hand that discussion supports "locally sourced, locally owned, locally repairable and locally accountable uses of tech" - which I wholeheartedly endorse - the dominant narrative is of "resistance" - delaying device adoption, opposition to automation, resistance to commodification, and valuing the artisanal. Such a response is a fulsome endorsement of current technology, which I resist, not an endorsement of alternatives.
Indeed, if we measure the current system of technology - the traditional university - against the same four criteria of quality, along with our informal definition of access, it fails miserably:
- Currency - content and skills taught are generally out-of-date, especially in traditional texts and when taught by professors who have not kept up with recent developments in the field
- Relevance - the content and skills taught are criticized as irrelevant by business and industry, and are often notable for failing to improve employment prospects
- Pedagogy - the method and process of instruction employed - typically the stand-up lecture supported by power-point - does not actually advance student knowledge and skills
- Assessment - evaluation of student knowledge and skills is based on proxies (e.g., multiple-choice questions) and is often graded differently by different instructors and for different student demographics
- How many students - in many societies, only a minority
- Which students - notoriously inequitable, with a significant variation among institutions
- Access to what - expensive campus-based tuition-based long-term programs
The whole discussion of 'technological determinism' works both ways. Simple resistance to new technology is technological determinism in favour of what already exists. But what already exists is increasingly a failure in our rapidly changing information-age society.
9. Many higher education challenges are non-technological. Access gaps, quality disparities, and funding inequities are fundamentally socio-economic and political issues. Technological solutions cannot substitute for structural reform, sustained investment, and institutional agency.
While on the one hand it seems reasonable to agree with this - and to a large degree I do - on the other hand it represents these issues as solely socio-economic and political, which they're not. Indeed, if they were, they would be a lot easier to address, because then they could be addressed by reasonably progressive governments and societies. Yet they remain persistent challenges, even to the most forward-looking.
If we took all the money we had, and spent it building tens of thousands of campuses, hiring hundreds of thousands of instructors, and investing billions in curricular resources and professional development, we would make a small dent in the "ills" of higher education, but would achieve nothing like the results we want and deserve. And actually sustaining such a system over time challenges even the strongest economies.
If we can find technology to increase equitable access to quality higher education, we should.
Indeed, when we talk about poor quality education in under-served societies, what is significant is not their resistance to technology but rather the technology they lack. I've visited places where the schools and students lack buildings, electricity, running water, books... sure, an AI-supported computer probably won't be that helpful to them, but that's because they lack technology, not because they are resisting it.
Resistance to technology reflects power and privilege, not need or necessity.
10. Technology should serve education, not define it.
These are not the only two options. And the idea that anything should "serve" education should be challenged. The education system isn't an end in itself. It is a means to an end - or to different ends, for different people. The education system should not be self-defining, it should reflect the values, needs and interests of the people it serves.


