Recently the Times Higher Education launched a series of ‘Spotlight’ articles and think pieces on AI and the University, claiming ‘artificial intelligence is already impacting higher education, and signs are that the influence of evolving technologies on university life is just getting started’.Footnote 1 The collected pieces are well-considered and in places cautious about AI hype, yet they tend to reflect a widespread assumption that AI will inevitably transform the future of education—for the better.

The problem with such promotion of AI and the future of education is that it presupposes AI will operate as planned and intended, with any problems emerging during its development or deployment smoothed out through either technical tweaks or appropriate ethical frameworks. None of these things is necessarily the case. As Meredith Broussard argues in Artificial Unintelligence: How Computers Misunderstand the World, ‘the way people talk about technology is out of sync with what digital technology actually can do’ (Broussard, 2019, p. 6). She coins the phrase ‘technochauvinism’ to describe the flawed assumption that digital technologies like AI are always the solution. Computer technology, Broussard argues, simply does not always work as expected or intended. It is technochauvinist to assume it will.

There is no good reason to presuppose that AI used in education will work as expected either. For all the current enthusiasm for AI-based teaching and learning, the evidence base for its transformative effects on education remains thin (Holmes et al., 2022). Moreover, at the time of writing, the biggest stories about AI in education concern automated natural language generation technologies. While some foresee these language ‘transformer’ models transforming student research and writing (as is the case in the Times Higher Education series), they are also extremely problematic instantiations of AI that can reproduce significant biases, generate false information, and risk disproportionately harming those at the margins (Perrotta, Selwyn and Ewin, 2022). This controversy seems far from the expected promise of AI transforming education for good, and it raises important questions about whether future research and development in AI in education might benefit from greater social and historical sensitivity.

AIED in Social and Historical Context

In a recent editorial article for a special issue entitled ‘AI and Education: critical perspectives and alternative futures’, Rebecca Eynon and I argued that AI in education cannot simply be viewed as a series of technical developments following a path towards an inevitably beneficial future (Williamson & Eynon, 2020). Instead, we insisted on a thoroughly social and historical understanding of AI in education. One of our key points was to see current interest in AI in education as the result of several convergences: decades of R&D in the academic field of AIED itself (and associated areas like learning analytics, learning science and education data mining), growing commercial interest in putting AI to use in education, and political enthusiasm for AI in the ‘digital transformation’ of education for the future.

Our point was that AI remains a hugely slippery term. Lately, some organizations like the Center on Privacy and Technology at Georgetown Law have begun rejecting the category of ‘AI’ altogether. Talking of AI, they claim, obfuscates the social, technical, economic and political factors that enable a system to function (Tucker, 2022). They ask researchers and policy officials to focus instead on the specificity of technology and what it does, to highlight the specific companies or government centres responsible for developing and diffusing it, and to consider the responsibilities of the human actors building and using the technology. What we currently call AI is an historical accumulation of statistics, algorithm design, data storage and computing power, and new automated data science discovery methods of machine learning, neural networks and deep learning (McQuillan, 2022). AI is also the result of expert scientific and technical practices carried out in academic and commercial settings, of business plans and science funding schemes, and of political struggles over the role of technology in society, all enacted by humans in social context. None of those social and historical factors are secondary to what AI does: they help determine what it does.

The same is true in education, as the variety of perspectives in this collection indicates. AIED is not just a bundle of technologies but the socially and historically specific result of an accumulation of technical developments, scientific practices, institutional applications, and power struggles, including struggles between its supporters and its critics. The social and historical aspects of AI in education are significant because AI can mean different things to those involved. As Rebecca Eynon and Erin Young have recently shown, ‘AI is a complex social, cultural, and material artifact that is understood and constructed by different stakeholders in varied ways, and these differences have significant social and educational implications’ (Eynon and Young, 2021). They argue AI in education is conceived and practised in three ways: as a methodology for academic research to better understand learning and achieve practical impact on learning and education outcomes; as a potential source of profit for industry; and as political rhetoric used to demand educational reforms. How ‘AI’ appears in these three framings will lead to very different outcomes.

AI is then more than a set of technical objects and processes. AI in education is imagined and made by people and organizations with objectives and incentives. It’s a multisector and interdisciplinary site of development and deployment. It is promoted for various purposes—whether for research, commercial gain, or policy aims. And it can generate extraordinarily significant social and educational implications, including unanticipated side effects and major ethical, legal and regulatory problems. This is why considering ‘the social life of AI in education’ may be productive: there are complex social factors involved in the production of AI in education, and AI also produces complex social and educational implications, including unexpected or unintended consequences.

The Economics of AI in Education

Important aspects of the social life of AI in education are the ways it is embedded in and arises from political and economic contexts. The development and deployment of AI in education depend both on economic or market conditions and on political or policy support.

Regarding the economic or market factors, education technology (edtech) is now a massive multibillion-dollar global industry, powered by private venture capital investors. Many of the wealthiest edtech firms have made substantial commitments to developing AI-based approaches in recent years, with the financial backing of investors. The significance here is to see investors as social actors who are funding the AI-based future of education into being, making hugely powerful decisions about allocating money to seemingly transformative or disruptive technologies (Williamson & Komljenovic, 2022). Investors fund edtech companies so that those companies can build and scale AI services even further into educational institutions and practices (Davies et al., 2022).

But what are the reasons for this shift to AI in commercial edtech? Whatever the potential educational merits of AI in education, for the edtech industry AI is also part of a business plan. The business model of AI in education is usually based on the logic of ‘platformization’ and ‘datafication’ (Nichols & Garcia, 2022). The business plan of a platform is to be a subscription-based online service. Instead of the short-term business model of selling software products to schools or universities, edtech companies want to earn continuous income from institutions and individuals paying fees for the services they offer. In the process, platforms collect significant quantities of data, which promise to generate further value because they can be used to create new kinds of data-driven services, like AI upgrades, for which customers might pay additional fees (Komljenovic, 2021). Focusing on the social life of AI in education shows how it is significantly shaped by new platform- and data-based business models in the edtech industry.

Furthermore, the companies that have perfected the platform business plan are the Big Tech companies like Google, Microsoft and Amazon. All three have become prominent players in education in recent years. For example, the cloud-based Google Classroom platform for online learning has exploded in use in schools across the world, and Google has begun launching new AI capacities such as adaptive personalized learning and automated tutoring services, as well as new fee-paying structures, as part of its long-term roadmap for the platform (Williamson, 2022).

Amazon has become a key promoter of AI in education too, particularly by providing cloud computing services to power the wider edtech industry. A large proportion of edtech platforms operate by paying Amazon subscription fees to be hosted on the Amazon Web Services Cloud, enabling them to deliver efficient computing, storage, scale, and reliability, and advanced features like data analytics and other AI services. What this means is that edtech companies planning to offer AI services often depend on Amazon, giving the company enormous power to shape AI development and deployment across the education technology industry, and from there to reach into the practices of education institutions (Williamson et al., 2022).

In tech industry jargon, Big Tech companies are known as ‘hyperscalers’ (Pfotenhauer et al., 2022). They are hyperscaling into education and introducing their own cloud and AI infrastructures into the routines and practices of educational institutions, as part of a business model that tends towards monopoly capture as a route to future revenue streams. This particular aspect of AI in education cannot be considered separate from economic and market factors. Likewise, the capacity of edtech startups to include AI in their platforms cannot be separated from the economic power of investors who allocate funds to the companies they think most likely to generate return on investment, or from their dependency on Big Tech cloud infrastructure.

The Politics of AI in Education

Although the academic AIED and learning analytics fields have always had commercial connections, there is a significant difference between the kinds of AI and analytics research conducted in university labs and the AI and analytics being introduced at scale into the very digital infrastructures on which education increasingly depends. Indeed, we need to consider this as part of the politics of AI in education: a struggle for power over the direction of AI use in education. As Eynon and Young (2021) put it, while for AIED researchers AI is a methodology for generating insights into learning, for industry it’s an opportunity for profit. These are not the same things at all, and they are likely to lead to very different outcomes and implications.

One key risk of treating AI as some monolithic thing with expected future beneficial effects is that it gets taken up in potentially deleterious ways by policymaking centres. Promises that AI can improve learning or achievement are hugely appealing from a policy perspective. But education policy is shaped by existing political assumptions and priorities. In many national and regional contexts, education policy has been framed for years by processes of marketization, privatization, and performance-based accountability. Vast information infrastructures have been assembled to collect data from schools and universities as a way of evaluating performance on market-like metrics, from the level of whole national systems to institutions and individual educators (Wyatt-Smith, Lingard and Heck, 2021). The processes of quantification or datafication on which AIED depends mirror the rise of results-based accountability in education systems around the world since the 1980s, with performance accountability processes now increasingly enacted through digitalized and automated systems (Grek, Maroy and Verger, 2021).

Indeed, AI is of growing interest to education policy authorities because it can seemingly accelerate processes such as accountability measurement and close the loop to performance improvement by automatically producing feedback and ‘actionable’ insights based on predictive analyses (Gulson et al., 2022). The risks here are of automated decision-making replacing human judgment in political choices over education, and of poor quality data being used to make high-stakes decisions that could affect schools, staff or students alike (Day, 2021). Moreover, there is the possibility of political actors buying into hype over the potential of AI in education, whether generated by academic research centres or commercial edtech companies, and seeking its widespread deployment in schools despite lacking evidence of its intended direct effects (on measurable improvement in learning) and without precautionary consideration of its possible unintended side effects.

AI is likely to be embraced for highly political projects in education, ranging from hardened practices of performance-based accountability to the accomplishment of national AI strategies and geopolitical competition for technological ‘superpower’ status (Knox, 2020). Paying attention to the social life of AI in education therefore means seeing AI not as a neutral technology but as a political technology that can be used to serve particular policy objectives and ideologies.

Ethical, Legal and Regulatory Control of AI in Education

If AI in education has a ‘social life’ in terms of how it is produced and promoted, and for what purposes, it also has a social life as it enters into particular contexts and practices, sometimes with deeply harmful consequences. Luci Pangrazio and Julian Sefton-Green (2022) have recently noted that processes of datafication in education, including those associated with AI, are locally embedded and experienced within distinctive social, cultural and political contexts. Just as AI does not name a monolithic technology, neither is it received or experienced in the same ways in different settings or by diverse groups of people. For example, multiple reports claim that remote exam proctoring software, much of it based on automated facial detection technologies, tends to disproportionately ‘flag’ as ‘suspicious’ those groups of students who are already most marginalized. Facial AI can worsen existing patterns of structural discrimination, inequality, and exclusion within specific socially, economically and politically located communities.

Another important consideration from a ‘social life of AI in education’ perspective therefore concerns ethical, legal and regulatory problems. And this also means being attentive to how ethical, legal and regulatory instruments are themselves socially constructed and acted on. In many contexts, attempts are being made to ensure AI in education is ethical and subject to necessary legal and regulatory constraints. Instruments range from ethics frameworks to national or even regional-level governance and rights-based regulatory proposals. One intervention in the UK context is a detailed proposal for the regulation of both the commercial and governmental uses of student data produced by the Digital Futures Commission, which makes a series of regulatory recommendations from an explicitly child rights perspective (Day, 2021). In the US, the Federal Trade Commission has begun targeting the widespread data-driven surveillance of students through education platforms.Footnote 2

But these instruments may be hard to enforce, and it remains unclear what concrete effects they will have. It is also likely that data ethics, rights and regulation will remain significantly contested, unevenly implemented, and subjected to political and commercial challenge. Such contests would reflect the wider context of data ethics and regulatory developments, where there remain significant concerns about ‘ethics washing’ through self-regulatory frameworks and industry-led ‘checklists’ (Greene, Hoffman and Stark, 2019). As the Digital Futures Commission indicates, the burden for ensuring ethical and regulatory compliance often falls upon schools, while edtech—and Big Tech—companies remain unaffected, even when failing to comply with data protection regulation (Hooper, Livingstone and Pothong, 2022). This situation reveals a significant power asymmetry, with companies free to market AI products—often with weak evidence of pedagogic benefit—and schools expected to carry the burden of maintaining data protection and privacy due diligence.

Resisting AIED?

Considering AI in education as having a ‘social life’ brings into the foreground how AI is variously produced, framed and understood by different organizations and groups according to very different projects and purposes. It also highlights how AI can produce social effects as it is deployed in new contexts—whether intended effects like measurable learning gains or unanticipated and ethically problematic side effects like worsened discrimination and inequality. It makes little sense to even talk about ‘AI in education’ as a single category. How AIED R&D advances in research centres is very different from Google rolling out language models and adaptive technologies in its cloud suite for schools. How policymakers and political figures foresee the potential of AI in education is different from the value that edtech investors foresee in AI startups. And as with the long history of computer technologies in schools, how AI actually gets used in education is likely to vary considerably from the visionary promises of its transformative potential.

Even more crucially, education sector professionals need to be thinking much harder about how AI is being put to work, in various ways, in different educational contexts, and with what social effects. In his book Resisting AI, Dan McQuillan argues that ‘When we’re thinking about the actuality of AI, we can’t separate the calculations in the code from the social context of its application’ (McQuillan, 2022, p. 1). McQuillan’s particular concern is how many contemporary applications of AI are amplifying existing inequalities and injustices as well as deepening social divisions and instabilities. His book makes a powerful case for anticipating these effects and actively resisting them for the good of societies. Similarly, a recent Council of Europe report challenges AI in education in terms of its risks for human rights, democracy and the rule of law (Holmes et al., 2022). A serious consideration of AI in education should also acknowledge the potential risks it could bring, take seriously its potential to worsen existing problems, and refuse the technochauvinist assumption that AI is the ideal solution.

It is of course hard to anticipate the downstream risks of any new technology. Educational professionals can, however, consider the longer history of AI and related technologies, and see how their effects have rarely played out in the straightforwardly beneficial and idealized ways their advocates claimed they would. As others have noted in this collection and elsewhere (e.g. Williamson and Eynon, 2020), a productive future for AIED would involve deeper engagement between application developers and more critical voices from the social sciences and history. The former bring invaluable pedagogic, design and technical expertise; the latter bring expertise in understanding the complex and often unintended social effects of technologies.