Introduction

The increasing reach of artificial intelligence (AI) has enormous implications for higher education. For instance, essays are graded by AI (Foltz et al., 2013), and AI-based facial recognition is being used to proctor online exams (Swauger, 2020). Moreover, it is not just the university workplace that is changing: graduates’ futures are increasingly dependent upon AI-mediated workplaces where job profiles and work practices may be radically shifting (Moscardini et al., 2020). A working paper of the Organisation for Economic Co-operation and Development (OECD) (Vincent-Lancrin & van der Vlies, 2020, p. 16) captures these sentiments, concluding that there ‘is no doubt that AI will become pervasive in education …’ and that there is need to ‘…prepare students and learners for the transformation of work and society…’.

The pandemic has accelerated the introduction of online technologies in higher education (Bartolic et al., 2022) and the associated opportunity for ‘machine-to-student’ interactions brokered by AI (Rof et al., 2022). These may include commercial interests: learning management systems invoke AI as a selling point (Marachi & Quill, 2020) and ubiquitous commercial learning technologies such as language learning applications rely on some form of AI (Pikhart, 2020). Historically, computerisation has promoted a shift away from routine manual and cognitive tasks towards non-routine analytic and interactive tasks (Autor et al., 2003), suggesting that technologies such as AI could have real impact upon labour markets and thus higher education. Therefore, AI is not just a matter for technological innovation but also represents a fundamental change in the relationship between higher education and broader socioeconomic interests. At this time of accelerated change, where the social shifts are as significant as the technological ones, universities need to set strong policy and research agendas that attend to AI and take account of ethical implications.

How universities respond to AI depends not only on what AI is but also on what it is understood to be. The ways that AI is portrayed within the higher education literature help shape research, policy and practice, and discourses about technology can be powerful. For example, such discourses can legitimate particular notions of labour and productivity, such as the promotion of flexible working and the conflation of work and home (Fisher, 2010). Indeed, Fisher (2010, p. 231) writes ‘the discourse on technology is not simply a reflection of the centrality of technology in the operation of modern societies; instead, it plays a constitutive role in their operation, and enables precisely that centrality’. Thus, we suggest that analysing the discourses of AI within the higher education literature provides insight into how we are constructing AI’s possibilities within our field.

This critical review rigorously examines a corpus of the higher education literature that makes reference to AI. We seek to illuminate how researchers define, debate, neglect or interpret AI. Thus, we aim to critically explore the discursive constructions underpinning higher education’s approach to an increasingly ubiquitous and influential technology. This is not just a matter of summarising gaps and strengths within the literature but of promoting critical conversations and investigations about AI that grapple with the social and ethical in concert with the technological.

In the remainder of this article, we consider how AI is currently discussed with respect to higher education in the broader literature before outlining our guiding research question. Next, the “Methods” section reports on both critical review methodologies and the discourse analytic approach. We then summarise the 29 included articles and their limited definitions, before detailing two prominent Discourses that our analysis identified: the Discourse of imperative response and the Discourse of altering authority. These provide a platform to critique current conceptualisations of AI in the literature, outlining a research agenda that prioritises the social as much as the technical.

AI research in the context of higher education

A recent systematic review (Zawacki-Richter et al., 2019) details 146 papers that focus on AI in the context of higher education. This review primarily includes articles that have a strongly technical approach and describe applications such as profiling students for purposes such as admission and retention; intelligent tutoring and personalised learning systems; and assessment and feedback systems, including automated assessment and feedback information. There are only three articles from the main higher education journals (listed in Table 1); instead, most articles are from specialist AI, educational technology or computing journals. This is valuable work, but it means that the broader concerns of higher education are not reflected in these studies and vice versa. Concerningly, AI seems to be relegated to being a technological innovation without social implications. For example, Zawacki-Richter et al. (2019, p. 10) note that a ‘stunningly low’ number of articles (2/146 papers) in their review consider ethical implications. However, as Hayward and Maas (2021) outline in their primer on crime and AI, this technology has been used to enhance criminal activity, promote racism and increase surveillance; moreover, AI can be considered part of a ‘digital colonialism’ that entrenches and extends current inequalities (Kwet, 2019, p. 3).

Table 1 Higher education journals included in this search

We suggest that AI requires a debate that is specific to higher education, concerns the broader social impacts and is not only found in technologically focused journals. This need is articulated within Aoun’s (2017) seminal work Robot-proof, which proposes that universities should consider how to develop students’ uniquely human skills rather than replicate things that AI can already do better. So, how is AI discussed and investigated within the literature most relevant to our field?

Aim of this review

The overall aim of this critical literature review is to provide insight into the discursive constructions of AI within the field of higher education. We address this aim by answering the following question: how is the term ‘artificial intelligence’ invoked in the higher education research literature?

Methods

Overview

A critical literature review is a methodology that ‘seeks to identify most significant items in a field’ to produce a ‘conceptual contribution’ (Grant & Booth, 2009, p. 94). A critical methodology aligns with our view that language has power beyond that which it is representing (Gee, 2014; Popkewitz, 2013). From this perspective, language, context and society are entwined to mutually constitute each other (Gee, 2004). Our approach is linguistically based: we target the most prominent higher education journals, employing a systematic search for the specific term ‘artificial intelligence’ as detailed below. The value of this approach is illustrated by other educational reviews where terms such as ‘community of practice’ (McGrath et al., 2020) or ‘feedback’ (Jensen et al., 2021) are analysed in a dataset constituted from recent publications in relevant high-profile journals. These analyses provide insight into underlying conceptualisations and assumptions within the research literature.

Gee (2004, p. 45) notes that the ‘situated meanings of words and phrases within specific social language trigger specific cultural models in terms of which speakers (writers) and listeners (readers) make meaning from texts’. Accordingly, we seek to identify the ways in which higher education researchers construct the term ‘artificial intelligence’. We do so by (a) analysing how the term is defined within the texts and (b) undertaking a rigorous discourse analytic process (Gee, 2004). As we outline below, we employed this type of language analysis to illuminate social identities of academics, teachers and students that were connected to debates about the purpose of the university and varying perspectives on human–machine relations.

Search strategy

We established the top ten journals in the field by combining the ‘top ten’ higher education focussed journals from Scimago, JCR and Google Scholar (see Table 1). There was a high degree of concordance, and all journals appeared at least once. As we are interested in how the term ‘artificial intelligence’ is invoked within a specific higher education research community, we did not include journals that commonly but not exclusively publish higher education literature, nor journals that are primarily part of another field, such as medical or teacher education. While four journals could be regarded as specialist due to their focus on teaching or assessment (3) or technology (1), we argue that these are all commonly read in the field and constitute a broad corpus of literature that should reflect the predominant concerns about AI within the sector.

We individually searched each of these journals for use of the specific term ‘artificial intelligence’ within the text from any time up to November 2020. We included historical texts in order to chart shifts in discursive constructions, if present.

This yielded 92 peer-reviewed articles. We excluded articles that lacked meaningful engagement with the term ‘artificial intelligence’. Those that employed the term centrally or meaningfully were automatically included. Those that referenced the term only outside the body of the article (e.g. in the reference list) were automatically excluded. The remainder were read for meaningful engagement with the term by two researchers (MB, JR) and discussed iteratively with the third (RA) to develop shared meaning. Twenty-nine articles were included (supplemental materials contain a full listing by journal).
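To make the screening rules above concrete, the following is a minimal sketch expressing them as a simple filter. It is purely illustrative: the record fields and the manually assigned ‘central’ flag are hypothetical constructs of ours, not tooling used in this review, where screening decisions were made by human readers.

```python
import re

# Case-insensitive match for the exact search term used in this review
TERM = re.compile(r"artificial intelligence", re.IGNORECASE)

def screen(article: dict) -> str:
    """Return 'include', 'exclude' or 'review' for one of the 92 candidates.

    `article` is assumed to have 'body' and 'references' text fields, plus a
    manually assigned 'central' flag recorded during an initial read.
    """
    if not TERM.search(article["body"]):
        # Term appears only outside the article body (e.g. reference list)
        return "exclude"
    if article.get("central"):
        # Term employed centrally or meaningfully
        return "include"
    # Remaining cases: read by two researchers, discussed with a third
    return "review"

# Example: a paper that engages centrally with the term is included
candidate = {"body": "... artificial intelligence ...", "references": "", "central": True}
print(screen(candidate))  # -> 'include'
```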

We read each text for any explicit definitions of AI and any implicit associations with other forms of technology.

Discourse analysis

Discourse analysis is a complex and rigorous qualitative methodology; to make sense of our approach, we describe key analytical moves with associated illustrative examples from the texts included in our review. We followed Gee (2010, 2014), as we elucidate below, to interpret textual references to AI with respect to their situated meaning, social language, intertextuality, figured worlds and Conversations. These five aspects delineate Discourses: sets of language uses that encompass beliefs, values, interactions and actions (Gee, 2004). Thus, this analysis produces overarching Discourse categories, which provide insight into the social meanings ascribed to AI in the texts.

We commenced the discourse analysis by examining all textual references for specific meanings ascribed to AI within the context of the focus articles to highlight their situated meanings. For example, AI was variously associated with change at speed and scale through terminology such as ‘unprecedented’, ‘radical’, ‘transformation’ and ‘revolution’. Some of these change-associated meanings were dystopian in tone, for example, an ‘unleashed’ phenomenon. Other texts were utopian, constructing AI as ‘generative’ and a ‘solution’. We noticed how this examination of situated meanings intersected with recurrent dualisms, in this instance utopian and dystopian accounts, which persisted across all facets of the analysis. We began to trace these dualisms as we report in our findings.

We then considered the distinctive language used to constitute particular social identities within the texts, that is, their social languages. For example, academic identities were variously framed in terms of autonomy, empowerment and leadership or as resistance to disempowerment through processes of management and datafication. As we progressed, we systematically interpreted how the texts constructed different social identities for the institution, the university teacher and the student.

We noted intertextuality, that is, the extent to which articles relate to previous or other texts, for instance, implicitly in Orwellian evocations (e.g. Kwet & Prinsloo, 2019) and explicitly in references to science fiction (e.g. Carson, 2019). We then interpreted figured worlds: that is, we considered any projected idealised social imaginaries (e.g. what a university should be) and how they were evoked within the texts. Finally, we looked for Conversations, namely, the debates about AI to which the articles refer explicitly or implicitly. For example, debates about the purpose of the university were directly related to marketisation and datafication in critical and dystopian accounts, while Conversations about quality, efficiency and performance took various forms. We also considered the historical placement of the papers and whether Conversations shifted over time.
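As an illustration of how one textual reference might be coded against the five analytic aspects above, the sketch below represents the coding as a simple data structure. The schema and field values are our hypothetical constructs for exposition only; the review itself used interpretive qualitative analysis rather than software-based coding.

```python
from dataclasses import dataclass, field

@dataclass
class CodedReference:
    """One textual reference to AI, coded against Gee's five aspects."""
    quote: str                 # the textual reference itself
    situated_meaning: str      # meaning ascribed to AI in context
    social_language: str       # social identity constituted by the language
    intertextuality: str       # relations to previous or other texts
    figured_world: str         # projected idealised social imaginary
    conversation: str          # wider debate invoked by the text
    dualisms: list[str] = field(default_factory=list)  # recurrent dualisms traced

# A hypothetical coding of one reference from the review's examples
example = CodedReference(
    quote="an 'unleashed' phenomenon",
    situated_meaning="change at speed and scale, dystopian in tone",
    social_language="institution as endangered actor",
    intertextuality="implicitly Orwellian",
    figured_world="what a university should be",
    conversation="marketisation and datafication",
    dualisms=["utopia/dystopia", "human/machine"],
)
```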

Results

Description of articles

The 29 included articles were essays (11), empirical studies that collected data (11), case descriptions (5) or technical model development (2). Only ten had AI or an AI-enabled tool as the primary focus of the article. Papers dated from the 1980s (4), 2000s (4) and 2010s (6), with 14 from 2020. The authors’ countries were the USA (9), the UK (7), Australia (7) and Canada (2), with one each from Hong Kong, Ireland, Papua New Guinea and China. Predominant disciplines included education (11), technology (7), multiple disciplines (4) and business/law (3).

How AI is defined in the higher education literature

Few articles articulated definitions of AI. Only five texts discussed or provided definitions, and of these, four were from the 1980s. The definitions were vague or circular, and AI was generally defined by reference to human behaviours. The only current discussion of definitions was provided by Breines and Gallagher (2020, p. 1), who stated ‘Artificial intelligence is often referred to as a technology that can transform “traditional” education where students are passive recipients of information to more dynamic and better forms of education through “highly personalized, scalable, and affordable alternative AI [artificial intelligence] solutions”’ (Popenici & Kerr, 2017, p. 10). Another take on artificial intelligence is provided by Cukurova, Kent and Luckin (2019, p. 3033), who see it as a means to support teachers and thereby augment ‘human intelligence’ to create ‘super educator minds’. An example of a historical definition is from Barker (1986, p. 202), who writes ‘“Artificial Intelligence” is said to be exhibited when a computer is made to behave in ways that would be classed as “intelligent” if the behaviour were performed by humans.…’. All definitions are in the supplemental materials.

AI is often found enmeshed with other concepts, notably data and analytics within more recent papers. For example, Loftus and Madden’s (2020, p. 457) article title contains the prominent twinning: ‘data and artificial intelligence’ [bold ours]. Indeed, AI is most frequently described through association with other technological concepts or artefacts (e.g. ‘an AI/machine learning approach’ (Loftus & Madden, 2020, p. 458) [bold ours]).

Two Discourses of AI within the higher education literature

The discourse analysis reveals remarkably congruent ideas associated with the term ‘artificial intelligence’ across disciplines, types of article and, more surprisingly, 40 years of publication. From this, we interpret that AI is understood through two major Discourses. The first Discourse centres on the advent of unprecedented sociotechnical change and how higher education has an imperative to respond. The second Discourse focuses on how AI is altering the locus of authority and agency surrounding academic work. We describe each Discourse with respect to three social identities: (1) institutional, (2) staff and (3) student. Throughout both Discourses, we interpret two dominant dualisms: a present dystopia versus a near-future utopia and the human versus the machine. Table 2 provides an overview.

Table 2 Overview of the discourse analysis

Discourse of imperative response

This Discourse charts the imperatives to respond to a rapid and significant change towards a technology-driven and AI-mediated society.

How institutions are constructed within a Discourse of imperative response

The Discourse suggests that universities must respond to a rapidly changing technologically mediated landscape, of which AI forms a critical component. How universities should respond appears strongly shaped by a dualism of dystopia-is-now versus utopia-just-around-the-corner. For example, Carson (2019, p. 1041) describes ‘…the dystopian backdrop to a work of science fiction now sets the greatest challenges and opportunities that face universities …’. Alternatively, implying utopia-just-around-the-corner, Moscardini et al. (2020, p. 1) write ‘Over the last ten years, there has been an unprecedented and exponential growth of technology and artificial intelligence capabilities which is challenging current working practices and will play a prominent role in the way that society develops’.

Accounts tending towards dystopia-is-now propose that universities must resist in order to survive. Here, AI has already changed what universities do: for example, ‘the wider education context is increasingly being shaped by the forces of Artificial Intelligence’ (Loftus & Madden, 2020, p. 457). Taking a critical stance and linking the power of AI to the data with which it is entangled, Williamson et al. (2020, p. 362) contend ‘Academics and students are beginning to address the challenge of how to resist these [data power] trends … starting with a restatement of the inherent public good of higher education’. This response is not only about technology but about what it means to be a university.

Accounts tending towards utopia-just-around-the-corner also propose a response as a matter of survival but frame it as one of positive transformation. For example, Moscardini et al. (2020, p. 11) state ‘transformation of the university is not just a good idea, it is imperative for their survival’ [bold ours]. They argue that universities need to shift their educational focus from employment to ethics and sustainability and become a ‘learning organisation’, thus raising the priority of the teaching role above the research role. Further, they foresee an emancipatory turn in which Industry 4.0 will create a ‘Digital Athens’ where citizens will have increased leisure and a living wage. This will lead people to seek education to forge social connections and learn how to live meaningfully, with both pursuits shaping the university purpose instead of the current vocational orientation. At a less lofty level, Jackson and Tomlinson (2020) note graduate labour market impacts of Industry 4.0, including AI, and argue that universities should focus on actively promoting students’ career planning.

How staff and staff practices are constructed within a Discourse of imperative response

This Discourse charts the requisite response as teaching and other academic practices shift to accommodate an AI future. There are competing views. At the dystopian extreme, AI replaces humans, and thus, key capabilities are lost. From the utopian perspective, humans employ AI to free themselves for other work. However, there are more nuanced accounts within the texts. For example, Bayne (2015, p. 460) proposes exploring ‘how human and non-human teachers might work together in a teaching “assemblage”…’.

In general, those articles that explore teaching innovations with AI components suggest AI will enhance staff practices. This spans decades, from Marshall (1986), who proposes expert systems as a support for teacher assessment, through to contemporary innovations such as automated feedback on academic writing (Cheng, 2017; Shibani et al., 2020). Collectively, these texts construct identities of lecturers as hands-off expert guides, students as autonomous learners and technologies as neutral and cost-efficient. Therefore, in these accounts, enhancement of teacher practices underpins an imperative to respond to AI through the uptake of practical innovations.

In contrast, Williamson et al. (2020) frame AI as part of a system promoting collective diminishment of teacher opportunities for thinking and judgement. They note (p. 357) that ‘modern data systems with so-called AI capacity’ require quantitative measures that, once in use, can alter pedagogy so that ‘teachers lose “pedagogic discretion and professional judgment”…’ (p. 358). This creates an imperative for teachers to develop new skills, ranging from critical capabilities with respect to AI (Loftus & Madden, 2020; Williamson et al., 2020) to general understandings of AI (Selwyn & Gašević, 2020; Shibani et al., 2020). For some, administrative and tutor roles are positioned as replaceable by AI (Hickey et al., 2020; Sheehan, 1984). The implication is that the necessary response is to retain expert academics and diminish other staff roles.

How students and learning are constructed within a Discourse of imperative response

The texts construct AI-related changes as having deep epistemic impact and thus requiring a student response. Loftus and Madden (2020, p. 456) argue that the data revolution is changing subjectivities, altering ‘not only what we see, but how we see it’ as well as ‘learning cultures, philosophies, learning experiences and classroom dynamics’. Williamson et al. (2020, p. 358) note that students lose ‘opportunities for pedagogic dialogue or critical independent thinking’. Alternatively, Shibani et al. (2020, p. 11) propose that the purpose of their AI-powered automated feedback tool was to ‘develop students’ self-assessment ability, as a method to support their learning…’.

Wherever they sit on the dystopian–utopian spectrum, these accounts suggest that students need to learn new understandings or practices in order to respond to changes brought about by AI-embedded technologies. Williamson et al. (2020, p. 359) note ‘students develop critical skills of using and evaluating data’. As far back as 1985, Haigh (p. 168) observes ‘Our students need to acquire an understanding of what expert systems are and what is meant by artificial intelligence and how it resembles human intelligence. They must also develop an appreciation for the appropriate interplay of artificial intelligence and human intelligence in decisions’. The collective implication of these texts is that unless students respond as required, they will be diminished or disempowered.

Discourse of altering authority

If the first Discourse traces the imperative of responding to seismic change associated with AI-embedded technologies as a matter of universities’ survival, this second Discourse charts authority and agency in a state of flux within higher education.

How institutions are constructed within a Discourse of altering authority

We interpret two competing accounts regarding institutional authority aligned with the dualism of dystopia-is-here versus utopia-is-just-around-the-corner. From the utopian standpoint, Moscardini et al. (2020) contend that AI and big data can proactively and innovatively shape university offerings to meet student demand. At a similar but more practical level, Liu et al. (2020, p. 2) propose that a ‘mature artificial intelligent algorithm’ can guide quality evaluation by running teaching quality questionnaires through a neural net to detect patterns of teaching behaviours. This implicitly invests authority in quantification. In a similar vein, Grupe (2002) contends that an expert system can offer guidance to students unable to access a human academic advisor. In these accounts, AI is co-present with other technological players such as algorithms, computers and big data; together they are afforded a prominent and positive position of authority: ‘mature’ and ‘expert’.

The alternative dystopian account invokes a fear of powerful data and quantification processes. In these accounts, AI is part of what Kwet and Prinsloo (2020, p. 512) call the ‘tech hegemony’. It is seen as entangled with a range of administrative technologies that are both controlling and invested with intentionality. Williamson et al. (2020, p. 352) contend that ‘… so-called AI products and platforms can now “learn from experience” in order to optimize their own functioning and adapt to their own use…’, warning against trusting the ‘magic’ of digital quantification, algorithmic calculation and machine learning. Others share this concern, including Kwet and Prinsloo (2020, p. 520), who caution against the datafication of universities, which they associate with technocratic control, data harvesting and exploitation in ‘regimes of technocratic control whereby actors in the educational system are treated as objectified parts in a larger machine of bureaucracy’.

AI is positioned as an intangible contributor to authority within both dystopian and utopian accounts of advancing datafication. AI’s contribution is rhetorical: a ‘hot topic’ (Selwyn & Gašević, 2020, p. 532). It is also somewhat sinister in its invisibility, given its material effects, e.g. ‘hidden’ (Wilson et al., 2015, p. 20) and ‘deceiving’ (Breines & Gallagher, 2020, p. 11). AI is described as a necessary ‘ingredient’ that requires ‘powering’ with data (Selwyn & Gašević, 2020, p. 536). But references to AI curiously fade away beyond this point. AI is mentioned but backgrounded in critiques of the deployment of data in university management (e.g. Kwet & Prinsloo, 2020; Tsai et al., 2020; Williamson et al., 2020), which frame data-driven measurement as a tool for control.

How staff and teaching practices are constructed within a Discourse of altering authority

In many of the accounts, expert teaching is privileged and invested with authority. However, the texts chart different notions of where this authority is or will be located: with human teachers or the AI-embedded technologies or corporations or, as explored in the next section, with students.

There are explicit discussions around the power dynamics between teacher and AI. For example, ‘…just as the emergence of AI in other contexts provokes debate about what makes us “truly human” and how we should relate to machines, the emergence of AI-powered feedback adds new dimensions to the concept of what “good feedback” looks like: it may no longer be only what humans can provide… “feedback literacy” may also need to expand to take into account the machine now in the feedback loop’ (Shibani et al., 2020, p. 12). As these comments suggest, AI-mediated technologies foreground questions of what it means to be human and what authority teachers hold. In 1986, Knapper (p. 79) describes a utopia-just-around-the-corner, where computers can incorporate ‘… pedagogical strategies derived from artificial intelligence or “expert systems”. The idea here is that the program would actually learn something about the student and adapt its teaching strategy accordingly (hence mimicking a good human instructor)’. In early accounts, the machines are held to be lesser than humans. But over time this shifts: by 2020, Loftus and Madden (p. 457) ask ‘How can we … teach more effectively in this new learning landscape and be critical participants rather than passively tracked objects?’.

This Discourse highlights the way AI challenges what it means to be a teacher. For some, the goal of AI is to invest the authority of the teacher into the technology, as with Knapper (1986). However, in Breines and Gallagher’s (2020) discussion of teacherbots, AI cannot interact with the broader community and hence lacks authority. Moreover, their language suggests AI is deceptive, implying that it intentionally undermines and competes with human agency. They note (p. 11) ‘We are not seeking to make bots resemble humans and risk deceiving the users about who they [are] engaging with (Sharkey, 2016), but rather make bots that are recognized as such: automated agents that have been designed… to “leverage human intelligence”…’. In this and other accounts (Breines & Gallagher, 2020; Loftus & Madden, 2020), staff identities are disempowered by AI and AI-embedded technologies, losing agency and authority to the technology and also to the corporate interests behind that technology.

How students and learning are constructed within a Discourse of altering authority

The texts construct students’ altering agency and authority within a learning landscape mediated by technology, with AI as an implicit or explicit ingredient, particularly with respect to datafication. For example, Tsai et al. (2020, p. 556) note ‘Critical arguments within and beyond academia often take aim at data-based surveillance, algorithmic manipulation of behaviours and artificial intelligence to ask rather philosophical questions about what it means to be human, or to have “agency”’.

Both utopian and dystopian accounts describe how the AI or the AI-embedded technology constrains what students can be or do. From a utopia-just-around-the-corner perspective, this is a kind of benevolent efficiency. Moscardini et al. (2020, p. 13) note ‘There is encouraging progress in developing automated systems that will make online courses more efficient by helping students identify areas where they struggle and by repeating and reinforcing sections as needed’. Many texts construct AI-embedded technologies as granting agency and authority to students. This is evident from the earliest accounts: Marshall (1986, p. 205) writes that the main advantage of ‘Intelligent Computer-Assisted Instruction’ is that the student can be ‘involved actively in generating his [sic] own knowledge base’.

From a dystopia-is-now perspective, AI-embedded technologies take away authority and agency from the student and may be ‘harmful’ (Marachi & Quill, 2020, p. 431). For example, Williamson et al.’s (2020, p. 361) account of datafication, with its implicit AI, shifts authority, and indeed humanity in the form of sense-making, away from students: ‘… students are increasingly viewed as “transmitters” of data that can be sensed from autonomic signals emitted from the body, rather than as sense-making actors who might be engaged in dialogue’. In the dystopian accounts, authority is variously held by institutions, technology companies and within the software itself. Bhatt and MacKenzie (2019, p. 305) write, ‘Without knowing just how such platforms work, how to make sense of complex algorithms, or that data discrimination is a real social problem, students may not be the autonomous and agential learners and pursuers of knowledge they believe themselves to be’. Therefore, without the necessary skills, students will cede agency and authority over what and how they learn.

Discussion

This critical review of the literature demonstrates how little in-depth discussion of AI there is within the leading higher education journals. Despite increasing references to AI from 2020, these articles are still not substantively concerned with AI itself. While there are some empirical articles exploring concrete technologies where AI is foregrounded, such as automated feedback (Shibani et al., 2020), in other similar articles there is little explicit reference to AI, except as a ‘hidden ingredient’. This lack of clarity and focus becomes more obvious given that the texts almost always invoke AI in association with other aspects of technology. As the definitional analysis indicates, definitions of AI are rare and take for granted either the notion of AI itself or the notion of human intelligence.

The lack of explicit discussion of what AI means highlights the significance of the implicit conceptualisations within our discourse analysis, particularly when they are continuations of discourses first seen in the 1980s. The Discourse of imperative response describes how the texts construct institutions, staff and students as necessarily responding to a seismic shift, but it is not clear what this shift is or what the response should be with respect to AI. Similarly, while the Discourse of altering authority notes that AI profoundly changes the notions of agency and accountability either now or in the very near future, the implications of such changes tend to be highly speculative.

The clarity of the two dualisms that thread through both Discourses contrasts with, and tends to overwhelm, the ambiguous conceptions of AI. These dualisms pertain not only to AI but to technology in higher education generally. The first dichotomy — utopia-just-around-the-corner versus dystopia-is-now — aligns with the doomster/booster dualism noted by Selwyn (2014). The second, possibly even older, duality is that of human versus machine, invoking the mythic language historically associated with new technologies (Mosco, 2005). The presence of these dualisms is not surprising, given the anthropomorphised machine made so popular in fiction and film. However, the insight that this review provides is that this intertextuality appears to be the predominant way in which AI is constructed in the field of higher education.

Dualisms in themselves are not necessarily problematic. The teacherly human may be better conceptualised through consideration of the teacherly AI. Likewise, the dystopian accounts alert us to the real concerns of an AI-powered datafied world, reining in the more utopian accounts. However, we contend that dualisms have particular limitations when AI itself appears so intangible. Mythic language makes sense when a technology is on the cusp of social integration, like electricity or the telegraph, but should dissipate as technologies become familiar (Mosco, 2005). Such hyperbole can obscure definitional clarity and prevent more nuanced, generative conceptualisations or conversations, particularly as AI, like other technologies before it, is fundamentally changing higher education practices (see, for example, Foltz et al. (2013) and Swauger (2020)).

Across all the discursive analyses, AI itself is truly the ‘ghost in the machine’, to appropriate Ryle’s classic phrase. By this we mean that, despite its material effects, AI is seen as intangible, close to invisible and evoked as a haunting, hidden presence rather than as any particular tangible feature. Its distinguishing characteristics, aside from being either like or unlike a human, are that AI is ‘deceiving’, which implies a kind of anthropomorphic ill-intent; ‘relentlessly consistent… at speed, scale or granularity’, which implies a kind of mechanistic indefatigability; or a ‘hot topic’, a valuable rhetorical device.

Zawacki-Richter et al.’s (2019) systematic review of AI and higher education provides a useful point of comparison to our work, with articles drawn primarily from outside higher education journals and with very little overlap. Much of that literature is concerned with interventions and does not consider ethical implications (Zawacki-Richter et al., 2019). By contrast, our review contains considerable concern about the ethical, epistemic and hegemonic impacts of technology. In both reviews, however, there is limited discussion about how teachers might work with AI-powered technologies beyond immediate concrete applications or how teachers might inform future AI development. This is significant: as mentioned in the “Introduction”, AI is also the domain of commercial interests, and the sector would be wise to expand its horizons and investigate nuanced impacts of AI that encompass both the social and the technical.

We propose an alternative research agenda, based on the insights from this literature review, as well as the broader literature. We suggest there are three significant research foci for higher education that need attention: firstly, the language of AI; secondly, issues of accountability and labour; and finally, the implications for teaching and learning contexts beyond single innovations.

Debating a new language for AI

This literature review reinforces how discursively slippery the concept of AI is: it operates as a concrete technology, an abstract ideal, a rhetorical device, a metaphor and a social imaginary all at once. Zawacki-Richter et al.’s (2019) systematic review noted that only 5 out of 146 articles actually provided definitions, and in our review, only one paper in the last three decades included a definition. This is a clear area for work. However, a singular definition may not suffice. AI may, simply, mean different things to different people (Bearman & Luckin, 2020; Krafft et al., 2020). Bringing these debates to the higher education literature is a good starting point.

We contend that AI ambiguity is not just a matter of definition. This review suggests an urgent need to rethink the language of AI in order to build a more substantial conversation about the role of AI within the higher education literature, rather than its current intangible presence underpinned by long-standing dualisms of human/machine and utopia/dystopia. For example, Johnson and Verdicchio (2017) make the distinction between computational artefacts that contain AI and AI systems that are sociotechnical arrangements including people, contexts and machines. They suggest that questions of autonomy can be discussed within the AI system but not ascribed to the computational artefacts. By bringing this or similar language to the field, nuanced understandings of AI can disseminate throughout academia and may thereby offer insight into the challenges facing universities as they grapple with AI as both technology and social imaginary.

Tracing accountability in AI-mediated higher education

In the texts included in our review, AI did not come as a delineated technology. Indeed, it mostly appeared as what Johnson and Verdicchio (2017) might call an AI system. We prefer the term ‘assemblage’ to refer to the entangled, self-generating, constantly assembling collection of human and non-human actants that together make a whole (Fenwick & Edwards, 2012). As can be seen in the Discourses above, AI in higher education encompasses an assemblage of data, different kinds of software, bureaucracies and corporations that sometimes includes and sometimes excludes teachers, students and administrators. Many critical perspectives in our review point out how much this assemblage impacts upon the authority of teachers and students, as we outline in the Discourse of altering authority. However, it remains unclear where accountability, a concrete and highly significant manifestation of authority, rests or should rest, now or into the future.

Tracing accountability — or other manifestations of authority — associated with AI offers significant value for higher education researchers. A new form of accountability, diffuse and held between actants, is seen in data-driven contexts such as healthcare (Hoeyer & Wadmann, 2020) but remains understudied in higher education. In an AI-mediated higher education landscape, who takes responsibility for decisions? What happens when mistakes are made? Studying these effects in a detailed empirical manner may prove valuable. We also note that lower-status staff such as tutors and administrators were sometimes represented as expendable in the Discourse of imperative response. It may be worth interrogating the assumption that ‘lower level’ jobs or functions are replaceable with AI. What actually happens when this labour is automated? What other assumptions are we making about labour and about humanity?

The learning and teaching perspective: exploring relationships rather than innovations

The literature in our review constructs a landscape of shifting authority and agency and hence shifting relationships between teachers, students, technologies and institutions. Interestingly, while there may be an imperative to respond to this shifting landscape, the literature charted limited concrete scholarly consideration of what this change might be from a learning and teaching perspective. The few papers within our review that grappled with this directly raised interesting questions about AI-mediated technologies (Bayne, 2015; Breines & Gallagher, 2020; Shibani et al., 2020). While there are some publications outside of the journals listed in Table 1, particularly in the field of feedback and assessment (e.g. Bearman & Luckin, 2020; González-Calatayud et al., 2021), there is scope for further inquiry, particularly empirical work (Bates et al., 2020). There would be great value in expanding empirical inquiry beyond investigating a single AI education-specific tool, such as a chatbot, to consider multiple instances of increasingly common technologies. In coming to understand how AI works in situ, between and across other actants, we can gain more insight into how teachers, administrators, corporations, machines and students work with and around each other from a learning and teaching perspective. What pedagogies are needed? How can students learn to work in an AI-mediated world? Such studies might investigate how automated feedback tools for writing are shaping teaching and learning across the sector or explore how the algorithms in learning management systems influence teacher and student practice. In answering these questions, it is important to consider what is taught or learnt or transformed and also what is avoided, omitted and reconsidered.

Strengths and limitations

This critical literature review has closely examined how the term ‘artificial intelligence’ is invoked in the prominent higher education literature, using a rigorous discourse analytic approach to explore the included texts. We have drawn only on the most prominent higher education texts to capture the field; this is both a strength and a limitation. We have searched only on the term ‘artificial intelligence’ and therefore may have omitted texts that focus on AI but refer to machine learning, data mining or any of a range of synonyms for technology that uses AI approaches to software development. While this may mean that parallel discussions have not been included within this literature review, it allows for an unambiguous focus. We searched for this term with no date limits, which has afforded an interesting perspective: while the technology has radically changed, some conceptualisations remain remarkably persistent over decades.

Conclusions

This critical literature review indicates a significant opportunity to investigate the role of AI in higher education. The two Discourses provide insight into how higher education researchers fundamentally regard AI. It is an invisible ingredient but a force nonetheless. AI demands a response, even as it alters the fundamental structures of agency and authority within the academy. These Discourses themselves point to a research agenda for higher education studies: developing a more nuanced language for AI; building a sophisticated grasp of how AI-mediated technologies shift accountability; and deepening our understanding of how AI does, or could, influence learning and teaching relationships.