Introduction

The field of research on higher education has been characterized by enormous growth (Tight 2018; Jung and Horta 2013). This is hardly surprising, as similar growth has been documented for many scientific fields and disciplines (Bornmann and Mutz 2015). This growth, however, has important repercussions. For novice researchers, it makes it challenging to come to terms with the state of the art in higher education research. Similarly, seasoned scholars may become isolated in their area of specialization and struggle to keep up with developments in the field. Parallel to the growth of the field, we see various attempts to take stock of its achievements, exemplified by a number of scholarly reflections ranging from systematic literature reviews (e.g. Horta and Jung 2014; Tight 2018a), to accounts of journal editors (e.g. Dobson 2009), to substantive analyses of journal contributions (e.g. Tight 2014; Bedenlier et al. 2018) and of citation patterns (e.g. Calma and Davies 2015, 2017). These scholarly works aiming to synthesise, criticise, and reflect on the field are a sign of a certain level of maturity: after all, one needs a certain body of research before one can look back at achievements and shortcomings (see also Teichler 2000:21–23).

Improving our understanding of the vast field of research on higher education is indispensable, and previous work has provided important insights. A pertinent theme many analyses have focussed on is the level of (in)coherence of our field. Coherence has many connotations, for example related to theoretical, conceptual, or methodological aspects, and can yield a sense of community among higher education researchers. On the other hand, a lack of coherence can bring about the feeling of working in a disintegrated field where individual researchers work in ‘separate silos’ (Tight 2014) or ‘on isolated islands in the higher education research archipelago’ (Macfarlane 2012). Studies on other scientific fields contemplate similar themes, but—given our focus on higher education—it is appropriate to address two features. First, various observers note the many different disciplines involved in the study of higher education (Teichler 2000). After all, higher education is a theme of research, rather than a discipline. Whereas educational and pedagogical theories seem to dominate the field (Tight 2013), there is also much reliance on affiliated disciplines such as psychology, political science, sociology, business administration, and humanities. This increases the possibility that researchers get isolated in their ‘research bubble’ and exclusively consider their own disciplinary perspectives and methods for studying higher education. Second, research on higher education covers themes ranging from micro-level (e.g. student learning or student satisfaction) to macro-level (e.g. higher education policy, comparative system studies), and studies considering multiple levels simultaneously are rare. In that regard, Clegg (2012) argues that we should consider research on higher education as a series of fields. Similarly, Macfarlane and Grant (2012:622) describe the field of research on higher education as an organized interdisciplinary field characterized by a lack of communication between different research communities.

The scattered nature of the field of research on higher education highlights both the need and the potential pitfalls for a comprehensive overview. While previous contributions mapping the field have generated illuminating insights, they fall short in one way or another. We identify three important pitfalls previous work struggles with, i.e. a limited scope, a lack of a content-related analysis, and/or a lack of an inductive approach. To improve on previous work, we apply an approach which takes these pitfalls into account. We provide an automated content analysis of the abstracts of almost 17,000 research articles on higher education, published in 28 higher education journals in the period 1991–2018. We explore how the topics that emerged from our analysis have evolved over time, and whether and how they are combined in research. In this way, we provide a comprehensive and empirically grounded mapping of the field of research on higher education.

Potential pitfalls for a comprehensive overview

Various studies have addressed a research goal similar (or closely related) to ours, i.e. presenting a comprehensive overview of the field of research on higher education. These studies provide valuable insights and have inspired the current contribution. However, they fall short in one (or more) of the following respects: (i) a lack of a large-scale approach, (ii) a lack of a content-related approach, and (iii) a lack of an inductive approach.

As a method of analysis, previous work has, for example, applied close reading. While highly valuable, such methods obviously pose problems for analysing large quantities of text data. Tight (2007) studies three groups of higher education journals. For the analysis, he read all 406 articles included in his sample and categorized them based on their keywords. However, such an approach is limited in scope (see also: Clegg 2012) and cannot fully capture the variety of researched themes in the vast literature on higher education. Moreover, many studies use a cross-sectional design, which limits our understanding of the development of the field over time. This highlights the need for large-scale analyses which do justice to the enormous number of journals and articles, and the subject variety in the field.

There are, however, studies that have applied a large-scale approach, but this goes hand in hand with a lack of content-related analysis. For example, various studies have followed up on Budd (1990) and have analysed citation patterns of research on higher education. Calma and Davies (2015) study 1056 articles published in the journal Studies in Higher Education using citation network analysis, highlighting the most cited articles and authors. Such analyses of citation patterns are often combined with, for example, an analysis of the most frequently used keywords. Similarly, Huisman (2013) analyses 812 contributions to the journal Higher Education Policy and maps them by analysing their titles. However, analyses based on keywords and/or titles inherently have shortcomings and lack depth. These studies clearly sacrifice in-depth content-related analysis for breadth and are therefore only able to offer modest insights into the different themes in the field of research.

A third way in which previous studies fall short is that they do not apply an inductive approach. This has been noted by the authors of these studies themselves. For example, Clark (1973:2) presents an overview of the sociology of higher education and argues that his review is ‘selective and the assessment biased by personal perception and preference’. Likewise, contributors to Clark’s (1984) Perspectives on higher education acknowledge that their perspective is strongly affected by their disciplinary focus. Also, the national case studies in Schwarz and Teichler (2000) ‘suffer’ from the fact that the analyses are strongly limited by the expertise and experience of the respective authors and the particular idiosyncrasies of higher education research in their countries. In a more comprehensive way, Macfarlane (2012:129) develops a map showing different ‘islands of research’. This map is based on his experience and intuition and, while these are valuable tools for experienced researchers, they may be thoroughly biased by researchers’ own position in the field of research (see also Santos and Horta 2018). Macfarlane himself (2012:131) warns his readers not to take his map too seriously and posits that his contribution is ‘intended to be, at least in part, tongue-in-cheek’. For sure, given the authors’ expertise, these reflections are more than just a hunch. But personal experience, preferences, and other contextual factors do bias reflections on the field. An inductive approach allows us to deal with this issue and permits ‘researchers to discover the structure of the corpus before imposing their priors on the analysis’ (DiMaggio et al. 2013:577, original emphasis). Such an approach is especially valuable considering the size and scope of available materials and the scattered nature of the field. Applying an inductive approach allows us to go beyond a ‘perspectivist’ vision of the field (Bourdieu 1988:17), and to grasp the field of research in a more objective manner.

An inductive, large-scale, and content-related analysis

Selection of the corpus

In light of the objective of the paper, namely mapping the diversity of research themes in the field of research on higher education, we selected journals so as to arrive at a fairly comprehensive set of journal contributions for our analysis. Two challenges were encountered. The first was whether to take the Scopus list of educational journals as a point of departure or to rely on the Web of Science (WoS). Many Scopus journals have very low citation scores and can be considered to play a rather peripheral role in our field. Including these peripheral journals in our corpus would introduce unnecessary complexity to our analysis and would impede the interpretability of our results, which would become dominated by marginal research topics. Therefore, we rely on the 2017 WoS list (subcategory Education & Educational Research; none of the journals in the subcategory Special Education focus on higher education).

This led to the second challenge, i.e. selecting the higher education journals. Tight’s (2018a) classification of higher education journals (generic, topic-specific, discipline-specific, and nation-specific) was helpful in making decisions. We first considered the titles and objectives of the journals. The generic journals—as their names indicate (e.g. Higher Education, Studies in Higher Education, Journal of Higher Education, and Research in Higher Education), and their objectives or mission statements confirmed this—were easy to select. Most topic-specific journals were also relatively easy to select, as they often combine the term ‘higher education’ with a specific topic, e.g. policy, teaching, assessment and evaluation, active learning, diversity, and the Internet. For some discipline-specific journals, however, we needed a closer look at the journal objectives and the contents of issues in various years. This led us to include, e.g. Academy of Management Learning and Education and Teaching Sociology, as they primarily focus on education in these disciplines at the tertiary level. But we excluded disciplinary journals like the Journal of Biological Education, as they only in part focus on higher education. We also excluded disciplinary journals that focus much more on the profession in general than on higher education, e.g. BMC Medical Education and Journal of Social Work Education. Table 1 presents the journals included in our corpus.

Table 1 The 28 higher education journals included in our analysis

Topic models

To analyse our corpus, we apply topic models. Topic models are a collection of automated content analysis methods that allow researchers to map the structure of large collections of text by identifying topics. A topic model uses patterns of word co-occurrences in documents to reveal latent themes across documents and models each document as a mixture of multiple topics (Blei, Ng, and Jordan 2003). A topic is represented by a set of word probabilities. When these words are ordered in decreasing probability, they closely relate to what humans would call a topic or a theme (Mohr and Bogdanov 2013:547). For example, a topic model analysis of newspaper articles may uncover a topic including the words ‘climate’, ‘temperature’, ‘earth’, ‘nature’, and ‘emissions’ with high probability, which indicates that this topic deals with ‘global warming’.
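Formally, and as a minimal sketch of the mixture assumption underlying such models, the probability of observing a word w in a document d can be decomposed as a sum over the K topics: p(w | d) = Σ_k p(k | d) · p(w | k), where p(k | d) is the share of topic k in document d and p(w | k) the probability of word w under topic k. These two quantities correspond to the per-document-per-topic (γ) and per-topic-per-word (β) probabilities reported below.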

Most topic models—including the one we apply—are unsupervised machine learning approaches. This means that the method analyses texts without the researcher explicitly imposing categories of interest (Grimmer and Stewart 2013:281). In this way, our approach aligns with an inductive mapping of the literature, as it is not biased by our position in the field of research on higher education and is not guided by assumptions about the topics we expect to find.

Data

We collected the abstracts of every research article published in our selection of 28 journals (cf. Table 1). Other document types, such as editorials or book reviews, were ignored. Our focus on abstracts is in line with previous studies (e.g. Griffiths and Steyvers 2004) and is informed by four criteria: (i) abstracts can be retrieved automatically from the Web of Science, (ii) they are freely available, (iii) they represent a concise summary of the article, which minimizes the chance of identifying peripheral/minor topics, and (iv) they are fairly comparable in terms of format and style across journals. In contrast, had we analysed complete articles, our analysis might have been biased by journals’ style guidelines for full texts, which diverge substantially between journals. One disadvantage of our approach is that abstracts of articles published before 1991 are not available in the Web of Science database. As topic models perform poorly on short documents, we removed the 63 abstracts containing fewer than 50 words (Tang et al. 2014). Our final corpus consists of the abstracts of 16,928 articles, with a total of 2,179,915 words for the period 1991–2018.
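As an illustration of this selection step, the sketch below assumes the scraped records are stored in a data frame wos with hypothetical columns doc_type and abstract; the actual field names depend on how the Web of Science export is structured.

```r
# Illustrative sketch of the corpus selection step (column names are assumptions)
wos <- read.csv("wos_records.csv", stringsAsFactors = FALSE)

# Keep research articles only; drop editorials, book reviews, etc.
articles <- wos[wos$doc_type == "Article", ]

# Count the words in each abstract and drop abstracts with fewer than 50 words
n_words  <- sapply(strsplit(articles$abstract, "\\s+"), length)
articles <- articles[n_words >= 50, ]
```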

Pre-processing the corpus is common in text mining research and prepares the corpus for analysis. First, we lowercased all letters and removed punctuation and numbers. Next, we removed stopwords (e.g. ‘the’, ‘and’) and frequent expressions and words (e.g. ‘higher education’, ‘results’, and ‘article’), because they appear in almost every abstract and therefore do not contribute to the identification of topics. We also normalized differences between UK and US spelling (e.g. ‘organisation’ and ‘organization’). Next, we stemmed words using Porter’s word stemming algorithm (Porter 2001). Stemming reduces complexity without severe loss of information by removing the ends of words, which reduces the total number of unique words. For example, the words ‘argue’, ‘argued’, and ‘argues’ share the stem ‘argu’ and are hence replaced with ‘argu’. Finally, infrequently used terms are removed from the corpus, as these do not contribute to understanding general patterns in the corpus: words that appear in less than 1% of the documents were removed (e.g. Grimmer and Stewart 2013).
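A minimal sketch of such a pre-processing pipeline, using the textProcessor and prepDocuments functions of the stm package introduced below, could look as follows (the object articles comes from the previous sketch; the custom stopword list and the spelling normalisation shown here are illustrative assumptions, not our exact settings).

```r
library(stm)

# Remove frequent field-specific expressions before tokenisation (illustrative)
txt <- gsub("higher education", " ", tolower(articles$abstract))
# Normalise one common UK/US spelling difference (illustrative only)
txt <- gsub("organis", "organiz", txt)

# Lowercase, remove punctuation/numbers/stopwords, and apply Porter stemming
processed <- textProcessor(documents = txt,
                           metadata  = articles,
                           lowercase = TRUE,
                           removepunctuation = TRUE,
                           removenumbers = TRUE,
                           removestopwords = TRUE,
                           customstopwords = c("results", "article", "paper"),
                           stem = TRUE)

# Drop terms occurring in less than 1% of the documents
out <- prepDocuments(processed$documents, processed$vocab, processed$meta,
                     lower.thresh = floor(0.01 * length(processed$documents)))
```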

Results

Model selection

Using the stm package in R (Roberts et al. 2014a), we estimate a correlated topic model (CTM), which is an extension of Latent Dirichlet Allocation (LDA). CTM relaxes an assumption made by LDA by allowing the occurrence of topics to be correlated (Blei and Lafferty 2007; Blei and Lafferty 2009). We use the spectral method of initialization as it guarantees obtaining the globally optimal parameters and, compared with other methods of initialization, is faster and produces better results (Roberts et al. 2016). Topic modeling is an exploratory technique and will give researchers any number of topics they request. There is no ‘right’ number of topics, and the number of topics should be chosen based on interpretability and analytic utility with regard to the research question (DiMaggio et al. 2013; Grimmer and Stewart 2013; Roberts et al. 2014b). We generated a set of candidate models with different numbers of topics (i.e. 5, 10, 20, 30, 50, and 100). This informed us that the ideal ‘level of granularity of the view into the data’ (Roberts et al. 2014b:1069) ranged between 30 and 40. Estimating more models in the range of 30–40 topics, we finally selected the 31-topic solution as it is superior to the other models with regard to our research question. This does not mean that our analysis proves that there are exactly 31 topics in the field of research on higher education. Rather, we selected the model with 31 topics as it gave us the best compromise between parsimony, doing justice to the variety of themes in the field, and substantive analytical interpretability. In addition, models with more than 31 topics include topics with very low relative prevalence, indicating that these topics are peripheral to the field and offer little insight into its overall structure.
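In code, the model comparison and the final estimation could be sketched as follows (a minimal sketch building on the out object from the pre-processing sketch; searchK is one way to compare candidate models, though our final selection also relied on close reading of the candidate solutions).

```r
library(stm)

# Sketch: estimate candidate models with different numbers of topics
candidates <- searchK(out$documents, out$vocab,
                      K = c(5, 10, 20, 30, 50, 100),
                      init.type = "Spectral", data = out$meta)

# Final model: 31 topics with spectral initialisation; without topical
# prevalence covariates this amounts to a correlated topic model
model <- stm(out$documents, out$vocab, K = 31,
             data = out$meta, init.type = "Spectral")
```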

As mentioned earlier, a topic is defined as a distribution over all observed words in the corpus. Investigating the most probable words of a topic is the way to interpret its substantive meaning and to label it. To improve interpretability, we rank words on topics based on their frequency—i.e. the occurrence rate of a word in a topic—as well as their exclusivity—i.e. the extent to which a word is exclusive to a topic. For this, we rely on the FREX measure (FRequency and EXclusivity), which is the mean of a word’s rank in terms of exclusivity and frequency (Airoldi and Bischof 2016). Next to the per-topic-per-word probabilities (β), the analysis also yields the per-document-per-topic probabilities (γ). The γ-probabilities inform us which articles load on which topic. A close reading of high loading documents on each topic further helps us in the interpretation and the validation of the model.
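A minimal sketch of how these quantities can be inspected with stm is given below (object names follow the earlier sketches; the abstract column of the metadata is an assumption).

```r
# FREX and highest-probability words per topic
labelTopics(model, n = 10)

# Per-document-per-topic probabilities (the gamma-probabilities in the text)
gamma <- model$theta

# Read the abstracts loading most strongly on, for example, topic 5
findThoughts(model, texts = out$meta$abstract, topics = 5, n = 3)
```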

To improve interpretability of the results, we present the 31 topics in three sets. Note that this is an ex post decision, made after having estimated the topic model. The first set relates to topics characteristic of research with a theoretical focus on individuals, while the second set relates to research interested in organisation- and system-level mechanisms. The third set of topics includes discipline-specific topics and a topic related to methods. Next to a label (which both authors agreed upon after close reading of the highest loading articles and in-depth discussion), we also assign a number to each topic. This number is not substantively meaningful and is just a way to guide the reader through the results.

Individual level topics

The 16 topics related to individual level themes can be grouped into four categories. The most prominent words for the topics in these categories are presented in Table 2. The first category relates to student health. For example, the three most prominent words for the first topic—which we label substance use and health—are ‘behavior’, ‘intervention’, and ‘alcohol’. The article most strongly associated with this topic studies drinking and driving among college students (Fromme et al. 2010). Similarly, the other three topics relate to aspects of student health, i.e. stress and anxiety (2) (e.g. ‘stress’, ‘emotion’, and ‘anxiety’), sexual activity and health (3) (e.g. ‘female’, ‘sexual’, and ‘male’), and mental health (4) (e.g. ‘mental’ and ‘treatment’).

Table 2 The ten highest-ranked words on the individual level topics (relative prevalence of each topic between parentheses)

The second group of individual level topics relates to different subgroups of students. For example, the topic internationally mobile students (5) includes words such as ‘international’, ‘mobility’, and ‘abroad’. The highest loading article on this topic studies the way perception of domestic and foreign higher education systems drives students’ international mobility (Park 2009). One of the top words of this topic is ‘Chinese’. This indicates that international mobility is often studied with regard to Chinese students and that ‘Chinese’ is an exclusive word to that topic (i.e. it occurs very infrequently in other topics). The two other topics pertain to subgroups related to ethnicity. Topic 6 pertains to racial and ethnic minorities (e.g. ‘African’, ‘black’, ‘white’, and ‘ethnic’), while topic 7 focuses on ethnic diversity and on the way this affects experiences on campus.

The next group of individual level topics relates to various aspects of pedagogy. For example, topic 10 focuses on student performance in relation to parenting styles. The highest loading article on this topic studies how parental attachment, parental education, and parental expectations affect academic achievement (Yazedjian et al. 2009). Other topics in this group relate to feedback on assessment (8) (e.g. ‘feedback’ and ‘assess’), cognitive styles (9) (e.g. ‘complex’, ‘cognition’, and ‘understand’), skills, training and development (12) (e.g. ‘skill’, ‘develop’, and ‘curriculum’), and educational technology (11) (e.g. ‘learn’, ‘online’, and ‘technology’).

The final group of topics on the individual level focuses on academics. For example, topic 13 centres on academic careers and mentoring (e.g. ‘faculty’, ‘career’, and ‘program’). The highest loading article on this topic provides information for undergraduate advisors on how to assist students in identifying graduate programs (Shoenfelt et al. 2015). The second highest loading article studies the retired professor (McMorrow and Baldwin 1991). A second topic, i.e. teaching practices (14), captures research on teaching styles and preferences. A third topic, i.e. changing academic careers (15), relates to, for example, work/life balance of academics with young children (Currie and Eveline 2011), and identity-formation (Cox et al. 2012). The final topic in this group relates to research on doctoral students and supervision (16) (e.g. ‘supervisor’, and ‘PhD’).

Organisation- and system-level topics

The most prominent words for the topics in this set are presented in Table 3. Two groups of topics can be distinguished in this set, i.e. topics related to the organisational level and topics related to the system level. At the organisation level, the first, rather large topic (4.0% of the corpus) includes top words such as ‘policies’, ‘quality’, and ‘governance’. This topic (17) captures research dealing with quality assurance and accountability of higher education institutions. The highest loading article on this topic studies the impact of cross-border accreditation on national quality assurance agencies (Hou 2014). The next two topics on the organisation level pertain to leadership (18) and strategy and mission (19). For example, the highest loading article on leadership studies the development of a leadership identity (Komives et al. 2005). The topic model also discovers a topic related to sustainability (20), with top words such as ‘sustainability’, ‘plan’, and ‘environment’. Articles loading high on this topic study, for example, the sustainability plans of higher education institutions (e.g. Swearingen 2014).

A final topic on the organisational level focuses on organisational change (21). The highest loading article on this topic studies decision-making processes in universities and develops a typology of strategies (Bourgeois and Nizet 1993). It was challenging to find an appropriate label for this topic, as it is characterized by considerable internal variation. For example, the second highest loading article studies inter-institutional cooperation (Lang 2002), while the third highest loading article addresses changes in the funding of American higher education (Johnstone 1998).

Table 3 The ten highest-ranked words on the organisation- and system-level topics (relative prevalence of each topic between parentheses)

The second group of topics captures research on the system level. Topic 23, for example, deals with university rankings and performance (e.g. ‘rank’, ‘fund’, and ‘university’). The highest loading article on this topic studies how different indicators of university rankings are related to each other (Soh 2015). Topic 22 relates to the knowledge society and globalization, and the most characteristic article for this topic applies a network approach to globalizing higher education (Chow and Loo 2015).

The final topic on the system level focuses on student financial aid (24) (‘enrol’, ‘financial’, and ‘aid’). Articles loading high on this topic approach financial aid from a system-level perspective, for example, by studying state support for financial aid programs (Doyle 2010). However, this topic also captures research approaching financial aid at the individual level. For example, Kofoed (2017) studies why some students apply for financial aid, while others do not.

Other topics

The final set contains topics that are either very specific or very generic; the most prominent words for these topics are presented in Table 4. Top words for the topic on physics and engineering (25) are, for example, ‘engineer’, ‘instruct’, and ‘solve’. All articles loading high on this topic are published in the Journal of Engineering Education. The other discipline-specific topics, i.e. topics 26–29, are also very specific to certain journals. We also find two generic topics: one on research ethics (30) and one on methods (31). Because these other topics are either very specific to a journal or very generic, they do not provide much insight into the structure of the field of research on higher education. Therefore, in the following analyses, we pay less attention to this set of topics.

Table 4 The ten highest-ranked words on the other topics (relative prevalence of each topic between parentheses)

Topics through time

To study the way topics evolve over time, we plot the posterior per-document-per-topic probabilities (γ) against the year of publication. We estimate locally estimated scatterplot smoothing (LOESS) curves, which give us a smooth summary of the data points (smoothing span/alpha set to 0.7). In this way, Fig. 1 shows the evolution of the relative prevalence of each topic through time. It is clear that topics within a certain group of topics do not necessarily follow similar evolutions.

Fig. 1 The evolution of the relative prevalence of each topic
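As a minimal sketch, assuming a publication-year column in the metadata (a hypothetical name), such a smoothed prevalence curve for a single topic could be produced as follows.

```r
library(ggplot2)

# Smooth one topic's per-document probability over publication year
# ('gamma' comes from the earlier sketch; 'year' is an assumed metadata column)
plot_df <- data.frame(year = out$meta$year, prevalence = gamma[, 5])

ggplot(plot_df, aes(x = year, y = prevalence)) +
  geom_smooth(method = "loess", span = 0.7, se = FALSE) +
  labs(x = "Year of publication", y = "Relative prevalence of topic 5")
```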

Several topics seem to be on the rise: internationally mobile students (5), feedback on assessment (8), educational technology (11), doctoral students and supervision (16), leadership (18), and knowledge society and globalization (22). Other topics, on the other hand, clearly lose ground over time: sexual activity and health (3), parenting styles and student performance (10), and academic careers and mentoring (13). In addition, there are a few topics clearly going up and down: substance use and health (1), quality assurance and accountability (17), and student financial aid (24). Some developments are counterintuitive. For example, one might expect increased attention to the efficiency and economics of higher education in light of greater attention to these themes in public policy. Also, we expected a more monotonic growth in attention to quality assurance and accountability.

Clusters of topics

Recall that our approach models each article as a mixture of topics. Therefore, a way to map the structure of the field of research on higher education is to reveal which topics tend to be combined in articles. For this, we use a Q-mode cluster analysis on the document-topic probability distributions (Footnote 1). We tested whether the cluster solution is robust over time by comparing the solution presented here with the solution for the oldest articles in our data and with the one for the most recent articles. The substantive interpretation remains the same. So, while the topics are in constant flux (see above), the way topics are clustered remains stable over time. Therefore, we present the cluster solution for the complete data, which covers the period 1991–2018.
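A minimal sketch of such a clustering of topics based on their distributions over documents is given below; note that it assumes Ward's method as implemented by hclust's ward.D2 option and a Euclidean distance, which may differ from the exact settings used.

```r
# Q-mode clustering: topics are the objects, their loadings across
# all abstracts are the variables (columns of 'gamma' are topics)
topic_dist <- dist(t(gamma))                          # distances between topics
clust      <- hclust(topic_dist, method = "ward.D2")  # Ward's method
plot(clust, labels = paste("Topic", 1:ncol(gamma)))   # dendrogram as in Fig. 2
```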

Figure 2 presents the dendrogram yielded by the hierarchical clustering of the topics. The distance indicates the similarity between topics: topics that ‘meet’ at a smaller distance are more similar in terms of their distribution over the documents than topics that meet at a larger distance. In this way, topics that cluster together are topics that are often combined in research. For example, ‘racial and ethnic minorities’ and ‘racial/ethnic diversity and campus climate’ are the most similar topics as they link at around 0.75. The topic ‘racial and ethnic minorities’ is, on the other hand, very dissimilar to the other topics. For example, the topics ‘racial and ethnic minorities’ and ‘feedback on assessment’ only meet at distance 7, which indicates that both topics are very rarely combined in research.

Fig. 2 Dendrogram from the hierarchical clustering using Ward’s method

The dendrogram is valuable in showing the different ‘islands’ in the field of research on higher education. At distance 7, we see two clusters. The left cluster includes topics related to subgroups of students and students’ health. The right cluster combines topics on the system/organisation level and the pedagogical topics. At a distance of around 5, this second cluster is again split up.

The clustering shows that topics we included in the same set often tend to cluster together. For example, the four topics on student health cluster together in the dendrogram. But topics on subgroups of students and topics on pedagogy—both individual level topics—are very distant from each other. In this way, our analysis identifies potential gaps in the literature. Our analysis, for example, suggests that there are very few studies in the field of research on higher education that combine a focus on pedagogy with a focus on racial and ethnic minorities. The lack of research combining both sets of topics is remarkable, as both are very often combined in the sociology of education. Indeed, the relation between ethnicity and achievement in primary and secondary education is a blossoming field (e.g. Dworkin and Stevens 2014). Similarly, the dendrogram exposes a lack of research combining organisation- and system-level topics on the one hand and student outcomes on the other. This is reminiscent of Jackson and Kile (2004:286), who argued that ‘new frameworks (e.g. theories, models, and concepts) are needed to help understand the various ways institutions affect students’. We believe that addressing the gaps identified by our analysis—i.e. topics that are rarely considered simultaneously in existing research—provides many opportunities for research.

Topic diversity

We already noted that the clusters of topics (‘islands’) remain stable over time. However, the cluster analysis does not indicate whether or not the islands tend to move further apart from each other. To address this, we compute the Shannon entropy index (for topics 1 to 24), which measures the relative balance of the topics in each abstract and gives us an indication of the topic diversity of each article. The lower the value on the Shannon entropy index, the more an article exclusively focusses on one topic (and vice versa). The left-hand panel of Fig. 3 shows the evolution of topic diversity over time for the entire corpus, while the right-hand panel breaks it down by journal type. The general trend is clearly downward. That is, more recent articles are characterized by less topic diversity (Footnote 2).

Fig. 3 Evolution of abstracts’ topic diversity
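A minimal sketch of this per-abstract entropy computation is given below; it assumes that the first 24 columns of the document-topic matrix correspond to topics 1 to 24 and re-normalises the probabilities after dropping the remaining topics.

```r
# Shannon entropy of each abstract's distribution over topics 1-24
shannon_entropy <- function(p) {
  p <- p / sum(p)                     # re-normalise after restricting the topics
  -sum(p * log(p), na.rm = TRUE)      # lower values = focus on a single topic
}

diversity <- apply(gamma[, 1:24], 1, shannon_entropy)
summary(diversity)
```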

The right-hand panel of Fig. 3 shows that the general downward trend can be attributed to articles published in topic-specific and discipline-specific journals. The topic diversity of articles in generic journals is the highest, which indicates that our measure of topic diversity indeed captures topic specialisation; moreover, their topic diversity has remained stable over time. Articles published in the other journals, however, show a clear trend towards specialization.

Discussion and conclusion

The field of research on higher education has been described as ‘an open access discipline’ as it includes researchers from various academic origins, one-timers—i.e. researchers who contribute only once to the field—and contributions from both practitioners and researchers (Harland 2009; Horta and Jung 2014; Jung and Horta 2013; Kelly and Brailsford 2013). These characteristics impede the development of an integrated research field. Indeed, previous work has lamented that our field is limited in terms of theoretical foundations and that it lacks integration (Clegg 2012; Macfarlane and Grant 2012; Milam 1991; Tight 2014).

To contribute to this debate, we have presented a comprehensive mapping of the field of research on higher education. While previous studies with a similar goal have offered valuable insights, we identified three pitfalls, as these studies do not apply (i) a large-scale, (ii) a content-related, and (iii) an inductive approach. Our analytical approach takes these three criteria into account, and we present a comprehensive overview of the themes addressed in almost 17,000 articles published in 28 journals that focus on higher education. In our large-scale analysis, we differentiated 31 topics, which can be used to get a firm grasp of the literature and, when reading into a new theme, to use the appropriate keywords to find relevant research. In this way, our article adds to the debate on the fragmentation of the field by refining this notion. Our analysis confirms many of the findings of other scholars who have categorised and mapped our field (Macfarlane 2012; Tight 2007, 2018b). We believe, however, that our analysis offers a more robust underpinning of the different themes and how they relate to each other by applying a large-scale, inductive, and content-related approach.

Three key conclusions can be drawn from our analyses. (1) We find that scholarly attention to themes varies over time: some themes become more central, while other themes that may have once dominated the field become more peripheral. The increase in themes related to teaching and learning resonates with Tight’s (2003) observations. The various evolutions demonstrate that the field of research on higher education is characterized by ongoing struggles over attention between different topics and themes. This is an important finding, for it brings nuance to the overall growth of the field. That is, there is growth, but topics wax and wane, indicating that the field of research on higher education is in constant flux.

(2) We also studied the way topics are clustered, that is, which topics tend to be combined with each other. We found that this clustering is stable over time. Two conclusions can be drawn here. First, our cluster analysis identifies various interesting gaps in the literature—i.e. topics that are rarely considered simultaneously in existing research. Addressing the gaps identified by our analysis provides valuable starting points for future research. Obviously, the gaps identified in our analysis need to be corroborated by close(r) reading of the pertinent literatures. Second, the lack of change in the clustering of topics over time suggests that higher education researchers are ‘stuck on their island’. Indeed, topics that were not combined in the nineties are still not combined two decades later. We believe that this is problematic as it generates tunnel vision and impedes the development of a general theoretical body of work unifying the different ‘research islands’ and, more generally, the development of an integrated and established field of research on higher education.

(3) We also found a steady decline in topic diversity. That is, more recent articles combine fewer topics than older articles. This aligns with the observation of a rapid increase in the number of scientific fields over time and the argument that various characteristics of modern science encourage researchers to specialise (Kuhn 1962; Leahey and Reikowsky 2008). The clear evolution towards greater topic specialisation is concerning, because increased specialization generates divisions among researchers in that they tend to consider only research in their speciality area (Leahey and Reikowsky 2008). The combination of the systematic clustering of topics and the trend of specialisation yields an interesting, though bleak, conclusion: while the islands making up the scattered field of research on higher education are relatively stable over time, they are drifting further apart. Indeed, it seems that our field becomes even more scattered and disintegrated over time.

Like almost any research, ours answers some questions and raises new ones. We address a few of these. First, related to the scattered nature of our field: which new synergies can be achieved? We found, for instance, that the ‘teaching discipline X’ topics appeared in various clusters and to some extent were located at quite a distance from the other teaching and learning topics. Would this suggest that discipline-rooted papers hardly rely on insights from the broad teaching and learning domain? More generally, are we able to detect inefficiencies in our field because of the different disciplines involved, which may speak to the same theme but use different concepts, theories, and methodologies? This would require additional analyses, e.g. network analysis of citation patterns, and we encourage other researchers to build on our insights to address these questions. A second set of questions relates to the way the field has developed. We noted that topics wax and wane, but what exactly drives the attention for particular topics and, more broadly, the research agendas of researchers and research centres? We may expect that research agendas are fed by internal field dynamics (building on preceding work, using robust theories that have proven their value, etc.). Likewise, researchers may also be affected by what happens in and around higher education, e.g. rankings, international mobility, and globalisation.

Finally, an important contribution of our article is that it demonstrates the value of automated text analysis for the field of research on higher education. These methods allow researchers to analyse large quantities of text data in an effective and efficient manner. We admit, however, that these analyses have limitations in that they are not as in-depth as a ‘true close reading’ of texts (cf. Grimmer and Stewart 2013:268). Notwithstanding, automated text analysis provides exciting opportunities for higher education researchers, as we are often confronted with data that seem impossible to analyse due to their sheer volume; examples include the content of curricula in higher education, policy documents on higher education reforms, and media coverage of higher education institutions. Indeed, ‘Topic modeling provides a valuable method for identifying the linguistic contexts that surround social institutions’ (DiMaggio et al. 2013:570; see also Roose et al. 2013). We hope that our article inspires higher education researchers to make use of the great potential of automated text analyses.