Interface University and Other Scenarios for the AI Economy


As artificial intelligence moves us to a world without work, what does that mean for higher education institutions and their mission in the new economy?

[Illustration: abstract faces. Credit: Mark Pernice © 2020]

In March 2016, AlphaGo—a computer algorithm developed by Google's DeepMind—defeated Lee Sedol, one of the world's top Go players, 4 games to 1. The result was a worldwide sensation: twenty years after World Chess Champion Garry Kasparov was defeated by IBM's parallel-processing computer Deep Blue and five years after IBM's Watson easily beat the two best Jeopardy champions, artificial intelligence had once again seemingly surpassed human intelligence.

At the time of Kasparov's defeat, many observers (myself included) wondered if a computer would ever defeat a human at Go.1 Chess is a complex game, of course, but at its heart, it is a game of logic and calculation. Given a particular board configuration, a player need only calculate all the possible combinations of moves and decide the best path among those choices. Computers are particularly good at brute-force calculation of this type, and thus it seemed inevitable that as computational power grew exponentially, someone would eventually create a device that could calculate more combinations faster than a human might.
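To make that contrast concrete, here is a minimal sketch of the brute-force logic, written in Python against the far simpler game of Nim (the game, function names, and parameters are my illustrative assumptions, not AlphaGo's or Deep Blue's actual code): from any position, the program simply enumerates every legal move and follows the one that leads to a forced win.

```python
# A minimal sketch of brute-force game calculation, using Nim as a stand-in
# for chess. From any position, enumerate every legal move and pick the one
# that leaves the opponent in a losing position.

from functools import lru_cache

MAX_TAKE = 3  # a player may remove 1-3 stones per turn; taking the last stone wins


@lru_cache(maxsize=None)
def is_winning(stones: int) -> bool:
    """Return True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # no stones left: the previous player took the last one and won
    # Brute force: try every legal move and see if any leaves the opponent lost.
    return any(not is_winning(stones - take)
               for take in range(1, min(MAX_TAKE, stones) + 1))


def best_move(stones: int) -> int:
    """Pick a move by exhaustively evaluating every possibility."""
    for take in range(1, min(MAX_TAKE, stones) + 1):
        if not is_winning(stones - take):
            return take
    return 1  # no winning move exists; play anything


if __name__ == "__main__":
    print(best_move(10))  # exhaustive search finds the forcing move (take 2)
```

Chess engines apply the same exhaustive principle, pruned and accelerated, across a vastly larger tree of possible moves.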

Go, however, is not a game that yields easily to brute-force calculation, and that is why so many of us thought it unlikely that a computer would defeat a human. Go, invented in China more than 2,500 years ago, is a deceptively simple game: on a board with a grid of black lines (usually 19x19), two players alternately place black and white stones on the intersections. Chains of like stones encircle territory, surrounded stones may be captured and removed from the board, and the player who encircles more territory is the winner. From these relatively simple rules, however, emerges a game of great beauty and complexity. Players do not calculate moves so much as they intuit patterns in the stones. Determining all possible moves would mean calculating as many variations as there are stars in the galaxy. Thus, human intuition and pattern recognition would always defeat computer calculation—or so went the conventional wisdom at the time.

Equally unsettling was how emphatic the victory was: four games out of five. In one game, AlphaGo made a particularly intriguing move, and observers were stunned by the ploy. Fan Hui, then the reigning European Go champion, remarked: "I've never seen a human play this move. So beautiful."2 That an algorithm had defeated one of the best human players was surprising enough, but the fact that it was also capable of generating something no human had ever devised was stunning.

AlphaGo was programmed using machine learning techniques. Unlike Deep Blue, AlphaGo was designed to learn from experience: it played thousands upon thousands of games, adjusting its play after each one. It has been said that to master any domain, one must practice for 10,000 hours.3 With machine learning algorithms, computers can now accumulate the equivalent of that practice on their own and become masters in turn.
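The details of AlphaGo's training (deep neural networks combined with tree search) are beyond the scope of this essay, but the learning-from-experience idea can be sketched in miniature. The toy example below, which reuses the rules of Nim from the earlier sketch, is my illustrative assumption rather than a description of DeepMind's system: a simple tabular learner plays thousands of games against itself, nudging its value estimates after each one, and eventually rediscovers the winning strategy without ever being told it.

```python
# A toy illustration of "learning from experience" through self-play, far
# simpler than AlphaGo. A tabular learner plays Nim against itself and updates
# its value estimates after every game. All parameters here are illustrative.

import random
from collections import defaultdict

MAX_TAKE, ALPHA, GAMMA, EPSILON = 3, 0.2, 1.0, 0.1
Q = defaultdict(float)  # Q[(stones_remaining, stones_taken)] -> estimated value


def legal_moves(stones):
    return range(1, min(MAX_TAKE, stones) + 1)


def choose(stones):
    if random.random() < EPSILON:                      # explore occasionally
        return random.choice(list(legal_moves(stones)))
    return max(legal_moves(stones), key=lambda a: Q[(stones, a)])  # else exploit


for _ in range(20000):                                 # thousands of self-play games
    stones, history = 15, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The player who took the last stone wins (+1); the other player loses (-1).
    reward = 1.0
    for state, action in reversed(history):            # credit moves back through the game
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -GAMMA * reward                       # alternate perspective each ply

# After training, the greedy policy should have rediscovered the winning move.
print(max(legal_moves(15), key=lambda a: Q[(15, a)]))  # expect 3 (leaving a multiple of 4)
```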

A World Without Work

What some find unsettling about AlphaGo's victory is that it marks yet another instance of an intellectual skill, previously considered unique to humans, being superseded by computer intelligence. Many people today are thus contemplating the implications of a "world without work," as algorithms perform cognitive tasks previously handled only by humans.4 Meanwhile, a commonly stated purpose of higher education is to prepare young people for work, to fill positions in our complex global economy. But if predictions of a world without work come to pass, the link between higher education and job preparation will be severed.

As a result, higher education would become unnecessary for many. A small number of institutions of higher learning might remain as places where students go to engage their minds, but many colleges and universities would be shuttered once a central part of their mission had been eliminated. Higher education would return to its pre–Morrill Act status as a leisure activity for the few. Those seeking higher learning would do so without a specific goal—and certainly not with the need for employment at the end. In this scenario, higher education would exist only for those interested and curious enough to pursue it. Others would seek out free, informal, nondegree learning sources such as TED talks and other online resources or visits to public libraries.

A 2019 survey by Northeastern University and Gallup suggests that a large number of people in the United States, the United Kingdom, and Canada do not believe that higher education, as currently designed, can adequately provide the skills training necessary for the new AI economy: "This lack of confidence in any one institution to plan for AI adoption provides a clear opportunity for higher education to take the lead in . . . developing new and more innovative ways to deliver skills training and education."5 But in a world where so many skills have been automated by artificial intelligence, how can skills training remain the raison d'être of higher education?

The "computers become more intelligent than humans" scenario is just one possible future. Indeed, this scenario will come to pass only if predictions about increasing computing speed are borne out. Many predictions for super-smart computers are based on extrapolating Moore's Law into the future. But if such continued exponential expansion is not physically possible, then computers, algorithms, and other digital devices might run up against limits on their processing speed and, thus, their intelligence. We still know so little about how our brains work and the basis of our own cognition and consciousness that we cannot yet hope to replicate that feat in our AI electronic brains. Algorithms will reach a "peak intelligence" beyond which they will not be able to surpass.

The implication here is that artificial intelligence will not reach a stage where it will supplant human intelligence in all areas. Artificial intelligence will be able to perform many cognitive tasks, even those that may replace some human labor (especially any cognitive tasks that are easily repeatable), but many intellectual abilities will remain exclusively "human." Our capacity for wonder, for example, or our need for play or our empathy for others will define human intelligence in what Erik Brynjolfsson and Andrew McAfee call the "Second Machine Age."6

In such a scenario, the purpose of higher education is to cultivate uniquely human attributes in students, not to train them in (automatable) skills. Should artificial intelligence advance to the point that many human skills are rendered unnecessary, higher education can shift its focus to the cultivation of those attributes that cannot be mimicked by machines. Students would no longer arrive on campus to study accounting, engineering, or information technology, since these professions will have been taken over by algorithms. Instead, students would arrive on campus to develop curiosity, creativity, imagination, play, wonder, and meaning-making: attributes that no algorithm has yet mastered. The author Daniel Pink has argued that precisely such right-brain attributes will triumph in the Second Machine Age.7

In his 2017 book Robot-Proof, Northeastern University President Joseph Aoun wrote:

Other animals apply intelligence to solving problems. . . . But only human beings are able to create imaginary stories, invent works of art, and even construct carefully reasoned theories explaining perceived reality. . . . Creativity combined with mental flexibility has made us unique—and the most successful species on the planet. They will continue to be how we distinguish ourselves as individual actors in the economy. Whatever the field or profession, the most important work that human beings perform will be its creative work. That is why our education should teach us how to do it well.8

As the Second Machine Age arrives, higher education might do well to shift its curricular mission to focus on the cultivation of right-brain attributes.

Cultivating an Interface

Another plausible scenario is that instead of competing against each other, humans and machines will work together to reach a level of cognition that neither could achieve alone. In 2005 Team ZackS, consisting of two amateur chess players (Steven Cramton and Zackary Stephen) and three computers, won the PAL/CSS Freestyle Chess Tournament.9 The tournament was based on freestyle chess, a format Kasparov developed shortly after he lost to Deep Blue, in which human players are permitted to use computers in teams that Kasparov described as "centaurs." Cramton and Stephen were rated as average players, and the computer program they used was an off-the-shelf product. Yet this "average" human/machine centaur beat some of the best humans and best computers in the world. Less than a decade after Kasparov's defeat by Deep Blue, humans and computers were working together to obtain results that neither could achieve alone.10

Yet even in a world where artificial intelligence carries out many cognitive tasks with greater efficiency than humans, human intelligence will still be needed to complete many others. In other words, humans and algorithms working together prove more effective than either algorithms or humans alone.11 Wired magazine's transportation writer, Alex Davies, thus asks: "As increasingly intelligent machines come to life, how should they interact with humanity?"12

To prepare for such a future, we need to imagine a new kind of higher education institution, one where humans and artificial intelligences learn to think together. The mission of "Interface University" would be the cultivation of the interface, or relationship, between human and artificial intelligences. Interface University would be based on the idea that machines cannot fully supplant human cognition and that thinking with machines allows students to engage in a level of cognition not possible with the brain alone. Thus, at Interface University, students would learn how to think together with computers.

This means more than simply giving students iPads during their first year. As noted above, the curriculum would be built around enhancing the quality of the interface between the computer and the individual brain. Students would be ready to graduate when they have demonstrated this unified condition, this "state of interface." The pedagogical and epistemological philosophy of Interface University is that the highest goal of education is to achieve the kind of symbiosis between human and computer intelligences that exists between a horse and its rider.

Because so many cognitive skills—especially left-brain skills—can be carried out by artificial intelligence, students at Interface University would learn to develop right-brain attributes that cannot be mimicked by machines. Students would cultivate curiosity, creativity, imagination, play, meaning-making, and wonder: attributes no algorithm has mastered. Yet the computer would not be treated as a mere tool or even a junior partner at Interface University. It would be seen instead as a "third hemisphere" of the brain, and higher learning would require developing a metaphorical corpus callosum with this third, digital hemisphere.13 The computer would be a partner in creativity, in thinking, in cognition. When the state of interface has been achieved, the artificial intellect would serve as a muse, a source of inspiration, for the human student.

Learning would become a noisy affair, with humans and artificial intelligence engaged in continual conversation. In the same way that we converse with Siri or Alexa today, students at Interface University would be constantly speaking with their third hemisphere as they think, solve problems, research, make, and create together. Education would thus involve learning how to carry on a conversation with artificial intelligence. Some artificial intelligence would be tethered to a robotic "body," giving it a physical presence on campus. Another part of the educational mission would be teaching students how to navigate and mediate this social interaction between embodied artificial intelligence and human intelligence.

Students would major in individual disciplines, and faculty would conduct research in those disciplines, much as they do at today's colleges and universities. At the same time, the kinds of questions addressed and the nature of the research conducted in those disciplines would differ from today's. Competency in each discipline would be demonstrated by the results generated via human/algorithm cooperation, and the form and appearance of artificial intelligence would differ from department to department. Architecture students, for example, would develop new forms both by manipulating material objects and by working with forms suggested by algorithms, with the student "mentoring" the algorithm. Students in the digital humanities would use text-mining algorithms to "read" volumes of texts as a way to discern and interpret patterns that would have gone unobserved without the algorithms, as in the sketch below. Thus, students would earn a degree in a subject or discipline, but their understanding would be enhanced by these augmented thinking skills.
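What such an algorithmic partnership might look like in practice can be suggested with a small sketch. The corpus, the choice of topic modelling with scikit-learn, and the parameters below are my illustrative assumptions rather than a prescribed Interface University curriculum: the algorithm proposes clusters of co-occurring words, and the human student decides what, if anything, those patterns mean.

```python
# A minimal text-mining pass: topic modelling over a tiny corpus to surface
# latent word clusters for a human reader to interpret. Corpus and parameters
# are illustrative only.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "the harvest failed and the village faced famine through the long winter",
    "merchants sailed the coast trading grain, cloth, and silver",
    "the famine drove farmers from the land into the growing towns",
    "ships carried cloth and spices between distant ports and markets",
]

# Turn raw text into word counts, dropping very common English stopwords.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)

# Ask the algorithm to propose two latent topics across the corpus.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# The algorithm surfaces word clusters; the human decides what they mean.
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```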

Student assessment would be based on projects. Indeed, the acquisition of knowledge/information would not even be tested at Interface University. Because so much information is accessible via networked knowledge bases, the idea of standardized tests of knowledge would make little sense. Interface University would educate students to develop their own questions and to construct the cognitive tools they can use to answer those questions. "Know-how" would be valued more than "know-what." In each class, with every encounter within the disciplines, students would be evaluated on the insights gained from the AI/human interface.

Students would also learn the history, philosophy, and ethics of interface as part of their education. They would examine the basis of human cognition, with "thinking about thinking" as a central feature of this education. A consideration of the nature of cognition (human and machine, human+machine) would form an important part of the general education curriculum. Students would learn that even though algorithms can sift through data and uncover patterns, humans interpret and make sense of those patterns. Interface University would redefine and reconfigure what we mean by human cognition.

This education would also involve understanding the limitations of artificial intelligence. Students would learn about human decision-making and about when it is appropriate and ethical for humans alone to make decisions. Students at Interface University would know when the artificial intelligence is "wrong." In turn, as part of the university's curricular mission, both students and faculty would guide artificial intelligence in the ethics and morality of decision-making.14

Better Together

Philosophers have identified the current times as the age of the "post-human." As Mark C. Taylor, professor of religion at Columbia University, once described himself: "I am plugged into other objects and subjects in such a way that I become myself in and through them, even as they become themselves in and through me."15 The feminist scholar of technology Donna Haraway wrote in 1985: "We are all chimeras, theorized and fabricated hybrids of machine and organism; in short, we are cyborgs."16 The mission of Interface University would be to educate post-humans.

Education has always involved, to some degree, learning how to develop an interface with our cognitive prostheses, books above all. Interface University would be based on a similar intimacy with cognitive technologies. Such an interface would reassure students that they cannot be replaced by computers and other machines and that they are, in fact, better together with these machines.

The computer scientist Edward Ashford Lee maintains that we are today witnessing "the emergence of symbiotic coevolution" between humans and artificial intelligence, "where the complementarity between humans and machines dominates over their competition." When we consider symbiotic species in nature, we do not assume that one dominates the other or that one will kill off the other. Lee believes a similar sort of cognitive cooperation is forming between humans and machines: "Stronger connections and interdependencies between man and machine could create a more robust ecosystem." He continues: "To understand that complementarity" between human and artificial intelligence, "we have to understand the fundamental strengths and limitations of both partners. Software is restricted to a formal, discrete, and algorithmic world. Humans connect to that world through the notion of semantics, where we assign meaning to bits."17

Conclusion

It is indeed possible that artificial intelligence will advance to such a degree that it achieves "general intelligence." Should that day arrive, it is likely that artificial intelligence will have taken over most jobs.18 In that scenario, the nature and purpose of higher education will have irrevocably changed: higher education will have reverted to its pre–Morrill Act condition as a luxury, perhaps even a luxury for the many, and college as human capital development—the guiding logic of higher education since the 1980s—would no longer be the rationale.

This is the less plausible scenario, however. Instead, artificial intelligence most probably will have reached the stage of development where it is replacing many human tasks, even complex cognitive tasks—but not every human task. The more likely future is one in which humans and artificial intelligence work in tandem to engage in cognition, in a division of labor between what artificial intelligence does better and what humans do better. Learning to cooperate, learning to think together, will become the raison d'être of higher education.

This article is adapted from David J. Staley, Alternative Universities: Speculative Design for Innovation in Higher Education, pp. 121–140. © 2019 Johns Hopkins University Press. Reprinted with permission of Johns Hopkins University Press.

Notes

  1. David J. Staley, "Digital Historiography: Kasparov vs. Deep Blue," Journal of the Association for History and Computing 3, no. 2 (August 2000).
  2. Fan Hui quoted in Christopher Moyer, "How Google's AlphaGo Beat a Go World Champion," The Atlantic, March 28, 2016.
  3. Malcolm Gladwell, "Complexity and the Ten-Thousand-Hour Rule," New Yorker, August 21, 2013.
  4. See Derek Thompson, "A World Without Work," The Atlantic, July/August 2015; Jerry Kaplan, Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence (New Haven: Yale University Press, 2015).
  5. Northeastern University and Gallup, Facing the Future: U.S., U.K. and Canadian Citizens Call for a Unified Skills Strategy for the AI Age (Washington, DC: Gallup, 2019), p. 7.
  6. Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (New York: W. W. Norton & Company, 2014). See also Tyler Cowen, Average Is Over: Powering America Beyond the Age of the Great Stagnation (New York: Dutton, 2013).
  7. Daniel H. Pink, A Whole New Mind: Moving from the Information Age to the Conceptual Age (New York: Riverhead Books, 2005).
  8. Joseph E. Aoun, Robot-Proof: Higher Education in the Age of Artificial Intelligence (Cambridge: MIT Press, 2017), p. 21.
  9. Clive Thompson, Smarter Than You Think (New York: Penguin Press, 2013), 4.
  10. See Garry Kasparov, Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins (New York: Public Affairs, 2017).
  11. Linda Baer, Amanda Hagman, and David Kil, "Preventing a Winter of Disillusionment: Artificial Intelligence and Human Intelligence in Student Success," EDUCAUSE Review 55, no. 1 (2020); Thomas E. Miller and Melissa Irvin, "Using Artificial Intelligence with Human Intelligence for Student Success," EDUCAUSE Review, December 9, 2019.
  12. Alex Davies, "Audi's New A8 Shows How Robocars Can Work with Humans," Wired, July 11, 2017.
  13. On the idea of a corpus callosum being developed between brain and machine, see Michael Chorost, World Wide Mind: The Coming Integration of Humanity, Machines and the Internet (New York: Free Press, 2011).
  14. Good AI, "School for AI" (website), accessed June 15, 2020; Simon Parkin, "Teaching Robots Right from Wrong," The Economist 1843 (June/July 2017).
  15. Mark C. Taylor, The Moment of Complexity: Emerging Network Culture (Chicago: University of Chicago Press, 2003), p. 231.
  16. Donna Haraway, "A Cyborg Manifesto," Socialist Review, 1985, reprinted in Simians, Cyborgs, and Women: The Reinvention of Nature (New York: Routledge, 1991), p. 150.
  17. Edward Ashford Lee, Plato and the Nerd: The Creative Partnership of Humans and Technology (Cambridge: MIT Press, 2017), pp. x, 185–86.
  18. See Aaron Bastani, Fully Automated Luxury Communism (London: Verso, 2019).

David Staley is Director of the Humanities Institute at The Ohio State University. He is an associate professor in the Department of History—where he teaches courses in digital history and historical methods—and holds courtesy appointments in the Department of Design and the Department of Educational Studies.

EDUCAUSE Review 55, no. 3 (2020)

© 2020 Johns Hopkins University Press