Elon University

Analysis: 2005-2011 Predictions for Digital Life 2020

How experts expected 2020 to look 10-15 years ago and how some who shared insights back then see it now, in the 2020s

Many forecasts were on the mark, while others did not fully anticipate key elements influencing how networked technology would evolve. Experts who offered updated views said the internet’s overwhelming impact on humans’ intelligence and emotional condition was vastly underappreciated.

Background
December 28, 2020 – Elon University’s Imagining the Internet Center and Pew Research Center have conducted 12 “Future of the Internet” canvassings of experts since 2004, gathering forecasts about the evolution of networked technologies and the impact on digital life and societies. In four of those canvassings – 2005-06, 2007-08, 2009-10 and 2011 – the questions focused on what life would be like in the year 2020. We have chosen eight trend predictions recorded in those reports to examine in this report, looking at the experts’ original responses and exploring developments since to see how well those forecasts fared. We also interviewed nine of the experts in October/November 2020 for their thoughts on where they see trends moving as we move into 2021.

The database of experts who were canvassed over the years was developed from several sources. They are professionals and policy experts from government bodies, technology businesses, think tanks and networks of interested academics and technology innovators.

The expert canvassings represent only the opinions of the individuals who responded to queries we posed in a non-random sample. Results were and are not projectable to any other population. The canvassings were opt-in. Each of the reports emerged from different groups of expert respondents – some of whom were regular participants in several canvassings and some of whom were one-time participants. The number of respondents who participated in each canvassing varied. The methodology underlying each canvassing is elaborated in a methodology section in each of the original reports.


Predicting the internet’s role and impact in 2020: The things experts said years ago about what digital life would be like today. How well did they see the future?

Since 2004, Elon University’s Imagining the Internet Center and Pew Research Center have conducted 12 “Future of the Internet” canvassings of experts. In four of those efforts between 2005 and 2011 the questions asked experts to imagine what digital life might be like in the year 2020.

In this analysis, we focus on eight predictions in those canvassings. For each, we very briefly summarize views expressed by some experts at that point in time and then outline a few of the related developments that have taken place in the realm of people-plus-technology to assess how those near-past predictions look today. We also interviewed some of the experts who made the original predictions to get their points of view today.

Following is a summary of the eight topic areas explored in this report. This section is followed by a more-detailed look at each of the eight.

1 – The impact of the internet on intelligence and emotion

In the 2009-10 canvassing, 81% of experts agreed with a statement that the internet would enhance human intelligence by 2020. Some experts argued that the impacts would likely affect individuals and groups differently. For the purposes of this new report looking back at earlier predictions, two key respondents shared their circa-2020 thoughts: Nicholas Carr and Jamais Cascio.

Nicholas Carr has long been known for his views that the internet is affecting people’s intelligence for the worse. Carr, the author of several acclaimed books on technology and culture, including “The Shallows: What the Internet Is Doing to Our Brains” (a Pulitzer Prize finalist), and “The Glass Cage: Automation and Us,” told us in an October 2020 interview: “Some of the recent studies reinforce what was already becoming clear 10 years ago: that the net, and social media in particular, keeps us in a perpetual state of distraction, producing a ‘cognitive overload’ that makes it harder to think critically, to engage in contemplation and reflection, and to place information into context…. The speed with which smartphones and social media have come to shape people’s moment-by-moment thoughts and behavior, and, more broadly to transform social norms and relations, has been stunning. I think it stands as one of the most dramatic and far-reaching cultural revolutions we’ve ever seen.”

Jamais Cascio, distinguished fellow at the Institute for the Future, an expert who challenged Carr’s views in the 2009-10 canvassing, now says he and most futurists did not anticipate a major part of the story. He told us in October 2020, “Something I entirely missed [back then] was the impact of the internet on emotion…. That’s been a notable failing of foresight work, especially a decade ago – we too often pay insufficient attention to emotional and cultural complexities. We saw the internet as a technology of the mind and conflated ‘mind’ with ‘intelligence’ …. Blogs, RSS feeds, and many of the other popular technologies of the time focused largely on making information easier to access. But it turns out that this obscured a deeper reality, one that the evolution of online life hinted at, if we’d been paying attention: The internet is an interaction-centric medium…. If Carr wrote his Atlantic essay now with the title ‘Is Facebook Making Us Stupid?’ it would be difficult to argue ‘No.’”

2 – The fading chances for social tolerance

In the 2007-08 canvassing, 56% of experts disagreed with the statement that social tolerance would have advanced significantly by 2020 due in great part to the internet.

Among those who expected that social tolerance would not advance over time was Christine Boese, a digital strategy professional. In a November 2020 interview she told us that the decade since her bleak 2007-08 prediction has “greatly surprised” her because things became worse than she foresaw. She explained, “I could project an increase in rhetorical polarization, authoritarianism and an extremist ethos, but I did not predict the bifurcated realities in the face-to-face world we see now…. Our authoritarian fellow citizens whose disassociation from reason and proof in a bifurcated reality could lead us to nothing less than a decline of civilization, to a new know-nothing dark age of plagues and wealth inequalities that are positively medieval.”

3 – The fraught relationship between tech firms and governments

In our 2011 canvassing, 51% of experts said tech firms would protect users from government interference by 2020. Nine years later, tech firms and governments have complicated, cooperative and sometimes contentious relations.

One of those who accurately saw this complexity and complicity between governments and tech firms in 2011 was Stowe Boyd, founder and managing director of Work Futures. In an October 2020 interview he said, “The internet has fallen into a patchwork quilt of government-constrained internets, that are not at all equivalent. In many of those internets, tech companies openly work with repressive governments to surveil citizens and block their access to information and connections considered dangerous to the state. States employ hackers and cyber-hucksters to disrupt and influence foreign interests. Even in more enlightened countries, internet use can be perverted by political interests, like the unemployment insurance system engineered by the GOP in Florida to intentionally make applying for benefits maddeningly difficult or impossible. Some parts of this state of affairs can be fixed by better governance, but we can anticipate a long stretch of bad road ahead.”

4 – The fate of culture and languages on a balkanized internet

Also in the 2011 canvassing, 57% of experts disagreed with the statement that the English language would be so indispensable in communicating online by 2020 that it would have displaced some languages.

One of the key resisters to the idea that English would overtake other languages as the internet grew was David Clark, senior research scientist at MIT and internet pioneer. In the canvassing, he argued that while English would be common online, local languages would grow online and translation would increasingly be used for some kinds of encounters. Reflecting in a November 2020 interview about what has happened since, he says, “Even then it was clear to me that for most internet users, the experience they had on the internet would be localized to their cultures and their language. Putting it sort of bluntly, it’s easier for 1,000 entrepreneurs to translate a webpage into Chinese than it is to have a billion Chinese learn English. I was pretty confident that what we were going to see on the internet for most users was going to be domestic and localized in terms of culture and language. Once the internet is available to the core populations of different countries, you have to deal with people in their own language…. I knew there would be a natural resistance to intrusions from other cultures – and American hegemony on the internet. People were going to fight to preserve their cultures and they are doing that aggressively in the age of globalization. They wanted to challenge the dilution of their cultures.”

5 – The future of virtual and augmented realities

In the 2005-06 and 2008 canvassings, experts responded to questions about the spread of virtual reality (VR) and augmented reality (AR) and the prospects for those technologies being attractive to many users and even addictive. Both technologies have reached bigger user bases since then, but they are not as mainstream as some experts thought they might be by 2020. VR gaming and AR experiences are still emerging slowly into the mainstream marketplace.

A cutting-edge analyst then and now is Susan Mernit, director of The Crucible and formerly an executive with Yahoo and America Online. She was among those who were skeptical of mainstream adoption of these technologies, writing in 2007-2008 that VR and AR appealed “to the geeks and the gamers among us, but … it’s elitist and too far out of the mainstream for many Americans, especially those with less free time.” In a December 2020 interview Mernit told us that “processing capacity and platform tools drive user behavior more than user behavior drives those tools.” She observed that she has seen many brilliant ideas for which there was no technical capacity at the time to really build them. “The major insight is that the big innovations come from the blend of technologies and the platforms that convey them … you need the technologies and the platforms in place first and then the innovations are built on them.”

6 – The impact on the balance between professional and personal life

In the 2007-08 canvassing, 56% of the responding experts agreed with the statement that by 2020 “few lines divide professional time from personal time, and that’s OK.”

In that canvassing, ’Gbenga Sesan, executive director of Paradigm Initiative, a pan-African digital rights organization based in Nigeria, argued that professional and personal life would run simultaneously, with mobile phones playing a key role in blurring work life and home life. Asked in a November 2020 interview how he sees things now, he wrote that blended time is – and will be – the norm: “Especially in the Global North, people will be able to determine when to work, when to rest, when to play. During the day when there is sunshine, they want to move around and be more healthy, rather than being in offices. Then, in the evening people will do the work for the tasks set before them.”

7 – The rise and meaning of mobile connectivity

In the 2005-06 canvassing, 56% of experts backed the statement that a worldwide, interoperable network would exist and that mobile wireless communications would be available to anyone anywhere on the globe at an extremely low cost by 2020.

The implications of this move towards mobile connectivity were well anticipated by Louis Nauges, the current chief strategy officer at Wizy.io and longtime internet strategist, who spoke then about today’s Internet of Things and about the connectivity changes being rolled out under 5G technology. In an October 2020 interview he said, “Universal, cheap access to very fast networks could help developing economies grow faster. Fast internet access will be a great equalizer of opportunities worldwide. Where I live will no longer be a barrier to high-value jobs. This is possible, but not certain: political resistance, corruption, denial of access to specific groups of people could block the potential benefits of technology and create huge disruptions in the world.”

8 – The growth of more-responsive organizations

Also in the 2005-06 canvassing, 72% of experts agreed with a statement that predicted by 2020 the internet would lead to significantly more efficient and responsive governments, businesses, non-profits and other mainstream institutions.

These days, customers and citizens have conflicted views about the responsiveness and accountability of organizations. One major analyst who saw this dynamic playing out in the future was Stephen Downes, senior research officer at the National Research Council, Canada. Asked in a November 2020 interview how he sees things now when it comes to organizational responsiveness and change, he said, “I was surprised how slowly it has all happened. For all the talk about the future quickly arriving and affecting organizational change, it actually moves quite slowly. I look at this as a 20- or 30-year process and we’re not there yet. It’s going to be another 10 years before we’re clearly, identifiably there…. [We are seeing] first signs now that we’re moving towards this self-serve, DIY culture, even though it’s still not mainstream for organizations…. It’ll be another 10 years before we’re in a distributed governance mode. The change will come first at the local organizations – communities, companies, co-ops. National organizations will be the last to change because they’re national organizations and inherently centralized.”

Deeper details:
Eight 2005-11 Predictions About Digital Life in 2020

1 – Human intelligence

In the 2009-10 canvassing, 81% of experts believed the internet would enhance human intelligence.

In mid-2008, Nicholas Carr wrote a piece for The Atlantic titled “Is Google Making Us Stupid?” He argued that the ease of online searching and distractions of browsing through the web were possibly limiting people’s capacity to concentrate. “I’m not thinking the way I used to,” he wrote, in part because he was becoming a skimming, browsing reader, rather than a deep and engaged reader. A year later, Jamais Cascio wrote a rebuttal to Carr in The Atlantic titled “Get Smarter,” arguing that while the proliferation of technology and media can challenge humans’ capacity to concentrate, there were signs that tech-enabled people were developing “fluid intelligence—the ability to find meaning in confusion and solve new problems, independent of acquired knowledge.”

These articles were the inspiration behind one of the 10 scenarios about 2020 posed to tech experts in Elon and Pew’s 2009-2010 canvassing. The vast majority (81%) agreed with the following scenario:

By 2020, people’s use of the internet has enhanced human intelligence; as people are allowed unprecedented access to more information, they become smarter and make better choices. Nicholas Carr was wrong: Google does not make us stupid. (http://www.theatlantic.com/doc/200807/google).

Only 16% of the experts selected the following scenario as the most likely option:

By 2020, people’s use of the internet has not enhanced human intelligence and it could even be lowering the IQs of most people who use it a lot. Nicholas Carr was right: Google makes us stupid.

The issue has progressed since then in a complex way. Of course, “intelligence” is a nebulous concept with no standard definition. IQ tests have been used to measure intelligence since the early 1900s. However, these tests have been rife with issues and have been found to hold racial and cultural biases. In 1983, Howard Gardner proposed that there are multiple types of intelligence: linguistic, musical, logical-mathematical, spatial, bodily-kinesthetic, interpersonal, intrapersonal, naturalistic and existential (added in 1999). This theory suggested that, beyond mere book smarts, there are many forms of intelligence a person may possess. In addition, the American Psychological Association points out that intelligence is defined in many different ways across cultures. Still, “intelligence” is generally perceived by mainstream society through a prototypical IQ-test-based or scholastic-achievement-based lens.

Differences in definition arose in the two original pieces and continue to color conversations about intelligence today. A 2019 overview article published in World Psychiatry explored how internet use could be changing how our brains think. It took note of signs that the internet may be negatively affecting attentional processes, in line with Carr’s argument. However, the same overview also lends support to Cascio’s assertion that the internet allows humans to go beyond their acquired knowledge.

Because of the ease of searching the internet, large quantities of information are accessible to people, and people are using the internet effectively to retrieve information (though that information often does not come from an unbiased, informed and reliable source).

In October 2020 we revisited the debate with the main protagonists. In the 2009-2010 canvassing, Carr had said that the internet would affect not just IQ scores but other types of intelligence as well. He wrote back then, “The net’s effect on our intellectual lives will not be measured simply by average IQ scores. What the net does is shift the emphasis of our intelligence, away from what might be called a meditative or contemplative intelligence and more toward what might be called a utilitarian intelligence. The price of zipping among lots of bits of information is a loss of depth in our thinking.”

In an email interview on October 21, 2020, Carr shared an update on his thoughts with us:

Imagining the Internet Center (ITI): In the past decade what new insights do we have about how the internet affects human intelligence? What do you think are the most important findings since then?

Carr: In 2010, research on the internet’s cognitive, psychological and cultural effects was in its infancy. We know a lot more today, thanks to the work of many sociologists, psychologists and neuroscientists around the world. I wish I could say that the research has proved me wrong, but, alas, it provides broad new support for my argument that the net impedes deep, considered thinking and pushes us toward superficial, scattered thinking.

Some of the recent studies reinforce what was already becoming clear 10 years ago: that the net – social media in particular – keeps us in a perpetual state of distraction, producing a “cognitive overload” that makes it harder to think critically, to engage in contemplation and reflection, and to place information into context.

But we’ve also begun to see even more troubling findings. Work by scholars like University of Texas cognitive psychologist Adrian Ward has revealed that our ever-present smartphones exert a constant, debilitating pull on our attention, leading to what Ward calls a “brain drain.”

Experiments show that the mere presence of a phone, even when it’s not in use, actually shrinks people’s available working memory and degrades their fluid intelligence and problem-solving abilities. The research also suggests that our phone’s grip on our mind may be taking an emotional toll, increasing stress and anxiety, eroding empathy and making our personal relationships shallower.

ITI: Has anything surprised you about the way things have unfolded between then and now?

Carr: The speed with which smartphones and social media have come to shape people’s moment-by-moment thoughts and behavior, and, more broadly, to transform social norms and relations, has been stunning. I think it stands as one of the most dramatic and far-reaching cultural revolutions we’ve ever seen.

The technologies provide plenty of benefits, which is why we’re so enamored with them, but because they’re so addictive and distracting, and so easily exploited by bad actors, we’re now facing a raft of cultural, political and social problems that are proving extraordinarily difficult to solve. These problems extend well beyond the cognitive and intellectual toll that the technologies take, but I would argue that most of the problems stem from the same source: the tendency of digital media to encourage superficiality, bias and emotionalism and discourage rigorous, rational and open-minded thought. I think we’re paying a very heavy price for the Silicon Valley brand of technological enthusiasm that held society in its sway during the early years of this century.

ITI: Where do you see things going in the next decade when it comes to the effects of the internet on human intelligence?

Carr: Once a technological system becomes deeply embedded in a society’s norms and processes it takes on what the late historian Thomas Hughes termed “technological momentum.” At that point, it becomes very difficult if not impossible to back up and take a different route. The technology begins to shape society, rather than vice-versa.

I think that’s where we are with the internet today. We still, as individuals and as a society, have the ability to make certain beneficial changes — and I would suggest we begin with a thoroughgoing reassessment of the role of computers in education. But I don’t think we’re going to be able to alter the net’s fundamental reshaping of our lives and thoughts. We’ve made our bed, and we’re going to be sleeping in it for a good long while.

Jamais Cascio, distinguished fellow at the Institute for the Future, was in the thick of this argument a decade ago. He offered updated thoughts in an email interview on October 28, 2020.

Imagining the Internet Center (ITI): How have things evolved from that era and what do you think are the most important insights since then?

Cascio: In some ways, the evolution since then has been a straightforward acceleration of trends already visible at the time: mobility, increased bandwidth, video, that sort of thing. At the same time, we’ve seen a fundamental transformation of what we use the internet for. In the 2000s, you could make a persuasive argument that the internet was an information-centric medium. Blogs, RSS feeds and many of the other popular technologies of the time focused largely on making information easier to access. But it turns out that this obscured a deeper reality, one that the evolution of online life hinted at, if we’d been paying attention: The internet is an interaction-centric medium.

The pre-web days of the internet were a period of explosive growth, but we sometimes forget about the proliferation of communication tools. Beyond email, there was ICQ, AIM, IRC and even before that Unix write and talk (and much, much more). I don’t think we recognized just how critical these tools were.

This points to what turns out to be, for me, the most important insight, and it’s something I entirely missed: the impact of the internet on emotion. This is, far and away, the key story of the evolution of the internet over the 2010s, and it was not on most people’s foresight radar at the time. That’s been a notable failing of foresight work, especially a decade ago – we too often pay insufficient attention to emotional and cultural complexities. We saw the internet as a technology of the mind and immediately conflated “mind” with “intelligence.” The mind is the center of how we think, yes, but it’s also the center of how we feel. And as good as the internet is at illuminating knowledge, it’s even better at manipulating emotion. Google (or the internet writ large) may or may not be making us stupid, but it certainly seems to be making us angry.

It turns out that Google wasn’t the real villain in the story, anyway. Facebook (and, to a lesser extent, Twitter and other mass-audience social media) has turned out to be far more impactful – and dangerous – for the human mind and human behavior than Google could ever have hoped to be. I recall discussions at the time [a decade ago] fearful that Google organizing humankind’s information would lead to a kind of uniformity of thought; while I didn’t think that would happen, I could certainly understand how that fear could arise. But rather than a global alignment of information in 2020, we have this cascade of alternative facts and conflicting realities, driven to a significant degree by the spread of social media.

That divergence between the effects of information-centric media and interaction-centric media needs to be kept in mind when looking at research studies on the impact of the internet on our brains. Information-centric media can certainly be attention sinkholes (as anyone who has checked TVTropes.com can attest), but interaction-centric media are attention tyrants. They demand we give them attention. So, when we look at how the internet changes us, we need to ask what part of the internet do we mean? Having fingertip access to information may (for example) reduce time spent deep thinking, but having relentless interruptions by social platforms with responses or alerts is very likely far more disruptive, especially if those messages are perfectly designed to provoke strong reactions.

Following a Wikipedia rabbit-hole is far less likely to corrupt my ability to think than would arguing with someone on Facebook or doom-scrolling on Twitter.

If Carr wrote his Atlantic essay now with the title “Is Facebook Making Us Stupid?” it would be difficult to argue in favor of “No.”

ITI: Has anything surprised you about the way things have unfolded between then and now?

Cascio: The intensity of resistance to unwelcome facts across the social spectrum is unexpected and disheartening. I wrote a piece for the Bulletin of the Atomic Scientists a few months ago noting that people writing stories about the end of the world rarely (if ever) included a large part of the population simply refusing to believe that an unmistakable global disaster was really happening. It’s ridiculous – and it’s real. We have more access to information and facts than at any point in human history, and we seem to be fighting against shared knowledge harder than ever.

It wouldn’t surprise me if those two data points were somehow connected, but thinking too much about how they’re linked is just depressing.

The persuasive gravity of authoritarian, nationalist and neo-fascist voices that grew so rapidly over the past decade also came as a surprise. It turns out that a lot of us a decade ago were simply too optimistic. One of the fundamental errors many of us made back in the 2000s was taking the (arguably correct) claim that “the internet allows marginalized groups and suppressed voices to connect with each other and form communities” to be an inherently good thing. We let ourselves forget, for the most part, that some groups and voices are marginalized and suppressed because they are actively harmful. I wouldn’t want to have pulled back on the freedoms enabled by the internet for everyone just because of the dangerous few, but it would have been good to really think about this possibility before it bit us hard.

A more pleasant surprise has been the explosion of creativity the various internet platforms have enabled. Younger people in particular are far more actively making content – whether via Twitch-streaming, TikTok videos, even “memes” – than any of us were back in the early days of the internet. Podcasts have an even wider demographic reach. For good and for bad, certainly, but this is the golden age of personal expression, made possible by the internet.

ITI: Where do you see things going in the next decade when it comes to the effects of the internet on human intelligence?

Cascio: My default answer for any forecast at this point is “we’re doomed.”

I suspect that the dynamic that will shape the next decade at the nexus of internet and intelligence is filtering – pre-culling or even re-envisioning the world around us to remove the unappealing, the inappropriate, the antagonistic. The growth of mixed- or augmented-reality technologies will engender a desire for an ability to control what we’re experiencing, especially as they become advertising platforms. As they become more advanced, those tools will let us place a layer over what we see to make it more beautiful, or amusing, or surreal. Imagine Snapchat filters for your perception of reality.

But with that comes an even greater ability to remove the aspects of reality that don’t match up with what we already believe or think we know. It might end up making things a bit more peaceful, I suppose, if two people can sit side-by-side and see themselves in entirely different (mutually incompatible) worlds. It will certainly further degrade our ability to have a meaningful conversation about complex problems.

I don’t think that artificial intelligence (machine learning, etc.) will be of much help, unfortunately. We finally seem to be recognizing that AI systems are human-designed systems, and therefore subject to very human foibles. And we’ll probably learn what happens when a distinct ideological template gets added intentionally to a machine learning system to tweak how it understands its inputs. But that can’t be the whole story. I still believe that the increasing sophistication of human-machine hybrid minds will help us figure out how to grapple with some of the hideous complexities we’re facing going forward.

Maybe I’m still too optimistic. I still believe that there are people out there who see facts, even or especially unpleasant facts, as important. That we still have the ability to use these technologies of mind – both intellect and emotion – as enhancements to our humanity, not replacements for it or suppressants of it. That we still have a chance to fix things. Maybe we’ll rise to the challenges we face.

2 – Social tolerance

In the United States, the 2008 election was dubbed the first internet election. Running on a platform of social acceptance and the slogan of “Hope,” Barack Obama had effectively rallied support using the internet and more traditional campaigning strategies, becoming America’s first Black president. Some cultural commentators talked about a “post-racial America,” while others pushed back against that notion.

On a broad level, experts canvassed in late 2007 into early 2008 were skeptical the internet-enabled connectivity would lead to greater social tolerance. In that canvassing, 56% of experts disagreed with this scenario:

Social tolerance has advanced significantly due in great part to the Internet. In 2020, people are more tolerant than they are today, thanks to wider exposure to others and their views that has been brought about by the Internet and other information and communication technologies. The greater tolerance shows up in several metrics, including declining levels of violence, lower levels of sectarian strife, and reduced incidence of overt acts of bigotry and hate crimes.

In subsequent years, experts have warned that the internet may be leading people to be less accepting and more closed-minded. While algorithms continue to deliver content that seems especially relevant to individual users, many worry that these highly curated online environments propagate a cycle of filter bubbles and echo chambers that can have serious, divisive and dangerous consequences.

The internet helps connect like-minded individuals, but that can mean it brings together those who seek to harm or oppress others. As of early 2019, more than 1,000 hate groups were active in the United States, according to the Southern Poverty Law Center. These groups have proven difficult to police online, even when they radicalize users and incite violence.

Even in less extreme instances, many worry about the connections between internet use – particularly social media – and the risk of falling into group polarization (i.e., taking on more extreme viewpoints as people gather in online groups). Algorithms can reinforce this psychological tendency by pushing people toward more extreme content once they engage with the initial suggestions. Recent research found that this was happening when people watched political content on YouTube. And a majority of Americans feel President Trump’s rhetoric has been part of this dynamic.

In the 2007-2008 canvassing, Christine Boese, a digital strategy professional, argued the internet would enhance polarization and shrink the middle ground. She wrote then, “There are aspects of both greater and lesser social tolerance online. If the technology tends to lead cultures in any particular direction, it is leading to greater polarization of extremes, and less of the middle. Does greater tolerance constitute the middle? Not in this case. The extremes find support for their views online, more so than in the less-connected, face-to-face world, so bigots find their views reinforced and even the far extremes of social relativists find their views reinforced…. Is everyone really entitled to his or her own opinion, or are there very real and socially-constructed methods to evaluate whether some opinions and views are indeed superior to others? I believe the latter. Perhaps we should all go back and read that dated study by William Perry on the intellectual development of Harvard undergraduates in the homogenous 1950s.”

In a November 17, 2020, email interview, Boese responded to some questions about these issues:

Imagining the Internet Center (ITI): What do you remember about the period when we asked this question and you gave your answer?

Boese: This quotation takes me back! At the time, I was exactly 10 years out from completing my doctoral dissertation with a central research question about the polarization of extremes in online cybercultures. Here is what I wrote about it in the conclusion in 1998: http://www.nutball.com/dissertation/mains/Conclusion.html

“The nerdish, divergent-thinking, outspoken fanatic is one of the most prominent stereotypes of inhabitants of cyberspace. This study still cannot tell us if the Internet spawns such characters, or if it simply attracts people who are already a bit extremist in their views, or if it reveals the common, if hidden, extremism in many of us. I have long been drawn to the apparent proliferation of highly polarized argumentative positions, of rhetorical polemic, in cyberspace. At first, I focused on the phenomenon of “flaming” as the most prominent manifestation of this polemic, but flaming itself proved too difficult to consistently define, and the term failed to distinguish between ad hominem abuse and the argumentative rant. I was also intrigued by the interaction of incommensurate views, or what happens when extreme and oppositional rhetorical positions occupy the same cultural and discursive spaces, in interactions which are very easy to document in cyberspace.”

I had speculated that a “paradox of insularity and interactivity” led self-selecting groups online to polarize their rhetoric and amplify the extremism of insular groups.

“This seeming paradox of insularity and interactivity may help keep such Internet fringe groups thriving, for the people in the group have drawn together in their mutual interest, yet if the group faced no challenges from within or without, its insularity would leave little to discuss, since most members are in agreement.

“The paradox of insularity and interactivity plays out in cycles of retreat into insular enclaves and expansion back into interactive (and often polemical) engagement. The insular base of like-minded rhetors gives a rhetor strength to take and hold a polemical argumentative position, yet that position can never rest easy, for online there is always the potential for interactive challenge to that position. This cycle of insularity and interactivity helps to further polarize arguments in cyberspace . . . [and], when combined, can be seen as partially responsible for some of the extremist rhetoric and polemical argumentative strategies that often make up the general ethos for certain online cultures…”

In 2007-08, with the rise of social media, I saw a continuation of the patterns I’d documented in 1998: a pendulum-style movement from larger groups splintering into smaller, specialized groups, and then back into the larger groups again. Flame wars will splinter online groups just as small-town churches are periodically riven by differences over dogma and social issues.

In the face-to-face world in 2007-08, I also saw increasing rhetorical polarization, but nothing on the scale I’d documented online. I was still confident that online factors amplified the polarization and flame wars in ways face-to-face social niceties and manners generally block. Most Internet scholarship assumed the lack of eye contact and vocal tones contributed to misunderstandings that escalated flame wars. I added the “paradox of insularity and interactivity.”

I was working at CNN in 2006 when my network, “Headline News,” launched the primetime career of the racist radio fabulist, Glenn Beck. Few knew that Rachel Maddow had also been screen-tested, taping a pilot at the same time. The network brass weighed the two against each other and picked Beck over Maddow. I was in the newsroom town hall meeting when the decision was announced.

I knew Maddow’s intellectual pedigree, and I could no longer remain at a network that basically said, “Beck is the kind of voice we want to give a highly-promoted platform here.” From that seemingly innocuous beginning, Beck eventually migrated to Fox News, and the unreality took hold. Conspiracy spaces had a definitive platform.

Barack Obama was elected president in 2008. Right-wing media had been in denial about the possibility of a Black president, but the “birther” movement did not gain real momentum until Obama’s campaign and election. That provided the essential turn that led some mass media properties to migrate the ethos of online flame wars into the non-Internet spaces wholesale, into ordinary people’s living rooms.

ITI: How confident were you in your prediction? What factors were you weighing as you wrote the answer you did?

Boese: Very confident, informed by my scholarly work with interactive media and my time in cable news, watching that mass media polarization, or “bifurcating realities.” What I’d originally observed online had escalated and migrated to mass media.

What did not exist as a powerful social force in 2007-08 was the concept of a “social feed” or algorithmic personalization. These feeds have almost completely replaced most other forum software for online communities.

I also saw the rise of authoritarianism, or “argument from authority” as a debate style within Fox News, even previously, on CNN’s “Crossfire.”

What informed my conclusion in 2008 was a sense that the Fox News-influenced universe embraced what William Perry calls an “intellectual retreat” to authoritarianism, with argumentative positions as unsupported truisms or hasty generalizations, topics name-checked as tropes in insider discourse, without the kind of argumentative support required in first-year composition at U.S. universities for a passing grade.

ITI: Has anything surprised you about the way things have unfolded between then and now?

Boese: What has happened between 2008 and now greatly surprised me. I could project an increase in rhetorical polarization, authoritarianism and an extremist ethos, but I did not predict the bifurcated realities in the face-to-face world we see now. I thought such phenomena needed the social disconnection of online spaces to escalate and amplify.

I could not have predicted the owners of a cable news property would double-down on propaganda-focused, right-wing authoritarianism and flame war-style provocations. Fox News developed the kind of social influence to create bifurcated realities, amplified by algorithmic, feed-driven, online forums that would insulate and distort U.S. civil society to the extent that people would elect and back a president with authoritarian or fascist leanings, informed by a closed loop of “big lie” propaganda so gaspingly false that fact-checking simply doesn’t matter.

The concurrent rise of a global right-wing, fascist movement indicates to me how the extremism of insular and interactive online feeds and forums have migrated into the face-to-face world, not just as a presence, but with considerable and frightening influence.

Here is the puzzle of this right-wing bifurcated reality: those who fully embrace the Fox News universe (I count a number of those close to me in this group) express an impenetrable certainty in the rightness of their easily disproven beliefs. No counter-factual information makes any impression, and most is dismissed out of hand. Scholars refer to “confirmation bias,” or discuss the link between authoritarian thinking and fear-based psychology as a source of that impenetrability.

But there is one other feature of our new, mass media and algorithmic social feed-based bifurcated realities: relativism. Relativism in the context of authoritarianism makes no logical sense, but it is observable and easily documented. It may be a right-wing manifestation of the condition of postmodernity – marrying absolute certainty in positions, principles, and beliefs with a lack of consistency and willingness to flip those beliefs on a dime if a revered authority calls for it.

I have called it “Authoritarian Relativism.” Orwell fans will note that this is another version of “Oceania is at war with Eastasia. Oceania has always been at war with Eastasia.”

Within the bifurcated, right-wing bubble, authoritarianism disregards logical consistency. Authorities who determine the orthodoxy and dogmas of this bifurcated reality change those orthodoxies with no concern about being a “flip-flopper” or containing contradictory multitudes.

In the reality distortion fields of the Trump Administration, God and patriarchal authorities are a named center of the universe. Believers must have a “Salvation Story,” must know the Bible, must live righteous lives, must value human embryos, and save children from pedophiles. People on the left are criticized for not meeting this litmus test. Meanwhile, valued leaders on the right can have affairs, embezzle money, be involved with pedophiles, etc. and followers on the right find no incongruity (well-documented by journalist Sarah Posner in her book, “Unholy”).

Online, there is even a name for this: “It’s OK if you’re a Republican,” or IOKIYAR.

Marshall McLuhan predicted a Global Village of images, impressions, and logical inconsistency as a media effect, a world of orality without memories, where the only moment that matters is now. McLuhan would probably name “Authoritarian Relativism” as a “media reversal,” where the far reaches of authoritarianism flip the foundationalist concept of capital-T Truth of authoritarianism into fluid, “anything goes” relativism.

ITI: Where do you see things going in the next decade when it comes to social tolerance and the internet?

Boese: Here’s a factor often missed by focusing on the dangers of fascism: tolerance of other once-fringe behaviors rejected by mainstream society has also increased. This is not the middle; it is acceptance of what was previously considered extreme, amplified through self-selected online communities with the paradox of insularity and interactivity.

Once gay marriage had been ruled legal, it swept the U.S. with such speed that social levers of homophobic repression were useless. LGBTQ people were not cowering in fear. Even as the right turns to fascism and dreams of more overt methods of oppression, tolerance is increasing on the opposite pole, growing quietly, and perhaps, pervasively. The edges are not racing to capture the uncommitted, homogenous middle. The middle is systematically migrating to the divergent, identity-affirming edges.

What fuels the right-wing side of this bifurcated universe is structural, systemic racism, with a white supremacist, Apartheid-like minority rule. This was the element of U.S. culture that the Nazis studied and sought to emulate with the Jews, the way that the American South held down those they sought to oppress with lower status (as documented in Isabel Wilkerson’s book, “Caste”).

Religious conservatives, with no attachment to logical consistency, seek to refuse services to people who offend their religious beliefs, seemingly unaware they are demanding a bigotry that has been illegal since John Lewis led the integration of lunch counters.

Online, groups are actively splintering, not just into the dark web of QAnon, 4chan/8chan, or even the odd sub-Reddits. Now, neo-fascist and white supremacist groups actively recruit online and show up heavily armed in face-to-face spaces. Their boogeyman is “antifa,” a made-up foil that, like abortion activism, motivates the base, even though the Antifa or antifascist groups have no real organization or structures.

“Enemy-blaming” has its own precedent in Germany’s “Reichstag Fire.” At this point, with angry scenes from the George Floyd “Black Lives Matter” demonstrations amplified in a B-roll loop on right-wing mass media, we can reasonably name the symbiosis between far-right online media fringe groups and mass media propagandizing as a fully integrated system, both centralized and distributed, working on multiple fronts toward a stated goal of starting a race war, a second Civil War, in the United States.

And no, I could not have predicted that. I pray it will never occur.

ITI: So, what can we do now?

Boese: I’m reading Shoshana Zuboff’s “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.” I am working with data scientists to learn more about AI and machine learning from a user-experience standpoint. Like Zuboff, I believe that aggregated data paired with machine learning will determine the paths we are guided toward.

I’ve never been a technological determinist, but reduced choices affect everyone. I continue to focus on the propaganda power being marshalled through this symbiotic relationship between interactive media (many-to-many) and mass media (one-to-many).

Among the many cultural shocks of 2020, I find the biggest shock is discovering the power and impenetrability of right-wing media propaganda in creating a truly bifurcated, MAGA-reality.

As I watched the rise of the right-wing mediasphere from the extremist polarization I’d first started observing online, I thought, “Where are their rallies? Where are their mass gatherings?” I thought we would not be in danger until there were mass gatherings such as depicted in “Triumph of the Will.” I was wrong.

MAGA rallies are not the same. They take place in the intimacy of living rooms, in front of televisions and on YouTube, with fragmented mass media audiences where the watchers can’t usually see other watchers. Audience conformity is presumed, with right-wing identity politics. And then, the online spaces offer solace, connection, as they always have, for those who are isolated in their face-to-face communities. Avid viewers are united around an organizing principle. They don’t feel alone in the dark, listening to the right-wing call-in radio host. They have their friends on Fox News and other networks now as well, Newsmax, OAN.

And, with the paradox of insularity and interactivity, the bubble they have constructed grows stronger. What reality can intrude to pop the bubble? Only one election result is accepted. Even COVID mortalities are denied. Morgue trucks are denied. Health care workers tell stories of COVID-sick MAGA believers abusing caregivers in the ER and ICU, denying the severity of their illnesses even as they struggle to breathe, right up to the moment they are put on ventilators.

As Zuboff would point out, responsibility also lies in what are truly “dark UX patterns” created through surveillance data aggregation, algorithms, AI, and machines that can process and act on data at scale toward outcomes that Explainable AI can’t explain – yet!

Our authoritarian fellow citizens’ disassociation from reason and proof in a bifurcated reality could lead us to nothing less than a decline of civilization, to a new know-nothing dark age of plagues and wealth inequalities that are positively medieval.

3 – Tech firms and the government

In the 2011 canvassing, 51% of experts said tech firms would protect users from government interference by 2020.

After the events of 9/11, the Patriot Act was signed into law, giving the government greater ability to detect and deter terrorism. The law has drawn controversy ever since, with mixed support from the public. One of its most controversial elements, Section 215, allows the government to search third-party sources that hold users’ personal information without users’ consent or knowledge. Early in 2011, a news story broke that the U.S. government had been secretly subpoenaing tech companies for user information, with more than 50,000 requests being sent each year. The subpoenas involved gag orders that prevented these companies from telling users what was happening. The story came to light when Twitter was subpoenaed to release personal information associated with the WikiLeaks Twitter account and notable figures associated with WikiLeaks. Twitter fought the gag order and notified affected users. In May 2011, the U.S. government signed an extension of several key elements of the Patriot Act, including searches of third-party sources of personal information.

In 2011, we presented tech experts with paired scenarios tied to each of eight issues and trends and asked what was most likely to happen in each realm by the year 2020. A slight majority (51%) of experts selected the following tech firms scenario as the more likely outcome for 2020:

In 2020, technology firms with their headquarters in democratic countries will be expected to abide by a set of norms—for instance, the “Responsibility to Protect” (R2P)[1] citizens being attacked or challenged by their governments. In this world, for instance, a Western telecommunications firm would not be able to selectively monitor or block the internet activity of protestors at the behest of an authoritarian government without significant penalties in other markets.

Conversely, 34% of experts said the following would be the case:

In 2020, technology firms headquartered in democratic countries will have taken steps to minimize their usefulness as tools for political organizing by dissidents. They will reason that too much association with sensitive activities will put them in disfavor with autocratic governments. Indeed, in this world, commercial firms derive significant income from filtering and editing their services on behalf of the world’s authoritarian regimes.

Neither of these scenarios came to fruition, of course. The 2020 reality is much more complicated with regard to the relationships among tech companies, government agencies and users/citizens. In 2019, there were 213 documented internet shutdowns across 33 countries. Some social media companies have worked with certain governments to censor dissenting opinions about those governments or their practices. Yet in some circumstances tech companies are seen as providing outlets for free speech. In the U.S., what counts as acceptable free speech seems to differ across social media platforms and can depend on the situation. The controversy over what should be considered free speech online is ongoing, as is worry over content moderation and censorship practices on social media platforms. The public and some tech companies have called for the government to be more involved in the operations and regulation of tech companies.

In addition, in the era of COVID-19, new concerns have been raised regarding the protection of tech users from unwanted data collection and sharing among tech companies and government entities. Tech companies have access to data that would allow public health officials to track people’s movements and trace the potential spread of COVID-19, so some governments have been calling on tech companies to share their data. Google and Apple launched contact-tracing software in May 2020 that allows public health authorities to build their own apps to notify people if they were exposed to COVID-19. However, the public is skeptical about the efficacy of these tracing efforts.

In the 2011 canvassing, Stowe Boyd, founder and managing director of Work Futures, argued experiences would differ greatly around the world and that civil unrest would play a key role in determining the outcome, writing, “Tech firms based in Western democratic countries will continue to support the compromises of political free speech and personal privacy that are, more or less, encoded in law and policy today. The wild card in the next decade is the degree to which civil unrest is limited to countries outside that circle. If disaffected youth, workers, students, or minorities begin to burn the blighted centers of Western cities, all bets are off because the forces of law and order may rise and demand control of the Web. And, of course, as China and other countries with large populations—like India, Malaysia, and Brazil—begin to create their own software communities, who knows what forms will evolve, or what norms will prevail? But they are unlikely to be what we see in the West. So, we can expect a fragmented Web, where different regions are governed by very different principles and principals.”

In an email interview on October 19, 2020, Boyd was asked to update his thoughts:

Imagining the Internet Center (ITI): What do you remember about the period when we asked this question and you gave your answer?

Boyd: It was a time of Occupy Wall Street, and the Federal Trade Commission settlement with Facebook that blocked Facebook from making explicitly private information public. I coined the term ‘publicy’ in 2010 as the opposite of privacy, which I argued was the new reality: Our once private information would default to public, no matter what we say.

Civil unrest was growing across the world, as reported by many, like “As Scorn for Vote Grows, Protests Surge Around Globe” by Nicholas Kulish, who wrote in 2011:

“Increasingly, citizens of all ages, but particularly the young, are rejecting conventional structures like parties and trade unions in favor of a less hierarchical, more participatory system modeled in many ways on the culture of the Web. In that sense, the protest movements in democracies are not altogether unlike those that have rocked authoritarian governments this year, toppling longtime leaders in Tunisia, Egypt and Libya. Protesters have created their own political space online that is chilly, sometimes openly hostile, toward traditional institutions of the elite.”

And, as we have seen, those protests led to (or paralleled) the rise of authoritarian regimes, the emergence of divisive populism, and – of course – the growing desire by governments to control what activists say and how they organize online.

ITI: How confident were you in your prediction? What factors were you weighing as you wrote the answer you did?

Boyd: I believed in the narrative of the scenario I laid out, and I tried to connect the threads. But for me thinking about the future is always more like science fiction than drawing a flowchart.

ITI: Has anything surprised you about the way things have unfolded between then and now?

Boyd: I am surprised that no one has tried very hard to break up Facebook, despite all the harm they’ve caused and the monopolistic approach they’ve taken. I was surprised by Trump. I wasn’t surprised that China has moved to curtail the freedoms that Hong Kong kind of enjoyed.

ITI: Where do you see things going in the next decade when it comes to the relationship among tech companies, government factions, and tech users/citizens?

Boyd: The internet has fallen into a patchwork quilt of government-constrained internets that are not at all equivalent. In many of those internets, tech companies openly work with repressive governments to surveil citizens and block their access to information and connections considered dangerous to the state. States employ hackers and cyber-hucksters to disrupt and influence foreign interests. Even in more-enlightened countries, internet use can be perverted by political interests, like the unemployment insurance system engineered by the GOP in Florida to intentionally make applying for benefits maddeningly difficult or impossible. Some parts of this state of affairs can be fixed by better governance, but we can anticipate a long stretch of bad road ahead.

4 – The role of English online

In the 2006 canvassing, 57% of experts said English would not displace other languages online.

The linguistic diversity of the internet has been in flux for decades. While about 80% of internet content was in English in the mid-1990s, this share dropped drastically as the internet grew in popularity around the world. By 2005, English constituted only about 45% of internet content. Still, concern remained that English could displace other languages online.

In late 2005-early 2006, we asked tech experts about seven scenarios about possible changes that might occur by the year 2020, one of them regarding the language of the internet; 57% disagreed with the following scenario:

In 2020, networked communications have leveled the world into one big political, social and economic space in which people everywhere can meet and have verbal and visual exchanges regularly, face-to-face, over the internet. English will be so indispensable in communicating that it displaces some languages.

In retrospect today in 2020, the experts who were wary of this prediction were somewhat accurate, but with large caveats. The internet has many active language communities. However, English remains the most common language online, both in terms of users’ native languages and websites’ content language. There are varying figures about the overall number of websites and active websites: estimates for the total number of websites range from 1 billion to nearly 2 billion, and estimates for the number of active sites range from 200 million to 400 million.

As of December 2020, W3Techs reports that 61% of all websites use English, while Russian – the second most common language for websites – constitutes just 9% of all websites. By comparison, one estimate puts the share of Chinese-language sites at slightly over 1%, and another puts the total number of Chinese sites at around five million. By language spoken online, Internet World Stats says that about a quarter of users speak English, while about a fifth speak Chinese.

A 2018 survey by Pew Research found that some individuals in emerging economies reported issues finding content available in their preferred language and that not being able to read at least some English was associated with issues in accessing and using mobile phones. Although English has not become ubiquitous online or necessarily diminished other languages’ content, the internet is largely tailored for English speakers, which results in many notable digital language divides.

Some experts suggested in the Elon-Pew canvassing in 2005-06 that language-translation software would be perfected in the future, allowing all to use their own language as they navigate the internet and eliminating the domination of any one language online. Indeed, Google Translate – a popular translation tool – was released only a few weeks after the canvassing was completed (April 28, 2006). Although still imperfect, this constantly improving translation tool and others like it allow users to understand words, phrases and even entire webpages in more than 100 different languages.

In the 2005-2006 canvassing, David Clark, senior research scientist at MIT and internet pioneer, argued that while English would remain common online the use of myriad local languages would grow online, and translation would increasingly be used for some kinds of encounters. He wrote: “English is going to be the common language, but we will see an upsurge in use and propagation of local languages. For many users, their local language will still be the only language they use on the internet. And of course, for low-complexity uses, we will see more translation.”

In a November 25, 2020 interview with us, Clark provided updated remarks:

Imagining the Internet Center (ITI): What do you remember about the period when we asked this question and you gave your answer?

Clark: Sixteen years ago is forever and it was a completely different world. The internet was spreading rapidly. I remember a Middle East expert telling me around that time that the fastest growing language on the internet was Arabic.

My sense then was that we were going through a transformation that was really showing us the globalized character of the internet. Even then it was clear to me that for most internet users, the experience they had on the internet would be localized to their cultures and their language. Putting it sort of bluntly, it’s easier for 1,000 entrepreneurs to translate a webpage into Chinese than it is to have a billion Chinese learn English.

I was pretty confident that what we were going to see on the internet for most users was going to be domestic and localized in terms of culture and language. Once the internet is available to the core populations of different countries, you have to deal with people in their own language.

Even though much of the internet is now commercial – and you’d think that might push towards a “universal language” – it’s much more the case that even commercial interactions are shaped by local culture and customs. Beyond that, the dominance of local norms is even more evident in discourse. Conversations are shaped by local traditions and norms around discourse. Think of the similes that people draw on to explain things to others. They are subtle and particularly local.

That’s why I knew there would be a natural resistance to intrusions from other cultures – and American hegemony on the internet. People fight to preserve their cultures. They are doing it aggressively in the age of globalization. They want to challenge the dilution of their cultures.

ITI: How confident were you in your prediction? What factors were you weighing as you wrote the answer you did?

Clark: I was pretty confident because I have a particular view of what we should make of the internet. When we were building the internet, our aspiration was that it would be a platform where anyone could build anything they wanted. It shouldn’t surprise us that if you create a space in which you can do anything you want that different people will do different things.

I was excited then that the internet wasn’t becoming an English-language cultural hegemony. It was great that people around the world were doing what they wanted to do. They seized the empowerment for their own purposes. We might not be excited about everything that’s been done, but it’s the consequence of what we built, and we should be pleased with that.

What are the real implications of the generality of the internet? One that we’re struggling with today is that some people want to exploit that generality and be malicious. There are lots of ways to be a creative crook, if that’s what you want to be.

ITI: Has anything surprised you about the way things have unfolded between then and now?

Clark: A lot of it has to do with timing. We knew a long time ago, for example, that you could use the internet for streaming video. We had it in the mid-1980s. But the system just didn’t have the capacity to make it work as a commercial product. We knew it would work. We were just waiting for the infrastructure to catch up. That’s a good example of a lot of what you see on the internet. Our ideas outrun the capacity of the system to support them. Once the full system is built, the bottleneck goes away and a lot of those pent-up ideas that people were waiting to pursue become possible.

The explosion in the apps space is another example of the nice consequence of building out the infrastructure. It’s tied to the trend toward mobile connectivity. And then look at all the devices we can talk to now – tell them what we want them to do. None of this was really surprising because we knew it could be done.

What has been surprising is how quickly they took hold once we had the infrastructure to support them. The lesson of this is that you should be careful about assuming something won’t happen. The truth is that you should almost never be surprised because people will move into a space as soon as it’s available and start creating things.

ITI: Where do you see things going in the next decade when it comes to lingual diversity and other cultural issues online?

Clark: When we look to the future, the thing that distresses me is the possible balkanization of the internet. The first use of the term I can find is a paper written in 1997 by Marshall Van Alstyne and Erik Brynjolfsson, “Electronic Communities: Global Village or Cyberbalkans?” It’s very forward-looking. We are seeing forces now where the Russians are trying to create a Russian internet experience and the Chinese are trying to create a Chinese internet experience – the Great Firewall of China – and a lot of smaller countries would like to do the same but don’t yet know how to do it.

It’s not that they want to cut themselves off entirely, but the aim is to create a domestic internet experience entirely shaped by applications that were built inside the country. It’s designed to meet the language needs and state control needs and all the other things we associate with these regimes. Clearly that trend is going to amplify the localization and language experience of particular cultures. Part of what they are trying to do is control what people see – so that they aren’t tempted to go and find material that was created in other languages. When you think about this situation, you have to start by understanding that “we ain’t them.” And then try to understand the expectations and constraints of the regimes and the things that ordinary users are willing to undertake. When everyone is on the internet, the tools for using it have to be tailored to them.

We are struggling and we are at a fork in the road on these issues. It’s clear that globalization is triggering a nationalistic backlash. It’s happening in lots of countries, including in the United States now. At its worst, this could lead to an even more intentional desire to localize the experience for users. On the other hand, over the next 10 or 20 years, you might see a washing out of cultural differences. Will the internet become an American cosmopolitan experience? The tension here is between nationalistic resistance, on the one hand, and global mixing on the other hand. Nationalistic tendencies point in one direction, but a natural homogenization of culture also could unfold. I hope we don’t lose cultural diversity and become too homogenized. My guess is that we probably won’t. But at the most positive edge of this, I also hope we become more cosmopolitan.

5 – The rise of virtual and augmented reality

In two different canvassings, many experts expected virtual reality and augmented reality technologies to be a part of everyday life in 2020, and that is starting to happen.

Virtual reality (VR) in the digital age generally refers to a computer-simulated experience. VR has been used in many different ways from gaming to aeronautics, combat training to shopping. While the late 20th century saw extensive growth in VR technology, the early 2000s saw a lull. Despite this downtick in innovation on the VR front, a majority of experts in late 2005 into early 2006 expected fairly vivid virtual realities would exist by 2020 and some even expected that some people would be addicted, spending a great deal of time in virtual realms and mostly eschewing “reality.”

In late 2005-early 2006, the Pew Research Center and Elon’s Imagining the Internet Center asked tech experts to weigh in on seven differing scenarios tied to possible changes that might occur by the year 2020. More than half of the experts (56%) agreed with the following scenario:

By the year 2020, virtual reality on the internet will come to allow more productivity from most people in technologically-savvy communities than working in the ‘real world.’ But the attractive nature of virtual-reality worlds will also lead to serious addiction problems for many, as we lose people to alternate realities.

This prediction has not quite come to fruition – at least not as experts had initially expected. A virtual reality akin to “The Oasis” in Ernest Cline’s “Ready Player One” remains science fiction. In fact, fully realized VR has yet to find common mass-market use. Virtual reality gaming is still struggling to break into the mainstream gaming market. Still leading in popularity are the non-VR games that allow individuals to build simulated lives; these games have proven addictive and have been found to lead some people to unhealthy obsessions.

Augmented reality also saw some gains in the latter half of the 20th century, but it did not gain as much ground as some experts anticipated at the turn of the millennium. Augmented reality (AR) is an interactive experience in which technology enhances the current offline environment – for example, using your camera to recognize your location and overlay visual directions to your destination when you get lost, reveal virtual art pieces in a museum, “try on” virtual clothing or show how new furniture might fit into your home.

In 2007-2008, when we asked tech experts about eight possible future trends that might occur by the year 2020, 55% agreed with the following scenario:

Many lives are touched by the use of augmented reality or spent interacting in artificial spaces. In 2020, virtual worlds, mirror worlds, and augmented reality are popular network formats, thanks to the rapid evolution of natural, intuitive technology interfaces and personalized information overlays. To be fully connected, advanced organizations and individuals must have a presence in the “metaverse” and/or the “geoWeb.” Most well-equipped internet users will spend some part of their waking hours—at work and at play—at least partially linked to augmentations of the real world or alternate worlds. This lifestyle involves seamless transitions between artificial reality, virtual reality, and the status formerly known as “real life.”

This statement has yet to come to fruition. While some may have a loyalty to Pokémon Go, Google Maps AR mode or Snapchat’s AR filters and use them daily, most people are not using AR on a regular basis, and a smaller share are devoted users of VR.

In the 2007-2008 canvassing, Susan Mernit, executive director of The Crucible and formerly an executive with Yahoo and America Online, argued that while AR and VR may appeal to certain subcultures, it would take much longer for these technologies to go mainstream. She wrote: “This 2020 scenario is appealing to the geeks and the gamers among us, but I don’t see the seamless transitions that this posits happening this quickly—it’s elitist and too far out of the mainstream for many Americans, especially those with less free time. Having said that, I do think there are sectors of society that will use the metaverse to play and to train in disproportionate numbers—and that we will see a rise in virtual worlds as entertainment spaces outside of gaming (think sex, travel, historic simulations).”

In a December 1, 2020 interview, Mernit got another chance to reflect on this topic:

Imagining the Internet Center (ITI): What do you remember about the period when we asked this question and you gave your answer?

Mernit: In 2008, the memory of using a 300-baud modem was not that old. It was still pretty fresh in my memory. So much of what was possible was dictated by access to whatever speed the internet could provide. AR and VR were in their very early stages, but the computing power for them was really a question. Two of the things I couldn’t imagine at the time were today’s increase in computing power and the miniaturization of technology. Having a mobile phone with more computing power than I had on a computer when I answered those questions wasn’t something I could predict.

I didn’t see around those corners in 2008, so my answer was driven by the consumer behavior I saw around me. The internet is a hospitable home to all sorts of obsessive, niche networks. There are lots of narrow and deep communities, and some of them had an interest in AR and VR early on.

Now there are so many applications for AR and VR – telemedicine, virtual travel, arts communities like Meow Wolf, the realty business, lots of the changes spawned by COVID-19. We see so many ways that people are integrating AR and VR as tools for everyday activities.

When I answered these questions, I worked at America Online. Soon after, I was in a big layoff and tweeted about it (nobody really did that back then). Then I went to TechStars in Boulder, Colorado. I was in a startup that failed very quickly. But one of the other companies there was going to change history. It was Jeff Powers and Vikas Reddy with Occipital. They were developing early AR and VR technologies without knowing what the best uses might be. They had this amazing technology, but they didn’t know its purpose – that it could become this “middleware” that could serve thousands of companies and applications.

We can’t see around the corner to see what technologies are going to take hold. When we were at TechStars, they were not the favorite. I had no idea that these brilliant people that I was sitting next to were creating something that was a giant game changer.

ITI: How confident were you in your prediction? What factors were you weighing as you wrote the answer you did?

Mernit: What I did know then that is true today is that processing capacity and platform tools drive user behavior more than user behavior drives those tools. With Occipital, there was not a consumer or business problem they were trying to solve. Their interest was in creating capacity.

Over my long career in tech, I’ve seen lots of brilliant ideas for which there was no technical capacity at the time to really build them. When I was starting to work with Wide Area Information Servers (WAIS), we were trying to build these big computing tables at a point when there was no World Wide Web. It was hard. Now there is so much capacity, including on our phones, that this is possible. The major insight is that the big innovations come from a blend of technologies and the platforms that convey them.

Look at VSCO in Oakland. They had a very innovative tagging platform for photography which then became a phenomenon with girls. But if those tools hadn’t been there, the capacity to build that niche wouldn’t have been possible. It’s easy for us to think that this cool game will drive usage or this celebrity will draw the audience we need. But it’s really the case that you need the technologies and the platforms in place first and then the innovations are built on them.

AR and VR technologies keep growing. The issue is: How do they capture your attention? How do you become aware of them?

ITI: Has anything surprised you about the way things have unfolded between then and now?

Mernit: I am always surprised – being outside the demographic that is most into this – at the incredible persistence and value of gaming. The amount of time people spend on gaming as a cultural experience is perhaps as great as or greater than the time spent on social media or on watching and streaming the media we used to call TV. It’s become so endemic to the life experiences and thought processes of people who are 40 or younger. In 2008, gaming was something that kids did. I work with 35-year-olds now for whom gaming is a central part of their life. The same thing applies to some people and the social media platform that is central to their lives.

The other big shift is from the “attention economy” to the “like economy.” People’s need to feel personal validation and approval is striking and it ties to people being the owner of their own brand. Their brand brings them acceptance and celebrity – ideally – at a level equivalent to or greater than their peers. That starts with people at age 12. It’s core to how people experience themselves today. It was unimaginable in 2008.

ITI: Where do you see things going in the next decade when it comes to augmented and virtual realities?

Mernit: I’m excited about the idea of immersive environments. As someone who’s moved into a non-profit where we do very “high tech” like blacksmithing and foundry testing metal and glass blowing, I’m wondering how we take these immersive experiences and create virtual ways that bring them really close for users to touch them.

It’s the same impulse that makes people want to do scans of monuments or places that they can’t visit directly. We are moving to a world where we will keep growing ways to have alternative experiences of locations and realities. The educational applications are powerful. For instance, you can teach people about the impact of trauma on a child, exploring Adverse Childhood Experiences (ACEs). These tools give people the chance to go deep on any subject that interests them.

Another thing I’d like in the future is for search technology to get a lot better. We haven’t yet solved search. Google is so dominant that it’s probably prevented a lot of innovation. Everyone struggles to find what they really want. The tools to save and bookmark what you really want are a pain.

The same goes for personal computing experiences. In the future it will be a lot more voice-driven – it’s getting better and better and smarter and smarter. There will be better ways to use computing that will be “fingerless” and much more helpful in the mobile environment.

The final thing I see as I’m getting older is assistive technologies for people who are aging. They will help people maintain their wellness and use tools to help them lead productive lives. And we’ll see more things like what’s happening in Japan for artificial companionship. People will be able to feed simulacrums of their loved ones into technology and get approximations of those loved ones to spend time with.

6 – The line between home and work

In the 2007-08 canvassing, 56% of experts agreed there would be a blurring of the distinction between work and home life.

In 2007, the internet was seeping into nondigital walks of life in ways which had not been seen before. Arguably, that was the year the internet “turned upside down” with the release of the first iPhone just as Facebook and Twitter were really starting to take off. Clearly, the seeds had been planted for work to more easily seep into home life. Whereas some other cellphones could connect to the internet in a limited fashion, the iPhone allowed people to use the internet and access constantly emerging new digital applications anywhere, anytime, all within the palm of their hand.

While BlackBerry had gained popularity with businesspeople for its work-on-the-go capabilities in the late 1990s and early 2000s, the iPhone appealed to a wider market. Suddenly, a more diverse group of Americans had work-capable devices on them at all times. In addition, video conferencing was on the rise. Skype, founded in 2003, had been purchased by eBay in 2005 and would go on to be owned by Microsoft in 2011. This free video-chat software, easily downloaded to almost any digital device, allowed people to talk face to face without concern about location, specialty equipment or costs. In 2010, Apple introduced FaceTime. These tools made it easier for families, bosses and co-workers to stay connected at a distance; they allowed for work to be done from home.

During late 2007 into early 2008, we asked tech experts about eight possible scenarios that might occur by the year 2020. Some 56% of experts agreed with this scenario:

Few lines divide professional time from personal time, and that’s OK. In 2020, well-connected knowledge workers in more-developed nations have willingly eliminated the industrial-age boundaries between work hours and personal time. Outside of formally scheduled activities, work and play are seamlessly integrated in most of these workers’ lives. This is a net-positive for people. They blend personal/professional duties wherever they happen to be when they are called upon to perform them—from their homes, the gym, the mall, a library, and possibly even their company’s communal meeting space, which may exist in a new virtual-reality format.

This prediction has become indisputably true in 2020, as the COVID-19 outbreak drove many Americans to work entirely from home. Video teleconferencing services helped businesses continue to operate while employees carried out their work on home computers or company-issued devices. Cellphones also blurred the work-home divide, as work calls could be made to personal devices and emails could be sent on the go. Even before the COVID-19 outbreak these behaviors were evident, though not on such a simultaneously global scale. While the overall sentiment of the prediction has held true, a few specifics did not come to fruition. Meetings are not generally held in virtual or augmented reality environments. However, some video call services do provide levity through virtual backgrounds and facial filters that change how a person looks.

In the 2007-2008 canvassing, ’Gbenga Sesan, executive director for Paradigm Initiative, based in Nigeria, argued professional and personal life would run simultaneously, with mobile phones playing a key role in this blurring process. He wrote: “Even those who live in developing (or underdeveloped) nations will be able to overcome the barrier of geography through internet access and other connected devices. It may be ‘plug-and-pray’ and not ‘plug-and-play’ but it plugs anyway! It’s now 4:05 a.m. in Lagos, Nigeria, and I’m asking myself if everything I’ve done in the last 5 hours will count as work, rest, play, or sleep-mode tasks. In 2020, professional and personal time will be as far from each other as fingers from the keys on a mobile phone. Multitasking will no longer mean driving and talking alone, but it will include work and play at the same time.”

In an interview on November 25, 2020, he shared new insights:

Imagining the Internet Center (ITI): What do you remember about the period when we asked this question and you gave your answer?

Sesan: When I was in high school in 1991 in Ondo State, Nigeria, there were two computers in the school and the teacher told me, “Computers are not for people like you, you can’t understand how to use them.” It was painful. I made up my mind that I am going to learn how to use computers and teach other people.

That led me to learn computers and start a group of young people online in a Yahoo! Group called “Black Pioneers” and then another group called “E-Nigeria.” I went on to work for a nonprofit because I wanted to give back a year in thanks for the people who had taught me computers. But it turned into six years doing programs that straddled business education and information technology (IT). After that, I founded the Paradigm Initiative and we did lots of training of young people and then turned to policy advocacy. We are in six countries now, working on digital rights and digital inclusion.

The reason I gave the answer I did was that I was speaking from a place of hope. At the time, the big issue in Nigeria was access to the internet. I was thinking that maybe the day will come when we will all have good internet access and I wouldn’t have to be in DC or London or San Francisco to have access to the best tech opportunities.

I didn’t have a crystal ball, but a decade ago it was pretty predictable that work and leisure would be changing. Mobile devices were more widespread and getting smaller and smaller and more affordable. And they could do more things besides phone calls. My training in electrical engineering gave me confidence in Moore’s Law – that every 18 months the power of microchips would double and the price would fall. Phones were becoming as powerful as desktop computers. It only made sense that this would change how and where work gets done and leisure is enjoyed.

ITI: How confident were you in your prediction? What factors were you weighing as you wrote the answer you did?

Sesan: BlackBerry phones were available then, but at work they were only used by the “big woman or man.” You had to be a senior businessperson to have a BlackBerry. I could see that the more people had these devices, the more likely it would be that they would send assignments for people to do in advance of meetings.

In Nigeria in those days all the workers got to the job at 8:00 a.m. and the boss would send instructions to them and then arrive at 10:00 a.m. and want to see how they had handled their tasks. The distinction between being at work and working, and working somewhere outside the office became irrelevant. It was only natural to note that the mobile phone was going to play a much bigger role in work and allow for work to be done anywhere, any time.

ITI: Has anything surprised you about the way things have unfolded between then and now?

Sesan: What could go wrong? One of the easiest things to do is predict the future you are going to work in. I was working in this space and was telling people that they would be able to work remotely in the future and not have to be in London, DC or San Francisco to work with the people who are there.

I confess I’m a big fiction fan and was excited when James Bond could whip out technology and do amazing things with it. I remember the Sandra Bullock movie, “The Net,” and everything that happened in that movie has eventually happened. So, my view is that if writers and Hollywood can think of it, then it’s probably going to happen.

It was still surprising to see the speed at which things happened. Between 1995 and 2000, Nigeria gained fewer than 100,000 internet users. In the next five years we added almost 5 million more, and between 2006 and 2010 the number of internet users quintupled. There were 4.5 million internet users in all of Africa in 2000, and that number has now grown to over 526 million.

Many analysts underestimated people’s appetite for technology. Everybody wants an easy life – wants to be comfortable. That’s one driver. Another is that sometimes we are forced to move quickly. The coronavirus hit, and human beings adapted quickly. That was what I was banking on – the selfishness and adaptability of humans.

ITI: Where do you see things going in the next decade when it comes to the distinction between personal and professional life?

Sesan: Three things come to mind for the future. One is that entertainment will become work. People will plant tasks inside entertainment. You will watch a movie and then solve a strategy problem. Scenario planning is another example. We will see businesses that allow people to have fun and in the midst of fun getting their work done. You see some things like that now. Captcha is an example. You go to a website and put in information to prove you are a human being. While you’re doing that you are solving part of a bigger problem. Many people don’t even know how captchas work, but I see the captcha-ization of real-life work.

The second change, which is a bit worrying for me, is that social life is going to disappear. Introverts used to be forced to go to workplaces. Now that technology allows you to work from anywhere, you will see lots of people looking for jobs that will allow them to be by themselves. They can determine the pace of their work. Work hours – 9 a.m. to 5 p.m. – will be a thing of the past in many jobs. Especially in the Global North, people will be able to determine when to work, when to rest, when to play. During the day when there is sunshine, they want to move around and be more healthy, rather than being in offices. Then, in the evening, people will do the work for the tasks set before them.

The third will be fun, if it happens. It will be an integration of virtual reality into people’s lives. It will allow me to play with my children and still be part of a meeting. My 3-D hologram will be at the meeting and speak for me. The biggest breakthrough, maybe not in 10 years, will be machines that can translate thoughts into words. When I think it, you hear it. When that innovation happens, I will be one of the first customers. Imagine spending more time with your family and still being able to be part of serious meetings or conference panels. You could still speak with wisdom, while still feeding your two-year-old the egg that she doesn’t like for breakfast.

7 – Mobile connectivity

In the 2005-06 canvassing, 56% of experts predicted there would be mobile and internet coverage worldwide and that these services would be low-cost.

In late 2005, heading into early 2006, two-thirds of American adults owned a cellphone. The first iPhone would not be released until the summer of 2007, and the first Android phone wouldn’t come onto the scene until fall 2008. Similarly, only two-thirds of American adults used the internet. It was the first year that broadband internet use was higher than dial-up. Google had just recently dethroned Yahoo as the most popular search engine. The age of social media was in its infancy. Myspace was starting its meteoric rise and Facebook had just dropped the “The” from its name. Amid all this change, we asked tech experts about seven scenarios tied to possible changes that might occur by the year 2020. Some 56% of these experts agreed this scenario would unfold:

By 2020, worldwide network interoperability will be perfected, allowing smooth data flow, authentication and billing; mobile wireless communications will be available to anyone anywhere on the globe at an extremely low cost.

Recent reports by the International Telecommunication Union (ITU) indicate that about 93% of the world has access to 3G mobile coverage or a faster service that allows smartphone users to access the internet in addition to calling and texting. The ITU says 85% of the world population had coverage by a 4G network as of the end of 2020. But access does not guarantee usage. The ITU reported that, as of 2019, 53.6% of the world’s population used the internet and 3.6 billion people did not. Pew Research Center surveys around the globe have documented this same pattern of rapid but not universal internet adoption.

At the same time, it is important to note that expenses associated with mobile data vary drastically from country to country. In some places each gigabyte of data costs less than $10, but people in some countries pay much more. The African continent has some of the largest price differences in a country-by-country comparison. Despite less than 600 miles separating their nearest borders, residents of Zimbabwe pay $75.20 per gigabyte on average, whereas each gigabyte of data costs only 88 cents in the Democratic Republic of the Congo.

These disparities reflect one of the concerns of the experts who responded about this prediction more than 15 years ago. They noted that profit-driven businesses – especially those without competition – would be creating the infrastructure for connectivity and that this might impede consumer access, especially for those who might struggle to afford it.

Additionally, data flows over today’s internet are not always smooth. Governments have shut down the internet as a political act. The human-rights group Access Now reported that there were 213 internet shutdowns in 33 countries in 2019, an increase from previous years, and more seemed likely in COVID-ravaged 2020.

In the 2005-2006 Elon-Pew canvassing of experts, Louis Nauges, the current chief strategy officer at Wizy.io and longtime internet strategist, predicted, “Mobile internet will be dominant. By 2020, most mobile networks will provide 1-gigabit-per-second-minimum speed, anywhere, anytime. Dominant access tools will be mobile, with powerful infrastructure characteristics (memory, processing power, access tools), but zero applications; all applications will come from the Net.”

In an email interview with us on October 28, 2020, Nauges discussed then and now:

Imagining the Internet Center (ITI): What do you remember about the period when we asked this question and you gave your answer?

Nauges: A time of tremendous innovations, lots of excitement about the potentials of technology and the internet. I was more focused on new technologies and less on their impacts on society.

ITI: How confident were you in your prediction? What factors were you weighing as you wrote the answer you did?

Nauges: I have always been curious about technology, since the beginning of my professional activities which have been in Information Technology (IT) since day 1. I spend an average of two hours per day looking for new ideas, new solutions, new providers. This may explain why my predictions were reasonably accurate.

ITI: When it comes to the spread of internet connectivity – especially via mobile devices – has anything surprised you about the way things have unfolded between then and now?

Nauges: I am surprised by the fact that the U.S. has lost its leadership in internet technologies and usage. Asia is now way ahead of the U.S. in mobile usage, 5G deployments, low-cost access to the internet and low-cost devices. I am also surprised by the collapse of many former leading companies like Nokia and BlackBerry, and by the emergence of a duopoly: iOS and Android. iOS and Android did not exist in 2006!

ITI: Where do you see things going in the next decade when it comes to basic access to the internet and how digital technology will be used?

Nauges: Technology is no longer an issue. Individual usage and, even more importantly, enterprise usage are way behind what technology permits. I am now trying to forecast what could be the most important impacts of digital technology on our world, and here are my bets:

1. Digital technology will become the “best friend” of our planet. It will help reduce our energy consumption and CO2 emissions, thanks to a huge “dematerialisation” of the economy. Some examples of this incoming “Digital Frugality”:
  • Universal 3D printing of material, food and medical supplies will reduce the need for transportation of people and materials. Technology-based vertical farming will be everywhere, reducing the need for land for agriculture and animal breeding.
  • Public cloud leaders AWS, Google and Azure, with efficient and carbon-neutral data centers, will replace 90% of legacy private data centers, reducing by more than 80% the use of carbon-based electricity.
  • Travelling for business will be reduced by more than 80%, thanks to very high-quality tools for remote working.
  • Worldwide distribution of work will explode, helping people living in remote places, developing countries and small cities have the same potential for success as people living in Silicon Valley, New York or Paris.

2. Front-line workers (FLW), equipped with innovative digital tools and applications, will become the key actors in the digital transformation of the economy. They represent more than 75% of workers in the world, and their essential roles will finally be recognized. The pyramid of power will be turned upside-down thanks to mobile technologies. Headquarters roles will diminish, the number of white-collar workers will shrink, and white-collar workers will be at the service of FLW, not the reverse.

3. Universal, cheap access to very fast networks will help developing economies grow faster. Fast internet access will be a great equalizer of opportunities worldwide. Where one lives will no longer be a barrier to high-value jobs. This is possible but not certain: political resistance, corruption and denial of access to specific groups of people could block the potential benefits of technology and create huge disruptions in the world.

8 – Responsive organizations

In the 2009-10 canvassing, 72% of experts said they expect that uses of the internet will lead to significantly more efficient and responsive governments, businesses, non-profits and other mainstream institutions.

There has been an irresistible urge among technology builders, scholars and other analysts since the earliest days of the internet to forecast the likely effects that digital connectivity would have on almost every conceivable aspect of life, including on organizations of all kinds.

In 2009-10, we asked tech experts about 10 scenarios and possibilities that could happen by the year 2020. A majority (72%) of experts agreed with the following scenario:

By 2020, innovative forms of online cooperation will result in significantly more efficient and responsive governments, businesses, non-profits, and other mainstream institutions.

Conversely, 26% of experts said the following would be the case:

By 2020, governments, businesses, non-profits and other mainstream institutions will primarily retain familiar 20th century models for conduct of relationships with citizens and consumers online and offline.

While the internet has made it easier for people to voice their questions, concerns and opinions directly to governments, businesses and other institutions, these groups’ ability to respond adequately to the mass of feedback they receive is mixed, sometimes through no fault of their own. For instance, bad actors online can try to sway these institutions by using bots or spamming systems with misleading feedback.

While government responsiveness was up in most countries around the world as of 2019, research suggests that most Americans question the responsiveness of their elected officials. Indeed, they have deep concerns about the bedrock trustworthiness of all kinds of democratic, commercial and other institutions.

In the 2009-10 canvassing, Stephen Downes, senior research officer with the National Research Council in Canada, wrote about a larger kind of transformation that he thought organizations would go through by the year 2020: “This question presupposes that ‘governments, businesses, non-profits, and other mainstream institutions’ will continue to exist, and will either be more responsive or not. In fact, by 2020, the changing nature of these institutions will have become clear, and we will be well into the process of replacing industrial-age institutions with information-age ones. It won’t even make sense to talk of these institutions as ‘efficient’ or ‘responsive’ – these are economists’ terms that presuppose a client-server model of governance. But by 2020, it will be clear that people are governing, managing, educating, and supporting themselves, not waiting for some institution to be ‘effective’ or ‘responsive’ to these needs.”

In an interview with us on November 23, 2020, Downes looked back and forward:

Imagining the Internet Center (ITI): What do you remember about the period when we asked this question and you gave your answer?

Downes: In 2010 we’d been through the financial collapse and the failure of old institutions. It was a couple of years after George Siemens and I launched our first massive online course and just before the MOOC hype hit. The core concept we were talking about was “networked learning,” instead of learning being this mass phenomenon. The networked approach was different because the communication of ideas didn’t go from one individual source at the center to everybody all at once, but rather from community to community through a pattern of diffusion through a network.

That meant you’d need a different kind of organization that supported do-it-yourself kind of things. People need to do things for themselves and not depend on a single central source because the bottleneck would be incredible. That’s what I was thinking at the time – that a new kind of organization was better suited to learning.

ITI: How confident were you in your prediction? What factors were you weighing as you wrote the answer you did, especially when you think about the forces that might resist?

Downes: Every existing organization was going to be a force against it. Nothing changes without resistance. The MOOC was one step in an ongoing process of decentralization and distribution that people could see being applied to all kinds of goods and services. There were other things. Yochai Benkler’s “Wealth of Networks” had been out for a few years. It was before deep learning and artificial intelligence, but people knew that way of thinking and that the technology would soon be coming.

I actually wouldn’t call it “resistance.” It’s more inertia. People weren’t really against networked society; it was just not how things were done. And until people see a compelling reason to do it, and unless it is easy to do, they are probably not going to change.

ITI: Has anything surprised you about the way things have unfolded between then and now?

Downes: I was surprised how slowly it has all happened. For all the talk about ‘the future is quickly arriving and affecting organizational change,’ it actually moves quite slowly. I look at this as a 20- or 30-year process and we’re not there yet. It’s going to be another 10 years before we’re clearly, identifiably there.

That said, as I predicted, we’re beginning to see the first signs now that we’re moving towards a self-serve, DIY culture, even though it’s still not mainstream for organizations. Governments and companies are still centrally organized and managed, despite the spread of distributed sub-systems. Even the big technology companies are centrally managed systems.

But if you look at what’s happening in places like the DevOps community and the trend toward self-service portals, we begin to see movement. Things still have centralized management, but when we get to decentralized decision-making and management, the future I was describing 10 years ago actually takes hold. It’ll be another 10 years before we’re in a distributed governance mode. The change will come first to local organizations – communities, companies, co-ops. National organizations will be the last to change because they’re national organizations and inherently centralized.

ITI: Where do you see things going in the next decade when it comes to the effects of the internet on responsiveness and effectiveness of governments, businesses, and other institutions?

Downes: I’m looking for gradualism. The internet is funny and society is funny. You get these changes that are 20 years in the making and then they arrive in front of everyone overnight and “everything has changed.”

The pandemic is a good example of that. All the stuff that made the pandemic possible was stuff that was 20 years in development. Looking back, you can see the precursors – Ebola, SARS, even HIV. You can see how our more interconnected society made a pandemic more and more likely. And then it happens “overnight,” and people say, “Oh, this pandemic came out of nowhere.”

Distributed technology will roll out gradually, gradually, gradually, gradually, with a few blips. Blockchain was one of those blips. There were some elements of decentralized, distributed, network-based technology in blockchain, and people said, “Aha, that’s the thing that is going to change it all!” But it wasn’t. But you could see how people were ready to pounce on that. One thing will catch on and overturn an entire industry, and all of a sudden we will be in this age of decentralized, distributed technology management – and organization and governance.

When I started at NRC in 2001, they had everybody give an introductory lecture, and mine was called “the budget calculator.” The calculator would basically be for the federal budget of Canada, and everybody could use it to create a budget – at a high level or at a really granular level. It would be connected to all the social and economic indicators, like the inflation rate, the price of oil, the price of wheat, etc. The system would make projections, or show how your budget would have performed, had it been implemented, compared with the previous government’s. Everybody could create their own budget, or form groups to create a collaborative budget, and the system would produce an overall report that is a “consensus budget.”

At a certain point in time when everybody can contribute to this consensus budget, it becomes very hard for a government to deviate from that consensus budget. They’d have to explain the deviations. Over time, the actual government mechanism for setting the budget of the country would become the consensus budget. This process would become the norm for how we make decisions. That will be the time when people say, “Oh, now we’re in this new mode of governance and organization.”

I still haven’t built the budget calculator, but I want to. And one day, we will have been building it for 20 years and then it will have arrived “overnight.”
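The aggregation step at the heart of the budget calculator Downes describes can be sketched in a few lines. The category names, figures and the per-category median rule below are illustrative assumptions; Downes does not specify how the consensus would be computed.

```python
from statistics import median

def consensus_budget(individual_budgets):
    """Aggregate many citizen budgets into a per-category consensus.

    Uses the median allocation per category as a simple,
    outlier-resistant consensus rule (an assumption for this sketch).
    """
    categories = set()
    for budget in individual_budgets:
        categories.update(budget)
    return {cat: median(b.get(cat, 0) for b in individual_budgets)
            for cat in sorted(categories)}

def deviation_report(official, consensus):
    """Show where an official budget departs from the consensus,
    the kind of deviation a government would have to explain."""
    return {cat: official.get(cat, 0) - amount
            for cat, amount in consensus.items()
            if official.get(cat, 0) != amount}

# Three hypothetical citizen budgets (billions of dollars).
budgets = [
    {"health": 100, "defense": 30, "education": 60},
    {"health": 110, "defense": 20, "education": 70},
    {"health": 90,  "defense": 25, "education": 65},
]
consensus = consensus_budget(budgets)
# -> {"defense": 25, "education": 65, "health": 100}
print(deviation_report({"health": 80, "defense": 40, "education": 65},
                       consensus))
# -> {"health": -20, "defense": 15}
```

A real system would also tie each category to the live indicators Downes mentions (inflation, commodity prices) so projections update as conditions change; the sketch covers only the consensus step.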

Acknowledgments

We are extremely thankful for the contributions of the thousands of people who have participated since 2005 in the ongoing series of the Pew Research Center and Imagining the Internet Center canvassings of experts on digital life.

Special thanks to the folks who spoke with us again in 2020 about their earlier expert responses and how these issues and topics look today: Nicholas Carr, Jamais Cascio, Christine Boese, Stowe Boyd, David Clark, Susan Mernit, ’Gbenga Sesan, Louis Nauges and Stephen Downes.

This report is a collaborative effort based on the input and analysis of the following individuals.

Primary researchers
Emily A. Vogels, Research Associate, Pew Research Center
Lee Rainie, Director, Internet and Technology Research at the Pew Research Center
Janna Quitney Anderson, Director, Elon University’s Imagining the Internet Center