Apprehending the Future: Emerging Technologies, from Science Fiction to Campus Reality


©2009 Bryan Alexander. The text of this article is licensed under the Creative Commons Attribution 3.0 License (http://creativecommons.org/licenses/by/3.0/).

EDUCAUSE Review, vol. 44, no. 3 (May/June 2009): 12–29

Bryan Alexander ([email protected]) is Director of Research at the National Institute for Technology and Liberal Education (NITLE). He blogs at <http://b2e.nitle.org/>.


How can those of us in higher education best understand new technologies? The phrases "emerging technologies" and "evolving technologies" remind us that the digital world is largely in flux. New devices, altered applications, and shifting practices keep crossing over the horizon—or quietly appearing in our midst.

Deciding which technologies to support for teaching and learning—and how to support them—depends, first, on our ability to learn about each emerging development. Selecting a platform without knowing what is coming right behind it can be risky. Similarly, it is folly to seize on a technology without seeing the variety of ways in which it can actually be used. If William Gibson was right—"the street finds its own uses for things"—then academic computing needs to be sure of its "street smarts."1

But trying to grapple with what comes next is a deep problem. Doing so is partly a matter of science fiction, which consists, after all, of the stories we tell about the future. Doing so is also an issue of complexity, since each practice, or device, or network, or application comes embedded in a nest of other practices, or devices, or networks, or applications. Emerging technologies are a matter not only of qualitative challenge but also of sheer quantitative overload. Web 2.0, gaming, wireless and mobile devices, virtual worlds, even Web 3.0 in all its unrealized potential—each churns out new developments daily and connects with other domains to ramp up the problem still further.

This article will introduce and explore methods for apprehending the future as it applies to the world of higher education and information technology.2 These are not hypothetical approaches; they are realized, documented, and applied methods. There is no perfect method; nor has any one approach emerged to overshadow the others. This article will thus explore each for its specific affordances, structures, and practical usage. Together, they represent an aggregate, sector-wide movement that tries to help academics understand the future as it hits the present. Put another way, these future-scanners seek to follow the translation of digital ideas from science fiction to campus reality.

Keeping an Eye on What's Next: The Environmental Scan

One popular method for seeing what's coming over the horizon is to repeatedly survey that horizon, looking for the leading edges of new projects and trends. This is usually referred to as an environmental scan and is based on using quantity to defeat the problem of complexity. Such projects consult multiple sources, comparing details across the spectrum and trying to find complementary perspectives. The projects can be conducted on various timelines, from a single, once-off attempt to continuous monitoring.3

Examples of such approaches are plentiful. Several journals and many blogs offer continuous surveys of emerging technologies: MIT's Technology Review (http://www.technologyreview.com/); Ray Kurzweil's KurzweilAI.net site (http://www.kurzweilai.net/news/frame.html?main=news.html); and Jamais Cascio's "Open the Future" (http://www.openthefuture.com/). Of course, journals and blogs themselves are fodder for environmental scanning, since they offer bits of content oriented toward the present. A set of RSS feeds is one of the best tools that an environmental scanner can possess.
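A small scanning group can partly automate the feed-reading step. The following is a minimal sketch, assuming the Python feedparser library; the feed URLs and keywords are illustrative placeholders rather than the sources' actual feed addresses.

```python
# A minimal environmental-scanning pass: poll a few RSS feeds and flag
# entries that mention keywords the scanning group is tracking.
# Assumes the third-party feedparser library (pip install feedparser);
# feed URLs and keywords below are illustrative, not real endorsements.
import feedparser

FEEDS = [
    "http://www.technologyreview.com/rss/rss.aspx",  # hypothetical feed URL
    "http://www.openthefuture.com/atom.xml",         # hypothetical feed URL
]
KEYWORDS = ["mobile", "virtual world", "open source", "cloud"]

def scan(feeds=FEEDS, keywords=KEYWORDS):
    """Return (feed title, entry title, link) for entries matching a keyword."""
    hits = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            if any(k in text for k in keywords):
                hits.append((parsed.feed.get("title", url),
                             entry.get("title", ""),
                             entry.get("link", "")))
    return hits

if __name__ == "__main__":
    for feed_title, entry_title, link in scan():
        print(f"[{feed_title}] {entry_title} -> {link}")
```

A shared log of these hits, reviewed at regular meetings, gives the group the pooled observations described below.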

In higher education, the Online Computer Library Center (OCLC) has conducted environmental scans every few years, including in 2000 and 2003.4 The Association of Research Libraries (ARL) published a more recent study, surveying a series of threats, transformations, and opportunities for that sector: scholarly communication, public policy, and the library's role on campus.5 The EDUCAUSE Evolving Technologies Committee issues reports and an annual article on these technologies; in 2008 it addressed virtual worlds, business process management, location-aware computing, regulatory compliance, and green enterprise computing.6

Another form of environmental scanning for emerging technologies in higher education could consist of members of an academic computing group monitoring a source or a small group of sources and then pooling their observations through regular meetings and/or a blog. A cross-population campus group, perhaps organized by a computing committee or the library, could do something similar, taking advantage of different professional perspectives and backgrounds: faculty, students, librarians, instructional technologists, administrators. With each round of observation and sharing, some themes will begin to emerge. Indeed, such scanning projects can generate their own vocabulary of key terms, an ontology of their futures.

The environmental scan method offers several advantages, starting with the fact that drawing on multiple sources and perspectives can reduce the chances of bias or sample error. The wider the scan, the better the chance of catching the first trace of items that, although small at the moment, could expand into prominence. A further advantage is pedagogical: trying to keep track of a diverse set of domains requires a wide range of intellectual competencies. As new technologies emerge, more learning is required in subfields or entire disciplines, such as nanotechnology or digital copyright policy.

Disadvantages of this method start from its strengths: environmental scanning requires a great deal of sifting, searching, and analyzing. Finding the proverbial needle in the haystack isn't useful if its significance can't be recognized. Furthermore, the large amount of work necessary for both scanning and analyzing can be daunting, especially for smaller schools or enterprises.

Working the Experts: The Delphi Method

A different approach to identifying emerging technologies focuses on experts and their interpretation of events. The Delphi method, named after the Greek oracle (the ultimate expert), is process-driven. Experts in a field are assembled, either physically or virtually, and consulted on emergent developments in that domain. The consultation begins as a query set, with the experts asked to provide feedback on a group of questions. For emerging situations, these questions concern trends or possible outcomes. The organizer collates the results and then uses them to generate a second query round, which is submitted and processed. The Delphi process can be implemented in a single face-to-face sitting or over an extended period of time. The Delphi organizer might structure a series of discussions, in which the group members compare notes, assess others' observations, and gradually surface a set of topics. That set is then narrowed down through a consensus process.7
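The round-by-round structure of a Delphi exercise can be expressed compactly. The sketch below is a toy illustration rather than the procedure used by any project named here: hypothetical experts rate candidate topics on an impact scale, see the group median after each round, move partway toward it, and stop once the ratings converge.

```python
# A toy illustration of Delphi-style convergence (not the method of any
# project cited in this article). Experts rate topics each round, see the
# group median, and revise toward it; rounds stop when ratings converge.
from statistics import median

def delphi_rounds(initial_ratings, max_rounds=5, tolerance=1.0, pull=0.5):
    """initial_ratings: {topic: [expert ratings]}. Returns {topic: consensus}."""
    ratings = {topic: list(scores) for topic, scores in initial_ratings.items()}
    for _ in range(max_rounds):
        converged = True
        for topic, scores in ratings.items():
            group_view = median(scores)
            # Feedback step: each expert moves partway toward the group median.
            scores[:] = [s + pull * (group_view - s) for s in scores]
            if max(scores) - min(scores) > tolerance:
                converged = False
        if converged:
            break
    return {topic: round(median(scores), 1) for topic, scores in ratings.items()}

ratings = {
    "mobiles": [9, 8, 7, 9],          # hypothetical expert impact scores
    "cloud computing": [8, 6, 9, 7],
    "smart objects": [5, 7, 4, 6],
}
print(delphi_rounds(ratings))
```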

One of the best-known Delphi projects in higher education is the Horizon Project (http://www.nmc.org/horizon). Launched by the New Media Consortium (NMC) in 2002 and now conducted in collaboration with the EDUCAUSE Learning Initiative (ELI), this project draws on a large body of experts across academia. Over several months, this group identifies trends, ranks their impact, compares estimates, and progressively builds up a profile of emerging technologies. NMC staff then write the annual Horizon Report, which is published in print and made freely available on the web. In half a year, the process begins once again. The January 2009 report identified the following technologies:

  • Mobiles (time-to-adoption: one year or less)
  • Cloud computing (time-to-adoption: one year or less)
  • Geo-Everything (time-to-adoption: two to three years)
  • The Personal Web (time-to-adoption: two to three years)
  • Semantic-Aware Applications (time-to-adoption: four to five years)
  • Smart Objects (time-to-adoption: four to five years)8

Another application of the Delphi method in higher education is "The Future of the Internet III," produced by another collaboration, this one between the Pew Internet & American Life Project and Elon University. The project leaders developed an instrument and then surveyed a series of experts and thought leaders. Questions concerned possible future scenarios, as well as extrapolations from the present. Quantitative and qualitative responses were ranked and then combined into a profile: "In this web-based survey, 578 leading Internet activists, builders, and commentators and 618 additional stakeholders (1,196 respondents) were asked to assess thought-provoking proposed scenarios for the year 2020. The point of this non-random survey was to add focused input to the ongoing conversation about the future of the Internet; respondents' written elaborations—the qualitative results—were the most valuable data gathered by the study."9 Though similar to the topics derived by NMC/ELI, the Pew/Elon findings differ in several key ways. The time period is longer (looking ahead to 2020), and the frame of reference is broader:

  • The mobile device will be the primary connection tool to the Internet for most people in the world in 2020.
  • The transparency of people and organizations will increase, but that will not necessarily yield more personal integrity, social tolerance, or forgiveness.
  • Talk and touch user-interfaces with the Internet will be more prevalent and accepted by 2020.
  • Those working to enforce intellectual property law and copyright protection will remain in a continuing "arms race," with the crackers who will find ways to copy and share content without payment.
  • The divisions between "personal" time and work time and between physical and virtual reality will be further erased for everyone who's connected, and the results will be mixed in terms of social relations.
  • Next-generation engineering of the network to improve the current Internet architecture is more likely than an effort to rebuild the architecture from scratch.10

A third project using the Delphi method is the EDUCAUSE Top Teaching and Learning Challenges 2009 (http://www.educause.edu/eli/Challenges/127397). In phase one, the project leaders queried experts and practitioners in numerous focus groups, gradually building up an aggregate model of key issues as identified by the members of this community. Addressing topics that are both similar to and different from those identified by the Pew/Elon and NMC/ELI projects, the top-five issues are cast in a different syntactical form, namely that of professional challenges:

  1. Creating learning environments that promote active learning, critical thinking, collaborative learning, and knowledge creation
  2. Developing 21st-century literacies (information, digital, and visual) among students and faculty
  3. Reaching and engaging today's learner
  4. Encouraging faculty adoption and innovation in teaching and learning with IT
  5. Advancing innovation in teaching and learning with technology in an era of budget cuts

Phase two of this project involves the cultivation of a network of community solutions, including the use of wikis for sharing content (http://www.educause.edu/wiki/TLChallenges09?redir).11

The Delphi method offers several advantages. Drawing on experts lets the process leverage professional knowledge. The iterative approach generates a wide range of concepts. And since the method has been practiced extensively and over time, best practices are readily available.12 The drawbacks are subtle and largely social. One problem is that Delphi outcomes can be driven by a desire for consensus rather than actual agreement, meaning that divergent ideas can get quashed.13 In addition, the process can be resource-intensive, especially in terms of time.

Gaming the Futures: Prediction Markets

A very different approach to identifying emerging trends involves gaming, the wisdom of crowds, and a metaphorical use of economic models. Prediction markets are games structured like commodity futures markets but using (usually) pretend currencies and trading on ideas or events rather than goods.

The first such market was (and is) the Iowa Electronic Markets. Starting in the 1980s, a group of political scientists set up a mock market in U.S. presidential candidates (http://www.biz.uiowa.edu/iem/markets/Pres08.html). Players were issued fake money and instructed to "buy" or "sell" shares in politicians based on the politicians' current performance. A well-delivered speech, for example, could lead one trader to buy shares in the speaker; a press conference flub, on the other hand, might drive sell-offs. Over time and many refinements, the Iowa markets have emerged as unusually accurate predictors.
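The article does not specify the pricing mechanism behind any of these markets. The sketch below assumes one common choice for play-money markets, Hanson's logarithmic market scoring rule (LMSR), in which an automated market maker quotes prices that behave like probabilities and every purchase nudges the price.

```python
# A minimal automated market maker using the logarithmic market scoring
# rule (LMSR). A generic illustration only; the Iowa and NITLE markets'
# actual mechanisms are not described in this article.
import math

class LMSRMarket:
    def __init__(self, outcomes, b=100.0):
        self.b = b                            # liquidity parameter
        self.q = {o: 0.0 for o in outcomes}   # shares sold per outcome

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q.values()))

    def price(self, outcome):
        """Current price of one share; prices across outcomes sum to 1."""
        total = sum(math.exp(qi / self.b) for qi in self.q.values())
        return math.exp(self.q[outcome] / self.b) / total

    def buy(self, outcome, shares):
        """Return the play-money cost of buying `shares` of `outcome`."""
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

market = LMSRMarket(["yes", "no"])
print(round(market.price("yes"), 2))                   # 0.50 before any trading
cost = market.buy("yes", 40)                           # a trader backs "yes"
print(round(cost, 2), round(market.price("yes"), 2))   # price rises above 0.50
```

The rising price after a purchase is what lets the market's quote be read as the crowd's current probability estimate for the proposition.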

Markets have since appeared for a wide variety of topics, from politics to sports, from the Oscars to new technologies. One example is Pop!Tech Markets (http://markets.poptech.org/). Companies—including Google14—and governments have also run prediction markets. Prediction markets have even begun to appear in pop culture and fiction. In Ken MacLeod's science fiction novel Learning the World (2005), for example, a group of human colonists approaching a new world continually run a futures market based on aspects of that planet's biosphere and physical composition, trading in response to each new piece of intelligence as it arrives.

A bit closer to Earth is the higher education prediction market that I helped to organize, as director of research for the National Institute for Technology and Liberal Education (NITLE). Started as a pilot project in early 2008, the NITLE market drew on our dual interests in gaming and emerging technologies. A group of advisors helped us understand how such markets were used and guided the shaping of the next phase. Together with our advisors and a small number of invited traders, NITLE tested the pilot market site over the spring and summer of 2008. In September 2008, we launched a free beta version of the NITLE Prediction Markets (http://markets.nitle.org/), aimed at a broader audience. Currently, around two hundred traders explore several dozen propositions. These propositions, which appear about every two weeks, cover a wide range of topics, from smartphone market dominance to open-source course management systems.15

Prediction markets possess several advantages. First, their distributed nature makes it difficult for single traders or organizers to impose their will without breaking the rules. Second, the continuous nature of prediction markets opens them up to continual revision. So, unlike polls, the markets are finely grained, changing frequently in response to conditions. Third, the game nature of this venue can build player excitement, again in contrast to polling, in which repetition can lead to audience fatigue. Altogether, prediction markets have the ability to boost collective intelligence.16

Disadvantages have also been extensively documented. The larger the pool of traders, the better the results are likely to be; conversely, smaller groups are increasingly likely to fall victim to small-group errors or other biases. In addition, market shares—or "propositions"—need to be carefully phrased in order to let traders play effectively; unclear statements, like badly formed test questions, can increase the amount of noise in a market's signals.

Role-Playing Futures: Scenarios

Games enter into another futurological form: the scenario. Unlike polls, surveys, or markets, scenarios are social processes based on role-playing. Individuals or teams represent actors in a situation. Scenario organizers portray events through various media and then facilitate as players react in accordance with the actors they are simulating. As defined pithily in the Forecasting Dictionary, a scenario is "a story about what happened in the future" (http://www.forecastingprinciples.com/forecastingdictionary.html).

Like theater or performance art, scenarios are open to many styles of organization and implementation.17 Background information can be conveyed by oral presentation or by multimedia documents. Participants may represent themselves, or they may act as exemplars of their professional role, or they may play some other type of person entirely. A scenario could be run in an hour or over weeks, and it could be held around a table or across an urban landscape. Scenarios might be run only once, or they might be repeated and refined over successive iterations. On the one hand, three people can sit around a table and play by speaking aloud, using laptops in front of them for information. On the other hand, the Great Southern California ShakeOut (http://www.shakeout.org/) used thousands of people to simulate a Los Angeles–area crisis in 2008. Scenarios can, of course, be run entirely online or even in a virtual world; for example, an Illinois Wesleyan University psychology professor is experimenting with health-based scenarios in Second Life.18

Scenarios can even be turned around so that the social process is about creating scenarios rather than playing them. A good example of this is the exercise that the consulting firm Adaptive Path organized for Mozilla Labs in the fall of 2008:

[The leader] called on a whole lot of smart people and led them (and a bunch more from both Adaptive Path and Mozilla) through a two-day workshop to forecast one possible future for browsers and the Web. Through a series of group exercises, we identified three major trends that we thought would have the biggest impact on the web:

  • Augmented Reality: The gap is closing between the Web and the world. Services that know where you are and adapt accordingly will become commonplace. The web becomes fully integrated into every physical environment.
  • Data Abundance: There's more data available to us all the time—both the data we produce intentionally and the data we throw off as a by-product of other activities. The web will play a key role in how people access, manage, and make sense of all that data.
  • Virtual Identity: People are increasingly expected to have a digital presence as well as a physical one. We inhabit spaces online, but we also create them through our personal expression and participation in the digital realm.19

As with prediction markets, scenarios offer the advantages of play: participant engagement and an opening for creativity. Role-play allows participants to think through processes differently than they would by acting as themselves, since assumptions are now made explicit through background information or through an understanding of other actors' positions. The flexibility that scenarios offer organizers lets them map easily onto the resource constraints of different situations—a flexibility especially appealing in today's economic crisis.

Daniel Rasmus describes running several scenarios with Microsoft, exploring the future of learning and technology. One of the scenarios was "Proud Tower":

In Proud Tower, for instance, a world where corporate interests dominate and corporations subsume much of the role now played by government, education is closely aligned with corporate objectives. In this scenario, education must ensure that workers can contribute appropriate levels of value to corporations. Curriculum is targeted toward the requirements of local organizations as those are the most likely employers for graduates. Although travel is not restricted, economic forces motivate people to remain associated with their local employment environment. Colleges and universities are seen not as separate institutions but as part of a continuum of learning and preparation that extends through employment. Students who excel and demonstrate the motivation for higher education receive that education with the expectation that they will later return the corporation's investment. Early identification of aptitude is seen as a competitive advantage as measures can be taken early in a child's education to motivate him or her toward local corporate loyalty, avoiding the costs of losing talent to external recruiting.20

Although few large-scale or public examples of scenarios exist in the field of higher education technology, colleges and universities could implement scenario planning in a fairly wide variety of ways. For instance, in a cross-campus group representing major professions (e.g., faculty, librarian, technologist), players can be assigned roles, possibly the ones they have in everyday life. Scenario organizers then reveal the situation: Google has launched a course management service, iLMS. Players analyze the situation, confer, and then describe their likely reactions. Scenario organizers process the reactions and present the likely results, asking for another round of reactions (the group decides to reject Google, but a student-led petition drive to adopt iLMS wins broad support; how to respond?). In the process, participants learn more about each other, their institutional functioning, and perhaps some content—here, Web 2.0—and all at a fairly low cost.
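As a rough illustration of the bookkeeping behind such an exercise, the sketch below models a scenario as a sequence of organizer-revealed "injects" and records each role's reaction per round; the roles, injects, and responses are hypothetical.

```python
# A lightweight structure for recording a tabletop scenario exercise:
# organizers reveal injects round by round and log each role's reaction.
# Roles, injects, and responses here are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Round:
    inject: str                                    # situation revealed by organizers
    responses: dict = field(default_factory=dict)  # role -> recorded reaction

@dataclass
class Scenario:
    title: str
    roles: list
    rounds: list = field(default_factory=list)

    def run_round(self, inject, get_response):
        rnd = Round(inject)
        for role in self.roles:
            rnd.responses[role] = get_response(role, inject)
        self.rounds.append(rnd)
        return rnd

scenario = Scenario("Google launches iLMS", ["faculty", "librarian", "technologist"])
scenario.run_round(
    "Google announces the iLMS course management service.",
    lambda role, inject: f"{role}: initial reaction recorded by facilitator",
)
scenario.run_round(
    "A student-led petition drive to adopt iLMS wins broad support.",
    lambda role, inject: f"{role}: revised position recorded by facilitator",
)
for rnd in scenario.rounds:
    print(rnd.inject, rnd.responses, sep="\n  ")
```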

Scenario planning is flexible enough to be included in other prediction methods. In the Delphi method, for example, the Pew/Elon report asked respondents to imagine scenarios of possible technological systems. In prediction markets, the possible outcomes for each proposition could be considered mini-scenarios.

Building Networks for Worlds Unborn: Crowdsourcing

Broadening our perspectives by following or working with others in disparate locations can greatly benefit our ability to apprehend the future. Crowdsourcing has gradually become popular as a strategy for groups to solve problems. This method involves packaging a problem in such a way as to invite nonexpert contributions and then distributing that request for help to the world at large. Examples from the sciences are well-known (e.g., SETI@home). The twist is that we can now crowdsource the future.

For example, in December 2008, I posted a question on Twitter about library issues in the next twelve months: "What are the most important emerging technologies, or issues, for American libraries, going into 2009? Summoning Twittersource!" I received a steady stream of responses (see Figure 1).21 Unlike Delphi methods, this was not necessarily a poll of experts. Unlike prediction markets, this was a one-time query, with responses ceasing after about two days. Unlike scenarios, no specific framework was laid out—neither background nor sequence of events. But note the benefits: a rapid stream of feedback; and the fact that this event is now part of the publicly accessible Twitterstream and blogosphere, to be drawn on by others.

Figure 1. A Sample of Twitter Responses

Reprinted with permission.

In another example, the Institute for the Future (IFTF) used crowdsourcing to develop scenarios for the year 2019:

At our Fall 2008 Technology Horizons Conference, we crowd-sourced five questions (via Twitter, blogs, email, and SMS) about Blended Realities in 2019.

We received over 300 micro-forecasts in less than 24 hours. Here are the massively collaborative results!

The future of SOCIETY: It is 2019. How do you share your feelings?

The future of HEALTH: It's 2019. Describe your experience with health care.

The future of SECURITY: In 2019, who defines your identities, and who governs them?

The future of ENERGY: In 2019, what are your energy sources for your mobile, home and business life? How are they advantageous? Disadvantageous?

The future of FOOD: In 2019, how do you decide what is for dinner?22

At the same time, IFTF launched Superstruct (http://www.superstructgame.org/), a massively multiplayer online forecasting game. In the six-week game, the Ten-Year Forecast team at IFTF published a series of prompts for issues that could be important ten years from now and then invited users to contribute content that would turn the prompts into fleshed-out scenarios.

Using crowdsourcing to build scenarios regarding a political science problem, Cheryl Rofer ran two "blog tanks" in the summer of 2008 on the Whirled View blog. Responses came via comments to the original blog post, e-mail messages, and posts on other blogs.23 Also in academia, a U.K. project (Beyond Current Horizons) recently crowdsourced the overall topic of technology-enabled teaching and learning in 2025. The Million Futures website (http://www.millionfutures.org.uk/) aggregated responses from anywhere in the world.

Like each of the other strategies, crowdsourcing offers advantages and disadvantages. Crowdsourcing can draw on diverse perspectives, avoiding standpoint bias and groupthink. By throwing a question to a very large public, this method is capable of scaling up numbers without expending a great deal of resources. Drawing on the passions of individuals can tap deep sources of creativity and energy. However, that very strength turns into a weakness, since rich responses require corresponding investments of processing and sifting. An opposite problem occurs when a crowd doesn't form in sufficient numbers.
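That sifting can be helped along with simple tooling. The sketch below is a minimal illustration over made-up response text: it tallies recurring terms across free-text contributions so that organizers can spot candidate themes before reading every response in depth.

```python
# A minimal sifting pass over crowdsourced free-text responses: count
# recurring terms to surface candidate themes. Responses are hypothetical.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for", "will", "be", "is", "are"}

def candidate_themes(responses, top_n=5):
    counts = Counter()
    for text in responses:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(w for w in words if w not in STOPWORDS and len(w) > 3)
    return counts.most_common(top_n)

responses = [
    "Mobile access to library catalogs will matter most in 2009.",
    "Budget cuts will push libraries toward open source systems.",
    "Mobile reference and text-a-librarian services are emerging.",
]
print(candidate_themes(responses))
```

A keyword tally is no substitute for reading the responses, but it gives a first map of where a large crowd's attention clusters.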

The Black Swan (and Other Challenges)

Futurological methods are still, at best, partial works in progress. No method has yet succeeded in accurately predicting the future.

One challenge to any futures method is the sheer complexity of the future. The present-day world is teeming with multiple and ramifying details. These are rendered into a higher order of complexity when advanced in the stream of time. The methods discussed above try to solve this problem by abstracting the details into simpler shapes or by isolating the details out from larger backgrounds. Both of these strategies run obvious risks (inaccuracy and accident, respectively).

Another challenge to futurism is the "unknown unknown," a recently popularized phrase so resonant as to spawn a slang contraction: "unk-unk."24 Ultimately, it is impossible to imagine a development that we don't know exists or don't know is about to emerge. We can imagine possibilities, from jetpacks to fast interstellar travel to a generally accepted solution to copyright problems. But new categories of technologies, new types we aren't even thinking of, are sometimes precisely the ones that erupt most noisily. Consider, for example, Twitter (http://www.twitter.com). "Microblogging" was not considered to be a likely development when the Odeo team first launched Twitter. In retrospect, of course, such developments seem clear, even logical. These categories appear as fusions of—or in now-obvious gaps between—other categories. Obviously, the task of getting at these new categories would be far easier if our view were historical rather than prospective.

Perhaps the gravest challenge to any approach for apprehending the future is what Nassim Nicholas Taleb has memorably dubbed "The Black Swan." Taleb uses the phrase to refer to unlikely events, either unperceived in the present or determined to be statistically improbable—until they occur and have enormous effects. Taleb finds black swans to be quite rare in the world but game-changers when they occur: "Histories and societies do not crawl. They make jumps. They go from fracture to fracture, with a few vibrations in between. Yet we (and historians) like to believe in the predictable, small incremental progression." Put extremely, "events, it turns out, are almost always outlandish."25

Well-known examples of black swans include the rise of most religions and the September 11 attacks in the United States. A more recent example appeared on Wall Street, when an influential mathematical formula, known as a Gaussian copula model, failed under unusual circumstances, after having performed well for the overwhelming majority of cases: "In the CDO [collateralized debt obligations] market, people used the Gaussian copula model to convince themselves they didn't have any risk at all, when in fact they just didn't have any risk 99 percent of the time. The other 1 percent of the time they blew up. Those explosions may have been rare, but they could destroy all previous gains, and then some."26 Examples of future predictions running into black swans can also be found in science fiction. In Isaac Asimov's classic Foundation trilogy (1951–53), a future historian creates a nearly perfect science of apprehending the future—"psychohistory"—which yields mathematically precise determinations of the future, nearly every time. But eventually the 1 percent chance occurs, the black swan emerges, and a statistically unlikely character breaks through psychohistory's best predictions.

If methods used to apprehend the future run into such debilitating problems as complexity, unknown unknowns, and black swans, why should we attempt the enterprise at all? If the outcomes are so poorly applicable to the actual world, to real futures as they unfold into the present, how can those outcomes be worth the expenditure of our resources, especially in such economically dangerous times?

One answer argues that these methods do, sometimes, get things right. Scenarios, for example, can show how a group reacts to a crisis. Prediction markets hit the truth often enough to spur a small industry of such markets. The Delphi method draws on the real knowledge of domain experts, who can extrapolate within their field based on deep knowledge of actors and trends. Conversely, environmental scanning can alert us to developments occurring outside of our main professional focus. Crowdsourcing is a relative newcomer to future studies, but it offers a benefit along the same lines as environmental scanning, generating input from multiple perspectives.

Another reply is that over years of practice, we have found ways to improve the performance of these methods. We know, for instance, that people reflecting on their own actions tend to have a bias toward optimism, and so we can correct for that. Groups in discussion also tend toward consensus, broadly speaking, so we need to structure their processes to allow divergent views to surface. Surveying the literature, J. Scott Armstrong, a professor at the Wharton School, University of Pennsylvania, deduced a series of nine top-level best practices that can help improve accuracy across forecasting methods:

  1. Match the forecasting method to the situation
  2. Use domain knowledge
  3. Structure the problem
  4. Model experts' forecasts
  5. Represent the problem realistically
  6. Use causal models when you have good information
  7. Use simple quantitative methods
  8. Be conservative when uncertain
  9. Combine forecasts27

These generalizations apply readily to technology and higher education. We have been working on metrics for benchmarking and assessment for years, providing ample fodder for "simple quantitative methods."
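Armstrong's last principle, combining forecasts, is also straightforward to operationalize. The sketch below is a generic illustration with made-up numbers: it averages independent point forecasts of the same quantity, the simplest way a combination damps the error of any single method.

```python
# "Combine forecasts": average independent estimates of the same quantity.
# The forecast values below are made-up numbers for illustration only.
def combine(forecasts, weights=None):
    """Weighted average of point forecasts; equal weights by default."""
    if weights is None:
        weights = [1.0] * len(forecasts)
    total = sum(weights)
    return sum(f * w for f, w in zip(forecasts, weights)) / total

# e.g., three methods' estimates of the share of courses using a given platform next year
print(combine([0.62, 0.70, 0.55]))             # simple average of all three
print(combine([0.62, 0.70, 0.55], [2, 1, 1]))  # weight the first method's estimate more
```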

Perhaps the best answer to the question of whether we should attempt to apprehend the future is that doing so prepares us for events when they occur. The intellectual exercise of working through options and possibilities stretches our personal and institutional horizons, building intelligence and flexibility. Just as learning how to use one tool prepares us to better grasp the next, similar tool, thinking through different hypothetical scenarios and trends helps us know how to react to and take advantage of the ones that actually cross over the horizon.

Conclusion

Technology development has yet to slow down, and the use of electronic devices continues to grow nearly beyond our ability to keep up. Consider Figure 2, one small liberal arts college's model of the evolving online ecosystem. The complexity of what is described only increases over the years. New pedagogical and scholarly forms appear. Already established products and platforms morph. Practices change in spite of, or because of, financial problems.

Figure 2. Evolution of the Online Ecosystem

Source: Jay Collier, "The Online Ecosystem (Redux)," Bates Online Media, November 17, 2008, <http://batesmedia.net/2008/11/17/the-online-ecosystem-redux/>.

That complexity demands non-simple responses. Each of the techniques sketched above offers one way of helping groups to think through these emergent forces and to apprehend the future. Crowdsourcing, scenarios, prediction markets, the Delphi method, and environmental scanning are complementary strategies. Using several of these methods can help us learn about the future in more sophisticated, proactive ways. If the methods appear strange, resembling science fiction, perhaps that is a sign of their aptness for the future, since the future often appears strange just before it becomes ordinary—or, in our case, just before it becomes a campus reality. As higher education budgets clamp down and the future hurtles toward us, we need these methods and techniques as allies that can help us to survive . . . and to learn.

Notes
  1. William Gibson, Burning Chrome (New York: Arbor House, 1986).
  2. This article does not cover all available methods; other approaches range from simple extrapolation to analogies. For a quick survey, consider either of these dynamic charts: <http://www.forecastingprinciples.com/methodologytree.html>; <http://www.forecastingprinciples.com/selection_tree.html>. For a good example of practical extrapolation using digital technologies, see Google's Flu Trends project: <http://www.google.org/about/flutrends/how.html>.
  3. J. L. Morrison, "Environmental Scanning," in M. A. Whitely, J. D. Porter, and R. H. Fenske, eds., A Primer for New Institutional Researchers (Tallahassee, Fla.: Association for Institutional Research, 1992), pp. 86–99.
  4. Online Computer Library Center, The 2003 OCLC Environmental Scan: Pattern Recognition (Dublin, Ohio: OCLC, 2004), <http://www.oclc.org/reports/escan/toc.htm>.
  5. Association of Research Libraries, Transformational Times: An Environmental Scan Prepared for the ARL Strategic Plan Review Task Force (Washington, D.C.: ARL, 2009), <http://www.arl.org/bm~doc/transformational-times.pdf>.
  6. See Beth Forrest Warner and the 2008 EDUCAUSE Evolving Technologies Committee, "Glimpses of Our IT Future: What's Green, Mobile, and Regulated All Over?" EDUCAUSE Review, vol. 43, no. 6 (November/December 2008), <http://www.educause.edu/EDUCAUSE+Review/EDUCAUSEReviewMagazineVolume43/GlimpsesofOurITFutureWhatsGree/163266>. Individual reports for the years 2000–2008 can be found on the committee website: <http://www.educause.edu/EvolvingTechnologiesReports/869>.
  7. Harold A. Linstone and Murray Turoff, eds., The Delphi Method: Techniques and Applications (Reading, Mass.: Addison-Wesley, 1975). (A scanned version of the book is available online: <http://www.is.njit.edu/pubs/delphibook/index.html>).
  8. New Media Consortium and EDUCAUSE Learning Initiative, The Horizon Report: 2009 Edition (Austin, Tex.: NMC, 2009), <http://www.nmc.org/pdf/2009-Horizon-Report.pdf>.
  9. "The 2008 Survey," <http://www.elon.edu/e-web/predictions/expertsurveys/2008survey/default.xhtml>. See also <http://pewresearch.org/pubs/1053/future-of-the-internet-iii-how-the-experts-see-it>.
  10. Janna Quitney Anderson and Lee Rainie, "The Future of the Internet III," December 14, 2008, "Summary of Findings," <http://www.elon.edu/docs/e-web/predictions/2008_survey.pdf>.
  11. For more on the EDUCAUSE Top Teaching and Learning Challenges 2009, see the article "Charting the Course and Tapping the Community," published in this issue (May/June 2009) of EDUCAUSE Review. Credit is owed Joe Murphy (Kenyon College) for describing the EDUCAUSE project in terms of the Delphi method: <http://b2e.nitle.org/index.php/2008/12/05/crowdsourcing_ideas_about_libraries_in_2#c360784>.
  12. Gene Rowe and George Wright, "Expert Opinions in Forecasting: The Role of the Delphi Technique," in J. Scott Armstrong, ed., Principles of Forecasting: A Handbook for Researchers and Practitioners (Boston: Kluwer Academic, 2001), pp. 125–44.
  13. Kesten C. Green, J. Scott Armstrong, and Andreas Graefe, "Methods to Elicit Forecasts from Groups: Delphi and Prediction Markets Compared," MPRA Paper No. 4999 (November 2007), <http://mpra.ub.uni-muenchen.de/4999/1/MPRA_paper_4999.pdf>, originally published in Foresight: The International Journal of Applied Forecasting, issue 8 (Fall 2007).
  14. Bo Cowgill, Justin Wolfers, and Eric Zitzewitz, "Using Prediction Markets to Track Information Flows: Evidence from Google," January 2009, <http://bocowgill.com/GooglePredictionMarketPaper.pdf>.
  15. For more on the NITLE Prediction Markets, see Bryan Alexander, "A Web Game for Predicting Some Futures: Exploring the Wisdom of Crowds," in the online version of this issue (May/June 2009) of EDUCAUSE Review: <http://www.educause.edu/library/ERM0931>.
  16. James Surowiecki, The Wisdom of Crowds: Why the Many Are Smarter than the Few and How Collective Wisdom Shapes Business, Economies, Societies, and Nations (New York: Doubleday, 2004).
  17. I am especially fond of Jamais Cascio's observation that managing role-playing games, like Dungeons & Dragons, can serve as excellent training for running scenarios: <http://www.openthefuture.com/2008/04/roll_3_vs_the_future.html>.
  18. Rachel Hatch, "Virtual Reality May Unlock Keys to Fighting Spread of HIV, Other Sexually Transmitted Diseases," Illinois Wesleyan University News and Events, April 8, 2008, <http://www2.iwu.edu/CurrentNews/newsreleases08/fac_Smoak_grant_408.shtml>.
  19. Jesse James Garrett, "Aurora: Forecasting the Future," Adaptive Path Blog, August 6, 2008, <http://www.adaptivepath.com/blog/2008/08/06/aurora-forecasting-the-future/>.  
  20. Daniel W. Rasmus, "Scenario Planning and the Future of Education," Innovate: Journal of Online Education, vol. 4, issue 5 (June/July 2008), <http://www.innovateonline.info/index.php?view=article&id=555>.
  21. See Bryan Alexander, "Crowdsourcing Ideas about Libraries in 2009," NITLE: Liberal Education Today, December 5, 2008, <http://b2e.nitle.org/index.php/2008/12/05/crowdsourcing_ideas_about_libraries_in_2>.
  22. Jane McGonigal, "Results," The Future Now Blog, November 25, 2008, <http://www.iftf.org/node/2400>.
  23. For a summary, see Bryan Alexander, "Academic Discussion by Blog," NITLE: Liberal Education Today, July 23, 2008, <http://b2e.nitle.org/index.php/2008/07/23/academic_discussion_by_blog_anatomy_of_a>.
  24. Made famous by former U.S. Secretary of Defense Donald Rumsfeld in 2002, the phrase "unknown unknowns" was originally used in the field of engineering (see, for example, the definition in the Double-Tongued Dictionary: <http://www.doubletongued.org/index.php/dictionary/unk_unk/>).
  25. Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007), pp. 11, 149.
  26. Felix Salmon, "Recipe for Disaster: The Formula That Killed Wall Street," Wired, February 23, 2009, <http://www.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all>.
  27. J. Scott Armstrong, "The Forecasting Canon: Nine Generalizations to Improve Forecast Accuracy," Foresight: The International Journal of Applied Forecasting, issue 1 (June 2005), p. 29, <http://www.forecastingprinciples.com/paperpdf/The_Forecasting_Canon.pdf>.