Introduction

If it is true that “There is nothing so practical as a good theory” (Lewin, 1952, p. 169), then open and distance education (ODE) has been awash in practicality since its inception more than a century ago. We are at once told that key research questions were already answered decades ago and yet see for ourselves the proliferation of new theories with the development of each new delivery technology. Is this because of an ahistorical perspective, as Selwyn (2012, p. 216) suggests? Are we locked in an endless cycle of contextualization and generalization, as Jung (2020) argues? Or is it different this time?

The emergence of newer theories for digital learning spaces occurs because of a general dissatisfaction with the theorizing of earlier generations of ODE and not as a result of ignorance of it. This dissatisfaction is manifest in several dimensions, each of which will be explored through the course of this chapter.

In earlier generations, for example, distance education (DE) was presented as addressing a transmission challenge, while in the digital era a much broader conception of learning environments is considered. Earlier generations reflected an emphasis on content and learning design, while in the digital era context and community assume a much greater importance. Earlier generations depict knowledge as consisting of idealized representations and schemas, while in the digital context knowledge is intuitive and contextual. And finally, earlier generations think of learning as a cognitive process based essentially in logical structures such as language and mathematics, while in the digital era learning is understood as a physical process based on adaptation to concrete experience.

These are important distinctions, though not without precedent in the historical literature. The philosophically minded will recognize elements of the historical division between rationalism and empiricism, while those schooled in the history of education will recognize the contrast between what might be called traditional and progressive education. What’s new with digital technology and digital learning spaces, however, is the possibility of expressing theory precisely in technology and experiencing for ourselves answers to questions that could not even be asked in pre-digital environments.

And so the dissatisfaction with more traditional forms of theorizing in education is also a dissatisfaction with the posing of questions and experimental methods rooted in non-digital forms of investigation and theorizing. For example, the problems of education are often represented as statistical problems, addressed through the social sciences or economics. Complex phenomena are interpreted using the broad generalizations of folk psychology rather than analyzed and understood at an individual and personal level.

Finally, there is dissatisfaction with traditional conceptions of what sort of questions we are attempting to answer. This arises most clearly in the light of asking “what is a theory?” What work do we expect a theory for digital learning spaces to do? What sort of questions need it answer? Again, we find the question changes the more deeply we are engaged in digital learning technology. To this, then, we turn first, as a prelude to the remainder of the discussion.

What Is a Theory?

The word “theory” is used differently in different domains. In physics we see the “theory of gravity” while in language studies we see “critical theory”. So too in the fields of education and technology, a theory may be taken as being anything from a “lens” through which to interpret phenomena to a set of causal mechanisms explaining learning behaviour and practice. The theory guiding the practice of ODE is often characterized under the auspices of “learning theory,” which in the field of education has a broad connotation.

As Picciano (2017) writes,

Learning theory is meant to explain and help us understand how people learn; however, the literature is complex and extensive enough to fill entire sections of a library. It involves multiple disciplines, including psychology, sociology, neuroscience, and of course, education. (p. 166)

This is most clear when we consider the multiple purposes to which theories are put in education. Gibbons and Bunderson (2005) describe theories that explore (“what exists?”), attempting to define, describe, and categorize; theories that explain (“why does this happen?”), looking for causality, correlation, and relationships; and theories that design (“how do I achieve this outcome?”), describing interventions for reaching targeted outcomes and operational principles (Graham et al., 2013, p. 13). These correspond with three ways of seeking knowledge about the world: through exploration, typically using qualitative research methods, which may establish the existence of an entity (an object, a problem, a perspective); through explanation, which often involves quantitative methods, to address questions of identity, relatedness, and causality; and through design, which explores the possibility of creating a particular outcome.

Much, if not most, discussion of technology in education revolves around the first and especially the second question. Learning (or pedagogy) and technology are presented as two separate domains, and theory addresses the causal relation between them. For example, Kanuka (2008) describes three major approaches to a theory of technology: uses determinism, which “emphasizes technological uses and focuses on the ways in which we use technologies”; social determinism, “concerned with the integration of technological artefacts within social systems and cultural contexts”; and technological determinism, where “technologies are viewed as causal agents determining our uses and having a pivotal role in social change” (pp. 96-98). Each of the three approaches also offers a platform for the criticism of technology in learning. For example, Kanuka quotes Jonassen (1996) on uses determinism: “carpenters use their tools to build things; the tools do not control the carpenter. Similarly, computers should be used as tools for helping learners build knowledge; they should not control the learner” (Kanuka, 2008, p. 4). Critics of social determinism include Putnam (Bowling Alone, 2000) and Turkle (Alone Together, 2011). Major critics of technological determinism include Noble (1998), Postman (1992), Dreyfus (2001), and Watters (2021).

Both the Gibbons and Bunderson discussion and the Kanuka discussion present as “theory” something along the lines of the classical Deductive-Nomological (DN) model, where a scientific explanation consists of an explanandum, a sentence “describing the phenomenon to be explained,” and an explanans, “the class of those sentences which are adduced to account for the phenomenon” (Hempel and Oppenheim, 1948, reprinted in Hempel, 1965, p. 247). The relation between the explanans and the explanandum may be deductive, as in determinist theories, or it may be statistical, as commonly found in theories of the social sciences, including education. A common criticism of DN model theories is that they are reductive, that is, they are held to be unificationist in the sense of attempting to provide a unified account of a range of different phenomena, for example, by explaining learning through too “low” a science (Sayer, 2010, p. 5) or attributing sole responsibility to individuals for their fates (Sayer, 2010, p. 7). And so, through a rejection of reductionism, theories proliferate, each specific to its own level of discourse, its own context, or its own discipline. And yet digital learning practitioners are expected to agree that “key research questions were already answered decades ago”.

A deeper critique may be found in questioning the distinction between explanans and explanandum that forms the basis for DN-style theories. Digital technologies have fostered the rise of complex network technology that defies explanation in such simple terms. Network interactions, whether the conversations of a billion internet users or the workings of a billion-parameter artificial intelligence, cannot be understood in terms of anything like a DN model. There is no distinction that can be drawn between that which explains and that which is being explained.

Traditional Learning Theories

Designers of early digital learning environments were influenced by, and drew from, a range of learning theories developed in previous generations. These theories, in turn, were influenced by major schools of thought in the philosophies of science and psychology.

Canonically, the first of these is behaviourism. Developed in the first part of the 1900s by authors including B.F. Skinner (Beyond Freedom and Dignity) and Gilbert Ryle (The Concept of Mind), behaviourism was offered as a response to dualist theories that posited a nonphysical “mind” that had special cognitive abilities and insights into the nature of the self and reality. Behaviourism limits its conclusions to what may be observed and measured, and therefore describes learning and development in terms of stimulus-response (Skinner) and knowledge and skills as dispositions (Ryle, 1949). There is, according to behaviourism, no “mental state” constituting a single bit of knowledge or a skill; one might (in today’s terms) think of it as a “whole of body” response.

Behaviourism, as Watters (2015) explains, is the philosophy behind the concept of the “teaching machine” and proceeds by incremental conditioning.

By arranging appropriate ‘contingencies of reinforcement,’ specific forms of behaviour can be set up and brought under the control of specific classes of stimuli… a student is ‘taught’ in the sense that he is induced to engage in new forms of behaviour and in specific form upon specific occasions. (Skinner, 1958, p. 970)

“Behaviourism has persisted, although often unnamed and un-theorized - in much of the technology industry, as well as in education technology – in Turing machines not simply in teaching machines” (Watters, 2015).

In what might be thought of as a response to behaviourism, cognitivism emerged in the later 1900s. It postulates the existence of causally relevant cognitive states that can be located in the mind and are able to explain mental phenomena such as reason and language better than stimulus and response (which, proponents such as Chomsky (1986, Preface, xxv) argue, cannot explain them at all). An example of cognitivism is the physical symbol system hypothesis, which as the name suggests references a possibly innate language of thought (Fodor, 1975).

It is arguable that a combination of cognitivism and behaviourism lives on today in the form of adaptive learning. Such systems use digital technology to monitor student activities, including responses to learning tasks, interpret those responses based on domain-specific models, and present students with new activities or resources in order to address learning needs, hence embracing a cognitivist model of learning. However, as a procedural system, adaptive learning is inherently behaviourist. Stimuli and student responses are mapped to cognitive schemas or frames, perhaps as “production rules” as in the Intelligent Tutoring System of Anderson et al. (1985). Formally, however, production rules and dispositions amount to the same thing: a form of counterfactual, reducible (in theory) to observed behaviour.
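The production-rule pattern described here can be sketched in a few lines of code. The rule conditions, action names, and thresholds below are invented for illustration and are not drawn from Anderson et al.’s system; the point is only that each rule is a condition-action pair matched against observed behaviour:

```python
# Minimal sketch of production-rule-style adaptive tutoring.
# Rule conditions, actions, and thresholds are hypothetical illustrations.

def choose_next_activity(history):
    """Select the next activity from a student's response history.

    `history` is a list of booleans (True = correct response).
    Each production rule is a (condition, action) pair: the first
    rule whose condition matches the observed behaviour fires.
    """
    rules = [
        # three recent failures -> remediate
        (lambda h: len(h) >= 3 and not any(h[-3:]), "review_prerequisite"),
        # two recent successes -> advance
        (lambda h: len(h) >= 2 and all(h[-2:]), "advance_to_next_topic"),
        # default rule: keep practising
        (lambda h: True, "practice_current_topic"),
    ]
    for condition, action in rules:
        if condition(history):
            return action

print(choose_next_activity([True, True]))          # recent successes
print(choose_next_activity([False, False, False]))  # recent failures
print(choose_next_activity([True, False]))          # mixed record
```

Note that the system’s “knowledge” of the student is nothing over and above these behavioural counterfactuals, which is the sense in which production rules and dispositions coincide.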

Digital learning design drawing from transactional theories of ODE is similarly both cognitivist and behaviourist in nature. In such theories, the central problem is the communication of information from a sender (in the case of learning, an instructor or learning resource) to a recipient (a student or learner). Such theories describe the forms of online interactions, in the case of Moore (1989), instructor-to-student, student-to-student, and student-to-content, and mechanisms for ensuring the fidelity of transmission where separation between the teacher and students can “lead to communication gaps, a psychological space of potential misunderstandings between the behaviours of instructors and those of the learners” (Moore and Kearsley, 1996, p. 200).

Both traditional education and ODE were influenced by a wave of theories that push back against the idea that knowledge or learning could be “delivered” in the sense that a message or piece of information is delivered, of which the most prominent is social constructivism. Constructivism in general is the thesis that knowledge is generated by means of the creation of models, schemas, or representations by means of physical symbol systems. As a philosophy of science, constructivism is a form of empiricism (van Fraassen, 1980), while as a theory of learning constructivism responds to innatism by describing learning and development as social phenomena (hence, social constructivism) employing language, storytelling, community structures, and similar methods of “making meaning”.

A related theoretical approach, discovery learning, is based on a model of learning and discovery as problem-solving activities (Laudan, 1978) where learners draw on their own activities and experiences to discover facts and construct theories about the world. The suggestion is that learners are more likely to remember facts and theories they discover on their own than those merely presented to them by instructors (Bruner, 1973). The theory of experiential learning formalizes this idea, describing a process resembling scientific models of hypothesis, prediction, and deduction, which in education can be described as a “learning cycle” (Kolb, 1984; Kolb and Kolb, 2005, p. 195). Papert’s theory of constructionism is a less formal example of this, taking from Piaget “a model of children as builders of their own intellectual structures,” where learners solve problems and develop ideas through an open-ended creative process working hands-on with physical or digital objects (Papert, 1980, p. 7).

In what might be considered the culmination of traditional learning theories, the theory of direct instruction was developed through criticism of discovery and inquiry-based teaching (Kirschner et al., 2006). Based on the idea of “cognitive load,” a limit to a person’s ability to process information at any given time, it suggests that such theories require too much extraneous work on the part of the learner. For example, in problem-based learning, a learner might waste time and effort discovering which formula should be used to find the answer. This extraneous effort, it is argued, limits the learner’s capacity to absorb and retain information. Rather, instruction should be based on directly explaining the concept or theorem to be taught, and then providing a set of “worked examples” that students can follow in order to learn how the problems are solved.

Traditional learning theories have in common a conception of knowledge and learning as a cognitive function, even if (as in the case of behaviourism) that function cannot be directly observed. Learning is in some way the stimulation, transmission, or construction of models, schemas, or representations that are symbolic in nature and consist of statements of fact and sets of rules or generalizations about those facts. Learning objectives could be stated by enumerating the factual domains to be mastered, or (as in the case of Bloom’s taxonomy) evidence of progressively more abstract actions demonstrating internalization of those rules and representations. In this way, traditional theories of knowledge and learning are structurally isomorphic with DN theories of science.

Toward Newer Theories

Another way of saying that traditional learning theories have in common a conception of knowledge and learning as a cognitive function is by saying that such theories are all knowledge centered. The dissatisfaction with, and replacement of, traditional theories begins with a challenge to this conception of learning. What we understand by “knowledge and learning” is something more than or different from the cognitive function as traditionally conceived. Thus, for example, we see Bransford et al. (1999) argue that effective learning is community-centered, knowledge-centered, learner-centered, and assessment-centered. “All learning takes place in settings that have particular sets of cultural and social norms and expectations and that these settings influence learning and transfer in powerful ways” (Bransford et al., 1999, p.4).

Bransford et al. (1999) also describe (following Dreyfus and Dreyfus, 1980, p. 15) how expert knowledge differs from novice knowledge. Experts

notice features and meaningful patterns of information that are not noticed by novices. Experts’ knowledge cannot be reduced to sets of isolated facts or propositions but, instead, reflects contexts of applicability: that is, the knowledge is “conditionalized” on a set of circumstances. (Bransford et al., 1999, p. 31)

Or, learning is more than cognitive; it “changes the physical structure of the brain and, with it, the functional organization of the brain” (Bransford et al., 1999, p. 4).

Context, self, community: the emergence of newer theories of learning begins with a new understanding of their importance and “the unique characteristics or affordances of the Web to enhance these generalized learning contexts” (Anderson 2008b, p. 46). None of these were in and of themselves new to the field; as mentioned above, constructivism already emphasized the role of community, and Kolb’s version of discovery learning was based on his understanding of human psychology, for example. But it took engagement with the World Wide Web – the ultimate information processing system – to underline the importance of these other factors.

Several early theories drew on these factors. One such is adaptive learning, which is in essence the use of digital (or other) technology to select or recommend unique sets of learning resources or activities based on a learner’s prior knowledge and demonstrated capabilities. Intelligent Tutoring Software (ITS), for example, was based in a combination of domain knowledge, a pedagogical model, and a student model (Kravcik et al., 2005, p. 9), which then gave way to adaptive hypermedia models and web-based adaptive educational systems. These were based to a large degree on Semantic Web technologies, and implemented using a combination of learning rules and reusable learning resources, or “learning objects”.

Another was situated cognition , the idea that “activity and situations are integral to cognition and learning” and that “by ignoring the situated nature of cognition, education defeats its own goal of providing useable, robust knowledge” (Brown et al., 1989). For example, consider the difference between learning words according to dictionary definitions and learning words in the context of using them in sentences.

Teaching from dictionaries assumes that definitions and exemplary sentences are self-contained “pieces” of knowledge. But words and sentences are not islands, entire unto themselves. Language use would involve an unremitting confrontation with ambiguity, polysemy, nuance, metaphor, and so forth were these not resolved with the extra linguistic help that the context of an utterance provides. (Nunberg, 1978)

Significantly, learning how to use a tool (for example) is not rule-based. In learning how to use tools, people

build an increasingly rich implicit understanding of the world in which they use the tools and of the tools themselves… Learning how to use a tool involves far more than can be accounted for in any set of explicit rules. The occasions and conditions for use arise directly out of the context of activities of each community that uses the tool, framed by the way members of that community see the world. (Brown et al., 1989, p. 33)

Similarly,

Conceptual tools similarly reflect the cumulative wisdom of the culture in which they are used and the insights and experience of individuals. Their meaning is not invariant but a product of negotiation within the community. Again, appropriate use is not simply a function of the abstract concept alone. It is a function of the culture and the activities in which the concept has been developed. (Brown et al., 1989, p. 33)

The situatedness of cognition is obscured by the nature and function of schools. Most school activity exists in a culture of its own, separate from what students will experience in their workplace and culture. In the school, learning transfer is “assumed to be the central mechanism for bringing school-taught knowledge to bear in life after school” (Lave, 1988, p. 23). In such a context, problem-solving activities are “always a quest for truth or the ‘right answer’” (Lave, 1988, p. 36). The problem context is “the only context germane to problem-solving activity” (Lave, 1988, p. 39). Contrast this account of learning in school with “life after school,” where problem-solving is a “process of transformation” (Lave, 1988, p. 59). “The same activity in different situations derives structuring from, and provides structuring resources for, other activities” (Lave, 1988, p. 122). Doing mathematics in a math class is very different from doing mathematics in a grocery store.

One major outcome of situated learning is the concept of the community of practice. “A person’s intentions to learn are engaged and the meaning of learning is configured through the process of becoming a full participant in a sociocultural practice” (Lave and Wenger, 1991, p. 29). A good example of this is the apprenticeship, where new members of the profession are gradually moved from peripheral participation involving limited duties to more and more central roles. This same process takes place less formally in other professions. A “person” becomes a “practitioner” “whose changing knowledge, skill, and discourse are part of a developing identity” (Lave and Wenger, 1991, p. 122). Hence, on this theory, “knowing is inherent in the growth and transformation of identities and is located in the relations among practitioners, their practice, the artifacts of that practice, and the social organization and economy of communities of practice” (Lave and Wenger, 1991, p. 122).

We see a similar perspective represented in the “community of inquiry” model for online learning environments developed by Garrison et al. (1999). This model is based on the interplay of three types of “presence”: social presence, cognitive presence, and teaching (or perhaps learning) presence. The concept of social presence especially identifies a connectedness between people in a learning environment.

Collaboration must draw learners into a shared experience for the purposes of constructing and confirming meaning. Realizing understanding and creating knowledge is a collaborative process. The difference between collaboration and common information exchange is: …the difference between being deeply involved in a conversation and lecturing to a group. The words are different, the tone is different, the attitude is different, and the tools are different. (Garrison et al., 1999, p. 95)

We see in this work of the late 1990s and the early 2000s the development of each of the major themes characterizing emerging theories for digital learning spaces. It became apparent that knowledge and learning are based on much more than mere transmission of information, as the nature of the learner and the learner’s environment play key roles, and context and community assume a much greater importance. Because these must be explicitly created in a digital learning environment, rather than inherent in, say, a classroom or workplace, their nature and development assumed a greater importance in learning theory and design. It also became apparent that learning and domain knowledge consist of more than idealized representations and schemas, more even than logical structures such as language and mathematics. While these cognitive phenomena continue to play a major role in learning theory, there is an increasing recognition of the importance of the ineffable properties of personal knowledge and learning communities.

Newer Theories for Digital Learning Spaces

Much of digital learning research in the early twenty-first century was devoted to the idea of learning communities, collaboration, and co-construction of knowledge. Haythornthwaite et al. (2007) provide a good overview of six major approaches, including living technologies, co-evolution of technology and learning practices, and technology and social tie formation. The importance of interaction was emphasized. “The main function of reasoning, we claim, is argumentative. Reasoning has evolved and persisted mainly because it makes human communication more effective and advantageous” (Mercier and Sperber, 2011, p. 60).

Such discussion also led to the idea that cognition is not confined to the brain but partly distributed and realized in our interactions with the environment.

The human organism is linked with an external entity in a two-way interaction, creating a coupled system that can be seen as a cognitive system in its own right. All the components in the system play an active causal role, and they jointly govern behaviour in the same sort of way that cognition usually does. (Clark and Chalmers, 1998, p. 8)

Paper-and-pencil calculation is the standard example. The social processing of information can be conceived as a species of extended cognition where our cognitive processing is distributed into the social environment and supported and constrained by social interaction.

It therefore became a matter of considerable importance to understand how such social processes can lead to knowledge. For example, factors such as the role of diversity were widely discussed. “Both cognitive and social diversity have similar effects on group deliberation. No diversity, no disagreement, and no critical feedback; but too much diversity erodes trust and mutual understandings and prevents the convergence of opinion” (Pesonen, 2022, p. 14). Rather than seeking sameness, it became clear that knowledge and learning require difference.

Connectivism was offered in 2004 as an answer to such questions. It at once embraced the role of context, community and interaction in the development of knowledge and learning, and it drew from the unique affordances of digital learning environments to describe how such a process might be implemented. It pushes back at once against the idea of knowledge acquisition through transmission and also against the idea of knowledge as consisting of purely formal, and purely internal, schemas and representations.

In his paper introducing connectivism, Siemens quotes an undated comment from Karen Stephenson to underline the first point:

Experience has long been considered the best teacher of knowledge. Since we cannot experience everything, other people’s experiences, and hence other people, become the surrogate for knowledge. “I store my knowledge in my friends” is an axiom for collecting knowledge through collecting people. (Siemens, 2005)

And he points to the complexities of chaos theory to make the second point:

Chaos is the breakdown of predictability, evidenced in complicated arrangements that initially defy order. Unlike constructivism, which states that learners attempt to foster understanding by meaning making tasks, chaos states that the meaning exists—the learner’s challenge is to recognize the patterns which appear to be hidden. (Siemens, 2005)

In the explicit embrace of an idea of knowledge and learning as embedded in chaos and context, the nature of knowledge is transformed from formal schemas and representations to connections between entities and pattern recognition. Formal representations may continue to be used, and may constitute the content of communication, but knowledge and learning are found in the structure and organization that grows around such content. “The learner’s challenge is to recognize the patterns which appear to be hidden. Meaning-making and forming connections between specialized communities are important activities…” (Siemens, 2005), leading to the “spontaneous formation of well-organized structures, patterns, or behaviours, from random initial conditions” (Rocha, 1998, p. 3).

Connectivism makes these phenomena explicit in the definitions of knowledge and learning. “At its heart, connectivism is the thesis that knowledge is distributed across a network of connections, and therefore that learning consists of the ability to construct and traverse those networks” (Downes, 2007). As a theory of digital learning environments, therefore, connectivism describes the formation of structures and processes that lead to self-organizing networks in education. The first and most important of these is the Massive Open Online Course (MOOC).

Developed in 2008, the “Connectivism and Connective Knowledge” MOOC (CCK08) was intended not only to introduce the theory but also to offer an example or model of the theory in action (Downes and Siemens, 2008). Rather than being centered around a body of structured content and defined in terms of learning objectives, the MOOC was arranged around a series of loosely defined topics such as social networks, intentionalism and meaning, groups and networks, and complexity, chaos and randomness. While course organizers offered material in the form of papers, blog posts, and recorded conversations, the course as a whole consisted of the contributions of more than 170 separate blogs or websites; posts from these were syndicated using RSS and distributed to the 2200 participants in an email and RSS newsletter called The Daily.
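The syndication mechanism at the heart of this design can be sketched with ordinary RSS parsing. The feed content, blog name, and URLs below are invented for illustration and do not reproduce the actual CCK08 newsletter code; the sketch shows only the general aggregation pattern:

```python
# Sketch of harvesting participant RSS feeds into a daily digest,
# in the style of a CCK08-like newsletter. All feed data is invented.
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel>
<title>Participant Blog</title>
<item><title>On networks</title><link>http://example.org/networks</link></item>
<item><title>On chaos</title><link>http://example.org/chaos</link></item>
</channel></rss>"""

def harvest(feeds):
    """Collect (blog, post title, link) entries from a list of RSS documents."""
    entries = []
    for xml_doc in feeds:
        channel = ET.fromstring(xml_doc).find("channel")
        blog = channel.findtext("title")
        for item in channel.findall("item"):
            entries.append((blog, item.findtext("title"), item.findtext("link")))
    return entries

# Build one day's digest from all harvested feeds.
digest = harvest([FEED])
for blog, title, link in digest:
    print(f"{blog}: {title} <{link}>")
```

The design choice is worth noting: the course aggregates pointers to content that remains distributed across participants’ own sites, rather than copying content into a central repository.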

The design of CCK08, while still rooted to a degree in traditional pedagogy (a Moodle environment was employed alongside the blog posts, newsletter, and wiki, and formal assessment was offered to a small group of University of Manitoba students taking the course for credit), was based in what might be characterized as the principles for successful networks (Downes, 2005) and in particular around what came to be called the semantic principle, outlining four major conditions for successful knowledge creation in self-organizing networks. The first two, autonomy and diversity, can be seen in many of the earlier theories discussed above. The latter two, openness and interactivity, are derived from the development of the digital networks used to support the internet in general and online learning in particular.

Following the development and success of the connectivist MOOC model, e-learning developers and designers began to ask how best to support both learner autonomy and learner diversity, a discussion that led to the articulation of the personal learning environment (PLE) as a conceptual design.

Rather than integrate tools within a single context, the system should focus instead on coordinating connections between the user and a wide range of services offered by organizations and other individuals. Rather than interacting with the tools offered within the contexts supplied by a single provider, the PLE is concerned with enabling a wide range of contexts to be coordinated to support the goals of the user. (Wilson et al. 2007, p.5)

No viable commercial product was developed along the lines of the PLE; however, the development of a successor technology, the learning experience platform (LXP), may be traced to the PLE. For example, one contemporary LXP vendor writes that the LXP

is a consumer-grade learning software designed to create more personalized learning experiences and help users discover new learning opportunities. By combining learning contents from different sources, recommending and delivering them with the support of Artificial Intelligence, across the digital touch points, e.g., desktop application, mobile learning app and others. (Valamis, 2022)

The concepts of openness and interactivity were drawn from the example of digital network technology, and most especially the Internet itself. In addition to the physical properties underlying the Internet that made it a reliable and useful network, properties such as decentralized design and distributed resources, the success of the Internet was also attributed to open standards and open source software. As Berners-Lee (1989) wrote,

the hope would be to allow a pool of information to develop which could grow and evolve with the organisation and the projects it describes. For this to be possible, the method of storage must not place its own restraints on the information. This is why a “web” of notes with links (like references) between them is far more useful than a fixed hierarchical system.

It was noticed by many that the structure of the Internet – a digital network consisting of dynamically changing links reflecting the knowledge and learning of a society – and some forms of artificial intelligence – described under the heading of connectionism, and consisting of dynamically changing links between interconnected artificial neurons reflecting the knowledge and learning of a computer system – were in many important respects the same. Connectivist theory made this association explicit and extended it to include examples from theories of self-organizing social networks (as described by Barabási, 2003; Shirky, 2008; Watts, 2003) as well as graph theory. The key tenet of all four sets of theories is the same: knowledge is not content, it is organization. And as such, connectivism reflects back directly to the concepts of the community of practice and the community of inquiry, where, as noted above, “knowing is inherent in the growth and transformation of identities and is located in the relations among practitioners, their practice, the artifacts of that practice, and the social organization and economy of communities of practice” (Lave and Wenger, 1991, p. 122).
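The tenet that knowledge is organization rather than content can be made concrete with a minimal connectionist sketch. The example below is an invention for illustration, not drawn from the sources cited above: a single-layer network learns the logical AND relation, and the “knowledge” it acquires is stored nowhere as content. It exists only in the organization of the network, its connection weights and threshold.

```python
# A minimal connectionist illustration: a perceptron learns logical AND.
# The "knowledge" it acquires is not stored as content anywhere; it exists
# only in the organization of the network -- its weights and threshold.

inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]          # the AND relation to be learned

w = [0.0, 0.0]                  # connection weights
b = 0.0                         # bias (threshold)
rate = 0.1                      # learning rate

def activate(x):
    """Step activation: fire if the weighted sum crosses the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Train by adjusting connection strengths a little after each error.
for epoch in range(25):
    for x, t in zip(inputs, targets):
        error = t - activate(x)
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        b    += rate * error

print([activate(x) for x in inputs])   # -> [0, 0, 0, 1]
print(w, b)                            # the learned "knowledge"
```

Note that nothing in the final state resembles the rule “output 1 only when both inputs are 1”; the rule is recoverable only from the pattern of weighted connections as a whole, which is the point of the connectionist analogy.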

Recent work in digital learning environments reflects and builds on these themes. One line of enquiry of note can be found under the heading of open educational practices (OEP), which emphasize both the openness characteristic of learning in digital networks and the need for a more humane approach based on an ethic of care (Farrow, 2016, p. 100), where the traditional conception of education as “the transfer of information and knowledge to learners is being replaced with a view of learners as active participants in their own learning” (Kaatrakoski et al., 2017). “OEP are defined as practices which support the (re)use and production of OER through institutional policies, promote innovative pedagogical models, and respect and empower learners as co-producers on their lifelong learning path” (Ehlers, 2011, p. 4). Cronin (2017) identifies the following dimensions of OEP: balancing privacy and openness, developing digital literacies, valuing social learning, and challenging traditional teaching role expectations.

Another more recent body of work revolves around the concept of embodied cognition and learning. “Embodied cognition involves how the body and mind work in tandem to create the human experience. Embodied cognition literature suggests that the physical actions we perform, as well as the actions being performed around us, shape our mental experience” (Sullivan, 2018, p. 129). Based on work by, among others, Varela et al. (1991, p. 172), embodied learning draws from Papert’s theory of constructionism, referenced above, and builds on the concept of non-cognitivist and non-formal knowledge, as described by Dreyfus and Dreyfus (1980) and Bransford et al. (1999). Shapiro and Stolz (2019) outline “some of the main ideas that distinguish embodied cognition from computational cognitive science” and argue that “traditional cognitivist accounts of the mind should be challenged because they exclude the close relationship that exists between mind and body that is more profound than initially considered” (p. 20).

Similarly, the work of Stanford scientist Fei-Fei Li and her colleagues (Liu et al., 2022, p. 1) points to the development of embodied AI as a field involving “AI agents that don’t simply accept static images from a data set but can move around and interact with their environments in simulations of three-dimensional virtual worlds” (Whitten, 2022). It suggests a type of AI that “could power a major shift from machines learning straightforward abilities, like recognizing images, to learning how to perform complex humanlike tasks with multiple steps, such as making an omelet” (Whitten, 2022). The difference here is like the difference between presenting students with text and images to learn from and giving them a real environment where they can move about and try things. “The meaning of embodiment is not the body itself, it is the holistic need and functionality of interacting and doing things with your environment” (Whitten, 2022).

Also enjoying a renaissance is an approach called enactivism, a combination of constructivism and embodied learning, which “is a theory wherein cognition and environment are inseparable, and learning is drawn from the interaction between learner and environment” and “emphasises emergent cognitive structures that self-organize as a result of interactions between organism and environment” (Ward et al., 2017, p. 368). Again, we see the link not only to connectivism but to other emerging theories of digital learning environments. “Views of the mind as embodied, embedded, extended, affective, or some combination of these, are members of the enactivist family at least in virtue of sharing important common ancestry” (Ward et al., 2017, p. 373).

The definitive theory of digital learning spaces is perhaps yet to be written, but there is a sense in which a sea change has occurred in some areas of educational technology and e-learning, even if the proponents of traditional learning, content-based MOOCs, and cognitivist theories of knowledge have not yet yielded the field. Knowledge and learning are based, minimally, in complex processes. These processes defy simple description and explanation, but at a minimum they depend in important ways on one’s environment, whether by engaging in conversation or manipulating objects, and vary significantly depending on context, which may include not only the learner’s prior experiences but also the nature of the culture, workplace, or community in which one finds oneself immersed.

What Is a Theory: Revised

In an earlier section of this chapter we discussed the traditional conception of a theory, and in particular the HD model that informs much of the common discourse around learning theories in general and those concerning digital learning environments in particular. Such theories, we noted, are based on explanations of phenomena such that, with the appropriate intervention and a correct theory, we may reliably predict a learning outcome. Traditional learning theories were developed within this traditional conception of theory. So when we say that key research questions have been answered, what we mean is that researchers have provided explanations for learning phenomena such that pedagogical interventions may reliably produce desired learning outcomes. But learning, as we detail in this chapter, is based on complex and context-dependent phenomena, and cannot be understood in terms of anything like the HD model.

Another way of saying the same thing is to say that humans, and human learning, cannot be subject to mechanistic explanation, and therefore cannot be captured by processes and pedagogies based on mechanistic theories of knowledge and learning. It may be that mechanistic processes can reliably produce one outcome or another, but the error consists in describing that outcome as “learning”. It is merely a production, an offering of content for the content machine, and not in and of itself indicative of a capacity that can only be developed in a learning community or environment through a process of practice, reflection, and interaction.

It therefore merits speculation that what we understand as a “theory” ought to be reflected in, and informed by, what we understand as “learning”. And though there is much more that can be studied and researched in that regard, the recent success of connectionist artificial intelligence, today known as “deep learning,” is instructive. In particular, we can examine the application of AI to learning environment design, that is, learning analytics. And we have learned that

at the petabyte scale, information is not a matter of simple three- and four-dimensional taxonomy and order but of dimensionally agnostic statistics. It calls for an entirely different approach, one that requires us to lose the tether of data as something that can be visualized in its totality… faced with massive data, this approach to science — hypothesize, model, test — is becoming obsolete. (Anderson, 2008a)

So instead, the model – which is now what theories have become – is not so much a set of schemas, ontologies, and representations as a large body of data combined with a description of a learning network, such that a characteristic set of weighted connections can be employed to perform useful tasks in complex environments in a variety of contexts. These weighted connections do not “stand for” anything. They constitute a “representation” only in the loosest sense of the word. And the elements of the learning network, consisting of neural-level descriptions of activation functions and thresholds, among other physical properties, describe only the learning system itself, and not the environment about which it learns. Meanwhile, an explanation of the neural network’s output that would enable us to manipulate it and force certain results defies us.

What, then, do such theories do? Popular accounts of analytics describe four major functions: description, diagnosis, prediction, and prescription (Boyer and Bonnin, 2016; Brodsky et al., 2015). A study of contemporary deep learning systems (Downes, 2021) suggests two additional categories may be added: generation (or content creation) and deontology (or identifying what the best, or desired, option may be). It is arguable that, given what we now know about knowledge and learning, a scientific theory may come to be regarded simply as a neural network model trained on a data set such that it may reliably perform these six functions. While it would perhaps be nice to expect simple causal explanations rooted in deep cosmic laws or principles, it may be that these are simply not forthcoming. The universe might not, after all, be like a machine.
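To see how these six functions might operate over even a trivial data set, consider the following sketch. The weekly quiz scores, thresholds, and intervention options are all invented for illustration; real analytics systems would perform each function with trained models rather than hand-written rules.

```python
# An illustrative sketch (invented data and thresholds) of the six
# analytics functions applied to one learner's weekly quiz scores.

scores = [55, 60, 58, 72, 75, 80]   # hypothetical weekly scores

# 1. Description: summarize what happened.
mean = sum(scores) / len(scores)

# 2. Diagnosis: flag why -- here, weeks well below the learner's average.
weak_weeks = [i + 1 for i, s in enumerate(scores) if s < mean - 5]

# 3. Prediction: naive linear extrapolation of the overall trend.
trend = (scores[-1] - scores[0]) / (len(scores) - 1)
predicted_next = scores[-1] + trend

# 4. Prescription: recommend an action based on the prediction.
action = "continue current plan" if predicted_next >= 70 else "schedule review"

# 5. Generation: produce new content (here, a summary) from the data.
summary = f"Average {mean:.1f}; predicted next score {predicted_next:.1f}."

# 6. Deontology: choose the best option relative to a stated goal.
options = {"extra practice": 8, "peer tutoring": 5, "no change": 0}
best = max(options, key=lambda o: options[o])   # largest expected gain

print(summary)
print("Weak weeks:", weak_weeks, "| Action:", action, "| Best option:", best)
```

Even in this toy form, the sketch makes the distinction visible: the first four functions report on and react to the data, while generation and deontology produce something new, a summary and a choice, which is where contemporary deep learning systems extend the traditional analytics repertoire.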