The Quality of Massive Open Online Courses
24 April, 2013 – Moncton, New Brunswick
In this short contribution I would like to address the question of assessing the quality of massive open online courses. The assessment of the quality of anything is fraught with difficulties, depending as it does on some commonly understood account of what would count as a good example of the thing, what factors constitute success, and how success against that standard is to be measured.
With massive open online courses, it is doubly difficult, because of the lack of a common definition of the MOOC itself, and because of the implication of external factors in the actual perception and performance of the MOOC. Moreover, it is to my mind far from clear that there is agreement regarding the purpose of a MOOC to begin with, and without such agreement discussions of quality are moot.
Let me begin, then, with a statement describing what I take a MOOC to be. I will then address what I believe ought to be the purpose of a MOOC, the success factors involved in serving that purpose, the design features that impact success, and finally, questions regarding the measurement of those features.
What is a MOOC?
The term MOOC, as is commonly known, stands for ‘Massive Open Online Course’. There have been numerous efforts recently to define each of these four terms, sometimes, as I observe here, in such a way as to result in an interpretation opposite to the common understanding of the term. Thus in some cases a MOOC is being thought of as a smallish, closed, offline (or hybrid), ongoing activity. This, for example, is what we see in the phenomenon of the “wrapped” MOOC.
To my own mind, we should be relatively rigid in our definition of a MOOC, if for no other reason than to distinguish a MOOC from the myriad other forms of online learning that have existed before and since, and hence to identify those aspects of quality that are unique to MOOCs. Hence, a MOOC is to my mind, defined along the following four dimensions:
Massive – here I attend not to the success of the MOOC in attracting many people, but to the design elements that make educating many people possible. And here we need to keep in mind that to educate is to do more than merely deliver content, and more than to merely support interaction, for otherwise the movie theatre and the telephone system are, respectively, MOOCs.
My own theory of education is minimal (so minimal it hardly qualifies as a theory, and is almost certainly not my own): “to teach is to model and to demonstrate; to learn is to practice and reflect.” Thus, minimally, we need an environment that supports all four of these on a massive scale. In practice, what this means is a system designed so that bottlenecks are not created in any of the four attributes: modeling, demonstration, practice, and reflection.
To offer a simple example: an important part of reflection is the capacity to perform and then discuss performance with others. If each person must perform and discuss the performance with a specific person, such as the teacher, then a bottleneck is created, because there is not enough time to allow a large number of people to perform. Similarly, if each performance and discussion involves the entire class, the same sort of bottleneck is created. Hence, in order for a course to be massive, performance and reflection must be designed in such a way that no particular person is required to view all performances.
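The scaling argument can be made concrete with a toy calculation (the numbers here are invented for illustration): a designated reviewer's workload grows linearly with enrolment and eventually exceeds any fixed capacity, while distributed peer review keeps each person's load constant no matter how large the course becomes.

```python
# Toy sketch of the review bottleneck. Assumed (hypothetical) numbers:
# each performance takes 10 minutes to review, and any one reviewer
# has at most 20 hours (1200 minutes) available.

REVIEW_MINUTES = 10
CAPACITY_MINUTES = 20 * 60

def teacher_review_load(n_students):
    """Total minutes one teacher needs if they must review every performance."""
    return n_students * REVIEW_MINUTES

def peer_review_load(reviews_per_student=3):
    """Minutes each participant needs if reviews are spread among peers."""
    return reviews_per_student * REVIEW_MINUTES

for n in (30, 300, 3000):
    feasible = teacher_review_load(n) <= CAPACITY_MINUTES
    print(n, teacher_review_load(n), feasible)
```

At 30 students the single reviewer copes; at 3,000 the linear load is far beyond capacity, while each peer's load is unchanged – which is the sense in which the design, not the enrolment, determines whether a course can be massive.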
Open – I have had many arguments with people over the years regarding the meaning of ‘open’, and these arguments have most always (to my perception) involved the other people attempting to define ‘open’ in such a way as to make ‘open’ mean the same as ‘closed’. There is, for example, the famous distinction between free as in ‘gratis’, and free as in ‘libre’. In education there is in addition a definition of ‘open’ which is neither gratis nor libre, but instead refers to ‘open admissions’, or the removal of any academic barriers to participation in a course or program.
For my own part, the meaning of ‘open’ has more to do with access to a resource, as opposed to having to do with what one can do with a resource. The definition of ‘free software’, for example, assumes that the software is already in your possession, and defines ways you can inspect it, run it, and distribute it, without limitations. But this definition is meaningless to a person who, for whatever reason, cannot access the software in the first place. The more common and widely understood meanings of ‘free’ and ‘open’ are broader in nature, more permissive with regard to access, and more restrictive with regard to the imposition of barriers.
In particular, something (a resource, a course, an education) is free and open if and only if:
- the resource may be read, run, consumed or played without cost or obligation. This addresses not only direct fee-for-subscription, but also enclosure, for example, the bundling of ‘free’ resources in such a way that only those who pay tuition may access them
- there are reasonable ways to share the resource or to reuse the resource, and especially to translate or format-shift the resource (but not necessarily to be able to sell or modify the resource)
Having said that, as George Siemens and I discussed the development of MOOCs in 2008, we were conscious of and communicated the fact that we were engaged in a progression of increasingly open access to aspects of education:
- first, open access to educational resources, such as texts, guides, exercises, and the like
- next, open access to curriculum, including course content and learning design
- third, open access to criteria for success, or rubrics (which could then be used by ourselves or by others to conduct assessments)
- fourth, open assessments (this was something we were not able to provide in our early courses)
- fifth, open credentials
It is worth remarking that by ‘open’ we very clearly intended both the aspects of access and sharing to be included; what this meant in practice was that we expected course participants not only to use course resources, curriculum, etc., but also to be involved in the design of these. Hence, for example, before we offered CCK08, we placed the course schedule and curriculum on a wiki, where it could be edited by those who were interested in taking the course (this was a strategy adapted from the ‘Bar Camp’ school of conference organization and the EduCamp model as employed by Nancy White and Diego Leal).
It is interesting to contrast our approach to ‘open’ with the “logic model” devised by James C. Taylor and eventually adopted by OERu which preserved the openness of resources and courses, but kept closed access to assessments and credentials. Such courses are not to my mind ‘open courses’ as a critical part of the course is held back behind a tuition barrier. Exactly the same comment could be made of ‘free’ courses that entail the purchase of a required textbook. The fact that some part of a course is free or open does not entail that the course as a whole is free or open, and it is a misrepresentation to assert such.
Online – I mentioned above the phenomenon of ‘wrapped’ MOOCs, which postulate the use of a MOOC within the context of a traditional location-based course; the material offered by the MOOC is hence ‘wrapped’ with the trappings of a more traditional education. This is the sort of approach to MOOCs which treats them more as modern-day textbooks, rather than as courses in and of themselves.
But insofar as these wrapped MOOCs are courses, they are no longer online, and insofar as they are online, they are no longer courses. So whatever a ‘wrapped MOOC’ is, it is not a MOOC. It is (at best) a set of resources misleadingly identified as a ‘MOOC’ and then offered (or more typically, sold) as a means to supplement traditional courses.
For a MOOC to be ‘online’ entails that (and I’ll be careful with my wording here) no required element of the course is required to take place at any particular physical location.
The ‘wrapped MOOCs’ are not MOOCs because you cannot attend a wrapped MOOC without attending the in-person course; there will be aspects of the MOOC that are reserved specifically for the people who have (typically) paid tuition and are resident at some college or university, and are physically located at the appropriate campus at the appropriate time. Just as being online is what makes it possible for these courses to be both massive and open, being located at a specific place makes the course small and closed.
But by contrast, this does not eliminate MOOCs that include or allow elements of real-world interaction or activity. Our original CCK08 MOOC recommended (but did not require) in-person meet-ups, for example, and these were held at various locations around the world. MOOCs such as ds106 require that a person go out into the world and take photographs (for example). In any online course there will be a real-world dimension; what makes it an ‘online’ course is that it does not specify a particular real-world dimension.
Course – before we launched our first MOOC both George Siemens and I were involved in various activities related to free and open online learning. George, for example, had staged a very successful online conference on Connectivism the year before. I had, meanwhile, been running my newsletter service for the educational technology community since 2001. Each of these was in its own way massive, open and online, but they were not courses. There is obviously some overlap between ‘course’ and ‘conference’ and ‘community’, and people have since suggested that there could be (or should be) massive open online communities of practice and of course there could – but they are not MOOCs.
To be clear: I am very supportive of the idea of massive open online communities, but the MOOC is a different entity, with its own properties and role in the environment. And specifically:
- a course is bounded by a start date and an end date
- a course is cohered by some common theme or domain of discourse
- a course is a progression of ordered events related to that domain
Why insist on these? Aside, that is, from the pedantic observation that if you call something a ‘course’ then it ought to have the properties of a course?
My own observation (and I was reluctant at first to create a ‘course’ precisely because of the three limitations just specified above) is that the creation of temporary and bounded events allows for engagement between communities that would not normally associate with each other. Courses are a way of, if you will, stirring the pot. By creating a limited and self-contained event we lower the barriers to participation – you’re not signing up for a lifetime commitment – and hence increase accessibility.
In a sense, the same reason we organize learning into courses is the reason we organize text into books. Yes, simply ‘reading’ is useful and engaging, and widely recommended, but ‘reading a book’ is defined and contained. A person can commit to ‘reading a book’ more easily than to ‘reading’, especially if by ‘reading’ we mean something that never ends.
Hence, massive open online learning that is not bounded, does not cohere around a subject, and is not a progression of ordered events, is not a course, and therefore falls outside the domain of discourse.
The Purpose of MOOCs
The first reaction is to suggest that the purpose of a MOOC is to help someone learn – they are, after all, courses. But purposes are never so transparent, and education is a domain that defines opacity, so the combination does not easily yield to a simple statement of purpose.
Addressing the purpose of a MOOC as ‘learning’, for example, does not begin to address why some person, organization or association would offer a MOOC, beyond at least those early MOOCs that were offered as much to explore the possibilities of the format as to attain any educational objective.
The purpose of MOOCs offered by a commercial entity such as Coursera, for example, is to earn revenue (and beyond that, advance the Coursera brand to enable future courses to also make money). Meanwhile, the purpose of an institution offering a MOOC through Coursera may be multifaceted and nuanced. Consider, for example, the statement that “This is truly in the spirit of what we’re supposed to do in higher education, which is providing education and experimentation,” from Cole Camplese at Penn State. Compare with what Keith Devlin says: “What I see is the true democratizing of higher education on a global scale. And in today’s world – global village, Flat World, call it what you will – I think that is exactly what we (i.e., the entire world, not just the highly privileged US) will need.”
Even a focus on why students subscribe to MOOCs will not be revealing. Consider what the founders of Coursera say about most students who sign up: “Their intent is to explore, find out something about the content, and move on to something else,” said Ms. Koller. Adding tuition fees changes the dynamic, as does adding credentials at the end of the course. Coursera has learned it can earn money charging for authentication services, which satisfies both its need to make money, and a student’s need for a certificate (though at the expense of no longer being free and open).
Doing what he does so well, Curt Bonk has compiled a list of twenty “types, targets and intents” of MOOCs, including the following:
- high scoring or impressive MOOC participants get admissions privileges, job interviews, or points if they later apply for a particular degree program, certificate, internship, or job;
- loss leader - give away one course in every department or program as a means to attract new students to that major, program, or department;
- religious revival MOOC;
- bait and switch MOOC - use it as a means to sell a product or to turn the audience on to something else.
It becomes clear through reflection that MOOCs serve numerous purposes: to those who offer MOOCs, to those who provide services, and to those who register for or in some way ‘take’ a MOOC.
The original MOOC offered by George Siemens and myself had a very simple purpose at first: to explain ourselves. The topic of ‘connectivism’ had achieved wide currency, and was the subject of the online conference mentioned earlier, and yet remained the subject of considerable debate. What was it? Was it even a theory? Did it even apply to education? Was it founded on real research, or was it simply made up? We believed we had good answers to those questions, and the curriculum was designed to lead participants (and ourselves!) through a clear and articulate answering of them.
As we began to design the course (and in particular, as I began to use the gRSShopper application I had designed to support my website and newsletter) it became clearer to both of us that the purpose of the course was also to serve as an example of connectivism in practice. After several years of describing the theory, we began to feel some obligation to demonstrate it in practice. So the course design gradually began to look less and less like a traditional course, with topics and readings arranged in a nice linear order, and more like a network, with a wide range of resources connected to each other and to participants. And the course became much less about acquiring content or skills, and much more about making these connections, and learning from what emerged as a result of them.
The participants in our MOOCs also demonstrated a similarly wide range of motivations. We had several participants who were in the course for the research opportunities it offered (and people like Jenny Mackness, Frances Bell and Sui Fai John Mak have become voices in their own right in the field). Others came with the intent to learn about connectivism, to supplement their existing studies in a masters or PhD program. Others joined in to participate in what they saw as an event, others to make connections and extend their social network (or as it came later to be called, their ‘personal learning network’). At least one (and maybe others) came with the specific intent of discrediting connectivism (and in passing, to call George and myself “techno-communists”).
Even if we limit our focus to what is putatively the primary function of a course, to teach, it becomes difficult to identify the purpose of a MOOC. Much has been made of MOOC completion rates, with the (generally implicit) suggestion that completion is in some respects tantamount to learning. However, it could be argued that enabling a person to sample a course and withdraw without having lost thousands of dollars of tuition is a success. Moreover, different people want to learn different things: some about what connectivism is, some, how best to criticize it, some, whether it even makes sense to their own experience.
And there are different senses of learning. In one sense, to ‘learn’ is to acquire some knowledge or skill, and it is this sense of learning that is most often associated with education, and especially formal education. But there is an equally valid sense of learning, where the objective is to achieve some outcome or complete some task, what Rogers (2006) calls “task-conscious learning”. This sort of task-focused outcome is much more common in informal learning; it is the sort of learning I do, for example, when I dip into Stack Overflow to learn how to set the value of a field before submitting an Ajax form.
It becomes clear that we cannot assess the purpose of a MOOC qua MOOC by assessing the reasons and motivations of the people taking them, nor even by assessing the reasons and motivations of those offering them. What makes a hammer a good hammer isn’t whether it fulfills the reasons and motivations of the people using the hammer, because these people use it variously as a screwdriver, bottle opener, doorstop, weapon, wrench, general-purpose machine repair device, and as an implement for driving nails, screws, tables, pegs and other objects into various sized holes. A MOOC, similarly, may be a very good or very poor PR device, may transmit content very well or very poorly, may advance research a lot or not at all, all depending on who is using it, how they are using it, and why.
MOOC Success Factors
The primary criticism of what I will address in this chapter is that success is process-defined rather than outcomes-defined. Without outcomes measurement we cannot measure success, we can’t focus our efforts toward that success, we can’t become more competitive and efficient, we can’t plan for change and improvement, and we can’t define what we want to accomplish as a result. All this is true, and yet there is no measure of outcome or success that can be derived from designer and user motivations, or even from the uses to which MOOCs are put. The only alternative is to identify what a successful MOOC ought to produce as output, without reference to existing (and frankly, very preliminary and very variable) usage.
These outcomes are a logical consequence of the design of the MOOC. The same is true of a hammer. This tool is defined as a hand-held third-class lever with a solid flat surface at the business end. Anything that satisfies these criteria will, as an outcome, have the capacity to drive a nail into a piece of wood (whether or not any hammer is ever used in this fashion). It has to be under a certain weight to be hand-held, above a certain mass and of a certain length to be a lever, and of certain material and design to have a hard flat surface.
When we are evaluating a tool, we evaluate it against its design specifications; mathematics and deduction tell us from there that it will produce its intended outcome. It is only when we evaluate the use of a tool that we evaluate against the actual outcome. So measuring drop-out rates, counting test scores, and adding up student satisfaction scores will not tell us whether a MOOC was successful, only whether this particular application of this particular MOOC was successful in this particular instance.
The design of a MOOC is, in the first instance, as described above: it is a massive open online course, and the design is successful to the extent it satisfies those four criteria, and unsuccessful to the extent that it doesn’t. That said, however, there are many ways to create a massive open online course, and within that domain, some may be more successful than others. So we need to look at why we designed and developed the MOOC the way we did – why we made it massive, open, online and a course, as described above. Why this model, say, and not a traditional online instructor-led class, or an open online community, or any of a dozen other combinations?
What I begin with is the observation that each person has a different objective or motivation for taking a course, and has different needs and objectives (it’s a lot like dating that way – we think that everyone wants the same thing, but we find in practice that everybody wants something slightly different). We looked at what we called ‘sifters’ and ‘filters’ to create learning recommendation systems, resulting in work I presented at MADLat based on collaborative filtering. “Collaborative filtering or recommender systems use a database about user preferences to predict additional topics or products a new user might like.” There are different ways to approach this problem; I adopted what we called ‘resource profiles’ to characterize resources and make them accessible within a learning resources network. Since the work of filtering and selecting could now be done by the metadata, I turned to the question of what would constitute a successful network, which I addressed in 2005.
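The collaborative filtering idea quoted above can be sketched in a few lines. This is a minimal user-based variant, not the system described in the MADLat work; the participant names, topics, and ratings are invented for illustration.

```python
# Minimal user-based collaborative filtering: predict how much a user
# would like unseen items from the preferences of similar users.
# All names and ratings below are hypothetical.
from math import sqrt

ratings = {
    "alice": {"networks": 5, "pedagogy": 3, "metadata": 4},
    "bob":   {"networks": 4, "pedagogy": 1, "metadata": 5},
    "carol": {"pedagogy": 5, "assessment": 4},
}

def similarity(a, b):
    """Cosine similarity over the items two users have both rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm = sqrt(sum(a[i] ** 2 for i in common)) * sqrt(sum(b[i] ** 2 for i in common))
    return dot / norm

def recommend(user, ratings):
    """Score items the user has not yet rated, weighted by user similarity."""
    scores, weights = {}, {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = similarity(ratings[user], theirs)
        for item, score in theirs.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * score
                weights[item] = weights.get(item, 0.0) + sim
    return {i: scores[i] / weights[i] for i in scores if weights[i] > 0}

print(recommend("alice", ratings))
```

Here ‘alice’ is recommended ‘assessment’ on the strength of her agreement with ‘carol’ – the database of preferences does the filtering, without anyone curating a reading list.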
Partially influenced by earlier work I had done in networks (and especially the work of Francisco Varela) it was clear to me that the objective wasn’t to connect everything to everything, but to achieve an organization in such a way as to support cognition. The work of Rumelhart and McClelland suggested ways this organization could be defined in terms of nodes and connections and learning mechanisms to achieve what Churchland and others called “plasticity”. The structural properties I described in 2005 were drawn in large part from documents describing the design principles behind the internet. Finally, remarks by Charles Vest about the American university system led me to formulate what I now call the Semantic Principle, also in 2005, which crystallized as the ‘Groups and Networks’ presentation in New Zealand.
At the risk of repeating myself, let me say here that the Semantic Principle consists of four major elements: autonomy, diversity, openness, and interactivity.
Before discussing each of these briefly, let me describe the outcome a network design embodying the semantic principle will achieve. Such a system is not static; it is dynamic. It is self-organizing, and creates these organizations in response to (and as a reflection of) environmental input. It can be thought of as a highly nuanced perceptual system. Over time, it acquires a state such that it can (if you will) recognize entities and events in the environment as relevantly similar to those it experienced in the past, and respond accordingly. This knowledge is characterized as emergent knowledge, and is constituted by the organization of the network, rather than the content of any individual node in the network. A person working within such a network, on perceiving, being immersed in, or, again, recognizing, knowledge in the network, thereby acquires similar (but personal) knowledge in the self.
Or, to put the same point another way, a MOOC is a way of gathering people and having them interact, each from their own individual perspective or point of view, in such a way that the structure of the interactions produces new knowledge, that is, knowledge that was not present in any of the individual communications, but is produced as a result of the totality of the communications, in such a way that participants can through participation and immersion in this environment develop in their selves new (and typically unexpected) knowledge relevant to the domain. A MOOC is a vehicle for learning, yes, but it acts this way primarily by being a vehicle for discovery and experience (and not, say, content transmission).
Not every MOOC will produce this outcome, nor will this form of learning be experienced by every participant (particularly those who sample and leave early), but to judge from the commentary, the experience of new and unexpected emergent knowledge is common and widespread.
Let me now turn to the four success factors that I argue tend to produce this result. My purpose here is not to describe each in any detail – I have done that elsewhere – but rather to consider each as a success factor, that is, to consider how each design element contributes to this result.
Autonomy – this is essentially the assertion that members of the network (in this case, participants) employ their own goals and objectives, judgments and assessment of success in the process of interaction with others. This is reflected, for example, in Dave Cormier’s assertion that “you determine what counts as success in a MOOC.” A collection of people working in a MOOC should be, for example, thought of as cooperating, rather than collaborating, because though they will exchange value and support each other, each will be pursuing his or her own objectives and depending on their own means and resources.
In our MOOC it was important that we not tell people what they ought to learn or what lessons they should take home from the presentations we made and the conversations we led. People perceive what they are looking for, and often only what they are looking for, and our well-intentioned attempts to guide their cognition could just as easily lead to participants missing the information most important to them. Similarly, we did not attempt to define how participants should interact with each other, but instead focused on supporting an environment that would be responsive to whatever means they chose for themselves.
Without autonomy, a MOOC is not able to adapt to the environment. Rather than each person allowing his or her unique perspective or point of view of the world to influence the course design or organization, they would instead reflect the perspective or world view of some organizer telling them what their objectives should be, what they should learn, and what counts as success. It is important that each person respond to the phenomena – the communications of others – in their own way, positively or negatively, in order to generate a unique structure or organization.
Diversity – this is a natural consequence of autonomy, and in addition a success factor in its own right. While we typically think of diversity in terms of language, ethnicity or culture, for us diversity applied to a broad range of criteria, including location and time zone, technology of choice, pedagogy, learning style, and more. Participants, for example, could experience the course as a series of lectures, and some did, but many skipped the experience. Others treated the course as project-based, creating artifacts and tangible products. Others viewed the course as conversation and community, focused on interaction with other participants.
The major concern with diversity so broadly construed is that some people might be seen as ‘doing it wrong’. We were, for example, criticized for offering lectures, because it did not follow good constructivist pedagogy; our response was that connectivism is not constructivism, and that it was up to those who preferred to learn through constructivist methods to do so, but not appropriate that they would require that all other participants learn in the same way. Additionally, it should be noted that it did not matter whether some particular pedagogical choice was in some respects a failure, since the perceptual recognition that it is a failure constitutes success in its own right.
Without diversity, it is not possible to contemplate the possibility of a network having different states, or different types of organization. A collection of entities that is not diverse is inert, or worse, overly reactive, in that a change in one becomes a change in all. In a computer, we expect each bit of memory to contain different values of one or zero over time than others, for otherwise, our computer could do nothing more than blink off and on and off again. Any sort of complexity requires diversity, and any sort of learning requires complexity.
Openness – this is the idea that the boundaries of the network are porous and that the contents of the network are fluid. In practical terms, it means that participants of the course are free to enroll or to leave as they wish, and to move in and out of course activities equally freely (I once remarked to ALT that what made my talk a success was not the fact that they were all there, but the fact that they could all leave – and hadn’t). Openness also applies to the content of the course, and here the idea is that we want to encourage participants not only to share content they received from the course with each other (and outside the course), but also to bring into the course content they obtained from elsewhere.
Openness is necessary because – as the saying goes – you cannot see with your eyes closed. An a priori condition for the possibility of perception is openness to perceptual input. Learning requires perception, not only of the thing, but also of its opposite. If we were not open to the perception of evil, we would not be able to define good. If we are not open to the possibility of failure, we are not able to achieve success. We obtain these experiences through openness, by being open to other ideas, other cultures, other technologies, other people. The free flow of people and information through a MOOC is as important as the organization of the people therein.
An interesting side-effect of openness is that there is no clear line dividing those who are in the course and those who are not. The course resembles not a solid sphere but rather a cluster of more or less loosely associated participants (and resources, and ideas). In a connectivist course, for example, lurkers are seen as playing an equally important and valuable role as active participants. Off-topic discussions are not distractions but are rather seen as valuable outcomes. As members of the Bar Camp and unconference movement would say, the people who are there are the right people, and the outcome of the event was the right outcome.
Interactivity – through the years I have used various terms for this fourth element, including ‘connectedness’ and ‘interactivity’, but none of them suits exactly what is meant by this concept. It is not simply that members of the network are connected with each other, and that interaction takes place through these connections. It is rather the idea that new learning occurs as a result of this connectedness and interactivity; it emerges from the network as a whole, rather than being transmitted or distributed by one or a few more powerful members.
Another way to understand this property is to see it as the stipulation that the graph of network interactions or connections is not a power law distribution. In a power law distribution, one or a few members receive most of the connections, creating what I’ve called the ‘big spike’, and each of the majority has only a few connections, resulting in what many people have called ‘the long tail’. This formation commonly occurs in dynamic networks, the result of what Barabasi identified as preferential attachment: newcomers to the network tend to link to those people who are already popular, resulting in their disproportional growth in popularity.
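The growth process Barabasi describes is easy to simulate. The sketch below (node counts and the random seed are arbitrary) grows a network in which each newcomer links to one existing node with probability proportional to that node's current degree; the result is precisely the big spike and long tail described above.

```python
# Sketch of preferential attachment: newcomers link to already-popular
# nodes, producing a few hubs and a long tail of barely-connected nodes.
# The network size and seed are illustrative, not from the text.
import random

def grow_network(n_nodes, seed=42):
    """Grow a network one node at a time; each new node attaches to an
    existing node chosen with probability proportional to its degree."""
    rng = random.Random(seed)
    degrees = {0: 1, 1: 1}   # start with two connected nodes
    targets = [0, 1]         # node i appears in this list degrees[i] times
    for new in range(2, n_nodes):
        hub = rng.choice(targets)   # degree-proportional choice
        degrees[new] = 1
        degrees[hub] += 1
        targets.extend([new, hub])
    return degrees

degrees = grow_network(2000)
spike = max(degrees.values())
tail = sum(1 for d in degrees.values() if d == 1)
print(spike, tail)  # a few heavily connected hubs; most nodes have one link
```

Running this, the vast majority of nodes end up with a single connection while a handful accumulate dozens – the ‘big spike’ that, as argued below, makes such networks brittle rather than resilient.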
Networks characterized by a big spike and long tail are not responsive to their environment, and can over-react to small stimuli, resulting in cascade failure and eventual network death. A more balanced (and dare I say, egalitarian) distribution of connectivity gives the network resilience, and the influence from one perspective cannot become disproportional simply because it came from an influential node. Each signal (each idea, each resource) must face not one challenge but many challenges as it is propagated, person to person, through the network.
To turn, then, to the actual measurement of quality in a MOOC: it is necessary in the first instance to point out what ought not be taken into account – not because these elements are unimportant (they are important) – but because these elements are not relevant to the evaluation of a MOOC as a MOOC.
Paramount among these are evaluations of the quality of the course materials used in the course, the sort of evaluation that might be provided, say, by a peer review process or learning resources review process, such as might be undertaken by a project such as MERLOT. These evaluations examine the resources created for the MOOC or (in fewer instances, if any) the materials shared among each other by participants in the MOOC, and assess such criteria as clarity, accuracy, usability, or engagement. Similar (or slightly varying) criteria are used to evaluate other aspects of courses, such as the facilities, the instructors, and the students themselves.
Such evaluations miss the point for several reasons:
- an evaluation of the parts isn’t the same as an evaluation of the whole. A strong course can be created out of arguably inferior, even defective, materials, if the course is organized appropriately (or, as Hemingway might say, the secret to writing is to create a perfect image out of banal and even defective sentences).
- even in cases where the parts are important, it is not often the case that better quality results in better outcomes; even a resource that is only average will suffice when the alternative is nothing at all, or as I once tweeted, what we usually need is not someone who is an expert, just someone who knows.
- similarly, what counts as quality in one context will be perceived as a weakness in others; an explanation that is complete and accurate may be incomprehensible to a beginner.
- and most importantly, the learning that happens in a MOOC is not a consequence of the learning materials, or even the instruction; it is a consequence of an immersion in an interactive community, and will result from what emerges from that interaction.
Yes, we can evaluate based on some banal criteria – the website was always down, the text was too scrambled to read, the video was in Farsi – but these, insofar as they render the MOOC less successful, can be traced as failures of one or more of the success criteria described in the previous section.
The evaluation of each of the four criteria can be mapped against elements of the course, and then checked off like a counter. For example, we could list the fifty-five resources employed in the course, and count the number of resources that are free and open (in the sense I described above). But this is in a sense misleading; it makes a course that depends on a key closed resource seem to be 98 percent open, while at the same time it makes a course that had one participant post a lot of Amazon links (to books, which you must buy) seem like it was 50 percent closed. Neither estimation would be correct, but numbers know no context.
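The arithmetic behind those misleading percentages can be made explicit in a small sketch; the `openness_score` function and the resource lists are hypothetical stand-ins for the two courses described above:

```python
def openness_score(resources):
    """Naive checklist metric: the fraction of resources marked open."""
    open_count = sum(1 for r in resources if r["open"])
    return open_count / len(resources)

# Course A: 55 instructor-selected resources, but the single closed one
# is the key text the whole course depends on.
course_a = [{"open": True}] * 54 + [{"open": False}]

# Course B: 50 open resources, plus 50 paid Amazon book links posted by
# one enthusiastic participant.
course_b = [{"open": True}] * 50 + [{"open": False}] * 50

print(f"{openness_score(course_a):.0%}")  # 98% — looks almost fully open
print(f"{openness_score(course_b):.0%}")  # 50% — looks half closed
```

The counter reports Course A as nearly open and Course B as half closed, inverting the reality on the ground: the number captures neither the centrality of the one closed resource nor the marginality of the fifty links.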
Properties like autonomy, diversity, openness and interactivity are not properly discerned by counting, but by being recognized. In this way they are a lot like other properties, like freedom, love and obscenity. A variety of factors – not just number, but context, placement, relevance and salience – come into play (that is why we need neural networks (i.e., people) to perceive them, and can’t simply use machines to count them).
That said, there is a purpose to checklists and rubrics, and that is to ensure that there is nothing omitted from consideration. Even experts depend on checklists, and they are critical in environments such as hospitals and airplanes. As mentioned previously, we see what we expect to see, and checklists remind us what to expect to see.
At this point it would be reasonable to countenance a variety of features of MOOCs, and assess each for autonomy, openness, diversity and interactivity. For example, consider the questions as posed to each of the following elements of a MOOC:
- content selected by the instructor (is it open? Is it diverse? etc.)
- the online platform used by participants
- the authoring environment(s) used by participants
- communication of daily news and announcements
- guest speakers and interviews
The difficulty with such a checklist is that it can easily become endless. And while posing these questions can be useful when selecting technology or when designing the course, they become less useful as an evaluation rubric after the fact.
So, a suggestion: think of the course as a language, and the course design (in all its aspects) therefore as an expression in that language. This can be applied as broadly or as narrowly as one wishes, and for the present purpose, can be used to frame an assessment of the quality of an entire MOOC in a single pass.
In consideration of the use of digital artifacts as language (for example, ‘speaking in LOLcats’), we can identify the different dimensions of literacy. Based on work in language and linguistics over the last century, I have identified six major dimensions of literacy: syntax, semantics, pragmatics, cognition, context and change.
It is important to understand that these are distinct from different types of literacy. For example, there has been a great deal of attention paid recently to ‘digital literacy’, along with numerical literacy (or ‘numeracy’) and traditional language-based literacy. We can imagine many more types of literacy: performance, simulation, appropriation, and more. There’s emotional literacy, financial literacy, and social literacy. Each of these (according to my account) constitutes in its own way the learning of a language. Each of these languages has its own literacy, and literacy in that language may be defined across the six dimensions.
Indeed, I have commented in the past, and it is relevant to point out now, that the act of learning a discipline – a trade, for example, or a science, or a skill – is more like the learning of a language than it is like learning a set of facts. Yes, there is an element of memory, but the bulk of expertise in a language – or a trade, science or skill – isn’t in knowing the parts, but in fluency and recognition, culminating in the (almost) intuitive understanding (‘expertise’, as Dreyfus and Dreyfus would argue). This sort of fluency is acquired by immersion in a language-speaking community (of which a MOOC is a characteristic example) and described by the six elements of literacy listed above.
An evaluation of the quality of a MOOC, therefore, after we have passed beyond the gross characteristics of being massive, open, online, and a course, is an assessment of the resulting course as a network and from a linguistic perspective. Now again, this is a rubric, not a checklist. It is not intended to define a MOOC as ‘49% successful’ on the basis of that percentage of boxes being checked. It is an aid, used to assist a person who is already fluent in MOOC design (or at least, in the domain or discipline being studied) recognize the quality (or lack of quality) of a MOOC.
This rubric thus consists of a set of 24 elements: each of the four success criteria, across each of the six dimensions of literacy. Some of these will be more difficult to comprehend than others, and each will have to be considered at some length before anything like a common understanding is achieved, but the checklist serves as a starting point, and the hard empirical work can now begin.
So, for example, when I think back to the CCK08 course, and the other MOOCs we designed, one of the questions I could ask (among the 24) is ‘openness-syntax’. Openness is the quality I described above, and the question here is how well it applied to the forms, rules, regularities, patterns and operations in the course. This, in turn, leads to basic questions like: could URLs be shared? Is the login form accessible? Are there hidden or unstated regulations or criteria? This list, clearly, would be different for each course, because each course consists of a different set of forms, rules, regularities, patterns and operations. It’s not a question of whether this is the right set of rules or regularities, or whether one set of rules is better than another (that’s like asking whether Spanish or Portuguese is the superior language). It’s a question of whether the language of the course can be learned.
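The shape of this rubric can be sketched as a simple data structure. This is an illustration only: the dictionary representation and the wording of the sample prompts are my own, not a canonical encoding of the rubric:

```python
from itertools import product

# The four success criteria and six dimensions of literacy named in the text.
CRITERIA = ["autonomy", "diversity", "openness", "interactivity"]
DIMENSIONS = ["syntax", "semantics", "pragmatics", "cognition", "context", "change"]

# The rubric is the cross product: 24 criterion-dimension cells, each
# holding the open-ended questions an evaluator would pose for one course.
rubric = {(c, d): [] for c, d in product(CRITERIA, DIMENSIONS)}

# Sample prompts for the 'openness-syntax' cell discussed above; as noted,
# the actual list would differ for each course.
rubric[("openness", "syntax")] = [
    "Could URLs be shared?",
    "Is the login form accessible?",
    "Are there hidden or unstated regulations or criteria?",
]

print(len(rubric))  # 24 cells in all
```

Note that the cells hold questions, not scores: the structure is an aide-mémoire for a fluent evaluator, not a tally sheet, which is consistent with the rubric-not-checklist point above.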
So there are 23 other sets of questions, each equally important, and this is neither the place to describe them in detail nor even to attempt to enumerate them (and they are more productively considered as separate and individual cases, rather than as a set).
To conclude, I will add some caveats.
The discipline of education is as a rule overly fond of taxonomies and distinctions. The taxonomies and distinctions offered in this discussion are the least important aspect of the discussion. In all cases, the taxonomies have been developed in order to enable inferential work to be performed. It doesn’t matter whether we divide the properties of successful networks into ‘autonomy’, ‘diversity’, etc., or whether we focus on learning rules (Hebbian, Back-propagation, Boltzmann) or whatever; what matters is that the design principles of MOOCs are those that reliably result in successful networks, where success itself is a matter of empirical observation, convention and use. The same holds with respect to the elements of literacy.
And similarly, with respect to this presentation, it is not the content of what is asserted here, it is the fact of the assertion and the manner of the investigation, which should be taken to serve as a model or demonstration of thinking about quality in MOOCs, and not a definitive statement of it.
 Cited here: https://plus.google.com/102352099876644260792/posts/isymGxiZ1Ly