Online Learning and MOOCs: Visions and Pathways
Stephen Downes, Nov 07, 2017, China International Distance Education Conference, Beijing, China
This talk, delivered in Beijing, China, traces the history of online learning from learning objects and LMSs through to open educational resources and MOOCs, then describes some trends for the future. See also here and here for the conference websites.
It took a bit of time to read this report (226 page PDF) but it's worth the effort, especially if you're in the position of designing a national education technology initiative. "This study on national ICT/education agencies seeks to provide some insights that may help answer two lead questions: 1. What do we know about the form, functions and characteristics of such organizations? 2. What are some key considerations and lessons related to their establishment, operation, and oversight?" The introductory section is probably the most useful (especially the part describing the "key issues for policymakers"). The recommendations are probably too vague to be helpful, but the wealth of detail in the eleven cases from countries around the world (including two initiatives that were eventually shut down, Australia's EdNA (by Gerald White & Lesley Parker) and Britain's Becta (by Gavin Dykes)) makes up for it. Canada gets a paragraph about Schoolnet in the closing 'Other Initiatives' chapter, though given the focus on the role of national agencies in technology deployment I would have thought CANARIE might also rate a mention.
"Increased visibility and wide dissemination of research are the most common motivations behind both publishing and funding OA books," according to this report, and "both authors and funders also cite ethical motivations, stressing the fact that publicly-funded research should be available to everyone and calling for equal access to knowledge." The research in this Springer Nature report bears that out. "On average, there are just under 30,000 chapter downloads per OA book within the first year of publication, which is 7 times more than for the average non-OA book." So, yes, open publication means more access than closed. But we shouldn't be lulled into thinking it's a numbers thing. I don't count the downloads of my books (I can tell you how many were downloaded so far this month, and that's about it). I do check to make sure that people can access my books without constraint. It's the quality of access, not the quantity, that counts.
The type of accessibility being discussed is that of "the degree to which the scholarly content itself can actually be understood by the generalist reader." A piece of work might be more or less accessible for a number of reasons, including the quality of the writing, the use of subject-specific jargon, the use of formats inaccessible to people with disabilities, the inherent complexity of the material, and the inherent coherence of the writing itself. The author's main point is that "Not all complexity can be reduced to simplicity without a real sacrifice of meaning." That's probably true. But it's not an either-or proposition. I think all audiences can understand anything to a certain degree. The purpose of quality writing is to extend that degree to the greatest extent possible.
This is a demo of the xAPI video profile. It's an extension of xAPI that provides activity reports on the use of video. "Progress is tracked as actual percentage of the video watched. Completion is calculated only based on entire video being consumed. Video resumes from where the user left." You can log in to the reports interface and view the activity logs for yourself (Email: email@example.com Password: demo). Here's the reference implementation on Github.
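To give a sense of what the profile's activity reports are built on, here is a minimal sketch (not the reference implementation) of the kind of statement an xAPI video player might emit. The verb, activity-type and extension IRIs follow the published xAPI video profile; the actor email, video IRI and numeric values are placeholders for illustration.

```python
# Sketch of an xAPI video profile "played" statement. The w3id.org IRIs
# come from the xAPI video profile; all other values are hypothetical.
import json
import uuid
from datetime import datetime, timezone

VIDEO_PROFILE = "https://w3id.org/xapi/video"

def video_played_statement(actor_email, video_iri, video_name,
                           time_seconds, progress):
    """Build a 'played' statement; progress is the 0.0-1.0 fraction watched."""
    return {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": {"objectType": "Agent", "mbox": f"mailto:{actor_email}"},
        "verb": {
            "id": f"{VIDEO_PROFILE}/verbs/played",
            "display": {"en-US": "played"},
        },
        "object": {
            "id": video_iri,
            "definition": {
                "type": f"{VIDEO_PROFILE}/activity-type/video",
                "name": {"en-US": video_name},
            },
        },
        "result": {
            "extensions": {
                # playback position in the video, in seconds
                f"{VIDEO_PROFILE}/extensions/time": time_seconds,
                # fraction of the video watched so far
                f"{VIDEO_PROFILE}/extensions/progress": progress,
            }
        },
        "context": {
            # declare that this statement follows the video profile
            "contextActivities": {"category": [{"id": VIDEO_PROFILE}]}
        },
    }

stmt = video_played_statement("learner@example.com",
                              "https://example.com/videos/intro",
                              "Intro video", 12.5, 0.25)
print(json.dumps(stmt, indent=2))
```

A player would POST statements like this to an LRS as the learner plays, pauses, seeks and completes the video; the reports interface then aggregates the progress extension into the percentage-watched figures described above.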
The original is of course Darrell Huff's 1954 classic book How To Lie With Statistics (74 page PDF). The examples are all dated, of course, and the dollar amounts should be multiplied by 10 to make sense in today's world. But the advice is still spot on. The Lifehacker article is a bit of an update, with a more modern take, but it all just goes to show that news media and marketers have been manipulating the truth long before social media ever came along. What has happened in the last two decades is that deception has been democratized. It has always been a problem; maybe now society will finally deal with it.
This story is interesting on several levels. The first is the surface, where the not-so-secret method is revealed: "It involves three steps: a trigger, an action and a reward. A push notification, such as a message that someone has commented on your Facebook photo, is a trigger; opening the app is the action; and the reward could be a 'like' or a 'share' of a message you posted." OLDaily works on the same model, except it rewards you with insights instead of likes. The second level is the pedagogical, and the question of why educators find influencing people to be so difficult when advertisers do it with ease. And third, with a note of irony, the number of people posting "I just turn off the internet" in the comments following this internet article. I wonder whether they're rewarded with reads.
Donald Clark takes the easy sceptic's route with this article. I found myself disagreeing with most of it. For example, when he says "human fears and expectations that demand the presence of humans in the workplace" I think he has forgotten about the similar arguments around self-serve gas stations and automated tellers. Similarly, when he says "automation will not happen where the investment cost is higher than hiring human labour," I think he misses the fact, first, that automation is usually pretty cheap, and second, that it is often much more reliable than human labour. But there is a good point here: "What matters is not necessarily the crude measure of 'jobs' being automated but rather 'activities' being automated." A job is a collection of activities, some of which will be automated, and some not. But (and this is key): unless we radically reform income inequality, there will be few jobs. Rich people don't employ poor people; poor people employ each other, and this is only possible when they have the means to do so.
I've covered this before, but this article has a much wider range of examples. "Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level." I want to look Google and Facebook and Twitter in the eye and ask them, "What are you doing? Why are you doing this? What's the matter with you?" But of course, you can't.
The prediction may be plus or minus a few years, but the outcome is undeniable. "The tipping point will come when 20 to 30 percent of vehicles are fully autonomous. Countries will look at the accident statistics and figure out that human drivers are causing 99.9 percent of the accidents. Of course, there will be a transition period. Everyone will have five years to get their car off the road or sell it for scrap or trade it on a module." What does this have to do with learning technology? Nothing. Everything.
Yes, students should learn critical thinking at an early age. But they should learn critical thinking, not some pop version of what passes for it. And that's what concerns me about this article and the sources it cites. This and so many other resources are far more concerned than they should be with tone, source, motive and how something makes you feel, and there's almost nothing at all about clarity, evidence and reasoning (save the awkwardly explained ARE framework in this publication).
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2017 Stephen Downes. Contact: firstname.lastname@example.org. This work is licensed under a Creative Commons License.