One of the more salient stories this year has revolved around the phenomenon of fake news and (via fake news) managing and massaging public perceptions. The gist of this article is that, while social media manipulation is a problem that cannot be taken lightly, it would be misleading to attribute the U.S. election results (and the Brexit vote, etc.) to social media. Instead, write the authors of this report, we should look at mainstream media. For example, "in just six days, The New York Times ran as many cover stories about Hillary Clinton’s emails as they did about all policy issues combined in the 69 days leading up to the election." I find it interesting that it is this same media that is now affixing responsibility for the outcome on social media, when the scale of the coverage in traditional media dwarfs that found through alternative sources. And finally, I attribute the election results to the voters (and I use the word 'attribute' rather than 'blame' when putting on my scientific or journalistic hat).
I don't think this paper really succeeded in its stated objective of defining massive open online courses. What we do get is a sense that there are many interpretations of the form, and that if you sample mostly the xMOOC form, you'll find that xMOOC properties (like instructor-centeredness) predominate. It's interesting for me to observe that as the research moves from primary sources and into secondary sources (and tertiary sources, and more) that the researchers' understanding changes. Now instead of having direct experience they are reporting on what the research says, and with no real constraints on what can be said in research, assertions are replicated and become fact.
The authors write, "The three main themes that emerged from this study were: the importance of online communication approaches, challenges and supports for online collaborative learning, and that care is at the core of online learner support" (note that the abstract expresses this quite differently). I include this paper here not so much to address these issues (though I certainly have my own opinions) but to ask readers to think about the methodology. The study is based on interviews with four higher education instructors. The authors assert "it was conducted through a post-Positivist paradigm and the findings are not intended to generalize," which is good. But why is this presented as 'research' rather than, say, 'argument' or 'perspectives'? The researchers knew what they were looking for at the start; "the interviews focused on care expressions in digital delivery settings made within each instructor case." I think there are arguments to be made for the three themes, and they are cogently assembled here, but it just seems misleading to represent them here as discovered through research. Read more articles from the special issue on the AERA Online Teaching and Learning SIG.
"I want to establish an on-line repository with OERs of primary and secondary education," wrote Panagiotis Stasinakis on a Creative Commons discussion list. "I am searching for a platform, an open-source platform, to install it in my private server and use it for the repository." The result was an interesting compendium of resouces, including:
Note that the annotations are from the posts on the discussion list, not from me.
According to this press release summarizing a talk at Online Educa Berlin, Pasi Sahlberg argued that education ministers in England, Australia and the United States are continuing to invest in the GERM (Global Educational Reform Movement) model, in spite of evidence that it doesn't work. According to Sahlberg, "unsuccessful education systems are characterised by a belief in competition, standardisation, de-professionalisation, test-based accountability and privatisation. The outstanding features of successful education systems, on the other hand, are cooperation, risk-taking and creativity, professionalism, trust-based responsibility ('not test-based accountability') and ensuring an 'equitable' public education for all."
I think this is an interesting idea, but the presentation is some of the worst I have ever seen. I've reproduced the basic standards in a post, here. In a nutshell, the Transparency and Openness Promotion Guidelines are intended to describe different levels of openness (disclosure, requirement and verification) regarding data sources, algorithms, and other factors (eight in all) related to scientific research. The documentation is broken down as best practices for funders, institutions and journals. There's a supposedly introductory article and the complete guidelines in a user-hostile content management system. They've been published, but you have to pay a subscription fee to see them.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2017 Stephen Downes Contact: firstname.lastname@example.org. This work is licensed under a Creative Commons License.