The new Microsoft Edge browser - the first built on the Chromium browser engine - has been released and is available for general download. Features include "AAD support, Internet Explorer mode, 4K streaming, Dolby audio, inking in PDF, Microsoft Search in Bing integration, support for Chrome-based extensions, and more." What I want to know is how well ad-blocking works. Because it doesn't matter what the browser can do if I can't turn off the ads. Here's how to upgrade.
After the literature review, the bulk of this article is devoted to a big table titled "Summary of Instructional Activities for CoI." The seven big rows of the table echo Sorensen and Baylen’s final principles for online instruction, and each is subdivided into the three 'presences' from the Community of Inquiry (CoI) model. I see this mostly as a classification exercise, though I suppose if pressed we could read the rows and subdivisions as providing research support for the activities listed. But I'm left wondering, why these activities rather than others? Why is this an 'activity', for example? "Consider incorporating Web 2.0 applications in course activities, especially social software such as blogs, wikis, etc. (Richardson et al., 2009; Stephens & Roberts, 2017)." Has nothing changed since 2005?
This paper makes a promise it doesn't really deliver on, suggesting that "deep learning has evolved into developing ways to repurpose existing resources (that) can mitigate the expense of content development of future eLearning." Let's just mark that down as 'future technology'. The text of the paper, though, offers a useful summary of AI models, listing their relative strengths and weaknesses in learning applications (see the table on p. 193).
This is nice. The Global Open Data for Agriculture and Nutrition (GODAN) has published course materials from its Action Open Data Management MOOC as a gitbook (a gitbook is an open content authoring and versioning environment). The course "aims to strengthen the capacity of data producers and data consumers to manage and use open data in agriculture and nutrition (and) to raise awareness of different types of data formats and uses, and to highlight how important it is for data to be reliable, accessible and transparent." This is just one example of the huge volume of open educational resources out there that don't appear on (and aren't talked about by) institutional OER projects.
Clint Lalonde describes his thinking and efforts to convince people to use Mastodon, noting that it's hard to get a social network off the ground because of the network effect: the idea that networks become much more useful as the number of people using them grows, which also means they can appear less desirable in their early stages. I don't see Mastodon as an alternative to Twitter (which I continue to use as a broadcasting tool) but as an alternative to Facebook (in other words, as a place to form community and chat). Also, I will never overcome "the hesitation around 'using' people." I don't believe in using people. I am happy to contribute to community and to converse with friends, but the minute I start assessing a network based on 'what I can get out of it' my motivation and my relationship with the network are broken. That's why it doesn't matter to me whether people migrate or not.
Though “fictionalized and simulated for illustrative purposes only”, products like Samsung’s Neon are being called ‘artificial humans’, “a computationally created virtual being that looks and behaves like a real human, with the ability to show emotions and intelligence.” This article talks about the potential and also raises some (much-needed) scepticism. As the author writes, "If you’re going to give us artificial humans, then, by all means, do so. We have enough illustrations from SciFi to tide us over in the interim." If we extrapolate to the world of artificial teachers (and why not?) we should ask, do we want our AI teachers to have feelings and emotions? Does that help them become more sympathetic? Or does it just make them targets for abuse and manipulation? See also CNet.
CNN has picked up on a line of thought brought up in this Reuters story from a few days ago. The gist is that job interviews are being evaluated, at least in part, by artificial intelligence systems. The Reuters story focuses on the use of AI in Korea. "According to Korea Economic Research Institute (KERI), nearly a quarter of the top 131 corporations in the country currently use or plan to use AI in hiring." The CNN article highlights US-based companies like HireView, Yobs and Talview. What's significant is that they're not testing for knowledge. They're looking for character traits, like personality, attitude and ethics.
To be clear, the article is marketing for the Art of Education University, which describes itself as "a university built by art teachers, exclusively for art teachers." It offers access to online professional development resources for a monthly fee. I'm not endorsing them, just noting that they exist. Anyhow, the article specifically discusses offering "choice" in classes, which is what caught my eye. It addresses arguments against choice, for example, "(students) just don’t know what to do when they have full choice", and "you don’t teach any technical skills!" The answers are pretty good, and basically what I would say when asked similar questions.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2020 Stephen Downes. Contact: firstname.lastname@example.org. This work is licensed under a Creative Commons License.