I will admit that I find it very difficult to take lessons in ethics from people who have published them behind paywalls or teach them behind tuition barriers at elite universities. At a certain point, such ethics become self-justifying, defending in one way or another the privilege of the wealthy and powerful. So I'm sympathetic with this article, and with statements like "the assumption that most people won’t be interested in, or capable of, reading academic research is patronising." And like the author, I question the ethics of institutions that offer advancement to staff only if they publish in expensive academic journals. "Rather than focusing on career damage to those who can’t publish with an Elsevier title, we should focus on the opportunity cost in hundreds of lost careers in academia."
Commercial content companies really only have two ways to make money: selling advertising, or charging subscription fees. This revenue needs to cover the cost of content production, so it's in their interest to encourage users to create content for free. But they also need customers, which means they need high-profile (i.e., paid) content as well. It's a delicate balance between these two content sources, especially as the market shifts away from advertising (because, as this article says, "it turns out an uncluttered, ad-free reading experience really can make for a better internet"). No kidding. That's why I use Firefox with uBlock Origin - my internet experience is almost ad-free. Anyhow, this article talks about Twitter's acquisition of Scroll, a service where users pay for ad-free versions of news sites. But on the other side of the ledger there's Twitter's creation of Spaces, which are basically open audio forums (yes, a lot like Clubhouse). Can all of this be bundled into a single fee and single login? I think that's what Twitter is betting the company on.
I think I'd approach an article with this title very differently. This article looks at a set of considerations that arise during online learning "and outlines the actions an Instructor can take and the reasons to take them." The considerations include things like students, teaching, technology, and the like. It's a bit of a grab bag, but more, I think, it assumes that you already have a plan and just need to fine-tune it for online. By contrast, I'd approach it like this, beginning with the question "what do you want to do?" and then asking how you want to do it, what supports there are, etc. That doesn't mean this is a bad resource, especially as it links to a number of additional resources, just that it's structured in the wrong way to be 'an introduction'.
Over the last year or so the tenor of articles from this group of authors has shifted from a hard-core emphasis on instructivism, cognitive load and worked examples to a more wide-reaching and progressive set of practices. I think this has been a shift for the better, and we're seeing the result in this article, which looks at the benefits of drawing in learning (can we say 'constructionism', anyone?). They're following Richard Mayer and Logan Fiorella's 2015 book ‘Learning as a Generative Activity’ and in particular Mayer's Selecting, Organising, and Integrating (SOI) memory model. To me that reads a lot like the 'aggregate, remix, repurpose' approach we've been following since the days of our early MOOCs, but without the 'sharing' part (I'm sure they'll get to it). Now to be clear, I'm not saying that we invented this; we didn't, and there are versions of the same process that go back to the days of Seymour Papert and earlier. But it's nice to see these authors working on what they're now calling 'generative learning' and seeing how it relates to pattern recognition. Eventually we'll all be in sync.
It's funny how easy it is for Facebook to moderate content when it's motivated. For example, when Signal created an Instagram advertisement that told readers exactly why they were targeted, it was swiftly banned. This article is a new release from Signal describing the incident. "Apparently," says a follow-up article in Gizmodo, "Facebook wasn’t a fan of this sort of transparency into its system." There are some lessons here, I think.
This is an interesting article from a couple months ago describing work Geoffrey Hinton has been doing to try to reconcile successful methods used by artificial neural networks with how the brain actually learns. Longtime readers will be familiar with the two mechanisms discussed: Hebbian networks ("Neurons that fire together, wire together") and back-propagation. The problem with back-propagation, according to the article, is that "in a biological network, neurons see only the outputs of other neurons, not the synaptic weights or internal processes that shape that output." One major response to this problem comes in the form of recurrent neural networks ("that is, if neuron A activates neuron B, then neuron B in turn activates neuron A"). Toward this end, the article also discusses predictive networks and how different neural cells (and pyramidal neurons in particular) work to recognize and manage error.
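For readers who haven't seen the Hebbian rule written out, here's a minimal sketch of the "fire together, wire together" idea (the names and learning rate are illustrative, not from the article): the synaptic weight grows in proportion to the product of the two neurons' activations, using only information locally available at the synapse - which is exactly the property back-propagation lacks.

```python
def hebbian_update(weight, pre_activation, post_activation, learning_rate=0.1):
    """One Hebbian learning step: strengthen the connection in
    proportion to the product of pre- and post-synaptic activity.
    Note the update is purely local: no downstream weights or
    error signals are needed, unlike back-propagation."""
    return weight + learning_rate * pre_activation * post_activation

# Two neurons that fire together: the connection strengthens.
w = hebbian_update(0.5, pre_activation=1.0, post_activation=1.0)  # 0.5 -> 0.6

# If either neuron is silent, the weight is unchanged.
w2 = hebbian_update(0.5, pre_activation=1.0, post_activation=0.0)  # stays 0.5
```

The contrast with back-propagation is the point of the article: the update above needs nothing a real neuron couldn't plausibly have, whereas back-propagation requires each unit to know the synaptic weights and error gradients of units downstream.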
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2021 Stephen Downes Contact: email@example.com. This work is licensed under a Creative Commons License.