This post takes about half its length to get to the point, but when it finally reaches it, it's a good one. "For absolute clarity," writes Jim Dickinson, "students aren’t 'paying' for learning outcomes. After all, some of them might fail, and some might not learn anything." No, in fact, what they're paying for is everything else: "material matters are things like the main course content, components like placements or field trips, how and where the course will be delivered.... it also includes non-course-related information that students might consider important – the sort of stuff universities promote to students on open days." Now Dickinson's point is to note that it's all of this that has been dropped in online Covid education, and students quite rightly are demanding a refund. But my interest is in something more fundamental: colleges and universities have spent decades (centuries, even) convincing people they need all of these things to obtain learning outcomes. But what if people realize after Covid that they don't need them at all? I think that universities, like politicians, are going to depend on people having short memories.
This is a terrific article introducing readers to the concept of the 'codebook'. In a nutshell, a codebook is the list of all the concepts involved in the research project and how they're measured. A 'code' is "the category, tag, or label we apply to each mention." Each code is defined, and its operationalization describes how it shows up in practice, and therefore, how it can be measured. The codebook also includes specific examples of codes as concrete manifestations of operationalizations. This (as the author acknowledges) is a dry and tedious subject, but it's absolutely critical to research. In my own work, I don't have a codebook per se but I have something better: a content management system (pictured) that I built myself that has evolved over the years.
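To make the structure concrete, here's a minimal sketch of what one codebook entry might look like as a data structure. The field names and the example concept ("peer_feedback") are my own invention for illustration, not from the article:

```python
# A hypothetical codebook entry: each code (category/tag/label) carries a
# definition, an operationalization (how it's recognized and measured),
# and concrete examples of mentions that manifest it.
codebook = {
    "peer_feedback": {
        "definition": "Any mention of students commenting on each other's work",
        "operationalization": "Count explicit references to giving or "
                              "receiving comments from classmates",
        "examples": [
            "My partner suggested I rewrite the intro",
            "We traded drafts before submitting",
        ],
    }
}

# Coding is then just applying the label to a matching mention.
mention = "We traded drafts before submitting"
applied = [code for code, entry in codebook.items()
           if mention in entry["examples"]]
print(applied)  # → ['peer_feedback']
```

In real qualitative analysis the matching is done by a human reader (or software like NVivo), not by string lookup; the point here is only the shape of the record.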
We encounter the concept of assessment frequently in this newsletter so it's useful to return to the basics from time to time. Let me be clear: I hate the fake, hackneyed and overused trope of a teacher being confused by the question, "What evidence do you have of your students’ learning?" No teacher is caught off-guard by that. It's like a hockey player being surprised by the question "How do you know you're winning the game?" But the rest of the article has some good starting points, for example, "assess what is essential, because you can't assess everything", and "design the assessment before you design the learning", and "strive for real-world authenticity". Now there are exceptions to these and the rest of the points raised, but as I say, these are the basics, and a good place to begin.
In the future all jobs will be teaching jobs. That's not Udacity's prediction, it's mine. But it follows from what Jennifer Shalamanov writes in this post. After surveying some of the reasons AI will replace human jobs, she adds a section explaining why AI can't replace some jobs. She offers four reasons: creativity, human connection, complexity, and "someone needs to program the AI". The first three are just wrong. Outside of family and friends (i.e., not jobs) the human connection is vastly overrated. Computers are already creative, and better able to attune their creativity to an attentive audience. Computers thrive in the strategic and complex in a way that humans (who prefer simple explanations for things) cannot. That leaves only programming. But you don't 'program' AI. It isn't programmed, it learns. So where does that leave us? We will all be teachers. The core existential question for the 21st century is: what will we teach them?
Like Element, Signal is a peer-to-peer encrypted messaging app. It also includes blockchain features in order to enable things like financial transactions. There's a strong case for this sort of app: "People who want more control over their data and how it's used — and who want to exist outside the gaze of tech companies." But as in the case of Element, there's risk. People can use it to send objectionable content. More, people could use its currency exchange feature for illegal purposes. And that's what's causing the dispute among Signal staff, between those who want to address problems as they come up, and those who think the platform should take a strong stance against bad actors from the outset.
The subhead in Protocol, where I first saw this item, was "who watches the watchers?" It's a good question. Element is an increasingly popular end-to-end encrypted messenger and collaboration app. Like a number of new tools these days, it is decentralized, which means that people manage and host their own instances. There's no need to depend on a central service like Google or Apple or Facebook. But this also creates a potential point of debate: people can use Element for illegal or offensive content, and there's no way to moderate this content. So what companies can do is prevent distribution of the application by means of their control of the platform (this is especially true of mobile phones, where companies discourage and in some cases prohibit any unauthorized applications from being run on the hardware). Is this a reasonable response? Would you ban telephones because some phone calls are objectionable? Would you ban a browser because some web sites are hateful? And more: these decentralized services are in direct competition with centralized social network sites and services hosted by Google, Apple and Facebook. At a certain point, banning a client like Element begins to look overly self-serving. Image: Twitter.
An examination of student preference for traditional didactic or chunking teaching strategies in an online learning environment
Brendan Humphries, Damien Clark, Research in Learning Technology, 2021/02/03
What's the best way to present video content online? Is it to present one long video, or to chunk it into a number of shorter videos? That's what this paper (12-page PDF) studies. "The major findings indicated a significant preference for chunk-style videos between 3 and 17 min duration when compared to traditional long-view didactic lecture materials." The sample size is reasonably large if unrepresentative - 1268 university students across two academic years. But is this the sort of question that can be answered by looking at averages? I would think that it varies a lot by the content, context and viewer. I don't think people would want to watch Titanic (3 hours 14 minutes) in twenty 10-minute chunks.
I'm including this item today because it's really, really weird. The writer tries to achieve some sort of 2020s-snarky dialect, and fails utterly, in my view. It feels like a man trying to write like a hip and with-it woman, or maybe a woman writing the way she thinks a man would want her to write. Then there are the graphics.... oh, the graphics. A bare-shouldered, open-mouthed woman juxtaposed against a blue banana? But just when you thought the weirdness was over, a jarring change, as it introduces us to a half-dozen "friends" who work in the field and write things like "the most imperative is to mobilise deliberate upskilling" or "start focusing on business outcomes instead of learning outcomes" or "we need to show radical accountability." If they offered anything more than a LinkedIn profile (they don't) I think these friends would probably be worth following, but this group as a whole needs to rethink how it presents itself to the world.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.
Copyright 2021 Stephen Downes. Contact: firstname.lastname@example.org. This work is licensed under a Creative Commons License.