by Stephen Downes
Oct 07, 2016
One of the things proponents of internet media have long said is that people will read more than they ever have before. This was to allay fears on the part of older generations that too much screen time would make children illiterate. Now while it appears the older generation may have been speaking from experience, we see that the younger generation turns to text, not video, when learning about the news. "Younger adults are far more likely than older ones to opt for text, and most of that reading takes place on the web." The problem with video news is that you have to sit and wait for it. That's fine if you're in a receptive consuming mode, but if you're engaged and active online, you want the news now.
One of the longstanding criticisms of self-managed learning is that students are unable to generate the motivation or technique to accomplish their goals. This may be true in cases where traditional instruction is simply converted into online delivery, but well-designed instruction (as we have seen for decades in things like computer games) supports students quite well. This contention is confirmed by the present study, which evaluates the use of 'spoken tutorials' to teach the Java programming language. Researchers have been reporting on large-scale uses of this method for several years. In the current study, "the performance of college students who self-learned Java through the Spoken Tutorial method is found to be better than that of conventional learners." Audio cues and visual examples guide students through the tasks, where students actually perform the actions (for example, author lines of code) for themselves.
The Effects of Captioning Videos on Academic Achievement and Motivation: Reconsideration of Redundancy Principle in Instructional Videos
Muzaffer Ozdemir, Serkan Izmirli, Ozden Sahin-Izmirli, Educational Technology & Society, 2016/10/07
Cognitive load theory tells us that presenting the same message in different modalities reduces students' ability to learn. This is known as the 'redundancy principle'. But this paper (10-page PDF), released today, presents disconfirming evidence. "The findings indicated that, in contrast to the suggestion of the redundancy principle, motivation and achievement scores of students do not vary according to the instructional video type under investigation (captioned vs. non-captioned)."
I'm no fan of Paul A. Kirschner but I was curious as to what a guest post featuring an interview with him would look like (those of us who write blogs are quite familiar with the never-ending stream of 'guest posts' being offered by this or that source - they're always trying to promote something). It's not a bad interview, and it assures us that Kirschner's intentions are honorable. And it linked to this guest post by Kirschner and Mirjam Neelen, which in turn links to their blog, which was new to me. I've signed up, so now I'll be passing along things like this useful discussion on feedback as well as pondering the basis for things like this ad hominem attack on unnamed proponents of self-directed and self-regulated learning. Some things never change, I guess.
This is a video of a presentation by Genevieve Bell from Intel at O’Reilly’s AI Conference. D'Arcy Norman comments, "I don’t think of AI as trying to invent an artificial human, but it’s extremely important to think about the cultural, moral, racial, and gender biases that get baked into code through histories of projects." We are reminded of Microsoft's attempt to create a chatbot that went terribly wrong. It's a dilemma. If you want society to get better, your AIs have to do more than merely draw what they know from society. But 'guiding' these AIs then becomes a position of great responsibility, and who exactly is well-placed to take this on? Besides me, I mean.
'Deep Learning' is the use of neural networks to do smart things, like grade papers or make recommendations. This article addresses the "commoditization" of deep learning, that is, the trend toward making the data and algorithms available for free. That's why you can use an open source library like TensorFlow to do neat things with open data. It still takes some smarts, but it's getting easier. The point of this article, though, is that it still takes computing power - quite a lot of it - and that's what companies like Amazon and Google really want to sell you. And they can charge more for it if the complementary products - data and software - are free. And it gives them a market advantage, because while anyone can produce data or write algorithms, it takes a large enterprise with a lot of resources to set up data and computing centres. So that's the play.
This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.
Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.