OLDaily

The understanding debate
Herbert Roitblat, TechTalks, 2022/01/31


There's all kinds of goodness in this post; be sure to read through to enjoy the subtlety of the last sentence. The surface-level question revolves around whether an AI can understand what a sentence means when we use it as a prompt or example for processes such as automated writing. The examples (taken from one of my favourite authors) suggest it can't, of course. But what of human readers? Can we say they understand? How much of the meaning do we bring to a piece of text that may never have been there before? When we read meaning into a text, how can we say it isn't we who are making the mistake?

Web: [Direct Link] [This Post]


Is handwriting better than typing for note taking? Surprisingly, it's not!
Donald Clark, Donald Clark Plan B, 2022/01/31


Of course it's not, but no small number of people have cited the one study that suggested it was. What does matter is the practice of taking notes. "It would seem that writing notes in your own words, and studying your notes, matter more than the methods used to write your notes." I would imagine simply taking notes is effective to some degree, but that stimulating the same thoughts you had when you took the notes is more effective still. And - here I speculate - I would imagine notes are even more effective in a context where they are consulted to complete a task or solve a problem. Anyhow, having written some 35,000 notes on this web page alone, I can attest to their utility.

Web: [Direct Link] [This Post]


AI in Online-Learning Research: Visualizing and Interpreting the Journal Publications from 1997 to 2019
Gwo-Jen Hwang, Yun-Fang Tu, Kai-Yu Tang, International Review of Research in Open and Distributed Learning, 2022/01/31


This article (27 page PDF) summarizes work that has been done in relation to AI, analytics and e-learning over the last 25 years. Two things stand out to me. First, it seems to me that most of the research still regards the field as solving a search problem, that is, a matter of finding and recommending resources and activities. Second, we can clearly see a shift over those 25 years away from intelligent tutoring systems and toward natural language analysis tools using machine learning or neural networks, for use in things like MOOCs and discussion forums. The fields I have called generative and deontic analytics don't even appear in this study, but I would argue they will be very significant over the next decades, moving analytics (finally!) beyond solving search problems. Part of an IRRODL special issue on AI, E-Learning and Online Curriculum.

Web: [Direct Link] [This Post]


Why Wordle Works, According to Desmos Lesson Developers
Dan Meyer, Mathworlds, 2022/01/31


Wordle is the craze of the week (and I confess, I'm a daily player). You have six tries to guess a five-letter word. After each try, you learn whether each letter is in the word and in the right place, in the word but in the wrong place, or not in the word at all. Oh, and there's only one game per day. That's the whole of the game, and yes, it's brilliant. This article looks at the reasons why the game succeeds, and mentions especially the way players build on failure. There's also a lot of commentary out there on strategies for success. I've seen a number of people focus on vowels, because they appear most frequently in words. But I think patterns and recognition matter more, and have focused on the most common consonants, with a fair bit of success.
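
The feedback rule itself is simple enough to sketch in a few lines of code. Here's a minimal TypeScript version of per-guess scoring, assuming the standard convention that exact-position matches are claimed first so duplicate letters aren't over-counted; the function and type names are mine, not from the article.

    // Score one Wordle guess against the answer.
    type Mark = "correct" | "present" | "absent";

    function scoreGuess(answer: string, guess: string): Mark[] {
      const marks: Mark[] = Array(guess.length).fill("absent");
      const remaining: Record<string, number> = {};

      // First pass: right letter, right place ("green"); tally the
      // unmatched answer letters for the second pass.
      for (let i = 0; i < guess.length; i++) {
        if (guess[i] === answer[i]) {
          marks[i] = "correct";
        } else {
          remaining[answer[i]] = (remaining[answer[i]] ?? 0) + 1;
        }
      }

      // Second pass: right letter, wrong place ("yellow"), consuming
      // the leftover letters so duplicates aren't double-counted.
      for (let i = 0; i < guess.length; i++) {
        if (marks[i] === "absent" && (remaining[guess[i]] ?? 0) > 0) {
          marks[i] = "present";
          remaining[guess[i]] -= 1;
        }
      }
      return marks;
    }

    // scoreGuess("crane", "carat")
    //   -> ["correct", "present", "present", "absent", "absent"]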

Web: [Direct Link] [This Post]


How The BBC Does Ethical Personalisation
Pernille Tranberg, DataEthics, 2022/01/31


This article is too brief to be really satisfactory, but it marks what to me is a signal event: the first actual use I've seen 'in the wild' of Tim Berners-Lee's Social Linked Data (SoLiD) project. SoLiD is described here as "an open-source Personal Data Store (PDS) developed by the company Inrupt." Here's how it worked in practice: "In their first demo, a user created a new data pod on the BBC system and then linked their BBC and Spotify user accounts to pull in some of their media play histories... At no time does the BBC get to see the user’s Spotify data, and Spotify does not receive a copy of the users BBC data." That's not only how we want to do ethical data management, I would say, it's also how we want to do ethical AI.
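
For the curious, here's roughly what that pattern looks like from the developer's side. This is a hedged sketch using Inrupt's open-source @inrupt/solid-client library, assuming the user has already logged in; the profile URL and the vcard name field are illustrative stand-ins, not the BBC's actual data model.

    // Read a value from the user's own Solid pod. The key point is
    // that the app fetches from the pod directly, so neither service
    // receives a copy of the other's data.
    import { getSolidDataset, getThing, getStringNoLocale } from "@inrupt/solid-client";
    import { fetch } from "@inrupt/solid-client-authn-browser"; // authenticated fetch, post-login

    async function readDisplayName(profileUrl: string): Promise<string | null> {
      const dataset = await getSolidDataset(profileUrl, { fetch }); // fetch the profile document
      const profile = getThing(dataset, profileUrl + "#me");        // the user's profile "thing"
      if (profile === null) return null;
      // vcard:fn ("formatted name") is a common predicate on Solid profiles.
      return getStringNoLocale(profile, "http://www.w3.org/2006/vcard/ns#fn");
    }

    // e.g. readDisplayName("https://alice.example-pod.com/profile/card")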

Web: [Direct Link] [This Post]


Get Rid Off The Green Buttons. It’s Pure Manipulation
Pernille Tranberg, DataEthics, 2022/01/31


The use of highlighted green buttons to signify preferred options (preferred, that is, from the point of view of the website) is known generically as a 'dark pattern', whereby the website is trying to manipulate its users. This article argues against their use. "Even if ‘dark patterns’ were to go against the law, it is unethical to manipulate your users by design." And Pernille Tranberg has a point, because dark patterns work against the choices users have, choices often enshrined in law (such as the right to decline cookies without penalty). In learning technology, however, designers are expected to manipulate learners so that they follow the most effective path in their learning. So the ethics change. But how much do they change? When does manipulation in learning design become harmful or unethical?

Web: [Direct Link] [This Post]


This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2022 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.