Stephen Downes

Knowledge, Learning, Community

Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Stephen Downes, stephen@downes.ca, Casselman, Canada

Perceptual Learning

Perceptual learning is defined as "any relatively permanent and consistent change in the perception of a stimulus array, following practice or experience with this array." For example, if you're a beginner, all coffee tastes like coffee, but if you try a large number of different coffees, you learn to spot the difference between African (light, fruity) and South American (woody) coffees. This is an updated encyclopedia article on the topic of perceptual learning, and well worth reading if you want to understand how experience and sensation are closely related. Image: Huang Changbing.

Kevin Connolly, Stanford Encyclopedia of Philosophy, 2024/09/20 [Direct Link]
This App May Indicate Something Is Deeply Wrong with Us - Daily Nous

You can only sign up on an Apple iPhone, so I haven't tried it yet. But it's intriguing: it's like Twitter, except that as soon as you sign up you have a million followers, all of them AI. Justin Weinberg is not impressed. "Really it's just sad. If this app becomes successful, what does that tell us? That we're not good at being there for other persons, such that many of them feel they have to turn to this? That we don't care if there are other persons there for us, since we can have substitutes like this? Both?" I get that we're social beings and that this is the essence of human existence and all that, but there's a lot about society that doesn't work for a lot of people, and if something like this fills the gap for them, it sounds good to me.

Justin Weinberg, Daily Nous, 2024/09/20 [Direct Link]
How can we make the best possible use of large language models for a smarter and more inclusive society?

A short article describing and referencing a paper by 28 authors from major scientific institutions. It says, in part, "If LLMs are to support rather than undermine collective intelligence, the technical details of the models must be disclosed, and monitoring mechanisms must be implemented." The actual article is published behind a paywall in Nature, which is a classic case of not understanding the point you've just made in your paper.

Max Planck Institute, 2024/09/20 [Direct Link]
Ms Rachel - Toddler Learning Videos

This YouTube channel was mentioned on CTV today as one of the most popular sites out there for parents to teach young children. "Ms Rachel uses techniques recommended by speech therapists and early childhood experts to help children learn important milestones and preschool skills!" What I notice is that the videos are long - an hour or even two hours long! They're a mixture of basic language learning and popular children's songs. I started playing them this morning and couldn't turn them off!

YouTube, 2024/09/20 [Direct Link]
Releasing Common Corpus: the largest public domain dataset for training LLMs

One thing I love about Mastodon is that I get to sit in on conversations like this one between Clint Lalonde and Alan Levine on open data sets used to train large language models. It's prompted by Lalonde asking whether there are other open data sets like Common Corpus (not to be confused with Common Crawl). This leads to an article about The Pile, an 885GB collection of documents aggregating 22 datasets including Wikipedia, arXiv, and more. There's Semantic Scholar, which appears to be based on scientific literature, but also includes a vague reference to 'web data'. There's also the Open Language Model (OLMo).
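If you want to poke at one of these corpora yourself, the Hugging Face `datasets` library lets you sample a few documents without downloading the whole thing. Here's a minimal sketch, assuming Common Corpus is published under a dataset ID like "PleIAs/common_corpus" with a "text" field; check the release post for the actual identifiers.

```python
# Minimal sketch: stream a few documents from Common Corpus.
# The dataset ID and the "text" field name are assumptions;
# see the Hugging Face release for the actual identifiers.
from datasets import load_dataset

corpus = load_dataset("PleIAs/common_corpus", split="train", streaming=True)

# Streaming yields one document dict at a time, so the full
# corpus never lands on disk.
for i, doc in enumerate(corpus):
    print(doc.get("text", "")[:200])  # first 200 characters of each document
    if i >= 2:
        break
```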

Pierre-Carl Langlais, Hugging Face, 2024/09/20 [Direct Link]
ASBA Releases Artificial Intelligence Policy Guidance for K-12 Education

It's another set of principles to toss onto the ever-growing pile of AI policy statements. This one resembles most of the others, particularly with a very standard list of principles (illustrated). One interesting newish thing: "Ensure there is no copyright infringement when prompting AI." The principle here is that you should not use copyrighted material as content for a prompt. That's an interesting requirement, and one I'll have to think about.

Alberta School Boards Association, 2024/09/19 [Direct Link]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Sept 20, 2024 8:37 p.m.

Creative Commons License.