Stephen Downes

Knowledge, Learning, Community

Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.


Questioning Questions

As someone who has previously taught philosophy, I found this post interesting. It's based on a practice a professor adopted: students would be challenged to raise an argument they thought irrefutable, and the professor would play "devil's advocate". Unsurprisingly, some topics were off limits, as the professor did not want to be in the position of advocating some seriously bad positions. But how are these 'no-go' areas decided? "What I consider acceptable to discuss or off limits reflects my own values and blind spots, and the privilege of being the professor means that I get to decide what questions are fair game in a context where reasonable people may very well disagree with my decisions."

Brynn Welch, Blog of the APA, 2024/10/16 [Direct Link]
Information Literacy Beyond Fact-Checking

Heidi Yarger seeks to "spend some time unpacking the concept of a literacy framework and what might be missing." The argument here is that information literacy ought to be about more than just fact checking. "Rather than considering whether these stories are true or false, we will closely read these personal accounts to consider what we might learn about values, identity, and power when we challenge ourselves to look at these resources with compassionate analysis." To this point I'm fine, though I personally draw the line when the expectation is that I treat such stories as true.

Heidi Yarger, ACRLog, 2024/10/16 [Direct Link]
Recapping OpenAI's Education Forum

This is a summary of OpenAI's recent Education Forum. Marc Watkins highlights "Leah Belsky acknowledging what many of us in education had known for nearly two years—the majority of the active weekly users of ChatGPT are students." The article leans into the question, "what is OpenAI doing to support education," where education is what colleges and universities like Harvard (which purchased access for its students) provide. This approach to me feels artificial, though George Veletsianos says ed tech innovations "haven't had the systemwide transformational impacts that their proponents promised... that's how the edtech industry operates regardless of evidence and history." Of course, ed tech has transformed education; you'd have to be blind to miss that. AI will as well. Maybe just don't listen to some of the high profile proponents (or critics) without deep backgrounds in the field. What are the people who are building (not selling) the stuff saying? What are the discussion lists and informal discussions saying (if you're reading them at all), as opposed to the people in the glossy tech media and education magazines?

Marc Watkins, Rhetorica, 2024/10/15 [Direct Link]
Google Go Nuclear

According to this post, "Google made the headlines today, signing a groundbreaking deal to power its data centres with six or seven mini-nuclear reactors, known as Small Modular Reactors (SMRs)." According to the SMR website, "Small modular reactors (SMRs) are advanced nuclear reactors that have a power capacity of up to 300 MW(e) per unit, which is about one-third of the generating capacity of traditional nuclear power reactors." Google plans to have the first of these running by 2030 (which seems like a long time to me, but I'm not a nuclear expert). I personally think this is a good idea, for a variety of reasons (I could easily imagine major universities building their own SMRs to meet their power needs).

Donald Clark, Donald Clark Plan B, 2024/10/15 [Direct Link]
Critical AI Literacy is Not Enough: Introducing Care Literacy, Equity Literacy & Teaching Philosophies. A Slide Deck

The core of this post is the slide presentation in the middle (easily viewed in your browser), though it ends rather abruptly, leaving me hanging. Maha Bali's focus is on literacy, but there's a good deal of philosophy and ethics in her presentation. Indeed, at this point I'd say we're beyond talking about literacy (and hence the motivation for my own approach to critical literacies), but I do agree with her that an ethical approach to AI is going to involve learning about more than just AI (which is in general the weakness of a lot of 'AI ethics' approaches), and ultimately we need to be looking at an ethics of care as well.

Maha Bali, Reflecting Allowed, 2024/10/15 [Direct Link]
RAG Techniques

This item is from this week's AI Tidbits newsletter. Retrieval Augmented Generation (RAG) is the latest trend in generative AI (I had someone drop it as an acronym on a panel recently and took the time to spell it out for people). Basically it's the idea of seeding your AI engine with documents to produce context-specific results. There are various ways to do this, and that's what this website is about. I haven't tried any of the code, but the repository is in itself a nicely structured and clear presentation of the different approaches. If you worked through all the code it could be quite an effective crash course in RAG. Here's the Diamant AI newsletter.
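
For readers new to the acronym, here is a minimal sketch of the basic RAG loop. It is written for illustration only and is not taken from Diamant's repository: the bag-of-words "embedding" is a toy stand-in for a real embedding model, and the assembled prompt would normally be handed to a language model rather than printed.

# A minimal sketch of the Retrieval Augmented Generation (RAG) idea:
# score a small document collection against a query, retrieve the most
# similar passages, and prepend them as context to the generator prompt.
# The embedding here is a toy bag-of-words vector, not a real model.

from collections import Counter
from math import sqrt

documents = [
    "RAG seeds a language model with retrieved documents as context.",
    "Retrieval uses embeddings to find passages similar to the query.",
    "The retrieved passages are prepended to the prompt before generation.",
]

def embed(text: str) -> Counter:
    """Toy embedding: a lower-cased bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble a context-augmented prompt for the generator."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    # A real pipeline would now send this prompt to a language model.
    print(build_prompt("What is retrieval augmented generation?"))

The variations the repository walks through (chunking strategies, reranking, query rewriting and so on) are refinements of this same retrieve-then-generate loop.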

Nir Diamant, GitHub, 2024/10/15 [Direct Link]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Oct 15, 2024 5:37 p.m.

Creative Commons License.