Stephen Downes

Knowledge, Learning, Community


Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada, specializing in new instructional media and personal learning technology. His degrees are in Philosophy, with a focus on epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Stephen Downes, stephen@downes.ca, Casselman, Canada

Inside Netflix’s Distributed Counter: Scalable, Accurate, and Real-Time Counting at Global Scale

This is a really interesting engineering challenge: how do you count when the people doing the counting are scattered around the world? For Netflix, it's a practical problem: each time someone views a Netflix video, Netflix wants to increment the 'views' counter by one. But how do you do that reliably, given all the failure modes that could make the actual count inaccurate? This article describes Netflix's recently published "deep dive into their Distributed Counter Abstraction." Idempotency - the idea that making the same request once or several times should have exactly the same effect - plays a key role. It allows remote sites to retry failed requests, for example, without double counting. This may seem to some like a pretty trivial problem, but as we enter the era of distributed computing, answering questions like this will be crucial.
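
To make the retry point concrete, here's a minimal sketch in Python (my own illustration, not Netflix's code; the class and names are invented) of how an idempotency key keeps a retried increment from being counted twice:

    import uuid

    class ViewCounter:
        """Toy stand-in for a distributed counter service; the 'applied'
        set plays the role of the service's idempotency store."""
        def __init__(self):
            self.count = 0
            self.applied = set()  # idempotency keys already processed

        def increment(self, idempotency_key):
            # A retry carries the same key, so the increment is applied
            # at most once no matter how many times it is sent.
            if idempotency_key in self.applied:
                return self.count  # duplicate request: no double count
            self.applied.add(idempotency_key)
            self.count += 1
            return self.count

    counter = ViewCounter()
    key = str(uuid.uuid4())   # client generates one key per view event
    counter.increment(key)    # first attempt
    counter.increment(key)    # timed-out request retried: still counts once
    assert counter.count == 1

In the real system the idempotency store itself has to be distributed and durable, which is where most of the engineering in the article lives.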

Today: 132 Total: 131 Eran Stiller, InfoQ, 2024/12/12 [Direct Link]
The Google Willow thing

The news here is Google's fault-tolerance milestone in its Willow quantum computing chip; the (paywalled) Nature publication puts an official stamp on a preprint that has been on arXiv since August. "Scientifically," writes Scott Aaronson, "the headline result is that, as they increase the size of their surface code, from 3×3 to 5×5 to 7×7, Google finds that their encoded logical qubit stays alive for longer rather than shorter." But the most interesting bit to me is this: "it would also take ~10^25 years for a classical computer to directly verify the quantum computer's results." Hence, "all validation of Google's new supremacy experiment is indirect."
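
As a back-of-the-envelope illustration of what being below threshold buys you (my own sketch in Python; the starting error rate is an assumed value, and the suppression factor of 2 is only roughly the neighbourhood Google reports): if every time the code distance d grows by 2 the logical error rate drops by a factor Λ > 1, errors shrink exponentially with code size, which is why the bigger logical qubit lives longer rather than shorter.

    # Illustrative only: eps3 (the distance-3 logical error rate per cycle)
    # and lam (the suppression factor Lambda) are assumed values.
    def logical_error_rate(eps3, lam, d):
        # Extrapolate the rate at odd distance d from distance 3,
        # assuming a factor-of-lam suppression for every step d -> d + 2.
        steps = (d - 3) // 2
        return eps3 / (lam ** steps)

    eps3, lam = 3e-3, 2.0
    for d in (3, 5, 7, 9, 11):
        print(f"d={d}: {logical_error_rate(eps3, lam, d):.2e}")

Below threshold, scaling up helps; above it, the same growth would make things worse.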

Today: 197 Total: 197 Scott Aaronson, Shtetl-Optimized, 2024/12/11 [Direct Link]
The Meaning of 2025 | HESA

There are many places where I disagree with Alex Usher, but I think we're on the same page on this one. First, "we've spent 80 years building a system of higher education that is simply more expensive to run than the public is willing to support." Second, "Think about the consequences of reducing those cross-subsidies within universities at the exact moment when advances in technology are opening up huge potential advances in energy, materials science, and health." The cost of not supporting the system is huge. Usher argues that government is not coming to save the system; probably true. But I counter with the obvious: industry isn't going to save the system either. And so we agree that the higher education sector "is going to have to work out solutions on its own." I've spent a lifetime working on digital technologies for learning to try to help make that happen. But like the light bulb in the old joke, the sector is going to have to want to change.

Today: 178 Total: 178 Alex Usher, HESA, 2024/12/11 [Direct Link]
These technologies are complex…. – Taccle AI

I'm sort of over the whole 'education versus AI' debate that Graham Attwell describes here. There are only so many times people like Ben Williamson can make the same point, and only so quickly AI companies can roll out new models to counter the sceptics. I'm especially tired of the debate being framed as 'education versus giant corporate capitalism', partially because education has been part of giant corporate capitalism for as long as I can remember, and partially because AI was developed, in the first place, in educational institutions. None of these us-versus-them framings applies cleanly to either AI or educational institutions. And that's why I'm over it.

Today: 202 Total: 202 Graham Attwell, Taccle AI, 2024/12/11 [Direct Link]
Century-Scale Storage

This is a nice (though long) article by Maxwell Neely-Cohen asking the basic question, "If you had to store something for 100 years, how would you do it?" He runs through all the likely answers, including dispersal and decentralized storage, before reaching the inevitable conclusion that "the success of century-scale storage comes down to the same thing that storage and preservation of any duration does: maintenance." Neely-Cohen also warns that we might be entering a 'dark age' where most of what we produce is lost to the future. "On the internet, Alexandria burns daily." Via Molly White, who gives us a long thread of relevant quotes from the article.
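
That 'maintenance' conclusion has a very concrete software form in digital preservation: the scheduled fixity check. Here's a minimal sketch in Python (my own illustration; the manifest format and paths are invented) of the audit loop an archive has to keep running, decade after decade:

    import hashlib
    import json
    import pathlib

    def sha256(path):
        # Hash in chunks so large archival files don't exhaust memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def audit(archive_dir, manifest_path):
        # Compare every file against a manifest of known-good checksums.
        manifest = json.loads(pathlib.Path(manifest_path).read_text())
        for name, expected in manifest.items():
            if sha256(pathlib.Path(archive_dir) / name) != expected:
                print(f"bit rot detected: {name}")  # repair from a replica

The code is trivial; the hard part, as the article says, is the institutional commitment to keep running it, and acting on what it finds, for a hundred years.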

Today: 197 Total: 197 Maxwell Neely-Cohen, Harvard Law School, 2024/12/11 [Direct Link]
Striking a Balance: Navigating the Ethical Dilemmas of AI in Higher Education

According to this article, "Responsible AI integration in higher education requires striking a balance between riding the wave of AI advancements and upholding ethical principles." I don't think the idea of a 'balance' is at all the right way to think about this. Ethics and AI aren't some sort of opposites you have to 'balance'. And 'balance' itself is not a viable ethical principle; I can think of many things I would not like to 'balance' with something else. This higher-level criticism also applies to many of the individual points. For example, the very first suggestion is to "create generative AI training materials to support faculty, staff, and students aimed toward combatting the digital divide." Clearly, an effort to 'balance' is being made here. But the balance makes no sense; how exactly are these 'generative AI training materials' supposed to 'combat the digital divide'?

Today: 208 Total: 208 Katalin Wargo, Brier Anderson, EDUCAUSE Review, 2024/12/11 [Direct Link]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Dec 11, 2024 6:37 p.m.

Creative Commons License.