This is a really interesting engineering challenge: how do you count when the people doing the counting are scattered around the world? For Netflix, it's a practical problem: each time someone views a Netflix video, Netflix wants to increment the 'views' counter by one. But how do you do that without the many failure modes that might make the actual count inaccurate? This article describes Netflix's recently published "deep dive into their Distributed Counter Abstraction." Idempotency - the idea that making the same request more than once has the same effect as making it once - plays a key role. It allows remote sites to retry failed requests, for example, without double counting. This may seem to some like a pretty trivial problem, but as we enter the era of distributed computing, answering questions like this will be crucial.
Eran Stiller, InfoQ, 2024/12/12 [Direct Link]
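To illustrate the role idempotency plays, here is a minimal sketch in Python (my own illustration, not Netflix's actual design; the class and method names are hypothetical) of a counter that accepts a client-generated event ID with each increment, so a retried request is counted only once:

import uuid

class IdempotentCounter:
    """Counts events, ignoring duplicate deliveries of the same event ID."""

    def __init__(self):
        self.count = 0
        self._seen_ids = set()  # a real service would keep this in durable, replicated storage

    def increment(self, event_id: str) -> int:
        """Apply the increment once per event_id; retrying with the same ID has no further effect."""
        if event_id not in self._seen_ids:
            self._seen_ids.add(event_id)
            self.count += 1
        return self.count

# A client generates one ID per view and can safely retry on timeout:
counter = IdempotentCounter()
view_event = str(uuid.uuid4())
counter.increment(view_event)  # original request
counter.increment(view_event)  # network retry of the same request
print(counter.count)           # prints 1 - the retry did not double count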
Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.
The news here is Google's fault-tolerance milestone in its Willow quantum computing chip; the result, now published (paywalled) in Nature, first appeared as an arXiv preprint in August. "Scientifically," writes Scott Aaronson, "the headline result is that, as they increase the size of their surface code, from 3×3 to 5×5 to 7×7, Google finds that their encoded logical qubit stays alive for longer rather than shorter." But the most interesting bit to me is this: "it would also take ~10^25 years for a classical computer to directly verify the quantum computer's results." Hence, "all validation of Google's new supremacy experiment is indirect."
Scott Aaronson, Shtetl-Optimized, 2024/12/11 [Direct Link]

There are many places where I disagree with Alex Usher, but I think we're on the same page on this one. First, "we've spent 80 years building a system of higher education that is simply more expensive to run than the public is willing to support." Second, "Think about the consequences of reducing those cross-subsidies within universities at the exact moment when advances in technology are opening up huge potential advances in energy, materials science, and health." The cost of not supporting the system is huge. Usher argues that government is not coming to save the system. Probably true. But I counter with the obvious: industry isn't going to save the system either. And so we agree that the higher education sector "is going to have to work out solutions on its own." I've spent a lifetime working on digital technologies for learning to try to help make that happen. But like a light bulb, the sector is going to have to want to change.
Alex Usher, HESA, 2024/12/11 [Direct Link]

I'm sort of over the whole 'education versus AI' debate that Graham Attwell describes here. There are only so many times people like Ben Williamson can make the same point, and only so much speed with which AI companies can roll out new models to counter the sceptics. I'm especially tired of the debate being framed as 'education versus giant corporate capitalism', partially because education has been part of giant corporate capitalism for as long as I can remember, and partially because AI was developed, in the first place, in educational institutions. None of the us-versus-them debates can be properly applied to either AI or educational institutions. And that's why I'm over it.
Graham Attwell, Taccle AI, 2024/12/11 [Direct Link]

This is a nice (though long) article by Maxwell Neely-Cohen asking the basic question, "If you had to store something for 100 years, how would you do it?" He runs through all the likely answers, including dispersal and decentralized storage, before reaching the inevitable conclusion that "the success of century-scale storage comes down to the same thing that storage and preservation of any duration does: maintenance." Neely-Cohen also warns that we might be entering a 'dark age' where most of what we produce is lost to the future. "On the internet, Alexandria burns daily." Via Molly White, who gives us a long thread of relevant quotes from the article.
Maxwell Neely-Cohen, Harvard Law School, 2024/12/11 [Direct Link]

According to this article, "Responsible AI integration in higher education requires striking a balance between riding the wave of AI advancements and upholding ethical principles." I don't think the idea of a 'balance' is at all the right way to think of this. Ethics and AI aren't some sort of opposites you have to 'balance'. And 'balance' itself is not a viable ethical principle; I can think of many things I would not like to 'balance' with something else. This higher-level criticism also applies to many of the individual points. For example, the very first suggestion is to "create generative AI training materials to support faculty, staff, and students aimed toward combatting the digital divide." Clearly, an effort to 'balance' is being made here. But the balance here makes no sense; how exactly are these 'generative AI training materials' supposed to 'combat the digital divide'?
Katalin Wargo, Brier Anderson, EDUCAUSE Review, 2024/12/11 [Direct Link]