
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

Nobody knows how many jobs will "be automated"
Noah Smith, Noahpinion, 2023/04/12



"Basically, these researchers went through a database of job descriptions and subjectively decided which ones they thought could be replaced by computers," writes Noah Smith. "To be perfectly blunt, this seems like a pretty poor method for assessing the automatability of jobs." Agreed. He continues, "But 'AI will increase labor productivity while forcing a small number of people to find new jobs' is not the kind of story that goes viral on social media, while '300 million jobs will be lost' definitely is that kind of story." Via SAIL.

Web: [Direct Link] [This Post]


State of the Commons 2022
Creative Commons, 2023/04/12



The Creative Commons annual report has been released. Of most interest to us is the report on CC in education (which has a separate report, here). I also note with interest that CC "dove in to open journalism in 2022", with a report published here. I am of course suspicious of the connection here with the Google News Initiative. I also wish I knew more about how specific foundations are contributing (what is FileCoin supporting, for example? Or 20 Million Minds?) and what outcomes they expect from that. It's interesting that CC earned $6.6 million from foundations and $233K from the CC Certificate program - long term, I would expect they'd like to see these numbers reversed.

Web: [Direct Link] [This Post]


Understanding is not an act but a Labor
Pontydysgu EU, 2023/04/12



This is in essence another a priori argument from Shannon Vallor telling us what AI cannot do: "Understanding is beyond GPT-3's reach... It's a sustained project that we carry out daily, as we build, repair and strengthen the ever-shifting bonds of sense that anchor us to the others, things, times and places, that constitute a world." This is, as Alberto Romero says, a much better framing than the typical "AI models can't understand because they don't have a world model" or "because they can't access the meaning behind the form of the words." And while understanding is a labour (we could have a long discussion about that), it is not in principle beyond the reach of an AI. Out of the box, GPT-3 had no memory of previous interactions, though modifications already exist that give it one. And its lack of memory is a design feature, not a limitation: instead of updating itself, it relies on few-shot learning, applying what is essentially a 'snapshot' of a long training history as a pre-defined algorithm. Vallor's argument is like saying 'this book cannot learn' - true, but irrelevant, given what produced it.
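The statelessness being described can be sketched in a few lines of Python. This is an illustrative mock-up, not any real API: the function names and prompt format are invented for the example. The point is that in few-shot prompting, the "memory" is just the examples re-supplied on every call; nothing persists between calls.

```python
def build_few_shot_prompt(examples, query):
    """Assemble a stateless few-shot prompt.

    The model sees the worked examples fresh on every call;
    no state carries over from one call to the next. (Hypothetical
    helper for illustration, not a real library function.)
    """
    lines = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

# Two in-context examples, then the actual query.
examples = [("2 + 2", "4"), ("3 + 5", "8")]
prompt = build_few_shot_prompt(examples, "7 + 6")
```

Calling the function twice with different queries produces two independent prompts; the second call has no trace of the first, which is the design choice the post refers to.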

Web: [Direct Link] [This Post]


We publish six to eight short posts every weekday linking to the best, most interesting, and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whichever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2023 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.