
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

Author Talks: What is the key to unlocking digital transformation?
Eric Lamarre, Kate Smaje, Rodney Zemmel, McKinsey, 2023/05/24


I want to link to this mostly because it sort of touches on the topic of my talk tomorrow. Mostly, as I read through it, it's self-serving nonsense. Like this: "When I look back on what the most important piece of work I have done with that team is, it was the initial trip to Silicon Valley with the CEO and the C-suite, spending two or three days visiting companies, learning a new language, and aligning on the art of the possible."

Web: [Direct Link] [This Post]


Some large-scale decisions we can make about AI in 2023
Bryan Alexander, 2023/05/24



I found this to be a very odd set of 'decisions' to be made, and I found the consequences of those 'decisions' to be overstated. Begin with data size: do we continue with gigantic data sets? If yes, then we get 'black boxes'; if no, we get 'democratized AI'. None of this makes any sense. Today's data sets are tiny compared to what AI will be working with when equipped with sensors and tools. AIs are already black boxes. And making data sets smaller simply disables AIs; it doesn't make them 'democratic' (I mean, why would you think that?). As for the copyrighted content: the AI is not gleaning content from these sources, it's scanning for word order. Is that what was copyrighted? You can't complain that AI makes simple factual errors and at the same time accuse it of plagiarizing our Journals of Record. In any case, who cares what the courts in one country say? There are more than 200 countries in the world. Then there's the suggestion that there may be 'significant opposition to AI'. Sure, maybe, in some rarified 'creator' communities. The big issue is whether we will have any income at all to live on when all this is said and done - but that's not mentioned at all. People have to stop reading the popular press on these issues; they're not trying to inform, they're trying to stir up emotions (and sell ads).

Web: [Direct Link] [This Post]


Governance of superintelligence
Sam Altman, Greg Brockman, Ilya Sutskever, OpenAI, 2023/05/24



Sam Altman (the CEO of OpenAI) and company are no doubt right when they say "that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today's largest corporations." We could argue about what he means by 'productive' in this context, but more to the point: if AI is something that should be governed, why on earth would we entrust this governance to the corporations that created (some of) it or "major governments around the world"? This announcement should be seen for what it is, in my view: a bid to entrench the major players in their leadership by requiring any new entrants "to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc."

Web: [Direct Link] [This Post]


News execs fear ‘end of our business model’ from AI unless publishers ‘get control’ of their IP
Bron Maher, PressGazette, 2023/05/24



There are a few things happening in this story. One is the assertion that ChatGPT must have been trained using newspaper content. "Jon Slade, chief commercial officer at the FT, said 'there's very good evidence' that his paper's archive had been used to train large language models." I'd be interested to see that evidence; I'm quite sure it's not straightforward to train an AI on paywalled content. The second thing going on is that it's the End Of The World as we know it if major publishers aren't paid. "If somebody can type a question, or write stories, using our content or mixing it with some low-quality content, it's a risk for the political debate, political society." I can see higher education responding this way as well. And I can see the argument; surely we want some measure of quality information informing our AIs. But at any price? And from a small privileged elite?

Web: [Direct Link] [This Post]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you.

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2023 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.