
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

Presentation
Connectivism: What is it? How to apply it.
Stephen Downes, Jan 26, 2024, Microlearning Series, Online via Zoom


Connectivism is a relatively new learning theory that suggests students should combine thoughts, theories, and general information in a useful manner. It identifies four key principles for learning: autonomy, connectedness, diversity, and openness. This presentation will outline the elements of Connectivism and model how it can be applied in an online learning setting.

 

[Link] [Slides] [Audio] [Video]


George Carlin Estate Sues Creators of AI-Generated Comedy Special in Key Lawsuit Over Stars' Likenesses
Winston Cho, The Hollywood Reporter, 2024/01/26



As this article reports, "The legal battle stems from an hourlong special, titled George Carlin: I'm Glad I'm Dead, that was released Jan. 9 on the YouTube channel of Dudesy, a podcast hosted by Will Sasso and Chad Kultgen." It features an ersatz George Carlin talking about "modern topics such as the prevalence of reality TV, streaming services and AI." While Carlin obviously never talked about these, the Carlin Estate argues, "the AI program that created the special ingested five decades of Carlin's original stand-up routines, which are owned by the comedian's estate, as training materials, 'thereby making unauthorized copies' of the copyrighted works." However, "Given that AI models are largely black boxes, there's no definitive proof that can be offered to prove that a specific work was used in a chatbot's creation." Via George Station. Related: Do Counterfeit Digital People Threaten the Cognitive Elite?

Web: [Direct Link] [This Post]


AI Is Already Better Than You
mike cook, cohost, 2024/01/26



There are many, many criticisms of AI based on the errors it makes. It can't draw the right number of fingers, for example. But as mike cook writes, "making quality the central point of your argument against AI systems is dangerous if that's not really your issue with it." History is full of examples of people who predicted AI couldn't do something, only to later be proven wrong. Moreover, "pulling up AI technology on the basis of quality is risky, and it assumes that the people investing in this technology care about quality in the first place." Companies that aren't willing to pay human creators a decent wage aren't going to worry about the quality of the AI they use to replace them.

Web: [Direct Link] [This Post]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you:

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2024 Stephen Downes. Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.