
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

Broadcasting your breakfast: why TikTokers obsess over morning routines
Rachel Signer, The Guardian, 2023/02/20



I commented on the 3goodthings hashtag on Mastodon recently: "I love the way different forms and memes emerge and twist and turn on social media." I've looked at this in my paper Hacking Memes and my presentation Speaking in LOLcats. I think that if we understand thought and cognition as growing and recognizing patterns, rather than constructing formal or linguistic structures, we're much closer to a more comprehensive understanding of what it actually is to be knowledgeable and intelligent. And with this in mind I turn to the morning routine, as depicted by TikTokers. "What I found both fascinated and confused me," writes Rachel Signer. "One video after another, creators were slowly and carefully preparing hot drinks – frothy matcha, or pod coffee poured over ice into large, cylindrical jugs, chased with almond milk... They made smoothies featuring fresh fruit and all sorts of powdered supplements. In some cases, they went on 'mental health walks'. And they documented it all." Signer examines the videos as examples of symbols or emblems, of brand loyalty and commodification, or perhaps community-building, but you can't really put labels on it, in my view. It's all just patterns in the signal - we do it naturally, pick it up perceptually, and pass it along.

Web: [Direct Link] [This Post]


Some ways for generative AI to transform the world
Bryan Alexander, 2023/02/20



While I think a lot of people will appreciate this set of predictions, and while I think they're not inaccurate, I think that in order to really understand what AI will do in the future we have to get outside the tech and politics bubble (and especially the US-focused bubble). For example, Bryan Alexander predicts, "More GAI applications start to appear, increasingly specialized or marked by economic, political, and cultural identities." OK, this could be true, but the range of possibilities is not nearly exhausted by economic, political or cultural identities; there will be as many types of AIs as there are personalities, and the sorts of personalities that are important will probably not map to what contemporary news media thinks is important. Similarly, the "round of new media cultural reformations" he predicts includes "what constitutes real creativity, how to restructure copyright, what freedom of speech means, authorship, journalism, information overload, storytelling and art expectations and forms," all issues that matter today, but that will become meaningless in a world where AI does the creating and comprehending and our priorities include finding purpose in life, expressing values, and exploring the depths of experience. Related: ChatGPT and the end of high school English.

Web: [Direct Link] [This Post]


Fair Use: Training Generative AI - Creative Commons
Stephen Wolfson, Creative Commons, 2023/02/20



This article from Creative Commons draws what I believe is the correct conclusion regarding the use of content by artificial intelligence: "this type of use for learning purposes, even at scale by AI, constitutes fair use, and that there are ways outside of litigation that can offer authors other ways to control the use of their works in datasets." If you're not sure this is the right conclusion, consider the following reasoning: if this is not fair use, then many of the creative forms of fair use undertaken by people today would be disallowed as well.

Web: [Direct Link] [This Post]


To understand language models, we must separate 'language' from 'thought'
Ben Dickson, TechTalks - Technology solving problems... and creating new ones, 2023/02/20



This article summarizes a paper titled 'Dissociating language and thought in large language models: a cognitive perspective.' According to the paper, "to understand the power and limits of large language models (LLM), we must separate 'formal' from 'functional' linguistic competence." We are presented with what the researchers call two common fallacies related to language and thought: first, that an entity that is good at language must also be good at thinking; and second, that a model that is bad at thinking must also be bad at language.

As you can see, you commit one or the other fallacy if you think language is thought. If nothing else, the large language models used in AI such as ChatGPT should be seen as proving that. And "Although it is tempting to move the goalposts and focus on what these models are still unable to do… we argue that the remarkable advances in LLMs' ability to capture various linguistic phenomena should not be overlooked," write the authors. Related: Stephen Wolfram explains what a large language model does.


Web: [Direct Link] [This Post]


How to Use the Google Authenticator App With Twitter
Richard Byrne, Free Technology for Teachers, 2023/02/20



You may as well get used to using authenticator apps (like Google's or Microsoft's) because they will be used more and more in the future. As Richard Byrne notes, Twitter will soon start charging for two-factor authentication using text messages, so you'll want to stop doing that. The article contains a video showing how to use the Google Authenticator app with Twitter; I tested it by following the instructions and it worked. If you're not happy using one of the giants, you can try one of these open source alternative authenticator apps.
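(Aside: these authenticator apps all implement the same open standard, TOTP (RFC 6238), which is why any of them works with Twitter. As a minimal sketch of what the app does every thirty seconds, here is the algorithm in plain Python using only the standard library; the base32 secret below is the RFC's published test secret, not a real account key.)

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password.

    secret_b32 is the base32-encoded shared secret an authenticator
    app stores when you scan a site's QR code.
    """
    # Decode the shared secret, padding base32 to a multiple of 8 chars.
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at a digest-derived offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890",
# time = 59 seconds, 8 digits -> "94287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))
```

Both your phone and the server run this same computation independently, which is why the codes match without any network connection on the phone's side.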

Web: [Direct Link] [This Post]


Can an artificial intelligence chatbot be the author of a scholarly article?
Ju Yoen Lee, Science Editing, 2023/02/20



According to this article (6 page PDF), the answer is: "The current AI chatbot cannot be the author of an academic paper, not only from the perspective of copyright law but also from the perspective of research ethics." Related: WonkHE asks whether AI can support academic research and argues, "the temptation for academics and researchers – including PhD researchers – to use these technologies to assist with writing research papers or even generate complete articles will only increase as the technology improves." Also: a Victoria brewery has used an AI app to create a new beer recipe. Also: to express condolences after the recent shooting at Michigan State University, the Peabody Office of Equity, Diversity and Inclusion at Vanderbilt University sent a message that had been written using ChatGPT. Also: Maha Bali on the automation of care ("using AI, it's just an expression of 'I don't care enough about this to spend time on it'").

Web: [Direct Link] [This Post]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you:

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2023 Stephen Downes Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.