
OLDaily

Welcome to Online Learning Daily, your best source for news and commentary about learning technology, new media, and related topics.
100% human-authored

The False Promise of Chomskyism
Scott Aaronson, Shtetl-Optimized, 2023/03/10


This is a response to Noam Chomsky's article in the New York Times (paywalled, but don't bother) criticizing chatGPT. Scott Aaronson writes, "On deeper reflection, I probably don't need to spend emotional energy refuting people like Chomsky, who believe that Large Language Models are just a laughable fad rather than a step-change in how humans can and will use technology, any more than I would've needed to spend it refuting those who said the same about the World Wide Web in 1993." But there's a long comment thread that follows and the discussion is, if nothing else, entertaining.

To best understand Chomsky's criticism, it is helpful to understand where he is coming from. A towering figure in linguistics, and the author of Syntactic Structures and many other important works, Chomsky argues that human language is generative, that being generative requires universal principles or rules, and that these cannot be learned purely through experience (he calls this Plato's problem). The evidence for this is the failure of associative systems (such as neural networks), which learn from experience, to correctly learn or use grammar. So when he says chatGPT "could, in principle, misinterpret sentences that could also be sentence fragments," this is the sort of reasoning behind the statement. But Chomsky is wrong. He's not just wrong empirically (though he is; chatGPT handles the task just fine); he is wrong conceptually, about the need for essential concepts and universal principles for language learning. We don't need conceptual rules and principles in order to learn, and that's what Aaronson is referring to when he says "chatGPT and other large language models have massively illuminated at least one component of the human language faculty."

The success of chatGPT (and of the similar systems that will follow) should inform educators and theorists - and especially those grounded in the domain of cognitive psychology - that learning is not like text processing, that it doesn't involve 'encoding' or 'working memory' or any other such invention founded on the physical symbol system hypothesis, and that such theories are just as much 'astrology' as the ideas their proponents are wont to criticize so vehemently.

Web: [Direct Link] [This Post]


The Role Of Generative AI And Large Language Models in HR
Josh Bersin, JOSH BERSIN, 2023/03/10


This is a good article with a number of creative ideas describing how generative AI can improve human resources (HR). In the past, the use of AI in HR has focused on classification and clustering - sorting through job applications, for example, or recommending candidates for positions. But generative AI doesn't sort; it creates, and creativity isn't a function we often associate with HR. But still. As Bersin points out, there are still job descriptions to be written, candidate profiles to be created, salary benchmarks and rewards to be defined, feedback on performance to be given, and more. Now I don't necessarily agree that "this technology will make work better," because there's nothing a corporate mindset can't and won't turn into a dystopian nightmare. But that's the only weak point of this set of predictions.

Web: [Direct Link] [This Post]


ChatGPT and Work - Will Generative AI Replace Your Job?
Emory Craig, Digital Bodies, 2023/03/10


I think this question is posed incorrectly. Sure, chatGPT might replace your work. As Emory Craig says, "If you're in a job that involves sales, writing, generating content, engaging in conversation, or office-based work, you might have cause to worry." The same goes for a few other careers it didn't bother to list, such as software programmers. But while it may replace work, whether it will replace your job depends on you. Instead of replacing you, chatGPT could be multiplying your productivity. That can create new opportunities - if you're open to them and willing to work for them.

Web: [Direct Link] [This Post]


Meta is working on a decentralized social app
Ivan Mehta, TechCrunch, 2023/03/10


While much of the attention in the world of decentralized social networks has focused on Mastodon, we should be ready for a future where there are not only numerous applications but also numerous protocols. The ActivityPub protocol used by Mastodon, for example, can be contrasted with the Diaspora and Matrix protocols, though I would be very wary of Meta's P92-based protocol, as well as the Twitter-inspired BlueSky network. You might think having multiple protocols would be a problem, but it needn't be: "The only thing preventing, for example, interoperability between Twitter and Facebook's timeline has been protectionist policies by those companies." See also WNiP.
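
To make the contrast a little more concrete, here is a minimal sketch (in Python, with hypothetical account and server names) of the sort of message an ActivityPub server exchanges with its peers: a 'Create' activity wrapping a 'Note', roughly what a Mastodon post looks like on the wire. Nothing in it is tied to any one application, which is part of why interoperability is more a matter of policy than of technology.

# A minimal sketch of the kind of message a federated protocol such as
# ActivityPub exchanges between servers. The actor and audience values are
# hypothetical and for illustration only.
import json

create_note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/alice",   # hypothetical account
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "attributedTo": "https://example.social/users/alice",
        "content": "Interoperability is a policy choice, not a technical one."
    }
}

# Servers federate by POSTing activities like this, as application/activity+json,
# to the recipient actor's inbox endpoint.
print(json.dumps(create_note, indent=2))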

Web: [Direct Link] [This Post]


We publish six to eight or so short posts every weekday linking to the best, most interesting and most important pieces of content in the field. Read more about what we cover. We also list papers and articles by Stephen Downes and his presentations from around the world.

There are many ways to read OLDaily; pick whatever works best for you:

This newsletter is sent only at the request of subscribers. If you would like to unsubscribe, Click here.

Know a friend who might enjoy this newsletter? Feel free to forward OLDaily to your colleagues. If you received this issue from a friend and would like a free subscription of your own, you can join our mailing list. Click here to subscribe.

Copyright 2023 Stephen Downes. Contact: stephen@downes.ca

This work is licensed under a Creative Commons License.