The False Promise of Chomskyism

Stephen Downes


This is a response to Noam Chomsky's New York Times article criticizing chatGPT (paywalled, but don't bother). Scott Aaronson writes, "On deeper reflection, I probably don't need to spend emotional energy refuting people like Chomsky, who believe that Large Language Models are just a laughable fad rather than a step-change in how humans can and will use technology, any more than I would've needed to spend it refuting those who said the same about the World Wide Web in 1993." But there's a long comment thread that follows, and the discussion is, if nothing else, entertaining.

To best understand Chomsky's criticism, it helps to know where he is coming from. A towering figure in linguistics, and the author of Syntactic Structures and many other important works, Chomsky argues that human language is generative, that generativity requires universal principles or rules, and that these cannot be learned purely through experience (he calls this Plato's problem). The evidence offered for this is the failure of associative systems that learn from experience (such as neural networks) to correctly learn or use grammar. So when he says chatGPT "could, in principle, misinterpret sentences that could also be sentence fragments," this is the reasoning behind the statement. But Chomsky is wrong. He is not just wrong empirically (chatGPT handles the task just fine); he is wrong conceptually, about the supposed need for essential concepts and universal principles in language learning. We don't need conceptual rules and principles in order to learn, and that's what Aaronson is pointing to when he says "chatGPT and other large language models have massively illuminated at least one component of the human language faculty."
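The empirical half of that claim is easy to check for yourself. Here is a minimal sketch of such a probe, assuming the OpenAI Python client (openai >= 1.0) and an API key in the environment; the probe strings are illustrative sentence/fragment ambiguities of my own choosing, not examples from Chomsky's article.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Illustrative probes (not Chomsky's own examples): strings that hover
    # between complete sentence and fragment, including garden-path cases.
    probes = [
        "The old man the boats.",
        "Because the committee adjourned early.",
        "The horse raced past the barn fell.",
    ]

    for text in probes:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat model will do; this name is an assumption
            messages=[{
                "role": "user",
                "content": "Is the following a complete sentence or a "
                           f"sentence fragment? Parse it briefly: {text!r}",
            }],
        )
        print(text, "->", reply.choices[0].message.content)

The point is not the particular model or prompt but that the question is empirical: any claimed failure case can simply be run against the system and checked.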

The success of chatGPT (and similar systems that will follow) should inform educators and theorists - and especially those grounded in the domain of cognitive psychology - that learning is not like text processing, that it doesn't involve 'encoding' or 'working memory' or any such invention founded in the physical symbol system hypothesis, and that such theories are just as much 'astrology' as the ideas their proponents are wont to criticize so vehemently.



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Apr 29, 2024 10:22 a.m.

Creative Commons License.
