Emergent Analogical Reasoning in Large Language Models

Stephen Downes

Knowledge, Learning, Community

What's really interesting to me about GPT-3 and other large language models (LLMs) is that they are not programmed with rules or categories; instead, they create them out of the data they're given. As this paper (27 page PDF) argues, "GPT-3 appears to display an emergent ability to reason by analogy, matching or surpassing human performance across a wide range of problem types." The authors continue, "The deep question that now arises is how GPT-3 achieves the analogical capacity that is often considered the core of human intelligence." Many of the criticisms of LLMs point to errors in these pattern-recognition capabilities: they sometimes get basic facts wrong, and don't seem to (yet) understand what types of things some things are. But as the authors write, "regardless of the extent to which GPT-3 employs human-like mechanisms to perform analogical reasoning, we can be certain that it did not acquire these mechanisms in a human-like manner." We don't actually teach an LLM the way we would, say, a child. But suppose we did...
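To make the task concrete: the kind of analogy problem used to probe LLMs includes letter-string analogies of the sort popularized by Hofstadter ("if abc changes to abd, what does ijk change to?"). The sketch below is a hypothetical rule-based baseline for that task format, written for illustration only (it is not the paper's code, and the function name is my own); the point of the paper is that GPT-3 solves such problems without being given any rule like this.

```python
# Letter-string analogy: given source -> transformed, apply the same
# change to a target string. A naive rule-based baseline (hypothetical
# illustration): find which positions changed and by what alphabetic
# shift, then apply that shift to the corresponding target positions.

def solve_letter_analogy(source: str, transformed: str, target: str) -> str:
    """Apply the source->transformed edit to target, position by position."""
    assert len(source) == len(transformed) == len(target)
    result = []
    for s, t, g in zip(source, transformed, target):
        if s == t:
            result.append(g)            # unchanged position: copy target letter
        else:
            shift = ord(t) - ord(s)     # changed position: same alphabetic shift
            result.append(chr(ord(g) + shift))
    return "".join(result)

print(solve_letter_analogy("abc", "abd", "ijk"))  # ijl
```

A hand-coded solver like this only handles one narrow problem family; what the paper finds notable is that an LLM trained only on next-token prediction handles many such families zero-shot.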



Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Oct 10, 2024 03:56 a.m.

Creative Commons License.
