Stephen Downes

Knowledge, Learning, Community

What's really interesting to me about GPT-3 and other large language models (LLMs) is that they are not programmed with rules or categories, but instead create them out of the data they're given. As this paper (27-page PDF) argues, "GPT-3 appears to display an emergent ability to reason by analogy, matching or surpassing human performance across a wide range of problem types." The authors continue, "The deep question that now arises is how GPT-3 achieves the analogical capacity that is often considered the core of human intelligence." Now, many of the criticisms of LLMs point to errors in these pattern recognition capabilities. They sometimes get basic facts wrong, and don't seem to (yet) understand what types of things some things are. But as the authors write, "regardless of the extent to which GPT-3 employs human-like mechanisms to perform analogical reasoning, we can be certain that it did not acquire these mechanisms in a human-like manner." We don't actually teach an LLM the way we would, say, a child. But suppose we did...
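To make the contrast concrete, here is a minimal, hypothetical sketch of the older "programmed with rules" approach applied to the kind of letter-string analogy used in tests like these (e.g. "abc : abd :: ijk : ?"). Everything here is hand-written by the programmer, not learned from data, which is exactly what makes the LLM result striking; the function name and its narrow scope are my own invention for illustration.

```python
# A toy, hand-coded analogy solver: the rule is supplied by the
# programmer, not induced from data (the opposite of how an LLM
# acquires its analogical capacity).

def solve_letter_analogy(source: str, transformed: str, target: str) -> str:
    """Infer a transformation from source -> transformed and apply
    the same rule to target. Handles only the simplest case: exactly
    one position changed by a fixed alphabetic offset."""
    if len(source) != len(transformed):
        raise ValueError("only equal-length transformations supported")
    # Find the positions where the example pair differs.
    diffs = [i for i, (a, b) in enumerate(zip(source, transformed)) if a != b]
    if len(diffs) != 1:
        raise ValueError("only single-position changes supported")
    i = diffs[0]
    offset = ord(transformed[i]) - ord(source[i])  # e.g. 'c' -> 'd' is +1
    # Apply the same offset at the same position in the target string.
    return target[:i] + chr(ord(target[i]) + offset) + target[i + 1:]

print(solve_letter_analogy("abc", "abd", "ijk"))  # -> "ijl"
```

The point of the sketch is its brittleness: every rule must be anticipated in advance, whereas GPT-3 was never given rules like this at all.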

[Direct link]

[Image: matrix reasoning problems]

Stephen Downes, Casselman, Canada
stephen@downes.ca

Creative Commons License.

Copyright 2023
Last Updated: Jan 19, 2023 11:51 a.m.