Large language models have a reasoning problem

Stephen Downes

The question being considered here is: can large language models (LLMs) do logical reasoning like humans? The answer, according to the research paper summarized here, is that they find clever ways to learn statistical features that inherently exist in the reasoning problems, rather than doing actual reasoning themselves. But I think it's worth asking whether humans do logical reasoning. Students in introductory courses on logic or probability make the same sorts of errors these AI systems seem to make. So, yes, while "caution should be taken when we seek to train neural models end-to-end to solve NLP tasks that involve both logical reasoning and prior knowledge and are presented with language variance," the same holds for human learners. It takes a lot to train humans to perform higher-order functions like logic, math and language. Years, even.
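To make the paper's point concrete, here is a minimal sketch of how a "reasoning" benchmark can be solved by a statistical artifact alone. This is my illustration, not the paper's experiment: in this hypothetical toy dataset the quantifier word happens to leak the label, so a one-feature rule scores perfectly without doing any logic.

```python
import random

random.seed(0)

def make_example():
    # Hypothetical toy generator: valid syllogisms happen to use "all",
    # invalid ones "some" -- a statistical feature baked into the data.
    valid = random.random() < 0.5
    quant = "all" if valid else "some"
    subj, mid, pred = random.sample(["cats", "dogs", "birds", "fish"], 3)
    text = f"{quant} {subj} are {mid}, and {quant} {mid} are {pred}."
    return text, valid

data = [make_example() for _ in range(1000)]

def predict(text):
    # "Model": a single surface cue, no reasoning at all.
    return text.startswith("all")

accuracy = sum(predict(t) == y for t, y in data) / len(data)
print(f"accuracy from the artifact alone: {accuracy:.2f}")  # prints 1.00
```

A learner, human or machine, that picks up this shortcut looks like it is reasoning on this dataset while doing nothing of the kind, which is why the paper urges caution about end-to-end training on such tasks.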


