OpenAI says models trained to make up answers
Iain Thomson,
The Register,
Sept 17, 2025
This item raises some interesting questions. For example: if I asked you what time it is, would you tell me? Probably. But why? Because (a) you have a good idea of what it is, (b) there's room for some error, and (c) it doesn't really matter if you're wrong. But if I asked you whether it's safe to eat this mushroom, the parameters change. Now the stakes are a lot higher, there's less room for error, and you might not really know. So you'd probably respond with an "I don't know" or even a "Probably not," just to be on the safe side. In the case of OpenAI models, these parameters are (maybe) tunable. But are they adjusted for specific contexts? Probably not - which is why some of OpenAI's errors are a lot more serious than others.
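To make the point concrete, here is a toy sketch - my own illustration, not anything OpenAI has described - of a response policy whose abstention threshold rises with the stakes of the question. The stakes labels, thresholds, and the `respond` helper are all hypothetical.

```python
# Toy illustration (not OpenAI's actual mechanism): an assistant that only
# answers when its confidence clears a bar that rises with the stakes.

STAKES_THRESHOLDS = {
    "low": 0.5,    # "What time is it?" - wrong answers are cheap
    "high": 0.99,  # "Is this mushroom safe to eat?" - wrong answers are costly
}

def respond(answer: str, confidence: float, stakes: str = "low") -> str:
    """Return the answer only if confidence exceeds the stakes-dependent bar."""
    threshold = STAKES_THRESHOLDS.get(stakes, 0.5)
    if confidence >= threshold:
        return answer
    return "I don't know."  # abstain rather than guess

print(respond("It's about 3 pm.", confidence=0.7, stakes="low"))        # answers
print(respond("It's probably edible.", confidence=0.7, stakes="high"))  # abstains
```

The same 70% confidence is good enough for the clock question but nowhere near good enough for the mushroom question - which is the adjustment for context that, as far as we can tell, isn't being made.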

