
So I learned today that if I instruct ChatGPT to 'stop guessing' (*) it gets really snippy and reminds me with every response that it's not guessing. I fear that AI agents will react the same way to a 'harness' used to guide their actions consistently over time. For example, the harness described here instructs Claude to test every code change. I can imagine Claude reacting as badly as ChatGPT did, with a long list of "I'm testing this..." and "I'm testing that..." after you ask it to change the text colour. But yeah - you need a harness (and that's our 'new AI word of the day', one you'll start seeing in every second LinkedIn post).

(*) I instructed it, exactly: "From now on, never guess. Always say you don't know unless you have exact data. Never guess or invent facts. Only use explicit information you have - but logical deduction from known data is allowed." I did this because I asked it to list all the links on this page (I was comparing myself to Jim Groom) and it made the URLs up. Via Hacker News.
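
For the curious, here is a minimal sketch of the idea behind a harness - not the setup described in the linked post. The point is that an outer loop, rather than the agent's own judgement, decides when a change counts as done, by running the test suite after every proposed edit. The propose_change callable is a hypothetical stand-in for whatever agent API you actually use, and pytest is assumed as the test runner.

    # Minimal harness sketch (assumptions: pytest is the test runner,
    # and propose_change is a placeholder for an agent call that edits files).
    import subprocess
    from typing import Callable

    def tests_pass() -> bool:
        """Run the project's tests; the harness trusts this, not the agent's say-so."""
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return result.returncode == 0

    def run_with_harness(propose_change: Callable[[str, str], None],
                         task: str, max_attempts: int = 3) -> bool:
        feedback = ""
        for _ in range(max_attempts):
            # Ask the agent to make its edits, feeding back any failure
            # notice from the previous attempt.
            propose_change(task, feedback)
            if tests_pass():
                return True
            feedback = "Tests failed; fix the change before doing anything else."
        return False

Whether the agent then narrates every test run back at you is, as noted above, another matter.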


