Good article that touches on several themes, including the one in the title. The set-up is a misleading article from Futurism claiming "one out of every 1,500 chatbot interactions results in a psychotic break," which of course is absurd. Mike Caulfield uses this as an opportunity to demonstrate how discerning readers get to the source, which in this case is a paper showing that "AI validates questionable beliefs or delusional beliefs." But even better, he questions this too. "So much of this comes down to the problem of people using LLMs as chatbots and conceptualizing the problem as if AI was a respected elder in your community offering news and advice," he says. "But it's a bad frame." I've had this experience. Sometimes the chatbot is right and sometimes you're right, but it's not always clear which is which. "You can't set a rule that the LLM will always correct a user when they are wrong because the LLM is not always right." Take, for example, my work with ChatGPT planning a route through Iceland. It's a constant back-and-forth, and step by step I find myself verifying what ChatGPT tells me. If the AI isn't willing to change based on what I say, it's going to route me from Keflavik to Akranes along a nice flat 44 km path... across open water. See also: Aaron Tay on the sycophancy fallacy.

