
ChatGPT taught me something powerful about human collaboration

A system that produces plausible paragraphs can chat — but will it ever be able to tell you what you really need to hear?


You may have heard of OpenAI’s ChatGPT. It’s a machine-learning-powered chatbot. You type in prompts, like asking it to write or summarize an article, and it rapidly generates the requested text. You can go back and forth with your requests, pushing the technology to elaborate and switch gears.

Suppose I wanted ChatGPT to help me write this article. I’d start with a prompt like “Explain what ChatGPT is.” Then I’d move on to “Identify potential uses for ChatGPT and associated ethical concerns.” Eventually I’d go with “Write an op-ed about this.” Bam. Each request would be met in a few seconds.

ChatGPT is controversial because it’s a leap forward in easy-to-use conversational AI. People are concerned that the technology is ripe for abuse. Although ChatGPT produces errors and spews BS, the quality is good enough to give students a massive edge in cheating on essays and bad actors a powerful tool for creating and spreading misinformation and disinformation.

There are other serious ethical issues. For example, since the technology has produced results that reinforce demeaning stereotypes — such as that women are less capable scientists than men — OpenAI is putting vulnerable people and their allies in the unfair position of performing uncompensated labor to correct them. And the more people rely on AI-generated writing to develop ideas and draw connections, the less inclined they may be to think for themselves. After all, if a machine can generate good enough content, why do the work yourself when it’s more efficient to outsource the labor, appropriate the results, and move on?

Still, I wanted to put aside all my misgivings about the technology in hopes of trying to expand the discussion. I’ve seen some explorations of ChatGPT as a tool for improving human thinking. I wondered: If it can produce writing, can it or something like it also be a tool for helping me with my writing? Instead of looking for shortcuts, I want the best feedback for improving my ideas and their expression — especially input that moves me to pause and reflect on my motivations and beliefs.

I think there are two types of writers. Solitary writers produce their best work by turning inward and playing with language until they find the right way to express their vision. They need time and space to do their research and dig deep into their thoughts. What I’d call conversational writers, however, don’t thrive in a room of their own. They need to talk with others to be at the top of their game.

I’m a conversational writer, but until now I haven’t tried to zero in on what, exactly, other people are doing when they get me unstuck. I’ve been able to take their help for granted and haven’t had a reason to seek technological assistance. Could ChatGPT be a stand-in for those helpful souls?

I was reminded that the secret sauce is caring — something that no technology on the horizon can exhibit.

AI doesn’t give a damn about you

My many years of talking to others about writing suggest that the conversational process offers four key benefits. The first is drawing out knowledge and identifying the limits of what you know. I often tacitly know more than I can explicitly say and can’t access the full range of my understanding until prompted by dialogue. Questions like “Can you say more about X?” “Can you put this another way?” “Can you give me a clear example?” and “How does this point differ from something widely understood?” encourage me to respond to principled objections and offer better evidence.

The second benefit is becoming more reflective and fair-minded. Like most people, I can always become more charitable to an argument that I disagree with and temper how I express my dissatisfaction with it. Devil’s advocate pushback — especially pushback that motivates me to examine my biases — is essential.

The third benefit is avoiding tunnel vision. When others have noticed that I am being narrow or repetitive in my approach to a problem, they’ve offered new perspectives that have helped me see things differently. Sometimes, introducing a new metaphor or a deeper context does the trick. Other times, more is required, including pushing me to carefully interrogate my assumptions. Am I really dealing with an ethical issue that’s about conflicting values? Or is it more of a factual matter that requires a deeper dive into empirical research?

The fourth benefit is finding motivation. I often get deflated when writing — worried that I don’t have as much to say as I hoped or that I’m making such minor points that they’re not worth conveying. Other people have helped by lighting a spark. They’ve said things like “This is such a complex issue, the best anyone can do is tackle a small part of it.” Or “Knowledge is built one small point at a time within a community of inquirers. You’re playing a crucial part.” Even small affirmations such as “great point!” help. When others express thoughtful, independent judgment, they provide something invaluable: a sense of perspective.

ChatGPT isn’t programmed to ask you anything and can only respond to your prompts, but I figured I could ask the program for certain things human collaborators usually offer me.

Here’s me asking ChatGPT for motivation to write an op-ed about it, even though many others already have.

[Screenshot of ChatGPT’s response. Credit: Abbi Matheson]

I don’t know about you, but I don’t find the generic points remotely moving.

What about asking ChatGPT to do something I would never ask of another human? Here’s me requesting a simulation of an encouraging text from my editor at The Globe.

[Screenshot of ChatGPT’s simulated text from my editor. Credit: Abbi Matheson]

It’s fine make-believe, but as a contrivance, the output doesn’t inspire. It’s like sending yourself a Valentine’s Day card from a “secret admirer.”

What about bias detection? I checked with ChatGPT to see if arguing that facial recognition technology should be banned is biased or unfair. This is a strong stance, and I’ve received a lot of pushback when advancing it. ChatGPT confirmed that it is a valid point of view.

[Screenshot of ChatGPT’s response on bias. Credit: Abbi Matheson]

I’m not sure what to make of this take, though. That’s because the technology has a limited capacity to justify its reasoning. ChatGPT isn’t actually concerned about public safety, privacy, or the expansion of police power, because it doesn’t give a damn about anything.

Now, I could ask ChatGPT to offer a devil’s advocate perspective on banning facial recognition. And I could ask it to keep elaborating, so there’s some utility here. Nevertheless, the value is limited; the technology presents only widely known objections:

[Screenshot of ChatGPT’s devil’s advocate response. Credit: Abbi Matheson]

No one familiar with the topic will find any of the outputs surprising. And even if ChatGPT offered a novel perspective, it wouldn’t be able to cite where the ideas originate.

It isn’t necessarily surprising that ChatGPT can only offer derivative versions of human-to-human conversation. A chatbot isn’t a dialoguebot.

But what if future versions of the technology offer more conversational features and do a better job of providing the benefits I’m looking for? I see no reason why technology can’t be designed to ask targeted and inquisitive questions, provide more contextually informed feedback, better detect a range of biases, and identify the sources from which it draws its material.

If this happens and someday we get the smartest and easiest-to-talk-with version of AI, it will still lack something essential to much of human communication unless it has sentience. AI sentience is a long way away — if it’s even possible to create it artificially at all.

Let me put it another way: ChatGPT has no interest in you whatsoever. It isn’t curious about your goals or motivated to help you meet them. It lacks the good faith to tell you when your goals are misplaced. OpenAI can’t make a technology that truly cares because that requires consciousness, inner experiences, an independent perspective, and emotions. To care, you need to put things in perspective, offer respect, take offense when appropriate, and provide camaraderie. A caring person is a source of motivation because you respect them and are concerned about what they think of you.

This point goes way beyond my writerly needs. Good coaches, teachers, therapists, social workers, and managers don’t just transmit knowledge. They’re caring motivators. Many other job and social roles require caring, too.

As the hype builds about the disruptive potential of future versions of ChatGPT and related technologies, we need to keep this limitation in mind. A machine that doesn’t care can’t say everything we need to hear.

Evan Selinger is a professor of philosophy at the Rochester Institute of Technology, an affiliate scholar at Northeastern University’s Center for Law, Innovation, and Creativity, and a scholar in residence at the Surveillance Technology Oversight Project. Follow him on Twitter @evanselinger.