Would you trust an AI to mediate a conflict?

Recently, I have been feeling devastated. A very close friend cut off contact with me. I don’t quite understand why, and my attempts to resolve the situation have been unsuccessful. Situations like this are painful and confusing. So it’s not surprising that people are increasingly turning to AI chatbots to help them work through such problems. And there’s good news: AI can really help.

Google DeepMind researchers recently trained a system of large language models to help people reach agreement on complex but important social or political issues. The AI model was trained to identify and present areas of agreement between people’s ideas. With the help of this AI mediator, small groups of study participants were less divided in their positions on various topics.

One of the best uses of AI chatbots is for brainstorming. I’ve had success in the past using them to write more assertive or persuasive emails in sensitive situations, such as complaining about services or negotiating bills. This latest research suggests that they can also help us see things from other people’s perspectives. So why not use AI to fix my relationship with my friend?

I described the conflict, as I see it, to ChatGPT and asked for advice on what I should do. The response was very reassuring: the AI chatbot supported the way I was approaching the problem, and the advice it gave was in line with what I had already thought about doing. I found it helpful to talk to the bot and get more ideas on how to handle my specific situation. But in the end, I was dissatisfied, because the advice was still generic and vague (“Set your boundaries calmly” and “Communicate your feelings”) and didn’t offer the kind of insight a therapist might.

And there’s another problem: every argument has two sides. I started a new conversation and described the problem as I believe my friend sees it. The chatbot supported and validated her decisions, just as it had done for me. On the one hand, this exercise helped me see things from her perspective; after all, I was trying to put myself in the other person’s shoes, not just win an argument. On the other hand, I can easily see how relying too heavily on a chatbot that tells us what we want to hear could simply reinforce our own opinions, preventing us from seeing things from another person’s perspective.

This served as a good reminder: an AI chatbot is not a therapist or a friend. Although it can parrot the vast swaths of internet text it has been trained on, it does not understand what it is like to feel sadness, confusion, or joy. Therefore, I would recommend caution when using AI chatbots for things that really matter to you, and not taking what they say at face value.

An AI chatbot can never replace a real conversation, one where both sides are willing to truly listen and take the other’s point of view into account. So I decided to set the AI-assisted conversation aside and reached out to my friend once again. Wish me luck!

(Source: MIT Technology Review)