
How to detoxify and defuse social conversations, using ChatGPT

Mar. 20, 2023.
2 min. read.

AI-based techniques for improving conversation quality and reducing  political divisiveness

About the Writer

Amara Angelica


Senior Editor Amara Angelica, an electrical engineer and inventor, was previously Editor of Kurzweil AI, working with Ray Kurzweil on The Singularity Is Near and other works.

Been on Twitter, Facebook, Instagram, and other social networks lately? A toxic zoo, right?  

“More than 100 billion messages are sent every day on Facebook and Instagram alone, and approximately 7 billion conversations occur daily on Facebook Messenger,” say researchers at Brigham Young University and Duke University.

Such conversations can have far-reaching impacts. “Some of the most significant social movements have recently emerged out of sprawling conversations on social media. And discussions between high-profile social media users can shape the stock market, politics, and many other aspects of the human experience.”

However, “toxicity in conversations increases polarization and corrodes the capacity of diverse societies to cooperate in solving social problems,” they note. “Nearly half of social media users report observing mean or cruel behavior.” 

Chat assistant as neutral moderator

Now there’s a proposed solution: reduce divisiveness and improve the quality of conversations using a large language model. To test that idea, the researchers conducted a large-scale experiment on a perennially controversial topic: gun control.

Respondents write messages unimpeded until, at predefined intervals, the chat assistant intercepts the treated user’s message, using GPT-3 to propose evidence-based alternative phrasings while retaining the semantic content. (Credit: Brigham Young University and Duke University)

The chat assistant presents the randomly ordered alternatives to the message author, who can accept or edit any of the rephrasing suggestions, or send the original message unchanged. The chosen version is then sent to their partner, and the conversation continues.

In the research, the chat assistant recommended real-time, context-aware, evidence-based ways to rephrase messages during a live, online conversation. The result: improved conversation quality and decreased divisiveness.

How to reduce toxicity

These suggestions were based on three specific techniques for improving conversations without affecting the content of the message:

Restatement: simply repeating back a person’s main point to demonstrate understanding;

Validation: positively affirming the statement made by the other person without requiring explicit statements of agreement (e.g. “I can see you care a lot about this issue”);

Politeness: modifying the statement to use more civil or less argumentative language. 
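The intervention loop described above can be sketched in code. This is a hypothetical illustration, not the authors’ implementation: the `rephrase` function is a stub standing in for a GPT-3 call, and all names here are invented for the example.

```python
import random

# The three techniques from the study, each phrased as an instruction
# that would be included in the prompt sent to the language model.
TECHNIQUES = {
    "restatement": "Repeat back the other person's main point to show understanding.",
    "validation": "Affirm the other person's statement without requiring agreement.",
    "politeness": "Reword the statement in more civil, less argumentative language.",
}

def rephrase(message: str, technique: str) -> str:
    """Stub for a large-language-model call. A real implementation would
    prompt the model with TECHNIQUES[technique] plus the original message,
    asking it to preserve the semantic content."""
    return f"[{technique}] {message}"

def suggest_alternatives(message: str) -> list[str]:
    """Produce one suggestion per technique, shuffled so the author sees
    them in random order, as in the experiment."""
    suggestions = [rephrase(message, t) for t in TECHNIQUES]
    random.shuffle(suggestions)
    return suggestions

# The author then accepts one suggestion, edits one, or sends the
# original message unchanged before it reaches the conversation partner.
suggestions = suggest_alternatives("Guns should be banned, period.")
```

Note that the design leaves the author in control at every step: the model only proposes wording, never sends anything on its own.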

In an arXiv paper, the authors report that these techniques improved conversation quality and tone and reduced political divisiveness.

“Our use of AI in this experiment does not seek to change participants’ minds. The research shows that the benefits of such conversations do not require persuasion or agreement between participants on the issues discussed. Each person’s perspective was heard, understood, and respected.”

Citation: Argyle, L. P., Busby, E., Gubler, J., Bail, C., Howe, T., Rytting, C., & Wingate, D. (2023). AI Chat Assistants can Improve Conversations about Divisive Topics. arXiv.

Images: MidJourney, Prompts by Lewis Farrell

Let us know your thoughts! Sign up for a Mindplex account now, join our Telegram, or follow us on Twitter.
