How to detoxify and defuse social conversations, using ChatGPT

2023-03-20
2 min read.
AI-based techniques for improving conversation quality and reducing political divisiveness

Been on Twitter, Facebook, Instagram, and other social networks lately? A toxic zoo, right?  

“More than 100 billion messages are sent every day on Facebook and Instagram alone, and approximately 7 billion conversations occur daily on Facebook Messenger,” say researchers at Brigham Young University and Duke University.

Such conversations can have far-reaching impacts. “Some of the most significant social movements have recently emerged out of sprawling conversations on social media. And discussions between high-profile social media users can shape the stock market, politics, and many other aspects of the human experience.”

However, “toxicity in conversations increases polarization and corrodes the capacity of diverse societies to cooperate in solving social problems,” they note. “Nearly half of social media users report observing mean or cruel behavior.” 

Chat assistant as neutral moderator

Now there’s a proposed solution: reduce divisiveness and improve the quality of conversations using a large language model, ChatGPT. To test that idea, the researchers conducted a large-scale experiment on a perennially controversial topic: gun control.

Respondents write messages unimpeded until, at predefined intervals, the chat assistant intercepts the treated user’s message, using GPT-3 to propose evidence-based alternative phrasings while retaining the semantic content. (Credit: Brigham Young University and Duke University)

The chat assistant presents these alternatives to the message author in random order. The author can accept or edit any of the rephrasing suggestions, or send their original message unchanged. Whatever they choose is then sent to their partner, and the conversation continues.

In the experiment, the chat assistant recommended real-time, context-aware, evidence-based rephrasings of specific wording during live online conversations. The result: improved conversation quality and decreased divisiveness.
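The paper doesn’t publish its implementation, but the intercept-and-choose flow it describes can be sketched in a few lines of Python. Everything here is an assumption for illustration: the function names (suggest_alternatives, moderate_turn), the console interaction, and the canned suggestions that stand in for the real GPT-3 call.

```python
import random

# Stand-in for the model call; a GPT-backed sketch appears after the technique list below.
def suggest_alternatives(message: str) -> list[str]:
    return [
        f"(restatement of) {message}",
        f"(validation of) {message}",
        f"(more polite) {message}",
    ]

def moderate_turn(original: str) -> str:
    """Intercept one outgoing message and let its author decide how it gets sent."""
    suggestions = suggest_alternatives(original)
    random.shuffle(suggestions)              # suggestions are presented in random order

    print("Your message:", original)
    for i, s in enumerate(suggestions, 1):
        print(f"  {i}. {s}")
    choice = input("Pick 1-3 to accept a suggestion, type a rewrite, or press Enter to send as-is: ").strip()

    if choice in {"1", "2", "3"}:
        return suggestions[int(choice) - 1]  # accept a suggestion verbatim
    if choice:
        return choice                        # author typed their own edit
    return original                          # keep the original wording

if __name__ == "__main__":
    sent = moderate_turn("That's a ridiculous take and you know it.")
    print("Sent to partner:", sent)          # the partner only ever sees the chosen text
```

The key design point the study emphasizes is that the author stays in control: the assistant never rewrites anything silently, it only offers options.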

How to reduce toxicity

These suggestions were based on three specific techniques for improving conversations without changing the content of the message (a rough sketch of how such suggestions could be generated follows the list):

Restatement: simply repeating back a person’s main point to demonstrate understanding;

Validation: positively affirming the other person’s statement without requiring explicit agreement (e.g. “I can see you care a lot about this issue”);

Politeness: modifying the statement to use more civil or less argumentative language.
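To make the three techniques concrete, here is a rough sketch of how a chat model could be prompted to produce one rewrite per technique. The prompt wording, the gpt-3.5-turbo model name, and the rephrase helper are assumptions, not the authors’ code (the study itself used GPT-3).

```python
from openai import OpenAI  # assumes the openai Python package (v1+) and an OPENAI_API_KEY

client = OpenAI()

# One instruction per technique, paraphrased from the article's descriptions.
TECHNIQUES = {
    "restatement": "Start by repeating back the partner's main point to show it was understood.",
    "validation": "Acknowledge that the partner's concern matters, without agreeing or disagreeing.",
    "politeness": "Use more civil, less argumentative wording.",
}

def rephrase(draft: str, partner_last: str, technique: str) -> str:
    """Ask the model for one alternative phrasing of `draft` that keeps its meaning."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model choice is an assumption; the study used GPT-3
        messages=[
            {"role": "system",
             "content": f"You rewrite one side of a chat about a divisive topic. {TECHNIQUES[technique]} "
                        "Preserve the author's meaning and stance; change only how it is expressed."},
            {"role": "user",
             "content": f"Partner's last message: {partner_last}\n"
                        f"Author's draft reply: {draft}\n"
                        "Rewrite the draft reply."},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()

# Example usage (hypothetical messages):
# for name in TECHNIQUES:
#     print(name, "->", rephrase(
#         "Background checks just punish law-abiding owners.",
#         "We need universal background checks to keep guns away from dangerous people.",
#         name))
```

Because the instruction asks the model to preserve the author’s stance, the rewrites change tone rather than position, which is the whole point of the intervention.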

In an arXiv paper, the authors report that these techniques improved conversation quality and tone and reduced political divisiveness.

“Our use of AI in this experiment does not seek to change participants’ minds. The research shows that the benefits of such conversations do not require persuasion or agreement between participants on the issues discussed. Each person’s perspective was heard, understood, and respected.”

Citation: Argyle, L. P., Busby, E., Gubler, J., Bail, C., Howe, T., Rytting, C., & Wingate, D. (2023). AI Chat Assistants can Improve Conversations about Divisive Topics. arXiv. https://arxiv.org/abs/2302.07268

Images: MidJourney, Prompts by Lewis Farrell


