

AI DebunkBot Effectively Persuades People to Stop Believing Conspiracy Theories

An interview with DebunkBot creator and American University Psychology Professor Thomas Costello


News headline reads "Fake News." Image created with generative AI.

Conspiracy theories are spreading on an unprecedented scale across the Internet and social media, including outlandish stories about alien abductions, government assassination plots, and politicians who can "geoengineer" the weather. More and more, these stories are weaponized for political purposes, leading many experts to conclude they are a real threat.

But now there might be hope. American University Psychology Professor Thomas Costello is part of a Massachusetts Institute of Technology (MIT) team of scientists who created DebunkBot, an artificial intelligence bot that chats pleasantly with users while respectfully and factually debunking their conspiracy beliefs.

DebunkBot has a strong track record of persuasion. This is no easy feat, says Costello, who points out that people who believe in conspiracy theories are "really hard to persuade and don't often change their minds." DebunkBot has captured the imaginations of scientists, politicians, and journalists, as well as lots of ordinary people who have visited the site to test its effectiveness at debunking their favorite conspiratorial beliefs.

So, in this era of rampant misinformation and conspiracy theories, can AI bots like DebunkBot make a difference? In this interview, Costello answers questions about AI, human nature, and the battle over the truth.

Can you tell us a bit about DebunkBot and what it's designed to do?

DebunkBot is based on a research study that was published in the journal Science. We used GPT-4 Turbo, which at the time was OpenAI's most advanced large language model, to engage more than 2,000 conspiracy believers in personalized, evidence-based discussions. Participants were asked to describe a conspiracy theory they believed in, using their own words, along with evidence supporting their belief.

GPT-4 Turbo then used this information to persuade users that their beliefs were untrue, adapting its strategy for each participant's unique arguments and evidence. These conversations, lasting an average of 8.4 minutes, allowed the AI to directly address and refute specific evidence supporting each person's conspiratorial beliefs, an approach that was impossible to test at scale prior to the technology's development.
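For readers curious about the mechanics, a dialogue like the one Costello describes could be sketched in a few lines of Python against the OpenAI chat API. This is a minimal illustration only, not the study's actual code; the model string, the prompts, and the debunk_conversation helper are assumptions made for the example.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You are a respectful, factual assistant. The user believes a conspiracy theory. "
    "Address their specific claims and evidence with accurate counter-evidence, "
    "politely and without ridicule."
)

def debunk_conversation(belief: str, evidence: str, turns: int = 3) -> list[str]:
    # Seed the dialogue with the participant's own description of the belief.
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"My belief: {belief}\nMy evidence: {evidence}"},
    ]
    replies = []
    for _ in range(turns):
        response = client.chat.completions.create(
            model="gpt-4-turbo",  # the study used GPT-4 Turbo; this exact model string is an assumption
            messages=messages,
        )
        reply = response.choices[0].message.content
        replies.append(reply)
        messages.append({"role": "assistant", "content": reply})
        # Placeholder follow-up; in the study, the participant's real reply went here.
        messages.append({"role": "user", "content": "I'm not fully convinced. Can you say more?"})
    return replies

In the real experiment, of course, each follow-up turn came from the participant's own typed reply rather than a canned prompt, and the AI's factual claims were later reviewed by a fact-checker, as Costello describes below.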

How did things turn out?

The conversations lasted about eight minutes on average, and at the end, people rated whether they still believed in their conspiracy. On average, people reduced their conspiracy belief by about 20 percent, and one in four people became actively skeptical toward their conspiracy.

Most participants developed at least a little bit of skepticism about their conspiracy. We followed up with them, and even two months later, their newfound skepticism was still present. Two months was the last time we checked, so presumably it lasted for even longer than that.

DebunkBot has gotten a lot of attention, hasn't it?

Yes. The DebunkBot website was actually secondary to our research paper published in Science; we just created the site for people to try it out for themselves. We actually thought the New York Times was doing a story on the Science paper, but their coverage pointed readers to the bot itself, so it got a lot of traffic.

I think 65,000 people have tried it so far. People seem into it. It's been featured in a number of media outlets, and I've been interviewed about it as well.

So, why are some people drawn to conspiracy theories?

Conspiracy theories are just descriptive claims about the world. So, let's say someone claims that 9/11 wasn't a terrorist attack perpetrated by al-Qaeda but rather was organized by the US government. They think there's evidence supporting this.

Most people don't go out and test every hypothesis themselves. They rely on experts and other sources they trust. So, if they're exposed to information that is wrong but seems plausible, like a conspiracy theory, they might believe it, especially if it resonates with other things they believe are true. And if you're someone who doesn't trust elites and government and institutions, you might be particularly prone to conspiratorial beliefs.

That's one angle, and then another is that people can get psychological value from believing in conspiracy theories. If you're afraid and believe that the world is dangerous and random and chaotic, it's almost a comforting idea that there's order in the world, even if that order is something like an evil secret government.

And then a third angle is group allegiance. If members of your group share a belief, you're also likely to believe it.

Why do you think that misinformation is so prevalent right now?

Bad beliefs, misinformation, and polarization have always been a problem. The democratization of information that came with the Internet, where we're not getting our facts from the same, shared sources, is a newer problem (or, at least, the scale is new). Around the advent of the Internet, many people were optimistic that humans would be able to do their own research and come to their own good conclusions. But doing that sort of epistemic work may not be in our nature, and the Internet isn't set up to stop people from drowning in random people's opinions.

There's a famous saying, often attributed to Jonathan Swift, that a lie can travel halfway around the world while the truth is still putting on its shoes. The Internet has really supercharged that phenomenon. People are able to spread misinformation, and it goes viral.

Another issue is partial truths. For example, there was a lot of fake news about COVID vaccinations during the pandemic that contained straight-up false information. And of course that's harmful, but it wasn't super widespread.

More impactful were real news stories that were true but ultimately misleading, like one in the Chicago Tribune with the headline, "Healthy Doctor Dies After Getting COVID Vaccine." A doctor did in fact die after getting the COVID vaccine, but probably not because of the vaccine. The story went viral, and millions of people saw it and changed their beliefs a little. So that's not really misinformation, but those kinds of stories probably caused more harm than misinformation.

Human beings are not always thoughtful, don't always stress-test beliefs against plausible counterarguments, and don't necessarily try to see the other side's perspective. Fractionating into polarized groups only amplifies this. My hope is that AI can begin to act as a really effective counter. It can get around the world just as quickly as a lie (or a half-truth).

Where do you see this all heading? Is this our new normal, or could things get even worse?

We worry about the "bullshit asymmetry principle," where it's easier to spread lies than to debunk them. AI is a tool for combating that because it's able to say in real time, "Here's why you're wrong," or "This isn't trustworthy information." People don't need to do the human cognitive labor to combat the misinformation themselves. This is an optimistic version of the future where AI helps people think more clearly.

There are also pessimistic versions predicting that AI will spread misinformation more quickly than AI can stop it, and it's hard to know what will happen. But I like using AI as a form of epistemic hygiene, almost like we brush our teeth every day. Maybe if we hear a wacky claim, we can go check with ChatGPT and see what it thinks. And you don't have to trust it completely. You can just use it to get another perspective.

When you created DebunkBot, what sources did you use? And what makes people, especially conspiracy-minded people, trust them?

Good question. The sources we use come from the model's training data: things that GPT-4 learned were true and reliable. We had a fact-checker go through the AI's claims to make sure what it was telling people was true and to look for political bias. We randomly chose 128 claims made by the AI and had the fact-checker investigate each of them: 127 were rated as "true," one was "misleading," and none were "false." There was no evidence of political bias. That's a really solid track record.

Do you think people will use DebunkBot going forward?

One nice thing about this tool is that it doesn't judge you or make you feel bad for having a conspiracy belief. And if we were doing it outside of a research context, it would also be totally private. It also differs from an argument: when you're talking to a bot, the only reason to be doing it is to find out what's true.

What's next for your research?

I'm very excited about what behavioral science and psychology might be capable of with AI in terms of experiments. AI opens a lot of interesting creative doors: having AI in the loop, talking to individual research participants one on one, not just for persuasion but for all kinds of things. I'm excited about using these new AI tools to study human beings.

I'm moving my research beyond conspiracy theories to other attitudes that people hold: whether they prefer iPhones or Androids, how they feel about immigration or gun control, or any kind of epistemically questionable, non-conspiracy belief. The point of doing all of this is to map out the kinds of beliefs that are responsive to evidence versus those that aren't, which will help us understand why people hold the beliefs they do.

How will you use AI to do this?

For a long time, psychology research was done with one-on-one, in-person interactions that took place in a room, where you'd run an experiment or deliver a treatment to a human being. It was hard to scale, and a lot of information got lost because no one wrote down everything that happened.

And so, the field moved to online studies, where you pay people to participate, and you can do experiments, but they're a little less realistic and more constrained. They can't easily be personalized.

AI can marry these two waves: it can interact one-on-one with people, with everything recorded, preserving both the creativity of original in-person psychology research and the scalability of online research. We're expecting some big breakthroughs.