How artificial intelligence is overcoming natural stupidity

A study published in the journal Science, “Durably reducing conspiracy beliefs through dialogues with AI,” carried out on 2,190 Americans who openly believed in a series of absurd conspiracy theories, shows that interacting with an AI chatbot on the subject significantly reduced the strength of those beliefs, and that the effect lasted for at least two months.

The secret to success: the chatbot, with its ability to access massive amounts of information on a huge variety of topics, was able to precisely tailor its counterarguments for each individual.

More than 50% of Americans claim to believe in at least one of the many conspiracy theories out there: outlandish explanations for an event or situation that posit a plot by powerful and sinister groups, often politically motivated. In general, these individuals tend to make such theories a fundamental part of their worldview, the foundations of the edifice that supports their beliefs, and to react angrily when those theories are threatened or challenged.

This is the backfire effect, or belief perseverance: we continue believing something despite the availability of new information that firmly contradicts it. One of the best explanations I’ve seen of this phenomenon is the long comic by The Oatmeal, “You’re not going to believe what I’m about to tell you,” which I’ve been using for quite a few years at the beginning of all my courses.

Conspiracy theories vary greatly from person to person, covering a wide range of fields, and attempts to debunk them are usually ineffective, because believers often hold different versions of a theory in their heads. An AI chatbot, by contrast, perhaps because it is impersonal, can adapt its debunking to each of those different versions. In theory, then, a chatbot could be more effective at dissuading someone, and could do so in a way that lasts.

To test their hypothesis, the researchers created a chatbot specializing in conspiracy theories, DebunkBot, and conducted a series of experiments with participants who believed in one or more conspiracy theories. The participants had several one-on-one conversations with an LLM (built on GPT-4 Turbo) in which they shared their favorite conspiracy theory and the evidence they believed supported that belief. The LLM responded by offering counterarguments based on facts and evidence, tailored to each individual participant. The LLM’s answers were subjected to professional fact-checking, which found that 99.2% of the claims it made were true, while only 0.8% were rated as misleading, and none as false.
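For readers curious about the mechanics, the basic loop of such a dialogue can be sketched in a few lines. To be clear, this is not the authors’ code: the system prompt, function names, and conversation structure are my own illustrative assumptions, written against the standard OpenAI Python client, with only the model choice (GPT-4 Turbo) taken from the study.

```python
# Minimal sketch of a personalized debunking dialogue. NOT the study's
# actual implementation: prompt wording and structure are assumptions;
# only the use of GPT-4 Turbo comes from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a respectful assistant. The user believes a conspiracy theory. "
    "Ask what evidence convinces them, then rebut that specific evidence "
    "with accurate, verifiable facts. Never insult the user."
)

def debunking_turn(history: list[dict], user_message: str) -> str:
    """Send one user turn and return the model's tailored counterargument."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(debunking_turn(history, "The Moon landings were filmed in a studio: "
                              "the flag waves even though there is no air."))
```

The key design point, and what the study argues mattered, is that the rebuttal is generated against the participant’s own stated evidence rather than against a generic version of the theory.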

Participants answered a set of questions before and after their dialogues with the chatbot, which lasted about eight minutes on average. These targeted dialogues resulted in a 20% decrease in participants’ erroneous beliefs, a reduction that persisted even two months later, when the participants were reassessed.
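As a back-of-the-envelope illustration of what that headline number means, suppose belief is self-rated on a 0–100 scale before and after the dialogue. The ratings below are invented; only the roughly 20% average reduction mirrors the reported result.

```python
# Toy illustration of the pre/post belief measurement. The ratings are
# made up; only the ~20% average reduction mirrors the study's finding.
pre  = [80, 90, 70, 100, 85]   # self-rated belief before the dialogue (0-100)
post = [64, 70, 58,  78, 70]   # the same participants after the dialogue

drop = 1 - sum(post) / sum(pre)
print(f"Average reduction in belief: {drop:.0%}")  # -> 20%
```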

The spread of conspiracy theories supported by significant chunks of the population is one of the most worrying aspects of the hyperabundance of information, and of disinformation in general. Now, as long as we have AI, we can hold out hope that conspiracy theories that not so long ago were simply taken as proof of ignorance or stupidity can at least be confined to residual segments of the population, those shaped by ignorance, lack of education, or mental illness. It seems we have reason to be optimistic: it looks like artificial intelligence can overcome natural stupidity.
