Can AI Chatbots Help Us Escape the Grip of Conspiracy Theories?


AI and Conspiracy Theories: an unlikely new weapon may have emerged in the fight against fake news.

AI chatbots are widely used to hold conversations and answer questions, and they have proved popular. New research suggests they may also help address a more serious social problem: belief in conspiracy theories. A paper in Science finds that fact-based, personalized debunking conversations with an AI chatbot can pull people away from conspiracy beliefs, and that the effect lasts for at least two months.

The study, by Thomas Costello and colleagues at MIT, offers at least a degree of optimism in the effort to combat conspiracy theories, some of which can have deadly consequences.

Conspiracy theories range from the merely ridiculous, such as the claim that there is no such country as Finland, to far darker ideas with serious consequences. When such theories erode public confidence in science or government, the repercussions can be severe. Around the COVID-19 vaccine, for instance, anti-science conspiratorial beliefs include vaccine denial, climate change denial, and similar ideas that often lead to unhealthy behavior and harm public health. In extreme cases, belief in such theories has been linked to deaths.

The new study lays out its quantitative analysis clearly and supports its conclusions with evidence.

Conspiracy theories, however, are notoriously "sticky": once someone accepts one, they rarely let it go easily. These beliefs are often tied to close-knit communities, and adherents may have invested considerable effort in researching them. Having lost faith not only in science but also in authorities, such believers are rarely swayed simply by being presented with the facts.

Enter AI Chatbots

Generative AI has recently become a hot topic, and one of the chief fears surrounding it is the fake news it can generate and spread. AI systems can create realistic fakes that plant false information in people's minds and reinforce mistaken beliefs. Even a chatbot built with good intentions can carry errors and biases. It may therefore come as a surprise that AI can be effective at dislodging conspiracy beliefs that are deeply rooted in people's minds.

Yet Costello's findings suggest that AI chatbots can persuade at least some people to change their minds. The study set out to test whether factual, personalized messages from a chatbot could shift people away from conspiratorial thinking, and it recruited more than 2,000 participants. They were divided into two groups: in one condition, participants held a written conversation with a chatbot that directly debated their own conspiracy theory; in the other, the conversation was not targeted at their belief.
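
To make the setup concrete, here is a minimal sketch of how such a personalized debunking conversation could be driven through a general-purpose LLM service. It assumes access to OpenAI's chat completions API; the model name, prompts, and fixed three-round loop are illustrative choices, not the study's actual implementation.

```python
# Minimal sketch of a personalized debunking dialogue (illustrative only).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a careful fact-checker. The user believes the conspiracy theory "
    "described below. Respond politely with specific, verifiable evidence that "
    "addresses their stated reasons, without mocking or lecturing them."
)

def debunking_conversation(theory: str, reasons: str, rounds: int = 3) -> list[str]:
    """Run a short, personalized debate about one conspiracy theory."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Theory: {theory}\nWhy I believe it: {reasons}"},
    ]
    replies = []
    for _ in range(rounds):
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=messages,
        )
        answer = response.choices[0].message.content
        replies.append(answer)
        messages.append({"role": "assistant", "content": answer})
        # In the real experiment the participant replies here; this sketch
        # simply asks the model to keep addressing remaining doubts.
        messages.append({"role": "user", "content": "I'm still not fully convinced."})
    return replies
```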

To what extent did the chatbots prove useful?

The results were promising. In the group that received personalized interactions, belief in the targeted conspiracy theory fell by roughly 20% after three rounds of conversation, and most of that shift was still in place two months later.
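
For readers wondering what a reduction like that means in practice, here is a small, purely illustrative calculation. It assumes belief is self-reported on a 0-100 scale before and after the conversation; the scale and the sample numbers below are hypothetical, not data from the study.

```python
# Illustrative sketch: averaging per-participant percentage change in belief ratings.
def average_belief_change(before: list[float], after: list[float]) -> float:
    """Return the mean percentage drop in belief, paired by participant."""
    changes = [(b - a) / b * 100 for b, a in zip(before, after) if b > 0]
    return sum(changes) / len(changes)

# Three hypothetical participants rating their belief before and after.
before = [90.0, 70.0, 80.0]
after = [70.0, 60.0, 65.0]
print(f"Average reduction in belief: {average_belief_change(before, after):.1f}%")
```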

So even though the results are mixed and apply to only a portion of the chatbot conversations, AI remains a promising way to counter false beliefs, particularly for people who have not fully immersed themselves in conspiracy subcultures.

Why AI Works: Trust and Argumentation

Part of the explanation is that people open up to an AI chatbot because it appears to have no agenda of its own. For people who no longer trust public institutions, this perceived impartiality matters. Chatbots can also go beyond reciting facts to drawing conclusions, and facts combined with tailored reasoning make a far stronger argument than facts alone.

That said, AI is not a universal fix. The study also found that chatbots were less effective with people who had strong personal or community-based reasons to hold on to their views. For some believers, conspiracy theories are not really a matter of misinformation at all but a way of expressing identity and belonging to a group.

Is It Safe to Rely on Chatbots to Correct False Beliefs?

The study highlights how persuasive chatbots can be, but it also raises concerns about how they work. A well-designed chatbot can be highly effective at dispelling a misperception, yet it can also be just as wrong as the data it was trained on. Some AI chatbots, for example, have been built to defend falsehoods such as the flat-Earth theory.

Chatbots are also no different from other AI systems in that their answers depend on the questions put to them: biased or false premises in a question can end up reinforcing the very misinformation being asked about. Just as users who search for information with a particular prejudice tend to get that prejudice reflected back at them, chatbots can mislead depending on how a question is framed.
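
As a simple illustration of this framing effect, one could send a neutral question and a leading question that smuggles in a false premise to the same model and compare the answers. The snippet below reuses the client sketched earlier; the prompts and model name are again illustrative assumptions, not part of the study.

```python
# Compare how a model responds to neutral versus leading framings of the same topic.
from openai import OpenAI

client = OpenAI()

NEUTRAL = "What does the scientific evidence say about the shape of the Earth?"
LEADING = "Why do governments hide the evidence that the Earth is flat?"

for label, prompt in [("neutral", NEUTRAL), ("leading", LEADING)]:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} framing ---")
    print(response.choices[0].message.content)
```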

Conclusion: A Helpful Tool, Not a Complete Solution

The evidence so far suggests that AI chatbots have real potential in the fight against conspiracy theories, but they are not a universal solution. Changing minds with a chatbot depends heavily on how well the system is designed and run by its operators. And even a friendly, seemingly wise bot cannot always deliver the unbiased, reasoned arguments needed to change every mind, which is why conspiracy beliefs rooted in social identity and deeper psychological needs will still require the attention of human experts.
