Bing’s AI Chatbot Faces Accuracy Concerns in Election Info

The study conducted by two nonprofit organizations headquartered in Europe found that the artificial intelligence (AI) Bing chatbot, which has since been rebranded as Copilot, generates misleading results about election information and misquotes its sources.

The study, published on December 15 by AI Forensics and AlgorithmWatch, found that Bing’s AI chatbot gave incorrect answers thirty percent of the time when asked basic questions about political elections in Germany and Switzerland.

The inaccurate replies concerned facts about candidates, polls, controversies, and voting. The chatbot also gave false answers to questions about the 2024 presidential election in the United States.

The research focused on Bing’s AI chatbot because it was one of the first to cite sources in its responses. The researchers also noted that the mistakes are not exclusive to Bing: preliminary tests on ChatGPT-4 reportedly revealed similar inconsistencies.

The nonprofit organizations made clear that the incorrect information has not affected any election results, although it may have contributed to public confusion and disinformation.

The report warned: “As generative AI becomes more widespread, this could affect one of the cornerstones of democracy: the access to reliable and transparent public information.”

The research additionally found that the chatbot’s safeguards were applied unevenly, leading it to give deceptive answers forty percent of the time. According to the Wall Street Journal, Microsoft has responded to the findings and stated its intention to fix the issues before the 2024 presidential election in the United States.

A Microsoft spokesperson urged users to verify the accuracy of information obtained from AI chatbots. In October, United States senators introduced a bill that would penalize those who intentionally create unauthorized AI reproductions of real people, living or dead.

In November, Meta, the parent company of Facebook and Instagram, introduced a directive prohibiting the use of its generative AI ad-creation tools for political advertising, a precautionary measure ahead of the 2024 elections.
