AI Chatbots' Inaccurate Election Responses Pose Threat to Voters, Study Claims

A new study finds popular AI chatbots serving up highly inaccurate election information.

Amid the ongoing presidential primaries in the US, there are concerns that popular chatbots are spreading inaccurate information that could discourage voter participation.

A recent report, compiled by artificial intelligence experts and a bipartisan group of election officials, highlights the risks associated with this issue.

AI Chatbots Serve Inaccurate US Election Information

Fifteen states and one territory are gearing up for the presidential nominating contests next week on Super Tuesday. Many people are already turning to AI-powered chatbots for essential information, such as details on the voting process.

Trained on vast amounts of text pulled from the internet, chatbots such as GPT-4 and Google's Gemini stand ready with AI-generated answers. However, they have been known to direct voters to nonexistent polling places or give illogical responses based on outdated information, according to the findings.

"The chatbots are not prepared to provide crucial, detailed information about elections for the general public," stated Seth Bluestein, a Republican city commissioner in Philadelphia. Bluestein, along with other election officials and AI researchers, tested the chatbots as part of a larger research project last month.

An AP journalist observed the group as it convened at Columbia University, fed five large language models prompts about the election, such as where a voter could find their nearest polling place, and then evaluated the responses they generated.

All five models tested (OpenAI's GPT-4, Meta's Llama 2, Google's Gemini, Anthropic's Claude, and Mixtral from the French company Mistral) failed to varying degrees when questioned about the democratic process, according to the report summarizing the workshop's results.

During the workshop, over half of the chatbots' responses were deemed inaccurate by participants. Additionally, 40% of the responses were considered harmful, as they spread outdated and incorrect information that could potentially restrict voting rights, according to the report.

For instance, when participants asked the chatbots where to vote in the ZIP code 19121, a majority-Black neighborhood in northwest Philadelphia, Google's Gemini incorrectly replied that no voting precinct exists with that code.

Laws Regulating AI Still on the Table

Meanwhile, people using AI are facing other problems. Google recently paused its Gemini AI image generator, with plans to relaunch it in the coming weeks, after the technology produced historically inaccurate images and other troubling responses. When asked to depict a German soldier during the Nazi era of World War 2, Gemini reportedly offered a variety of racially diverse images, according to the Wall Street Journal.

"They claim to subject their models to thorough safety and ethics testing," Maria Curi, a journalist specializing in technology policy, informed CBS News. "The testing processes are not clearly defined." People have been uncovering historical inaccuracies, raising concerns about the timing of releasing these models to the public.

Meta spokesperson Daniel Roberts told the Associated Press that the latest findings are meaningless because they do not accurately reflect how people interact with chatbots. Anthropic said it will release a new version of its AI tool in the coming weeks to provide accurate voting information.

In an email to CBS MoneyWatch, Meta noted that Llama 2 is a model for developers, not a tool consumers would use.

"When we submitted the same prompts to Meta AI - the product the public would use - the majority of responses directed users to resources for finding authoritative information from state election authorities, which is exactly how our system is designed," a Meta spokesperson said.

OpenAI said it plans to keep evolving its approach as it learns more about how its tools are used, but offered no specifics. Google and Mistral did not immediately respond to requests for comment.

In Nevada, where same-day voter registration has been permitted since 2019, four out of five chatbots tested by researchers incorrectly claimed that voters would be prevented from registering weeks before Election Day.

"It scared me, more than anything, because the information provided was wrong," stated Nevada Secretary of State Francisco Aguilar, a Democrat who took part in last month's testing workshop.

A recent poll conducted by The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy found that a majority of adults in the US are concerned about the potential for AI tools to amplify false and misleading information during the upcoming elections.

However, Congress has yet to pass laws regulating AI in politics, leaving tech companies to oversee the chatbot technology themselves.
