Experts Warn of Growing Threat of AI-Generated Misinformation in 2024 US Elections

Experts are calling it an alarming trend.

  • Generative AI tools can produce hyper-realistic images, videos, and audio that can deceive voters and sway elections.
  • AI experts warn that generative AI can be used to mislead voters, impersonate candidates, and undermine elections at an unprecedented pace.
  • Some organizations are using AI language models to detect and debunk disinformation, but the same technologies can also be used to deceive people and erode trust.

For years, computer engineers and tech-savvy political scientists have warned that the availability of powerful AI tools could lead to the creation of fake images, videos, and audio that could deceive voters and sway elections.

Synthetic images produced years ago were typically crude, unconvincing, and expensive to make compared with other forms of misinformation that were more readily available on social media. Today's AI-generated images are far more true to life.

With the rapid advancement of generative AI tools, hyper-realistic images, videos, and audio can now be produced in seconds and at a minimal cost. Experts say that this technology can also be integrated into powerful social media algorithms to spread fake content far and wide, targeting specific audiences and taking campaign dirty tricks to a whole new level, per an AP News report.

The potential impact of generative artificial intelligence on the 2024 campaigns and elections is alarming. According to AI experts, the technology could be used to send targeted campaign emails, messages, and videos that mislead voters, impersonate and defame candidates, incite violence, and sabotage elections on an unprecedented scale.

AI Will Be Used in Political Strategies

Last month, following the announcement that Democratic President Joe Biden would run for re-election, the Republican National Committee released an official party video composed entirely of AI-generated images.

Given AI's rapid development and availability, this trend is hardly surprising. Darrell West, a senior fellow at the Brookings Institution's Center for Technology Innovation, told Al Jazeera that AI was rarely used in political campaigns three years ago but has since become much simpler to deploy. The result is a new age in which anyone can produce incredibly realistic-looking videos "without becoming a software designer or video editing expert," making it easier to present realities that may not exist.

West added that this is a new area with far-reaching repercussions, in which people can create images from scratch, make videos, and share their own version of reality on social media.

A Complicated Matter

Newtral, a Spanish company that specializes in fact-checking politicians' statements, has begun using large language models to detect and debunk disinformation. The models are comparable to those behind OpenAI's ChatGPT.

Ruben Miguez Perez, Newtral's Chief Technology Officer, says such tools can identify when a piece of content makes a factual claim that needs to be verified and estimate whether it is likely to be misinformation based on the emotions it conveys, per Interesting Engineering.
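To make that approach concrete, the sketch below chains a claim-detection step with a rough emotion score using off-the-shelf language models. It is a minimal illustration under assumed model choices and a made-up weighting heuristic, not a description of Newtral's actual pipeline.

```python
# Hypothetical sketch of claim detection plus emotion scoring.
# Model choices and the priority formula are assumptions, not Newtral's system.
from transformers import pipeline

# Zero-shot classifier: does this sentence contain a checkable factual claim?
claim_detector = pipeline("zero-shot-classification")
# Generic sentiment model as a crude proxy for emotional charge.
emotion_scorer = pipeline("sentiment-analysis")

def triage(sentence: str) -> dict:
    """Return a rough 'check-worthiness' score for one sentence."""
    claim = claim_detector(
        sentence, candidate_labels=["verifiable factual claim", "opinion"]
    )
    claim_score = dict(zip(claim["labels"], claim["scores"]))["verifiable factual claim"]

    emotion = emotion_scorer(sentence)[0]
    # Strongly negative, emotionally charged statements get a higher weight.
    emotion_weight = emotion["score"] if emotion["label"] == "NEGATIVE" else 0.0

    return {
        "sentence": sentence,
        "claim_score": round(claim_score, 3),
        "emotion_weight": round(emotion_weight, 3),
        "priority": round(0.7 * claim_score + 0.3 * emotion_weight, 3),
    }

if __name__ == "__main__":
    print(triage("The candidate's speech was secretly generated by AI to scare voters."))
```

A production fact-checking system would presumably rely on models fine-tuned for claim detection and multilingual emotion analysis rather than the generic defaults used here; the point is only to show how the two signals the article describes might be combined.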

However, the same technologies could also deceive people and undermine trust by making them doubt the veracity of information supplied by experts and their own social connections.

Even if AI technology proves to be the most powerful defense against these tools, the danger of widespread, machine-generated disinformation demands deeper thought and investigation.

Tags
Artificial intelligence, AI, Tech, United States, Politics, Joe Biden, Republican