White House Collaborates with Hackers to Uncover Security Flaws in Artificial Intelligence

The hackers will try to see how these chatbots can be manipulated to cause harm.

OpenAI and other major AI providers, like Google and Microsoft, are working with the Biden administration to let thousands of hackers test the limits of their technology.

What They Aim to Find

The hackers will try to see how these chatbots can be manipulated to cause harm. They will also try to figure out whether the chatbots will share private information that users confide in them with other users.

The mass hacking event will bring thousands of people to the DEF CON hacker convention in Las Vegas. Organizers are seeking participants with a wide range of lived experiences, subject matter expertise, and backgrounds to hack at these models, according to AP News.

Read also: Rise of Artificial Intelligence Bots That Can Do School Essays Are Worrying Experts: Here's Why

A Mass Hack

The idea of a mass hack caught the attention of the US government in March at the South by Southwest festival in Austin, Texas.

The Defense Advanced Research Projects Agency (DARPA), the research arm of the US Department of Defense, launched a competition to develop chatbots that teams of researchers could hack.

The competition, called the DARPA Grand Challenge, aimed to improve the security and reliability of chatbots in various applications, including military operations. It attracted teams from around the world competing for a $2 million prize.

Visitors look at a booth showing "FirstContact", a problem-solving AI-equipped chatbot service by Japanese company Vitalify using ChatGPT, during the three-day 7th AI Expo, part of NexTech Week Tokyo 2023, Japan's largest trade show for artificial intelligence technology companies, at Tokyo Big Sight on May 10, 2023. (Photo by RICHARD A. BROOKS/AFP via Getty Images)

The DARPA Grand Challenge was just one example of how governments and organizations are increasingly recognizing the importance of securing chatbots from hacking and other forms of malicious activity.

As chatbots become more prevalent in everyday life, the need for robust security measures is becoming more urgent. The mass hacking event at DEF CON is another step in this direction, as researchers work to identify vulnerabilities and develop solutions to protect chatbots and their users.

In addition to the DARPA Grand Challenge, there have been other initiatives aimed at improving chatbot security. For instance, Facebook launched a bug bounty program in 2018 that rewards researchers who identify vulnerabilities in its Messenger platform.

However, securing chatbots is not just about preventing attacks from external sources. It also involves ensuring that the bots themselves operate ethically and transparently. Concerns have been raised that certain chatbots may perpetuate biases or even harm users when deployed for sensitive tasks such as mental health counseling.

To address these issues, organizations like the IEEE (Institute of Electrical and Electronics Engineers) have developed guidelines for designing ethical AI systems, including chatbots. These include principles such as transparency, accountability, and privacy protection.

Overall, while mass hacking events may seem alarming at first glance, they play an important role in identifying weaknesses so that appropriate measures can be taken to improve security and protect users' data and well-being.

Related article: Artificial Intelligence: Tech Moguls Warn Against Potential Dangers of AI

Tags
White House, Artificial intelligence, Hackers