ChatGPT Can Create Mutating Malware, Warn Security Experts—Hackers Only Need Prompts

Will there be a point when ChatGPT's malware can't be detected?

There's no denying that ChatGPT is one of the most effective AI models in the tech industry.

A participant sits with a laptop computer as he attends the annual Chaos Communication Congress of the Chaos Computer Club at the Berlin Congress Center on December 28, 2010 in Berlin, Germany. The Chaos Computer Club is Europe's biggest network of computer hackers and its annual congress draws up to 3,000 participants. Sean Gallup/Getty Images

However, OpenAI's AI chatbot also poses numerous risks, including in cybersecurity. Many people already use ChatGPT to make their daily tasks easier. But, as this chatbot benefits users, it can also help malicious actors create malware.

Numerous cybersecurity experts have already shared their concerns about ChatGPT's ability to create mutating malware.

Here's what we know so far about the cybersecurity risks of ChatGPT.

ChatGPT Can Create Mutating Malware, Warn Security Experts

According to a recent CSO Online report, cybersecurity researchers have demonstrated a key problem with ChatGPT.

In this photo illustration, the welcome screen for the OpenAI "ChatGPT" app is displayed on a laptop screen on February 03, 2023 in London, England. OpenAI, whose online chatbot ChatGPT made waves when it was debuted in December, announced this week that a commercial version of the service, called ChatGPT Plus, would soon be available to users in the United States. Leon Neal/Getty Images

This is its ability to generate mutating (polymorphic) malware code. The researchers warned that this mutating malware could even evade EDR (endpoint detection and response) systems.

Security experts warned that, instead of reproducing already-written code snippets, hackers could use ChatGPT to generate dynamic, mutating versions of malicious code. Because of this, ChatGPT-generated malware could effectively evade cybersecurity tools.
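To illustrate why mutating code is hard to catch, here is a minimal, benign Python sketch, not taken from the report, of how signature-style matching works: two snippets that do the same thing but are written differently produce different hashes, so a detector that keys on exact bytes or hashes treats them as unrelated. This shows the general idea only, not how any specific EDR product works.

import hashlib

# Two benign snippets with identical behavior but different bytes.
variant_a = "print('hello world')"
variant_b = "msg = 'hello world'\nprint(msg)"

# A purely signature-style check compares exact bytes (or their hashes).
hash_a = hashlib.sha256(variant_a.encode()).hexdigest()
hash_b = hashlib.sha256(variant_b.encode()).hexdigest()

print("variant A:", hash_a)
print("variant B:", hash_b)
print("signatures match:", hash_a == hash_b)  # False: same behavior, different fingerprint

The same behavior with a different fingerprint is the core of the experts' concern: each generated variant looks new to tools that rely on static signatures.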

Wired reported that OpenAI integrates guardrails into ChatGPT and its other artificial intelligence technologies.

However, these protections can be bypassed with the right prompts. Numerous users have already shared prompts that can remove ChatGPT's restrictions. One of these is the DAN (Do Anything Now) prompt.

ChatGPT Could Create More Dangerous Malware

Mackenzie Jackson, a developer advocate at GitGuardian, said that the malicious code ChatGPT generates is still far from groundbreaking. However, ChatGPT and similar AI tools become more capable as they consume more sample data.

New products coming onto the market also help these artificial intelligence models improve.

Once these AI tools become more advanced, there's a chance that hackers will take advantage of those improvements to generate more effective malware, such as mutating malware.

Jackson even believes that the malware ChatGPT produces in the future may no longer be detectable.

Only AI systems designed for cybersecurity defense may be able to identify it. "What side will win at this game is anyone's guess," Jackson added.
