'Synthetic cancer': This virus is using ChatGPT to spread itself
Virus leverages ChatGPT to write human-like emails and attaches itself as a seemingly harmless file

Jul 05, 2024, 12:29 pm

What's the story

Researchers David Zollikofer from ETH Zurich and Ben Zimmerman from Ohio State University have developed a computer virus that uses the capabilities of ChatGPT to disguise itself and spread via AI-generated emails. The virus, dubbed "synthetic cancer," can also alter its own code to evade antivirus scans. "We ask ChatGPT to rewrite the file, keeping the semantic structure intact, but changing the way variables are named and changing the logic a bit," explained Zollikofer.

Malware disguise

Virus uses AI to craft contextually relevant emails

The virus infiltrates a victim's system and uses Outlook to generate contextually relevant email replies, attaching itself as a seemingly harmless file. This demonstrates how AI chatbots can be manipulated to spread malware efficiently and stealthily. For instance, the AI crafted an email inviting a recipient named Claire to a birthday celebration, with the worm attached as a file called 80sNostalgiaPlaylist.exe; if opened, the file would install the worm on Claire's system.

Cybersecurity concerns

Researchers highlight cybersecurity risks of large language models

Zollikofer and Zimmerman have underscored the potential risks posed by large language models (LLMs) in cybersecurity. In their yet-to-be-peer-reviewed paper, they stated that their "submission includes a functional minimal prototype, highlighting the risks that LLMs pose for cybersecurity and underscoring the need for further research into intelligent malware." Interestingly, there were instances where ChatGPT detected the malicious intent of the virus and refused to cooperate.

Expert opinion

Cybersecurity expert expresses concern over AI malware

Alan Woodward, a cybersecurity researcher at the University of Surrey, expressed concern over these developments. "I think we should be concerned," he said. "There are various ways we already know that LLMs can be abused, but the scary part is the techniques can be improved by asking the technology itself to help." Despite these concerns, Zollikofer remains optimistic about potential defensive applications of these technologies.