Hackers are using LLMs like ChatGPT to enhance cyberattacks, says Microsoft
Microsoft and OpenAI have found that hackers are using large language models (LLMs) like ChatGPT to enhance their cyberattacks. In a blog post, Microsoft explained, "Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge." It added, "This is an attempt to understand potential value to their operations and the security controls they may need to circumvent."
Various hacking groups exploit LLMs
Hackers from Russia, North Korea, Iran, and China have been caught using LLMs for a range of purposes. The Strontium group, linked to Russian military intelligence, uses LLMs to study radar imaging technologies, satellite communication protocols, and specific technical parameters. Meanwhile, North Korea's Thallium group employs LLMs to research vulnerabilities, perform basic scripting tasks, and create content for phishing campaigns.
AI-assisted phishing emails and code generation
Iranian hacking group Curium is using LLMs to craft phishing emails and code that can evade antivirus software. Chinese state-affiliated hackers are also using LLMs for research, translation, scripting, and improving their existing tools. A senior official at the National Security Agency (NSA) warned in January that hackers are using AI to make their phishing emails more believable.
AI-powered fraud and Microsoft's response
Microsoft cautions against future AI-powered fraud risks such as voice impersonation. "A three-second voice sample can train a model to sound like anyone," said the company, adding, "Even something as innocuous as your voicemail greeting can be used to get a sufficient sampling." Homa Hayatyfar, Principal Detection Analytics Manager at Microsoft, said the company uses AI to detect, respond to, and protect against the more than 300 threat actors it tracks.
What is Security Copilot?
Microsoft has announced that it is working on Security Copilot. This upcoming AI assistant, designed for cybersecurity professionals, will identify breaches and help make sense of the signals and data generated by cybersecurity tools every day. The firm is also strengthening its software security following Russian hackers' spying attempts on Microsoft executives and significant attacks on its Azure cloud.