Researchers demonstrate that malware can be concealed in neural networks
In our pursuit of better software to power our everyday lives, we seem to have forgotten that malware can be hidden in almost anything that runs on a computer. According to a recent study, malware can be embedded into the neurons of a machine learning model effectively enough to evade detection, all while the neural network continues to operate normally.
Malware-embedded neural network model performed within 1% of the original
A research paper recently published by Zhi Wang, Chaoge Liu, and Xiang Cui presents a method of hiding 36.9MB of malware in a 178MB AlexNet image-classification model. The malware-embedded model performed within a 1% error margin of the original, malware-free model, and the embedded malware also managed to avoid detection.
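The article doesn't reproduce the authors' code, but the core trick can be sketched. The Python snippet below is a minimal illustration, assuming the payload overwrites the three low-order bytes of each 32-bit float weight, one plausible reading of the "disassembled into neurons" description; the function name embed_payload and the exact byte layout are illustrative assumptions, not the authors' published implementation.

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide raw payload bytes in the low-order bytes of float32 weights.

    Illustrative sketch: each little-endian float32 is four bytes, and the
    most significant byte (index 3) holds the sign and most of the exponent.
    Overwriting bytes 0-2 leaves the sign and most of the exponent untouched,
    so the perturbation to each weight is bounded.
    """
    flat = weights.astype(np.float32).ravel().copy()   # contiguous working copy
    raw = flat.view(np.uint8).reshape(-1, 4)           # one 4-byte row per weight
    capacity = raw.shape[0] * 3
    if len(payload) > capacity:
        raise ValueError(f"payload ({len(payload)} B) exceeds capacity ({capacity} B)")
    padded = payload + b"\x00" * (-len(payload) % 3)   # pad to a multiple of 3 bytes
    data = np.frombuffer(padded, dtype=np.uint8)
    raw[: len(data) // 3, :3] = data.reshape(-1, 3)    # 3 payload bytes per weight
    return raw.reshape(-1).view(np.float32).reshape(weights.shape)
```

Because most of each weight's magnitude survives the overwrite, a classifier built from the modified weights can keep performing close to the original; the sub-1% accuracy cost the researchers report is consistent with this kind of perturbation.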
Malware remained undetected by 50 common antivirus programs
The researchers, all from the University of Chinese Academy of Sciences, explained that replacing up to 50% of the AlexNet model's neurons still kept the model's accuracy above 93.1%. Such models were reportedly tested against 50 common antivirus systems, but the malware remained undetected. According to the researchers, the malware is "disassembled" before being embedded into the network's neurons.
Malware would need to be extracted, recompiled on victim's computer
The researchers noted that hiding the malware is only half the job for bad actors. They would also need a malicious receiver program that extracts and reassembles the malware on the victim's computer. This means the attack can be stopped if the victim verifies the neural network model before loading it. Additionally, the attack can be detected by "traditional methods" including static and dynamic analysis.
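Continuing the hypothetical sketch above, the receiver side simply reads the same byte positions back out and reassembles the payload. This sketch assumes the receiver already knows the payload length (communicated separately, or stored in a fixed header); again, extract_payload is an illustrative name, not the authors' code.

```python
def extract_payload(weights: np.ndarray, length: int) -> bytes:
    """Recover `length` payload bytes hidden by embed_payload (same assumed layout)."""
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return raw[:, :3].reshape(-1)[:length].tobytes()   # re-read 3 low bytes per weight
```

Only after this reassembly step does the payload exist as a conventional executable, which is what gives on-host scanners a chance to catch it.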
Once reassembled, malware could be detected by real-time protection systems
The malware is hidden using a classic technique called steganography, in which one piece of data is concealed within another. However, once the malware has been reassembled to infect the computer, it risks detection by the victim's antivirus program.
Antivirus probably isn't optimized for scanning neural networks yet
In conversation with Motherboard, cybersecurity researcher Dr. Lukasz Olejnik explained that antivirus software couldn't find the malware hidden in the neural network simply because "nobody is looking in there." Dr. Olejnik's point is likely that conventional antivirus is designed to find malware bundled into applications and consumer-grade programs, while machine learning models remain a niche use case for consumers today.
Researchers hope their work contributes to future cybersecurity efforts
On the flip side, neural network models are often large and complex, giving potential bad actors room to conceal even larger payloads. The well-intentioned researchers who developed the method, however, are optimistic. Their paper notes that "AI-assisted attacks will emerge and bring new challenges for computer security," and they hope "the proposed scenario will contribute to future protection efforts."