We have discussed several times the impact of Artificial Intelligence (AI) on the threat landscape. From a defensive perspective, new instruments will allow the early detection of malicious patterns associated with threats; from the offensive point of view, machine learning tools can be exploited to create custom malware that defeats current anti-virus software.
At the recent DEF CON hacking conference, Hyrum Anderson, technical director of data science at security shop Endgame, demonstrated how to abuse a machine learning system to create malicious code that can avoid detection by security solutions.
Anderson adapted the Elon Musk-backed OpenAI Gym framework to create malware. The principle is quite simple: the system he built applies a few changes to a malicious sample so that it looks like legitimate code to a scanner. A few modifications can be enough to deceive AV engines; the tool created by the experts is a malware manipulation environment for OpenAI Gym.
“All machine learning models have blind spots,” he said. “Depending on how much knowledge a hacker has they can be convenient to exploit.”
Anderson and his group created a system that applies very small changes to a malware sample and submits it to a security checker. By analyzing the responses returned by the checker, the researchers were able to make lots of tiny tweaks that improved the malware's ability to avoid detection.
The machine learning system developed by the experts ran over 100,000 samples past an unnamed security engine during 15 hours of training. The results were worrisome: 16 per cent of the malware samples slipped past the security system's defenses.
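To make the query-and-tweak loop described above more concrete, here is a minimal sketch of the general idea. It is not Anderson's actual code: the mutate and scan helpers are hypothetical stand-ins for functionality-preserving PE transformations and for a real anti-malware engine, and the greedy loop is a simplification of the reinforcement learning approach used by the researchers.

```python
import random
from typing import Optional

# Hypothetical placeholders: in the real system the mutations are
# functionality-preserving PE transformations and the scanner is a
# static anti-malware engine; here they are simple stand-ins.
MUTATIONS = ["append_benign_section", "pad_overlay", "rename_imports"]

def mutate(sample: bytes, action: str) -> bytes:
    """Apply a (fake) tweak to the sample; stand-in for a real PE transformation."""
    return sample + action.encode()

def scan(sample: bytes) -> bool:
    """Stand-in for querying the security checker; True means 'detected'."""
    return random.random() < 0.9

def evade(sample: bytes, max_rounds: int = 10) -> Optional[bytes]:
    """Greedy loop: keep applying tweaks until the checker stops flagging the sample."""
    for _ in range(max_rounds):
        if not scan(sample):
            return sample          # evasive variant found
        sample = mutate(sample, random.choice(MUTATIONS))
    return None                    # gave up within the tweak budget

if __name__ == "__main__":
    result = evade(b"MZ...")       # toy input standing in for a PE file
    print("evaded" if result else "still detected")
```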
Anderson and his team published the code of the toolkit on GitHub.
“This is a malware manipulation environment for OpenAI’s gym. OpenAI Gym is a toolkit for developing and comparing reinforcement learning algorithms. This makes it possible to write agents that learn to manipulate PE files (e.g., malware) to achieve some objective (e.g., bypass AV) based on a reward provided by taking specific manipulation actions.” reads the description of the toolkit published on GitHub.
Anderson encouraged experts to try the environment and improve it.
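For readers who want to experiment, interaction with a Gym-style environment typically follows the classic reset/step pattern. The sketch below is an assumption-laden illustration: the environment id "malware-v0" and the reward semantics are placeholders, so check the repository's README for the actual registration name and API details.

```python
import gym  # classic OpenAI Gym API (reset returns obs, step returns a 4-tuple)

# Assumption: the malware manipulation environment registers itself under an
# id such as "malware-v0"; the real id may differ.
env = gym.make("malware-v0")

EPISODES = 5
for episode in range(EPISODES):
    observation = env.reset()                  # features of a fresh malware sample
    done = False
    total_reward = 0.0
    while not done:
        action = env.action_space.sample()     # pick a random manipulation action
        observation, reward, done, info = env.step(action)
        total_reward += reward                 # reward signals whether AV was bypassed
    print(f"episode {episode}: reward={total_reward}")
```

A trained agent would replace the random action choice with a learned policy, which is what allows the system to discover which sequences of tweaks reliably slip a sample past the engine.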
(Security Affairs – OpenAI Gym, machine learning systems)