
ADVERSARIAL OCTOPUS – ATTACK DEMO FOR AI-DRIVEN FACIAL RECOGNITION ENGINE

Researchers from Adversa devised an attack technique, dubbed ADVERSARIAL OCTOPUS, against Facial Recognition systems.

THE INTENTION BEHIND THIS PROJECT

Driven by our mission to increase trust in AI, Adversa’s AI Red Team is constantly exploring new methods of assessing and protecting mission-critical AI applications.

Recently, we’ve discovered a new way of attacking Facial Recognition systems and decided to show it in practice. Our demonstration reveals that current AI-driven facial recognition tools are vulnerable to attacks that may lead to severe consequences.

There are well-known problems in facial recognition systems, such as bias, that could lead to fraud or even wrongful prosecutions. Yet, we believe that the topic of attacks against AI systems requires much more attention. We aim to raise awareness and help enterprises and governments deal with the emerging problem of Adversarial Machine Learning.


ATTACK DETAILS

We have developed a new attack on AI-driven facial recognition systems that can alter your photo in such a way that an AI system will recognize you as a different person, in fact as anyone you want.

This is possible because of imperfections in currently available facial recognition algorithms and in AI applications in general. This type of attack may lead to dire consequences and can be used both in poisoning scenarios, by subverting computer vision algorithms, and in evasion scenarios, such as creating stealth deepfakes.

The new attack is able to bypass facial recognition services, applications, and APIs, including PimEyes, described by the Washington Post as the most advanced online facial recognition search engine on the planet. Its main feature is that it combines various approaches for maximum efficiency.

This attack on PimEyes was built with the following methods from our attack framework (a rough sketch of these ideas follows the list):

  • For better Transferability, it was trained on an ensemble of various facial recognition models together with random noise and blur.
  • For better Accuracy, it was designed to calculate adversarial changes for each layer of a neural network and use a random face detection frame.
  • For better Imperceptibility, it was optimized for small changes to each pixel and used special functions to smooth adversarial noise.
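
Since the exploit code is not public, here is a minimal, hedged sketch of how the ideas listed above (ensemble training, noise and blur augmentation, small bounded per-pixel changes) are typically combined in a targeted attack. It is written in PyTorch; the function name, parameters, and augmentation choices are illustrative assumptions, not Adversa's actual implementation, and `models` is assumed to be a list of pre-loaded face-embedding networks.

```python
import torch
import torch.nn.functional as F
from torchvision.transforms import GaussianBlur

def targeted_ensemble_attack(source_img, target_img, models,
                             eps=8 / 255, alpha=1 / 255, steps=200):
    """Perturb `source_img` so that its face embedding moves toward the
    embedding of `target_img` across an ensemble of models.

    source_img, target_img: tensors of shape (1, 3, H, W) with values in [0, 1].
    models: list of face-embedding networks (each maps an image to a vector).
    """
    blur = GaussianBlur(kernel_size=5)  # robustness to smoothing / resizing
    delta = torch.zeros_like(source_img, requires_grad=True)

    # Pre-compute the target identity embedding for each model in the ensemble.
    with torch.no_grad():
        targets = [F.normalize(m(target_img), dim=-1) for m in models]

    for _ in range(steps):
        adv = (source_img + delta).clamp(0, 1)
        # Random noise and blur augmentations improve transferability.
        aug = blur(adv + 0.01 * torch.randn_like(adv)).clamp(0, 1)

        # Average cosine distance to the target embedding over the ensemble.
        loss = 0.0
        for m, t in zip(models, targets):
            emb = F.normalize(m(aug), dim=-1)
            loss = loss + (1.0 - (emb * t).sum())
        loss = loss / len(models)

        loss.backward()
        with torch.no_grad():
            # Small, bounded per-pixel step keeps the change imperceptible.
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()

    return (source_img + delta).detach().clamp(0, 1)
```

The per-pixel bound `eps` and the blur/noise augmentations correspond to the imperceptibility and transferability points above; a real attack framework would add further augmentations (for example random face detection frames) and per-layer objectives.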

We follow a principle of responsible disclosure and are currently coordinating with organizations to protect their critical AI applications from this attack, so we cannot release the exploit code publicly yet.


ATTACK DEMO

We present an example of how PimEyes.com, the most popular search engine for public images (similar to Clearview, a commercial facial recognition database sold to law enforcement and governments), mistook a man for Elon Musk in a photo.

The new black-box, one-shot, stealthy, transferable attack is able to bypass facial recognition AI models and APIs, including PimEyes.com, the most advanced online facial recognition search engine.

You can see a demo of the ‘Adversarial Octopus’ targeted attack below.



WHO CAN EXPLOIT SUCH VULNERABILITIES

Uniquely, this is a black-box attack, developed without any detailed knowledge of the algorithms used by the search engine, and the exploit is transferable to any AI application dealing with faces, whether for internet services, biometric security, surveillance, law enforcement, or other scenarios.
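
To illustrate what "transferable" means in practice, the following hedged sketch checks an adversarial image against a face-embedding model that was not part of the attack ensemble, i.e. a stand-in for the black-box target. The model and image arguments are assumed inputs; nothing here reflects PimEyes' actual API or Adversa's tooling.

```python
import torch
import torch.nn.functional as F

def transfer_check(holdout_model, adv_img, source_img, target_img):
    """Report how close `adv_img` sits to the target identity versus the
    original identity in a model that was NOT used to craft the attack."""
    def sim(a, b):
        ea = F.normalize(holdout_model(a), dim=-1)
        eb = F.normalize(holdout_model(b), dim=-1)
        return (ea * eb).sum().item()

    with torch.no_grad():
        to_target = sim(adv_img, target_img)  # high if the attack transfers
        to_source = sim(adv_img, source_img)  # ideally drops below the match threshold
    return to_target, to_source
```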

The existence of such vulnerabilities in AI applications, and facial recognition engines in particular, may lead to dire consequences:

  • Hacktivists may wreak havoc on AI-driven internet platforms that use face properties as input for decisions or further training. Attackers can poison or evade the algorithms of big Internet companies by manipulating their profile pictures.
  • Cybercriminals can steal personal identities and bypass AI-driven biometric authentication or identity verification systems in banks, trading platforms, or other services that offer authenticated remote assistance. This attack can be even stealthier than traditional deepfakes in every scenario where they can be applied.
  • Terrorists or dissidents may secretly use it to hide their internet activities on social media from law enforcement. It resembles a mask or fake identity for the virtual world we currently live in.

WHERE THE ATTACK IS APPLICABLE

The main feature of this attack is that it is applicable to multiple AI implementations, including online APIs and physical devices. It is constructed so that it can adapt to the target environment, which is why we call it Adversarial Octopus. Besides that, it shares three important features with this smart creature.

Original post at:

Follow me on Twitter: @securityaffairs and Facebook


Pierluigi Paganini

(SecurityAffairs – hacking, FACIAL RECOGNITION)
