Nation-state actors are using AI services and LLMs for cyberattacks

Pierluigi Paganini February 15, 2024

Microsoft and OpenAI warn that nation-state actors are using ChatGPT to automate some phases of their attack chains, including target reconnaissance and social engineering attacks.

Multiple nation-state actors are exploiting artificial intelligence (AI) and large language models (LLMs), including OpenAI's ChatGPT, to automate their attacks and increase their sophistication.

In a joint study, Microsoft and OpenAI identified and disrupted operations conducted by five nation-state actors that abused their AI services to carry out attacks.

The researchers observed the following APT groups using artificial intelligence (AI) and large language models (LLMs) in various phases of their attack chain: Charcoal Typhoon, Salmon Typhoon, Crimson Sandstorm, Emerald Sleet, and Forest Blizzard.

“Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets’ jobs, professional networks, and other relationships.” reads the report published by Microsoft. “Importantly, our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely.”

The researchers pointed out that, at this time, the attackers have yet to use LLMs to devise novel attacks. The malicious uses of LLMs observed by the researchers include:

  • LLM-informed reconnaissance: Employing LLMs to gather actionable intelligence on technologies and potential vulnerabilities.
  • LLM-enhanced scripting techniques: Utilizing LLMs to generate or refine scripts that could be used in cyberattacks, or for basic scripting tasks such as programmatically identifying certain user events on a system and assistance with troubleshooting and understanding various web technologies.
  • LLM-aided development: Utilizing LLMs in the development lifecycle of tools and programs, including those with malicious intent, such as malware.
  • LLM-supported social engineering: Leveraging LLMs for assistance with translations and communication, likely to establish connections or manipulate targets.
  • LLM-assisted vulnerability research: Using LLMs to understand and identify potential vulnerabilities in software and systems, which could be targeted for exploitation.
  • LLM-optimized payload crafting: Using LLMs to assist in creating and refining payloads for deployment in cyberattacks.
  • LLM-enhanced anomaly detection evasion: Leveraging LLMs to develop methods that help malicious activities blend in with normal behavior or traffic to evade detection systems.
  • LLM-directed security feature bypass: Using LLMs to find ways to circumvent security features, such as two-factor authentication, CAPTCHA, or other access controls.
  • LLM-advised resource development: Using LLMs in tool development, tool modifications, and strategic operational planning.

Microsoft’s report details the use of LLMs by each APT group. For instance, the Iranian nation-state actor Crimson Sandstorm (CURIUM) used the AI services to generate phishing emails and code snippets, and to assist in developing code to evade detection.

OpenAI reported that the APT groups used its AI services to carry out the following tasks:

  • Charcoal Typhoon used our services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns.
  • Salmon Typhoon used our services to translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, assist with coding, and research common ways processes could be hidden on a system.
  • Crimson Sandstorm used our services for scripting support related to app and web development, generating content likely for spear-phishing campaigns, and researching common ways malware could evade detection.
  • Emerald Sleet used our services to identify experts and organizations focused on defense issues in the Asia-Pacific region, understand publicly available vulnerabilities, help with basic scripting tasks, and draft content that could be used in phishing campaigns.
  • Forest Blizzard used our services primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.

Microsoft also announced principles shaping its policy and actions to mitigate the risks associated with the abuse of its AI services by nation-state actors, advanced persistent manipulators (APMs), and cybercriminal syndicates.

The principles include identification of and action against malicious threat actors’ use, notification to other AI service providers, collaboration with other stakeholders, and transparency.



