LLM

Pierluigi Paganini September 09, 2025
LunaLock Ransomware threatens victims by feeding stolen data to AI models

A new ransomware group, named LunaLock, appeared in the threat landscape with a unique cyber extortion technique: threatening to turn stolen art into AI training data. Recently, the LunaLock group targeted the website Artists&Clients and […]

Pierluigi Paganini August 07, 2025
Microsoft unveils Project Ire: AI that autonomously detects malware

Microsoft’s Project Ire uses AI to autonomously reverse engineer and classify software as malicious or benign. Microsoft announced Project Ire, an autonomous artificial intelligence (AI) system that can reverse engineer and classify software without human intervention. Project Ire is an LLM-powered malware classification system that uses decompilers and other tools, reviews their output, and determines the […]
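The workflow described above (run a reverse-engineering tool, have the model review its output, then render a verdict) can be illustrated with a minimal sketch. Everything here is hypothetical: `decompile`, `ask_model`, and the suspicious-API heuristic are illustrative stand-ins, not Microsoft APIs, and a stub replaces the real LLM.

```python
# Hypothetical sketch of an LLM-in-the-loop malware triage pipeline,
# loosely modeled on the tool-use workflow described for Project Ire.
# All names here are illustrative stand-ins, not real Microsoft APIs.

from dataclasses import dataclass, field


@dataclass
class Verdict:
    label: str                          # "malicious" or "benign"
    evidence: list = field(default_factory=list)  # findings accumulated


def decompile(binary: bytes) -> str:
    """Stand-in for a decompiler tool invocation (e.g. emitting pseudo-C)."""
    # For the sketch, pretend the bytes decode directly to pseudo-code text.
    return binary.decode("latin-1", errors="replace")


def ask_model(pseudo_code: str) -> list:
    """Stub for the LLM review step: flag suspicious capabilities.

    A real system would prompt a model with the decompiler output;
    here a keyword scan for process-injection APIs plays that role.
    """
    suspicious = ["CreateRemoteThread", "VirtualAllocEx", "WriteProcessMemory"]
    return [api for api in suspicious if api in pseudo_code]


def classify(binary: bytes) -> Verdict:
    # 1) Run a tool (the decompiler) on the sample.
    pseudo = decompile(binary)
    # 2) Have the "model" review the tool's output.
    findings = ask_model(pseudo)
    # 3) Decide based on the accumulated evidence.
    return Verdict("malicious" if findings else "benign", findings)


if __name__ == "__main__":
    sample = b"... VirtualAllocEx ... WriteProcessMemory ..."
    verdict = classify(sample)
    print(verdict.label, verdict.evidence)
```

The design point the sketch captures is that the model never sees raw bytes: tools produce human-readable artifacts, and the model's job is to interpret them and justify a classification with evidence.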

Pierluigi Paganini July 18, 2025
LameHug: first AI-Powered malware linked to Russia’s APT28

LameHug malware uses AI to create data-theft commands on infected Windows systems. Ukraine links it to the Russia-nexus APT28 group. Ukrainian CERT-UA warns of a new malware strain dubbed LameHug that uses a large language model (LLM) to generate commands to be executed on compromised Windows systems. Ukrainian experts attribute the malware to the Russia-linked […]

Pierluigi Paganini February 15, 2024
Nation-state actors are using AI services and LLMs for cyberattacks

Microsoft and OpenAI warn that nation-state actors are using ChatGPT to automate some phases of their attack chains, including target reconnaissance and social engineering attacks. Multiple nation-state actors are exploiting artificial intelligence (AI) and large language models (LLMs), including OpenAI's ChatGPT, to automate their attacks and increase their sophistication. According to a study conducted by […]

Pierluigi Paganini December 02, 2023
Researchers devised an attack technique to extract ChatGPT training data

Researchers devised an attack technique that could have been used to trick ChatGPT into disclosing training data. A team of researchers from several universities and Google has demonstrated an attack technique against ChatGPT that allowed them to extract several megabytes of ChatGPT’s training data. The researchers were able to query the model at a cost […]

Pierluigi Paganini August 03, 2023
OWASP Top 10 for LLM (Large Language Model) applications is out!

The OWASP Top 10 for LLM (Large Language Model) Applications version 1.0 is out; it focuses on the potential security risks of using LLMs. OWASP released the OWASP Top 10 for LLM (Large Language Model) Applications project, which provides a list of the top 10 most critical vulnerabilities impacting LLM applications. The project aims to educate […]

Pierluigi Paganini June 14, 2023
LLM meets Malware: Starting the Era of Autonomous Threat

Malware researchers analyzed the application of Large Language Models (LLMs) to malware automation, investigating potential future abuse in autonomous threats. Executive Summary In this report we share insights that emerged during our exploratory research, and proof of concept, on the application of Large Language Models to malware automation, investigating how a potential new kind of […]