LLM

Pierluigi Paganini December 27, 2025
LangChain core vulnerability allows prompt injection and data exposure

A critical flaw in LangChain Core could allow attackers to steal sensitive secrets and manipulate LLM responses via prompt injection. LangChain Core (langchain-core) is a key Python package in the LangChain ecosystem that provides core interfaces and model-agnostic tools for building LLM-based applications. A critical vulnerability, tracked as CVE-2025-68664 (CVSS score of 9.3), affects the […]

Pierluigi Paganini December 11, 2025
GeminiJack zero-click flaw in Gemini Enterprise allowed corporate data exfiltration

Google fixed GeminiJack, a zero-click Gemini Enterprise flaw that could leak corporate data via crafted emails, invites, or documents, Noma Security says. Google addressed a Gemini Enterprise flaw dubbed GeminiJack, which can be exploited in zero-click attacks triggered via crafted emails, invites, or documents. The vulnerability could have exposed sensitive corporate data, according to Noma […]

Pierluigi Paganini November 14, 2025
Germany’s BSI issues guidelines to counter evasion attacks targeting LLMs

Germany’s BSI warns of rising evasion attacks on LLMs, issuing guidance to help developers and IT managers secure AI systems. Germany’s BSI warns of rising evasion attacks on LLMs, issuing guidance to help developers and IT managers secure AI systems and mitigate related risks. A significant and evolving threat to AI systems based on large […]

Pierluigi Paganini November 09, 2025
AI chat privacy at risk: Microsoft details Whisper Leak side-channel attack

Microsoft uncovered Whisper Leak, a side-channel attack that lets network snoopers infer AI chat topics despite encryption, risking user privacy. Microsoft revealed a new side-channel attack called Whisper Leak, which lets attackers who can monitor network traffic infer what users discuss with remote language models, even when the data is encrypted. The company warned that […]

Pierluigi Paganini September 22, 2025
Researchers expose MalTerminal, an LLM-enabled malware pioneer

SentinelOne uncovered MalTerminal, the earliest known malware with built-in LLM capabilities, and presented it at LABScon 2025. SentinelLABS researchers discovered MalTerminal, the earliest known LLM-enabled malware, which generates malicious logic at runtime, making the detection more complex. Researchers identified it via API key patterns and prompt structures, uncovering new samples and other offensive LLM uses, […]

Pierluigi Paganini September 09, 2025
LunaLock Ransomware threatens victims by feeding stolen data to AI models

LunaLock, a new ransomware gang, introduced a unique cyber extortion technique, threatening to turn stolen art into AI training data. A new ransomware group, named LunaLock, appeared in the threat landscape with a unique cyber extortion technique, threatening to turn stolen art into AI training data. Recently, the LunaLock group targeted the website Artists&Clients and […]

Pierluigi Paganini August 07, 2025
Microsoft unveils Project Ire: AI that autonomously detects malware

Microsoft’s Project Ire uses AI to autonomously reverse engineer and classify software as malicious or benign. Microsoft announced Project Ire, an autonomous artificial intelligence (AI) system that can autonomously reverse engineer and classify software. Project Ire is an LLM-powered autonomous malware classification system that uses decompilers and other tools, reviews their output, and determines the […]

Pierluigi Paganini July 18, 2025
LameHug: first AI-Powered malware linked to Russia’s APT28

LameHug malware uses AI to create data-theft commands on infected Windows systems. Ukraine links it to the Russia-nexus APT28 group. Ukrainian CERT-UA warns of a new malware strain dubbed LameHug that uses a large language model (LLM) to generate commands to be executed on compromised Windows systems. Ukrainian experts attribute the malware to the Russia-linked […]

Pierluigi Paganini February 15, 2024
Nation-state actors are using AI services and LLMs for cyberattacks

Microsoft and OpenAI warn that nation-state actors are using ChatGPT to automate some phases of their attack chains, including target reconnaissance and social engineering attacks. Multiple nation-state actors are exploiting artificial intelligence (AI) and large language models (LLMs), including OpenAI ChatGPT, to automate their attacks and increase their sophistication. According to a study conducted by […]

Pierluigi Paganini December 02, 2023
Researchers devised an attack technique to extract ChatGPT training data

Researchers devised an attack technique that could have been used to trick ChatGPT into disclosing training data. A team of researchers from several universities and Google have demonstrated an attack technique against ChetGPT that allowed them to extract several megabytes of ChatGPT’s training data. The researchers were able to query the model at a cost […]