LLM

Pierluigi Paganini February 23, 2026
AI-powered campaign compromises 600 FortiGate systems worldwide

A Russian-speaking cybercriminal used commercial generative AI tools to hack over 600 FortiGate devices across 55 countries. Amazon Threat Intelligence reports that a Russian-speaking, financially motivated threat actor used commercial generative AI services to compromise more than 600 FortiGate devices in 55 countries. The activity, observed between January 11 and February 18, 2026, highlights how […]

Pierluigi Paganini February 14, 2026
Suspected Russian hackers deploy CANFAIL malware against Ukraine

A new alleged Russia-linked APT group targeted Ukrainian defense, government, and energy groups, with CANFAIL malware. Google Threat Intelligence Group identified a previously undocumented threat actor behind attacks on Ukrainian organizations using CANFAIL malware. The group is possibly linked to Russian intelligence services and has targeted defense, military, government, and energy entities at both regional […]

Pierluigi Paganini December 27, 2025
LangChain core vulnerability allows prompt injection and data exposure

A critical flaw in LangChain Core could allow attackers to steal sensitive secrets and manipulate LLM responses via prompt injection. LangChain Core (langchain-core) is a key Python package in the LangChain ecosystem that provides core interfaces and model-agnostic tools for building LLM-based applications. A critical vulnerability, tracked as CVE-2025-68664 (CVSS score of 9.3), affects the […]

Pierluigi Paganini December 11, 2025
GeminiJack zero-click flaw in Gemini Enterprise allowed corporate data exfiltration

Google fixed GeminiJack, a zero-click Gemini Enterprise flaw that could leak corporate data via crafted emails, invites, or documents, Noma Security says. Google addressed a Gemini Enterprise flaw dubbed GeminiJack, which can be exploited in zero-click attacks triggered via crafted emails, invites, or documents. The vulnerability could have exposed sensitive corporate data, according to Noma […]

Pierluigi Paganini November 14, 2025
Germany’s BSI issues guidelines to counter evasion attacks targeting LLMs

Germany’s BSI warns of rising evasion attacks on LLMs, issuing guidance to help developers and IT managers secure AI systems. Germany’s BSI warns of rising evasion attacks on LLMs, issuing guidance to help developers and IT managers secure AI systems and mitigate related risks. A significant and evolving threat to AI systems based on large […]

Pierluigi Paganini November 09, 2025
AI chat privacy at risk: Microsoft details Whisper Leak side-channel attack

Microsoft uncovered Whisper Leak, a side-channel attack that lets network snoopers infer AI chat topics despite encryption, risking user privacy. Microsoft revealed a new side-channel attack called Whisper Leak, which lets attackers who can monitor network traffic infer what users discuss with remote language models, even when the data is encrypted. The company warned that […]

Pierluigi Paganini September 22, 2025
Researchers expose MalTerminal, an LLM-enabled malware pioneer

SentinelOne uncovered MalTerminal, the earliest known malware with built-in LLM capabilities, and presented it at LABScon 2025. SentinelLABS researchers discovered MalTerminal, the earliest known LLM-enabled malware, which generates malicious logic at runtime, making the detection more complex. Researchers identified it via API key patterns and prompt structures, uncovering new samples and other offensive LLM uses, […]

Pierluigi Paganini September 09, 2025
LunaLock Ransomware threatens victims by feeding stolen data to AI models

LunaLock, a new ransomware gang, introduced a unique cyber extortion technique, threatening to turn stolen art into AI training data. A new ransomware group, named LunaLock, appeared in the threat landscape with a unique cyber extortion technique, threatening to turn stolen art into AI training data. Recently, the LunaLock group targeted the website Artists&Clients and […]

Pierluigi Paganini August 07, 2025
Microsoft unveils Project Ire: AI that autonomously detects malware

Microsoft’s Project Ire uses AI to autonomously reverse engineer and classify software as malicious or benign. Microsoft announced Project Ire, an autonomous artificial intelligence (AI) system that can autonomously reverse engineer and classify software. Project Ire is an LLM-powered autonomous malware classification system that uses decompilers and other tools, reviews their output, and determines the […]

Pierluigi Paganini July 18, 2025
LameHug: first AI-Powered malware linked to Russia’s APT28

LameHug malware uses AI to create data-theft commands on infected Windows systems. Ukraine links it to the Russia-nexus APT28 group. Ukrainian CERT-UA warns of a new malware strain dubbed LameHug that uses a large language model (LLM) to generate commands to be executed on compromised Windows systems. Ukrainian experts attribute the malware to the Russia-linked […]