A new type of threat is alarming the world of cyber security: it is called Man-in-the-Prompt and is capable of compromising interactions with leading generative artificial intelligence tools such as ChatGPT, Gemini, Copilot, and Claude. The problem? It does not even require a sophisticated attack: all it takes is a browser extension.
“LayerX’s research shows that any browser extension, even without any special permissions, can access the prompts of both commercial and internal LLMs and inject them with prompts to steal data, exfiltrate it, and cover their tracks. The exploit has been tested on all top commercial LLMs, with proof-of-concept demos provided for ChatGPT and Google Gemini”, explains researcher Aviad Gispan of LayerX. https://layerxsecurity.com/blog/man-in-the-prompt-top-ai-tools-vulnerable-to-injection/
Point Wind credit
What is “Man-in-the-Prompt”?
With this term, LayerX Security experts refer to a new attack vector that exploits an underestimated weakness: the input window of AI chatbots. When we use tools such as ChatGPT from a browser, our messages are written in a simple HTML field, accessible from the page’s DOM (Document Object Model). This means that any browser extension with access to the DOM can read, modify, or rewrite our requests to the AI, and do so without us noticing. The extension doesn’t even need special permissions.
ChatGPT Injection Proof Of Concept https://youtu.be/-QVsvVwnx_Y
Point Wind credit
How the attack works
This technique has been proven to work on all major AI tools, including:
What are the concrete risks?
According to the report, the potential consequences are serious, especially in the business world:
According to LayerX, 99% of business users have at least one extension installed in their browser. In this scenario, the risk exposure is very high.
What we can do
For individual users:
For businesses:
A bigger problem: Prompt Injection
The Man-in-the-Prompt attack falls under the broader category of prompt injection, one of the most serious threats to AI systems according to the OWASP Top 10 LLM 2025. These are not just technical attacks: even seemingly harmless external content, such as emails, links, or comments in documents, can contain hidden instructions directed at the AI.
For example:
What we learn
The LayerX report raises a crucial point: AI security cannot be limited to the model or server, but must also include the user interface and browser environment. In an era where AI is increasingly integrated into personal and business workflows, a simple HTML text field can become the Achilles heel of the entire system.
Credit
About the author: Salvatore Lombardo (X @Slvlombardo)
Electronics engineer and Clusit member, for some time now, espousing the principle of conscious education, he has been writing for several online magazine on information security. He is also the author of the book “La Gestione della Cyber Security nella Pubblica Amministrazione”. “Education improves awareness” is his slogan.
Follow me on Twitter: @securityaffairs and Facebook and Mastodon
(SecurityAffairs – hacking, Man-in-the-Prompt)