Google addressed a Gemini Enterprise flaw dubbed GeminiJack, which can be exploited in zero-click attacks triggered via crafted emails, invites, or documents. The vulnerability could have exposed sensitive corporate data, according to Noma Security.
Gemini Enterprise is Google’s AI-powered productivity platform for businesses, integrating generative AI capabilities into tools like Gmail, Calendar, Docs, and other Workspace apps. It enables organizations to leverage AI for tasks such as drafting emails, summarizing documents, generating content, and automating workflows, all within a corporate environment while keeping data secure.
“Noma Labs recently discovered a vulnerability, now known as GeminiJack, inside Google Gemini Enterprise and previously in Vertex AI Search. The vulnerability allowed attackers to access and exfiltrate corporate data using a method as simple as a shared Google Doc, a calendar invitation, or an email.” reads the report published by Noma Security. “No clicks were required from the targeted employee. No warning signs appeared. And no traditional security tools were triggered.”
GeminiJack shows that AI tools accessing Gmail, Docs, and Calendar create a new attack surface, manipulating the AI can compromise data, signaling a rising class of AI-native vulnerabilities.
GeminiJack allowed attackers to steal corporate data by embedding hidden instructions in a shared document. When an employee searched Gemini Enterprise, e.g., “show me our budgets,” the AI automatically retrieved the poisoned file, executed the instructions across Gmail, Calendar, and Docs, and sent the results to the attacker via a disguised image request. No malware or phishing occurred, and traffic appeared legitimate. A single injection could exfiltrate years of emails, full calendar, and entire document repositories, turning the AI into a highly efficient corporate spying tool.
Below is a description of the attack provided by Noma Security:
The attack uses indirect prompt injection to exploit the gap between user-controlled content and how an AI interprets instructions. An attacker plants hidden commands inside accessible content such as Google Docs, Calendar invites, or Gmail subjects. When an employee performs a normal search (e.g., “find all documents with Sales”), the RAG system retrieves the poisoned content and feeds it to Gemini. Gemini interprets the embedded instructions as legitimate, performs broad searches across all connected Workspace data, and exfiltrates the results by embedding them in an image tag that sends an HTTP request to the attacker’s server. This enables silent, automatic data theft without malware or user interaction.
Below is the video PoC published by the researchers:
The researchers discovered the vulnerability during a security assessment on 05/06/25 and reported the flaw to the Google Security Team the same day.
Google quickly addressed the issue, collaborating with researchers to fix the RAG pipeline flaw that let malicious content be misinterpreted as instructions.
“GeminiJack demonstrates the evolving security landscape as AI systems become deeply integrated with organizational data. While Google has addressed this specific issue, the broader category of indirect prompt injection attacks against RAG systems requires continued attention from the security community.” concludes the report. “This vulnerability represents a fundamental shift in how we must think about enterprise security. “
Follow me on Twitter: @securityaffairs and Facebook and Mastodon
(SecurityAffairs – hacking, Google)