New Attack Vector Targets AI Integrations
Security researchers demonstrated a critical vulnerability in ChatGPT at the Black Hat conference in Las Vegas. Dubbed AgentFlayer, this exploit allows attackers to steal sensitive data from connected Google Drive accounts using poisoned documents.
How the Zero-Click Attack Works
The attack involves embedding hidden prompts within Google Docs using white text at size 1 font - invisible to humans but readable by AI. When users ask ChatGPT to summarize or process these documents, the malicious instructions trigger the AI to search connected drives for API keys and credentials.
Researchers Michael Bargury and Tamir Ishay Sharbat of Zenity showed how stolen data gets exfiltrated through seemingly innocent Markdown-formatted image links. Crucially, victims need only have the document shared with their account - no active interaction required.
Industry Response and Mitigation
OpenAI confirmed receiving vulnerability reports earlier this year and has implemented countermeasures. Google emphasized this isn't a Drive-specific flaw but highlights broader risks in AI-data integrations.
The Growing Threat of Prompt Injection
This incident exemplifies indirect prompt injection attacks - where external content contains hidden AI instructions. The Open Worldwide Application Security Project recently ranked prompt injection as the #1 security risk for LLM applications in its 2025 report.
As AI systems gain access to business and personal data repositories, protecting against these covert attacks becomes increasingly critical. Security experts recommend strict input validation and context-aware permission controls when connecting AI to sensitive data sources.