AI Vulnerability Exposes Google Drive Data via ChatGPT

Security researchers demonstrated how hidden prompts in Google Docs can trick ChatGPT into stealing Drive data, highlighting AI integration risks.

ai-vulnerability-google-drive-chatgpt
Facebook X LinkedIn Bluesky WhatsApp

New Attack Vector Targets AI Integrations

Security researchers demonstrated a critical vulnerability in ChatGPT at the Black Hat conference in Las Vegas. Dubbed AgentFlayer, this exploit allows attackers to steal sensitive data from connected Google Drive accounts using poisoned documents.

How the Zero-Click Attack Works

The attack involves embedding hidden prompts within Google Docs using white text at size 1 font - invisible to humans but readable by AI. When users ask ChatGPT to summarize or process these documents, the malicious instructions trigger the AI to search connected drives for API keys and credentials.

Researchers Michael Bargury and Tamir Ishay Sharbat of Zenity showed how stolen data gets exfiltrated through seemingly innocent Markdown-formatted image links. Crucially, victims need only have the document shared with their account - no active interaction required.

Industry Response and Mitigation

OpenAI confirmed receiving vulnerability reports earlier this year and has implemented countermeasures. Google emphasized this isn't a Drive-specific flaw but highlights broader risks in AI-data integrations.

The Growing Threat of Prompt Injection

This incident exemplifies indirect prompt injection attacks - where external content contains hidden AI instructions. The Open Worldwide Application Security Project recently ranked prompt injection as the #1 security risk for LLM applications in its 2025 report.

As AI systems gain access to business and personal data repositories, protecting against these covert attacks becomes increasingly critical. Security experts recommend strict input validation and context-aware permission controls when connecting AI to sensitive data sources.

Related

google-ai-health-summaries-errors
Ai

Google Halts AI Health Summaries After Dangerous Medical Errors

Google has removed AI Overviews from certain medical searches after an investigation revealed dangerous...

google-ai-chatbots-69-accurate-flaws
Ai

Google Study: AI Chatbots Only 69% Accurate, Reveals Major Flaws

Google's FACTS Benchmark reveals AI chatbots are only 69% accurate, with multimodal understanding scoring below 50%....

google-ceo-warns-ai-trust
Ai

Google CEO Warns: Don't Blindly Trust AI Technology

Google CEO Sundar Pichai warns against blind trust in AI, citing error vulnerabilities and investment bubble risks...

chatgpt-email-data-leak-vulnerability
Ai

ChatGPT Vulnerability Allowed Email-Based Data Leaks

Security researchers discovered ChatGPT vulnerability allowing hidden email commands to manipulate AI and leak...

ai-vulnerability-google-drive-chatgpt
Ai

AI Vulnerability Exposes Google Drive Data via ChatGPT

Security researchers demonstrated how hidden prompts in Google Docs can trick ChatGPT into stealing Drive data,...

ai-security-prompt-injection
Ai

How Safe Is Your AI Model? Inside the Prompt Injection Arms Race

Prompt injection attacks manipulate AI models by exploiting their inability to distinguish between instructions and...