ChatGPT Vulnerability Allowed Email-Based Data Leaks

Security researchers discovered ChatGPT vulnerability allowing hidden email commands to manipulate AI and leak sensitive data from connected services like Gmail, since patched by OpenAI.

chatgpt-email-data-leak-vulnerability
Facebook X LinkedIn Bluesky WhatsApp
de flag en flag es flag fr flag nl flag pt flag

AI Security Breach: ChatGPT Manipulated Through Hidden Email Commands

Security researchers have uncovered a critical vulnerability in OpenAI's ChatGPT that allowed attackers to manipulate the AI chatbot through seemingly innocent emails, potentially leading to sensitive data leaks from connected services like Gmail.

The ShadowLeak Attack Method

Dubbed "ShadowLeak" by researchers at cybersecurity firm Radware, this sophisticated attack exploited ChatGPT's Deep Research Agent feature. The vulnerability, which has since been patched by OpenAI, involved embedding hidden commands within the HTML code of emails that appeared harmless to human recipients but were executable by ChatGPT.

When users connected ChatGPT to their Gmail accounts and instructed the AI to analyze their emails, the hidden prompts would trigger automatically. "This represents a new frontier in AI security threats where the attack surface extends beyond traditional endpoints," explained a Radware spokesperson.

How the Exploit Worked

The attack chain began with cybercriminals sending specially crafted emails to potential victims. These emails contained malicious HTML code invisible to users but detectable by ChatGPT when processing email content. Once triggered, the hidden commands could instruct ChatGPT to extract sensitive information from the victim's Gmail account and transmit it to external servers controlled by attackers.

What made this vulnerability particularly dangerous was its cloud-based nature. Unlike traditional attacks that target user devices, ShadowLeak operated entirely within ChatGPT's cloud environment, bypassing conventional security measures. The attack demonstrated how AI systems can be manipulated through indirect channels that traditional security protocols might not monitor effectively.

Broader Implications for AI Security

While the demonstration focused on Gmail, researchers confirmed that similar vulnerabilities could affect other services integrated with ChatGPT's Deep Research Agent, including Outlook, Dropbox, Google Drive, and SharePoint. The discovery highlights the growing security challenges as AI systems become more deeply integrated with personal and enterprise data sources.

OpenAI responded promptly to the disclosure, implementing fixes that prevent such manipulation attempts. However, the incident serves as a stark reminder of the evolving threat landscape in the age of artificial intelligence. As AI systems handle increasingly sensitive tasks, ensuring their security against sophisticated manipulation techniques becomes paramount.

Security experts recommend that users remain cautious when connecting AI assistants to sensitive accounts and regularly review connected applications and permissions. The incident underscores the need for ongoing security research and proactive vulnerability management in AI systems.

Related

openai-erotic-chatbot-adult-mode-delay
Ai

OpenAI Erotic Chatbot Delay Explained: Adult Mode Shelved Indefinitely | Tech News

OpenAI indefinitely delays erotic 'Adult Mode' chatbot launch amid safety concerns, investor pressure, and strategic...

microsoft-ai-phishing-business-campaign
Ai

Microsoft Thwarts AI-Obfuscated Phishing Campaign Using Business Terminology

Microsoft detected a phishing campaign using AI-generated code obfuscated with business terminology. Despite...

ai-vulnerability-google-drive-chatgpt
Ai

AI Vulnerability Exposes Google Drive Data via ChatGPT

Security researchers demonstrated how hidden prompts in Google Docs can trick ChatGPT into stealing Drive data,...

ai-security-prompt-injection
Ai

How Safe Is Your AI Model? Inside the Prompt Injection Arms Race

Prompt injection attacks manipulate AI models by exploiting their inability to distinguish between instructions and...

openai-deepmind-ai-2025
Ai

OpenAI vs. Google DeepMind: Who’s Winning the AI Arms Race in 2025?

In 2025, OpenAI and Google DeepMind continue their fierce rivalry in AI development. OpenAI focuses on open models...

eu-deepfake-nudes-ban-2026
Ai

EU Bans AI Deepfake Nudes After Grok Scandal | New Rules

EU bans AI deepfake nudes after Grok scandal: new rules prohibit non-consensual sexual images, effective December...