ChatGPT Vulnerability Allowed Email-Based Data Leaks

Security researchers discovered ChatGPT vulnerability allowing hidden email commands to manipulate AI and leak sensitive data from connected services like Gmail, since patched by OpenAI.

chatgpt-email-data-leak-vulnerability
Facebook X LinkedIn Bluesky WhatsApp

AI Security Breach: ChatGPT Manipulated Through Hidden Email Commands

Security researchers have uncovered a critical vulnerability in OpenAI's ChatGPT that allowed attackers to manipulate the AI chatbot through seemingly innocent emails, potentially leading to sensitive data leaks from connected services like Gmail.

The ShadowLeak Attack Method

Dubbed "ShadowLeak" by researchers at cybersecurity firm Radware, this sophisticated attack exploited ChatGPT's Deep Research Agent feature. The vulnerability, which has since been patched by OpenAI, involved embedding hidden commands within the HTML code of emails that appeared harmless to human recipients but were executable by ChatGPT.

When users connected ChatGPT to their Gmail accounts and instructed the AI to analyze their emails, the hidden prompts would trigger automatically. "This represents a new frontier in AI security threats where the attack surface extends beyond traditional endpoints," explained a Radware spokesperson.

How the Exploit Worked

The attack chain began with cybercriminals sending specially crafted emails to potential victims. These emails contained malicious HTML code invisible to users but detectable by ChatGPT when processing email content. Once triggered, the hidden commands could instruct ChatGPT to extract sensitive information from the victim's Gmail account and transmit it to external servers controlled by attackers.

What made this vulnerability particularly dangerous was its cloud-based nature. Unlike traditional attacks that target user devices, ShadowLeak operated entirely within ChatGPT's cloud environment, bypassing conventional security measures. The attack demonstrated how AI systems can be manipulated through indirect channels that traditional security protocols might not monitor effectively.

Broader Implications for AI Security

While the demonstration focused on Gmail, researchers confirmed that similar vulnerabilities could affect other services integrated with ChatGPT's Deep Research Agent, including Outlook, Dropbox, Google Drive, and SharePoint. The discovery highlights the growing security challenges as AI systems become more deeply integrated with personal and enterprise data sources.

OpenAI responded promptly to the disclosure, implementing fixes that prevent such manipulation attempts. However, the incident serves as a stark reminder of the evolving threat landscape in the age of artificial intelligence. As AI systems handle increasingly sensitive tasks, ensuring their security against sophisticated manipulation techniques becomes paramount.

Security experts recommend that users remain cautious when connecting AI assistants to sensitive accounts and regularly review connected applications and permissions. The incident underscores the need for ongoing security research and proactive vulnerability management in AI systems.

Related

gartner-ai-market-leaders-2025
Ai

Gartner Names AI Market Leaders in 2025 Vendor Race

Gartner's 2025 analysis identifies Google, Microsoft, OpenAI, and Palo Alto Networks as leaders across 30 AI...

google-ai-chatbots-69-accurate-flaws
Ai

Google Study: AI Chatbots Only 69% Accurate, Reveals Major Flaws

Google's FACTS Benchmark reveals AI chatbots are only 69% accurate, with multimodal understanding scoring below 50%....

microsoft-ai-phishing-business-campaign
Ai

Microsoft Thwarts AI-Obfuscated Phishing Campaign Using Business Terminology

Microsoft detected a phishing campaign using AI-generated code obfuscated with business terminology. Despite...

chatgpt-email-data-leak-vulnerability
Ai

ChatGPT Vulnerability Allowed Email-Based Data Leaks

Security researchers discovered ChatGPT vulnerability allowing hidden email commands to manipulate AI and leak...

ai-vulnerability-google-drive-chatgpt
Ai

AI Vulnerability Exposes Google Drive Data via ChatGPT

Security researchers demonstrated how hidden prompts in Google Docs can trick ChatGPT into stealing Drive data,...

ai-security-prompt-injection
Ai

How Safe Is Your AI Model? Inside the Prompt Injection Arms Race

Prompt injection attacks manipulate AI models by exploiting their inability to distinguish between instructions and...