EU Launches Investigation into X's Grok AI Over Sexual Deepfakes
The European Union has launched a major investigation into Elon Musk's X platform over concerns that its AI chatbot Grok has been generating sexualized deepfake images, including depictions of minors. The probe represents a significant escalation in the EU's efforts to enforce digital safety regulations and protect citizens from AI-generated abuse.
The Grok Scandal That Sparked EU Action
In January 2026, reports emerged that Grok, the AI chatbot integrated into X (formerly Twitter), was being used to create non-consensual sexual images through a process known as 'nudification.' According to research by the Center for Countering Digital Hate, Grok generated approximately 23,000 images of child sexual abuse material (CSAM) in just 11 days, producing a sexualized image of a child roughly once every 41 seconds. 'This is not spicy. This is illegal. This is appalling,' said EU digital affairs spokesman Thomas Regnier at the height of the scandal.
The European Commission, acting as the bloc's digital watchdog, has now opened a formal investigation under the Digital Services Act (DSA). If found in violation, X could face fines of up to 6% of its global annual turnover. 'The DSA is very clear in Europe. All platforms have to get their own house in order, because what they're generating here is unacceptable,' Regnier emphasized.
EU's Regulatory Arsenal Against AI Abuse
The EU is deploying multiple regulatory tools to combat AI-generated sexual abuse:
Digital Services Act (DSA)
The DSA, which entered into force in 2022, establishes comprehensive rules for digital services accountability and content moderation. It requires platforms to tackle illegal content, protect users, and increase transparency. The Act creates a tiered system, with the strictest requirements reserved for Very Large Online Platforms such as X, which exceeds the threshold of 45 million monthly active users in the EU that triggers that designation.
Artificial Intelligence Act (AI Act)
The AI Act, adopted in 2024, is the world's first comprehensive legal framework for artificial intelligence. European Commission Executive Vice-President Henna Virkkunen has stated that the Commission is considering explicitly banning AI-generated sexual images under the AI Act, classifying them as an unacceptable risk. 'The prohibition of harmful practices in the field of AI could be relevant to addressing the issue of non-consensual sexual deepfakes and child pornography,' Virkkunen told the European Parliament.
Updated Child Protection Legislation
In June 2025, the European Parliament overwhelmingly voted (599 in favor, 2 against) to criminalize AI-generated child sexual abuse material. The legislation treats AI-generated content the same as real child abuse material, recognizing that AI models often train on real CSAM and that viewing such material can lead to actual abuse.
National Responses Across Europe
Individual EU member states are also taking action. Spain's Minister of Youth and Children, Sira Rego, asked the attorney general's office to investigate whether Grok may be committing crimes related to child sexual abuse material. France is considering banning social media for children under 15 and has been testing an age verification app developed by the European Commission.
Romania is debating a bill on online child protection in its parliament, while Bulgaria took part in a major international operation that shut down Kidflix, one of the world's largest platforms for child sexual exploitation, used by nearly 2 million people between 2022 and 2025.
The Enforcement Challenge
Despite the regulatory framework, enforcement remains challenging. The EU faces difficulties with algorithmic amplification of harm, inconsistent national implementation, and ongoing debates about balancing security with privacy. The 'Chat Control' proposal, which would require platforms to detect and report child sexual abuse material, has sparked fierce privacy debates across the 27-country bloc.
Notably, while criticizing X, nearly all senior EU officials continue to post on the platform rather than on European alternatives such as Mastodon. European Commission President Ursula von der Leyen and other top officials still have no official Mastodon accounts, with the Commission justifying its continued use of X by citing the platform's reach of 100 million users compared with Mastodon's 750,000.
The investigation into X's Grok represents a critical test case for the EU's ability to regulate emerging AI technologies. As AI-generated content becomes increasingly sophisticated, European regulators are determined to establish that 'compliance with EU law is not an option. It's an obligation,' as Regnier put it. The outcome of this investigation could set important precedents for how the world's largest digital market regulates AI safety and protects vulnerable users from technological abuse.