The EU's AI Act, which entered into force in August 2024 with obligations phasing in from 2025, introduces a risk-based framework for AI regulation focused on transparency and safety. It imposes strict rules on high-risk applications and foundation models, drawing mixed reactions from industry.

EU AI Act 2025: A Landmark Regulation for Artificial Intelligence
The European Union has officially adopted the AI Act, a groundbreaking regulation governing the development and deployment of artificial intelligence across member states. In force since August 2024, with obligations applying in stages through 2026, the Act introduces a risk-based framework to ensure transparency, safety, and accountability in AI applications.
Key Provisions of the AI Act
The Act categorizes AI systems into four risk levels, sketched in code after this list:
- Unacceptable Risk: Banned applications, including AI that manipulates human behavior to cause harm and real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions).
- High Risk: AI used in critical sectors like healthcare, education, and law enforcement must comply with stringent transparency and safety requirements.
- Limited Risk: Systems such as chatbots, which must disclose to users that they are interacting with AI.
- Minimal Risk: Unregulated applications with negligible impact.
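
To make the taxonomy concrete, here is a minimal illustrative sketch in Python. The four tier names follow the Act, but the example use cases and the classify_use_case helper are hypothetical mappings for illustration only, not an official legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict transparency and safety duties
    LIMITED = "limited"            # disclosure obligations
    MINIMAL = "minimal"            # no new obligations

# Hypothetical use-case-to-tier mapping, for illustration only.
EXAMPLE_TIERS = {
    "behavioral_manipulation": RiskTier.UNACCEPTABLE,
    "realtime_public_biometric_id": RiskTier.UNACCEPTABLE,
    "medical_diagnosis_support": RiskTier.HIGH,
    "exam_scoring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify_use_case(use_case: str) -> RiskTier:
    """Look up a use case's tier; unknown cases need a real legal assessment."""
    try:
        return EXAMPLE_TIERS[use_case]
    except KeyError:
        raise ValueError(f"No illustrative tier recorded for {use_case!r}")

if __name__ == "__main__":
    for case in ("medical_diagnosis_support", "spam_filter"):
        print(f"{case}: {classify_use_case(case).value} risk")
```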
Focus on Foundation Models
A notable addition to the Act is the regulation of general-purpose AI, including the foundation models that underpin systems like ChatGPT. These models must adhere to transparency requirements, with lighter obligations for open-source releases. High-impact models, defined as those trained with more than 10^25 floating-point operations (FLOPs), undergo rigorous evaluations.
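
As a rough illustration of how a model might be checked against that compute threshold, the sketch below uses the common 6 * N * D approximation for training FLOPs (N parameters, D training tokens). Both the approximation and the example figures are assumptions for illustration, not the Act's prescribed accounting methodology.

```python
THRESHOLD_FLOPS = 1e25  # cumulative training-compute threshold in the AI Act

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the common 6*N*D rule of thumb.

    This approximation (and the figures below) are illustrative assumptions,
    not the Act's prescribed accounting method.
    """
    return 6.0 * n_params * n_tokens

def exceeds_threshold(n_params: float, n_tokens: float) -> bool:
    """Compare an estimate against the Act's 10^25 FLOPs threshold."""
    return estimate_training_flops(n_params, n_tokens) > THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical model: 1.8e12 parameters trained on 1.3e13 tokens.
    flops = estimate_training_flops(1.8e12, 1.3e13)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Exceeds 10^25 FLOPs threshold:", exceeds_threshold(1.8e12, 1.3e13))
```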
Industry Reactions
Tech companies have expressed mixed reactions. While some applaud the clarity, others criticize the compliance burden. OpenAI and other major players are adapting their policies to align with the new rules.
For more details, visit the EU Digital Strategy website.