
EU's Landmark AI Regulation Comes into Force
The European Union's Artificial Intelligence (AI) Act entered into force on August 1, 2024, establishing the world's first comprehensive regulatory framework for AI systems. The legislation categorizes AI applications by risk level, imposing progressively stricter requirements as the risk increases.
Risk Classification System
The law defines four risk levels:
- Unacceptable risk: Banned applications include real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions), social scoring systems, and AI designed to manipulate human behavior
- High risk: Systems used in healthcare, education, employment, and law enforcement require conformity assessments and, in certain cases, fundamental rights impact assessments
- Limited risk: Applications like chatbots must provide transparency about their AI nature
- Minimal risk: Largely unregulated category covering most consumer AI applications
Special Provisions for General-Purpose AI
The legislation introduces specific rules for general-purpose AI models, such as those underlying ChatGPT. Developers must publish sufficiently detailed summaries of their training data and implement measures to comply with EU copyright law. Models trained with more than 10^25 floating-point operations (FLOPs) are presumed to pose systemic risk and face additional assessment and mitigation obligations.
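To make the compute threshold concrete, the sketch below estimates a model's training compute and checks it against the 10^25 FLOP presumption. The `6 × parameters × training tokens` approximation is a common rule of thumb from the scaling-law literature, not a method prescribed by the Act, and the example model figures are hypothetical:

```python
# Illustrative sketch of the AI Act's 10^25 FLOP systemic-risk presumption.
# The ~6ND approximation (FLOPs ~ 6 x parameters x training tokens) is a
# community heuristic, not part of the regulation itself.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate via the ~6ND rule of thumb."""
    return 6 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if estimated compute meets or exceeds the threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70B-parameter model trained on 15T tokens
# lands at roughly 6.3e24 FLOPs, below the threshold.
print(presumed_systemic_risk(70e9, 15e12))  # False
```

In practice a provider would report actual cumulative training compute rather than an estimate, but the comparison against the threshold works the same way.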
Implementation Timeline
Provisions are phased in over 6-36 months from entry into force:
- Bans on unacceptable-risk applications apply from February 2025
- Rules for general-purpose AI apply from August 2025
- Compliance for most high-risk systems is required by August 2026, with extended deadlines for certain regulated products
Enforcement Mechanisms
The European AI Office will coordinate enforcement across member states. Non-EU providers must comply when offering services in the EU market, and the most serious violations carry fines of up to €35 million or 7% of global annual turnover, whichever is higher.