# The EU AI Act: Compliance for Developers
The EU AI Act entered into force in August 2024, with requirements phasing in through 2027. It’s the world’s first comprehensive AI regulation—and it has teeth. Here’s what developers need to know.
## The Risk-Based Approach
The Act categorizes AI systems by risk level:
### Unacceptable Risk (Banned)
- Social scoring by governments
- Real-time remote biometric identification in public spaces (limited exceptions)
- Emotion recognition in workplaces/schools
- Untargeted facial image scraping
- Manipulation techniques that exploit vulnerabilities
Timeline: Prohibited from February 2025.
### High Risk (Heavy Regulation)
Systems used in:
- Critical infrastructure
- Education and employment decisions
- Essential services (credit, insurance)
- Law enforcement
- Migration and border control
- Justice administration
Requirements:
- Risk assessment and mitigation
- High-quality training data
- Logging and traceability
- Human oversight
- Accuracy, robustness, cybersecurity
Timeline: August 2026.
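Of the requirements above, logging and traceability maps most directly onto code: the Act mandates automatic event recording for high-risk systems but does not prescribe a format. One minimal sketch is an append-only, hash-chained decision log; the `AuditLog` class and its fields are illustrative assumptions, not anything the Act specifies:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only, hash-chained decision log (illustrative structure)."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value

    def record(self, event: str, payload: dict) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "payload": payload,
            "prev_hash": self._prev_hash,
        }
        # Chain each entry to the previous one so later tampering is detectable
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

log = AuditLog()
log.record("prediction", {"input_id": "app-123", "score": 0.82})
log.record("human_override", {"input_id": "app-123", "reason": "manual review"})
```

The hash chain is a design choice, not a legal requirement: it makes the log self-verifying, which helps demonstrate traceability in an audit.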
### Limited Risk (Transparency)
Chatbots and AI-generated content:
- Must disclose AI involvement
- Deep fakes must be labeled
- Users must know they’re interacting with AI
Timeline: August 2026 (these Article 50 transparency obligations apply with the bulk of the Act; GPAI model obligations start August 2025).
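The disclosure obligation can be as simple as tagging machine-generated output before it reaches the user. A minimal sketch; the names and the label wording are illustrative, not mandated:

```python
from dataclasses import dataclass

@dataclass
class ChatResponse:
    text: str
    ai_generated: bool = True

def with_disclosure(response: ChatResponse) -> str:
    # Users must know they are interacting with AI; the exact label
    # wording here is an illustrative choice, not prescribed by the Act.
    if response.ai_generated:
        return f"[AI-generated] {response.text}"
    return response.text
```

In practice the disclosure would sit in the UI layer rather than the response string, but the principle is the same: the flag travels with the content.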
### Minimal Risk (No Requirements)
Games, spam filters, inventory management—most AI falls here.
## What Developers Must Do
### For General Purpose AI (GPAI)
If you’re building or deploying foundation models:
All GPAI:
- Technical documentation
- Copyright compliance information
- Model card publication
GPAI with Systemic Risk (>10^25 FLOPs of cumulative training compute):
- Model evaluation
- Adversarial testing
- Incident reporting
- Cybersecurity measures
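Note that the 10^25 figure counts cumulative floating-point operations used in training, not FLOPS (operations per second). A rough way to check where a model lands is the common back-of-the-envelope estimate of ~6 FLOPs per parameter per token for dense transformers; that approximation (not the threshold itself) is the assumption here:

```python
# The Act's systemic-risk presumption: cumulative training compute > 10^25 FLOPs
SYSTEMIC_RISK_THRESHOLD = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    # Rule-of-thumb estimate for dense transformers: ~6 FLOPs/parameter/token
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD
```

For example, a 70B-parameter model trained on 15T tokens comes out around 6.3 × 10^24, under the threshold; a 200B-parameter model on the same data would cross it.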
### For High-Risk Applications
```python
# Conceptual sketch -- class and method names are illustrative, not a real framework
class HighRiskAISystem:
    def __init__(self):
        self.risk_management = RiskManagementSystem()     # risk assessment and mitigation
        self.data_governance = DataGovernanceFramework()  # data quality and bias controls
        self.logging = ComprehensiveLogging()             # traceability of decisions
        self.human_oversight = HumanOversightMechanism()  # override and escalation

    def deploy(self):
        # Conformity assessment required before placing on the market
        self.perform_conformity_assessment()
        # Register in the EU database of high-risk systems
        self.register_in_eu_database()
        # Ongoing post-market monitoring
        self.setup_post_market_monitoring()
```
### Documentation Requirements
```markdown
## AI System Documentation (High-Risk)

### System Information
- Intended purpose
- Geographic/demographic scope
- Hardware requirements

### Data
- Training data sources
- Data preparation methods
- Bias assessment

### Performance
- Accuracy metrics
- Known limitations
- Failure modes

### Human Oversight
- Override mechanisms
- Decision explanation
- Escalation procedures
```
## Penalties
Non-compliance is expensive. Each cap is expressed as a fixed amount or a share of worldwide annual turnover, whichever is higher:

| Violation | Maximum Fine |
|---|---|
| Prohibited practices | €35M or 7% of global turnover |
| High-risk non-compliance | €15M or 3% of global turnover |
| Supplying incorrect information | €7.5M or 1% of global turnover |
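The "whichever is higher" rule is easy to encode. A sketch using the caps from the table above; the dictionary keys and function name are illustrative, and note that for SMEs the Act instead applies the *lower* of the two figures:

```python
# Fine caps: (fixed cap in euros, share of global annual turnover)
FINE_CAPS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_noncompliance": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    # For most companies the applicable maximum is the HIGHER of the two figures
    fixed_cap, turnover_share = FINE_CAPS[violation]
    return max(fixed_cap, turnover_share * global_turnover_eur)
```

A company with €2B global turnover committing a prohibited-practice violation faces up to max(€35M, 7% × €2B) = €140M.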
## Practical Implementation
### Step 1: Classify Your Systems
```python
def classify_ai_system(system):
    # The helper predicates are placeholders for your own use-case checks
    # against the Act's categories.
    if is_prohibited_use(system):
        return "PROHIBITED"
    elif is_high_risk_use(system):
        return "HIGH_RISK"
    elif requires_transparency(system):
        return "LIMITED_RISK"
    else:
        return "MINIMAL_RISK"
```
### Step 2: Conduct Risk Assessment
For high-risk systems:
- Identify potential harms
- Assess likelihood and severity
- Document mitigation measures
- Plan ongoing monitoring
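The likelihood-and-severity step is often organized as a simple risk matrix. A sketch with an illustrative scoring policy; the levels, threshold, and hazard register are assumptions for the example, not from the Act:

```python
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def risk_score(likelihood: Level, severity: Level) -> int:
    return int(likelihood) * int(severity)

def requires_mitigation(likelihood: Level, severity: Level, threshold: int = 4) -> bool:
    # Illustrative policy: MEDIUM x MEDIUM or worse needs documented mitigation
    return risk_score(likelihood, severity) >= threshold

# Hypothetical hazard register for a credit-scoring system
hazards = {
    "biased_rejection_of_protected_group": (Level.MEDIUM, Level.HIGH),
    "audit_log_outage": (Level.LOW, Level.MEDIUM),
}
flagged = [name for name, (l, s) in hazards.items() if requires_mitigation(l, s)]
```

The output of a pass like this (hazard, score, mitigation, owner) is exactly the material the documentation requirements above ask you to keep.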
### Step 3: Data Governance
```python
class TrainingDataGovernance:
    def __init__(self, dataset):
        self.dataset = dataset

    def document_sources(self):
        # Where does the data come from?
        pass

    def assess_bias(self):
        # Are there demographic imbalances?
        pass

    def verify_consent(self):
        # Do we have rights to use this data?
        pass

    def enable_opt_out(self):
        # Can individuals opt out?
        pass
```
### Step 4: Human Oversight
Design for human control:
- Can humans override decisions?
- Are decisions explainable?
- Is there an escalation path?
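These three questions suggest a routing pattern: act automatically only when the model is confident, and queue everything else for a human to confirm or override. A sketch under assumed names; the confidence floor and queue mechanics are illustrative design choices:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    subject_id: str
    outcome: str
    confidence: float
    explanation: str       # human-readable reason, for explainability
    human_reviewed: bool = False

def decide_with_oversight(
    subject_id: str,
    model: Callable[[str], tuple[str, float, str]],
    review_queue: list,
    confidence_floor: float = 0.9,
) -> Decision:
    outcome, confidence, explanation = model(subject_id)
    decision = Decision(subject_id, outcome, confidence, explanation)
    if confidence < confidence_floor:
        review_queue.append(decision)  # escalate: a human must confirm or override
    return decision

review_queue: list = []

def credit_model(subject_id: str) -> tuple[str, float, str]:
    # Stand-in model: fixed low-confidence approval, for illustration only
    return ("approve", 0.75, "income verified, short credit history")

decision = decide_with_oversight("app-123", credit_model, review_queue)
```

Carrying the explanation on every `Decision` keeps the second question (are decisions explainable?) answerable after the fact, not just at prediction time.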
### Step 5: Technical Documentation
Maintain comprehensive docs:
- Model architecture
- Training procedure
- Evaluation results
- Known limitations
## What This Means for Startups
Small companies (<50 employees, <€10M turnover):
- Simplified compliance for some requirements
- Still must meet essential requirements
- May need external conformity assessment
Practical advice:
- Build compliance into development from the start
- Document everything—it’s cheaper than retroactive compliance
- Consider EU-focused legal counsel
- Join industry groups for guidance
## International Impact
The “Brussels Effect”:
- Companies serving EU markets must comply
- Likely sets global standards
- Other jurisdictions watching and adapting
US companies with EU customers: You’re in scope.
## Timeline Summary
| Date | Requirement |
|---|---|
| Aug 2024 | Act enters into force |
| Feb 2025 | Prohibited practices banned |
| Aug 2025 | GPAI obligations, governance rules, and penalty provisions apply |
| Aug 2026 | Most remaining provisions, including high-risk and transparency rules |
| Aug 2027 | Full application, including high-risk AI in regulated products |
## Final Thoughts
The EU AI Act is real, enforceable regulation. Unlike previous AI ethics guidelines, this has legal force.
For developers: Build with compliance in mind. Document your systems. Implement human oversight. The regulatory era of AI has begun.
Regulate early, litigate never.