The EU AI Act: Compliance for Developers


The EU AI Act entered into force in August 2024, with requirements phasing in through 2027. It’s the world’s first comprehensive AI regulation—and it has teeth. Here’s what developers need to know.

The Risk-Based Approach

The Act categorizes AI systems by risk level:

Unacceptable Risk (Banned)

Practices the Act prohibits outright:

- Social scoring by public authorities
- Real-time remote biometric identification in public spaces for law enforcement (narrow exceptions apply)
- Manipulative or deceptive techniques that materially distort behavior
- Exploiting vulnerabilities related to age, disability, or social situation
- Emotion recognition in workplaces and educational institutions
- Untargeted scraping of facial images to build recognition databases

Timeline: Prohibited from February 2025.

High Risk (Heavy Regulation)

Systems used in:

- Biometric identification and categorization
- Critical infrastructure (energy, transport, water)
- Education and vocational training (e.g., exam scoring)
- Employment and worker management (e.g., CV screening)
- Access to essential services (e.g., credit scoring)
- Law enforcement, migration, and border control
- Administration of justice and democratic processes

Requirements:

- Risk management system across the lifecycle
- Data governance and bias mitigation
- Technical documentation and automatic logging
- Transparency and instructions for deployers
- Human oversight
- Accuracy, robustness, and cybersecurity
- Conformity assessment and EU database registration

Timeline: August 2026.

Limited Risk (Transparency)

Chatbots and AI-generated content:

- Tell users when they are interacting with an AI system
- Label AI-generated or manipulated content (deepfakes)
- Mark synthetic audio, image, and video output in a machine-readable way

Timeline: August 2025.
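The transparency duties for chatbots and generated media are simple to wire in early. Here's a minimal sketch; the function names and the `ai_generated` metadata field are illustrative, not from the Act or any standard:

```python
# Sketch: disclose AI interaction to chat users, and tag generated
# media with a machine-readable provenance flag. Names are illustrative.

def with_disclosure(reply: str) -> str:
    """Prefix a chatbot reply with an AI-interaction disclosure."""
    return "[AI] You are chatting with an AI system.\n" + reply

def tag_generated_media(metadata: dict) -> dict:
    """Return a copy of media metadata with a machine-readable AI flag."""
    tagged = dict(metadata)
    tagged["ai_generated"] = True
    return tagged

print(with_disclosure("Hello! How can I help?"))
```

In practice you would use an established provenance standard for the media flag rather than an ad-hoc field, but the disclosure pattern is the same.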

Minimal Risk (No Requirements)

Games, spam filters, inventory management—most AI falls here.

What Developers Must Do

For General Purpose AI (GPAI)

If you’re building or deploying foundation models:

All GPAI:

- Maintain technical documentation of the model
- Provide information to downstream providers building on it
- Put a copyright-compliance policy in place
- Publish a summary of the training data used

GPAI with Systemic Risk (>10^25 FLOPs of training compute):

- Perform model evaluations, including adversarial testing
- Assess and mitigate systemic risks
- Report serious incidents
- Ensure adequate cybersecurity protection
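You can sanity-check where a model sits relative to the 10^25 FLOPs threshold with the common back-of-the-envelope estimate of roughly 6 × parameters × training tokens for dense transformers. The model size and token count below are illustrative, not from any real model card:

```python
# Rough training-compute check against the AI Act's 10^25 FLOPs
# systemic-risk threshold, using the ~6 * params * tokens heuristic.

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_params * n_tokens

# Hypothetical 70B-parameter model trained on 15T tokens:
flops = estimate_training_flops(n_params=70e9, n_tokens=15e12)
print(f"{flops:.2e} FLOPs -> systemic risk: {flops > SYSTEMIC_RISK_THRESHOLD}")
```

This is an estimate, not a compliance determination: the actual threshold applies to cumulative training compute as measured by the provider.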

For High-Risk Applications

# Conceptual requirements

class HighRiskAISystem:
    def __init__(self):
        self.risk_management = RiskManagementSystem()
        self.data_governance = DataGovernanceFramework()
        self.logging = ComprehensiveLogging()
        self.human_oversight = HumanOversightMechanism()
        
    def deploy(self):
        # Conformity assessment required
        self.perform_conformity_assessment()
        # Register in EU database
        self.register_in_eu_database()
        # Ongoing monitoring
        self.setup_post_market_monitoring()

Documentation Requirements

## AI System Documentation (High-Risk)

### System Information
- Intended purpose
- Geographic/demographic scope
- Hardware requirements

### Data
- Training data sources
- Data preparation methods
- Bias assessment

### Performance
- Accuracy metrics
- Known limitations
- Failure modes

### Human Oversight
- Override mechanisms
- Decision explanation
- Escalation procedures

Penalties

Non-compliance is expensive:

| Violation | Maximum fine |
| --- | --- |
| Prohibited practices | €35M or 7% of global turnover |
| High-risk non-compliance | €15M or 3% of global turnover |
| Incorrect information | €7.5M or 1.5% of global turnover |
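The fines are "whichever is higher": the fixed amount acts as a floor, and the turnover percentage takes over for large companies. A quick sketch of that rule, with tier names of my own choosing:

```python
# "Higher of" fine rule: the maximum penalty is the greater of the
# fixed amount and the percentage of worldwide annual turnover.
# Tier keys are illustrative; amounts mirror the table above.

FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_noncompliance": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    fixed, pct = FINE_TIERS[violation]
    return max(fixed, pct * global_turnover_eur)

# A company with €2B turnover: 7% of turnover (~€140M) exceeds the €35M floor.
print(f"€{max_fine('prohibited_practices', 2_000_000_000):,.0f}")
```

For a small company the same violation caps at the fixed €35M, which is why the percentages matter mostly to large providers.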

Practical Implementation

Step 1: Classify Your Systems

def classify_ai_system(system):
    if is_prohibited_use(system):
        return "PROHIBITED"
    elif is_high_risk_use(system):
        return "HIGH_RISK"
    elif requires_transparency(system):
        return "LIMITED_RISK"
    else:
        return "MINIMAL_RISK"

Step 2: Conduct Risk Assessment

For high-risk systems:

- Identify known and reasonably foreseeable risks
- Estimate their severity and likelihood
- Adopt mitigation measures and test their effectiveness
- Repeat throughout the system's lifecycle
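A lightweight risk register is enough to get started. This is a sketch of my own design, not a prescribed format; the scoring scheme and threshold are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    severity: int       # 1 (negligible) to 5 (critical)
    likelihood: int     # 1 (rare) to 5 (frequent)
    mitigation: str = ""  # empty until a measure is documented

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

@dataclass
class RiskRegister:
    risks: list[Risk] = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)

    def open_high_risks(self, threshold: int = 12) -> list[Risk]:
        """High-scoring risks that still lack a documented mitigation."""
        return [r for r in self.risks
                if r.score >= threshold and not r.mitigation]
```

The point is traceability: every identified risk, its assessment, and its mitigation end up in one reviewable place.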

Step 3: Data Governance

class TrainingDataGovernance:
    def __init__(self, dataset):
        self.dataset = dataset
        
    def document_sources(self):
        # Where does data come from?
        pass
        
    def assess_bias(self):
        # Are there demographic imbalances?
        pass
        
    def verify_consent(self):
        # Do we have rights to use this data?
        pass
        
    def enable_opt_out(self):
        # Can individuals opt out?
        pass

Step 4: Human Oversight

Design for human control:

- Humans can monitor the system and interpret its output
- Outputs can be overridden, and the system can be stopped
- Guard against automation bias (rubber-stamping AI decisions)
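One common pattern is a confidence gate: low-confidence decisions are routed to a human reviewer instead of being auto-applied. A minimal sketch, with function names and the threshold chosen for illustration:

```python
from typing import Callable

def decide_with_oversight(
    model_predict: Callable[[dict], tuple[str, float]],
    escalate: Callable[[dict, str, float], str],
    case: dict,
    confidence_threshold: float = 0.9,
) -> str:
    """Apply the model's decision only when confidence is high enough;
    otherwise hand the case, decision, and confidence to a human reviewer."""
    decision, confidence = model_predict(case)
    if confidence < confidence_threshold:
        # The reviewer's decision overrides the model's
        return escalate(case, decision, confidence)
    return decision
```

Escalation thresholds should come from your risk assessment, not a default, and the escalation path itself needs to be logged for the record-keeping requirements above.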

Step 5: Technical Documentation

Maintain comprehensive docs:

- Keep technical documentation current with each release
- Retain records for regulators (10 years for high-risk systems)
- Version-control documentation alongside the code it describes

What This Means for Startups

Small companies (<50 employees, <€10M turnover):

- Get priority access to regulatory sandboxes
- Benefit from simplified documentation and proportionate fees
- Are not exempt: the obligations still apply

Practical advice:

  1. Build compliance into development from the start
  2. Document everything—it’s cheaper than retroactive compliance
  3. Consider EU-focused legal counsel
  4. Join industry groups for guidance

International Impact

The “Brussels Effect”: as with GDPR, EU rules tend to become de facto global standards, because shipping one compliant product worldwide is usually cheaper than maintaining a separate EU version.

US companies with EU customers: You’re in scope.

Timeline Summary

| Date | Requirement |
| --- | --- |
| Aug 2024 | Act enters into force |
| Feb 2025 | Prohibited practices banned |
| Aug 2025 | Transparency and GPAI requirements |
| Aug 2026 | High-risk full compliance |
| Aug 2027 | Full application |
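For planning, it helps to check which milestones already apply on a given date. A small sketch based on the timeline above (the table gives months; the exact days here are simplified to the first of the month):

```python
from datetime import date

# Milestones from the timeline above; day-of-month is simplified.
MILESTONES = [
    (date(2024, 8, 1), "Act enters into force"),
    (date(2025, 2, 1), "Prohibited practices banned"),
    (date(2025, 8, 1), "Transparency and GPAI requirements"),
    (date(2026, 8, 1), "High-risk full compliance"),
    (date(2027, 8, 1), "Full application"),
]

def applicable_milestones(today: date) -> list[str]:
    """Milestones whose date has already passed as of `today`."""
    return [label for d, label in MILESTONES if d <= today]

print(applicable_milestones(date(2025, 6, 1)))
```

Check the official texts for the exact application dates before relying on this for planning.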

Final Thoughts

The EU AI Act is real, enforceable regulation. Unlike previous AI ethics guidelines, this has legal force.

For developers: Build with compliance in mind. Document your systems. Implement human oversight. The regulatory era of AI has begun.


Regulate early, litigate never.
