EU AI Act: A Practical Guide for Engineering Teams
Overview
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It entered into force in August 2024, with provisions being phased in through 2027. For engineering teams, understanding the risk-based classification system is essential.
The Four Risk Tiers
The Act classifies AI systems into four tiers based on their potential impact:
Unacceptable Risk (Banned)
These systems are prohibited outright:
- Social scoring of individuals by public or private actors
- Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions)
- Subliminal or manipulative techniques that exploit vulnerabilities such as age or disability
- Emotion recognition in workplaces and educational institutions
High Risk
Systems that pose a significant risk to health, safety, or fundamental rights. These require:
- Conformity assessments before deployment
- Risk management systems
- Data governance and quality controls
- Technical documentation
- Human oversight mechanisms
- Accuracy, robustness, and cybersecurity standards
Examples include AI in hiring, credit scoring, law enforcement, and critical infrastructure.
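High-risk systems must keep records of their operation so that decisions can be audited after the fact. A minimal sketch of an append-only audit trail follows; the schema, field names, and the `audit_event` helper are illustrative assumptions, not anything prescribed by the Act:

```python
import json
import time
import uuid

def audit_event(system_id: str, event: str, details: dict) -> str:
    """Serialize one audit-trail record as a JSON line (illustrative schema)."""
    record = {
        "id": str(uuid.uuid4()),       # unique record identifier
        "timestamp": time.time(),      # when the event occurred
        "system_id": system_id,        # which AI system produced it
        "event": event,                # event type, e.g. an automated decision
        "details": details,            # free-form context for the auditor
    }
    return json.dumps(record)

# One JSON object per line, suitable for an append-only log file.
line = audit_event(
    "credit-scorer-v2",
    "automated_decision",
    {"outcome": "declined", "human_review_available": True},
)
```

Writing each record as a self-contained JSON line keeps the log greppable and easy to ship to whatever retention store your compliance process requires.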
Limited Risk
Systems with transparency obligations: users must be informed when they are interacting with an AI system, and AI-generated or manipulated content (such as deepfakes) must be disclosed as such. This tier covers chatbots, deepfake generators, and emotion recognition systems.
Minimal Risk
Most AI systems fall here — spam filters, recommendation engines, AI-assisted games. No specific obligations beyond existing law.
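The four tiers above can be captured as a small internal register, which is a useful starting point for the classification exercise described below. This is an illustrative sketch; the `RiskTier` enum, the example mapping, and the `obligations` helper are hypothetical and not defined by the Act itself:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment, documentation, oversight
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no AI-Act-specific obligations

# Hypothetical mapping of example use cases to tiers, following the text above.
EXAMPLE_CLASSIFICATIONS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "hiring_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> list[str]:
    """Return a coarse, illustrative summary of obligations per tier."""
    return {
        RiskTier.UNACCEPTABLE: ["do not deploy"],
        RiskTier.HIGH: [
            "conformity assessment",
            "risk management",
            "technical documentation",
            "human oversight",
            "logging",
        ],
        RiskTier.LIMITED: ["inform users they are interacting with AI"],
        RiskTier.MINIMAL: ["no AI-Act-specific obligations"],
    }[tier]
```

Keeping a register like this per AI component gives the team a single place to record each system's tier and the obligations that follow from it.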
What Engineering Teams Should Do
- Classify your systems — Map each AI component to a risk tier
- Document everything — Training data, model architecture, testing results
- Implement logging — High-risk systems need full audit trails
- Build human oversight — Ensure humans can intervene in automated decisions
- Test for bias — Regular fairness audits across protected characteristics
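The bias-testing step can start with a simple selection-rate comparison across groups. A minimal sketch of a demographic-parity check follows; the function name, the toy data, and any alerting threshold you apply to the gap are illustrative assumptions, not a method prescribed by the Act:

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return the largest gap in positive-decision rate across groups.

    decisions: list of 0/1 outcomes (1 = favourable decision)
    groups: list of group labels, aligned with decisions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy audit: group "a" is favoured 3/4 of the time, group "b" only 1/4.
gap, rates = demographic_parity_gap(
    decisions=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
```

In practice this check would run regularly against production decisions, for each protected characteristic, with results fed into the documentation and logging steps above.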
Timeline
- February 2025: Bans on unacceptable-risk systems take effect
- August 2025: General-purpose AI rules apply
- August 2026: High-risk system obligations fully enforceable
- August 2027: Remaining provisions for embedded AI products
Resources
The European Commission provides guidance documents and standards references. National AI offices in each EU member state handle enforcement and can answer jurisdiction-specific questions.