EU AI Act explained: understanding Europe’s AI regulatory framework
Quick summary
The European Union has introduced the world’s first comprehensive regulatory framework for artificial intelligence through the AI Act. Built on a risk-based model, the framework prohibits certain AI uses, sets strict obligations for high-risk AI, and introduces governance and transparency requirements for general-purpose AI models.
The EU regulatory framework for artificial intelligence
Artificial intelligence is increasingly embedded in digital and physical systems across sectors such as energy, healthcare, industrial automation, finance and public administration. As AI systems influence safety, access to services and fundamental rights, the European Union has established a comprehensive regulatory framework to govern how AI is developed, placed on the market and used within the EU.
This framework is centred on the Artificial Intelligence Act (AI Act), which forms part of the European Commission’s broader digital strategy. According to the European Commission, the AI Act is designed to ensure that AI systems used in the EU are safe, respect fundamental rights and support innovation by creating legal certainty for organisations operating across the single market (European Commission, 2024).
Scope and objectives of the AI Act
The AI Act is a horizontal regulation that applies across all sectors and technology types. It covers public and private actors involved in providing, deploying, importing or distributing AI systems in the EU. The regulation also has extraterritorial reach. Non-EU organisations fall under its scope if their AI systems are placed on the EU market or if the outputs of those systems affect people in the EU, according to the European Commission (2024).
The regulation pursues several interlinked objectives:
- Protect health, safety and fundamental rights
- Increase transparency and accountability in AI use
- Strengthen trust in AI systems
- Prevent fragmentation of national AI rules within the EU
Rather than regulating specific technologies, the AI Act focuses on how AI is used and the risks it creates, allowing the framework to remain relevant as technologies evolve.
Takeaway: The AI Act establishes a single EU-wide compliance framework for AI, including for non-EU providers affecting the EU market.
Risk-based classification of AI systems
A defining feature of the EU regulatory framework is its risk-based approach, which classifies AI systems according to their potential impact on individuals and society.
Unacceptable risk
Certain AI practices are prohibited outright because they are considered incompatible with EU values or pose unacceptable risks. The European Commission identifies a limited number of prohibited uses, including systems that manipulate human behaviour, exploit vulnerabilities, or enable certain forms of biometric surveillance (European Commission, 2025).
High-risk AI systems
High-risk AI systems are permitted but subject to strict requirements. These systems are typically used in sensitive areas such as:
- Recruitment and worker management
- Creditworthiness and access to essential services
- Safety components of critical infrastructure
- Certain biometric identification applications
For these systems, providers must meet requirements related to data governance, technical documentation, record-keeping, accuracy, robustness, cybersecurity and human oversight, according to the European Commission (2024).
Transparency risk
Some AI systems must comply with transparency obligations. Users must be informed when interacting with AI, and certain AI-generated or manipulated content must be clearly labelled, except in limited lawful contexts (European Commission, 2024).
Minimal or no risk
Most AI applications fall into this category and are not subject to new obligations under the AI Act.
Takeaway: The risk-based model concentrates regulatory effort on AI uses most likely to affect safety and fundamental rights.
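To make the tiered model easier to picture, here is a minimal Python sketch that encodes the four risk tiers as an enumeration and maps a few of the example use cases mentioned above to them. The tier names and example mappings are illustrative simplifications for this article, not an official classification tool; real classification turns on the legal definitions in the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the AI Act's risk-based model."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted, subject to strict requirements"
    TRANSPARENCY = "permitted, subject to disclosure obligations"
    MINIMAL = "no new obligations under the AI Act"

# Hypothetical example mappings, drawn from the use cases discussed above.
EXAMPLE_USES = {
    "behavioural manipulation": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "creditworthiness scoring": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.TRANSPARENCY,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```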
Obligations across the AI value chain
The AI Act assigns responsibilities across the entire AI value chain. Obligations differ depending on whether an organisation acts as a provider, deployer, importer or distributor.
For high-risk AI systems, providers are required to:
- Carry out conformity assessments before placing systems on the market
- Maintain detailed technical documentation and logs
- Ensure high-quality and relevant training data
- Implement effective human oversight measures
- Monitor systems after deployment and report serious incidents
Deployers must ensure AI systems are used in accordance with their intended purpose and that human oversight is active in real-world use, according to the European Commission (2024).
This approach aligns AI governance with established EU product safety and market surveillance frameworks.
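The role-dependence of these duties can be summarised in a small data structure. The sketch below is a hypothetical, simplified mapping of the provider and deployer obligations listed above; the obligation strings are paraphrases for illustration, not legal guidance.

```python
# Hypothetical, simplified mapping of value-chain roles to the high-risk
# obligations discussed above; an illustration, not legal guidance.
ROLE_OBLIGATIONS = {
    "provider": [
        "conformity assessment before market placement",
        "technical documentation and logging",
        "high-quality, relevant training data",
        "human oversight measures",
        "post-market monitoring and incident reporting",
    ],
    "deployer": [
        "use in accordance with the intended purpose",
        "active human oversight in real-world use",
    ],
}

def obligations_for(role: str) -> list[str]:
    """Look up the summarised obligations for a value-chain role."""
    return ROLE_OBLIGATIONS.get(role, [])

print(obligations_for("provider"))
```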
Takeaway: AI compliance responsibilities are shared across organisations involved in development and deployment.
General-purpose AI and systemic risk
A key development in the EU framework is the treatment of general-purpose AI (GPAI), meaning AI models that can be adapted for a wide range of downstream tasks.
The European Commission confirms that specific obligations for GPAI models start to apply from 2 August 2025. These include transparency requirements such as documentation of training compute and summaries of training data, as well as copyright-related safeguards (European Commission, 2025).
For GPAI models considered to pose systemic risk, additional risk-mitigation obligations apply. The Commission provides guidance on how systemic risk may be assessed, including reference to training compute thresholds, to support consistent implementation across the EU (European Commission, 2025).
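As an illustration of how a compute-based presumption can be operationalised, the sketch below compares a model’s cumulative training compute against 10^25 floating-point operations, the threshold at which the AI Act presumes a GPAI model to pose systemic risk. The function name and its input are hypothetical, and an actual assessment weighs criteria beyond raw compute.

```python
# Illustrative sketch: 10**25 floating-point operations is the cumulative
# training-compute threshold at which the AI Act presumes systemic risk
# for a GPAI model. The function and its input are hypothetical, and a
# real assessment considers criteria beyond raw compute.
SYSTEMIC_RISK_FLOP_THRESHOLD = 10**25

def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """True when training compute meets the systemic-risk presumption threshold."""
    return cumulative_training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e25))  # True: above the threshold
print(presumed_systemic_risk(1e24))  # False: below the threshold
```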
Existing GPAI models placed on the market before August 2025 are subject to transitional periods, with full compliance expected by 2 August 2027, according to the European Commission (2025).
Takeaway: The EU has introduced measurable, time-bound governance rules for general-purpose AI models.
Governance, supervision and enforcement
Enforcement of the AI Act follows the EU internal market model. National market surveillance authorities are responsible for supervision within Member States, while EU-level coordination ensures consistent application of the rules.
The European Commission plays a central role in overseeing general-purpose AI obligations and supporting harmonised enforcement. The Council of the European Union provides the policy framework for coordination between Member States and EU institutions in the area of artificial intelligence (Council of the European Union, n.d.).
Penalties for non-compliance are proportionate to the severity of violations, with higher fines for prohibited practices and serious breaches involving high-risk or systemic-risk AI systems.
Takeaway: A coordinated EU governance model balances national supervision with consistent enforcement.
Implementation timeline
The AI Act entered into force on 1 August 2024, according to the European Commission (2024). Its obligations apply gradually through a phased timeline.
The European Commission confirms that the first set of rules became applicable on 2 February 2025, including provisions on AI literacy, AI system definitions and certain prohibited practices (European Commission, 2025).
Obligations for general-purpose AI models apply from 2 August 2025, with transitional arrangements for existing models extending to 2 August 2027 (European Commission, 2025).
This phased implementation is intended to give organisations time to adapt governance structures, technical documentation and oversight mechanisms.
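For teams building a compliance programme, these milestones translate naturally into a date-keyed lookup. The sketch below is a hypothetical illustration using the dates cited in this article; the function name is invented for the example.

```python
from datetime import date

# Milestone dates as cited in this article (European Commission, 2024; 2025).
AI_ACT_MILESTONES = {
    date(2024, 8, 1): "AI Act enters into force",
    date(2025, 2, 2): "AI literacy, definitions and certain prohibitions apply",
    date(2025, 8, 2): "Obligations for general-purpose AI models apply",
    date(2027, 8, 2): "Full compliance expected for pre-existing GPAI models",
}

def milestones_in_effect(today: date) -> list[str]:
    """Return the milestones that are already applicable on a given date."""
    return [label for d, label in sorted(AI_ACT_MILESTONES.items()) if d <= today]

print(milestones_in_effect(date(2025, 9, 1)))
```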
Takeaway: AI Act compliance is an ongoing programme with clearly defined milestones.
Conclusion
The EU regulatory framework for artificial intelligence represents a landmark shift in how AI is governed at scale. Through the AI Act, the European Union has established a comprehensive, risk-based system that protects fundamental rights, improves transparency and provides legal certainty for organisations operating in the EU market.
As AI becomes embedded in increasingly critical systems, organisations developing or deploying AI in Europe must treat governance, documentation and oversight as core operational requirements rather than optional best practices.
FAQ
What is the EU AI Act?
The AI Act is the EU’s comprehensive regulation for artificial intelligence, setting harmonised rules based on the risk level of AI systems (European Commission, 2024).
Who must comply with the AI Act?
Any organisation that provides or deploys AI systems affecting people in the EU, including organisations based outside the EU.
When did the AI Act enter into force?
The AI Act entered into force on 1 August 2024 (European Commission, 2024).
Do the rules apply to general-purpose AI models?
Yes. Obligations for general-purpose AI models apply from 2 August 2025, with transitional periods for existing models (European Commission, 2025).
Sources
- EU rules on general-purpose AI models start to apply, bringing more transparency, safety and accountability – European Commission – https://digital-strategy.ec.europa.eu/en/news/eu-rules-general-purpose-ai-models-start-apply-bringing-more-transparency-safety-and-accountability
- General-purpose AI obligations under the AI Act – European Commission – https://digital-strategy.ec.europa.eu/en/factpages/general-purpose-ai-obligations-under-ai-act
- First rules of the Artificial Intelligence Act are now applicable – European Commission – https://digital-strategy.ec.europa.eu/en/news/first-rules-artificial-intelligence-act-are-now-applicable
- European Artificial Intelligence Act comes into force – European Commission – https://digital-strategy.ec.europa.eu/en/news/european-artificial-intelligence-act-comes-force
- AI Act (Regulatory framework for artificial intelligence) – European Commission – https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
- Artificial intelligence in the EU – European Commission – https://digital-strategy.ec.europa.eu/en/policies/artificial-intelligence
- European approach to artificial intelligence – European Commission – https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
- Artificial intelligence – Council of the European Union – https://www.consilium.europa.eu/en/policies/artificial-intelligence
