What Is the EU AI Act?

by StreamLex, 21 April 2025

A simplified guide to the EU AI Act text with risk categories, obligations, and a link to the full PDF.


The EU Artificial Intelligence Act (AI Act) is the first major regulatory framework in the world focused on the safe development, deployment, and use of artificial intelligence. Proposed by the European Commission in 2021 and adopted in 2024, the AI Act establishes clear rules based on risk categories — from minimal risk to high-risk and prohibited AI systems.

Its aim is to ensure AI in Europe is used ethically, transparently, and without harm to fundamental rights or safety.

What’s in the AI Act Text?

The AI Act text defines AI systems, classifies them by risk, and introduces legal obligations for providers, users, and importers.

Key components include:

  • Definitions of AI systems and providers
  • Classification of AI risk levels
  • Requirements for high-risk AI (e.g. documentation, human oversight)
  • Bans on certain use cases (e.g. social scoring)
  • Obligations for General-Purpose AI (GPAI) and foundation models
  • Enforcement mechanisms and penalties

Explore the full AI Act text in interactive format on Streamlex.

AI Act PDF and Official Sources

The official AI Act PDF is available for download on EUR-Lex.

However, it can be complex to navigate. Streamlex offers an annotated version of the AI Act that includes:

  • Linked articles and recitals
  • In-text definitions
  • Risk category navigation
  • Practical context for each provision

View the interactive AI Act on Streamlex

Who Does the AI Act Apply To?

The AI Act applies to:

  • Developers (providers) of AI systems
  • Users and deployers of high-risk AI
  • Importers and distributors placing AI systems on the EU market
  • General-purpose AI and foundation model developers (e.g., large language models)

Even non-EU providers must comply if their AI systems are used within the EU.

Risk Categories Under the AI Act

The regulation classifies AI systems into four risk levels:

  • Unacceptable risk AI systems are banned. These include AI for social scoring, certain predictive policing applications, and real-time remote biometric identification in publicly accessible spaces (with very limited exceptions).
  • High-risk AI systems are allowed but heavily regulated. These include AI in employment (like resume screening), education, critical infrastructure, medical devices, and credit scoring. They must meet strict requirements for transparency, record-keeping, human oversight, and cybersecurity.
  • Limited risk AI systems, like chatbots and emotion detection, must meet transparency requirements (e.g., users must know they’re interacting with AI).
  • Minimal risk AI systems, such as spam filters or recommendation engines, are largely exempt from obligations under the Act.
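As an illustrative sketch only, the four-tier classification above can be pictured as a simple lookup from example use cases to risk tiers. The tier names and examples mirror the list above; the `risk_tier` function and its mapping are hypothetical, and a real compliance assessment requires case-by-case legal analysis.

```python
# Toy mapping of the AI Act's four risk tiers to example systems
# drawn from the list above; not a legal classification tool.
RISK_TIERS = {
    "unacceptable": ["social scoring", "predictive policing",
                     "real-time remote biometric identification"],
    "high": ["resume screening", "credit scoring", "medical devices"],
    "limited": ["chatbots", "emotion detection"],
    "minimal": ["spam filters", "recommendation engines"],
}

def risk_tier(use_case: str) -> str:
    """Return the tier whose example list contains the use case,
    or 'unknown' if it is not in this toy mapping."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unknown"

print(risk_tier("social scoring"))  # unacceptable -> banned outright
print(risk_tier("credit scoring"))  # high -> heavily regulated
```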

High-risk AI must meet strict standards for:

  • Data quality
  • Documentation & logs
  • Human oversight
  • Cybersecurity

When Does the AI Act Apply?

  • Final adoption: March 2024
  • AI Act takes effect: August 2024
  • Ban on AI systems with unacceptable risks and the implementation of AI literacy requirements: February 2025
  • Entry into force of governance rules and obligations for GPAI providers, as well as regulations on notifications to authorities and fines: August 2025
  • End of the 24-month transition period. Obligations for high-risk AI systems come into effect: August 2026
  • Obligations for high-risk AI systems as a safety component come into effect and the entire EU AI Act becomes applicable: August 2027

Penalties and Fines Under the AI Act

The EU AI Act introduces significant fines for non-compliance, with penalties aligned to the severity of the violation and the type of AI system involved.

  • For using banned AI systems, the fine can be up to €35 million or 7% of global annual turnover, whichever is higher.
  • For breach of specific AI Act obligations (e.g., related to high-risk AI systems), fines can reach €15 million or 3% of turnover.
  • Providing incorrect or misleading information to regulators may result in fines of €7.5 million or 1.5% of turnover.

In each case, the applicable fine is the higher of the fixed amount and the percentage of the company's total worldwide annual turnover.
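The "whichever is higher" rule is simple arithmetic. A minimal sketch (the function name and figures below are illustrative, using the banned-system tier from the list above):

```python
# The AI Act sets each maximum fine as the higher of a fixed amount
# and a percentage of total worldwide annual turnover.

def applicable_fine(fixed_cap_eur: float, turnover_pct: float,
                    annual_turnover_eur: float) -> float:
    """Return the maximum fine: the higher of the fixed cap and the
    given percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Example: a company with 2 billion EUR turnover uses a banned AI system
# (up to 35 million EUR or 7% of turnover, whichever is higher).
fine = applicable_fine(35_000_000, 0.07, 2_000_000_000)
print(fine)  # 7% of 2bn = 140 million EUR, which exceeds 35 million
```

For a smaller company whose 7% figure falls below 35 million EUR, the fixed cap applies instead.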

FAQ: EU AI Act

What is the EU AI Act in simple terms?

The AI Act is an EU law that regulates AI systems based on how risky they are. It sets rules for safety, transparency, and accountability.

Where can I read the full AI Act text?

You can access the official PDF on EUR-Lex or view the interactive version with explanations on Streamlex.

Is the AI Act already in force?

As of February 2025, the ban on AI systems posing unacceptable risks and the AI literacy requirements are in force. The remaining rules come into effect gradually through August 2027.

Who needs to comply with the AI Act?

Any company developing, using, or distributing AI systems regulated under the AI Act within the EU, including non-EU providers.

Is there compliance guidance for the AI Act?

The European Commission has already released guidance on several topics, and more is expected in the future.

Explore the EU AI Act on Streamlex

Want to go beyond the basics?

Read the AI Act with in-text definitions and full cross-referencing on Streamlex

Explore the AI Act on StreamLex

© 2025 StreamLex
