
EU AI Act GPAI Obligations – Top 5 Official Resources for Providers

by Streamlex 2 August 2025

The EU Artificial Intelligence Act (AI Act) sets out strict requirements for General-Purpose AI (GPAI) models, with obligations for providers applying from 2 August 2025. From transparency rules to copyright compliance, GPAI providers must prepare now to meet the Act’s standards and avoid enforcement action, which can begin on 2 August 2026. This guide compiles Streamlex’s selection of the Top 5 official resources released by the European Commission to help GPAI providers understand and comply with their AI Act obligations.


What Is General-Purpose AI (GPAI) and Why Does It Matter?

Under the EU Artificial Intelligence Act (AI Act), General-Purpose AI (GPAI) is defined as:

“An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks.”

Because of their broad use, GPAI models carry unique compliance burdens. Providers must meet strict transparency, safety, and copyright obligations when placing GPAI models on the EU market. The timeline for complying with these obligations depends on whether the GPAI model was placed on the market before or after 2 August 2025.

For GPAI models placed on the market AFTER 2 August 2025:

  • No fines for the first year – GPAI provider obligations are activated, but the Commission’s enforcement powers apply only from 2 August 2026.
  • Two weeks’ notice – GPAI providers training or planning to train models with systemic risk must notify the Commission by 16 August 2025 at the latest.
  • Proactive cooperation with the AI Office – GPAI providers anticipating compliance challenges should inform the AI Office about the steps they are taking.

For GPAI models placed on the market BEFORE 2 August 2025:

  • Grace period – Providers have until 2 August 2027 to achieve compliance.
  • AI Office assistance – The AI Office will help guide providers toward meeting obligations.
  • Retraining/unlearning exemptions – Not required where impossible or disproportionately burdensome, but any such cases must be disclosed.
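
As a rough illustration of how these two timelines combine, here is a minimal Python sketch. The dates come from the lists above; the function and its names are our own simplification, not anything prescribed by the Act:

```python
from datetime import date

# Key dates from the timeline above (EU AI Act, GPAI provider obligations).
OBLIGATIONS_APPLY = date(2025, 8, 2)   # GPAI provider obligations start to apply
ENFORCEMENT_POWERS = date(2026, 8, 2)  # Commission can enforce and fine from this date
LEGACY_DEADLINE = date(2027, 8, 2)     # grace-period deadline for pre-existing models

def compliance_deadline(placed_on_market: date) -> date:
    """Illustrative only: the date by which a GPAI model must meet the provider
    obligations, based on when it was placed on the EU market."""
    if placed_on_market < OBLIGATIONS_APPLY:
        # Models already on the market before 2 August 2025 get the grace period.
        return LEGACY_DEADLINE
    # Models placed later must comply from the moment they are placed on the market,
    # although fines can only be imposed from ENFORCEMENT_POWERS onwards.
    return placed_on_market

print(compliance_deadline(date(2024, 11, 1)))   # 2027-08-02 (grace period applies)
print(compliance_deadline(date(2025, 9, 15)))   # 2025-09-15 (comply from placement)
```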

Top 5 Official EU AI Act Resources for GPAI Providers

The European Commission has released a number of official guidance materials to help GPAI providers navigate compliance. Here’s what you need to know about each:

1. Guidelines on Obligations for GPAI Providers

Adopted on 18 July 2025

The non-binding guidelines under Article 96(1) of the AI Act provide the Commission's authoritative interpretation of GPAI obligations, establishing technical criteria for compliance.

The guidelines introduce a compute-threshold approach, defining GPAI models as those trained using more than 10²³ FLOP (cumulative floating-point operations) and capable of generating language (text or audio), text-to-image, or text-to-video outputs. This threshold corresponds approximately to models with one billion parameters trained on substantial datasets.

The guidelines clarify that models exceeding 10²⁵ FLOP are presumed to pose systemic risk, triggering additional obligations, including mandatory notification to the AI Office within two weeks. The framework addresses the full model lifecycle, extending obligations from the start of pre-training through all subsequent development phases, including post-market modifications.
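
A minimal, purely illustrative sketch of this threshold logic follows; the function and constant names are ours, and a real assessment involves more criteria than raw training compute:

```python
# Thresholds are cumulative training compute in FLOP (floating-point operations),
# as described in the guidelines above.
GPAI_THRESHOLD_FLOP = 1e23            # indicative presumption of general-purpose capability
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25   # presumption of systemic risk

def classify_model(training_compute_flop: float, generative: bool) -> str:
    """Rough classification based only on the indicative compute thresholds."""
    if not generative or training_compute_flop <= GPAI_THRESHOLD_FLOP:
        return "not presumed to be GPAI"
    if training_compute_flop >= SYSTEMIC_RISK_THRESHOLD_FLOP:
        return "GPAI presumed to pose systemic risk (notify the AI Office within two weeks)"
    return "GPAI (standard provider obligations)"

print(classify_model(5e23, generative=True))  # GPAI (standard provider obligations)
print(classify_model(3e25, generative=True))  # GPAI presumed to pose systemic risk (...)
```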

Key Takeaways:

  • Establishes objective compute-based criteria rather than subjective capability assessments
  • Obligations apply throughout the entire model development and deployment process
  • The AI Office will adopt a collaborative, risk-based approach with informal cooperation encouraged during training phases
  • Models placed on the market before 2 August 2025 have until 2 August 2027 to comply, with allowances for cases where retraining is technically or economically infeasible

Access the full European Commission documents: Guidelines, Guidelines FAQ, GPAI Models FAQ.

2. Template for the Public Summary of GPAI Training Content

Adopted on 24 July 2025

Unlike other guidance documents, which are voluntary, this template (document available here), developed under Article 53(1)(d) of the AI Act, establishes mandatory disclosure requirements for all GPAI providers. The template structures training data summaries into three sections: general information (provider details, model characteristics, data modalities), a list of data sources (including major datasets and detailed web-scraping disclosures), and relevant data processing aspects (copyright considerations and illegal content removal measures).

The template requires differentiated disclosure levels based on data source types. For web-scraped content, providers must disclose the top 10% of domain names by content volume scraped (5% for SMEs), while licensed datasets require only confirmation of licensing agreements and data modalities. The framework balances transparency with trade secret protection, explicitly allowing providers to withhold commercially sensitive information where appropriate.
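
As an illustration of the web-scraping disclosure rule, the sketch below ranks domains by scraped content volume and keeps the top 10% (5% for SMEs). The volume metric, the function, and the minimum of one domain are assumptions for illustration, not part of the official template:

```python
import math

def domains_to_disclose(volume_by_domain: dict[str, int], is_sme: bool = False) -> list[str]:
    """Return the domain names a provider would list in the public summary:
    the top 10% of domains by content volume scraped, or the top 5% for SMEs."""
    share = 0.05 if is_sme else 0.10
    ranked = sorted(volume_by_domain, key=volume_by_domain.get, reverse=True)
    cutoff = max(1, math.ceil(len(ranked) * share))  # assume at least one domain is listed
    return ranked[:cutoff]

volumes = {
    "news.example": 120_000, "wiki.example": 80_000, "blog.example": 60_000,
    "forum.example": 5_000, "shop.example": 2_500,
}
print(domains_to_disclose(volumes))               # top 10% of domains by volume
print(domains_to_disclose(volumes, is_sme=True))  # smaller disclosure share for SMEs
```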

Key Takeaways:

  • Only way to fulfill Article 53(1)(d) obligations, with potential fines up to 3% of global turnover or €15 million
  • Designed to enable copyright holders and other parties with legitimate interests to exercise their rights under EU law
  • Different detail levels required based on data source sensitivity and provider size
  • Must be published on provider websites in a clearly visible and accessible manner

Access the full European Commission documents: Template, Template FAQ.

3. GPAI Code of Practice: Transparency Chapter

The chapter was published as part of the GPAI Code of Practice. The full Code of Practice was published on 10 July 2025, and the Commission confirmed its adequacy on 1 August 2025.

The transparency chapter applies to all GPAI providers who sign the Code, with specific exceptions for free and open-source models without systemic risk.

This chapter establishes a standardized documentation framework through the Model Documentation Form, which consolidates all required transparency information into a single comprehensive document. The form covers licensing details, technical specifications, intended use cases, dataset information, compute and energy usage, and more. Documentation must be maintained for at least 10 years and made available to the AI Office and downstream providers within 14 days of request.
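
The sketch below shows one way a provider might track these retention and response deadlines internally; the field names are our own and do not reproduce the official Model Documentation Form:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

RETENTION_PERIOD = timedelta(days=365 * 10)  # keep documentation for at least 10 years
RESPONSE_WINDOW = timedelta(days=14)         # provide it within 14 days of a request

@dataclass
class ModelDocumentationRecord:
    """Simplified internal record mirroring the kinds of information the
    Model Documentation Form consolidates (illustrative only)."""
    model_name: str
    documented_on: date
    licence: str
    intended_uses: list[str] = field(default_factory=list)
    dataset_summary: str = ""
    compute_and_energy: str = ""

    def retain_until(self) -> date:
        return self.documented_on + RETENTION_PERIOD

    def respond_by(self, request_received: date) -> date:
        """Latest date to make the documentation available to the AI Office
        or a downstream provider after receiving a request."""
        return request_received + RESPONSE_WINDOW
```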

Key Takeaways:

  • Single form consolidates all required transparency information
  • 10-year retention requirement ensures historical accountability
  • Centralized system through AI Office
  • Strong safeguards for intellectual property and trade secrets while ensuring regulatory transparency

Access the full European Commission documents: Transparency Chapter, Model Documentation Form, GPAI Code of Practice FAQ, Signing GPAI Code of Practice FAQ

4. GPAI Code of Practice: Copyright Chapter

The chapter was published as part of the GPAI Code of Practice. The full Code of Practice was published on 10 July 2025, and the Commission confirmed its adequacy on 1 August 2025.

This chapter addresses the Article 53(1)(c) requirement for GPAI providers to adopt copyright compliance policies. The chapter mandates comprehensive copyright policy frameworks that go beyond basic compliance to establish proactive protection mechanisms. Providers must develop and maintain up-to-date policies that clearly define internal accountability structures and demonstrate adherence to EU copyright law, including Article 4(3) of Directive (EU) 2019/790.
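
One concrete building block of such a policy is respecting machine-readable rights reservations when crawling the web. The sketch below checks a site’s robots.txt before fetching a page; treating robots.txt as the reservation mechanism is our assumption for illustration, and a full copyright policy covers much more:

```python
from urllib import robotparser
from urllib.parse import urlparse

def may_crawl(url: str, user_agent: str = "example-training-crawler") -> bool:
    """Return True only if the site's robots.txt allows this user agent to fetch the URL."""
    parsed = urlparse(url)
    robots_url = f"{parsed.scheme}://{parsed.netloc}/robots.txt"
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # download and parse the site's robots.txt
    return parser.can_fetch(user_agent, url)

url = "https://example.com/articles/1"
if may_crawl(url):
    pass  # fetch and process the page for the training corpus
else:
    pass  # skip: the site has reserved its rights for this crawler
```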

Key Takeaways:

  • Draft and keep an up-to-date copyright policy
  • Reproduce only lawfully accessible content when crawling the web
  • Respect copyright reservations and enforce rights
  • Mitigate copyright-infringing outputs
  • Appoint a point of contact for complaints

Access the full European Commission documents: Copyright Chapter, GPAI Code of Practice FAQ, Signing GPAI Code of Practice FAQ

5. GPAI Code of Practice: Safety & Security Chapter

The chapter was published as part of the GPAI Code of Practice. The full Code of Practice was published on 10 July 2025, and the Commission confirmed its adequacy on 1 August 2025.

This chapter applies exclusively to GPAI models with systemic risk, implementing Article 55 obligations for state-of-the-art safety and security measures.

The chapter establishes a comprehensive AI safety framework, requiring providers to develop cutting-edge Safety & Security Frameworks before model release. These frameworks must outline evaluation triggers, risk categories, mitigation strategies, forecasting methods, and organizational responsibilities, with regular updates responding to new risks or incidents.
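
Purely as an illustration of the elements such a framework must cover, the sketch below models them as simple data structures; the names are ours, and the Code does not prescribe any particular format:

```python
from dataclasses import dataclass, field

@dataclass
class SystemicRiskEntry:
    category: str                   # a risk category identified by the provider
    evaluation_triggers: list[str]  # events or capability levels that prompt re-evaluation
    mitigations: list[str]          # planned safety and security mitigations
    forecasting_method: str         # how the provider forecasts the risk's evolution
    owner: str                      # organisational responsibility for this risk

@dataclass
class SafetySecurityFramework:
    model_name: str
    version: str
    risks: list[SystemicRiskEntry] = field(default_factory=list)
    last_updated: str = ""          # updated in response to new risks or serious incidents
```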

Key Takeaways:

  • Maintain a Safety & Security Framework
  • Identify and analyze systemic risks
  • Implement mitigations and report serious incidents
  • Keep updated Safety & Security Model Reports

Access the full European Commission documents: Safety & Security Chapter, GPAI Code of Practice FAQ, Signing GPAI Code of Practice FAQ

Stay Updated on EU AI Act GPAI Compliance

The EU AI Act is reshaping how GPAI models are developed and deployed. These obligations are relevant not only to current GPAI providers. If you are a so-called ‘downstream actor’ and make significant changes to an AI model you’re using, you may legally be considered a provider. Not every modification will trigger this designation — the change must be significant in terms of the model’s generality, capabilities, or systemic risk. It is therefore important to understand how roles across the GPAI lifecycle are assigned and what legal obligations come with them.

👉 Follow Streamlex.eu for real-time updates, compliance checklists, and full access to official EU AI Act documents.

GPAI FAQ

When do GPAI obligations under the EU AI Act begin?
GPAI obligations apply from 2 August 2025. For models placed on the market after that date, enforcement and fines begin a year later, on 2 August 2026.

What qualifies as a GPAI model under the AI Act?
Any AI model trained at scale that can perform a wide range of tasks across different domains and exhibits a significant degree of generality. A model is presumed to qualify as a general-purpose AI model if the computational resources used for its training (training compute) exceed 10²³ FLOP and it can generate language (text or audio), text-to-image, or text-to-video outputs.

What is a GPAI model with systemic risk under the AI Act?
A GPAI model is classified as having systemic risk if it meets at least one of two conditions:

  • Compute threshold condition: The model has capabilities that match or exceed those of the most advanced models. The AI Act presumes that models trained with a cumulative amount of computational resources exceeding 10²⁵ floating-point operations (the ‘compute threshold’) have such capabilities.
  • Designation condition: The Commission can designate a model as a general-purpose AI model with systemic risk either on its own initiative or in response to a qualified alert from the scientific panel if the model’s capabilities or impact are equivalent to those of the most advanced models.

Who needs to comply with the Code of Practice?
Any GPAI provider that signs the Code (with some exceptions for open-source projects) must comply with the Transparency, Copyright, and Safety & Security chapters. Signing the code is voluntary. The list of signatories is available here.

I am not a GPAI provider, but I use AI models. Does this apply to me?
Yes, in some cases. If you are a so-called ‘downstream actor’ and make significant changes to an AI model you’re using, you may legally be considered a provider. Not every modification will trigger this designation — the change must be significant in terms of the model’s generality, capabilities, or systemic risk. More details are available in the European Commission’s latest guidelines, included in our resources above.
