Episode 23 — Understand the Distinct Requirements That Apply to General-Purpose AI Models

In this episode, we turn to one of the most important ideas in modern Artificial Intelligence (A I) law: some obligations attach not just to finished A I systems, but to the models that sit underneath many different products and services. That matters because a powerful model can be reused, adapted, fine-tuned, embedded, or wrapped into many downstream tools that reach very different users and contexts. The European Union (E U) A I Act treats that layer seriously by creating a separate set of duties for providers of general-purpose Artificial Intelligence (G P A I) models, with even stricter duties for the subset considered to create systemic risk. For a new learner, the easiest way to hear this topic is to picture a foundation and a building. The model is the foundation, while the chatbot, search assistant, image tool, workflow engine, or decision-support product built on top of it is the building, and the law wants to govern both layers in different ways because trouble can begin at either one.

Before we continue, a quick note: this audio course is a companion to our course companion books. The first book is about the exam and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook containing 1,000 flashcards that you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

A key starting point is understanding that a model and a system are not the same thing under this framework. The Commission’s guidance explains that models are essential components of A I systems, but they are not usually complete systems by themselves because they still need extra pieces such as interfaces, workflows, rules, or surrounding business processes. That distinction matters because a provider of a G P A I model may never operate the final product that ordinary people use, yet the choices made at the model layer can shape what every downstream system can do well, do badly, or do unsafely. If the base model is poorly documented, difficult to understand, or released without basic safeguards, every later builder inherits part of that problem. This is why the law does not wait until a model is wrapped inside a finished application before imposing obligations. It recognizes that powerful models can spread capabilities and risks across many sectors long before any one final use case is fully visible.

The next question is what makes a model general-purpose in the first place. The A I Act defines a G P A I model as one that displays significant generality, can competently perform a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications, rather than being locked into one narrow job. In plain language, the law is looking for models that can travel. A model built only for one tightly bounded function may still matter, but a model that can generate text, audio, images, video, code, summaries, classifications, and many other outputs across different contexts creates a different kind of regulatory challenge because its uses are broader and harder to predict in advance. The Commission’s guidance also gives an indicative compute-based criterion for identifying models likely to fall in this category, while still stressing that true generality matters more than raw size alone. That helps beginners see why the law is focusing on capability and adaptability, not just brand name or marketing hype.

Scope is also broader than many people assume. A G P A I model can be placed on the Union market even if access is free, and an open-source release can count as placing on the market as well. The Commission’s FAQ also explains that internal use may count at least when a model is essential for providing a product or service to third parties on the Union market or when it affects the rights of natural persons in the Union. At the same time, the law excludes models that are specifically developed and put into service solely for scientific research and development, and it also excludes research, testing, or development activity before placement on the market, with limits around real-world testing. For a beginner, the practical lesson is that organizations should not assume they are out of scope just because they did not sell a boxed product or because the model sits behind the scenes. If the model is being supplied or used in a commercially meaningful way that reaches the Union market, the legal analysis becomes very real very quickly.

Another distinct feature of the G P A I rules is that Chapter V mainly regulates the provider of the model rather than spreading separate model-level duties across deployers, importers, and distributors in the same way the Act does for many A I systems. The Commission’s FAQ states that, for models, the regulated role is the provider, meaning the actor that develops or has the model developed and places it on the market under its own name or trademark, whether for payment or free of charge. That matters because new learners sometimes assume every business that uses a model automatically becomes the provider of that model, which is not true. A company can license or use a model without taking on the provider role for the original model, though it may still face obligations if it builds its own A I system on top of that model or significantly modifies the model in a way that changes its capabilities or risk profile. So one of the distinct requirements here is not only what must be done, but who must do it at the model layer.

For all providers of G P A I models, the first major obligation is technical documentation. The law requires providers to draw up and keep up to date technical documentation about the model, including information about development, training, testing, and evaluation, so that the European Commission’s A I Office can request and review it when needed. This is not busywork. It exists because a general-purpose model can influence many downstream systems, and regulators may need to understand what the model is, how it was built, what it was evaluated for, and what limitations are known before they can assess whether the provider is acting responsibly. Good documentation also helps future internal teams, auditors, and downstream providers avoid making false assumptions about what the model can safely do. In practice, this requirement pushes providers toward disciplined model governance rather than informal release culture, because if a provider cannot explain how the model was created and what evidence supports its performance claims, the compliance story becomes very weak.

The second major baseline duty is to support downstream providers with enough information for them to understand the model’s capabilities and limitations and comply with their own obligations under the law. This is a very important distinction because the G P A I provider does not necessarily control every later application, but it still must supply useful information to those who build systems on top of the model. A downstream provider cannot make responsible choices about safeguards, human oversight, use restrictions, or testing if the model provider shares only glossy marketing claims and hides important limits. The law therefore expects a more serious handoff of information, including material that helps downstream actors interpret performance and risk in a realistic way. For beginners, this shows that governance does not stop at the laboratory door. A model provider has obligations not just to regulators, but also to the ecosystem of later builders who rely on the model to create end-user systems, and weak communication at that stage can amplify risk across many products.

A third distinctive requirement is the copyright and training-data transparency package. The Commission’s FAQ states that providers of G P A I models must implement a policy to comply with Union copyright law and related rights, including the identification and respect of rights reservations using state-of-the-art technologies, and they must publish a sufficiently detailed summary of the content used for training the model. This is one of the clearest signs that the law is not only worried about safety in the narrow sense. It is also concerned with the legality and accountability of the material that shaped the model in the first place. A beginner should hear this as a demand for disciplined data governance at the model level, not as a vague promise to be ethical. The provider is expected to think seriously about what it trained on, how rights were handled, and how to communicate enough about that training corpus to make oversight possible without necessarily revealing every detail or trade secret.

For non-Union providers, there is another distinct requirement that does not apply in the same way to every domestic actor. Before placing a G P A I model on the Union market, a provider established in a third country must appoint an authorized representative established in the Union by written mandate. That representative must be enabled to perform specified tasks, including cooperating with the A I Office and keeping documentation available, which gives regulators a practical point of contact inside the Union. At the same time, the law creates an important open-source carve-out for certain G P A I models that are released under a free and open-source license, make parameters and related information publicly available, and do not present systemic risk. For those models, some obligations such as maintaining technical documentation for authorities, providing downstream documentation, and appointing an authorized representative may not apply. But the exemption does not wipe everything away, because the copyright policy duty and training-data summary duty still remain, and systemic-risk models do not get this carve-out.

The law then creates a second layer for the most powerful and potentially consequential models: G P A I models with systemic risk. This is where students need to notice that not all G P A I models are treated the same. One route into this category is meeting the compute threshold in Article 51, currently 10 to the power of 25 floating point operations for cumulative training compute, which creates a presumption of high-impact capability, although the provider may try to rebut that presumption with substantiated arguments. Another route is designation by the Commission when a model has capabilities or market impact equivalent to the most advanced models, even if the threshold route is not the trigger. The provider must notify the Commission within two weeks after the threshold is met or when it becomes known that the model will meet it. For beginners, the core lesson is simple: once a model reaches the level where its failures or misuse could have Union-wide spillover effects, the law expects more than ordinary documentation and disclosure.
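For listeners who want to see the threshold arithmetic on the page, the presumption described above can be sketched as a tiny calculation. The constant reflects the Article 51 figure mentioned in this episode; the function names and the example compute figures are purely illustrative assumptions, not anything from the Act itself:

```python
# Sketch of the Article 51 compute-threshold presumption.
# The 1e25 FLOP figure is the threshold discussed above; everything
# else (names, example numbers) is an illustrative assumption.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, in FLOPs

def cumulative_training_flops(training_runs):
    """Sum the compute of all training runs (pre-training, fine-tuning, etc.)."""
    return sum(training_runs)

def presumed_systemic_risk(training_runs):
    """True if cumulative compute meets or exceeds the threshold.
    Note: under the Act, a provider may still try to rebut this
    presumption with substantiated arguments."""
    return cumulative_training_flops(training_runs) >= SYSTEMIC_RISK_FLOP_THRESHOLD

# Illustrative example: one large pre-training run plus two smaller runs.
runs = [9.5e24, 4.0e23, 2.0e23]
print(presumed_systemic_risk(runs))  # cumulative 1.01e25 >= 1e25, so True
```

The point of the sketch is simply that the threshold is cumulative across training runs, which is why a model can cross it through fine-tuning even if no single run would by itself.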

Those extra duties for systemic-risk models are much more operational and safety-focused. In addition to the baseline obligations that apply to all G P A I providers, providers of systemic-risk models must perform model evaluation using standardized protocols and state-of-the-art tools, including adversarial testing, to identify and mitigate systemic risks. They must also assess and mitigate possible systemic risks at Union level that may stem from the development, placing on the market, or use of those models. This matters because the law is acknowledging that the most advanced models can create risks that are not limited to one product bug or one customer complaint. The concern is broader and can include widespread misuse, dangerous capability spillover, major cybersecurity implications, and other harms that travel across sectors. The provider is therefore expected to look beyond narrow product performance and think about how a model behaves under pressure, abuse, unexpected scaling, and interaction with a large and diverse downstream ecosystem.

Systemic-risk providers also face continuing duties after release. The Commission’s FAQ explains that they must track, document, and report relevant information about serious incidents and possible corrective measures without undue delay to the A I Office and, where appropriate, national authorities. The Commission’s July 2025 guidelines add that this concept can include serious cybersecurity breaches involving the model or its infrastructure, including theft or exfiltration of model parameters and cyberattacks, because those events can directly affect safety and systemic risk management. On top of that, providers must ensure adequate cybersecurity protection for the model and its physical infrastructure against unauthorized access, leakage, or theft. There is also a practical compliance route through the General-Purpose A I Code of Practice, which the Commission’s FAQ says providers can use to demonstrate compliance, although alternative adequate means are possible if the provider can justify them. In plain terms, the most capable model providers are expected to act more like operators of critical infrastructure than casual software publishers.

Timing and transition rules add another layer that beginners should understand because they shape what is expected today. The Commission’s service desk explains that the rules on governance and the obligations for G P A I models became applicable on 2 August 2025, while providers of G P A I models placed on the market before that date must take the necessary steps to comply by 2 August 2027. That means there is both an immediate compliance story for new market placements and a catch-up story for older models already in circulation. A common misconception is that older models are simply grandfathered forever, but the official guidance does not support that view. Another misconception is that open-source release means automatic freedom from duties, when the law actually preserves several obligations and removes the carve-out entirely for systemic-risk models. The safer way to think about the timeline is that providers need to understand where a model sits in its lifecycle and which date-based duties apply to that model now.
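The two dates in that timeline can be captured as a small lookup. The dates below come from the guidance quoted in this episode; the helper function itself is only an illustrative sketch, not an official rule of procedure:

```python
from datetime import date

# Dates from the Commission's transition guidance discussed above.
# The helper function is an illustrative sketch only.
GPAI_RULES_APPLICABLE = date(2025, 8, 2)       # GPAI obligations become applicable
LEGACY_COMPLIANCE_DEADLINE = date(2027, 8, 2)  # catch-up deadline for older models

def compliance_deadline(placed_on_market: date) -> date:
    """Return the date by which a GPAI model's provider must comply."""
    if placed_on_market < GPAI_RULES_APPLICABLE:
        # Models already on the market before 2 August 2025 get a
        # catch-up period, not permanent grandfathering.
        return LEGACY_COMPLIANCE_DEADLINE
    # New placements must comply from the moment of placing on the market.
    return placed_on_market

print(compliance_deadline(date(2024, 1, 15)))  # 2027-08-02
print(compliance_deadline(date(2026, 3, 1)))   # 2026-03-01
```

The sketch makes the episode's point concrete: there is no permanent grandfathering, only a defined catch-up window for models placed on the market before the rules became applicable.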

By the end of this topic, the pattern should sound much clearer. A G P A I model is regulated because it is a reusable foundation that can affect many downstream systems, so the law puts baseline duties on all providers around documentation, downstream information, copyright compliance, and training-data summary, with additional representation duties for non-Union providers and nuanced open-source exceptions. Then, for the much smaller group of systemic-risk models, the law adds a heavier safety and security layer including evaluation, adversarial testing, systemic-risk mitigation, serious-incident reporting, and cybersecurity protection. The distinct requirements are therefore not random add-ons. They reflect the idea that the broader and more powerful a model becomes, the more the provider must prove that it understands the model, communicates responsibly about it, and manages the risks that radiate outward from it. That is the heart of this episode and the clearest way to remember why G P A I models receive their own special treatment under the Act.
