The European Union has reached a new milestone in regulating artificial intelligence, one year after the EU AI Act entered into force.

From 2 August 2025, the provisions of the AI Act governing general-purpose AI (GPAI) models are in force. These rules apply to models that can be adapted to a wide range of tasks, from content generation to complex decision-making, and are designed to ensure that such models operate safely and in a trustworthy manner.

The flexibility of GPAI models means they can be integrated across sectors and serve a wide variety of purposes. That same versatility, however, carries significant risks: without clear safeguards, the models could be deployed in ways that amplify bias, spread misinformation, or create security vulnerabilities.

The AI Act aims to prevent those outcomes while enabling responsible innovation.

What Every GPAI Provider Must Do

Any provider placing a GPAI model on the EU market must now:

  • Provide technical documentation detailing training and testing processes, accessible to both regulators and downstream users;
  • Establish and follow a copyright policy, ensuring training data is lawfully sourced and rights-respecting;
  • Publish a clear summary of the content used for model training, using the standardized template provided by the European Commission.

Additional Measures for GPAI Models with Systemic Risk

For the most advanced GPAI models that could cause large-scale harm, the AI Act imposes additional responsibilities:

  • Perform model evaluations, including adversarial testing, to identify and mitigate systemic risks;
  • Assess and mitigate systemic risks that may arise during the development, placing on the market, or use of the models;
  • Track, document, and report serious incidents and possible corrective measures to the AI Office and, where relevant, national authorities;
  • Ensure an adequate level of cybersecurity protection.

Guidance and Enforcement

The European Commission has introduced a set of tools to help providers prepare for compliance:

  • Guidelines clarifying the definition of a GPAI model, the criteria for being classified as a provider, the timing of obligations, and available exemptions;
  • A template for publishing the required summary of training data; and
  • The Code of Practice, a voluntary framework developed with industry experts that helps providers demonstrate compliance with the AI Act.

Providers such as Amazon, Anthropic, Google, IBM, Microsoft, and OpenAI have already signed the Code of Practice, signaling their intent to comply with the AI Act.

While the GPAI obligations took effect on 2 August 2025, the Commission's enforcement powers, including the ability to impose penalties, apply from 2 August 2026. Models placed on the EU market before 2 August 2025 have until 2 August 2027 to meet these requirements.

Conclusion

The AI Act’s GPAI obligations mark a clear move from broad principles to rules with real teeth. Providers now face fixed timelines, defined duties, and practical tools to prove they comply. With major players already on board with the Code of Practice, the real test will be in the application and enforcement of the obligations.

For businesses deploying models such as OpenAI’s GPT-4 or Anthropic’s Claude, compliance means verifying that providers meet the AI Act’s transparency and documentation standards, ensuring internal teams are trained to use the systems responsibly, and maintaining clear processes for incident reporting. Taking these steps now will allow businesses to benefit from GPAI while managing risk and staying within the bounds of the law.