Developments in Accountable AI

EU AI Act Update: What the New Code of Practice Means for Business


The European Union’s new General-Purpose AI requirements are now live, and its newly finalized Code of Practice offers the clearest signal yet of what regulators will expect. Whether companies sign on to the Code or not, their business strategy, market reputation, and relationship with regulators will be shaped by the way they choose to respond.

On August 2, the EU AI Act reached a major milestone: binding obligations for providers of General-Purpose AI (GPAI) models—AI systems capable of performing a broad range of tasks—became enforceable. These obligations require providers to

  • maintain detailed technical documentation regarding their models’ capabilities,
  • make copyright-related disclosures about their training data, and
  • put risk management processes in place to guard against harmful uses.

For new GPAI models, these requirements apply immediately; models already on the market before August 2, 2025, have two years to comply. These requirements (along with the rest of the EU AI Act) apply to all parties whose uses of AI affect the European market. This means major U.S. and Chinese providers are covered if they make their models available in the EU.

The EU’s Code of Practice

On July 10, shortly before the Act’s requirements became enforceable, the EU AI Office also finalized its long-awaited General-Purpose AI Code of Practice: a voluntary framework designed specifically to help GPAI providers meet their legal obligations under the Act. While not law, the Code interprets and applies the Act’s requirements and serves as a regulator-approved pathway to compliance.

Companies signing the Code of Practice publicly commit themselves to following it, and regulators will generally treat adherence to the Code as proof of compliance with the Act’s GPAI obligations. Providers choosing not to sign the Code still have to meet the legal requirements of the Act, but without the benefit of a presumption of compliance—in other words, they will need to prove that their particular governance framework satisfies the Act’s requirements.

The Code is organized into three chapters that offer detailed recommendations, templates, and technical benchmarks to guide providers in meeting their obligations with respect to (1) transparency, (2) copyright, and (3) safety and security. The transparency and copyright provisions apply to all GPAI providers, while the safety and security chapter applies only to models classified as “systemic risk” under the AI Act (i.e., advanced “frontier” models such as GPT, Gemini, or Claude).

Regarding data transparency, for example, the EU AI Act requires a “sufficiently detailed summary” of training data that went into a company’s model but leaves the format open. In July, the EU AI Office published a standardized Training-Data Summary Template, now incorporated into the Code. It asks for:

  • General information: model name, provider, modalities (text, images, etc.), estimated dataset size, geographic and language coverage.
  • List of data sources: narrative descriptions of public and private datasets, including web domains where applicable.
  • Data processing: whether opt-outs for text and data mining are respected and what measures are used to exclude illegal content.

For companies, this transforms a vague obligation into a clear, implementable process—crucially, one that regulators have already endorsed.
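To make that process concrete, here is a minimal, hypothetical sketch of how a provider might organize these fields internally before completing the official template. The dataclass, field names, and example values are illustrative assumptions, not the EU AI Office’s own format.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical internal record mirroring the broad sections of the EU's
# Training-Data Summary Template; names and fields are illustrative only.
@dataclass
class TrainingDataSummary:
    # General information
    model_name: str
    provider: str
    modalities: List[str]                # e.g., ["text", "images"]
    estimated_dataset_size: str          # e.g., "~2 trillion tokens"
    languages_covered: List[str]
    geographic_coverage: str

    # List of data sources
    public_datasets: List[str] = field(default_factory=list)
    licensed_private_datasets: List[str] = field(default_factory=list)
    scraped_web_domains: List[str] = field(default_factory=list)

    # Data processing
    tdm_opt_outs_respected: bool = True  # text-and-data-mining opt-outs
    illegal_content_measures: str = ""   # description of filtering steps

# Example usage (illustrative values only)
summary = TrainingDataSummary(
    model_name="ExampleModel-1",
    provider="Example AI Ltd.",
    modalities=["text"],
    estimated_dataset_size="~2 trillion tokens",
    languages_covered=["en", "de", "fr"],
    geographic_coverage="global, EU-weighted",
    public_datasets=["Common Crawl subset"],
    scraped_web_domains=["example.org"],
    illegal_content_measures="URL blocklists and classifier-based filtering",
)
```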

The Code still leaves some issues unresolved, such as fine-tuning of GPAI models by third parties, which the European Commission intends to address in future regulation. In addition, even though the Code has been published by the EU AI Office, it has yet to be formally endorsed by EU Member States.

Reactions

Industry responses to both the Code and the underlying GPAI requirements in the AI Act have been sharp and varied. Tech associations and trade groups such as the Business Software Alliance and the Information Technology Industry Council have applauded the Code for its flexibility and practicality. But not all voices are aligned. EU heavyweights such as Airbus, BNP Paribas, and Mistral called for a two‑year delay in implementing the GPAI rules, warning that “unclear, overlapping and increasingly complex EU regulations” could diminish European competitiveness in the AI race. European Commission leaders have rejected this proposal.

Analysts and policy experts generally view the Code in a positive light. Many see it as a practical compliance tool—a non-binding but highly influential benchmark—to help providers navigate the Act’s GPAI obligations until formal technical standards are put in place. Yet the Code is not without critics. Privacy advocates and civil society stakeholders remain concerned that the final version, after extensive lobbying from U.S. technology firms and EU industry actors, comes with weakened transparency and copyright protections.

As for uptake, responses thus far have been mixed. Many tech giants, including Amazon, Microsoft, and OpenAI, have agreed to sign the Code. Google has done so while also expressing concerns that the Act and the Code could slow innovation or delay approvals. By contrast, Meta explicitly declined to sign, citing “legal uncertainties” and concerns about regulatory overreach. xAI has opted to sign only the Safety & Security chapter.

Insights

Even though the Code is voluntary and does not dictate exactly what is expected of general-purpose AI providers, it offers a clear signal as to what regulators will be looking for. In this way, the Code of Practice reflects a behavioral approach to regulation (a form of “choice architecture”): rather than directly enforcing the Act or altering its requirements, it structures companies’ incentives in a way that nudges them toward compliance. Regulators are not claiming to know precisely how to prescribe every use of AI, but they are setting a baseline. At the same time, they make clear that those found non-compliant will face penalties. Businesses are not required to follow the Code, but doing so is a safe bet.

This strategy is akin to what U.S. regulators call a “safe harbor” provision: we can’t (or won’t) tell you exactly what conduct will get you into trouble, but we can tell you what conduct will keep you safe. Viewed this way, companies might follow the Code as a precaution against falling afoul of regulators. This approach allows regulators to position themselves as friendly to innovation and not unduly paternalistic, while also incentivizing companies, through a mix of uncertainty and risk aversion, to hold themselves to a sufficiently high standard of conduct.

The Code is also a political and strategic pressure point. It promises to reduce the administrative burden for signatories while establishing a clear regulatory benchmark. Still, the rollout will be a test for both sides. Tech companies, facing tight timelines and unclear enforcement contours, are divided; some have called for implementation to be halted or delayed so that technical standards can catch up. At the same time, the Commission is visibly invested in securing signatories, with a “grace period” framed as both an incentive and a signal of cooperative regulation.

The Bottom Line

Here are three key takeaways for industry leaders:

1. A regulatory baseline:

Even companies that do not sign the Code of Practice should expect EU regulators and courts to measure them against it. Companies that choose their own approach to compliance will need strong evidence that their independent AI governance framework is equally robust, or at least robust enough to satisfy regulators. For businesses intending to operate across multiple jurisdictions or regulated sectors, aligning with the Code could also prove more efficient than building separate frameworks from scratch, depending on the company’s circumstances.

2. Flexibility and structure:

The Code leaves room for practices specific to each company. Even those who choose to adhere to the Code should adapt it to their needs and business model. Regulators and stakeholders are likely to be most impressed by companies that demonstrate a serious, proactive approach to AI governance. That could include maintaining audit trails and showing how their internal practices map onto the Code’s provisions. Most of all, it means going beyond treating the Code as a set of rules to be followed mechanically, and instead treating it as a guidepost for developing a thoughtful corporate ethos for the AI age.

3. Taking a stance:

A company’s position on the Code is also a brand statement. Companies committed to being active in the AI space—or seeking to leverage generative AI as part of their business activities—should reflect on how they want to be viewed, and on what sort of message they will be sending through their reaction to the Code. Now that the AI Act’s GPAI obligations have become enforceable, companies must decide whether to present themselves as governance leaders, selective adopters, or opponents of regulation—and how their stance toward these obligations (and regulation more broadly) will shape stakeholders’ perceptions of their identity and values.

The EU AI Act is now moving from high-level principle to day-to-day operational reality for GPAI providers. The Code of Practice is the clearest picture yet of what regulators will expect, and for many businesses, complying with it will be the simplest way to stay on the right side of the law while sending the right signals to markets, regulators, and the public.

Written by Jon Iwry, Fellow, Wharton Accountable AI Lab