The EU’s new AI code of practice has its critics but will be valuable for global governance

EXPERT COMMENT

The EU’s rules and guidance will continue to be controversial as many countries move away from hard regulation of AI, but they offer vital lessons for the world’s approach to governing use of the technology.

The EU AI Act is by far the world's most comprehensive legal framework on AI. Its rules on general-purpose AI (GPAI) models began to apply this month.

The Act is complex and carries strict obligations for companies that provide AI systems used in the EU, such as OpenAI and DeepMind. To help these companies comply with their legal obligations, the Act mandated the development of a Code of Practice on GPAI.

The Code is a guiding document: a set of non-legally binding guidelines designed to help companies demonstrate compliance in areas such as transparency, copyright and safety. Crucially, providers of GPAI models that choose not to sign the Code remain bound by the AI Act's obligations, but are free to report on how they are complying with the law in a different way.

After a nearly year-long multi-stakeholder process coordinated by leading global AI scientists and researchers, the Code was finally published in early July. Its drafting process sparked much drama and criticism. Despite this, the Code could play a vital role in shaping and informing effective global governance for AI, particularly for GPAI models with systemic risk.

The full version of this Expert Comment is available on the Chatham House website.