Highlights

EU policy adviser Kai Zenner warned that many EU member states are financially strained and lack the expert personnel needed to enforce the AI Act effectively.

By Aug. 2, EU countries must finalize rules for penalties and fines under the AI Act, which began phased implementation this year.

The law applies not only to EU-based companies but also to foreign firms doing business in the bloc.

As the European Union begins its phased implementation of the EU AI Act this year, an EU policy adviser is warning that enforcement will be a challenge for cash-strapped member nations.

“Most member states are almost broke,” Kai Zenner, head of office and digital policy adviser for European Parliament Member Axel Voss, said at a conference hosted by George Washington University, per Bloomberg Law.

Zenner also said that countries are not only struggling to fund their data protection agencies, but are also losing artificial intelligence (AI) talent to companies that can pay far higher salaries.

“This combination of lack of capital finance and also lack of talent will be really one of the main challenges of enforcing the AI Act,” Zenner said. “They need some experts, some real experts, in order to understand what companies are telling them.”

European Union member nations have until Aug. 2 to lay down rules for penalties and fines as they enforce the EU AI Act.

Non-European companies can be affected by the EU AI Act if they have customers or otherwise do business in the bloc.

What Is the EU AI Act?

In July 2024, the EU passed its AI Act, the most comprehensive law of its kind in the world. Implementation started this year.

The EU AI Act is a set of rules designed to protect people's safety and rights, prevent discrimination or harm caused by artificial intelligence, and build trust in the technology. The EU also wants to ensure that AI systems protect privacy and security.

The EU AI Act is important because it could become a template for other nations' AI regulations, just as Europe led the way on privacy law with the General Data Protection Regulation (GDPR). This is called the "Brussels effect."

The Act takes a risk-based approach, regulating AI systems according to the level of risk they pose. In the U.S., Colorado has passed a similar risk-based law, though state officials now want to revise it into a more pro-growth regulation. The Act's main categories are:

Unacceptable Risk Systems Are Banned

These include:

Social scoring systems, like ones used to rank citizens
AI that manipulates people using subliminal techniques
Real-time facial recognition in public spaces, with some exceptions for law enforcement

High-Risk Systems Face Strict Rules

AI used in sensitive areas such as hiring, education, healthcare or law enforcement is labeled "high risk."

These artificial intelligence systems must follow strict rules: They must be transparent and accurate, keep records of how they make decisions, and be tested and monitored regularly.

For example, if a hospital uses AI to help diagnose patients, that AI system must meet high standards and be open to inspection.

Limited-Risk Systems Need Some Transparency

These are lower-risk systems like ChatGPT and other chatbots. They don't need heavy regulation, but providers must disclose that content was generated by AI. For example, an image generated or modified by AI has to be labeled as such.
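To make the tiered structure concrete, here is a minimal illustrative sketch in Python of how a compliance team might label AI use cases by the Act's risk tiers. The tier names follow the categories described above, but the EXAMPLE_USE_CASES mapping and the classify_risk helper are hypothetical assumptions for illustration, not an official or legally complete classification.

from enum import Enum

class RiskTier(Enum):
    """Risk tiers described in the EU AI Act (names paraphrased from the article)."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict transparency, accuracy, record-keeping, testing rules
    LIMITED = "limited"            # disclosure obligations, e.g., labeling AI-generated content

# Hypothetical mapping for illustration only -- the Act's actual annexes
# define these categories in far more legal detail.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "subliminal manipulation": RiskTier.UNACCEPTABLE,
    "real-time facial recognition in public": RiskTier.UNACCEPTABLE,
    "hiring": RiskTier.HIGH,
    "education": RiskTier.HIGH,
    "healthcare diagnosis": RiskTier.HIGH,
    "law enforcement": RiskTier.HIGH,
    "general-purpose chatbot": RiskTier.LIMITED,
    "ai image generation": RiskTier.LIMITED,
}

def classify_risk(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a use case.

    Defaults to LIMITED for unlisted cases -- a simplification for this sketch,
    not how the Act itself resolves unlisted systems.
    """
    return EXAMPLE_USE_CASES.get(use_case.lower(), RiskTier.LIMITED)

if __name__ == "__main__":
    for case in ("hiring", "social scoring of citizens", "general-purpose chatbot"):
        print(f"{case}: {classify_risk(case).value} risk")

Running the sketch prints each example use case with its tier, mirroring the article's point that obligations scale with risk: banned at the top, strict rules in the middle, disclosure at the bottom.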