UK businesses report the highest confidence in their readiness for the EU AI Act but face more fines and contract disruption than peers in Germany and France, new research shows.

The study, commissioned by software company Conga, highlights gaps between self-assessed preparedness and actual compliance outcomes. It points to quoting and contracting systems as a growing source of risk.

The findings come as EU institutions debate changes to the implementation schedule for the AI Act. Lawmakers are considering more time for high-risk systems, including some that support pricing and contracting.

The survey of 1,501 decision makers in revenue operations, sales and legal functions covered the UK, Germany and France. Fieldwork took place in September.

The research found that 89% of UK business leaders believe their organisations are ready for the EU AI Act. The share was 82% in Germany and 68% in France.

However, 30% of UK respondents reported fines or penalties linked to regulatory or tariff changes in the past year. This was the highest proportion among the three markets.

UK leaders also reported higher levels of contractual disruption. Some 40% said their organisation had renegotiated contracts because of regulatory or tariff pressures in the past year.

The figures suggest that confidence in compliance does not always translate into outcomes. They also indicate that regulatory change is already affecting commercial terms.

UK firms report heavier investment in compliance headcount than their continental peers. Some 44% of UK business leaders said they had expanded compliance staff. The figure was 38% in Germany and 31% in France.

Almost one in five UK business leaders, or 18%, said they did not know whether AI was already embedded in their contract and quoting systems. This points to limited visibility over tools that sit close to revenue flows.

Only 19% of UK respondents said compliance ownership was shared across legal, sales and revenue teams. Most firms still concentrate responsibility in a single function or keep it fragmented.

The EU AI Act will apply to UK businesses that operate in the bloc or sign contracts with EU-based customers and partners. It will also reach firms that deploy qualifying systems inside their operations, even when those systems run from outside the EU.

Quoting and contracting platforms fall within the scope of high-risk systems in many use cases under the regulation. In those cases they will need to meet strict standards for transparency, auditability and human oversight.

Quoting under scrutiny

The research places particular emphasis on configure, price, quote and contract lifecycle tools. These platforms often apply AI models to propose prices, draft language or flag risk.

Spencer Earp, SVP EMEA at Conga, said: “There’s a big difference between saying you’re ready and actually being ready. Even with discussions underway about possible adjustments to the EU AI Act timeline, the fundamentals don’t change. Too many organisations still treat quoting and contracting as background administration, when in reality they decide how revenue is generated, how supply chains are managed and where regulatory exposure sits. As rules continue to tighten, these systems are fast becoming a test of how connected and transparent a business really is. Getting this right will decide which organisations can keep business moving under greater volatility, whether that’s economic or regulatory, and which risk losing ground.”

Conga positions itself as a provider of configure, price, quote (CPQ), contract lifecycle management and document automation software. The company works with more than 10,000 customers worldwide.

The EU AI Act will impose governance, documentation and monitoring requirements on high-risk systems. Providers and deployers will need clear records of model behaviour and decision logic.

Governance gaps

Many firms are now embedding AI tools into sales and contracting workflows. These tools can generate proposals, assess counterparties and suggest commercial terms.

Charlie Bromley-Griffiths, Senior Legal Counsel at Conga, said: “Embedding AI across business systems and operations can accelerate efficiency, but without oversight and auditability, it introduces new risks. Organisations must ensure that AI outputs are explainable, accurate, and aligned with regulations like the EU AI Act. When governed effectively, AI becomes a tool for transparency, enabling teams to make faster, smarter, and more reliable decisions across the enterprise. Whether the timeline shifts or not, strengthening governance now will ensure AI supports transparency and enables faster, more reliable decisions as the implementation deadline approaches.”

Some industrial users see defined rules as a basis for product investment. They also see regulation as a way to increase customer trust in AI systems.

“We embrace the EU AI Act because it provides us a level of trust and regulation to foster innovation. When allowed the right boundaries, our team can become very creative and innovative,” said Nils Heblich-Menke, Chief Product Owner, Bosch.