The ACHILLES project helps organisations translate the EU AI Act principles into lighter, clearer, safer machine learning across domains.
The EU’s AI Act is now in countdown mode. The first prohibitions took effect in February 2025, transparency duties for general-purpose AI followed in August 2025, and the full obligations for high-risk systems are scheduled to arrive in August 2026 (with a one-year extension for systems covered by Annex I). ACHILLES, a €9m Horizon Europe project launched last November, was formed to help innovators cross that compliance gap without sacrificing performance or sustainability.
Our first article in The Innovation Platform Issue 21 sketched out the ambition: a human-centric, energy-efficient machine learning (ML) approach guided by a shared legal and ethical ‘compass’. Eight months later, the multidisciplinary consortium is moving from vision to execution:
- Two legal and ethical landmark deliverables have been produced: ‘D4.1 Legal & Ethical Mapping’ and ‘D4.4 Ethics Guidelines’, which distil hundreds of pages of legislation into helpful, actionable recommendations.
- After a round of technical requirements gathering and a legal workshop, the four pilot use cases are refining their problem statements and evaluation frameworks.
- The ACHILLES Integrated Development Environment (IDE) is starting to take shape, promising easier, more responsible AI development with transparent documentation, which is crucial for compliance and auditing workflows.
Real-world use cases
ACHILLES is being validated through four pilots spanning healthcare, security, creativity, and quality management.
Montaser Awal, Director of Research at IDnow, said: “ACHILLES represents a huge step forward in building privacy-preserving and regulatory-compliant AI models. Less real data will be required to build reliable and accurate AI models for identity verification, while improving the quality and robustness of the algorithms.”
Marco Cuomo of Cuomo IT Consulting said: “ACHILLES provides a set of tools and reusable frameworks that enable pharma-driven, AI-based projects to focus on their domain-specific expertise. This significantly accelerates overall project timelines and facilitates compliant operations.”
Nuno Gonçalves from the Institute of Systems and Robotics at University of Coimbra said: “A key aspect that the ACHILLES project will bring is the way research institutions can collaborate with industrial partners to improve ML models on both ends, collaboratively, while respecting privacy and security rights of data owners.”
From rule-book to reality
With the EU AI Act guiding design choices across ACHILLES and its use cases, the consortium required a rigorous legal framework early in the project. Deliverable ‘D4.1 Legal & Ethical Mapping’ aligns each relevant norm (AI Act, GDPR, Data Act, Medical Device Regulation) with the ACHILLES IDE and the validation pilots. The companion deliverable, ‘D4.4 Ethics Guidelines’, turns that map into checklists, consent templates, and bias-audit scripts.
D4.1 presents the initial legal analysis of the European and international legal and ethical frameworks relevant to the project. Besides fundamental rights, AI regulation (with a focus on the EU AI Act), and privacy and data protection under the GDPR, it examines broader European legislative instruments concerning data governance (Data Act, Data Governance Act, and Common European Data Spaces), information society services (Digital Services Act and Digital Markets Act), cybersecurity (NIS2 Directive, EU Cybersecurity Act, and EU Cyber Resilience Act), and sector-specific legal requirements (Medical Devices Regulation and intellectual property rights legislation). Ethical considerations round out the analysis, including the importance of informed consent for research participants, the accuracy challenges of facial recognition and verification, algorithmic biases, hallucinations in generative AI, and broader concerns about trustworthiness. Together, these elements give partners an initial checklist of ‘what applies and why’, to be refined as technical details emerge during the project.
To ensure that paper rules survive their first contact with AI researchers and engineers, KU Leuven’s CiTiP team conducted an internal workshop in June 2025, limited to project partners. Each pilot completed a comprehensive legal questionnaire beforehand, enabling the session to drill down into specific decisions and gaps, and giving partners the opportunity to ask questions and better define which legal frameworks and requirements apply to them. The results of the workshop directly inform the next iteration of the use case definitions and support the pilot execution phase. Similar workshops, including public ones, will follow during the project.
Although the risk classifications may change as the pilot designs mature, we share here our provisional view of how the AI Act applies to each use case.
The ACHILLES IDE itself will be considered a limited-risk AI system under the AI Act, which means it must inform users that they are interacting with software rather than a human. Whenever the project handles personal data, especially biometric or health data, as in the pilots, the GDPR also applies. Because those data are classified as ‘special categories’, processing is allowed only under specific exceptions, such as explicit consent or use for scientific research, and it may trigger a mandatory Data Protection Impact Assessment (DPIA).
Although the initial analysis suggests that the Data Act, Data Governance Act, DSA, and DMA may be less relevant to the project, the cybersecurity requirements of the EU Cyber Resilience Act could apply, particularly in the healthcare use case. Finally, ethical concerns such as algorithmic biases, hallucinations, automation bias, and overall trustworthiness must be carefully addressed; they will be further refined and integrated as the project progresses and more information becomes available.
The project follows an iterative compliance loop, incorporating legal, ethical, and technical requirements throughout the AI development lifecycle. The loop consists of four phases:
- Map: identify and analyse applicable norms, obligations, and ethical guardrails.
- Design: build AI systems that are trustworthy, transparent, and compliant, with the support of the ACHILLES IDE.
- Pilot: test in real-world settings to evaluate technical performance and compliance outcomes, with validation through measurable KPIs.
- Refine: update the legal and ethical mapping based on pilot feedback and developments, and upgrade the supporting tools, restarting the loop.
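Purely as an illustration, the four-phase loop above can be modelled as a feedback loop in code. Every function name, norm, and KPI value below is a hypothetical stand-in, not an ACHILLES component:

```python
# Illustrative sketch of the four-phase compliance loop described above.
# All names and values are hypothetical stand-ins.

def map_norms(norms, findings):
    # Map: identify applicable obligations, folding in earlier findings
    return {"obligations": sorted(norms), "open_findings": findings}

def build_system(requirements):
    # Design: record which obligations the system addresses
    return {"addresses": requirements["obligations"]}

def run_pilot(system):
    # Pilot: evaluate against measurable KPIs (dummy values here)
    return {"transparency_score": 0.92, "bias_gap": 0.12}

def refine(kpis, thresholds):
    # Refine: KPIs that exceed their limit become findings for the next pass
    return [name for name, limit in thresholds.items() if kpis[name] > limit]

def compliance_loop(norms, thresholds, iterations=2):
    findings = []
    for _ in range(iterations):
        requirements = map_norms(norms, findings)
        system = build_system(requirements)
        kpis = run_pilot(system)
        findings = refine(kpis, thresholds)
    return findings

# e.g. a bias gap above 0.1 is flagged and fed back into the next Map phase
print(compliance_loop({"AI Act", "GDPR"}, {"bias_gap": 0.1}))
```

The point of the sketch is the feedback edge: findings from the Pilot phase re-enter the Map phase, so the legal analysis is never a one-off document.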
Compliance made easy: The ACHILLES IDE
Today, teams aiming to build EU-compliant AI must juggle regulations, spreadsheets, and 300-page PDFs. The ACHILLES Integrated Development Environment (IDE) combines specifications, code, documentation, and evidence in a single workspace, driven by a specification-first approach. This paradigm brings the rigour of well-documented software engineering: start with the business, legal and ethical requirements, then let the IDE scaffold the code and proofs, with a powerful Copilot to guide you throughout.
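To make the specification-first idea concrete, here is a minimal hypothetical sketch of a requirements record that code and documentation checks can later be tied to. The `Specification` class, its field names, and the articles cited are our illustrative inventions; the ACHILLES IDE’s actual schema has not been published:

```python
# Hypothetical sketch of a specification-first record: obligations are
# declared before any code, and each must later be backed by evidence.
from dataclasses import dataclass, field

@dataclass
class Specification:
    purpose: str
    legal_basis: list       # obligations identified up front
    risk_level: str         # provisional AI Act classification
    evidence: list = field(default_factory=list)

    def missing_evidence(self):
        # Every declared obligation should end up backed by an artifact
        covered = {item["basis"] for item in self.evidence}
        return [basis for basis in self.legal_basis if basis not in covered]

spec = Specification(
    purpose="identity verification pilot",
    legal_basis=["AI Act Art. 50 transparency", "GDPR Art. 35 DPIA"],
    risk_level="limited",
)
spec.evidence.append({"basis": "AI Act Art. 50 transparency",
                      "artifact": "model_card.md"})
print(spec.missing_evidence())  # the DPIA is still outstanding
```

Starting from such a record, an IDE can scaffold checklists and flag any obligation that never acquires an evidence artifact, rather than leaving compliance to a spreadsheet maintained on the side.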
The ACHILLES IDE acts as a foundation for unifying research outcomes in the trustworthy AI space into coherent, compliant, and efficient development pipelines.
How it works – A hypothetical project
1. Through a set of (value-sensitive) co-design best practices, processes, and tools, define the project goals and, involving as many stakeholders as possible, draw up a set of specifications.
2. Based on the project description and specifications, the IDE’s Copilot iteratively makes recommendations across the project lifecycle, from compliance checklists to technical tools (e.g., a specific bias-auditing method applicable to medical images). The framework’s innovative Standard Operating Procedures language (SOP Lang) enables flexible yet controllable workflows, designed for human and AI agents to collaborate. Every manager and developer decision is also logged for improved transparency.
3. Monitor the training and get support building transparent, semi-automated documentation, such as Model Cards. After deployment, continue to monitor the system for drift and performance deterioration that may trigger retraining.
4. Export evidence based on the semi-automated documentation and enable seamless audit trails, effectively bridging the gap between decision-makers, developers, end-users, and notified bodies.
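Post-deployment drift monitoring of the kind described above can be as simple as comparing a live feature distribution against its training-time baseline. The sketch below uses the population stability index (PSI), a standard generic technique rather than a published ACHILLES component, and the 0.2 threshold is only a common rule of thumb:

```python
# Generic sketch of post-deployment drift monitoring: compare a live
# feature distribution against its training-time baseline using the
# population stability index (PSI).
import math

def psi(expected, actual, bins=10):
    """Population stability index between two samples of a numeric feature."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0], edges[-1] = float("-inf"), float("inf")  # catch out-of-range live values

    def frac(sample, i):
        count = sum(1 for x in sample if edges[i] <= x < edges[i + 1])
        return max(count / len(sample), 1e-6)  # avoid log(0) on empty bins

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

baseline = [i / 100 for i in range(100)]    # feature values seen at training time
live = [0.5 + i / 200 for i in range(100)]  # live values, shifted upward
if psi(baseline, live) > 0.2:               # rule-of-thumb drift threshold
    print("drift detected: consider retraining")
```

In a production setting the same comparison would run continuously per feature and per model output, and a breach would open a retraining ticket rather than just print a warning.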
Looking ahead
As the project nears the end of its first year, the use cases are concluding their pilot definitions, including the evaluation frameworks to assess the impact of ACHILLES on their workflows. Technical work is getting up to full speed on the various supporting toolkits (e.g., bias, explainability, robustness) and the implementation of the ACHILLES IDE.
Multidisciplinary alignment workshops are key, and new events are planned to accompany the ongoing research, with some events open to external participants. These sessions will explore topics such as explainability, human oversight, bias mitigation, and compliance verification.
In the background, ACHILLES is teaming up with the other projects funded under the CL4-2024-DATA-01-01 ‘AI-driven data operations and compliance technologies’ call: ACCOMPLISH, CERTAIN, and DATA-PACT. The goal is to deepen collaboration through joint workshops, shared open-source tooling, and cross-dissemination.
The stakes are clear: by the time the AI Act is fully enforceable, ACHILLES aims to prove that trustworthy, greener AI is not a bureaucratic burden but a competitive edge for European innovators.
Disclaimer:
This project has received funding from the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101189689.
Please note, this article will also appear in the 23rd edition of our quarterly publication.