{"id":316828,"date":"2025-08-04T09:49:14","date_gmt":"2025-08-04T09:49:14","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/316828\/"},"modified":"2025-08-04T09:49:14","modified_gmt":"2025-08-04T09:49:14","slug":"is-the-eu-ai-act-a-step-in-the-right-direction","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/316828\/","title":{"rendered":"Is the EU AI Act a step in the right direction?"},"content":{"rendered":"<p>The <a href=\"https:\/\/artificialintelligenceact.eu\/\" target=\"_blank\" rel=\"noopener\">EU Artificial Intelligence Act<\/a> is a set of regulations designed to categorise and govern the development of AI within the EU based on specific levels of risk.<\/p>\n<p>The primary aim of the EU AI Act is to ensure AI systems are safe and secure, and to promote trustworthy AI development \u2013 but is the desired outcome likely to become reality?<\/p>\n<p>We have spoken to four experts across the technology sector to get their thoughts on this new legislation and how effective they deem it to be, as they weigh up the pros and cons of the Act.<\/p>\n<p><strong>Heightening security through reduced risk<\/strong><\/p>\n<p>One of the more prominent aspects of this new regulation is the classification of levels of risk within AI systems. 
Through this clear categorisation and identification of security risks, <a href=\"https:\/\/www.innovationnewsnetwork.com\/will-businesses-be-impacted-by-the-eu-ai-act-and-what-can-they-do-to-prepare\/35226\/\" target=\"_blank\" rel=\"noopener\">businesses should be able to control the trustworthiness of their systems<\/a>.<\/p>\n<p>For Martin Davies, Audit Alliance Manager at <a href=\"https:\/\/try.drata.com\/product\/pci-dss?utm_campaign=CL_cap_goog_all_all-nb-pci_EMEA_ALL__demo_framework&amp;utm_source=google&amp;utm_medium=paidsearch&amp;utm_term=pci%20dss%20compliance&amp;utm_content=pci-prod_txt_v1&amp;utm_campaignid=21477762902&amp;utm_adgroup=170986936004&amp;utm_creative=746587281813&amp;utm_targetid=kwd-1570448249&amp;gad_source=1&amp;gad_campaignid=21477762902&amp;gbraid=0AAAAABpLT49d0RzU9y4_s5UDn9ah8I8VR&amp;gclid=CjwKCAjwy7HEBhBJEiwA5hQNoj9u2FH9BJuvJ7l9I4NHjn0sUjGa7aVqWglMrhf0SyQxktk3KrE7choCboUQAvD_BwE\" target=\"_blank\" rel=\"noopener\">Drata<\/a>: \u201cThe EU AI Act has a clear common purpose to reduce the risk to end users. 
By prohibiting a range of high-risk applications of AI techniques, the risk of unethical surveillance and other means of misuse is certainly mitigated.<\/p>\n<p>\u201cEven in circumstances where high-risk biometric AI applications are still permitted for the purposes of law enforcement, there is still a limitation on the purpose and location for such applications, which prevents their misuse (intentional or otherwise) in this sector.\u201d<\/p>\n<p>As AI is integrated into more and more modern technology, it grows ever more important to build secure systems that perform as intended.<\/p>\n<p>Ilona Cohen, Chief Legal and Policy Officer at <a href=\"https:\/\/www.hackerone.com\/\" target=\"_blank\" rel=\"noopener\">HackerOne<\/a>, highlights the importance of this strong stance on security, saying: \u201cWe are pleased that the Final Draft of the General-Purpose AI Code of Practice retains measures crucial to testing and protecting AI systems, including frequent active red-teaming, secure communication channels for third parties to report security issues, competitive bug bounty programs, and internal whistleblower protection policies.<\/p>\n<p>\u201cWe also support the commitment to AI model evaluation using a range of methodologies to address systemic risk, including security concerns and unintended outcomes.\u201d<\/p>\n<p>Drata\u2019s Davies continues: \u201cFurthermore, those high-impact AI systems that remain permitted under the EU AI Act will still need impact assessments. 
This will require organisations that use them to understand and articulate the full spectrum of potential consequences.<\/p>\n<p>\u201cThis is a step in the right direction, and the proposed penalties will mean that the developers of such high-impact AI applications are rendered accountable for their outcomes.<\/p>\n<p>\u201cThe positive impact this Act could have on creating a safe and trustworthy AI ecosystem within the EU will lead to an even wider adoption of the technology. To that extent, this regulation will encourage innovation within defined parameters, which will only benefit the AI industry at large.\u201d<\/p>\n<p>However, not everyone thinks the regulation will have such an immediate positive impact.<\/p>\n<p><strong>Global preparation for AI regulation<\/strong><\/p>\n<p>Around the world, governments are turning their focus to <a href=\"https:\/\/www.innovationnewsnetwork.com\/what-does-the-eu-ai-act-mean-in-practice\/50341\/\" target=\"_blank\" rel=\"noopener\">compliance and regulation on the development of artificial intelligence<\/a>. But this is no mean feat.<\/p>\n<p>According to Hugh Scantlebury, CEO and Founder of <a href=\"https:\/\/www.aqilla.com\/\" target=\"_blank\" rel=\"noopener\">Aqilla<\/a>: \u201cCompanies, individuals and governments around the world are working on an almost unimaginable range of AI-related projects. So, trying to regulate the technology right now is like trying to control the high seas or bring law and order to the Wild West.<\/p>\n<p>\u201cIf we did attempt to introduce regulation, it would have to be global \u2013 and such an agreement seems unlikely any time soon. Otherwise, if one region, such as the EU, or one country, such as the UK, attempts to regulate AI and establish a \u2018safe framework,\u2019 developers will simply move to another jurisdiction to continue their work. And that\u2019s before we consider those already based outside the EU or the UK. 
Would a global agreement stop state-sponsored or independent developers in countries like Russia, China, Iran, and North Korea?\u201d<\/p>\n<p>A global consensus is important to ensure a level playing field that promotes AI development while prioritising security. This regulatory cohesion will not be easy to attain, as Darren Thomson, Field CTO EMEAI at <a href=\"https:\/\/www.commvault.com\/\" target=\"_blank\" rel=\"noopener\">Commvault<\/a>, explains: \u201cThe EU AI Act is a comprehensive, legally binding framework that clearly prioritises regulation of AI, transparency, and prevention of harm.<\/p>\n<p>\u201cFollowing suit to some degree, the UK is maintaining a lighter touch on governance. Its AI Action Plan sets out a commendable vision for the future, but, arguably, with insufficient regulatory oversight. Meanwhile, the recently announced US AI Action Plan aims to brush regulatory hurdles under the carpet and push forward to win the global AI race.<\/p>\n<p>\u201cBut rather than being a positive sign of progress, this regulatory divergence is creating a complex landscape for organisations building and implementing AI systems. The lack of cohesion makes for an uneven playing field and, conceivably, a riskier AI-powered future. Organisations will need to determine a way forward that balances innovation with risk mitigation, adopting robust cybersecurity measures and adapting them specifically for the emerging demands of AI.\u201d<\/p>\n<p>For Aqilla\u2019s Scantlebury: \u201cThe birth of AI is second only to the foundation of the Internet in terms of its power to fundamentally alter our lives \u2013 and some people even compare it to the discovery of fire.<\/p>\n<p>\u201cHyperbole aside, AI is still in its infancy, and we have only scratched the surface of what it could achieve. 
So, right now, no one is in a position to legislate \u2013 and even if they were, AI is developing at such a pace that the legislation wouldn\u2019t keep up.\u201d<\/p>\n<p>So, is the EU AI Act going to bring peace and harmony, or signal the beginning of greater complexity and fragmentation? Only time will tell.<\/p>\n<p><strong>Martin Davies, Drata<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-60387\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/08\/Martin-Davies_Drata-278x300.jpg\" alt=\"Martin Davies, Drata\" width=\"114\" height=\"123\" \/><\/p>\n<p><strong>Ilona Cohen, HackerOne<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-60388 alignnone\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/08\/Ilona-Cohen_HackerOne-300x300.jpg\" alt=\"Ilona Cohen, HackerOne\" width=\"115\" height=\"115\" \/><\/p>\n<p><strong>Darren Thomson, Commvault<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-60394\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/08\/Darren-Thomson_Commvault-244x300.jpg\" alt=\"Darren Thomson, Commvault\" width=\"113\" height=\"138\" \/><\/p>\n<p><strong>Hugh Scantlebury, Aqilla<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-60395\" src=\"https:\/\/www.europesays.com\/uk\/wp-content\/uploads\/2025\/08\/Hugh-Scantlebury_Aqilla-254x300.jpeg\" alt=\"Hugh Scantlebury, Aqilla\" width=\"115\" height=\"136\" \/><\/p>\n","protected":false},"excerpt":{"rendered":"The EU Artificial Intelligence Act is a set of regulations designed to categorise and govern the development 
of&hellip;\n","protected":false},"author":2,"featured_media":316829,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5174],"tags":[1942,2000,299,5187],"class_list":{"0":"post-316828","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-eu","8":"tag-artificial-intelligence","9":"tag-eu","10":"tag-europe","11":"tag-european"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/114969874869227508","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/316828","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=316828"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/316828\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/316829"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=316828"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=316828"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=316828"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}