{"id":145655,"date":"2025-05-31T02:20:17","date_gmt":"2025-05-31T02:20:17","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/145655\/"},"modified":"2025-05-31T02:20:17","modified_gmt":"2025-05-31T02:20:17","slug":"european-union-ai-regulation-is-both-model-and-warning-for-u-s-lawmakers-experts-say","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/145655\/","title":{"rendered":"European Union AI regulation is both model and warning for U.S. lawmakers, experts say"},"content":{"rendered":"<p>The EU\u2019s law is comprehensive and puts regulatory responsibility on AI developers to mitigate the risk of harm from their systems, which are reviewed by EU officials.<\/p>\n<p class=\"single-page__signature top\"><b>This <\/b><b>article<\/b><b> was reprinted with permission from <a href=\"https:\/\/virginiamercury.com\" target=\"_blank\" rel=\"noopener\">Virginia Mercury<\/a>.\u00a0<\/b><\/p>\n<p>The European Union\u2019s landmark AI Act, which went into effect last year, stands as an inspiration for some U.S. legislators looking to enact widespread consumer protections. Others use it as a cautionary tale about overregulation leading to a less competitive digital economy.<\/p>\n<p>The European Union enacted its law to prevent what is currently happening in the U.S. 
\u2014 a patchwork of AI legislation throughout the states \u2014 said Sean Heather, senior vice president for international regulatory affairs and antitrust at the Chamber of Commerce, during an exploratory congressional subcommittee hearing on May 21.<\/p>\n<p>\u201cAmerica\u2019s AI innovators risk getting squeezed between the so-called Brussels Effect of overzealous European regulation and the so-called Sacramento Effect of excessive state and local mandates,\u201d said Adam Thierer, a senior fellow at the think tank R Street Institute, at the hearing.<\/p>\n<p>The EU\u2019s AI Act is comprehensive and puts regulatory responsibility on AI developers to mitigate the risk of harm from their systems. It also requires developers to provide technical documentation and training summaries of their models for review by EU officials. If the U.S. adopted similar policies, it would fall from its first-place position in the global AI race, Thierer testified.<\/p>\n<p>The \u201cBrussels Effect\u201d Thierer mentioned is the idea that the EU\u2019s regulations will influence the global market. But not much of the world has followed suit \u2014 so far Canada, Brazil and Peru are working on similar laws, but the UK and countries like Australia, New Zealand, Switzerland, Singapore and Japan have taken a less restrictive approach.<\/p>\n<p>When Jeff Le, founder of tech policy consultancy 100 Mile Strategies LLC, talks to lawmakers on each side of the aisle, he said he hears that they don\u2019t want another country\u2019s laws deciding American rules.<\/p>\n<p>\u201cMaybe there\u2019s a place for it in our regulatory debate,\u201d Le said. 
\u201cBut I think the point here is American constituents should be overseen by American rules, and absent those rules, it\u2019s very complicated.\u201d<\/p>\n<p>Does the EU AI Act keep Europe from competing?<\/p>\n<p>Critics of the AI Act say its language is overly broad, which slows down the development of AI systems as companies work to meet regulatory requirements. France and Germany rank in the top 10 global AI leaders, and China is second, according to Stanford\u2019s AI Index, but the U.S. currently leads by a wide margin in the number of leading AI models and in AI research, experts testified before the congressional committee.<\/p>\n<p>University of Houston Law Center professor Peter Salib said he believes the EU\u2019s AI Act is a factor \u2014 but not the only one \u2014 in keeping European countries out of the top spots. First, the law has only been in effect for about nine months, which isn\u2019t long enough to have had much of an impact on Europe\u2019s ability to participate in the global AI economy, he said.<\/p>\n<p>Second, the EU AI Act is one piece of Europe\u2019s overall attitude toward digital protection, Salib said. The General Data Protection Regulation, a law that went into effect in 2018 and gives individuals control over their personal information, follows a similarly strict regulatory mindset.<\/p>\n<p>\u201cIt\u2019s part of a much longer-term trend in Europe that prioritizes things like privacy and transparency really, really highly,\u201d Salib said. \u201cWhich is, for Europeans, good \u2014 if that\u2019s what they want, but it does seem to have serious costs in terms of where innovation happens.\u201d<\/p>\n<p>Stavros Gadinis, a professor at the Berkeley Center for Law and Business who has worked in the U.S. and Europe, said he thinks most of the concerns around innovation in the EU lie outside the AI Act. 
Europe\u2019s tech labor market isn\u2019t as robust as the U.S. market, and it can\u2019t compete with the financing available to Silicon Valley and Chinese companies, he said.<\/p>\n<p>\u201cThat is what\u2019s keeping them, more than this regulation,\u201d Gadinis said. \u201cThat, and the law hasn\u2019t really had the chance to have teeth yet.\u201d<\/p>\n<p>During the May 21 hearing, Rep. Lori Trahan, a Democrat from Massachusetts, called the Republicans\u2019 stance \u2014 that any AI regulation would kill tech startups and growing companies \u2014 \u201ca false choice.\u201d<\/p>\n<p>The U.S. heavily invests in science and innovation, has founder-friendly immigration policies, has lenient bankruptcy laws and has a \u201ccultural tolerance for risk taking,\u201d all policies the EU does not offer, Trahan said.<\/p>\n<p>\u201cIt is therefore false and disingenuous to blame the EU\u2019s tech regulation for its low number of major tech firms,\u201d Trahan said. \u201cThe story is much more complicated, but just as the EU may have something to learn from United States innovation policy, we\u2019d be wise to study their approach to protecting consumers online.\u201d<\/p>\n<p>Self-governance<\/p>\n<p>The EU\u2019s law puts a lot of responsibility on AI developers, requiring transparency, reporting, third-party testing and copyright tracking. These are things that AI companies in the U.S. say they already do, Gadinis said.<\/p>\n<p>\u201cThey all say that they do this to a certain extent,\u201d he said. \u201cBut the question is, how expansive these efforts need to be, especially if you need to convince a regulator about it.\u201d<\/p>\n<p>AI companies in the U.S. currently self-govern, meaning they test their models for some of the societal and cybersecurity risks currently outlined by many lawmakers. But there\u2019s no universal standard \u2014 what one company deems safe may be seen as risky by another, Gadinis said. 
Universal regulations would create a baseline for introducing new models and features, he said.<\/p>\n<p>Even one company\u2019s safety testing may look different from one year to the next. Until 2024, OpenAI\u2019s CEO Sam Altman supported federal AI regulation and sat on the company\u2019s Safety and Security Committee, which regularly evaluates OpenAI\u2019s processes and safeguards over a 90-day period.<\/p>\n<p>In September, he left the committee and has since spoken out against federal AI legislation. OpenAI\u2019s safety committee has since been operating as an independent entity, Time reported. The committee recently published recommendations to enhance security measures, be more transparent about OpenAI\u2019s work and \u201cunify the company\u2019s safety frameworks.\u201d<\/p>\n<p>Even though Altman has changed his tune on federal regulation, OpenAI\u2019s mission is focused on the benefits society gains from AI \u2014 \u201cThey wanted to create [artificial general intelligence] that would benefit humanity instead of destroying it,\u201d Salib said.<\/p>\n<p>AI company Anthropic, maker of the chatbot Claude, was formed by former staff members of OpenAI in 2021 and focuses on responsible AI development. Google, Microsoft and Meta are other top American AI companies that have some form of internal safety testing, and they were recently assessed by the AI Safety Project.<\/p>\n<p>The project asked experts to weigh in on the strategies each company took for risk assessment, current harms, safety frameworks, existential safety strategy, governance and accountability, and transparency and communication. 
Anthropic scored the highest, but all of the companies fell short on \u201cexistential safety,\u201d or guarding against the harm AI models could cause to society if left unchecked.<\/p>\n<p>Just by developing these internal policies, most AI leaders are acknowledging the need for some form of safeguards, Salib said.<\/p>\n<p>\u201cI don\u2019t want to say there\u2019s wide industry agreement, because some seem to have changed their tunes last summer,\u201d Salib said. \u201cBut there\u2019s at least a lot of evidence that this is serious and worthwhile thinking about.\u201d<\/p>\n<p>What could the U.S. gain from the EU\u2019s practices?<\/p>\n<p>Salib said he believes a law like the EU AI Act would be \u201coverly comprehensive\u201d in the U.S.<\/p>\n<p>Many AI concerns now, like discrimination by algorithms or self-driving cars, could be governed by existing laws, he said \u2014 \u201cIt\u2019s not clear to me that we need special AI laws for these things.\u201d<\/p>\n<p>But he said the specific, case-by-case legislation that the states have been passing has been effective in targeting harmful AI actions and ensuring compliance from AI companies.<\/p>\n<p>Gadinis said he\u2019s not sure why Congress is opposed to the state-by-state legislative model, as most of the state laws are consumer-oriented and very specific \u2014 like deciding how a state may use AI in education, preventing discrimination in healthcare data or keeping children away from sexually explicit AI content.<\/p>\n<p>\u201cI wouldn\u2019t consider these particularly controversial, right?\u201d Gadinis said. \u201cI don\u2019t think the big AI companies would actually want to be associated with problems in that area.\u201d<\/p>\n<p>Gadinis said the EU\u2019s AI Act originally mirrored this specific, case-by-case approach, addressing AI considerations around sexual images, minors, consumer fraud and use of consumer data. 
But when ChatGPT was released in 2022, EU lawmakers went back to the drawing board and added provisions on large language models, systemic risk, high-risk classifications and training, which made the pool of who needed to comply much wider.<\/p>\n<p>After 10 months of living with the law, the European Commission said this month that it is open to \u201csimplify[ing] the implementation\u201d to make it easier for companies to comply.<\/p>\n<p>It\u2019s unlikely the U.S. will end up with AI regulations as comprehensive as the EU\u2019s, Gadinis and Salib said. President Trump\u2019s administration has taken a deregulatory approach to tech so far, and Republicans passed a 10-year moratorium on state-level AI laws in the \u201cbig, beautiful bill\u201d heading to the Senate for consideration.<\/p>\n<p>Gadinis predicts that the federal government won\u2019t take much action at all to regulate AI, but that mounting pressure from the public may result in an industry self-regulatory body. This is where he believes the EU will be most influential \u2014 it has leaned on public-private partnerships to develop a strategy.<\/p>\n<p>\u201cMost of the action is going to come either from the private sector itself \u2014 they will band together \u2014 or from what the EU is doing in getting experts together, trying to kind of come up with a sort of half industry, half government approach,\u201d Gadinis said.<\/p>\n","protected":false},"excerpt":{"rendered":"The EU\u2019s law is comprehensive, and puts regulatory responsibility on developers of AI to mitigate risk of 
harm&hellip;\n","protected":false},"author":2,"featured_media":145656,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5174],"tags":[323,8179,2000,299,5187,1699],"class_list":{"0":"post-145655","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-eu","8":"tag-ai","9":"tag-congress","10":"tag-eu","11":"tag-europe","12":"tag-european","13":"tag-european-union"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/114600059141450338","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/145655","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=145655"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/145655\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/145656"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=145655"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=145655"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=145655"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}