{"id":30659,"date":"2026-05-07T07:56:10","date_gmt":"2026-05-07T07:56:10","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/30659\/"},"modified":"2026-05-07T07:56:10","modified_gmt":"2026-05-07T07:56:10","slug":"building-ai-without-guardrails","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/30659\/","title":{"rendered":"Building AI Without Guardrails"},"content":{"rendered":"<p>Key Takeaways:<\/p>\n<p>AI governance is broadly recognized as essential, but today it remains fragmented, largely aspirational, and lacking enforceable mechanisms for accountability, runtime assurance, and global interoperability.<br \/>\nBecause AI innovation is advancing too quickly for governments or standards bodies to keep pace, practical AI governance is most likely to emerge first from high\u2011risk, safety\u2011critical industries such as automotive, industrial systems, and semiconductors.<br \/>\nThe most immediate governance risks center on intellectual property exposure, data misuse, and over\u2011reliance on AI\u2011generated code or designs without strong human review, sandboxing, and independent verification.<\/p>\n<p>Artificial intelligence is being integrated across the semiconductor ecosystem at a pace that far outstrips any rules to govern it, raising the specter of increased IP theft and security breaches with no obvious way to prevent them.<\/p>\n<p>From foundation models embedded in EDA workflows to agentic systems influencing design, verification, and physical outcomes, AI is reshaping how chips are built and where and how risk is introduced. But while AI governance is widely acknowledged as necessary, today\u2019s efforts remain fragmented, unevenly interpreted, and focused more on intent than on measurable outcomes. 
Put simply, current governance falls short today, and given the pace of innovation in AI, traditional regulatory approaches are trailing behind and unlikely to catch up.<\/p>\n<p>\u201cAI governance requires some form of guidelines or regulations to cover policies and framework processes, to guide responsible development, to guide when you deploy an AI system, and to guide the use of AI systems,\u201d said Dana Neustadter, senior director of product management for Security IP Solutions at <a href=\"https:\/\/semiengineering.com\/entities\/synopsys-inc\/\" rel=\"nofollow noopener\" target=\"_blank\">Synopsys<\/a>, drawing a rough parallel to safety and security in cars. \u201cAI goes along the same lines to drive trust, ethics, accountability, and transparency. But to identify and mitigate risk, AI systems should look at data security, model integrity, access control, and authentication, similar to functional safety and cybersecurity in cars. It\u2019s tied to security, as well. Typically, it looks at ethical principles, like how to know some form of regulations, and how to manage risks and accountability.\u201d<\/p>\n<p>Safety-critical industries may well become the proving ground for the first practical, enforceable models of AI accountability, following a path similar to the way safety standards evolved in the automotive, industrial control and automation, aerospace, and military sectors. \u201cThe semiconductor industry\u2019s chief designers, as well as our software and ecosystem partners, have all come together over the decades,\u201d said John Weil, vice president, IoT and Edge AI Processor Business at <a href=\"https:\/\/semiengineering.com\/entities\/synaptics\/\" rel=\"nofollow noopener\" target=\"_blank\">Synaptics<\/a>. 
\u201cThis goes back into the \u201990s with military\/aerospace safety, and then as automotive content and factory automation equipment increased significantly, there were a lot of ISO standards that came about to try to guide engineers on what it means to design a reliable and safe system. We don\u2019t really have that on this AI side today. If somebody said, \u2018Hey, I want to do an industrial automation safe product with AI,\u2019 those two things are difficult to put in the same sentence.\u201d<\/p>\n<p>Defining use cases is key in AI governance<br \/>To most experts, there are two distinct use cases for AI. \u201cOne focuses on data management, and the other involves generating and verifying code and hardware,\u201d said Sylvain Guilley, co-founder and CTO at Secure-IC, a <a href=\"https:\/\/semiengineering.com\/entities\/cadence-design-systems\/\" rel=\"nofollow noopener\" target=\"_blank\">Cadence <\/a>company. \u201cThese are truly separate areas. When it comes to managing data, this field is more advanced. For example, in cybersecurity, which has existed longer than many other applications, consider how antivirus programs and, more recently, security operations centers (SOCs) are now largely run by AI. The reason for this shift is that without AI, analysts would be overwhelmed by the sheer volume of data and the complexity of the correlations they must process. AI is therefore needed to perform the initial layer of analysis on content and metadata. That\u2019s the primary use of AI when it comes to handling data.\u201d<\/p>\n<p>There\u2019s an important governance issue here, however. \u201cThe SOC operator handling the data doesn\u2019t actually own it,\u201d Guilley said. \u201cThey\u2019re simply monitoring it. If the data includes sensitive, confidential, or proprietary information, effective governance is necessary so that any AI overseeing traffic can ensure the information isn\u2019t corrupted or compromised and doesn\u2019t show signs of attack. 
This highlights the key challenge of managing your data when you rely on external providers, such as SaaS cybersecurity services. The legitimate owner of the data isn\u2019t necessarily the same party conducting the analysis.\u201d<\/p>\n<p>A further risk comes when AI is generating intellectual property. \u201cHere, there\u2019s a gray area because it depends on the license of the LLM model that is used,\u201d he said. \u201cIt might depend on where the data has been generated. There can be some IP, but also some export considerations. If I am operating from France, what happens if I\u2019m using foreign tools? They will tell me that I still own the output. Meanwhile, the data has been circulated to another country. I\u2019m not sure there is any stronger legal mandate.\u201d<\/p>\n<p>IP security risks are high<br \/>At the same time, foundational models are being rolled out in many companies, and there is tremendous activity around building agentic AI solutions. This brings up bigger concerns than are obvious at first glance. IP security is a huge concern to many, of course, but there are also new licensing questions that did not arise in the past.<\/p>\n<p>\u201cThe question of security on the IP is one of my biggest concerns,\u201d said Alexander Petr, senior director at <a href=\"https:\/\/semiengineering.com\/entities\/keysight-technologies\/\" rel=\"nofollow noopener\" target=\"_blank\">Keysight EDA<\/a>. \u201cThat\u2019s a very contentious discussion across the board, because everyone wants to move super fast, and by doing so, there is a risk that someone will expose IP, willingly or unwillingly. I don\u2019t think anyone who\u2019s working on this right now in the companies puts security first. When it comes to their own IP, they are very conscious, and they\u2019re very rigorous about making sure they don\u2019t do anything that could potentially harm their company. 
But when it comes to other people\u2019s IP, they don\u2019t apply the same rigor.\u201d<\/p>\n<p>For example, if a customer wants to build an agentic AI flow, they need information from the foundry. In the past, that information has been shared only under NDAs and contracts. Petr noted that NDAs currently don\u2019t explain how to deal with the IP they get from the foundry in terms of AI enablement. \u201cEvery foundry right now struggles with the idea that their PDKs can be fed into a foundational model. There is no legal precedent right now, or any discussion on that side. The discussions we keep having with the foundries all go back to NDAs. Beyond that, everyone shrugs.\u201d<\/p>\n<p>The flow is such that design houses get PDKs or libraries from the foundries and feed those into foundational models. \u201cThey get EDA tools from the EDA vendors, which have user license agreements (ULAs), and parts of the existing ULAs talk about reverse engineering,\u201d Petr said. But ULAs generally don\u2019t mention AI. \u201cThat\u2019s the only clause in most of the ULAs that currently talks about anything like, \u2018You should be careful with the information you get.\u2019 All the EDA vendor solutions are also intellectual property. They are licensed. You can\u2019t use them unless they\u2019re licensed. But when it comes to APIs, the documentation, again, people just take them and put them in foundational models. So the big question here is, \u2018Now we have basically the end user of two branches of the semiconductor pyramid who just feeds everything into a system because he wants to automate beyond the current existing solutions, or go all the way to agentic AI. What he needs is information, so he just pulls information from both sides and feeds it into a foundation model. As long as those foundational models are properly secured on premises, the assumption is that it\u2019s okay to do. 
The concern here is, can we expect them to protect our IP in a proper way, and can we expect that they have configured their system in a way that it will never leak back into the foundational model? Or do we have to expect that at some point the foundational model will just have our information because someone did something they should not have done?\u201d<\/p>\n<p>As chip architects navigate the evolving landscape of AI governance and IP protection, they must carefully weigh the challenges of fostering innovation against the imperative for robust security. The task ahead calls for adaptable industry standards and legal frameworks, ensuring responsible AI integration across increasingly complex chip designs and architectures.<\/p>\n<p>Software development pressures increase with AI<br \/>On the software development side, the proliferation of AI tools has dramatically accelerated the pace of coding, testing, and deployment, enabling teams to automate repetitive tasks and generate code more rapidly. However, this rapid adoption also raises concerns about code quality, explainability, and maintaining robust security practices, as developers may rely on AI-generated outputs without fully understanding their implications or vulnerabilities.<\/p>\n<p>\u201cIn the conventional software development domain, AI is pretty ubiquitous,\u201d observed Jason Oberg, fellow, security solutions at <a href=\"https:\/\/semiengineering.com\/entities\/arterisip\/\" rel=\"nofollow noopener\" target=\"_blank\">Arteris<\/a>. \u201cEveryone\u2019s using LLMs to generate code. It\u2019s unclear how much is being used, honestly, for generating RTL, or actual Verilog or VHDL system code, as well as test bench environments and so on. It is being used, but I don\u2019t think it\u2019s as pervasive as generating JavaScript or C code or something like that. That said, it\u2019s going to get used more and more. 
And at least in the way some chips are designed, there\u2019s a lot of rigor around test and verification, even arguably more so than software, because you can always patch software. The issue I see in the software domain is that you get stuff that gets built, and you don\u2019t really know what it does. Developers say, \u2018I generated all this code, I\u2019ve checked it in, and everything seems to be working.\u2019 You ask, \u2018What is this block of code doing?\u2019 When you write it yourself, you figure it all out. In the hardware domain, because everything\u2019s so well tested, it may turn out to be okay, but the concern about generating harmful blocks of code will still be relevant. And when you are testing the base design, if you\u2019re also using AI to generate your test, then the governance concern comes into play. Because then if you say, \u2018I have an LLM generate my design, and then I have it generate my tests, and my tests pass, and then I say, okay, I\u2019m good,\u2019 there are a lot of concerns around that. There needs to be some process or checks and balances on this, and that\u2019s hard because if verification is your job and the design team says, \u2018I got my work done 10 times faster, let\u2019s move on to the next thing,\u2019 there\u2019s going to be that tendency to just get stuff done quickly.\u201d<\/p>\n<p><img data-recalc-dims=\"1\" fetchpriority=\"high\" decoding=\"async\" class=\"alignnone size-full wp-image-24276175\" src=\"https:\/\/www.europesays.com\/ai\/wp-content\/uploads\/2026\/05\/Screenshot-2026-05-06-at-3.13.09-PM.png\" alt=\"\" width=\"2136\" height=\"930\"  \/><br \/>Fig. 1: Examples of AI Governance regulations and standards. Source: Synopsys white paper, \u201cSecuring AI at the Silicon Level.\u201d<\/p>\n<p>Should AI governance be mandated by governments?<br \/>There is debate about whether AI governance should be mandated. 
\u201cWhen mandates are in place, and there is liability, compliance becomes essential,\u201d Synopsys\u2019 Neustadter said. \u201cThis brings a distinct level of pressure and responsibility that must be addressed. For instance, reviewing AI governance forums and published recommendations reveals ongoing gaps \u2014 despite the presence of mandatory requirements, such as those stipulated in the EU AI Act. It is evident that significant elements are missing from current AI governance.\u201d<\/p>\n<p>Presently, much of AI governance focuses on guidelines and intentions rather than ensuring outcomes. Critical questions remain: Can an AI system actively detect issues during operation? Does it allow for quick identification of failures, attribution of responsibility, and prevention of recurrences? \u201cAt this stage, most systems do not meet these expectations,\u201d Neustadter noted.<\/p>\n<p>Addressing this \u201cmissing layer\u201d requires implementing continuous assurance mechanisms, including runtime monitoring, incident reporting, and the establishment of clear thresholds for acceptable performance. Legal ambiguities persist, and one of the key challenges involves achieving global interoperability. The European Union, the United States, China, and other regions all have different approaches. There is no unified global standard, which is a substantial and pressing gap in AI governance frameworks.<\/p>\n<p>Mike Eftimakis, founding director, CHERI Alliance, agreed. \u201cWe are racing to get AI on the market without thinking about any of the consequences. And there are examples where they rushed to develop a solution, discovered that it was scary, and didn\u2019t release it. But they still developed it, and I know this is not the only one. 
We are developing applications without thinking about the consequences, and there\u2019s no direct market force that would prevent it, because everybody is rushing to get things out to benefit from being the first and to show their capabilities, etc. \u00a0The market rewards the first \u2014 being innovative and going faster \u2014 so there\u2019s no way that the market alone would solve the problem. The only way to do that is through regulation. I\u2019m not really a big fan of regulation, but there are areas where it is necessary, where the greater good should be considered, and where we should do something. But what exactly should we do? That is something difficult to define, because it\u2019s not a country problem. It\u2019s really a global problem. Clearly, something is missing here. Typically, government, legislation, etc., goes much, much slower than the technology itself. And here we have a problem, because technology is advancing at a fast pace, and increasingly faster, so we need to find how to deal with that.\u201d<\/p>\n<p>Secure-IC\u2019s Guilley also sees benefit in an industry-wide, agreed-upon mandate for AI governance. \u201cOtherwise, there will be no alignment about what the rules are, even what the definitions are. Everything needs to be formalized so that when you want some assurance from a vendor, you can express something in words that both parties will agree on and share the same understanding. AI is making the situation much, much more complex than it used to be, because you are mixing the data and algorithms. And with some AI generating data to train other AI, everything is mixed. There are also feedback loops. When you do synthetic data for generating scenarios for other kinds of reinforcement learning for some other algorithms, you see it circle back. 
If there is a feedback loop, unless you have some control, this can go crazy.\u201d<\/p>\n<p>In the early stages of autonomous or semi-autonomous vehicles, there was a lack of real-world data on rare events. \u201cNot every possible scenario could be anticipated \u2014 for example, someone suddenly crossing the street when the traffic light is red,\u201d Guilley said. \u201cSituations like these are rare, so they often aren\u2019t represented in training datasets. As a result, making accurate decisions becomes challenging because there\u2019s no relevant data to learn from. For those unusual yet crucial scenarios necessary to develop reliable perception and cruise control systems, synthetic data had to be generated \u2014 sometimes inspired by other AI applications. This marked one of the first times such data generation occurred. Now, with LLMs and similar technologies, we don\u2019t face the same data shortages since we can leverage vast amounts of information available on the internet, forums, GitHub, and more. As we advance toward physical AI, including autonomous vehicles, we face new challenges unique to these systems.\u201d<\/p>\n<p>So who is responsible for setting standards? Synaptics\u2019 Weil said it\u2019s traditionally been the governing bodies, but he doesn\u2019t know who\u2019s going to take charge of this. \u201cI don\u2019t think the government can do it. The technology is moving too fast, and as we see with the bigger stuff on the cloud side, they can hardly respond to that, and they don\u2019t have the technical acumen to really understand it. The government regulatory agencies globally don\u2019t have the technical acumen to understand it at the embedded level.\u201d<\/p>\n<p>He agreed that the best proxy for this is what\u2019s happened in automotive over the years with safety and automation in the car. \u201cSome things have gotten harder, and some things have gotten more relaxed,\u201d Weil said. 
\u201cFor example, to do what\u2019s called infotainment in the car, where you have touch screens and automation and things going on there, those quality standards and coding standards frankly are more relaxed today than they were probably 20 years ago. It\u2019s not as hard for the industry to be able to do that. Early on, people were very afraid, so it got very tight and hard. The industry\u2019s gotten so good at it that we can move through that pretty fast today. When you look at driver safety aspects, like ADAS kinds of systems, the bar on quality just keeps going up. And as we move to AI in the car, where the car is trying to steer and drive and build these, there\u2019s this interesting interaction between how AI and automotive are coming together. And so if I were reading the tea leaves and trying to be forward-looking on this, I would say automotive and AI are where the first regulatory stuff is going to come. I don\u2019t think it will be governments that cause it. Governments will be measuring it.\u201d<\/p>\n<p>Keysight\u2019s Petr also doesn\u2019t believe regulations are going to fix the lack of a cohesive global AI governance approach. \u201cIt will take too long. Interpretations will be too wide. They might be too draconian. They might stifle the speed at which we can move. Who wants to regulate standards? By the time we have a standard, it\u2019s outdated. So that is an issue. I guess we are left with blissful ignorance and wishful thinking.\u201d<\/p>\n<p>In addition, because the speed of innovation is so fast, by the time any rules are written the mechanism for implementing them will be different. \u201cIt will be on the end products that are built and generated, or the end products that are built using the technologies that we all generate in our chip industry, orthogonal industries, and related industries to what we do,\u201d said Weil. 
\u201cThey will start writing their guidelines, and it will be a trickle-down back into how we develop things.\u201d<\/p>\n<p>Bridging the gap between innovation and responsible oversight will require not only collaboration across industries, but also a willingness to adapt governance approaches as new challenges arise.<\/p>\n<p>Conclusion<br \/>Effective AI governance requires enforceable accountability, runtime assurance, IP and data protection, human oversight, explainability, and outcome\u2011driven controls. These capabilities are likely to emerge first in safety\u2011critical industries and only later be codified more broadly, and reaching a clear, shared understanding of them is vital.<\/p>\n<p>\u201cInterestingly, large language models are designed to be non-deterministic, which means their responses can vary each time, making consistent outcomes hard to achieve,\u201d Guilley noted. \u201cThis creates challenges in predictability when conversing with AI.\u201d<\/p>\n<p>If the goal is to truly control AI systems, regulation must begin by establishing requirements, prompting the market to consider how to implement these standards. \u201cThis process often stimulates creativity as stakeholders find innovative solutions within the established framework,\u201d he said. \u201cSimilar to art, where rules provide a foundation for creative expression, regulations can guide technological advancements. However, presently, we are not fully prepared to ensure transparency in AI. Effective governance requires clear rules, oversight, feedback mechanisms, and comprehensive reporting \u2014 elements that are not yet fully in place, as AI continues to operate largely autonomously.\u201d<\/p>\n<p>AI governance should not slow innovation. 
The goal is ensuring that powerful AI systems can be trusted to act responsibly in real-time, protect what matters, and fail safely when they don\u2019t.<\/p>\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"Key Takeaways: AI governance is broadly recognized as essential, but today it remains fragmented, largely aspirational, and lacking&hellip;\n","protected":false},"author":2,"featured_media":30660,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[179,24,1798,19908,25,10595,19909,2617,19910,19911,19401,19912,9383],"class_list":{"0":"post-30659","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-agentic-ai","9":"tag-ai","10":"tag-ai-governance","11":"tag-arteris","12":"tag-artificial-intelligence","13":"tag-cadence","14":"tag-cheri-alliance","15":"tag-foundation-models","16":"tag-keysight-eda","17":"tag-secure-ic","18":"tag-security-operations-centers","19":"tag-synaptics","20":"tag-synopsys"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/30659","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=30659"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/30659\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/30660"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=30659"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp
-json\/wp\/v2\/categories?post=30659"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=30659"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}