Key Takeaways:
AI governance is broadly recognized as essential, but today it remains fragmented, largely aspirational, and lacking enforceable mechanisms for accountability, runtime assurance, and global interoperability.
Because AI innovation is advancing too quickly for governments or standards bodies to keep pace, practical AI governance is most likely to emerge first from high‑risk, safety‑critical industries such as automotive, industrial systems, and semiconductors.
The most immediate governance risks center on intellectual property exposure, data misuse, and over‑reliance on AI‑generated code or designs without strong human review, sandboxing, and independent verification.
Artificial intelligence is being integrated across the semiconductor ecosystem at a pace that far outstrips any rules to govern it, raising the specter of increased IP theft and security breaches with no obvious way to prevent them.
From foundation models embedded in EDA workflows to agentic systems influencing design, verification, and physical outcomes, AI is reshaping how chips are built and where and how risk is introduced. But while AI governance is widely acknowledged as necessary, today’s efforts remain fragmented, unevenly interpreted, and focused more on intent than on measurable outcomes. Put simply, current governance falls short, and given the pace of innovation in AI, traditional regulatory approaches are trailing behind and unlikely to catch up.
“AI governance requires some form of guidelines or regulations to cover policies and framework processes, to guide responsible development, to guide when you deploy an AI system, and to guide the use of AI systems,” said Dana Neustadter, senior director of product management for Security IP Solutions at Synopsys, drawing a rough parallel to safety and security in cars. “AI goes along the same lines to drive trust, ethics, accountability, and transparency. But to identify and mitigate risk, AI systems should look at data security, model integrity, access control, and authentication, similar to functional safety and cybersecurity in cars. It’s tied to security, as well. Typically, it looks at ethical principles, some form of regulations, and how to manage risks and accountability.”
Safety-critical industries may well become the proving ground for the first practical, enforceable models of AI accountability, following a path similar to the way safety standards evolved in the automotive, industrial control and automation, aerospace, and military sectors. “The semiconductor industry’s chief designers, as well as our software and ecosystem partners, have all come together over the decades,” said John Weil, vice president, IoT and Edge AI Processor Business at Synaptics. “This goes back into the ’90s with military/aerospace safety, and then as automotive content and factory automation equipment increased significantly, there were a lot of ISO standards that came about to try to guide engineers on what it means to design a reliable and safe system. We don’t really have that on this AI side today. If somebody said, ‘Hey, I want to do an industrial automation safe product with AI,’ those two things are difficult to put in the same sentence.”
Defining use cases is key to AI governance
To most experts, there are two distinct use cases for AI. “One focuses on data management, and the other involves generating and verifying code and hardware,” said Sylvain Guilley, co-founder and CTO at Secure-IC, a Cadence company. “These are truly separate areas. When it comes to managing data, this field is more advanced. For example, in cybersecurity, which has existed longer than many other applications, consider how antivirus programs and, more recently, security operations centers (SOCs) are now largely run by AI. The reason for this shift is that without AI, analysts would be overwhelmed by the sheer volume of data and the complexity of the correlations they must process. AI is therefore needed to perform the initial layer of analysis on content and metadata. That’s the primary use of AI when it comes to handling data.”
There’s an important governance issue here, however. “The SOC operator handling the data doesn’t actually own it,” Guilley said. “They’re simply monitoring it. If the data includes sensitive, confidential, or proprietary information, effective governance is necessary so that any AI overseeing traffic can ensure the information isn’t corrupted or compromised and doesn’t show signs of attack. This highlights the key challenge of managing your data when you rely on external providers, such as SaaS cybersecurity services. The legitimate owner of the data isn’t necessarily the same party conducting the analysis.”
A further risk comes when AI is generating intellectual property. “Here, there’s a gray area because it depends on the license of the LLM that is used,” he said. “It might depend on where the data has been generated. There can be some IP, but also some export considerations. If I am operating from France, what happens if I’m using foreign tools? They will tell me that I still own the output. Meanwhile, the data has been circulated to another country. I’m not sure there is any stronger legal mandate.”
IP security risks are high
At the same time, foundational models are being rolled out in many companies, and there is tremendous activity around building agentic AI solutions. This brings up bigger concerns than are obvious at first glance. IP security is a huge concern to many, of course, but there are also new licensing considerations that haven’t been addressed in the past.
“The question of security on the IP is one of my biggest concerns,” said Alexander Petr, senior director at Keysight EDA. “That’s a very contentious discussion across the board, because everyone wants to move super fast, and by doing so, there is a risk that someone will expose IP, willingly or unwillingly. I don’t think anyone who’s working on this right now in the companies puts security first. When it comes to their IP, they are very conscious, and they’re very rigorous about making sure they don’t do anything that could potentially harm their company. But when it comes to other people’s IP, they don’t apply the same rigor.”
For example, if a customer wants to build an agentic AI flow, they need information from the foundry. That information in the past has been shared only under NDA and contracts. Petr noted that current NDAs don’t address how the IP received from the foundry can be used for AI enablement. “Every foundry right now struggles with the idea that their PDKs can be fed into a foundational model. There is no legal precedent right now, or any discussion on that side. The discussions we keep having with the foundries all go back to NDAs. Beyond that, everyone shrugs.”
The flow is such that design houses get PDKs or libraries from the foundries and feed those into foundational models. “They get EDA tools from the EDA vendors, which have user license agreements (ULAs), and part of the existing ULAs talk about reverse engineering,” Petr said. But ULAs generally don’t mention AI. “That’s the only clause in most of the ULAs that currently says anything like, ‘You should be careful with the information you get.’ All the EDA vendor solutions are also intellectual property. They are licensed. You can’t use them unless they’re licensed. But when it comes to the APIs and the documentation, again, people just take them and put them in foundational models. So the big question is that we now have the end user of two branches of the semiconductor pyramid who feeds everything into a system, because he wants to automate beyond the current solutions or go all the way to agentic AI. What you need is information, and he just pulls information from both sides and feeds it into a foundation model. As long as those foundational models are properly secured on premises, the assumption is that it’s okay to do. The concern is, can we expect them to protect our IP in a proper way, and can we expect that they have configured their system in a way that our information will never leak back into the foundational model? Or do we have to expect that at some point the foundational model will just have our information because someone did something they should not have done?”
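There is no established mechanism today for enforcing those restrictions, but teams running on-premises foundation models can at least gate what reaches a model's context. The following Python sketch shows one possible pre-ingestion filter; the file patterns, the policy labels, and the check_ingestion helper are hypothetical illustrations, not requirements drawn from any foundry NDA or EDA vendor ULA.

```python
import fnmatch
from pathlib import Path

# Hypothetical policy: glob patterns for files that must never be sent to a
# foundation model because they fall under foundry NDAs or EDA vendor ULAs.
RESTRICTED_PATTERNS = {
    "foundry_nda": ["*.tf", "*.lib", "*.lef", "*pdk*/*"],     # PDK/library data (assumed)
    "eda_ula":     ["*userguide*.pdf", "*api_reference*"],    # licensed tool docs (assumed)
}

def check_ingestion(paths):
    """Return (allowed, blocked) lists for a proposed model-context upload."""
    allowed, blocked = [], []
    for p in map(Path, paths):
        label = next(
            (policy for policy, patterns in RESTRICTED_PATTERNS.items()
             if any(fnmatch.fnmatch(p.name.lower(), pat) or
                    fnmatch.fnmatch(str(p).lower(), pat) for pat in patterns)),
            None,
        )
        (blocked if label else allowed).append((str(p), label))
    return allowed, blocked

if __name__ == "__main__":
    allowed, blocked = check_ingestion([
        "designs/top_level.v",
        "pdk_n3/stdcells.lib",
        "vendor_docs/api_reference_v2.html",
    ])
    for path, policy in blocked:
        print(f"BLOCKED ({policy}): {path}")   # audit log entry; do not ingest
    for path, _ in allowed:
        print(f"ALLOWED: {path}")
```

In practice such a gate would sit in front of whatever retrieval or fine-tuning pipeline feeds the on-premises model, with blocked items escalated for legal review rather than silently dropped.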
As chip architects encounter the evolving landscape of AI governance and IP protection, they must carefully weigh the challenges of fostering innovation against the imperative for robust security. The task ahead calls for adaptable industry standards and legal frameworks, ensuring responsible AI integration across increasingly complex chip designs and architectures.
Software development pressures increase with AI
On the software development side, the proliferation of AI tools has dramatically accelerated the pace of coding, testing, and deployment, enabling teams to automate repetitive tasks and generate code more rapidly. However, this rapid adoption also raises concerns about code quality, explainability, and maintaining robust security practices, as developers may rely on AI-generated outputs without fully understanding their implications or vulnerabilities.
“In the conventional software development domain, AI is pretty ubiquitous,” observed Jason Oberg, fellow, security solutions at Arteris. “Everyone’s using LLMs to generate code. It’s unclear how much is being used, honestly, for generating RTL, meaning actual Verilog or VHDL code, as well as test bench environments and so on. It is being used, but I don’t think it’s as pervasive as generating JavaScript or C code or something like that. That said, it’s going to get used more and more. And at least in the way some chips are designed, there’s a lot of rigor around test and verification, arguably even more than in software, because you can always patch software. The issue I see in the software domain is that you get stuff that gets built, and you don’t really know what it does. Developers say, ‘I generated all this code, I’ve checked it in, and everything seems to be working.’ You ask, ‘What is this block of code doing?’ In the hardware domain, when you write it yourself, you figure it all out. Because everything’s so well tested, the result may turn out to be okay, but the concern about generating harmful blocks of code will still be relevant. And when you are testing the base design, if you’re also using AI to generate your tests, then the governance concern comes into play. If you say, ‘I had an LLM generate my design, then had it generate my tests, and my tests pass, so I’m good,’ there are a lot of concerns around that. There needs to be some process or checks and balances on this, and that’s hard, because if verification is your job and the design team says, ‘I got my work done 10 times faster, let’s move on to the next thing,’ there’s going to be that tendency to just get stuff done quickly.”
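One lightweight form of such a check is to track the provenance of every artifact and withhold sign-off when a design block and all of the tests covering it came from the same generator. The Python sketch below is a minimal illustration of that idea under assumed provenance tags such as "human" and "llm:model-x"; it is not an established EDA sign-off flow.

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    name: str
    kind: str        # "design" or "test"
    origin: str      # e.g. "human" or "llm:model-x" (hypothetical tags)
    covers: str = "" # for tests: name of the design block they verify

def signoff_issues(artifacts):
    """Flag design blocks whose only tests come from the same LLM that wrote them."""
    designs = {a.name: a for a in artifacts if a.kind == "design"}
    problems = []
    for name, design in designs.items():
        tests = [a for a in artifacts if a.kind == "test" and a.covers == name]
        if not tests:
            problems.append(f"{name}: no tests at all")
        elif all(t.origin == design.origin and t.origin.startswith("llm:") for t in tests):
            problems.append(f"{name}: design and all tests generated by {design.origin}")
    return problems

issues = signoff_issues([
    Artifact("fifo_ctrl", "design", "llm:model-x"),
    Artifact("fifo_ctrl_tb", "test", "llm:model-x", covers="fifo_ctrl"),
    Artifact("alu", "design", "llm:model-x"),
    Artifact("alu_tb", "test", "human", covers="alu"),
])
for issue in issues:
    print("REVIEW NEEDED:", issue)   # fifo_ctrl is flagged; alu passes
```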

Fig. 1: Examples of AI Governance regulations and standards. Source: Synopsys white paper, “Securing AI at the Silicon Level.”
Should AI governance be mandated by governments?
There is debate about whether AI governance should be mandated. “When mandates are in place, and there is liability, compliance becomes essential,” Synopsys’ Neustadter said. “This brings a distinct level of pressure and responsibility that must be addressed. For instance, reviewing AI governance forums and published recommendations reveals ongoing gaps — despite the presence of mandatory requirements, such as those stipulated in the EU AI Act. It is evident that significant elements are missing from current AI governance.”
Presently, much of AI governance focuses on guidelines and intentions rather than ensuring outcomes. Critical questions remain: Can an AI system actively detect issues during operation? Does it allow for quick identification of failures, attribution of responsibility, and prevention of recurrences? “At this stage, most systems do not meet these expectations,” Neustadter noted.
Addressing this “missing layer” requires implementing continuous assurance mechanisms, including runtime monitoring, incident reporting, and the establishment of clear thresholds for acceptable performance. Legal ambiguities persist, and one of the key challenges involves achieving global interoperability. The European Union, the United States, China, and other regions all have different approaches. There is no unified global standard, which is a substantial and pressing gap in AI governance frameworks.
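What that continuous-assurance layer should contain is still being debated, but the basic mechanics of runtime monitoring against declared thresholds are simple to sketch. The Python example below is a minimal illustration; the metric names, threshold values, and incident format are assumptions made for the example, not requirements drawn from the EU AI Act or any published framework.

```python
import json
import time

# Hypothetical acceptance thresholds declared up front as part of governance.
THRESHOLDS = {
    "confidence_min": 0.70,   # reject low-confidence outputs
    "latency_max_s": 2.0,     # flag slow responses
    "refusal_rate_max": 0.20, # too many refusals suggests drift or misuse
}

incidents = []

def monitor(output, confidence, latency_s, refusal_rate):
    """Check one inference against thresholds; record an incident on violation."""
    violations = []
    if confidence < THRESHOLDS["confidence_min"]:
        violations.append("confidence below minimum")
    if latency_s > THRESHOLDS["latency_max_s"]:
        violations.append("latency above maximum")
    if refusal_rate > THRESHOLDS["refusal_rate_max"]:
        violations.append("refusal rate above maximum")
    if violations:
        incidents.append({
            "timestamp": time.time(),
            "output_preview": str(output)[:80],
            "violations": violations,
        })                      # would feed an incident-reporting channel
        return False            # block or escalate, per policy
    return True

ok = monitor("generated placement script ...", confidence=0.55,
             latency_s=0.8, refusal_rate=0.05)
print("passed" if ok else "incident recorded")
print(json.dumps(incidents, indent=2))
```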
Mike Eftimakis, founding director, CHERI Alliance, agreed. “We are racing to get AI on the market without thinking about any of the consequences. And there are examples where they rushed to develop a solution, discovered that it was scary, and didn’t release it. But they still developed it, and I know this is not the only one. We are developing applications without thinking about the consequences, and there’s no direct market force that would prevent it, because everybody is rushing to get things out to benefit from being the first and to show their capabilities, etc. The market rewards the first — being innovative and going faster — so there’s no way that the market alone would solve the problem. The only way to do that is through regulation. I’m not really a big fan of regulation, but there are areas where it is necessary, where the greater good should be considered, and where we should do something. But what exactly should we do? That is something difficult to define, because it’s not a country problem. It’s really a global problem. Clearly, something is missing here. Typically, government, legislation, etc., goes much, much slower than the technology itself. And here we have a problem, because technology is advancing at a fast pace, and increasingly faster, so we need to find how to deal with that.”
Secure-IC’s Guilley also sees benefit in an industry-wide, agreed-upon mandate for AI governance. “Otherwise, there will be no alignment about what the rules are, even what the definitions are. Everything needs to be formalized so that when you want some assurance from a vendor, you can express something in words that both parties will agree on and share the same understanding. AI is making the situation much, much more complex than it used to be, because you are mixing the data and algorithms. And with some AI generating data to train other AI, everything is mixed. There are also feedback loops. When you do synthetic data for generating scenarios for other kinds of reinforcement learning for some other algorithms, you see it circle back. If there is a feedback loop, unless you have some control, this can go crazy.”
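One simple control against that kind of runaway loop is to tag every training sample with its provenance and cap the share of model-generated data admitted into any new training set. The sketch below illustrates the idea in Python; the 30% cap and the "real"/"synthetic" labels are arbitrary assumptions for the example.

```python
def build_training_set(samples, max_synthetic_fraction=0.30):
    """samples: list of (data, provenance) pairs, provenance 'real' or 'synthetic'.
    Keeps all real data, then admits synthetic data only up to the configured cap."""
    real = [s for s in samples if s[1] == "real"]
    synthetic = [s for s in samples if s[1] == "synthetic"]

    if not real:
        raise ValueError("refusing to train on synthetic data alone (feedback-loop risk)")

    # Cap synthetic samples so they make up at most max_synthetic_fraction of the set.
    allowed_synth = int(len(real) * max_synthetic_fraction / (1 - max_synthetic_fraction))
    kept = real + synthetic[:allowed_synth]

    print(f"kept {len(real)} real and {min(len(synthetic), allowed_synth)} synthetic samples")
    return kept

dataset = build_training_set(
    [("scenario_a", "real")] * 70 + [("scenario_b", "synthetic")] * 60
)
```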
In the early stages of autonomous or semi-autonomous vehicles, there was a lack of real-world data on rare driving scenarios. “Not every possible scenario could be anticipated — for example, someone suddenly crossing the street when the traffic light is red,” Guilley said. “Situations like these are rare, so they often aren’t represented in training datasets. As a result, making accurate decisions becomes challenging because there’s no relevant data to learn from. For those unusual yet crucial scenarios necessary to develop reliable perception and cruise control systems, synthetic data had to be generated, sometimes inspired by other AI applications. This marked one of the first times such data generation occurred. Now, with LLMs and similar technologies, we don’t face the same data shortages since we can leverage vast amounts of information available on the internet, forums, GitHub, and more. As we advance toward physical AI, including autonomous vehicles, we face new challenges unique to these systems.”
So who is responsible for setting standards? Synaptics’ Weil said it’s traditionally been the governing bodies, but he doesn’t know who’s going to take charge of this. “I don’t think the government can do it. The technology is moving too fast, and as we see with the bigger stuff on the cloud side, they can hardly respond to that, and they don’t have the technical acumen to really understand it. The government regulatory agencies globally don’t have the technical acumen to understand it at the embedded level.”
He agreed that the best proxy for this is what’s happened in automotive over the years with safety and automation in the car. “Some things have gotten harder, and some things have gotten more relaxed,” Weil said. “For example, to do what’s called infotainment in the car, where you have touch screens and automation and things going on there, those quality standards and coding standards frankly are more relaxed today than they were probably 20 years ago. It’s not as hard for the industry to be able to do that. Early on, people were very afraid, so it got very tight and hard. The industry’s gotten so good at it that we can move through that pretty fast today. When you look at driver safety aspects, like ADAS kinds of systems, the bar on quality just keeps going up. And as we move to AI in the car, where the car is trying to steer and drive itself, there’s this interesting interaction between how AI and automotive are coming together. And so if I were reading the tea leaves and trying to be forward-looking on this, I would say automotive and AI are where the first regulatory stuff is going to come. I don’t think it will be governments that cause it. Governments will be measuring it.”
Keysight’s Petr also doesn’t believe regulations are going to fix the lack of a cohesive global AI governance approach. “It will take too long. Interpretations will be too wide. They might be too draconian. They might stifle the speed at which we can move. Who wants to regulate standards? By the time we have a standard, it’s outdated. So that is an issue. I guess we are left with blissful ignorance and wishful thinking.”
In addition, because the speed of innovation is so fast, by the time any rules are written the mechanism for implementing them will be different. “It will be on the end products that are built and generated, or the end products that are built using the technologies that we all generate in our chip industry, orthogonal industries, and related industries to what we do,” said Weil. “They will start writing their guidelines, and it will be a trickle-down back into how we develop things.”
Bridging the gap between innovation and responsible oversight will require not only collaboration across industries, but also a willingness to adapt governance approaches as new challenges arise.
Conclusion
Practical AI governance requires enforceable accountability, runtime assurance, IP and data protection, human oversight, explainability, and outcome-driven controls. These mechanisms are most likely to be developed first in safety-critical industries and only later codified more broadly, so a clear understanding of what works in those domains is vital.
“Interestingly, large language models are designed to be non-deterministic, which means their responses can vary each time, making consistent outcomes hard to achieve,” Guilley noted. “This creates challenges in predictability when conversing with AI.”
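There is no general fix for that variability, but one basic assurance practice is to send the same prompt several times and measure how much the answers diverge before acting on them. The Python sketch below illustrates the idea; query_model is a hypothetical stand-in for whatever inference API is actually in use, and the agreement threshold is arbitrary.

```python
import random
from collections import Counter

def query_model(prompt):
    """Hypothetical stand-in for a real LLM call; a deployment would invoke its
    own inference API here. Randomness mimics non-deterministic sampling."""
    return random.choice(["use a FIFO depth of 16", "use a FIFO depth of 16",
                          "use a FIFO depth of 32"])

def consistency_check(prompt, runs=10, required_agreement=0.8):
    """Repeat the same prompt and report whether answers agree often enough."""
    answers = [query_model(prompt) for _ in range(runs)]
    top_answer, count = Counter(answers).most_common(1)[0]
    agreement = count / runs
    return top_answer, agreement, agreement >= required_agreement

answer, agreement, stable = consistency_check("Recommend a FIFO depth for this interface.")
print(f"most common answer: {answer!r}, agreement: {agreement:.0%}, stable: {stable}")
```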
If the goal is to truly control AI systems, regulation must begin by establishing requirements, prompting the market to consider how to implement these standards. “This process often stimulates creativity as stakeholders find innovative solutions within the established framework,” he said. “Similar to art, where rules provide a foundation for creative expression, regulations can guide technological advancements. However, presently, we are not fully prepared to ensure transparency in AI. Effective governance requires clear rules, oversight, feedback mechanisms, and comprehensive reporting — elements that are not yet fully there, as AI continues to operate largely autonomously.”
AI governance should not slow innovation. The goal is to ensure that powerful AI systems can be trusted to act responsibly in real time, protect what matters, and fail safely when they fall short.