Key takeaways
Agentic methodologies need to be able to reason across multiple data formats and abstractions.
It is not clear how much data from previous designs is useful in new designs.
Standards may help, but their absence may only add cost, not block progress.
The relationship between tools and methodologies is bidirectional. Tools enable methodologies, and methodologies are dependent on tool features and the data they provide. But there are very few architectural-level tools in the industry today, which will make it more difficult to create complete agentic flows.
Much of the first round of AI within EDA concentrated on a single tool, and therefore only concerned itself with one type of data at one level of abstraction. It did not have to think about external tools and data interoperability. As the industry progresses to flows and methodologies, all of these simplifications disappear.
This creates a potential problem for the EDA industry. Most of what’s valuable for AI is at the front end of the flow, when specifications are being developed, architectures defined, and verification plans put in place. The potential gains from allowing AI to make changes closer to the back end of the flow have less value and much higher risk. Historically, the front end has not been a favored place for tool development, because the limited time spent there by just a few architects made it an uneconomical venture.
Another problem is that abstractions in this part of the flow have never been settled upon. Academia proposed several candidates in the past, and electronic system-level (ESL) tools were developed and discarded in the 1990s and 2000s. SystemC did bring about the notion of untimed and approximately timed models, but while these have been used by high-level synthesis (HLS) tools, they have seen little application beyond that one tool category.
This may create difficulties when AI methodologies are being created. Alternatively, AI may help provide the solution by tying these abstractions together with RTL, enabling bi-directional connectivity between them. Several industry insiders have suggested that large semiconductor companies are working toward solutions in this area, citing the competitive advantage it gives them. This is how disruptions get started, even though it can take a long time before they expand into common usage.
Diversified data
Data comes from many sources. Some are related to the current design, but AI needs to learn from past experiences as well. Few people have looked into the applicability or longevity of some of that data.
Data relevance is anchored by many factors. “We have been designing protocol IPs across multiple generations,” says Badarinath Kommandur, fellow at Cadence. “We have the entire design content, all the way from specification, RTL implementation, verification test benches, to active design implementation — on multiple generations, multiple foundries, and multiple nodes. The question is, given a new interface standard, can you take and train an AI engine or LLM-based approach to learn from the previous implementations? Can it look at a new interface specification and come up with a solution where an expert can iterate quickly to make it production quality, and take it all the way through design implementation? Can we learn from that and use that knowledge to design the next generation?”
Data representations change across stages of the design process. “Even if we just take digital design as an example, there are stages like SystemC, RTL, gate-level netlist, and layout, each producing its own distinct types of data,” says Doyun Kim, AI engineer at Normal Computing. “In the industry, the term ‘shift-left’ embodies the idea of predicting later-stage results at earlier stages so that flawed designs can be pruned early, minimizing costly iterations.”
The degrees of freedom are reduced as you progress through a development flow. “We are seeing a lot of agentic flows being developed for the front-end side of things,” says Sathishkumar Balasubramanian, head of products at Siemens EDA. “This is when you’re doing the design phase, on the functional side. As you get closer to tape-out, everything gets tighter or more constrained. There are fewer opportunities for AI here because you’ve done all the hard work and you don’t want something to mess that up.”
Crossing abstractions requires a different kind of data to be captured. “A knowledge database could provide a comprehensive, detailed view of the design, including structure, behavior, and verification concerns,” says Shelly Henry, founder and CEO of Moores Lab AI. “It may also need a complementary view of the overall process flow so an AI agent can reason across the full pipeline. Such databases are being created, capturing critical information from the design architectural specification and from the RTL itself. This could enable agentic AI solutions that can automatically create many aspects of the verification environment.”
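To make this concrete, below is a minimal sketch of the kind of cross-linked knowledge database Henry describes, with one view of the design itself and one of the process flow. The schema and every field name are invented for illustration; this is not Moores Lab’s implementation.

```python
# A minimal sketch of a design knowledge database with two cross-linked
# views: the design itself and the process flow. All class and field
# names are hypothetical, chosen only for illustration.
from dataclasses import dataclass, field

@dataclass
class DesignElement:
    name: str                         # e.g., an RTL module or interface
    spec_refs: list[str]              # spec sections this element implements
    behaviors: list[str]              # behavioral notes mined from spec/RTL
    verification_concerns: list[str]  # properties or covergroups still needed

@dataclass
class FlowStage:
    name: str            # e.g., "lint", "simulation", "synthesis"
    consumes: list[str]  # artifacts or elements this stage reads
    produces: list[str]  # artifacts it emits (reports, netlists, logs)

@dataclass
class KnowledgeBase:
    elements: dict[str, DesignElement] = field(default_factory=dict)
    flow: list[FlowStage] = field(default_factory=list)

    def open_verification_work(self) -> list[tuple[str, str]]:
        # The kind of query an agent might run before generating
        # testbench components: which concerns are still uncovered?
        return [(e.name, c) for e in self.elements.values()
                for c in e.verification_concerns]
```

An agent that holds both views can, for example, match an uncovered verification concern to the flow stage that would exercise it.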
Shift left requires getting estimates early so that informed choices can be made in more abstract designs. “Various techniques, including AI-based approaches, are being used to predict performance metrics early on,” says Normal’s Kim. “However, this faces generalization challenges. Prediction may work well for similar types of IP, but for entirely different classes of IP, the relevant parameters for prediction may differ, and if training data is scarce, prediction accuracy cannot be guaranteed. Can LLM-level generalization truly cover these cases as well? That is an interesting open question.”
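As a concrete illustration of the early prediction Kim describes, the sketch below trains a regressor to estimate a signoff-stage metric from features available at the RTL stage. The features, training data, and metric are all synthetic stand-ins; a production model would be trained on mined project history, and, as Kim notes, would not automatically generalize to unseen IP classes.

```python
# A "shift-left" surrogate model sketch: predict a late-stage metric
# (imagine total negative slack) from early-stage design features.
# The data here is synthetic; the feature semantics are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for features mined from past projects, e.g., cell-count
# estimate, fanout statistics, clock count, target frequency, utilization.
X = rng.random((500, 5))
# Stand-in for the signoff metric measured on those past projects.
y = X @ np.array([0.5, -1.2, 0.3, 2.0, 0.8]) + 0.1 * rng.standard_normal(500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

# Note the generalization caveat: a good score on held-out designs of
# the same IP class says little about entirely different IP classes.
print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
```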
The ultimate goal
The ultimate goal is often stated as an agentic flow from specification to optimized design that is correct by construction. “In the future I may be able to say, ‘Design an ALU, or design a cache controller with the following parameters,’” says Dean Drako, CEO of IC Manage. “Here is the spec, and it goes off and grinds and designs it. I don’t think folks in the EDA world are really in the chip design world, so they are not jumping on this bandwagon yet. They want to make sure they understand what it’s doing, why it’s doing it, and where it’s doing it.”
To come up with good designs requires deep back-end expertise. “If you look at a design team, they will have concentrated expertise in different domains, in a few engineers,” says Cadence’s Kommandur. “If you go through a design execution cycle, typically you will reach a time in the closure, sign-off, tape-out process where you need to rely on a handful of these experts to close the design, meet your performance, power, and area targets, and tape it out. The question is, how do you capture the knowledge of this handful of engineers in an agentic AI framework, so that any designer within the EDA ecosystem can apply it to solve and converge the designs in the most efficient manner possible?”
Using AI to create models is an interesting possibility. “The future will all be about reduced order models (ROM) that are needed for many aspects of a flow,” says Jeff Tharp, product manager at Synopsys. “It’s going to require accurate ROMs in order to have fast solutions and accurate solutions. But it’s also important to have cross-physics ROMs and cross-scale ROMs, so that you can have true virtual assembly of complex systems.”
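One common recipe for building such models is proper orthogonal decomposition (POD), sketched below on synthetic data. This is a generic illustration of the ROM idea rather than a description of any Synopsys product; the snapshot matrix stands in for full-physics simulation results, such as a thermal field sampled across operating points.

```python
# A generic reduced-order-model sketch: proper orthogonal decomposition
# (POD) via truncated SVD. The snapshot data is synthetic; in practice
# each column would be one expensive full-physics simulation result.
import numpy as np

rng = np.random.default_rng(1)
n_points, n_snapshots = 10_000, 40

# Hypothetical snapshot matrix (low-rank by construction, so a small
# basis captures it exactly; real data is only approximately low-rank).
snapshots = rng.random((n_points, 3)) @ rng.random((3, n_snapshots))

# Keep the few dominant modes as the reduced basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
k = 3
basis = U[:, :k]                 # n_points x k

# A full-order field compresses to k coefficients and reconstructs fast.
full_field = snapshots[:, 0]
coeffs = basis.T @ full_field    # the "reduced" representation
approx = basis @ coeffs
err = np.linalg.norm(approx - full_field) / np.linalg.norm(full_field)
print(f"relative reconstruction error: {err:.2e}")
```

The same projection idea extends to the cross-physics and cross-scale ROMs Tharp describes, where bases from different solvers must be stitched together for virtual assembly.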
Part of the problem is a lack of data. There are insufficient numbers of good designs on which to train, or bad designs where the problems are identified. “A large portion of historical data consists of failed attempts or design alternatives that were ultimately not chosen on the path to a successful design, and you need to be able to extract meaningful insights from that,” says Kim. “Second, there is the issue of tool versions and process design kit (PDK) evolution. As tools and process technologies evolve over time, it becomes uncertain how useful past data can be. Third, there is the matter of project structure. In most cases, teams within the same company will follow a similar design methodology and maintain comparable project structures, but this is by no means guaranteed.”
Cooperation and standards
Nobody is in a position to do it on their own. “You’ve got companies with unbelievable wealth building solutions for themselves at the moment,” says Simon Davidmann, AI and EDA researcher at the University of Southampton. “You’ve got big EDA, and startups that probably don’t have the resources or the AI expertise that’s needed. Stuff is going to be done in collaboration, either as open source, with the big AI guys that are really developing very sophisticated stuff, or they’re going to fund, very significantly, EDA companies.”
Successful methodologies will need to work closely with tools. “Whichever EDA tool makes their data accessible through an open format will be the best primitive to build around for any company utilizing LLMs,” says Arvind Srinivasan, product engineering lead for Normal Computing. “Semiconductor companies have historically pushed vendors toward at least some interoperability, because you might use one vendor’s tools for signoff and another vendor for design and verification. That pressure may matter less than it used to. AI systems are already capable of reverse-engineering proprietary formats, reading binaries, and generating the code needed to extract usable data from closed tooling. The vendors that make their data accessible will have a smoother integration story. The ones that don’t will find that the walls they’ve built are increasingly easy to climb over.”
Ultimately, data needs to become standardized. “Standardization is achievable when it delivers clear value to users without exposing the proprietary mechanisms that EDA vendors rely on for differentiation,” says Moores Lab’s Henry. “A practical path forward is to define an API that focuses on shared contractual elements — such as event definitions, provenance information, and other externally meaningful signals — rather than attempting to harmonize each vendor’s internal data structures. By centering the standard on these interoperable touchpoints, AI systems can perform reliable flow orchestration, continuous integration, and optimization while vendors retain control over their internal secret sauce. As engineering teams increasingly expect this level of openness to support AI-driven workflows, market pressure is likely to encourage major EDA providers to adopt such interfaces, creating space for faster innovation from both established players and emerging startups.”
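A minimal sketch of such a contractual touchpoint appears below. The event vocabulary and field names are hypothetical, intended only to show how a record can carry externally meaningful signals and provenance while leaving each vendor’s internal data structures untouched.

```python
# A hypothetical interoperability record of the kind Henry describes:
# shared event semantics plus provenance, with no vendor internals.
# Every name and field here is illustrative, not a proposed standard.
import json
from datetime import datetime, timezone

event = {
    "event": "timing_signoff.completed",   # shared event vocabulary
    "status": "fail",
    "metrics": {"wns_ns": -0.042, "tns_ns": -3.1},  # externally meaningful signals
    "provenance": {
        "tool": "vendor-sta",              # hypothetical tool identifier
        "tool_version": "2025.1",
        "inputs": ["netlist.v#sha256:ab12...", "constraints.sdc#sha256:cd34..."],
        "timestamp": datetime.now(timezone.utc).isoformat(),
    },
}

# An orchestrating agent can route on records like this without ever
# opening a vendor's proprietary database.
print(json.dumps(event, indent=2))
```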
Knowing which standards are required means understanding what you are trying to achieve. “Focus on the use case,” says Siemens’ Balasubramanian. “Don’t focus on reusing your existing infrastructure to fit into an agentic flow. Focus on the use case, and then figure out how you can do it with an agentic flow. Customers have a lot of legacy in their flow that is getting built into the agentic flow. This would not happen if we started from a clean sheet of paper.”
Key players need a business motivation. “Standardization becomes realistic once technology, value, and user motivation align,” says Olivera Stojanović, CTO for Vtool. “I believe we are getting close. A shared standard reduces misinterpretation, especially for AI agents, and strengthens the engineer’s role in the loop.”
Until standards are reached, the AI workload is higher. “There are many dimensions of diversity in this data that resist standardization,” says Kim. “Recent advances in AI models have made it possible to process diverse data efficiently without requiring strict data standardization, but whether this can scale effectively to the level of diversity and volume found in EDA data remains an open question. Alternatively, processing such data could require an enormous number of tokens.”
Many doubt EDA companies will ever be able to standardize on an API. “That’s just not how this industry has converged,” says Normal’s Srinivasan. “But the good news is that in an LLM world, much of it comes down to how much each vendor wants to expose to other tools. The standard doesn’t need to be a formal specification. The standard just needs to be that the data is open. If it’s accessible and text-based, modern AI systems can adapt to the specifics of each tool’s format. The real question is whether individual vendors will recognize that openness is the faster path to staying relevant in AI-native design flows.”
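Srinivasan’s point about text-based accessibility can be illustrated in a few lines. The report format below is invented; the sketch shows how little machinery is needed to lift metrics out of an unfamiliar but open text format, and the raw text could just as easily be handed directly to an LLM.

```python
# Minimal sketch: extracting metrics from a hypothetical, text-based
# tool report. The format is invented; the point is that open text
# needs no formal specification to be machine-usable.
import re

report = """\
Design: cache_ctrl   Corner: ss_0p72v_125c
Worst negative slack : -0.042 ns
Total negative slack : -3.100 ns
Violating endpoints  : 17
"""

metrics = {
    key.strip().lower().replace(" ", "_"): value.strip()
    for key, value in re.findall(r"^(.+?)\s*:\s*(.+)$", report, flags=re.M)
}
print(metrics["worst_negative_slack"])  # -> "-0.042 ns"
```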
Conclusion
AI will increasingly penetrate all aspects of semiconductor design, implementation, and verification. The big question is, who is most likely to succeed? Today, the big semiconductor companies are the only ones capable of doing it. They have the data, they have restricted flows and methodologies, and they have the business incentive.
Creating an agentic flow that starts from a specification can give them a competitive advantage, at least until their competitors develop similar capabilities. At that point, the technology is likely to be transferred to the large EDA companies for more general productization. Until then, the onus is on the EDA companies to expose the necessary information from their tools, to allow those tools to take direction from agents, and to work with their competitors to bring about as much standardization as possible.