{"id":22725,"date":"2026-04-30T07:55:24","date_gmt":"2026-04-30T07:55:24","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/22725\/"},"modified":"2026-04-30T07:55:24","modified_gmt":"2026-04-30T07:55:24","slug":"creating-agentic-eda-methodologies","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/22725\/","title":{"rendered":"Creating Agentic EDA Methodologies"},"content":{"rendered":"<p>Key takeaways<\/p>\n<p>Agentic methodologies need to be able to reason across multiple data formats and abstractions.<br \/>\nIt is not clear how much data from previous designs is useful in new designs.<br \/>\nStandards may help, but the lack of them may only impact cost.<\/p>\n<p>The relationship between tools and methodologies is bidirectional. Tools enable methodologies, and methodologies are dependent on tool features and the data they provide. But there are very few architectural-level tools in the industry today, which will make it more difficult to create complete agentic flows.<\/p>\n<p>Much of the first round of AI within EDA concentrated on a single tool, and therefore only concerned itself with one type of data at one level of abstraction. It did not have to think about external tools and data interoperability. (See related story <a href=\"https:\/\/semiengineering.com\/using-data-and-ai-more-effectively-in-eda\/\" rel=\"nofollow noopener\" target=\"_blank\">here<\/a>.) As the industry progresses to flows and methodologies, all of these simplifications disappear.<\/p>\n<p>This creates a potential problem for the EDA industry. Most of what\u2019s valuable for AI is at the front end of the flow, when specifications are being developed, architectures defined, and verification plans put in place. The potential gains from allowing AI to make changes closer to the back end of the flow have less value and much higher risk. 
Historically, the front end has not been a favored place for tool development because the limited time spent there by just a few architects made it an uneconomical venture.<\/p>\n<p>Another problem is that abstractions in this part of the flow have not been settled upon. Academia proposed several in the past, and electronic system-level (ESL) tools were developed and discarded in the 1990s and 2000s. SystemC did bring about the notion of untimed and approximately timed models. While these have been used by high-level synthesis (HLS) tools, they have seen little application outside of HLS.<\/p>\n<p>This may create difficulties when AI methodologies are being created. Alternatively, AI may provide the solution by tying these abstractions to RTL, enabling bi-directional connectivity between them. Several industry insiders have suggested that large semiconductor companies are working toward solutions in this area, citing the competitive advantage it gives them. This is how disruptions get started, even though it can take a long time before such technology expands into common usage.<\/p>\n<p>Diversified data<\/p>\n<p>Data comes from many sources. Some are related to the current design, but AI needs to learn from past experiences as well. Few people have looked into the applicability or longevity of some of that data.<\/p>\n<p>Data relevance is anchored by many factors. \u201cWe have been designing protocol IPs across multiple generations,\u201d says Badarinath Kommandur, fellow at <a href=\"https:\/\/semiengineering.com\/entities\/cadence-design-systems\/\" rel=\"nofollow noopener\" target=\"_blank\">Cadence<\/a>. \u201cWe have the entire design content, all the way from specification, RTL implementation, verification test benches, to active design implementation \u2014 on multiple generations, multiple foundries, and multiple nodes. 
The question is, given a new interface standard, can you train an AI engine or LLM-based approach to learn from the previous implementations? Can it look at a new interface specification and come up with a solution where an expert can iterate quickly to make it production quality, and take it all the way through design implementation? Can we learn from that and use that knowledge to design the next generation?\u201d<\/p>\n<p>Data representations change across stages of the design process. \u201cEven if we just take digital design as an example, there are stages like SystemC, RTL, gate-level netlist, and layout, each producing its own distinct types of data,\u201d says Doyun Kim, AI engineer at Normal Computing. \u201cIn the industry, the term \u2018shift-left\u2019 embodies the idea of predicting later-stage results at earlier stages so that flawed designs can be pruned early, minimizing costly iterations.\u201d<\/p>\n<p>The degrees of freedom are reduced as you progress through a development flow. \u201cWe are seeing a lot of agentic flows being developed for the front-end side of things,\u201d says Sathishkumar Balasubramanian, head of products at <a href=\"https:\/\/semiengineering.com\/entities\/mentor-a-siemens-business\/\" rel=\"nofollow noopener\" target=\"_blank\">Siemens EDA<\/a>. \u201cThis is when you\u2019re doing the design phase, on the functional side. As you get closer to tape-out, everything gets tighter or more constrained. There are fewer opportunities for AI here because you\u2019ve done all the hard work and you don\u2019t want something to mess that up.\u201d<\/p>\n<p>Crossing abstractions requires a different kind of data to be captured. \u201cA knowledge database could provide a comprehensive, detailed view of the design, including structure, behavior, and verification concerns,\u201d says Shelly Henry, founder and CEO of Moores Lab AI. 
\u201cIt may also need a complementary view of the overall process flow so an AI agent can reason across the full pipeline. Such databases are being created, capturing critical information from the design architectural specification and from the RTL itself. This could enable agentic AI solutions that can automatically create many aspects of the verification environment.\u201d<\/p>\n<p>Shift left requires early estimates so that informed choices can be made at more abstract design stages. \u201cVarious techniques, including AI-based approaches, are being used to predict performance metrics early on,\u201d says Normal\u2019s Kim. \u201cHowever, this faces generalization challenges. Prediction may work well for similar types of IP, but for entirely different classes of IP, the relevant parameters for prediction may differ, and if training data is scarce, prediction accuracy cannot be guaranteed. Can LLM-level generalization truly cover these cases as well? That is an interesting open question.\u201d<\/p>\n<p>The ultimate goal<\/p>\n<p>The ultimate goal is often stated as an agentic flow from specification to optimized design that is correct by construction. \u201cIn the future I may be able to say, \u2018Design an ALU, or design a cache controller with the following parameters,\u2019\u201d says Dean\u00a0Drako, CEO of IC Manage. \u201cHere is the spec, and it goes off and grinds and designs it. I don\u2019t think folks in the EDA world are really in the chip design world, so they are not jumping on this bandwagon yet. They want to make sure they understand what it\u2019s doing, why it\u2019s doing it, and where it\u2019s doing it.\u201d<\/p>\n<p>Coming up with good designs requires lots of back-end expertise. \u201cIf you look at a design team, they will have concentrated expertise in different domains, in a few engineers,\u201d says Cadence\u2019s Kommandur. 
\u201cIf you go through a design execution cycle, typically you will reach a point in the closure, sign-off, and tape-out process where you need to rely on a handful of these experts to close the design, meet your performance, power, and area targets, and tape it out. The question is, how do you capture the knowledge held by this handful of engineers in an agentic AI framework so that any designer within the design EDA ecosystem can apply it to solve and convert the designs in the most efficient manner possible?\u201d<\/p>\n<p>Using AI to create models is an interesting possibility. \u201cThe future will all be about reduced order models (ROMs) that are needed for many aspects of a flow,\u201d says Jeff\u00a0Tharp, product manager at <a href=\"https:\/\/semiengineering.com\/entities\/synopsys-inc\/\" rel=\"nofollow noopener\" target=\"_blank\">Synopsys<\/a>. \u201cIt\u2019s going to require accurate ROMs in order to have fast, accurate solutions. But it\u2019s also important to have cross-physics ROMs and cross-scale ROMs, so that you can have true virtual assembly of complex systems.\u201d<\/p>\n<p>Part of the problem is a lack of data. There are not enough good designs on which to train, or bad designs in which the problems have been identified. \u201cA large portion of historical data consists of failed attempts or design alternatives that were ultimately not chosen on the path to a successful design, and you need to be able to extract meaningful insights from that,\u201d says Kim. \u201cSecond, there is the issue of tool versions and process design kit (PDK) evolution. As tools and process technologies evolve over time, it becomes uncertain how useful past data can be. Third, there is the matter of project structure. 
In most cases, teams within the same company will follow a similar design methodology and maintain comparable project structures, but this is by no means guaranteed.\u201d<\/p>\n<p>Cooperation and standards<\/p>\n<p>Nobody is in a position to do it on their own. \u201cYou\u2019ve got companies with unbelievable wealth building solutions for themselves at the moment,\u201d says Simon\u00a0Davidmann, AI and EDA researcher at the University of Southampton. \u201cYou\u2019ve got big EDA, and startups that probably don\u2019t have the resources or the AI expertise that\u2019s needed. Stuff is going to be done in collaboration, either as open source, with the big AI guys that are really developing very sophisticated stuff, or they\u2019re going to fund, very significantly, EDA companies.\u201d<\/p>\n<p>Successful methodologies will need to work closely with tools. \u201cWhichever EDA tool makes its data accessible through an open format will be the best primitive to build around for any company utilizing LLMs,\u201d says Arvind Srinivasan, product engineering lead for Normal Computing. \u201cSemiconductor companies have historically pushed vendors toward at least some interoperability, because you might use one vendor\u2019s tools for signoff and another vendor\u2019s for design and verification. That pressure may matter less than it used to. AI systems are already capable of reverse-engineering proprietary formats, reading binaries, and generating the code needed to extract usable data from closed tooling. The vendors that make their data accessible will have a smoother integration story. The ones that don\u2019t will find that the walls they\u2019ve built are increasingly easy to climb over.\u201d<\/p>\n<p>Ultimately, data needs to become standardized. \u201cStandardization is achievable when it delivers clear value to users without exposing the proprietary mechanisms that EDA vendors rely on for differentiation,\u201d says Moores Lab\u2019s Henry. 
\u201cA practical path forward is to define an API that focuses on shared contractual elements \u2014 such as event definitions, provenance information, and other externally meaningful signals \u2014 rather than attempting to harmonize each vendor\u2019s internal data structures. By centering the standard on these interoperable touchpoints, AI systems can perform reliable flow orchestration, continuous integration, and optimization while vendors retain control over their internal secret sauce. As engineering teams increasingly expect this level of openness to support AI-driven workflows, market pressure is likely to encourage major EDA providers to adopt such interfaces, creating space for faster innovation from both established players and emerging startups.\u201d<\/p>\n<p>Knowing which standards are required means understanding what you are trying to achieve. \u201cFocus on the use case,\u201d says Siemens\u2019 Balasubramanian. \u201cDon\u2019t focus on reusing your existing infrastructure to fit into an agentic flow. Focus on the use case, and then figure out how you can do it with an agentic flow. Customers have a lot of legacy in their flow that is getting built into the agentic flow. This would not happen if we started from a clean sheet of paper.\u201d<\/p>\n<p>Key players need a business motivation. \u201cStandardization becomes realistic once technology, value, and user motivation align,\u201d says Olivera Stojanovi\u0107, CTO for Vtool. \u201cI believe we are getting close. A shared standard reduces misinterpretation, especially for AI agents, and strengthens the engineer\u2019s role in the loop.\u201d<\/p>\n<p>Until standards are reached, the AI workload is higher. \u201cThere are many dimensions of diversity in this data that resist standardization,\u201d says Kim. 
\u201cRecent advances in AI models have made it possible to process diverse data efficiently without requiring strict data standardization, but whether this can scale effectively to the level of diversity and volume found in EDA data remains an open question. Alternatively, processing such data could require an enormous number of tokens.\u201d<\/p>\n<p>Many doubt EDA companies will ever be able to standardize on an API. \u201cThat\u2019s just not how this industry has converged,\u201d says Normal\u2019s Srinivasan. \u201cBut the good news is that in an LLM world, much of it comes down to how much each vendor wants to expose to other tools. The standard doesn\u2019t need to be a formal specification. The standard just needs to be that the data is open. If it\u2019s accessible and text-based, modern AI systems can adapt to the specifics of each tool\u2019s format. The real question is whether individual vendors will recognize that openness is the faster path to staying relevant in AI-native design flows.\u201d<\/p>\n<p>Conclusion<\/p>\n<p>AI will increasingly penetrate all aspects of semiconductor design, implementation, and verification. The big question is, who is most likely to succeed? Today, the big semiconductor companies are the only ones capable of doing it. They have the data, they have restricted flows and methodologies, and they have the business incentive.<\/p>\n<p>Creating an agentic flow that starts from a specification can provide them with a competitive advantage, until such time as all their competitors have similar capabilities. At that point, the technology is likely to be transferred to the large EDA companies for more general productization. 
Until then, the onus is on the EDA companies to provide the necessary information from their tools, to enable those tools to take direction from agents, and to work with their competitors to bring about as much standardization as possible.<\/p>\n","protected":false},"excerpt":{"rendered":"Key takeaways Agentic methodologies need to be able to reason across multiple data formats and abstractions. It is&hellip;\n","protected":false},"author":2,"featured_media":20788,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[179,7493,24,25,10595,10465,15710,15711,15712,15713,15714,15715,15716,10600,9383,15717,15718,15719],"class_list":{"0":"post-22725","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-agentic-ai","8":"tag-agentic-ai","9":"tag-agentic-artificial-intelligence","10":"tag-ai","11":"tag-artificial-intelligence","12":"tag-cadence","13":"tag-eda","14":"tag-eda-methodologies","15":"tag-electronic-system-level","16":"tag-high-level-synthesis","17":"tag-ic-manage","18":"tag-moores-lab-ai","19":"tag-normal-computing","20":"tag-rtl","21":"tag-siemens-eda","22":"tag-synopsys","23":"tag-systemc","24":"tag-university-of-southampton","25":"tag-vtool"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/22725","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=22725"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/22725\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays
.com\/ai\/wp-json\/wp\/v2\/media\/20788"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=22725"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=22725"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=22725"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}