The evolution of Retrieval-Augmented Generation (RAG), which combines reasoning with grounding to improve accuracy, is also being extensively discussed.

Other developments include the passing of regulatory frameworks such as the EU AI Act, which will make governance and compliance central to AI strategy. Businesses can be expected to accelerate the embedding of AI in platforms across functions, while business measurement shifts from generic ROI to process-level KPIs such as cycle-time reduction, cost-to-serve, and risk mitigation.

New Electronics approached Avnet Silica and Synaptics at the end of 2025 to discuss what they thought would be some of the key developments and issues going forward.

Michael Uyttersprot, Market Segment Manager, Artificial Intelligence and Vision, at Avnet Silica, identified several key trends. “The first will be the rise of agentic AI and the emergence of use cases in industrial automation and smart infrastructure. Many organisations, however, are not yet ready to adopt agentic AI: there’s a lack of understanding of its capabilities, limited team training, and slow adoption of foundational AI practices, so there will need to be investment in education and infrastructure before they can fully leverage these advanced systems.

“Another is the continued evolution of edge AI functionality, particularly the growth of generative AI on embedded and edge devices, and finally an increasing focus on energy-conscious AI deployment.”

“From a Synaptics perspective, one of the key trends being prioritised is the development of Large Language Models (LLMs) for the IoT Device Edge,” added Nebu Philips, Senior Director, Technical Product Marketing, Synaptics. “While the term may appear contradictory, smaller LLMs are essential for delivering rich, multimodal and context-aware AI to IoT and other edge products. They enable natural language interaction with electronic systems. As these models are scaled down and optimised, they will allow a growing range of edge devices to understand and respond to voice commands and user queries in real time.”

To support open-ended, general-purpose voice interactions, LLMs typically need to be extremely large. But that is no longer the case with IoT and edge devices, as Philips explained.

“They are designed for highly specific use cases and, as a result, the range of interactions required is far more constrained. In these scenarios, it is both practical and efficient to scale LLMs down and optimise them for individual applications. In addition, a lot of applications cannot tolerate the latency, cost, or reliability trade-offs associated with remote data centres. As a result, there is a growing need to run AI locally, on-chip. Small, application-optimised LLMs that can run on embedded AI-enabled processors are therefore critical to enabling the next generation of smarter, more capable edge products.”

AI at the edge remains relatively new, and developers will need access to common compilers and toolchains.

“A significant trend is expected to be the adoption of models optimised for low power consumption that will run on edge devices,” said Philips.

“The adoption of low-power models at the edge is accelerating and there are clear indications that developers are embracing these technologies. Large OEMs are recognising the potential for everyday devices, such as dishwashers and washing machines, to interact naturally with users via speech. This is made possible by a growing pipeline of models that can run directly on embedded devices with minimal to no reliance on connectivity.

“The primary challenge lies in optimising these models to run efficiently on embedded processors, though this is expected to be resolved relatively quickly as numerous companies work to implement the most efficient versions of these functions.”
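One common optimisation behind the kind of model slimming Philips describes is post-training quantisation, where 32-bit floating-point weights are mapped to 8-bit integers to cut memory and compute. The sketch below is a deliberately minimal toy illustration of symmetric int8 quantisation, not any vendor's toolchain; production flows (for example TensorFlow Lite or ONNX Runtime) add per-channel scales, calibration data, and operator fusion.

```python
def quantize_int8(weights):
    """Map float weights to int8 using a single symmetric scale factor."""
    scale = max(abs(w) for w in weights) / 127.0  # largest magnitude maps to 127
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

# Toy weight vector; real models hold millions of such values.
weights = [0.8, -1.27, 0.05, 0.4]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each int8 value needs 1 byte instead of 4 for float32: a 4x memory saving,
# at the cost of a small per-weight rounding error bounded by scale / 2.
```

Shrinking weights this way is what lets speech and vision models fit within the flash and SRAM budgets of embedded processors, at the cost of a bounded loss of precision that the tooling must keep within acceptable accuracy limits.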

Business integration

For companies and organisations, integrating AI into their core business strategy is a critical challenge, as is measuring business outcomes.

“AI adoption is rapidly moving beyond initial hype and curiosity toward tangible, operational impact. Increasingly, AI is a primary focus and is embedded directly into hardware and engineering workflows. From predictive maintenance and adaptive manufacturing to personalised edge solutions, AI is becoming a core strategic enabler,” said Uyttersprot.

“At Avnet Silica, through our own projects, collaborations with partners, and customer engagements, we continually explore ways to make AI strategies more practical, scalable, and aligned with business objectives.”

According to Uyttersprot, when it comes to metrics of success, it varies by application.

“Industrial operations may focus on reduced downtime or improved throughput, while edge deployments prioritise low-latency inference and offloading of cloud resources. Even then, in some cases, AI initiatives may not yield immediate, measurable outcomes, but instead establish a foundation for future impact.”

So, what are the challenges when moving from pilot projects to full-scale AI deployments?   

“Building on edge AI adoption, there are both encouraging developments and ongoing challenges. On the positive side, several AI functions now being integrated into products are based on mature and reliable technologies, including speech-to-text, object detection, and face detection. Vision is now a practical and deployable option, and OEMs are increasingly enthusiastic about incorporating vision as a core modality in their products,” said Philips.

“Newer AI-driven features are emerging, and more are under development. However, even well-established capabilities face significant challenges, many of which are linked to ecosystem maturity.”

OEMs frequently encounter critical questions when moving from pilot projects to production.

“Who will provide ongoing support for these features? Is it necessary to build an in-house data science team? Productising AI at scale cannot be achieved quickly or organically, even for large organisations,” said Philips. He continued, “Challenges extend across technology, including the need for standardised toolsets, selecting the appropriate silicon, and developing robust models. Additional hurdles include regulatory considerations, such as navigating restrictions on data collection for training small LLMs.”

Philips remains positive despite the challenges. “Companies across the value chain are increasingly collaborating to address these issues, and meaningful progress is being made to enable broader adoption of AI at scale.”

One of the key AI challenges facing the market today is the growing pressure on memory availability.

“As AI workloads demand higher bandwidth to process massive datasets efficiently, key memory suppliers are shifting their focus toward high-bandwidth solutions for data centres. These solutions are becoming more cost-effective compared to traditional DDR4 and DDR5 options, which accelerates this trend,” according to Uyttersprot. “However, this shift creates a ripple effect: standard memory components are becoming increasingly scarce. For customers, this translates into significant challenges when building Bills of Materials (BOMs) and securing design-ins for AI systems. Ensuring supply continuity and planning for alternative memory configurations will be critical for companies looking to scale AI deployments without disruption.”

The increased adoption of low-power models running on edge devices will shape the roll-out of AI across the industry, according to both Philips and Uyttersprot.

“The adoption of low-power models at the edge is accelerating,” Philips observed. “Synaptics has been advocating for edge AI for several years, and there are now clear indications that developers are embracing these technologies. Large OEMs are recognising the potential for everyday devices to interact naturally with users via speech. This is made possible by a growing pipeline of models that can run directly on embedded devices with minimal to no reliance on connectivity. While smart home networks remain relevant, constant communication with remote data centres is no longer a necessity for many applications.

“Core technologies such as speech-to-text and text-to-speech are well understood. The primary challenge lies in optimising these models to run efficiently on embedded processors.

“Over time, these capabilities will become platform-level functions. There is little value in repeatedly redeveloping speech-to-text capabilities for each device. As OEMs begin to see the robustness and efficiency of these models, they will be increasingly willing to integrate them directly into products. Such AI-enabled features will become a clear differentiator, helping OEMs add value and stand out in the marketplace.”

Uyttersprot added, “Edge systems can reduce resource consumption and enhance daily life by embedding intelligence directly into the environments where people live and work.

“The edge often represents the ideal deployment point because latency, privacy, bandwidth, and security constraints are naturally addressed when data is processed locally. Connected edge devices still benefit from cloud coordination, yet they avoid the energy, cost overheads, and risks associated with centralised inference.

“When an application can run at the edge, it usually offers a more sustainable, responsive, and user-aligned solution that should also be more reliable. The challenge is that not all workloads can be compressed or optimised sufficiently, so a balanced approach remains necessary.”

And finally…

The development of agentic AI is seen by many as a key trend for 2026, so which industries are likely to embrace it as the technology moves from pilots to mainstream deployment?

“The initial target market for agentic AI has always been the enterprise sector,” said Philips. “While Synaptics’ primary focus remains on edge computing and the IoT, agentic AI is expected to migrate to edge devices over time. Preparations are already underway to ensure the right mix of enabling technologies is in place when ready.

“A critical factor in this evolution will be the standardisation of the software stack. For agentic AI to deliver a consistent and reliable experience across home, business, and industrial environments, it must operate on a standardised foundation. If each vendor implements agentic AI differently, the user experience will be fragmented, limiting adoption and effectiveness.

“Synaptics, along with other major technology providers, is working to establish a pathway toward a cohesive ecosystem that will work for everyone. Achieving this will require common standards and open-source technologies.

“While the industry is not quite there yet, progress is being made towards creating a framework that will support the broad adoption of agentic AI across multiple sectors.”

The industry is also seeing a strong focus on hardware technologies that enable AI to be deployed more efficiently, reliably, and with greater capability on constrained resources.

“Engineers want access to processors, including specialised accelerators, AI-enabled MCUs and MPUs, and SoCs, that are optimised to handle sensor fusion and edge inference within strict power and thermal budgets,” said Uyttersprot. “Purpose-built processors from AI-focused hardware manufacturers, such as DEEPX, now offer ultra-efficient AI chips and modules that enable high-performance, low-power inference on edge devices, allowing engineers to run complex models locally without compromising speed or accuracy.

“Memory and storage, particularly DRAM, have seen exceptional demand in recent months from centralised, high-throughput AI workloads. Furthermore, energy-efficient power and thermal management remain a priority across all tiers.”

In conclusion, while the technology is in place, Philips argued that establishing a healthy ecosystem would help to accelerate OEM adoption of AI across a much broader range of products.

“While progress is clearly being made and software innovation is advancing rapidly, the ecosystem’s maturity remains a key challenge for OEMs,” he concluded.