Edge AI, mired in fragmentation and a lack of broadly available toolchains, is inching toward open architectures and open-source hardware and software. This shift was apparent at Synaptics Tech Day on 15 October 2025, held at the company’s headquarters in San Jose, California.
In other words, some edge AI processor vendors are moving away from proprietary, closed AI software and tooling toward open software and ecosystems to deliver AI applications at scale. Google’s collaboration with Synaptics embodies this open-source approach to edge processors, aiming to deliver AI inference at very low power levels.
Figure 1 Astra SL2610 processors provide multimodal AI compute for smart appliances, home and factory automation equipment, charging infrastructure, retail PoS terminals and scanners, and more. Source: Synaptics
Google, which built a mini-TPU ASIC for edge AI under the Coral brand back in 2017, has since developed the Coral NPU as a four-way superscalar 32-bit RISC-V CPU. Google hopes that edge AI silicon suppliers will adopt this small, lightweight CPU as a consistent front-end to the other execution units on an edge AI processor.
As part of this initiative, Google has open-sourced a compiler and software stack to port models from any ML framework onto the CPU. That allows silicon vendors like Synaptics to create an open-standards-based pipeline from the ML frameworks all the way down to the NPU front-end.
But the question is why RISC-V, especially when Synaptics’ SL2610 processor is built around Arm Cortex-A55, Cortex-M52 with Helium, and Mali GPU technologies. Synaptics managers say that the move to RISC-V is intended to reduce fragmentation in software stacks serving edge AI designs.
When asked about this, John Weil, head of processing at Synaptics, told EDN that many semiconductor suppliers already employ RISC-V cores, generally as auxiliary cores, and most people don’t even know they are there. “In this case, it’s a much more performance-oriented RISC-V core to perform neural processing.”
Synaptics’ tie-up with Google
In January 2025, Synaptics announced it would integrate Google’s ML core with its Astra open-source software, combining AI-native hardware with open-source software to accelerate the development of context-aware devices.
Next, Synaptics introduced the Torq edge AI platform, which pairs NPU architectures with open-source compilers to streamline edge AI application development. Torq, built on the open-source IREE/MLIR compiler and runtime, has been critical in deploying Google’s RISC-V-based Coral open NPU in the Astra SL2610 edge AI processor.
Figure 2 Torq, a combination of AI hardware and software, includes Google’s Coral NPU and Synaptics’ home-grown AI accelerator. Source: Synaptics
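For a flavor of what such an open pipeline looks like in practice, below is a minimal sketch using IREE’s Python bindings (the iree-compiler and iree-runtime packages). Two assumptions are worth flagging: the toy MLIR function stands in for a model imported from an ML framework, and IREE’s portable vmvx backend stands in for a Coral NPU target, whose Torq-specific backend name is not public.

```python
# Minimal IREE compile-and-run sketch; a stand-in for a Torq-style flow.
# Assumptions: the toy MLIR below replaces a model lowered from an ML
# framework, and the portable "vmvx" backend replaces a Coral NPU target.
import numpy as np
import iree.compiler as ireec
import iree.runtime as ireert

# Stand-in for a model imported from an ML framework into MLIR.
SIMPLE_MUL_MLIR = """
module @demo {
  func.func @simple_mul(%arg0: tensor<4xf32>, %arg1: tensor<4xf32>) -> tensor<4xf32> {
    %0 = arith.mulf %arg0, %arg1 : tensor<4xf32>
    return %0 : tensor<4xf32>
  }
}
"""

# Compile the MLIR module to IREE's deployable bytecode format (.vmfb).
vmfb = ireec.tools.compile_str(SIMPLE_MUL_MLIR, target_backends=["vmvx"])

# Load the compiled artifact into the IREE runtime and invoke it.
config = ireert.Config("local-task")
ctx = ireert.SystemContext(config=config)
ctx.add_vm_module(ireert.VmModule.copy_buffer(ctx.instance, vmfb))

a = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
b = np.array([5.0, 6.0, 7.0, 8.0], dtype=np.float32)
result = ctx.modules.demo["simple_mul"](a, b)
print(result.to_host())  # [ 5. 12. 21. 32.]
```

In principle, retargeting the same compiled model from a CPU to an NPU is a matter of swapping the target backend, which is the portability argument behind an open compiler stack like Torq’s.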
At Synaptics Tech Day, the company showcased the Astra SL2610 processor powering several edge AI applications. These included e-bikes, EV charging infrastructure, industrial-grade AI glasses, command-based speech recognition, and smart home automation.
Vikram Gupta, chief product officer at Synaptics, told EDN that when the company set out to go broad, it decided that this processor would be AI native. “When we met with Google, it instantly resonated with us because they were working on Coral NPU, an open ML accelerator,” he said. “We also wanted to go open source as part of our AI-native processor story.”
Regarding Google’s interest in this collaboration, Gupta said that Google benefits because it has a silicon partner. “Google gets mindshare in the AI race while it’s prominent in the cloud as well as the edge AI.” Moreover, Google could bring multimodal capabilities to this tie-up to enable more context-aware user experiences, said Nina Turner, research director for enabling technologies and semiconductors at IDC.
Another critical goal of this silicon partnership is to confront fragmentation in the edge AI world. “Our take is that the only way to keep up with AI innovation at the edge is to be open,” said Weil of Synaptics. “While some edge AI suppliers want everything in their ecosystem, we are focused on how we knock down walled gardens.”
Regarding the collaboration with Google, Weil added, “As an edge AI guy, I need to be working with guys working in the cloud, focused on the next big AI idea.” He summed up by saying that the challenge for Synaptics was making hardware that keeps up with the speed of AI while embracing open architecture and open source. “So, we took Google technology and matched it with ours.”
Open and collaborative
At a time when innovations in AI software and algorithms are far outpacing silicon advancements, an AI-native approach to edge IoT processing could be critical to the adoption of contextual LLMs for audio, voice, text, and video applications at the edge.
The launch of the Astra SL2610, an AI-enabled system-on-chip (SoC) spanning application-processor-class as well as microcontroller-class parts, marks an important step toward scalable, open systems for deploying real-world edge AI. These AI-native chips are expected to foster an ecosystem that simplifies development and unlocks powerful new applications in the edge AI realm.
“We believe that the only way to keep up with AI innovation at the edge is to be open and collaborative,” Weil concluded.