Arm finally made it official this week: the launch of its first production-ready silicon, the Arm AGI CPU, co-developed with Meta and targeting agentic AI workloads in the data center. Some dubbed it Arm 2.0, a new era for the company, and Arm CEO Rene Haas highlighted Arm’s shift into selling silicon for the first time, rather than just licensing IP.

This marks a new strand of Arm’s business model. The company will continue licensing its processor IP to customers who want to design their own chips, or reference platforms in the form of compute subsystems (CSS). At the same time, the Arm AGI CPU will be offered to data center customers that need a production-ready, application-specific CPU for growing agentic AI workloads.
The news was announced at an event involving Arm, Meta, OpenAI, and several launch partners and ecosystem partners at the Festival Pavilion in San Francisco, California, on Tuesday (March 24), with views of the Golden Gate Bridge and Alcatraz.
Arm said the new CPU would form the foundation for agentic data centers, with its silicon giving the ecosystem greater flexibility in how it builds and deploys Arm-based infrastructure: licensing Arm IP, adopting Arm CSS, or deploying Arm-designed silicon.

According to Arm’s launch information, the CPU features:
- Up to 136 Arm Neoverse V3 cores per CPU, delivering leading performance per core, SoC, blade, and rack, with 6 GB/s of memory bandwidth per core at sub-100-ns latency.
- 300-W TDP with a dedicated core per program thread enabling deterministic performance under sustained load, eliminating throttling and idle threads.
- Support for high-density 1U server chassis, with air-cooled deployments of up to 8,160 cores per rack and liquid-cooled systems delivering 45,000+ cores per rack.
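As a rough sanity check on those density figures, the per-rack core counts and the 136-core socket imply the CPU counts per rack sketched below. The core numbers come from Arm’s launch bullets above; the socket counts are simple division done for illustration, not Arm-stated figures.

```python
# Socket counts implied by Arm's launch specs (derived, not Arm-stated).
cores_per_cpu = 136
air_cooled_cores_per_rack = 8_160
liquid_cooled_cores_per_rack = 45_000  # quoted as "45,000+", so a lower bound

air_sockets = air_cooled_cores_per_rack // cores_per_cpu
liquid_sockets = liquid_cooled_cores_per_rack // cores_per_cpu

print(f"Air-cooled: {air_sockets} CPUs per rack")        # 60
print(f"Liquid-cooled: {liquid_sockets}+ CPUs per rack")  # 330+
```

The air-cooled figure divides evenly at 60 sockets, consistent with dense 1U single-socket deployments in a standard rack.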
Not competing with data center customers
Contrary to many commentators’ concerns that Arm would simply be competing with its own customers by launching its own silicon, the mood at the event, on a show floor of server racks and demos, was one of visible excitement. EE Times spoke to many attendees, and the word they used was almost unanimous: the chip gives them optionality for data centers.
The target is data centers needing many CPU cores, not the low-power microcontroller market. So Arm is more likely to go head-to-head with x86 architectures in the data center, or even Nvidia’s Vera CPUs in some cases.
According to Counterpoint Research, AI hyperscalers are projected to ship more than 10 million AI ASICs in 2028. Currently, most of these ASICs are “attached” to x86 CPUs. The co-founder of Counterpoint, Neil Shah, said in a blog post, “Arm’s AGI CPU provides a purpose-built, native alternative that eliminates the inefficiencies of mixing architectures, offering a seamless Arm-on-Arm environment for AI accelerators.”
In fact, Shah said, “The pressure is on the x86 camp to protect its market share and position. It remains to be seen how Intel and AMD react, whether with pricing or a better architecture approach or software, areas they planned to address with their x86 Ecosystem Advisory Group formed 18 months ago.”
On the flip side, customers’ own design teams could stick to building in-house. Shah commented, “While AI hyperscalers look to diversify with heterogeneous architecture, with some already licensing Arm Neoverse cores, those with strong internal teams may be less inclined to buy the Arm AGI CPU off-the-shelf, which could limit some TAM [total addressable market for Arm].”
In the media Q&A session, Mohamed Awad, executive VP for the cloud AI business unit at Arm, was asked how customers developing their own CPUs would view the new chip. He responded by highlighting the optionality it gives customers, who remain free to choose from Arm’s complete portfolio.
“We can walk into any of these customers and say, here’s a portfolio of products,” Awad said. “We can give you IP, we can give you CSS, and we can give you the AGI CPU. In some cases, they may just want IP, in others they may want CSS, and in others they may want a finished piece of silicon in the form of the AGI CPU. And guess what? They get software leverage across all those products. So, what it really gives them at the end of the day is optionality.”
The revenue opportunity: 10× growth in gross profits
The move from IP to an “application-specific general-purpose compute” product (as analyst Leonard Lee of neXt Curve suggested it should be called) potentially brings Arm into a completely new league financially. Shah of Counterpoint pointed this out, highlighting that traditionally Arm could earn a few dollars in royalties per server chip. But by selling the full AGI CPU, the ASP shifts to hundreds or thousands of dollars per unit, representing a 10×-50× increase in revenue per socket.
“Arm is estimating $500 per chip in gross profits versus $50 currently through selling IP or $100 per chip through CSS,” Shah said.
Arm expects FYE28 to be the first year of meaningful Arm AGI CPU revenue, with exponential growth from there, according to Shah. By 2031, Arm expects $10 billion in IP/CSS revenue and $15 billion in AGI CPU sales, for a targeted $25 billion in annual revenue that year.
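Shah’s per-chip economics and the 2031 targets can be put side by side. The gross-profit and revenue figures below are the ones quoted above; the uplift multiples are simple ratios computed for illustration.

```python
# Per-chip gross profit by business model, per Counterpoint's Neil Shah.
gross_profit_per_chip = {"IP royalty": 50, "CSS": 100, "AGI CPU": 500}

agi = gross_profit_per_chip["AGI CPU"]
for model in ("IP royalty", "CSS"):
    multiple = agi // gross_profit_per_chip[model]
    print(f"AGI CPU vs. {model}: {multiple}x gross profit per chip")

# 2031 revenue targets cited by Shah, in $ billions.
ip_css_revenue = 10
agi_cpu_revenue = 15
print(f"2031 target: ${ip_css_revenue + agi_cpu_revenue}B total revenue")
```

The 10x figure against IP royalties is the low end of the 10x-50x per-socket revenue increase Shah describes, since his revenue range also reflects ASPs well above the $500 gross-profit baseline.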
Customers and the roadmap for more CPUs
Arm stated that Meta is its lead partner and customer, co-developing the Arm AGI CPU to optimize gigawatt-scale infrastructure for its Meta family of apps and to work alongside Meta’s own custom MTIA accelerators. Other launch partners include Cerebras, Cloudflare, F5, OpenAI, Positron, Rebellions, SAP, and SK Telecom. Each of these is working with Arm on the deployment of the Arm AGI CPU to accelerate AI-driven services across cloud, networking, and enterprise environments. Arm also said commercial systems are now available for order from ASRockRack, Lenovo, and Supermicro.
The Arm AGI CPU is just the first of Arm’s own silicon CPUs. Arm’s CEO presented a roadmap on stage indicating that follow-on products are committed and will continue in parallel with the Arm Neoverse CSS product roadmap, so that all Arm data center customers move forward together on platform architecture and software compatibility.

Since the new CPU is hugely complex, design tools were key to getting it to market. So it’s not surprising that Synopsys also announced this week that it supported the design of Arm’s CPU with solutions across its full-stack design portfolio, including EDA, interface IP, and hardware-assisted verification.
In a separate announcement tied to Arm’s launch, Meta said, “Through our collaboration with Arm, we’ll co-develop multiple generations of cutting-edge CPUs built to enable massive compute power in limited space, supporting the AI-optimized data centers and large gigawatt-scale AI deployments that are central to our AI innovations.”
Watch the interviews
EE Times spoke to a number of people to get a snapshot of the mood at the Arm Everywhere launch event. You can watch them below.
Mohamed Awad on Arm’s first foray into silicon
Synopsys’ Ravi Subramanian on development of Arm AGI CPU
Positron on role of Arm CPU in orchestration of intelligence
Rebellions’ Marshall Choy on how Arm AGI CPU supports its focus on AI inferencing
Counterpoint Research: Arm AGI CPU targets demand for heterogeneous architectures in data centers


