AI Development

Arm AGI CPU: Arm's First Physical Chip in 35 Years

Arm launches its first physical chip in 35 years: a 136-core Neoverse V3 on TSMC 3nm. Meta leads adoption. Impact on AI infrastructure and cloud costs.

Digital Applied Team
March 27, 2026
10 min read
136 CPU Cores · 3nm TSMC Process · 300W Thermal Design Power · 45K+ Cores per Rack

Key Takeaways

Arm ships its first physical chip ever: After 35 years of exclusively licensing IP to partners like Apple, Qualcomm, and Samsung, Arm has designed, manufactured, and will sell its own production silicon. The AGI CPU is a 136-core data center processor built on TSMC's 3nm process, representing a fundamental shift in the company's business model.
Meta co-developed the chip as lead customer: Meta partnered with Arm from the design phase to optimize the AGI CPU for its gigawatt-scale data center infrastructure. The chip is designed to work alongside Meta's custom MTIA accelerators, with additional launch partners including OpenAI, Cloudflare, Cerebras, SAP, and SK Telecom.
Two times the performance per rack versus x86: Arm claims the AGI CPU delivers more than 2x the performance per rack compared to x86 processors from Intel and AMD. A liquid-cooled Supermicro rack can house 336 AGI CPUs for over 45,000 cores, while a standard air-cooled 36kW rack fits 8,160 cores across 30 dual-node blades.
300W TDP challenges x86 efficiency at scale: The AGI CPU operates at 300 watts thermal design power while delivering up to 136 cores. Competing x86 processors from AMD and Intel require 400-500W for similar core counts, giving Arm a meaningful power efficiency advantage in data centers where electricity costs dominate operating expenses.
Commercial systems are available to order now: Unlike many chip announcements that promise future availability, servers featuring the Arm AGI CPU are already available for order from ASRock Rack, Lenovo, and Supermicro, with broader availability expected in the second half of 2026.

On March 24, 2026, Arm announced the AGI CPU, a 136-core data center processor that the company designed and will sell as finished silicon. For a company that has spent 35 years exclusively licensing intellectual property to chip manufacturers, this is not an incremental product update. It is a structural change to how one of the semiconductor industry's most influential companies operates.

The AGI CPU is built on TSMC's 3nm process using Neoverse V3 cores, runs at up to 3.7 GHz boost, and operates within a 300W thermal design power envelope. Meta co-developed the chip and is the lead customer, with OpenAI, Cloudflare, Cerebras, SAP, and SK Telecom among the announced launch partners. Commercial servers from ASRock Rack, Lenovo, and Supermicro are already available to order. For organizations evaluating their AI and digital transformation infrastructure, this launch reshapes the competitive landscape for data center computing.

What Is the Arm AGI CPU?

The Arm AGI CPU is a data center processor designed for AI-era infrastructure workloads. The name AGI stands for Arm General Infrastructure, a deliberate choice to position the chip as a general-purpose foundation for cloud computing, AI inference orchestration, and agentic AI workloads rather than a narrow AI accelerator. It is the first production silicon that Arm has designed, manufactured, and sold directly, rather than licensing the design to a partner for fabrication and sale.

136 Neoverse V3 Cores

Up to 136 cores spread across two dies, clocking up to 3.2 GHz all-core and 3.7 GHz boost. Designed from the ground up for data center throughput at scale.

TSMC 3nm Process

Manufactured on TSMC's advanced 3nm node, delivering leading transistor density and power efficiency. The same process node used in Apple's latest chips.

1OU Dual-Node Design

Arm's reference server packs two AGI CPUs with dedicated memory and I/O into a 1OU blade, achieving 272 cores per blade and 8,160 cores per standard rack.

The processor supports 12 channels of DDR5 memory at up to 8800 MT/s, delivering more than 800 GB/s of aggregate memory bandwidth, or approximately 6 GB/s per core with a target of sub-100ns latency. PCIe Gen6 connectivity provides the I/O bandwidth needed for high-speed networking and accelerator interconnects in modern data center deployments.
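As a sanity check, the aggregate bandwidth figure follows directly from the channel count and transfer rate. The 8-byte (64-bit) channel width used below is a standard DDR5 assumption, not a figure from the announcement:

```python
# Aggregate DDR5 bandwidth for the AGI CPU's published memory configuration.
CHANNELS = 12                 # 12-channel DDR5, per the article
TRANSFER_RATE_MT_S = 8800     # DDR5-8800, mega-transfers per second
BYTES_PER_TRANSFER = 8        # 64-bit channel width (standard DDR5 assumption)
CORES = 136

aggregate_gb_s = CHANNELS * TRANSFER_RATE_MT_S * BYTES_PER_TRANSFER / 1000
per_core_gb_s = aggregate_gb_s / CORES

print(f"aggregate: {aggregate_gb_s:.1f} GB/s")  # 844.8 GB/s -> "800+ GB/s"
print(f"per core:  {per_core_gb_s:.1f} GB/s")   # ~6.2 GB/s -> "~6 GB/s per core"
```

The arithmetic lands at roughly 845 GB/s aggregate and 6.2 GB/s per core, consistent with the "800+ GB/s" and "approximately 6 GB/s per core" claims.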

Technical Specifications and Architecture

The AGI CPU's architecture reflects a design philosophy focused on throughput per watt rather than peak single-thread performance. Every major subsystem, from the core microarchitecture to the memory controller to the on-die interconnect, is optimized for the parallel, memory-intensive workloads that dominate AI data centers.

Complete Technical Specifications
Specification | Details
Core Count | Up to 136 Neoverse V3 cores (dual-die)
Clock Speed | 3.2 GHz all-core / 3.7 GHz boost
Process Node | TSMC 3nm (N3E)
TDP | 300W
Memory | 12-channel DDR5-8800, 800+ GB/s bandwidth
Memory Latency | Sub-100ns target
Bandwidth per Core | ~6 GB/s
I/O | PCIe Gen6
Server Form Factor | 1OU dual-node (272 cores per blade)

The dual-die design is a pragmatic choice. Rather than pushing a monolithic die to extreme sizes, which increases defect rates and manufacturing costs, Arm splits the cores across two chiplets connected by a high-bandwidth die-to-die interconnect. This approach, similar to what AMD pioneered with its EPYC chiplet architecture, allows higher yields while still delivering a unified memory space and coherent cache hierarchy across all 136 cores.

Historic Shift: Licensing to Manufacturing

For 35 years, Arm operated on a model that was unique in the semiconductor industry. The company designed processor architectures and licensed them to manufacturers, collecting royalties on every chip sold. This approach made Arm the most pervasive computing architecture in the world: more than 280 billion Arm-based chips have been shipped, powering everything from smartphones and embedded systems to cloud servers and supercomputers. But Arm never manufactured or sold a chip itself.

The AGI CPU changes that equation entirely. Arm is now both the architect and the manufacturer (through TSMC fabrication), selling finished silicon directly to data center operators. This is not a reference design that a partner will rebrand and sell. It is an Arm product, manufactured to Arm's specifications, sold under Arm's brand, and supported by Arm's engineering team.

Before and After: Arm's Business Model
Dimension | Before AGI CPU | After AGI CPU
Revenue model | IP licensing + royalties | IP licensing + chip sales
Customer relationship | Chip designers (Qualcomm, Apple) | Data center operators (Meta, OpenAI)
Value capture per chip | Royalty percentage | Full chip margin
Competitive position | Neutral IP provider | Competitor to licensees

The strategic risk is obvious. Arm's licensing business depends on being a neutral platform that licensees trust. By entering the chip market directly, Arm is now competing with some of its own customers, a dynamic that Intel experienced when it tried to be both a foundry and a chip vendor. Arm is mitigating this by focusing the AGI CPU specifically on the data center market and maintaining its licensing business for mobile, automotive, and IoT, but the tension is real and will be closely watched by industry analysts.

Key Customers and Partnerships

The AGI CPU launch is backed by one of the most significant partner ecosystems in recent chip history. Meta's involvement as co-development partner and lead customer gives the chip immediate credibility and guaranteed volume, while the breadth of additional partners suggests the chip addresses a genuine market need rather than a niche one.

Meta (Lead Partner)

Co-developed the AGI CPU to optimize gigawatt-scale infrastructure for the Meta family of apps. Designed to work alongside Meta's custom MTIA accelerators for AI inference and serving workloads across Facebook, Instagram, WhatsApp, and Threads.

AI Infrastructure Partners

OpenAI, Cerebras, Positron, and Rebellions are among the AI companies adopting the AGI CPU for inference serving, model orchestration, and agentic AI workloads where CPU-side performance directly affects response latency.

Cloud and Edge Partners

Cloudflare and F5 are adopting the AGI CPU for edge and network infrastructure. SAP and SK Telecom bring enterprise software and telecommunications use cases. The ecosystem also includes AWS, Google, and Microsoft as technology partners.

System Manufacturers

ASRock Rack, Lenovo, and Supermicro are the launch server partners with systems available to order immediately. NVIDIA, Samsung, SK hynix, and TSMC provide supporting technology for accelerator integration, memory, and fabrication.

The partner list is notable for what it signals about the chip's target market. These are not consumer electronics companies or mobile device manufacturers. Every major partner operates at data center scale, which reinforces Arm's positioning of the AGI CPU as a purpose-built infrastructure processor rather than a general-purpose chip that also works in data centers.

Competitive Landscape: Nvidia, AMD, and Intel

The Arm AGI CPU enters a data center processor market that has been dominated by x86 architecture for decades. AMD's EPYC line and Intel's Xeon processors hold the overwhelming majority of the server CPU market. Arm-based processors have been gaining share through cloud provider custom silicon, including AWS Graviton, Google Axion, and Microsoft Cobalt, but Arm itself has never competed directly until now.

Data Center CPU Comparison
Metric | Arm AGI CPU | AMD EPYC 9005 | Intel Xeon 6
Max Cores | 136 | 192 | 144
TDP | 300W | 500W | 500W
Process Node | 3nm | 3nm/4nm | Intel 3
Architecture | Arm v9 | x86-64 | x86-64
Memory Channels | 12 | 12 | 8

The competitive dynamics are nuanced. AMD and Intel compete on raw core counts and single-thread performance, while Arm is competing on performance per watt and rack density. At 300W TDP with 136 cores, the AGI CPU offers roughly 0.45 cores per watt, compared to 0.38 for AMD's 192-core EPYC at 500W and 0.29 for Intel's 144-core Xeon at 500W. In data centers where power and cooling represent 30-40% of total operating costs, that efficiency advantage compounds quickly at scale.
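The cores-per-watt figures can be reproduced directly from the comparison table's core counts and TDPs:

```python
# Cores-per-watt comparison using the spec figures from the table above.
chips = {
    "Arm AGI CPU":   (136, 300),   # (cores, TDP in watts)
    "AMD EPYC 9005": (192, 500),
    "Intel Xeon 6":  (144, 500),
}

cores_per_watt = {name: cores / tdp for name, (cores, tdp) in chips.items()}
for name, cpw in cores_per_watt.items():
    print(f"{name}: {cpw:.2f} cores/W")   # 0.45 / 0.38 / 0.29
```

Note this compares rated TDP, not measured power draw under load; real-world efficiency depends on workload and utilization.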

It is worth noting that Nvidia's Grace CPU, also Arm-based, occupies a similar space. The difference is that Nvidia positions Grace primarily as a CPU companion to its GPU accelerators, while Arm is positioning the AGI CPU as a standalone infrastructure processor. The two chips may end up complementary rather than competitive, especially in deployments that combine Arm-based CPUs with Nvidia or other accelerators. More broadly, the convergence of custom silicon and AI workloads is shaping up as one of the defining infrastructure themes of 2026.

Power Efficiency and Cloud Cost Impact

Power consumption is the single largest variable cost in modern data center operations. Electricity for compute and cooling typically accounts for 30-40% of total facility operating expenses, and that percentage is increasing as AI workloads demand more power-dense deployments. The Arm AGI CPU's 300W TDP, combined with its core density, directly addresses this cost structure.
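A rough per-socket illustration of that cost structure: the sketch below compares a 300W part against a 500W x86 part running continuously at full TDP. The $0.08/kWh electricity rate and the 1.3 PUE (cooling and facility overhead multiplier) are illustrative assumptions, not figures from the announcement:

```python
# Illustrative annual electricity saving per socket: 300W TDP vs 500W TDP,
# assuming continuous full-TDP operation. Rate and PUE are assumed values
# for illustration only.
TDP_DELTA_KW = (500 - 300) / 1000   # 0.2 kW difference per socket
HOURS_PER_YEAR = 8760
PUE = 1.3                           # assumed facility overhead multiplier
PRICE_PER_KWH = 0.08                # assumed $/kWh

annual_saving_usd = TDP_DELTA_KW * HOURS_PER_YEAR * PUE * PRICE_PER_KWH
print(f"~${annual_saving_usd:.0f} per socket per year")   # ~$182
```

Under these assumptions the saving is modest per socket, but multiplied across thousands of racks at hyperscale it becomes a material line item.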

Air-Cooled Configuration
  • 30 dual-node blades per 36kW rack
  • 8,160 cores per rack
  • 272 cores per 1OU blade
  • Standard data center cooling
Liquid-Cooled Configuration
  • 336 AGI CPUs per 200kW rack
  • Over 45,000 cores per rack
  • Supermicro partnership
  • Maximum density for AI data centers
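The rack-level core counts in both configurations follow from the per-CPU and per-blade figures stated above:

```python
# Rack density arithmetic from the article's published configurations.
CORES_PER_CPU = 136
CPUS_PER_BLADE = 2            # 1OU dual-node blade -> 272 cores per blade
BLADES_PER_AIR_RACK = 30      # standard 36kW air-cooled rack
CPUS_PER_LIQUID_RACK = 336    # Supermicro liquid-cooled configuration

air_cooled_cores = BLADES_PER_AIR_RACK * CPUS_PER_BLADE * CORES_PER_CPU
liquid_cooled_cores = CPUS_PER_LIQUID_RACK * CORES_PER_CPU

print(air_cooled_cores)       # 8160  ("8,160 cores per rack")
print(liquid_cooled_cores)    # 45696 ("over 45,000 cores per rack")
```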

The cost implications extend beyond electricity. Higher core density per rack means fewer racks for the same compute capacity, which translates to less floor space, fewer network switches, shorter cable runs, and lower facilities overhead. For hyperscale operators like Meta that measure infrastructure in thousands of racks, these efficiency gains compound into substantial capital and operating expense reductions. For organizations evaluating cloud storage cost optimization strategies, the same principle applies: infrastructure efficiency at scale is a competitive advantage.

AI Data Center Implications

The AI data center is evolving from a GPU-centric model, where the CPU is an afterthought that feeds data to accelerators, to a heterogeneous computing model where CPUs handle a growing share of the workload directly. Agentic AI workloads, in particular, place significant demands on CPU-side performance for orchestration, context management, tool calling, and serving infrastructure.

The Arm AGI CPU is explicitly positioned for this shift. Rather than competing with GPUs for training or dense matrix computation, it targets the surrounding infrastructure: the inference servers that route requests to models, the orchestration layers that manage multi-step agent workflows, the data preprocessing pipelines that prepare inputs, and the serving infrastructure that delivers responses to end users.

Inference Serving

High core counts and memory bandwidth enable efficient parallel inference request handling, reducing latency for real-time AI applications at scale.

Agent Orchestration

Agentic AI workloads require CPU performance for context management, tool calling, and multi-step workflow coordination. The AGI CPU's core density addresses this directly.

Data Preprocessing

Tokenization, embedding preparation, and data pipeline processing benefit from 800+ GB/s of memory bandwidth, ensuring CPUs keep accelerators fed without becoming a bottleneck.

The timing is significant. As AI deployments mature from experimental to production-scale, the ratio of CPU-to-GPU compute in a typical AI data center is increasing. Early AI infrastructure was almost entirely GPU-bound, but production AI systems spend substantial compute cycles on serving, routing, pre/post-processing, and orchestration, all of which are CPU workloads. The AGI CPU is designed for this emerging demand profile rather than the GPU-dominated training phase.

Market Analysis and What This Means for Businesses

Arm's stock surged 16% on the announcement, reflecting investor confidence in the strategic pivot. The market reaction is justified by the economics: selling complete chips captures significantly more revenue per unit than collecting licensing fees. If the AGI CPU achieves meaningful data center market share, Arm's revenue per chip could increase by an order of magnitude compared to its licensing model.

For businesses that are not hyperscale data center operators, the implications are indirect but meaningful. The AGI CPU will influence cloud computing economics within 12 to 18 months as cloud providers evaluate and potentially adopt the chip. AWS already offers Arm-based Graviton instances at lower price points than x86 equivalents. If the AGI CPU delivers on its performance-per-watt claims, it could accelerate the trend of cloud providers offering cost-effective Arm-based compute tiers.

The broader strategic signal is that the data center chip market is becoming more competitive. Intel's monopoly eroded with AMD's EPYC comeback. Custom silicon from AWS, Google, and Microsoft further diversified supply. Arm's entry as a direct competitor now adds another option, increasing pressure on all vendors to improve performance per dollar. For AI infrastructure planning in 2026, the additional competition is unambiguously positive for buyers.

Conclusion

The Arm AGI CPU is not just another data center chip. It is a structural change in how one of the semiconductor industry's most important companies goes to market. After 35 years of licensing IP, Arm is now selling silicon, and the implications reach far beyond the chip itself. Meta's involvement as co-development partner validates the technical approach. The broader partner ecosystem, from OpenAI to Cloudflare to Cerebras, signals that the market demand is real.

For businesses, the practical impact will unfold over 12 to 24 months as cloud providers evaluate adoption and the AGI CPU moves from initial availability to broad deployment. The performance per watt advantage, the rack density improvements, and the growing maturity of the Arm server software ecosystem all point toward lower cloud computing costs and more options for infrastructure planning. The data center chip market just became meaningfully more competitive, and that benefits everyone who runs workloads in the cloud.

Build AI-Ready Infrastructure

The shift to Arm-based data center computing is accelerating. Our team helps businesses design cloud architecture and AI infrastructure strategies that optimize for performance, cost, and scalability.

Free consultation · Expert guidance · Tailored solutions
