Arm CPUs can compete directly with Nvidia and Intel server processors
- Arm-based Neoverse CPUs can now communicate directly with Nvidia GPUs efficiently
- NVLink Fusion eliminates PCIe bottlenecks for AI-focused server deployments
- Hyperscalers such as Microsoft, Amazon, and Google can now pair custom Arm CPUs with Nvidia GPUs
Nvidia has announced that Arm-based Neoverse CPUs will now be able to integrate with its NVLink Fusion technology.
This integration allows Arm licensees to design processors capable of direct communication with Nvidia GPUs.
Previously, NVLink connections were largely limited to Nvidia’s own CPUs or to servers using Intel and AMD processors. With this change, hyperscalers such as Microsoft, Amazon, and Google can pair custom Arm CPUs with Nvidia GPUs in their workstations and AI servers.
Expansion of NVLink beyond proprietary CPUs
The development also enables Arm-based chips to move data to and from GPUs more efficiently than over standard PCIe connections.
Arm confirmed its custom Neoverse designs will include a protocol that allows seamless data transfer with Nvidia GPUs.
Arm licensees can build CPU SoCs that connect natively to Nvidia accelerators by integrating NVLink IP directly.
Customers adopting these CPUs will be able to deploy systems where multiple GPUs are paired with a single CPU for AI workloads.
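For readers curious what "direct communication" between a host CPU and its attached GPUs looks like from software, the sketch below uses only standard CUDA runtime calls to enumerate the GPUs paired with one host CPU and ask whether each pair supports peer-to-peer access, the transfer path that rides over NVLink where the hardware exposes it and falls back to PCIe otherwise. This is an illustrative assumption rather than anything specific to NVLink Fusion or Neoverse hosts; the same calls work on any multi-GPU CUDA system.

```cpp
// Minimal sketch: enumerate the GPUs attached to a single host CPU and
// report whether each pair supports direct peer-to-peer access. On systems
// with NVLink between the GPUs, that is the link such transfers use;
// otherwise they go over PCIe. Nothing here is specific to NVLink Fusion
// or Arm Neoverse hosts.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    if (cudaGetDeviceCount(&deviceCount) != cudaSuccess || deviceCount < 2) {
        std::printf("Need at least two visible GPUs to check peer access.\n");
        return 0;
    }

    for (int src = 0; src < deviceCount; ++src) {
        for (int dst = 0; dst < deviceCount; ++dst) {
            if (src == dst) continue;
            int canAccess = 0;
            // Reports whether GPU 'src' can read and write GPU 'dst' memory
            // directly, without staging transfers through host (CPU) memory.
            cudaDeviceCanAccessPeer(&canAccess, src, dst);
            std::printf("GPU %d -> GPU %d peer access: %s\n",
                        src, dst, canAccess ? "yes" : "no");
        }
    }
    return 0;
}
```

Compile with nvcc (for example, `nvcc peer_check.cu -o peer_check`) on a machine with at least two GPUs to see the per-pair results.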
The announcement was made at the Supercomputing ’25 conference and involves both CPU and GPU developers.
Nvidia’s Grace Blackwell platform currently pairs multiple GPUs with an Arm-based CPU, while other server configurations rely on Intel or AMD CPUs.
Microsoft, Amazon, and Google are deploying Arm-based CPUs to gain more control over their infrastructure and reduce operational costs.
Arm itself does not manufacture CPUs but licenses its instruction set architecture and sells designs for faster development of Arm-based processors.
NVLink Fusion support in Arm chips allows these processors to work with Nvidia GPUs without requiring Nvidia CPUs.
The ecosystem also affects sovereign AI projects, where governments or cloud providers may want Arm CPUs for control-plane tasks.
NVLink allows these systems to use Nvidia GPUs while maintaining custom CPU configurations.
SoftBank, which previously held shares in Nvidia, is backing OpenAI’s Stargate project, which plans to use both Arm and Nvidia chips.
NVLink Fusion integration, therefore, provides options for pairing Arm CPUs with market-leading GPU accelerators in multiple environments.
From a technical perspective, NVLink expansion increases the number of CPUs that can be used in Nvidia-centric AI systems.
It also allows future Arm-based designs to compete directly with Nvidia’s Grace and Vera processors, as well as Intel Xeon CPUs, in configurations where GPUs are the main computational units.
The development may reduce the appeal of alternative interconnects or competing AI accelerators, but chip development cycles could affect adoption timing.