NVIDIA Secures Long-Term Infrastructure Agreement With Meta

NVIDIA has entered a multi-year, multi-generation strategic collaboration with Meta, targeting deployments across on-prem data centers, cloud environments, and AI infrastructure stacks.

Through the partnership, Meta will develop hyperscale data centers optimized for AI training and inference to support its long-term infrastructure plans. The collaboration includes broad deployment of NVIDIA CPUs and millions of Blackwell and Rubin GPUs, as well as the use of Spectrum-X Ethernet switches within Meta's Facebook Open Switching System (FBOSS).

NVIDIA Spectrum-X Ethernet Networking Platform | Image Credit: NVIDIA

“No one deploys AI at Meta’s scale, integrating frontier research with industrial-scale infrastructure to power the world’s largest personalization and recommendation systems for billions of users,” said Jensen Huang, founder and CEO of NVIDIA. “Through deep codesign across CPUs, GPUs, networking, and software, we are bringing the full NVIDIA platform to Meta’s researchers and engineers as they build the foundation for the next AI frontier.”

“We’re excited to expand our partnership with NVIDIA to build leading-edge clusters using their Vera Rubin platform to deliver personal superintelligence to everyone in the world,” said Mark Zuckerberg, founder and CEO of Meta.

Meta and NVIDIA are expanding their collaboration with the deployment of Arm-based NVIDIA Grace CPUs across Meta’s production data center workloads, aiming to boost performance per watt as part of the company’s long-term infrastructure strategy.

This is the first large-scale Grace-only deployment, supported by co-design and software optimization to improve performance per watt each generation.

The companies are also working together on plans to deploy the NVIDIA Vera CPU, with a potential large-scale rollout targeted for 2027, a move that would expand Meta's energy-efficient AI compute capacity and strengthen the broader Arm software ecosystem.

Meta is set to deploy GB300-based systems and integrate its in-house data centers with NVIDIA Cloud Partner deployments, simplifying management while increasing performance and scale.

To keep pace with growing AI demands, Meta has integrated the Spectrum-X Ethernet platform throughout its infrastructure, delivering consistent low-latency networking while improving the efficiency and utilization of its systems.

As it expands AI features inside WhatsApp, Meta is using NVIDIA Confidential Computing for private processing, aiming to introduce smarter capabilities without compromising the security and integrity of user information.

NVIDIA and Meta engineering teams are collaborating through deep co-design to optimize and accelerate advanced AI models across Meta’s primary workloads. The effort brings together NVIDIA’s full-stack platform and Meta’s large-scale production systems to improve performance and efficiency for AI capabilities used globally.
