According to recent reports from CNBC and NVIDIA, OpenAI, one of the largest AI companies of this decade, will collaborate with NVIDIA, currently the world's most valuable chipmaker.
This partnership promises to accelerate AI development, especially in real-world applications such as education and healthcare, which have long been priorities.
Building AI “Factories” with Millions of GPUs
To grasp the significance of this announcement, we need to appreciate the sheer magnitude of what these companies are building together. OpenAI will deploy at least 10 gigawatts of NVIDIA systems, creating enormous AI factories powered by millions of NVIDIA GPUs working in concert.
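A rough back-of-envelope calculation shows how a 10-gigawatt power budget translates into "millions of GPUs." The per-GPU wattage below is an illustrative assumption (covering cooling, networking, and host overhead), not a figure from the announcement:

```python
# Back-of-envelope: how many GPUs fit in a 10-gigawatt power budget?
# The per-GPU wattage is an illustrative assumption, not an official
# figure from the OpenAI/NVIDIA announcement.

total_power_watts = 10e9  # 10 gigawatts, per the announcement

# Assume ~1.5 kW per GPU once cooling, networking, and CPU overhead
# are included (hypothetical round number).
watts_per_gpu = 1500

gpu_count = total_power_watts / watts_per_gpu
print(f"Roughly {gpu_count / 1e6:.1f} million GPUs")
```

Even with generous overhead assumptions, the math lands comfortably in the millions, consistent with the scale described in the announcement.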
Consider that ChatGPT, launched in late 2022, became the fastest application in history to reach 100 million users. Today, OpenAI serves more than 700 million weekly active users with increasingly sophisticated capabilities like AI reasoning, multimodal data processing, and extended context windows. This explosive growth creates an enormous computational appetite that traditional infrastructure can’t satisfy.
As Sam Altman, OpenAI’s CEO, explained in a recent CNBC interview, there’s a fascinating paradox at play in AI development. On one hand, “the cost per unit of intelligence will keep falling and falling and falling.” This means AI is becoming more efficient and accessible over time.
But simultaneously, “the frontier of AI, maximum intellectual capability, is going up and up.” This creates what we might call the AI scaling dilemma – as models become more capable, they also require exponentially more computational resources for both training new models and running inference for users worldwide.
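The scaling dilemma can be sketched numerically. Suppose (with purely hypothetical growth rates) that the cost per unit of intelligence drops 30% per year while demand for those units quadruples per year; total compute spend still climbs steeply:

```python
# Hypothetical illustration of the AI scaling dilemma: unit cost falls,
# but demand grows faster, so total compute spend still rises.
# All growth rates are made up for illustration.

unit_cost = 1.0  # relative cost per "unit of intelligence"
demand = 1.0     # relative demand for those units

for year in range(1, 4):
    unit_cost *= 0.7  # cost per unit falls 30% per year
    demand *= 4.0     # demand quadruples per year
    total = unit_cost * demand
    print(f"Year {year}: unit cost {unit_cost:.2f}, total spend {total:.2f}")
```

After three years in this sketch, each unit of intelligence costs about a third of what it did, yet total spend is more than twenty times higher, which is why falling unit costs and exploding infrastructure demand coexist.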
Imagine you’re running a library that’s becoming incredibly popular. Not only are more people visiting every day, but they’re also asking for increasingly complex research assistance. You’d need more librarians, more books, and a bigger building, all at the same time. That’s what’s happening with AI infrastructure.
NVIDIA Is the Only Partner That Makes Sense
Altman’s statement that “there’s no partner but NVIDIA that can do this at this kind of scale, at this kind of speed” reveals crucial information about the current AI hardware landscape. NVIDIA has essentially become the infrastructure backbone of the AI revolution, much like Intel was for personal computing in previous decades.
The partnership will utilize NVIDIA’s Vera Rubin platform, representing the cutting edge of AI computing architecture. NVIDIA has committed to invest up to $100 billion progressively as each gigawatt is deployed.
Altman articulated this tradeoff when he discussed the importance of abundant computational resources: “Without enough computational resources, people would have to choose between impactful use cases, for example, either researching a cancer cure or offering free education.”
This touches on a question about AI’s development trajectory. Should we build AI systems that force us to choose between transformative applications, or should we create infrastructure abundant enough to pursue multiple breakthrough use cases simultaneously?
The partnership embraces the abundance approach. As Altman noted, “No one wants to make that choice,” referring to having to prioritize between life-changing applications like medical research and educational access.