Nvidia CEO Jensen Huang Defends AI Computing Power Demand Amid DeepSeek Open-Source Breakthrough

In a pre-recorded interview aired Thursday, Nvidia CEO Jensen Huang addressed the recent market turmoil sparked by DeepSeek’s advances in AI. The Chinese AI firm, backed by hedge fund High-Flyer, released an open-source reasoning model, R1, in January, challenging the dominance of Western-made AI models. Huang argued that investors misread the implications of DeepSeek’s innovation, triggering a massive sell-off of Nvidia stock that temporarily wiped nearly $600 billion off the company’s market capitalization.

DeepSeek’s R1 Model: A Game-Changer or a Misunderstood Milestone?
DeepSeek’s R1 model made headlines for being developed with less powerful chips and at a fraction of the cost of its Western counterparts. The feat led some investors to question whether the trillions of dollars Big Tech companies are pouring into AI infrastructure are necessary. If training AI models requires less computing power, does the industry still need Nvidia’s high-performance chips?

Huang clarified that while pre-training AI models is crucial, the real demand for computing power lies in post-training methods. These methods enable AI models to draw conclusions, make predictions, and solve problems after the initial training phase. As post-training techniques grow and diversify, the need for Nvidia’s cutting-edge chips will only increase, Huang emphasized.
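To put rough numbers on Huang’s argument, consider a back-of-envelope sketch in Python. Every figure below (the model size, the pre-training corpus, the number of reasoning tokens per query) is an illustrative assumption, not a number from Nvidia or DeepSeek; the sketch relies only on the standard approximations of roughly 6N FLOPs per training token and 2N FLOPs per generated token for an N-parameter model.

```python
# Back-of-envelope sketch: why inference-time reasoning can rival
# one-time pre-training compute. All numbers are illustrative
# assumptions, not figures reported by Nvidia or DeepSeek.

PARAMS = 70e9             # assumed model size: 70B parameters
PRETRAIN_TOKENS = 10e12   # assumed pre-training corpus: 10T tokens

# Standard approximations: ~6*N FLOPs per training token,
# ~2*N FLOPs per generated token for an N-parameter model.
pretrain_flops = 6 * PARAMS * PRETRAIN_TOKENS   # one-time cost
flops_per_token = 2 * PARAMS                    # recurring cost

# A reasoning model may emit thousands of intermediate
# "thinking" tokens before answering a single query.
REASONING_TOKENS_PER_QUERY = 10_000             # assumed
flops_per_query = flops_per_token * REASONING_TOKENS_PER_QUERY

queries_to_match = pretrain_flops / flops_per_query
print(f"Pre-training:        {pretrain_flops:.1e} FLOPs (one-time)")
print(f"Per reasoning query: {flops_per_query:.1e} FLOPs (recurring)")
print(f"Queries to match pre-training: {queries_to_match:.1e}")
# ~3e9: a few billion reasoning-heavy queries consume as much
# compute as the entire pre-training run.
```

Under these assumptions, a few billion reasoning-heavy queries burn as much compute as the entire pre-training run, and unlike pre-training, that demand recurs every day the model is in service.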

Investors’ Misinterpretation and Market Fallout
The market’s dramatic response to DeepSeek’s announcement revealed a fundamental misunderstanding of AI development, according to Huang. “From an investor perspective, there was a mental model that the world was pre-training and then inference. And inference was: you ask an AI a question, and you instantly got an answer,” he said during the virtual event hosted by Nvidia partner DDN. “I don’t know whose fault it is, but obviously that paradigm is wrong.”

Huang stressed that post-training is the “most important part of intelligence,” the phase in which AI models learn to solve complex problems. He praised DeepSeek’s work, calling the open-sourcing of R1 “incredibly exciting” and saying it had energized the AI community worldwide.

Nvidia’s Defense of AI Scaling and Future Prospects
Huang’s comments come amid growing concerns that AI model scaling has hit a wall. Even before DeepSeek’s rise to prominence, reports of slowing improvements at OpenAI had raised doubts about the sustainability of the AI boom. Huang has consistently defended the industry’s trajectory, arguing that scaling has shifted from training to inference and that post-training methods are “really quite intense.”

Nvidia’s upcoming earnings call on February 26 is expected to address these topics in greater detail. DeepSeek has become a hot topic across the tech industry, with companies like Airbnb and Palantir discussing its impact on their earnings calls. Even Nvidia rival AMD acknowledged DeepSeek’s role in driving innovation, with CEO Lisa Su calling it “good for AI adoption.”

The Road Ahead for Nvidia and the AI Industry
As the AI landscape evolves, Huang’s insights highlight the importance of post-training methods and the continued demand for advanced computing power. While DeepSeek’s achievements have shaken up the industry, they also underscore the need for innovation and collaboration in AI development.

For now, Nvidia remains confident in its position as a leader in AI infrastructure, with Huang’s comments serving as a preview of the company’s strategy moving forward. As the debate over AI scaling and computing power continues, one thing is clear: the AI revolution is far from over, and Nvidia plans to stay at the forefront.
