
Nvidia CEO Jensen Huang said on Monday that the company’s next generation of chips is in “full production,” adding that they can deliver five times the artificial intelligence computing performance of the company’s previous chips when serving up chatbots and other AI apps.
In a speech at the Consumer Electronics Show in Las Vegas, the leader of the world’s most valuable company revealed new details about the chips, which will arrive later this year. Nvidia executives told Reuters the chips are already in the company’s labs being tested by AI firms, as Nvidia faces increasing competition from rivals as well as its own customers.
The Vera Rubin platform, made up of six separate Nvidia chips, is expected to debut later this year, with the flagship server containing 72 of the company’s graphics units and 36 of its new central processors. Huang showed how they can be strung together into “pods” with more than a thousand Rubin chips and said they could improve the efficiency of generating what are known as “tokens” – the fundamental unit of AI systems – by 10 times.
To achieve the new performance results, however, Huang said the Rubin chips use a proprietary data format that the company hopes the wider industry will adopt.
“This is how we were able to deliver such a gigantic step up in performance, even though we only have 1.6 times the number of transistors,” Huang said.
While Nvidia still dominates the market for training AI models, it faces far more competition – from traditional rivals such as Advanced Micro Devices as well as customers like Alphabet’s Google – in delivering the fruits of those models to hundreds of millions of users of chatbots and other technologies.
New chips
Much of Huang’s speech focused on how well the new chips would work for that task, including adding a new layer of storage technology called “context memory storage”, aimed at helping chatbots provide snappier responses to long questions and conversations.
Nvidia also touted a new generation of networking switches with a new kind of connection called co-packaged optics. The technology, which is key to linking together thousands of machines into one, competes with offerings from Broadcom and Cisco Systems.
Nvidia said that CoreWeave will be among the first to have the new Vera Rubin systems and that it expects Microsoft, Oracle, Amazon and Alphabet to adopt them as well.
In other announcements, Huang highlighted new software that can help self-driving cars make decisions about which path to take – and leave a paper trail for engineers to use afterward. Nvidia showed research on the software, called Alpamayo, late last year; Huang said on Monday it would be released more widely, along with the data used to train it, so that automakers can make their own evaluations.
“Not only do we open source the models; we also open source the data that we use to train those models, because only in that way can you truly trust how the models came to be,” Huang said from a stage in Las Vegas.
Last month, Nvidia scooped up talent and chip technology from start-up Groq, including executives who were instrumental in helping Alphabet’s Google design its own AI chips. While Google is a major Nvidia customer, its own chips have emerged as one of Nvidia’s biggest threats as Google works closely with Meta Platforms and others to chip away at Nvidia’s AI stronghold.
During a question-and-answer session with financial analysts after his speech, Huang said the Groq deal “won’t affect our core business” but could result in new products that expand its line-up.
At the same time, Nvidia is eager to show that its latest products can outperform older chips like the H200, which US President Donald Trump has allowed to flow to China. The chip, which was the predecessor to Nvidia’s current Blackwell chip, is in high demand in China, which has alarmed China hawks across the US political spectrum.
Huang told financial analysts after his address that demand for the H200 chips in China is strong, and chief financial officer Colette Kress said Nvidia has applied for licences to ship the chips to China but was awaiting approvals from the US and other governments.
Meanwhile, AMD CEO Lisa Su showed off a number of the company’s AI chips on Monday at the trade show, including its advanced MI455 AI processors, which are components in the data centre server racks that the company is selling to firms like ChatGPT maker OpenAI.
Nvidia dominance
Su also unveiled the MI440X, a version of the MI400 series chip designed for on-premise use at businesses. The so-called enterprise version is designed to fit into infrastructure that is not specifically designed for AI clusters. The MI440X is a version of an earlier chip that the US plans to use in a supercomputer.
AMD is one of Nvidia’s strongest rivals but has struggled to have as much success. In October, AMD signed a deal with OpenAI that, in addition to the financial upside, was a major vote of confidence in AMD’s AI chips and software. But it is unlikely to dent Nvidia’s dominance, as the market leader continues to sell every AI chip it can make, analysts said.
At the Monday event, OpenAI president Greg Brockman joined Su on stage and said chip advancements were critical to OpenAI’s vast computing needs.
Looking to the future needs of companies like OpenAI, Su previewed the MI500 and said it offered a thousand times the performance of an older version of the processor. The company said the chips would launch in 2027.
At the event, Su hosted Daniele Pucci, CEO of Generative Bionics, an Italian AI developer, who unveiled GENE.01, a humanoid robot.
“Our first commercial humanoid robot will be manufactured in the second half of 2026,” Pucci said.
AMD’s deal with ChatGPT maker OpenAI will add billions of dollars to the company’s annual revenue. The first deployment of AI chips incorporating AMD’s MI400 series will roll out this year. Nvidia has generated tens of billions of dollars in quarterly revenue from its AI chip sales, a feat that AMD has struggled to achieve thus far.
OpenAI is a key customer of AMD, and executives at the Santa Clara, California-based company expect the deal to lead to significant additional sales.
Also on Monday, AMD launched its Ryzen AI 400 Series processors for AI PCs, alongside Ryzen AI Max+ chips for advanced local inference and gaming. Intel held a launch event earlier for its Panther Lake chips that it said would be available for order on Tuesday. — Stephen Nellis, Max Cherney and Gnaneshwar Rajan, (c) 2026 Reuters
