Welcome to our live coverage of Nvidia GTC 2025!
Today sees the opening keynote from Nvidia CEO Jensen Huang, who is set to unveil a host of new hardware and AI tools - along with a few surprises, no doubt.
The keynote just wrapped up - you can read everything we saw below...
Good morning and welcome to our live coverage of the Nvidia GTC 2025 keynote!
We're super excited to see what Nvidia has in store for us today, with company CEO and founder Jensen Huang set to take to the stage in a few hours' time.
We're not far off the opening keynote at Nvidia GTC 2025 now, so what can we expect?
Last year's keynote saw the reveal of Blackwell, the company's new generation of GPUs, and we're expecting another major hardware update today.
The company also unveiled a host of new data center hardware, and we're expecting more data center, server and workstation news today for sure.
But there was also a big focus on robotics, particularly in factories, and the role AI can play there, so it may well be we see more of the same today.
If you want to watch along with the keynote, you'll need to head to the Nvidia GTC 2025 website, where you can sign up.
You've not got long though - Jensen Huang will be on stage in just a few hours' time!
Less than half an hour to go! Get some snacks and energy drinks ready, this could be a long one...
Also, make sure to keep an eye out for Jensen Huang's leather jacket - the Nvidia CEO is always snappily dressed, and jacket-watch has become a popular trend for us media types - it's important to look good when you're presenting the future of AI, you know...
Here we go! The lights go down and it's time for the keynote to begin...
"This is how intelligence is made - this is a whole new factory," an intro video outlining "endless possibilities" notes.
We're shown a number of possible use cases for the future of AI, from weather forecasting to space exploration to curing disease - all powered by tokens.
"Together, we take the next great leap," the video ends, showing a view of Nvidia's futuristic San Jose HQ.
The video ends, and we welcome Jensen Huang, CEO and co-founder of Nvidia, to the stage.
"What an amazing year...we have a lot of amazing things to talk about" he declares, ushering us in to the virtual Nvidia HQ via virtual reality.
Huang admits he's doing this keynote without a script - brave!
Huang starts by commemorating 25 years of GeForce - a huge lifespan for any technology - holding up one of the newest Blackwell GPUs.
"AI has now come back to revolutionize computer graPhics," he declares, showing us a stunning real-time generated AI video backdrop.
"Generative AI has changed how computing is done," Huang notes, describing a move from retrieval-based working to a generation-based one.
"Every single layer of computing has been transformed."
And agentic AI is what we're living with right now, with improved reasoning, problem-solving and planning.
The next step is "physical AI" he notes - robotics!
But there's plenty of time for that later, Huang teases....
GTC is "the superbowl of AI", Huang claims - laughing that the only way he could fit more people in to the event is to grow the host city of San Jose...
The next step of AI will be enabled by improved training and knowledge work, Huang notes, teaching models and AIs to become smarter and more efficient.
Last year, we all got it wrong, he says, as the scaling potential of AI is greater than (almost) anyone could have predicted.
"It's easily a hundred times more than we thought we needed, this time last year," he notes.
The ability of AI to reason can be the next big breakthrough, Huang notes, with the increased availability of tokens a major part of this.
Teaching the AI how to reason is a challenge though - where does the data come from? There's only so much we can find... reinforcement learning could be the key, helping an AI work through a problem step by step when the answer is already known.
For those of you on leather jacket watch, it's a smooth, stylish number this year - anyone want to take a guess at the price?
Huang notes AI is going through an inflection point, as it becomes smarter, more widely used, and gains more resources to power it.
He expects data center build-out to reach over a trillion dollars before long, as demand for AI and a new computing approach keeps growing.
The world is going through a platform shift, he notes, shifting from human-made software to AI-made.
"It's a transition in the way we do computing," he notes.
"The computer has become the generator of tokens, not a retriever of files," he declares - as data centers turn into what Huang calls AI factories, with one job only - generating tokens which are then turned into music, words, research and more.
Seeing it up close, the fonts and images used in the keynote look a little...off? Are they AI-generated?
If so, they look pretty polished, without the usual mistakes you'd see in AI-generated content...
Huang runs us through the entire CUDA library in some detail (and some enthusiasm too) - it's clear that Nvidia has an AI tool for everything from quantum chemistry to gene sequencing.
"We've now reached the tipping point of computing - CUDA made it possible," Huang notes, introducing a video thanking everyone who made this progress possible - around 6 million developers across 200 countries.
"I love what we do - and I love even more what you do with it!" Huang declares.
Right - time for some AI (if we hadn't talked about it enough already).
"AI will go everywhere," Huang notes, showing off a look at the various areas we'll be focusing on today.
But how do you take AI global when there are so many differences in platforms, demands and other aspects across industries, across the world?
Huang notes that context and prior knowledge can be the key to the next step forward, especially in edge computing.
Huang turns to autonomous vehicles (AVs) - often one of the biggest areas when it comes to AI.
He notes almost every self-driving car company uses Nvidia tech, from Tesla to Waymo, from software to hardware, to try and keep pushing the industry forward.
There's another new partner today though - Huang announces Nvidia will be partnering with GM going forward on all things AI.
"The time for autonomous vehicles has arrived," Huang declares.
Now, Huang turns to the problems behind training data for AI - particularly when it comes to AVs, which understandably require a huge amount of data to make sure they drive safely.
Now on to data centers - the brains behind AI.
Huang notes Grace Blackwell is now in full production, with OEMs across the board building and selling products with the company's hardware.
Scaling up will be a challenge, Huang notes - but one Nvidia is definitely up for, highlighting some of the NVLink hardware utilizing Blackwell right now - including a one-exaflop module in a single rack - some seriously impressive numbers.
"Our goal is to scale up," he notes - highlighting Nvidia's goal of building a frankly super-powered system.
"This is the most extreme scale-up the world has ever seen - the amount of compute seen here," he notes.
This is all to solve the problem of inference, Huang notes.
Inference at scale is extreme computing, he explains - so making sure your AI is smarter and quick to respond is vital.
Huang shows us a demo of how a prompt asking about a wedding seating plan sees a huge difference in tokens used, and therefore compute needed, in order to get the right answer.
A reasoning model needs 20x more tokens and 150x more compute than a traditional LLM - but it does get it right, saving those event headaches and a load of wasted tokens.
With next-gen models possibly set to feature trillions of parameters, the need for powerful systems such as Nvidia's Blackwell NVL72 could make all the difference, Huang notes.
To help solve all of this, there's Nvidia Dynamo.
"This is essentially the operating system of an AI factory," Huang notes.
The software of the future will be agent-based, he says, highlighting how "insanely complicated" Dynamo is.
Blackwell understandably offers a "giant leap in inference performance" over Hopper, Huang outlines.
"We are a power-limited industry," he notes, so Dynamo can push this even further by offering better token-per-second for 1 user, meaning faster responses all round.
Despite the big leap forward, Huang is keen to still push the benefits of Hopper - noting, "there are circumstances where Hopper is fine," to big laughs from the crowd.
"When the technology moves this fast...we really want you to invest in the right versions," he says.
Blackwell offers 40x the inference performance of Hopper, he declares - "It's not less - it's better - the more you buy, the more you save!"
Huang moves on to looking at an actual AI factory - or rather, a digital twin of one.
AI factories can be designed and anticipated using Nvidia's Omniverse technology, allowing for optimized layout and efficiency in the quickest (and most collaborative) way.
Huang is back, and says he has a lot to get through - so get ready!
First up, roadmap information - the Blackwell Ultra NVL72 is coming in the second half of 2025, offering some huge leaps forward.
Next up, the Vera Rubin NVL144 - named after the astronomer whose work provided key evidence for dark matter - which will be coming in the second half of 2026.
And after that - Rubin Ultra NVL576, coming in the second half of 2027, with some frankly ridiculous hardware - what Huang calls "an extreme scale-up", featuring 2.5 million parts and connecting 576 GPUs.
Talk about planning for the future - even Huang admits some of this is slightly ridiculous - "but it gives you an idea of the pace at which we are moving!"
Where do you go after that?
Huang moves on to other parts of the Nvidia product line-up - first up, Ethernet.
Improving the network itself will help make the whole process smoother, he notes, announcing the new Spectrum-X Silicon Photonics Ethernet switch, offering 1.6 terabits per second per port to deliver 3.5x energy savings and 10x resilience in AI factories.
We're then treated to the amazing sight of one of the world's richest men struggling to untangle a cable as he looks to show us exactly how the new systems will work.
Say what you want about Jensen, he knows how to stay human!
Here's a look at the entire Nvidia roadmap for the next few years - after Rubin, we're getting a new generation named after Richard Feynman!
Now, we're moving on to enterprise AI - which is set for a huge shake-up as workflows get smarter and more demanding.
AI agents will be everywhere, but how enterprises run them will be very different, he notes.
In order to cope with this, we're going to need a whole host of new computing devices...
Well...we'd like to bring you the big finish here, but unfortunately the stream of the keynote has gone offline abruptly!
Not sure what happened there - but we're back! Phew...
Jensen is talking about the new DGX Station computing offering, describing it as "the computer of the age of AI" - this is what AI will run on.
Nvidia is also working on boosting storage for the AI industry, working with a host of partners on a number of new releases, including a whole product line from Dell.
"For the very first time, your storage stack will be GPU-accelerated," he notes.
We're really rattling along now - there's also Nvidia Llama Nemotron Reasoning, an AI model "that anyone can run", Huang notes.
Part of NIMs, it can run on any platform, Huang claims, naming a host of partners building their AI offerings using Nvidia's services.
Now we're on to robotics - the big finish!
"The time has come for robots," Huang declares. "We know very clearly that the world has a severe shortage of human laborers - 50 million short."
To help combat this, Huang unveils Nvidia Isaac GR00T N1, "the world's first open humanoid robot foundation model", alongside several other important development tools.
There are also blueprints, including the NVIDIA Isaac GR00T Blueprint for generating synthetic data, which helps produce the large, detailed synthetic datasets needed for robot development that would be prohibitively expensive to gather in real life.
Not terrifying at all, right?
"Physical AI and robotics are moving at such a high pace - everyone should pay attention," Huang notes.
Huang moves on to some Omniverse demos - first off, with Cosmos, the operating system for physical AI digital twins.
Nvidia is teaming up with DeepMind and Disney Research on a new platform.
We're shown an incredibly detailed (and cute) demo video of a Star Wars-esque robot exploring a desert environment, before appearing onstage!
Jensen and his new robot offspring look very happy...
One final news announcement - GR00T N1 is open source!
It's wrap-up time, and Huang recaps everything we've heard about over the last few hours...it's been a journey, hey?
And that's a wrap! It was an epic start to Nvidia GTC 2025, so we're off to digest all the news and announcements - stay tuned to TechRadar Pro for all the updates...