After watching Jensen Huang's 2025 GTC keynote, here are four key points worth noting:

Jensen Huang's GTC keynote laid out NVIDIA's grand vision for the future of technology.
Core takeaways:
1. AI has entered the agentic AI era, in which AI gains the ability to think and make decisions independently
2. NVIDIA showed its vision for the future of data center AI chips and robotics
3. The robot Blue stole the show, proving the importance of emotional value
4. How NVIDIA is answering market doubts and challenges at this year's GTC
In the early hours of March 19, NVIDIA CEO Jensen Huang delivered a two-and-a-half-hour opening keynote at GTC. GTC has long been called the "Spring Festival Gala of the AI industry": at this time every year, technology companies, experts, and developers from around the world gather to discuss future trends.
Of course, judging by the posts from friends who attended GTC in person, it felt less like they were going to a technology summit and more like a football game.
Although the atmosphere was relaxed, this year has been particularly difficult for NVIDIA. Over the past few years of AI's rise, NVIDIA has undoubtedly been the biggest winner. But when a catfish like DeepSeek stirred up the market, many people began to ask: "Do we really need this many GPUs?" "Do we really need GPUs this advanced?"
I suspect Jensen Huang is facing the same serious problem Apple CEO Tim Cook has faced in recent years: "Is the new phone really that good? Do I have to upgrade?"
Faced with these doubts, this conference was a crucial "battle for confidence" for NVIDIA and Jensen Huang. They had to produce enough hard evidence to steady market sentiment.
To be honest, perhaps under pressure from the stock price, Jensen Huang's pacing was noticeably more tense than before, and he even stumbled occasionally. But that did not stop him from laying out his ambitions. From AI agents to data center AI chips to robotics, he once again presented NVIDIA's grand vision for the future.
After listening to his speech, I had two obvious feelings.
First, although Jensen Huang again sketched many grand blueprints that sound exciting, in the short term these opportunities may not convert quickly into actual revenue. For the next year at least, NVIDIA will not see immediate returns from them.
Second, the content of the speech was largely within market expectations and offered few surprises. The whole event felt more like NVIDIA "handing in homework" to the market, held because the calendar said it was time.
If I had to name what impressed me most, it was the robot Blue that trotted onto the stage. Jointly developed by Disney and NVIDIA, it looks a lot like WALL-E from the movie.
At first glance, this little guy can't do much beyond being cute and handling small chores around the house: it can't move cargo, run a marathon, or wave a handkerchief on stage. But in this era, being cute and providing emotional value is arguably a level above humanoid robots that can only do work.
Objectively speaking, though, this GTC still had many highlights worth paying attention to. After all, the trends seen through the eyes of a world-class expert like Jensen Huang deserve our attention.
I have sorted out four core points and will share them with you below.
01
The Age of Agentic AI
The first highlight is that AI has officially entered the Agentic AI era.
The term sounds academic, but it simply means AI is no longer a dumb tool: it has begun to grow a "brain" of its own and can think, reason, and make decisions independently.
In the past, AI was like an intern who only follows the script. Give it a task and it executes precisely, but anything slightly more complicated leaves it confused.
Agentic AI is different. It can not only understand your needs but also analyze them on its own and come up with better solutions.
In his speech, Jensen Huang cited two models as examples: Llama 3.3 and DeepSeek R1.
What is the difference between these two models?
You can think of Llama 3.3 as an assistant that only follows procedure. Ask it to arrange the seating at a wedding and it may simply split guests evenly across tables, without considering whether the bride's best friend and the bride's mother should sit together, and the result is a mess.
DeepSeek R1 is different. It is like a seasoned hand who understands how people work. It first analyzes the relationships among the guests: who is close friends with whom, who is feuding with whom, even who got drunk and caused a scene at the last wedding. It keeps track of all of this and then produces a seating chart that makes everyone comfortable.
That is the essential difference with agentic AI: it is not just an executor but a helper with a brain of its own, able to think for itself in complex environments and even correct its own mistakes.
But there is a catch. For AI to reason autonomously like this, the amount of compute required behind the scenes is astronomical.
Jensen Huang emphasized in his speech that agentic AI requires at least 100, and perhaps 1,000, times more computing power than traditional AI.
Why?
Because it doesn't just hand you an answer: it works through a chain of logical reasoning, runs through the possible solutions, and finally selects the most appropriate result.
Traditional AI is like filling in the blanks: you ask a question and it fills in the closest answer from its training data. Agentic AI is more like solving a word problem: it must understand the question, analyze the given conditions, try different approaches, and even check its own work to see whether the result holds up.
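The contrast between "filling in the blank" and "solving a word problem" can be sketched as a loop: an agentic system proposes an answer, checks it, and revises until the check passes. The sketch below is a toy illustration only; `propose` and `verify` are hypothetical stand-ins for model calls, not any real NVIDIA or DeepSeek API.

```python
# Toy sketch of an agentic loop: propose -> verify -> revise.
# A one-shot model calls `propose` once; an agentic system keeps
# iterating until its own check passes, which is part of why it
# burns far more compute per question.

def solve_one_shot(question, propose):
    """Traditional AI: one pass, take the first answer."""
    return propose(question, feedback=None)

def solve_agentic(question, propose, verify, max_steps=10):
    """Agentic AI: reason, self-check, and retry until satisfied."""
    feedback = None
    for _ in range(max_steps):
        answer = propose(question, feedback)     # one "reasoning" pass
        ok, feedback = verify(question, answer)  # self-check the result
        if ok:
            return answer
    return answer  # best effort after max_steps

# Demo with stand-in functions: find a divisor of 91 greater than 1.
def propose(question, feedback):
    # Pretend model: guess 2 first, then move past each failed guess.
    return 2 if feedback is None else feedback + 1

def verify(question, answer):
    ok = 91 % answer == 0
    return ok, answer  # feedback = the last failed guess

print(solve_one_shot("divisor of 91?", propose))           # -> 2 (wrong)
print(solve_agentic("divisor of 91?", propose, verify))    # -> 7 (checked)
```

Note that the agentic version calls the "model" six times to answer this one question; scale the same pattern up to language models and the 100x compute claim becomes easier to believe.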
Jensen Huang also gave an example from finance.
Suppose you ask an AI to help you pick a stock. In the past, the AI might just look at historical data, make a simple forecast of the trend, and hand you a "buy" or "don't buy" recommendation. Agentic AI is different: it weighs a range of economic factors, such as market trends and company fundamentals, and even analyzes investor sentiment from trending news, before arriving at a decision with real reasoning behind it.
This means the demand for GPUs will not shrink in the future; it will grow exponentially. Those worried that we already have enough GPUs are underestimating how fast AI is developing.
Jensen Huang responded to these doubts directly on stage: do you think we have enough GPUs? No, AI has only just begun.
Of course, as a GPU supplier, NVIDIA has an interest in saying so, but it is undeniable that AI is becoming more and more complex, and the demand for computing power keeps growing.
The AI of the future will not be a simple tool but something closer to an "intelligent partner": it can help us make decisions and, on some matters, be calmer and more rational than humans. The era of agentic AI has only just begun.
02
Dynamo
The second highlight is that NVIDIA unveiled a new system called Dynamo, built specifically to optimize AI computing. You can think of it as a "super dispatcher" for the AI era, dedicated to improving computing efficiency.
Put simply, a data center used to be a bit like a warehouse stuffed with GPUs: plenty of machines, but with chaotic management a great deal of computing power went to waste and utilization stayed low. Dynamo's job is to give that warehouse a smart housekeeper, so every GPU works efficiently and none of them "slacks off".
Jensen Huang gave an example: if you have 1,000 GPUs running AI inference, in the past only about half of the computing power was actually used; the rest sat waiting, or efficiency collapsed because tasks were distributed unevenly. With Dynamo, every GPU is fully utilized, and overall inference efficiency jumps by dozens of times.
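The utilization gap described above is, at heart, a load-balancing problem. The toy simulation below (all task durations and counts are invented for illustration; real schedulers like Dynamo are far more sophisticated) shows how naive round-robin assignment of uneven tasks can leave most of a fleet idle, while always feeding the least-loaded GPU halves the finish time.

```python
# Toy illustration of the scheduling problem a dispatcher like
# Dynamo targets: uneven task assignment leaves some GPUs idle
# while others are overloaded. All numbers are made up.
import heapq

tasks = [8, 1, 1, 1, 8, 1, 1, 1]  # task durations (arbitrary units)
n_gpus = 4

def makespan_static(tasks, n_gpus):
    """Naive: pre-assign tasks round-robin, ignoring load."""
    loads = [0] * n_gpus
    for i, t in enumerate(tasks):
        loads[i % n_gpus] += t
    return max(loads)  # finish time = busiest GPU's total load

def makespan_balanced(tasks, n_gpus):
    """Smarter: always hand the next task to the least-loaded GPU."""
    heap = [0] * n_gpus
    heapq.heapify(heap)
    for t in sorted(tasks, reverse=True):  # schedule longest tasks first
        heapq.heappush(heap, heapq.heappop(heap) + t)
    return max(heap)

print(makespan_static(tasks, n_gpus))    # -> 16: two GPUs do all the work
print(makespan_balanced(tasks, n_gpus))  # -> 8: twice as fast, same GPUs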
To make it more concrete, Jensen Huang ran the numbers: pairing Dynamo with the latest Blackwell chip, AI inference speed can increase 40-fold at the same power consumption.
For enterprises, this means the same money gets more work done, unit costs fall, and profit margins naturally rise. That is good news for cloud vendors, AI companies, and even some traditional industries, because AI inference is still expensive and many companies worry that "running AI costs too much". Dynamo puts large-scale AI applications within reach of more of them.
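A quick back-of-envelope shows why a 40x speedup at constant power translates directly into lower unit cost. All the numbers below (power draw, electricity price, baseline throughput) are assumptions chosen for illustration, not NVIDIA figures.

```python
# Back-of-envelope: what "40x inference speed at the same power"
# means for cost per query. All inputs are illustrative assumptions.

power_kw = 100.0                   # assumed rack power draw, constant
price_per_kwh = 0.10               # assumed electricity price, dollars
queries_per_hour_old = 1_000_000   # assumed baseline throughput
speedup = 40                       # the claimed Dynamo + Blackwell gain

cost_per_hour = power_kw * price_per_kwh  # same bill before and after

cost_per_m_old = cost_per_hour / (queries_per_hour_old / 1e6)
cost_per_m_new = cost_per_hour / (queries_per_hour_old * speedup / 1e6)

print(f"old: ${cost_per_m_old:.2f} per million queries")   # $10.00
print(f"new: ${cost_per_m_new:.3f} per million queries")   # $0.250
```

Because power is held constant, the electricity bill per hour does not change; only the number of queries it buys does, so the cost per query falls by exactly the speedup factor.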
Jensen Huang also said in his speech that the combination of Dynamo and Blackwell chips will become the "standard configuration" of future AI data centers. In other words, whoever wants to run AI more efficiently will have to use this combination.
03
Chip Technology
The third highlight is NVIDIA's latest chip technology. The newly released Blackwell Ultra delivers 1.5 times the performance of the previous generation, which means AI workloads run faster at the same power and inference efficiency is higher. For enterprises, it means more tasks completed in the same amount of time.
But this time, NVIDIA's chip upgrade is not just about raw performance. The bigger highlight is the official arrival of silicon photonics.
Silicon photonics may sound novel, but the principle is simple. Traditional chips transmit data with electrical signals, and as current runs through the circuits there is inevitably signal loss, heat, and high power consumption.
Silicon photonic chips transmit data with optical signals instead. Light is faster, loses almost nothing in transit, and consumes less energy. It is as if a highway's capacity multiplied several times over, with no traffic jams to drag efficiency down.
For data centers, this technology speeds up data exchange between GPUs, cuts computing latency, and lowers power consumption, saving a great deal on electricity. Jensen Huang's point is clear: as AI computing develops, optical transmission will become mainstream, and silicon photonic chips are a key step.
To put the technology into practice, NVIDIA also introduced two new switches, Spectrum-X and Quantum-X. Both use silicon photonics to optimize the data center network, giving servers faster data transmission, greater bandwidth, and lower energy consumption.
Jensen Huang said that switching a data center to these silicon photonic switches can save around 60 megawatts of power. If that number is hard to picture, put it another way: 60 megawatts is roughly the draw of dozens of server rooms, and for a facility running around the clock, the electricity bill saved over a year is substantial.
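To put 60 megawatts in perspective: a megawatt measures power, so the yearly saving depends on hours of operation and the price of electricity. A rough calculation follows, where the electricity price is a purely assumed figure for illustration.

```python
# Rough annualization of a 60 MW power saving. The price per MWh
# below is an assumption, not a figure from the keynote.

saved_mw = 60
hours_per_year = 24 * 365   # data centers run around the clock
price_per_mwh = 80          # assumed wholesale price, dollars/MWh

energy_mwh = saved_mw * hours_per_year     # 525,600 MWh per year
annual_cost = energy_mwh * price_per_mwh   # $42,048,000 per year

print(f"{energy_mwh:,} MWh saved per year")
print(f"${annual_cost:,} saved per year at ${price_per_mwh}/MWh")
```

Even with conservative pricing assumptions, the saving lands in the tens of millions of dollars per year, which is why the efficiency pitch matters to operators.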
Moreover, silicon photonics does more than save electricity; it also attacks the data center's computing bottleneck. Many data centers are limited not by their GPUs but by network bandwidth: no matter how fast a GPU computes, it is useless if the data cannot get through. With silicon photonic chips in place, a data center's network bandwidth can multiply several times over, greatly improving the efficiency of the whole computing architecture.
NVIDIA's move is really about building a complete "AI factory" solution. As described above, Dynamo is the dispatcher of AI computing, while Blackwell Ultra and silicon photonics are the hardware foundation for raising productivity. Together they promise a qualitative leap in data center computing power.
So at this GTC, NVIDIA is not merely upgrading GPUs; it is reshaping the entire AI computing architecture. Over the next few years we are likely to see more and more large data centers switch to this combination of silicon photonics and new GPUs, making AI computing faster, less power-hungry, and more cost-effective overall.
04
Robotics
The fourth highlight is that NVIDIA's ambitions in robotics are becoming increasingly clear.
This time, Jensen Huang introduced GR00T N1, an AI model built specifically to train a robot's brain.
In the past, teaching a robot a new action, such as pouring water or folding clothes, meant engineers writing code for it and debugging countless times over several months. With GR00T N1, robots can train themselves on data the way large AI models do, learning complex actions and tasks automatically without humans teaching them step by step.
The little robot Blue mentioned at the start of this article was trained entirely on GR00T N1. It can not only tidy a room, sweep the floor, and pick up clutter on its own, but also accurately understand human commands, moving naturally and smoothly, like a well-trained household assistant.
Bear in mind that robots used to execute tasks in a fixed way; once the environment changed, say a chair was moved, they might freeze up. A robot built on GR00T N1 can adapt to changes in real time and adjust its own motion paths, making task execution far more intelligent.
Of course, NVIDIA's goal goes far beyond household robots. Another major application of GR00T N1 is helping companies train industrial robots, so they can adapt more quickly to different tasks on the assembly line.
Imagine a robot in a car factory that no longer needs to be hand-programmed by engineers. Instead, it masters assembly, inspection, handling, and other skills from training data, and can even work out new tasks on its own. That would greatly reduce the cost of deploying robots.
More importantly, NVIDIA's robotics ambitions extend to autonomous driving. At GTC, General Motors announced a partnership with NVIDIA, planning to use NVIDIA's Omniverse and GR00T N1 technologies to train autonomous driving systems.
Put simply, future self-driving cars will be like robots, using vast amounts of data to teach themselves how to better understand roads, predict pedestrian behavior, and optimize driving decisions.
This is part of a larger trend. Autonomous driving used to rely on human engineers writing algorithms and tuning rules; NVIDIA's thinking now is that if AI can already teach itself languages, coding, and games, why can't it teach itself to drive?
With GR00T N1, future self-driving cars may teach themselves a range of driving styles and even adapt more quickly to traffic rules in different countries.
05
Summary
Overall, this GTC clearly sketched several important trends for the future of AI: AI will keep getting smarter, demand for computing power will keep rising, AI computing will become more efficient, and technologies with real application scenarios, such as robots and autonomous driving, will gradually enter everyday life.
Jensen Huang's goal is plain: not to keep AI in the data center writing code and drawing pictures for people, but to bring AI into the physical world to help humans with more practical tasks.
Before long, we may see robots enter our lives the way they do in science fiction movies, understanding human language and handling all kinds of tasks.
As for me, I'll buy a robot to supervise my writing, one that won't let me out of the house until the manuscript is done. Once it's well trained, I'll repurpose it as a homework-supervision robot and let it help the kids with their homework, so parents no longer have to worry about high blood pressure and heart attacks.