Just in: DeepSeek has quietly updated again. The version number is DeepSeek-V3-0324.

The DeepSeek-V3-0324 update brings significant improvements to mathematical capabilities and front-end task handling; China's AI shows restraint and ambition in equal measure.
Core points:
1. DeepSeek's new version, DeepSeek-V3-0324, launched quietly, with major gains in math and front-end task handling.
2. DeepSeek's open-source spirit: no publicity, silent iteration, and sharing progress with developers worldwide.
3. China's AI is advancing steadily with an engineer's temperament; DeepSeek's small, quick steps hint at bigger breakthroughs ahead.
DeepSeek, updated.
The version number is DeepSeek-V3-0324.
The name is unremarkable, as low-key as a daily check-in posted to WeChat Moments.
But don't let those four digits fool you: this is a "not-so-small update."
Judging from early feedback from friends who jumped on it right away, the strongest gains this time are in mathematical capabilities and front-end task handling.
I can't speak for other areas, but in these two, many people have reached for the word "incredible" to describe its performance.
What surprised me even more was that DeepSeek did not do any publicity and just quietly posted the model on Hugging Face.
There was no warm-up, no rhetoric, and no fanfare to tell everyone “we are awesome.”
It's like quietly laying one more floor tile in the noisy AI world, then standing firmly on it.
I saw a comment on X that said:
“If this is the baby step for DeepSeek v3, then r2 might really take off.”
It sounds like a joke, but there is a truth behind it:
China's AI is no longer just catching up; it is starting to overtake.
Moreover, it is advancing in a very restrained, engineering-driven way:
no launch events, no hype, no promotional videos; just updating, fixing, iterating, and quietly getting stronger.
It's like someone finishing their homework late at night without signing their name, then handing in the cleanest paper in the exam.
More importantly, it is open source .
What does open source mean?
It means you are not working alone or developing behind closed doors. Instead, you release your models and methods so the whole world can take them apart, improve them, reuse them, and even give back to you.
This kind of candor is rare in today's tech circles, where "moat culture" prevails.
I'd even read this as a kind of gentle ambition:
not a question of who wins or loses, but "can we also take part in shaping the future together?"
I'm starting to look forward to r2.
Not because "it will be stronger," but because of the quiet, steady atmosphere behind it.
There is a kind of change that makes no noise: gradual improvement in subtle things like data accuracy, task capability, and response speed, edging ever closer to the frontier.
Like spring: quiet and unhurried, until one day you step outside and suddenly notice new buds on the branches.
And you?
When using DeepSeek, have you ever noticed a subtle improvement and thought, "Hey, this seems to run more smoothly"?
Or had one of those "maybe we can keep up after all" moments?
Leave me a message, and we can sit down after dinner and chat about this quiet breeze of change.
If you've read this far and think DeepSeek's small step deserves to be seen by more people, please tap "Reading" or forward this to friends who are interested in AI.
We are all on the same road, neither rushing ahead nor falling behind.
See you next time!