When we wake up, will the sky have fallen on domestic AI yet again?

OpenAI's reasoning model o3-pro has arrived in force, and its performance upgrade has caused a stir in the industry.
Core content:
1. As an upgraded version of o3, o3-pro offers significantly improved reasoning capabilities across many fields
2. Its pricing strategy, performance evaluation results, and advantages over previous models
3. o3-pro's functional limitations and its outstanding performance on AI benchmarks
After days of delay, o3-pro has finally been released!
Released and live right away: no waitlist beats a waitlist.
o3-pro is an upgraded version of o3, the reasoning model OpenAI launched earlier this year. Unlike traditional AI models, reasoning models work through problems step by step, which makes them more reliable in fields such as physics, mathematics, and programming.
ChatGPT Pro and Team users can use o3-pro starting today, and it replaces the previous o1-pro model.
Enterprise and Education users will get access next week, and o3-pro was also made available in OpenAI’s developer API this afternoon.
In terms of API pricing, o3-pro costs $20 (about ¥140) per million input tokens and $80 (about ¥560) per million output tokens. One million input tokens is roughly 750,000 words, slightly more than the length of "War and Peace".
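To make the pricing concrete, here is a minimal sketch that estimates the cost of a single o3-pro request from its token counts, using only the per-million-token prices quoted above. The function name and structure are illustrative, not part of any official OpenAI SDK.

```python
# Prices quoted in the article: $20 per 1M input tokens, $80 per 1M output tokens.
INPUT_PRICE_PER_M = 20.0   # USD per 1,000,000 input tokens
OUTPUT_PRICE_PER_M = 80.0  # USD per 1,000,000 output tokens

def o3_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one o3-pro API request."""
    cost = (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M
    return round(cost, 4)

# Example: a 10,000-token prompt that produces a 2,000-token reply
print(o3_pro_cost(10_000, 2_000))  # 0.2 + 0.16 = 0.36 USD
```

At these rates, even a long prompt costs well under a dollar per request; costs only add up at scale.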
OpenAI wrote in its update log: "In expert reviews, reviewers consistently preferred o3-pro over o3 in every test category (I am not an expert, but even just from the name I would guess o3-pro is better than o3), especially in key areas such as science, education, programming, business, and writing assistance. Reviewers also gave o3-pro consistently higher scores for clarity, comprehensiveness, instruction following, and accuracy."
According to OpenAI, o3-pro can call a variety of tools: it can search the web, analyze files, interpret visual input, run Python code, and use memory to personalize its replies. However, OpenAI also noted one drawback of the model: it usually takes longer to generate replies than o1-pro.
o3-pro has some other limitations as well. Due to an unresolved "technical issue", temporary chats with the model in ChatGPT are currently disabled. In addition, o3-pro cannot generate images and does not support Canvas, OpenAI's AI workspace feature.
However, on the positive side, according to OpenAI's internal testing, o3-pro has achieved impressive results in a number of popular AI benchmarks. In the AIME 2024 test, which evaluates the mathematical ability of the model, o3-pro's score exceeded that of Google's top-performing AI model, Gemini 2.5 Pro. In the GPQA Diamond benchmark, which tests doctoral-level scientific knowledge, o3-pro also outperformed Anthropic's recently released Claude 4 Opus.