OpenAI: GPT-5 Goes All-in-One, Integrating Multiple Products

How will OpenAI's GPT-5 integrate multiple products to become an all-around AI assistant?
Core content:
1. The latest developments on GPT-5: integrating Codex, Operator, and more
2. The story behind Codex's development and the user experience
3. The vision for GPT-5: to become a truly all-around assistant
GPT-5 will be the culmination of everything?!
Shortly after releasing Codex, billed as its "most powerful programming agent," OpenAI held an Ask Me Anything (AMA) event on Reddit.
Jerry Tworek, the company's vice president of research, gave a sneak peek at the next-generation base model, GPT-5:
To reduce model switching, there are plans to integrate Codex, Operator, Deep Research, and Memory in the future.
In addition, other Codex team members also shared details, such as:
1. Codex started as a side project, launched when they realized models were underutilized in their internal workflows
2. Using Codex internally roughly tripled programming efficiency
3. OpenAI is exploring flexible pricing options, including pay-as-you-go
4. o3-pro or codex-1-pro will eventually launch as the team's capacity allows; …
Okay, let's dig into the details.
Responses to 10 key questions
Overall, the OpenAI team mainly shared details about Codex and the company's future plans.
To preserve the questioners' original intent as much as possible, the exchange is presented below in dialogue form.
Q1 : Why was the Codex CLI tool written in TypeScript instead of Python?
A1: Because many developers are familiar with TypeScript, and it is well suited to building UIs (including terminal interfaces). However, a high-performance engine with bindings for multiple languages is planned, so developers will be able to write extensions in the language they know best.
Q2: Why run the code in the cloud instead of running the agent locally (for example, via MCP)?
A2: Although Codex CLI can run an agent locally, local runs are usually single-threaded due to hardware limits. Running in the cloud enables parallelization and sandboxing, which lets the model run code safely without human supervision.
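The parallelization idea can be sketched in a few lines. This is only an illustration, not Codex's actual architecture: `run_agent_task` is a hypothetical stand-in for dispatching one agent run to an isolated cloud sandbox.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent_task(task: str) -> str:
    """Hypothetical stand-in for dispatching one agent run to a cloud sandbox."""
    return f"result for: {task}"

tasks = ["fix flaky test", "add logging", "refactor parser"]

# In the cloud, each task can run in its own isolated sandbox in parallel;
# a single local machine would typically work through them one at a time.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(run_agent_task, tasks))

print(results)
```

The point is the shape of the workflow: many independent tasks fan out to isolated environments at once, rather than queuing behind one local process.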
Q3: What paradigm shifts from today's "vibe coding" did the team discover while using Codex? What inspired the tool?
A3: The main difference is that you can generate many code versions at the same time and then select the one with the best quality. It's like being able to spin up a large cohort of enthusiastic junior programmers and pick the best submission among them. In fact, Codex began as a side project by engineers who were frustrated that they weren't making full use of the models in their daily work at OpenAI.
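The "generate many, keep the best" pattern described above is best-of-n sampling. A minimal sketch follows; both `generate_candidate` and `score` are hypothetical placeholders (in practice, scoring might come from test results or a reviewer model):

```python
def generate_candidate(prompt: str, seed: int) -> str:
    """Hypothetical stand-in for one sampled code completion."""
    return f"candidate-{seed} for {prompt!r}"

def score(candidate: str) -> float:
    """Hypothetical quality score, e.g. fraction of tests passed."""
    return float(len(candidate) % 7)  # placeholder heuristic only

prompt = "implement a parser"

# Sample several independent candidates, then keep the highest-scoring one.
candidates = [generate_candidate(prompt, s) for s in range(8)]
best = max(candidates, key=score)
print(best)
```

The more candidates you sample, the better the best one tends to be, at the cost of more compute per task, which is exactly what cloud-side parallelism makes affordable.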
Q4: Could GPT-5 go beyond helping with code and handle more tasks on a computer? In other words, will it become a true assistant rather than just an advisor?
A4: GPT-5 is our next-generation base model. Its core goal is to enhance the capabilities of existing models and reduce model switching.
There is already a product called Operator that can perform tasks on a computer. It is still in research preview, but it will be improved over time into a genuinely useful tool.
The plan is to bring existing tools such as Codex, Operator, Deep Research, and Memory together so that they feel like a single whole.
Q5: Is Codex only suitable for senior engineers?
A5: It is probably better suited to people who want to offload tedious tasks rather than solve extremely difficult ones.
Q6: Does Codex effectively leverage the latest knowledge about libraries and other resources through search?
A6: It currently relies primarily on information loaded into the container runtime, including GitHub repositories and other files loaded during container setup. It does not directly access the latest library documentation or fetch real-time information through search.
However, we are considering how to help the model make better use of up-to-date knowledge. In the future, we may combine retrieval-augmented generation (RAG) to counter stale information by dynamically referencing external knowledge bases.
Q7: Does OpenAI have a research approach similar to the one in the paper "Absolute Zero: Reinforced Self-play Reasoning with Zero Data", such as letting a coding LLM improve its coding ability through self-play and reinforcement learning (RL)?
A7 : In the Codex project, we used reinforcement learning to improve the model's coding ability, coding style, and reporting accuracy.
As reinforcement learning researchers, we are excited about this direction and believe RL has broad application prospects in LLMs and coding.
Q8: If we quantify the efficiency gains from Codex, how much has overall development efficiency improved?
A8: It's still early days, but internal data shows that when projects take full advantage of Codex agents from the start, code and feature delivery roughly triples.
Good software engineering practices matter more than ever: clear module boundaries, sufficient testing of key functionality, efficient testing pipelines, and code structured for quick review. Combined with Codex's automation, these practices can greatly improve development efficiency.
Q9 : What does the Codex team think software engineering will look like in 10 years?
A9: We expect software requirements to be converted into runnable software efficiently and reliably.
Q10: How do you ensure Codex augments human developers rather than replacing them, especially junior developers who learn by doing and self-taught programmers?
A10: By playing a role similar to that of an excellent teacher, it lowers the barrier to entry for novices and helps the next generation of programmers learn faster.
Current models like Codex are still far from replacing humans, who have longer memories and broader background knowledge. If models can take over some of that work, humans will have more opportunities to focus on what they are truly good at.
Finally, the team said it will offer free API credits for Plus/Pro users of Codex CLI.
For more answers, see the homepage of the well-known blogger Bald Brother, which mainly covers Codex's feature details.
One More Thing
Meanwhile, OpenAI has officially released a "Codex Getting Started Guide".
It mainly covers:
1. Codex basics
2. How to connect your GitHub account
3. How to submit tasks to Codex and run them
4. Tips for writing prompts