OpenAI founder's latest bombshell at the Sequoia AI Summit: AI entrepreneurs, if you want to survive, stop competing with me on big models!

Written by
Audrey Miles
Updated on: June 20, 2025

OpenAI founder Sam Altman shared his insights on AI startups, big models, and future strategies at the Sequoia AI Summit.

Core content:
1. OpenAI's business barriers and suggestions for AI entrepreneurs
2. AI's profound impact on the younger generation and their dependence on it
3. Criticism and predictions on large companies' AI transformation


The 3rd Sequoia Capital AI Summit recently concluded in San Francisco, where 150 of the world's top AI founders gathered at Sequoia Capital's venue. The summit invited Sam Altman, the founder of OpenAI, for an interview. This article shares a summary of the core ideas and the full translated transcript.

Summary of core ideas

1. OpenAI's core business moat and advice for entrepreneurs: Altman made it clear that OpenAI will focus on "core AI subscriptions and models" and advised entrepreneurs "not to build a core AI subscription; everything else is open." This reads as a clear division of territory, and a warning, for participants in the AI ecosystem.
2. Young people's deep dependence on AI: He mentioned that young people (especially college students) use ChatGPT as an "operating system" and "almost always ask ChatGPT before making important life decisions." This reveals the profound influence, even dependence, AI may exert on the younger generation, raising questions for social ethics and education.
3. Sharp criticism of large companies' AI transformation: Altman said bluntly that he was "disappointed but not surprised" by how slowly large companies are adapting to AI, and predicted they may "capitulate and respond hastily at the last minute, by which point it will be a little too late." Quite a direct criticism.
4. AI intelligence versus human intelligence: When asked whether GPT-5 will be smarter than everyone, he responded: "If you think you are much smarter than GPT-3, then maybe GPT-5 still has a way to go to surpass you, but GPT-3 is already quite smart." The remark can be read as a challenge to humanity's general sense of superiority.
5. Opposition to grand "work backwards from the future" strategies: Altman expressed skepticism about strategies that try to plan backwards from an extremely complex future goal, saying he has "never seen those people really achieve great success." This stands in contrast to much of business strategy theory.
The following is the full translated transcript of the interview:

Sequoia AI Closed-Door Meeting: Transcript of the Sam Altman Interview

Opening
Moderator:  Our next guest needs no introduction, Mr. Sam Altman.
Moderator:  I would like to emphasize that this is the third time that Mr. Sam Altman has participated in our AI Summit and shared his insights, and we are very grateful for this. Thank you for coming.
Sam Altman:  This was the first office of OpenAI.
Moderator:  Indeed. It is great that you can return to this memorable place.

1. Review of OpenAI’s beginnings and early development

Moderator: Let's go back to OpenAI's original office. You started the company in 2016. We just had Mr. Huang on stage, and he mentioned that the first DGX-1 system was delivered here.
Sam Altman: He did. Yes, it looks so small now, which is striking.
Moderator: Compared with today's equipment, it is. Today's boxes are enormous. That's a fun memory.
Sam Altman: How heavy was that machine? One person could probably carry it at that time.
Moderator: He said about 70 pounds. It's not light, but one person could still carry it.
Moderator: So, in 2016, did you envision that OpenAI would grow to the size it is today?
Sam Altman: No. At that time there were only about 14 of us, all exploring new territory. Even then, we would sit around a whiteboard and discuss the future direction. To be honest, we were more like a pure research lab. We were very firm in our direction and beliefs, but there was no concrete business plan. Not only was it hard to imagine the specific form of the company or product; even the concept of large language models (LLMs) was far from formed. We were mainly trying to get AI to play video games.
Moderator: Trying to play video games. Are you still trying to do this now?
Sam Altman:  We have made quite good progress in this area now.

2. From API to ChatGPT: The road to product evolution

Moderator:  Okay. It took you about six more years to launch your first consumer-facing product, ChatGPT. How did you plan milestones and gradually polish the product to its current level during this process?
Sam Altman:  To some extent, it was an accident of history that our first consumer-facing product was not ChatGPT.
Moderator:  Yes, it was DALL-E. The earlier product was the API.
Sam Altman: Yes, we built the API first. Along the way we explored a lot and made several key bets. As I mentioned, we felt we had to build systems to verify our ideas, not just write research papers; we wanted to see practical applications, like getting AI to perform well in video games, or projects such as the robotic hand. At some point, what started as one person's interest grew into a team with a strong focus on unsupervised learning and building language models. That gave birth to GPT-1, and then GPT-2. By GPT-3 we all thought it had cool potential, but its specific commercial applications were not yet clear. We also realized that continuing to scale the models would take a lot of money.

After GPT-3, we wanted to build GPT-4, which meant model development entering the billion-dollar range. Unless it is a big-science project like a particle accelerator, that is hard to advance as a pure scientific experiment. So we started thinking about two things: first, how to develop this into a business that could fund the required investment; second, we had a hunch that the technology was heading toward real usefulness. We had released GPT-2 as model weights, and it did not attract widespread attention. From my observation of companies and products, launching an API usually has positive commercial results, something verified across many YC-incubated companies, and making a product easier to use often pays off enormously. These increasingly large models were hard to run, so we decided to build software to make them efficient to serve. And since we didn't know what product to build at the time, we hoped that by opening an API, other developers would find the right application scenarios.

I don't remember the exact date, it was around June 2020, when we released GPT-3 through the API. Most of the world reacted lukewarmly, but Silicon Valley practitioners took notice, thinking it was indicative of a trend. An interesting phenomenon: while we drew little attention from the rest of the world, some startup founders were very excited, and a few even thought it was a prototype of artificial general intelligence (AGI). As far as I know, the only companies that built successful businesses on the GPT-3 API at the time were mainly those offering "copywriting as a service," which was almost the only economically viable application of GPT-3.

But we noticed a key phenomenon that eventually led to ChatGPT: although developers struggled to build many successful businesses on the GPT-3 API, users loved simply talking to the model in our Playground. The model's conversational ability was still poor, and we had not yet worked out how to optimize the dialogue experience with RLHF (reinforcement learning from human feedback), but users enjoyed it anyway. In a sense, conversation became the API product's only "killer app" besides copywriting, and that ultimately led to our decision to build ChatGPT.
By the time GPT-3.5 was released, the number of viable API-based businesses had grown from roughly one category to about eight. But our belief that "users just want to talk to the model" had become very firm. So after DALL-E showed results, and combined with the fine-tuning techniques we had mastered, we set a clearer goal: build a model and a product that users could talk to directly. ChatGPT launched on November 30, 2022, about six years after we started in 2016.
Moderator: Yes, a lot of work went into that. ChatGPT launched in 2022, and now more than 500 million users interact with it every week. Very good. Later, please be ready to take some questions from the audience; that was Sam's own request.

3. OpenAI’s operational philosophy and product iteration

Moderator: As Pat said, you have been to our summit three years in a row. The industry has seen plenty of ups and downs over that period, but in the past six months OpenAI has kept up a high-intensity release rhythm: many new results, and a pace of product iteration that keeps accelerating, which is striking. My multi-part question is: how do you maintain that speed of iteration as the company grows?
Sam Altman: I think a mistake many companies make as they scale is that they do not expand the scope of their business and the number of projects accordingly. They grow headcount as expected, but the amount they ship does not grow, and that is when organizational inefficiency and bloat become obvious. I firmly believe everyone should be kept busy, teams should stay lean, and relatively few people should do a lot. Otherwise every meeting fills up with people arguing over the ownership of some trivial feature.

There's an old line in business that good executives are busy, because they don't want their people wasting time. At our company, as at many other technology companies, researchers, engineers, and product managers create the core value, and they have to stay efficient and high-leverage. So if the company is going to keep growing, it has to expand into more lines of business; otherwise a lot of employees just get caught up in endless arguments, meetings, and empty talk. We try to give relatively few people a lot of responsibility, and the key to that is running many projects at once.

And we genuinely have a lot to do. For example, we now have the opportunity to build a truly important internet platform. But if we want to be the personalized AI that users rely on across services throughout their lives, covering mainstream and niche needs alike, we have to build a lot of products and services.
Moderator:  Are there any releases in the past six months that you are particularly proud of?
Sam Altman:  I think our current models are already very good. Of course, they still have room for improvement in some aspects, and we are also working hard on this. But at this stage, ChatGPT is already a very good product, and its core is the high quality of the model itself. Of course, other factors are also important, but I am still amazed at how well a model can complete such a variety of tasks.

4. OpenAI’s vision, ecosystem, and future plans

Moderator:  You are building both small and large models, and your business covers a wide range. So, how should entrepreneurs here find the right position to avoid competing directly with OpenAI and becoming "cannon fodder"?
Sam Altman: I think it's important to understand our position: we are committed to being people's core AI subscription and the main way they use AI. Some of that will live in ChatGPT, and in the future we will launch several other key components around this subscription. Most importantly, we want to keep building smarter models and the core interfaces to them, such as future devices and new operating-system-like forms. Then, though it is not yet fully worked out, we hope to launch an API or SDK (software development kit) that can truly support a platform ecosystem on top of us. It may take a few attempts, but we will definitely do it, and I hope it creates enormous wealth for the world and lets other developers build on that foundation. In short: we will focus on the core AI subscription, the underlying models, and the core interfaces. Beyond that, there is a huge amount waiting to be discovered and built.
Moderator: So the advice is, don't try to do a core AI subscription service, but there is still a lot of room for development in other areas.
Sam Altman: We'll try. Of course, if someone can make a better core AI subscription product than us, we'll welcome it.
Moderator: Okay. There are rumors about a $40 billion raise or something like that, but those are just rumors. I don't know.
Sam Altman: Thank you for your attention, and the financing we have publicly announced is progressing well.
Moderator: Okay, I just want to confirm whether your company has released any official information on this.
Moderator: What is the vision and goals of OpenAI from now on?
Sam Altman: We will keep working hard to build excellent models and ship excellent products. Beyond that, there is no detailed master plan. I mean, there are plenty of OpenAI employees here who can confirm this: we are not the kind of company that sets a grand blueprint and executes it step by step. I believe in focusing on what's in front of us; trying to work backwards from an extremely complex future scenario to today's action steps usually doesn't work well. We know clearly that we need massive AI infrastructure and a large-scale "AI factory"; that we need to keep improving our models; and that we need to build a top-notch consumer product and everything that supports it. But we pride ourselves on flexibility and adaptability, and we will adjust our strategy as the environment changes. So next year we will ship products we haven't even begun to think about yet. We have a very strong belief that we can build a series of products our users will love, and an equally strong belief that we can build great models. In fact, I'm more optimistic about our current research roadmap than I've ever been.
Moderator: What's on the research roadmap?
Sam Altman: Very smart models. But as for the specific steps ahead of us, we usually plan and execute only one or two at a time.
Moderator: So you prefer to move forward step by step rather than making long-term backward-looking plans?
Sam Altman: I've heard some people outline their grand strategy, set a distant goal, and then work backwards from the future to the present, planning each move step by step, as if they were going to take over the world step by step. But I've never seen anyone who uses this approach really achieve great success.
Moderator:  Got it. Does anyone in the room have any questions? The microphone will be passed to you.

5. Audience Q&A Session

About AI transformation of large companies

Questioner 1:  What common misunderstandings do you think large companies have in the process of promoting the transformation of organizations to AI native, whether in tool application or product development? It is obvious that in the current wave of innovation, small companies seem to be far more efficient and fruitful than large companies.
Sam Altman: I think this is basically what happens in every major technological revolution; it isn't surprising. Large companies are making exactly the mistakes they've made before: individuals and organizations fall into entrenched patterns of thought and behavior. If the external environment shifts dramatically every quarter or two while the company's information-security committee still meets once a year to decide which applications are allowed and how data may flow into the systems, the pain of that lag is obvious. But this is the process of "creative destruction"; it is why startups can break out, and it is what drives the whole industry forward.

Frankly, I am disappointed by the willingness and speed large companies have shown in this round of change, but not surprised. My prediction is that they will spend a few more years in internal politicking and wait-and-see, pretending the change won't upend everything, and then, when they are finally forced to move, respond hastily, by which point it is often too late. In general, startups will outperform companies stuck in traditional modes.

The same thing shows up at the individual level. The difference between how an average 20-year-old uses ChatGPT and how an average 35-year-old uses it is incredible. It reminds me of when smartphones first appeared: every kid picked them up quickly, while older people took about three years to master the basics. People will adapt eventually, but right now this generational gap in the use of AI tools is very significant. Corporate behavior is just another manifestation of the same phenomenon.
Moderator:  Any other questions?

About the use of ChatGPT by young people

Questioner 2: Following up on the previous question, what innovative ways of using ChatGPT have you observed among young people that surprised or impressed you?
Sam Altman: They really do use ChatGPT like an operating system. They set it up in sophisticated ways, connect it to a lot of personal files, and keep fairly elaborate prompts in their heads or saved in notes. I find all of that very cool and impressive. It's also striking that they almost never make an important life decision without consulting ChatGPT. The model has context on everyone in their lives and the conversations between them, and the memory feature has made a real difference here. This is a generalization, of course, but roughly: older users treat ChatGPT as a Google replacement; people in their twenties and thirties treat it as something like a life advisor; and college students use it as an operating system.

How OpenAI uses ChatGPT internally

Moderator:  How is ChatGPT applied internally by OpenAI?
Sam Altman:  ChatGPT has written a lot of code for us.
Moderator:  Approximately how much?
Sam Altman: I don't know the exact number, and I think numbers are the wrong way to measure this. You can say 20% or 30% of the code in some Microsoft product is written by AI, but measuring contribution by lines of code is very one-sided. It is more meaningful to ask whether it is writing truly valuable code, the genuinely core, critical parts.
Moderator:  This is very interesting. Next question.

On the future of APIs and their integration with consumer products

Questioner 3: Hello, Sam Altman. I noticed something interesting: when you answered Alfred's question about the company's future direction, you focused mainly on the consumer market and building the core subscription service. At the same time, most of your company's revenue comes from consumer subscriptions. So why do you still plan to keep the API over the next decade?
Sam Altman: I very much hope these different services converge in the future. For example, users should be able to sign in to third-party services with their OpenAI accounts, and those services should have powerful SDKs so they can take over and embed ChatGPT's interface where appropriate. The key point is this: if you want a personalized AI that really understands you, holds your information, knows what you're willing to share and when, and carries all your context, then you will want to use it seamlessly everywhere. I admit the current version of the API is still a long way from that ideal, but I believe we can eventually get there.

About empowering application layer companies

Questioner 4:  Yes, I might have a question related to this, and your answer just now also partially touched on it. Many of us are building application-level companies and hope to use these basic modules and different API components, and perhaps some deep research APIs that have not yet been publicly released, to develop new products and services. So, will supporting and enabling our application-level companies become a priority for OpenAI? How should we understand OpenAI's plans in this regard?
Sam Altman: Yes. The future I expect is somewhere in between. The internet may get a new HTTP-level protocol that makes services and information more federated and decomposable into smaller components: agents continuously exposing and calling different tools, with authentication, payment, and data transfer all built on a universally trusted underlying layer, so that everything can connect. I can't fully describe its concrete form yet; it's still emerging from a vague idea, and once we see it more clearly, it may take several iterations to land. But that is the direction I want to see.
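To make that idea concrete, here is a purely hypothetical sketch in Python of what "services expose tools, agents call them through a trusted envelope" could look like. Every name here (fields, schema, tokens) is invented for illustration; this is not a real OpenAI protocol or any existing standard, just one way to picture the federation Altman describes.

```python
# Hypothetical sketch of a federated tool protocol: a service publishes a
# tool descriptor, and an agent calls it through an envelope that carries
# identity and payment metadata. All field names are illustrative.
import json
from dataclasses import dataclass, asdict

@dataclass
class ToolDescriptor:
    """What a service might publish so agents can discover its tools."""
    name: str
    description: str
    input_schema: dict          # JSON Schema for the tool's arguments
    auth: str = "oauth2"        # how callers are expected to authenticate
    price_per_call_usd: float = 0.0

@dataclass
class AgentEnvelope:
    """One tool call, wrapped with identity and payment information."""
    tool: str
    arguments: dict
    caller_identity: str        # e.g. a federated account token
    payment_token: str | None = None

weather_tool = ToolDescriptor(
    name="get_forecast",
    description="Return a short weather forecast for a city.",
    input_schema={
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
)

call = AgentEnvelope(
    tool="get_forecast",
    arguments={"city": "San Francisco"},
    caller_identity="user-token-abc123",
)

# On the wire this could simply be JSON over HTTP:
print(json.dumps(asdict(call), indent=2))
```

The point of the sketch is the shape, not the details: discovery (the descriptor), trust (identity and payment riding in the envelope), and composability (everything small, typed, and serializable).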

About sensor data and AI

Questioner 5: Hello Sam Altman, my name is Roy. I'm curious: AI can obviously improve its performance by getting more input data. Has your company considered feeding sensor data, such as real-time temperature readings from the physical world, into the AI model to help it better understand the real world?
Sam Altman: People are already trying this. Developers feed all kinds of sensor data into the APIs, such as the GPT-3 API or similar interfaces. In some specific application scenarios it hasn't worked especially well, but I would note that the latest models seem to handle this type of data quite well, unlike the early ones. So at some point we may integrate sensor data more deeply into the models; there has been a lot of encouraging progress here.
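For readers who want to picture what "feeding sensor data into the API" means in practice, here is a minimal sketch assuming the official openai Python SDK. The model name, sensor values, and prompts are all illustrative assumptions, not anything stated in the interview.

```python
# Minimal sketch: pass a physical-world sensor reading to a chat model and
# ask for an interpretation. Model name and reading are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

sensor_reading = {
    "sensor": "greenhouse-temp-01",
    "celsius": 31.4,
    "timestamp": "2025-06-20T14:03:00Z",
}

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You monitor greenhouse sensors and flag anomalies."},
        {"role": "user",
         "content": f"Latest reading: {sensor_reading}. Is any action needed?"},
    ],
)
print(response.choices[0].message.content)
```

Nothing model-specific is happening here; the reading is just serialized into the prompt, which is why newer models can "perform quite well" on such data without any special integration.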

On the Importance of Voice Technology

Questioner 6:  Hello Sam Altman. I'm very excited to try out the voice model in Playground, so I have two questions. First, what is the position of voice technology in OpenAI's infrastructure priority? Second, can you share how you think voice technology will be applied and reflected in OpenAI's products, especially in the core product ChatGPT?
Sam Altman: I think voice technology is extremely important. Frankly, we haven't built a good enough voice product yet, but that's okay; it took us a long time to build a good enough text model, and we will eventually crack this too. When we do, I believe far more people will want to interact by voice frequently. When we first launched the current voice mode, what interested me most was that it added a whole new information stream on top of the traditional touch interface: users can talk to the model while tapping on their phones. I still think there is huge room for innovation in combined voice + GUI interaction, and we have barely explored its potential. But before that, we will first make voice interaction itself as good as it can be. When we get there, I believe it will not only enable cool experiences on existing devices, but will likely spawn a whole new category of devices, provided we can get voice interaction to a truly human level of natural fluency.

On the centrality of programming

Questioner 7:  I have another question about programming. I'm curious, in OpenAI's strategy, is programming seen as another vertical application area, or is it more core to its future development?
Sam Altman: Programming is core to OpenAI's future. I think writing code will be one of the main ways models act on the world. Today, if you ask ChatGPT something, you usually get text back, sometimes an image. But in the future you may want a full program in response, or at least custom-rendered code for each answer; at least that's how I envision it. You will want these models to actually perform tasks and have effects in the real world, and writing code, calling various APIs, and so on will be a core way to do that. So programming will occupy a more central strategic position. We will obviously expose programming capabilities to developers through APIs and the platform, but more importantly, ChatGPT itself should be very good at writing code.
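As a toy illustration of "the response is a program rather than prose," here is a hedged sketch, again assuming the openai Python SDK; the model name and prompt are illustrative, and executing generated code this way is shown only to make the idea tangible. A real system would run it in a sandbox.

```python
# Toy sketch: ask the model to answer with runnable Python instead of text,
# then execute the returned snippet. Illustrative only; never exec()
# untrusted model output outside a sandbox.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": ("Reply with only a Python snippet, no backticks, that "
                    "prints a 7-day compound-interest table for $1000 at "
                    "5% per day."),
    }],
)

generated = resp.choices[0].message.content
exec(generated)  # the "answer" is the program's output, not the model's text
```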
Moderator:  So, AI will gradually evolve from the role of auxiliary tools to intelligent agents that can act autonomously, and eventually even be able to directly build and run applications.
Sam Altman: Yes, I think that will be the trajectory. It's a very continuous evolution.

On the key elements of building smarter models

Questioner 8:  It's very encouraging that you are confident about the R&D roadmap for smarter models. I usually understand model building from the perspective of data, data centers, Transformer architecture, test-time computing, etc. In your opinion, is there an underestimated factor that will become a key component in the future, but is not yet fully recognized by most people?
Sam Altman: Every factor you mentioned is extremely challenging. But the highest leverage is obviously still in major algorithmic breakthroughs. I think there may still be several algorithmic advances that each deliver a 10x or even 100x improvement. There won't be many, but even one or two would matter enormously. In general, yes: algorithms, data, and compute are the core ingredients of smarter models.

On balancing research freedom and top-down guidance

Questioner 9:  Hello. My question is, you lead one of the top machine learning teams in the world. When managing such a team, how do you balance giving top talents like Ilya Sutskever full freedom to explore deep research or other seemingly exciting frontiers, and setting clear goals from the top down (such as "We are going to build this product and we must make it successful, even if we are not sure whether it will work")?
Sam Altman: Some projects that demand tight coordination and unified execution do need a degree of top-down direction. But in most cases, I think managers tend to intervene too much. Frankly, there may be other ways to run an AI research lab, or any research organization, successfully, but at the beginning of OpenAI we spent a lot of time studying what a well-run research lab looks like. We had to go far back in history to do it; in fact, almost everyone with first-hand insight is no longer around, and it has been a long time since the last golden age of truly great research labs.

People often ask why OpenAI keeps originating while other AI labs seem to imitate, or why one biology lab produces so little while another produces so much. We always recite the basic principles we observed, how we learned and applied them, and the lessons history taught us. Everyone nods in agreement, and then may turn around and do something else. That's fine; they came for advice, and the final decision is theirs. But I've found that the few core principles we try to practice in our lab, principles that are not original to us but were "shamelessly" borrowed from successful research institutions of the past, work very well for us. Institutions that choose a different path for some "smarter" reason often fail to achieve the success they expect.

On the potential of AI in the humanities and social sciences

Questioner 10: One very attractive quality of these large models, especially for intellectually curious people like me, is their potential to reflect and help answer big, long-unanswered questions in the humanities, such as the cyclical evolution of artistic taste. They could even help us probe how far phenomena like systemic bias extend in society, and detect extremely subtle social dynamics that we have only been able to hypothesize about. Has OpenAI considered, or does it have a roadmap for, working with academic researchers to unlock these unprecedented insights in the humanities and social sciences?
Sam Altman: We have, and we do. Yes, it is genuinely exciting to see the progress researchers are making in these areas. We run dedicated academic research programs through which we partner with the academic community on some customized studies. But in most cases what scholars ask for is access to our models, more specifically our base models, and I think OpenAI has done well there. One of the cool things about our business model is that our incentives push us to make the models as smart, as cheap, and as widely available as possible, which benefits academia and the world at large. So while we do some custom collaborations, we keep finding that what researchers, like ordinary users, most want is for the general models to simply get better, and we put about 90% of our resources and energy into that.

About Model Customization

Questioner 11:  I'm very interested in how OpenAI thinks and plans to customize its models. You mentioned federated authentication, such as logging in through an OpenAI account and carrying the user's memory and context information. I'm curious whether you think the current customization methods, such as post-training models for specific application scenarios, are just a stopgap measure? Or is the main direction of OpenAI in the future to continue to improve the general capabilities of the core model rather than vigorously develop customization? How do you think about this issue?
Sam Altman: In a sense, I think the ideal "Platonic state" is a very small reasoning model with a context of trillions of tokens. You could pour your whole life into it: every conversation, every book you've read, every email sent and received, every page browsed. The model's core weights would never be retrained for a specific user or customized in any way; it would simply reason efficiently over the full context you provide. Everything in your life, and everything from other data sources, would keep streaming into that context, and a company could handle all of its data the same way. We can't fully achieve this yet, but I think any other customization method can be seen as a compromise against this Platonic ideal. That is the customization end-state I hope we eventually reach.
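A toy sketch of that "everything flows into the context" ideal, assuming nothing about OpenAI internals: personal data sources append to one ever-growing context, and a fixed shared model, never retrained per user, reasons over it.

```python
# Toy sketch of the "Platonic ideal": no per-user weights, no fine-tuning,
# just one growing context that a fixed shared model reasons over.
from dataclasses import dataclass, field

@dataclass
class LifeContext:
    chunks: list = field(default_factory=list)

    def ingest(self, source: str, text: str) -> None:
        """Append one piece of life data, tagged with its source."""
        self.chunks.append(f"[{source}] {text}")

    def window(self) -> str:
        # In the ideal, the window is effectively unbounded (trillions of
        # tokens); today you would have to select or compress to fit.
        return "\n".join(self.chunks)

ctx = LifeContext()
ctx.ingest("email", "Lunch with Dana moved to Thursday at noon.")
ctx.ingest("browser", "Read a long review of the original DGX-1.")
ctx.ingest("chat", "Asked for a 5k training plan starting next week.")

prompt = ctx.window() + "\nQuestion: what is on my schedule Thursday?"
print(prompt)  # this prompt would go to a fixed, shared reasoning model
```

Everything user-specific lives in the context while the weights stay shared, which is exactly why Altman calls other customization methods "a compromise" against this ideal.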
Moderator:  There is one more question at the end.

On value creation in the next 12 months

Questioner 12:  Hello Sam Altman, thank you for your time. Where do you think most of the value creation will come from in the next 12 months? Is it more advanced memory capabilities, stronger security, or protocols that allow agents to perform more tasks and interact more deeply with the physical world?
Sam Altman: In a sense, continued value creation will come mainly from three things: building more powerful infrastructure, developing smarter models, and building the "scaffolding" that integrates these technologies into every layer of society. Keep investing and executing there, and I believe the other problems solve themselves.

More concretely, I tend to think 2025 will be the year of agents doing real work, especially in programming, which I expect to become a dominant application category, along with some other important developments. In 2026, I expect AI to play a bigger role in exploring new knowledge and discovering new things, and perhaps we will see AI make major scientific discoveries, alone or with humans. I personally believe that most of the truly durable economic growth in human history, after the initial exploration of the planet and development of its resources, has come from the advance of scientific knowledge and its application in the real world. Then, by 2027, my guess is that AI's impact expands massively from the knowledge domain into the physical world, with robots turning from a novelty into a significant driver of economic value. Of course, that's just a rough judgment from my current intuition.
Moderator:  Can I ask a few quick questions at the end?
Sam Altman:  Of course.

6. Summary Questions and Advice for Founders

About GPT-5

Moderator:  One of the questions is, will GPT-5 be smarter than everyone here?
Sam Altman: Well, I mean, if you think you are much smarter than GPT-3, then maybe GPT-5 still has a way to go to surpass you. But it must be said: GPT-3 was already quite smart.

Resilience advice for founders

Moderator:  Okay, two more questions for you. Last time you were here, OpenAI had just gone through some twists and turns. Now that you have more distance and perspective, what advice do you have for fellow founders in the room on how to develop resilience, endurance, and inner strength?
Sam Altman: With time and experience, adversity gets easier to handle. As a founder you inevitably face a lot of it, and the challenges get harder and riskier, but as you accumulate experience the emotional toll shrinks. So in a sense, yes: even as the challenges become objectively bigger and harder, your capacity to handle them, the resilience you build along the way, grows with every round.

Also, I think the hardest moment for a founder in a major crisis is not the moment it hits. Companies always run into problems, and in the acute moment you get a lot of support and adrenaline to help you focus; even in a crisis as serious as running out of money, people step up, and you find a way through and start again. What is much harder to manage is the psychological aftermath and the rebuilding that follows. People focus on responding effectively in the moment, but there is comparatively little discussion or guidance on how to pick up the pieces and rebuild confidence afterward, which is exactly the skill worth learning. I've never found a really good resource on how a founder should steady their mindset and act on day 0, day 1, or day 2 after a crisis, let alone on day 60, when you're trying to rebuild from the rubble. That is something you can only get better at through practice.
Closing remarks
Moderator:  Thank you very much, Sam Altman.
Sam Altman:  Thank you.
Moderator:  I know you are still on paternity leave. Thank you again for taking the time to attend and share with us. Thank you very much.
Sam Altman:  Thank you.