OpenAI announces the GPT-5 release timeline! Altman: It will far exceed expectations, and he has said it will be free to the public

Written by
Silas Grey
Updated: July 8, 2025

OpenAI is about to launch GPT-5, which will be open to the public for free and lead a new era of AI.

Core content:
1. GPT-5 release plan adjustment, performance greatly improved
2. Under the influence of DeepSeek, GPT-5 will be provided for free
3. OpenAI transformation challenges and future positioning prospects

Yang Fangxian
Founder of 53AI/Most Valuable Expert of Tencent Cloud (TVP)

On Friday, Eastern Time, OpenAI CEO Sam Altman announced an adjustment to the new model release plan on X:

o3 and o4-mini will be released in the next few weeks, with o3 greatly improved over the preview version in many respects
GPT-5 will be released in a few months


Altman explained that the adjustment was made because they can make GPT-5 much better than originally planned, and because integrating everything proved harder than expected.


Reactions from netizens were mixed. Some speculated that this might be the version of GPT closest to AGI, but more people began to lose patience with GPT-5's repeated delays.


What is certain is that the naming of OpenAI's models will only become more complicated.



Not long ago, Sam Altman had an in-depth conversation with Ben Thompson, a well-known Silicon Valley analyst, about the progress of GPT-5, and said that due to the influence of DeepSeek, GPT-5 will be available for free.
I think DeepSeek is a really great team that has made great models, but the reason they became popular is not entirely due to the capabilities of the model itself.

This is a lesson for us: when we hide a feature (the chain of thought), we leave others the opportunity to go viral. It's a good wake-up call. It also made me rethink what we offer in the free tier, and GPT-5 will be free to use.
During the interview, one could once again feel the influence of open-source models such as DeepSeek on OpenAI. In the past two months, the rhythm of OpenAI's product releases has also begun to follow models and products that first became popular in China, such as chain of thought and agents.
Sam Altman mentioned that the rise of OpenAI was "a crazy opportunity." The success of ChatGPT forced the company to become a consumer technology company.
And now, this has become OpenAI's biggest challenge.
We would like to share this interview with you again. APPSO has compiled the key points of this interview:
1. The emergence of DeepSeek was a "wake-up call" that prompted OpenAI to reconsider its free-tier strategy.
2. He implied that GPT-5 will launch soon and that free users will also be able to experience it.
3. OpenAI hopes to build a series of products that serve billions of users.
4. Sam Altman believes that a billion daily active users are more valuable than the most advanced models.
5. OpenAI's future positioning is three-pronged:
·Build a large Internet company.
·Build the infrastructure for inference.
·Do the best research and build the best models.
6. Hallucinations have their value and are a manifestation of creativity; the key is to control hallucination so that it appears only when the user wants it.
7. The unexpected success of ChatGPT forced OpenAI to transform from a research lab into a consumer technology company.
8. Sam Altman admitted that this was not the original plan; he originally wanted to focus on AGI research.
9. AGI is a vague concept with no unified definition; Altman tends to define as AGI an agent that can autonomously complete a range of tasks.
The following is the transcript of the interview, slightly edited and selected by APPSO:
GPT-5 will be free to use
Host: From my perspective, when you talk about serving billions of users and being a consumer technology company, that means advertising. Don't you agree?
SA: I hope not. I'm not opposed to it; if there's a good reason to do it, I'm not dogmatic about ruling it out. But we have a great business model selling subscriptions.
Host: It's still a long way from profitability and recouping all the investment. And the great thing about advertising is that it expands your potential market, and it also adds depth, because you can increase revenue per user and advertisers pay for it. You don't run into price-elasticity issues; people just use it more.
SA: Right now, I'm more interested in figuring out how to charge people for a really good automated software engineer, or some other type of agent, than in making a bit of money through an ad-based model.
Host: I know, but most people are not rational. They won't pay for productivity software.
SA: Let’s wait and see.
Host: I pay for ChatGPT Pro, and I'm not the right person to discuss this. But I just --
SA: Do you feel like you’re getting good value out of it?
Host: Of course, I think. I think --
SA: Great.
Host: -- Especially Deep Research, which is awesome. But I'm probably more skeptical that people would actively pay for something, even if the math is obvious, even if it makes them a lot more efficient. At the same time, I saw you were talking about building memory. Part of what makes Google's ad model so great is that they don't actually need to know the user very well, because people type what they're looking for into the search box. People type a lot of information into your chatbot.
Even with the dumbest ads, your targeting capabilities would be extraordinary, even if you can't track conversions. And, by the way, you don't have an existing business model to worry about undermining. My sense is that this is completely contrary to what everyone at OpenAI originally envisioned, and that's the biggest obstacle. But to me, from a business analyst's perspective, it seems very obvious, and you're already late.
SA: I'm more interested in trying approaches other than traditional advertising. For example, a lot of people use Deep Research for e-commerce. Could we come up with some kind of new model where we never take money to change placements or anything like that, but if you buy something you found through Deep Research, we get a 2% affiliate commission, or something in that vein? That would be cool, and I'd have no problem with it. Maybe we could find a tasteful way to do advertising, but I don't know. I'm not really a big fan of ads.
Host: That was always an obstacle. Mark Zuckerberg didn't like ads very much, but he still found someone to do it and "don't tell me the details" and made the money magically appear.
SA: Yeah. Again, I like our current business model. I'm not going to say what we will or won't do in the future because I don't know. But I think there are a lot of interesting ways that are higher on our list of monetization strategies right now than advertising.
Host: Do you feel that when DeepSeek came out, quickly became popular, and people started using it and seeing its reasoning capabilities, part of the reason was that people who had used ChatGPT were less impressed, because they had used the o1 model and already knew its potential?
SA: Yes.
Host: But free users, or people who only use it once in a while, don't feel that way. Is this actually an example of how your reticence may have made other products look more impressive?
SA: Absolutely. I think DeepSeek — they have a great team, they developed a great model. But, I think, the capabilities of the model weren't really what made them go viral.
This was a lesson for us: when we hide a feature (we hid the chain of thought), we have good reasons to do so, but it does mean we leave others the opportunity to go viral. I think it was a good wake-up call from that perspective. It also made me rethink what we offer in the free tier, and GPT-5 will be free to use, which is cool.
Host: Wow, hints at GPT-5. Okay, I'll ask you more about that later.
Host: When I think about your business model, I've always thought it's very well suited to "high-agency" people, those who will actively use ChatGPT and are willing to pay for it because they see its value. But how many people are "high-agency"? And those people will try all the other models too, so you have to stay at a pretty high level. In contrast, if there's a good model that's just there, that I don't have to pay for, that keeps improving, and that makes money off of me without my noticing, I won't mind, because, like most Internet users, I don't have a problem with ads.
SA: Again, we are open to whatever needs to be done, but I am more interested in the e-commerce model you just mentioned than traditional advertising.
Competition with DeepSeek and others
Host: Regarding DeepSeek, have you ever wondered why people don't cheer for American companies? Do you think that the popularity of DeepSeek also carries some "anti-OpenAI" sentiment?
SA: I didn’t. Maybe, but I certainly didn’t feel it. I think there were two things. One, they put a cutting-edge model on the free tier. Two, they showed “chain of thoughts,” which was fascinating.
Host: People will think, "Oh, it's so cute. AI is trying to help me."
SA: Yes. I think it's mainly those two things.
Host: In your recent "AI Action Plan," OpenAI expressed concern about companies building on DeepSeek models that are "freely available." If this is really a problem, then isn't the solution to make your models freely available as well?
SA: Yes, I think we should.
Host: So, when --
SA: I can’t announce a release date yet, but directionally I think we should do it.
Host: You said earlier that having a billion-user website is more valuable than a model. So should this be reflected in your release strategy and how you think about open source?
SA: Stay tuned.
Host: Okay, I'll look forward to it. No problem.
SA: I’m not giving anything away ahead of time, but stay tuned.
Host: I guess the next question is, is this an opportunity for you to get back to your original mission? If you look back at the original announcement, DeepSeek and Llama...
SA: Ben, I'm trying to give you as many hints as possible without saying it directly. Please.
Host: Okay, no problem. Fair, fair. Is there a sense that this is liberating? Right? You think back to that GPT-2 announcement, and the questions about safety and everything else it raised; it seems a little quaint at this point. Is there a sense that the secret is out? In that context, what's the point of being somewhat precious about these releases?
SA: I still think there may be a lot of risk in the future. I think it's fair to say we were too conservative in the past. I also think there's nothing wrong with the principle of being a little conservative when you don't know the situation. And I think that at this stage, this technology is going to spread into all kinds of fields; whether it's our model that does bad things or someone else's model, what difference does it make? But still, I hope we can be as responsible a participant as possible.
Host: Another recent competitor is Grok. From my perspective, I've had two, what I think are very interesting, psychological experiences with AI in the past year or so. One was running a local model on my Mac. For some reason, I was very aware that it was on my Mac and not running anywhere else, which was actually a great feeling. The other was using Grok, where I didn't feel like some "moral police" would jump out and accuse me at any moment. I must admit that ChatGPT has improved a lot in this regard. But does Grok make you feel that, in fact, you could go further here and treat users like adults?
SA: I think we’ve gotten better, actually. I think we were really bad at it before, but over the last six to nine months, I think we’ve improved a lot.
Host: I agree. It has definitely gotten better.
SA: This used to be one of my biggest concerns about our product. But now, I don't think it bothers me as a user, and I think we've done a good job. So, I used to think about it a lot, but in the last six to nine months, I haven't thought about it anymore.

Becoming a consumer tech company was an accident
Host: Let's talk about the nonprofit issue. There's a story, you've referred to it as a "myth," that you set up a nonprofit for altruistic reasons and also to compete with Google for talent. Is that all it was?
SA: You're asking, why did you choose to become a non-profit organization?
Host: Why did you choose to become a nonprofit, and all the problems that come with it?
SA: Because we thought we were just a research lab. We didn't think we were going to be a company. Our plan was to publish research papers. There was no product, no product plan, no revenue, no business model, no plan for any of that. One thing that has always helped me in life is to grope in the dark until you find the light. We groped in the dark for a long time, and then we found something that worked.
Host: That's right. But isn't this nonprofit identity a bit like a millstone around the neck of the company now? If you could do it again, would you do it differently?
SA: Absolutely. If I had known what would happen, we would have structured it differently. But we didn’t know that at the time, and I think the price of being at the forefront of innovation is that you make a lot of stupid mistakes because you’re stuck in the fog of war.
Host: I have a few more theories I want to discuss with you about ChatGPT and how no one expected you to become a consumer tech company. This has always been my view: you were originally a research lab, and sure, you'd release an API and maybe make some money. But you mentioned that six-month expansion period, and you had to seize this opportunity that fell from the sky. There's a lot of discussion in the tech world about employee attrition, some well-known people leaving, and so on.
It seems to me that no one is here to be a consumer product company. If they want to work at Facebook, they can go to Facebook. That's another core contradiction: you have this opportunity, whether you want it or not, it's there. That means that things are very different here than they were at the beginning.
SA: Well, let’s put it this way, I can’t complain, right? I got the best job in tech. It would be really unkind of me to start complaining that this isn’t what I wanted and how unfortunate it is for me and so on. What I wanted was to run an AGI research lab and figure out how to build AGI.
I didn’t really expect to run a large consumer internet company. I knew from my previous job (which was also what I considered the best job in tech at the time, so I consider myself very, very lucky to have had the best job twice) how much energy and effort it takes to run a large consumer company and how difficult it is in some ways.
But I also knew what to do because I had coached a lot of people and had observed a lot. When we launched ChatGPT, there was a spike in users every day that crashed our servers. Then at night, the number of users dropped, and everyone was like, "This is over, this is just a short viral spread." Then the next day, the spike was higher, and then it dropped again, "This is over." By the fifth day, I was like, "Oh my god, I know what's going to happen next, I've seen this happen many times."
Host: But have you really seen this happen a lot? Because the whole game is about customer acquisition. For a lot of startups, that's the biggest challenge. Very few companies have really solved customer acquisition through organic growth and virality. I think the last company before OpenAI that really did this was Facebook, and that was in the mid-2000s. I think you may have overestimated how many times you've seen this before.
SA: Well, at this scale, yes, we are probably the largest. I think we are probably the largest company that has been founded since Facebook.
Host: Consumer tech companies of this size are actually very rare, it doesn't happen very often.
SA: Yeah. But I've seen Reddit, Airbnb, Dropbox, Stripe, and a lot of other companies achieve this amazing product-market fit and then explode in growth. So maybe I haven't seen it at this scale. At the time, you don't know how big it's going to be, but I've seen this early model before.
Host: Did you tell people this was going to happen? Or was it something you just couldn't communicate?
SA: I did tell everybody. I gathered the guys in the company and said, “This is going to be really crazy, we have a lot of work to do, and we have to do it quickly. But this is a great opportunity that has fallen from the sky, we’re going to take it, and this is what’s going to happen…”
Host: Is there anyone who understands you or believes in you?
SA: I remember going home one night and putting my head in my hands, feeling really depressed. I said, “Oh my god, Oli [Oliver Mulherin], this is terrible.” He said, “I don’t understand, it looks great.” I said, “This is terrible, it’s terrible for you, you just don’t know it yet, but this is what’s going to happen…” But I don’t think anyone really understood. That was a unique thing about my experience before, I was able to realize it very early on, but no one could have experienced how crazy the first few weeks were going to be.
Host: What will be more valuable in five years? A website with 1 billion daily active users that doesn't need to do customer acquisition, or a state-of-the-art model?
SA: I think it's the billion-user site.
Host: Is that true regardless? Or is it because, at least at the GPT-4 level (I don't know if you saw it today, LG just released a new model), and I won't comment on whether it's good or bad, there are going to be a lot of state-of-the-art models in the future?
SA: My favorite historical analogy is the transistor, and AGI will be like the transistor. There will be a lot of AGI, it will permeate everything, it will be cheap. It is an emergent property of physics, and it is not a differentiator in and of itself.
Host: So what will be the differentiating factor?
SA: Where I think there is a strategic advantage is in building a giant internet company. I think that should consist of a few different key services. There will probably be three or four products like ChatGPT, and you'll want to buy a bundled subscription that includes all of them. You want to be able to log in with your personal AI that has learned about you over your life and use it across other services.
I think there will be some amazing new devices that are optimized for the way you use AGI. There will be new web browsers, there will be a whole ecosystem. In short, there will be people building valuable products around AI. This is one aspect.
The other aspect is the inference stack, which is how to achieve the cheapest, richest inference. Chips, data centers, energy, there will be some interesting financial engineering to do, all of which are included.
Then, the third aspect is to really do the best research and develop the best models. I think this is the "troika" of value. However, except for the most cutting-edge models, I think most models will be commoditized very quickly.
Host: So, when Satya Nadella says that models are being commoditized and OpenAI is a product company, it's still a friendly statement and you're still on the same page, right?
SA: Yeah, I don’t know if it sounds like a compliment to most listeners, but I think he was trying to compliment us.
Host: Here's how I understand it. You asked me for my explanation of your strategy, which I wrote shortly after ChatGPT was launched: that it was an "accidental consumer technology company."
SA: I remember you wrote that article.
Host: This is the most - like I said, this is the rarest opportunity in the tech industry. I think I benefited a lot from Facebook in terms of strategic analysis because it was such a rare entity that I was like, "No, you have no idea where this is going." But I didn't start until 2013, and I missed the beginning. I've been doing Stratechery for 12 years, and I feel like this is the first company that I've been able to cover from the beginning, and at this scale.
SA: This doesn’t happen very often.
Host: Not often, really. But speaking of that, you just released a major API update, including access to the same "computer use" model that underlies Operator (a selling point of GPT Pro). You also released the Responses API. I think the most interesting thing about the Responses API is that you said, "We think this is much better than the Chat Completions API, but of course we'll continue to maintain it because a lot of people have already built on it." It has become an industry standard, and everyone has copied your API. At what point does the work on these APIs, maintaining old versions and pushing new features to new ones, become a distraction and a waste of resources? After all, you have a Facebook-level opportunity in front of you.
SA: I really believe in the “suite of products” strategy that I just talked about. I think that if we execute really well, in five years, we’ll have a suite of (a few) billion-user products. And then we have this idea that you can use your OpenAI account to log in to anywhere else that wants to integrate our API, and you can take your credits, your custom models, and everything else with you wherever you want. I think that’s the key to us really becoming a great platform.
Host: But here's the conundrum that Facebook has. It's hard to be both a platform and an aggregator, to use my terminology. I think mobile was a good thing for Facebook because it forced them to give up the fantasy of being a platform. You can't be a platform, you have to accept that you're a content network with ads. Ads are just more content. It actually forced them into a better strategic position.
SA: I don't think we'll be a platform in the same way that an operating system is. But I think that in the same way that Google isn't really a platform, but people log in with their Google accounts, and people bring their Google content to all corners of the web, and it's part of the Google experience, I think we'll be a platform in that way.
Host: Carrying your login information means carrying your memory, your identity, your preferences, and all of these things.
SA: Yes.
Host: So you just roll over everyone else? They can offer multiple login options, and OpenAI's login is better because it includes your memory? Or is it that if you want to use our API, you have to use our login?
SA: No, no, no. Of course it is optional.
Host: Don't you think that this could be a distraction or a diversion of resources when you have such a huge opportunity in front of you?
SA: We do need to do a lot of things at once, and that’s the hard part. I think in a lot of ways, yes, I think one of the most daunting challenges of OpenAI is that we need to be really good at a lot of things.
Host: Well, this is the paradox of choice. There are so many things you can do.
SA: We didn’t do a lot, we said no to almost everything. But if you think about just the core of what we think we have to do, I think we do have to do a lot, I don’t think we can succeed by just doing one thing.
Hallucinations also have meaning
Host: Is it possible that hallucination is actually beneficial? You posted an example of a writing model that kind of confirms a point I've made for a long time, which is that everyone is trying to make these probabilistic models behave like deterministic software, almost ignoring their magic, which is that they actually "make up" content. That's quite remarkable.
SA: Absolutely. If you want something deterministic, you should use a database. The beauty of this is that it can be creative, even though sometimes what it creates is not what you want. But that's okay, you can try again.
Host: Is this a question for the AI labs, are they trying to do this? Or is it a question of user expectations? How do we get everyone to like hallucination?
SA: Well, you want it to hallucinate when you want it to, and not hallucinate when you don’t want it to. If you ask, “Tell me this scientific fact,” you want it to be non-hallucination. If you say, “Write me a creative story,” you want some hallucination. I think the real question, or the interesting question, is how do you get the model to hallucinate only when it’s in the user’s interest?
Host: How do you think about this: when system prompts leak, they say things like "Don't reveal this" or "Don't say this" or "Don't do X, Y, Z." If we're worried about safety and alignment, is teaching the AI to lie a serious problem?
SA: Yeah. I remember xAI was once ridiculed for saying something in the system prompt about not saying bad things about Elon Musk or something like that. It was embarrassing for them, but I kind of felt bad for them because, the model was just trying to follow the instructions given to it.
Host: Yes. It's very serious.
SA: Seriously. Yes. So, yes, it was stupid and certainly embarrassing, but I don’t think it was a “meltdown” in the sense that people are talking about it.
Host: Some skeptics, myself included, have argued that some aspects of your calls for regulation are attempts to hinder potential competitors. I want to ask a two-part question. First, is that fair to say? And second, if the AI Action Plan does nothing more than prohibit state-level AI restrictions and declare that training on copyrighted material is fair use, is that enough?
SA: First of all, most of the regulation we have been calling for has been to only target the most cutting-edge models, the most advanced models in the world, and to put some safety testing standards on these models. Now, I think this is good policy, but I increasingly feel that most people in the world do not think this is good policy, and I worry about "regulatory capture."
So, obviously, I have my own beliefs, but it looks like we're unlikely to achieve this policy on a global scale. I think it's a little scary, but hopefully we can do our best to find a way out, and maybe everything will be fine. After all, not many people want to destroy the world.
But certainly you don't want to burden the entire tech industry with regulation. The kind of regulation that we're calling for would only affect us, Google, and a handful of other companies. Again, I don't think the world is going to go in that direction and we're going to compete under the existing rules. But yes, I think it would be very, very helpful if it was clear that fair use is fair use and that states didn't have all these complicated and different regulations.
Host: Is there anything OpenAI can do? For example, if Intel has a new CEO who is ready to refocus on AI, would you commit to buying chips produced by Intel? Can OpenAI help in this regard?
SA: I’ve been thinking a lot about what we can do for the infrastructure layer and the whole supply chain. I don’t have a great idea yet. If you have any suggestions, I’m all ears. But I do want to do something.
Host: Okay, sure. Intel needs a customer. That's what they need most, a customer that isn't Intel. Having OpenAI as a major customer of the Gaudi architecture, committing to buy a lot of chips, would help them. It would push them forward. There's your answer.
SA: If we develop a chip with a partner that works with Intel and uses a compatible process, and we have a high enough confidence in their ability to deliver, we can do that. Again, I want to do something. So, I'm not avoiding the question.
Host: No, I'm also a little unfair because I just told you that you need to focus on growing your consumer business and cutting off the API. Now I'm asking you to focus on maintaining chip production in the United States, which is really unfair.
SA: No, no, no, I don't think it's unfair. I think if there's anything we can do to help, we have an obligation to do it. But we're trying to figure out what that is.
AGI has no unified definition, but it should be able to complete many tasks autonomously
Host: Dario and Kevin Weil have both said, in different ways, that 99% of code writing will be automated by the end of this year, which is a very fast timeline. Where do you think that number is right now? When do you think we'll pass 50%? Or have we already?
SA: I think in many companies, it’s probably over 50% now. But I think the real breakthrough is going to come from autonomously programmed agents, and no one is really doing that yet.
Host: What are the obstacles?
SA: Oh, we just need more time.
Host: Is this a product problem or a model problem?
SA: Model problem.
Host: Should you continue to recruit software engineers? I see you have a lot of job openings.
SA: My basic assumption is that for a period of time, the amount of work that can be done per software engineer will increase dramatically. And then, at some point, yes, maybe we do need fewer software engineers.
Host: By the way, I think you should hire more software engineers. I think that's part of my point, I think you need to move faster. But, you mentioned GPT-5. I don't know where it is, we've been looking forward to it for a long time.
SA: We just released 4.5 two weeks ago.
Host: I know, but we are greedy.
SA: It's okay. You don't have to wait too long. The new version won't be long.
Host: What is AGI? You have many definitions. OpenAI also has many definitions. What is your current, or most advanced, definition of AGI?
SA: I think what you just said is the key point: AGI is a fuzzy boundary that encompasses a lot of things, and the term, I think, has been almost completely devalued. By many people's definitions, we may have already achieved AGI, especially if you could take a person from 2020 to 2025 and show them what we have.
Host: Well, for many years, AI has been like this. AI has always been about things that we can't do. Once we can do it, it becomes machine learning. Once you're not paying attention to it, it becomes an algorithm.
SA: Right. I think for a lot of people, AGI refers to a portion of economic value. For a lot of people, it refers to a general purpose thing. I think they can do a lot of things really well. For some people, it refers to something that doesn't make any stupid mistakes. For some people, it refers to something that improves itself, and so on. There's just not a nice unified standard.
Host: What about agents? What are agents?
SA: Something that runs autonomously and does a chunk of the work for you.
Host: To me, that's AGI. That's the level of employee replacement.
SA: But what if it's only good at one type of task and can't do others? I mean, some employees are like that.
Host: Yeah, I'm thinking about that because it's a radical redefinition. AGI was once thought to be omnipotent, but now we have ASI. ASI, super intelligence. To me, it's a terminology question. ASI, yes, can do any job we give it. If I get an AI that does a specific job, like programming, or whatever, and it does it consistently, and I can give it a goal, and it can achieve that goal by figuring out the intermediate steps. To me, that's a clear paradigm shift from where we are right now, where we still have to largely direct it.
SA: If we had a brilliant autonomously programmed agent, would you say, “OpenAI did that, they achieved AGI”?
Host: Yeah. That's how I define it now. I agree, it's almost a watering down of what AGI used to mean. But I just use ASI instead of AGI.
SA: Can we get a little Ben Thompson gold star for our wall?
Host: (Laughing) Of course, here you go. I'll give you my circuit pen.
SA: Great.
Host: You talk to your colleagues in these labs about what you're seeing and how nobody's ready for it, and there's all sorts of tweets floating around that get people excited, and you drop some hints in this podcast. It's very exciting. But you've been talking about this for a long time. You look at the world, and in some ways, it still looks the same. Is it that your launches haven't lived up to your expectations, or are you surprised by the ability of humans to absorb change?
SA: It's more of the latter. I think there have been a few times where we've done something that's really blown the world away and people have been like, "This is... this is crazy." And then, two weeks later, people are like, "Where's the next version?"
Host: Well, I mean, your initial run followed the same pattern, because ChatGPT blew everyone's mind. And then GPT-4 came out shortly after, and everyone was like, "Oh my god. How fast are we moving?"
SA: I think we've released something incredible, and I think it's actually a great human trait that people adapt and just want more, better, faster, cheaper. So, I think we've overdelivered, and people are just updating their perceptions.
Host: Given that, does that make you more optimistic, or more pessimistic? Do you see the bifurcation that I think is coming, between people with agency (that's another meaning of "agency" — we need to invent more words; we could let ChatGPT hallucinate one for us) and people who are just going to use the API?
The whole idea of Microsoft Copilot is that you have an assistant accompanying you, and there's a lot of spiel about, "Oh, it's not going to replace jobs, it's going to make people more productive." I agree that for people who actively use it, that will be the case.
But look back at, say, the history of the PC. The first wave of PC adoption was people who actually wanted to use a PC. A lot of people didn't want one; they had a PC on their desk and had to use it to do a specific task. It took a generational shift to get people to use the PC by default. Is that the real limiting factor for AI?
SA: Maybe, but that’s okay. As you mentioned, this is common for other technological changes.
Host: But if you go back to the PC example, the first wave of IT was actually the mainframe, which wiped out the entire back office. It turns out the first wave was a wave of job replacement, because top-down replacement was easier to implement.
SA: My gut feeling is that it won’t be exactly the same this time around. But I think it’s always very difficult to predict.
Host: What is your intuition?
SA: It will trickle down into the economy, mostly just gradually eating away at things, and then it will get faster and faster.
Host: You often mention scientific breakthroughs as a reason to invest in AI. Dwarkesh Patel recently made the point that there haven't been any such breakthroughs yet. Why not? Can AI actually create or discover new things? Or is the real problem that we're over-relying on models that aren't actually that good?
SA: Yeah, I don’t think the models are smart enough yet. I don’t know. You hear people using Deep Research say, “Well, the model didn’t discover new science on its own, but it did help me discover new science faster.” To me, that’s almost as good.
Host: Do you think the Transformer-based architecture can really create new things, or is it just outputting mediocre content from the internet?
SA: Yes.
Host: Well, where will the breakthrough point be?
SA: I mean, I think we're on the path. I think we just have to keep doing what we're doing. I think we're on the path.
Host: I mean, is this the ultimate theological test?
SA: What do you mean by that?
Host: Are humans inherently creative, or is creativity just recombining existing knowledge in different ways?
SA: One of my favorite books is David Deutsch's The Beginning of Infinity. There are a few pages at the beginning of that book that really beautifully describe how creativity is about slightly modifying something you've seen before. And then, if something good comes out of it, someone else will slightly modify it again, and someone else will slightly modify it again. I kind of believe that. If that's the case, then AI is good at slightly modifying things.
Host: To what extent is that view based on long-held belief rather than on things you observe? I don't want to get too metaphysical or, as I said, almost theological, but it does seem that one's basic assumptions shape one's assumptions about what is possible for AI.
Most Silicon Valley people are materialists, atheists, or whatever you want to call them, so of course we'll figure it out; it's just a biological function and we can reproduce it in a computer. If it turns out that we never really create new things, but rather enhance humans who create new things, would that change your core belief system?
SA: It was definitely part of my core belief system before. None of this is new. But, no, I would assume that we just haven’t found the right AI architecture yet, and at some point in the future, we will.
Career advice for young people in the AI era
Host: The last question is from my daughter, who is graduating from high school this year. What career advice do you have for high school graduates?
SA: The most obvious and specific advice is to master AI tools. Just like when I graduated from high school, the most obvious and specific advice was to master programming. This is the new version of the advice.
The broader advice is that I think people can develop resilience and adaptability and the ability to figure out what other people want and how to be useful to other people. I would practice that. Like, whatever you study, the specific details may not be that important.
Maybe they never mattered. The most valuable thing I learned in school was the "meta-skill" of learning, not any specific thing I learned. So whatever specifics you study, also learn these general skills, which seem to matter most as the world goes through this transformation.