Sam Altman's latest 10,000-word interview transcript: I spend most of my time thinking about R&D, computing power, and products

Sam Altman's latest 10,000-word interview deeply analyzes the development and future of AI technology.
Core content:
1. A review of Sam Altman's personal experience, including how it felt to be fired and then return
2. The current state and future prospects of AI technology, and the relationship between AI and humans
3. Opportunities, challenges, and organizational management in the AI era, as well as Sam's approach to time management and work
On March 21, Sam Altman was interviewed on The TED Audio Collective podcast. During the conversation, he reviewed his past experiences and talked about the current state and future development of AI technology, the relationship between AI and humans, and the opportunities, challenges, and organizational management of the AI era.
01
Fired and Returned
Moderator Question:
1. Let’s start with a question you must have been asked countless times in the past year: What does it feel like to be fired from the company you founded?
2. Is there any similarity between this and the situation when Steve Jobs was forced to leave Apple?
3. What lessons did you learn from this?
4. What is your proudest moment?
Sam Altman's transcript:
It was a surreal, trance-like experience. Confusion came first, and then a whole range of emotions: frustration, anger, sadness, gratitude. It was like experiencing the full spectrum of human emotion, and the breadth of it was striking.
Honestly, there were so many urgent things to deal with that I didn't have time to process all these emotions. So I didn't do anything special during those 48 hours. It was only once everything had settled and I had to start working again, carrying all those mixed feelings, that things really became difficult.
As for Steve Jobs, I think my situation was very different from his in a lot of ways. And the whole thing was very brief; it was over in five days. It was like a very strange, brief nightmare, and then you go back, pick up the pieces, and get back to work.
Actually, maybe I'm not remembering correctly; it may have been only four days. I think it was four days. I learned a lot, and if I had to do it again, I would change the way I communicated, both during the process and afterwards, to be more direct and clear about what was going on. I think a cloud of suspicion hung over OpenAI for a long time, and we could have done better in that regard.
I know I work with a very talented group of people. One of my proudest moments was seeing how well the team performed during a time of crisis and uncertainty: watching the management team run the company for a short period without me and knowing clearly that any one of them was more than capable of doing so. I am proud of the people I chose, the mentorship I gave them (to whatever degree), and the solid position the company is in.
The feeling of employee support was nice, but that wasn't what impressed me the most. I remember being very proud of the team, but not for that reason. I didn't do the research or build the products, and I was involved in some decisions but not most of them. What I did build was the company, so that is definitely something I had "authorship" in.
02
AI is a new stage in a long process of technological progress
Moderator Question:
1. What do you usually do? How do you arrange your time?
2. Are you surprised that AI has surpassed many human capabilities so quickly?
3. What new capabilities will become important?
4. What do you think about the positive outcomes and negative impacts that occur when AI assists scientists?
5. How do you use ChatGPT to solve problems you encounter at work?
Sam Altman's transcript:
That's a good question. Mornings are usually less chaotic, but in the afternoon things tend to get out of control, one situation after another, and it starts to feel like firefighting. So I've learned to get the really important things done in the morning. I spend most of my time thinking about R&D, computing power, and products, and less time on everything else. However, at any given moment, the specifics can look very different.
Our newest models are smarter than me in almost every way, but it hasn't really affected my life. I still care about the same things I did before. I'm probably a little more productive, maybe a lot more productive. I suspect that society will move much faster as it absorbs this new technology, and I expect the pace of scientific progress to accelerate a lot as well. We're living with this amazing new thing, this new tool. But how different is your life now than it was a few years ago? Not that different. I think AI is going to change everything in the long run. If I went back 10 years, I would probably have believed that once we had a model as powerful as our most powerful model today, everything would change. Now I think that was naive.
Ultimately, I think the entire economy will transform. We'll find new things to do, and I have no concerns about that. Every time we face a new technology we think all the jobs are going to disappear, but we always find new ones. Some jobs will disappear, but we'll find a lot of new things to do, and hopefully better things. So I think this is just another stage in a long exponential curve of technological progress.
I think in some ways the AI revolution looks like the opposite of the Internet. In the Internet era, people who ran companies didn't believe the Internet would change the world, and their companies failed because they didn't make the necessary changes. For those who embraced the Internet, the implications for action were very clear: I need a fully functional website, I need to know how to sell my products through it, and adapting to the digital revolution was not difficult. What I hear from many founders and CEOs right now is exactly the opposite: everyone believes AI is a game changer, but no one knows what it means for leadership, work, organizations, products, and services. They are all clueless. In this sense, the AI revolution is more like the Industrial Revolution than the Computer Revolution or the Internet Revolution. There are huge unknowns about how it will unfold, but I think we can also make many predictions about where it will go.
Regarding important new skills, for example, the ability to ask questions will become more important than the ability to find answers. This is consistent with my observations, even in the past few years. We used to value the amount of knowledge accumulated in a person's head. If you were a "fact gatherer", it made you look smart and respected. But now, I think being a "dot connector" is more valuable than being a "fact gatherer". If you can synthesize information and recognize patterns, you have an advantage.
Have you ever seen the TV show Battlestar Galactica? One of the things they say over and over in the show is, "All of this has happened before, and all of it will happen again." When people talk about the AI revolution, I do feel that it's different in some very important ways, but it also reminds me of old tech panics. When I was a kid, a new thing appeared on the Internet: Google. I thought it was cool, and so did everyone else; it was clearly much better than what came before. I was too young to experience it firsthand, but older kids told me that teachers were starting to ban the use of Google.
What's the point of going to history class and memorizing facts if you can look up any fact? If we can look up any fact instantly, we're going to lose something critical about how we educate children and how we raise responsible members of society. You don't even have to start your car, drive to the library, and find the book in the card catalog; it's just there. It doesn't feel fair, it doesn't feel right, it feels like we're going to lose something. We can't allow that. And what usually happens with all these new technologies is that we get better tools, expectations go up, people's abilities go up, and we just learn how to do harder, more impactful, more interesting things. I hope the same is true with AI. If you had asked someone a few years ago: A, will there be a system as powerful as GPT-4 in 2024? And B, if a prophet told you that you were wrong and there would be such a system, how much would the world change? How much would your daily life change? Would we be at existential risk? Almost everyone would have said no to the first question, and would have been at a loss on the second. And yet, this amazing thing happened, and here we are now.
In the area of innovation, Aidan Toner-Rodgers has published a new paper showing some good news for R&D scientists. When they use AI assistance, they file 39% more patents, which leads to a 17% increase in product innovation. Many of these are breakthrough innovations, discovering new chemical structures. The main beneficiaries are the top scientists, not the bottom ones: if you are in the bottom third of scientists, there is little benefit, but the productivity of the top scientists nearly doubled. This doubling seems to be because AI automates many idea-generation tasks, allowing scientists to focus their energy on idea evaluation. Great scientists are very good at identifying promising ideas, while average scientists are prone to misjudge them. So this is all good news and has greatly unleashed scientific creativity. But it comes at a cost: in the study, 82% of scientists were less satisfied with their jobs. They felt that they were doing less creative work and that their skills were not fully utilized. In this case, humans seem to be relegated to being judges rather than creators or inventors.
I have two conflicting thoughts. One of the most gratifying things that happens at OpenAI, for me personally, is when we release these new reasoning models and give them to legendary scientists, mathematicians, programmers, and so on, ask them what they think, and hear how these models change their work and how they now work in new ways. There is certainly a huge amount of professional joy in thinking creatively about a problem and finding an answer that no one has found before, and when I think about AI taking over that work, I do feel a little bit lost. But I expect that what will happen is that we will have new ways to solve hard problems, and there will still be joy in participating in solving the hardest ones. If we use new tools that augment our capabilities in different ways, I think we will adapt, but I'm not sure.
As for how I use ChatGPT, to be honest, I use it for pretty mundane things. I'm not someone who uses it to help me come up with new scientific ideas, I use it to process emails, summarize documents, or other very mundane things.
03
Altman believes that humans will still value connection
Moderator Question:
1. People sometimes prefer AI-generated content, but once they are told it is AI-generated, they change their mind. What do you think about this?
2. Why do we still need human connection?
3. In a world where disputes over information are becoming increasingly fierce and it is increasingly difficult to convince others of facts, what role can AI play?
4. How can these tools be used to correct people’s misunderstandings?
5. Why is the hallucination problem a difficult problem for AI?
6. How will AI evolve in medical diagnosis in the future?
7. Will AI make people more humble?
8. What do you think about relying on AI to assist in expression?
9. Many years from now, where will human value be reflected?
Sam Altman's transcript:
I want to start with a common phenomenon: people sometimes prefer AI-generated content, but once they are told it is AI-generated, they change their minds. This happens all the time. I recently saw a study showing that even people who claimed to hate AI art, when picking their favorite works, still chose AI-generated pieces more often than human ones; only when they were told the source did the preference reverse. There are many similar examples, and one trend is that AI has reached a level comparable to humans in many respects. Yet we still seem more inclined to pay attention to humans than to AI, which I think is a very good sign. What follows is speculation, and as I will explain, I am not very sure about it. Although you may communicate with AI more frequently in the future than you do now, you will still value communication with humans. This is a deep need rooted in our biological instincts, evolutionary history, and social functions, however you define it.
About human connection: you quickly realize that if you're communicating with a perfect, empathetic partner all the time, you're missing out on the drama, the tension, or something else in life. I think humans are wired to care about what other people think and feel, and how they see us. I don't think AI can replace that need. While you can have a productive conversation with an AI and get a sense of identity from it, and it can be a great form of entertainment, like playing video games, I don't think it satisfies our need to be social as part of a group and a member of society. Of course, I could be wrong, and maybe AI can influence our psychology so perfectly that it does replace that need, but if that's the case, I'd be very sad.
It’s hard for AI to replace a sense of belonging. It’s also hard to get status and recognition from a robot, and we need the attention of others to feel important, attractive, or respected. This is exactly what I’m trying to say. I can imagine a world in the near future where AI far exceeds human capabilities and can accomplish many amazing tasks. When I imagine that world and its inhabitants, I still think those people will care deeply about each other and still care deeply about relative status and competition with others. But I think few people will measure themselves by the achievements and capabilities of AI.
On using AI to correct misconceptions: there are indeed some people in the world who can expand our thinking in some way, which is a powerful ability. But there are not many such people, and being able to talk with them is a rare opportunity. If we can create an AI that is like an ideal dinner guest, one that is funny, knowledgeable, interested in you, and willing to take the time to understand you and inspire you to think in new ways, I think that is a good thing. I have personally had the experience of talking to an AI as if talking to an expert in a specific field, and it changed my view of the world. Of course, human experts can do this too; I just didn't have the opportunity to talk with one at the time.
Regarding the hallucination problem: I think a lot of people are still stuck in the GPT-3 days of 2021, which are already ancient history, when these models didn't really work and did produce a lot of hallucinations. If you use the current ChatGPT, it still hallucinates, but what's surprising is how reliable it generally is. We train these models to make predictions based on all the text they've been exposed to, and there is inevitably misinformation in the training data. Sometimes the model also fails to generalize effectively, and teaching it to confidently say "I don't know" when appropriate, rather than guessing, is still an active area of research. But with our new reasoning models, there has been significant progress in this area.
Regarding medical diagnosis in the future, I think AI will be part of the process. There will be many other improvements, but this will be one of them. There have been a lot of similar studies in the past year or two, but one that stood out to me last week showed that when you compare AI alone against doctors alone, AI wins. However, AI alone also beats doctor-plus-AI teams. My interpretation of this result is that doctors don't benefit from AI assistance because they choose to override the AI when they disagree with its judgment. You can find similar examples in history. When AI started playing chess, human players were initially better. Then AI took the upper hand. Then, for a period of time (I don't remember how long), human-AI teams performed better than AI alone because they could integrate different perspectives. But eventually AI surpassed the human-AI combination again, because human players would override the AI's judgment and make mistakes, or miss some key information. If you think your job is to override the AI's judgment in every case, the value of AI assistance will be greatly reduced. On the other hand, I think we're still in the early stages of figuring out how humans and AI can work together. AI is going to be a better diagnostician than a human doctor, and that's probably not something you need to compete with. But there are a lot of other things that humans will do better, or at least that patients will prefer to have humans do, and I think that's critical.
This is something I think about all the time. I'm about to be a parent, and my child will never be smarter than an AI. For children born now, it will be natural that the world they know is a world where AI is everywhere. Sure, AI is smarter than us; sure, it can do things we can't, but why does that matter? So I think this only feels unusual to people like us who are living through the transition.
AI is humbling. In some ways, it's driving humility, which I think is a good thing. But on the other hand, we don't yet understand how to use these tools effectively, and maybe some people have become overly reliant on them too early.
Regarding reliance on AI, it is true that I can no longer spell complex words because I rely entirely on autocorrect, but I think that is fine. It is easy to have a moral panic about this kind of thing. Even if people rely more on AI to help express their ideas, maybe that is just the future. I have seen students who do not want to write papers without ChatGPT at hand, because they feel overwhelmed by the blank page and the blinking cursor, so I do observe this dependence forming. Should we try to prevent it, or is this simply the future we should adapt to? I am not sure we should prevent it. For me, writing is outsourced thinking, and that process is very important; but as long as people replace the traditional thinking-through-writing with some new way of doing that thinking, I think that is an acceptable direction. There is a fairly common but actually inefficient workflow: someone writes down the main points they want to make, has ChatGPT polish them into a beautiful multi-paragraph email, and sends it to another person; the recipient then pastes the email into ChatGPT and asks it to summarize the three key points. So I think some formalized writing and communication conventions may have lost their original value. I'd love to see social norms evolve to allow everyone to just send the bullet points directly.
No one knows the exact answer to what human value will be in the future. But I think a more illuminating question is: what is human value today? I think it is usefulness to others, and I think that will continue to be the case. Many years ago, Paul Buchheit said something to me that really stuck with me. He had been thinking about this question before OpenAI was founded. He foresaw that in the future there would be "human currency" and "machine currency," completely independent monetary systems that didn't care about each other. I don't think that's going to happen, but I think it's a very profound insight.
I never thought machines would have their own currency. But you'll be excited about what AI accomplishes: inventing all the science for you, curing diseases, achieving controlled nuclear fusion, huge wins we can't even imagine. Will you care more about what AI has accomplished than about what your friend has accomplished, or what some entrepreneur has accomplished? I'm not sure. Probably not as much. Maybe some people will, and maybe there will be some very strange cults around specific AIs. But I believe we will be surprised to find that we are still very human-centric.
04
Quick Q&A: Altman's views on the speed of AI development, recommendations, the impact of AGI, and organizational management
Moderator Question:
1. What have you recently rethought or changed your mind about AI?
2. How fast might that happen?
3. What do you think is the worst advice people get about adapting to AI?
4. What advice do you most recommend on how to adapt to AI?
5. What is your most radical or least accepted opinion about AI?
6. Even the people who do care may only pay attention temporarily; after 20 minutes, they'll be thinking about what to have for dinner.
7. What suggestions do you have for how OpenAI should manage the collective psychology of its team?
8. When you talk about “organizational resilience,” what exactly do you mean?
Sam Altman's transcript:
I think a "fast takeoff" is more likely than I thought a few years ago.
As for takeoff speed, it is difficult to make an accurate estimate, but it is probably within a few years rather than a decade.
"AI is hitting a wall." I think that is the laziest and most avoidant way to think.
The most obvious way to get comfortable with AI is to use the tools. OpenAI has done something I think is really amazing: we've made the most powerful model we know of in the world today, GPT-4, available to the public for $20 a month, and if you don't want to pay the $20, you can still use a really great model. This technology is at your fingertips. You have access to the most cutting-edge, most powerful tools, on the same platform that the world's top experts use, and I think that's really great. So use it, explore what you like about it, what you don't like about it, and how you think it will be transformative.
The least accepted view is that, at least in the short term, it won't matter as much as people think. In the long run, everything will change. I even think that when we release the first AGI, not many people will care.
Tech people and philosophers will care; those are the two groups whose responses I've observed to be the most consistent.
About how OpenAI should manage the collective psychology as we go through this crazy "superintelligence" takeoff phase: how do we keep team members mentally healthy? We are not really in the "superintelligence" phase yet, but I can imagine that when we go through it, it will bring extremely high risks and enormous pressure. Even now, in the AGI development stage, we can feel this. But I think that to cope with the challenges ahead, we need to build stronger organizational resilience.
Regarding organizational resilience, it is critical to make the right decisions in the face of extremely high risk and uncertainty, and to adjust quickly as the situation changes. The first thing to do is to draw a four-quadrant diagram and have everyone at OpenAI think about each of their choices: how important is it, how high is the risk, and how reversible is it? Is it like walking through a revolving door, or is it gone forever once made? Decisions with serious consequences that are irreversible require you to slow down and do enough thinking and reflection in advance, because they are critical and cannot be undone. In the other three quadrants, you can act quickly, experiment, iterate, and question your own assumptions; but the quadrant of serious, irreversible consequences is the key, and it must be done right. That is also where I hope everyone will invest their best thinking, and even their best prompt engineering.
05
AI Regulation and the Future
Moderator Question:
1. You wrote a blog post a while ago about how to be successful. Do you still agree with the part about self-confidence?
2. What roles should AI and humans play in ethics?
3. How do you view ethical and safety issues?
4. Looking ahead to the next ten years or so, what are you most worried about?
5. How can AI become a force for good in developing countries?
Sam Altman's transcript:
I think so. We believed in this when we founded OpenAI. At that time, OpenAI faced a lot of skepticism from the outside world, and the external views were very different from our internal beliefs. I think my most important contribution to the company at that stage was to constantly remind everyone that the outside world rejects anything new and anything that may contradict established beliefs. So there were a lot of negative, even crazy, comments about us. But we had made amazing progress. I knew it was still early, and I knew we had to suspend our doubts to believe that it would continue to scale, but it had been scaling, so we had to push it to the extreme. It seems like a matter of course now, but at the time I really believed that if we didn't do this, we might not have achieved what we have today for a long time, because we were the only ones with enough confidence to do those seemingly ridiculous things, like investing a billion dollars to scale a GPT model. So, I think this was crucial.
On AI ethics, I think we ultimately have to rely on humans. I hear a lot of metaphors like nuclear deterrence: "We need to stay ahead of the bad actors, and then we can have mutually assured destruction," and so on. But the arms-race metaphor doesn't apply here, because many bad actors are not state actors and don't face the same risks or consequences. And are we now going to trust a private company instead of elected officials? It's too complicated, and it just doesn't work.
Regarding AI ethics, host Adam Grant believes that humans must make the rules. AI can follow them, and we should require AI to follow any rules we collectively decide on, but the rules must be made by humans. Secondly, people always seem to reach for historical analogies, which I understand, but I don't think it's entirely a good thing, and I'm even a little wary of it, because historical cases are not the same as the situation ahead of us. So I encourage people to ground the discussion, as much as possible, in how AI differs from what came before, based on what we know now, rather than guessing and then trying to design a system around the guess. I firmly believe that using AI as a tool to significantly enhance individual ability and individual will (or whatever you want to call it) is a very good strategy for our current situation, and a better one than letting a company, an adversary, an individual, or any other entity monopolize all the AI computing power in the world today. But I also admit that I don't know what will happen when AI becomes much more autonomous and agentic: not just giving a model a task and letting it program for three hours, but having it do something very complicated that would normally take an entire organization years to complete. I suspect we will have to explore new models. Again, I don't think historical experience will help us much.
Historical experience isn't much help in the software space either. I think AI will be regulated in the US like any other powerful technology, and the EU is probably more capable when it comes to legislation, but in another sense I don't think the EU's regulation of AI is helpful. For example, when we finish a new model, we can release it earlier in the US than in the EU because the EU has a lot of regulatory processes. What if that means the EU is always a few months behind the cutting edge? I think that just reduces their proficiency in the space, as well as their economic engine, their understanding, and whatever other advantages they're hoping to gain. So it's really hard to get the regulatory balance right, and I do think we clearly need some regulation.
What I'm most worried about 10 years from now is simply the speed of change. I have every confidence that humanity has the ability to solve every problem, but there are many problems that need to be solved quickly.
On how AI can be a force for good in the developing world: we've been able to reduce the price of a unit of intelligence by about 10 times a year. That's not sustainable forever, but we've been doing it for a while, and it's really remarkable how cheap intelligence has become.
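As a rough back-of-envelope reading of that figure (a sketch that simply assumes the roughly tenfold annual price decline Altman cites holds steady, which is an assumption rather than a guarantee), the cost of a unit of intelligence after n more years would be

\[ p_n \approx p_0 \times 10^{-n} \]

where p_0 is today's price. Three more years at that rate would make the same unit of intelligence roughly 1,000 times cheaper than it is now.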
In some ways this is counterintuitive: at least for now, only governments and large, responsible companies have the resources to build really powerful models. But training models is one thing, and using them is a very different thing.
I'm a techno-optimist and a science lover, and I think this is the coolest thing I can imagine and the best way I can think of to spend my working hours: to be part of what I believe is the most interesting, coolest, and most important scientific revolution of our generation. It's a huge honor, incredible. Beyond that, in a non-selfish way, I feel a sense of responsibility for the advancement of science, which I think is how society advances. Of all the things I have the ability to contribute to, this is the one I believe will do the most to advance science and thus improve living standards and the human experience. I feel a sense of responsibility, but not a burdensome one; it is a grateful sense of responsibility.
Abundance is the first word that comes to mind; prosperity is the second. Overall, I want to see a world where people can do more, be more fulfilled, and live better lives, however each of us defines "better." That's it.