The Gentle Singularity

Written by Jasper Cole
Updated on: June 13, 2025

Explore the new era of coexistence between humans and digital superintelligence.

Core content:
1. The rise of digital superintelligence and its enhancement of human capabilities
2. The potential impact of AI on scientific progress and productivity improvement
3. Possible changes and challenges to human society in the 2030s


We have crossed the event horizon; the takeoff has begun. Humanity is very close to building digital superintelligence, and so far, at least, it is far less strange than it seems it should be.

Robots are not on the streets yet, and we don’t spend all day talking to AI. People are still dying of disease, we still can’t easily travel to space, and there are still many mysteries about the universe that we have yet to understand.

But we have recently built systems that are smarter than humans in many ways, and that can significantly amplify the output of the people who use them. The least likely part of the work is now behind us; the scientific insights that produced systems like GPT-4 and o3 were hard-won, but they are powerful enough to take us far.

AI will advance the world in many ways; the improvements to quality of life, especially from accelerating scientific progress and increasing productivity, will be enormous, and the future may be far better than the present. Scientific progress is the biggest driver of progress overall; it is incredibly exciting to imagine how much more knowledge we have yet to gain.

In an important sense, ChatGPT is already more powerful than any human in history. Hundreds of millions of people rely on it every day to perform increasingly important tasks; a tiny new capability could have a huge positive impact, while a tiny misalignment multiplied by hundreds of millions of people could also have serious negative consequences.

In 2025, we’ll see intelligent agents that can do real cognitive work; the way we write computer code will be changed forever. In 2026, we’ll likely see systems that can come up with novel insights. In 2027, we’ll probably see robots that can perform tasks in the real world.

More people will be able to create software and art. But the world is far from saturated with either, and experts will likely remain far better than beginners so long as they embrace the new tools. Overall, by 2030 a single person will be able to accomplish far more than a single person could in 2020; this will be a striking change, and many people will figure out how to benefit from it.

In the ways that matter most, the 2030s may not look very different. People will still love their families, express their creativity, play games, and swim in lakes.

But in other equally important ways, the 2030s are likely to be very different from any previous era. We don’t know the limits of human intelligence, but we will soon.

By the 2030s, intelligence and energy (that is, ideas and the ability to realize them) will be extremely abundant. These two have long been the fundamental limits on human progress; with enough of both (plus good governance), we could theoretically have everything else.

We already live with powerful digital intelligence, and after the initial shock, most people quickly adapt. We go from marveling that AI can write a beautiful paragraph to expecting it to write a complete novel; from surprise that it can diagnose a disease to expecting it to develop a cure; from admiring it for writing a small program to wishing it would create a company. This is the rhythm of the singularity: miracles become routine, and then become baseline expectations.

We’ve heard scientists say they’re two to three times more productive than they used to be. Advanced AI is fascinating for many reasons, but perhaps the most important one is that we can use it to accelerate AI research itself. If we can do a decade of research in a year, or even a month, then the pace of progress will be very different.

From now on, the tools we have already built will help us find further scientific insights and build better AI systems. Of course, this is not yet an AI system autonomously rewriting its own code, but it is a nascent form of recursive self-improvement.

There are other positive feedback loops going on. The release of economic value has already started a flywheel of building the infrastructure to run these increasingly powerful AI systems. Robots that can build other robots (and, in a sense, data centers that can replicate other data centers) are not far away.

If we have to build the first million humanoid robots the traditional way, but they can then run the entire supply chain (mining, refining, transportation, factory operations, and so on) to build more robots, which can in turn build more chip fabs, data centers, and the rest, the pace of progress will obviously be very different.

As data center production is automated, the cost of intelligence should eventually converge to the cost of electricity. (Many people are concerned about the energy consumption of ChatGPT queries; the average query consumes about 0.34 watt-hours, roughly what an oven uses in a little over one second, or an efficient light bulb in a couple of minutes. It also consumes about 0.000085 gallons of water, roughly one-fifteenth of a teaspoon.)
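The oven, bulb, and teaspoon comparisons follow from simple unit conversions. A quick sanity check, assuming a typical ~1 kW oven and ~10 W efficient LED bulb (these wattages are my assumptions, not figures from the text):

```python
# Back-of-envelope check of the per-query figures. The 0.34 Wh and
# 0.000085 gal numbers come from the text; the oven and bulb wattages
# are assumed typical household values.

QUERY_WH = 0.34          # watt-hours per average query (from the text)
OVEN_W = 1000            # assumed oven power draw, watts
LED_W = 10               # assumed efficient LED bulb draw, watts

oven_seconds = QUERY_WH * 3600 / OVEN_W   # seconds of oven use per query
led_minutes = QUERY_WH / LED_W * 60       # minutes of bulb use per query

GAL_PER_QUERY = 0.000085
ML_PER_GALLON = 3785.41
ML_PER_TSP = 4.93
teaspoons = GAL_PER_QUERY * ML_PER_GALLON / ML_PER_TSP

print(f"oven: {oven_seconds:.2f} s")      # ~1.2 s: "a little over one second"
print(f"bulb: {led_minutes:.1f} min")     # ~2 min: "a couple of minutes"
print(f"water: 1/{1 / teaspoons:.0f} tsp")  # ~1/15 of a teaspoon
```

The quoted comparisons hold: 0.34 Wh is about 1.2 seconds of a 1 kW oven, about two minutes of a 10 W bulb, and 0.000085 gallons is about one-fifteenth of a teaspoon.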

The pace of technological progress will continue to accelerate, and the human capacity to adapt to change is almost limitless. There will be very difficult challenges, such as the disappearance of a large number of jobs, but on the other hand, the world will become much wealthier very quickly, and we will really start to think seriously about new policies that were not possible before. We may not establish a completely new social contract all at once, but looking back decades later, those incremental changes will accumulate into huge changes.

If history is any guide, humans will always find new things to do, new goals to pursue, and will adapt quickly to new tools (think of the career changes that followed the Industrial Revolution). Expectations will rise, but capabilities will rise just as fast, and we will have better things; we will keep building more and more wonderful things for one another. One important, long-term advantage humans have over AI is that we are hardwired to care about other people and what they think and do, and we care much less about machines.

A subsistence farmer from a thousand years ago, watching what many of us do today, would call it fake work: he would think we were simply entertaining ourselves, since we have food, drink, and luxuries he could never imagine. I expect that people a thousand years from now will regard their own jobs the same way, and I have no doubt those jobs will feel extremely important and fulfilling to the people doing them.

The speed at which new wonders emerge will be staggering. It is hard to imagine today what discoveries will be made in 2035; we may solve high-energy physics problems one year and colonize space the next; or achieve a major breakthrough in materials science one year and have a true high-bandwidth brain-computer interface the next. Many people may choose to continue living the way they do now, but at least some people will probably choose to "plug in."

Looking ahead, all this may sound incomprehensible. But living through it will likely feel impressive yet manageable. From a relativistic perspective, the singularity happens bit by bit, and the merge happens slowly. We are climbing an exponential curve of technological progress: it looks vertical when you face forward and flat when you look back, but it is one smooth, continuous curve. (Think back to 2020, when it would have sounded crazy to claim we would be close to AGI by 2025; then look at the actual progress of the past five years.)

Of course, all of this comes with serious challenges. We need to address both technical and social safety issues, and it also matters that access to superintelligence be widely distributed, because its economic implications are profound. The most promising path might be:

First, solve the alignment problem: robustly ensure that AI systems learn and act toward the goals we collectively want over the long term. (Social media is a classic example of misalignment: those algorithms are amazingly effective at keeping you scrolling and clearly understand your short-term preferences, but they hijack brain mechanisms in ways that override your long-term preferences.)

Next, focus on making superintelligence cheap, accessible, and ubiquitous, without over-concentration in the hands of any one person, company, or country. Society is resilient, creative, and able to adapt quickly. If we can mobilize the collective will and wisdom of the people, then despite mistakes and problems along the way, we can quickly learn, adjust, and maximize the benefits of this technology and minimize the downside. Giving users full freedom to operate within broad boundaries determined by society is critical. The sooner the world starts having conversations around those boundaries and how to define “collective alignment,” the better.

We (not just OpenAI, but the entire industry) are building a brain for the world. It will be extremely personalized and easy for everyone to use; the real limit will be the supply of good ideas. People in tech circles long mocked the "idea guys": those who had an idea but could not find anyone to build it. It now looks like their moment is finally coming.

OpenAI is many things today, but first and foremost we are a superintelligence research company. We still have a lot of work to do, but the path ahead is lit, and the darkness of the unknown is receding fast. We are grateful that we get to do this work.

The vision of intelligence "too cheap to meter" is within reach. It may sound crazy to say, but if we had told you in 2020 that we would be where we are today, it would probably have sounded even crazier than our current predictions about 2030.

May our journey toward superintelligence be smooth, exponential, and uneventful.