From Air Conditioners to Aircraft: P-1 AI's Hardcore Exploration of AGI in the Physical World

Written by
Silas Grey
Updated on: June 9, 2025
Recommendation

An exploration of how AI is being applied to physical engineering, from residential cooling to aircraft design, and how it could redefine the future of engineering AGI.

Core content:
1. The current state of, and challenges for, AI in physical engineering
2. The technical route: a federated approach that fuses multiple AI models
3. How to overcome scarce training data, and the commercial outlook

 
Yang Fangxian
Founder of 53A / Tencent Cloud Most Valuable Expert (TVP)
 
On the latest episode of the Sequoia Capital podcast, hosts Sonya Huang and Pat Grady invited former Airbus CTO Paul Eremenko. From residential cooling systems to aircraft design, he explained in detail how the intelligent agent Archie breaks engineering tasks down into basic modules and tackles them through a federated approach that integrates multiple AI models, allowing entry-level engineering capability to evolve gradually into an engineering AGI that can design beyond human imagination. The following is a transcript of the conversation:

 

Sonya Huang: Paul, thank you so much for joining us today, and it's great to have you and your beagle, Lee, on the show. Welcome, guys! First, let's talk about the AI Ascent event we just had. At the conference, Jeff Dean talked about the potential of ambient programming and how AI could make a 24/7 entry-level software engineer available in the next year or so. So it seems like software engineering is really having a vertical-takeoff moment right now. What do you think about the current state of physical engineering?

Paul Eremenko: The short answer is that progress has been limited. One of the reasons we started P-1 AI is that I grew up fascinated by hard science fiction, which promised that AI would help us build the physical world and eventually make starships and Dyson spheres a reality. When the deep learning revolution started, I asked who was working on this kind of AI and found that no one was; it wasn't even on the agenda of the foundational labs. Years later, now in 2025, that is still the case. We have thought a lot about why, and we can talk about the reasons later, but we believe we have now found some solutions and are working to commercialize them.

By the way, Jeff is also an angel investor in our company. Coding AI has been a long time in the making. My co-founder, Susmit Jha, completed his doctoral thesis on program synthesis as early as 2011. So the technology is not new; it has only now found product-market fit, along with the right packaging, business model, and pricing model. Physical AI benefits from the research accumulated in coding AI. Now that the coding revolution has happened, some of that program-synthesis technology can be applied to creating physical designs. So we don't need to spend another ten or fifteen years: we think we can integrate these technical building blocks this year and hope to find product-market fit as early as next year.

Pat Grady: Can we talk a little bit more about this? What are those technology building blocks that you mentioned? What are the components that are needed to make this happen?

Paul Eremenko: The biggest element, and it goes back to the question I've been asking for the last few years of why no one is working on AI for building the physical world, is training data. Fundamentally, if you want an AI engineer to design an airplane or modify an airplane, you ask it, "What if I increase the wing area of the A320 by 10%?" To answer that question, the model ideally needs to be trained on millions of airplane designs. And the total number of airplanes humans have designed since the Wright brothers is nowhere near millions. Even if you could magically gather all of those designs, which you can't, and even if they were all modeled in a coherent, semantically unified way, there have probably been only about a thousand designs since the beginning of aviation, which is far from enough to train a large model.
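To make the data problem concrete, here is a minimal sketch of the kind of record such a model would need millions of: a parametric description of a design paired with the performance vector it produces. The field names and numbers are illustrative assumptions, not P-1 AI's actual schema or exact A320 figures.

```python
from dataclasses import dataclass, replace

@dataclass
class DesignExample:
    """One training example: design parameters plus resulting performance.

    Fields are illustrative; a real schema would also cover geometry,
    materials, systems, and supply-chain constraints in far more detail.
    """
    wing_area_m2: float
    wingspan_m: float
    mtow_kg: float            # maximum take-off weight
    engine_thrust_kn: float
    # Performance vector, computed by physics-based analysis tools.
    range_km: float
    fuel_burn_kg_per_km: float
    cruise_mach: float

# Rough, illustrative baseline (not exact A320 figures).
baseline = DesignExample(
    wing_area_m2=122.0, wingspan_m=36.0, mtow_kg=78_000,
    engine_thrust_kn=120.0, range_km=6_000,
    fuel_burn_kg_per_km=2.6, cruise_mach=0.78,
)

# The "+10% wing area" what-if is a small perturbation of one record; its
# performance fields would have to be recomputed by analysis tools, or
# predicted by a model trained on millions of such examples.
perturbed = replace(baseline, wing_area_m2=baseline.wing_area_m2 * 1.10)
```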

So the most basic technical building block for us is creating a training data set: synthetic, grounded in the principles of physics, and made up of hypothetical designs that take the supply chain into account. It might be airplanes, or it might be other domains. We have to make it large enough and meaningful enough. Because the design space of most physical products is effectively infinite, it can be sampled neither completely randomly nor uniformly; it has to be sampled cleverly, with dense sampling around mainstream designs and sparse sampling in the corners of the design space. Even though you will never adopt those corner designs, sampling them lets the model learn why they don't work. Creating these data sets for training models is the core of our method.
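As a rough illustration of that sampling idea, and not P-1 AI's actual pipeline, the sketch below mixes dense Gaussian perturbations around a mainstream baseline with sparse uniform draws across the whole design box, then labels each sample with a stand-in physics evaluator. The design variables, bounds, and evaluator are all invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Design variables: (wing_area_m2, aspect_ratio, engine_thrust_kn) -- illustrative.
baseline = np.array([122.0, 9.5, 120.0])
lower    = np.array([ 60.0, 5.0,  60.0])   # corners of the design box
upper    = np.array([250.0, 14.0, 250.0])

def sample_designs(n_dense: int, n_sparse: int) -> np.ndarray:
    """Dense sampling near the mainstream design, sparse sampling of the corners."""
    dense  = rng.normal(loc=baseline, scale=0.05 * baseline, size=(n_dense, 3))
    sparse = rng.uniform(low=lower, high=upper, size=(n_sparse, 3))
    return np.clip(np.vstack([dense, sparse]), lower, upper)

def evaluate(design: np.ndarray) -> np.ndarray:
    """Stand-in for the physics-based analysis that labels each design;
    a real pipeline would call aerodynamic, structural, and propulsion models."""
    wing_area, aspect_ratio, thrust = design
    lift_proxy = wing_area * np.sqrt(aspect_ratio)
    drag_proxy = wing_area / aspect_ratio + 0.01 * thrust
    return np.array([lift_proxy, drag_proxy])

designs = sample_designs(n_dense=9_000, n_sparse=1_000)
labels  = np.array([evaluate(d) for d in designs])   # synthetic performance vectors
```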

Of course, even if you now have millions of aircraft designs, each with a performance vector, throwing that data at an LLM in the later stages of training won't magically give you a good engineer. So the next question is: what should the model architecture look like? Our current approach is a federated, multi-model solution, which we'll talk about in detail later, with each model responsible for a different link in the engineering-reasoning chain.
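Eremenko only gestures at the architecture here, so the following is a generic sketch of what a federated, multi-model setup could look like; the specialist names, interfaces, and fixed plan are assumptions made for illustration, not P-1 AI's design.

```python
from typing import Callable, Dict, List

# Each specialist handles one link in the engineering-reasoning chain.
# These are placeholder functions standing in for separately trained models.
def geometry_model(task: str) -> str:
    return f"[geometry] parametrized layout for: {task}"

def physics_surrogate(task: str) -> str:
    return f"[physics] estimated performance deltas for: {task}"

def requirements_checker(task: str) -> str:
    return f"[requirements] constraint check for: {task}"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "geometry": geometry_model,
    "physics": physics_surrogate,
    "requirements": requirements_checker,
}

def orchestrate(request: str) -> List[str]:
    """Decompose a design request and dispatch each step to a specialist model.

    A real orchestrator would use a planner (e.g. an LLM) and pass structured
    artifacts between models; this only shows the federated control flow.
    """
    plan = ["geometry", "physics", "requirements"]   # fixed plan for the sketch
    return [SPECIALISTS[step](request) for step in plan]

for result in orchestrate("increase the A320 wing area by 10%"):
    print(result)
```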

Sonya Huang: Can you tell us more about how you gave the model that kind of physics-based reasoning capability? Is that already available in today's design software? Or does that knowledge only exist in the engineers' heads? And how do you inject that knowledge into the model?
