Microsoft CEO Satya Nadella: The Traditional Software Application Layer Is Collapsing, AI Agents Will Replace It

Written by Audrey Miles
Updated on: June 9, 2025

Microsoft CEO Satya Nadella argues that the traditional software application layer will be replaced by AI agents, transforming how we work. Key points: 1. AI agents will become a new product form, replacing the traditional software application layer. 2. Microsoft is building out the AI agent layer and turning Microsoft 365 into an integrated development environment with natural language as the interface. 3. SaaS products will become data sources, and enterprise competition will shift to how well companies connect to the AI agent layer.


We used to say that models are products, but now Microsoft CEO Satya Nadella is declaring: "The traditional software application layer is collapsing and will gradually be replaced by AI agents." In other words: AI agents are the products.

(You can watch the full interview in the video above; the full transcript is below.)

My guess is that because Microsoft doesn't have a standout model of its own, it has to settle for second best and proclaim that "AI agents are products."

What exactly did he say?

In the past, we had to open multiple applications (CRM, email, calendar, design tools) one by one to get work done. In the future, all of these applications recede behind the scenes and become data sources: users communicate with a single unified AI agent, which automatically pulls the data together and coordinates the various tools to complete complex tasks.

So Microsoft is building out the AI agent layer, gradually turning Microsoft 365 into an integrated development environment (IDE) with natural language as the interface. Users can build custom AI agents through Copilot Studio without writing any code.

Satya's message to the major SaaS companies:

  • SaaS products are no longer standalone, complete solutions; they become data sources accessed by AI agents.
  • Enterprises should focus on building data APIs rather than traditional user interface design.
  • Future competition will center on how efficiently you connect to the agent layer and provide accurate data and services.

The subtext: you focus on providing data and APIs and plug into my agent; don't bother with user interfaces or anything else.

This sounds good, but it is not necessarily good news for enterprise application vendors: users will only perceive Microsoft's AI agent and won't even know whether your service is being used underneath. So where does your value show up?


Full transcript

Matthew Berman: Okay, Satya, thank you for chatting with me. And congratulations on everything you announced at the Build conference! I've prepared a few questions. You have successfully led Microsoft through several major transitions, including the move to cloud and the embrace of open source, and now we seem to be entering the next one. With the rise of these extremely powerful AI agents, how do you think about maintaining your existing product suite while investing in the major changes that are coming?

Satya Nadella: Yes, first of all, thank you very much for coming to our developer conference. You know, the way I think about it is, first you have to embrace the new, right? And even though we have been in this AI era, this agentic era, for two or three years (depending on how you count), the patterns for building AI agents and applications are becoming clearer and clearer, right? So you have to really look at your existing tech stack, the things you may have built for previous workloads, and rethink them from first principles for the new workloads.

So, think about the infrastructure layer, right? Obviously, we are very proud to have 70 Azure regions around the world. And then you say, wow, we now need to retool them, to tweak them, to make them AI factories. That is pretty much what you have to do. It turns out that even an application like ChatGPT or Copilot does require a lot of GPUs or AI accelerators, but it also requires everything else. It requires massive storage during training and inference. It requires a lot of conventional compute, not AI-accelerated compute, to provide the runtime environment for AI agents. So it's interesting that what we've built over the past 15 years is now arguably more relevant than ever, because AI agents need more of these resources than any previous workload, just at a different order of magnitude. So that's roughly what we have to do at the infrastructure level.

The same is true for data, right? Take data: you would say, well, data has always been about databases, that's where you structure people, places, things. But now you can bring the intelligence layer directly into the data, right, directly into the engine. Like one of the coolest demos we showed: Postgres is very extensible, and you can now mix a response from a large language model (LLM) right into your SQL query. I mean, think about the query plans you can generate. So I feel like every layer of the tech stack has to be reimagined, but that also means we get to take some of the best work we've done over the past 15 years and let it compound for our developers so they benefit from it. So that's how we think about it: making sure we look at every layer of the technology stack from a first-principles perspective to fit the new AI workloads being built, and then really putting them together to meet the real needs of our customers.
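
To make the Postgres point concrete, here is a minimal sketch of mixing an LLM response into an ordinary SQL query, run from Python with `psycopg2`. The `llm_generate()` SQL function, the table, and the connection string are all hypothetical stand-ins; the real AI extensions for Postgres expose different function names.

```python
# Minimal sketch: composing an LLM call with relational filters in one query.
# Assumes a hypothetical llm_generate(prompt text) function provided by an
# AI extension installed in Postgres; the table and credentials are invented.
import psycopg2

conn = psycopg2.connect("dbname=crm user=app")  # hypothetical connection
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT c.name,
               llm_generate('Summarize this support history in one sentence: '
                            || c.support_notes) AS summary
        FROM customers AS c
        WHERE c.churn_risk > 0.8
        """
    )
    for name, summary in cur.fetchall():
        print(name, "->", summary)
```

The point of the demo is that the model call participates in the query like any other function, so relational predicates and LLM reasoning compose inside a single statement and a single query plan.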

Matthew Berman: Yes. And for end users, especially with products people know as well as Office 365, I imagine these products will change very quickly. Where does that acceleration show up?

Satya Nadella: Yes, that's interesting, right? If I look at Office, I would say there are three modes for Microsoft 365. The first is brand new: I have a new user interface (UI) designed for AI, which is really a new framework with chat, search, and notebooks. I can gather all kinds of heterogeneous data there, and do things like podcasts and audio overviews. I have AI agents, right? For example, the Researcher and Analyst agents, so I can delegate tasks to them and so on. Having all of that is really exciting. I even have Copilot Studio, meaning I can build my own AI agents, right? So that's new: I now have a user interface for AI and AI agents.

The other interesting thing is that Teams brings all of this into multiplayer mode, right? All of these AI agents are available in my channels and in my meetings. So Teams becomes the framework where AI can now work with me in a multiplayer, collaborative model.

The third mode is heads-down work. Like in GitHub, I'm using Copilot and VS Code, heads-down in the code, but I have AI agents I can call on. I'm busy in an Excel spreadsheet, and my Copilot chat is right there, right? It's like having a data scientist sitting next to me while I analyze a spreadsheet, or a researcher next to me while I research and write a document. The idea is that we turn every Office canvas into an integrated development environment (IDE) with chat, if you want to think of it that way. So in a sense, I think the value of the M365 system is now compounding at a greater rate, as intelligence gets built into all of these layers.

Matthew Berman: So I want to dig a little deeper into this. You have said before that software, especially the application layer, will eventually collapse into a model dominated by AI agents. I made a video about this titled "SaaS is Dead", and it, you know, got a lot of attention. People find the idea fascinating, but I want to hear it from you: this judgment implies there will be an AI agent layer, and beneath it databases of record that the agents can read and write. What does that mean for software-as-a-service (SaaS) companies in vertical markets? How should they prepare for this future?

Satya Nadella: Yes, like all of us, I think the way to deal with this, I mean, even the demo we showed today, right? There is Dynamics 365 in that demo, and it comes with an MCP server that Copilot Studio uses to coordinate a multi-agent application across customer relationship management (CRM) and many other systems of record, and ultimately orchestrate a complex business process. That seems to be what is happening right now. I mean, it's obvious that when you think about business processes and business applications, you have to incorporate yourself into this system. So yes, if you used to think, hey, I'm just a system of record or a system of engagement, I only care about the workflow on top of my data, and that's my scope, that idea cannot last. So I think we all have to be open to this new orchestration layer that participates in the agentic web, which basically has multiple backends, right? Your SaaS application will be one of those backends. You'd better support something like MCP to be able to participate in that agentic web. Then even something like NLWeb might reduce friction across all of these connectors, right? Because if you think about the enterprise, there is a lot of friction in how connectors work today. Something like NLWeb can be a huge change even inside the enterprise. So yes, I think the SaaS applications we build may need to change quite fundamentally to fit into this future.
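
As a rough illustration of what "support something like MCP" could look like for a SaaS backend, here is a minimal sketch using the official `mcp` Python SDK's FastMCP helper. The server name, the `crm_lookup` tool, and its in-memory data are invented for the example; a real product would back the tool with its actual data store and authentication.

```python
# Minimal sketch of a SaaS backend exposing itself as an MCP server so that an
# orchestrator (a Copilot Studio agent, for instance) can call it as one of
# several backends. Requires the MCP Python SDK: pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-backend")  # hypothetical server name

# Stand-in for the SaaS product's real data store.
_ACCOUNTS = {
    "contoso": {"owner": "alice@example.com", "open_opportunities": 3},
    "fabrikam": {"owner": "bob@example.com", "open_opportunities": 1},
}

@mcp.tool()
def crm_lookup(account: str) -> dict:
    """Return the CRM record for an account, or an empty dict if unknown."""
    return _ACCOUNTS.get(account.lower(), {})

if __name__ == "__main__":
    # Serves over stdio by default, which is how local MCP clients attach.
    mcp.run()
```

The design point is that the user interface moves up into the agent layer, while the SaaS product's job becomes answering tool calls like this one quickly and accurately.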

Matthew Berman: Yes. Let's talk about those SaaS companies, then. Do you think they become custodians of their customers' ground-truth data, with the AI agents provided by platform companies like Microsoft?

Satya Nadella: Yes, I mean, I don't quite know how it will evolve in the end, because in a sense we all overestimate the importance of whatever we have today, but the reality is that when these platforms shift, value always gets created somewhere else, right? So, you know, at the end of the day, what is the job to be done? The job to be done is to complete a business process. It's not about any one system of record and managing it, nor about any one AI agent or workflow; it's about the whole, right? For me, that is the general direction. The question is how you ride that trend, instead of thinking, I have a moat and I have to defend it, or I'll build a nice facade around it, like wrapping an AI agent shell on top. That approach doesn't really see the complete picture of what users actually need.

Matthew Berman: Yes, I like what you said: different AI agents talking to each other, different databases, none of that matters to the user, it's just an abstraction layer. It all sounds very exciting. Another thing you mentioned in an interview is that when you hire an employee, you are really hiring their future capabilities plus the whole set of agents they bring. I found that very interesting. But I want to understand it a little more clearly, because it seems that companies, the employers, will most likely want to own the intellectual property of those AI agents, just as they own traditional intellectual property. So maybe you can clarify.

Satya Nadella: Actually, you are right; what you're describing is exactly our view, right? If you look at what I announced today: what is a company's intellectual property, right? I think the output any of us produces at work belongs to the company, and the same holds for AI agents. That's also one of the fundamental reasons we extended the scope of these controls, right? You know, an AI agent gets an Entra ID. You can manage conditional access for these AI agents just as you manage the people in your organization. Then there's Purview, right, another very important piece. If an AI agent wants to access data, it has to comply with the same data protection and data governance policies, and by the way, the same goes for security. You need to manage the AI agent environment the way you manage endpoints; that's why Defender wants to make sure credentials aren't stolen and so on. So yes, it is absolutely right to think that all the identity management and security work we do for people and their IT infrastructure will also apply to AI agents and their infrastructure.

Matthew Berman: Yes, that makes sense. I also suspect many people will build their own AI agents for their personal lives. Can you foresee a future where they bring those personal AI agents to work?

Satya Nadella: That's a good question. I mean, any system that lets you bring in a personal AI agent has to be able to prevent data leaking between those two worlds. So here's the issue; take even the simplest example, my personal email and my work email. Today they are isolated from each other, two identities. We know how to keep them separate for privacy and intellectual-property reasons, right? Both reasons make sense. I think it's similar here, and that's why we believe in Entra and Microsoft accounts, and why we have Copilot and Microsoft 365 Copilot. Mixing the two can be very confusing even at the level of the user's mental model. I keep two Edge profiles pinned: when I'm acting as an individual I use my Microsoft account, and when I'm working at Microsoft I use Entra. That's a useful division that keeps the mental model simple. Otherwise, I think you really make a mess by blending everything together.

Matthew Berman: I think this makes a lot of sense. So I want to ask a question about your vision. The cost of intelligence does seem to be falling rapidly and is expected to approach zero. I think the future world will be very fascinating. What use cases do you think will emerge? What excites you the most when the cost of intelligence approaches zero?

Satya Nadella: Yes, I mean, for me, at the end of the day, right? When I look at the world, do we need more technology, more intelligence, to ultimately drive productivity and economic growth? Absolutely, right? I look around and say, oh my God, whether it's taming inflation or boosting growth, we need some help, so now is the time. So if you take that, the example we shared at the developer conference is what Stanford Medicine is doing in some really high-stakes areas, right? For example tumor board meetings, oncology and cancer care. They were able to take all of this technology and apply it in a real way, using a multi-agent framework in Foundry to coordinate pathology, clinical trials, and PubMed data, ultimately run a better tumor board meeting, and then put the data and findings into PowerPoint for teaching, or into Teams so everyone can collaborate. In my view, this is exactly what needs to happen. Healthcare is 20% of our GDP, and a lot of that spend is workflow overhead. So if every healthcare provider can start using AI to improve patient care, improve treatment outcomes, and lower costs, that will have a profound impact on our society. That's exactly what I'm really looking forward to.

Matthew Berman: Yes, the healthcare use cases; I'm very excited about hyper-personalized healthcare. I've used ChatGPT and Copilot to answer questions about my own health, and that's very, very exciting. And some of the research you showed, the immersion cooling work, right? That was discovered.

Satya Nadella: The discovery of new materials, yes, that's so cool.

Matthew Berman: Materials science, there are so many possibilities in that field. I've heard anecdotes about some in the younger generation either avoiding AI entirely or using it only sparingly, specifically because they believe its energy consumption will have a significant negative impact on the planet. How do you, and Microsoft, view this issue? What would you say to give them confidence?

Satya Nadella: Yes, first of all, you know, the fact that the younger generation cares deeply about this is honestly inspiring, because in a way it's the right pressure on all of us to say: whatever we are creating, its fundamental purpose is to help deliver the outcomes that matter to society, right? Whether in healthcare, education, or access to financial services, in any field, it is ultimately about economic growth, prosperity and abundance. So let's take that as the first point: we are not doing this just for the sake of a technical achievement, but to solve challenges facing humanity and the planet.

The second part is just as important: we must do it in a sustainable way and achieve sustainable abundance. One of the formulas I keep coming back to is tokens per dollar per watt. It's a good thing if we can use software, our most malleable resource, to use energy in the most efficient way and thereby create the greatest abundance, which in turn improves health, education and other outcomes. We just need to stay at it. The reality is that the entire technology industry consumes about 2% or 3% of total power today. So that share is small, but yes, it will double. And if it needs to double, it needs social permission to double; it needs to create more value in the real world. In fact, that's why I think that as a technology industry we can't rest on just one product that everyone uses for fun. It has to be applied to healthcare, to materials science, to broad knowledge work, to productivity gains in small businesses, because that is what gives us the social permission to keep using scarce resources like energy, and to do it sustainably. We are one of the biggest buyers of alternative energy; in fact, you could say the biggest backing for new green-energy projects comes from buyers like us. We really want to keep pushing on this, but ultimately it comes down to maximizing tokens per dollar per watt and creating economic prosperity by doing so.
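
To make the "tokens per dollar per watt" framing concrete, here is a back-of-the-envelope sketch. All of the numbers are invented placeholders for illustration, not Microsoft figures.

```python
# Back-of-the-envelope sketch of the "tokens per dollar per watt" metric.
# Every input below is a made-up illustrative number, not a real figure.
def tokens_per_dollar_per_watt(tokens_served: float, cost_usd: float, avg_power_watts: float) -> float:
    """Tokens delivered per dollar of cost per watt of average power draw."""
    return tokens_served / cost_usd / avg_power_watts

# Example: 1e9 tokens served for $500 at an average draw of 10 kW.
baseline = tokens_per_dollar_per_watt(1e9, 500.0, 10_000.0)

# An efficiency gain that doubles the tokens served at the same cost and power
# doubles the metric -- the "sustainable abundance" direction described above.
improved = tokens_per_dollar_per_watt(2e9, 500.0, 10_000.0)
print(f"baseline: {baseline:.0f} tokens/$/W, improved: {improved:.0f} tokens/$/W")
```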

Matthew Berman: I'm glad you said that. I'll point people who are nervous about the environmental impact to this video, so thank you. Now, computing architecture is clearly undergoing a major shift; the boundary between the deterministic and non-deterministic parts of the stack is blurring. I saw a very cool demo a few months ago where they recreated the game Doom with a diffusion model, every frame predicted. Can you foresee a future where the operating system is similar, with little or no traditional code underneath?

Satya Nadella: Yes, we have something similar, the Muse model we built. It's a world action model trained on game data: you can use the Xbox controller as the action input, and those actions generate the next frame. You can think of it a bit like robotics, and games are the same, so yes, in that sense everything is generated.

As for the operating system, you know, to me, sometimes I think we even overstate how deterministic our so-called deterministic systems are, right? Because after all, if you take an arbitrary software program, you can't formally prove it correct, right? That's one of the fundamental challenges of computer science.

Matthew Berman: Yes.

Satya Nadella: So I think, yes, it's a stochastic system, but this stochastic system does need to operate in a deterministic envelope that we can at least inspect. To be honest, when I talked with Musk during my keynote, he said, hey, we have to understand the physics of intelligence. That's actually a good way to think about it, right? That is, we have to get back to a place where, when we compose these complex systems, we have some way to understand their governing principles, and then constrain them, sandbox them, and so on. I think we have to do that even at the operating-system level. Look at the coding agent we just launched, for example. Interestingly, this coding agent has an environment: underneath, in GitHub Actions, there is actually a virtual machine, and you set the boundaries of that VM, right? Does it have network access? You control that. If it wants to reach tools through an MCP server, you control that. And all of it has a complete audit log. So I think that's how we will learn to combine a supposedly deterministic software system, built with a lot of imperative code, with these AI agents, and make the interaction itself something we can monitor.
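
As a generic illustration of the sandboxing idea, and not a description of how GitHub's coding agent is actually built, here is a minimal sketch of a policy layer that gates an agent's tool calls and records every decision in an audit log. The tool names and policy fields are invented.

```python
# Generic sketch: gate an agent's tool calls behind an explicit allow-list and
# keep an audit log of every attempt. Invented tool names; not GitHub's design.
import json
import time

POLICY = {"allowed_tools": {"read_file", "run_tests"}}  # no network tool listed

AUDIT_LOG = []

def call_tool(tool: str, args: dict, tools: dict):
    """Run a tool only if the policy allows it, logging the attempt either way."""
    allowed = tool in POLICY["allowed_tools"]
    AUDIT_LOG.append({"ts": time.time(), "tool": tool, "args": args, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"tool '{tool}' denied by sandbox policy")
    return tools[tool](**args)

# Example usage with stand-in tool implementations.
tools = {
    "read_file": lambda path: open(path).read(),
    "run_tests": lambda: "ok",
    "http_get": lambda url: "should never run",
}
try:
    call_tool("http_get", {"url": "https://example.com"}, tools)
except PermissionError as err:
    print(err)
print(json.dumps(AUDIT_LOG, indent=2))
```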

Matthew Berman: Yes. So, it's really cool because you said in the keynote that we're starting to get into the middle of this transition. So, I really think it's a very interesting time to see where the convergence of different types of software will go. Thank you very much for being with me. I appreciate it.

Satya Nadella: The pleasure is mine. Thank you, and I look forward to doing it again next time.