AI could let developed countries grow their GDP by 10%. Meta founder Zuckerberg in conversation with Microsoft CEO Satya Nadella: the boundaries between documents, applications, and websites disappear in the era of large models

Written by
Silas Grey
Updated on: June 23, 2025
Recommendation

How is AI technology reshaping the boundaries of the digital world? Zuckerberg and Satya Nadella's in-depth conversation reveals the new shape of future work.

Core content:
1. The trend of disappearing boundaries between documents, applications, and websites
2. The impact of the AI era on the continuity of human workflows
3. How Microsoft's OLE technology prefigured modern AI integration thinking


Although Llama 4 fell short of expectations, Meta founder and CEO Mark Zuckerberg still hosted Meta's first AI developer conference, LlamaCon 2025, a little over a week ago. During the conference he also gave a podcast interview.

At the conference, he invited Microsoft CEO Satya Nadella and other entrepreneurs for a series of conversations. The event is already a few days old, but I still chose to write this post because Satya shared a very profound point in the conversation that is worth recording: the boundaries between documents, applications, and websites are disappearing.

According to Satya, this idea stems from a question Bill Gates has long pushed inside Microsoft: "Bill always asks us to think about what the difference is between documents, applications, and websites." The three seem very different, but is it really impossible to integrate them?

Satya believes that in the AI era, their boundaries have finally begun to blur. Let me explain.

Imagine for a moment that when you use a computer to solve a problem—like planning a trip—what is your thought process like? You look for information (reading), perform calculations (processing), communicate with others (interacting), and finally make a decision (output). In this natural cognitive process, there is actually no break between "now I want to open a document", "now I want to start an application", and "now I want to visit a website". Your consciousness is continuous and your needs are fluid.

However, in the real world of solving this problem with a computer system, we are forced to cut this continuous process into pieces:

  • Document (Word): for recording and reading

  • Applications (Excel): for calculation and processing

  • Website (Browser): for searching and interacting

But this segmentation exists not because it matches human cognition, but because of the technical limitations of early computers. Processors were slow, memory was scarce, and network bandwidth was limited, so complex human needs had to be simplified into discrete tasks that computers could handle.

Bill Gates recognized this problem very early on. When Microsoft promoted OLE technology in the 1990s, it was essentially asking: why can't a sales report (document) contain continuously updated sales data (application) and customer feedback (website)?

A quick explanation of OLE (Object Linking and Embedding), first released by Microsoft in 1990. Its purpose was to break down the barriers between applications and realize the vision of "compound documents." It let users embed live objects, such as Excel spreadsheets or PowerPoint slides, inside Word documents; these objects not only displayed content but also retained the functionality of their source application. As Microsoft's first attempt to blur the boundary between documents and applications, OLE laid the groundwork for later technologies such as COM, ActiveX, and .NET, and it is also the origin of the idea of merging information forms that Satya describes.
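To make the "compound document" idea concrete, here is a toy sketch in Python. It is not real OLE/COM code (actual OLE lives in the Win32 API); every class name here is invented for illustration. The point is only that an embedded object stays live: it keeps its source application's behavior instead of becoming a static snapshot.

```python
class EmbeddedSpreadsheet:
    """Stands in for an embedded Excel object: it stays 'live' and can recalculate."""
    def __init__(self, rows):
        self.rows = rows  # e.g. [("Q1", 120), ("Q2", 150)]

    def total(self):
        # The embedded object retains its application logic (here: summing).
        return sum(value for _, value in self.rows)

    def render(self):
        return f"[spreadsheet: total={self.total()}]"


class TextBlock:
    """Stands in for static document text."""
    def __init__(self, text):
        self.text = text

    def render(self):
        return self.text


class CompoundDocument:
    """A 'document' that mixes static text with live, linked objects."""
    def __init__(self):
        self.parts = []

    def embed(self, part):
        self.parts.append(part)

    def render(self):
        return "\n".join(part.render() for part in self.parts)


doc = CompoundDocument()
doc.embed(TextBlock("Sales report, FY2025:"))
sheet = EmbeddedSpreadsheet([("Q1", 120), ("Q2", 150)])
doc.embed(sheet)
print(doc.render())

# Because the object is linked rather than pasted, updating it updates the document.
sheet.rows.append(("Q3", 200))
print(doc.render())
```

The contrast with copy-paste is the whole point: a pasted table would still read "total=270" after Q3 lands, while the linked object re-renders with the new total.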

Satya believes the real breakthrough is finally arriving thanks to modern AI technologies such as large models. He describes his own digital experience: "When I read about Llama 4, it was actually done through a series of web chat sessions, and then we were able to add content to the document, and with the code completion function, it was easy to turn it (the document) into an application."

What happened here? AI effectively became a "universal transformer" that understands user intent rather than mechanically executing commands. When you say "help me learn about Llama 4," AI can ignore the traditional question "do you want a document, an app, or a web page?" and instead provide information in the conversation based on context, organize it into structured documents, and even generate executable code.

The essence of this transformation is that we have finally moved from "tool-oriented computing" to "intention-oriented computing." In the new paradigm:

  • The form of information is determined by the usage scenario, not the preset container

  • The transition is seamless because AI understands the context

  • User experience is continuous, just like the natural human thought process
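The shift to intention-oriented computing can be caricatured in a few lines. The keyword rules below are invented stand-ins for what a real system would do with a language model; the sketch only illustrates the idea that the output form follows the stated intent, rather than a tool the user must pick up front.

```python
def route_intent(request: str) -> str:
    """Map a natural-language request to an output form.

    A toy router: a real intent-oriented system would use a language model,
    not keyword matching. The categories are illustrative only.
    """
    words = set(request.lower().split())
    if words & {"calculate", "sum", "total"}:
        return "computation"   # Excel-like processing
    if words & {"find", "search", "browse"}:
        return "retrieval"     # browser-like lookup
    return "narrative"         # document-like prose


# The user never says "open Excel" or "open the browser";
# the form is chosen from the intent.
print(route_intent("Calculate the total cost of the trip"))   # computation
print(route_intent("Find hotels near the conference venue"))  # retrieval
print(route_intent("Summarize what we decided"))              # narrative
```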

When you can complete, in one unified environment, all the tasks that traditionally required different tools, from ideation (chatting) to research (searching and reading) to calculation (data processing) to presentation (visualization), in a single continuous workflow, the separation Satya questions, "why should Word, Excel, and PowerPoint be separate?", is disappearing.

Documents, applications, and websites are thus integrated - not because we want a "super application", but because this segmentation is a stopgap measure under technical limitations. When AI removes these limitations, information can finally flow in the natural way of human cognition.

Of course, Zuckerberg and Satya covered much more than this in their conversation, but this is the point that interests me most. For more, continue reading below; there is plenty of other valuable material.

1. AI: A disruptive platform transformation that defines the new era

At the beginning of the conversation, Zuckerberg opened the topic by noting that Satya has repeatedly compared the current AI boom to major technological shifts in history. Satya explained this in depth, reviewing a career that spanned the birth of the client-server architecture and the rise of the web, mobile, and cloud computing. In his view, the current wave of AI can be regarded as the fourth or fifth (depending on how you count) major platform-level transformation.

Satya emphasized that every platform transformation of this kind forces a "re-examination" and "re-construction" of the entire tech stack. As he put it, "you can almost go back to first principles and start building."

Taking cloud infrastructure as an example, he pointed out that the core storage systems designed for AI training are very different from the traditional cloud storage systems he began building around 2007-2008. AI training workloads, especially data-parallel synchronous workloads, are also fundamentally different from early distributed computing frameworks such as Hadoop. This end-to-end innovation, from underlying hardware to upper-level applications, is the core feature of a platform-level transformation: "The fact is that you have to rethink every link in the technology stack from top to bottom in every platform transformation. This is a challenge we often face."

He further added that although new technologies are often born from existing foundations (for example, the Internet was born on operating systems such as Windows), their development will eventually far exceed their initial carriers . This is his macro judgment on the current development of AI.   

2. AI Efficiency's "Super Moore's Law" and Consumption Growth

Zuckerberg then focused on a point Satya has made many times: as technology becomes more efficient and service costs fall, people ultimately consume more of the service, the famous Jevons paradox. He asked Satya how the efficiency gains of AI models show up concretely in Microsoft's huge enterprise business, especially given the rapid increase in the capabilities of each generation of models.

Satya responded that a few years ago the industry was still worried about the end of Moore's Law, but now we seem to have entered an era of a "crazy hyperdrive Moore's Law." He pointed out that a technology platform transformation is never the evolution of a single S-curve, but the superposition and compounding of multiple S-curves. In AI specifically, at the chip level, industry leaders such as Nvidia's Jensen Huang and AMD's Lisa Su are driving enormous innovation, and the chip iteration cycle keeps shortening, itself a manifestation of Moore's Law. Satya has expressed the same view in his public posts.

But more importantly, on top of this, optimization of the entire system is happening simultaneously: from data-center cluster management and system software optimization to innovations in model architecture, inference kernel optimization, and even advances in application servers and prompt caching, every layer is iterating rapidly. Satya concluded: "When you add all these factors together, we can see about a 10-fold performance improvement maybe every 6 to 12 months." This exponential growth in capability and efficiency, accompanied by a corresponding drop in prices, fundamentally drives a surge in consumption. He is therefore very optimistic about the deepening of AI applications and believes we are at a stage where complex, deep applications can be built.
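Satya's "10x every 6 to 12 months" is easiest to see as compounding: independent gains at each layer of the stack multiply rather than add. The per-layer factors below are invented for illustration, not measurements, but they show how a handful of modest improvements stack past an order of magnitude.

```python
# Back-of-the-envelope arithmetic for stacked S-curves: each layer of the
# stack improves independently, and the gains multiply. All factors here
# are hypothetical, chosen only to illustrate the compounding effect.

layers = {
    "chips": 2.0,               # faster accelerators per generation
    "system_software": 1.5,     # cluster management, scheduling
    "model_architecture": 2.0,  # e.g. better attention / MoE designs
    "inference_kernels": 1.5,   # fused, optimized kernels
    "serving": 1.7,             # app servers, prompt caching, batching
}

combined = 1.0
for name, factor in layers.items():
    combined *= factor
    print(f"{name:20s} x{factor:<4} -> cumulative x{combined:.1f}")

# Five modest per-layer gains already compound past 10x.
print(f"combined gain: x{combined:.1f}")
```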

3. Multi-model collaboration: Building the future of complex AI applications

As AI capabilities increase and costs decrease, Satya foresees a shift in application development from relying on a single model to more complex "multi-model applications." He explained: "We are finally entering the era of multi-model applications, where I can orchestrate an agent built on a certain model to interact with another agent in a certain workflow."   

He mentioned that in order to achieve smooth collaboration between multiple agents and models, some standardized protocols, such as MCP or A2A, are playing an increasingly important role . Satya believes: "If we can achieve standardization to a certain extent, then we can build applications that can take full advantage of these growth capabilities while maintaining flexibility."    

Zuckerberg agreed and brought up the concept of a "distillation factory," arguing that Microsoft has a unique advantage in providing infrastructure that supports multi-model collaboration. This suggests that future AI applications will no longer be simple calls to a single large model, but multiple optimized models (agents) with different functions working together through a carefully designed orchestration layer to complete complex tasks. This improves both the flexibility and efficiency of the system, and opens broad space for open-source models, which are easier to customize and integrate into such complex systems.
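The multi-model orchestration described above can be sketched with mock agents. In a real deployment each agent would wrap a different model and the agents would speak a protocol such as MCP or A2A; here the agent names, handlers, and pipeline are all made up for illustration.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Agent:
    """One node in a multi-model workflow; `handle` stands in for a model call."""
    name: str
    handle: Callable[[str], str]


def research_agent(task: str) -> str:
    # Hypothetical: a retrieval-tuned model gathering background material.
    return f"notes on '{task}'"


def writer_agent(task: str) -> str:
    # Hypothetical: a drafting model consuming the researcher's output.
    return f"draft based on {task}"


def orchestrate(task: str, pipeline: list[Agent]) -> str:
    """Pass the task through each agent in order, like a fixed workflow."""
    result = task
    for agent in pipeline:
        result = agent.handle(result)
        print(f"{agent.name}: {result}")
    return result


pipeline = [Agent("researcher", research_agent), Agent("writer", writer_agent)]
final = orchestrate("Llama 4 launch summary", pipeline)
```

A fixed linear pipeline is the simplest orchestration shape; real systems add branching, retries, and model selection, which is exactly where standard protocols earn their keep.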

4. Open Source: Microsoft's Transformation and Strategic Choice in the AI Era

Since this was Llama's developer conference, the conversation naturally turned to the place and role of open source in the AI ecosystem.

Zuckerberg pointed out that Microsoft, under Satya's leadership, has gone through an interesting open-source journey, from early caution to active embrace (readers familiar with Microsoft's history may recall that former CEO and current NBA Clippers owner Steve Ballmer said in a 2001 interview that Linux is "a cancer"). Microsoft not only formed an early partnership with OpenAI (whose models are mainly closed source) but has also made clear that it will strongly support the development of open-source models. Zuckerberg was curious how Satya thinks about this transformation, and how he views the evolution of the open-source ecosystem and its importance to Microsoft's customers and infrastructure.

Satya shared his journey candidly. He recalled that an important task in the early days of Microsoft was to ensure the interoperability between the Windows NT system and various Unix variants at the time. This experience made him realize that "interoperability is first and foremost a customer requirement. If you can do well in this regard, it is good for your business, and obviously you are also meeting the actual needs of your customers." This concept shaped his attitude towards open source.   

Satya stressed that he takes no dogmatic stance on closed versus open source: "I am not obsessed with closed source or open source; the world needs both." He believes that even if individuals or companies have their own preferences, the market will eventually choose, and customer demand will decide everything. He cited the coexistence of SQL Server with MySQL/Postgres and of Windows with Linux, and even mentioned his personal favorite, the Windows Subsystem for Linux (WSL), because it makes it far easier for developers to use all kinds of development tools on Windows.

He therefore believes a posture that allows mixing and matching open-source and closed-source solutions is extremely beneficial, which dovetails with the multi-model collaboration and "distillation factory" ideas discussed earlier. Satya pointed out that many enterprise customers want to "distill" their own models, because those models carry their intellectual property (IP).

In this scenario, "an open-weight model has huge structural advantages over a closed-source model." He concluded: "So I do feel that the world today will be better served by having excellent closed-source cutting-edge models and excellent open-source cutting-edge models. For hyperscalers like us, this is a great thing, because at the end of the day, our job is to serve." Just as customers on Azure can choose between Postgres, Linux virtual machines, SQL Server, and Windows virtual machines, Microsoft hopes to offer the same rich choice and complete toolchain support for AI models.

5. Azure empowers developers: building world-class AI infrastructure and toolchain

Zuckerberg further asked what the core positioning and differentiated advantages of the Azure platform are in supporting open source (and all) AI developers.   

Satya first emphasized that an AI workload is far more than an AI accelerator and a model at inference time. Its underlying infrastructure relies on large amounts of storage, general-purpose compute (not just AI accelerators), and high-performance networking. Azure's top priority is therefore to "build world-class compute, storage, networking, and AI accelerators as a service (IaaS) to provide a solid foundation for developers who want to build the next generation of agents."

On top of this, Microsoft is building an application-server layer through its Foundry platform. Satya explained that in each of Microsoft's platform transitions, the application server has played a key role: it packages up services (such as search, memory, and security evaluation) for developers to call, components every developer needs when building an application. Encapsulating these services into frameworks and tools is another core task for Azure.

Finally, GitHub Copilot is also a development tool that Microsoft attaches great importance to, and its progress is encouraging. Satya concluded: " The combination of excellent tools, excellent application servers, and powerful infrastructure is what we believe is necessary to accelerate application development."    

6. AI drives productivity revolution: from code generation to knowledge work reshaping

Next, the conversation turned to the huge potential of AI agents for improving productivity, undoubtedly one of the core themes for the entire AI ecosystem and developer community. Zuckerberg asked Satya how this trend is playing out inside Microsoft and what the most interesting examples are among external customers.

Satya said the changes in software development are the most intuitive example. Take the evolution of GitHub Copilot: it initially provided code completions; later a chat function was added so developers no longer needed to jump to external sites such as Reddit or Stack Overflow for answers, preserving the continuity of the workflow; later still, an agentic workflow was developed that can be assigned tasks directly. Today there are even "proto-SWE agents" that can complete coding tasks from high-level prompts or have pull requests (PRs) assigned to them.

Satya emphasized that these features are not mutually exclusive, but are used by developers in their daily work. The key to implementing these features and improving productivity is that "you must integrate all of these with your existing code repository (repo) and your current developer workflow."  

After all, most developers don't spend all their time on brand-new "greenfield" apps; they develop inside large codebases and complex processes. This deep toolchain integration is exactly the "systems work" the engineering team needs to do, and it is the prerequisite for seeing real productivity gains.

The same logic applies to other areas of knowledge work. Satya used sales as an example, describing how his own preparation for client meetings has changed. The process of preparing for an enterprise client meeting had hardly changed since he joined Microsoft in 1992: someone would write a report and send it by email or as a shared document, and he would read it the night before the meeting. Now, "I can get real-time reports that integrate all relevant data from the Internet, internal materials, and even the CRM system by simply using the research function in Copilot." There is no longer any need for someone to prepare these materials specially, because the information is "on tap." This transformation "requires you to change the way you work, the work artifact, and the work flow." Satya believes the change may be slow at first and then suddenly accelerate, just as the spread of personal computers changed how companies forecast: from faxes and internal memos to spreadsheets sent by email. He believes we are at the beginning of this change and have already seen tangible progress and productivity gains in areas such as customer service, marketing, and content creation.

7. AI Coding Proportion and the Role of Future Software Engineers

Zuckerberg was very interested in the specific contribution of AI to Microsoft's internal coding. He asked: "What percentage of the code in Microsoft is currently written by AI rather than by engineers?"    

Satya responded that they mainly track two metrics. One is the code acceptance rate, now around 30-40% and still rising. The rate also varies by programming language: Microsoft still has a great deal of C++ code, and early AI support for C++ lagged behind Python's, though the situation has since improved greatly. As support for more languages strengthens, code completion keeps getting better.

As for code generated by AI agents, it is still at a "nascent stage." In brand-new greenfield projects the proportion of AI-written code is very high, but as noted above, much work is not brand new. However, Satya said the use of AI in code review has grown significantly. He estimated: "Right now in some of our projects, about 20-30% of the code in the code base may be written by software (AI)."

Zuckerberg also shared Meta's observations and goals. Although he did not give exact figures for the percentage of AI-written code, he mentioned that many teams within Meta (such as the feed-ranking and ads-ranking teams) are running experiments in specific areas, letting AI generate code automatically by analyzing historical changes. Meta's broader goal is to build an AI and machine-learning engineer agent to accelerate development of the Llama models themselves.

Zuckerberg made a bold prediction: "We think that maybe within the next year, about half of (Llama) development work will be done by AI rather than humans, and that proportion will continue to increase thereafter." He is mainly concerned with tasks such as optimization and security improvements, and believes that AI has great potential in these areas.   

Both CEOs foresee that the role of the software engineer will change. As Zuckerberg vividly put it: "Each engineer in the future is actually more like a technical lead, leading a 'small team' of AI engineering agents to work with them." Satya, thinking as a tools and infrastructure provider, believes Microsoft's tools and infrastructure may increasingly serve these AI agents themselves: designing appropriate tools, infrastructure, and sandbox environments for them, and even redefining the form of GitHub repositories to meet agents' needs.

8. “Distillation Factory”: Unleashing the potential of open source models and popularizing inclusive AI

The conversation returned to the concept of the "distillation factory": how to turn large, complex AI models into smaller, more focused, more efficient models that are easy to deploy and use, via "distillation." Satya believes this is one of the areas where open-source models can play a huge role.

He explained: "For example, within the Llama family, distilling a large model into a smaller model that maintains the same model structure is a very important use case." Microsoft's goal is to build tools and services for this purpose to lower the threshold for developers to perform model distillation. He envisions that if every tenant of Microsoft 365 can easily create a distilled, task-specific model (as part of an agent or workflow) and call it from within Copilot, this will be a breakthrough scenario .   

Zuckerberg strongly agreed, calling distillation one of the most powerful features of open source. He acknowledged that similar work is underway inside Meta: they are developing a very large model code-named "Behemoth," whose main purpose is to produce more practical models through distillation. In fact, the strong performance of Meta's Maverick model (a leading multimodal model with text performance comparable to top text models at a smaller size) owes much to distillation from pretrained large models like Behemoth. As Zuckerberg put it: "Distillation is like magic. You can basically get 90% or 95% of its intelligence with a model that is one-twentieth the size, and this small model is cheaper and more efficient to run."
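The distillation Zuckerberg describes can be illustrated numerically: the student model is trained to match the teacher's softened output distribution, typically by minimizing a KL divergence at an elevated temperature. The logit values below are invented, and this is a single-step sketch of the loss, not Meta's actual training recipe.

```python
import math


def softmax(logits, temperature=1.0):
    """Convert raw scores to probabilities; higher temperature softens them."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]


def kl_divergence(p, q):
    """KL(p || q): how far the student distribution q is from the teacher p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)


teacher_logits = [4.0, 1.0, 0.2]  # hypothetical big-model scores over 3 classes
student_logits = [3.5, 1.2, 0.1]  # hypothetical smaller model, roughly matching

# A higher temperature exposes the teacher's "dark knowledge": the relative
# probabilities it assigns to the non-top classes, which hard labels discard.
T = 2.0
p = softmax(teacher_logits, T)
q = softmax(student_logits, T)
loss = kl_divergence(p, q)
print(f"distillation loss at T={T}: {loss:.5f}")
```

Training the student drives this loss toward zero across many examples, which is how a much smaller model inherits most of the teacher's behavior.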

Both CEOs see great value in democratizing this capability. Today, only a limited number of labs can perform complex model distillation or operate ultra-large models. By building powerful infrastructure and easy-to-use tools, they hope to let a much wider range of developers worldwide benefit from this technology, not only distilling single models but in the future even mixing and matching the strengths of different models to create unprecedented applications. Satya emphasized that the key is to lower the barrier to entry, increase iteration speed, and ensure developers can quickly adopt the latest model advances rather than being bound to past achievements.

Zuckerberg also mentioned that model shape is closely tied to target hardware and application scenarios. For example, Llama 4's choice of 17 billion parameters per expert network was made so it runs efficiently on the H100 chips Meta uses widely. The open-source community also has strong demand for smaller models: the 8-billion-parameter (8B) version of Llama 3 is very popular, and Meta is developing an even smaller "Little Llama" for laptops, phones, and other end devices. This ability to flexibly pour the intelligence of large models into various form factors is seen as one of the core trends in AI's future development.

Satya added that the industry is also actively exploring hybrid models (such as combinations of sparse mixture-of-experts (MoE) and dense models) and "thinking models" that can adjust their thinking time and latency on demand, in order to strike the best balance between performance and efficiency.

9. Dissolving the boundaries between documents, applications and websites: a new paradigm in the AI era (the point discussed at the beginning)

Satya then shared a very profound insight that came from thinking that Bill Gates had always promoted within Microsoft: "Bill always had us think about what the difference is between a document, an application, and a website."

In the age of AI, these boundaries are blurring rapidly. "Now when you use tools like Meta AI, ChatGPT, Copilot, it's hard to tell the line between a chat session and a document." Satya used his own workflow as an example: "Just today, when I was reading about all the models in Llama 4, I actually went through a series of chat sessions and then added the content to a Pages document and saved it. With code completion, you can also turn it into an app."

"This process of starting with a high-level intent and ending up with a 'living product' - what we used to call an application - will have a profound impact on workflows." Satya believes that we are at the beginning of this transformation. "As a builder of infrastructure and tools, but also a user, my dream is to transcend these artificially created category boundaries."

He explained further: "These boundaries exist mainly because of limitations in how software works. Why should Word, Excel, and PowerPoint be separate? Why can't it be a unified thing? We have tried many times to integrate them, but now you can really imagine this unification: you can start in Word, visualize the data as in Excel, and then present it. They can all be persisted as one data structure. The malleability that was never quite strong enough in the past is now realized."

10. New engine of GDP growth: Historic opportunity of AI as a production factor

When the discussion turned to the macroeconomic impact of AI, Satya argued that if AI is to deliver a significant increase in productivity, it must show up as significant GDP growth. "For us, this is a pretty existential priority. The world needs a new factor of production that allows us to meet the many challenges we face."

He proposes a bold thought experiment: " Imagine if developed countries could achieve 10% growth, which would be the growth rate seen at the height of the Industrial Revolution ." To achieve this, "you would have to achieve productivity gains in every sector: health care, retail, broad knowledge work, any industry."

Satya believes AI has that potential, but the real challenge lies in implementation: "This requires not only software innovation, but also management changes. People have to use AI in a different way." He cited the classic example of the electricity revolution: "Electricity had been around for 50 years before people realized that they had to really change the design of the factory to make full use of it. The Ford case study is a famous example."

“We’re somewhere in the middle right now,” Satya said. “I hope it won’t take 50 years. But if we just think of AI as a ‘horseless carriage’, we won’t be able to make the real leap. It’s not just a technology problem. The technology has to advance, but you also need to integrate it into the system to really enable new ways of working, work outputs and workflows.”

Zuckerberg responded humorously: "We're all investing as if this isn't going to take 50 years. So I hope it isn't."

11. Embracing the future: Optimism and the developer's mission in the AI era

At the end of the conversation, Zuckerberg invited Satya to share his outlook on the future development of AI and developer opportunities in the coming years, especially the aspects that he is most optimistic and excited about.   

Satya quoted Bob Dylan: "Either you're busy being born or you're busy dying," adding that "it's always better to be busy being born." What makes him most optimistic, especially in this era, is that even in the face of all kinds of constraints, software, above all software in its new AI form, remains the most malleable resource we have for attacking hard problems.

Satya encourages developers to participate actively (to borrow a phrase popular in China these past two years, to "get involved") and to create practical solutions. Whether it is the backlog of IT needs inside enterprises or the unsolved complex problems of the real world, new methods are needed to tackle them. He believes: "This is the greatest benefit of all these technologies, and it will ultimately depend on whether developers can move forward and practice them."