In-depth research: Models are products, and agents are profits

AI is reshaping a trillion-dollar market; the future belongs to those who seize the application-layer opportunity.
Core content:
1. The huge potential of the AI market and the trend of profit pool transformation
2. Application layer entrepreneurial strategies and moat building methods
3. AI technology breakthroughs and commercialization progress in 2023-2024
Sequoia Capital AI Ascent 2025 Keynote: AI’s Trillion-Dollar Opportunity
1. Macro perspective and opportunities of AI market (So What?)
* Huge market potential: the AI services market starts from a point far beyond where the cloud computing transition began, and it could become an extremely large market over the next ten or even twenty years.
* Shifting profit pools: AI is attacking profit pools not only in the services industry but also in the software industry. Companies are shifting from "selling tools" (software budgets) to "selling results" (labor budgets), which expands the total addressable market (TAM).
* Unprecedented adoption rate:
  * The time has come (Imminent, not just Inevitable): the prerequisites of compute, networks, data, and talent are all mature.
  * The physics of distribution has changed: with the popularity of ChatGPT, social media (Reddit, X), and the global Internet, new products can be recognized, demanded, and purchased at unprecedented speed.
2. The winning strategy for AI startups (What Now?)
* The application layer is the main battlefield: history shows that in past technology waves, the vast majority of companies that reached billion-dollar revenues sat at the application layer. Sequoia believes the same will hold in the AI era. Facing foundation-model providers extending into the application layer, startups should adopt a "customer-first" strategy, focus on complex problems in vertical industries or specific functions, and even put a "human in the loop."
* Core elements for building an AI company:
  * Build a moat: provide end-to-end solutions, use product usage data to build a data flywheel, and bring deep industry knowledge (e.g., Harvey sending lawyers to talk with law firms).
  * Beware of "Vibe Revenue": analyze user data (adoption, engagement, retention) deeply enough to distinguish real changes in user behavior from superficial "trial" enthusiasm.
  * Plan a clear path to margin: gross margins are currently low due to token costs, but there must be a clear path to healthy gross margins as costs fall and the product's value evolves toward "selling results."
  * The data flywheel must work: it must demonstrably improve a key business metric, otherwise it is worthless.
* 95% are universal entrepreneurial rules: basics such as solving important problems and attracting outstanding talent remain unchanged.
* 5% are AI-specific rules:
  * Move fast (Maximum Velocity): market demand is enormous ("huge attraction"), and macroeconomic swings matter little. "Nature abhors a vacuum," so you must move at maximum speed to seize the opening.
3. Current Status and Recent Development of AI (Year in Review: 2023-2024)
* User engagement has increased significantly: taking ChatGPT as an example, the daily-active/monthly-active (DAU/MAU) ratio has risen markedly, approaching Reddit's level, a sign that AI is being deeply integrated into daily life and work.
* Breakthroughs:
  * Speech generation: the "Her moment" has arrived; voice synthesis has crossed the "uncanny valley" and realism has improved dramatically.
  * Coding: the breakthrough application category of the year, achieving "screaming product-market fit" and fundamentally changing the economics and accessibility of software development.
* Technology trends: although pre-training gains have slowed, reasoning, synthetic data, tool use, and AI scaffolding have become the new drivers of scaling intelligence. The most innovative work happens at the blurred boundary between research and product (e.g., Deep Research, NotebookLM).
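The DAU/MAU "stickiness" ratio mentioned above is straightforward to compute; the figures below are purely illustrative, not ChatGPT's or Reddit's actual numbers:

```python
def stickiness(dau: float, mau: float) -> float:
    """DAU/MAU ratio: the fraction of a product's monthly users
    who are active on a typical day."""
    return dau / mau

# Hypothetical example: 50M daily actives out of 100M monthly actives
# gives the ~50% stickiness often attributed to Reddit-class products.
print(f"{stickiness(50e6, 100e6):.0%}")  # -> 50%
```

A rising DAU/MAU ratio is the signal the talk points to: it means users are returning daily rather than trying the product once a month.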
4. Future prospects of AI: the agent economy
* Evolution path: from single agents -> agent swarms -> an agent economy.
* Definition of the agent economy: agents not only exchange information but also transfer resources, conduct transactions, and carry notions of trust and reliability. It will be an economy in which people and agents work together.
* Technical challenges in realizing the agent economy:
  * Persistent identity: agents need a consistent "personality" and reliable "memory" that go beyond existing RAG and vector-database techniques.
  * Seamless communication protocols: a protocol layer analogous to the Internet's TCP/IP is needed so that information, value, and trust can flow efficiently.
  * Security: in an agent world without face-to-face interaction, trust and security matter more than ever.
5. AI reshapes the way we think and work
* Mindset shifts:
  * Stochastic mindset: move from deterministic computing to accepting and managing stochasticity (AI output may not be 100% accurate or consistent).
  * Management mindset: people's roles will shift toward "managing" AI agents, which demands more complex management decisions.
* Work paradigm change: more leverage with less certainty. Individuals and organizations can accomplish more, but must manage uncertainty and risk better.
* Reshaping organizations and economies: AI agents will integrate functions and complete end-to-end processes, ultimately forming complex "neural networks within neural networks" that reshape individual jobs, companies, and the entire economy.
Detailed explanation: Macro opportunities, entrepreneurial path, technological frontiers and future economic forms
1. Macro perspective and opportunities of AI market (So What?)
Artificial intelligence (AI) is reshaping the global economy and technology landscape at an unprecedented speed and scale. This chapter aims to deeply analyze the macro potential of the AI market, reveal its similarities and differences with previous technology waves, explain its impact on the existing industry profit structure, and explore the deep-seated factors driving its unprecedented adoption speed.
A. Market potential: The starting point is far beyond the initial stage of cloud computing transformation
The starting point and growth potential of the AI service market indicate a huge market that far exceeds the initial stage of cloud computing transformation. The global artificial intelligence market size has reached US$279.22 billion in 2024 and is expected to surge to US$1,811.75 billion by 2030, with a compound annual growth rate (CAGR) of 35.9%. Other market studies have also echoed this high-growth trend. Some predict that the AI market may reach US$826 billion by 2030, which means a 350% increase. Other forecasts believe that the market size will reach US$1,339 billion in 2030, a significant leap from US$214 billion in 2024. Together, these data depict an industry that is experiencing explosive growth, providing a broad battlefield for value creation and market leadership.
Comparisons with the early cloud computing market highlight AI’s unique trajectory. The AI market is expected to grow 2.5 times faster than the cloud computing market by 2030. Specifically, the AI sector is expected to grow 350% by 2030, while the public cloud market is expected to grow 133% over the same period. For reference, the US cloud computing market is valued at $216.91 billion in 2023 and is expected to grow at a CAGR of 20.3% from 2024 to 2030, while the global cloud computing market is expected to grow from $766 billion in 2024 to $3.5 trillion in 2035, at a CAGR of 14.623%. These comparisons clearly show that AI is growing much faster than cloud computing, which means that the window for AI to establish market dominance and capture market share may be shorter and more competitive.
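The growth figures above can be sanity-checked with the standard CAGR formula; small differences from the cited 35.9% likely come down to which base year and endpoints each study compounds between:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two points in time."""
    return (end_value / start_value) ** (1 / years) - 1

def cumulative_growth(rate: float, years: int) -> float:
    """Total growth implied by compounding an annual rate over a horizon."""
    return (1 + rate) ** years - 1

# Figures quoted above: US$279.22B (2024) -> US$1,811.75B (2030), six years.
print(f"{cagr(279.22, 1811.75, 6):.1%}")     # ~36.6%, in line with the cited 35.9%

# Cumulative growth implied by the cited 20.3% US cloud CAGR over six years:
print(f"{cumulative_growth(0.203, 6):.0%}")  # ~203%
```

Note that a CAGR and a cumulative-growth percentage over the same window are different quantities, which is why the 350% AI figure and the 35.9% CAGR can both be correct descriptions of the same forecast.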
In terms of the internal structure of the AI market, software solutions accounted for 35.0% of global revenue in 2024, leading the way. From a functional perspective, the operational link is the main area of AI application; from the perspective of end use, the advertising and media industry accounted for the largest revenue share in 2024. Deep learning technology has become the technology leader in the market with its outstanding performance in complex data-driven applications such as voice and text recognition. Currently, North America is the largest AI market, but the Asia-Pacific region is growing at the fastest rate, indicating a potential shift in global AI influence in the future.
Table 1: Global AI market size and growth forecast (2024-2030)
Table 2: Comparison of growth of AI market and cloud computing market
The higher growth rate of the AI market compared with cloud computing points to an accelerated cycle of industry disruption. Cloud computing, itself a major disruptive force, took years to mature and consolidate. AI's faster pace means the timeline for disrupting markets, crowning leaders, and displacing non-AI incumbents will be much shorter: enterprises have less time to adapt, traditional players face threats sooner, and newcomers have a narrower window in which to establish their position.
In addition, unlike the early growth of cloud computing, which was mainly driven by Infrastructure as a Service (IaaS), the current value capture of the AI market occurs earlier at the application level. Although AI also requires strong computing power as support, its main revenue driver has clearly pointed to software solutions and applications. This shows that although the "picks and shovels" (such as chips and basic models) in the field of AI are crucial, the battle for market share has already begun fiercely at the application and solution level.
The regional dynamics of the global AI market are also worth paying attention to. North America currently leads, but Asia Pacific, as the fastest growing market, shows the potential for rapid catch-up and future changes in the AI influence landscape. AI is expected to contribute 9.5% to global GDP growth by 2030, which means that the popularity and innovation of AI will become a key factor in measuring a country's future economic strength and may trigger a rebalancing of global economic power.
B. Shift in profit pools: AI’s dual attack on the service and software industries
AI is not only creating new markets but also profoundly changing how profits are distributed in existing industries, mounting a "dual attack" on the services and software industries. A core change is that companies are shifting from "selling tools" (which draw on software budgets) to "selling results" (which touch labor budgets). This shift greatly expands AI's total addressable market (TAM).
For the software industry, AI has brought about efficiency improvements and cost reductions. AI technology can automate many aspects of software development, such as code generation, debugging, and testing, thereby shortening product time to market and potentially reducing labor costs. For example, AI-driven code generators and automated testing frameworks can significantly accelerate the development process. This efficiency improvement may put pressure on the pricing model of traditional software. If AI can provide similar or even better functions at a lower cost or higher efficiency, software companies must actively integrate AI to remain competitive.
For the service industry, AI's automation capabilities are equally significant. AI can automatically handle tasks such as customer service inquiries and report generation, thereby reducing operational expenses. At the same time, AI has opened up new sources of revenue for the service industry by enabling data analysis, predictive capabilities, and personalized customer interactions. Labor-intensive service industries are particularly suitable for AI-driven automation and enhancement. Enterprises can increase efficiency by 30% to 42% by using AI to optimize development processes. It is also estimated that generative AI alone can add $2.6 trillion to $4.4 trillion in value to the global economy each year through productivity improvements.
This shift from "selling tools" to "selling results" is redefining value. Traditional software is sold as a tool, and realizing its value is left to the customer. AI enables solutions that deliver specific, measurable results, such as cutting customer-service calls or lifting sales conversion by a given percentage. Suppliers can therefore price on the value delivered rather than on features, reaching into the customer's much larger "labor budget" or operating budget. This paradigm shift changes not only the supplier-customer relationship but also the nature of competition: future success will depend on deeply understanding customers' business problems and delivering quantifiable results, not merely on technically advanced tools.
As AI becomes more deeply integrated into products and services, the lines between software companies and service companies are blurring. Software companies are increasingly providing AI-driven services or outcomes, while service companies are leveraging software-based AI to enhance or automate their services. This convergence will impact market entry strategies, talent needs, and the overall competitive landscape. For software vendors that rely on non-differentiated features or high-volume, low-complexity solutions, AI-driven development efficiency gains could squeeze their profit margins. Value will shift more toward highly specialized, AI-native solutions or platforms that offer unique, hard-to-copy capabilities.
C. Unprecedented adoption: The maturity of prerequisites such as computing, networks, data, and talent
The key reason why AI technology can be adopted by the market at an unprecedented speed is that the prerequisites required for its development, such as computing power, network infrastructure, data resources and professional talents, have reached a high level of maturity.
First, a solid data infrastructure is the cornerstone of AI development. Enterprises need to be able to obtain clear and organized data and invest in scalable and future-oriented data infrastructure to integrate data from different sources and in different forms to gain valuable business insights. The combination of cloud computing and AI, especially with technologies such as the Internet of Things (IoT), blockchain, and 5G, is generating massive amounts of data, which in turn increases the demand for AI-driven solutions.
Secondly, the huge progress in computing power and the increase in storage capacity have provided impetus for the popularization of AI. The increase in information storage capacity, high-performance computing, and the advancement of parallel processing capabilities are key factors in AI's ability to provide high-end services.
Furthermore, the innovation, accessibility and measurability of AI technology itself have reached new heights, making it a key tool for enterprises to gain competitive advantage. Enterprises recognize the huge potential of AI in automating tasks, optimizing operations and creating better customer experience, which is the main business motivation driving their adoption of AI. 72% of enterprises have applied AI to at least one business function.
Successful AI adoption often starts with a clear strategic vision that closely aligns AI initiatives with core business goals, such as improving customer experience, optimizing operations, or creating new products. At the same time, cultivating and introducing AI talent and forming a cross-functional team of data scientists, engineers, domain experts, and business leaders are critical to bridging the gap between technical capabilities and business needs.
In addition, the experimental nature of AI projects requires companies to foster a culture that encourages iteration, testing, and continuous learning. In high-risk, high-impact fields such as healthcare and finance, building ethical and responsible AI systems is also essential to maintaining trust in technology and driving adoption.
The maturity of these prerequisites has created a "flywheel effect." Computing, data, algorithms, and talent not only empower AI individually, but they also promote each other through AI. For example, AI improves data processing and analysis capabilities, thereby providing higher-quality data for training new AI models; better AI tools attract and cultivate more talent. This self-reinforcing cycle accelerates progress in various fields, resulting in an exponential growth in the speed of AI adoption, and making it increasingly difficult for latecomers to catch up.
In this rapidly evolving context, organizational agility becomes a key differentiator. Static plans or rigid organizational structures are no longer effective in the face of rapid changes in AI and its prerequisites. Companies that can quickly form cross-functional teams, dare to experiment, learn quickly, and flexibly adjust strategies will be better able to seize the opportunities brought by AI. More than a third (35%) of companies that are simultaneously advancing AI and cloud technologies have adopted AI faster than cloud technology. This highlights the importance of organizational learning and adaptation speed, leadership needs to shift to fostering agility and continuous learning, and HR departments need to have the ability to quickly reshape employee skills.
The symbiotic relationship between data and computing is also driving the development of specialized AI. The data explosion brought about by technologies such as the Internet of Things and 5G provides the "fuel" for AI, and the efficient processing of these massive and often specialized data requires increasingly powerful and specialized computing resources, such as application-specific integrated circuits (ASICs) and custom chips. This symbiotic relationship means that as data sources diversify and the amount of data surges, the demand for customized AI models and the hardware that runs them will continue to grow. This will not only drive further innovation in AI algorithms and semiconductor design, but also highlight the strategic significance of controlling data access and computing power.
D. The physics of distribution has changed: the global Internet and social media enable the rapid spread of awareness, demand, and purchases
Phenomenal applications such as ChatGPT, combined with social media platforms such as Reddit and X (formerly Twitter) and the global spread of the Internet, have fundamentally changed the way new products are perceived, demanded, and purchased, allowing innovative results to reach global users at an unprecedented speed.
The AI market has more than doubled in value over the past five years, attracting 370 million users worldwide. Platforms like ChatGPT, which has a user base of 180.5 million by August 2023, are a testament to this accelerated spread. Consumer familiarity with AI is increasing, with 55% of Americans already interacting with it regularly. Businesses are adopting AI just as quickly, with 72% already using it for at least one business function and 97% of business owners believing ChatGPT will benefit their business.
This new “physics of distribution” stems from the confluence of several key factors. First, the inherent “wow” effect of generative AI tools like ChatGPT makes them easy to share and discuss on social media. Second, these tools are often free or freemium models, accessible through a simple web interface or API, which greatly reduces the barrier for users to try. This ease of use and immediate utility together contribute to rapid viral spread. As the number of users grows, they create content, share experiences, and build communities, further amplifying the awareness and adoption of AI and forming a powerful network effect. This may lead to a “winner takes all” situation in the field of AI platforms, which also means that the marketing and distribution strategies of AI products must make full use of these new channels and pursue organic, community-driven growth.
The global spread of the Internet and the easy availability of AI tools have significantly lowered the threshold for individuals and enterprises to try AI. Unlike previous enterprise technologies that required a lot of upfront investment or expertise, many AI tools can be tried at almost no cost. This convenient experimental condition accelerates the user's learning curve and helps them quickly identify valuable application scenarios. The result is a shortened AI application innovation cycle, and new ideas can be prototyped and tested more quickly, giving rise to a more dynamic and competitive market environment and making AI capabilities more widely available.
As users grow accustomed to the instant answers, content generation, and personalized interaction offered by tools like ChatGPT, their expectations of all digital products and services are shifting. AI-driven features, intelligence, and personalization will increasingly be treated as standard; products lacking these "AI-native" qualities may look outdated or low-value. This pressures every software and service provider to integrate AI, creating opportunities for AI-first companies while challenging incumbents slow to adapt. It also connects to the "vibe revenue" concept discussed later: initial adoption may be driven by novelty, but continued use depends on genuine value integration.
2. The winning strategy for AI startups (What Now?)
In the context of the surging AI wave, start-ups are facing unprecedented opportunities and challenges. This chapter will explore the core strategies of AI entrepreneurship, including the main battlefield positioning of the application layer, the combination of general entrepreneurial rules and AI-specific success factors, vigilance against false prosperity, clear profit path planning, and the necessity of quick action.
A. The application layer is the main battlefield: customer-centric and deeply rooted in vertical industries
Historical experience shows that in past technology waves, the vast majority of billion-dollar companies were at the application layer. The AI era is expected to follow this pattern. Although the basic model and underlying technology are critical, the greatest business value is often achieved by solving practical problems for specific users and industries.
Faced with the trend of basic model providers extending to the application layer, AI startups should adopt a "customer-first" strategy. This means focusing on complex problems within vertical industries or specific functional areas, and even introducing a "human in the loop" mechanism to handle tasks that require a high degree of manual judgment or nuanced understanding. Vertical AI, with its deep industry knowledge and solutions to specific challenges, can bring significant competitive advantages to companies, such as improving production efficiency, reducing human errors, and driving considerable business value. This type of AI is usually trained with industry-specific data, provides highly customized insights, and can be seamlessly integrated into existing enterprise workflows.
Numerous success stories demonstrate the effectiveness of application-layer and vertical strategies. CarMax uses generative AI to summarize customer reviews for prospective buyers; Colgate-Palmolive uses AI to rapidly analyze consumer research data and test new product concepts. In fintech, Appinventiv worked with Mudra to build an AI-driven budget-management app; it also built voice-command controls for the social platform Vyrb and helped JobGet become a leading recruitment app for blue-collar workers. Each case shows AI solving complex problems in a specific application scenario.
Large basic model providers (horizontal players) have huge resources and can easily replicate general AI capabilities. Startups that try to compete with them on general capabilities have little chance of success. However, it is difficult for large horizontal players to go deep into specific vertical industries, understand their unique workflows, regulatory requirements, and data characteristics, and accumulate relevant proprietary knowledge and data in many vertical fields at the same time. Therefore, the verticalization strategy constitutes the primary moat for startups to resist the invasion of giants. By becoming an indispensable part of the workflow of a specific industry and taking advantage of industry barriers (such as specific licenses, compliance requirements, etc.), startups can establish a solid market position that is difficult to be easily replaced.
The ability to implement AI at the "last mile" is key to success at the application layer. General AI models provide powerful basic capabilities, but to effectively solve specific business problems, a lot of customization and integration work is often required. This "last mile" includes a deep understanding of the customer's unique background, data characteristics, existing workflows, and success criteria, and adjusting AI solutions accordingly. Application-layer startups that can excel in this "last mile" integration and customization will be able to provide higher customer value and establish more solid customer relationships. This means that the success of the application layer depends not only on the quality of the algorithm, but also on the ability to deliver the solution, change management capabilities, and ensuring that AI truly solves pain points in the customer's actual operating environment. This often requires an element of service or consulting, further blurring the boundaries between traditional software products and services.
B. The core elements of building an AI company: 95% general entrepreneurial rules and 5% AI-specific rules
Building a successful AI company relies overwhelmingly (95%) on universal entrepreneurial principles, while about 5% are AI-specific success factors.
1. 95% Universal Entrepreneurship Rules: Solve important problems and attract great talent
There is no essential difference between AI startups and startups in other fields. The core lies in:
* Solve important problems: identify and address real, valuable customer pain points.
* Attract outstanding talent: build a team with top capability across technology, product, marketing, and management.
* Achieve product-market fit (PMF): ensure the product meets market demand and is widely adopted by target customers.
* Build a robust business model: design for sustainable profitability and healthy unit economics.

These fundamentals are the cornerstone of any successful business, and AI companies are no exception. An excessive focus on AI technology should not obscure them.
2. 5% AI-specific rules: Build a moat, beware of false fire, and plan for profit
In addition to general rules, AI entrepreneurs also need to master some unique strategies in the AI field:
a. Building a sustainable moat: end-to-end solutions, data flywheel and deep industry knowledge
In the age of AI, building a strong moat is more important than ever. Startups must clearly articulate how their moat was built and how it will continue to deepen over time.
Providing end-to-end solutions : Compared with point solutions that only provide a single function, end-to-end solutions covering the entire workflow can create higher user stickiness and value. For example, Harvey AI provides law firms with not only AI tools for a certain link, but also a more comprehensive legal work assistance solution. fileAI has also built a similar moat by deeply integrating professional AI components into corporate document workflows.
Build an effective data flywheel from product usage data: the core of a data flywheel is using the data generated while the product is used to continuously improve the AI model and the product itself, which attracts more users, generates more data, and closes the loop.

* Effectiveness standard: a real data flywheel must demonstrably improve a key business metric; otherwise it exists only on paper. For example, NVIDIA's internal NVInfo chatbot used a flywheel built on NeMo microservices to raise its routing agent's accuracy above 96% while reducing cost and latency.
* Construction elements: building a data flywheel starts from the semantic layer, cultivates data literacy, and rests on data governance and quality control.
* Success cases: Carro, a Singapore car-trading platform, uses transaction data from its AI-driven end-to-end automotive ecosystem to continuously optimize pricing, financing, and insurance, forming a strong data-network moat. AwanTunai, an Indonesian fintech, improved its risk-assessment models by pairing its financial services with a proprietary ERP system that captures transaction data from small and medium-sized businesses, building a data-based barrier.

Building a data flywheel is not simple data accumulation but a strategic process that must be deliberately designed. The core questions: How does user interaction generate data? How is that data processed to improve the model? How does the improved model improve the user experience enough to drive more interaction? Tying this cycle to a specific, measurable business result (such as the accuracy gain of NVIDIA's NVInfo bot) is how you verify the flywheel works and prove its investment value. This requires product, engineering, and data science teams to work closely together and to treat data as a dynamic engine of continuous improvement rather than a static asset.
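The loop described above (usage produces data, data updates the model, the update is validated against one business metric) can be sketched with toy components. Everything here is hypothetical for illustration: `ToyRouter`, its keyword heuristic, and the sample queries are invented, not anything NVIDIA or Carro actually runs.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interaction:
    query: str
    feedback: Optional[bool]  # thumbs up/down, or None if the user gave none

class ToyRouter:
    """Stand-in for a routing model: sends queries containing a known
    keyword to the 'billing' agent, everything else to 'general'."""
    def __init__(self):
        self.keywords = {"billing"}

    def route(self, query: str) -> str:
        return "billing" if any(k in query for k in self.keywords) else "general"

    def update(self, labeled):
        # Toy learning rule: mine one new routing keyword ("invoice")
        # from positively-labeled interactions.
        for query, ok in labeled:
            if ok:
                self.keywords.update(w for w in query.split() if w == "invoice")

def flywheel_cycle(model, interactions, holdout):
    """One turn of the flywheel: usage data -> model update -> measured metric."""
    labeled = [(i.query, i.feedback) for i in interactions if i.feedback is not None]
    model.update(labeled)
    correct = sum(model.route(q) == want for q, want in holdout)
    return correct / len(holdout)  # the key business metric (routing accuracy)

interactions = [Interaction("where is my invoice", True),
                Interaction("reset password", None)]
holdout = [("invoice overdue", "billing"),
           ("billing question", "billing"),
           ("change avatar", "general")]

model = ToyRouter()
before = sum(model.route(q) == w for q, w in holdout) / len(holdout)  # 2/3
after = flywheel_cycle(model, interactions, holdout)                  # 1.0
```

The essential point the sketch captures is that the cycle reports a number (`routing accuracy`) before and after each turn; a flywheel that cannot show that delta against a holdout set is the "paper talk" the text warns about.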
Possessing deep industry knowledge (Vertical AI) : Focusing on a specific industry (vertical AI) allows startups to accumulate unique domain knowledge and proprietary data sets, thereby developing highly customized solutions that are difficult for general AI to match. * Successful cases : Sixfold in the insurance underwriting field, Rilla in the home service sales coaching field, Abridge in the medical notes field, and EvenUp in the legal claims document package field are all examples of companies that have deeply cultivated vertical industries. WIZ.AI has also established a strong vertical moat with its localized conversational AI for the Southeast Asian market, and Tonik, a digital bank in the Philippines, has combined regulatory advantages and innovative consumer credit products. * Building principles : Successful vertical AI companies usually identify the workflows that customers really want to automate, avoid applications that are easy to commoditize, look for tasks that AI can complete but humans are difficult to do, generate revenue for customers, explore new business models, and target market segments and workflows with delicate and complex needs that are easily overlooked. At the same time, although the model itself is difficult to form a lasting barrier, multimodal capabilities can enhance defensibility. Modular, scalable model stacks and an emphasis on high-quality data (quality over quantity in the early stages) are also key. In highly professional verticals such as law, healthcare, and finance, trust and adherence to existing workflows are critical. AI solutions that deeply understand and seamlessly integrate into these established processes while strictly adhering to data security and compliance requirements can establish significant switching costs. This "stickiness" comes not only from proprietary data, but also from becoming an integral part of professionals' core work. 
For vertical AI startups, then, user empathy, in-depth workflow analysis, and co-creation with industry experts matter as much as the AI technology itself; the moat spans data, process knowledge, and user trust. In many complex or high-risk verticals, full AI automation is not yet feasible or not yet trusted by users and customers. Introducing a human in the loop (HITL) there is not a stopgap but a strategic choice: it lets startups handle complex cases, ensure service quality, and build trust. More importantly, the data generated during human supervision and correction forms a highly valuable proprietary dataset for further model optimization, which is itself a specific kind of data flywheel. This strategy lets AI learn from expert human judgment, allowing startups to deliver superior quality in complex markets and to create data assets that competitors, especially those that pursue full automation too early, find difficult to replicate.
b. Beware of “Vibe Revenue”: Analyze user data in depth to distinguish between real behavior and superficial popularity
AI products, and generative AI in particular, often attract a wave of early users through sheer novelty and "wow" effects, generating seemingly considerable revenue. But this may be mere "vibe revenue", driven by curiosity, novelty, or fear of missing out (FOMO) rather than by the product actually solving a persistent pain point in the user's workflow.
Characteristics of vibe revenue: a high initial conversion rate and a promising short-term growth curve, but poor retention after 3-6 months, limited account expansion, and high sensitivity to new substitutes.
* Harm: vibe revenue can perfectly mimic the metrics of real product-market fit (PMF) in the short term, which makes it very deceptive; as the novelty fades, these metrics inevitably weaken.
* Why AI is special: unlike some purely hyped technologies of the past (e.g., certain Web3 projects), current AI products do work and do provide initial value, which makes vibe revenue harder to identify.
* How to identify it: analyze user data in depth, including adoption, engagement, and retention, to distinguish real changes in user behavior from superficial trial enthusiasm. The key questions are whether the product has been integrated into users' daily workflows and whether cohort retention beyond 6 months is strong and still growing. Other useful metrics include customer lifetime value (LTV) and churn. Customer retention rate can be computed as: (customers at end of period - new customers acquired) / customers at start of period x 100.
* Engagement measurement for generative AI: output-accuracy scores (e.g., F1/BLEU), adoption rate, frequency of use, session duration and queries per session, query length, and thumbs-up/down feedback. For predictive user segmentation, focus on open/click rates, conversion, churn, repurchase rate, average order value (AOV), return on ad spend (ROAS), and qualitative feedback.
* User behavior analysis: methods include click tracking, scroll tracking, session recording, and heat maps, using tools such as Google Analytics, Mixpanel, and Qualtrics.
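The retention formula above can be sketched in a few lines of Python. The cohort numbers below are invented purely for illustration; the point is the shape of the curve, not the values.

```python
def retention_rate(start_customers: int, end_customers: int, new_customers: int) -> float:
    """Customer retention rate over a period, per the formula above:
    (end - new) / start * 100."""
    if start_customers <= 0:
        raise ValueError("start_customers must be positive")
    return (end_customers - new_customers) / start_customers * 100


def cohort_retention(active_by_month: list[int]) -> list[float]:
    """Share of a signup cohort still active in each subsequent month.
    A flat tail after month 6 suggests real PMF; steady decay toward
    zero is the signature of vibe revenue."""
    base = active_by_month[0]
    return [round(active / base, 2) for active in active_by_month]


# A cohort of 1,000 signups with heavy decay (vibe revenue) versus
# one that plateaus (product integrated into the workflow).
vibe = cohort_retention([1000, 520, 280, 150, 90, 60, 40])
sticky = cohort_retention([1000, 700, 620, 590, 580, 575, 570])

print(retention_rate(start_customers=200, end_customers=230, new_customers=50))  # 90.0
print(vibe[-1], sticky[-1])  # 0.04 0.57
```

Both cohorts look identical in month 0; only the 6-month tail separates them, which is exactly why short-horizon metrics are so deceptive.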
Table 3: Differentiating between “atmospheric revenue” and sustainable product-market fit (PMF) for AI products
The immediate, tangible (and often impressive) output of AI products gives them a "novelty premium". Users may try or even pay for a product out of curiosity or excitement, even when it does not solve their underlying pain points, which inflates early adoption and revenue figures that fail to reflect long-term utility. Standard SaaS metrics can therefore be misleading in the early stages of an AI product; more weight must be placed on qualitative signals (such as integration into the workflow) and on longer-horizon cohort retention (beyond 6 months) to determine whether the product's value survives once the novelty wears off.
The AI field's rapid iteration cycle can also deepen the vibe-revenue trap. New models and tools emerge constantly, and if a product's main appeal is the novelty of its current technological lead, it is easily displaced by the next slightly better or cooler tool. Users then hop frequently between tools, leaving many products with high initial sign-ups but poor long-term retention. To escape this dilemma, AI startups must build deep process integration, proprietary data advantages, or strong community effects that create lasting stickiness beyond the capabilities of the core model. Relying solely on having the "best model" is a fragile strategy.
c. Clear profit margin path: Dealing with token costs and achieving value enhancement
Although current AI applications (especially those built on large language models) may have low gross margins due to token costs, startups must plan a clear path to healthy gross margins. That path usually depends on future declines in token costs and on the higher perceived value, and willingness to pay, that comes as the product evolves toward selling results.
Token cost vs. performance trade-offs: AI profitability depends largely on performance, especially for large enterprise customers, who often value performance and security over pure cost. Expensive but high-performance frontier models are more likely to be adopted by large enterprises, while low-cost models fit consumer AI and small and medium-sized businesses. Leading AI chips keep their competitive advantage because they generate more tokens per second (a measure of model response speed).
* Cost dynamics are complex: AI operating costs are not destined to keep falling. "Agentic AI" that performs multi-step reasoning and self-optimization, for example, may require two or three times the computing resources, pushing costs up.
* The key to profitability: the ultimate winners will be the companies that control the economics of AI (hardware efficiency, model training costs, cloud infrastructure), i.e., those who can build AI at the lowest cost and sell it at the highest price.
* Measuring GenAI return on investment (ROI): consider operational metrics (development time and cost savings, defect rates), quality-assurance metrics (test-case generation efficiency, bug detection rates), marketing metrics (lead generation, customer acquisition cost), customer-service and sales metrics (call-center volume, customer satisfaction, sales-cycle length), and adoption and impact metrics (employee productivity, revenue growth) together.
In the B2B space, AI companies must master value-based sales and pricing. That requires a deep understanding of the customer's ROI and the ability to clearly articulate and demonstrate the economic benefits of the AI solution, justifying its price regardless of the underlying token cost. As models become more efficient and the token cost per unit of computation falls, one might expect AI's overall operating cost to fall too. But as the Jevons Paradox historically shows (greater efficiency in coal use led to greater total consumption), falling unit costs can trigger a surge in usage and the emergence of more complex applications (such as multi-step agentic AI). So while the cost of an individual AI task may drop, total spending on AI compute may keep rising as AI is applied to more problems and more complex scenarios. AI companies and their customers need to plan for this and ensure that the value created grows faster than total compute costs. It also further underlines the importance of "selling shovels", i.e., being a computing-resource provider.
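A back-of-the-envelope sketch of the token-cost arithmetic discussed above. All prices and token counts here are hypothetical assumptions, chosen only to show how a multi-step agentic workflow can compress margins even when a single request looks cheap.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Inference cost of one model call, given per-million-token prices."""
    return input_tokens / 1e6 * in_price_per_m + output_tokens / 1e6 * out_price_per_m


def gross_margin(price_per_task: float, cost_per_task: float) -> float:
    """Gross margin on one completed task."""
    return (price_per_task - cost_per_task) / price_per_task


# Hypothetical prices: $3 per 1M input tokens, $15 per 1M output tokens.
single = request_cost(2_000, 500, in_price_per_m=3.0, out_price_per_m=15.0)

# A multi-step agentic workflow making 8 larger model calls per task.
agentic = 8 * request_cost(6_000, 1_500, in_price_per_m=3.0, out_price_per_m=15.0)

price = 0.50  # hypothetical price the customer pays per completed task
print(f"single-shot: cost ${single:.4f}, margin {gross_margin(price, single):.0%}")
print(f"agentic:     cost ${agentic:.4f}, margin {gross_margin(price, agentic):.0%}")
```

With these made-up numbers, the single-shot request leaves a software-like margin while the agentic workflow eats roughly two thirds of the price, which is the Jevons-style dynamic the paragraph above describes: cheaper tokens invite workloads that consume far more of them.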
d. Maximum Velocity: Seize the huge market demand and seize the opportunity
AI market demand is enormous (showing "huge attraction"), and macroeconomic fluctuations have little effect on it. "Nature abhors a vacuum": startups must move at maximum speed to seize the opportunity. The market's expected growth (e.g., 350% by 2030) and the rapid growth of users and market value over the past five years bear this out.
In a rapidly evolving field like AI, simply launching a product quickly (“speed to market”) is not enough. As the underlying technology, user expectations, and competitive landscape are all changing rapidly, the speed at which startups learn from user feedback, market signals, and technological advances and iterate their products and strategies accordingly (“learning speed”) is also critical. Therefore, “maximizing speed” should run through the entire learning and iteration cycle, not just the initial release phase. This requires startups to establish agile development processes, strong feedback loops, and a culture that embraces rapid experimentation and adaptation.
Achieving maximum speed requires a highly skilled and motivated team. AI talent is currently in short supply, so the ability to quickly attract, absorb and retain top AI engineers, researchers and product managers has become a key bottleneck or driving force that determines the speed of growth of startups. In addition to recruitment, it is also crucial to create a dynamic and challenging working environment where top talents can learn and grow quickly. Companies that can become "talent magnets" in the field of AI will gain a significant advantage in speed. This also means that as needs evolve, internal skills and retraining programs are equally indispensable to maintain development speed.
3. Current Status and Recent Developments in AI (Year in Review: 2023-2024)
Between 2023 and 2024, significant progress has been made in the field of artificial intelligence, user engagement has continued to deepen, key technologies have achieved breakthroughs, and new technological trends have gradually emerged, all of which have jointly pushed AI to develop in a smarter and more practical direction.
A. User engagement has increased significantly: AI is deeply integrated into daily life and work
Taking ChatGPT as an example, its ratio of daily active users to monthly active users (DAU/MAU) has risen significantly, approaching the level of the well-known social platform Reddit. This shows that AI is rapidly moving from a novel technical concept to a practical tool deeply embedded in people's daily lives and workflows.
Wider data supports this trend. More than half (55%) of Americans say they interact with AI regularly, with 27% using it multiple times a day and 28% using it about once a day or several times a week. In the workplace AI is already common: 78.5% of employees use email spam filters and 62.2% have interacted with customer-service chatbots, while as many as 72% of companies apply AI in at least one business function. At the personal level, 61.4% use virtual assistants such as Alexa and Siri, algorithmic recommendations and fitness trackers are widespread, and half of US mobile users use voice search every day.
These data clearly show that AI is no longer limited to a few technology enthusiasts or specific industries, but is widely penetrating into all levels of society. The high user stickiness of flagship platforms such as ChatGPT not only proves its own practical value, but also effectively improves the public's AI literacy and shapes users' expectations for future technology products.
It is worth noting that the prevalence of AI in everyday mundane tasks, such as email filtering, virtual assistants, playlist recommendations, etc., is paving the way for the adoption of more complex applications. Through these everyday, often "subtle" AI applications, users gradually become familiar with and build basic trust in AI capabilities, even if they may not realize that these functions are driven by AI (for example, 68% of respondents can identify fitness trackers as AI applications). This normalization lowers the psychological threshold for users to adopt more advanced AI tools to solve more complex problems in work or life. As AI gradually becomes a background utility, the adoption curve for new and more advanced AI applications may be steeper because users have been "trained" to expect AI assistance, thus creating a demand for more AI integration.
However, we must distinguish platform-level AI engagement from the sustained stickiness of specific AI products. The high DAU/MAU of foundational platforms such as ChatGPT reflects broad interest in, and practical recognition of, general AI capabilities (search, summarization, ideation). It does not automatically mean that every specific AI product built on or alongside these platforms will achieve the same lasting engagement. Users may rely heavily on ChatGPT for general tasks, yet quickly abandon a vertical AI writing tool that delivers no core value beyond the general platform or fails to solve a specific workflow problem. Startups cannot assume that high engagement with AI overall guarantees the success of their application; they still need to prove unique, lasting value. The DAU/MAU of "AI itself" is high, but what really matters for a startup is the DAU/MAU of its own AI product.
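The DAU/MAU stickiness ratio discussed above is straightforward to compute from a product's event log. A minimal sketch (the toy event data is invented for illustration):

```python
from datetime import date


def dau_mau(events: list[tuple[str, date]], month_days: list[date]) -> float:
    """Stickiness ratio: average daily active users over the period,
    divided by the number of distinct users active in the period (MAU).
    `events` is a list of (user_id, activity_date) pairs."""
    mau = len({user for user, _ in events})
    daily_actives = [len({u for u, d in events if d == day}) for day in month_days]
    avg_dau = sum(daily_actives) / len(month_days)
    return avg_dau / mau


# Toy 3-day window: user "a" is active every day, "b" only once.
days = [date(2024, 1, d) for d in (1, 2, 3)]
events = [("a", days[0]), ("b", days[0]), ("a", days[1]), ("a", days[2])]
print(dau_mau(events, days))  # 2/3: a daily-habit user lifts the ratio
```

A ratio near 1.0 means most monthly users show up almost every day (a daily habit); a ratio near zero means most users touch the product only occasionally, regardless of how large the MAU headline number is.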
B. Breakthroughs
1. Speech generation: Achieving the “Her moment” and crossing the “uncanny valley”
In 2023, AI speech generation made a major breakthrough: its realism and emotional expressiveness improved dramatically, reaching the milestone dubbed the "Her moment" and crossing the unsettling "uncanny valley". AI speech is no longer the easily recognizable, slightly stiff machine voice of the past; it can closely imitate human intonation, rhythm, and emotion. Studies show that advanced AI speech generators can already convey complex emotions such as joy, sadness, and anger, and systems such as Microsoft's VALL-E and Coqui's XTTS can produce hyper-realistic speech that is almost indistinguishable from a real human voice.
These advances are opening up new horizons for AI in podcasts, audiobook narration, customer service, and even broader human-computer interaction. Looking ahead to 2024 and beyond, AI voice technology is expected to continue to evolve in a direction that is closer to natural human speech, with a focus on improving emotional intelligence, contextual understanding, and personalized voice customization (such as adjusting tone, pitch, and style based on user preferences or conversation context).
The breakthrough of AI voice in emotional expression has unlocked a whole new category of applications. In the past, voice AI mainly undertook functional tasks, such as executing commands or retrieving information. Today, AI voice that can convey real emotions can be applied to scenarios that require empathy, companionship, or fine emotional communication, such as mental health companions, more attractive educational tools, and more contagious storytelling. This marks the advancement of voice AI from a simple practical tool to the field of user experience and emotional connection, and may give rise to a wave of new AI applications focusing on emotional computing and human-computer relationships.
The potential for personalized development of AI voice is huge, and it is expected to revolutionize brand identity building and technology accessibility. Brands can develop unique and easily recognizable AI voices as auditory symbols of their identity, providing customers with a consistent and personalized experience. Individual users can customize AI voices according to their preferences to make interactions more comfortable and natural. In terms of accessibility, AI voices can be adjusted to better suit individuals with specific hearing or cognitive needs. This deep personalization will free AI interactions from stereotypes and deepen user engagement, while also opening up new avenues for digital branding and more inclusive technology design.
2. Coding: Became the Breakthrough App of the Year, achieving “screaming product-market fit”
AI-assisted coding became the breakthrough application category of 2023-2024, achieving "screaming product-market fit" and fundamentally changing the economics and accessibility of software development. AI not only improves development efficiency but also frees developers to focus on more complex challenges.
Key trends for 2024 include:
* The rise of open-source AI: models such as Llama 3 are rapidly catching up with commercial models such as GPT-4o, and open-source AI adoption has surged 60%, greatly democratizing code-generation tools.
* Multimodal generative AI: combining text, images, audio, and other data sources to give software development richer contextual understanding and more comprehensive solutions.
* Customized AI models: models trained for specific industry needs provide more accurate and relevant code generation.
* Augmented AI: context-aware code suggestions, error detection, and automatic documentation generation significantly raise human developers' productivity.
* AI microservices: modularizing complex applications to make development easier to manage and maintain.
* Advances in natural language processing (NLP) systems and large language models (LLMs): revolutionary progress in code understanding, including parsing complex datasets to construct optimal logic.
* Growing focus on AI ethics: ensuring transparency, fairness, and responsible use of AI-generated code.
Tools such as OpenAI Codex, DeepMind AlphaCode, GitHub Copilot, Cursor, Gemini Code Assist, and OpenAI’s GPT-4o and o1 models all play an important role in the field of AI-assisted coding.
The popularity of AI coding tools is democratizing the development field and may lead to the emergence of a group of "citizen developers". Powerful AI coding assistants, especially those based on open source models and natural language interfaces, significantly lower the technical threshold required to create software. This enables individuals with domain expertise but limited traditional coding skills (such as business analysts, designers, scientists) to build or prototype applications. This will greatly expand the ranks of software creators, accelerate innovation in niche areas, and may push the role of professional developers to more complex architectural design and oversight work.
At the same time, developers’ skill sets are also shifting, with a greater focus on AI orchestration and system design. As AI takes on more and more routine coding tasks (e.g., boilerplate code, simple functions, debugging), the value of developers is shifting from line-by-line coding to higher-level skills. For example, the ability to effectively ask AI questions (i.e., prompt engineering), select and fine-tune AI models, integrate AI components (e.g., AI microservices), and design overall system architectures is becoming increasingly important. The ability to understand and manage the output of AI coding tools (including their ethical considerations and potential biases) is equally critical. This means that the definition of a “developer” may evolve, and future roles may involve less direct coding and more configuration of AI tools, system integration, and ensuring the quality and reliability of AI-generated/assisted software.
C. Technology Trends: Reasoning, Synthetic Data, Tool Use, and AI Scaffolding Become New Power
Although the pace of large-scale pre-training has slowed, a series of new technology trends is becoming the driving force behind the expansion of AI intelligence: reasoning, synthetic data, tool use, and AI scaffolding.
* AI Reasoning: AI is evolving from basic pattern recognition and information retrieval toward logical reasoning, planning, and decision-making, i.e., structured inference rather than purely probabilistic prediction. Stronger reasoning requires more compute across pre-training, post-training, and inference; chain-of-thought prompting and multi-step reasoning workflows are key techniques for improving it.
* Synthetic Data: as AI applications spread, demand for high-quality training data has surged. Synthetic data, artificially generated data that mimics real-world datasets, is becoming mainstream. It helps with data scarcity, privacy protection, and regulatory restrictions, especially in data-sensitive industries such as finance and healthcare. Quality is critical, though: low-quality synthetic data can introduce noise and bias and even degrade model performance.
* Tool Use / Function Calling: large language models (LLMs) extend their capabilities by accessing external tools (code editors, web browsers, databases, APIs, and so on) to obtain real-time information, act in the real world, or exploit specialized functions. Frameworks such as LangChain and LlamaIndex are key enablers of tool use and retrieval-augmented generation (RAG): the former focuses on workflow orchestration and agent systems, the latter on data integration and knowledge-graph management, and the two can work together to let LLMs call external functions to retrieve data or take actions.
* AI Scaffolding: methods for enhancing a model's capabilities after training so it can perform multi-step tasks beyond the original model's reach. This includes giving the model tool access, building agent loops (e.g., AutoGPT, Devin), enabling multi-AI collaboration (e.g., iterated amplification), and applying RAG and related techniques. A related method, "scaffolded learning", first trains the model on specific skills (such as arithmetic or first-order derivatives) and then applies them to more general tasks (such as word problems or second-order derivatives).
* Internal Data Mining: enterprises increasingly focus on extracting value from their proprietary internal data (emails, case files, customer records, and so on). The trend is toward models that can mine and reason over this information via RAG, fine-tuning on internal data, and enterprise-grade vector search.
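To make the tool-use pattern concrete, here is a minimal, self-contained sketch of the scaffolding loop: a stand-in "model" emits a JSON function call, and a dispatcher executes it. The tool names, the fake model, and the stub implementations are all hypothetical; a real system would call an actual LLM API, and frameworks such as LangChain wrap exactly this pattern.

```python
import json


# --- hypothetical tools (stubs standing in for real APIs) -------------
def get_weather(city: str) -> str:
    return f"22C and clear in {city}"  # stub for a weather API call


def search_db(query: str) -> str:
    return f"3 records match '{query}'"  # stub for enterprise search / RAG


TOOLS = {"get_weather": get_weather, "search_db": search_db}


def fake_model(prompt: str) -> str:
    """Stand-in for an LLM that decides which tool to call and emits
    the call as JSON (the shape most function-calling APIs use)."""
    if "weather" in prompt:
        return json.dumps({"tool": "get_weather", "args": {"city": "Tokyo"}})
    return json.dumps({"tool": "search_db", "args": {"query": prompt}})


def run_with_tools(prompt: str) -> str:
    """One step of the scaffolding loop: model -> parse -> dispatch."""
    call = json.loads(fake_model(prompt))
    result = TOOLS[call["tool"]](**call["args"])
    # A real agent loop would feed `result` back into the model and
    # repeat until the model emits a final answer instead of a tool call.
    return result


print(run_with_tools("What's the weather in Tokyo?"))  # 22C and clear in Tokyo
```

The essential idea is that the model never executes anything itself: the scaffolding parses its structured output, runs the chosen function, and returns the observation, which is what makes multi-step agent loops possible.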
Table 4: Key AI technology trends (2023-2024) and their impact
These technological trends collectively point to a paradigm of "Composable AI". Compared with relying on a single, all-powerful AI model, the future trend is to build a system composed of multiple small, specialized AI components, external tools, and data sources. Frameworks such as LangChain and LlamaIndex play the role of "glue" or orchestration layer in this process. This approach brings greater flexibility and modularity, and by integrating the advantages of different elements, more complex and powerful AI systems can be built. This also means that the industry's focus is shifting from building a single "perfect" model to how to cleverly assemble and orchestrate a series of tools and agents.
The development of reasoning and scaffolding techniques is directly addressing current LLMs' shortcomings in reliability and trust. Standard LLMs can hallucinate, lack deep reasoning ability, and struggle with complex multi-step problems, all of which limit their use in critical tasks. Explicit reasoning prompts (such as chain-of-thought), tool use for fact-checking, and scaffolding techniques (such as problem decomposition and iterative refinement) are designed to make LLM outputs more reliable, verifiable, and accurate. This focus on output quality and credibility is critical to enterprise adoption; as these techniques mature, AI will be better able to participate in high-stakes decision-making in fields such as finance, healthcare, and engineering.
Synthetic data is becoming a strategic asset for addressing AI development and bias issues in specific fields. Obtaining large-scale, high-quality, and diverse datasets is one of the main bottlenecks for training AI models, especially in niche areas or for underrepresented groups. Synthetic data generation technology enables companies to create customized datasets to fill these gaps, improve the performance of models in specific scenarios, and potentially reduce bias in real-world data by building more balanced training sets. This not only makes it possible to accelerate the development of AI in areas where data is sparse but has a significant impact, but also brings competitive advantages to companies that master synthetic data generation technology.
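A deliberately simplified sketch of what synthetic-data generation means in practice: produce new records that preserve the marginal statistics of a small "real" dataset without copying any real row. The dataset and field names below are invented; production systems use dedicated generators with far stronger statistical and privacy guarantees.

```python
import random

# Invented "real" dataset standing in for sensitive records.
real = [
    {"age": 34, "income": 52_000, "defaulted": False},
    {"age": 51, "income": 87_000, "defaulted": False},
    {"age": 29, "income": 31_000, "defaulted": True},
]


def synthesize(rows: list[dict], n: int, seed: int = 0) -> list[dict]:
    """Generate n synthetic rows that stay within the observed ranges
    and match the observed default rate, without reproducing any row."""
    rng = random.Random(seed)  # seeded for reproducibility
    ages = [r["age"] for r in rows]
    incomes = [r["income"] for r in rows]
    p_default = sum(r["defaulted"] for r in rows) / len(rows)
    return [
        {
            "age": rng.randint(min(ages), max(ages)),
            "income": rng.randint(min(incomes), max(incomes)),
            "defaulted": rng.random() < p_default,
        }
        for _ in range(n)
    ]


fake = synthesize(real, n=100)
print(len(fake), all(29 <= r["age"] <= 51 for r in fake))
```

Even this toy version shows both sides of the argument above: the synthetic set can be arbitrarily large and shareable, but its usefulness is bounded by how faithfully the generator captures the real distribution, which is why low-quality synthetic data can quietly degrade a model.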
D. The most innovative work occurs at the blurred boundary between research and product
Today's most innovative AI work often happens at the intersection of academic research and product development, as exemplified by Deep Research projects and products like Google NotebookLM. The transition from research breakthrough to product feature is accelerating, and companies that can bridge this gap and foster close collaboration between researchers and product teams are more likely to win the innovation race. The traditional, linear "research first, then development" model is being compressed.
This blurring of the line between research and product has significantly shortened the lab-to-market cycle and intensified competition. New AI capabilities reach users faster, companies race to productize the latest research results, and startups, with their inherent agility, can sometimes outpace slower-moving incumbents. Keeping a close eye on the research frontier, and being able to rapidly prototype and integrate new discoveries, has therefore become a critical competitive factor. The moat traditionally built by "mature products" is eroding at an accelerating rate.
At the same time, a new trend is "Research-as-a-Feature" and "Product-Led Research". This means that certain elements of the research process or cutting-edge research capabilities are being presented directly to users as product features (for example, advanced reasoning capabilities, experimental model access rights, etc.). In turn, the large amount of data and feedback generated by users' interactions with these "research features" can directly guide and adjust the research agenda. This model establishes a tighter feedback loop between users, product development, and research, which not only accelerates product innovation, but also promotes the progress of research itself. This also changes users' perception of AI products, and they may become active participants in the continuous development and improvement of AI capabilities.
IV. Future Prospects of AI: Agent Economy
With the continuous evolution of AI technology, a future vision called "Agent Economy" is gradually emerging. This concept heralds a new paradigm in which autonomous AI agents deeply participate in and reshape economic activities. This chapter will explore the evolution path of the agent economy, its core definition, the key technical challenges faced in its implementation, and its potential far-reaching economic impact.
A. Evolution Path: From Single Agents -> Agent Swarms -> Agent Economy
The agent economy is expected to form along a gradual evolutionary path: from today's embryonic single AI agents, to groups of agents (agent swarms) that can work together, and finally to a complex, interdependent agent-economy system.
* Single Agents: we already see AI agents that perform specific tasks, such as drafting emails, arranging schedules, or doing preliminary information retrieval. These agents are usually single-function with limited autonomy.
* Agent Swarms: as the technology matures, multiple AI agents with different specialties will collaborate as "agent swarms" to complete more complex tasks, involving task decomposition, information sharing, and result aggregation. Specialized agents inside enterprises (such as Silent Eight's agents for financial-crime compliance) show early signs of this trend.
* Agent Economy: in the final stage, AI agents not only collaborate efficiently but also autonomously exchange resources, transfer value, and conduct business transactions, establishing interactions based on trust and reliability.
Understanding this evolutionary path helps us foresee the phased characteristics of future AI development and the infrastructure and governance mechanisms required at different stages.
The core drivers of this evolution are specialization and interoperability. As AI tasks become increasingly complex, single, omnipotent agents will become inefficient. Therefore, specialized agents with specific skills or domain knowledge will have an advantage. In order for these specialized agents to solve larger-scale problems, they must be able to work together effectively, that is, to form a group of agents. And efficient collaboration is highly dependent on interoperability - a standardized way for agents to communicate, share context, and coordinate actions. Therefore, the development of agent interoperability protocols (see Section IV.C for details) and agent discovery markets will be key enabling factors for the transition from a single agent to a functional agent economy.
When multiple specialized agents collaborate dynamically, their collective capabilities may exceed the simple sum of their parts. Interactions within the “swarm” may give rise to emerging problem-solving strategies or innovative solutions that are not explicitly programmed into any individual agent. This is similar to the phenomenon of swarm intelligence in nature, such as ant colonies or bee colonies that can complete complex collective tasks. If swarms of agents can exhibit powerful emergent intelligence, they will be able to tackle higher-order complex problems. However, managing, debugging, and ensuring the security and goal consistency of such emergent systems will also pose significant challenges.
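The single-agent to agent-swarm progression can be illustrated with a toy orchestrator that decomposes a task into a pipeline of specialized agents. Every agent name and skill here is hypothetical, and the "agents" are plain functions rather than LLM-backed systems:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Agent:
    """A specialized agent: a name, a declared skill, and a worker
    function standing in for an LLM-backed capability."""
    name: str
    skill: str
    run: Callable[[str], str]


# Three illustrative specialists (all stubs).
research = Agent("research", "gather", lambda t: f"[facts about {t}]")
writer = Agent("writer", "draft", lambda t: f"Draft: {t}")
reviewer = Agent("reviewer", "review", lambda t: f"Approved: {t}")


def swarm(task: str, pipeline: list[Agent]) -> str:
    """Sequential hand-off between specialists. Real swarms add
    negotiation, shared memory, and (in the agent-economy vision)
    payment and trust scores between agents."""
    output = task
    for agent in pipeline:
        output = agent.run(output)
    return output


print(swarm("Q3 market report", [research, writer, reviewer]))
# Approved: Draft: [facts about Q3 market report]
```

Even this linear hand-off hints at why interoperability matters: each agent only needs to agree on the message format it receives and emits, which is precisely the role the interoperability protocols discussed in Section IV.C would standardize.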
B. Defining the Agent Economy: A New Economy Where Information, Resources, and Trust Intertwine
The core feature of the agent economy is that AI agents are not just transmitters or processors of information; they become economic participants that can transfer resources, conduct transactions, and hold notions of trust and reliability analogous to those in human society. It will be an economy of deep collaboration between people and AI agents. Going further, the agent economy is envisioned as combining the intelligence of AI with the decentralized nature of blockchain: autonomous AI agents operating within a blockchain-based economic system, able to perform tasks, manage assets, and interact with one another, with behavior underpinned by decentralized finance (DeFi) principles to achieve aligned incentives and shared ownership.
The core concepts of the agent economy include:
- Open-Source Genesis: Developers create and deploy AI agents through open source projects, encouraging the global community to collaborate on improving and enhancing agent capabilities.
- Autonomous Operations: AI agents operate autonomously like blockchain protocols, independently managing assets and performing tasks.
- Community Ownership & Participation: Each AI agent can be regarded as an independent crypto project, with governance tokens, utility tokens, and even NFTs as access credentials, giving ownership and control to the community.
- Seamless Inter-Agent Collaboration: Agents can seamlessly access and utilize the services of other agents and complete payments and transactions autonomously.
- Liquidity Infusion: Drawing on cryptoeconomic principles, liquidity is provided to AI agents, promoting smoother and more dynamic economic interactions in the digital realm.
- Super-Smart-Contracts: AI agents evolve into "super-smart contracts" that can autonomously adapt, learn, and interact with their environment.
This vision aims to overcome the challenges facing creators and users in the current digital environment, championing an open source, community-driven innovation model that ensures creators are fairly rewarded and recognized, and empowers users by giving them control and ownership of their digital interactions.
The emergence of the agent economy could dramatically reduce transaction costs for complex digital services. Many digital services today involve multiple human touchpoints, coordination efforts, and contract-related overhead that constitute transaction costs. If AI agents can autonomously discover needs, negotiate terms, perform tasks, and complete settlements, much of this human intermediary overhead could be eliminated. This could be particularly disruptive for complex, multi-step digital workflows that are currently difficult to implement due to high coordination costs, such as tasks that require coordination of multiple freelancers or service providers. The result could be that a large number of complex, customized or niche services become more accessible and affordable, creating new markets for tasks that are currently difficult to commercialize due to high coordination costs, and potentially leading to the replacement of existing platforms that manually broker such services.
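The coordination workflow described above (discover needs, negotiate terms, perform tasks, settle) can be sketched in a few lines of Python. This is a toy illustration only: the marketplace, agent names, and prices are entirely hypothetical, and real agent markets would involve richer negotiation and settlement logic.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    agent: str
    task: str
    price: float

# A hypothetical marketplace of specialist agents advertising services.
MARKET = [
    Offer("translator-a", "translate", 8.0),
    Offer("translator-b", "translate", 6.5),
    Offer("designer-a", "layout", 12.0),
]

def discover(task: str) -> list[Offer]:
    """Find agents advertising the requested capability."""
    return [o for o in MARKET if o.task == task]

def negotiate(offers: list[Offer]) -> Offer:
    """Trivial negotiation strategy: accept the lowest bid."""
    return min(offers, key=lambda o: o.price)

def orchestrate(tasks: list[str]) -> float:
    """Delegate each task to the winning agent and return the total settlement."""
    return sum(negotiate(discover(t)).price for t in tasks)

print(orchestrate(["translate", "layout"]))  # 18.5
```

The point of the sketch is that once discovery, negotiation, and settlement are machine-executable, the human intermediary overhead the paragraph describes collapses into a few function calls.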
In this conception of the agent economy, blockchain and crypto play a foundational role that goes beyond payments. While cryptocurrencies can facilitate payments between agents, the deeper significance of blockchain technology is to provide a transparent, immutable record of agents' behavior, capabilities, and reputations, thereby fostering trust in a decentralized environment. Tokenization mechanisms can enable community governance of agent protocols and markets, aligning the interests of developers, users, and the agents themselves. Blockchain-based decentralized identity solutions (see Section IV.C) can provide verifiable credentials for agents. The success of a decentralized agent economy may therefore be closely tied to the maturity and adoption of Web3 technology, suggesting that the development of AI and Web3 may become increasingly intertwined and mutually empowering. It also means the agent economy will inherit some of the challenges the crypto field faces in scalability, usability, and regulation.
C. Technical Challenges of Implementing the Agent Economy: Persistent Identity, Communication Protocols, and Security
To achieve a fully functional and trustworthy agent economy, a series of key technical obstacles must be overcome, the most important of which are establishing a persistent identity authentication mechanism for AI agents, designing seamless communication protocols, and building a strong security system.
- Persistent Identity: AI agents need a consistent "personality" and reliable "memory" [User Query] that goes beyond current retrieval-augmented generation (RAG) and vector database technologies. Traditional identity and access management (IAM) systems, such as OAuth, OpenID Connect (OIDC), and SAML, are designed primarily for human users or static machine identities; they cannot accommodate the dynamic, interdependent, and potentially short-lived nature of AI agents, especially in large-scale multi-agent systems. The coarse-grained control, single-entity focus, and lack of contextual awareness of these traditional protocols make them inadequate. AI agents need more sophisticated and adaptive access control mechanisms. To this end, the industry is exploring a new "agent-based AI-IAM" framework, whose core is to use decentralized identifiers (DIDs) and verifiable credentials (VCs) to create rich, verifiable identities for AI agents, encoding information such as an agent's capabilities, origin, scope of behavior, and security status. In addition, an agent naming service (ANS) is needed to implement secure, capability-based agent discovery. Context retention is also a hard problem, especially keeping information consistent across agent boundaries and over time.
- Seamless Communication Protocols: The agent economy requires an underlying communication protocol stack, analogous to the Internet's TCP/IP, to ensure that information, value, and trust can flow efficiently and reliably between agents [User Query]. Without unified standards, ad hoc point-to-point integrations will be difficult to scale, secure, or generalize across domains. Agent communication protocols currently emerging include:
  - Model Context Protocol (MCP): a JSON-RPC-based client-server interface for secure tool invocation and typed data exchange.
  - Agent Communication Protocol (ACP): a general communication protocol based on RESTful HTTP that supports MIME-typed multi-part messages and both synchronous and asynchronous interactions, with session management, message routing, and DID integration.
  - Agent-to-Agent Protocol (A2A): a peer-to-peer task delegation framework that enables secure and scalable collaboration in enterprise workflows through capability-based "Agent Cards".
  - Agent Network Protocol (ANP): a protocol supporting agent discovery and secure collaboration on open networks, leveraging DIDs and JSON-LD graphs.
- Security: In a world dominated by non-face-to-face AI agent interactions, trust and security take on unprecedented importance [User Query]. This includes not only protecting data from unauthorized access, but also going beyond traditional application-based access control to ensure secure authentication and authorization in dynamic environments while maintaining accountability and enforcing security policies.
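As a rough illustration of the kind of exchange such protocols standardize, here is a toy JSON-RPC 2.0 tool call modeled on MCP's general shape. Treat it as a sketch under assumptions, not the normative MCP schema: the `add` tool and the toy dispatcher are invented for the example.

```python
import json

def make_tool_call_request(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 request asking a server to invoke a named tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

def handle_tool_call(raw_request: str) -> str:
    """Toy server side: dispatch the request to a local tool and wrap the result."""
    req = json.loads(raw_request)
    tools = {"add": lambda args: args["a"] + args["b"]}  # hypothetical tool registry
    result = tools[req["params"]["name"]](req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

request = make_tool_call_request(1, "add", {"a": 2, "b": 3})
response = json.loads(handle_tool_call(request))
print(response["result"])  # 5
```

The value of standardizing this envelope is that any client can invoke any server's tools without bespoke integration code, which is exactly the scaling property the paragraph argues ad hoc point-to-point integrations lack.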
Beyond these core challenges, building a production-grade agent economy faces further difficulties, such as ensuring that agent services run 24/7 on automatically scalable infrastructure, providing efficient agent discovery mechanisms, and establishing reliable payment and clearing systems.
Table 5: Core technical challenges in realizing the agent economy

| Challenge | Why existing approaches fall short | Emerging solutions |
| --- | --- | --- |
| Persistent identity | Traditional IAM (OAuth, OIDC, SAML) targets human users or static machine identities; too coarse-grained for dynamic, short-lived agents | DIDs and verifiable credentials, agent naming services (ANS) |
| Seamless communication | Ad hoc point-to-point integrations do not scale, secure, or generalize | Standard protocols: MCP, ACP, A2A, ANP |
| Security | Static, application-based access control cannot ensure authentication, authorization, and accountability in dynamic agent interactions | Capability-based access control, accountability mechanisms, security policy enforcement |
| Production infrastructure | Agent services need 24/7 availability, discovery, and settlement at scale | Auto-scaling infrastructure, agent discovery mechanisms, payment and clearing systems |
The "identity crisis" of AI agents is one of the core bottlenecks in realizing the agent economy. In order for agents to conduct transactions, own resources, build reputations, and take responsibility, they need a stable, verifiable, and secure form of identity. Current identity systems are mainly human-centric or designed for static machine identities, which are difficult to adapt to the dynamic, autonomous, and potentially ephemeral nature of AI agents (especially in decentralized systems). Without solving this problem, it will be difficult to establish trust, trace behavior, manage permissions, or ensure security in the agent economy. Therefore, it is crucial to develop a strong decentralized identity solution for AI agents, which may become a new important category of infrastructure.
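To make the identity discussion concrete, here is a minimal sketch of what a DID document and a capability credential for an agent might look like, loosely following the shape of the W3C DID and Verifiable Credentials data models. All identifiers, key material, and capability names are placeholders, and a real deployment would include cryptographic proofs that are omitted here.

```python
# Hypothetical agent identity, sketched as plain dictionaries.
agent_did = "did:example:agent-7f3a"

did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": agent_did,
    "verificationMethod": [{
        "id": agent_did + "#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": agent_did,
        "publicKeyMultibase": "z6Mk-placeholder",  # not a real key
    }],
}

# A credential a registry might issue, attesting to the agent's capabilities.
capability_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "AgentCapabilityCredential"],
    "issuer": "did:example:registry",
    "credentialSubject": {
        "id": agent_did,
        "capabilities": ["summarize-documents", "schedule-meetings"],
        "scope": "read-only",
    },
}

def can_perform(credential: dict, subject: str, capability: str) -> bool:
    """Toy capability check a verifier might run before delegating a task."""
    cs = credential["credentialSubject"]
    return cs["id"] == subject and capability in cs["capabilities"]

print(can_perform(capability_credential, agent_did, "summarize-documents"))  # True
```

The key idea is that the agent's identity travels with machine-verifiable claims about what it may do, rather than relying on a coarse-grained, human-centric account.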
Standardization of communication protocols is a prerequisite for network effects in the agent economy. Just as the Internet thrived on standardized protocols such as TCP/IP and HTTP, an agent economy that depends on interactions among a large number of heterogeneous agents requires universal communication standards. Otherwise the ecosystem will fragment into "walled gardens" that limit the potential for broad collaboration. The current development and promotion of protocols such as MCP and ACP is therefore crucial. The "protocol war" may eventually converge on a dominant set of standards, which will mark an important stage in the development of the agent economy.
Security in the agent economy is a multifaceted issue that goes beyond traditional technical controls. Technical security (e.g., authentication, authorization, encryption) is important, but in an economy of autonomous agents, security is also about ensuring that agents act as expected (i.e., the “alignment” problem), preventing malicious agent behavior, safeguarding the integrity of agent-generated information, and establishing recourse mechanisms when things go wrong. This extends to areas such as AI safety, ethics, and governance, beyond traditional cybersecurity. Building a secure agent economy requires a holistic approach that combines technical security measures with a robust governance framework, ethical guidelines, and potentially new audit and oversight mechanisms for autonomous systems.
D. Economic Impact of the Agent Economy: Reshaping Industries and Value Chains
The widespread adoption of the agent economy is expected to have a profound impact on the global economy, comparable to previous industrial revolutions. This implies not only huge wealth creation potential, but also major social structural adjustments that may require prudent management and policy responses.
- Productivity leap: Generative AI alone has the potential to contribute $2.6 trillion to $4.4 trillion to the global economy each year. Widespread use of AI is expected to raise labor productivity by 1.5 percentage points per year, which could ultimately boost global annual GDP by up to 7%.
- Reshaping of the employment structure: AI may automate about a quarter of current jobs, affecting 300 million full-time positions worldwide. But this points more to shifts in job content and the emergence of new roles than to mass unemployment. Uniquely human abilities, such as creative problem solving, emotional intelligence, and interpersonal communication, will become more valuable.
- Rise of emerging industries: AI will give rise to new industrial fields, such as AI software development, AI ethics, AI security, and AI management.
- Improved cost-effectiveness: AI agents can significantly reduce enterprise operating costs; for example, one bank cut related operating costs tenfold by deploying AI customer-service agents.
- Budget restructuring: Enterprises will increasingly shift budgets from traditional labor costs to AI agents. Labor-related budgets are estimated to be 35 times the size of software budgets, and AI agents are positioned to take share from both pools at once. PwC predicts that by 2030 the global agent economy could generate up to $15.7 trillion in value annually, with China (GDP expected to increase by 26%) and North America (14.5%) as the main beneficiaries.
- Potentially widening economic gaps: Regions and industries will benefit from the AI revolution to different degrees. Companies in developed regions may adapt faster and seize the economic opportunities AI brings, while emerging markets face the risk of falling behind, which calls for attention and balancing efforts from policymakers.
In an AI-agent driven economy, the relative value of skills that are difficult to automate and truly reflect unique human value - such as complex strategic thinking, deep empathy, originality, and leadership - may increase significantly. Jobs that rely on these skills may enjoy a "human premium". The focus of human work will shift to collaborating with AI agents, managing and guiding AI agents, and handling tasks that require sophisticated human interactions. This will inevitably require major adjustments to the education and retraining system to cultivate these "human premium" skills.
An agent economy of specialized agents that can collaborate and transact efficiently could give rise to highly dynamic “micro-economies” built around specific tasks or services. These agents could provide hyper-personalized services at scale, catering to the unique needs of individual users in ways that are currently cost-prohibitive. For example, an agent could assemble a customized news digest for a user by contracting in real time with multiple other specialized information collection and summarization agents. This would lead to an explosion of niche services, a more nuanced and personalized economy, and could challenge existing business models based on mass-market products.
However, the technical capabilities of AI agents and the potential pace of development of the agent economy are likely to outpace the ability of existing legal, regulatory, and ethical frameworks to adapt. Issues such as accountability for agent behavior, data ownership and privacy protection in agent interactions, algorithmic bias in agent decision-making, and the potential for agent-driven market manipulation all need to be properly addressed. The risk of widening economic disparity also suggests that new policies on social protection, education, and wealth distribution are needed. Therefore, proactively developing adaptive governance mechanisms and ethical guidelines is essential to ensure that the agent economy develops in a beneficial and fair manner.
V. The transformative power of AI: Reshaping thinking patterns and work paradigms
Artificial intelligence is not only an innovation in productivity tools, but on a deeper level, it is triggering a shift in human thinking patterns and completely reshaping work paradigms, organizational structures and even the way the entire economy operates.
A. Change of mindset: Embrace randomness and strengthen management thinking
Deep interactions with AI, especially with AI systems whose outputs are probabilistic in nature (such as large language models), are prompting a shift in the way humans think.
- Stochastic Mindset: Traditional computing emphasizes determinism: given input A, you always get a certain output B. AI, especially today's mainstream deep learning models, operates more like probabilistic statistics. AI output may not be 100% accurate or perfectly consistent, but it provides unprecedented leverage and processing speed, and at scale probabilistic methods are often more efficient than deterministic ones. This shift in thinking means moving from pursuing absolute accuracy to accepting and managing stochasticity. Workflows shift from rote execution to iterative development of tools, content, and strategies. People trade some certainty for greater leverage, much as one delegates tasks to others instead of doing everything oneself. Faced with AI-generated information, people will look at it more critically, recognizing its possible random components. This mode of thinking adapts naturally to change and resembles the scientific methodology of proposing hypotheses and verifying them. For entrepreneurs, a stochastic mindset helps in anticipating unpredictable improvements in AI models and planning product roadmaps accordingly. Although some research explores deterministic AI paths to circumvent the inherent problems of probabilistic models (such as hallucinations), the current mainstream trend, and its effect on human thinking, points toward cultivating stochastic thinking. This shift may also bring new cognitive burdens: deterministic processes may be cumbersome, but their predictability reduces the mental effort of dealing with unexpected results, whereas working with stochastic AI means humans must continuously evaluate, verify, and possibly correct or optimize AI output, which demands a high degree of critical thinking and judgment.

Although AI provides leverage, the human role shifts to managing the uncertainty and risk of probabilistic results; managed poorly, this can create new cognitive pressure, so tools and training that help humans manage and interpret probabilistic information are increasingly important. On the other hand, stochastic thinking may also spur creativity and innovation. Deterministic thinking sometimes leads to rigid, incremental improvements within existing paradigms, whereas the randomness and "unpredictability" inherent in AI output can introduce novel perspectives or unexpected solutions that humans had not considered, just as some of AlphaGo's surprising moves inspired creativity among human Go players. Used for creative exploration, the randomness of AI (often seen as a flaw, as in "hallucination") can become a feature, enhancing human ingenuity by letting AI explore a vast space of possibilities.

- Management Mindset: As AI agents take on more and more execution tasks, the human role will shift toward that of a "manager", responsible for setting goals, overseeing processes, evaluating results, and making more complex management decisions [User Query].
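One concrete tactic for managing stochastic output, in the spirit of the discussion above, is self-consistency voting: sample the model several times and keep the most frequent answer. The sketch below uses a deterministic stand-in for a probabilistic model call (no real LLM API is involved), so the example is reproducible.

```python
from collections import Counter

def fake_model(prompt: str, draw: int) -> str:
    """Stand-in for a stochastic LLM call: mostly right, occasionally off."""
    return "42" if draw % 5 != 3 else "24"  # 1 in 5 draws is wrong

def majority_vote(prompt: str, samples: int = 10) -> str:
    """Sample the model several times and keep the most common answer."""
    answers = [fake_model(prompt, i) for i in range(samples)]
    return Counter(answers).most_common(1)[0][0]

print(majority_vote("What is 6 * 7?"))  # 42
```

The individual samples are unreliable, but the aggregate is far more stable, which is the essence of trading per-call certainty for leverage plus a verification layer.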
B. The evolution of management thinking: from direct execution to AI orchestration
The role of humans at work is shifting from directly executing tasks to "managing" and "orchestrating" AI capabilities, which requires management thinking itself to evolve accordingly.
- From creation to curation and guidance: For knowledge workers, a significant shift is from creating content or solutions from scratch to reviewing, optimizing, and guiding the initial results generated by AI. This places more emphasis on critical evaluation, deep contextual understanding, and the ability to steer AI systems in the desired direction.
- People managing AI agents, and even AI agents managing AI agents: AI agents are expected to automate the tasks of entry-level employees and even some supervisory and managerial work, such as scheduling, report generation, and basic data analysis. This may lead to leaner, flatter organizational structures. Leaders need to start thinking about how to manage AI agents, and in the future there may even be scenarios where AI agents manage other AI agents.
This shift means that "AI Prompt Engineering" and "AI Teaming" are becoming core management skills. Effectively "managing" AI first requires the ability to clearly communicate goals and constraints to the AI system - this is the essence of prompt engineering. As work increasingly involves collaboration between humans and multiple AI agents (or between AI agents), the ability to design, configure and coordinate these "hybrid teams" will become a key management function. This is more complex than traditional team management because it requires managers to understand the capabilities, limitations and complex interaction dynamics of AI.
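As a toy illustration of "communicating goals and constraints" to an AI system, the helper below assembles a structured prompt from a small spec. The field layout (goal, constraints, output format) is our own convention for the example, not any particular framework's API.

```python
def build_prompt(goal: str, constraints: list[str], output_format: str) -> str:
    """Assemble a structured prompt that states the goal, constraints, and format."""
    lines = ["Goal: " + goal, "Constraints:"]
    lines += ["- " + c for c in constraints]
    lines.append("Output format: " + output_format)
    return "\n".join(lines)

prompt = build_prompt(
    goal="Summarize the attached contract for a non-lawyer",
    constraints=["Under 200 words", "Flag any auto-renewal clauses", "Do not give legal advice"],
    output_format="Bullet list",
)
print(prompt)
```

Making the goal, constraints, and expected format explicit is the prompt-level analogue of a clear delegation brief, and it is what makes AI output reviewable against stated expectations.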
At the same time, ethical oversight has become a key management responsibility. As managers delegate more decision-making and executive authority to AI agents, they must also be responsible for the ethical impacts that these AI behaviors may have, such as bias in hiring and fairness in customer treatment. Managers need to understand the decision-making mechanisms of AI models, identify potential biases, and ensure that AI systems are used in a way that is consistent with organizational values and social norms. This requires a new layer of ethical vigilance and oversight at the management level.
C. Change in work paradigm: greater leverage, less certainty, and human-machine collaboration becomes mainstream
AI is fundamentally changing the basic paradigm of work, with the core feature being that individuals and organizations can use AI to accomplish more work (greater leverage), but at the same time they also need to better manage the uncertainties and risks that come with it [User Query].
- Improved task productivity: AI increasingly handles routine tasks such as document processing and basic customer inquiries, freeing humans from repetitive activities to focus on the tasks where human expertise adds the most value. Work is also completed faster, because AI can process time-consuming tasks (such as data analysis) more quickly.
- Transformation of workflows: In AI-integrated workplaces, tasks are often decomposed and allocated between human employees and AI on the principle of comparative advantage: humans provide situational understanding and judgment, while AI handles pattern recognition, computation, and execution.
- Emergence of new job roles: While AI creates entirely new job categories, it also reduces demand for certain conventional skills (such as information processing and data entry). Uniquely human skills such as creativity, emotional intelligence, and complex problem solving will become more prominent.
- Accelerating innovation: AI speeds up the pace of innovation across industries by augmenting human creativity and revealing previously undetectable opportunities through large-scale, high-speed data analysis.
In this new paradigm, the "Centaur Model" of working - a collaborative model that combines human intelligence with AI capabilities - will become increasingly common. Humans excel at strategic thinking, complex judgment, ethical reasoning, and understanding subtle situations, while AI excels in data processing, pattern recognition, and rapid execution. Workflows will be redesigned to take full advantage of these complementary strengths, with humans guiding and optimizing AI outputs.
It follows that continuous learning and adaptability will become a "survival skill" for individuals in the AI era. AI technologies, tools, and the nature of AI-enhanced work are all evolving rapidly, which means that specific technical knowledge or proficiency with current AI tools will quickly become obsolete. Therefore, the meta-skill of continuous learning, adapting to new tools and new paradigms, and having the courage to abandon old ways of working will be more important than any static skill set.
D. Reshaping Organizations and the Economy: Towards a “Neural Network in a Neural Network”
The impact of AI will extend beyond individual tasks and workflows to the reshaping of organizational structures and even entire economies. AI agents are expected to integrate different organizational functions and complete end-to-end processes, and may eventually form a complex "neural network within a neural network" structure, thus completely changing individual work, company operations, and even the economic form [User Query].
- Adjustment of organizational structure: AI tools give individual contributors greater scope and efficiency, while AI-driven automation absorbs many administrative tasks, making flatter, more self-organizing structures possible; the role of middle management may be significantly affected. For example, Amazon has set targets to increase the ratio of front-line employees to managers.
- AI-assisted organizational design: AI can analyze massive amounts of organizational data (communication patterns, performance indicators, and so on) to identify structural inefficiencies, communication bottlenecks, and collaboration opportunities, supporting data-driven, continuously optimized organizational design.
This AI-driven organizational change may give rise to a new model of "algorithmic management" and its impact on organizational design. In this model, work allocation, performance evaluation and even some strategic decisions may be deeply involved or dominated by AI algorithms. This requires more flexible organizational design to quickly respond to the insights provided by AI systems and establish new human-machine collaborative governance mechanisms.
From a more macro perspective, AI, as a general-purpose technology, is becoming a key force in catalyzing new organizational forms and value networks with its wide penetration and deep application. Traditional, hierarchical, and fixed-process-based organizations may find it difficult to adapt to the requirements of speed, flexibility, and continuous innovation in the AI era. Instead, there may be more dynamic, modular, project-centric, and cross-disciplinary collaborative networks. In these networks, human experts and AI agents work closely together, flexibly combining according to task requirements to create value together. This transformation is not only an optimization of the internal structure of the organization, but also a profound reshaping of corporate boundaries, supply chain relationships, and the way the entire industrial ecosystem operates.
VI. Conclusion and Outlook
Artificial intelligence is leading a magnificent technological and economic transformation with its unprecedented development speed and far-reaching impact. This report provides an in-depth analysis of the current AI landscape from multiple dimensions, including macro market opportunities, core strategies for AI startups, recent technological breakthroughs and trends, the conception of the future agency economy, and AI's reshaping of human thinking and working methods.
1. Unprecedented opportunities and disruptive forces in the AI market : The AI market is not only huge in scale, but its growth rate far exceeds that of early technology waves such as cloud computing, heralding a more rapid industry disruption cycle. AI is profoundly changing the profit structure of the service and software industries, driving companies to shift their value proposition from "selling tools" to "selling results." The maturity of prerequisites such as computing, networks, data, and talent, as well as new product distribution mechanisms driven by global social media, have jointly accelerated the adoption of AI. Companies and investors need to realize that the window of time to adapt to this change is more urgent than ever.
2. Strategic priorities for AI startups : The application layer is still the main battlefield for AI startups. Startups should focus on complex problems in vertical industries or specific functions and build solutions with customers at the center. To successfully build an AI company, it is necessary to follow universal entrepreneurial rules such as solving important problems and attracting top talent, and also master AI-specific moat-building strategies, such as creating end-to-end solutions, using effective data flywheels, and accumulating deep industry knowledge. At the same time, companies must be wary of the trap of "vibe revenue", identify true product-market fit through in-depth analysis of user data, and plan a clear path to profitability to cope with high initial costs. In a market with high demand and rapid change, "maximum velocity" - not only speed to market, but also speed of learning and iteration - is crucial.
3. Continuous evolution of the technological frontier : From 2023 to 2024, AI has achieved milestone breakthroughs in speech generation, assisted coding, and other fields, and user participation has also increased significantly. AI is deeply integrated into daily life and work. The enhancement of reasoning ability, the application of synthetic data, the popularization of tool use (such as through frameworks such as LangChain and LlamaIndex), and the innovation of AI scaffolding technology are becoming new engines to promote the expansion of AI intelligence boundaries. Innovation often occurs at the blurred boundary between research and productization, indicating that the cycle from "lab to market" will continue to shorten.
4. Future prospects and challenges of the agent economy : From a single agent to an agent group, and then to a mature agent economy, this is the long-term vision of AI development. In this economy, AI agents will not only exchange information, but also transfer resources, conduct transactions, and have the concepts of trust and reliability, and deeply collaborate with humans. However, realizing this vision faces multiple technical challenges such as persistent identity authentication, seamless communication protocols, strong security guarantees, and scalable infrastructure. Once the agent economy is realized, it will have a revolutionary impact on global productivity, employment structure, and economic form.
5. Profound transformation of thinking and work paradigms : The widespread application of AI is prompting human thinking to shift toward a "stochastic mindset", that is, learning to seek leverage amid uncertainty and transforming from direct executors into "managers" and "orchestrators" of AI. The work paradigm will change accordingly, and the "Centaur model" of human-machine collaboration will become mainstream. While individuals and organizations are empowered by AI, they must also improve their ability to manage uncertainty and risk. Continuous learning and adaptability will become the core competitive advantage of the AI era. Organizational structures will also evolve toward flatter, more agile, and more data-driven forms, and may eventually form a highly complex economic operating system resembling a "neural network in a neural network".
Looking ahead, the development of artificial intelligence will undoubtedly continue to accelerate, and the breadth and depth of its impact will continue to expand. For all market participants, whether large enterprises, startups, or individuals, understanding the essential characteristics of AI, grasping its development trends, actively embracing change, and committing to building a responsible and sustainable AI ecosystem will be the keys to remaining competitive in this era. AI is not only a technological innovation, but also a comprehensive reshaping and upgrading of human wisdom, organizational capabilities, and future social forms.