Code is dead, prompt is here? 9 predictions for AI development from a16z

Written by
Clara Bennett
Updated on: June 28, 2025

**How does AI technology reshape the development process? Explore a16z's nine predictions.**

Core content:
1. How the developer toolchain is changing in the AI era
2. From line-by-line code review to version control of prompts and test cases
3. Vibe coding and new flexibility in technology selection
4. Key-management challenges in the AI Agent era


There was a joke going around a while ago that went something like this: "I saw a colleague who doesn't use ChatGPT, Cursor, or Copilot. He just types code silently, like a madman~"

One year in AI feels like ten years anywhere else. As developers, the toolchain we rely on has undergone earth-shaking changes.

Over the weekend, a16z published a blog post analyzing emerging developer patterns in the AI era. It describes nine trends, and they are very insightful! Today I will share my views on these observations, categorized and simplified to help build an overall picture.

Developers either build the wheel or manage the wheel: the former is coding, and the latter is version control, dependency management, and so on. AI's intervention has first brought huge changes to these two core activities.

AI native Git: Rethinking version control for AI agents

In the past, when we used Git, we cared about every line of code that changed. But now, when an AI Agent can generate or modify hundreds or thousands of lines with a single "apply", do we still care about line-by-line review? Honestly, what we care about more is: "Does the AI-modified code work? Has it been tested?"

This means that the focus of version control may shift from the code itself to the prompts that generate the code and the test cases that verify its behavior.

In the future, version records may no longer be terse commit messages but a "prompt + tests bundle". And Git? It may shift from a detailed line-by-line history to a more macro-level "log of intentions and results", recording why a change was made and who (or which AI model) made it.
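As a thought experiment, a "prompt + tests bundle" commit record might look like the sketch below. Every field name here is illustrative, not part of Git or any real tool:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class PromptCommit:
    """Hypothetical version-control record: the intent, not the diff."""
    author: str   # the human (or agent) that initiated the change
    model: str    # which AI model generated the code
    prompt: str   # the intent that produced the change
    tests: list = field(default_factory=list)  # tests that verify the behavior

    def to_log_entry(self) -> str:
        # Serialize to a JSON line, suitable for an append-only "intent log".
        return json.dumps(asdict(self))

commit = PromptCommit(
    author="alice",
    model="gpt-4o",
    prompt="Add rate limiting to the /login endpoint",
    tests=["test_login_rate_limited", "test_login_normal_flow"],
)
entry = commit.to_log_entry()
```

Reviewing such a record means asking "is this the right intent, and do these tests cover it?" rather than reading the generated diff line by line.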

This reminds me of requirement reviews I used to run. After a lot of wrangling, the final code sometimes ended up far from the original idea. If we could trace back to the original "intention", review and iteration would be far more efficient.

Forget templates and embrace vibe coding.

Once upon a time, when we started a new project, we always had to choose between create-react-app, vue create or various boilerplates. These templates gave us a very good starting point, but also limited our imagination.

Now, with tools like Lovable and Cursor, we can just say "I want an xxx server using xxxx" and they will scaffold everything for us.

This "vibe coding" (intent-driven programming) model makes personalization and customization easier than ever before.

What does this mean? The framework "lock-in effect" may be greatly reduced. If you use Next.js today and decide tomorrow that Remix + Vite is better, you can simply ask an AI Agent to refactor. Regretting a technology choice no longer seems so painful.

Of course, this brings new challenges: team collaboration norms, code-quality control, and the reliability of AI-driven refactoring are all pitfalls that still need solving.

How to manage keys in the AI Agent era?

.env files and other config files, which typically hold API keys and database passwords, have always been standard in local development. But when AI Agents start writing code and deploying services for us, who is responsible for that .env? Can the AI read it? Is it safe for it to do so? These are open questions.

The likely future trend is that AI Agents no longer access these raw keys directly, but instead obtain time-limited, scope-limited credentials (tokens) through an authorization framework similar to OAuth 2.1. Alternatively, a local "secrets broker" service runs alongside them: when an AI Agent needs a permission (such as "deploy to the test environment"), it applies to the broker, which decides whether to grant it.
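The broker idea can be sketched in a few lines. This is a minimal illustration, not a real authorization framework: class and scope names are invented, and a production version would add policy checks, audit logging, and revocation:

```python
import secrets
import time

class SecretBroker:
    """Minimal sketch of a local secrets broker: agents never see raw keys;
    they receive short-lived, scope-limited tokens instead."""

    def __init__(self, raw_secrets: dict):
        self._raw = raw_secrets   # e.g. loaded from .env; never handed out
        self._grants = {}         # token -> (scope, expiry timestamp)

    def request_token(self, agent: str, scope: str, ttl_seconds: int = 300) -> str:
        # A real broker would check policy here: may this agent have this scope?
        token = secrets.token_urlsafe(16)
        self._grants[token] = (scope, time.time() + ttl_seconds)
        return token

    def use(self, token: str, scope: str) -> str:
        granted_scope, expiry = self._grants.get(token, (None, 0.0))
        if granted_scope != scope or time.time() > expiry:
            raise PermissionError("token invalid, expired, or out of scope")
        return self._raw[scope]   # the broker performs the privileged lookup

broker = SecretBroker({"deploy:staging": "API_KEY_XXXX"})
token = broker.request_token(agent="build-agent", scope="deploy:staging")
key = broker.use(token, "deploy:staging")
```

The key property is that the agent holds only a token that expires and works for one scope, so a leaked token is far less damaging than a leaked .env.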

In addition to changing the production and management of code, AI is also reshaping the way we interact with systems and information.

Monitoring dashboards in the AI era

The dashboards of monitoring backends and cloud consoles carry ever more information and buttons; finding a feature or spotting a trend is often confusing. But what if an LLM steps in? We could ask directly: "Where do I adjust the rate-limit setting for xxx?" or "Summarize the error trends across all pre-release services in the past 24 hours." We could even let the AI prompt us proactively: "Based on your business data, I suggest you watch these metrics this quarter."

This means the UI itself can become more dynamic and conversational. Going further, if AI Agents also need to "see" these dashboards to understand system state and take action, we may need to design dual-mode interfaces for both humans and AI. It is a very different feeling: from passively receiving information to actively "talking" with the system to gain insight.
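A dual-mode interface can be as simple as rendering the same underlying data two ways. In this sketch the metrics, field names, and `mode` values are all made up for illustration:

```python
import json

# Illustrative metrics snapshot; in a real system this would come from monitoring.
METRICS = {"service": "checkout", "error_rate": 0.042, "p95_latency_ms": 310}

def render_dashboard(metrics: dict, mode: str) -> str:
    """One data source, two views: prose for a human, JSON for an agent."""
    if mode == "agent":
        return json.dumps(metrics)  # structured, machine-readable view
    return (f"{metrics['service']}: error rate {metrics['error_rate']:.1%}, "
            f"p95 latency {metrics['p95_latency_ms']} ms")

human_view = render_dashboard(METRICS, mode="human")
agent_view = render_dashboard(METRICS, mode="agent")
```

The design point is that neither view is derived by scraping the other; both are rendered from the same source of truth, so human and agent always see consistent numbers.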

Documentation is becoming a combination of tools, indexes, and interactive knowledge bases

In the past, when we read documents, we were used to reading from beginning to end, or looking for them in the table of contents. What about now? When we encounter a problem, we just throw it to the search engine or AI. This shift from passive reading to active query is causing a qualitative change in the documents themselves. Documents are no longer just static pages for human developers to see, but also a context source, tool index, and interactive knowledge base that AI agents can understand and use.

Products like Mintlify have begun to structure documents so that they can be easily retrieved and referenced by AI. To put it bluntly, in the future some documents will be written for humans, while others may be "API instructions" written for machines (AI agents). This is a big challenge for how we think about documentation writing.
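The shift from "pages to read" to "chunks to query" can be illustrated with a toy retriever. The chunk structure and fields below are invented, and the naive keyword match stands in for the embedding-based search a real system would use:

```python
# Sketch: documentation as retrievable chunks with metadata, so an agent can
# query it instead of reading top to bottom. All fields are illustrative.
DOC_CHUNKS = [
    {"id": "auth-01", "topic": "authentication",
     "text": "Pass the API key in the Authorization header as a Bearer token."},
    {"id": "rate-01", "topic": "rate-limits",
     "text": "Clients are limited to 100 requests per minute per key."},
]

def retrieve(query: str, chunks: list) -> list:
    """Naive keyword overlap, standing in for semantic (embedding) search."""
    words = set(query.lower().split())
    return [c for c in chunks
            if words & set((c["topic"] + " " + c["text"]).lower().split())]

hits = retrieve("rate limits per minute", DOC_CHUNKS)
```

Once docs carry stable IDs and topics like this, an agent can cite the exact chunk it relied on, which is exactly the "context source" role described above.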

Applications from the LLM's perspective

Some AI applications are beginning to request accessibility permissions for macOS, not for traditional accessibility features, but to enable AI agents to "see" and interact with various application interfaces.

This approach is actually quite clever. Think about it: the accessibility API already exposes an application's semantic structure (buttons, titles, input boxes). Extended a little, wouldn't it become a "universal interface" through which AI Agents can understand and operate existing applications?

This means that even if an application does not provide a public API, AI Agents may be able to "use" it by simulating assistive technology. For developers, this may mean that in the future, when designing applications, in addition to the visual layer and DOM layer, an "Agent Accessibility Layer" must also be considered.
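The core idea, navigating a semantic tree instead of pixels, looks roughly like this. The tree shape and field names here are invented for illustration; real accessibility APIs (such as macOS's AXUIElement hierarchy) expose richer attributes and actions:

```python
# A toy accessibility tree: roles and labels, not pixels.
UI_TREE = {
    "role": "window", "label": "Settings", "children": [
        {"role": "button", "label": "Save", "children": []},
        {"role": "textfield", "label": "Username", "children": []},
    ],
}

def find_element(node: dict, role: str, label: str):
    """Depth-first search for an element by semantic role and label,
    the kind of lookup an agent would do before clicking or typing."""
    if node["role"] == role and node["label"] == label:
        return node
    for child in node["children"]:
        found = find_element(child, role, label)
        if found:
            return found
    return None

save_button = find_element(UI_TREE, role="button", label="Save")
```

An "Agent Accessibility Layer" would amount to applications keeping this semantic tree accurate and complete, even for custom-drawn widgets that have no native controls behind them.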

As AI becomes more and more deeply integrated into the development process, our collaboration model with AI and the entire developer ecosystem will be reconstructed.

Asynchronous execution agents are beginning to emerge

We are growing accustomed to handing tasks off to an AI Agent, letting it work silently in the background and report back when done. It feels like moving from working alongside AI to assigning tasks to AI. This asynchronous collaboration model not only shares the workload but also reduces coordination costs between teams: things that previously required meetings, cross-department communication, and lengthy reviews can now be handed directly to AI Agents to attempt.

Moreover, the interfaces for interacting with AI keep expanding: not just the IDE or command line, but chatting in Slack, commenting on Figma drafts, leaving opinions on PRs, even voice. AI is becoming ubiquitous across the development life cycle, and developers increasingly act like managers, deciding which agents take on which tasks.

MCP Protocol

Simply put, MCP aims to solve two major problems: first, providing the LLM with the context it needs to complete a task, context it may never have seen before; second, providing a standardized interface so that all kinds of tools (as servers) can be called by any AI Agent (as a client), avoiding N×M one-off integrations.

If MCP can be popularized, and applications provide MCP interfaces by default, just like websites provide APIs by default, then AI Agents can easily combine and call various tools and services like building blocks. This is very important for building a prosperous AI Agent ecosystem.
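The N×M point can be made concrete with a toy server. The message shapes below are loosely inspired by MCP's JSON-RPC style (`tools/list`, `tools/call`) but this is a simplified sketch, not a spec-compliant implementation, and the `get_weather` tool is invented:

```python
import json

# One server exposes its tools behind a uniform interface; any client that
# speaks the protocol can call them without a bespoke integration.
TOOLS = {
    "get_weather": lambda args: f"Sunny in {args['city']}",
}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC-style request to the registered tools."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = sorted(TOOLS)  # advertise available tools
    elif req["method"] == "tools/call":
        params = req["params"]
        result = TOOLS[params["name"]](params["arguments"])
    else:
        return json.dumps({"id": req["id"], "error": "unknown method"})
    return json.dumps({"id": req["id"], "result": result})

listing = handle_request(json.dumps({"id": 1, "method": "tools/list"}))
call = handle_request(json.dumps(
    {"id": 2, "method": "tools/call",
     "params": {"name": "get_weather", "arguments": {"city": "Paris"}}}))
```

Because discovery (`tools/list`) and invocation (`tools/call`) are standardized, N tools and M agents need one protocol between them instead of N×M custom adapters.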

Agents also need basic components!

No matter how powerful an AI Agent is or how much code it can generate, it still needs access to stable, reliable foundational services. Just as human developers rely on Stripe for payments, Clerk for authentication, and Supabase for databases, AI Agents also need clear, easy-to-use, highly available "service primitives" when building applications.

This means that in the future, these foundational service providers will not only need to offer easy-to-use APIs and SDKs, but may also need to optimize for how AI Agents consume them: more structured schemas, capability metadata, even default MCP Server interfaces. This is crucial to improving AI Agents' success rate when building complex applications.
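What "capability metadata" might buy an agent can be sketched as follows. The service, operation names, and schema fields are all hypothetical, the point is only that a machine-readable contract lets the agent self-check a call before issuing it:

```python
# A hypothetical service primitive advertising its capabilities in a
# structured form an agent can read before integrating.
CAPABILITIES = {
    "service": "payments",
    "operations": {
        "create_charge": {
            "required": ["amount_cents", "currency"],
            "returns": "charge_id",
        },
    },
}

def validate_call(capabilities: dict, op: str, args: dict) -> bool:
    """Check a planned call against the advertised schema before sending it."""
    spec = capabilities["operations"].get(op)
    if spec is None:
        return False  # unknown operation
    return all(key in args for key in spec["required"])

ok = validate_call(CAPABILITIES, "create_charge",
                   {"amount_cents": 500, "currency": "usd"})
bad = validate_call(CAPABILITIES, "create_charge", {"amount_cents": 500})
```

Catching the malformed call locally, instead of via a cryptic 400 response, is exactly the kind of optimization for agent consumption habits described above.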

Summary

Some of these emerging patterns are still very early, or even wishful thinking, but they all point to a common future: the way software is built is being redefined. Developers are freed from line-by-line coding to play the role of designers and decision-makers, focusing on business logic, user experience, and innovation itself.

At the same time, these changes come with their own challenges. For developers: how do we adapt to this era and avoid being left behind?

How do we ensure the quality and maintainability of AI-generated code? Will tool protocols create new barriers?

There are no standard answers to these challenges yet. But it is these unknowns and uncertainties that make this era so exciting.