One Prompt Beats 10 Lines of Code: YC Partners Explain the AI Startup Moat

Written by
Caleb Hayes
Updated on: June 9, 2025
Recommendation

YC partners take a deep look at the AI startup moat and at how prompt engineering is upending traditional coding.

Core content:

  • The unique position and value of prompt engineering in AI startups;
  • Building an AI startup moat from prompt design, evaluation sets, and a distillation pipeline;
  • A Prompt-FDE action list you can execute within 24 hours, turning instruction flow into a moat.

 

On May 31, 2025, Silicon Valley's top startup incubator Y Combinator (YC) dropped a technical bombshell on its official podcast, "Lightcone":

Prompt engineering today is where programming was in 1995:

Whoever writes it first will own the next generation of GitHub.

This is not an exaggeration; it is YC President Garry Tan's judgment from the same episode, backed by the accelerator's latest internal data:

  • A quarter of the batch's startups now have LLMs generating more than 90% of their code and tests;
  • In practice, "vibe coding" lets a 10-person team match the iteration pace of a traditional 50–100 person engineering org.

At the same time, "Prompt = The concept of code is flooding the global venture capital circle: from Business Insider's in-depth coverage of Vibe Coding to the Financial Times tracking the cost turning point of open source models, almost all trends point to the same conclusion:

What decides the winner is no longer who scales the model larger, but who first writes the business logic into an "instruction flow" and solidifies it into an evaluation system.

What does this mean for domestic entrepreneurs?

While overseas peers can already spin up demos in natural language, domestic teams are still quietly trying to close that time gap by "hiring more engineers and hand-writing code."

The moat that is truly hard to replicate is shifting from model parameters to prompt design, evaluation sets, and distillation pipelines; these three are precisely the "non-code workload" that domestic teams tend to ignore.

Next, this article will:

  • Quote the core passages of the Lightcone interview, reviewing the design details behind "one prompt beats 10 lines of code";
  • Break down the prompt → evaluation → distillation efficiency loop, explaining why "evaluation standards" are harder to plagiarize than "parameter scale";
  • Provide a Prompt-FDE action list you can execute within 24 hours, helping local teams turn instruction flow into the next generation of AI startup moats.

When lines of code no longer determine development speed, whoever first learns to write the business into prompts will take the lead in this moat competition.

Section 1 | One prompt powering a working AI toolchain

"Parahelp does something very, very good."

On the Lightcone episode, YC partner Jared called out this little-known startup by name.

Its business is actually not complicated: customer support for AI companies. If you submit a question on Perplexity or Replit, the reply you get is likely not from a human but from Parahelp's AI agent.

But what really convinced YC is the prompt behind it.

They published the full prompt that drives this customer-service agent on YouTube for the world to see. That is rare.

Because for many companies, a prompt structure like this already counts as IP.

This is not one sentence saying "you are a customer service assistant"; it is a task instruction manual

On the show, Diana broke down the complete structure of this prompt:

The prompt document runs six pages. It does not start by answering questions; it first establishes the role: you are the "manager" of the customer-service agents.

Then it lists the job in a few key points:

  • Determine whether each user request requires a tool call;
  • If so, decide which tool to call and how to call it;
  • If not, compose a natural-language explanation.

The LLM is not left to "freestyle"; the task is decomposed step by step: first assess, then decide whether to approve the call, and finally select the output format.

Throughout, the prompt carries multiple "don't do this" reminders, such as:

  • Don't call the wrong tool; don't treat "check order" as "cancel order";
  • Don't answer what you don't know;
  • Don't change the output format; make sure it matches what the other agents expect.

Why is this prompt so valuable?

Because it is not a simple request; it is a collaboration manual. It tells the AI: you are not working alone; your answer will be consumed directly by other AIs.

She mentioned a key detail: the prompt uses an XML-tag-like format.

You don't write one sentence telling the model "please help me reply to the customer"; you use structures like <task>…</task> and <approve>yes</approve> so the model emits its output module by module.

Why do this?

Because customer service is not a single-point task; it involves multi-step actions: recognizing, judging, responding, and calling tools.

If you don't spell it out, the model can only "guess" what you want; if it is spelled out, the model can execute the process automatically.
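
To make that concrete, here is a minimal sketch of what such a tagged prompt and the routing around its output might look like. Everything in it is an illustrative assumption (the tag names, the stubbed response, the order_lookup tool), not Parahelp's actual format:

    import re

    # A minimal sketch of an XML-tagged "manager" prompt. Tag names and
    # structure are illustrative assumptions, not Parahelp's real format.
    MANAGER_PROMPT = """You are the manager of a customer-service agent.
    For each user request:
    1. Decide whether a tool call is needed.
    2. If so, name the tool; if not, draft a natural-language reply.
    3. Never invent tools, and never change the output format.

    Respond ONLY in this structure:
    <task>one-line restatement of the request</task>
    <approve>yes or no</approve>
    <tool>tool name, or "none"</tool>
    <reply>reply text if no tool is needed</reply>"""

    def parse_tag(response: str, tag: str) -> str:
        """Pull one tagged field out of the model's output."""
        match = re.search(rf"<{tag}>(.*?)</{tag}>", response, re.DOTALL)
        return match.group(1).strip() if match else ""

    # Pretend this came back from the model; a real call would send
    # MANAGER_PROMPT plus the user's message to your LLM client.
    response = (
        "<task>Check the status of order #1234</task>"
        "<approve>yes</approve>"
        "<tool>order_lookup</tool>"
        "<reply></reply>"
    )

    if parse_tag(response, "approve") == "yes":
        print("route to tool:", parse_tag(response, "tool"))
    else:
        print("send reply:", parse_tag(response, "reply"))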

The evolution of prompts: not smarter, but more stable

Diana said: these prompts read more like code than natural language. Their goal is not to make the AI smarter, but to make it more controllable.

This is also why many startups are reluctant to disclose their prompt structure: it is no longer a throwaway instruction, but the backbone of the entire service process.

It wires up the model, calls the interfaces, coordinates the output, and even thinks ahead about where the model may go wrong.

So the real value of that six-page prompt is not how much it says, but that:

  • It makes the model do the work, connects the process, and stabilizes the results;
  • It is not an ordinary request, but a set of expressions other systems can cooperate with.

This is also what the YC partners repeatedly stressed:

The prompt format can be imitated, but what is truly hard to copy is the set of judgment criteria behind it.

Section 2 | Prompts can be copied, but knowing what "right" looks like is the moat

Parahelp dares to publish its own prompt not because the prompt doesn't matter, but because you can't use it even if you take it.

YC partner Garry made it clear on the show:

They do not treat the prompt as IP; they treat the evaluation logic as IP.

If you don't know why the prompt is written this way, you can't change it, tune it, or even judge what went wrong.

The real core is not the prompt content, but the judgment criteria

Diana gave a common example: while optimizing prompts, many people find that "model performance is unstable; it answers correctly today and errs tomorrow." But they can't articulate what the standard is, or which part of the task they gave was unclear.

Diana said: the problem with most teams is that they have no way to tell whether a prompt is good; they just go by gut feel.

Jared added later: if you are not sure what the final result of the task should look like, you cannot write the prompt.

This is Parahelp's real moat: they don't just write prompts; they also write a complete set of rules for checking the results.

They don't rely on manually reviewing every result; instead, "what a good answer should look like" is written into the structure of the prompt itself.

How they do this is the key step

On the show, Jared mentioned a particularly practical design: give the model explicit escape hatches it is allowed to output, such as:

"I'm not sure what you are asking me to do right now."

"I am missing key information and cannot continue."

This format is an escape valve you leave for the model. If you don't give it a way out, it will fabricate an answer just to have something to submit. That is what we call "hallucination."

Add this kind of "feedback slot" and you can see which step the model gets stuck on and what it misunderstood; often what it exposes is that your own workflow was never clearly specified.

A prompt is essentially a work guide for the AI

Garry said: workflows used to live in people's heads and on whiteboards; now they live in the prompt.

That sounds simple, but behind it lies a complete methodological shift at YC: the job is not to find a smarter model, but to write the way of working so clearly that the model, the tools, and the users can all connect to it.

So before you optimize a prompt, always ask: have you clearly written down what a "good answer" looks like?

Section 3 | The founders YC values most write prompts right next to their customers

In this interview, YC partners repeatedly mentioned one term: FDE, the Forward Deployed Engineer.

But what they really mean is not a job title; it is a method: don't sit in the office tuning the model. Sit next to the client, listen to how they describe the problem, write the prompt on the spot, and show them the result.

In YC's view, this is the most valuable skill for AI entrepreneurs.

Garry described the core traits of such people:

  • Not necessarily the strongest technically, but they understand the business;
  • They don't write long requirements documents; they iterate on the prompt directly and bring a demo;
  • They don't wait for the team to productize; they "get it working first" with the model.

His summary was apt: it is not about who builds the most complete product, but who first tunes a prompt into a working first version.

The shadow of Palantir in the new generation of AI founders

Garry himself was an early employee of Palantir (the well-known American data analytics and intelligence software company). He recalled:

You sit on site, in the client's office, and write the software there.

What these people do is very specific:

  • Understand the Word docs and Excel sheets on the client's desk;
  • Hear, inside offhand verbal feedback, which requirement actually matters;
  • Write prompts that don't just look smart, but that the customer can actually use.

Today, many YC-backed AI companies, such as Giga ML (enterprise on-premise model deployment) and HappyRobot (voice AI agents for logistics), use exactly this method to land seven-figure deals:

They don't rely on burning capital to build product; they rely on two or three founders who visit customers and tune prompts. After the first meeting they hand over a runnable first version and win the deal by being first to run.

A prompt that wins doesn't rely on complexity; it relies on taking over real work

In prompt engineering, 80% of the problem is not technology; it is not understanding what the customer wants.

Garry gave an example:

You write a prompt that lets the model identify user intent in a customer-service scenario, but you have never visited the support team, so you don't know that what they fear most is mistakenly canceling orders. The model turns out to be very proactive, and the team doesn't dare use it at all.

The mistake is not that you wrote it wrong, but that you were too far from the scene.

Garry said: these founders are not tuning the model; they are building a set of work instructions for the model that can be executed directly.

Behind this, there is actually a new method that more and more YC companies are using:

Meta-Prompting

Research released by OpenAI and Stanford University in 2024

Meta-prompting does not mean "smarter prompts." It means breaking a complex task into several small pieces, having the same model handle each piece separately, and finally integrating the pieces back into one result.

In essence, the prompt is no longer just content; it is a process framework.
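
As a rough illustration (our sketch, not the paper's exact protocol: the sub-task split and the ask_model stub are assumptions):

    # Meta-prompting as a process framework: split the task into
    # sub-prompts, run each through the same model, then merge.

    def ask_model(prompt: str) -> str:
        # Placeholder: replace with a real LLM client call.
        return f"<model output for: {prompt.splitlines()[0]}>"

    def answer_ticket(ticket: str) -> str:
        # 1. Break the complex task into small, separately prompted pieces.
        intent = ask_model(f"Classify this ticket's intent in one word:\n{ticket}")
        facts = ask_model(f"List the concrete facts stated in this ticket:\n{ticket}")
        draft = ask_model(f"Draft a short, polite reply to this ticket:\n{ticket}")
        # 2. Integrate the pieces back into a single result.
        return ask_model(
            "Combine these pieces into one final customer reply.\n"
            f"Intent: {intent}\nFacts: {facts}\nDraft: {draft}"
        )

    print(answer_ticket("Where is my order #1234? It was due Monday."))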

Tropir, a YC portfolio company building AI debugging and prompt-optimization tools, uses this approach to let the model optimize the prompt structure itself:

  • The user writes only a rough initial prompt;
  • The model generates new versions based on past failures and call context;
  • The engineer picks the best-performing version as the final instruction.

This is what YC means by: don't hand-write prompts; teach the AI to generate prompts itself.
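
Reconstructed from that description, the loop might look like the sketch below (an assumption on our part, not Tropir's actual product; ask_model is a stand-in for a real LLM call):

    # Sketch of a refinement loop: feed the current prompt plus recent
    # failure cases back to the model and collect candidate rewrites.

    def ask_model(prompt: str) -> str:
        # Placeholder: replace with a real LLM client call.
        return "<improved prompt draft>"

    def propose_revisions(current: str, failures: list[str], n: int = 3) -> list[str]:
        request = (
            "Here is a prompt and cases where it failed.\n"
            f"PROMPT:\n{current}\n"
            "FAILURES:\n" + "\n".join(failures) + "\n"
            "Rewrite the prompt to fix these failures. Return only the new prompt."
        )
        return [ask_model(request) for _ in range(n)]

    # The engineer then scores each candidate and keeps the best one.
    candidates = propose_revisions(
        "You are a support agent...",
        ["treated 'check order' as 'cancel order'"],
    )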

And this ability is exactly the FDE-style founder's biggest advantage.

Section 4 | Prompt-FDE Implementation Guide: Run the first version in 24 hours

YC partner Jared emphasized repeatedly in the interview:

Prompt engineering is not a workflow diagram designed in the office;

it is a game of use-and-revise, ship-and-optimize.

So the first task for YC founders now is not recruiting a team or writing a business plan; it is getting a model to run an executable process, even if only a first draft.

We can summarize it in two concepts: the Prompt-FDE methodology plus a minimum deployable prompt loop.

Step 1: Use a meta prompt to write your prompts

Jared's first trick: don't grind out the prompt yourself; let the model draft the first version for you:

"You only need to tell the model: You are a senior prompt engineer, please help me write a prompt suitable for XX tasks. Requirements: There are role settings, process steps, possible errors and output formats. "

That is the "meta prompt" in action.

Diana said: our default move now is to write the meta prompt first and let the large language model generate the prompt skeleton. Sometimes the structure Claude or GPT-4 produces is clearer than what an engineer would write.

Many YC teams have even templated this step and use it out of the box.
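
Templated, that step can be as small as the sketch below; the exact wording and the example task are illustrative assumptions paraphrasing the recipe from the episode:

    # A meta prompt that asks a strong model to draft your task prompt.
    META_PROMPT = """You are a senior prompt engineer.
    Write a production prompt for the following task: {task}

    The prompt you write must include:
    - a role definition,
    - numbered process steps,
    - likely failure modes and what to do instead,
    - an exact output format.
    Return only the prompt text."""

    request = META_PROMPT.format(task="triage inbound customer-support emails")
    # Send `request` to Claude or GPT-4 and keep the reply as prompt v1.
    print(request)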

Step 2: Pull out five real use cases and run a self-evaluation

In the early stage, Parahelp does not rely on manually checking every model reply. Instead, they:

  • Collect historical customer data;
  • Pick out 3 to 5 representative examples per task;
  • Feed them to the model along with the prompt and let it check itself: can it reproduce the answer that was previously given?

Garry calls this the most underestimated move at YC:

A prompt is not about how good you think it is, but about whether you have examples that tell the model: this is what I want.

If you don't give it examples, it will freestyle, and you will never rein it in.
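
In code, that loop can be as small as the sketch below. The tickets, expected outcomes, and run_agent stub are invented for illustration; the point is replaying known-good examples after every prompt change:

    # Replay historical tickets through the prompt and compare against
    # answers a human has already approved.

    def run_agent(ticket: str) -> str:
        # Placeholder: send your prompt plus the ticket to the model
        # and return the tool (or answer) it chose.
        return ""

    EVAL_SET = [  # (historical ticket, known-good outcome)
        ("Where is my order #1234?", "order_lookup"),
        ("Cancel my subscription", "cancel_subscription"),
        ("Do you support single sign-on?", "kb_answer"),
    ]

    def pass_rate() -> float:
        hits = sum(run_agent(ticket) == want for ticket, want in EVAL_SET)
        return hits / len(EVAL_SET)

    # Re-run after every prompt change; a drop means the change regressed
    # behavior the eval set had already locked in.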

Step 3: Add a debug channel so the model itself tells you where it is confused

This is the "trick" that Jared highly recommends:

At the end of the prompt, add a fixed format, such as:

If you have questions about the current task, the information is unclear, or the task cannot be completed, explain the problem you are facing in the response_debug field.

This one sentence does two things:

  • The model no longer pretends to understand; it is willing to tell you "I don't understand";
  • The founder can quickly locate the problem: is the prompt vague, or is the context failing to carry over?

YC calls this "turning the model into your complaint portal": this small feature largely determines whether you can tune a stable version of the prompt within two days.
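
On the consuming side, the handling can be a few lines, as in this sketch (the JSON envelope is our assumption; the episode only names the response_debug field):

    import json

    # Read the "complaint portal": if the model reported confusion,
    # surface it to the team instead of shipping a made-up answer.
    raw = '{"reply": "", "response_debug": "No order ID was provided."}'
    data = json.loads(raw)

    if data.get("response_debug"):
        print("model needs help:", data["response_debug"])
    else:
        print("send to customer:", data["reply"])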

Step 4: Tune the prompt on a large model, then distill it down to a smaller one

Once the prompt has been tuned on GPT-4 or Claude Opus, many YC teams then:

  • Freeze the prompt structure;
  • Switch to faster, lower-cost models (such as Claude Haiku, Gemma, Mistral);
  • Check whether the same result quality holds.

This process is also called "distillation."

Jared said: you can't run the service on GPT-4 forever, but you can use it to tune a prompt structure and then hand that structure to a small model to run.
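
A minimal sketch of that check, assuming the eval set from Step 2; the model names are the ones mentioned above, and pass_rate here is a placeholder:

    # Freeze the prompt, swap the model, and compare quality on the
    # same eval set before switching production traffic.

    def pass_rate(model_name: str, prompt: str) -> float:
        # Placeholder: replay the Step 2 eval set through `model_name`.
        return 0.0

    PROMPT = "...the structure tuned on the large model..."

    reference = pass_rate("claude-opus", PROMPT)   # quality bar
    candidate = pass_rate("claude-haiku", PROMPT)  # cheaper model

    # Serve the small model only if it stays close to the reference.
    if candidate >= 0.95 * reference:
        print("distillation holds: serve the small model")
    else:
        print("quality gap too large: keep the big model or fix the prompt")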

This is what YC entrepreneurs are doing now:

It is not about building a model; it is about turning the prompt into a product component that can ship, be replicated, and be passed down.

And implementing Prompt-FDE takes only two things:

  • A model that can write prompts (the large model writes the structure; the small model runs it);
  • A founder willing to sit next to a client and write prompts.

Conclusion | You aren't writing a model; you're writing a way of working

In the entire interview, YC partners were actually talking about one thing:

Founders are not there to create a smarter AI; they turn the task into steps the model can perform and set it in motion.

The model can be open source and the API can be shared, but the way you train it to work is hard for others to copy.

Garry emphasized at the end of the show: an AI company's real asset is not the prompt itself, but how you decide whether an answer is "good."

Whoever first writes down how the model should do the work will be first to hold the prototype of the next generation of SaaS.

In their view, a prompt is not a sentence, nor parameter tuning, but:

  • Business logic turned into executable instructions;
  • Process experience written in expressions the model can understand;
  • The founder's product thinking embedded in the AI's behavior.

This is what "1 Prompt top 10 lines of code" really means.

It's not that one sentence saves you a few lines of code; it's that you write a paragraph, the model runs, and the product ships.

For today's founders, the prompt is not just a language interface.

It is your product prototype, your organizational idea, and even the shape of your future moat.