Taming AI agents: 11 tips from Google researchers to make them smarter

Written by
Jasper Cole
Updated on: June 28, 2025

11 practical tips shared by Google researchers to make your AI agent smarter and more efficient!

Core content:
1. Provide sufficient contextual information to reduce AI's guessing errors
2. Clarify the role and capabilities of AI and keep the information consistent
3. Think about the problem from the perspective of AI and users and provide appropriate details
4. Give detailed but not too specific examples to avoid overfitting
5. Gradually train AI to use various tools and correct incorrect usage



Now, let's talk about how to make your AI agents more obedient and useful. Don't expect them to understand everything right away; that's unrealistic. The "spells" you give them, that is, your prompts, determine whether they end up as "artificial idiots" or capable assistants.

This article, to put it bluntly, is about how to write these prompts more effectively. It's not metaphysics; it's more like communicating with a newcomer who is a bit stubborn but has unlimited potential. You have to speak clearly and get to the point.

First and foremost, give it enough context

Think about it: if you were completely in the dark with no information, could you do good work? The same is true for AI. Everything it can access (system instructions, the tool definitions you give it, what it said before, and the requirements you put forward) makes up its "worldview".

Therefore, the first and most important tip is to feed it relevant data and background information. Don't be afraid of giving too much; current models are very tolerant. The more complete the information, the less likely it is to guess blindly. For example, if you want it to solve a code problem, throw it the relevant code snippets, function definitions, and even the error output from its previous attempts. Sometimes it gets stuck just because one comment line or one key error log is missing. And remember: when truncating a log, the information at the beginning and end is usually more useful than the middle, so cut from the middle rather than blindly deleting from the end.
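The truncation advice above can be sketched in a few lines. This is a minimal illustration; the character budget and the `[truncated]` marker are arbitrary choices, not from the article:

```python
def truncate_middle(text: str, max_chars: int = 4000,
                    marker: str = "\n...[truncated]...\n") -> str:
    """Drop the middle of an overlong log, keeping the head and tail."""
    if len(text) <= max_chars:
        return text
    keep = max_chars - len(marker)
    head = keep // 2       # first half of the budget goes to the start
    tail = keep - head     # the rest goes to the end
    return text[:head] + marker + text[-tail:]
```

Feed the model `truncate_middle(log)` instead of `log[:4000]`, so the final error message at the tail of the log survives.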

Describe the world clearly so it won't be confused.

You have to tell it who it is, what it is doing, and what tools it can use. For example, you can state directly in the system prompt: "You are an AI programming assistant that can read and write the code repository and use these tools to work." A single sentence like that can make a huge difference.

Also, all the information you give it must be consistent. If the system prompt says the current directory is /home/user, then the Execute Command tool you provide should also use that as its default working directory, and if the Read File tool gets a relative path, it needs to know that the path is relative to /home/user. Don't fight yourself and confuse the AI. Models are easily confused: if you say one thing in one place and another thing elsewhere, it won't know which to believe.
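One way to keep these pieces consistent is to derive them all from a single source of truth. A minimal sketch; the tool names, description strings, and consistency check are made up for illustration, not from any specific framework:

```python
WORKDIR = "/home/user"  # single source of truth for every component

SYSTEM_PROMPT = (
    f"You are an AI programming assistant. "
    f"The current working directory is {WORKDIR}."
)

# Hypothetical tool definitions; each one documents the same directory.
TOOLS = [
    {
        "name": "execute_command",
        "description": f"Run a shell command. Default working directory: {WORKDIR}.",
    },
    {
        "name": "read_file",
        "description": f"Read a file. Relative paths resolve against {WORKDIR}.",
    },
]

def check_consistency() -> bool:
    """Cheap guard: every tool description mentions the same directory
    as the system prompt, so the model never sees contradicting defaults."""
    return WORKDIR in SYSTEM_PROMPT and all(
        WORKDIR in t["description"] for t in TOOLS
    )
```

A check like this can run in a unit test, so a refactor that changes the default directory in one place but not another fails CI instead of silently confusing the model.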

Think from its perspective, and also from the user's perspective

If the user is writing code in an IDE, then the "world" the AI sees should match the state of the IDE: which file is currently open, which line the cursor is on, even what code is visible on screen and what text is selected. These details are sometimes the finishing touch. Of course, don't go to extremes; if the AI pays too much attention to IDE details, it may miss the point. There is a balance to strike here.
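In practice the IDE state can travel with each user message, for instance as a small JSON payload. A sketch with made-up field names; any real IDE integration would define its own schema:

```python
import json

def build_user_message(question: str, open_file: str,
                       cursor_line: int, selection: str) -> str:
    """Attach the editor state (hypothetical fields) to the user's question."""
    ide_state = {
        "open_file": open_file,
        "cursor_line": cursor_line,
        "selected_text": selection,
    }
    return (f"IDE state:\n{json.dumps(ide_state, indent=2)}\n\n"
            f"Question: {question}")
```

Keeping this in the user message rather than the system prompt also matters later, when prompt caching comes up: the IDE state changes on every interaction.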

Don't be afraid of being long-winded, but be careful with examples

Models like detailed instructions. If you want it to use a new tool, such as the version control tool Graphite, simply write out the common steps: how to create a PR, how to update one, how to sync code. Also tell it which plain git commands (`git commit`, `git pull`, etc.) it should not use.

However, be careful when giving examples. Models are masters of pattern matching: give them examples that are too specific and they may "overfit", copying the pattern verbatim and failing on slightly different situations. Telling them "what not to do" is usually the safer bet, although it doesn't always work either.

Tool use needs training

Don't expect the model to use the tools you give it correctly. It may pass the wrong parameters, miss a few, or simply not understand which tool you want it to use when. For example, suppose you give it a simple Edit File tool plus a complex Clipboard tool meant for moving large blocks of code. If you then ask it to move a class from file A to file B, it will probably still stubbornly grind through the move with the simple Edit File tool.

What to do? First, define the tools clearly. Second, when it uses the wrong tool or the wrong parameters, don't just let the program crash with an exception. Have the tool return a clear error message to the model, such as: "When you called tool X, parameter Y was required, but you didn't provide it." It usually understands and corrects itself.
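The error-feedback idea can be sketched as a wrapper that catches failures and hands the model a plain-language message instead of crashing. The wrapper shape and parameter checking are illustrative assumptions, not a specific framework's API:

```python
def call_tool(tool, name: str, args: dict, required: list[str]) -> str:
    """Run a tool call; on bad input, return an error string the model
    can read and act on, rather than raising an exception."""
    missing = [p for p in required if p not in args]
    if missing:
        return (f"Error: when you called tool {name}, parameter(s) "
                f"{', '.join(missing)} were required but not provided.")
    try:
        return tool(**args)
    except Exception as exc:  # surface the failure as text, not a crash
        return f"Error: tool {name} failed: {exc}"
```

The returned string goes back into the conversation as the tool result, giving the model a concrete correction to retry with.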

Some "unorthodox tricks" and caveats

  • Sometimes, threatening it ("If you make a mistake, the project will fail") can actually improve results, while gentle persuasion or yelling at it doesn't help much. This is probably some quirk of the training data.
  • The model is more sensitive to the content at the beginning and end of the prompt. Consider placing important instructions at the beginning of the user's question, or at the very beginning or end of the entire input.
  • Be careful with prompt caching. If your prompt contains information that changes frequently (such as the current time), don't put it in the system prompt or tool definition, as that will cause the cache to become invalid frequently. Put this changing information in each user question.
  • There comes a point where no matter how much you optimize your prompts, the improvement is minimal. This is the so-called "plateau". At that point, consider whether it's time to change your approach or bring in techniques beyond prompt engineering.
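The caching caveat above boils down to: keep the static parts (system prompt, tool definitions) byte-identical across calls so a prompt cache can reuse them, and put anything volatile into the user turn. A sketch; the message shape is the common role/content convention, not any particular vendor's API:

```python
from datetime import datetime, timezone

# Static prefix: identical on every call, so a prompt cache can reuse it.
SYSTEM_PROMPT = ("You are an AI programming assistant with read/write "
                 "access to the repository.")

def build_messages(question: str) -> list[dict]:
    """Volatile data (here, the current time) lives only in the user message."""
    now = datetime.now(timezone.utc).isoformat()
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Current time: {now}\n\n{question}"},
    ]
```

Because the system message never changes between calls, repeated requests share a cached prefix; only the cheap user turn varies.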

In the final analysis

There is no magic to getting an AI agent's prompts right. It's just disciplined communication. You have to guide it, clean up its messes, and iterate on your instructions like you would with a stubborn but capable junior employee. Treat the prompts themselves as code: versioned, reviewed, and tested. That's how you turn AI into your right-hand man instead of a troublemaker.

A complete prompt example

## Version control with Graphite
We use Graphite on top of git for version control. Graphite helps manage git branches and PRs.
Graphite maintains PR stacks: changes to a PR are automatically rebased on PRs higher up in the stack, saving a lot of manual work. The following sections describe how to perform common version control workflows with Graphite and GitHub.
If users ask you to implement this type of workflow, follow these guidelines.

### Prohibitions
Do not use `git commit`, `git pull`, or `git push`. These commands are replaced by Graphite commands beginning with `gt` as described below.

### Create a PR (and branch)
To create a PR:
- Use `git status` to see which files have changed and which are new
- Use `git add` to temporarily store related files
- Create a branch using `gt create USERNAME-BRANCHNAME -m PRDESCRIPTION` where:
  `USERNAME` can be obtained from other places, see the relevant instructions
  `BRANCHNAME` is a good name for your branch
  `PRDESCRIPTION` is a good description you write for your PR
- This may fail due to pre-commit issues. Sometimes pre-commit will fix these issues on its own. Check `git status` to see if any files have been modified. If so, `git add` them. If not, fix the issues yourself and `git add` them. Then try to create the PR again by repeating the `gt create` command.
- Run `gt submit` to create a PR on GitHub (skip this step if you are just creating a branch).
- If `gh` is available, use it to set the PR description.
NOTE: Don't forget to add the files before running `gt create`, otherwise you will get stuck!

### Update PR
To update a PR, do the following.
- Use `git status` to see which files have changed and which are new
- Use `git add` to temporarily store related files
- Use `gt modify` to commit your changes (no commit message required)
- This may fail due to pre-commit issues. Sometimes pre-commit will fix these issues on its own. Check `git status` to see if any files have been modified. If so, `git add` them. If not, fix the issues yourself and `git add` them. Then try again by repeating the `gt modify` command.
- Push changes using `gt submit`
- If you also need to update the PR description, use `gh` (if it is not installed, please inform the user, but do not force them to update the PR description)

### Pull changes from main branch
To sync your local repository with the main branch, do the following.
- Use `git status` to make sure your working directory is clean
- Use `gt sync` to pull changes and perform a rebase
- Follow instructions. If there are conflicts, ask the user if they want to resolve them. If the user agrees, follow the instructions displayed by `gt sync`.

### Other Graphite Commands
To find other commands, run `gt --help`.

A quick overview of the key skills

| # | Skill | Key point in one sentence | Operational check |
|---|-------|---------------------------|-------------------|
| 1 | Context first | Make sure all business and user context is in the prompt before you optimize wording. | ✅ In the entry function, concatenate retrieved code/documents first, then write the system instructions. |
| 2 | Present a complete world model | In the system prompt, explain the agent's environment, permissions, and resources. | ✅ State clearly: "You can read/write the repository using the following tools…". |
| 3 | Keep components consistent | System prompt, tool definitions, and user instructions must not contradict each other. | ✅ Unit test: verify the tool schema matches the defaults in the system prompt. |
| 4 | Stay close to the user's perspective | Put the user's current state (file, cursor, time zone…) into user messages, not the system prompt. | ✅ The UI layer attaches a user-context JSON to every interaction. |
| 5 | Be detailed, not terse | If the token budget allows, err on the verbose side to leave the model less room to guess. | ✅ Truncate long logs in the middle only, keeping the beginning and end. |
| 6 | Prevent example overfitting | The more specific the examples, the more the model copies the template; mix in counterexamples or edge cases. | ✅ Few-shot blocks cover unusual inputs. |
| 7 | Harden tool calls | Return explicit errors from tools so the model can self-correct. | ✅ All tool wrappers return natural-language error descriptions instead of throwing exceptions. |
| 8 | Use "psychological pressure" sparingly | Emphasizing costs/consequences appropriately promotes caution. | ✅ Add "Errors will cause CI to fail" to the system prompt. |
| 9 | Manage the prompt cache | Don't write changing state into the system prompt, or the whole cache will be invalidated. | ✅ Move dynamic information such as timestamps into the user message. |
| 10 | Pin key information to the top/bottom | The beginning and end get the most attention; put important constraints at both ends or in the user message. | ✅ Put output-format instructions on the last line. |
| 11 | Recognize the prompt plateau | When simple optimizations stop helping, turn to RAG, function calling, or fine-tuning. | ✅ Run benchmark sets regularly and change strategy when the curve flattens. |