How MCP Enables Agentic AI Workflows

Written by
Clara Bennett
Updated on: June 28, 2025
Recommendation

Explore how Anthropic's MCP empowers LLMs to achieve autonomous decision-making and intelligent automation.

Core content:
1. The definition and value of agentic AI workflows
2. How MCP provides tools, memory, and iterative reasoning for LLMs
3. Practical examples using a Todo List MCP Server and a Calendar MCP Server

Yang Fangxian
Founder of 53AI / Tencent Cloud Most Valuable Expert (TVP)

Agentic AI workflows have arrived. This article shows how Anthropic's MCP empowers LLMs to make autonomous decisions. A demonstration with a Todo List MCP Server and a Calendar MCP Server shows how the MCP client coordinates tools and dynamic prompts to build modular, composable automation. MCP nesting enables microservice-like delegation: a dev-scaffolding server can coordinate spec-writer, code-gen, and test-writer servers into a powerful tool system.

Translated from: How MCP Enables Agentic AI Workflows [1]

By Michael Field

Interest in Anthropic's Model Context Protocol (MCP) [2] is as high as confusion about what it is and why you should use it. In Part 1 of this series, I took a deep dive into MCP [3] — what it is, and what it is not. In this post, I'll explore the main reason there's so much discussion around it: enabling agentic AI workflows.

MCP for Agentic Workflows

On its own, a Large Language Model (LLM) [4] simply maps input to output. An agentic LLM system gives the LLM:

  • Tools for action
  • Memory of past steps
  • Iteration loops and reasoning methods
  • Optional goals or tasks

So when you connect the LLM to tools, let it decide which tools to call, let it reflect on the results, and let it plan the next step, you give it agentic capability. It can now decide what to do next without being told every step.
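That loop can be sketched in a few lines of plain Python. This is an illustrative toy, not the MCP SDK: `fake_llm` is a scripted stand-in for a real model's decisions, and the tools are trivial lambdas.

```python
# Toy agentic loop: tools for action, memory of past steps, an iteration loop.
# The "LLM" is hard-coded purely to show the shape of the control flow.

def fake_llm(goal, memory, tools):
    """Pretend LLM: picks the next tool call, or None when it is done."""
    if not memory:
        return ("search", {"query": goal})           # act on the goal
    if len(memory) == 1:
        return ("summarize", {"text": memory[-1]})   # reflect on the result
    return None                                      # plan complete

TOOLS = {
    "search": lambda query: f"results for '{query}'",
    "summarize": lambda text: f"summary of {text}",
}

def run_agent(goal):
    memory = []                      # memory of past steps
    while True:                      # iteration loop
        decision = fake_llm(goal, memory, TOOLS)
        if decision is None:
            break                    # the "LLM" decides it is finished
        name, args = decision
        memory.append(TOOLS[name](**args))
    return memory

print(run_agent("deep work today"))
```

The point is that the next step is chosen by the model at runtime, not scripted in advance by the caller.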

So how does this relate to MCP? Well, as we covered, MCP can provide context beyond tools. An MCP server can also provide parameterized prompts, effectively allowing the server to supply the LLM's next instruction. This chaining of prompts can open some very interesting doors.

Even more striking is how MCP surfaces the relevant tools at the right time, without cramming every option into the prompt context. Rather than over-engineering the prompt to account for every possibility and forcing the LLM into a deterministic workflow, the server can say, in effect: "Here are the results of this tool call, and if things get more complicated, here are some tools that might help." This makes the system more adaptable and extensible, while still giving the LLM the flexibility to explore new paths when the initial instructions aren't fully deterministic.

In fact, with these capabilities we have something like an agent, which emerges from the interaction of several parts:

  • LLM (reasoning and decision-making)
  • MCP server (provides tools and chained prompts)
  • MCP client (manages the loop and execution)
  • User (provides the goal)

Let's see this in action. I'll demonstrate a very simple agentic workflow in which the LLM calls tools from multiple MCP servers based on the prompts they return. Here are the servers I'm using:

Todo List MCP Server

[[tool]]
name = "add_task"
description = "Adds a new task to your todo list."
input_parameters = [
  { name = "task_description", type = "string", description = "The task to add to your todo list." }
]

[[prompt]]
name = "plan_daily_tasks"
description = "Plans the day by breaking down a user goal into actionable tasks."
input_parameters = [
  { name = "user_goal", type = "string", description = "The user's goal for the day." }
]
template = """Based on the user's goal: '{user_goal}', generate 2-3 specific, actionable tasks that would help the user achieve it.
For each task, call the `add_task` tool with a helpful task description."""
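For illustration, here is how that prompt definition might be represented and rendered in plain Python. The dict layout and `get_prompt` helper are assumptions for this sketch, not MCP SDK APIs; the template string mirrors the TOML above.

```python
# Sketch: a parameterized prompt as data, filled in at request time.
# Mirrors the plan_daily_tasks definition above; plumbing is illustrative.

PROMPTS = {
    "plan_daily_tasks": {
        "input_parameters": ["user_goal"],
        "template": (
            "Based on the user's goal: '{user_goal}', generate 2-3 specific, "
            "actionable tasks that would help the user achieve it.\n"
            "For each task, call the `add_task` tool with a helpful task description."
        ),
    }
}

def get_prompt(name, **args):
    """Fill the parameterized prompt template with the caller's arguments."""
    return PROMPTS[name]["template"].format(**args)

print(get_prompt("plan_daily_tasks", user_goal="focus on deep work today"))
```

The rendered text is what gets handed to the LLM as its next instruction, which is how the server steers the loop.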

Calendar MCP Server

[[tool]]
name = "schedule_event"
description = "Schedules an event in your calendar."
input_parameters = [
  { name = "task_description", type = "string", description = "The task or event to be scheduled." },
  { name = "time", type = "string", description = "The time when the event should be scheduled (e.g., '2pm today')." }
]

[[prompt]]
name = "schedule_todo_task"
description = "Schedules a task from the todo list into your calendar."
input_parameters = [
  { name = "task_description", type = "string", description = "The task to schedule." }
]
template = """The user wants to schedule the task: '{task_description}'.
Suggest a good time for today and call the `schedule_event` tool to add it to the calendar."""
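With both server configurations defined, an MCP client would aggregate their tool metadata into a single context for the LLM. A minimal plain-Python sketch follows; the structures are illustrative, not the actual MCP wire format.

```python
# Sketch: the client packages the user message with tool metadata from
# every connected server before sending it to the LLM.

TODO_TOOLS = [
    {"name": "add_task", "description": "Adds a new task to your todo list."},
]
CALENDAR_TOOLS = [
    {"name": "schedule_event", "description": "Schedules an event in your calendar."},
]

def build_llm_context(user_message, *tool_lists):
    """Flatten tool metadata from all connected servers into one context."""
    tools = [t for tool_list in tool_lists for t in tool_list]
    return {"message": user_message, "tools": tools}

ctx = build_llm_context("I want to focus on deep work today",
                        TODO_TOOLS, CALENDAR_TOOLS)
print([t["name"] for t in ctx["tools"]])
```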

OK, now imagine a chatbot with access to the context provided by these MCP servers. When a user provides a high-level goal, such as "I want to focus on deep work today," the MCP client orchestrates a modular, multi-server workflow to fulfill the request. It packages the user message with tool metadata and prompt instructions from all connected MCP servers and sends it to the LLM. The LLM first selects the high-level planning prompt plan_daily_tasks from the Todo server, which returns instructions telling the LLM to use add_task to break the goal down into actionable tasks.

As tasks are created and the LLM is notified of each one, it reasons further and decides to call schedule_todo_task to get each task onto the calendar, triggering the Calendar server. That server responds with a new prompt guiding the use of schedule_event, at which point the LLM finalizes the day's schedule with specific times.

Each tool interaction is routed and coordinated by the MCP client, which manages the reasoning loop, coordinates tool execution, and tracks interaction state throughout the session. The result is a fully autonomous workflow: the user sets the goal, the LLM reasons and decides, MCP servers expose tools and dynamic prompts, and the MCP client coordinates the process, yielding intelligent, composable automation across domains.
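The whole workflow can be simulated end to end with a scripted stand-in for the model. Everything here is a plain-Python sketch: the task breakdown and the chosen times are hard-coded where a real LLM would decide them, and the two servers are reduced to local functions.

```python
# End-to-end sketch of the Todo + Calendar workflow described above.
# The "LLM" decisions are scripted; server behavior mirrors the configs shown.

todo_list, calendar = [], []

def add_task(task_description):
    """Todo server tool."""
    todo_list.append(task_description)
    return f"added: {task_description}"

def schedule_event(task_description, time):
    """Calendar server tool."""
    calendar.append((task_description, time))
    return f"scheduled '{task_description}' at {time}"

def run_workflow(goal):
    # 1. The LLM picks the plan_daily_tasks prompt; the Todo server's
    #    template instructs it to call add_task for each generated task.
    for t in (f"{goal}: block 1", f"{goal}: block 2"):
        add_task(t)
    # 2. For each new task the LLM invokes schedule_todo_task, and the
    #    Calendar server's prompt guides it to call schedule_event.
    for hour, t in enumerate(todo_list, start=9):
        schedule_event(t, f"{hour}am today")
    return calendar

run_workflow("deep work")
print(calendar)
```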


Starting from a very basic, high-level prompt, you now have an agent that can make multiple decisions on its own to reach a final goal. Of course, without knowing what kind of deep work the user wants to focus on, the generated tasks have little value, but improving this would simply mean giving the MCP server more comprehensive and thoughtful prompts.

MCP Nesting

Things start to get really interesting when you start looking beyond a single layer of MCP clients and servers. MCP servers can also be clients of other MCP servers. This nesting enables modularity, composition, and proxy-like delegation, where one server can "delegate" parts of reasoning or functionality to another server.

This is like microservices for agents [5]. Just as backend applications moved from monolithic architectures to microservices [6], we can now use MCP servers to decouple tool logic from the agent runtime. With new MCP servers appearing rapidly, it is easy to imagine a large, highly composable ecosystem of tools that can be snapped together like Lego bricks to build comprehensive workflows.
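A nested arrangement can be sketched as one server object holding a client reference to another. The class names and the `call_tool` method here are illustrative stand-ins, not the MCP SDK.

```python
# Sketch of MCP nesting: SchedulerServer serves its own clients while
# acting as a client of the upstream TimeServer.

class TimeServer:
    """Upstream server exposing a single tool."""
    def call_tool(self, name, args):
        if name == "current_time":
            return "2pm"
        raise KeyError(name)

class SchedulerServer:
    """Downstream server that delegates part of its reasoning upstream."""
    def __init__(self, upstream):
        self.upstream = upstream
    def call_tool(self, name, args):
        if name == "schedule_soon":
            # delegate the time lookup to the upstream server
            now = self.upstream.call_tool("current_time", {})
            return f"scheduled '{args['task']}' at {now}"
        raise KeyError(name)

scheduler = SchedulerServer(TimeServer())
print(scheduler.call_tool("schedule_soon", {"task": "review specs"}))
```

The caller only ever talks to `SchedulerServer`; the delegation is invisible from the outside, which is what makes the composition modular.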

For example, you could have a dev-scaffolding MCP server that acts as a high-level coordinator, helping developers turn ideas into working code by orchestrating several specialized upstream MCP servers. When a user requests a new application feature (e.g., "add login functionality"), the coordinator calls its upstream servers: spec-writer to generate an API specification, code-gen to build code from that specification, and test-writer to generate the corresponding test cases.

These aggregate MCP servers can also provide environment-specific functionality. In other words, they expose the same interface (e.g., query_database) but are configured for different environments. This would let you have a dev-app-server whose upstream MCP servers include a dev-db-server backed by SQLite, a dev-auth-server that returns mocked authentication responses, and a dev-deploy-server that wraps native command-line interface (CLI) tools. A prod-app-server would then point to the equivalent upstream servers for your cloud-based deployment.
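The environment-switching idea can be sketched as two registries exposing the same `query_database` interface over different mocked backends. All names and wiring here are illustrative.

```python
# Sketch: one tool interface, two environment-specific upstream backends.
# Callers use the same tool name regardless of environment.

def dev_db_server(sql):
    return f"sqlite result for: {sql}"   # stands in for a local SQLite backend

def prod_db_server(sql):
    return f"cloud result for: {sql}"    # stands in for a cloud database backend

APP_SERVERS = {
    "dev-app-server": {"query_database": dev_db_server},
    "prod-app-server": {"query_database": prod_db_server},
}

def call(app_server, tool, *args):
    """Route a tool call through the chosen environment's app server."""
    return APP_SERVERS[app_server][tool](*args)

print(call("dev-app-server", "query_database", "SELECT 1"))
print(call("prod-app-server", "query_database", "SELECT 1"))
```

Swapping environments changes only which upstream servers the app server points at; the agent's tool calls stay identical.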

Platforms like mcp.run already make heavy use of this composability. mcp.run lets you install a scalable, dynamically updatable server that draws on an upstream registry of MCP servers it calls servlets. These servlets do not need to be installed locally; they can run remotely on the mcp.run infrastructure.

This is powerful for a number of reasons, but for the purposes of this article, it highlights an important shift taking place in the MCP ecosystem: remote MCP servers. That's the subject of the third and final article in this series.