MCP's three core concepts (2): Using MCP to implement a complete chain of data calls, API execution, and template generation

Written by
Clara Bennett
Updated on: July 15, 2025

An in-depth exploration of how MCP is applied to data calls, API execution, and template generation.

Core content:
1. Creation of virtual file Resources and MCP integration
2. Specific examples of implementing API execution through Tools
3. The key role of prompts in template generation



Below, I will simulate three "virtual files" representing Resources, Tools, and Prompts, and use their contents to illustrate how each is used in MCP. Each file contains actual code or pseudocode, kept as close to a real development scenario as possible.


File 1: Resources usage demonstration

File name: server_logs.txt
Description: Assume this is a server log file, exposed to the LLM through MCP's Resources capability so that it can analyze the errors.
Contents:

[2025-02-22 10:00:01] ERROR: Database connection failed - Timeout
[2025-02-22 10:00:05] INFO: Retry attempt 1
[2025-02-22 10:00:10] ERROR: Database connection failed - Timeout
[2025-02-22 10:00:15] FATAL: System shutdown

MCP configuration (pseudocode):

// Expose the log file as a Resource on the MCP server
// (pseudocode modeled on the MCP TypeScript SDK)
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { readFile } from "node:fs/promises";

const server = new Server({
  name: "log-server",
  version: "1.0.0"
}, {
  capabilities: { resources: {} }
});

// List available resources
server.setRequestHandler("resources/list", async () => {
  return {
    resources: [{
      uri: "file:///logs/server_logs.txt",
      name: "Server Logs",
      mimeType: "text/plain"
    }]
  };
});

// Read resource content
server.setRequestHandler("resources/read", async (request) => {
  if (request.params.uri === "file:///logs/server_logs.txt") {
    const logs = await readFile("server_logs.txt", "utf-8");
    return {
      contents: [{ uri: request.params.uri, mimeType: "text/plain", text: logs }]
    };
  }
  throw new Error("Resource not found");
});
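Under the hood, MCP messages are JSON-RPC 2.0. As a rough sketch (the field names follow the MCP spec, but the id and log text here are illustrative, not taken from a real trace), the handler above responds to an exchange shaped like this:

```python
import json

# Hypothetical sketch of the JSON-RPC 2.0 exchange behind "resources/read".
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "resources/read",
    "params": {"uri": "file:///logs/server_logs.txt"},
}

# The server answers with the same id and the resource contents.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "contents": [{
            "uri": "file:///logs/server_logs.txt",
            "mimeType": "text/plain",
            "text": "[2025-02-22 10:00:01] ERROR: Database connection failed - Timeout",
        }]
    },
}

# Over the stdio transport, each message is serialized as one line of JSON.
print(json.dumps(request))
```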

Usage analysis:

  •  Actual operation: I typed in Claude Desktop: "Analyze the error causes in server_logs.txt." The LLM accessed this Resource through MCP and read the log contents.
  •  Result: The LLM might output: "The log shows that the database connection timed out multiple times, eventually causing the system to shut down. It is recommended to check the network latency or the database configuration." Just as if I had uploaded a log file, the LLM directly "sees" the content and processes it. The role of Resources is to feed data to the LLM, similar to "uploading raw materials."
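The "analysis" step the LLM performs can be approximated mechanically. A minimal sketch, assuming the bracket-timestamped log format shown above (the regex and `summarize_log` helper are illustrative, not part of MCP):

```python
import re
from collections import Counter

# Matches lines like: [2025-02-22 10:00:01] ERROR: Database connection failed
LOG_LINE = re.compile(r"^\[(?P<ts>[^\]]+)\] (?P<level>\w+): (?P<message>.*)$")

def summarize_log(text: str) -> Counter:
    """Count log levels in the bracket-timestamped format shown above."""
    levels = Counter()
    for line in text.splitlines():
        m = LOG_LINE.match(line)
        if m:
            levels[m.group("level")] += 1
    return levels

logs = """\
[2025-02-22 10:00:01] ERROR: Database connection failed - Timeout
[2025-02-22 10:00:05] INFO: Retry attempt 1
[2025-02-22 10:00:10] ERROR: Database connection failed - Timeout
[2025-02-22 10:00:15] FATAL: System shutdown
"""

print(summarize_log(logs))  # ERROR appears twice, FATAL once
```

The counts alone already support the LLM's conclusion: repeated ERROR entries followed by a FATAL shutdown.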

File 2: Tools usage demonstration

File name: weather_tool.py
Description: A weather query tool that uses MCP's Tools capability to let the LLM call an external API for real-time weather.
Contents:

# Python MCP server defining a tool (pseudocode modeled on the MCP Python SDK)
from mcp.server import Server
import requests

app = Server("weather-server")

# Define the weather query tool
@app.tool("get_weather")
async def get_weather(city: str) -> str:
    url = f"http://api.weatherapi.com/v1/current.json?key=YOUR_API_KEY&q={city}"
    response = requests.get(url).json()
    temp = response["current"]["temp_c"]
    condition = response["current"]["condition"]["text"]
    return f"{city} Current temperature: {temp}°C, weather: {condition}"

# Start the server over stdio
async def main():
    async with app.stdio_server():
        await app.run()

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())

Usage analysis:

  •  Actual operation: I typed in the client: "Tell me the weather in Shanghai." The LLM detects that real-time data is needed and calls the get_weather tool through MCP, passing the parameter city="Shanghai".
  •  Result: The tool returns: "Shanghai Current temperature: 15°C, weather: cloudy." The LLM then integrates this result into its answer. It's as if I uploaded a "weather query script" that the LLM doesn't just read but directly executes to get the result. The core of Tools is letting the LLM do the work, not just look at the data.
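The detect-and-call step boils down to a name-to-function lookup. A minimal sketch of such a tool registry, with the weather call stubbed out so it runs without a network or the MCP SDK (the `tool` decorator and `dispatch` helper here are hypothetical, not SDK APIs):

```python
from typing import Callable

# Hypothetical registry mapping tool names to functions, mimicking what an
# MCP server does when it receives a tools/call request.
TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Register a function under a tool name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("get_weather")
def get_weather(city: str) -> str:
    # Stubbed response in place of the real weatherapi.com call above.
    return f"{city} Current temperature: 15°C, weather: cloudy"

def dispatch(name: str, arguments: dict) -> str:
    """Look up the named tool and invoke it with the parsed JSON arguments."""
    if name not in TOOLS:
        raise KeyError(f"Unknown tool: {name}")
    return TOOLS[name](**arguments)

print(dispatch("get_weather", {"city": "Shanghai"}))
```

The key design point is that the LLM only supplies the tool name and a JSON object of arguments; the server owns the actual execution.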

File 3: Prompts usage demonstration

File name: error_report_template.md
Description: An error report template, provided to the LLM through MCP's Prompts capability to generate a standardized report.
Contents:

# Error Report
Log file: {log_file}
## Error Summary
{summary}
## Detailed Analysis
{analysis}
## Remediation Suggestions
{suggestions}

MCP configuration (pseudocode):

const server = new Server({
  name: "report-server",
  version: "1.0.0"
}, {
  capabilities: { prompts: {} }
});

// Define the Prompt template
server.setRequestHandler("prompts/list", async () => {
  return {
    prompts: [{
      id: "error_report",
      name: "Error Report Generator",
      template: await readFile("error_report_template.md"),
      parameters: ["log_file", "summary", "analysis", "suggestions"]
    }]
  };
});

Usage analysis:

  •  Actual operation: I typed: "Generate an error report based on server_logs.txt." The LLM obtains this prompt template through MCP, combines it with the log data from Resources, and fills in the parameters:
    • log_file: "server_logs.txt"
    • summary: "Database connection timeout caused system crash"
    • analysis: "Multiple retries failed, possibly due to network issues"
    • suggestions: "Check database port and network stability"
  •  Result: The LLM outputs a formatted Markdown report. Just as if I had uploaded a "report template," the LLM fills in the content according to the template and generates structured output. The role of Prompts is to give the LLM a ready-made routine to improve efficiency.
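Mechanically, the fill-in step is plain parameter substitution. A minimal sketch using Python's `str.format` with the template above (shown here with markdown headings assumed for the `.md` file):

```python
# The error report template, with {placeholders} for the four parameters.
TEMPLATE = """\
# Error Report
Log file: {log_file}
## Error Summary
{summary}
## Detailed Analysis
{analysis}
## Remediation Suggestions
{suggestions}
"""

# The parameter values the LLM filled in, taken from the example above.
params = {
    "log_file": "server_logs.txt",
    "summary": "Database connection timeout caused system crash",
    "analysis": "Multiple retries failed, possibly due to network issues",
    "suggestions": "Check database port and network stability",
}

report = TEMPLATE.format(**params)
print(report)
```

The template constrains the output shape; the LLM only has to produce the four values.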

How the three work together:

Suppose I "upload" all three files to an MCP-supported client (such as Claude Desktop):

  1. Resources (server_logs.txt): the LLM first reads the log to get context.
  2. Tools (weather_tool.py): if I ask in passing, "How does today's weather affect the server?", the LLM calls the tool to check the weather.
  3. Prompts (error_report_template.md): finally, the template integrates the log analysis and weather data into one report.
Final output (simulation):

# Error Report
Log file: server_logs.txt
## Error Summary
Database connection timeout caused a system crash.
## Detailed Analysis
The log shows multiple timeouts, which may indicate a network problem. Shanghai is cloudy today at 15°C, so no extreme conditions are involved.
## Remediation Suggestions
Check the database port and network stability, and rule out hardware failures.
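The three-step flow above can be chained in plain Python. A minimal sketch with the Resource read and Tool call stubbed to the example data (the `read_resource` and `call_tool` helpers are stand-ins for the MCP requests, not SDK functions):

```python
def read_resource(uri: str) -> str:
    # Stub for an MCP resources/read request (step 1: get context).
    return ("[2025-02-22 10:00:01] ERROR: Database connection failed - Timeout\n"
            "[2025-02-22 10:00:15] FATAL: System shutdown")

def call_tool(name: str, **arguments) -> str:
    # Stub for an MCP tools/call request (step 2: fetch live data).
    return "Shanghai Current temperature: 15°C, weather: cloudy"

# Step 3: the Prompt template that shapes the final report.
TEMPLATE = "Log file: {log_file}\nSummary: {summary}\nAnalysis: {analysis}"

def build_report() -> str:
    logs = read_resource("file:///logs/server_logs.txt")
    weather = call_tool("get_weather", city="Shanghai")
    error_count = logs.count("ERROR")
    return TEMPLATE.format(
        log_file="server_logs.txt",
        summary=f"{error_count} database error(s), ending in shutdown",
        analysis=f"Timeouts in the log; {weather}, so weather is not a factor",
    )

print(build_report())
```

Resources supply the raw data, Tools fetch what the data lacks, and the Prompt decides how the two are stitched together.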

Closing thoughts

The example above simulates the intuitive experience of "uploading files" and shows how MCP's three concepts are implemented. In reality, MCP requires server and client support, but the underlying principle really is this simple and direct.