MCP in-depth analysis

In-depth analysis of the MCP protocol reveals a new era of AI tool integration.
Core content:
1. The essence of the MCP protocol and its working principle
2. Demonstrate the collaborative workflow of MCP in practical applications through case analysis
3. How MCP simplifies AI tool integration, reduces costs and improves efficiency
To paraphrase a famous quip about monoids: "MCP is an open protocol that standardizes how applications provide context to LLMs. What's the problem?" Yet even after hours spent reading the MCP definition and working through examples, it is still hard to pin down how it actually operates: What is the LLM responsible for? What does the MCP server do? What is the role of the MCP client? How does data flow? Where are the decision points? This article analyzes what MCP is and how it works, and uses a complete step-by-step example to show how the components cooperate and what happens at each step.
Specifically, we deployed a Cloudflare Worker that intentionally contained a vulnerability. After the error triggered an alert in Sentry, an AI assistant (Cline) running in Visual Studio Code (VS Code) extracted stack trace information through the hosted Sentry MCP server, created a corresponding GitHub issue through the local GitHub MCP server, patched the code, submitted the fix, and redeployed - all operations were completed under manual approval. MCP simplifies integration work from M×N custom bridge mode to M+N adapter mode, but it also brings costs such as latency, security review, and learning curve.
Why do we need MCP?
When AI assistants need to coordinate real-world systems (Sentry for monitoring, GitHub for code management, Jira for tickets, Postgres for databases, and so on), each additional host-tool pair requires a custom adapter, a set of token shims, and a potential failure point. The resulting "glue code" is not only difficult to maintain but also widens the attack surface. MCP aims to replace this sprawl with a unified interaction protocol, allowing any compatible host to talk directly to any compliant tool.
M×N integration cost
Without MCP, a separate connector is required between each Large Language Model (LLM) host or agent (such as ChatGPT, Claude, Cline, or VS Code Copilot) and each tool (such as Sentry, GitHub, Jira, MySQL, or Stripe) - that is, a combination of M hosts × N tools, resulting in a lot of glue code. Each connector needs to be re-implemented:
Authentication and token refresh
Data format mapping
Error handling and retry mechanism
Rate limiting rules
This leads to quadratic cost increases for security hardening. Predictably, teams end up prioritizing only a handful of integrations, with the rest relegated to the “backlog.”
Unified Protocol Management Connector
MCP attempts to bring a unified experience similar to USB-C to AI tools: each host only needs to implement the MCP protocol once, and each tool only needs to expose an MCP server, so that any combination of hosts and tools can communicate. The complexity is reduced from M×N to M+N. To verify this idea, we conducted actual tests.
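To make the arithmetic concrete, here is a tiny sketch; the host and tool names are just the examples mentioned above, not an exhaustive list:

```typescript
// Integration count without MCP (one bespoke bridge per host-tool pair)
// versus with MCP (one client per host plus one server per tool).
const hosts = ["ChatGPT", "Claude", "Cline", "VS Code Copilot"]; // M = 4
const tools = ["Sentry", "GitHub", "Jira", "MySQL", "Stripe"];   // N = 5

const customBridges = hosts.length * tools.length; // M × N = 20
const mcpAdapters = hosts.length + tools.length;   // M + N = 9

console.log({ customBridges, mcpAdapters });
```

Note how the models scale differently: adding a sixth tool costs four new bridges in the first model, but only one new MCP server in the second.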
Before we jump into the step-by-step demonstration, let’s quickly review the core concepts:
MCP Basics
If you are familiar with the Language Server Protocol (LSP) or JSON-RPC, MCP will feel familiar. If not, here is a 30-second primer:
Core terms
Stateful Design
MCP uses a persistent connection design, where the remote server typically uses HTTP + Server Sent Events (SSE), and the local process uses pure stdio. The server can remember the context of each client (such as identity tokens, working directories, and ongoing job IDs). Although this design is more complex than the stateless REST protocol, it supports streaming differential updates, long-running jobs, and server-initiated callbacks.
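As an illustration of what statefulness buys you, here is a minimal sketch of server-side session state; the field names are invented for this example and are not part of the MCP specification:

```typescript
// Per-client context a stateful MCP server can keep between calls;
// a stateless REST endpoint would need every request to carry all of this.
type SessionContext = {
  authToken?: string;    // identity established once, at connect time
  cwd: string;           // working directory for file tools
  runningJobs: string[]; // long-running job ids for later callbacks
};

const sessions = new Map<string, SessionContext>();

function onConnect(sessionId: string): void {
  sessions.set(sessionId, { cwd: "/", runningJobs: [] });
}

function onStartJob(sessionId: string, jobId: string): void {
  // The server remembers the job, so it can push progress updates later.
  sessions.get(sessionId)!.runningJobs.push(jobId);
}

onConnect("client-1");
onStartJob("client-1", "job-7");
console.log(sessions.get("client-1")!.runningJobs); // value: ["job-7"]
```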
Discovery Process
The client calls tools/list to ask the server: "What can you do?"
The server returns JSON data describing the name, summary, and JSON Schema of parameters and results of each tool.
The host injects this JSON into the model context.
When the user prompts for an action, the model generates a structured call:
{ "name": "create_issue", "arguments": { "title": "...", "body": "..." } }
The MCP client performs the call over the transport layer and streams the results back in chunks. The conversation then continues.
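The discovery steps above can be sketched end to end. A real client carries JSON-RPC over stdio or SSE; in this illustrative sketch the "wire" is a direct function call so the flow stays visible, and the tool registry follows the create_issue example above:

```typescript
// Minimal in-memory model of MCP tool discovery and dispatch.
type Tool = {
  name: string;
  description: string;
  inputSchema: object; // JSON Schema for the tool's parameters
};

// Server side: the registry it advertises.
const registry: Tool[] = [
  {
    name: "create_issue",
    description: "Create a GitHub issue",
    inputSchema: {
      type: "object",
      properties: { title: { type: "string" }, body: { type: "string" } },
      required: ["title"],
    },
  },
];

// Steps 1-2: the client calls tools/list; the server answers with metadata
// that the host then injects into the model context.
function toolsList(): Tool[] {
  return registry;
}

// Steps 4-5: the model emits a structured call; the client dispatches it.
function callTool(call: { name: string; arguments: Record<string, unknown> }) {
  const tool = toolsList().find((t) => t.name === call.name);
  if (!tool) throw new Error(`unknown tool: ${call.name}`);
  return { ok: true, tool: tool.name, args: call.arguments };
}

const result = callTool({
  name: "create_issue",
  arguments: { title: "Fix bug in index.ts", body: "..." },
});
console.log(result.tool); // "create_issue"
```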
Demo Scenario: The 5 p.m. Regression Alert Nightmare
To make this concrete, imagine the following scenario: it's Friday afternoon and you're the last person in the office. As you're packing up, Slack alerts you that a new regression has occurred in worker.ts:12. Our goal is to find the shortest path from that first Slack message to a deployed fix.
Presentation Components
The vulnerable Cloudflare Worker reports anomalies to Sentry, which are passed to Cline in VS Code via the hosted Sentry MCP (SSE). The same session then connects to the locally running GitHub MCP (stdio), allowing the agent to create issues, add comments, and push pull requests to the GitHub repository under human supervision.
Construction process
Environmental requirements
Initialize the vulnerable Worker
Execute in the terminal:
npx -y wrangler init bug-demo
Follow the prompts to select:
╭ Create an application with Cloudflare Step 1 of 3
│
├ In which directory do you want to create your application?
│ dir ./bug-demo
│
├ What would you like to start with?
│ category Hello World example
│
├ Which template would you like to use?
│ type Worker only
│
├ Which language do you want to use?
│ lang TypeScript
│
├ Copying template files
│ files copied to project directory
│
├ Updating name in `package.json`
│ updated `package.json`
│
├ Installing dependencies
│ installed via `npm install`
│
╰ Application created
╭ Configuring your application for Cloudflare Step 2 of 3
│
├ Installing wrangler A command line tool for building Cloudflare Workers
│ installed via `npm install wrangler --save-dev`
│
├ Installing @cloudflare/workers-types
│ installed via npm
│
├ Adding latest types to `tsconfig.json`
│ added @cloudflare/workers-types/2023-07-01
│
├ Retrieving current worker compatibility date
│ compatibility date 2025-04-26
│
├ Do you want to use git for version control?
│ yes git
│
├ Initializing git repo
│ initialized git
│
├ Committing new files
│ git commit
│
╰ Application configured
╭ Deploy with Cloudflare Step 3 of 3
│
├ Do you want to deploy your application?
│ no deploy via `npm run deploy`
│
╰ Done
Enter the project directory:
cd bug-demo
Install the Sentry SDK:
npm install @sentry/cloudflare --save
Open the project in VS Code:
code .
Edit wrangler.jsonc and add the compatibility_flags array:
"compatibility_flags": ["nodejs_compat"]
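For orientation, a minimal wrangler.jsonc after this edit might look roughly like this; the name and compatibility_date values are placeholders from this demo setup, not requirements:

```jsonc
{
  "name": "bug-demo",
  "main": "src/index.ts",
  "compatibility_date": "2025-04-26",
  "compatibility_flags": ["nodejs_compat"]
}
```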
Visit the Cloudflare Worker Sentry setup guide, copy the sample code to src/index.ts, and add an intentional vulnerability in the fetch() method:
import * as Sentry from "@sentry/cloudflare";

export default Sentry.withSentry(
  (env) => ({
    dsn: "https://[SENTRY_KEY]@[SENTRY_HOSTNAME].ingest.us.sentry.io/[SENTRY_PROJECT_ID]",
    tracesSampleRate: 1.0,
  }),
  {
    async fetch(request, env, ctx) {
      // ❌ intentional bug
      undefined.call();
      return new Response("Hello World!");
    },
  } satisfies ExportedHandler<Env>,
);
Deploy and trigger the vulnerability:
npm run deploy
Visit the Cloudflare Worker URL in your browser
https://bug-demo.<your-cf-hostname>.workers.dev
You should see the error:
Error 1101 - Worker threw an exception
Configure Sentry MCP server in Cline
After installing Cline in VS Code, perform the following steps:
Click the Cline icon in the sidebar.
Click the "MCP Servers" button at the top of the Cline panel.
Select the "Installed" tab.
Click "Configure MCP Servers".
Paste the following Sentry MCP server configuration JSON and save (Cmd + S):
{
  "mcpServers": {
    "sentry": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.sentry.dev/sse"]
    }
  }
}
After saving, the browser opens an authorization window; click "Approve" to allow the remote MCP server to access your Sentry account.
Configure GitHub MCP server in Cline
Generate a GitHub personal access token:
Click on your avatar in GitHub and go to settings.
In the left column, select "Developer settings" → "Personal access tokens" → "Fine-grained tokens".
Click "Generate new token", enter a name, select "All repositories", and check the permissions:
Administration: Read and Write
Contents: Read and Write
Issues: Read and Write
Pull requests: Read and Write
Generate a token and save it.
Add the GitHub MCP server in Cline:
Repeat steps 1-4 for configuring Sentry MCP.
Paste the following configuration JSON (replace <YOUR_TOKEN> with your actual token):
{
  "mcpServers": {
    "sentry": {...},
    "github": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
        "ghcr.io/github/github-mcp-server"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"
      }
    }
  }
}
Save and click "Done".
Practical demonstration: creating a GitHub repository
Ask Cline: "Create a private GitHub repository called bug-demo."
Process analysis:
System prompt and tool list: Cline sends the LLM a request containing the system prompt, the tool list, and the current mode (see src/core/prompts/system.ts in the Cline repository for the prompt-building logic).
Tool call generation: the LLM responds with a structured tool call in XML format:
<use_mcp_tool>
  <server_name>github</server_name>
  <tool_name>create_repository</tool_name>
  <arguments>
    { "name": "bug-demo", "private": true }
  </arguments>
</use_mcp_tool>
Dispatch: Cline forwards the call to the GitHub MCP server as a JSON-RPC request over the stdio transport:
{
  "jsonrpc": "2.0",
  "id": "42",
  "method": "create_repository",
  "params": { "name": "bug-demo", "private": true }
}
The GitHub MCP server processes the request and calls the GitHub API to create a repository, returning the result:
{
  "jsonrpc": "2.0",
  "id": "42",
  "result": {
    "id": 123456789,
    "name": "bug-demo",
    "visibility": "private",
    "default_branch": "main",
    "git_url": "git://github.com/speakeasy/bug-demo.git",
    "etc": "..."
  }
}
Result handling: Cline displays the result in its UI, then pushes the local repository to GitHub with git:
git remote add origin git@github.com:speakeasy/bug-demo.git && git push -u origin main
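The request/response pairing in this exchange can be sketched as follows. This is an illustrative in-memory model of the id matching and newline-delimited framing a JSON-RPC client performs over stdio, not Cline's actual implementation:

```typescript
// A JSON-RPC client keeps a map of pending requests keyed by id, so a
// response arriving later can be routed back to the right caller.
type JsonRpcRequest = { jsonrpc: "2.0"; id: string; method: string; params: object };
type JsonRpcResponse = { jsonrpc: "2.0"; id: string; result: unknown };

const pending = new Map<string, (result: unknown) => void>();

function send(req: JsonRpcRequest, onResult: (result: unknown) => void): string {
  pending.set(req.id, onResult);     // remember who is waiting for this id
  return JSON.stringify(req) + "\n"; // stdio transport: one JSON message per line
}

function receive(line: string): void {
  const res = JSON.parse(line) as JsonRpcResponse;
  pending.get(res.id)?.(res.result); // match the response to its request by id
  pending.delete(res.id);
}

let repoName = "";
send(
  { jsonrpc: "2.0", id: "42", method: "create_repository", params: { name: "bug-demo", private: true } },
  (result) => { repoName = (result as { name: string }).name; },
);
receive('{"jsonrpc":"2.0","id":"42","result":{"id":123456789,"name":"bug-demo"}}');
console.log(repoName); // "bug-demo"
```

Because requests and responses are correlated by id rather than by ordering, the client can keep several tool calls in flight over one connection.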
Fixing vulnerabilities with MCP
Give Cline the task:
Get the latest issues from Sentry.
Create a GitHub issue and associate a Sentry link.
Fix the vulnerability according to the Sentry issue.
Commit changes in a new branch, referencing both issues.
Push your branch to GitHub.
Open a pull request (PR).
Key steps:
Get Sentry issues: Cline calls Sentry MCP's list_issues tool to fetch the latest error details, including the stack trace:
{ "jsonrpc": "2.0", "id": "42", "method": "list_issues", "params": { "sortBy": "last_seen" } }
The response shows that the vulnerability is located in undefined.call() on line 12 of src/index.ts.
Create a GitHub issue: Cline calls GitHub MCP's create_issue tool:
{
  "jsonrpc": "2.0",
  "id": "44",
  "method": "create_issue",
  "params": {
    "owner": "speakeasy",
    "repo": "bug-demo",
    "title": "Fix bug in index.ts",
    "body": "Related Sentry issues: https://speakeasy.sentry.io/issues/NODE-CLOUDFLARE-WORKERS-1"
  }
}
Code fixes and commits
The LLM reads the code through the read_file tool and generates a fix (removing the undefined.call() line).
Cline creates a branch, commits changes, and pushes:
git checkout -b fix-bug-NODE-CLOUDFLARE-WORKERS-1
git add src/index.ts && git commit -m "Fixes NODE-CLOUDFLARE-WORKERS-1 and closes #1"
git push origin fix-bug-NODE-CLOUDFLARE-WORKERS-1
Create a pull request: Cline calls the create_pull_request tool to open a PR for the branch:
{
  "jsonrpc": "2.0",
  "id": "47",
  "method": "create_pull_request",
  "params": {
    "owner": "speakeasy",
    "repo": "bug-demo",
    "title": "Fix bug in index.ts",
    "head": "fix-bug-NODE-CLOUDFLARE-WORKERS-1",
    "base": "main",
    "body": "Fix Sentry issue and close #1"
  }
}
Manual approval and deployment: a developer reviews the PR, merges it, and redeploys the Worker:
npm run deploy
Verification confirms the vulnerability is fixed: Sentry no longer reports new errors.
Lessons Learned
Advantage: Unified protocol
MCP delivers on its promise: you configure each service once instead of building a custom bridge for every host-tool pair. In our demo, the three components (the Cline host, the Sentry server, and the GitHub server) each needed only a single MCP configuration.
Challenges: Latency, security, and learning curve
Latency
MCP introduces an extra layer between the LLM and the API, which adds some latency. Although the impact is negligible in most scenarios, it should be evaluated carefully in latency-sensitive applications or complex workflows.
Security
Token management: Different servers require independent authentication (such as Sentry's OAuth, GitHub's API token).
Authorization scope: users must authorize each server separately, which is cumbersome.
Token refresh: few clients support OAuth token refresh today; it is currently handled by wrappers such as mcp-remote.
Learning Curve
Inadequate documentation: the specification is comprehensive but offers limited practical guidance.