Model Context Protocol
Intent
A standardized, open protocol for connecting LLMs to external tools, data sources, and services through a universal interface.
Problem
Every LLM provider has a different way to define tools. Every tool server has a different integration method. Building N tools for M models requires N×M integrations. This fragmentation slows development and creates lock-in.
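The N×M arithmetic can be sketched directly; the counts below are illustrative, not from any real deployment:

```typescript
// Illustrative only: suppose 5 tool servers and 4 model providers.
const tools = 5;
const models = 4;

// Without a shared protocol, every tool needs a bespoke adapter per model.
const bespokeIntegrations = tools * models; // 20

// With a shared protocol, each side implements it once.
const mcpIntegrations = tools + models; // 9

console.log(bespokeIntegrations, mcpIntegrations);
```

The gap widens as either side grows: adding a tenth tool costs one integration instead of one per model.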
Solution
MCP provides a standard protocol (like USB for AI) where tool servers expose capabilities through a consistent interface, and LLM clients can discover and use any MCP-compatible tool. A single integration on either side connects to the entire ecosystem. MCP defines three primitives: Tools (functions the model can call), Resources (data the model can read), and Prompts (templates for common workflows). Servers can be local processes or remote services.
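On the wire, MCP messages are JSON-RPC 2.0: a client discovers tools with `tools/list` and invokes one with `tools/call`. A sketch of the round trip as plain objects (the `get_weather` tool and its arguments are made up for illustration):

```typescript
// A tools/call request as a plain object; MCP framing is JSON-RPC 2.0.
// "get_weather" and its arguments are hypothetical.
const callRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_weather",
    arguments: { city: "Berlin" },
  },
};

// The server's result carries content blocks the client hands back to the model.
const callResult = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: '{"temp": 18, "units": "celsius"}' }],
  },
};

console.log(JSON.stringify(callRequest));
```

Because every server speaks this same shape, a client that can issue `tools/call` against one server can issue it against all of them.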
Diagram
LLM Client (Claude, GPT, etc.)
↕ [MCP Protocol]
MCP Server 1: Database → Tools: query, insert, update
MCP Server 2: GitHub → Tools: create_pr, list_issues
MCP Server 3: Slack → Tools: send_message, search
MCP Server 4: File System → Resources: read, write, list
One protocol, any combination of servers and clients
When to Use
- Building tool integrations that should work with multiple LLMs
- When you want to reuse tools across different projects
- Integrating with the growing ecosystem of MCP servers
- When standardization and interoperability matter
When NOT to Use
- Simple, one-off tool integrations where the overhead isn't justified
- When you need deep, custom integration that the protocol doesn't support
Pros & Cons
Pros
- Write once, use everywhere — tools work with any MCP client
- Growing ecosystem of pre-built servers
- Standardized discovery, authentication, and error handling
- Reduces integration complexity from N×M to N+M
Cons
- Protocol overhead for simple integrations
- Still evolving — specification changes
- Not all LLM providers support it natively yet
- Server quality varies across the ecosystem
Implementation Steps
1. Choose your role: building an MCP server (tool provider) or a client (LLM app)
2. For servers: define your tools, resources, and prompts using the MCP SDK
3. For clients: implement an MCP client to discover and call server capabilities
4. Use existing MCP servers from the ecosystem where available
5. Test tool descriptions for LLM usability
6. Monitor tool usage and errors in production
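Step 5 can start as a simple lint pass over tool definitions before exposing them to a model. The checks and thresholds below are illustrative conventions, not part of the MCP spec:

```typescript
// Hypothetical lint pass over tool definitions; names and limits are ours.
interface ToolDef {
  name: string;
  description: string;
}

function lintTool(tool: ToolDef): string[] {
  const problems: string[] = [];
  // A model can only pick a tool well if the description says what it does.
  if (tool.description.trim().length === 0) problems.push("missing description");
  // Overlong descriptions burn context and get skimmed poorly.
  if (tool.description.length > 300) problems.push("description too long to skim");
  // snake_case names are the common convention in MCP servers.
  if (!/^[a-z][a-z0-9_]*$/.test(tool.name)) problems.push("name is not snake_case");
  return problems;
}

console.log(lintTool({ name: "get_weather", description: "Get current weather for a city" }));
console.log(lintTool({ name: "GetWeather", description: "" }));
```

Running checks like these in CI catches unusable tool definitions before an LLM ever sees them.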
Real-World Example
Development Environment Integration
A coding agent connects to MCP servers for: GitHub (PRs, issues), PostgreSQL (database queries), file system (code editing), and Jira (task tracking). All through the same protocol. Switching from Claude to GPT requires zero changes to the tool servers.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "weather-server", version: "1.0.0" });

// Register a tool: name, description, input schema, and handler.
server.tool(
  "get_weather",
  "Get current weather for a city",
  { city: z.string(), units: z.enum(["celsius", "fahrenheit"]).optional() },
  async ({ city, units = "celsius" }) => {
    // Encode user input before embedding it in the URL, and pass units through.
    const response = await fetch(
      `https://api.weather.example/v1?city=${encodeURIComponent(city)}&units=${units}`
    );
    const weather = await response.json();
    return { content: [{ type: "text", text: JSON.stringify(weather) }] };
  }
);

// Expose the server over stdio so any MCP client can launch and connect to it.
const transport = new StdioServerTransport();
await server.connect(transport);