
How to Build Your First MCP Server in 2026 (Step-by-Step)
If you've been watching the AI developer tooling space, you've noticed the Model Context Protocol (MCP) go from a niche Anthropic proposal to the de facto standard for connecting LLMs to external data and tools. By early 2026, every major AI provider — OpenAI, Anthropic, Microsoft, AWS — had adopted it, and more than 10,000 public MCP servers are available.
This guide skips the theory. You'll build a working MCP server in Python, test it locally, and deploy it to production. Total time: about 45 minutes.
What is an MCP Server? (30-second recap)
An MCP server exposes three primitive types to any MCP-compatible AI client (Claude Desktop, Cursor, VS Code Copilot, etc.):
- Tools — functions the LLM can call (e.g., query a database, send an email)
- Resources — read-only data the LLM can fetch (e.g., a file, a live dashboard)
- Prompts — reusable prompt templates with parameters
Think of it as "a USB-C port for AI" — one protocol, any client.
Prerequisites
- Python 3.11+
- pip or uv (uv is recommended)
- Claude Desktop installed (free) — or any MCP client
- Basic Python familiarity
Step 1 — Project setup
Create a new project and install the MCP Python SDK:
mkdir my-mcp-server && cd my-mcp-server
python -m venv .venv && source .venv/bin/activate
pip install mcp httpx
The mcp package gives you FastMCP, a decorator-based server that handles all protocol boilerplate.
Step 2 — Define your first MCP tool
A tool is a Python function decorated with @mcp.tool(). FastMCP turns the function's signature and docstring into the tool definition the client sees.
Key things to notice:
- The docstring is the tool description — the LLM reads it to decide when to call this tool, so write it clearly
- Type hints become the parameter schema automatically
- The function can be sync or async; prefer async for anything I/O-bound
Step 3 — Add a resource
Resources are read-only data sources fetched by URI, not called with arguments. A common use case: expose a company knowledge base or a live config file.
Use @mcp.resource("config://app/settings") to expose configuration data. The URI is how MCP clients reference this resource.
Step 4 — Add a reusable prompt
Prompts let you package prompt templates that users or the LLM can invoke by name. Use @mcp.prompt(); with FastMCP the decorated function can simply return the prompt text as a string (or a list of messages), and the SDK converts it into the protocol's GetPromptResult for you.
Step 5 — Test locally with Claude Desktop
Add the server entry to Claude Desktop's config file at ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows). Restart Claude Desktop. You should see the hammer icon in the chat input — that means Claude has detected your MCP tools.
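The entry tells Claude Desktop how to launch your server over stdio; the paths below are illustrative and should be absolute paths on your machine:

```json
{
  "mcpServers": {
    "my-mcp-server": {
      "command": "/absolute/path/to/.venv/bin/python",
      "args": ["/absolute/path/to/my-mcp-server/server.py"]
    }
  }
}
```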
Step 6 — Deploy to production (Streamable HTTP)
For local use, stdio transport works fine. For production, switch to Streamable HTTP:
python server.py http
Your server is now reachable at http://localhost:8000/mcp. Put it behind an HTTPS reverse proxy before exposing it publicly, and use Docker if you want a containerized deployment.
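One way to support the `python server.py http` invocation is a small transport switch at the bottom of server.py; the choose_transport helper is a sketch, not an SDK feature:

```python
import sys

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-mcp-server")

def choose_transport(argv: list[str]) -> str:
    """Map the CLI argument to an MCP transport: `http` selects Streamable HTTP, anything else stdio."""
    return "streamable-http" if "http" in argv else "stdio"

if __name__ == "__main__":
    # `python server.py http` serves the MCP endpoint at http://localhost:8000/mcp by default
    mcp.run(transport=choose_transport(sys.argv[1:]))
```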
Common mistakes to avoid
- Writing vague tool descriptions — the LLM uses your docstring to decide when to call the tool
- Synchronous blocking calls — use httpx.AsyncClient, not requests
- Returning unstructured text — return structured JSON so the LLM can reliably parse results
- One giant server for everything — keep servers focused and composable
MCP vs REST API
Use MCP when you're building something an AI agent will consume. Use REST when a human application is the primary consumer. Many teams build both — a REST API for the product and an MCP server that wraps it for AI agents.
FAQ
Do I need to use Claude to build an MCP server? No. MCP is an open standard. Any MCP-compatible client works — Claude Desktop, Cursor, VS Code GitHub Copilot, Zed, and many more.
Is MCP production-ready in 2026? Yes. Streamable HTTP solved the stateful session and horizontal scaling problems. AWS, Azure, and GCP all support MCP-native deployments.
Can I use MCP with CrewAI or LangChain? Yes. LangChain has langchain-mcp-adapters. CrewAI supports MCP tool integration via its tool registry.
What's the difference between an MCP tool and a resource? Tools are callable functions with side effects. Resources are read-only data sources identified by URI.
How do I secure a remote MCP server? Use HTTPS, Bearer token auth, rate-limiting, input validation, and sandboxed containers.
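The Bearer-token part of that checklist can be sketched framework-agnostically; the header dict and token source here are assumptions, and the comparison is constant-time to avoid timing leaks:

```python
import hmac
import os

# Assumption: the shared secret comes from the environment in production
EXPECTED_TOKEN = os.environ.get("MCP_TOKEN", "change-me")

def is_authorized(headers: dict[str, str]) -> bool:
    """Validate an `Authorization: Bearer <token>` header in constant time."""
    auth = headers.get("authorization", "")
    if not auth.startswith("Bearer "):
        return False
    return hmac.compare_digest(auth.removeprefix("Bearer "), EXPECTED_TOKEN)
```

Wire a check like this into whatever middleware layer your HTTP stack provides, in front of the /mcp endpoint.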