MCP Server Architecture
Connecting LLMs to Enterprise Data Securely
TL;DR
- MCP is the USB-C of AI — a single open protocol connecting any LLM client to any data source or tool.
- Three primitives: Resources (data to read), Tools (actions to invoke), Prompts (reusable templates).
- Enterprise deployment requires: OAuth/API key auth, RBAC, input validation, audit logging.
- Works across databases, REST APIs, file systems, SaaS platforms, and internal microservices.
The Problem MCP Solves
Before MCP, every LLM integration required bespoke code. You wanted your AI assistant to query your PostgreSQL database? Write a custom function. Connect it to Salesforce? Write another custom integration. Connect it to your internal knowledge base, your Jira board, and your S3 buckets? Write three more. And when you switched LLM providers, rewrite everything.
Model Context Protocol eliminates this N×M integration problem. Instead of N LLM clients × M data sources = N×M custom integrations, you get N clients + M servers = N+M implementations. Each client implements the MCP client protocol once. Each data source implements the MCP server protocol once. Any client works with any server.
For enterprises with dozens of internal systems and AI use cases spanning multiple teams and LLM providers, this architectural shift is significant — it converts AI data access from a perpetual custom engineering problem into a standardised infrastructure component.
MCP Core Primitives
An MCP server exposes three types of capabilities to LLM clients:
| Primitive | What It Is | LLM Interaction | Example |
|---|---|---|---|
| Resources | Data the LLM can read | Application-controlled read access | Database rows, files, API responses |
| Tools | Actions the LLM can invoke | LLM-initiated function calls | Run SQL query, create Jira ticket, send email |
| Prompts | Reusable prompt templates | User-selected template insertion | Standardised analysis frameworks, report templates |
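To make the three primitives concrete, here is a minimal illustrative dispatcher in pure Python. This is not the official MCP SDK; the backing data, the `run_sql` handler, and the response shapes are simplified stand-ins, but the three method families (`resources/read`, `tools/call`, `prompts/get`) mirror how a server routes requests per primitive:

```python
# Hypothetical in-memory backing data for the three primitives.
RESOURCES = {"db://customers/42": {"name": "Acme Corp", "tier": "enterprise"}}
PROMPTS = {"quarterly-report": "Summarise Q{quarter} revenue for {account}."}

def run_sql(query: str) -> str:
    # Stand-in tool handler; a real server would execute against a database.
    return f"executed: {query}"

TOOLS = {"run_sql": run_sql}

def handle(request: dict) -> dict:
    """Route an MCP-style request to the matching primitive."""
    method, params = request["method"], request.get("params", {})
    if method == "resources/read":   # Resources: data the LLM reads
        return {"contents": RESOURCES[params["uri"]]}
    if method == "tools/call":       # Tools: actions the LLM invokes
        tool = TOOLS[params["name"]]
        return {"result": tool(**params["arguments"])}
    if method == "prompts/get":      # Prompts: user-selected templates
        template = PROMPTS[params["name"]]
        return {"messages": [template.format(**params["arguments"])]}
    return {"error": f"unknown method {method}"}

print(handle({"method": "tools/call",
              "params": {"name": "run_sql",
                         "arguments": {"query": "SELECT 1"}}}))
```

The point of the sketch is the routing shape: one server, three request families, each backed by a different kind of capability.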
MCP Server Transport Layer
MCP supports two transport mechanisms, suited to different deployment contexts:
- stdio (Standard I/O): The LLM client spawns the MCP server as a subprocess and communicates via stdin/stdout. Suited for local development tools (Claude Desktop, Cursor, VS Code extensions) where the server runs on the same machine as the client. Simple to implement and debug; not suitable for remote or multi-client deployments.
- HTTP + SSE (Server-Sent Events): The MCP server runs as an HTTP service; clients connect via standard HTTP requests and receive streaming responses via SSE. Suited for enterprise deployments where the server runs remotely, serves multiple clients, requires authentication middleware, and needs to be independently scaled and monitored.
For enterprise production deployments, HTTP + SSE is the correct choice. It allows the MCP server to be deployed as a containerised microservice with standard DevOps tooling — load balancers, health checks, Kubernetes autoscaling, and centralised authentication.
Enterprise MCP Server Architecture Pattern
A production enterprise MCP server has six layers:
- Transport layer — HTTP server (FastAPI, Express, or Go net/http) handling MCP JSON-RPC messages; TLS termination at the load balancer
- Authentication layer — OAuth 2.0 token validation or API key verification; map authenticated identity to permission set
- Authorisation layer — role-based access control determining which resources and tools the authenticated agent can access; principle of least privilege per agent role
- Input validation layer — strict schema validation on all tool parameters; sanitisation of inputs before they reach data systems; rejection of malformed or potentially injected inputs
- Business logic layer — the actual resource readers and tool handlers; connection pooling for databases; retry logic and circuit breakers for external APIs
- Observability layer — structured logging of every resource read and tool invocation with agent identity, timestamp, parameters, and result; metrics for latency, error rate, and usage by tool; distributed tracing for multi-server workflows
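The layers above compose naturally as a pipeline around each tool invocation. A compressed, illustrative sketch (the API keys, roles, schema format, and handler are all hypothetical; production would use OAuth token introspection rather than a static key table):

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)

# Hypothetical auth + RBAC tables.
API_KEYS = {"key-analyst-1": "analyst"}
ROLE_PERMISSIONS = {"analyst": {"query_analytics"}}

def query_analytics(table: str) -> str:
    return f"rows from {table}"  # business-logic stand-in

TOOL_HANDLERS = {"query_analytics": query_analytics}
TOOL_SCHEMAS = {"query_analytics": {"table": str}}  # minimal schema: name -> type

def invoke_tool(api_key: str, tool: str, params: dict) -> str:
    # Authentication layer: map credential to identity/role.
    role = API_KEYS.get(api_key)
    if role is None:
        raise PermissionError("unknown API key")
    # Authorisation layer: least privilege per role.
    if tool not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not call {tool!r}")
    # Input validation layer: strict schema check before anything runs.
    schema = TOOL_SCHEMAS[tool]
    if set(params) != set(schema) or not all(
        isinstance(v, schema[k]) for k, v in params.items()
    ):
        raise ValueError("parameters do not match tool schema")
    # Business logic layer.
    result = TOOL_HANDLERS[tool](**params)
    # Observability layer: structured audit record per invocation.
    logging.info(json.dumps({"role": role, "tool": tool,
                             "params": params, "ts": time.time()}))
    return result

print(invoke_tool("key-analyst-1", "query_analytics", {"table": "events"}))
```

Each layer fails closed: an unknown key, an out-of-scope tool, or a malformed parameter raises before the business logic is ever reached.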
Security Considerations for Enterprise Deployment
MCP's flexibility is also its primary security surface. These are the non-negotiable security requirements for enterprise MCP deployment:
- Authentication: Every MCP client must authenticate before accessing any resource or tool. Use OAuth 2.0 client credentials flow for service-to-service authentication; short-lived tokens with automated rotation
- Authorisation scoping: Define tool and resource permissions per agent role. A customer support agent should be able to read customer records but not write to them. A data analyst agent should query analytics tables but not production databases. Map agent identity → permission set at the authorisation layer
- Tool parameter validation: Never pass raw LLM-generated parameters to database queries or API calls. Validate every parameter against a strict schema; use parameterised queries for all database interactions to prevent SQL injection
- Prompt injection via resources: Data returned by resource reads re-enters the LLM context window. A malicious document in your knowledge base could contain instructions that override the agent's system prompt. Sanitise resource content before returning it; flag content containing instruction-like patterns
- Audit logging: Log every MCP operation with: authenticated identity, tool name, parameters (sanitised), result summary, timestamp, and request ID. These logs are your audit trail for compliance and incident response
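The parameter-validation point can be shown with Python's stdlib `sqlite3` (table, column, and function names here are invented for the example): validate the type first, then bind the value with a placeholder so it can never alter the SQL itself.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex')")

def read_customer(customer_id) -> list:
    """Tool handler: validate the LLM-supplied parameter, then use a
    parameterised query so the value is data, never SQL."""
    if not isinstance(customer_id, int):
        raise ValueError("customer_id must be an integer")
    # Placeholder (?) binding; never string interpolation into SQL.
    return conn.execute(
        "SELECT id, name FROM customers WHERE id = ?", (customer_id,)
    ).fetchall()

print(read_customer(1))  # [(1, 'Acme')]
# read_customer("1 OR 1=1") raises ValueError before reaching the database.
```

An injection attempt like `"1 OR 1=1"` is rejected by the type check; even if it slipped through, placeholder binding would treat it as a literal value, not as SQL.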
MCP in Multi-Agent Systems
MCP becomes particularly powerful in multi-agent architectures. Each agent in the system gets a tailored MCP client configuration exposing only the tools and resources its role requires. The orchestrating agent has broader access; specialist agents (researcher, analyst, writer) have scoped access matching their function.
This approach enforces the principle of least privilege at the agent level automatically — through the MCP layer — rather than requiring custom permission logic in each agent's system prompt. It also makes the system auditable: you can inspect the MCP access logs to understand exactly what data each agent accessed during a workflow run.
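One way to express that per-agent scoping, sketched with hypothetical role and tool names: the server filters its tool catalogue by the authenticated agent's role, so each agent's `tools/list` response already reflects least privilege.

```python
# Hypothetical full tool catalogue exposed by the MCP server.
ALL_TOOLS = {"search_docs", "run_sql", "create_ticket", "send_email"}

# Per-agent scoping: each role sees only the tools its function requires.
AGENT_SCOPES = {
    "orchestrator": ALL_TOOLS,
    "researcher": {"search_docs"},
    "analyst": {"search_docs", "run_sql"},
    "writer": {"search_docs"},
}

def tools_for(agent_role: str) -> set:
    """What the server advertises to a given authenticated agent."""
    return AGENT_SCOPES.get(agent_role, set())

print(sorted(tools_for("analyst")))  # ['run_sql', 'search_docs']
```

Because scoping lives in the server, a specialist agent cannot even discover out-of-scope tools, let alone call them, regardless of what its system prompt says.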
Frequently Asked Questions
What is Model Context Protocol (MCP)?
Model Context Protocol (MCP) is an open standard introduced by Anthropic that defines how LLMs communicate with external data sources and tools. It provides a standardised interface — analogous to USB-C for hardware — so that any MCP-compatible LLM client (Claude, Cursor, VS Code with Copilot) can connect to any MCP server without custom integration code for each combination. MCP servers expose resources (data the LLM can read), tools (actions the LLM can invoke), and prompts (reusable prompt templates) through a defined JSON-RPC protocol.
How is MCP different from function calling / tool use?
Function calling (OpenAI) and tool use (Anthropic) are model-level features that let an LLM request a function to be executed. MCP is an architectural layer above this — it standardises how those tools are defined, discovered, and executed across different clients and servers. Function calling requires custom integration code for each LLM-to-tool pair. MCP provides a single protocol that any compliant client and server can use interoperably. In practice: your LLM uses tool use/function calling to invoke MCP tools; MCP is the transport and discovery layer.
Is MCP secure for enterprise use?
MCP security depends on implementation. The protocol itself is transport-agnostic and does not mandate authentication, so security is the responsibility of the server implementer. For enterprise deployment: use OAuth 2.0 or API key authentication at the transport layer; implement role-based access control so each LLM agent can only access the data its role requires; validate all inputs to prevent prompt injection via tool outputs; run MCP servers in isolated network segments with egress controls; log all resource access and tool invocations for audit trails. A well-implemented enterprise MCP server is as secure as any other API — the risk comes from under-specified permissions.
What databases and systems can MCP servers connect to?
MCP servers can connect to any system accessible via code — which is effectively everything. Common enterprise integrations include: relational databases (PostgreSQL, MySQL, SQL Server), NoSQL stores (MongoDB, DynamoDB), file systems and object storage (S3, Google Drive, SharePoint), REST APIs and GraphQL endpoints, SaaS platforms (Salesforce, HubSpot, Jira, Notion), and internal microservices. Inventiple has built MCP servers for EHR systems (FHIR-compliant), core banking APIs, Kubernetes cluster management, and enterprise knowledge bases.