MCP Hits 97M Downloads: Model Context Protocol Guide
Model Context Protocol reaches 97 million monthly SDK downloads with 5,800+ servers. How MCP became the standard for AI agent tool integration.
Key Takeaways
In November 2024, Anthropic released an open standard called the Model Context Protocol — a specification for how AI agents connect to external tools and data sources. Sixteen months later, the protocol has reached 97 million monthly SDK downloads and 5,800+ community-built servers. Every major AI provider has adopted it. MCP has done in 16 months what took REST APIs several years: become the default infrastructure layer for a new category of computing.
The 97 million download milestone matters not because of the number itself but because of what it signals: MCP has crossed the threshold from “interesting experiment” to “required knowledge” for anyone building with AI agents. Organizations evaluating AI and digital transformation strategies now need to understand MCP to understand the current state of AI agent infrastructure. This guide explains what MCP is, how the ecosystem developed, and what it means for practical AI deployment.
This guide covers the MCP architecture, the growth of the server ecosystem, cross-provider adoption details, practical implementation starting points, business use cases, security considerations, and where the protocol is heading next.
What Is Model Context Protocol
Model Context Protocol is a JSON-RPC 2.0-based protocol that standardizes how AI models discover and call external tools. Before MCP, connecting an AI agent to a database, a CRM, or a web browser required building a custom integration for each model-tool pair. Switching from Claude to GPT-4 meant rebuilding all tool integrations. MCP separates tool implementation from model implementation: a tool is built once as an MCP server, and any compliant AI agent can use it.
MCP is fully open-source under the MIT license. The specification, reference implementations, and server directory are all public. Any developer can build MCP servers or clients without licensing fees or vendor lock-in.
MCP defines three core primitives: Tools (functions agents call), Resources (data sources agents read), and Prompts (reusable instruction templates). This minimal surface area makes the protocol simple to implement correctly.
MCP was designed specifically for AI agent workflows, not retrofitted from human-facing APIs. It includes streaming support, lifecycle management, and capability discovery that agentic workflows require but traditional API standards lack.
The protocol runs over two transport layers: stdio for local processes (tools running on the same machine as the agent) and HTTP with Server-Sent Events for remote servers. This dual transport supports both local development workflows — an agent accessing files on your laptop — and production deployments where tools run as cloud services. The choice of JSON-RPC 2.0 as the underlying protocol made implementation accessible to any developer with JSON parsing experience, which contributed directly to the rapid ecosystem growth.
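To make concrete how light the wire format is, here is a sketch of framing a JSON-RPC 2.0 request the way an MCP client does. The helper names are illustrative (the official SDKs handle framing for you); only the envelope shape comes from the protocol:

```typescript
// Build a JSON-RPC 2.0 request envelope as used by MCP on both transports.
// buildRequest is an illustrative helper, not SDK API.
type JsonRpcRequest = {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
};

function buildRequest(
  id: number,
  method: string,
  params?: Record<string, unknown>
): string {
  const req: JsonRpcRequest = {
    jsonrpc: "2.0",
    id,
    method,
    ...(params ? { params } : {}),
  };
  // Over stdio, each message is one newline-delimited JSON line.
  return JSON.stringify(req);
}

// The discovery request an agent sends first:
const listTools = buildRequest(1, "tools/list");
// => {"jsonrpc":"2.0","id":1,"method":"tools/list"}
```

Because the envelope is this small, any language with a JSON library can speak the protocol, which is the accessibility point made above.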
97 Million Downloads Milestone
The 97 million monthly SDK download figure reported by Anthropic in March 2026 covers the official TypeScript and Python SDKs (@modelcontextprotocol/sdk on npm and mcp on PyPI). The growth trajectory tells the adoption story clearly:
- November 2024 (launch): ~2M/month. Anthropic open-sources MCP with reference servers for filesystem, web browsing, and databases.
- January 2025: ~8M/month. Claude Desktop ships built-in MCP support; developer adoption accelerates.
- April 2025: ~22M/month. OpenAI announces MCP support in GPT-4 function calling; community server count exceeds 500.
- July 2025: ~45M/month. Microsoft integrates MCP into Copilot Studio; enterprise adoption begins.
- November 2025: ~68M/month. AWS Bedrock adds MCP agent support; Google DeepMind begins integration.
- March 2026: 97M/month. 5,800+ servers available; all major AI providers support MCP.
For context: the React npm package took approximately 3 years to reach 100 million monthly downloads. MCP achieved comparable scale in 16 months. The faster adoption reflects both the urgency of the underlying need and the protocol's design simplicity. Unlike React, MCP did not require learning a new programming paradigm — it standardized patterns that agent developers were already implementing in incompatible custom formats.
MCP Architecture: How It Works
MCP defines a client-server architecture where the AI agent acts as the MCP client and external tools run as MCP servers. The client discovers available tools by requesting the server's capability manifest, then invokes tools by sending JSON-RPC requests. The server executes the tool logic and returns structured results that the agent can incorporate into its reasoning.
1. Agent requests the tool list from the MCP server:

{"jsonrpc":"2.0","id":1,"method":"tools/list"}

2. Server responds with the available tools:

{"jsonrpc":"2.0","id":1,"result":{"tools":[{"name":"query_database","description":"...","inputSchema":{...}}]}}

3. Agent calls a tool with parameters:

{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"query_database","arguments":{"sql":"SELECT..."}}}

4. Server returns a structured result:

{"jsonrpc":"2.0","id":2,"result":{"content":[{"type":"text","text":"[{\"row\":1,...}]"}]}}

The three MCP primitives handle different types of agent needs. Tools are callable functions that take parameters and return results, analogous to REST API endpoints. Resources are readable data sources that agents request by URI (files, database tables, API responses), analogous to GET endpoints. Prompts are server-defined instruction templates that encode best practices for using the server's capabilities, helping agents use tools correctly without extensive prompt engineering by the agent developer.
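The request/response cycle above can be sketched as a toy server-side dispatcher. This is a simplified sketch only (real servers should use the official SDKs, which also handle initialization, transports, and the full error model); the method names and result shapes follow the exchange shown:

```typescript
// Toy dispatcher for the two methods in the walkthrough above.
type Tool = {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>;
  handler: (args: Record<string, unknown>) => unknown;
};

function dispatch(tools: Tool[], message: string): string {
  const req = JSON.parse(message);
  if (req.method === "tools/list") {
    // Strip handlers so only the declarative manifest is returned.
    const manifests = tools.map(({ handler, ...manifest }) => manifest);
    return JSON.stringify({ jsonrpc: "2.0", id: req.id, result: { tools: manifests } });
  }
  if (req.method === "tools/call") {
    const tool = tools.find((t) => t.name === req.params.name);
    if (!tool) {
      return JSON.stringify({ jsonrpc: "2.0", id: req.id, error: { code: -32602, message: "unknown tool" } });
    }
    const text = String(tool.handler(req.params.arguments ?? {}));
    // Results are returned as typed content blocks the agent can read.
    return JSON.stringify({ jsonrpc: "2.0", id: req.id, result: { content: [{ type: "text", text }] } });
  }
  return JSON.stringify({ jsonrpc: "2.0", id: req.id, error: { code: -32601, message: "method not found" } });
}
```

Note that discovery returns only the declarative part of each tool (name, description, input schema); the handler stays on the server, which is what lets any compliant agent use the tool without knowing its implementation.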
Implementation note: MCP servers are stateful per-connection, which means each agent session gets its own server instance. This simplifies session management but means MCP servers cannot easily maintain state across multiple agent conversations. For workloads requiring persistent state, pair MCP tool calls with an external database resource.
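A minimal sketch of that pairing, with an in-memory Map standing in for the external database (Redis, Postgres, etc.) and illustrative function names:

```typescript
// Because each agent session gets a fresh server instance, state that must
// survive across conversations has to live outside the server process.
// A Map stands in here for a real external store; names are illustrative.
const externalStore = new Map<string, string>();

// Tool handlers persist and retrieve state keyed by a caller-supplied id.
function rememberNote(sessionId: string, note: string): void {
  externalStore.set(sessionId, note);
}

function recallNote(sessionId: string): string {
  return externalStore.get(sessionId) ?? "(no note stored)";
}

// A later conversation, served by a brand-new server instance, can still
// recall what an earlier one stored, because the state lives externally:
rememberNote("user-42", "prefers weekly summaries");
```

The design point is that the MCP server stays stateless-per-session while durability is delegated to infrastructure that already handles it.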
5,800+ Servers: Ecosystem Overview
The 5,800+ MCP server count represents community and enterprise servers registered in public directories plus an unknown number of internal enterprise servers not publicly listed. For a comprehensive picture of how the ecosystem developed from its early days, the complete MCP ecosystem guide from 2025 tracks the server categories and notable implementations in detail.
- GitHub: repos, PRs, issues, code search
- GitLab, Bitbucket, Jira, Linear
- Docker, Kubernetes, AWS, GCP, Azure
- Databases: PostgreSQL, MySQL, MongoDB, Redis
- IDE integrations: VS Code, JetBrains
- CRM: Salesforce, HubSpot, Pipedrive, Zoho
- Productivity: Notion, Confluence, Asana, Monday
- Communication: Slack, Teams, Gmail, Outlook
- Finance: Stripe, QuickBooks, Xero
- HR: Workday, BambooHR, Rippling
- Web browsing: Playwright, Puppeteer, Selenium
- Search: Brave, Bing, SerpAPI, Perplexity
- Content: Wikipedia, Arxiv, news APIs
- Social: Twitter/X, LinkedIn, Reddit
- Maps: Google Maps, Mapbox
- Image generation: DALL-E, Stable Diffusion, Midjourney
- Speech: Whisper, ElevenLabs, Google TTS
- Automation: Zapier, Make, n8n
- Analytics: Mixpanel, Amplitude, PostHog
- Vector databases: Pinecone, Weaviate, Chroma
The server categories reflect the tool integration needs of AI agent applications, not the traditional SaaS landscape. The high concentration in developer tools (1,200+ servers) reflects the early adopter profile of MCP — developers building AI coding assistants and agentic development tools were the first wave. Business application servers (950+) reflect the second wave: enterprise deployments of AI agents for customer service, sales automation, and internal operations.
Cross-Provider Adoption
The defining characteristic of MCP's March 2026 status is cross-provider adoption. Infrastructure standards only become infrastructure when all major players adopt them — prior to that point, they are just one of several competing approaches. MCP crossed that threshold in 2025 when OpenAI committed to MCP support, breaking the provider-specific tool format fragmentation. For examples of how MCP enables new agent capabilities across providers, the Anthropic MCP Apps and interactive UI guide shows the application layer that MCP enables.
- Anthropic: Full MCP support in Claude Desktop, the Claude API, and Claude Code. Maintains the reference implementation and the specification.
- OpenAI: MCP support through the Assistants API tool framework. GPT-4 and o1 models can use MCP servers as function-calling tools.
- Google: MCP integration in Google AI Studio and Vertex AI agents. Gemini 3.1 models support MCP through the Google AI Agent framework.
- Microsoft: Copilot Studio supports MCP server connections. Microsoft 365 Copilot can use business-application MCP servers.
- AWS: Bedrock agents support MCP as a tool integration layer. AWS also maintains MCP servers for core AWS services.
- AI coding IDEs: Ship with built-in MCP client support; developers configure MCP servers in their IDE settings.
Building with MCP
The practical starting point for MCP depends on whether you are using an existing server or building a custom one. For most integration needs, an existing server in the 5,800+ ecosystem covers the use case. Custom server development is appropriate for proprietary internal systems, specialized data sources, or tools with unique access control requirements.
Install the SDK:

```shell
npm install @modelcontextprotocol/sdk
```

Create a server with one tool, using the TypeScript SDK's high-level McpServer API (fetchData here stands in for your own data-access function; it is not part of the SDK):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "my-tool-server",
  version: "1.0.0"
});

// Register a tool: name, description, input schema, and async handler.
server.tool("get_data", "Retrieve data", { query: z.string() }, async ({ query }) => ({
  content: [{ type: "text", text: await fetchData(query) }]
}));

// Serve over stdio so a local agent can launch this process directly.
await server.connect(new StdioServerTransport());
```

Using an existing server:

1. Browse mcp.run or the Anthropic MCP directory
2. Install via npm or pip
3. Add it to your agent config (claude_desktop_config.json or equivalent)
4. Test tool discovery and invocation
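For Claude Desktop, the config entry typically looks like the following; the server name, command, and path are placeholders for your own server:

```json
{
  "mcpServers": {
    "my-tool-server": {
      "command": "node",
      "args": ["/path/to/server.js"]
    }
  }
}
```

Other MCP clients use an equivalent mapping of server name to launch command, so the same server binary works unchanged across agents.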
Building a custom server:

1. Install @modelcontextprotocol/sdk
2. Define tools with input schemas
3. Implement the tool handlers
4. Add authentication and rate limiting
5. Deploy and register
Testing:

1. Use the MCP Inspector for local testing
2. Test tool discovery (tools/list)
3. Test each tool call with valid and invalid inputs
4. Verify error handling and edge cases
MCP for Business Use Cases
MCP's practical value for businesses is in enabling AI agents to operate across the full breadth of business software — not as a novelty, but as a productivity multiplier. The key insight is that MCP inverts the integration burden: instead of each AI application building integrations with each business tool, each business tool builds one MCP server and becomes available to all AI applications simultaneously.
Connect a Claude or GPT-4 agent to CRM (Salesforce MCP), ticketing (Zendesk MCP), order management (Shopify MCP), and knowledge base (Confluence MCP). The agent handles end-to-end customer requests — looking up orders, updating tickets, escalating issues — through a single conversation interface.
AI coding assistants with MCP access GitHub (PRs, issues, code search), databases (query production data), monitoring (read logs and metrics), and documentation (Confluence, Notion). Developers resolve issues without context-switching between tools.
Sales agents using CRM (HubSpot/Salesforce MCP), email (Gmail/Outlook MCP), calendar (Google Calendar MCP), and research (LinkedIn/web search MCP) can handle prospect research, outreach drafting, meeting scheduling, and pipeline updates through natural language instructions.
Content teams using CMS (WordPress/Webflow MCP), analytics (Google Analytics MCP), SEO tools (Ahrefs/SEMrush MCP), and social media (Buffer/Hootsuite MCP) can automate content workflows — from keyword research through publication and performance monitoring.
Security and Governance Considerations
MCP's power — giving AI agents direct access to business systems — is also its primary risk surface. An MCP server with write access to a production database or CRM is a significant attack vector if improperly secured. Security requirements for MCP deployments are analogous to API security: authentication, authorization, input validation, and audit logging are all required.
Authentication:

- Implement OAuth 2.0 or API-key auth on all production servers
- Use service accounts with minimal permissions
- Rotate credentials regularly
- Never hardcode credentials in server code

Authorization:

- Expose only the tools the agent actually needs
- Separate read and write tools with different auth levels
- Use allowlists for tool invocation rather than denylists
- Review tool permissions on every deployment update

Audit logging:

- Log all tool invocations with agent identity and timestamp
- Alert on unusual tool-call patterns
- Implement rate limiting per agent and per tool
- Review audit logs in compliance reporting cycles
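Several of these controls (allowlisting plus per-agent, per-tool rate limiting) can be combined into one guard that runs before any tool is dispatched. A sketch, with illustrative names, limits, and a simple fixed-window counter:

```typescript
// Pre-dispatch guard: allowlist check plus a fixed-window rate limit
// per (agent, tool) pair. Tool names and limits here are illustrative.
const ALLOWED_TOOLS = new Set(["query_database", "read_ticket"]);
const WINDOW_MS = 60_000;
const MAX_CALLS_PER_WINDOW = 30;

const callCounts = new Map<string, { windowStart: number; count: number }>();

function authorizeCall(agentId: string, tool: string, now: number): boolean {
  // Allowlist, not denylist: anything unlisted is rejected by default.
  if (!ALLOWED_TOOLS.has(tool)) return false;

  const key = `${agentId}:${tool}`;
  const entry = callCounts.get(key);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    // New window: reset the counter for this (agent, tool) pair.
    callCounts.set(key, { windowStart: now, count: 1 });
    return true;
  }
  if (entry.count >= MAX_CALLS_PER_WINDOW) return false;
  entry.count += 1;
  return true;
}
```

In production the counter would live in shared storage so limits hold across server instances, and each decision would also be written to the audit log.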
Prompt injection risk: MCP servers that process external data (web pages, emails, documents) can be vectors for prompt injection attacks — malicious content designed to manipulate the agent into unauthorized tool calls. Sanitize all external data before returning it as tool results. The ModelArmor pattern (content scanning before context injection) is the recommended mitigation.
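A naive first layer of that sanitization can be sketched as pattern matching over external text before it is returned as a tool result. The patterns below are illustrative, and pattern matching alone is not a complete prompt-injection defense; it belongs in front of a dedicated content scanner and least-privilege tool permissions:

```typescript
// Flag and neutralize instruction-like content in external data before
// returning it as a tool result. Illustrative patterns only; a naive
// first layer, not a complete prompt-injection defense.
const SUSPICIOUS_PATTERNS = [
  /ignore (all )?(previous|prior) instructions/i,
  /you are now/i,
  /system prompt/i,
];

function sanitizeExternalText(raw: string): { text: string; flagged: boolean } {
  let flagged = false;
  let text = raw;
  for (const pattern of SUSPICIOUS_PATTERNS) {
    if (pattern.test(text)) {
      flagged = true;
      text = text.replace(pattern, "[removed suspicious instruction]");
    }
  }
  return { text, flagged };
}
```

The flag matters as much as the rewrite: a flagged result can be routed to a stricter review path or logged for the audit trail rather than silently passed to the agent.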
MCP Roadmap and Future
Anthropic's MCP roadmap for 2026 focuses on three areas: enterprise authentication (OAuth 2.1 and enterprise identity provider integration), multi-agent coordination (agent-to-agent tool calling via MCP), and the MCP registry (a curated, verified server directory with security ratings). Each of these addresses observed gaps in the current ecosystem as enterprise adoption scales.
OAuth 2.1 flows with PKCE for browser-based agents. SAML/OIDC integration for enterprise identity providers (Okta, Azure AD). This unlocks regulated industry deployments that require enterprise-grade authentication.
MCP as the coordination protocol for multi-agent systems. One agent calls another as if it were a tool server. Enables hierarchical agent architectures where orchestrator agents delegate to specialized sub-agents through MCP.
Verified server directory with security audits, usage statistics, and SLA commitments. Enterprise teams can evaluate servers against security requirements before deployment without manual code review.
Conclusion
MCP reaching 97 million monthly downloads with cross-provider adoption from every major AI company is the infrastructure milestone that makes AI agent deployment substantially more practical. The integration tax that previously made multi-tool agent deployments expensive and fragile is significantly reduced. For organizations building AI agent workflows, MCP is now the default assumption — not a choice between competing approaches.
The 5,800+ server ecosystem means the integration work for most business applications is already done. The remaining work is selecting the right servers, configuring appropriate security controls, and designing agent workflows that use the available tools effectively. As the registry and enterprise authentication roadmap items land in 2026, the remaining adoption barriers for regulated industries will fall.
Ready to Build AI Agents with MCP?
MCP integration is one component of a broader AI transformation strategy. Our team helps organizations design and implement agentic workflows that leverage the full MCP ecosystem.