MCP server org deployment is the step most teams skip. Engineers build a single useful MCP server, three coworkers wire it into their Claude Desktop config, the team starts to depend on it, and then someone asks the question that should have come first: who owns this thing, how do we audit it, how do we distribute the next five, and what stops them from sprawling into an unaudited credential surface across the entire organisation.
Ninety days is the right horizon to do that work properly. Long enough to inventory what already exists, audit it against a known checklist, pick a distribution model that matches the org's scale, and wire the governance gates that keep the sprawl from recurring once the catalog is under your control. Short enough that the plan ships, not stalls. The phased structure below is what we run for clients and use internally — every phase has explicit milestones, named artifacts, and a hand-off into the next.
This playbook assumes you already have at least one working MCP server in production or on a developer machine. If you are still building the first one, start with the MCP server build tutorial and come back here once it ships. The org rollout begins where the tutorial ends.
- 01 · MCP at org scale is infrastructure. Once more than one team consumes an MCP server, it stops being a developer tool and starts being internal platform. Plan the rollout the way you would plan any internal platform — catalog, ownership, audit, governance — not the way you would plan a tool.
- 02 · Security audit before distribution. Every server that ships through your distribution channel inherits its security posture. Audit each one against a fixed checklist before it is added to the catalog; the cost of a vulnerable server compounds with the number of consumers downstream.
- 03 · Distribution model matches scale. Three distribution patterns — project-scoped, plugin-marketplace, internal registry. The right pick depends on team count, security posture, and update cadence. Choosing the heavier model too early creates ceremony; choosing the lighter one too late creates sprawl.
- 04 · Governance gates prevent sprawl. A new MCP server should not be addable to the catalog without passing an audit, naming an owner, declaring its scopes, and accepting the re-audit cadence. The gates are cheap to set up and stop a class of incident before it can ship.
- 05 · Re-audit cadence aligned to SOC2. Quarterly re-audit on every catalog server, mapping findings onto SOC2 CC6 (access), CC7 (operations), and CC8 (change management) controls. The audit becomes evidence; the cadence becomes a control.
01 — Why 90 Days
MCP at org scale is infrastructure — plan it that way.
The clearest framing shift between a single MCP server and an org-wide catalog is the answer to a single question: who owns it. For one server consumed by one team, the answer is "the engineer who wrote it." That answer scales to roughly two teams. By the time five teams depend on a server, the answer needs to be a named owner with on-call coverage, a documented rotation, and a defined relationship to the security and platform groups — because the server is now infrastructure, regardless of how it was originally framed.
Ninety days is the right horizon for that transition for three reasons. First, it gives enough time to inventory honestly — most orgs already have more MCP servers in flight than the platform team is aware of, and surfacing them takes weeks of conversation, not days. Second, it gives enough time for a real security audit on each one — four hours per server times the corpus, plus remediation, is rarely a sprint of work. Third, it gives enough time to design and ship the distribution and governance layers before the next wave of servers lands, which it will, on a schedule the platform team does not control.
Days 1-30 · Discovery + audit
Catalog · capability · baseline. Inventory every MCP server in flight across the org. Capability map per server — which tools, which downstream systems, which credentials. Security baseline against the 75-point audit. Owner named for each existing server. No new servers ship in phase one.

Days 31-60 · Pipeline build
Distribution · auth · audit trail. Pick the distribution model based on phase-one findings. Wire shared auth, scope-bound tokens, structured audit logging. Land remediations on phase-one findings. Onboard two more servers through the new pipeline as a forcing function for the design.

Days 61-90 · Operating model
Governance · incident response · cadence. Governance charter — who can ship what under which gates. Incident-response runbook for credential compromise and tool misuse. Quarterly re-audit cadence scheduled and aligned to SOC2 control evidence. Catalog opens to the broader org under the new rules.

Two framing notes that pay off across the entire plan. First, treat ownership as a deliverable, not a side-effect. For every server in phase one, the artifact that closes the inventory entry is a named owner, an on-call rotation reference, and a documented relationship to the platform and security groups. Without that artifact, the inventory is a spreadsheet that decays in a fortnight; with it, the inventory becomes the source of truth the next two phases lean on.
Second, resist new-server pressure during phase one. The hardest discipline in this plan is telling consumer teams "not yet" while you are still mapping what already exists. The right framing for that conversation is the one a platform team would use for any other infrastructure layer: we're standardising the substrate so your server lands on something stable. Most consumer teams accept that framing once it is named.
02 — Days 1-30
Server catalog, capability inventory, security baseline.
Phase one is discovery and audit. The deliverables are an honest inventory of every MCP server in flight across the org, a capability map per server, a security baseline against the 75-point audit, a named owner for each existing server, and a recommendation on which distribution model fits the population that surfaced. No new servers ship in phase one; the discipline of the freeze is what makes the inventory honest.
Five milestones below, each anchored to a week and a deliverable. The week assignments are approximate — the work overlaps in practice — but the deliverable list is what closes the phase.
Server inventory
Discovery sweep across teams. Survey every engineering team. Inventory every MCP server in production, in development, and in someone's local config. Capture name, owner, consuming teams, transport, host platforms, and intended purpose. Expect to find roughly twice as many servers as the platform group was aware of. Deliverable: inventory sheet.

Capability map
Tools · scopes · downstream systems. For every server in the inventory, document the tool catalog — names, descriptions, mutation flags. Document the downstream systems each tool touches and the credentials it uses. The capability map is the input to phases two and three; without it, the audit cannot prioritise correctly. Deliverable: capability matrix.

Security audit
75-point checklist per server. Run the 75-point audit against every server. Roughly four hours per server with an experienced reviewer; longer for first-pass auditors. Categorise findings as critical, high, medium, low. The audit pack per server becomes phase two's remediation backlog. Deliverable: audit packs.

Owner assignment
Named owner · on-call · platform link. For every server, name a primary owner, a secondary, an on-call rotation reference, and the relationship to the platform and security groups. Servers without a willing owner are flagged for deprecation, not for orphaning. Ownership is a deliverable, not a side-effect. Deliverable: ownership register.

Distribution recommendation
Project · plugin · registry. Given the inventory and the audit findings, recommend the distribution model for phase two: project-scoped, plugin-marketplace, or internal registry. The full decision matrix is in §05; the recommendation is the bridge artifact between phases. Deliverable: model pick.

Two practical notes on the inventory step. First, survey by code, not by self-report. Self-reported inventories under-count by roughly half in our experience — engineers forget servers they shipped six months ago, and teams forget servers they consume but did not write. Grep the organisation for @modelcontextprotocol/sdk imports and for claude_desktop_config.json entries in dotfile repos and onboarding docs. The grep output is the floor; the survey results are the ceiling; the inventory lives between them.
Second, on the audit step: do not skip servers because they look small. The most consequential finding in one phase-one engagement was on a ninety-line server that wrapped a single read tool against a customer-data API. The credential the server held was scoped for convenience, not for least authority; the audit found that the same credential could mutate the data the tool was nominally read-only against. Size is uncorrelated with blast radius. Audit every server in the inventory, every time.
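The code half of that survey can be sketched in a few lines. This is a minimal sketch under two assumptions: org repos are cloned under a single root directory, and vendored trees can be skipped. The names `classify` and `sweep` are ours for illustration, not a standard tool.

```typescript
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

// Classify one file: MCP server source, MCP consumer config, or neither.
function classify(name: string, contents: string): "server" | "consumer" | null {
  if (name === "claude_desktop_config.json") return "consumer"; // a consuming team's config
  if (/\.(ts|js)$/.test(name) && contents.includes("@modelcontextprotocol/sdk")) {
    return "server"; // a candidate MCP server implementation
  }
  return null;
}

// Walk every checkout under `root` and bucket the hits.
function sweep(root: string): { servers: string[]; consumers: string[] } {
  const result = { servers: [] as string[], consumers: [] as string[] };
  for (const name of readdirSync(root)) {
    const path = join(root, name);
    if (statSync(path).isDirectory()) {
      if (name === "node_modules" || name === ".git") continue; // skip vendored trees
      const sub = sweep(path);
      result.servers.push(...sub.servers);
      result.consumers.push(...sub.consumers);
    } else {
      const kind = classify(name, readFileSync(path, "utf8"));
      if (kind === "server") result.servers.push(path);
      if (kind === "consumer") result.consumers.push(path);
    }
  }
  return result;
}
```

The sweep output is the floor of the inventory, as above; the survey fills in the owners and consuming teams the filesystem cannot.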
"A phase-one inventory that finds twice as many servers as the platform team expected is the rule, not the exception — and the gap is exactly the surface the rollout is meant to bring under control."
— Digital Applied agentic infrastructure, on every org-wide MCP engagement to date
03 — Days 31-60
Distribution model, auth + scope, audit trail.
Phase two builds the pipeline. With the inventory closed and the distribution model picked, the work in days 31-60 is to wire the shared infrastructure every catalog server will sit on: an auth model with scope-bound tokens, a centralised audit-log stream with PII redaction at source, a remediation pass on phase-one findings, and the onboarding of two new servers through the new pipeline as a forcing function for the design.
Five milestones below. The two-new-servers item is the most important and the most often skipped; without it, the pipeline is theoretical, and theoretical pipelines develop seams that only surface when consumers try to use them under pressure.
Distribution build
Channel · versioning · install path. Stand up the distribution channel chosen in phase one. Internal npm scope for project-scoped, plugin manifest for plugin-marketplace, hosted registry endpoint for registry. Versioning policy, install path, update mechanism. The channel goes live empty. Deliverable: distribution live.

Shared auth
Scope-bound tokens · OAuth. Wire the auth model every catalog server will share. Tokens carry scopes that map onto specific tool subsets. Short expiry, tested revocation path. For servers that wrap downstream APIs on behalf of a user, OAuth on-behalf-of flows replace long-lived admin tokens. Deliverable: auth module.

Audit trail
Structured logs · redacted · retained. Centralised audit-log stream. One structured log line per tool invocation — caller identity, tool name, argument shape, response status, latency. PII redacted at the structured-logging layer. Retention aligned with regulatory and SOC2 requirements. Tool response bodies in a separate, access-controlled store. Deliverable: log pipeline.

Remediation pass
Critical + high findings landed. Land every critical and high finding from phase one. Medium and low findings scheduled into the next quarter. Remediation evidence — diff, test, audit re-run — attached to each closed finding. The audit packs become living documents, not one-shot artifacts. Deliverable: remediation log.

Two new servers onboarded
Through the new pipeline end-to-end. Onboard two new MCP servers through the new pipeline as a forcing function. They pass the 75-point audit before being added to the catalog. They use shared auth and the centralised audit trail. Their onboarding surfaces the seams in the pipeline design while they are still cheap to fix. Deliverable: catalog v0.

One pattern worth naming explicitly here: shared auth as a library, not a service. The common temptation is to stand up a centralised auth service that sits between every MCP server and its callers. That works, but it adds a network hop, a deploy unit, and an on-call surface. A shared auth library — a small package that every catalog server imports and uses to validate tokens, check scopes, and emit audit log lines — gives most of the benefit at a fraction of the cost. Centralise the policy in code, not in a separate process, unless you have specific reasons to do otherwise (e.g. a non-Node runtime mix, or compliance requirements that mandate a separate trust boundary).
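A minimal sketch of that library shape. Every name here (`ScopedToken`, `requireScopes`, `auditLine`) is illustrative rather than a published API; the point is the policy living in one imported module.

```typescript
// Sketch of the shared-auth-library pattern: validate, check scopes, emit audit line.
interface ScopedToken {
  subject: string;   // caller identity, e.g. "alice@company.com"
  scopes: string[];  // e.g. ["crm:read", "crm:write:notes"]
  expiresAt: number; // epoch ms — keep expiry short
}

class ScopeError extends Error {}

// Every catalog server calls this at the top of every tool handler.
function requireScopes(token: ScopedToken, needed: string[]): void {
  if (Date.now() > token.expiresAt) throw new ScopeError("token expired");
  const missing = needed.filter((s) => !token.scopes.includes(s));
  if (missing.length > 0) {
    throw new ScopeError(`missing scopes: ${missing.join(", ")}`);
  }
}

// One structured audit line per invocation — argument *keys*, never values.
function auditLine(token: ScopedToken, tool: string, argKeys: string[], status: string): string {
  return JSON.stringify({ caller: token.subject, tool, argShape: argKeys, status });
}
```

Because the policy is a package, a revocation fix or a scope-format change ships to every catalog server as a version bump rather than a service migration.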
On audit trails: the most common mistake in this phase is logging too much. Verbose JSON-RPC dumps that capture every argument value, every response body, every header are a leak vector — the log store becomes a secondary copy of every secret that has ever flowed through a tool call. Default to logging argument shapes, not values; opt non-sensitive fields back in explicitly. Redact at the source, never at the sink — sink-side redaction is permanently best-effort.
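Shape-based logging is mechanical to implement. A sketch, with `SAFE_FIELDS` and `shapeOf` as illustrative names; the contents of the allow-list are an assumption and would be set per tool.

```typescript
// Redact-at-source sketch: reduce argument values to their types before the
// log line ever leaves the process; opt non-sensitive fields back in explicitly.
const SAFE_FIELDS = new Set(["limit", "offset", "sort"]); // explicitly opted-in fields

function shapeOf(args: Record<string, unknown>): Record<string, string> {
  const shape: Record<string, string> = {};
  for (const [key, value] of Object.entries(args)) {
    // Opted-in fields keep their value; everything else survives only as a type name.
    shape[key] = SAFE_FIELDS.has(key) ? String(value) : typeof value;
  }
  return shape;
}

// A customer query and an accidentally-passed secret both survive only as "string".
console.log(JSON.stringify(shapeOf({ query: "jane doe", limit: 10, api_key: "sk-live-..." })));
```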
04 — Days 61-90
Governance gates, incident response, re-audit cadence.
Phase three is the operating model. The catalog exists, the pipeline is real, two new servers have shipped through it. What remains is the governance layer that lets the catalog grow sustainably: the gates a new server has to pass to be added, the incident-response procedure when something does go wrong, the quarterly re-audit cadence that turns the audit packs into evidence, and the documentation and enablement that lets consumer teams actually use the catalog without bottlenecking through the platform group.
Five milestones below. The governance charter in week 9 is the artifact that closes the rollout — without it, phases one and two decay back into ad-hoc work within a quarter.
Governance charter
Gates · approvals · ownership. Written charter. Defines what a new MCP server must satisfy to be added to the catalog: passed audit, named owner, declared scopes, on-call coverage, accepted re-audit cadence. Defines the approval path — security review, platform review, exec sign-off where the blast radius warrants it. Deliverable: charter v1.

Incident response
Runbook · kill-switch · escalation. Runbook for credential compromise, tool misuse, downstream-API abuse, and prompt-injection exploitation. Per-tool kill-switch — a feature flag, cheap to flip, that disables a single tool catalog-wide. Escalation path from on-call into security and exec when blast radius warrants it. Deliverable: runbook + flags.

Re-audit cadence
Quarterly · SOC2 aligned. Schedule the next four quarterly re-audits. Each one is a re-run of the 75-point checklist with a one-page diff against the prior pack. Findings sorted by severity, mapped onto SOC2 CC6/CC7/CC8 controls. The cadence becomes the control; the audit becomes the evidence. Deliverable: audit schedule.

Documentation + enablement
Consumer docs · author docs. Two doc surfaces. Consumer-facing — how to discover a catalog server, install it, configure auth, ask for help. Author-facing — how to propose a new server, run the audit, get through the gates. Without enablement, the catalog stays platform-team-mediated and consumer adoption stalls. Deliverable: doc set.

Catalog open
Org-wide announcement · self-serve. Catalog opens to the broader org under the new rules. Announcement to engineering, security, and exec channels. Office-hours for the first month while consumer teams ramp. By the end of the quarter, new servers should be proposable and shippable without platform-team hand-holding. Deliverable: rollout complete.

One operational pattern worth calling out: the per-tool kill-switch. A feature flag, server-side, cheap to flip, independent of deploy, that disables a single tool while leaving the rest of the catalog functional. When an incident-response event targets a single tool, the remediation is often hours-to-days to fix correctly; the kill-switch is the minutes-to-seconds containment that prevents exposure in the gap. Build it once into the shared auth library and inherit it across every catalog server.
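The kill-switch check itself is small enough to live in the shared auth library. A sketch; the in-memory `Set` stands in for whatever feature-flag store the org already runs, and the function names are ours.

```typescript
// Per-tool kill-switch sketch: flip a flag, one tool goes dark, the rest keep working.
const disabledTools = new Set<string>(); // stand-in for the real flag store

function setKillSwitch(tool: string, disabled: boolean): void {
  if (disabled) disabledTools.add(tool);
  else disabledTools.delete(tool);
}

function dispatchTool<T>(tool: string, run: () => T): T {
  if (disabledTools.has(tool)) {
    // Containment in seconds while the real fix takes hours-to-days.
    throw new Error(`tool '${tool}' is disabled by kill-switch`);
  }
  return run();
}
```

Because the check sits in the shared dispatch path, every catalog server inherits it without per-server work, and flipping the flag needs no deploy.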
On governance: the charter document is the artifact most organisations under-invest in and most regret under-investing in. A two-page charter that names the gates, the approvals, and the ownership rule is sufficient to prevent the most common form of catalog sprawl — a new server lands without an owner, without an audit, and without an entry in the inventory, and by the time the platform team notices, three other teams depend on it. The charter is the document that lets the platform team say no without it becoming a political fight every time.
05 — Distribution
Project, plugin, team registry — pick by scale.
Three distribution models cover almost every org-scale MCP deployment we have seen. The trade-offs are real, the pick is consequential, and the wrong pick is expensive to undo once consumer teams have wired against it. The matrix below names each model, the org shape it fits, and the cost of getting it wrong.
One framing rule: pick the lightest model that still works for your scale. Heavier models bring ceremony — manifest schemas, hosted registries, signing pipelines — that pay off at the right scale and impede shipping at the wrong one. The phase-one inventory is the input to this decision; let the population of servers and consumer teams set the model.
Project-scoped · lightest weight · repo-local · 1-2 teams
Each MCP server lives in the repo of the project that primarily consumes it. Distribution is git plus an npm-style install path. Fits orgs with one or two consumer teams per server, no cross-team sharing. Lowest ceremony, lowest discoverability. Sprawl risk is medium-high once a fourth team wants any given server.

Plugin-marketplace · middle weight · manifest-based · 3-10 teams
Servers ship as plugins to a host runtime — Claude Desktop's plugin manifest, an internal CLI's plugin folder, or a similar manifest-driven channel. Discoverable through the host UI. Versioned per plugin. Fits orgs with three to ten consumer teams and a host platform team willing to own the manifest schema.

Internal registry · heaviest weight · hosted catalog · 10+ teams
A hosted catalog endpoint — internal npm scope, JSR scope, or a bespoke registry — that hosts MCP server packages with version metadata, signing, and discovery API. Fits orgs with ten or more consumer teams or strict supply-chain requirements. Highest ceremony, highest discoverability, highest safety margin.

One pattern that holds across all three models: signing is not optional past a certain scale. Project-scoped servers can rely on git provenance plus repo access control. Plugin-marketplace and registry models need a signing pipeline once consumer teams cannot reasonably review every server they install — and that cutover happens earlier than most teams plan for. Build signing into the registry model from the start; retrofit it into the plugin model around team five or six.
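For the registry model, the signing round-trip can be as small as Node's built-in crypto. A sketch assuming an org-held ed25519 keypair; key distribution, rotation, and storage are deliberately out of scope here.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Publish-time signing / install-time verification sketch for registry packages.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Publish side: sign the package bytes as part of the registry upload.
const packageBytes = Buffer.from("contents of crm-tools-1.4.2.tgz");
const signature = sign(null, packageBytes, privateKey); // ed25519 takes no digest name

// Install side: verify before unpacking; reject on any mismatch.
const ok = verify(null, packageBytes, publicKey, signature);
```

The verify step runs on the consumer side before the package is unpacked, which is what makes the registry model safe to open to teams that cannot review every server they install.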
A second pattern: update cadence drives the model choice as much as team count. Servers that ship daily push organisations toward the registry model regardless of team count, because the alternative is asking every consumer team to update its config daily. Servers that ship monthly tolerate the lighter models gracefully. Capture update cadence in the phase-one inventory and let it weight the recommendation.
06 — Templates
Catalog template, security audit checklist, governance charter.
Three template artifacts ship with the plan. The catalog template below is the source-of-truth shape for an inventory entry — every field is one we have needed across multiple engagements; none is ornamental. The security audit checklist is summarised in §02 and covered in full in our companion guide. The governance charter is sketched below; the full version is the artifact a platform team adapts to its specific approval flows.
Adapt each template to your org's naming and tooling, but keep the field set complete. The fields earn their place; the sparse templates we have seen in the wild correlate strongly with the failure modes covered in §07.
```yaml
# MCP catalog entry — one file per server in the catalog
name: crm-tools
version: 1.4.2
status: active             # active | deprecated | quarantined
transport: streamable-http # stdio | sse | streamable-http
hosts:
  - claude-desktop
  - internal-agent-runner
ownership:
  primary: alice@company.com
  secondary: bob@company.com
  oncall_rotation: platform-agentic
  team: platform-ai
scopes:
  - crm:read
  - crm:write:contacts
  - crm:write:notes
downstream_systems:
  - name: zoho-crm
    credential: oauth-on-behalf-of
    rate_limit_class: standard
  - name: internal-search
    credential: service-account
    rate_limit_class: read-heavy
tools:
  - name: search_contacts
    mutates: false
    scopes_required: [crm:read]
  - name: create_note
    mutates: true
    scopes_required: [crm:write:notes]
    requires_confirmation: true
audit:
  last_run: 2026-05-02
  pack_url: /audits/crm-tools/2026-05-02.md
  findings: { critical: 0, high: 1, medium: 3, low: 2 }
  next_due: 2026-08-02
governance:
  charter_version: 1.0
  approved_by: [security, platform]
  approved_on: 2026-04-18
  reaudit_cadence: quarterly
```
The governance charter that goes alongside the catalog is shorter than most teams expect — a two-page document is usually sufficient. The structure we ship to clients:
- Scope. Which MCP servers the charter applies to — typically every server that interacts with org-owned data or systems, regardless of where it runs.
- Ownership rule. Every catalog server has a named primary and secondary owner, an on-call rotation reference, and a relationship to the platform group. Servers without an owner are deprecated, not orphaned.
- Addition gate. A new server is addable to the catalog only after passing the 75-point audit, declaring its scopes, naming its owner, and accepting the re-audit cadence. The approval path lists the named reviewers.
- Change gate. Material changes — new tools, new downstream systems, new scopes — trigger a delta audit and re-approval. Non-material changes — bug fixes, refactors — do not.
- Audit cadence. Quarterly re-audit on every active server. Annual deep re-audit aligned to SOC2 evidence collection. Audit packs filed alongside the control narratives.
- Incident response. Reference to the runbook. Names the kill-switch authority — who can flip the per-tool flag without further approval — and the escalation path when blast radius warrants it.
- Deprecation rule. Servers without owners, servers that fail two consecutive re-audits without remediation, and servers without consumers for two quarters are deprecated on a documented timeline.
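The addition gate lends itself to mechanical enforcement in CI against each catalog entry file. A sketch using the field names from the catalog template above; `additionGate` and its exact rules are illustrative, to be adapted to the charter your org writes.

```typescript
// Addition-gate sketch over the catalog-entry fields from the template.
interface CatalogEntry {
  name: string;
  ownership: { primary?: string; secondary?: string };
  scopes: string[];
  audit: { last_run?: string; findings?: { critical: number; high: number } };
  governance: { reaudit_cadence?: string };
}

// Returns the list of gate failures; an empty list means the entry is addable.
function additionGate(entry: CatalogEntry): string[] {
  const failures: string[] = [];
  if (!entry.ownership.primary) failures.push("no named primary owner");
  if (entry.scopes.length === 0) failures.push("no declared scopes");
  if (!entry.audit.last_run) failures.push("no passed audit on record");
  if ((entry.audit.findings?.critical ?? 0) > 0) failures.push("open critical findings");
  if (!entry.governance.reaudit_cadence) failures.push("re-audit cadence not accepted");
  return failures;
}
```

Run as a required check on the pull request that adds the entry file, this turns the charter's addition gate from a review convention into a merge blocker.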
For the full security audit checklist, see our companion guide, the 75-point MCP server security audit checklist. That guide is the source-of-truth for the audit step in phase one and the re-audit cadence in phase three. If you are running the rollout described here, the two documents are designed to be read together.
07 — Pitfalls
Four enterprise deployment failure modes.
The four failure modes below are the ones we have seen most often in org-scale MCP rollouts that did not follow a plan like the one above. Each is recoverable, none is exotic, all four are cheaper to avoid than to remediate.
Four common failure modes · ordered by frequency
Source: Digital Applied MCP rollout engagements, 2025-2026

P1 — Catalog without ownership
Servers in the catalog without a named primary owner is the single most common failure mode. The catalog grows, ownership stays implicit, and by the time something goes wrong nobody is on the hook. The fix is procedural — make ownership a deliverable of the catalog-entry workflow and reject entries that lack it — but the discipline of enforcing it is harder than it looks. The platform team has to be willing to say no to a server without an owner, and to defend that no the first time a consumer team pushes back.
P2 — Distribution before audit
Standing up the distribution channel before the audit pass is done creates the worst of both worlds. The channel becomes convenient, servers ship through it, and the audit gate becomes a retrospective "we'll audit it later" — which never happens at the same fidelity as a pre-distribution audit. The sequencing in §03 — distribution channel built in week 5, but empty until phase-one findings are remediated — exists to prevent this. The two-new-servers milestone is the controlled exception.
P3 — Governance charter not written
The two-page governance charter is the artifact teams most often defer indefinitely. The cost of deferral compounds: every new server that lands without a written gate erodes the discipline of the next, and within a quarter the addition gate is ad-hoc and political. Write the charter in week 9, even if it is rough — a rough charter is enforceable; a missing charter is not.
P4 — Re-audit cadence skipped
The initial audit passes, the catalog opens, the quarterly re-audit gets pushed by one sprint, then by two, then quietly falls off the schedule. The audit becomes a one-shot artifact and the catalog drifts away from its baseline at the rate servers change. The fix is to schedule the next four quarterly re-audits in week 11 as named calendar events with named reviewers — make the cadence a control, not an intention. SOC2 evidence requirements give the schedule teeth where the platform team needs them.
Ownership retrofit · procedural · slow
Retrofitting ownership onto an already-deployed catalog of fifteen-plus servers takes a quarter of platform-team time. Doing it as a phase-one deliverable takes a week.

Pre-audit ship · security · sharp
Roughly one in three pre-audit distributions in our corpus correlated with a security incident inside the first six months. Audit before distribution; the cost asymmetry is severe.

Charter absent · governance · compounding
Orgs without a governance charter accumulate roughly five times the unaudited servers per quarter compared with orgs that wrote and enforced one. The compounding is unforgiving.

Cadence skipped · cadence · cheaper
An annual catch-up re-audit is two-to-three times more expensive than four quarterly ones, because findings compound and remediation context decays. The cadence is cheaper than the catch-up.

One final framing on these four failure modes: they are not independent. A catalog without ownership (P1) tends to ship without audit (P2) because nobody owns the gate; a missing charter (P3) makes the cadence (P4) easy to skip because nobody wrote it down. Treat them as a single failure cluster; the plan in §01 through §04 addresses all four together, which is why the sequencing matters more than any individual milestone.
For teams considering whether to run this internally or bring in a partner: the plan in this guide is sufficient to run a credible internal rollout on your own MCP catalog. The reason teams engage us is rarely capability; it is calibration — having seen the failure cluster play out across multiple orgs, the sequencing decisions get made faster and the governance charter lands in the form that maps onto SOC2 evidence requirements. If that calibration matters, our agentic AI transformation engagements include MCP org rollouts as a discrete deliverable; if it does not, the plan above is yours to run.
Org-wide MCP is infrastructure — 90 days is the right horizon to do it right.
MCP server org deployment is the step between "we have an MCP server" and "we run MCP at organisational scale" — and it is the step where most teams either install the controls that let the catalog grow sustainably, or skip them and pay the cost in incidents and retrofit work over the following year. The 30/60/90 plan above is what we run for clients and use internally to make sure the controls land.
The single most consequential mental shift is the one in §01: MCP at org scale is infrastructure, not a developer tool. Plan the rollout the way you would plan any internal platform — inventory, audit, distribution, governance, cadence — and the failure cluster in §07 stops being your default outcome. Skip the sequencing and the failure cluster is what you ship.
The next step is concrete: schedule the phase-one inventory survey, name the platform-team lead, and freeze new-server ships for thirty days. The rest of the plan follows from those three decisions. Quarterly re-audit thereafter — the cadence is the control, and the control is what compounds.