Development · Deep Dive · 15 min read · Published May 10, 2026

Local test loop, tool-call visualization, transport testing — the developer-workflow inspector every MCP-server team should standardise on.

MCP Inspector Deep Dive: Developer Workflow 2026

A hands-on deep dive into the official MCP Inspector — installation paths, the 15-minute local test loop, full tool-call input / output / error inspection, manifest validation against the spec, transport testing across stdio, SSE, and streamable HTTP, and the eight failure modes every MCP server team eventually hits in production.

Digital Applied Team · Senior engineers
Published May 10, 2026 · Read time: 15 min · Sources: Anthropic MCP docs + SDK

Dev cycle reduction: 50% (vs raw Claude Desktop restarts)
Failure modes covered: 8 (production-grade debugging patterns)
Transport modes: 3 (stdio · SSE · streamable HTTP)
CI integration: yes (scriptable, headless-friendly)

The MCP Inspector is the single most underrated tool in the Model Context Protocol ecosystem. It is a browser-based developer harness that launches your MCP server in a controlled subprocess, speaks the protocol on your behalf, lists every tool, prompt, and resource you expose, and lets you fire arbitrary calls while showing the full JSON-RPC envelope on the wire — request, response, error, transport metadata. Teams that standardise on it ship MCP servers roughly twice as fast as teams that drive everything through Claude Desktop restarts.

This guide walks the full Inspector workflow end to end. We cover installation across npm and Docker, the canonical 15-minute local test loop, how to read the tool-call visualization to surface schema bugs early, manifest validation against the current spec, transport testing across stdio / SSE / streamable HTTP, eight common MCP server failure modes that Inspector makes obvious, and CI-pipeline integration so you stop publishing servers that the host refuses to load.

Everything below assumes Node 22 LTS or newer and the current @modelcontextprotocol/sdk + @modelcontextprotocol/inspector packages. If you have not built an MCP server before, the companion piece — Build an MCP Server in TypeScript: From Scratch 2026 — covers the end-to-end scaffold this Inspector walkthrough assumes is already in place. The Inspector evolves quickly, but the workflow patterns are stable: if you finish this article and only adopt one habit, make it leaving Inspector running alongside your editor for the entire duration of any MCP server project.

Key takeaways
  1. Inspector cuts dev time by roughly 50%. The tight feedback loop — edit, save, re-invoke a tool in seconds — replaces the multi-minute Claude Desktop restart cycle. Teams report a near-halving of time-to-first-working-tool across new MCP server projects.
  2. Tool-call visualization surfaces schema bugs early. Seeing the exact JSON Schema Claude will receive, the validated arguments, the structured response, and the error envelope side-by-side catches a class of bugs that production traces only hint at.
  3. Manifest validation prevents production issues. Inspector validates your server's capabilities, tool definitions, and metadata against the live MCP spec. Drift between SDK version and spec version is one of the most common silent failures — Inspector makes it noisy.
  4. Transport testing matters for SSE and HTTP. Remote MCP servers behave differently across stdio, SSE, and streamable HTTP — auth, CORS, keep-alives, reconnect semantics. Inspector can target each transport, which is what lets you confirm parity before deploying.
  5. CI integration enables automated MCP testing. Inspector ships a scriptable, headless-friendly mode you can wire into GitHub Actions. Smoke-test the tool catalog and a curated set of representative tool calls on every PR — broken builds never reach the registry.

01 · Why Inspector: Building MCP servers blind is a productivity tax.

The first time anyone builds an MCP server, the workflow looks roughly like this: write a tool registration, save the file, run npm run build, quit Claude Desktop entirely, relaunch it, open a fresh chat, prompt the model in a way that should trigger the tool, watch what happens, then guess at what went wrong. Each iteration is two to three minutes. On a server with five tools, that workflow turns a single afternoon of work into two days.

The deeper problem is that this loop hides almost everything you need to debug. You cannot see the JSON Schema the SDK derived from your Zod definition. You cannot see the exact arguments Claude sent. You cannot see the raw response envelope before the host rendered it. You cannot see whether the server even registered. All of that is on the wire, and the wire is invisible. Inspector exists to make the wire visible.

"The teams that ship MCP servers fastest are not the teams with the cleanest code — they are the teams that watch the JSON-RPC envelope every time they touch a schema."— Digital Applied engineering, on MCP production rollouts

What Inspector actually does is unspectacular and that is the point. It launches your server in a child process using whatever command you tell it to. It speaks JSON-RPC over the configured transport. It calls tools/list, populates a left-hand panel with every tool the server exposes, and gives you a form-based right-hand panel to fire individual tool invocations with arbitrary arguments. Below that, a console view streams every JSON-RPC message — request and response — so you see exactly what the model would see, byte for byte.
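The wire view is easiest to read once you know the two envelopes that dominate it. Here is a minimal sketch of a tools/call request and its response as plain objects; the get_forecast tool and its arguments are hypothetical, used only to show the shape:

```typescript
// Illustrative JSON-RPC 2.0 envelopes for a tools/call exchange.
// The get_forecast tool and its arguments are hypothetical.

interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: unknown;
}

const request: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 42,
  method: "tools/call",
  params: { name: "get_forecast", arguments: { city: "Berlin", days: 3 } },
};

// A well-formed success result wraps content blocks, never a bare string.
const response = {
  jsonrpc: "2.0" as const,
  id: 42, // must echo the request id for the turn to resolve
  result: {
    content: [{ type: "text", text: "Berlin: 3-day forecast ..." }],
    isError: false,
  },
};

if (response.id !== request.id) throw new Error("response id must echo request id");
```

Inspector's console view shows exactly these objects, one per turn, which is why schema bugs that a host renders as a vague "tool failed" are obvious at this layer.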

The compound effect across a multi-week MCP project is large. Edit cycles drop from minutes to seconds. Bug repro shifts from I cannot reproduce this to scripted replays of the offending tool call. Onboarding new engineers stops requiring a Claude Desktop install in the first hour. By the time you have shipped three or four servers this way, running anything else feels like deliberately tying your hands.

House rule
Leave Inspector running for the entire duration of any MCP server project. The cost is one terminal tab; the payoff is that every change you make to a schema, a description, or a handler is exercised the instant the file is saved. That habit alone separates teams shipping MCP servers in days from teams shipping them in months.

02 · Install: npm, docker, claude_desktop_config.json wiring.

There are three reasonable ways to run Inspector: as an npx one-shot (the ad-hoc path), as a project-level dev dependency (the team default), or via the official Docker image (the CI / sandbox default). All three speak the same protocol and surface the same UI; the choice is mostly about how tightly you want to version-pin and where you want the runtime to live.

Most projects should start with the dev-dependency path. It pins the Inspector version inside the same package-lock.json as the SDK, so version drift between team members and CI machines is a non-issue. Add it alongside the SDK:

$ npm install --save-dev @modelcontextprotocol/inspector

$ jq '.devDependencies' package.json
{
  "@modelcontextprotocol/inspector": "^0.10.0",
  "@types/node": "^22.0.0",
  "tsx": "^4.20.0",
  "typescript": "^5.6.0"
}

$ npm pkg set scripts.inspector="npx @modelcontextprotocol/inspector tsx src/index.ts"
$ npm run inspector

> @your-scope/weather-mcp@0.1.0 inspector
> npx @modelcontextprotocol/inspector tsx src/index.ts

Starting MCP inspector...
[server] [weather-mcp] ready on stdio
Inspector running on http://localhost:5173

The npx invocation is what makes the local server discoverable: Inspector launches tsx src/index.ts as a child process, talks to it over stdio, and exposes the UI on localhost:5173. The first run downloads the package into the local node_modules; subsequent runs are instant. Pair this with tsx watch in a sibling terminal if you want server-side hot reload on every file save.

Default
npm
Local dev dependency

Add @modelcontextprotocol/inspector to devDependencies, expose an npm run inspector script. Pins the version per-project, shares the same lockfile semantics as the rest of your toolchain. Recommended for >90% of teams.

npm install --save-dev
Containerised
Docker
Sandbox + CI

The official ghcr.io image runs Inspector inside a container with the server mounted as a volume. Right answer for sandboxed evaluation of third-party servers or for CI environments where global Node installs are inconvenient.

ghcr.io/modelcontextprotocol/inspector
Ad-hoc
npx
One-shot, unpinned

npx @modelcontextprotocol/inspector ... downloads the latest version on demand. Useful for quick spelunking against someone else's server; not recommended as the primary workflow because every team member can land on a different version.

Unpinned, fast
Wiring
JSON
claude_desktop_config.json parity

Inspector reuses the same command + args + env shape that Claude Desktop uses in its config file. You can usually copy your published config block verbatim into the Inspector launch arguments — what works in one works in the other.

Config-format parity

The Docker path is worth knowing even if you do not use it daily. It is the safest way to evaluate a third-party MCP server you do not trust yet: the server runs inside the container, Inspector connects to it from inside the same container, the host machine sees nothing but the localhost UI port. This is also the path the CI examples in Section 07 rely on.

$ docker run --rm -it \
    -p 5173:5173 \
    -v "$PWD":/workspace \
    -w /workspace \
    ghcr.io/modelcontextprotocol/inspector \
    npx tsx src/index.ts

Starting MCP inspector...
[server] [weather-mcp] ready on stdio
Inspector running on http://localhost:5173

For Claude Desktop wiring parity, the Inspector launch arguments mirror the structure of claude_desktop_config.json entries directly. If a server runs in Claude Desktop with a given command and args, the same pair passed to Inspector will work — and any divergence between the two usually points at an environment variable or a working-directory assumption you forgot to document in the published config.

The version-pin habit
Pin both @modelcontextprotocol/sdk and @modelcontextprotocol/inspector in package.json using exact or caret semver, and bump them together. Drift between SDK and Inspector versions is the second most common "works on my machine" failure mode in MCP server projects. Treat them as a single matched pair, not two independent dependencies.
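A cheap way to enforce the matched-pair rule is a preflight check that fails when either package is missing or unpinned. A sketch with the dependency map inlined (the version strings are placeholders; in practice you would read them from package.json):

```typescript
// Preflight: treat @modelcontextprotocol/sdk and inspector as a matched pair.
// Version strings below are placeholders; read them from package.json in practice.
const deps: Record<string, string> = {
  "@modelcontextprotocol/sdk": "^1.0.0",
  "@modelcontextprotocol/inspector": "^0.10.0",
};

function assertMatchedPair(d: Record<string, string>): void {
  for (const name of ["@modelcontextprotocol/sdk", "@modelcontextprotocol/inspector"]) {
    const spec = d[name];
    if (!spec) throw new Error(`${name} missing: pin it next to its pair`);
    // Accept exact or caret/tilde semver; reject "latest", "*", git URLs, etc.
    if (!/^[\^~]?\d+\.\d+\.\d+$/.test(spec)) {
      throw new Error(`${name} is not pinned to concrete semver: ${spec}`);
    }
  }
}

assertMatchedPair(deps); // throws on drift; wire into a pretest script
```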

03 · Test Loop: The 15-minute dev cycle Inspector enables.

A productive MCP server dev cycle has four moves, repeated in a tight loop until the tool works the way you want: edit a schema or handler in the editor, save, switch to Inspector, invoke the tool. With tsx watch driving the server and Inspector holding the connection, the time from keystroke to result is about one second. Fifteen minutes of that loop will get a first tool from skeleton to production-ready.

The fifteen-minute shape is roughly: minutes 0-3 are the scaffolding round-trip — register the tool, see it appear in Inspector's left panel with the description and schema you wrote, fire one call with valid arguments, confirm the response envelope is well-formed. Minutes 3-8 are schema tightening — invoke the tool with deliberately bad inputs, watch how Zod errors surface in the Inspector error view, refine constraints and .describe() strings until the failure modes are useful for the model to read. Minutes 8-15 are the happy-path scenarios — three or four representative inputs that mirror what you expect a real user prompt to produce.

0 – 3 min
Schema appears
register → reload → confirm

Save the tool registration. Inspector reconnects automatically via tsx watch. Confirm the tool name, description, and derived JSON Schema all appear in the left panel exactly as you intend.

Scaffolding round-trip
3 – 8 min
Error path tightening
bad inputs → structured errors

Deliberately fire invalid arguments. Watch how Zod's validation errors surface — these are the strings the model sees at runtime. Tighten .describe() copy until the error guides the next call.

Schema tightening
8 – 15 min
Happy-path coverage
3-4 representative inputs

Three or four inputs that mirror what real user prompts will produce. Save each as a replayable invocation in Inspector for the next time you refactor — instant regression coverage with no test framework.

Representative inputs

The single biggest unlock in this loop is the ability to replay a tool call. Inspector remembers each invocation; the next time you change a schema or a handler, one click rerun confirms whether the change broke a previously-working call. That is regression coverage at the speed of clicking, and it accumulates for free as you build.

Two practical conventions are worth establishing on day one. The first: never log to stdout from a stdio-transport server — Inspector will flag the corruption, but you should still train the habit. Log to stderr via console.error; Inspector renders that stream in a dedicated "Server output" panel that is the most useful debug surface you have. The second: write the tool description for the model, not for a human reader of the source file. The description ships through to the JSON Schema Claude sees at tool-selection time, and vague descriptions are the single biggest cause of tools that the model refuses to invoke.
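The stderr rule is worth encoding as a helper so nobody reaches for console.log out of habit. A minimal sketch; the log helper and its line format are our own convention, not SDK API:

```typescript
// stdout belongs to JSON-RPC on the stdio transport; all human-facing logging
// goes to stderr, which Inspector renders in its Server output panel.
function log(level: "info" | "warn" | "error", message: string): string {
  const line = `[${level}] ${new Date().toISOString()} ${message}`;
  process.stderr.write(line + "\n"); // never process.stdout / console.log
  return line; // returned so tests can assert on the format
}

log("info", "weather-mcp handler registered");
```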

"Inspector turns MCP server development from a multi-restart guessing game into a tight schema-and-handler dialogue with yourself."— Internal review, Q2 2026 MCP server projects

04 · Tool-Call Viz: Input / output / error inspection.

The Inspector tool-call view splits the work of debugging into three clear surfaces: the input the client sent, the output the server returned, and the error envelope when something went wrong. Each surface is the literal JSON the host sees — no pretty-printing that hides structure, no compaction that hides keys. The three views together are what makes Inspector qualitatively different from logging or browser devtools.

Input mode
Validated arguments
tools/call · params.arguments

The exact argument object the server received after Zod (or your schema layer) validated it. Compare against the raw JSON-RPC request in the wire log — diffs reveal coercion, default-value insertion, or silent dropping of unknown fields.

Pre-handler payload
Output mode
Response envelope
{ content: [...], isError: false }

The structured envelope the handler returned, rendered both as parsed JSON and as the text content the model would see. This is where you confirm the response is well-formed before any host rendering pretties it up.

Post-handler payload
Error mode
Structured failures
{ content: [...], isError: true }

Either a Zod validation rejection (schema-level) or a handler-returned error envelope (logic-level). Inspector renders both with the underlying exception trace from server stderr — the production version of this is invisible to you.

Schema + handler

The most useful workflow this enables is bug-isolation by envelope diffing. When a tool is misbehaving, fire it from Inspector with the same arguments the production host sent, then compare the response envelope side-by-side with the production trace. Differences cluster into three buckets: argument transformation (something in the host is munging inputs before they reach the server), environment drift (Inspector has a different env var set than production), or version drift (the published server version is older than your local).

Pay particular attention to the error path. Tools that throw an uncaught exception look very different in Inspector than tools that return a structured { isError: true } envelope. The thrown version terminates the JSON-RPC turn with a generic error; the structured version gives Claude a readable message and a chance to recover, retry, or ask the user. Inspector makes the difference visually obvious, which is why teams that build against it tend to converge on the structured-error pattern naturally.
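One way to converge on the structured pattern is a wrapper that turns any thrown exception into an { isError: true } envelope. A sketch, not SDK code; the envelope shape follows the tools/call result format described above, and withStructuredErrors is a name we made up:

```typescript
type ToolResult = { content: { type: "text"; text: string }[]; isError: boolean };

// Wrap a handler so a thrown exception becomes a structured, recoverable
// error envelope instead of terminating the JSON-RPC turn.
function withStructuredErrors<A>(
  handler: (args: A) => Promise<ToolResult> | ToolResult,
): (args: A) => Promise<ToolResult> {
  return async (args: A) => {
    try {
      return await handler(args);
    } catch (err) {
      // Claude can read this message, explain it, and retry or ask the user.
      const message = err instanceof Error ? err.message : String(err);
      return { content: [{ type: "text", text: `Tool failed: ${message}` }], isError: true };
    }
  };
}

// A handler that throws on bad input no longer kills the session turn.
const sqrtTool = withStructuredErrors(async (args: { n: number }) => {
  if (args.n < 0) throw new Error("n must be non-negative");
  return { content: [{ type: "text" as const, text: String(Math.sqrt(args.n)) }], isError: false };
});
```

Fire sqrtTool with a negative input from Inspector and the error panel shows the readable envelope; remove the wrapper and the same input kills the turn with a generic error.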

What to copy into your test plan
For every tool, Inspector should hold at least four saved invocations: one happy-path call, one boundary-condition call (minimum or maximum input), one explicit validation-failure call (intentionally bad input), and one upstream-failure simulation (mock the dependency, return an error envelope). That four-call baseline is the difference between an MCP server that works and one that survives a year of production use.

05 · Manifest: Validation against the spec.

Every MCP server presents a manifest on initialization: a structured declaration of which capabilities the server supports (tools, prompts, resources, sampling, logging, completion), the schemas for each tool, the metadata about the server, and the protocol version it speaks. Inspector validates this manifest against the live MCP spec on connection — drift surfaces immediately in a dedicated panel rather than as a cryptic runtime error inside Claude Desktop hours later.

The three classes of manifest drift Inspector catches matter disproportionately for production stability. Capability drift means your server claims a capability it does not implement, or implements one it does not declare — both confuse the host. Schema drift means a tool's JSON Schema is malformed, or uses a JSON Schema construct the spec does not allow. Protocol drift means the server is built against a different MCP spec version than the host expects — usually a stale SDK version.

Capabilities
6
Declared surfaces

tools, prompts, resources, sampling, logging, completion. Inspector confirms each declaration round-trips through the appropriate spec request — if you declare prompts but prompts/list returns an error, the validation panel says so immediately.

Initialize handshake
Tool schemas
JSON Schema
Per-tool validation

Every tool's input schema is run through a JSON Schema validator at registration time. Common catch: unsupported JSON Schema dialect features (oneOf with nested $ref, recursive types) that the SDK silently accepts but the host rejects.

Strict mode available
Spec version
Match
Server ↔ host parity

Inspector reports the spec version your server's SDK was built against vs the version Inspector itself speaks. Mismatched majors are a hard fail; mismatched minors usually work but surface as warnings worth investigating.

SDK pinning
Metadata
Strict
Name, version, description

Server name must be unique within the host's config; version must be valid semver; description is bounded in length. Inspector flags missing fields, malformed semver, and over-long descriptions before they cause a published-package rejection.

Pre-publish gate

Run manifest validation as the first check on every code change, not the last. The discipline pays off because manifest-level errors are the cheapest to diagnose — Inspector tells you exactly which field is wrong — and the most expensive to discover late, because once a server is published with a bad manifest the entire install fails opaquely on user machines.

For teams running multiple MCP servers, treat the manifest like an API contract. Snapshot the rendered manifest into your repo (a small JSON file under tests/manifests/) and assert against the snapshot in CI. Any change to a tool name, description, or schema is a deliberate code review event, not a silent shipping decision. Inspector's manifest export command produces the canonical JSON for this snapshot in one shot.
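The snapshot assertion itself is a few lines of Node stdlib. A sketch with both manifests inlined for illustration; in CI the rendered side would come from Inspector's manifest export rather than a literal:

```typescript
import { deepStrictEqual } from "node:assert";

// Committed snapshot: tool names, descriptions, and schemas frozen at review time.
const snapshot = {
  name: "weather-mcp",
  version: "0.1.0",
  tools: [{ name: "get_forecast", description: "Get a city forecast." }],
};

// The manifest the live server renders. Inlined here for illustration;
// in CI, capture it from Inspector's manifest export instead.
const rendered = {
  name: "weather-mcp",
  version: "0.1.0",
  tools: [{ name: "get_forecast", description: "Get a city forecast." }],
};

// Any drift in a name, description, or schema fails the build loudly.
deepStrictEqual(rendered, snapshot);
console.error("manifest snapshot: OK");
```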

"A clean Inspector manifest panel is the only pre-publish signal that actually correlates with installs that succeed on user machines."— Engineering retrospective, multi-server MCP rollout

06 · Transport: stdio, SSE, HTTP — pick by deployment.

Inspector can drive any of the three MCP transports — stdio, SSE, or streamable HTTP — against the same server, provided the server itself supports them. Testing across transports is not a nice-to-have when you intend to deploy remotely: each transport has its own auth story, reconnect semantics, and failure modes, and the only reliable way to catch them is to exercise them deliberately before users do.

The choice of which transport to support is upstream of the choice of which transport to test. The matrix below is the rule of thumb most production MCP teams converge on; pick by deployment posture, not by what is fashionable in the spec changelog.

Local
stdio for desktop hosts

The default for any server that runs on the user's machine — Claude Desktop, Claude Code, Cursor. Zero network surface, no auth needed, the host launches your server as a subprocess. Inspector treats this as the canonical local path; if you only support one transport, support this one.

Pick stdio
Remote
SSE for legacy

Server-Sent Events over a long-lived HTTP connection. Older remote MCP servers still use this; some hosts only support it. Inspector handles SSE reconnect semantics gracefully, which is what makes it useful for diagnosing flaky proxies and aggressive load-balancer timeouts.

Pick SSE
Cloud
Streamable HTTP for serverless

The 2025-revision transport: bidirectional streamable HTTP designed for serverless and load-balanced deployments. The right answer for any MCP server hosted in Vercel, Cloudflare Workers, AWS Lambda, or behind a CDN. Inspector targets it natively.

Pick streamable HTTP
Strategy
Test all three or pick one

Servers intended for broad distribution should support stdio plus one remote transport, and Inspector-test both on every release. Servers intended for a single deployment posture should support just that one and resist the urge to add more.

Match transports to audience

For remote transports, the failure modes Inspector helps surface are the ones nobody hits until production. Auth header propagation: are you correctly passing bearer tokens on the initialize handshake and on every subsequent JSON-RPC turn? CORS: does the server emit the headers any browser-based host needs? Keep-alives: does the SSE connection survive the two-and-a-half-minute idle timeout most cloud load balancers enforce by default? Reconnect: when Inspector intentionally drops the connection, does the server clean up its state correctly?
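The CORS and keep-alive items reduce to emitting the right headers and a periodic ping. A minimal sketch of one plausible header set and ping cadence for an SSE endpoint; the allowed origin is a placeholder and the 25-second interval is a conservative assumption that stays under typical load-balancer idle timeouts:

```typescript
// One plausible header set for an SSE-transport MCP endpoint. The allowed
// origin is a placeholder; real servers should echo a configured allowlist.
function sseHeaders(allowedOrigin: string): Record<string, string> {
  return {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache, no-transform", // proxies must not buffer the stream
    "Connection": "keep-alive",
    "Access-Control-Allow-Origin": allowedOrigin,
    "Access-Control-Allow-Headers": "Authorization, Content-Type",
  };
}

// SSE comment lines start with ":" and are ignored by clients; sending one
// periodically keeps idle connections alive through load balancers.
function keepAlivePing(): string {
  return ": ping\n\n";
}

const PING_INTERVAL_MS = 25_000; // conservative: below common 30-150s idle timeouts
```

Driving the SSE transport from Inspector with a 90-second idle gap is the fastest way to find out whether this ping cadence actually survives your edge configuration.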

# Test stdio (default)
$ npx @modelcontextprotocol/inspector tsx src/index.ts

# Test SSE — server already running on :8080
$ npx @modelcontextprotocol/inspector --transport sse \
    --url http://localhost:8080/mcp

# Test streamable HTTP — server already running on :8080
$ npx @modelcontextprotocol/inspector --transport http \
    --url http://localhost:8080/mcp \
    --header "Authorization: Bearer dev-token"

One practical rule for teams shipping remote MCP servers: run the Inspector transport tests against a deployed preview, not just against localhost. Localhost masks every interesting networking failure — TLS termination, sticky-session routing, proxy buffering — that production deployments exhibit. A staging URL with the same edge config as production is the right target, and the auth model you settle on for that staging URL is the same one our MCP server security best-practices guide walks through end to end.

Remote-transport checklist
Before shipping any remote MCP server, drive it from Inspector against the staging URL with: a valid auth token (succeeds), an invalid token (fails with the right error envelope), a missing token (fails with the right error envelope), a 90-second idle gap (connection survives or reconnects cleanly), and a deliberate client disconnect mid-tool-call (server cleans up handler state). Any failure on that list is a do-not-ship signal.

07 · Debugging: Eight common MCP server failure modes.

After enough MCP servers have been built and shipped, the same eight failure modes account for the overwhelming majority of bugs teams hit. Inspector surfaces each one within seconds. The list below is ordered roughly by frequency — the first three account for more than half of all real-world MCP server bugs.

Mode 01
stdout corruption
console.log → broken transport

A stray console.log writes plain text into the JSON-RPC channel and the host disconnects. Inspector renders the corruption inline. Fix: log to stderr only, audit dependencies for libraries that write to stdout.

Most common
Mode 02
Vague tool descriptions
model refuses to invoke

Description tells the model what the tool does but not when to call it. Inspector lets you ship a draft description and then ask Claude in a real chat to invoke it — observed refusal is the diagnostic.

Second most common
Mode 03
Schema over-restriction
Zod refuses valid input

Required field that should be optional, or an enum that misses a valid case. Surfaces as a Zod validation error in Inspector's error panel — the message names the exact field.

Schema-level
Mode 04
Missing timeouts
upstream stall → session hang

A handler awaiting fetch() with no AbortSignal.timeout() stalls the entire MCP session. Inspector shows the request hanging indefinitely. Fix: every outbound call gets a hard ceiling.

I/O bound
Mode 05
Thrown vs structured errors
uncaught vs isError: true

Uncaught exceptions kill the turn with a generic error; structured envelopes let Claude recover. Inspector's error panel shows the difference at a glance — converge on the structured pattern.

Handler-level
Mode 06
Stale manifest
host caches the old tool list

Tool added or renamed; the host is still showing yesterday's list. Inspector always re-reads the manifest on connect — confirms whether the issue is server-side or host-side. Fix: bump version, full host restart.

Caching artifact
Mode 07
Missing env vars
process.env.X is undefined

Server reads an env var that exists in your shell but not in claude_desktop_config.json's env block. Inspector lets you reproduce the missing-env state exactly by launching without the variable.

Config-level
Mode 08
Protocol version drift
SDK older than host expects

Server SDK was built against an old MCP spec; host speaks a newer one. Inspector's manifest panel flags the version mismatch. Fix: bump @modelcontextprotocol/sdk, rebuild, republish.

Versioning
Pattern
Run all eight in CI
scripted Inspector smoke test

Every PR triggers a headless Inspector run that exercises one representative invocation per tool and one deliberate failure per mode. Broken builds never merge.

Automated coverage

The eight modes above are not the entire failure surface, but they are the eight you should be able to recognise on sight from an Inspector trace. The compounding payoff is that engineers second or third on a project will reach the same diagnosis from the same evidence — Inspector becomes a shared vocabulary for debugging, not a private debugger for whoever was on the keyboard.
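The Mode 04 fix deserves a concrete shape, because it is one line of discipline per outbound call. A sketch using the standard AbortSignal.timeout() (available in Node 18+); the 10-second ceiling and the error message are assumptions, not spec requirements:

```typescript
// Failure mode 04: every outbound call gets a hard ceiling so a stalled
// upstream cannot hang the whole MCP session. 10s is an assumed default.
async function fetchWithCeiling(url: string, ms = 10_000): Promise<string> {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(ms) });
    if (!res.ok) throw new Error(`upstream returned ${res.status}`);
    return await res.text();
  } catch (err) {
    // Translate the abort into a message the model can act on.
    if (err instanceof Error && err.name === "TimeoutError") {
      throw new Error(`upstream timed out after ${ms}ms`);
    }
    throw err;
  }
}
```

Pair this with a structured error envelope so the timeout message reaches the model as readable text rather than hanging the turn.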

For CI integration, the pattern is small and worth wiring up early. A GitHub Actions step runs Inspector in headless mode against your server, fires a curated list of tool invocations (the same four-call baseline per tool from Section 04), asserts on the response envelope shape, and fails the build on any deviation. The script below is a minimal version; production setups extend it to cover all eight failure modes deliberately.

# .github/workflows/mcp-smoke.yml
name: mcp-smoke
on:
  pull_request:
  push:
    branches: [main]

jobs:
  smoke:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "22"
      - run: npm ci
      - run: npm run build
      - name: Inspector smoke test
        run: |
          npx @modelcontextprotocol/inspector \
            --headless \
            --script tests/smoke.mcp.json \
            -- node dist/index.js
        env:
          UPSTREAM_API_KEY: ${{ secrets.UPSTREAM_API_KEY_TEST }}

The tests/smoke.mcp.json file contains a small JSON array of tool calls plus expected envelope assertions — name, arguments, expected isError, expected substring in the text content. Two pages of JSON catches the eighty percent of regressions that matter, and updates as you add tools rather than as you remember to write tests.
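The assertion loop over that file is small enough to inline. A sketch of one plausible shape for the entries and the runner; the callTool dispatcher here is a stand-in, where CI would instead round-trip through headless Inspector against the built server:

```typescript
type SmokeCase = {
  name: string;
  arguments: Record<string, unknown>;
  expectIsError: boolean;
  expectSubstring: string;
};

type Envelope = { content: { type: string; text: string }[]; isError: boolean };

// Stand-in dispatcher. In CI this call would round-trip through headless
// Inspector against the built server instead of a local function.
function callTool(name: string, args: Record<string, unknown>): Envelope {
  if (name === "echo") {
    return { content: [{ type: "text", text: `echo: ${args.message}` }], isError: false };
  }
  return { content: [{ type: "text", text: `unknown tool: ${name}` }], isError: true };
}

const cases: SmokeCase[] = [
  { name: "echo", arguments: { message: "hi" }, expectIsError: false, expectSubstring: "hi" },
  { name: "nope", arguments: {}, expectIsError: true, expectSubstring: "unknown" },
];

for (const c of cases) {
  const env = callTool(c.name, c.arguments);
  const text = env.content.map((b) => b.text).join("\n");
  if (env.isError !== c.expectIsError || !text.includes(c.expectSubstring)) {
    throw new Error(`smoke failed for ${c.name}`);
  }
}
console.error("smoke: all cases passed");
```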

Where this slots into your stack
Inspector is the developer-loop tool; the CI script is the gate; your published server is the artifact. The three should match. When the Inspector test passes locally, the CI smoke should pass, and the published install should succeed on user machines. Any divergence between those three signals is a process bug, and the first place to look is the version pins between your @modelcontextprotocol/sdk and @modelcontextprotocol/inspector entries. For architecture engagements around MCP servers in production, our AI transformation team designs this loop end-to-end.
Conclusion

MCP Inspector is the dev loop — without it, you're shipping blind.

Every team that ships MCP servers eventually arrives at the same workflow: Inspector running in one tab, the editor in another, tsx watch driving the server, four to six saved tool invocations per tool that replay on every code change, a CI smoke step that gates publish on the same set. Teams that arrive at it on day one ship roughly twice as fast as teams that arrive at it on month two. The cost of adoption is a single npm install; the benefit is a structural change to how quickly you can move.

Beyond raw velocity, Inspector gives a team a shared vocabulary for debugging. The eight failure modes in Section 07 become things engineers can recognise on sight from a screen-share, rather than category errors that require a senior to come over and squint at a log file. That alone changes the staffing economics of an MCP server portfolio — junior engineers can own production servers because the diagnostic surface is shared and visual, not tribal.

The next concrete step is small. If you currently run an MCP server without Inspector wired into the dev loop, install it as a dev dependency, add the npm run inspector script, save the four-call baseline per tool, and add the CI smoke step. One afternoon of work; the payoff compounds across every server you ship from that point forward. Standardise now; do not wait for the second production incident to make the case for you.

Standardise on MCP Inspector


Our team designs MCP server dev workflows — Inspector integration, CI test automation, manifest validation, transport testing — for production MCP teams.

Free consultation · Expert guidance · Tailored solutions
What we build

MCP workflow engagements

  • Inspector dev-loop standardisation
  • CI test automation with Inspector
  • Manifest validation in CI pipelines
  • Transport testing across stdio / SSE / HTTP
  • Debugging-pattern playbooks
FAQ · MCP Inspector

The questions MCP teams ask before production.

Q: How does Inspector compare to a hand-rolled test suite?

Hand-rolled tests typically check a single layer — schema validation, handler logic, or end-to-end via a mock host — and require ongoing maintenance to stay in sync with the spec. Inspector exercises the entire MCP surface as a real host would: it negotiates the protocol version, reads the manifest, validates each tool's JSON Schema against the spec, and fires real JSON-RPC calls across whichever transport you configure. The tradeoff: hand-rolled tests are reproducible in CI by default, Inspector is interactive by default. The right pattern is both — Inspector for the dev loop where its visual feedback is most valuable, plus a headless Inspector script in CI for regression coverage that survives across team members. The headless mode means you do not have to choose.