MCP — the Model Context Protocol — is the cleanest answer Anthropic and the agent ecosystem have produced to a problem every team rolling tool-use into production hits eventually: how do you stop rewriting the same OpenAPI-to-prompt glue every time you bolt a new tool onto a new model? This tutorial walks the full build of an MCP server in TypeScript, end to end, with no skipped steps.
By the time you finish reading you will have a working server that registers two tools — a weather lookup and a calendar lister — using schema-validated inputs via Zod, communicates over stdio with JSON-RPC, has been exercised end-to-end with the official mcp-inspector browser UI, is wired into Claude Desktop via a claude_desktop_config.json entry, and is ready to publish on npm under your scope so any other Claude Desktop or Claude Code user can install it with one command.
Everything in this guide runs on Node 22 LTS or newer. The full code is short — under a hundred lines of TypeScript — but the surrounding file-system, transport, and config conventions are where most first-time MCP authors stumble. Those are exactly the steps we slow down on.
- 01 — MCP is JSON-RPC plus a tool schema. A small spec on top of an old protocol. Familiarity transfers from any RPC framework you've used — handlers, request/response, error envelopes.
- 02 — Stdio transport is the default — and the easiest. One process, one binary, one config line. Reach for SSE or HTTP only when you genuinely need multi-client access or a remote-server deployment.
- 03 — Zod schemas are your contract with the model. They generate the JSON Schema Claude sees at tool-call time. Vague or mis-described schemas are the single biggest cause of tools that Claude refuses to invoke.
- 04 — mcp-inspector is the development feedback loop. Browser UI, full request/response visibility, replayable tool calls. Faster than restarting Claude Desktop on every change and far easier to debug.
- 05 — Distribution is the under-discussed half. A server is adopted not because it works, but because the README, npm metadata, and version cadence are professional. Treat your published package as a product.
01 — Why MCP: tools become first-class citizens.
Before MCP, every team integrating tool-calling into an LLM application built the same plumbing twice: once for the model provider's tool format, once for the actual transport layer to the underlying capability — REST, gRPC, a local script, whatever. The schemas drifted. The error envelopes drifted. The auth story was ad-hoc per integration. And the moment you wanted the same tool to work in Claude Desktop, in Claude Code, in a custom agent, you wrote three versions of the same wrapper.
MCP solves a narrow, specific problem: it standardises the contract between an MCP host (Claude Desktop, Claude Code, Cursor, any IDE or agent that speaks MCP) and an MCP server (your code, exposing tools, prompts, and resources). The wire format is JSON-RPC 2.0. The transport is negotiated — stdio for local, SSE or HTTP for remote. The tool schema is JSON Schema. None of that is novel; the contribution is the packaging.
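To make the wire format concrete, here is an illustrative JSON-RPC 2.0 exchange for a tool call — hand-written rather than captured from a live session, with the tool name and arguments matching the server built later in this guide. The host sends:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "Berlin" }
  }
}
```

and the server replies with a result envelope carrying content blocks:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "content": [{ "type": "text", "text": "Berlin: Partly cloudy, 18°C" }]
  }
}
```

The SDK constructs and parses these envelopes for you; you never hand-write them in practice, but knowing the shape makes inspector traces and error logs immediately legible.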
Anthropic open-sourced the spec and the reference SDKs (TypeScript, Python, Kotlin) in late 2024, and adoption has compounded through 2025 and into 2026. The npm registry shows roughly 1,200 published MCP servers as of Q2 2026 — Slack, GitHub, Linear, Postgres, Figma, internal-corpus search, weather, file-system, screenshot, browser-control. The pattern is clear: anything you currently integrate with the Claude API by hand has, or will have, an MCP server you can drop in instead.
"MCP shifts the question from how do I integrate? to what should the tool look like? The wire format is solved; the design question is what matters."— Digital Applied engineering, on production MCP rollouts
For a team designing agentic workflows in 2026, the practical move is to assume MCP for any tool you intend to reuse across more than one host. The cost of writing a tool as an MCP server is barely higher than writing it as a bespoke wrapper; the upside is that every Claude surface, plus an expanding circle of third-party agents, can consume it without further work.
02 — Scaffold: npm init to first build.
We will keep the tooling vanilla — no monorepo, no framework, just a single TypeScript package that produces an executable Node binary. Create a directory and initialise it. The terminal session below is verbatim; nothing has been edited for length.
$ mkdir weather-calendar-mcp && cd weather-calendar-mcp
$ npm init -y
Wrote to /Users/you/weather-calendar-mcp/package.json:
{
"name": "weather-calendar-mcp",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": { "test": "echo \"Error: no test specified\" && exit 1" },
"keywords": [],
"author": "",
"license": "ISC"
}
$ npm install --save @modelcontextprotocol/sdk zod
$ npm install --save-dev typescript @types/node tsx @modelcontextprotocol/inspector
$ npx tsc --init
Created a new tsconfig.json with:
target: es2022
module: nodenext
...

Two production dependencies. The first, @modelcontextprotocol/sdk, is the official TypeScript SDK — it ships both the server primitives we use here and the client primitives you would use to consume an MCP server from your own agent. The second, zod, is the schema library the MCP TypeScript SDK is designed around: define a Zod schema once, and the SDK converts it to JSON Schema for the model and validates inbound arguments at runtime against the same definition.
Replace the generated tsconfig.json with a tight config suited for a published Node package. Set "module": "NodeNext", "target": "es2022", "outDir": "dist", and "declaration": true so consumers get types. Update package.json scripts to match:
{
"name": "@your-scope/weather-calendar-mcp",
"version": "0.1.0",
"type": "module",
"bin": {
"weather-calendar-mcp": "./dist/index.js"
},
"files": ["dist", "README.md"],
"scripts": {
"build": "tsc",
"dev": "tsx watch src/index.ts",
"inspector": "npx @modelcontextprotocol/inspector tsx src/index.ts",
"prepublishOnly": "npm run build"
}
}

The bin entry is what makes your server installable as a named command — once published, a user runs npx @your-scope/weather-calendar-mcp and Node executes dist/index.js. Add a shebang line (#!/usr/bin/env node) to the top of src/index.ts so that file is directly executable after compilation.
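For reference, a tsconfig.json matching the settings described above might look like this — a sketch, not the only valid configuration:

```json
{
  "compilerOptions": {
    "target": "es2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "dist",
    "rootDir": "src",
    "declaration": true,
    "strict": true,
    "skipLibCheck": true
  },
  "include": ["src"]
}
```

The NodeNext pair (module plus moduleResolution) is what makes the explicit `.js` extensions in the SDK's import paths resolve correctly under ESM.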
Set "type": "module" in package.json. The MCP SDK is ESM-only; CommonJS imports will fail at runtime with cryptic errors about require() of an ES module. Pair it with "module": "NodeNext" in tsconfig.json and the import/extension story stays sane.

03 — Server Shape: stdio transport, JSON-RPC, the request lifecycle.
An MCP server is, at runtime, a long-running Node process that reads JSON-RPC requests from one transport, dispatches them to your tool handlers, and writes JSON-RPC responses back over the same transport. The SDK gives you a Server (or higher-level McpServer) class to model that lifecycle and three built-in transports to choose from.
- Standard streams (Local · Single-host). One MCP host launches one MCP server as a child process. Requests on stdin, responses on stdout, log lines on stderr. Zero network setup, single-tenant. This tutorial uses stdio throughout.
- Server-sent events (Remote · Multi-client). HTTP endpoint; the server pushes JSON-RPC events to the host over a long-lived connection. Suitable for remote servers and multi-client scenarios. More moving parts: auth, CORS, keep-alives.
- Streamable HTTP (Serverless-friendly). Newer transport — bidirectional streamable HTTP, designed for serverless and load-balanced deployments. Use when SSE's long-lived sockets are awkward to operate.
- Per-server choice (Operational rule). A single server typically binds to one transport at startup. If you need multiple, run multiple processes — the simpler model survives more refactors than a multi-transport router.

Stdio is the right default for almost every Claude Desktop or Claude Code integration. The host launches your server as a subprocess on demand, the user runs in a single trust boundary (their machine), and there is nothing to deploy or to expose. The only real reason to reach for SSE or HTTP is a server that needs to be remote — running in your cloud, serving multiple users — or one that wraps a stateful resource the host shouldn't own.
Concretely, here is the minimum viable server skeleton. We will fill in the actual tool registrations in Section 04; this is the shape every MCP server in TypeScript starts from.
#!/usr/bin/env node
// src/index.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
const server = new McpServer({
name: "weather-calendar-mcp",
version: "0.1.0",
});
// (tool registrations go here — Section 04)
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
// server now reads JSON-RPC from stdin and writes responses to stdout
// log to stderr only — stdout is the transport channel
console.error("[weather-calendar-mcp] ready on stdio");
}
main().catch((err) => {
console.error("Fatal:", err);
process.exit(1);
});

A single rule will save you an evening of debugging: with stdio transport, never write anything to stdout except framed JSON-RPC. Any stray console.log corrupts the protocol stream and the host disconnects. Always log to stderr. The MCP SDK's logging helpers do this for you, but plain console.error() works fine.
04 — Tool Registration: schema-first with Zod.
Tools are the thing that makes an MCP server worth running. The higher-level McpServer.tool() API takes four arguments: a tool name (snake_case is conventional, as in get_weather below), a human-readable description, a Zod input schema, and an async handler function that receives the validated arguments and returns a structured response.
The description and schema both ship to Claude. The description tells the model when to call this tool; the schema tells it what arguments to pass and what they mean. Tools that Claude refuses to invoke — the most common newbie frustration — almost always have a vague description or under-specified parameter docs.
- Simple string tool (0 parameters). server.tool(name, desc, {}, fn) — no parameters; a tool that returns the current server time, version info, or a fixed message. The Zod schema is an empty object literal. Useful for health-check style endpoints.
- Parameterised lookup (1+ parameters). z.object({ city: z.string() }) — a single required argument with a string-typed schema and a .describe() call. Zod's .describe() text flows into the JSON Schema description Claude reads — this is where you teach the model what 'city' means.
- Async external call (I/O bound). async fn → fetch → format — the handler is async, calls an upstream API, and formats the result as text content. Use AbortController for cancellation, and return a structured error response on failure rather than throwing.

Here is the canonical tool registration shape, with the descriptions written for the model rather than for a human reader of the source file. This pattern is what gives you reliable tool invocation in practice.
import { z } from "zod";
server.tool(
"get_weather",
"Look up the current weather for a city by name. Use this when the user " +
"asks about temperature, conditions, or whether to bring an umbrella. " +
"Returns a short natural-language summary plus the temperature in Celsius.",
{
city: z
.string()
.min(2)
.describe(
"City name, optionally with country code, e.g. 'Berlin' or 'Austin, US'. " +
"Avoid ambiguous names — pass the country code when possible."
),
},
async ({ city }) => {
const summary = await fetchWeatherSummary(city);
return {
content: [{ type: "text", text: summary }],
};
}
);

The SDK derives the JSON Schema Claude sees from the Zod shape. The .describe() string flows through to the schema and is visible to the model at tool-selection time, which is why we use it liberally. The handler returns a { content: [{ type: "text", text }] } envelope; the SDK serialises that into the JSON-RPC response.
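If you find yourself repeating that envelope across many tools, two small helpers keep handlers terse. The types below are a simplified sketch of the result shape, not the SDK's own exports (its CallToolResult carries more content variants, such as images), and ok/fail are hypothetical names:

```typescript
// Simplified sketch of the tool-result envelope the handler returns.
type TextContent = { type: "text"; text: string };
type ToolResult = { content: TextContent[]; isError?: boolean };

// Hypothetical helpers — not part of the SDK.
const ok = (text: string): ToolResult => ({
  content: [{ type: "text", text }],
});
const fail = (message: string): ToolResult => ({
  content: [{ type: "text", text: message }],
  isError: true,
});

console.log(ok("Berlin: Partly cloudy, 18°C").content[0].text);
console.log(fail("Weather lookup failed: HTTP 503").isError); // true
```

With these in place, a handler body collapses to `return ok(summary)` on the happy path and `return fail(message)` in every catch block.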
05 — Real Tools: weather plus calendar — the full implementation.
Time to make it real. The server registers two tools: get_weather calls a public weather API and returns a short summary; list_calendar_events reads from a local calendar source (we will stub the storage layer here — replace it with Google Calendar, Microsoft Graph, or a SQLite database as your integration target). Both handlers are async and both return text content.
#!/usr/bin/env node
// src/index.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
const server = new McpServer({
name: "weather-calendar-mcp",
version: "0.1.0",
});
// --- Tool 1: weather ---
server.tool(
"get_weather",
"Look up the current weather for a city by name. Use this when the user " +
"asks about temperature, conditions, or whether to bring an umbrella.",
{
city: z
.string()
.min(2)
.describe("City name, e.g. 'Berlin' or 'Austin, US'."),
},
async ({ city }) => {
try {
const res = await fetch(
`https://wttr.in/${encodeURIComponent(city)}?format=j1`,
{ signal: AbortSignal.timeout(8000) }
);
if (!res.ok) {
return {
content: [
{ type: "text", text: `Weather lookup failed: HTTP ${res.status}` },
],
isError: true,
};
}
const data = (await res.json()) as {
current_condition: { temp_C: string; weatherDesc: { value: string }[] }[];
};
const cur = data.current_condition[0];
return {
content: [
{
type: "text",
text: `${city}: ${cur.weatherDesc[0].value}, ${cur.temp_C}°C`,
},
],
};
} catch (err) {
return {
content: [
{ type: "text", text: `Weather lookup error: ${(err as Error).message}` },
],
isError: true,
};
}
}
);
// --- Tool 2: calendar ---
type CalendarEvent = { id: string; title: string; start: string; end: string };
// Replace with your real calendar source — Google Calendar, Microsoft Graph, etc.
async function readCalendarBetween(
from: string,
to: string
): Promise<CalendarEvent[]> {
return [
{ id: "1", title: "Team standup", start: `${from}T09:00`, end: `${from}T09:15` },
{ id: "2", title: "Design review", start: `${to}T14:00`, end: `${to}T15:00` },
];
}
server.tool(
"list_calendar_events",
"List the user's calendar events between two ISO dates. Use this when the " +
"user asks what is on their schedule, whether they are free at a given " +
"time, or to summarise upcoming meetings.",
{
from: z
.string()
.describe("ISO date (YYYY-MM-DD) for the start of the range."),
to: z
.string()
.describe("ISO date (YYYY-MM-DD) for the end of the range, inclusive."),
},
async ({ from, to }) => {
const events = await readCalendarBetween(from, to);
if (events.length === 0) {
return {
content: [{ type: "text", text: `No events between ${from} and ${to}.` }],
};
}
const lines = events.map(
(e) => `- ${e.title} (${e.start} → ${e.end})`
);
return {
content: [
{
type: "text",
text: `Events ${from} → ${to}:\n${lines.join("\n")}`,
},
],
};
}
);
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
console.error("[weather-calendar-mcp] ready on stdio");
}
main().catch((err) => {
console.error("Fatal:", err);
process.exit(1);
});

Three details in that file repay attention. First, the AbortSignal.timeout(8000) on the outbound fetch — every tool handler should set a hard ceiling on upstream calls, otherwise a slow third-party API stalls the entire MCP session. Second, the error path returns structured error content with isError: true rather than throwing — this lets Claude see the failure message and decide whether to retry, ask the user, or surface the error. Third, the stub readCalendarBetween() is the seam where a real integration plugs in; everything around it stays the same.
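One easy hardening step at that seam: the Zod schema only checks that from and to are strings, so the handler may want to validate the range itself before querying storage. A minimal sketch — isValidRange, the regex, and the ordering check are illustrative assumptions, not SDK behaviour:

```typescript
// Hypothetical guard for the list_calendar_events handler: reject
// malformed or inverted date ranges before touching the calendar source.
const ISO_DATE = /^\d{4}-\d{2}-\d{2}$/;

function isValidRange(from: string, to: string): boolean {
  // ISO YYYY-MM-DD strings order correctly under plain string comparison.
  return ISO_DATE.test(from) && ISO_DATE.test(to) && from <= to;
}

console.log(isValidRange("2026-05-01", "2026-05-07")); // true
console.log(isValidRange("2026-05-07", "2026-05-01")); // false (inverted)
console.log(isValidRange("May 1", "2026-05-07"));      // false (not ISO)
```

In the handler, a false result would return a { content, isError: true } envelope naming the bad argument, so Claude can correct the call rather than receive a confusing storage-layer error.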
Build it: npm run build. You should see a populated dist/ directory with an executable index.js. The server does nothing on its own — it needs a host to connect to it. Section 06 is how you exercise it in isolation; Section 07 is how you wire it to Claude Desktop.
06 — Local Testing: mcp-inspector and the dev loop.
The single most valuable habit you can pick up early: drive your server with mcp-inspector while you build it. It is a small browser UI bundled with the MCP project that launches your server in a child process, speaks the protocol on your behalf, lists the tools your server exposes, and lets you fire arbitrary tool calls with arbitrary arguments while showing every JSON-RPC message on the wire. Iteration is seconds, not the minutes it takes to restart Claude Desktop on every change.
We already added the script in Section 02. Run it:
$ npm run inspector
> @your-scope/weather-calendar-mcp@0.1.0 inspector
> npx @modelcontextprotocol/inspector tsx src/index.ts
Starting MCP inspector...
[server] [weather-calendar-mcp] ready on stdio
Inspector running on http://localhost:5173

Open the URL. The left panel lists your tools — both should appear with the descriptions you wrote — and the right panel is a form-based invocation surface. Try get_weather with {"city": "Berlin"}. You should see the request envelope, the response envelope, and the rendered text content inline. Now break it on purpose — pass {"city": ""}. Zod's .min(2) constraint rejects it, the SDK returns a structured validation error, and the inspector renders that error. That is exactly what Claude will see at runtime — which is why getting the error story right here pays off later.
"Tool returned an error: Cannot read properties of undefined (reading 'temp_C')" — almost always means the upstream API changed its response shape and your code is parsing what it expected, not what arrived. Log JSON.stringify(data) to stderr at the top of the handler, re-run, and you will see exactly what came back. The stderr stream is visible in the inspector's "Server output" panel; do not be tempted to use stdout.
Keep the inspector running in one terminal while you edit code in your editor. The tsx watch mode in the inspector script restarts the server on every save — your turnaround for "change a schema field and re-invoke" is about a second. That speed is what makes MCP development pleasant; lose it and the iteration cost on tool tuning is the difference between "ship it tonight" and "ship it next week".
07 — Claude Desktop: claude_desktop_config.json — the wiring.
Once the inspector is happy, wiring your server into Claude Desktop is a single config edit. Claude Desktop reads a JSON file at startup that tells it which MCP servers to launch and how. The location of that file is OS-specific:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/Claude/claude_desktop_config.json
Create the file if it does not exist. The minimal entry for the server we just built — pointing at the local dist/ build, before you publish — looks like this:
{
"mcpServers": {
"weather-calendar": {
"command": "node",
"args": ["/Users/you/weather-calendar-mcp/dist/index.js"],
"env": {}
}
}
}

Save the file. Fully quit Claude Desktop and relaunch it — a window reload is not enough. Open any chat, click the tools indicator in the input area, and you should see weather-calendar listed with its two tools. Ask Claude a question that triggers tool use: "What is the weather in Berlin and what is on my calendar tomorrow?" — and watch it invoke both tools in sequence.
Once the server is published to npm (Section 08), the config entry simplifies. Most published servers use the npx form so users do not need a clone:
{
"mcpServers": {
"weather-calendar": {
"command": "npx",
"args": ["-y", "@your-scope/weather-calendar-mcp"],
"env": {}
}
}
}

The -y flag auto-confirms the first-run install prompt. The env object is where you pass any API keys or feature flags your server reads via process.env — keep that documented in the README so users know what to set.
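A common pattern for consuming those variables inside the server is to fail fast on stderr so a misconfigured entry is obvious in the host's logs. requireEnv and WEATHER_API_KEY below are hypothetical names, not SDK exports:

```typescript
// Hypothetical helper: read a required variable supplied via the "env"
// block of claude_desktop_config.json, exiting with a clear stderr
// message if it is absent.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    // stderr only — stdout is reserved for the JSON-RPC stream
    console.error(`[weather-calendar-mcp] missing required env var: ${name}`);
    process.exit(1);
  }
  return value;
}

process.env.WEATHER_API_KEY = "demo-key"; // simulating a config-supplied value
console.log(requireEnv("WEATHER_API_KEY")); // prints "demo-key"
```

Call it once at startup, before server.connect(), so a missing key surfaces as one clear line rather than a mid-session tool failure.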
"Restart Claude Desktop fully, not just reload — the MCP server registry is read once at process start. This single point catches more first-time integrations than any other."— Issue tracker pattern across MCP server repos, 2025-2026
08 — Ship It: npm publish, CI, and versioning.
A server that works on your machine is not a shipped server. The difference between a published MCP server that gets adopted and one that sits unstarred on GitHub is the same difference that separates any published npm package from a private one: a readable README, sensible semver, a working install command, and a CI pipeline that stops you publishing broken builds.
Three distribution paths are worth knowing. Pick the one that matches your audience and security posture.
- Public npm package. Publish under your npm scope, version with semver, document the install line in the README. Users add a one-line npx command to claude_desktop_config.json. Discoverable via npm search and the growing MCP server lists.
- GitHub-pinned. Skip npm; users add a git+https URL or a tarball release to their config. Fine for prototypes, internal tools, or work-in-progress servers. Less discoverable, no automatic update story, but zero supply-chain surface.
- Private registry. Publish to a private npm registry (Verdaccio, GitHub Packages, Artifactory) with scoped credentials. The right answer for internal-only servers that touch proprietary data or APIs. Pair with mandatory code review.

Before npm publish, set the things that determine whether your package looks professional in the registry listing. Fill in description, repository, homepage, bugs, license, and keywords in package.json. Use the mcp keyword — the MCP server directory tools pick it up. Write a README that opens with one sentence on what the server does, followed by the exact claude_desktop_config.json snippet to install it.
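Concretely, the metadata block might look like this — the URLs and names are placeholders for your own:

```json
{
  "description": "MCP server exposing a weather lookup and a calendar lister for Claude Desktop and Claude Code",
  "keywords": ["mcp", "mcp-server", "claude", "weather", "calendar"],
  "repository": {
    "type": "git",
    "url": "git+https://github.com/your-name/weather-calendar-mcp.git"
  },
  "homepage": "https://github.com/your-name/weather-calendar-mcp#readme",
  "bugs": "https://github.com/your-name/weather-calendar-mcp/issues",
  "license": "MIT"
}
```

These fields cost five minutes to fill in and are the first thing a prospective user sees on the registry page.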
$ npm login
$ npm publish --access public
npm notice
npm notice 📦 @your-scope/weather-calendar-mcp@0.1.0
npm notice === Tarball Contents ===
npm notice 864B README.md
npm notice 712B package.json
npm notice 4.3kB dist/index.js
npm notice 2.1kB dist/index.d.ts
npm notice === Tarball Details ===
npm notice name: @your-scope/weather-calendar-mcp
npm notice version: 0.1.0
npm notice published from: /Users/you/weather-calendar-mcp
npm notice
+ @your-scope/weather-calendar-mcp@0.1.0

For CI, a minimal GitHub Actions workflow is enough to keep you honest. The workflow below builds and type-checks on every push and PR; on a tag push it publishes.
# .github/workflows/ci.yml
name: ci
on:
push:
branches: [main]
tags: ["v*"]
pull_request:
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "22"
registry-url: "https://registry.npmjs.org"
- run: npm ci
- run: npm run build
- run: npx tsc --noEmit
publish:
needs: test
if: startsWith(github.ref, 'refs/tags/v')
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: "22"
registry-url: "https://registry.npmjs.org"
- run: npm ci
- run: npm publish --access public
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}

Versioning matters. Treat the tool schema as the public API: any breaking change to a tool name, a required argument, or an argument type is a major bump. Adding an optional argument is a minor; an implementation-only fix is a patch. MCP hosts cache tool definitions per session — users will not necessarily notice a backwards-compatible change, but they will absolutely notice a tool that suddenly errors because a previously-optional field became required.
A strong README contains five things: the exact claude_desktop_config.json snippet, a list of the tools with one-line descriptions, the env vars required, a troubleshooting section that names the top two or three failure modes, and a contact method for issues. That structure is what gets your server starred, forked, and recommended.
What you have just built is the canonical MCP shape — TypeScript SDK, stdio transport, Zod-validated tool schemas, mcp-inspector for the development loop, Claude Desktop wiring through a single config entry, and a published npm package that any other Claude user can install. The whole server is under a hundred lines of TypeScript. Everything else is convention.
The reason this matters: every tool you write as an MCP server becomes reusable across every MCP host. Claude Desktop today, Claude Code today, an expanding circle of agents and IDEs over the next year. The work you do once in this format compounds; the work you do as a bespoke wrapper in a single integration is throwaway. Picking the format that gives you reuse is the single highest-ROI engineering decision in agentic tooling right now.
The next step is small and concrete. Pick one tool you currently invoke from the CLI every week — a database query, an internal API call, a script that scrapes a page, a file operation, anything — wrap it as an MCP server using the structure above, publish it under your scope, and ship the install line to your team. The first server is the hardest; everything after it is template work.