AI Development · Tutorial · 12 min read · Published May 2, 2026

From an empty .claude/skills/ directory to a working /blog-stats slash command — frontmatter, scripts, references, allowed-tools, all in one tutorial.

Build a Claude Skill from Scratch: Step-by-Step Tutorial

Build a Claude Skill end-to-end. Walk through SKILL.md frontmatter, reference docs colocated in the skill directory, callable scripts, project vs user scope, auto-registration as slash commands, and the description-design rules for reliable auto-invocation. Worked example: a /blog-stats skill.

Digital Applied Team · Senior strategists · Published May 2, 2026 · Read time: 12 min · Sources: Anthropic Claude Code docs

At a glance:
  • Lines for a typical SKILL.md: 20–100 (frontmatter + body)
  • Skill auto-invocation latency: <1s (registry lookup)
  • Skills in a mature project repo: 5–20 (team library scale)
  • User vs project scope: project wins on naming conflicts

A Claude Skill is a reusable workflow — written in markdown, stored in a .claude/skills/ directory, and exposed automatically as a slash command. The point of this tutorial is to walk through every moving part of a skill, from frontmatter to scripts to allowed-tools, then ship a working example you can paste into your own repo today.

Skills are one of the three primitives Claude Code gives agentic teams — alongside subagents and hooks — and they're arguably the most under-used. A skill is just an instruction document plus a workflow body plus permission rails. When the description is right, Claude discovers the skill, invokes it, executes it, and reports back. No glue code, no MCP server, no separate process.

This guide covers the anatomy of a skill, the frontmatter fields that determine discoverability and blast radius, the description-design rules that decide whether Claude actually auto-invokes it, and a paste-ready /blog-stats implementation you can adapt to your stack. Everything below matches the current Anthropic Claude Code documentation as of May 2026.

Key takeaways
  1. Skills are slash commands plus context — markdown is the spec. A SKILL.md file with frontmatter and a workflow body becomes /skill-name automatically. No registration step. No separate metadata file. The markdown is the entire contract.
  2. Description is what makes it discoverable to Claude (and to humans). Trigger-phrase laden, action-oriented, specific. Vague descriptions mean Claude won't auto-invoke. The description field is read at every prompt; treat it as a routing decision, not flavor text.
  3. allowed-tools narrows blast radius without limiting expressiveness. Glob-allowlists like Bash(npm *) prevent the skill from doing anything unrelated to its purpose. Most production skills only need three or four tool patterns to do their job.
  4. Project skills track in git; user skills don't. Team-shared workflows belong in the repo at .claude/skills/. Personal aliases belong in ~/.claude/skills/. Project scope wins on naming conflicts — design accordingly.
  5. disable-model-invocation flips a skill from auto-discovery to manual-only. Use it for destructive or expensive skills you only want fired by an explicit slash command. Let everything else stay discoverable — that's where the leverage is.

01 · What's a Skill: Reusable workflows, loaded on demand.

A skill is a single markdown file — SKILL.md — with a small amount of YAML frontmatter and a workflow body written for Claude to read. The frontmatter names the skill, describes it, gates the tools it can touch, and optionally disables auto-invocation. The body is the instructions: what to do, in what order, with what guardrails. That's it. There is no compile step, no separate registration file, no JSON manifest. The markdown is the contract.

Skills sit in one of two directories. Project skills live at .claude/skills/<skill-name>/SKILL.md in the repo — they ship with the codebase and become part of the team's shared muscle memory. User skills live at ~/.claude/skills/<skill-name>/SKILL.md on the developer's machine — personal aliases that follow you across projects but never get committed.

Claude Code scans both directories on every session start, registers each skill by name, and exposes it as a slash command. A skill at .claude/skills/blog-stats/SKILL.md shows up as /blog-stats. The description field is also indexed so Claude can auto-invoke the skill when a user's prompt matches the description's intent — that's the difference between a skill and a subagent, and the reason description design matters so much.
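The path-to-command mapping is purely positional; as a sketch (the /morning-standup path shows a hypothetical user-scope skill):

```text
.claude/skills/blog-stats/SKILL.md         →  /blog-stats       (project scope)
~/.claude/skills/morning-standup/SKILL.md  →  /morning-standup  (user scope)
```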

Skills vs subagents vs hooks

The three Claude Code primitives are easy to confuse. A quick disambiguation:

  • Skills are reusable workflows. Claude loads them on demand, follows the instructions, executes the tools the frontmatter allows, and reports back in the same conversation. Best for: repeatable processes, lookup tasks, structured outputs.
  • Subagents are forked Claude sessions with their own system prompt and tool allowlist. They run in their own context window and return a summary. Best for: parallel research, isolated investigations, tasks where you want to firewall context from the main session.
  • Hooks are deterministic shell commands triggered by lifecycle events (PreToolUse, PostToolUse, Stop, SubagentStop). They run regardless of model decisions. Best for: formatters, validators, notifications, audit logging.

If you're tempted to write a subagent for something that's essentially "read a config and run a script," it's probably a skill. If you're tempted to write a skill for something that needs to fire on every tool call, it's a hook. Skills are the right answer when the workflow is genuinely user-triggered and benefits from a human-readable name like /blog-stats or /release-notes.

Mental model
A skill is the noun version of a workflow. A subagent is the verb. Skills are named things teams refer to by slash command; subagents are delegated investigations the main session triggers and forgets. Build a skill when the team will name it and reach for it. Build a subagent when the work is too noisy to do in the main conversation.

02 · Anatomy: SKILL.md, scripts, references, colocated.

A skill is a directory, not a file. The directory name is the slug Claude uses to register the slash command: .claude/skills/blog-stats/ becomes /blog-stats. Inside the directory, SKILL.md is the only required file. Everything else — scripts, reference docs, fixtures, templates — is optional but colocated so the entire skill is a self-contained unit that can be reviewed, version-pinned, and forked.

Colocation matters more than it looks. When a teammate opens review on a PR touching .claude/skills/release-notes/, every piece of the skill is right there: the workflow body, the shell script it calls, the markdown reference it cites, the example output it targets. No hunting through scripts/ or docs/ to reconstruct what the skill does.
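Concretely, a small skill directory might look like this (the style.md reference doc is hypothetical, included only to show colocation):

```text
.claude/skills/blog-stats/
├── SKILL.md        # required: frontmatter + workflow body
├── analyze.mjs     # optional: callable script
└── style.md        # optional: reference doc (hypothetical)
```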

  • SKILL.md (required, 1 per skill): the contract. Frontmatter (name, description, allowed-tools, disable-model-invocation) plus a workflow body. Claude reads this top-to-bottom every time the skill fires.
  • *.mjs (optional, 0..n per skill): callable scripts. Node, bash, python — anything the skill needs to execute. The body invokes them via Bash. Keep them small, idempotent, and documented inline so review is cheap.
  • *.md (optional, 0..n per skill): reference docs. Style guides, prompt templates, lookup tables, decision matrices. The workflow body cites them by relative path. Edits to reference docs reshape skill behavior without changing the contract.
  • *.json (optional, 0..n per skill): fixtures and templates. Schema snippets, response templates, configuration baselines. Useful when the skill produces structured output and you want the format pinned to a file, not buried in the prose.

Most production skills have a SKILL.md plus one or two supporting files. Skills that grow beyond five files are usually doing too much — that's the signal to split into two skills, or to extract a subagent. Skills are deliberately lightweight; the value comes from having dozens of them, not from any one being elaborate.

03 · Frontmatter: name, description, allowed-tools, disable-model-invocation.

Every SKILL.md opens with YAML frontmatter. Four fields matter; the rest are conventions teams can adopt as their library grows. A minimal frontmatter block looks like this:

---
name: blog-stats
description: Generate a stats digest for the Digital Applied blog — post counts by category, average reading time, recent publishing cadence. Use when the user asks for "blog stats", "blog metrics", "how many posts in <category>", or wants a snapshot of editorial output.
allowed-tools:
  - Bash(node *)
  - Bash(git log *)
  - Read(lib/blog-posts.ts)
disable-model-invocation: false
---

name

The kebab-case identifier. Becomes the slash command verbatim: name: blog-stats registers /blog-stats. Must match the directory name. Convention is two-to-four words separated by hyphens; longer names get unwieldy to type. Pick a name a teammate could guess from the workflow's intent.

description

The most important field. Claude reads this on every user prompt to decide whether to auto-invoke the skill. Write it as a routing instruction: what the skill does, plus the trigger phrases that should fire it. The next section covers the rules in depth. Aim for 30–80 words; longer descriptions get ignored, shorter ones miss the trigger phrases.

allowed-tools

A list of tool-pattern strings the skill is permitted to call. If omitted, the skill inherits the session's allowlist (everything). Specifying allowed-tools narrows the skill's blast radius — see the dedicated section below. Patterns use glob syntax: Bash(npm *), Read(lib/*), mcp__supabase__*.

disable-model-invocation

When true, Claude won't auto-invoke the skill from description matching — the only way to fire it is an explicit /blog-stats typed by the user. Default is false (discoverable). Use true for skills that are destructive, expensive, or context-heavy enough that you want the human in the loop for every invocation.
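As a sketch, the frontmatter for a manual-only action skill could look like this (the /rotate-keys name and script path are hypothetical, not from the worked example):

```yaml
---
name: rotate-keys
description: Rotate production API keys via the ops script. Manual-only by design.
allowed-tools:
  - Bash(node scripts/rotate-keys.mjs *)
disable-model-invocation: true
---
```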

The three skill archetypes:

  • Workflow: generate a report (/blog-stats · /release-notes). Multi-step process that produces a structured artifact. Calls scripts, reads files, formats output. The most common skill archetype — anything that turns inputs into a digest, a changelog, or a brief. Discoverable: true.
  • Lookup: answer a factual question (/zoho-leads-report · /faq-lookup). Read a source of truth (API, database, doc), return a focused answer. No mutations. Fast, cheap, safe to auto-invoke — this is where the description-design rules earn their keep. Discoverable: true.
  • Action: mutate state (/deploy · /rotate-keys). Writes to a database, hits a deploy endpoint, modifies production state. Disable model invocation, narrow the tool allowlist hard, and require the human to type the slash command explicitly. Discoverable: false.

The frontmatter is the routing layer. Workflow and lookup skills should be discoverable; action skills should not. A team library of ten lookup skills plus three action skills plus two workflow skills is a healthy distribution — most calls go through discoverable skills, the destructive ones stay manual.

04 · Description Design: Keyword-rich, trigger-phrase-laden, specific.

The description field is the difference between a skill that gets auto-invoked and one that sits dormant until somebody remembers to type the slash command. Claude reads every skill's description at every prompt and decides — fast — whether the prompt matches. Vague descriptions don't match. Specific descriptions match well. Descriptions stuffed with the exact trigger phrases users say match best.

The four rules

  • Lead with the action. "Generate a stats digest…" not "A skill for generating stats…". Verb-first matches the way users phrase requests.
  • Include trigger phrases verbatim. If users say "blog stats", "how many posts", "post breakdown" — write those phrases into the description. Claude matches against the words, not the gist.
  • Specify the scope. "Digital Applied blog" (specific) beats "the blog" (vague). The more concrete the scope, the less likely Claude misfires on adjacent requests.
  • Cap at ~80 words. Long descriptions get skimmed. Aim for 30–80 words, every word carrying weight. If you're past 80, the workflow body is doing what the description should.

Good vs bad — same skill

Bad: "A skill for analyzing the blog. Useful for reporting." Eight words, no triggers, no scope, no action shape. Claude will never auto-invoke this — there's nothing to match against.

Good: "Generate a stats digest for the Digital Applied blog — post counts by category, average reading time, recent publishing cadence. Use when the user asks for 'blog stats', 'blog metrics', 'how many posts in <category>', or wants a snapshot of editorial output." Forty-three words, three explicit trigger phrases in quotes, named output shape, scoped to a specific blog. Claude reliably fires on "blog stats" or "how many posts in AI Development".

"The description field is a routing decision the model makes on every prompt. Treat it like a router config, not a README."— Anthropic Claude Code documentation, Skills section

One field shapes everything downstream. A team can write the best workflow body in the world and the skill still won't get used if the description is vague. Conversely, a one-paragraph workflow with a tight description gets reached for daily. Spend more time on the description than on the body — the body is read once per invocation, the description is read at every prompt.

05 · Allowed Tools: Glob-allowlists that prevent surprises.

The allowed-tools frontmatter field gates which tools the skill is permitted to call. Each entry is a glob pattern. If the field is omitted, the skill inherits the session's full allowlist — usually too wide. Specifying allowed-tools is the single best way to make a skill safe to auto-invoke.

Pattern syntax

Tool patterns follow Claude Code's standard tool-name glob format. Common shapes:

  • Bash(git *) — any git subcommand. Allows git log, git status, git diff but not rm, not curl.
  • Bash(npm *) — npm operations. Permits builds, installs, and audits but not arbitrary shell.
  • Read(lib/*) — file reads under lib/. Useful when a skill consults a registry like lib/blog-posts.ts but shouldn't poke into app/ or node_modules/.
  • mcp__supabase__* — every Supabase MCP tool. Useful for skills that hit a database; pair with disable-model-invocation: true if the skill mutates.
  • Bash(node scripts/blog-*.mjs) — a specific script family. The tightest pattern; appropriate when the skill is essentially a wrapper around one script.

The security model

Claude Code enforces tool patterns at the call site. If a skill tries to call a tool outside its allowlist, the call fails with a permission error — the skill can recover and report the failure, but it can't bypass the gate. This means the allowed-tools field is the trust boundary; everything inside the skill body operates under that ceiling.

The corollary: don't grant tools you don't need. A lookup skill that only reads files should not list Bash(*). A skill that runs one script should not list Bash(npm *). Narrow allowlists let you keep discoverability on while keeping the blast radius small.
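As a sketch, a read-only lookup skill might declare nothing beyond the files it consults and the one script it runs (paths hypothetical):

```yaml
allowed-tools:
  - Read(lib/*)
  - Bash(node scripts/report.mjs *)
```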

  • Workflow skill (report generator): reads a registry, runs an analyzer script, formats markdown. Tight allowlist: Read(lib/*) plus Bash(node scripts/<family>-*.mjs). No git, no network, no arbitrary shell. Safe to auto-invoke. Pattern: Read + scoped Bash.
  • Lookup skill (CRM query): hits an MCP server, returns a count or a breakdown. Allowlist is the MCP namespace plus zero local tools — mcp__zoho__* and nothing else. Description includes the trigger phrases users actually say. Pattern: MCP namespace only.
  • Action skill (database mutation): writes to production. Disable model invocation entirely, narrow Bash to the one deploy script, require the human to type the slash command. Auto-discovery off, blast radius minimized. Pattern: one-script Bash + manual.
  • Maintenance skill (toolchain updater): runs brew, npm, pnpm, mas. Allowlist Bash(brew *), Bash(npm *), Bash(pnpm *) — but never Bash(*). The script can still call dozens of commands; the gate keeps it inside the toolchain family. Pattern: per-tool Bash globs.

The allowlist is also documentation. A reviewer skimming the frontmatter sees exactly what the skill is permitted to touch — no need to read the body to find out whether the skill might shell out to curl. Treat allowed-tools as part of the contract, not an afterthought.

06 · Worked Example: Building /blog-stats — paste-ready.

Time to ship a real skill. /blog-stats is a project skill that reads the blog registry, calls a Node analyzer, and returns a markdown digest — post counts by category, average reading time, the last five published slugs, and a one-paragraph summary. The full file pair below drops into .claude/skills/blog-stats/ with no other changes.

Step 1 — create the directory

From the repo root, run:

mkdir -p .claude/skills/blog-stats
touch .claude/skills/blog-stats/SKILL.md
touch .claude/skills/blog-stats/analyze.mjs

Step 2 — write SKILL.md

The full file. Frontmatter at the top, workflow body below, references to the script and the registry. Paste this verbatim:

---
name: blog-stats
description: Generate a stats digest for the Digital Applied blog — post counts by category, average reading time, recent publishing cadence, and the last five published slugs. Use when the user asks for "blog stats", "blog metrics", "how many posts in <category>", "publishing cadence", or wants a snapshot of editorial output. Reads lib/blog-posts.ts as the source of truth.
allowed-tools:
  - Read(lib/blog-posts.ts)
  - Bash(node .claude/skills/blog-stats/analyze.mjs *)
disable-model-invocation: false
---

# /blog-stats

Generate a stats digest for the Digital Applied blog.

## When to fire

Auto-invoke when the user asks for any of:
- "blog stats" or "blog metrics"
- "how many posts in <category>"
- "publishing cadence" or "recent posts"
- "editorial output" snapshot

## Workflow

1. Read `lib/blog-posts.ts` to confirm the registry is present.
2. Run the analyzer:
   ```
   node .claude/skills/blog-stats/analyze.mjs
   ```
   Pass `--category="<name>"` if the user named a specific category.
3. Format the analyzer output as a markdown digest with these sections:
   - **Total posts** (one line)
   - **By category** (table — category, count, % share)
   - **Reading time** (mean, median, range)
   - **Recent five** (slugs with publish dates)
   - **One-paragraph summary** — call out the highest-volume category and the cadence trend over the last 30 days.
4. Return the digest in the conversation. Do not write a file unless the user asks.

## Constraints

- Read-only. Never mutate the registry from this skill.
- If the registry shape changes (new field, renamed field), report the discrepancy and stop — don't guess.
- Cap the digest at 400 words; if categories explode, group the long tail under "Other".

Step 3 — write analyze.mjs

The supporting script. Parses the TypeScript registry as text (no transpilation), aggregates the fields the workflow needs, prints JSON to stdout. Paste verbatim:

#!/usr/bin/env node
// .claude/skills/blog-stats/analyze.mjs
// Reads lib/blog-posts.ts and emits aggregate stats as JSON.

import { readFileSync } from "node:fs";
import { resolve } from "node:path";

const REGISTRY = resolve(process.cwd(), "lib/blog-posts.ts");
const ARG_CATEGORY = (process.argv.find((a) => a.startsWith("--category=")) ?? "").split("=")[1];

const src = readFileSync(REGISTRY, "utf8");

// Coarse parser — registry entries are well-formed object literals, so a
// per-block regex is sufficient. Don't try to TypeScript-parse this; the
// shape is stable enough to grep.
const entryRe = /\{\s*slug:\s*"([^"]+)",\s*title:\s*"([^"]+)",[\s\S]*?category:\s*"([^"]+)",[\s\S]*?readingTime:\s*(\d+),[\s\S]*?publishedTime:\s*"([^"]+)"/g;

const posts = [];
let m;
while ((m = entryRe.exec(src)) !== null) {
  const [, slug, title, category, readingTime, publishedTime] = m;
  if (ARG_CATEGORY && category !== ARG_CATEGORY) continue;
  posts.push({ slug, title, category, readingTime: Number(readingTime), publishedTime });
}

const byCategory = posts.reduce((acc, p) => {
  acc[p.category] = (acc[p.category] ?? 0) + 1;
  return acc;
}, {});

const readingTimes = posts.map((p) => p.readingTime).sort((a, b) => a - b);
const mean = readingTimes.reduce((a, b) => a + b, 0) / (readingTimes.length || 1);
const median = readingTimes[Math.floor(readingTimes.length / 2)] ?? 0;

const recent = posts
  .slice()
  .sort((a, b) => b.publishedTime.localeCompare(a.publishedTime))
  .slice(0, 5)
  .map((p) => ({ slug: p.slug, publishedTime: p.publishedTime }));

const out = {
  total: posts.length,
  byCategory,
  readingTime: {
    mean: Number(mean.toFixed(1)),
    median,
    min: readingTimes[0] ?? 0,
    max: readingTimes[readingTimes.length - 1] ?? 0,
  },
  recent,
  filter: ARG_CATEGORY ?? null,
};

console.log(JSON.stringify(out, null, 2));
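If you want to sanity-check the analyzer's entry regex outside a Claude session, you can run it against a stub registry string. The sample entry here is made up for illustration; only the regex is taken from the script above:

```javascript
// Standalone check of analyze.mjs's entry regex against a stub entry.
// The sample registry text is hypothetical, not the real lib/blog-posts.ts.
const entryRe = /\{\s*slug:\s*"([^"]+)",\s*title:\s*"([^"]+)",[\s\S]*?category:\s*"([^"]+)",[\s\S]*?readingTime:\s*(\d+),[\s\S]*?publishedTime:\s*"([^"]+)"/g;

const sample = `
export const posts = [
  { slug: "build-a-claude-skill", title: "Build a Claude Skill",
    category: "AI Development", readingTime: 12,
    publishedTime: "2026-05-02" },
];
`;

const m = entryRe.exec(sample);
// Capture groups: 1 = slug, 2 = title, 3 = category, 4 = readingTime, 5 = publishedTime
console.log(m[1], m[3], Number(m[4])); // → build-a-claude-skill AI Development 12
```

If the registry's field order ever changes, this is the fastest way to see the regex silently drop entries — the skill's "report the discrepancy and stop" constraint exists for exactly that case.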

Step 4 — try it

Save both files. Restart your Claude Code session — the skill registry is scanned at session start. Type /blog-stats (explicit) or say something like "give me blog stats" or "how many posts in AI Development?" (auto-invoked). Claude reads the SKILL.md, runs the analyzer, formats the output, returns the digest. The entire flow adds under a second of latency on top of the analyzer runtime.

What changed in your repo
Two new files in .claude/skills/blog-stats/, no other edits. The skill is already discoverable to Claude on the next session start. Commit both, and your team gets the slash command at their next git pull. That's the entire shipping ceremony for a project skill.

Skim the SKILL.md once more. The whole contract is in plain markdown — anyone on the team can review it, fork it, tighten the allowlist, or rewrite the workflow body without learning a new tool. That's the leverage skills offer: a Git-friendly, review-friendly, paste-ready unit of agentic capability that grows with the codebase.

07 · Scope & Share: User scope, project scope, plugin distribution.

Skills live in one of two directories — and the choice determines who gets them, whether they version with the codebase, and what happens when two skills have the same name. The split is simple, but getting it right matters more than it sounds.

Project scope — .claude/skills/

Lives in the repo. Tracks in git. Ships with the codebase. Every teammate who clones the project gets every project skill on their next session start — no setup ceremony, no install command. This is where 80% of skills belong: anything tied to the project's stack, conventions, registries, or workflows.

Use project scope for: report generators (/blog-stats, /release-notes), lookups (/component-docs, /zoho-leads-report), action skills (/deploy, /rotate-keys), maintenance skills, onboarding flows. Anything a teammate could reasonably need within five minutes of cloning the repo.

User scope — ~/.claude/skills/

Lives in your home directory. Doesn't track in git. Doesn't ship with any codebase. Available in every Claude Code session you start, regardless of project. This is where personal aliases and cross-project skills live — workflows that follow you, not the repo.

Use user scope for: personal aliases (/morning-standup, /inbox-zero), cross-project workflows (/audit-repo, /preflight), and skills that depend on credentials or tools only on your machine. If a teammate would be confused or broken by the skill, it's user scope.

What happens on a naming conflict

Project scope wins. If you have ~/.claude/skills/audit-repo/ and the repo also ships .claude/skills/audit-repo/, the project version is the one Claude registers as /audit-repo. The user-scope version is shadowed in that session. Design accordingly: don't name user-scope skills the same as known project skills, and if you're building a project skill, audit user-scope first to avoid surprise shadowing for teammates.

Distributing skills as plugins

Beyond the two built-in scopes, Claude Code supports plugin distribution — a third path where a separate package bundles skills (and subagents, hooks, MCP servers) and makes them installable via npx. Useful when you want to share a skill set across multiple repos that aren't worth coupling, or when you're publishing a public skill library.

For most teams, project scope plus a small user-scope library is enough — plugin distribution is the right answer once a skill set outgrows a single repo or starts shipping to outside consumers. Start with project scope; promote to a plugin when the same skill ends up duplicated in three repos.

For agencies and engineering teams formalizing their agentic stack, curating a project-scope skill library is the highest-leverage move we see in client engagements. Five to twenty well-named skills, tight allowlists, descriptions tuned for the team's actual phrasing — that's institutional memory rendered into Claude Code. Our AI digital transformation engagements usually start with exactly this exercise — naming, scoping, and calibrating a team's first ten skills.

Conclusion

Skills turn institutional knowledge into invocable workflows.

A Claude Skill is the simplest possible unit of agentic capability: a markdown file with frontmatter, an optional script or two, and a kebab-case directory name that becomes a slash command. The cost of writing one is an hour; the cost of the team adopting it is zero. Once SKILL.md lands in the repo, every teammate gets it on the next session start.

What changes when a team has ten to twenty skills isn't any single invocation — it's the shape of the work. Onboarding a new hire stops being a two-week shadowing exercise; it becomes "here are the slash commands, fire them as you work." Shared muscle memory gets rendered into executable documentation. The skills library is the institutional memory of the team, version-controlled, reviewable, and improvable by anyone with a PR.

What to build next: combine skills with subagents for compound leverage. A subagent can fire a skill; a skill can dispatch a subagent. The interesting workflows live in that combination — a release-notes skill that spawns three subagents to investigate categories in parallel, then merges the outputs through a template. Start with a few simple skills, get the description design right, then graduate to compound patterns once the library matures.
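A sketch of what a compound skill's workflow body could look like; the /release-notes name, categories, and steps are illustrative, not a tested recipe:

```markdown
## Workflow (sketch: hypothetical /release-notes compound skill)

1. Dispatch three subagents in parallel, one per release category
   (features, fixes, docs), each summarizing merged PRs since the last tag.
2. Merge the three summaries into the release-notes template.
3. Return the draft in the conversation; never tag or publish from this skill.
```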

Ship your team's skill library

A team's skill library is its institutional memory rendered into Claude Code.

Our agentic AI team designs project skill libraries — naming conventions, distribution model, governance — and accelerates onboarding from weeks to days.

What we ship: skill-library engagements

  • Curated skill library calibrated to your repos and workflows
  • Naming conventions and discovery rules for reliable auto-invocation
  • Distribution model — project, plugin, or team registry
  • Skill + subagent combinations for compounded leverage
  • Onboarding playbook turning new hires productive in days
FAQ · Skills

The questions teams ask before shipping their first Claude Skill.

When should I build a skill instead of a subagent?

Use a skill when the workflow has a stable name your team will reach for — /blog-stats, /release-notes, /deploy. Skills run in the main conversation, share context with everything else happening in the session, and feel like first-class commands. Use a subagent when you want a forked Claude session with its own context window and tool allowlist — parallel research, isolated investigations, anything where you'd rather not pollute the main thread. A common pattern is to combine them: a skill that dispatches one or more subagents and merges their findings back into the main conversation. Skills are nouns the team names; subagents are verbs the main session triggers.