Development · Migration · 14 min read · Published May 15, 2026

Cache Components, the use cache directive, cacheLife and cacheTag, the proxy.ts rename — the Next.js 16 changes that touch every App Router app.

Next.js 15 to 16 Migration Playbook: Cache Components 2026

A practical playbook for moving an App Router app from Next.js 15 to 16. Cache Components, the use cache directive, cacheLife and cacheTag, the proxy.ts rename, codemod coverage, and a phased rollout that minimizes risk.

Digital Applied Team · Senior engineers · Published May 15, 2026 · Read time: 14 min
Sources: Next.js 16 release notes

At a glance:
  • Breaking-change axes: 5 (Cache · use cache · proxy · codemods · runtime)
  • Codemod coverage: 80–90% of mechanical changes
  • Typical migration: 1–3 weeks (mid-size app, phased)
  • New defaults shift: Proxy + Cache (rename + opt-in caching)

The Next.js 15 to 16 migration is one of the more substantive upgrades in the App Router era — Cache Components reshape how caching is reasoned about, the use cache directive replaces unstable_cache, and middleware.ts is renamed to proxy.ts. The good news: most of the mechanical work is codemoddable, and a phased rollout lands cleanly in one to three weeks for a mid-size app.

What changed and why it matters. Next.js 16 ships five distinct axes of change at once: a new caching model (Cache Components plus Partial Prerendering as the default), a new directive (use cache) with companion helpers (cacheLife, cacheTag, updateTag), a deprecation path for unstable_cache, a literal file rename from middleware.ts to proxy.ts, and runtime tuning (Turbopack as the default, Node.js and Bun runtimes available on routing middleware). Any one of these would be a quarter's worth of work to adopt deliberately; together they require a plan.

What this playbook covers: a phase-by-phase rollout from the audit step through codemod application, opt-in cache adoption, and post-migration cleanup. Concrete codemods Vercel ships and what they cover, what they leave untouched, the patterns that bite in production (over-broad cache invalidation is the number one culprit), and the right rollback procedure for the first 24 hours after the bump lands.

Key takeaways
  1. Cache Components is the killer feature of 16 — adopt it gradually. Page-by-page, not big-bang. Start with a stable, high-traffic page where you can measure cache-hit ratio and roll back cleanly if invalidation behaves unexpectedly.
  2. The unstable_cache to use cache codemod handles most cases. Audit the edge cases — closures over request-scoped data, cache keys derived from headers, and conditional caching all need manual review even after the codemod runs.
  3. proxy.ts is a literal file rename — don't overthink it. The codemod handles the rename and updates the matcher syntax where it shifted. The conceptual model is unchanged; everything you know about middleware still applies under the new name.
  4. Over-broad cache invalidation is the #1 production issue. Be specific with cacheTag. A single 'home' tag invalidating every cached function on every mutation is the most common foot-gun — scope tags to the narrowest entity that actually changed.
  5. Build-time vs request-time confusion bites everyone once. Read the docs on what runs at build vs request under Cache Components. The model is coherent once you have it, but the first incident is almost always a request-only API called at build.

01 · What's New: 16 ships in five axes — Cache, use cache, proxy, codemods, runtime.

Next.js 16 is not a single-feature release. The major-version bump consolidates a year of work across five distinct axes, each of which has its own migration implications. Reading the release as a checklist of five independent migrations rather than one monolithic upgrade is the right mental model.

The five axes, in the order they typically affect a migration: the caching model shifts from the legacy fetch-cache plus revalidate approach to Cache Components — Partial Prerendering on by default, the use cache directive for explicit cached functions, and a tag-based invalidation model via cacheTag and updateTag. The unstable_cache primitive enters a deprecation window with a codemod path to use cache. The middleware file is renamed to proxy.ts at the filesystem level — same primitive, new name. Vercel ships codemods covering roughly 80 to 90 percent of the mechanical changes. Turbopack is the default bundler, and routing middleware (proxy.ts) gains Node.js and Bun runtime options alongside the existing Edge runtime.

Cache
Cache Components
PPR default · use cache directive

The headline change. Partial Prerendering is on by default, the use cache directive marks functions for cache participation, cacheLife and cacheTag control TTL and invalidation. Replaces the implicit fetch-cache model.

Largest behavioral change
Deprecation
unstable_cache out
codemod-assisted migration

unstable_cache enters a deprecation window in 16, with a codemod that rewrites most call sites to the use cache directive. Edge cases — closures, conditional caching — need manual review.

Codemod covers most
Rename
middleware → proxy
filesystem rename

middleware.ts becomes proxy.ts. Same primitive, same matcher API. The rename clarifies that the file runs as a request-time proxy before the cache, not only as auth middleware. Codemod handles the file move.

Mechanical change
Runtime
Turbopack + Node/Bun
default bundler + new runtimes

Turbopack is the default for both dev and build. Routing middleware (proxy.ts) supports Edge, Node.js, and Bun runtimes — pick per use case. The build pipeline is materially faster on Turbopack.

Performance lift
Read this before you start
The five axes interact. Cache Components changes when functions run (build vs request); the use cache directive changes what is cached and for how long; proxy.ts can now run in Node.js, which expands what you can do in it but also affects cold-start cost. Plan the migration end-to-end before you start — do not codemod first and reason later.

The release adopts a clear philosophy: caching becomes explicit and composable rather than implicit and global. The Next.js 13–15 era cached fetch calls by default and required opt-outs via { cache: 'no-store' } or revalidate: 0; Next.js 16 inverts the default — nothing is cached unless you mark it with use cache. That single change clarifies many years of confusion about why a particular page was or was not cached, and is the right move architecturally, but it does mean every existing app needs an audit of where caching was being relied on implicitly.

02 · Cache Components: PPR + use cache + cacheLife + cacheTag.

Cache Components is the umbrella name for the new caching model. The four pieces work together. Partial Prerendering (PPR) splits a page into a static shell (prerendered at build) and dynamic holes (rendered at request and streamed in). The use cache directive marks a function as cacheable — its return value is cached on first invocation and reused on subsequent calls that match the same arguments. cacheLife configures the TTL profile for a cached function — short, default, long, days, weeks. cacheTag attaches one or more tags to a cache entry; updateTag (called from a Server Action or a Route Handler) invalidates every entry carrying that tag.


The mental model

Three lines of code. use cache at the top of a function says "memoize my return value across requests." cacheLife('hours') says "keep that memoization for the hours profile." cacheTag(`product:${id}`) says "associate this entry with the product:42 tag so a future updateTag('product:42') wipes it." That is the entire surface area — everything else is which functions to cache and how to scope tags.
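Those three lines, sketched as one function. This is a minimal illustration assuming the next/cache import path described above (earlier 15.x canaries shipped these helpers with unstable_ prefixes) and a hypothetical product API:

```typescript
import { cacheLife, cacheTag } from 'next/cache';

export async function getProduct(id: string) {
  'use cache';                // memoize the return value across requests
  cacheLife('hours');         // keep the entry on the "hours" profile
  cacheTag(`product:${id}`);  // a later updateTag('product:42') wipes the id=42 entry
  const res = await fetch(`https://api.example.com/products/${id}`); // hypothetical API
  return res.json();
}
```

A Server Action that mutates product 42 then calls updateTag('product:42'), and only that entry is invalidated; every other product stays cached.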

The shift from revalidate (a time-based interval) to cacheLife profiles is a quiet but important simplification. Instead of every cache call site picking a number of seconds, you choose from a small set of named profiles (typically seconds, minutes, hours, days, weeks, and max), and the profile encodes both a stale time and a revalidate-on-error budget. Centralizing the profiles in configuration means a single change tightens or loosens caching across the entire app — useful during incident response and for consistency across a team.

Tag-based invalidation via cacheTag and updateTag is the right model for any app with a non-trivial data graph. The old revalidatePath/revalidateTag from 14 and 15 are still available but the new pair are the recommended primitives — they participate in PPR cleanly and have better telemetry in the Vercel dashboard. The discipline rule for tags is covered in Section 07: scope tags as narrowly as the entity that actually changes, never broader.

"Caching becomes explicit and composable rather than implicit and global — the right move architecturally, but it means every existing app needs an audit." — Our reading of the Next.js 16 release notes, May 2026

One implementation note worth pinning down: use cache can be applied at three scopes — file, function, or component. File-level "use cache" at the top marks every export as cached; function-level wraps an individual async function; component-level wraps a Server Component. Start at function scope when migrating; the granular scope is easier to reason about and easier to roll back if a particular cache key misbehaves. File-scope and component-scope are sensible consolidations once a page is stable.
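As a sketch, the function and file scopes differ only in where the directive sits (the data helper here is hypothetical):

```typescript
import { cacheLife } from 'next/cache';

// Hypothetical data call standing in for a CMS fetch.
declare function fetchNav(): Promise<Array<{ label: string; href: string }>>;

// Function scope (recommended while migrating): one cached function,
// granular to reason about, easy to roll back in isolation.
export async function getNavLinks() {
  'use cache';
  cacheLife('days');
  return fetchNav();
}

// File scope: make 'use cache' the first statement of the module instead,
// and every export below it is cached.
// Component scope: the same directive at the top of an async Server
// Component body caches that component's rendered output.
```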

03 · unstable_cache: The deprecation path.

unstable_cache was the workaround that many teams adopted in Next.js 14 and 15 to memoize non-fetch data calls — database queries, third-party SDK responses, computed values. In 16, the primitive is formally deprecated and the migration path is the use cache directive. The dedicated unstable_cache codemod in the @next/codemod CLI rewrites the majority of call sites mechanically; what remains is the manual review.

What the codemod handles cleanly

The straightforward case — a top-level async function wrapped in unstable_cache with a static key array — converts to an async function with "use cache" at the top, with the cache key options translated to cacheLife and cacheTag calls inside the function body. The codemod recognizes the common patterns and produces clean output.
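Roughly, the before and after of that straightforward case (our approximation of the codemod's output; db is a hypothetical query helper):

```typescript
import { cacheLife, cacheTag } from 'next/cache';

// BEFORE (Next.js 15), a top-level function with a static key array:
//
//   import { unstable_cache } from 'next/cache';
//   const getPosts = unstable_cache(
//     async () => db.posts.findMany(),
//     ['posts'],
//     { revalidate: 3600, tags: ['posts'] },
//   );

// Hypothetical query helper standing in for your data layer.
declare const db: { posts: { findMany(): Promise<unknown[]> } };

// AFTER, approximate codemod output: directive plus companion helpers.
export async function getPosts() {
  'use cache';
  cacheLife('hours'); // translated from revalidate: 3600
  cacheTag('posts');  // translated from tags: ['posts']
  return db.posts.findMany();
}
```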

What needs manual review

Three patterns the codemod surfaces as TODO comments rather than converting in place. First, closures over request-scoped data — if the cached function references a value from the surrounding scope that varies per request (a header, a cookie, a user ID), the conversion is not mechanical; you either pass that value as an argument so the cache key includes it, or move the dynamic read outside the cached function. Second, conditional caching — code that wraps unstable_cache only when a condition holds needs to be re-expressed; use cache is always-on once applied. Third, nested caching — a cached function calling another cached function works under use cache but the cache-key derivation may need adjustment.
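The closure case is worth seeing concretely. The fix is to hoist the request-scoped read out of the cached function and pass the value in, so it participates in the cache key (a sketch; fetchDashboard is hypothetical):

```typescript
import { cookies } from 'next/headers';
import { cacheLife, cacheTag } from 'next/cache';

// Hypothetical data call.
declare function fetchDashboard(tenantId: string): Promise<unknown>;

// The tenant is an argument, so each tenant gets its own cache entry.
async function getTenantDashboard(tenantId: string) {
  'use cache';
  cacheLife('minutes');
  cacheTag(`tenant:${tenantId}`); // scoped invalidation per tenant
  return fetchDashboard(tenantId);
}

// The request-scoped read stays outside the cached function.
export async function loadDashboard() {
  const tenantId = (await cookies()).get('tenant')?.value ?? 'public';
  return getTenantDashboard(tenantId);
}
```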

unstable_cache codemod coverage · approximate breakdown

Source: Digital Applied internal migration data
  • Mechanical rewrites (unstable_cache → use cache, static keys): ~85%
  • Closures over request data (manual review: pass as argument): ~10%
  • Conditional caching (manual review: refactor to always-on): ~3%
  • Nested caching (manual review: key-scope check): ~2%

For teams with a large surface area of cached calls — anything past a hundred call sites — the practical workflow is: run the codemod, audit every TODO comment in a single pass, and ship the converted code behind a feature flag if the cache behavior is critical. The use cache directive is supported in 15.5 onward as well, so the rewrites can land on 15 first and run there in production before the major-version bump — strongly recommended for risk-averse rollouts.

Don't skip the audit
The codemod is good but not magic. Every TODO comment it emits is a place where the cache behavior may change in subtle ways. A single missed closure over a request-scoped value can produce a cross-tenant data leak when one tenant's cached result is served to a different tenant. Treat the audit as a security review, not a refactor.

04 · proxy.ts: middleware.ts becomes proxy.ts.

The simplest of the five axes — and the one teams overthink the most. middleware.ts is renamed to proxy.ts. Same primitive, same matcher API, same request and response shapes. The rename is a clarification: "middleware" had become an overloaded term across the web ecosystem, and the file's actual behavior — intercepting requests before they hit the cache layer — is more accurately described as a proxy. Vercel's docs and the Next.js changelog both note that the rename is not changing the model, only the name.

What the codemod does

The next-proxy-rename codemod moves the file from middleware.ts to proxy.ts at the project root and updates the export name if you were exporting middleware as a named export. The matcher config is unchanged. Any imports of the old file (rare, since middleware files are not typically imported) are also updated.

What changed beyond the name

One material change: proxy.ts supports the Node.js and Bun runtimes in addition to Edge. Most existing middleware files are Edge-runtime by default and should stay there for latency reasons. The Node.js runtime is useful when the file needs to call a library that does not run on Edge (a heavyweight SDK, a database client, a crypto library with native bindings) — previously these calls had to happen in a route handler downstream of middleware, adding latency. With the Node.js runtime on proxy.ts, the call can happen at the proxy layer.

Practical rule of thumb: if your existing middleware does authentication, A/B-test cohort assignment, or geographic rewrites, leave it on Edge after the rename. If your existing middleware was working around an Edge-runtime limitation by calling out to a separate route handler, consider switching to the Node.js runtime on proxy.ts to collapse the indirection.
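After the rename, a typical auth gate looks the same apart from the filename and export name. The per-file runtime selection shown in config is our assumption from the release notes' description and should be verified against your Next.js version:

```typescript
// proxy.ts (formerly middleware.ts): same request and response shapes.
import { NextResponse, type NextRequest } from 'next/server';

export default function proxy(request: NextRequest) {
  // Unchanged logic from the old middleware.ts: gate on a session cookie.
  if (!request.cookies.has('session')) {
    return NextResponse.redirect(new URL('/login', request.url));
  }
  return NextResponse.next();
}

export const config = {
  matcher: ['/dashboard/:path*'],
  // Assumption: opt into Node.js only when a non-Edge-compatible SDK must
  // run at the proxy layer; the latency-friendly default remains Edge.
  runtime: 'nodejs',
};
```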

05 · Codemods: What Vercel ships and what they cover.

Vercel ships codemods through the @next/codemod CLI. Running npx @next/codemod@latest opens an interactive picker; running npx @next/codemod@latest upgrade latest attempts the full sweep. The codemods cover the mechanical changes — file renames, API signature shifts, import path updates, and the unstable_cache migration — and leave the conceptual decisions (which functions to cache, how to scope tags, what to put behind PPR) to the team.

The steps below summarize what each codemod covers, in run order.

Run the codemods in the order shown — earlier codemods produce input the later codemods expect. Skipping the order is the single most common reason migrations stall on the first day.

Step 1
next-async-request-api sweep

Converts the synchronous cookies(), headers(), params, and searchParams reads to their async forms. Required for 15 → 16; if you already ran this on a 15 codebase, skip. Largest mechanical sweep.

Run first
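The shape of that rewrite, roughly:

```typescript
import { cookies, headers } from 'next/headers';

// BEFORE (Next.js 14-era synchronous reads):
//
//   const token = cookies().get('token')?.value;
//   const ua = headers().get('user-agent');

// AFTER the next-async-request-api sweep, both reads are awaited:
export async function requestContext() {
  const cookieStore = await cookies();
  const headerList = await headers();
  return {
    token: cookieStore.get('token')?.value,
    ua: headerList.get('user-agent'),
  };
}
```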
Step 2
next-proxy-rename

Renames middleware.ts to proxy.ts and updates the export name if needed. Mechanical. No conceptual change. Runs in seconds even on large repos.

Run second
Step 3
unstable_cache → use cache

Rewrites unstable_cache call sites to the use cache directive. Handles ~85% mechanically and emits TODO comments for the rest. Audit every TODO before merging — closures over request data are security-sensitive.

Run third · audit output
Step 4
manual: PPR opt-in per page

There is no codemod for adopting Partial Prerendering page-by-page — the right granularity is a human decision. Pick high-traffic, stable pages first; defer pages with heavy dynamic content until the static shell is well understood.

No codemod · manual

What the codemods deliberately do not cover: any decision that requires understanding the data model. Cache-tag scope, cache-life profile selection, which functions deserve use cache at all, where to apply PPR — these all require thinking about how data flows through the app. The codemods clear the mechanical debt so the team can focus on the substantive decisions; they do not replace the substantive decisions.

The other thing worth knowing: the codemods are not perfect. Always run them on a clean working tree, review the diff before merging, and add a regression-test pass before pushing to production. For larger apps, run the codemods on a long-lived branch and let the team poke at it for a day before merging — the first hour after a codemod sweep is when most subtle issues surface.

06 · Phased Rollout: Audit → codemod → opt-in cache → cleanup.

The four-phase rollout below is the shape that consistently lands cleanly across mid-size apps. Skipping a phase or compressing two into one is the most common cause of post-merge incidents — each phase has a distinct purpose and a distinct rollback model.

Phase 1
Audit & inventory
1–2 days · no code changes

Inventory every cache call site (unstable_cache, fetch with revalidate, route segment config). Map the data graph — which entities invalidate together, which mutations affect which pages. Document the existing cache-tag taxonomy if one exists.

Read-only · low risk
Phase 2
Run codemods
1 day · long-lived branch

Run the codemods in order on a long-lived branch. Audit every TODO. Run the full build and full test suite locally. Deploy to a preview environment and exercise the critical user paths. Do not merge until preview is green.

Mechanical · reviewable
Phase 3
Opt-in cache per page
3–10 days · page-by-page

Adopt use cache and PPR on one stable, high-traffic page first. Measure cache-hit ratio in the Vercel dashboard for 24 hours. If clean, roll forward to the next page. Be patient — this is the phase where most production issues surface.

Iterative · measurable
Phase 4
Cleanup
1–2 days · post-stable

Once every targeted page is on the new model, remove the now-dead unstable_cache imports, consolidate cacheLife profiles, document the tag taxonomy in the repo. Final lint + type-check pass to catch any drift the codemods left behind.

Tidy-up · low risk

The discipline that holds the whole rollout together: do not skip Phase 1. The audit step is where you discover the call site depending on an implicit cache that nobody documented, the route that quietly expects revalidate: 0, the third-party integration that breaks when its responses get cached. Skipping the audit and discovering these in production is the most expensive way to find them.

Measurement is the discipline that holds Phase 3 together. The Vercel dashboard exposes cache-hit ratios per route after Cache Components are enabled — watch this number on the first page you migrate before rolling to the second. A page sitting at 95% cache hits is doing what you want; a page at 30% has a cache-key problem that will compound if you replicate the pattern across other pages. The first page is the one where you tune the model; subsequent pages should be roll-outs of a known-good pattern, not re-discoveries.

The rollback rail
Every phase has a documented rollback. Phase 1: no code changed, no rollback needed. Phase 2: revert the merge, redeploy 15.x. Phase 3: revert the specific page's adoption of use cache and cacheLife; the rest of the app continues to run on 16. Phase 4: re-add anything that was prematurely deleted. Never deploy a phase without confirming the rollback procedure for that phase in advance.

For teams with multiple environments, the right cadence is: Phase 2 lands in staging for a week, Phase 3 rolls out one page at a time to production with a 24-hour soak between pages, Phase 4 lands the week after the last Phase 3 page goes green. For teams with a single environment, double every duration. For teams with no preview infrastructure, set one up before starting — running a migration of this scope without a preview environment is a recoverable mistake the first time and a non-recoverable one the second.

If your team has not migrated to App Router yet, that work precedes this playbook. Our web development engagements include App Router migrations and Next.js major-version bumps as a packaged offering — the Pages Router to App Router move is a larger lift than this 15 to 16 step, but the playbook shape is similar.

07 · Common Pitfalls: Four cache-related failure modes.

Every team that ships Cache Components into production trips one of the four pitfalls below in the first month. Pre-reading them is cheaper than rediscovering them. Each one has a clean fix; the risk is not knowing to look for it.

Pitfall 01
Over-broad cache invalidation

A single 'home' or 'app' tag on every cached function. The first updateTag('home') call wipes the entire cache. Scope tags to the narrowest entity that actually changed — product:id, user:id, order:id — never broader.

The #1 production issue
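One lightweight guard against broad tags: centralize tag construction in a small helper module so nobody hand-writes a catch-all string. The helper names below are our own convention, plain TypeScript with no framework imports:

```typescript
// Entity-scoped tag builders: the narrowest thing that actually changes.
export const tag = {
  product: (id: string) => `product:${id}`,
  user: (id: string) => `user:${id}`,
  order: (id: string) => `order:${id}`,
  productList: (category: string) => `product-list:${category}`,
};

// A product update touches the product itself and the list it appears in,
// never a catch-all 'home' or 'app' tag.
export function tagsForProductUpdate(id: string, category: string): string[] {
  return [tag.product(id), tag.productList(category)];
}
```

A mutation handler then invalidates tagsForProductUpdate('42', 'shoes') and nothing else.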
Pitfall 02
Build vs request confusion

A function marked use cache that calls cookies() or headers() — these are request-only and cannot be read at build time. The error surfaces only on the build pipeline, not on dev. Move the dynamic read outside the cached function or pass it as an argument.

Build-time crash
Pitfall 03
Closure-over-request leak

A use cache function closes over a request-scoped value (a session ID, a tenant key) without including it in the cache key. The result: one tenant's cached output served to a different tenant. Pass the value as an argument so it participates in the cache key.

Cross-tenant data leak
Pitfall 04
cacheLife profile mismatch

Using cacheLife('weeks') on data that mutates daily, or cacheLife('seconds') on data that almost never changes. The first sacrifices correctness by serving stale data; the second forfeits most of the cache's benefit. Centralize the profile catalog and apply it consistently per data class.

Tune in Phase 3
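A profile catalog can be a few lines of plain TypeScript; the data classes and choices below are illustrative, not prescriptive:

```typescript
// The named profiles the new model exposes; the mapping is per-team.
type CacheProfile = 'seconds' | 'minutes' | 'hours' | 'days' | 'weeks' | 'max';

const PROFILE_BY_DATA_CLASS = {
  inventory: 'minutes',   // mutates throughout the day
  currencyRates: 'hours', // refreshed by an upstream job
  productCopy: 'days',    // editorial, changes weekly at most
  legalPages: 'max',      // effectively static
} as const satisfies Record<string, CacheProfile>;

export function profileFor(dataClass: keyof typeof PROFILE_BY_DATA_CLASS): CacheProfile {
  return PROFILE_BY_DATA_CLASS[dataClass];
}
```

Call sites then use cacheLife(profileFor('inventory')), and tightening a whole data class during an incident is a one-line change.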
"Scope cache tags to the narrowest entity that actually changed — never broader. The #1 production issue with Cache Components is over-broad invalidation." — Internal playbook for Cache Components rollouts

One pattern worth internalizing for Pitfall 03 specifically: any request-scoped value you reference inside a use cache function must be passed as an explicit argument, not closed over from the surrounding scope. The cache key is derived from the function arguments; closures are invisible to the key. The codemod for unstable_cache emits TODO comments for this case but cannot fix it automatically — the manual review is mandatory.
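A toy memoizer makes the mechanism concrete: the key is built from the arguments alone, so a closed-over value silently pins the first request's data. This is plain TypeScript for illustration, not the framework's implementation:

```typescript
// Cache key = JSON of the arguments; closed-over values never reach it.
function memoize<A extends unknown[], R>(fn: (...args: A) => R) {
  const cache = new Map<string, R>();
  return (...args: A): R => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) cache.set(key, fn(...args));
    return cache.get(key)!;
  };
}

let currentTenant = 'tenant-a';
export function setTenant(t: string) { currentTenant = t; }

// BAD: the tenant is closed over, so the first tenant's result is cached
// under a key that ignores the tenant entirely.
export const leakyLookup = memoize((sku: string) => `${currentTenant}:${sku}`);

// GOOD: the tenant is an argument and therefore part of the key.
export const safeLookup = memoize((tenant: string, sku: string) => `${tenant}:${sku}`);
```

After setTenant('tenant-b'), leakyLookup('sku-1') still returns the tenant-a result, which is exactly the cross-tenant leak described above.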

For teams who want to go deeper on the stack-level decisions before starting a migration like this, the Next.js 16 AI chatbot tutorial covers the streaming and route-handler patterns that interact with the cache model; the Vercel AI SDK v5 to v6 migration playbook walks through a parallel-shaped upgrade if your app also depends on the AI SDK.

A final note on observability. Once Cache Components is enabled, the Vercel dashboard exposes per-route cache-hit ratios, average cache-entry age, and tag-invalidation counts. Add an alert when cache-hit ratio drops below the expected band for a given route — a 95% steady-state route dropping to 40% is almost always a cache-key bug or an over-eager updateTag call site, and catching it within an hour is the difference between a five-minute fix and a half-day incident. The same dashboard surfaces tag-invalidation rates, which is the right place to look when Pitfall 01 is suspected.
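The alert itself can be a one-line predicate wired to whatever monitoring you use (a hypothetical helper; tune the tolerance per route):

```typescript
// True when the observed cache-hit ratio has fallen more than `tolerance`
// (a fraction) below the route's steady-state baseline.
export function cacheHitAnomalous(
  observed: number,
  baseline: number,
  tolerance = 0.15,
): boolean {
  return observed < baseline * (1 - tolerance);
}
```

With a 0.95 baseline and the default tolerance, a route reporting 0.40 trips the alert and 0.93 does not.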

Conclusion

Next.js 16 is a meaningful upgrade with predictable codemod coverage — phased adoption wins.

The 15 to 16 migration is a meaningful upgrade — five axes of change, a new caching model, a literal file rename, and a material shift in defaults. None of the individual pieces is exotic, but the interaction surface is large enough that a disciplined rollout is worth a week of planning. The four-phase shape — audit, codemod, opt-in cache per page, cleanup — lands consistently across mid-size apps in one to three weeks of elapsed time.

The codemods carry the mechanical load. next-async-request-api sweeps the synchronous request-API reads, next-proxy-rename moves the middleware file, the unstable_cache migration handles ~85 percent of call sites with TODO comments for the rest. What the codemods cannot do is the substantive caching decisions — which functions to mark, how to scope tags, what to put behind PPR, which cacheLife profile fits each data class. Those are human decisions and the right place to spend the team's time.

The broader signal: Next.js is moving from an implicit-cache framework to an explicit-cache one. Every team that adopts the new model gains a clearer mental picture of where their data comes from, how long it persists, and what triggers its invalidation — and that clarity is the foundation for every higher-order optimization a Next.js app eventually wants (per-tenant cache, regional cache, durable cache backed by a specific store). The migration is the ticket; the long-term payoff is being able to reason about caching at all.

Migrate to Next.js 16 cleanly

Next.js 16 reshapes cache strategy — phased adoption beats big-bang.

Our team executes Next.js migrations — codemod sweep, cache-strategy design, proxy.ts rename, runtime selection — with measurable rollout and rollback.

Free consultation · Expert guidance · Tailored solutions
What we ship

Next.js 16 migration engagements

  • Codemod sweep and audit
  • Cache Components adoption strategy
  • use cache directive rollout per-page
  • cacheLife and cacheTag design
  • Rollback procedures for the first 24 hours
FAQ · Next.js 16

The questions Next.js teams ask before the major-version bump.

How should we adopt Cache Components: all at once or gradually?

Gradually, page-by-page. The right rollout is to pick one stable, high-traffic page, mark its data-fetching functions with use cache, configure cacheLife, attach cacheTag values, and measure the cache-hit ratio in the Vercel dashboard for 24 hours before moving to the next page. Big-bang adoption is technically possible but loses the per-page observability signal that tells you when a cache-key choice is wrong. The first page is also where you discover the team's preferred cacheLife profile defaults and tag-scope conventions — patterns that should be consistent across the codebase. Start with a page where rollback is a clean revert of a single feature flag.