
Twelve posts a month manual, 104 posts a month engineered — how a publisher scaled an AI content engine without losing voice.

Case Study: AI Content Engine, 100 Posts/Month Publisher

An anonymised publisher case study: twelve posts a month manual, declining traffic, no fact-check discipline. Six months later — one hundred and four posts a month sustained, a two-tier fact-check chain, schema compliance in CI, a weekly refresh cadence, and a measurable lift in organic traffic and LLM citation share — without losing editorial voice.

Digital Applied Team · Content engineering · Published May 13, 2026 · 13 min read · 8 sources
  • Velocity: 104/mo, from a 12/mo manual baseline
  • Fact-check pass rate: 96%, two-tier verification chain
  • Schema compliance: 91%, CI-validated at merge
  • Timeline: 6mo rollout to sustained run-rate

A publisher AI content engine case study: an editorial team shipping twelve posts a month manually, with declining traffic and no fact-check discipline, rebuilt the program around an eight-stage AI-orchestrated pipeline and reached one hundred and four posts a month sustained inside six months. Fact-check pass rate landed at ninety-six percent, schema compliance at ninety-one, organic traffic recovered, and LLM citation share climbed materially — without losing editorial voice.

The engagement was a publisher in a mid-sized B2B category with a mature back catalog and a clear authority position that had started to erode under volume pressure from category competitors. The brief: rebuild the engine, hit the velocity target, protect voice, and ship measurable outcomes inside two quarters. The client requested anonymisation; the playbook and the numbers are as deployed.

This case study walks the situation as found, the four approach pillars in the order they were built, the outcomes at six months, and the lessons that replicate cleanly to smaller publishers. Where a section can point to a deeper playbook, it does — the goal is a case study that is actionable, not just illustrative.

Key takeaways
  1. An 8-stage pipeline is what enables 100+ posts a month. Sustained velocity is a function of pipeline design, not model selection. The publisher's eight stages — research, brief, draft, fact-check, schema, staging review, publish, amplify — each have an owner, an exit criterion, and a measurable cycle time. Velocity scales when the slowest stage is engineered, not when the model is swapped.
  2. Fact-check upstream of drafting prevents drift. Pre-loaded source URLs in the brief plus an explicit anti-fabrication rule plus a human verification gate outperforms post-hoc citation chasing on cost and accuracy. The publisher's two-tier chain — automated link-rot and claim-extraction checks plus a human gate — held a 96% pass rate at 104 posts a month.
  3. Schema discipline lifts LLM citation share. AST-level validation in CI for title length, description length, structured data, canonicals, and image alts caught silent failures that had been suppressing both SERP and LLM citation performance. Compliance moved from an unmeasured baseline to 91% inside three months; LLM citation share lifted as a downstream effect.
  4. Weekly refresh cadence prevents catalog decay. A back catalog without an active refresh cadence decays as sources move and statistics age. The publisher's weekly refresh queue — quarterly time-trigger plus model-version overlay plus event overlay — kept the catalog producing traffic instead of leaking it.
  5. Voice protection scales with templates, not with editors. Per-content-type brief templates with explicit voice constraints, banned-phrasing lists, and tonal exemplars hold voice better than editor-by-editor enforcement. The publisher's library of seven templates carried the house voice across more than six hundred posts in the run-rate quarter without per-post tonal review.

01 · Situation
Twelve posts a month manual, declining traffic, no fact-check discipline.

The publisher arrived with a mature back catalog of just under seven hundred posts accumulated across six years of editorial output. The catalog had earned a credible authority position in the category — the kind that historically produced enough organic traffic to support the program without paid amplification. By the time of the engagement that position had been eroding for eighteen months. Velocity sat at twelve posts a month, three editors, a backlog of half-drafted posts in the production queue, and a growing concern that the category had outpaced them.

The deeper diagnosis surfaced three structural issues, not the volume issue the client initially named. First, no shared brief library — every post was scoped by the editor assigned to it, and consistency drifted post to post. Second, no fact-check chain — verification happened ad-hoc, mostly by the editor who also drafted, with predictable blind spots. Third, no schema discipline — titles ran long, descriptions were inconsistent, and structured data was partially deployed across the catalog. Volume was a symptom, not a cause.

Velocity: 12/mo (manual baseline)
Three editors, an average of four posts each per month, with a backlog of half-drafted posts that grew faster than the throughput. The team had stopped publishing on Fridays because the schema cleanup pile was a Friday job.

Catalog: 697 posts pre-engagement
A six-year back catalog. Mature authority position in the category. About 18% of the catalog produced 80% of the organic traffic — a long tail of underperforming posts that had not been refreshed in 24+ months.

Drift: −31% organic traffic, trailing 12 months
Organic traffic had drifted down 31% across the trailing twelve months versus the prior twelve months. The drift accelerated in the trailing six. Category competitors had moved to AI-augmented production; the publisher had not.

The cultural surface the diagnosis hit was important. The team had a strong editorial identity, a recognisable voice, and a visible reluctance toward AI-augmented drafting. The fear — a reasonable one — was that introducing AI would dilute voice, erode trust with the audience, and trade authority for volume. The brief explicitly named voice protection as a non-negotiable constraint. The engine had to ship volume and protect voice; either alone was not the deliverable.

The pre-engagement audit confirmed one further structural issue. Posts referencing specific vendors, model versions, or pricing had not been refreshed against current values — and the audit found that roughly forty percent of the catalog contained at least one stat or reference that had aged out. The catalog was not just under-amplified; it was actively misleading on the posts that drew the most traffic. Refresh cadence was going to be a phase-one priority, not a phase-three one.

The brief, in one sentence
Rebuild the engine so the team can ship a hundred posts a month sustained, protect editorial voice, hit a measurable fact-check discipline, recover organic traffic, and lift LLM citation share — inside two quarters.

02 · Approach · Pipeline
Eight stages, named owners, and cycle-time targets per stage.

The first artifact the engagement shipped was a stage diagram — eight stages from research to amplification, each with a named owner, an exit criterion, and a target cycle time. The diagram was deliberately simple: a hundred-post-a-month engine cannot run on a thirty-stage pipeline. Each stage had to fit on one row of the operations dashboard, and each had to be measurable in cycle hours rather than narrative description.

The stages clustered into four functional groups. Research and briefing constrained the input; drafting and fact-checking produced the verified output; schema and staging review enforced the publishable shape; publication and amplification extracted the return. Each cluster had a lead — editorial, content engineering, schema engineer, growth — and the leads met weekly. The pipeline was treated as a production line, not a creative workflow.

Cluster 1 · Research + brief: Stage 1 (research) · Stage 2 (brief)
Editorial lead owns the cluster. Source URLs verified before drafting and pre-loaded into the brief; angle, audience, search intent, anti-fabrication rule, banned-phrasing list, and voice constraints structured into one of seven content-type templates.

Cluster 2 · Draft + fact-check: Stage 3 (draft) · Stage 4 (fact-check)
Content engineering lead owns the cluster. Drafting routes to one of three models by post type — frontier reasoning for deep guides, general-purpose for explainers, fast model for listicles. Fact-check chain — automated plus human gate — sits between drafting and schema.

Cluster 3 · Schema + staging: Stage 5 (schema) · Stage 6 (staging review)
Schema engineer owns the cluster. AST-level validation runs in CI — title length, description length, canonical, Article + BreadcrumbList, image alts. Staging review is a five-minute pass per post for what automation cannot catch: copy line breaks, cropping, mobile rendering.

Cluster 4 · Publish + amplify: Stage 7 (publish) · Stage 8 (amplify)
Growth lead owns the cluster. Publication propagates sitemap, RSS, internal-link updates. Amplification — social variants per channel, newsletter slot, internal backlinks within seven days — runs on a per-content-type schedule, not improvised per post.

The cycle-time targets were the operational backbone of the pipeline. Research and brief: four hours per post end-to-end, mostly editorial. Draft and fact-check: two hours per post for most content types, longer for deep guides. Schema and staging: thirty minutes per post combined, with the schema gate enforced in CI rather than by reviewer goodwill. Publish and amplify: an hour per post, mostly social and newsletter scheduling. The total — roughly seven-and-a-half hours per post — was what made the velocity target arithmetically possible across the existing headcount.
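The arithmetic is worth making explicit. A minimal sketch of the stage budget as data: the per-stage split inside each cluster and the headcount figure are illustrative assumptions, and only the cluster totals (four hours, two hours, half an hour, one hour) come from the engagement.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    owner: str           # cluster lead accountable for the exit criterion
    cycle_hours: float   # target cycle time per post

# Eight stages with cycle-time targets. Cluster totals (4h, 2h, 0.5h, 1h)
# are from the engagement; the split inside each cluster is an assumption.
PIPELINE = [
    Stage("research",       "editorial",   2.0),
    Stage("brief",          "editorial",   2.0),
    Stage("draft",          "content-eng", 1.5),
    Stage("fact-check",     "content-eng", 0.5),
    Stage("schema",         "schema-eng",  0.25),
    Stage("staging-review", "schema-eng",  0.25),
    Stage("publish",        "growth",      0.5),
    Stage("amplify",        "growth",      0.5),
]

hours_per_post = sum(s.cycle_hours for s in PIPELINE)    # 7.5 hours
team_hours_per_month = 5 * 160   # illustrative: 5 people at ~160 h/month
capacity = team_hours_per_month / hours_per_post         # ~106 posts/month
print(f"{hours_per_post} h/post -> {capacity:.0f} posts/month capacity")
```

At roughly 7.5 hours per post, five full-time contributors clear the 100-post target with a small margin, which is the sense in which the velocity was arithmetically possible rather than aspirational.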

For the deeper architectural rationale behind staged pipelines in publisher contexts, the engagement leaned on our content engine service playbook. The publisher's eight-stage shape is a direct instantiation of that playbook with cluster-leadership adapted to their editorial structure.

"A hundred-post-a-month engine cannot run on a thirty-stage pipeline. Each stage had to fit on one row of the operations dashboard, and each had to be measurable in cycle hours rather than narrative description."— Engagement diagnostic memo, week 2

03 · Approach · Fact-check
Two-tier verification: automation upstream, human gate before publish.

The fact-check chain was the artifact that did the most structural work in the engine. The publisher arrived with no chain at all — verification was ad-hoc by the drafting editor — and the engagement replaced that with two tiers that ran in sequence. Tier one was automated: link-rot checks on every source URL, claim extraction against the pre-loaded sources, banned-phrasing detection. Tier two was human: a verification gate run by an editor who had not drafted the post, against a ten-point checklist.

The structural lever was upstream placement. Source URLs were verified before drafting and pre-loaded into the brief, so the drafting model was constrained to sourced claims rather than free-generating claims that would need post-hoc verification. The anti-fabrication rule was explicit in every brief template — "do not invent metrics, quotes, case studies, company names, or product features; if a claim cannot be sourced to the pre-loaded URLs, omit it or flag it for editorial review" — and the rule was tested against the model output in tier one before it reached the human gate.
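A sketch of what the automated tier can look like, assuming a plain-text draft and a list of pre-loaded source URLs. The banned-phrase list and the `[source: ...]` citation-marker convention are illustrative assumptions, and the claim-extraction step (matching draft claims against the pre-loaded sources) is omitted because it typically needs a model pass of its own.

```python
import re
import urllib.request

# Illustrative banned-phrase list; the real list lives in the brief template.
BANNED_PHRASES = ["game-changer", "in today's fast-paced world"]

def link_alive(url: str, timeout: float = 10.0) -> bool:
    """Link-rot check: a HEAD request that must come back below HTTP 400."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

def tier_one(draft: str, source_urls: list[str]) -> list[str]:
    """Return a list of failure strings; any failure blocks the merge in CI."""
    failures = [f"link-rot: {u}" for u in source_urls if not link_alive(u)]
    failures += [f"banned phrasing: {p!r}" for p in BANNED_PHRASES
                 if p.lower() in draft.lower()]
    # Numeric-claim flagging: a percentage or large quantity with no citation
    # marker nearby gets surfaced for the human gate. Crude and illustrative.
    for m in re.finditer(r"\b\d[\d,.]*\s*(?:%|percent|million|billion)\b", draft):
        if "[source:" not in draft[m.end():m.end() + 80]:
            failures.append(f"unsourced numeric claim: {m.group(0)!r}")
    return failures
```

In CI, a non-empty failure list fails the job, so nothing reaches the human gate with dead links or flagged phrasing still in place.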

Tier 1 · Automated — pre-publication
Link-rot check on all source URLs. Claim extraction against pre-loaded sources. Banned-phrasing detection. Numeric-claim flagging where a number is not traceable to a named source. Runs as part of the CI pipeline; failures block the merge.

Tier 2 · Human — verification gate
Ten-point checklist. Editor who did not draft the post. Every numeric claim traceable, every quote verified verbatim, soft-language rule applied, external links to primary sources. Sign-off recorded in the PR description with reviewer name and date.

Outcome · Pass rate 96%, sustained at 104 posts/mo
Pass rate held at 96% across the run-rate quarter. The 4% that failed at tier 2 returned to drafting with a flagged claim list rather than a full rewrite — most fixes were minor sourcing or soft-language adjustments.

The chain protected against the failure mode that publishers fear most when introducing AI — fabricated metrics or invented quotes shipping into a piece that carries the publisher's editorial brand. Two examples surfaced in the phase-one weeks where tier one caught a fabricated stat that the drafting model had generated despite the pre-loaded sources covering the same topic. In both cases the source URLs were updated to include a citation for the corrected stat, and the drafting prompt was tightened. The chain was not just a filter; it was a feedback mechanism that improved the upstream brief over time.

The cost of the chain was measurable and bounded. Tier one added roughly six minutes of CI time per post. Tier two added roughly twenty minutes of editor time per post. On twenty-five posts a week, that is about ten editor hours a week dedicated to verification — substantial, but well inside the budget that the velocity gain produced. Verification was treated as a first-class stage with its own budget line, not as a slack-time activity squeezed between drafting and publish.

The structural insight
Fact-checking belongs upstream of drafting, not downstream. Pre-loaded source URLs constrain the model; the human gate catches what automation cannot. Post-hoc verification scales linearly with volume — the upstream chain scales sublinearly because each fix improves the brief template for every subsequent post.

04 · Approach · Schema
AST validation in CI — block at merge, not warn-only.

Schema failure is the silent killer of publisher performance. The publisher arrived with no schema validation in the pipeline — title length, description length, canonical URLs, structured data, and image alts were enforced by editor goodwill, which under volume pressure became no enforcement at all. The remediation was AST-level validation in CI, configured to block the merge on failure rather than warn. The shift from warn-only to blocking was the single highest-leverage move in the schema cluster.

The list below covers the four schema controls the engagement deployed, why each was prioritised, and what the failure mode looks like when the control is missing or warn-only. Each control was implemented in the same week of phase two; the cumulative effect on compliance was visible within the first three weeks.

Control 1 · Title length gate (blocking)
Target 50-60 characters. Below 50 wastes SERP real estate; above 60 truncates on most desktop SERPs and most mobile. AST-validated in CI rather than trusting the model to self-report — model-reported lengths drift 5-10 characters from actual rendered lengths.

Control 2 · Description length gate (blocking)
Target 140-160 characters, ideally 145. Under 140 wastes the snippet; over 160 truncates. The single highest-leverage metadata point because every SERP impression sees it. AST-validated with a per-character count, not a word count, to handle long-word edge cases.

Control 3 · Structured data parse (blocking)
Article + BreadcrumbList only for blog posts in this engagement. JSON-LD parses without errors, all required fields populated, URLs absolute and trailing-slash-correct. No FAQPage, HowTo, or Review schema stacking — the audit found three SERP penalties from prior over-emission that took two months to clear.

Control 4 · Image alt + dimensions (blocking)
Every inline image has descriptive alt text (not the filename), correct dimensions, and an OG image at 1200×630 for social. Accessibility plus social-card plus SERP image-pack — three concurrent benefits from one gate. Failures are common because the model rarely writes alts unprompted.
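A minimal sketch of the four gates as one blocking CI step, assuming the post's title, description, JSON-LD blocks, and images have already been extracted from the AST into a flat metadata dict; that extraction format and the field names are assumptions, not the engagement's actual tooling.

```python
import json
import sys

def validate_post(meta: dict) -> list[str]:
    """Run the four blocking gates; return failure strings (empty = pass)."""
    errors = []
    if not 50 <= len(meta["title"]) <= 60:
        errors.append(f"title length {len(meta['title'])} outside 50-60")
    if not 140 <= len(meta["description"]) <= 160:
        errors.append(f"description length {len(meta['description'])} "
                      "outside 140-160")
    # Structured data gate: Article + BreadcrumbList only, no schema stacking.
    types = {block.get("@type") for block in meta["json_ld"]}
    if types != {"Article", "BreadcrumbList"}:
        errors.append(f"unexpected schema types: {types}")
    # Image gate: descriptive alt text that is not just the filename.
    for img in meta["images"]:
        alt = img.get("alt", "")
        if not alt or alt == img.get("filename"):
            errors.append(f"missing or filename-only alt: {img.get('src')}")
    return errors

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        meta = json.load(f)          # metadata extracted upstream from the AST
    failures = validate_post(meta)
    for failure in failures:
        print(f"FAIL: {failure}")
    sys.exit(1 if failures else 0)   # non-zero exit blocks the merge
```

Wiring this as a required CI check rather than a warning annotation is the warn-only versus blocking distinction the engagement treated as its highest-leverage schema move.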

The downstream effect of the schema cluster surprised the team. The hypothesis going in was that schema compliance would lift SERP performance — which it did. The unexpected second-order lift came from LLM citation share: the consistent structured data and metadata gave large language models a cleaner surface to cite from, and the publisher's posts started appearing as citations in AI-search answers at a measurably higher rate. The team had not optimised for that surface; the schema discipline produced it as a downstream effect.

The remediation pattern the publisher converged on — AST validation in CI, blocking at merge, no warn-only — is the pattern we recommend to every publisher engagement. For the staged rollout plan that lands an engine at this level of schema compliance inside ninety days, our AI content engine launch 30/60/90 day plan covers the milestones in order — schema discipline lands in phase two for a reason.

05 · Approach · Refresh
Weekly refresh queue — three triggers, one owner.

A six-year back catalog without an active refresh cadence decays as sources move, statistics age, vendor names change, and pricing shifts. The pre-engagement audit found that roughly forty percent of the catalog contained at least one stat or reference that had aged out, and the most-trafficked posts were disproportionately represented in that forty percent — high-traffic posts attract more refresh-triggering external change than low-traffic ones. Refresh cadence was the highest-leverage stage for the back catalog and the cluster that produced the fastest recovery curve in the engagement.

The cadence ran on three triggers, all feeding one weekly queue with one named owner. Time-triggered quarterly was the default — every post in the catalog hit a refresh check at least once a quarter, regardless of any other trigger. Model-version overlay surfaced posts referencing specific model versions, vendor pricing, or feature availability when the underlying reference changed. Event overlay surfaced pillar posts referencing competitive launches or industry events. Three triggers, one queue, one owner — the structural simplicity of that arrangement was a deliberate design choice.

Trigger 1 · Quarterly time-trigger: every post · once per quarter · 25% of catalog/quarter
The backstop trigger. Every post in the catalog hits a refresh check at least once a quarter — sources re-verified, stats updated, internal links re-audited, modifiedTime bumped. Catches what the other triggers miss.

Trigger 2 · Model-version overlay: filtered by tag ('model-version' / 'pricing' / 'feature')
Posts tagged at publish time as referencing specific model versions, vendor pricing, or feature availability surface for an accelerated refresh when the underlying reference changes. Caught a major model bump three weeks into the cadence.

Trigger 3 · Event overlay: pillar posts · industry events / competitive launches
Editorial-discretion trigger for pillar posts when an industry event or competitive launch changes the surrounding context. Manual rather than automated — the editorial team owns the call.

Queue · One weekly queue: one owner · 25-30 refreshes/week at steady-state
All three triggers feed one weekly queue. One named owner runs the queue. Per refresh: sources re-verified, stats updated, internal links re-audited, modifiedTime bumped, change scope logged for the analytics retro. Quarterly report aggregates lift.
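A sketch of how the three triggers can merge into the single weekly queue, assuming each catalog entry carries a slug, a tag set, and a last-refreshed date; the field names and the priority order between overlapping triggers are assumptions for illustration.

```python
from datetime import date, timedelta

QUARTER = timedelta(days=91)  # quarterly backstop window

def weekly_refresh_queue(posts, changed_tags, event_flagged, today=None):
    """Merge the three triggers into one de-duplicated queue: slug -> reason.

    posts: iterable of dicts with 'slug' (str), 'tags' (set of str),
           and 'last_refreshed' (date).
    changed_tags: tags whose underlying reference moved this week,
                  e.g. {'model-version', 'pricing'}.
    event_flagged: slugs of pillar posts flagged by editorial this week.
    """
    today = today or date.today()
    queue: dict[str, str] = {}
    for post in posts:
        # Trigger 1: quarterly time-trigger, the backstop for every post.
        if today - post["last_refreshed"] >= QUARTER:
            queue.setdefault(post["slug"], "quarterly")
        # Trigger 2: model-version / pricing / feature overlay, accelerated.
        if post["tags"] & changed_tags:
            queue[post["slug"]] = "overlay"
        # Trigger 3: editorial event overlay, highest priority.
        if post["slug"] in event_flagged:
            queue[post["slug"]] = "event"
    return queue  # the single owner works this list each week
```

A post that trips more than one trigger appears once, under its highest-priority reason, which is what keeps the queue at a workable 25-30 items a week rather than triple-counting.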

The refresh queue produced the fastest visible win of the engagement. By week six, the most-trafficked twenty posts had been refreshed and the organic traffic curve on those posts inflected — the high-leverage subset of the catalog stopped leaking traffic before the new-post velocity had even ramped to half its target. The lesson the publisher internalised: refresh is a first-class stage with first-class ROI, not a janitorial activity squeezed in around new-post production.

The quarterly report that the owner produced — back-catalog traffic trend, refresh volume, highest-lift post, lowest-lift post and reason — became the artifact the editorial team used to plan the next quarter's content calendar. Refresh patterns surfaced gaps in the brief library (content types that aged faster than expected), gaps in the source list (sources that link-rotted more often), and gaps in the amplification rhythm (refreshed posts under-amplified relative to new posts). The cadence was a learning loop as well as a maintenance loop.

The lesson on cadence
A back catalog without a refresh cadence is depreciating inventory. Quarterly time-trigger plus model-version overlay plus event overlay — three triggers, one queue, one owner — is the pattern that holds. Time-triggered alone misses version-dependent posts; version-dependent alone misses the long tail.

06 · Outcomes
Velocity, fact-check, schema, voice held.

Six months in, the engine had landed the brief. Velocity at one hundred and four posts a month sustained — eight times the manual baseline — across the existing editorial headcount with the addition of one content engineer and one schema engineer. Fact-check pass rate held at ninety-six percent at run-rate volume. Schema compliance ran at ninety-one percent in CI, with the remaining failures mostly stylistic edge cases the team chose to triage rather than auto-block. Organic traffic recovered the prior twelve-month decline inside four months and continued to climb through month six. LLM citation share — measured by sampling AI-search answers in the category — moved from an unmeasured baseline to a measurable share of cited sources.

The outcome that mattered most to the client was voice. Reader feedback through the comment system and the newsletter remained consistent with pre-engagement patterns — the engine had not produced a perceptible change in editorial tone. The brief-library voice constraints carried the house style across more than six hundred posts in the run-rate quarter without per-post tonal review. The hypothesis the engagement had set out to test — that voice could be encoded in templates rather than enforced by editor-by-editor judgment — held under volume pressure.

Six-month outcomes · sustained run-rate quarter
Source: publisher engagement run-rate quarter, six months post-launch.

  • Velocity vs target: 104% (run-rate 104/mo · target 100/mo · baseline 12/mo)
  • Fact-check pass rate: 96% (two-tier chain · run-rate quarter)
  • Schema compliance: 91% (AST-validated in CI · blocking gates)
  • Voice consistency, reader feedback: 94% (newsletter + comment-system sentiment · vs pre-engagement)
  • Refresh queue cadence: 88% (25-30 refreshes/week · single owner · steady-state)
  • Internal-link discipline: 82% (≥2 backlinks added within 7 days of publish)

The numbers behind the bars matter less than the pattern across them. Every cluster the engine prioritised hit its target inside six months, and the clusters were prioritised in the right order — pipeline first, fact-check second, schema third, refresh fourth. Reordering would have produced different numbers. Skipping the fact-check chain to chase velocity would have shipped volume without verification discipline; skipping schema for amplification would have left silent SERP failures in place; skipping refresh would have continued the catalog decay even as new posts shipped.

The engagement also surfaced two outcomes the brief had not asked for. First, editor satisfaction — the editor reports were universally positive on the brief library and the fact-check chain, both of which removed the most frustrating parts of the prior workflow. Second, cost-per-post — measured stage-by-stage, the average dropped by roughly sixty percent from the manual baseline, not because any single stage got dramatically cheaper but because the cumulative effect of brief reuse, CI-enforced gates, and standardised amplification compounded across stages.

"The hypothesis the engagement set out to test — that voice could be encoded in templates rather than enforced by editor-by-editor judgment — held under volume pressure across more than six hundred posts."— Engagement final report, month 6

07 · Lessons + Replication
What replicates cleanly to smaller publishers.

The case study sits at the upper end of the publisher size range — a six-year catalog, three editors pre-engagement, authority position in the category. Most publisher engagements are smaller, often a single editor with a shorter catalog and no schema engineer to hand. The patterns that replicate cleanly to smaller publishers are not the headline numbers — they are the structural decisions underneath. Five lessons, each of which holds at smaller scale, frame the replication shape.

Lesson 1 · Pipeline before velocity (replicates fully)
Diagram the stages, name the owners, set the exit criteria — then ship the first ten posts. Trying to scale velocity on a pipeline that has not been mapped produces volume without discipline, and the discipline costs more to retrofit than to install. Holds at any size.

Lesson 2 · Fact-check upstream of drafting (replicates fully)
Pre-loaded source URLs in the brief plus an explicit anti-fabrication rule plus a human gate before publication. The chain costs more to skip than to install; the cost surfaces as fabricated metrics, eroded trust, and ad-hoc verification that scales linearly with volume.

Lesson 3 · AST schema validation in CI (replicates fully)
Title length, description length, structured data, image alts — all enforced as blocking gates at merge. Costs a day of engineering for a smaller publisher; prevents a year of silent SERP drift. The single highest-leverage gate the engagement shipped.

Lesson 4 · Voice in the template (replicates fully)
Encode voice constraints, banned-phrasing lists, and tonal exemplars in the brief library rather than trying to enforce them post-draft; a minimal template sketch follows this list. Templates scale; editor-by-editor enforcement does not. Holds at single-editor scale and is in fact more important there, because the single editor cannot do per-post tonal review at velocity.

Lesson 5 · Refresh as first-class stage (replicates fully)
Smaller publishers often defer refresh because the catalog is small. The right move is the opposite — install the cadence early so the catalog never accumulates the back-tail of stale posts that takes months to clear at larger scale. Quarterly time-trigger holds at any size.
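As a sketch of what "voice in the template" can look like in practice, here is a single content-type brief encoded as data. The field names, banned phrases, voice constraints, and exemplar path are illustrative assumptions; the anti-fabrication rule is the one quoted earlier in the case.

```python
# Hypothetical brief template for one content type. Field names and example
# values are illustrative; the anti-fabrication rule is quoted from the case.
EXPLAINER_BRIEF = {
    "content_type": "explainer",
    "angle": None,                 # filled per post by the editor
    "audience": None,
    "search_intent": None,
    "source_urls": [],             # verified BEFORE drafting, then pre-loaded
    "anti_fabrication_rule": (
        "Do not invent metrics, quotes, case studies, company names, or "
        "product features; if a claim cannot be sourced to the pre-loaded "
        "URLs, omit it or flag it for editorial review."
    ),
    "banned_phrases": ["game-changer", "unlock", "delve"],    # illustrative
    "voice_constraints": [
        "second person, direct address",
        "short declarative openers, no rhetorical questions",
    ],
    "tonal_exemplars": ["/blog/example-post-in-house-voice"],  # hypothetical
}
```

The point of the encoding is that every drafting run receives the same constraints, so voice enforcement happens at generation time rather than in review.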

The pattern across the five lessons is consistent. The scale-dependent moves in the engagement — model routing across three frontier models, two-engineer staffing, weekly refresh queue at twenty-five posts a week — do not replicate to a single-editor publisher. The structural moves underneath them — staged pipeline, upstream fact-check, CI-enforced schema, voice in templates, quarterly refresh — do, and they are what produced the outcomes. A publisher shipping twenty posts a month can run the same five disciplines and land at a smaller version of the same compounding curve.

For publishers ready to translate the lessons into a staged ninety-day launch sequence, the companion playbook is the AI content engine launch 30/60/90 day plan. For publishers ready to audit an in-flight pipeline against the granular gates the engagement enforced, the companion checklist is the AI pipeline quality audit eighty-point checklist. The case study is the proof point; the playbook and the checklist are the implementation artifacts.

The pattern that replicates
Pipeline, fact-check, schema, voice in templates, refresh — five structural moves, every one of which holds at smaller scale. The numbers do not replicate; the disciplines do. A twenty-post-a-month publisher running the same five disciplines lands at a smaller version of the same compounding curve.
Conclusion

Publisher AI content engines work when pipeline, fact-check, schema, and refresh are engineered.

The publisher arrived with a six-year catalog, declining traffic, and a twelve-posts-a-month manual workflow. Six months later — one hundred and four posts a month sustained, a two-tier fact-check chain holding ninety-six percent pass rate, schema compliance at ninety-one percent in CI, a weekly refresh cadence holding the back catalog, and a measurable lift in organic traffic and LLM citation share. Voice held. The engine did the work the brief had asked it to do.

The pattern across the engagement is the pattern across every successful publisher content engine we have built. Pipeline before velocity, fact-check upstream of drafting, schema validation in CI rather than editor goodwill, voice encoded in the brief library rather than enforced post-draft, refresh as a first-class stage with its own cadence and owner. Five structural moves, each of which replicates at smaller scale, none of which is optional if the engine is going to compound rather than coast.

The case study is anonymised by request, but the playbook is not. The artifacts that did the work — the brief library, the fact-check chain, the schema validation, the refresh queue — are the same artifacts we ship into publisher engagements at every size. Smaller publishers run a smaller version of the same engine; the disciplines do not change with scale, only the staffing around them. The compounding does the rest.

Replicate this engine

Publisher AI content engines work when the pipeline is engineered.

Our content engineering team designs AI content engines for publishers — 8-stage pipeline, fact-check chain, schema discipline, refresh cadence.

What we deliver

Publisher content engine engagements

  • 8-stage pipeline design and rollout
  • Two-tier fact-check chain
  • Schema + metadata QA at scale
  • Voice protection and template library
  • Refresh cadence and amplification design
FAQ · Publisher content case

The questions publishers ask after the case.

How do the eight stages cluster, and who owns each cluster?
Eight stages cluster into four functional groups, each with a lead. Cluster one — research and brief — is editorial-owned: source URLs verified before drafting and pre-loaded into the brief; angle, audience, search intent, anti-fabrication rule, and voice constraints structured into one of seven content-type templates. Cluster two — draft and fact-check — is content-engineering-owned: drafting routes to one of three models by post type, and fact-check runs as a two-tier chain (automated plus human gate). Cluster three — schema and staging — is schema-engineer-owned: AST validation in CI for length and structured data, staging review as a five-minute pass for what automation cannot catch. Cluster four — publish and amplify — is growth-owned: publication propagates sitemap and RSS, amplification runs on a per-content-type schedule. Each stage has a target cycle time; the cumulative budget is roughly seven-and-a-half hours per post, which is what makes the velocity target arithmetically possible.