AI SEO in H1 2026 closed on two structural numbers: zero-click resolution crossing 40% on high-intent informational queries, and brand citation share inside ChatGPT, Claude, and Perplexity passing 25% for the first time. The half-year did not introduce a new channel — it reshaped the one classic SEO has measured for twenty years. Rank still exists; an answer the user reads instead of a link is now sitting on top of it.
Two trend lines run through the data. The first is the steady climb of zero-click resolution as Google's AI Overviews and AI Mode mature, ChatGPT's web tool becomes the default chat modality, and Perplexity grows into the long-tail answer surface. The second is the growing share of brand visibility that arrives via citation inside an answer rather than via a click from a ranking. Neither trend is finished. H1 was the half in which both crossed the threshold where treating them as edge cases stopped being defensible.
What follows is the retrospective in seven sections — why the half-year mark matters as an analytical frame, the zero-click data in detail, citation-share movement across the three citing engines, ranking volatility around the AI Overview and AI Mode rollouts, the two Google answer surfaces, the four trend lines that defined the period, and an honest H2 projection. The underlying baseline blends a 500-brand internal sample with public industry data; treat the specific numbers as indicative, and re-derive them against your own corpus before committing to any target.
- 01 — Zero-click crossing 40% on high-intent informational queries. Average zero-click resolution rate across our baseline reached 42% in H1 2026 versus roughly 31% at the close of H2 2025. The shift is structural — AI Overview, AI Mode, ChatGPT web, and Perplexity all compound the same outcome on the user side.
- 02 — LLM citation share crossing 25% on tracked brand queries. Brand citation share inside ChatGPT, Claude, and Perplexity averaged 27% in H1 versus 18% in late 2025. Citation is no longer an emerging surface — it is roughly a quarter of the answer-engine visibility scoreboard and growing.
- 03 — Brand mention has become a measurable ranking proxy. Answer engines reward brands the model already knows. Off-domain mention rate — analyst coverage, podcast appearances, third-party reviews — correlates with on-engine citation rate more tightly than backlink count did with rank in 2018.
- 04 — Content depth and schema discipline drive citation lift. Pages with at least one original number or named methodology, and clean schema across the page family, cited at roughly two to three times the rate of restated commodity content, holding domain authority constant. Depth and discipline are the controllable levers.
- 05 — Traditional ranking volatility tracks the AI Mode rollout. Rank fluctuation indices spiked alongside each AI Mode and AI Overview expansion window — late February, mid-April, late May. The volatility is a feature of the new SERP composition, not a bug; expect quarterly shake-ups while the rollout completes.
01 — Why H1 Matters
The half-year mark when classic SEO became answer-engine optimisation.
Half-year retrospectives are an analytical frame, not a calendar event. H1 2026 earns the frame because two long-running trend lines both crossed thresholds that change how the work has to be measured. Zero-click resolution moving past 40% means the majority-rule assumption beneath classic rank tracking is gone — you can no longer assume that ranking position one delivers the click for high-intent informational queries. Citation share crossing 25% means the alternative surface — appearing in the model's answer text rather than in the link list beneath it — is now responsible for roughly a quarter of measurable brand visibility on tracked queries.
Neither number is novel in isolation. Zero-click has trended up for years; citation share has been growing since ChatGPT shipped its web tool. What changed in H1 is the combination — and the operational implication. A marketing team that still organises its scoreboard exclusively around rank and organic clicks is measuring a shrinking slice of the surface that actually drives brand consideration. The retrospective's job is to make the slice sizes legible, not to declare classic SEO dead. The underlying signals that earn citations are largely the signals that earn ranks; what shifted is the weighting and the metric.
The half-year cadence also matters because the surfaces themselves shift on roughly a quarterly clock. AI Mode expanded into new verticals in late February; AI Overview formatting changed in mid-April; ChatGPT's default model and citation behaviour shifted alongside the GPT-5.4 to GPT-5.5 transition. A monthly retrospective gets lost in the noise; an annual one waits too long to reorient the programme. Six months is the cadence at which the trend lines are visible without the daily whipsaw, and at which the remediation roadmap from the prior cycle has had time to either land or fail to land.
The framing for the rest of this retrospective is straightforward: data in the first three sections (zero-click, citation share, ranking volatility), context in the next two (the AI Mode and Overview rollout, the four trend lines), and the H2 projection at the end. For teams operating an active programme, the baseline numbers throughout are the comparator — score your own brand against them, identify the gaps, and ship the H2 remediation in the two or three archetypes where the gap is widest rather than spreading the investment thinly.
02 — Zero-Click Acceleration
From 31% at the close of H2 2025 to 42% at the close of H1 2026.
Zero-click resolution measures the share of representative queries in which the user receives a satisfactory answer from an AI surface — AI Overview, AI Mode, ChatGPT, Claude, or Perplexity — without clicking through to a source. Across our 500-brand baseline, covering ~12,000 representative queries spread across the ten archetypes documented in the brand citation audit framework, zero-click resolution averaged 42% in June 2026 versus 31% in December 2025.
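The zero-click metric itself is simple to operationalise. Below is a minimal sketch of the per-archetype computation, assuming a flat log of (archetype, resolved-without-click) observations — the field names and sampling design are illustrative, not the baseline's exact methodology:

```python
from collections import defaultdict

def zero_click_rates(observations):
    """Compute zero-click resolution rate per query archetype.

    `observations` is an iterable of (archetype, resolved_without_click)
    tuples -- e.g. one entry per tracked query per sampling run.
    Illustrative shape only; not the exact methodology behind the
    500-brand baseline described in the text.
    """
    totals = defaultdict(int)
    zero_clicks = defaultdict(int)
    for archetype, resolved in observations:
        totals[archetype] += 1
        if resolved:
            zero_clicks[archetype] += 1
    # Per-archetype rate, which is the operational number; the overall
    # average obscures the spread the text describes.
    return {a: zero_clicks[a] / totals[a] for a in totals}

sample = [
    ("definitional", True), ("definitional", True), ("definitional", False),
    ("hyperlocal", False), ("hyperlocal", False),
]
rates = zero_click_rates(sample)
```

The decomposition matters because, as the next section shows, the archetype spread is far wider than the headline average suggests.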
The chart below decomposes that average by query archetype, because the average obscures more than it reveals. Definitional and brand-context queries — the archetypes where a single paragraph answer fully resolves the user's intent — drive most of the zero-click weight. Recommendation, comparison, and buying-decision queries still produce clicks, but the click is increasingly to a citation surfaced inside the answer rather than to a ranked organic result.
Zero-click resolution rate by query archetype · H1 2026
Source: Digital Applied 500-brand baseline · H1 2026
Two interpretive points are worth pulling out of the chart. First, the spread across archetypes is wider than the headline 42% average implies — definitional queries resolve without click 73% of the time, hyperlocal queries 14% of the time. Treating the average as the operational number costs precision; the archetype decomposition is what a remediation roadmap actually needs. Second, the archetypes with the highest zero-click rate are precisely the archetypes where citation share is most concentrated — meaning that for definitional and brand-context queries, the entire visibility outcome now lives inside the answer engine rather than below it.
The drivers of the shift are not mysterious. Google's AI Overview rolled out to additional verticals through Q1; AI Mode graduated from limited preview to general availability in late February; ChatGPT made the web tool a default rather than an opt-in during the GPT-5.4 to GPT-5.5 transition; Perplexity grew its long-tail share through aggressive distribution. Each of those surfaces resolves a slice of intent that previously sent a click; the cumulative effect is the 11-point swing.
"The average is 42%. The archetype range is 14% to 73%. The remediation roadmap lives in the decomposition, not the headline."
— Our reading of the H1 zero-click baseline
For operational teams, the practical implication is that the click metric needs an answer-side counterpart for the archetypes most exposed to zero-click. Tracking citation rate per archetype — not per query — is the equivalent measurement, and that work is tractable at a quarterly cadence with the tracking infrastructure described in our citation audit framework. The investment is modest; the alternative is operating blind on the archetypes that most influence brand consideration.
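The archetype-level citation tracking described above can be sketched as follows. `fetch_answer` is a hypothetical stand-in for whatever engine API or logging layer a given stack uses (no standard client is implied), and the substring mention check is deliberately crude:

```python
from collections import defaultdict

def citation_rate_by_archetype(tracked_queries, brand, engines, fetch_answer):
    """Aggregate query-level citation checks into archetype-level rates.

    tracked_queries: iterable of (archetype, query) pairs.
    fetch_answer(engine, query) -> answer text; a placeholder hook,
    not a real client library.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for archetype, query in tracked_queries:
        for engine in engines:
            answer = fetch_answer(engine, query) or ""
            totals[archetype] += 1
            # Crude substring check; production tracking would match
            # domain citations and entity mentions separately.
            if brand.lower() in answer.lower():
                hits[archetype] += 1
    return {a: hits[a] / totals[a] for a in totals}

# Usage with a canned stub in place of live engine calls:
def stub(engine, query):
    return "Acme is a popular option." if "best" in query else "No brands here."

rates = citation_rate_by_archetype(
    [("recommendation", "best crm tools"), ("definitional", "what is crm")],
    brand="Acme",
    engines=["chatgpt", "claude", "perplexity"],
    fetch_answer=stub,
)
```

Reporting the per-archetype dictionary rather than a single average is the point: it is the shape a quarterly audit and remediation roadmap consume.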
03 — LLM Citation Share
ChatGPT, Claude, Perplexity — three engines, three citation profiles.
Brand citation share — the rate at which the brand domain or brand name appears inside the answer text of a tracked engine — averaged 27% across our H1 baseline. The average masks meaningful engine-level differences. The three citing engines weight content differently, surface citations at different rates, and reward slightly different surface shapes. The choice matrix below documents what we observed in H1; rerun it against your own corpus before committing remediation priorities, because vertical and brand-scale effects move the numbers materially.
ChatGPT — web tool default · tight citation set
GPT-5.5 promoted the web tool to a near-default state during H1. It cites a tighter set of sources per answer with higher per-citation prominence, and rewards Article schema correctness, named author entities, and content that reads like information rather than marketing. Average citation rate across baseline: 31%.
Tight set, high prominence

Claude — web search · recency-heavy weighting
Claude's web search behaviour over H1 favoured recency-heavy citations and tended to weight structured on-domain documentation comparably with third-party coverage. Cited rate climbed steadily through Q2 alongside the Opus 4.6 to Opus 4.7 transition. Average citation rate across baseline: 26%.
Recency + docs reward

Perplexity — wider source set · diluted prominence
Perplexity surfaces a wider source set per answer than either ChatGPT or Claude — more domains cited per question, lower per-citation prominence on average. It is a strong long-tail surface for brands without enough authority to crack the top three in tighter engines. Average citation rate across baseline: 24%.
Long-tail breadth

Audit all three — the average is secondary
The average across engines is reported, but the per-engine breakdown is what informs remediation. A brand can be strong on Perplexity citation while invisible on ChatGPT for the same archetype, and the levers differ. Track all three; average only for headline reporting.
Per-engine breakdown

The trend inside H1 was steady upward movement across all three engines, with the steepest growth coming from ChatGPT as the web tool moved from opt-in to default during the GPT-5.5 transition. Claude's citation rate climbed alongside Opus 4.7's release in late H1; Perplexity's rate grew most slowly in percentage terms but from a larger base of long-tail queries. Across all three, the brands earning the largest H1 lift were not necessarily the brands with the largest domain-authority footprint — they were the brands that shipped material content depth, refreshed pricing and comparison content quarterly, and consolidated their Organisation entity coherence across owned and off-domain surfaces.
The other H1 development worth naming is the visible weighting of brand mention as a citation prerequisite. Several months of A/B-style tests inside our 500-brand sample suggest that answer engines disproportionately cite brands the model already recognises — meaning that pre-training presence and off-domain mention rate, not just on-domain content quality, gate citation inclusion for many archetypes. Brand mention behaves as a ranking proxy in a way backlink count once did; the implication for H2 programmes is that off-domain reinforcement is no longer optional.
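For teams wanting to reproduce the mention-to-citation relationship on their own sample, a plain Pearson correlation over per-brand numbers is enough for a first sanity check. The figures below are hypothetical placeholders, not the article's data:

```python
def pearson(xs, ys):
    """Plain Pearson correlation coefficient -- sufficient to
    sanity-check the mention-rate vs citation-rate relationship
    on a per-brand sample."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-brand numbers: off-domain mentions per month vs
# on-engine citation rate. Illustrative only.
mentions = [2, 5, 9, 14, 20]
citation = [0.08, 0.12, 0.19, 0.24, 0.31]
r = pearson(mentions, citation)
```

A strongly positive `r` on your own corpus is consistent with the gating effect described above; a weak one suggests other levers dominate in your vertical.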
04 — Ranking Volatility
Classic SERP volatility spiked around each AI Mode expansion.
The other H1 storyline worth retrospecting is what happened to traditional organic rankings while citation share grew. Rank fluctuation indices — measured across an internal 50,000-keyword sample plus aggregated industry trackers — spiked at three distinct points during H1: late February (AI Mode general availability), mid-April (AI Overview format change and an unrelated core update), and late May (AI Mode vertical expansion into commerce and travel).
The pattern is not a coincidence. Each surface change forced Google's underlying ranker to recompose SERP layouts to make room for the new answer modules, and the recomposition cascaded into position changes for the organic results beneath. Volatility during these windows ran two to three times the H2 2025 baseline on tracked keywords. Outside the windows, volatility reverted to normal levels — meaning the shake-ups are surface-rollout artefacts, not a permanently elevated noise floor.
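The fluctuation-index comparison is straightforward to reproduce against your own rank logs. A minimal sketch, assuming daily position series per keyword — a simple stand-in for the proprietary indices aggregated above, not their actual formula:

```python
def fluctuation_index(daily_positions):
    """Mean absolute day-over-day rank change across a keyword set.

    daily_positions maps keyword -> list of daily rank positions.
    A simplified stand-in for commercial fluctuation indices.
    """
    deltas = []
    for series in daily_positions.values():
        deltas += [abs(b - a) for a, b in zip(series, series[1:])]
    return sum(deltas) / len(deltas) if deltas else 0.0

def window_multiple(window, baseline):
    """Express a rollout window's volatility as a multiple of the
    quiet-period baseline (the '2.4x baseline' style of number)."""
    base = fluctuation_index(baseline)
    return fluctuation_index(window) / base if base else float("inf")
```

Comparing each rollout window against the preceding quiet period is what lets you say "this window ran at roughly 2-3x baseline" for your own keyword set rather than relying on aggregated trackers.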
AI Mode general availability
Rank fluctuation index across the tracked keyword sample ran roughly 2.4× the H2 2025 baseline during the four-week window around AI Mode GA. Largest reshuffles in commerce, travel, and finance verticals where AI Mode integrated earliest.
Window: Feb 17 — Mar 14
Overview format + core update
Two concurrent events — an AI Overview formatting change and an unrelated core algorithm update — produced the largest single rank-volatility window of H1. Recovery was uneven; some verticals stabilised within two weeks, others took six.
Window: Apr 8 — May 16
AI Mode vertical expansion
Commerce and travel verticals received the largest reshuffles as AI Mode integrated category-specific answer modules. Brands with structured product schema and clean pricing pages weathered the change visibly better than those without.
Window: May 21 — Jun 12
The operational read for teams operating during these windows is cautious patience. Volatility windows are noisy; ranking changes measured inside them tend to revert when the surface stabilises. Treating a four-week reshuffle as a permanent state and rebuilding content around the apparent new ranker preferences is a common and expensive mistake — by the time the rebuild ships, the surface has settled and the underlying signals revert to the old shape. The right posture is to verify whether a ranking change persists beyond a window before allocating remediation effort.
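The persistence check can be as simple as comparing stable-period averages either side of a volatility window. The two-position threshold below is an illustrative default, not a recommendation from the text:

```python
def change_persists(pre_window, post_window, threshold=2.0):
    """Flag a rank move as persistent only when the stable averages
    before and after the volatility window differ by more than
    `threshold` positions.

    pre_window / post_window: daily rank samples from stable periods
    either side of the window. Threshold and sampling scheme are
    illustrative assumptions.
    """
    pre = sum(pre_window) / len(pre_window)
    post = sum(post_window) / len(post_window)
    return abs(post - pre) > threshold
```

A keyword that held around position 3 before the window and settles around 9 after warrants remediation; one that wobbles during the window and settles back near 4 does not.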
A second observation: brands with materially stronger schema discipline weathered the windows visibly better than brands without. The brands that lost the most rank stability in the mid-April window were those whose schema coverage was patchy — FAQ schema on some article pages but not others, inconsistent Article schema across the page family, missing Author entity references. The recomposition rewarded coherence, and the brands without it took the brunt of the reshuffle. Schema is not what wins citation, but it is what holds rank stability through surface-rollout windows.
05 — AI Mode + Overview
The Google answer layer stabilised into two distinct surfaces.
By the end of H1, Google's answer-engine layer had clarified into two functionally distinct surfaces. AI Overview is the paragraph-scale module that appears above the organic results on applicable queries; AI Mode is the conversational surface that replaces the SERP entirely for users who opt in. The two surfaces cite differently, reward different content shapes, and play different roles in the visibility scoreboard. Treating them as interchangeable misses the most important structural insight of the half — they are now distinct optimisation surfaces with distinct levers.
AI Overview
Paragraph module · above organic
Paragraph-scale answer that appears above the organic results on applicable queries. Cites three to five sources prominently. Rewards Article schema correctness, FAQ schema on FAQ-shaped content, dense paragraph-level answers, and clean H2 anchors. Visible to all Search users on triggering queries.
Cites: 3-5 sources

AI Mode
Conversational · replaces SERP
Conversational surface that replaces the classic SERP for opted-in users. Cites a wider source set in a sidebar layout, supports follow-up questions, and tends to reward structured comparison content, detailed methodology, and content depth over surface-level summaries. GA from late February 2026.
Opt-in · expanding verticals

Map pack / local
Hyperlocal · largely unchanged
The map pack and classic local surfaces are the H1 holdouts — hyperlocal queries continued to resolve primarily through the map pack rather than through AI Overview or AI Mode. Click-through to local business listings stayed roughly flat. Hyperlocal is the lowest-zero-click archetype for this reason.
Stable surface

The other structural development worth naming is the increasing visibility of which sources AI Overview chose for a given query. By mid-H1, the cited sources were visible inline in the module rather than tucked behind a tooltip, which shifted the optimisation game in two ways. First, source diversity in the cited set went up — Overview began rotating sources within the window of acceptable quality rather than consistently citing the same three domains. Second, the inline visibility of cited sources made citation share itself a measurable proxy for downstream brand recognition, in a way that was not as clean before.
For programmes calibrating the H2 roadmap, the practical question is which of the two surfaces drives the larger share of the visibility outcome for the verticals you operate in. Verticals where AI Overview triggers heavily (informational, definitional, recommendation) reward depth and schema discipline on top of classic on-page work. Verticals where AI Mode is the rising surface (commerce, travel, finance) reward longer-form structured content, named methodologies, and explicit comparison tables ready for the conversational shape. Most programmes need both; the weighting depends on the vertical.
06 — Four Trends
The defining trend lines of H1 2026.
Pulling the data together, four trend lines defined the half-year. Each is a continuation of a longer arc rather than a sudden discontinuity, but each crossed a threshold in H1 that changed its operational status from emerging to foundational.
Zero-click crossed 40%
31% → 42% in six months
The structural number of the half. Definitional and brand-context archetypes resolve majority-without-click; recommendation and comparison archetypes are not far behind. Operational implication: rank-only measurement is now insufficient on the archetypes that most influence brand consideration.
+11 pts vs H2 2025

LLM citation share crossed 25%
18% → 27% in six months
Brand citation share inside ChatGPT, Claude, and Perplexity passed the threshold where it is no longer an emerging surface. Roughly a quarter of measurable brand visibility on tracked queries lives in the answer text. Quarterly audits are now the cadence; weekly is overkill outside remediation sprints.
+9 pts vs H2 2025

Brand mention as ranking proxy
Off-domain presence as gate
Pre-training presence and off-domain mention rate behave as citation prerequisites in a way backlink count once did for rank. Brands the model already recognises cite far more reliably; brands without third-party reinforcement underperform their on-domain content quality.
PR + analyst lift

Content depth + schema discipline as citation drivers
Original numbers + entity coherence
Pages with at least one citable original number cite at roughly 2-3× the rate of restated commodity content. Schema correctness across the page family qualifies pages for the candidate set. Together they are the two controllable levers that produced the largest H1 lift in our sample.
Controllable levers

"The half did not introduce a new channel. It reshaped the one classic SEO has measured for twenty years."
— H1 2026 retrospective summary
None of these four trends is finished. Zero-click will keep climbing as AI Mode and AI Overview mature; citation share will keep growing as the citing engines refine attribution; brand mention will keep weighting as the models continue to reward entities they already recognise; content depth and schema discipline will keep separating brands that ship from brands that do not. What changed in H1 is that each trend crossed the threshold where ignoring it stopped being a defensible position. The trends are the new operating baseline rather than the speculative future.
07 — H2 Projection
What the data suggests for July to December 2026.
Forecasting six months ahead in a category moving this fast is an exercise in calibrated uncertainty rather than a prediction. The trend lines do not extrapolate linearly — surface-rollout windows create discontinuities, model upgrades reset citation behaviour, and competitive activity inside verticals reshapes the per-brand ceiling. With those caveats explicit, the directional reads below summarise where the H1 data points if it continues to compound at the H1 cadence.
Trajectory to ~50% by year-end
If the H1 cadence continues, zero-click resolution averages ~50% by December 2026 — driven primarily by continued AI Mode vertical expansion and ChatGPT web-tool default behaviour. The ceiling depends on whether Google introduces friction on AI Overview triggering for commercial intent queries.
Plan for 50% by Dec
Trajectory past 35% by year-end
If the H1 cadence holds, brand citation share averages 35-38% by December 2026. The growth driver is partly mechanical (more queries served via answer engines) and partly compositional (citing engines refining their per-domain weighting). Expect more variance per-engine than per-aggregate.
Plan for 35-38%
Continued surface-rollout shake-ups
Expect at least one and possibly two further AI Mode vertical expansions during H2, each producing a volatility window of two to three times normal baseline lasting three to five weeks. Schema discipline and brand-anchor coherence remain the stability levers; surface-stable brands compound the lead.
Plan for shake-ups
Off-domain reinforcement becomes table stakes
The brand-mention prerequisite hardens further. Off-domain reinforcement (analyst coverage, podcast appearances, third-party reviews, Wikipedia presence) moves from competitive advantage to table stakes. Programmes without it underperform programmes with comparable on-domain content but stronger off-domain footprint.
Invest off-domain
For programmes shaping the H2 roadmap, the practical sequence is unchanged from the H1 retrospective's remediation logic. Audit citation share per archetype against the H1 baseline, identify the two or three archetypes where the gap is widest, invest in content depth and schema discipline on those archetypes for one full quarter, then re-audit. Spreading the investment thinly across all ten archetypes in a single quarter consistently underperforms the focused approach. The archetypes worth prioritising vary by vertical, but the sequencing logic does not. For programmes still anchored on classic rank tracking, the unlock is to add the answer-side counterpart — citation rate per archetype — rather than to replace one metric with the other.
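The widest-gap prioritisation reduces to a small scoring pass. The per-archetype baseline numbers below are placeholders — the text reports only the 27% average, so real baselines should come from your own audit:

```python
def widest_gaps(own_rates, baseline, top_n=3):
    """Rank archetypes by how far the brand trails the baseline,
    keeping only genuine shortfalls (positive gaps).

    own_rates / baseline: archetype -> citation rate. An archetype
    missing from own_rates is treated as uncited.
    """
    gaps = {a: baseline[a] - own_rates.get(a, 0.0) for a in baseline}
    ranked = sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)
    return [a for a, gap in ranked[:top_n] if gap > 0]

# Hypothetical per-archetype baselines and audit results -- placeholders
# for numbers derived from your own corpus, not the article's data.
baseline = {"definitional": 0.35, "comparison": 0.28, "recommendation": 0.24}
own = {"definitional": 0.30, "comparison": 0.10, "recommendation": 0.25}
focus = widest_gaps(own, baseline, top_n=2)
```

Running this once per quarter, then concentrating the quarter's depth and schema work on the returned two or three archetypes, is the focused sequencing the paragraph above describes.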
For teams designing their H2 measurement stack from scratch, the recommendation is the same one we run on every engagement: pair rank tracking with citation tracking, audit both quarterly, report archetype-level breakdowns rather than averages, and calibrate remediation against the two or three highest-leverage gaps. Our agentic SEO programmes are designed around exactly this structure — quarterly citation audits, archetype-level remediation, off-domain reinforcement planning, and the tracking infrastructure described in our agentic SEO crawler walkthrough.
H1 2026 was the half classic SEO became answer-engine optimisation.
The half did not introduce a new channel. It reshaped the one classic SEO has measured for twenty years. Zero-click resolution crossed 40% on high-intent informational queries; brand citation share inside ChatGPT, Claude, and Perplexity crossed 25%; the answer layer above the SERP stabilised into two distinct surfaces; and brand-mention presence consolidated as a citation prerequisite. None of these movements is finished. What changed is that each crossed the threshold where treating it as an edge case stopped being defensible.
The operational implication is straightforward and demanding in equal measure. The scoreboard now has two columns — rank and citation — and the citation column is approximately a quarter of the visibility outcome and growing. Measuring only the rank column is measuring a shrinking slice of the surface that drives consideration. Adding citation tracking is not a tooling problem; it is a measurement re-anchoring, and the programmes that complete it first bank a measurable lead.
The H2 forecast is uncertainty wrapped in directional reads. Zero-click plausibly averages 50% by December; citation share plausibly crosses 35%; further AI Mode vertical expansions will produce volatility windows; brand-mention reinforcement consolidates from competitive advantage to table stakes. The half-year cadence is the right cadence to track these. The brands that audit quarterly, invest in the two or three highest-leverage archetypes per cycle, and pair on-domain depth with off-domain reinforcement will compound their visibility lead through H2. The brands that wait for the surface to stabilise before re-anchoring their measurement stack will compound the opposite.