Most generative-engine optimization advice is recycled SEO advice with the buzzwords swapped. It is published, ranked, and re-shared without anyone testing whether the tactics actually move citation share inside AI answers. We tested 92 mid-market domains across 6,840 prompts in April 2026. The results are uncomfortable.
Three of the most-discussed GEO tactics — keyword-stuffed FAQ blocks, schema-only optimization without prose changes, and brand-mention density theater — produced citation lifts at or near the noise floor. They're not actively harmful, but they're not the leverage points the agency-and-blogger ecosystem keeps promising.
Three under-discussed tactics produced material lift: opinion density (+47%), structured-attribution verbs inside prose (+34%), and prose-first markdown rendering versus JavaScript-rendered equivalents (+28%). The shift in working-tactic emphasis is the essay; the data is the receipt.
- 01 · Keyword-stuffed FAQ blocks lift citation share by 1.2% — within the margin of error. FAQ schema is not the dominant axis. AI engines extract answers from prose readily; the FAQ block is one signal among many, and density-theater FAQ blocks compete with each other across a domain rather than amplifying.
- 02 · Schema-only optimization without prose changes lifts citation share by 3.1% — a real but small effect. Article and Organization schema help discovery; they don't determine extraction. The dominant variable is the prose itself — what the model actually cites. Schema-without-prose-change is the largest source of GEO consultancy fees for the smallest ROI.
- 03 · Opinion density and named-author attribution lift citation share by 47%. AI engines disproportionately cite content with stated opinions and identifiable authors. The signal correlates with editorial confidence and source-credibility heuristics. This was the largest single lift in our audit. Implication: write prose with a point of view.
- 04 · Verb-rich attribution inside prose ('cite', 'source', 'attribute', 'argue') lifts citation share by 34%. Prose that explicitly attributes claims and cites sources gets cited disproportionately. The mechanism is the parsing anchor: verb-rich attribution gives the model unambiguous extraction handles. Easier to extract = more often extracted.
- 05 · Prose-first markdown rendering lifts citation share by 28% versus JS-rendered equivalents. Crawler rendering remains imperfect across AI search engines. Domains shipping markdown-first or SSR-rendered prose get cited at materially higher rates than equivalent content stuck behind heavy JavaScript. Render strategy is a GEO axis, not just an SEO one.
01 — The Thesis · Most GEO advice is warmed-over SEO advice.
The dominant GEO playbook circulating in agency Slack channels and SEO conferences in 2025–2026 is roughly: add FAQ blocks with question-keyword stuffing, mark up everything with schema, mention your brand multiple times, and the AI engines will cite you. We traced it back to a few well-shared posts from late 2024; the same tactics have been cycling through every "updated for 2026" reformat since.
The problem is that the tactics correspond to a model of how AI engines work that was outdated in 2024 and is wrong in 2026. AI engines do not extract answers primarily from FAQ schema — they extract from prose, with the schema as a secondary signal. AI engines do not weight brand-mention density the way classical SEO weighted keyword density — the parsing target is editorial confidence and citation-anchor density, not term frequency. AI engines penalize JavaScript-rendered content and reward prose-first markdown — the opposite of the "don't worry about rendering, the crawlers handle it" assumption underlying most GEO advice.
The corrective is not to abandon GEO; it is to use the tactics that actually work. The audit data tells us which ones.
"The most-shared GEO tactic stack from late 2024 is producing 95% of the discourse and 5% of the citation lift."— Internal audit memo, April 2026
02 — Three Tactics That Don't Work · The mythology of GEO.
Each of these tactics is widely promoted, easily auditable, and demonstrably underperforming on citation lift. We tested each in paired A/B variants — same domain, same topic, same time window, with the variable in question as the only difference.
Keyword-stuffed FAQ blocks (+1.2% citation lift)
Variants with 8+ FAQ entries densely keyword-stuffed versus equivalent content with 0–2 FAQ entries showed a 1.2% citation lift — well inside the margin of error. AI engines extract from prose; FAQ blocks compete with each other across the domain.
Drop · 1.2% lift
Schema-only optimization (+3.1% citation lift)
Adding Article, Organization, BlogPosting, and Breadcrumb schema without prose changes produced a 3.1% lift. Real but small. Schema helps discovery; it does not drive extraction. Citation depends on the prose, which schema does not change. A sketch of the markup in question follows below.
Modest · 3.1% lift
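For concreteness, this is the kind of markup the schema-only variants added. A minimal Python sketch that emits Article and Organization JSON-LD; every field value here is a placeholder for illustration, not data from the audit.

```python
import json

# Illustrative Article + Organization JSON-LD of the kind the schema-only
# variants added. Every field value is a placeholder, not audit data.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "author": {"@type": "Person", "name": "Jane Doe", "jobTitle": "Senior Editor"},
    "publisher": {
        "@type": "Organization",
        "name": "Example Co",
        "url": "https://example.com",
    },
    "datePublished": "2026-04-01",
}

# This is the payload that goes inside <script type="application/ld+json">.
print(json.dumps(article_schema, indent=2))
```

Note what the markup does not touch: the prose. That is the whole point of the 3.1% result.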
Brand-mention density theater (+0.4% citation lift)
Variants mentioning the brand 12+ times versus 1–3 times across equivalent content showed a 0.4% lift — noise floor. AI engines do not weight term frequency the way classical SEO did. The mention should support narrative attribution, not pad density.
Drop · 0.4% lift
03 — Three Tactics That Do Work · The math of GEO.
The under-discussed tactics produced material lift across the same paired audit. Each represents a structural insight about how AI engines parse and extract citations — not a hack, but a different mental model of the problem.
Opinion density and named author (+47% citation lift)
Variants with explicit opinions, named authors, and editorial confidence outperformed neutral-tone equivalent content by 47%. AI engines disproportionately cite content with stated opinions and identifiable authors — the signal correlates with editorial-confidence heuristics.
+47% · largest lift
Verb-rich attribution inside prose (+34% citation lift)
Prose using attribution verbs — 'cite', 'source', 'attribute', 'argue', 'note', 'establish' — outperforms prose without explicit attribution by 34%. Verb-rich attribution gives the model unambiguous extraction handles. Easier to extract = more often extracted. A sketch of the density measure follows below.
+34% · structural
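To make the extraction-handle idea concrete, here is a minimal sketch of how attribution-verb density could be measured. The verb stems are assumed from the examples above, and the per-100-word score is illustrative, not the audit's actual instrument.

```python
import re

# Hypothetical lexicon built from the verbs named above; extend for your corpus.
ATTRIBUTION_STEMS = ("cite", "sourc", "attribut", "argue", "note", "establish")

def attribution_verb_density(prose: str) -> float:
    """Attribution verbs per 100 words: a rough proxy for extraction handles."""
    words = re.findall(r"[a-z]+", prose.lower())
    if not words:
        return 0.0
    # Crude stemming via prefix match; catches 'cites', 'cited', 'argues',
    # and similar inflections (plus some noise, e.g. 'notes' as a noun).
    hits = sum(1 for w in words if w.startswith(ATTRIBUTION_STEMS))
    return 100.0 * hits / len(words)

sample = ("Smith (2025) argues that render strategy matters; we cite the "
          "same dataset and note the gap.")
print(f"{attribution_verb_density(sample):.1f} attribution verbs per 100 words")
```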
Prose-first markdown rendering (+28% citation lift)
Domains shipping markdown-first or SSR-rendered prose outperform JavaScript-rendered equivalent content by 28%. Crawler rendering remains imperfect across AI search engines. Render strategy is a GEO axis, not just an SEO one. A quick way to check your own pages follows below.
+28% · render-strategy
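One quick way to see which side of the rendering gap a page sits on is to fetch the raw HTML without executing JavaScript and look for a sentence you know is in the prose. A rough sketch, assuming a hypothetical URL; it approximates what a non-rendering crawler sees.

```python
import re
import urllib.request

def prose_visible_without_js(url: str, probe_sentence: str) -> bool:
    """True if a known sentence from the article appears in the raw HTML,
    i.e. without executing any JavaScript. A crude proxy for what a
    non-rendering (or imperfectly rendering) AI crawler can see."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    text = re.sub(r"<[^>]+>", " ", html)   # strip tags
    text = re.sub(r"\s+", " ", text)       # collapse whitespace
    return probe_sentence in text

# Hypothetical usage: probe with a sentence you know is in the rendered page.
# prose_visible_without_js("https://example.com/post", "Render strategy is a GEO axis")
```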
04 — Audit Data · The 92-domain dataset.
The table below gives the full citation-lift results from our April 2026 audit: 92 mid-market domains, 6,840 prompts spread across ChatGPT Search, Perplexity, Google AI Overviews, and Claude, and 76 paired A/B tests after exclusion criteria. Lift is measured as the percentage change in citation share for variants over baseline content; a sketch of that computation follows below.
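As a sketch of the measure: citation lift for a paired test reduces to the percentage change in citation share, with a margin of error on the share difference. The counts below are hypothetical; this illustrates the definition above, not the audit's actual pipeline.

```python
from math import sqrt

def citation_lift(base_cites: int, base_prompts: int,
                  var_cites: int, var_prompts: int) -> tuple[float, float]:
    """Lift as the % change in citation share, variant over baseline, plus a
    rough 95% margin of error on the share difference (normal approximation).
    A sketch of the definition above, not the audit's actual pipeline."""
    p0 = base_cites / base_prompts
    p1 = var_cites / var_prompts
    lift_pct = 100.0 * (p1 - p0) / p0
    se = sqrt(p0 * (1 - p0) / base_prompts + p1 * (1 - p1) / var_prompts)
    return lift_pct, 1.96 * se * 100.0  # margin of error in percentage points

# Hypothetical counts: baseline cited in 90/1000 prompts, variant in 132/1000.
lift, moe_pp = citation_lift(90, 1000, 132, 1000)
print(f"lift = {lift:.1f}%, share difference ±{moe_pp:.1f} pp at 95%")
```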
Citation lift by tactic · paired A/B audit · Apr 2026

| Tactic | Citation lift | Verdict |
| --- | --- | --- |
| Opinion density + named author | +47% | Works (largest lift) |
| Verb-rich attribution inside prose | +34% | Works |
| Prose-first markdown rendering | +28% | Works |
| Schema-only optimization | +3.1% | Modest |
| Keyword-stuffed FAQ blocks | +1.2% | Noise floor |
| Brand-mention density theater | +0.4% | Noise floor |
Source: 92-domain GEO audit · Apr 2026 · 6,840 prompts · 76 paired tests
"The bottom three tactics in our audit are the top three in most GEO advice. The top three tactics are barely discussed."
— Audit summary memo, April 2026
05 — Common Objections · Three steel-manned counter-arguments.
Three reasonable objections to the audit results — taken seriously and answered briefly.
Schema and FAQ work for some queries
Schema effect varies by query intent
True — schema and FAQ blocks have a measurable effect on transactional and product-comparison queries, where structured data fields directly answer the prompt. Our audit was weighted toward informational and editorial queries; the schema effect on transactional queries is larger. The general claim — that schema is decisive — remains wrong; the targeted claim — that schema helps for product-comparison queries — is correct.
Steel-manned
Opinion content is risky for regulated industries
Liability vs. visibility tradeoff
True — financial services, healthcare, legal, and insurance content cannot freely add opinion density without compliance review. The mitigation is to attribute opinions to identified senior authors, position them as professional editorial commentary, and route them through compliance review on a defined cadence. The visibility lift is real even with that overhead.
Steel-manned
Prose-first markdown isn't possible on legacy CMS
Implementation friction is real
Partially true. Legacy CMS platforms (older WordPress, Sitecore, Adobe Experience Manager) shipped JavaScript-heavy rendering by default. Migration to SSR-first or static-site rendering is a real project, often 3–9 months of engineering work. The mitigation is to start with the highest-traffic templates and accept incremental lift over time. Don't let perfect be the enemy of partially better.
Steel-manned
06 — What To Do This Quarter · Concrete moves for Q3.
If you have GEO budget allocated for Q3, here is how to spend it against the audit data.
Audit your top 20 highest-traffic articles
Score each on opinion density, named-author attribution, attribution-verb usage, and rendering strategy. Most teams find 60–80% of articles fail at least two of the three working tactics. Fix the highest-traffic ones first; a scoring sketch follows below.
Highest leverage
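A minimal sketch of that scoring pass, assuming illustrative marker lists and placeholder thresholds rather than calibrated values from the audit. The byline and rendering checks arrive as flags because they come from metadata and the rendering probe, not from the text itself.

```python
import re

# Illustrative marker lists and thresholds; not the audit's calibrated ones.
ATTRIBUTION_STEMS = ("cite", "sourc", "attribut", "argue", "note", "establish")
OPINION_MARKERS = ("we think", "we believe", "in our view", "our take", "should")

def score_article(text: str, has_named_author: bool, ssr_rendered: bool) -> dict:
    """Pass/fail one article against the three working tactics."""
    words = re.findall(r"[a-z]+", text.lower())
    n = max(len(words), 1)
    verb_hits = sum(1 for w in words if w.startswith(ATTRIBUTION_STEMS))
    opinion_hits = sum(text.lower().count(m) for m in OPINION_MARKERS)
    return {
        "opinion_density_ok": opinion_hits >= 2 and has_named_author,
        "attribution_verbs_ok": 100.0 * verb_hits / n >= 0.5,  # per-100-word floor
        "rendering_ok": ssr_rendered,
    }

# Flag articles that fail two or more of the three tactics.
report = score_article("We believe render strategy matters; Smith cites the data.",
                       has_named_author=True, ssr_rendered=False)
print(report, "| failures:", sum(not v for v in report.values()))
```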
Add named-author bylines and explicit opinions
Where editorial style allows, add named-author bylines with role and credentials, plus 1–2 explicit opinions per major section. This targets the largest single citation lift in the audit. Compliance review for regulated industries is a manageable overhead.
+47% lift target
Migrate templates to SSR or static-rendered prose
Audit which page templates ship JavaScript-rendered content. Highest-traffic templates first — usually the article, glossary, and product-information templates. Prioritize this over schema work; render strategy is the structural fix.
+28% lift target
Stop the FAQ-stuffing and brand-density theater
Reclaim the editorial real estate from low-lift tactics. Replace dense FAQ blocks with answer-rich prose. Reduce brand mentions to where they support narrative attribution, not pad density. Use the saved space for opinion-dense, attribution-verb-rich content.
Remove drag
07 — Conclusion · The mythology will outlast the math.
The dominant playbook is loud, easy to deliver, and mostly wrong.
The GEO mythology will outlast the math because the mythology is easier to deliver, easier to sell, and easier to feel productive inside. The math says the leverage points are editorial (opinion density, named authors, attribution verbs) and infrastructural (prose-first rendering). None of those fit the SEO-trained content workflow comfortably; all of them require disciplines the SEO ecosystem under-emphasized.
The shift in tactic emphasis is not a bigger version of the old playbook; it is a different mental model. AI engines do not parse content the way classical search ranks it. They cite editorial voices that offer handles for extraction, and they reward prose-first rendering because their crawlers are not yet flawless on JavaScript. The tactics that work follow from those facts.
The audit is what it is. We will re-run it quarterly. The next update is end of July, alongside the Q2 quarterly report, with an expanded methodology covering authoritative-domain pairs and non-English language tests. If you want to be cited, write content worth citing — and ship it in a format the engines can read.