HTTP Status Codes: Complete 2026 SEO Reference Guide
Complete 2026 reference for every HTTP status code, from 2xx success to 5xx server errors, with SEO implications and handling best practices.
2xx Success codes
The 2xx class signals that a request was received, understood, and processed successfully. For SEO, the code that matters most here is 200 OK — every page you want indexed must return it. The nuances around partial content, no-content responses, and newly created resources matter more for APIs and headless architectures.
200 OK
Returned when a resource was fetched successfully. Every public URL you want Google to index must return 200. Verify with Google Search Console URL Inspection or curl -I. A common mistake: custom error pages that render visually like errors but still return 200 — these become soft 404s.
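Soft 404s can be caught in an audit by checking whether a 200 response body reads like an error page. A minimal sketch — the function name and the phrase list are illustrative assumptions, not Google's actual soft-404 logic:

```python
def looks_like_soft_404(status: int, body: str) -> bool:
    """Heuristic: flag a 200 response whose body reads like an error page.

    Google's real soft-404 classifier is far more sophisticated; the
    phrase list below is an illustrative assumption for audit scripts.
    """
    if status != 200:
        return False  # only 200 responses can be soft 404s by this definition
    error_phrases = ("page not found", "404", "no longer available",
                     "does not exist")
    text = body.lower()
    return any(phrase in text for phrase in error_phrases)
```

Run it over crawl exports pairing each URL's status with a body snippet; any hits are candidates to convert to real 404 or 410 responses.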
201 Created
Used by REST APIs after POST/PUT creates a new resource. Not typically seen by crawlers on public HTML. If a headless CMS returns 201 to a GET request (misconfiguration), Google treats it as successful, but expect edge cases in tooling.
204 No Content
Valid for tracking pixels, form submissions, and beacon endpoints. Never serve 204 on a user-facing HTML URL — Googlebot receives no body to index, effectively removing the page.
206 Partial Content
Used when a client requests a byte range (video streaming, large file download resume). Googlebot supports range requests for video indexing, so proper 206 handling on video assets matters for Video results in Search.
3xx Redirects — most important for SEO
The 3xx class is where SEO lives or dies. Choosing the wrong redirect type during a migration or domain change can vaporize years of link equity. Always verify redirects with curl -I -L or a dedicated crawler before going live. Review our technical SEO audit checklist for redirect QA patterns.
301 Moved Permanently
Signals that a resource has permanently moved to a new URL. Google consolidates link equity and ranking signals from the old URL to the new one. Use for URL restructures, HTTPS migrations, domain changes, and any scenario where the old URL will never return. 301 is cached aggressively by browsers — test carefully because reverting is painful.
302 Found
Indicates a temporary move — the original URL will return. Use for A/B tests, geolocation redirects, temporary promotions, or maintenance. Google still passes signals through 302 (policy changed in 2016), but leaving a 302 in place long-term confuses canonical selection. After ~12 months Google will often treat a persistent 302 as a 301, but do not rely on that.
307 Temporary Redirect
Like 302 but the HTTP method must be preserved (POST stays POST). For SEO purposes, 307 and 302 behave the same — both are temporary and keep the original URL as the canonical. Common in HSTS upgrades from HTTP to HTTPS before certificates are fully rolled out.
308 Permanent Redirect
A stricter version of 301 that preserves the HTTP method. Google treats 308 and 301 identically for ranking purposes. Use 308 when you need method preservation (rare for public HTML sites); stick to 301 for standard content redirects where tooling compatibility matters.
303 See Other
Tells the client to fetch the target with a GET regardless of the original method. Classic use: redirect after form submission to prevent duplicate POSTs. Not commonly used for public content redirects — if you see a 303 on an indexed URL, investigate whether it should be 301 instead.
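Whatever redirect types you choose, chains and loops waste crawl budget and dilute signals. A small helper can trace a URL through a redirect map (source URL to target URL, e.g. exported from your server config) and flag both problems — a sketch with illustrative names:

```python
def trace_redirects(redirect_map: dict[str, str], url: str,
                    max_hops: int = 10) -> list[str]:
    """Follow a URL through a redirect map and return the full chain.

    redirect_map maps source URL -> target URL. Raises ValueError on
    loops or chains longer than max_hops; both should be collapsed so
    every redirect resolves in a single hop.
    """
    chain = [url]
    seen = {url}
    while chain[-1] in redirect_map:
        nxt = redirect_map[chain[-1]]
        if nxt in seen:
            raise ValueError(f"redirect loop at {nxt}")
        chain.append(nxt)
        seen.add(nxt)
        if len(chain) > max_hops:
            raise ValueError("redirect chain too long")
    return chain
```

Any chain longer than two entries means a 301-to-301 hop that should be flattened before launch.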
4xx Client errors
4xx responses tell the client the request is malformed, forbidden, or points to a missing resource. For SEO, these codes govern how Google prunes the index. Used correctly, they clean up stale URLs quickly. Used incorrectly, they can strip real pages from Search overnight.
400 Bad Request
The server could not understand the request. Usually not seen by Googlebot on HTML URLs unless the site has a broken rewrite rule or the URL contains illegal characters. Treated similarly to 404 by crawlers.
401 Unauthorized
The request requires authentication that was not provided. Googlebot cannot authenticate and will not index the URL. If a public page accidentally returns 401 (session middleware applied too broadly), it disappears from Search. Check with the URL Inspection tool when pages drop from the index unexpectedly.
403 Forbidden
The server understood the request but refuses to authorize it. Unlike 401, no authentication attempt will succeed. Common causes: IP-based blocks, WAF rules, or mod_security firing on Googlebot. Verify Googlebot's user-agent and IP ranges are allow-listed on your CDN and origin.
404 Not Found
The most famous status code. Returned when the URL does not map to any resource. Google de-indexes 404s after repeated crawls (typically weeks). 404s are normal and healthy — every large site has them. Worry about 404s only when a previously indexed, high-traffic URL starts returning them unexpectedly.
410 Gone
Signals the resource was intentionally removed and will not return. Google processes 410 faster than 404 — typically de-indexed within days rather than weeks. Use for discontinued products, deleted user accounts, retired campaigns, or legal takedowns where no redirect target exists.
418 I'm a teapot
Not SEO-relevant but included for completeness. Occasionally used by Cloudflare and other CDNs to block abusive bots with a playful message. Never return 418 on pages you want indexed.
429 Too Many Requests
The client sent too many requests in a given time. Googlebot handles occasional 429s gracefully — it backs off and retries. Persistent 429s reduce crawl rate and delay indexing. If your WAF rate-limits Googlebot itself, raise the limit or allow-list verified crawler IPs. Include a Retry-After header to guide backoff timing.
451 Unavailable For Legal Reasons
Used when content is removed for legal reasons — DMCA takedowns, GDPR erasure requests, geographic restrictions, court orders. Google treats it as de-indexable. Prefer 451 over 403 when the reason is genuinely legal, so users and regulators understand the context.
5xx Server errors — especially 503 for maintenance
5xx responses signal that the server failed to fulfill a valid request. Sustained 5xx errors are the most dangerous category for SEO: Google reduces crawl rate, delays indexing, and eventually drops URLs if outages persist for weeks. Monitor 5xx rate as a reliability SLI.
500 Internal Server Error
The catch-all 5xx. Usually indicates an unhandled exception, database connection failure, or misconfiguration. Occasional 500s are normal; sustained 500s signal instability. Alert on 500 rate > 1% of total requests.
502 Bad Gateway
A proxy or gateway received an invalid response from an upstream server. Common in microservices architectures when an origin crashes behind a CDN or load balancer. Googlebot retries 502s patiently, but sustained 502s reduce crawl budget.
503 Service Unavailable
The correct status code for planned downtime. Return 503 with a Retry-After header (seconds or HTTP date) so Googlebot backs off and retries later. Never return 200 with a maintenance message — Google will treat the maintenance page as the canonical content and potentially de-index real pages. Keep 503 windows under 24 hours; sustained 503 for multiple days signals site abandonment.
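A maintenance page that gets the status and headers right is only a few lines. A minimal WSGI sketch — the body text and the 3600-second Retry-After are example values; in production, serve this from the CDN or load balancer rather than the app tier:

```python
def maintenance_app(environ, start_response):
    """Minimal WSGI app for planned downtime: 503 plus Retry-After.

    Serving 503 (not 200) tells Googlebot the outage is temporary and
    keeps the maintenance page out of the index; Retry-After hints
    when to come back.
    """
    body = b"<h1>Down for maintenance</h1><p>Back within the hour.</p>"
    start_response("503 Service Unavailable", [
        ("Content-Type", "text/html; charset=utf-8"),
        ("Retry-After", "3600"),  # example: retry in one hour
        ("Content-Length", str(len(body))),
    ])
    return [body]
```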
504 Gateway Timeout
A gateway or proxy did not receive a timely response from the origin. Common in slow database queries, cold-start serverless functions, or overloaded origins. Googlebot waits patiently on 504s, but they count against crawl budget efficiency. Fix by optimizing slow paths or increasing upstream timeouts thoughtfully.
SEO handling patterns
Status codes do not exist in isolation — they combine with headers, sitemaps, and canonicalization rules to shape how Google treats your site. These are the patterns we apply on client migrations, outages, and international rollouts.
Migration redirect map
Before launch, export every indexed URL from Search Console (Pages report) and third-party tools (Ahrefs Top Pages, Semrush Organic Pages, log files). Map each to its destination on the new site with explicit 301s — no wildcards, no patterns that rely on fallbacks. Validate the map by crawling the old domain against staging with Screaming Frog in list mode and checking every response is 301 → 200 with no chains.
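The "no chains" requirement can be enforced programmatically before the map ships. A sketch that flags self-redirects and 301-to-301 chains in a source-to-destination map (function name is illustrative):

```python
def validate_redirect_map(redirect_map: dict[str, str]) -> list[str]:
    """Return a list of problems in a migration 301 map.

    Flags self-redirects and chains (a destination that is itself a
    source, forcing a 301 -> 301 hop). An empty list means every entry
    resolves as a clean 301 -> 200 in one hop.
    """
    problems = []
    for src, dst in redirect_map.items():
        if src == dst:
            problems.append(f"self-redirect: {src}")
        elif dst in redirect_map:
            problems.append(f"chain: {src} -> {dst} -> {redirect_map[dst]}")
    return problems
```

Run this on the map file in CI so a chain introduced by a later edit fails the build instead of going live.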
Outage response
During an unplanned outage, if the origin returns 500 but the CDN is healthy, configure a stale-while-revalidate or serve-stale policy to return cached 200 responses. If cache is empty, fall through to a custom 503 page on the CDN itself. Never let 500s leak to Googlebot for extended periods — the CDN should absorb them.
International redirects
Never 302-redirect users based on IP geolocation for the canonical URL. Googlebot crawls primarily from US IPs — if your homepage 302s all US traffic to /en-us/, your /en-ca/ and other locale versions may never be crawled. Use hreflang annotations instead, and if a locale redirect is truly necessary, gate it on Accept-Language and exclude verified crawler user-agents.
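If a locale redirect really is necessary, the gating logic is simple: only the bare homepage, only non-crawler user agents, and only on Accept-Language. A sketch — the crawler token list and locale paths are example values, and the redirect it drives should be a 302:

```python
CRAWLER_TOKENS = ("googlebot", "bingbot")  # illustrative, not exhaustive


def locale_redirect(path: str, accept_language: str, user_agent: str):
    """Return a locale redirect target for the homepage, or None.

    Keys off Accept-Language rather than IP geolocation and exempts
    crawler user agents, so every locale version stays crawlable from
    US IPs.
    """
    if path != "/":
        return None                        # only redirect the bare homepage
    ua = user_agent.lower()
    if any(token in ua for token in CRAWLER_TOKENS):
        return None                        # crawlers see the canonical page
    lang = accept_language.split(",")[0].strip().lower()
    if lang.startswith("fr"):
        return "/fr-ca/"                   # example locale path
    if lang.startswith("en-ca"):
        return "/en-ca/"                   # example locale path
    return None                            # default locale: no redirect
```

UA checks are spoofable, so pair this with verified-IP allow-listing where the stakes are high.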
Parameter handling
Session IDs, tracking parameters, and sort orders create infinite URL spaces. Return a 200 with a rel=canonical tag pointing to the clean URL rather than 301-redirecting parameters away (which breaks analytics). Combine with disciplined UTM usage so tracking parameters never leak into internal links.
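Computing the clean URL for the rel=canonical tag is a matter of stripping known tracking parameters while preserving everything else. A sketch using the standard library — the parameter set is illustrative and should be extended with any session or sort parameters specific to your stack:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative tracking-parameter set; extend for your stack.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "gclid", "fbclid"}


def canonical_url(url: str) -> str:
    """Strip tracking parameters to produce the rel=canonical target."""
    parts = urlsplit(url)
    kept = [(k, v)
            for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))
```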
HTTP to HTTPS upgrades
Always 301 from http:// to https:// at the edge. Pair with HSTS (Strict-Transport-Security) headers so subsequent requests skip the redirect entirely. Include preload directives only after you are certain HTTPS is stable across all subdomains — preload is effectively permanent.
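The upgrade pattern can be sketched as a WSGI middleware: 301 any http request to its https equivalent, and attach the HSTS header to https responses. The max-age and includeSubDomains values are example policy choices; in production this belongs at the edge, not the app tier:

```python
def https_upgrade(app):
    """WSGI middleware sketch: 301 http -> https, HSTS on https."""
    def wrapped(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            host = environ.get("HTTP_HOST", "example.com")
            target = "https://" + host + environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently", [("Location", target)])
            return [b""]

        def sr(status, headers):
            # One year max-age; add "preload" only once HTTPS is stable
            # everywhere, since preload is effectively permanent.
            headers.append(("Strict-Transport-Security",
                            "max-age=31536000; includeSubDomains"))
            return start_response(status, headers)

        return app(environ, sr)
    return wrapped
```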
Debugging tools
Status codes are only useful if you can verify them reliably. The toolkit below covers day-to-day QA, incident response, and periodic audits. Most teams underuse log analysis and over-rely on browser devtools — balance both.
curl
The foundational tool. curl -I shows response headers only; curl -IL follows redirects and prints the chain. Essential for verifying 301 vs 302, Retry-After values, and cache headers. Use --user-agent "Googlebot" to test from the crawler's perspective (not a substitute for real Googlebot IPs, but it catches most UA-based gating).
Google Search Console
The Pages report shows exactly how Google classifies each URL — Indexed, Not indexed (with reasons like "Not found (404)", "Soft 404", "Redirect error"). Crawl Stats reveals 5xx rate over time. URL Inspection shows the live response code and rendered HTML. Check weekly.
Screaming Frog
Crawls your site like a search engine and reports every response code. The Redirects tab surfaces chains, loops, and status mismatches. Use list mode with your historical URL export to validate migration redirects in minutes.
Server logs
The ground truth. Access logs show every request Googlebot made and the exact status code you returned. Tools: Screaming Frog Log Analyser, Splunk, Elastic, or a simple BigQuery export of your CDN logs. Filter by verified Googlebot IPs (not user-agent alone) to avoid spoofed traffic.
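The filtering step is a one-pass parse of combined log format. A sketch that computes the 5xx rate for verified-Googlebot traffic — the verified IP set is assumed to be pre-resolved (reverse/forward DNS verification is out of scope here), and the function name is illustrative:

```python
import re

# Combined log format: ip ident user [time] "request" status bytes "referer" "ua"
LOG_RE = re.compile(
    r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) \S+ "[^"]*" "([^"]*)"')


def googlebot_5xx_rate(log_lines, verified_ips):
    """Share of verified-Googlebot requests that returned a 5xx.

    Filters by IP allow-list AND user-agent, so spoofed-UA traffic
    from unverified IPs is excluded. Returns 0.0 with no traffic.
    """
    total = errors = 0
    for line in log_lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, status, ua = m.group(1), int(m.group(2)), m.group(3)
        if ip not in verified_ips or "googlebot" not in ua.lower():
            continue
        total += 1
        if 500 <= status <= 599:
            errors += 1
    return errors / total if total else 0.0
```

Alert when this rate climbs above your reliability SLI threshold; a spike here explains crawl-rate drops before Search Console surfaces them.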
Browser DevTools
For debugging individual pages, the Network panel shows status codes, redirect chains, timing, and response headers. Enable "Preserve log" to capture redirects. Useful for frontend-triggered redirects (meta refresh, JavaScript window.location) that server-side tools miss.
httpstat.us
For testing how your monitoring, CDN, or crawler handles specific codes, httpstat.us returns any status on demand (e.g., httpstat.us/503 with a Retry-After). Useful when validating error-handling logic without breaking production.
Get your status codes audited
Wrong status codes silently erode traffic. A thorough audit surfaces redirect chains, soft 404s, accidental 302s, and sustained 5xx patterns that Google is reacting to right now. Our team combines log analysis, Search Console data, and live crawls to rebuild clean response handling across your entire site.