A technical SEO audit is the systematic process of identifying issues that prevent search engines from crawling, indexing, and ranking your website effectively. Without regular audits, technical problems accumulate silently — broken internal links, duplicate content, slow page speed, missing canonical tags, and crawl traps that waste Googlebot's time on low-value pages. A comprehensive technical SEO audit in 2026 covers eight domains: crawlability, indexing, on-page technical elements, duplicate content and canonicals, internal linking and site architecture, page speed and Core Web Vitals, structured data, and security and mobile optimisation. This checklist covers 56 critical checks across all eight domains, with specific tools and actions for each item.
Crawlability: Ensuring Googlebot Can Access Your Site
Crawlability issues prevent search engines from discovering and processing your pages. The first step in any technical SEO audit is verifying that your most important pages are accessible to crawlers. Start by reviewing your robots.txt file (yourdomain.com/robots.txt) for overly restrictive Disallow rules that block important sections of the site. Many sites accidentally block CSS, JavaScript, or entire subdirectories. Next, check your XML sitemap for accuracy — it should include only canonicalised, indexable URLs and should not include noindexed pages, redirecting URLs, or 404 error pages. Use Screaming Frog SEO Spider or Semrush Site Audit to crawl your site from the perspective of Googlebot and identify crawl errors, blocked resources, and crawl traps (infinite URL parameters, session IDs in URLs, faceted navigation pages that generate thousands of near-duplicate URLs).
- Review robots.txt — no Disallow rules blocking CSS, JS, or important content directories
- Check XML sitemap contains only 200-status, canonical, indexable URLs
- Run a full site crawl with Screaming Frog to identify blocked resources and crawl errors
- Identify URL parameter issues that generate crawl traps — handle with robots.txt rules or canonical tags (Google has retired the Search Console URL Parameters tool)
- Check for infinite crawl loops caused by faceted navigation, session IDs, or calendar pages
- Verify internal linking reaches all important pages within 3 clicks from the homepage
- Confirm crawl budget is not being wasted on pagination, duplicate filters, or legacy redirects
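The robots.txt review in the first bullet can be complemented programmatically with Python's standard-library robotparser. A minimal sketch — the robots.txt content and paths below are hypothetical examples, not taken from any real site:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for illustration. The blocked CSS directory
# is exactly the kind of accidental rule the audit should surface.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Disallow: /assets/css/
"""

def blocked_paths(robots_txt: str, paths: list[str], agent: str = "Googlebot") -> list[str]:
    """Return the paths the given user agent may NOT fetch."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [p for p in paths if not parser.can_fetch(agent, p)]

print(blocked_paths(ROBOTS_TXT, ["/services/", "/assets/css/main.css", "/admin/login"]))
# -> ['/assets/css/main.css', '/admin/login']
```

Run this against your real robots.txt (fetched from yourdomain.com/robots.txt) with a list of your most important template URLs; any important path in the output is a crawlability bug.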
Indexing: Verifying the Right Pages Are in Google's Index
Indexing issues are among the most impactful technical SEO problems because a page that is not in Google's index cannot rank for anything. Use the site:yourdomain.com search operator to check your approximate index count and spot-check that important pages appear. Cross-reference this with your total page count — a significant discrepancy indicates indexing problems. In Google Search Console, review the Page indexing report (formerly the Index Coverage report) to see which pages are indexed, which are excluded, and why. Common exclusion reasons include: pages with noindex tags (check these are intentional), pages blocked by robots.txt, pages marked 'Discovered – currently not indexed' (Googlebot found the URL but has not yet crawled it), and pages with crawl anomalies. For each exclusion reason, determine whether it is intentional or a problem to fix.
- Use site:yourdomain.com in Google to estimate index count — compare to total page count
- Review Search Console Indexing report for excluded pages and their exclusion reasons
- Check that noindex tags are only on pages you intentionally want excluded
- Fix Discovered but not indexed issues by improving internal linking and crawl budget efficiency
- Verify all important pages return HTTP 200 status — not redirects or error codes
- Check canonical tags are correct — self-canonical on canonical pages, pointing to canonical on duplicate pages
- Ensure hreflang tags are implemented correctly for multilingual or multi-regional sites
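The canonical-tag check above lends itself to automation. A minimal sketch using only Python's standard-library HTML parser — the class and function names are illustrative, not from any particular tool:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of every <link rel="canonical"> tag on a page."""
    def __init__(self):
        super().__init__()
        self.canonicals: list[str] = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel", "").lower() == "canonical" and a.get("href"):
            self.canonicals.append(a["href"])

def canonical_status(page_url: str, html: str) -> str:
    """Classify a page's canonical setup: missing, conflicting, self, or cross."""
    finder = CanonicalFinder()
    finder.feed(html)
    if not finder.canonicals:
        return "missing"
    if len(set(finder.canonicals)) > 1:
        return "conflicting"  # multiple different canonicals send mixed signals
    return "self" if finder.canonicals[0].rstrip("/") == page_url.rstrip("/") else "cross"
```

Canonical pages should report "self", known duplicates should report "cross" pointing at the right URL, and "missing" or "conflicting" on any indexable page is a fix.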
On-Page Technical SEO: Title Tags, Meta Descriptions, and Headings
On-page technical SEO elements are the basic signals Google uses to understand each page's topic and relevance. A technical audit must check them systematically across the full site, not page by page. Title tags should be unique across all pages (duplicate titles confuse Google about page differentiation), include the primary target keyword, and run 50-60 characters (longer titles get truncated in search results). Meta descriptions should be unique, 150-160 characters, and written as compelling click prompts — they do not directly affect rankings but influence CTR. H1 tags should be present on every page (one per page), contain the primary keyword, and differ from the title tag. H2 and H3 tags should be used hierarchically to structure content for both users and search engines. Use Screaming Frog to export title, meta description, and H1 data site-wide and identify duplicates, missing tags, and length violations.
- Check for duplicate title tags site-wide using Screaming Frog export
- Identify missing or empty title tags, meta descriptions, and H1 tags
- Flag title tags over 60 characters or under 30 characters
- Ensure every page has exactly one H1 tag containing the primary target keyword
- Verify meta descriptions are unique and 150-160 characters long
- Check that image alt text is present and descriptive for all important images
- Audit OG (Open Graph) tags for correct title, description, and image on all shareable pages
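The title, meta description, and H1 rules above can be folded into a small crawl helper. A hedged sketch with Python's built-in HTMLParser — the thresholds mirror the checklist, and all names are illustrative:

```python
from html.parser import HTMLParser

class OnPageExtractor(HTMLParser):
    """Pull <title>, meta description, and H1 text out of raw HTML."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.description = ""
        self.h1s: list[str] = []
        self._in = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "description":
            self.description = a.get("content", "")
        elif tag in ("title", "h1"):
            self._in = tag
            if tag == "h1":
                self.h1s.append("")

    def handle_data(self, data):
        if self._in == "title":
            self.title += data
        elif self._in == "h1":
            self.h1s[-1] += data

    def handle_endtag(self, tag):
        if tag == self._in:
            self._in = None

def flag_issues(html: str) -> list[str]:
    """Apply the length and count rules from the checklist above."""
    p = OnPageExtractor()
    p.feed(html)
    issues = []
    if not (30 <= len(p.title.strip()) <= 60):
        issues.append("title length outside 30-60 characters")
    if not (150 <= len(p.description) <= 160):
        issues.append("meta description outside 150-160 characters")
    if len(p.h1s) != 1:
        issues.append(f"expected exactly one H1, found {len(p.h1s)}")
    return issues
```

Running flag_issues over every crawled page gives you the same missing/duplicate/length report a Screaming Frog export provides, in a form you can diff between audits.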
Duplicate Content and Canonical Tag Audit
Duplicate content — multiple URLs serving the same or substantially similar content — dilutes ranking signals by splitting backlinks and crawl equity across multiple versions of the same page. The most common causes are: www vs. non-www versions both being accessible, HTTP and HTTPS both serving content, trailing slash inconsistency (example.com/page vs. example.com/page/), URL parameters creating multiple versions of the same page, printer-friendly page versions, and content syndicated across multiple pages. Canonical tags (rel='canonical') signal to Google which version of a page is the authoritative one. Audit your canonical tag implementation by crawling the site with Screaming Frog and checking that: every page has a canonical tag, canonicals are self-referencing on canonical pages, and non-canonical duplicate pages point to the correct canonical URL. Also check that your site enforces a consistent www/non-www preference and 301-redirects the non-preferred version.
- Enforce consistent www vs non-www via 301 redirect — test both in browser
- Ensure HTTP redirects to HTTPS with a 301 — all HTTP URLs should redirect permanently
- Check trailing slash consistency — pick one convention and 301-redirect the other
- Audit URL parameter pages — the Search Console URL Parameters tool has been retired, so handle duplicates with canonical tags or noindex
- Verify canonical tags are present on all pages and self-referencing on canonical pages
- Check that pagination pages (page 2, 3, etc.) are self-canonical and crawlable — Google no longer uses rel=next/prev as an indexing signal
- Identify near-duplicate content using Siteliner or Copyscape for internal content similarity
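One way to surface these duplicate-URL variants in bulk is to normalise every crawled URL to a single canonical form and group by the result. A sketch — the normalisation rules and the tracking-parameter list are assumptions you would tune per site:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameters that create duplicate URLs without changing content (assumed list).
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "fbclid", "gclid"}

def normalise(url: str) -> str:
    """Collapse the duplicate variants listed above to one canonical form:
    force https, strip www, drop the trailing slash, remove tracking params."""
    scheme, netloc, path, query, _ = urlsplit(url)
    netloc = netloc.lower().removeprefix("www.")
    path = path.rstrip("/") or "/"
    params = [(k, v) for k, v in parse_qsl(query) if k not in TRACKING_PARAMS]
    return urlunsplit(("https", netloc, path, urlencode(params), ""))

variants = [
    "http://www.example.com/page/",
    "https://example.com/page?utm_source=newsletter",
    "https://www.example.com/page",
]
# All three variants collapse to a single canonical URL.
print({normalise(u) for u in variants})
# -> {'https://example.com/page'}
```

Any group of crawled URLs that normalises to the same string but returns different canonical tags (or no redirect between them) is a duplicate-content finding.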
Internal Linking and Site Architecture Audit
Internal linking is one of the most underutilised technical SEO levers. It distributes PageRank (link equity) through your site, signals to Google which pages are most important, and ensures crawlers can discover all content. A strong internal linking architecture means: every important page receives internal links from multiple relevant pages, your most important pages (service pages, pillar content) have the most internal links pointing to them, anchor text is descriptive and keyword-relevant (not generic 'click here' or 'read more'), and no important page is more than 3 clicks from the homepage. In an audit, use Screaming Frog to identify orphaned pages (pages with no internal links pointing to them), pages with only one internal link (low authority), and pages that are over-linked to with the same anchor text (over-optimisation risk). Also check for broken internal links (links pointing to 404 or redirected URLs) which waste link equity.
- Identify orphaned pages — pages with zero internal links pointing to them
- Fix broken internal links (links to 404s or redirected URLs) identified in Screaming Frog
- Ensure important pages receive internal links with keyword-rich, descriptive anchor text
- Check that all important content is reachable within 3 clicks from the homepage
- Review internal link anchor text diversity — avoid over-optimisation with repetitive exact-match anchors
- Add breadcrumb navigation sitewide to improve both crawlability and user navigation
- Build internal links from high-authority pages (high internal link count) to pages you want to rank
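Given a crawl export of page-to-page links (for example from Screaming Frog), orphaned pages and broken internal links fall out of a simple inbound-link count. A minimal sketch with hypothetical URLs:

```python
from collections import defaultdict

def link_audit(pages: dict[str, list[str]]) -> dict[str, list[str]]:
    """Given a crawl map of page -> outgoing internal links, report
    orphaned pages (no inbound links) and broken links (targets not in the crawl)."""
    inbound = defaultdict(int)
    broken = []
    for page, links in pages.items():
        for target in links:
            if target in pages:
                inbound[target] += 1
            else:
                broken.append(f"{page} -> {target}")
    # The homepage is the crawl entry point, so it is exempt from the orphan check.
    orphans = [p for p in pages if inbound[p] == 0 and p != "/"]
    return {"orphans": orphans, "broken": broken}

crawl = {
    "/": ["/services", "/blog"],
    "/services": ["/contact"],   # /contact was never crawled: broken link
    "/blog": ["/"],
    "/old-landing-page": [],     # nothing links here: orphaned
}
print(link_audit(crawl))
```

The same inbound counts double as a rough internal-authority metric: pages you want to rank should sit near the top of the count, and a page with one inbound link is a candidate for more.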
Page Speed and Core Web Vitals Audit
Page speed and Core Web Vitals are confirmed Google ranking factors and should be audited systematically across the site, not just on the homepage. Use Google Search Console's Core Web Vitals report to identify which URL groups are failing — this field data covers real user experiences across your entire site, not just lab tests. Use PageSpeed Insights to diagnose individual URLs showing issues. The most common technical causes of poor Core Web Vitals across Indian websites are: images served without WebP conversion, LCP images that are lazy-loaded (preventing fast load), no CDN configured (leading to high TTFB for Indian users from international servers), render-blocking JavaScript from analytics or chat scripts, and CLS caused by images without dimensions or dynamically injected content. Prioritise URLs with the highest organic traffic and worst Core Web Vitals scores for immediate remediation.
- Run Google Search Console Core Web Vitals report to identify failing URL groups
- Use PageSpeed Insights to diagnose LCP, INP, and CLS issues for top traffic pages
- Check all images for WebP format, explicit dimensions, and fetchpriority on LCP images
- Measure Time to First Byte — flag pages above 800ms for hosting or CDN investigation
- Audit render-blocking resources using PageSpeed Insights opportunities section
- Test on real mobile devices — mobile Core Web Vitals are the ranking signal, not desktop
- Set up Lighthouse CI to prevent future performance regressions in the development workflow
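When triaging field data, it helps to classify each metric against Google's published Core Web Vitals thresholds (good / needs improvement / poor). A small sketch:

```python
# Google's published thresholds for each Core Web Vital:
# (good upper bound, poor lower bound). LCP and INP in ms, CLS unitless.
THRESHOLDS = {
    "LCP": (2500, 4000),
    "INP": (200, 500),
    "CLS": (0.1, 0.25),
}

def rate(metric: str, value: float) -> str:
    """Return Google's rating band for a single metric reading."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    return "needs improvement" if value <= poor else "poor"

# Example (made-up) field-data readings for one URL group.
for metric, value in [("LCP", 3200), ("INP", 180), ("CLS", 0.31)]:
    print(metric, rate(metric, value))
```

Cross-joining these ratings with organic traffic per URL gives you the prioritised remediation list the paragraph above recommends.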
Structured Data and Schema Markup Audit
A structured data audit checks whether your site's schema markup is present, accurate, and error-free. Start with Google Search Console's Enhancements reports — these show every schema type detected on your site, the number of valid items, and any errors or warnings. Then use the Rich Results Test to validate schema on individual URLs. Common schema issues found in audits include: FAQ schema with promotional answers (Google deprioritises these), Article schema missing required author or datePublished properties, missing schema on pages that would benefit from it (product pages without Product schema, blog posts without Article schema), and schema that describes content not visible on the page (a spam violation). Also check that Organization schema (the schema.org type uses the US spelling) is implemented on the homepage with all required properties including sameAs links to social profiles.
- Review Search Console Enhancements reports for schema errors and warnings site-wide
- Validate schema on key page types using Google's Rich Results Test
- Check FAQ schema answers are genuinely helpful — not promotional content
- Verify Article schema includes author (with name and url), datePublished, and dateModified
- Confirm Organization schema is on the homepage with sameAs links to all social profiles
- Identify high-value pages missing schema (product pages, how-to guides, FAQ pages)
- Test that implemented schema matches visible page content — mismatches violate Google's guidelines and can trigger a manual action
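Most modern schema is embedded as JSON-LD, so a site crawl can extract and sanity-check it directly. A sketch using only Python's standard library; the required-property list follows the Article bullet above, and all class names are illustrative:

```python
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collect every <script type="application/ld+json"> block on a page."""
    def __init__(self):
        super().__init__()
        self.blocks: list[dict] = []
        self._capturing = False
        self._buffer = ""

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._capturing, self._buffer = True, ""

    def handle_data(self, data):
        if self._capturing:
            self._buffer += data

    def handle_endtag(self, tag):
        if tag == "script" and self._capturing:
            self._capturing = False
            try:
                self.blocks.append(json.loads(self._buffer))
            except json.JSONDecodeError:
                self.blocks.append({"@error": "invalid JSON-LD"})

def missing_article_props(schema: dict) -> list[str]:
    """Flag the Article properties the checklist above says must be present."""
    required = ["author", "datePublished", "dateModified"]
    return [p for p in required if p not in schema]
```

This catches malformed JSON and missing properties at crawl time; the Rich Results Test remains the authority on whether a given block is actually eligible for rich results.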
Security, HTTPS, and Mobile Optimisation Audit
HTTPS is a confirmed Google ranking signal and a requirement for browser security indicators. Verify that your entire site is served over HTTPS with a valid SSL certificate, that all HTTP URLs redirect to HTTPS with 301 redirects, and that no mixed content warnings appear (HTTP resources like images or scripts loaded on HTTPS pages). For mobile optimisation, note that Google retired the Search Console Mobile Usability report in late 2023, so audit mobile pages with Lighthouse and real-device testing; the classic failure modes remain the same: text too small to read, clickable elements too close together, content wider than the screen. Google uses mobile-first indexing, meaning your mobile page version is the one used for ranking — mobile issues are ranking issues. Also check Core Web Vitals on mobile specifically, as mobile scores are typically 30-50% worse than desktop due to lower CPU performance and variable network conditions.
- Verify SSL certificate validity and check expiry date — set up auto-renewal if not configured
- Confirm all HTTP URLs redirect to HTTPS with 301 — check with Screaming Frog or httpstatus.io
- Check for mixed content warnings using Chrome DevTools Security panel
- Audit mobile usability with Lighthouse and real-device testing — the Search Console Mobile Usability report has been retired
- Test touch target sizes — interactive elements should be at least 44x44px on mobile
- Verify viewport meta tag is present and set to width=device-width, initial-scale=1
- Check font size legibility — minimum 16px body text for mobile readability
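The certificate-expiry check can be scripted against the standard library's ssl module. In this sketch, check_certificate needs network access and takes whatever hostname you are auditing, while days_until_expiry is pure parsing of the notAfter field getpeercert() returns:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """not_after uses the OpenSSL text format returned by getpeercert(),
    e.g. 'Jun  1 12:00:00 2026 GMT'. Negative result means already expired."""
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def check_certificate(hostname: str) -> int:
    """Fetch the live certificate for a host and return days until expiry.
    Requires network access; the hostname is whatever site you are auditing."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return days_until_expiry(cert["notAfter"])
```

Wiring check_certificate into a scheduled job that alerts below, say, 30 days gives you the auto-renewal safety net the first bullet asks for.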
A technical SEO audit is not a one-time project — it is a recurring process that should happen at minimum annually for stable sites, and quarterly for actively developed sites where changes are frequent. The 56 checks across these eight domains cover the full range of technical issues that limit search performance. For most sites, the highest-impact fixes are in crawlability (robots.txt, XML sitemap accuracy), indexing (canonical tags, noindex misuse), and page speed (image optimisation, TTFB). Fix these three areas first — they consistently deliver the largest ranking improvements per hour of work invested.
Frequently Asked Questions
How often should I run a technical SEO audit?
For actively developed websites, run a lightweight technical audit monthly (crawl errors, indexing issues, Core Web Vitals) and a full comprehensive audit quarterly. For stable sites with infrequent changes, a full audit every 6 months is sufficient. Always run a full technical audit after major site migrations, CMS upgrades, or significant structural changes to your site architecture.
What is the best free tool for a technical SEO audit?
Google Search Console is the most valuable free tool — it provides real data on indexing coverage, Core Web Vitals (field data), mobile usability, structured data errors, and manual actions. Screaming Frog SEO Spider's free version (up to 500 URLs) handles crawl analysis. PageSpeed Insights is free for page speed diagnosis. Together these three free tools cover 80% of a comprehensive technical SEO audit.
What is the most common technical SEO issue found in audits?
The most commonly found issues in technical SEO audits are: duplicate content caused by inconsistent URL conventions (www vs non-www, HTTP vs HTTPS, trailing slash), missing or incorrect canonical tags, images without explicit dimensions causing CLS, slow TTFB from poor hosting choices, and render-blocking JavaScript delaying LCP. For Indian websites specifically, slow TTFB from servers located outside India is extremely common.
What is crawl budget and should I worry about it?
Crawl budget is the number of pages Googlebot will crawl on your site within a given timeframe. For most sites under 10,000 pages with clean architecture, crawl budget is not a limiting concern — Googlebot will crawl everything. Crawl budget management becomes critical for large sites (100,000+ pages), e-commerce sites with faceted navigation generating millions of URL combinations, or sites with high levels of duplicate or low-value content that wastes crawl allocation.
How do I check if Googlebot can crawl my site?
Use Google's URL Inspection tool in Search Console — enter any URL to see whether Google has crawled it, when it was last crawled, what Google sees when it renders the page, and whether there are any indexing issues. For site-wide crawl analysis, run your site through Screaming Frog configured with Googlebot's user agent to simulate how Google crawls your site and identify blocked resources or crawl errors.
What is the difference between a crawl error and an indexing error?
A crawl error occurs when Googlebot cannot access a URL — the server returns a 4xx or 5xx error, or the URL is blocked by robots.txt. An indexing error occurs when Googlebot can access the URL but chooses not to index it — due to a noindex tag, low content quality, duplicate content, or a manual action. Both types appear in Search Console but require different fixes: crawl errors need technical fixes (server issues, robots.txt), while indexing errors need content or tag corrections.