Render check — Googlebot view
> curl -A "Googlebot" https://yoursite.com -o rendered.html
> grep -c "<h1>" rendered.html

  Result: 0
This is what Googlebot sees on a CSR-only React site. Zero <h1> tags. Zero indexable content.
JS rendering audit · included
@graph schema rebuild · included
Core Web Vitals fix list · included
Crawl-budget log analysis · included
Technical SEO Services

Most SEO agencies can't spell technical SEO.
The ones who can charge more — for good reason.

Site architecture audits, @graph schema markup, Core Web Vitals optimization, JS rendering decisions, crawl budget recovery, indexation control, migration ranking-preservation. Engineer-friendly. Specific. No marketing fluff.

300+ sites audited · 94% retention · Audit deliverable in 4–6 weeks
The technical thresholds Google actually uses
<2.5s
LCP target — 75th-percentile field data, Google's published ranking threshold
<200ms
INP target — replaced FID in March 2024, measures whole-session responsiveness
60%+
of crawl budget wasted on non-priority URLs in a typical large-site audit
80%
of CMS migrations lose significant organic traffic without an expert SEO partner
Definition

What is technical SEO?

Technical SEO is the discipline of optimizing the structural and infrastructural layer of a website so search engines can crawl, render, and rank it.

It covers site architecture, schema markup, Core Web Vitals, JavaScript rendering decisions, crawl budget, indexation control, internal linking, and migration ranking-preservation. Distinct from content SEO (which is what's said on the page) and link-building SEO (which is who points to the site). Technical SEO is what makes the site readable to crawlers and rankable in the first place — without it, the other two layers can't produce results.

The buyer for technical SEO is typically a larger site (10K+ URLs), a JavaScript-heavy single-page application, a site going through a CMS migration or rebuild, or a technical product / marketing team that already understands what technical SEO is and just needs an agency that doesn't bullshit about it.

The seven disciplines of technical SEO

Each one is a separate engineering surface.
Each one fails quietly when it's missed.

01

Site architecture audit

Information architecture, URL structure, navigation depth, faceted-navigation handling, pagination strategy.

We map every URL on the site, classify it by template, and analyse click-depth from the homepage. Pages buried four clicks deep get crawled rarely and rank poorly — we surface them through navigation rewrites, related-content modules, and breadcrumb hierarchies that distribute internal PageRank toward priority pages. Faceted navigation is where most ecommerce sites haemorrhage crawl budget: every color/size/sort combination generates a unique URL, and Googlebot wastes weeks crawling permutations of pages it should never index. We define which facets are indexable, which are noindex, and which are blocked at robots.txt — and we ship the implementation, not just the recommendation. Pagination follows the same logic: rel=prev/next is dead, so the choice is canonical-to-page-1, view-all canonicalisation, or self-canonical paginated pages — depending on whether the paginated content is the indexable surface or a navigation aid.
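
To make the click-depth pass concrete, here is a minimal TypeScript sketch. It assumes a crawl already exported as a URL-to-outlinks map; the data shape, the example URL, and the three-click threshold are illustrative, not our production crawler:

type LinkGraph = Map<string, string[]>; // URL -> internal outlinks, from a crawl export

function clickDepths(graph: LinkGraph, homepage: string): Map<string, number> {
  const depth = new Map<string, number>([[homepage, 0]]);
  const queue: string[] = [homepage];
  while (queue.length > 0) {
    const url = queue.shift()!;
    for (const target of graph.get(url) ?? []) {
      if (!depth.has(target)) {
        depth.set(target, depth.get(url)! + 1); // shortest click path from the homepage
        queue.push(target);
      }
    }
  }
  return depth; // URLs never reached here are orphans
}

// Flag everything buried too deep to be crawled reliably:
const tooDeep = (graph: LinkGraph) =>
  [...clickDepths(graph, "https://example.com/").entries()].filter(([, d]) => d > 3);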

02

Schema markup graph (@graph JSON-LD)

Organization → Person → Article/Product/Service entity chaining. Rich-result eligibility and AI-engine entity association.

Most CMS plugins ship flat schema — one isolated JSON-LD block per page. The @graph approach chains entities: a single JSON-LD block per page references Organization, WebSite, WebPage, Person (the author with sameAs to LinkedIn / Crunchbase / Wikipedia), Article (with author and publisher pointing back to Person and Organization via @id), and BreadcrumbList. Search engines parse the relationships, not just the entities. Google's E-E-A-T signal extraction depends on these relationships — knowing 'who wrote this' (Person → author of Article) and 'what's the publisher entity' (Organization → publisher) is exactly the data Google's quality systems consume. We rebuild schema at the layout level so every page inherits the Organization, Person, and WebSite entities, then layers page-specific entities on top. Rich-result eligibility goes up. AI-engine citation likelihood goes up because LLMs lean heavily on structured data when building their entity associations.
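
A trimmed sketch of the pattern, with placeholder names and URLs (a production graph carries many more properties per entity):

// One @graph block per page; entities reference each other by @id.
// All names and URLs below are placeholders.
const schemaGraph = {
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://example.com/#org",
      name: "Example Co",
      url: "https://example.com/",
    },
    {
      "@type": "WebSite",
      "@id": "https://example.com/#site",
      url: "https://example.com/",
      publisher: { "@id": "https://example.com/#org" },
    },
    {
      "@type": "Person",
      "@id": "https://example.com/#jane",
      name: "Jane Doe",
      sameAs: ["https://www.linkedin.com/in/janedoe"],
    },
    {
      "@type": "Article",
      "@id": "https://example.com/blog/post/#article",
      headline: "Post title",
      author: { "@id": "https://example.com/#jane" }, // a Person entity, not a bare string
      publisher: { "@id": "https://example.com/#org" },
      isPartOf: { "@id": "https://example.com/#site" },
    },
  ],
};

// Injected once per page:
// <script type="application/ld+json">{JSON.stringify(schemaGraph)}</script>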

03

Core Web Vitals optimization

LCP, INP, CLS — Google's published 75th-percentile field-data ranking signals.

Core Web Vitals are now a confirmed Google ranking signal, measured against 75th-percentile real-user field data — not lab data from Lighthouse. The three metrics: Largest Contentful Paint under 2.5s (the time from navigation to the largest visible element rendering), Interaction to Next Paint under 200ms (replaced FID in March 2024 — measures responsiveness across the whole session, not just the first interaction), and Cumulative Layout Shift under 0.1 (measures unexpected layout shifts during rendering). We start every Core Web Vitals engagement with the CrUX BigQuery dataset to get historical field data segmented by route template — most teams optimise the homepage and ignore the templates that drive 80% of organic traffic. LCP fixes are usually image optimisation (modern formats, responsive sizing, lazy-loading the right elements), webfont strategy (preload critical fonts, font-display swap), and JS bundle reduction. INP fixes are almost always third-party scripts and main-thread blocking from heavy frameworks — code-splitting, deferring, and moving work into web workers. CLS fixes are reserving space for images, ads, and embeds with declared dimensions.
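
For spot-checks between BigQuery pulls, the public CrUX API returns the same p75 field data per URL. A minimal sketch; the API key is a placeholder and error handling is omitted:

// Query the CrUX API for one URL's p75 field data.
async function cruxP75(url: string): Promise<void> {
  const res = await fetch(
    "https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=YOUR_KEY",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ url, formFactor: "PHONE" }),
    },
  );
  const data = (await res.json()) as any;
  // Thresholds at p75: LCP <= 2500 ms, INP <= 200 ms, CLS <= 0.1.
  for (const metric of [
    "largest_contentful_paint",
    "interaction_to_next_paint",
    "cumulative_layout_shift",
  ]) {
    console.log(metric, "p75 =", data.record?.metrics?.[metric]?.percentiles?.p75);
  }
}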

04

JavaScript rendering audit

CSR vs SSR vs ISR vs SSG decisions per route. Next.js, React, Vue, Angular, Svelte sites need rendering modes that match SEO requirements.

Modern JS frameworks default to client-side rendering, which is the SEO failure mode. Googlebot does render JS, but on a delayed second-pass crawl — indexation gets unreliable past a few thousand URLs, content updates take days to surface in the index, and Bing/Yandex/AI engines render JS far less reliably than Google. The fix is choosing the right rendering mode per route. SSG (static generation at build time, e.g. Next.js getStaticProps or generateStaticParams) is the gold standard for marketing pages, blog posts, documentation — fastest possible SEO performance, but rebuilds get expensive past ~10K pages. ISR (incremental static regeneration) handles large catalogs that update periodically — ecommerce product pages, programmatic SEO templates — by regenerating pages on demand or on a schedule. SSR (server-side rendering on every request) is right for pages with fresh data on every load. Pure CSR is acceptable only for authenticated routes that don't need to rank. We audit the route map, classify each route, and migrate routes off CSR onto the right rendering mode — usually as part of a broader Next.js / Nuxt / SvelteKit migration.
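
In Next.js App Router terms, the per-route decision looks roughly like this. The route, the API endpoints, and the revalidation window are all illustrative:

// app/products/[slug]/page.tsx: ISR for a large catalog (placeholder route and API).
export const revalidate = 3600; // regenerate in the background at most once an hour

export async function generateStaticParams() {
  // Pre-render only the highest-traffic slugs at build time (the SSG subset);
  // the long tail is generated on first request, then cached (ISR).
  const top: { slug: string }[] = await fetch(
    "https://api.example.com/products?sort=traffic&limit=1000",
  ).then((r) => r.json());
  return top.map(({ slug }) => ({ slug }));
}

export default async function ProductPage(
  { params }: { params: { slug: string } },
) {
  const product = await fetch(
    `https://api.example.com/products/${params.slug}`,
  ).then((r) => r.json());
  return <h1>{product.name}</h1>;
}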

05

Crawl budget management

Robots.txt, sitemap segmentation, internal-link bottleneck analysis, parameter handling, log-file analysis.

For sites past 10K URLs, crawl budget is mostly about preventing waste. Five concrete levers. First, robots.txt: block faceted-navigation parameters, infinite-scroll pagination URLs, and any duplicate-content surface that has no business being crawled. Second, sitemap segmentation: split sitemaps by content type and priority — Googlebot crawls high-priority sitemaps more aggressively than low-priority ones. Third, internal linking: pages with few internal links get crawled rarely. We audit orphan pages (pages with zero internal links pointing to them) and rewire navigation, footer, and related-content modules to point crawl toward priority pages. Fourth, parameter handling: canonicalize the parameterized URLs to the clean version, return 301s on legacy parameters, and set Vary headers correctly (Search Console's URL Parameters tool was retired in 2022, so parameter handling has to be solved at the server and markup level). Fifth, log-file analysis: ingest server logs (Splunk, Cloudflare Logs, Logflare, or custom parsers) to see which URLs Googlebot is actually crawling vs ignoring — almost always the surprise is that Google is wasting 60%+ of its crawl on URLs you don't care about. Every percentage point of recovered crawl budget translates to faster indexation of pages you do care about.
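
The log pass reduces to a small script once the logs are on disk. A minimal sketch assuming combined-format access logs; a production run verifies Googlebot by reverse DNS rather than trusting the user-agent string:

import { readFileSync } from "node:fs";

// Share of Googlebot crawl going to each URL bucket.
const hits = new Map<string, number>();
for (const line of readFileSync("access.log", "utf8").split("\n")) {
  if (!line.includes("Googlebot")) continue; // UA strings are spoofable
  const path = line.split('"')[1]?.split(" ")[1]; // request line: "GET /path HTTP/1.1"
  if (!path) continue;
  const bucket = path.includes("?") ? "parameterized" : "/" + (path.split("/")[1] ?? "");
  hits.set(bucket, (hits.get(bucket) ?? 0) + 1);
}
const total = [...hits.values()].reduce((a, b) => a + b, 0);
for (const [bucket, n] of [...hits].sort((a, b) => b[1] - a[1])) {
  console.log(bucket.padEnd(24), ((n / total) * 100).toFixed(1) + "%");
}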

06

Indexation control

Canonical handling, hreflang, noindex/nofollow strategy, duplicate-content resolution, GSC inspection at scale.

Indexation is where technical SEO becomes a yes/no question — either a page is in Google's index or it isn't. We audit indexation programmatically: every URL classified as indexed, excluded by canonical, excluded by noindex, discovered-not-indexed, or crawled-not-indexed via the GSC URL Inspection API. We automate batch inspection for audits that span 10K+ URLs because manual inspection at that scale is impossible. The diagnosis matrix: 'discovered-not-indexed' usually means low quality or duplicate-content perception (fix: improve content uniqueness or noindex if genuinely thin), 'crawled-not-indexed' means Google saw it and chose not to index (fix: improve content quality, internal linking, or remove if redundant), 'excluded by canonical' means a canonical tag is pointing elsewhere (fix: audit canonical implementation, look for accidental cross-domain canonicals). Hreflang for multilingual sites is its own discipline — we audit hreflang return-tag pairing, x-default declaration, and self-referential implementation. Most multilingual sites have at least one broken hreflang relationship blocking the entire international architecture from working.
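
The batch-inspection loop itself is small. The endpoint and response fields below are from the public API; OAuth token acquisition is omitted, and since the API is quota-limited per property, a 10K+ URL audit runs over several days:

// Batch-inspect URLs with the GSC URL Inspection API.
async function inspectAll(urls: string[], siteUrl: string, token: string) {
  for (const inspectionUrl of urls) {
    const res = await fetch(
      "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect",
      {
        method: "POST",
        headers: {
          Authorization: `Bearer ${token}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ inspectionUrl, siteUrl }),
      },
    );
    const data = (await res.json()) as any;
    // coverageState is the field that separates indexed, crawled-not-indexed,
    // discovered-not-indexed, and canonical-excluded URLs.
    console.log(inspectionUrl, data.inspectionResult?.indexStatusResult?.coverageState);
  }
}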

07

Migration / rebuild SEO

Ranking-preservation when re-platforming. WordPress to Next.js, Shopify to BigCommerce, custom to headless. Distinct project scope.

Migration SEO is its own discipline because the failure mode is silent and catastrophic — you can lose 80% of organic traffic in a launch week and not realize until rankings drop weeks later. Most agencies don't do migration SEO at all; the few that do treat it as a separate engineering project. Six-phase framework. Phase one: pre-migration crawl of the legacy site (every URL, every redirect chain, every canonical, every internal link target, every sitemap entry). Phase two: URL mapping spreadsheet — every legacy URL mapped to its destination on the new site, with a redirect strategy for each (301 to direct match, 301 to closest match, 410 for genuinely retired pages). Phase three: schema, hreflang, canonical, robots.txt parity audit — the new site must replicate the working signals from the legacy site exactly, before adding new ones. Phase four: staging-environment crawl to verify redirects, schema, internal links, render parity for Googlebot. Phase five: launch with real-time monitoring (GSC URL Inspection API, log files, rank tracking on the top 200 keywords). Phase six: 30/60/90-day post-launch reconciliation, fixing the inevitable broken redirects and indexation issues that surface only at scale. Most migrations fail because phase two gets rushed and phase six gets skipped.
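
Phase four's redirect verification is mechanical enough to script. A sketch assuming the phase-two mapping is already loaded; CSV parsing and concurrency are omitted:

// Verify each legacy URL returns its planned status and 301 target on staging.
type Mapping = { legacy: string; target: string; status: 301 | 410 };

async function verifyRedirects(map: Mapping[]): Promise<number> {
  let failures = 0;
  for (const { legacy, target, status } of map) {
    const res = await fetch(legacy, { redirect: "manual" }); // don't follow the hop
    const location = res.headers.get("location");
    const ok = res.status === status && (status !== 301 || location === target);
    if (!ok) {
      failures++;
      console.error(`FAIL ${legacy}: got ${res.status} -> ${location ?? "(none)"}`);
    }
  }
  return failures; // gate the launch on zero failures
}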

Engagement scenarios

Four scopes. Each with a different shape
and a different success metric.

Scenario 01
4–6 weeks

Audit-only

Technical audit deliverable. Hand-off to your engineering team.

Full technical audit covering all 7 disciplines: architecture, schema, Core Web Vitals, JS rendering, crawl budget, indexation, migration-readiness if relevant. Deliverable is an 80–120-page working document with prioritised remediation roadmap, code samples for every fix, and a live walkthrough with your engineering team. We hand over and your team executes. Best fit for engineering-led teams who want senior strategy without ongoing agency overhead.

Best fit: Engineering-led teams · 10K+ URLs · clear in-house implementation capacity
Scenario 02
Audit 4–6 weeks, then ongoing

Audit + implementation

We audit, then ship the fixes alongside your engineering team.

Same audit deliverable, then we work directly with your engineering team — pull requests, code reviews, schema implementations, redirect maps, sitemap configs. We're embedded enough to ship code but separate enough to bring the strategic frame. Pricing structured as audit fee plus monthly retainer based on engineering velocity and scope. Best fit for teams that have engineering capacity but no senior SEO direction.

Best fit: Mid-market sites · engineering capacity · no in-house senior SEO
Scenario 03
90-day project scope

Migration / rebuild SEO

Ranking-preservation across a re-platform or rebuild.

Distinct project scope because the work is non-recurring and time-bound. Six-phase framework: pre-migration crawl, URL mapping, parity audit, staging-environment verification, launch monitoring, 30/60/90-day reconciliation. We've shipped migrations from WordPress to Next.js, Shopify to BigCommerce, custom legacy to headless, and major domain consolidations. Ranking-preservation is the deliverable — and we run dashboards across the full top-200-keyword set to prove it.

Best fit: Re-platforms · domain consolidations · CMS migrations · headless rebuilds
Scenario 04
Monthly, month-to-month

Ongoing technical retainer

Core Web Vitals monitoring, schema upgrades, indexation health.

After the audit and initial implementation, the technical SEO surface keeps moving. New page templates ship and need schema. Core Web Vitals regress as features get added. Algorithm updates surface new indexation patterns. We run monthly technical health reviews — CrUX field data, GSC indexation reports, log-file ingestion, schema validation — and ship fixes as they're identified. Best fit for sites with continuous deployment velocity and no in-house technical SEO function.

Best fit: SaaS · enterprise ecommerce · continuous-deploy teams · scaled programmatic
The discipline most agencies don't do

Migration SEO is where most sites
lose their rankings.

When a site re-platforms — WordPress to Next.js, Shopify to BigCommerce, custom legacy to headless, two domains consolidating into one — the SEO migration plan determines whether the site keeps or loses 80%+ of its organic traffic. The failure mode is silent and catastrophic: rankings hold for the first week post-launch because Google's index hasn't caught up yet, then they collapse over weeks two through six as Google reconciles the new URL space. By the time anyone notices, recovery is a six-to-twelve-month project, if it's recoverable at all.

Most agencies don't do migration SEO. The few that do treat it as a separate engineering discipline with its own scope, timeline, and success metrics. We treat it the same way: ninety-day project scope, six-phase framework, ranking-preservation dashboards covering the top-200 keyword set, post-launch reconciliation through the 90-day mark when the index has fully reconverged.

Joel's published methodology in The Growth Architecture covers migration extensively because it's where most growth programs collapse — and where the gap between operators who know what they're doing and operators who are guessing becomes a six-figure traffic loss.

SCOPE A MIGRATION SEO PROJECT
Why work with us

Engineer-friendly SEO partners.
Specific deliverables.
No marketing fluff.

We work with engineering teams the way good engineering consultants do — with precise scope, concrete deliverables, and code-level specificity in every recommendation.

Published methodology, not pitch material

Joel House wrote The Growth Architecture and AI for Revenue, both on Barnes & Noble at 5.0 stars. The technical SEO chapter in The Growth Architecture covers schema graph implementation, JS rendering decisions, and migration framework — the same playbook we ship to clients. Most agencies have decks. We have published books.

Audit deliverables you can actually act on

Our technical audit deliverable is an 80–120-page working document with prioritised remediation roadmap, code samples for every fix, GSC URL Inspection batch results, log-file analysis if accessible, and a live walkthrough with your engineering team. Not slides. Not a Notion page with bullet points. A document your engineers can implement from.

We ship code, not just recommendations

Audit + implementation engagements include direct work with your engineering team — pull requests for schema implementations, redirect maps, sitemap configs, robots.txt rewrites, Core Web Vitals fixes. We're embedded enough to ship, separate enough to bring a senior strategic frame. Code-review-ready PRs, not handoff documents.

AI-engine visibility built in

Technical SEO in 2026 isn't just about Google — ChatGPT, Perplexity, Gemini, Claude, and Google AI Overviews all consume structured data, render content differently, and cite different surfaces. Every technical engagement includes Mention Layer tracking and AI-engine citation analysis as a baseline. The schema graph we build serves both Google rich results and LLM entity associations.

Senior operators only, no juniors on the audit

The technical audit is performed by a senior operator with site-architecture, schema, and rendering experience — not handed to a junior with a Screaming Frog license. Auditing a 50K-URL site and missing the canonical loop hidden in the legacy parameter handling is the kind of failure mode that costs six figures in lost organic. We don't scale by adding juniors. We scale by being deliberate about what we take on.

Common questions

What technical teams ask before they hire a technical SEO agency.

What Core Web Vitals thresholds does Google actually rank on?

Google's published thresholds for the 75th percentile of real-user data: LCP under 2.5s, INP under 200ms (replaced FID in March 2024), CLS under 0.1. We benchmark against field data from CrUX, not lab data from Lighthouse — lab scores are a useful debugging tool but they're not what Google actually ranks on. For most sites we audit, the LCP element is either an unoptimised hero image, a render-blocking webfont, or a large JS bundle that delays paint. INP issues are almost always third-party scripts or main-thread blocking from a heavy framework. CLS is usually webfonts shifting layout or ads/embeds with undeclared dimensions. The fixes are mechanical once you've correctly identified the bottleneck — most sites are guessing because they're reading the wrong tool.

What's the difference between flat schema and @graph schema?

Flat schema means dropping a single JSON-LD block per page (an Article or Product or Service). @graph schema chains entities together using @id references — Organization, WebSite, WebPage, Person (the author), Article, BreadcrumbList — so search engines parse the relationships between entities, not just the entities themselves. The @graph approach is what Google's E-E-A-T signal extraction actually consumes: 'who wrote this' (Person → Article author), 'who published it' (Organization → publisher), 'what site does it belong to' (WebSite → isPartOf). Most sites ship flat schema because their CMS or framework defaults to it. We rebuild schema as a graph at the layout level so every page inherits Organization + Person + WebSite identities, then layers page-specific entities on top. Rich-result eligibility goes up significantly. AI-engine citation likelihood goes up because LLMs lean heavily on structured data when building their entity associations.

How do you choose between SSG, ISR, SSR, and CSR?

The decision rests on three axes: how often content changes, how big the URL space is, and how SEO-critical each route is. SSG (static generation at build time) is the gold standard for pages that change infrequently — marketing pages, blog posts, documentation. Fastest possible SEO performance, but rebuilds get expensive past ~10K pages. ISR (incremental static regeneration in Next.js) handles large catalogs that update periodically — ecommerce product pages, programmatic SEO templates, large content libraries — by regenerating pages on demand or on a schedule. SSR (server-side rendering on every request) is the right call for pages that need fresh data on every load: search results, personalized content, real-time dashboards. CSR (pure client-side rendering) is the trap most JS-heavy sites fall into — Googlebot can render JS but does it on a delayed second-pass crawl, indexation gets unreliable past a few thousand URLs, and Bing/Yandex/AI engines render JS far less reliably than Google. We audit the route map, classify each route, and migrate routes off CSR onto the right rendering mode for their SEO profile.

How do you manage crawl budget on a large site?

Crawl budget management at scale is mostly about preventing waste, not maximizing crawl. Five concrete levers. First, robots.txt: block faceted-navigation parameters (color, size, sort, page=2..N) that generate thousands of duplicate URLs. Second, sitemap segmentation: split sitemaps by content type and priority — Googlebot crawls high-priority sitemaps more aggressively. Third, internal linking: pages with few internal links get crawled rarely; we audit orphan pages and rewire navigation/footer/related-content to point crawl toward priority pages. Fourth, server-side parameter handling: canonicalize the parameterized URLs to the clean version, return 301s on legacy parameters, set Vary headers correctly. Fifth, log-file analysis: we ingest server logs (or use a tool like Logflare/Splunk/Cloudflare Logs) to see which URLs Googlebot is actually crawling vs ignoring — almost always the surprise is that Google is wasting 60%+ of its crawl on URLs you don't care about. Every percentage point of recovered crawl budget translates to faster indexation of pages you do care about.
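
For concreteness, the facet-blocking rules look something like this in robots.txt. The parameter names are placeholders, and blocking is only safe for facets that should never be indexed, since a blocked URL can't consolidate signals through its canonical:

User-agent: *
Disallow: /*?*color=
Disallow: /*?*size=
Disallow: /*?*sort=
Disallow: /*?*page=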

How do you preserve rankings through a migration?

Migration SEO is its own discipline because the failure mode is silent and catastrophic — you can lose 80% of organic traffic in a launch week and not realize until the rankings drop weeks later. The framework is six phases. Phase one: pre-migration crawl of the legacy site (every URL, every redirect chain, every canonical, every internal link target). Phase two: URL mapping spreadsheet — every legacy URL mapped to its destination on the new site, with a redirect strategy for each (301 to direct match, 301 to closest match, 410 for genuinely retired pages). Phase three: schema, hreflang, canonical, robots.txt parity audit — the new site must replicate the working signals from the legacy site exactly, before adding new ones. Phase four: staging-environment crawl to verify redirects, schema, internal links, render parity for Googlebot. Phase five: launch with real-time monitoring (GSC URL Inspection API, log files, rank tracking on the top 200 keywords). Phase six: 30/60/90-day post-launch reconciliation, fixing the inevitable broken redirects and indexation issues that surface only at scale. Most migrations fail because phase two gets rushed and phase six gets skipped.

What's in the audit deliverable?

A technical audit deliverable from us is a working document, not a slide deck. Eight sections. (1) Site architecture map: full URL inventory, click-depth analysis, orphan-page list, navigation hierarchy review. (2) Indexation report: every URL classified as indexed/excluded/discovered-not-indexed via the GSC URL Inspection API, with the diagnosis for each exclusion class. (3) Schema markup audit: current schema inventory, validation errors, recommended @graph rebuild with code samples. (4) Core Web Vitals report: field-data benchmarks per route template, LCP/INP/CLS bottleneck identification, prioritised fix list. (5) JavaScript rendering audit: which routes are CSR vs SSR vs SSG, render-difference comparison between fetched-as-HTML and fetched-as-rendered, recommendations per route class. (6) Crawl-budget analysis: log-file ingestion if accessible, parameter handling review, robots.txt and sitemap optimisation. (7) Internal linking audit: PageRank-style flow analysis, deep-page-discovery review, anchor-text distribution. (8) Prioritised remediation roadmap: every issue ranked by SEO impact × engineering effort, with the actual fix for each (not just 'fix this'). Typical audit deliverable runs 80–120 pages plus appendices. We hand it over and walk through every section live.

When should we hire in-house instead of an agency?

An in-house technical SEO is the right hire when you have continuous engineering velocity that needs SEO review on every release — typically that's a SaaS company $5M+ ARR with weekly deploys, or an enterprise ecommerce site over 100K SKUs with constant catalog changes. The role works because there's enough recurring SEO surface area to justify a full-time salary ($120–180K USD for a senior, $180–260K USD for a head of SEO). An agency is the right choice when (a) you need a one-time technical lift — audit, migration, schema rebuild, Core Web Vitals fix; (b) you have engineering velocity but the SEO work is project-shaped rather than continuous; (c) you're under $5M ARR and a full-time hire is over-investment; or (d) you want a senior strategist alongside an in-house junior who handles execution. We routinely operate as the senior-strategy partner alongside in-house teams — they ship the code, we set the technical direction and review the deploys.

What tooling do you use?

The kit, by category. Crawl analysis: Screaming Frog (default), Sitebulb (relationship visualisation), our own custom Node.js crawler for client-specific edge cases. Log file analysis: Splunk for enterprise, Cloudflare Logs / Logflare for mid-market, custom parsers for self-hosted server logs. Indexation: Google Search Console URL Inspection API at scale (we automate batch inspection for 10K+ URL audits), Bing Webmaster Tools, IndexNow API submission. Schema validation: Google's Rich Results Test, Schema.org validator, our own internal validator that checks @graph relationship integrity. Core Web Vitals: PageSpeed Insights for spot-checks, CrUX BigQuery dataset for historical field data, Vercel Analytics / Cloudflare Web Analytics for real-user telemetry. JS rendering: the GSC URL Inspection rendered-HTML view (the standalone Mobile-Friendly Test is retired), Puppeteer / Playwright scripts for custom render diffing. Migration: Wayback Machine + custom diff scripts for legacy-site reconstruction, Screaming Frog redirect-chain analysis. Plus the GSC and Google Analytics MCPs we use across every engagement for data ingestion. Tooling matters less than the operator using it — but the kit above is the kit.
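
The render-diff idea in miniature, as a Puppeteer sketch. A real diff compares extracted content, internal links, and meta tags rather than just counting tags:

import puppeteer from "puppeteer";

// Compare raw HTML (what a non-rendering crawler sees) against the post-JS DOM.
async function renderDiff(url: string): Promise<void> {
  const raw = await fetch(url).then((r) => r.text());
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });
  const rendered = await page.content();
  await browser.close();

  const count = (html: string, tag: string) =>
    (html.match(new RegExp(`<${tag}[\\s>]`, "g")) ?? []).length;
  for (const tag of ["h1", "a", "p"]) {
    console.log(tag, "raw:", count(raw, tag), "rendered:", count(rendered, tag));
  }
  // A large raw-vs-rendered gap means the route's content depends on CSR.
}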

Technical SEO that ships

Your JS framework is either rendering for Googlebot or it isn't.
Most agencies don't check.

30-minute technical strategy call with Joel. We'll do a live render check on your top route templates, a quick GSC indexation spot-audit, and a Core Web Vitals field-data pull from CrUX. No deck. No “we'll get back to you with a proposal.”