```shell
$ curl -A "Googlebot" https://yoursite.com > rendered.html
$ grep -c "<h1>" rendered.html
0
```
Zero <h1> tags. Zero indexable content. Most SEO agencies can't spell technical SEO.
The ones who can charge more — for good reason.
Site architecture audits, @graph schema markup, Core Web Vitals optimization, JS rendering decisions, crawl budget recovery, indexation control, migration ranking-preservation. Engineer-friendly. Specific. No marketing fluff.
What is technical SEO?
Technical SEO is the discipline of optimizing the structural and infrastructural layer of a website so search engines can crawl, render, and rank it.
It covers site architecture, schema markup, Core Web Vitals, JavaScript rendering decisions, crawl budget, indexation control, internal linking, and migration ranking-preservation. Distinct from content SEO (which is what's said on the page) and link-building SEO (which is who points to the site). Technical SEO is what makes the site readable to crawlers and rankable in the first place — without it, the other two layers can't produce results.
The buyer for technical SEO is typically a larger site (10K+ URLs), a JavaScript-heavy single-page application, a site going through a CMS migration or rebuild, or a technical product / marketing team that already understands what technical SEO is and just needs an agency that doesn't bullshit about it.
Each one is a separate engineering surface.
Each one fails quietly when it's missed.
Site architecture audit
Information architecture, URL structure, navigation depth, faceted-navigation handling, pagination strategy.
We map every URL on the site, classify it by template, and analyse click-depth from the homepage. Pages buried four clicks deep get crawled rarely and rank poorly — we surface them through navigation rewrites, related-content modules, and breadcrumb hierarchies that distribute internal PageRank toward priority pages. Faceted navigation is where most ecommerce sites haemorrhage crawl budget: every color/size/sort combination generates a unique URL, and Googlebot wastes weeks crawling permutations of pages it should never index. We define which facets are indexable, which are noindex, and which are blocked at robots.txt — and we ship the implementation, not just the recommendation. Pagination follows the same logic: rel=prev/next is dead, so the choice is canonical-to-page-1, view-all canonicalisation, or self-canonical paginated pages — depending on whether the paginated content is the indexable surface or a navigation aid.
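As a sketch of the faceted-navigation rules described above — the parameter names here are hypothetical, and the right indexable/blocked split depends on which facets carry search demand:

```txt
# robots.txt — example facet handling (parameter names are placeholders)
User-agent: *
# Sort and view permutations never deserve crawl
Disallow: /*?*sort=
Disallow: /*?*view=
# Block multi-facet combinations; single-facet pages stay crawlable
Disallow: /*?*color=*&size=
Sitemap: https://example.com/sitemap-index.xml
```

Single-facet pages that map to real search demand (e.g. a color landing page) stay crawlable and self-canonical; everything blocked here is also canonicalized to the clean category URL as a belt-and-braces measure.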
Schema markup graph (@graph JSON-LD)
Organization → Person → Article/Product/Service entity chaining. Rich-result eligibility and AI-engine entity association.
Most CMS plugins ship flat schema — one isolated JSON-LD block per page. The @graph approach chains entities: a single JSON-LD block per page references Organization, WebSite, WebPage, Person (the author with sameAs to LinkedIn / Crunchbase / Wikipedia), Article (with author and publisher pointing back to Person and Organization via @id), and BreadcrumbList. Search engines parse the relationships, not just the entities. Google's E-E-A-T signal extraction depends on these relationships — knowing 'who wrote this' (Person → author of Article) and 'what's the publisher entity' (Organization → publisher) is exactly the data Google's quality systems consume. We rebuild schema at the layout level so every page inherits the Organization, Person, and WebSite entities, then layer page-specific entities on top. Rich-result eligibility goes up. AI-engine citation likelihood goes up because AI engines lean heavily on structured data when building their entity associations.
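A minimal sketch of the entity chaining described above — names and URLs are placeholders, and a production graph also carries WebPage and BreadcrumbList nodes:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://example.com/#org",
      "name": "Example Co",
      "url": "https://example.com/"
    },
    {
      "@type": "WebSite",
      "@id": "https://example.com/#website",
      "url": "https://example.com/",
      "publisher": { "@id": "https://example.com/#org" }
    },
    {
      "@type": "Person",
      "@id": "https://example.com/#author",
      "name": "Jane Doe",
      "sameAs": ["https://www.linkedin.com/in/janedoe"]
    },
    {
      "@type": "Article",
      "@id": "https://example.com/post/#article",
      "headline": "Example post",
      "author": { "@id": "https://example.com/#author" },
      "publisher": { "@id": "https://example.com/#org" },
      "isPartOf": { "@id": "https://example.com/#website" }
    }
  ]
}
```

The @id references are what make this a graph rather than four disconnected blocks: a parser resolves author and publisher to the same Person and Organization nodes on every page of the site.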
Core Web Vitals optimization
LCP, INP, CLS — Google's published 75th-percentile field-data ranking signals.
Core Web Vitals are now a confirmed Google ranking signal, measured against 75th-percentile real-user field data — not lab data from Lighthouse. The three metrics: Largest Contentful Paint under 2.5s (the time from navigation to the largest visible element rendering), Interaction to Next Paint under 200ms (replaced FID in March 2024 — measures responsiveness across the whole session, not just the first interaction), and Cumulative Layout Shift under 0.1 (measures unexpected layout shifts during rendering). We start every Core Web Vitals engagement with the CrUX BigQuery dataset to get historical field data segmented by route template — most teams optimise the homepage and ignore the templates that drive 80% of organic traffic. LCP fixes are usually image optimisation (modern formats, responsive sizing, lazy-loading the right elements), webfont strategy (preload critical fonts, font-display swap), and JS bundle reduction. INP fixes are almost always third-party scripts and main-thread blocking from heavy frameworks — code-splitting, deferring, web-workering. CLS fixes are reserving space for images, ads, and embeds with declared dimensions.
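A few of those fixes in markup form — filenames and the embed URL are placeholders:

```html
<!-- LCP: preload the critical webfont; give the hero image fetch priority -->
<link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>
<img src="/hero.avif" width="1200" height="630" fetchpriority="high" alt="Hero">

<!-- CLS: declared dimensions reserve layout space before the asset loads -->
<iframe src="https://example.com/embed" width="560" height="315" loading="lazy" title="Embed"></iframe>
```

The width/height pair matters even with responsive CSS: the browser derives the aspect ratio from it and reserves the box, which is what keeps late-loading images and embeds from shifting the layout.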
JavaScript rendering audit
CSR vs SSR vs ISR vs SSG decisions per route. Next.js, React, Vue, Angular, Svelte sites need rendering modes that match SEO requirements.
Modern JS frameworks default to client-side rendering, which is the SEO failure mode. Googlebot does render JS, but on a delayed second-pass crawl — indexation gets unreliable past a few thousand URLs, content updates take days to surface in the index, and Bing/Yandex/AI engines render JS far less reliably than Google. The fix is choosing the right rendering mode per route. SSG (static generation at build time, e.g. Next.js getStaticProps or generateStaticParams) is the gold standard for marketing pages, blog posts, documentation — fastest possible SEO performance, but rebuilds get expensive past ~10K pages. ISR (incremental static regeneration) handles large catalogs that update periodically — ecommerce product pages, programmatic SEO templates — by regenerating pages on demand or on a schedule. SSR (server-side rendering on every request) is right for pages with fresh data on every load. Pure CSR is acceptable only for authenticated routes that don't need to rank. We audit the route map, classify each route, and migrate routes off CSR onto the right rendering mode — usually as part of a broader Next.js / Nuxt / SvelteKit migration.
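The per-route decision above reduces to a small classifier. A sketch in TypeScript — the field names are our own shorthand for the route's SEO requirements, not any framework API:

```typescript
// Map a route's SEO requirements onto a rendering mode.
type Mode = "SSG" | "ISR" | "SSR" | "CSR";

interface Route {
  path: string;
  needsIndexing: boolean;       // must rank in search
  freshPerRequest: boolean;     // data changes on every load
  updatesPeriodically: boolean; // large catalog, periodic updates
}

function renderingMode(r: Route): Mode {
  if (!r.needsIndexing) return "CSR";      // authenticated/app routes
  if (r.freshPerRequest) return "SSR";     // per-request data
  if (r.updatesPeriodically) return "ISR"; // catalogs, programmatic SEO
  return "SSG";                            // marketing, blog, docs
}
```

Run over the full route map, this produces the migration worklist: every route currently on CSR that classifies as anything else is an indexation risk.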
Crawl budget management
Robots.txt, sitemap segmentation, internal-link bottleneck analysis, parameter handling, log-file analysis.
For sites past 10K URLs, crawl budget is mostly about preventing waste. Five concrete levers. First, robots.txt: block faceted-navigation parameters, infinite-scroll pagination URLs, and any duplicate-content surface that has no business being crawled. Second, sitemap segmentation: split sitemaps by content type and freshness — segmented sitemaps turn GSC's index-coverage reporting into a per-template diagnostic and get new content discovered faster. Third, internal linking: pages with few internal links get crawled rarely. We audit orphan pages (pages with zero internal links pointing to them) and rewire navigation, footer, and related-content modules to point crawl toward priority pages. Fourth, parameter handling: canonical the parameterized URLs to the clean version and 301 legacy parameters — Search Console's URL Parameters tool was retired in 2022, so parameter handling now lives entirely on-site in canonicals, redirects, and robots.txt. Fifth, log-file analysis: ingest server logs (Splunk, Cloudflare Logs, Logflare, or custom parsers) to see which URLs Googlebot is actually crawling vs ignoring — almost always the surprise is that Google is wasting 60%+ of its crawl on URLs you don't care about. Every percentage point of recovered crawl budget translates to faster indexation of pages you do care about.
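A minimal sketch of the log-file waste measurement — the log format here is a deliberately simplified "user-agent path" line, where a real parser handles full combined-log entries and verifies Googlebot by reverse DNS:

```typescript
// Share of Googlebot hits landing on parameterized (likely crawl-waste) URLs.
function crawlWasteShare(logLines: string[]): number {
  const googlebot = logLines.filter((l) => l.includes("Googlebot"));
  if (googlebot.length === 0) return 0;
  const wasted = googlebot.filter((l) => {
    const path = l.split(" ")[1] ?? ""; // simplified "<ua> <path>" format
    return path.includes("?");         // query string = candidate waste
  });
  return wasted.length / googlebot.length;
}
```

The real version buckets by route template instead of a single query-string check, but the shape of the answer is the same: a percentage of crawl going somewhere it shouldn't.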
Indexation control
Canonical handling, hreflang, noindex/nofollow strategy, duplicate-content resolution, GSC inspection at scale.
Indexation is where technical SEO becomes a yes/no question — either a page is in Google's index or it isn't. We audit indexation programmatically: every URL classified as indexed, excluded by canonical, excluded by noindex, discovered-not-indexed, or crawled-not-indexed via the GSC URL Inspection API. We automate batch inspection for audits that span 10K+ URLs because manual inspection at that scale is impossible. The diagnosis matrix: 'discovered-not-indexed' usually means low quality or duplicate-content perception (fix: improve content uniqueness or noindex if genuinely thin), 'crawled-not-indexed' means Google saw it and chose not to index (fix: improve content quality, internal linking, or remove if redundant), 'excluded by canonical' means a canonical tag is pointing elsewhere (fix: audit canonical implementation, look for accidental cross-domain canonicals). Hreflang for multilingual sites is its own discipline — we audit hreflang return-tag pairing, x-default declaration, and self-referential implementation. Most multilingual sites have at least one broken hreflang relationship blocking the entire international architecture from working.
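The return-tag pairing rule is mechanical enough to script. A sketch — real audits run this over every template and locale pair, and also validate the x-default entry:

```typescript
// hreflang is only valid when every referenced URL links back: if page A
// declares an alternate pointing at page B, B must declare one back at A.
type HreflangMap = Record<string, Record<string, string>>; // url -> lang -> alternate url

function missingReturnTags(site: HreflangMap): [string, string][] {
  const broken: [string, string][] = [];
  for (const [url, alternates] of Object.entries(site)) {
    for (const altUrl of Object.values(alternates)) {
      if (altUrl === url) continue; // self-referential entry is required, not broken
      const back = site[altUrl];
      const linksBack = back !== undefined && Object.values(back).includes(url);
      if (!linksBack) broken.push([url, altUrl]);
    }
  }
  return broken;
}
```

A single missing return tag invalidates that pair in both directions, which is why one broken relationship can stall an entire international architecture.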
Migration / rebuild SEO
Ranking-preservation when re-platforming. WordPress to Next.js, Shopify to BigCommerce, custom to headless. Distinct project scope.
Migration SEO is its own discipline because the failure mode is silent and catastrophic — you can lose 80% of organic traffic in a launch week and not realize until rankings drop weeks later. Most agencies don't do migration SEO at all; the few that do treat it as a separate engineering project. Six-phase framework. Phase one: pre-migration crawl of the legacy site (every URL, every redirect chain, every canonical, every internal link target, every sitemap entry). Phase two: URL mapping spreadsheet — every legacy URL mapped to its destination on the new site, with a redirect strategy for each (301 to direct match, 301 to closest match, 410 for genuinely retired pages). Phase three: schema, hreflang, canonical, robots.txt parity audit — the new site must replicate the working signals from the legacy site exactly, before adding new ones. Phase four: staging-environment crawl to verify redirects, schema, internal links, render parity for Googlebot. Phase five: launch with real-time monitoring (GSC URL Inspection API, log files, rank tracking on the top 200 keywords). Phase six: 30/60/90-day post-launch reconciliation, fixing the inevitable broken redirects and indexation issues that surface only at scale. Most migrations fail because phase two gets rushed and phase six gets skipped.
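Phase two's URL map is just data, which is why it can be validated before launch. A sketch, assuming the map lives in a simple structure of our own design:

```typescript
// Each legacy URL either 301s to a mapped destination or 410s if retired.
interface Mapping { legacy: string; dest: string | null } // null = intentionally retired

function redirectPlan(mappings: Mapping[]): { url: string; action: string }[] {
  return mappings.map((m) =>
    m.dest === null
      ? { url: m.legacy, action: "410" }           // gone: retired content
      : { url: m.legacy, action: `301 ${m.dest}` } // permanent redirect
  );
}

// Pre-launch sanity check: no legacy URL may be mapped twice.
function validateMap(mappings: Mapping[]): string[] {
  const seen = new Set<string>();
  const errors: string[] = [];
  for (const m of mappings) {
    if (seen.has(m.legacy)) errors.push(`duplicate mapping: ${m.legacy}`);
    seen.add(m.legacy);
  }
  return errors;
}
```

The phase-four staging crawl then asserts the live server behaves exactly as this plan says — every mapped URL returning its declared status and destination.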
Four scopes. Each with a different shape
and a different success metric.
Audit-only
Full technical audit covering all 7 disciplines: architecture, schema, Core Web Vitals, JS rendering, crawl budget, indexation, migration-readiness if relevant. Deliverable is an 80–120-page working document with prioritised remediation roadmap, code samples for every fix, and a live walkthrough with your engineering team. We hand over and your team executes. Best fit for engineering-led teams who want senior strategy without ongoing agency overhead.
Audit + implementation
Same audit deliverable, then we work directly with your engineering team — pull requests, code reviews, schema implementations, redirect maps, sitemap configs. We're embedded enough to ship code but separate enough to bring the strategic frame. Pricing structured as audit fee plus monthly retainer based on engineering velocity and scope. Best fit for teams that have engineering capacity but no senior SEO direction.
Migration / rebuild SEO
Distinct project scope because the work is non-recurring and time-bound. Six-phase framework: pre-migration crawl, URL mapping, parity audit, staging-environment verification, launch monitoring, 30/60/90-day reconciliation. We've shipped migrations from WordPress to Next.js, Shopify to BigCommerce, custom legacy to headless, and major domain consolidations. Ranking-preservation is the deliverable — and we run dashboards across the full top-200-keyword set to prove it.
Ongoing technical retainer
After the audit and initial implementation, the technical SEO surface keeps moving. New page templates ship and need schema. Core Web Vitals regress as features get added. Algorithm updates surface new indexation patterns. We run monthly technical health reviews — CrUX field data, GSC indexation reports, log-file ingestion, schema validation — and ship fixes as they're identified. Best fit for sites with continuous deployment velocity and no in-house technical SEO function.
Migration SEO is where most sites
lose their rankings.
When a site re-platforms — WordPress to Next.js, Shopify to BigCommerce, custom legacy to headless, two domains consolidating into one — the SEO migration plan determines whether the site keeps or loses 80%+ of its organic traffic. The failure mode is silent and catastrophic: rankings hold for the first week post-launch because Google's index hasn't caught up yet, then they collapse over weeks two through six as Google reconciles the new URL space. By the time anyone notices, recovery is a six-to-twelve-month project, if it's recoverable at all.
Most agencies don't do migration SEO. The few that do treat it as a separate engineering discipline with its own scope, timeline, and success metrics. We treat it the same way: ninety-day project scope, six-phase framework, ranking-preservation dashboards covering the top-200 keyword set, post-launch reconciliation through the 90-day mark when the index has fully reconverged.
Joel's published methodology in The Growth Architecture covers migration extensively because it's where most growth programs collapse — and where the gap between operators who know what they're doing and operators who are guessing becomes a six-figure traffic loss.
Engineer-friendly SEO partners.
Specific deliverables.
No marketing fluff.
We work with engineering teams the way good engineering consultants do — with precise scope, concrete deliverables, and code-level specificity in every recommendation.
Published methodology, not pitch material
Joel House wrote The Growth Architecture and AI for Revenue, both published on Barnes & Noble and both rated 5.0 stars. The technical SEO chapter in The Growth Architecture covers schema graph implementation, JS rendering decisions, and the migration framework — the same playbook we ship to clients. Most agencies have decks. We have published books.
Audit deliverables you can actually act on
Our technical audit deliverable is an 80–120-page working document with prioritised remediation roadmap, code samples for every fix, GSC URL Inspection batch results, log-file analysis if accessible, and a live walkthrough with your engineering team. Not slides. Not a Notion page with bullet points. A document your engineers can implement from.
We ship code, not just recommendations
Audit + implementation engagements include direct work with your engineering team — pull requests for schema implementations, redirect maps, sitemap configs, robots.txt rewrites, Core Web Vitals fixes. We're embedded enough to ship, separate enough to bring senior strategic frame. Code-review-ready PRs, not handoff documents.
AI-engine visibility built in
Technical SEO in 2026 isn't just about Google — ChatGPT, Perplexity, Gemini, Claude, and Google AI Overviews all consume structured data, render content differently, and cite different surfaces. Every technical engagement includes Mention Layer tracking and AI-engine citation analysis as a baseline. The schema graph we build serves both Google rich results and LLM entity associations.
Senior operators only, no juniors on the audit
The technical audit is performed by a senior operator with site-architecture, schema, and rendering experience — not handed to a junior with a Screaming Frog license. Auditing a 50K-URL site and missing the canonical loop hidden in the legacy parameter handling is the kind of failure mode that costs six figures in lost organic. We don't scale by adding juniors. We scale by being deliberate about what we take on.
The SEO surfaces around technical SEO.
What technical teams ask before they hire a technical SEO agency.
Your JS framework is either rendering for Googlebot or it isn't.
Most agencies don't check.
30-minute technical strategy call with Joel. We'll do a live render check on your top route templates, a quick GSC indexation spot-audit, and a Core Web Vitals field-data pull from CrUX. No deck. No “we'll get back to you with a proposal.”