In 2026, half the buying journey ends in an AI answer.
Most UK agencies score four of the seven things that matter.
Here are the seven.
Pulse runs seven weighted categories on every audit. Same rubric, same weights, same number for the same site next month. The three categories that didn't exist five years ago carry 43% of the composite, by design. That's where the market moved.
Version 1 · April 2026
Why seven, not six.
The classic UK SEO audit covers six things. Technical health. On-page. Off-page. Content. Local. Analytics. The pattern is older than the iPhone. Pulse keeps the technical and content categories that still earn their weight, and adds three that have appeared since 2024.
AI Citability measures whether ChatGPT and Perplexity will actually quote your page. Platform Optimisation measures readiness across five distinct AI search surfaces, each with different preferences. Agent Readiness measures the twelve emerging standards for AI-agent discoverability that Cloudflare, OpenAI, and Anthropic are building infrastructure around right now.
A 2018 audit framework grades a 2026 buyer's site against rules nobody is buying against any more. Three of the seven cards below are flagged for a reason.
The seven categories.
AI Citability & Visibility
Weight 25%. How likely ChatGPT, Perplexity, Google AI Overviews, and Gemini are to quote the page. Five-component weighted scorer: answer-block quality (30%), self-containment (25%), structural readability (20%), statistical density (15%), uniqueness signals (10%). Graded A to F.
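Under the weights above, the five-component rollup is simple to sketch. The component weights come from this card; the function names and the A-to-F boundaries below are illustrative assumptions, not Pulse's published thresholds.

```python
# Sketch of a five-component weighted citability score.
# Weights are from the rubric above; grade boundaries are assumed.

COMPONENT_WEIGHTS = {
    "answer_block_quality": 0.30,
    "self_containment": 0.25,
    "structural_readability": 0.20,
    "statistical_density": 0.15,
    "uniqueness": 0.10,
}

def citability_score(components: dict) -> float:
    """Weighted rollup of per-component scores (each 0 to 100)."""
    return sum(COMPONENT_WEIGHTS[name] * components[name]
               for name in COMPONENT_WEIGHTS)

def letter_grade(score: float) -> str:
    """Map a 0-100 score to A-F (boundaries are assumptions)."""
    for threshold, grade in [(90, "A"), (75, "B"), (60, "C"), (45, "D")]:
        if score >= threshold:
            return grade
    return "F"

page = {
    "answer_block_quality": 80,
    "self_containment": 70,
    "structural_readability": 90,
    "statistical_density": 40,
    "uniqueness": 50,
}
score = citability_score(page)  # 70.5 under these inputs, grade C
```

Note how a page can be strong on structure and still land a middling grade: the statistical-density component drags the weighted sum even when everything else is good.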
Almost no UK SEO audit measures this. AI engines decide what gets cited based on passage shape, not domain authority.
Brand Authority
Weight 18%. Presence across the platforms AI models rely on to decide whether an entity is real and worth citing. Scans Wikipedia, Wikidata, LinkedIn, Crunchbase, G2, Trustpilot, industry directories, and press mentions. Produces a Brand Authority Score (0 to 100) with platform-specific recommendations.
Content Quality & E-E-A-T
Weight 18%. Experience, Expertise, Authoritativeness, Trustworthiness across the content corpus. Detects AI-generated filler, thin pages, missing author attribution, absent review or date metadata, vague claims without sources. Chunked map-reduce scoring for long pages.
Technical Foundations
Weight 13%. Core Web Vitals (INP, LCP, CLS), Lighthouse performance, SSL and TLS posture, HTTP caching strategy, security headers (HSTS, CSP, Permissions-Policy, X-Content-Type-Options), mobile optimisation, server-side rendering. Measured via Google PageSpeed Insights plus a full site crawl.
Structured Data
Weight 8%. JSON-LD graph completeness. Organization, LocalBusiness, ProfessionalService, Person, WebSite, WebPage, FAQPage, HowTo, Article, plus speakable specifications. Schema-per-page coverage, not homepage-only. Twelve recommended types scored. Penalty for missing required fields or invalid JSON-LD.
Platform Optimisation
Weight 8%. Readiness for each major AI search surface. Google AI Overviews, ChatGPT web search, Perplexity, Gemini, Bing Copilot. Each platform has distinct preferences (freshness, schema types, content structure). Pulse reports per-platform grades plus specific fixes.
Each AI search surface has different preferences. A retainer that only ships for Google leaves four platforms unoptimised by default.
Agent Readiness
Weight 10%. Twelve emerging standards for AI-agent discoverability, authentication, commerce, and tool exposure. Benchmarked against isitagentready.com coverage. Includes llms.txt validity, MCP server exposure, WebMCP probing, agent-payment standards (x402), structured availability declarations.
Twelve new standards published since 2024. Most agencies have never heard of llms.txt, MCP, or x402. Ours is one of the few audits that scores them.
What drives the score.
Two sites in the same industry can land twenty points apart. Variance comes from six factors. Knowing which one is hurting you is half the audit.
Site age and crawl depth
A two-year-old site with 40 indexed pages caps differently from a 10-year-old site with 4,000. Older domains carry link equity and historical citations. New domains start at the floor.
Content shape, not just length
AI engines preferentially cite passages that are 100 to 200 words, self-contained, statistically dense, and structurally clean. A 4,000-word essay with no extractable answer block scores lower than four 150-word answers in the same category.
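As a rough illustration of that passage shape, a heuristic check might look like the following. The thresholds and patterns are assumptions for demonstration, not Pulse's actual scorer.

```python
import re

def looks_like_answer_block(passage: str) -> bool:
    """Rough heuristic for the citable passage shape described above:
    100 to 200 words, some statistical density, self-contained opening.
    Thresholds and patterns are illustrative assumptions."""
    words = passage.split()
    if not 100 <= len(words) <= 200:
        return False
    # Statistical density: at least two numeric or percentage tokens.
    if len(re.findall(r"\d[\d,.]*%?", passage)) < 2:
        return False
    # Self-containment proxy: no dangling pronoun as the opener.
    return re.match(r"(this|that|it|these|those)\b", passage, re.I) is None

# A 120-ish-word passage with two statistics passes; a short one fails.
good = "filler " * 110 + "Prices rose 8% to 120 pounds in 2025."
```

A 4,000-word essay fails the length gate outright. The fix is restructuring into multiple self-contained answers, not trimming the essay.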
Schema-per-page, not homepage-only
Most sites ship Organization schema on the homepage and nothing else. Pulse penalises this. Every public page should carry WebPage plus its content type (Article, Product, FAQPage, HowTo) plus speakable specifications.
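A per-page graph in that shape can be emitted from a template. Here's a minimal sketch; the entity values are hypothetical, and a real implementation would also attach speakable specifications and validate required fields.

```python
import json

def page_jsonld(url: str, title: str, faq: list) -> str:
    """Emit a per-page JSON-LD graph: WebPage plus its content type
    (FAQPage in this example). Entity values are placeholders."""
    graph = {
        "@context": "https://schema.org",
        "@graph": [
            {"@type": "WebPage", "@id": url, "name": title},
            {
                "@type": "FAQPage",
                "mainEntity": [
                    {
                        "@type": "Question",
                        "name": q,
                        "acceptedAnswer": {"@type": "Answer", "text": a},
                    }
                    for q, a in faq
                ],
            },
        ],
    }
    return json.dumps(graph, indent=2)

markup = page_jsonld(
    "https://example.com/pricing",
    "Pricing",
    [("How much is an audit?", "Pricing is published on this page.")],
)
```

The point of the `@graph` form is that each public page carries its own nodes, rather than one Organization block living on the homepage and nothing anywhere else.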
Brand presence across non-Google platforms
Wikipedia, Wikidata, Crunchbase, LinkedIn company pages, Trustpilot, G2, Hacker News, Reddit. AI models cross-reference these to decide if an entity is real. Strong Google rankings don't help here.
AI crawler access
GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Applebot-Extended, Amazonbot, Meta-ExternalAgent, Bytespider. Pulse checks eighteen distinct crawlers. A default WordPress robots.txt blocks several of them without the operator realising.
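The crawler-access check itself needs nothing exotic; Python's standard-library robots parser is enough. Four of the eighteen user-agents are shown, and the rule set is a hypothetical example of the kind of configuration that blocks GPTBot without touching anything else.

```python
from urllib import robotparser

# Four of the eighteen AI crawler user-agents, for brevity.
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

def blocked_agents(robots_txt: str, url: str = "https://example.com/") -> list:
    """Return which AI crawlers this robots.txt blocks for `url`."""
    parser = robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [ua for ua in AI_CRAWLERS if not parser.can_fetch(ua, url)]

rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /wp-admin/
"""
blocked = blocked_agents(rules)  # GPTBot is blocked; the rest fall through to *
```

Run against a live site, the same function takes the fetched robots.txt body; the operator often discovers the block was inherited from a plugin default, not a decision.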
llms.txt and agent-readiness exposure
Sites that publish a valid llms.txt, expose an MCP server, and declare agent-payment compatibility (x402) score higher because they're discoverable by the next layer of AI tooling. Most sites publish nothing of the kind.
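For llms.txt, even the shape check is cheap. This sketch follows the draft llms.txt convention (an H1 title first, optionally followed by a blockquote summary and link sections); a full validator would also resolve the listed links.

```python
def llms_txt_looks_valid(text: str) -> bool:
    """Minimal shape check per the draft llms.txt convention:
    the first non-blank line must be a single H1 title. A real
    validator would also check the summary and link sections."""
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    return bool(lines) and lines[0].startswith("# ")

# Hypothetical example file for a local trade business.
sample = "# Example Ltd\n> Plumbing services in Leeds.\n\n## Docs\n- Pricing"
```

Most sites fail this check for the simplest possible reason: the file does not exist at all.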
Where most sites land on first audit.
Pulse has run against UK businesses across four archetypes. The numbers below are first-audit clusters, not targets. Use them to calibrate expectations before you run yours.
| Archetype | First-audit range |
|---|---|
| Local service: plumbers, dentists, accountants, local trades. GBP-driven, NAP-driven, less brand authority by design. | 35 to 55 |
| Regional growth business: multi-location operators and growing UK SMEs. Stronger schema, more content depth, mid-strength brand presence. | 40 to 65 |
| National brand: national D2C, professional services, mid-market SaaS. Decent infra, real content team, gaps usually in AI categories. | 45 to 70 |
| Global B2B: international agencies, enterprise SaaS, listed companies. Stronger floor, ceiling capped by AI platform readiness. | 50 to 75 |
What the number means.
Composite scores collapse into four bands. Each band implies a different shape of work and a different time-to-results.
Foundation work blocks anything else. Crawler access, schema, technical hygiene, basic content rebuild. Eight to twelve weeks of focused work before the score moves.
Most sites land here on first audit. Real gaps in two or three categories. Quick wins available alongside structural work. Twelve weeks lifts most sites by ten to fifteen points.
Foundation is solid. Work shifts to AI surfaces, brand entity signals, content shape. Smaller gains per week, compounding over a quarter.
Site is in the top decile. The work is defending the score against algorithm shifts and competitor moves, not chasing new ground.
How the composite works.
The composite health score is a weighted rollup over measurable categories only. If a category cannot be scored for a given run (for example, no Google Search Console connection means no platform-optimisation signal), that category is skipped, not zeroed. Weights renormalise over the remaining categories, so a partial run still produces an honest weighted average over what was measured.
This is deliberate. A black-box scorer that penalises you for missing data you never supplied is not a reproducible scorer. It's a sales tactic.
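Under the stated category weights, the skip-and-renormalise behaviour is a few lines. The category keys are shorthand, and `None` stands in for a category that couldn't be measured on a given run.

```python
# Category weights as published on this page.
CATEGORY_WEIGHTS = {
    "ai_citability": 0.25,
    "brand_authority": 0.18,
    "content_eeat": 0.18,
    "technical": 0.13,
    "structured_data": 0.08,
    "platform_optimisation": 0.08,
    "agent_readiness": 0.10,
}

def composite(scores: dict) -> float:
    """Weighted rollup over measured categories only. Unscored
    categories (None) are skipped, and the remaining weights
    renormalise so a partial run stays an honest average."""
    measured = {k: v for k, v in scores.items() if v is not None}
    total_weight = sum(CATEGORY_WEIGHTS[k] for k in measured)
    return sum(CATEGORY_WEIGHTS[k] * v for k, v in measured.items()) / total_weight

full = {k: 60.0 for k in CATEGORY_WEIGHTS}
partial = dict(full, platform_optimisation=None)  # no GSC connection this run
```

With every measured category at 60, the composite stays 60 whether or not platform optimisation was measurable. Skipping never drags the number down; it just narrows what the number is an average of.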
Reproducibility guarantees.
- Same site plus same content plus same week equals same score, within normal API-response variance.
- All LLM-graded categories (E-E-A-T, citability, platform optimisation) use deterministic prompts and temperature zero.
- Chunked-content scoring uses a published map-reduce algorithm. No "the model decides how to split" magic.
- When a score changes month on month, Pulse names exactly which sub-signals moved and by how much.
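The chunked map-reduce pass described above is similarly mechanical: a fixed, published split (map), deterministic per-chunk scores, then an aggregate (reduce). The 400-word window and the mean reducer in this sketch are illustrative assumptions, not the published algorithm's exact parameters.

```python
def chunk_words(text: str, size: int = 400) -> list:
    """Fixed word-window split (the map step). The window size is an
    assumption; the point is that the split is fixed and published,
    not chosen by the model at run time."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score_long_page(text: str, score_chunk) -> float:
    """Score each chunk with a deterministic scorer, then average
    (the reduce step)."""
    chunks = chunk_words(text)
    return sum(score_chunk(c) for c in chunks) / len(chunks)
```

Because the split is deterministic, the same page always produces the same chunks, which is what makes a month-on-month score delta attributable to content changes rather than to the splitter.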
What this score does not measure.
Honest limits. If your problem is on this list, Pulse is the wrong product and we'll tell you at the scoping call.
Paid media
Google Ads, Meta Ads, programmatic. Different discipline, different team, different budget line. We don't price audits against it.
Conversion rate optimisation
Heatmaps, A/B tests, funnel analysis, checkout repair. Pulse measures whether you can be found. CRO measures whether the visitor converts.
Sales enablement and lifecycle
Email automation, CRM sequencing, customer onboarding. Outside scope.
Brand strategy and creative
Logo, voice, positioning, campaign creative. We measure brand entity signal across AI-relevant platforms, not brand health in the marketing-team sense.
We run Pulse on ourselves.
Our own live Pulse score, run against this domain as a Dominance-tier audit on 2026-04-20, is published in full. Composite 50 / 100. The honest category breakdown shows exactly where we score well and where we're still fixing things. Content E-E-A-T and Agent Readiness are real gaps, flagged for the next fix cycle. The Deep Audit PDF is downloadable too. If a competitor's audit isn't this transparent about its own site, ask why.
Next step.
Run a Pulse Check on your own site. Sixty seconds. Measured data, not marketing claims. The Pulse Check uses a subset of the same scorers that run inside the paid audit, so the result is calibrated against the rubric on this page.