A site can score a perfect 100 on Lighthouse, rank in the top three on every commercial keyword, and still be invisible to ChatGPT. We've audited sites where exactly that happens. The composite report says "everything is fine". The buyer never sees the brand because the AI engines never quote it. Single-bucket scoring lets that happen. The Pulse rubric splits the work into seven categories so a strong score in one cannot quietly hide a zero in another.
This post walks through each category, why it earns its place in the rubric, and what a passing score looks like in practice.
Why seven, not three
Older SEO rubrics collapse the work into three buckets: technical, on-page, off-page. That model worked when Google was the only customer the page had to satisfy. It stops being useful the moment AI engines enter the buying journey, because they retrieve, extract, and attribute on criteria different from the ones Google's ranking algorithm rewards. A single "content quality" bucket can score well for SEO and zero for GEO at the same time. Seven categories isolate the signals so the composite cannot average them away into a falsely reassuring single number.
Per-category weights and the full scoring contract are at /scoring.
The seven categories
1. AI Citability
Per-passage scoring of every block of prose on the site. Each passage is graded on length (60 to 200 words is the citation sweet spot), claim density, statistic presence, source attribution, and semantic self-containment. The output is a list of passages, each graded A through F. Rewriting F-grade passages is usually the highest-leverage post-audit task because it converts copy you already have into text the model can actually quote.
Passing score: at least 60 percent of body passages graded C or above. No F-grades on commercially important pages.
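To make the grading concrete, here is a minimal sketch of a per-passage grader. The weights and cut-offs are illustrative assumptions, not the production rubric; only the 60-to-200-word sweet spot is taken from the rubric itself.

```typescript
// Illustrative per-passage grader. Weights and cut-offs are assumptions;
// the 60-to-200-word sweet spot is the only number from the rubric. Semantic
// self-containment needs an embedding model, so it is left out of the sketch.
type Grade = "A" | "B" | "C" | "D" | "F";

function gradePassage(text: string): { words: number; score: number; grade: Grade } {
  const words = text.trim().split(/\s+/).length;
  const sentences = text.split(/[.!?]+/).filter(s => s.trim().length > 0).length;
  const stats = (text.match(/\d[\d,.]*\s?%?/g) ?? []).length;                  // statistic presence
  const attributed = /according to|source:|\bstudy\b|\bsurvey\b/i.test(text);  // source attribution

  let score = 0;
  score += words >= 60 && words <= 200 ? 40 : 10; // citation sweet spot
  score += Math.min(stats * 10, 30);              // statistics make a passage quotable
  score += attributed ? 15 : 0;
  score += sentences >= 3 ? 15 : 5;               // crude claim-density proxy

  const grade: Grade =
    score >= 80 ? "A" : score >= 65 ? "B" : score >= 50 ? "C" : score >= 35 ? "D" : "F";
  return { words, score, grade };
}
```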
2. Crawlers and Schema
Which AI agents can fetch the page, which schema types render in the head, and whether an llms.txt file is present and accurate. Eighteen named AI crawlers are checked against robots.txt. JSON-LD is parsed and validated against Google's Rich Results rules. Schema text is compared character-for-character against visible content to catch the "mismatched schema" case where the page tells the crawler one thing and the user another, which Google flags as a quality issue.
Passing score: all major AI crawlers allowed, at least eight schema types rendering, llms.txt present and naming the canonical pages.
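As a sketch of the crawler check, here is a naive robots.txt probe that flags any named AI agent whose user-agent group carries a blanket Disallow. The five agents listed are a sample of the eighteen, and real robots.txt parsing has more edge cases than this handles.

```typescript
// Naive sketch: flag AI agents blocked outright in robots.txt.
// The agent list is a sample of the eighteen checked, not the full set.
const AI_AGENTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "CCBot"];

async function blockedAiAgents(origin: string): Promise<string[]> {
  const res = await fetch(new URL("/robots.txt", origin));
  if (!res.ok) return []; // no robots.txt: nothing is blocked

  // Split the file into user-agent groups: consecutive user-agent lines
  // share the directives that follow them.
  const groups: { agents: string[]; disallowAll: boolean }[] = [];
  let current: { agents: string[]; disallowAll: boolean } | null = null;
  let lastWasAgent = false;

  for (const raw of (await res.text()).split(/\r?\n/)) {
    const line = raw.split("#")[0].trim();
    const m = line.match(/^(user-agent|disallow)\s*:\s*(.*)$/i);
    if (!m) continue;
    const [, key, value] = m;
    if (key.toLowerCase() === "user-agent") {
      if (!lastWasAgent || !current) {
        current = { agents: [], disallowAll: false };
        groups.push(current);
      }
      current.agents.push(value.toLowerCase());
      lastWasAgent = true;
    } else {
      if (current) current.disallowAll ||= value === "/";
      lastWasAgent = false;
    }
  }

  return AI_AGENTS.filter(agent =>
    groups.some(g => g.disallowAll && g.agents.includes(agent.toLowerCase()))
  );
}
```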
3. Platform Readiness
Per-engine fitness scoring for ChatGPT, Perplexity, Gemini, Bing Copilot, and Google AI Overviews. Each engine has measurable preferences. ChatGPT favours older, broader editorial sources. Perplexity weights recency and citation density. Gemini draws on Google's index plus authoritative sources. Each engine is scored independently so the composite does not hide a per-engine zero behind a healthy average.
Passing score: three of five engines green-flagged. No engine scoring below 30 of 100.
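The no-averaging rule is easiest to see in code. A minimal sketch, assuming green means 60 or above (the rubric's exact green threshold isn't stated here):

```typescript
// Per-engine readiness aggregation. The point is the floor check: a
// per-engine zero fails the gate even when the average looks healthy.
type Engine = "chatgpt" | "perplexity" | "gemini" | "bingCopilot" | "aiOverviews";

function platformReadiness(scores: Record<Engine, number>) {
  const values = Object.values(scores);
  const greenFlagged = values.filter(s => s >= 60).length; // assumed green threshold
  return {
    average: values.reduce((a, b) => a + b, 0) / values.length,
    passes: greenFlagged >= 3 && values.every(s => s >= 30), // 3 of 5 green, none below 30
  };
}

// A respectable-looking average of 58 still fails: Gemini is below the 30 floor.
console.log(platformReadiness({
  chatgpt: 75, perplexity: 80, gemini: 20, bingCopilot: 65, aiOverviews: 50,
}));
```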
4. Content E-E-A-T
Trust signals visible to the retrieval model. Bylines on every post, dated content, named sources, methodology pages, founder bios with verifiable credentials, internal author pages with sameAs links to external profiles. AI engines pattern-match these signals when deciding which sources to trust. A page with no author and no date is easier to ignore than one written by a named human with a public track record.
Passing score: trust signals evidenced on at least three landing pages plus the homepage.
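As one concrete example of a machine-readable trust signal, here is roughly what the Person markup on an author page could look like, sketched in TypeScript. The name and URLs are placeholders.

```typescript
// Placeholder author-page trust signal: Person JSON-LD with sameAs links
// pointing at verifiable external profiles.
const authorSchema = {
  "@context": "https://schema.org",
  "@type": "Person",
  name: "Jane Example",
  jobTitle: "Founder",
  url: "https://example.com/about/jane",
  sameAs: [
    "https://www.linkedin.com/in/jane-example",
    "https://github.com/jane-example",
  ],
};

// Rendered into the page head so retrieval models can pattern-match it.
const jsonLdTag =
  `<script type="application/ld+json">${JSON.stringify(authorSchema)}</script>`;
```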
5. Brand Authority
Off-site mention surface across Hacker News, Reddit, Wikipedia, Common Crawl, plus archetype-relevant directories (Clutch, LinkedIn, Sortlist for B2B services; GBP and Trustpilot for local services). Each platform contributes to a 0 to 100 brand authority score. Consistent metadata across platforms strengthens the signal more than any single high-authority mention, because the model is looking for corroboration, not a single source of truth.
Passing score: active presence on three platforms with consistent NAP (name, address, phone) and a Wikipedia article or equivalent canonical entity record.
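The corroboration idea reduces to a simple consistency check. A sketch, with assumed field shapes:

```typescript
// Normalise NAP records pulled from each platform and check they agree.
// The record shape and normalisation are assumptions for illustration.
interface NapRecord { platform: string; name: string; address: string; phone: string }

const normalise = (s: string) => s.toLowerCase().replace(/[^a-z0-9]/g, "");

function napConsistent(records: NapRecord[]): boolean {
  if (records.length < 3) return false; // the passing score needs three platforms
  const keys = records.map(r =>
    [r.name, r.address, r.phone].map(normalise).join("|")
  );
  return new Set(keys).size === 1; // every platform tells the same story
}
```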
6. Technical Foundations
The classic SEO floor. Indexability (no accidental noindex, canonical sanity), HTTP headers (HSTS, CSP, content-type), Core Web Vitals (INP under 200ms, LCP under 2.5s), JavaScript dependency check (does the page render without JS for raw-HTML crawlers), mobile rendering parity. AI engines piggyback on Google's index, so a broken technical floor drags every other category down regardless of how good the content is.
Passing score: Lighthouse SEO and accessibility 95 or above, INP green-banded, no CLS spikes, all routes server-side rendered or prerendered.
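The JavaScript dependency check can be approximated by fetching the page the way a raw-HTML crawler would and looking for the main content before any script runs. A rough sketch; the user-agent string and the marker-phrase approach are our own assumptions:

```typescript
// Fetch the raw HTML a non-rendering crawler sees and test whether the main
// content is present without JavaScript. markerPhrase is a placeholder for
// a known string from the page's body copy.
async function rendersWithoutJs(url: string, markerPhrase: string): Promise<boolean> {
  const res = await fetch(url, {
    headers: { "User-Agent": "pulse-audit/raw-html-check" }, // assumed UA string
  });
  const html = await res.text();

  // Strip script bodies so inline JSON blobs don't mask a client-only page.
  const withoutScripts = html.replace(/<script[\s\S]*?<\/script>/gi, "");
  return withoutScripts.includes(markerPhrase);
}
```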
7. Agent Readiness
Twelve emerging standards covering RFC 8288 Link headers, RFC 9116 security.txt, RFC 9727 api-catalog, OpenAPI 3.1 publication, robots.txt content signals, markdown content negotiation, MCP server cards, OAuth discovery, and JWKS publication. The standards are recent, so most sites still have headroom here. Closing the gap is often the cheapest points-per-hour work in the rubric, which is why we treat it as the first place to look when a client wants a quick lift.
Passing score: at least 70 of 100 on countable scored checks. The composite is capped at the structural maximum for whichever checks apply to the site type.
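Several of the twelve checks are plain HTTP probes. Here is a sketch covering three of them; the well-known paths come from the RFCs, while the response shape is our own:

```typescript
// Probe three agent-readiness standards over plain HTTP. The well-known
// paths are defined by RFC 9116 and RFC 9727 respectively.
async function agentReadinessProbe(origin: string) {
  const head = await fetch(origin, { method: "HEAD" });
  const securityTxt = await fetch(new URL("/.well-known/security.txt", origin));
  const apiCatalog = await fetch(new URL("/.well-known/api-catalog", origin));
  return {
    hasLinkHeader: head.headers.has("link"), // RFC 8288 Link headers
    hasSecurityTxt: securityTxt.ok,          // RFC 9116 security.txt
    hasApiCatalog: apiCatalog.ok,            // RFC 9727 api-catalog
  };
}
```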
How the composite is calculated
Each category returns a 0 to 100 score. The weighted sum produces the headline 0 to 100 composite. The exact weights and how they interact are documented at /scoring.
Bands: 0 to 39 critical, 40 to 59 below the floor, 60 to 79 industry-respectable, 80 to 100 best-in-class for a UK agency-archetype business.
The split exists so a strong technical score cannot hide a zero on AI Citability, and a high citability score cannot hide a broken technical floor.
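In code, the composite is nothing more than a weighted sum plus a band lookup. The weights below are placeholders; the real ones live at /scoring.

```typescript
// Weighted sum of seven 0-to-100 category scores, mapped to a band.
// These weights are placeholders; the production weights are at /scoring.
const WEIGHTS: Record<string, number> = {
  aiCitability: 0.2, crawlersSchema: 0.15, platformReadiness: 0.15,
  eeat: 0.15, brandAuthority: 0.15, technical: 0.1, agentReadiness: 0.1,
};

function composite(scores: Record<string, number>): { score: number; band: string } {
  const score = Object.entries(WEIGHTS)
    .reduce((sum, [category, weight]) => sum + weight * (scores[category] ?? 0), 0);
  const band =
    score >= 80 ? "best-in-class" : score >= 60 ? "industry-respectable" :
    score >= 40 ? "below the floor" : "critical";
  return { score: Math.round(score), band };
}
```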
Per-finding tagging
A finding without an action plan is just a complaint with footnotes. Every finding emitted by every category carries four tags so the audit converts to a delivery plan without re-interpretation.
- Phase: 0 to 4. Phase 0 is no-recrawl quick fixes (under one hour each). Phase 1 is quick wins in weeks 1 to 4. Phase 2 is structural work in weeks 4 to 8. Phase 3 is authority-building in weeks 8 to 12. Phase 4 is the re-audit.
- Impact: critical, high, medium, low.
- Effort, in hours: a numeric estimate, not a t-shirt size.
- Time to see, in days: how long after the fix ships before the score moves. Schema fixes register at the next recrawl; brand authority changes take 60 to 90 days.
These tags drive the per-run delivery_plan.md artefact that ships alongside the PDF.
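The four tags imply a finding shape along these lines; the field names are assumptions modelled on the delivery_plan.md artefact described above.

```typescript
// Assumed finding shape carrying the four tags described above.
interface Finding {
  category: string;              // which of the seven categories emitted it
  description: string;
  phase: 0 | 1 | 2 | 3 | 4;      // delivery phase
  impact: "critical" | "high" | "medium" | "low";
  effortHours: number;           // numeric estimate, not a t-shirt size
  timeToSeeDays: number;         // how long after shipping before the score moves
}

// Sorting by phase, then impact, then effort turns the finding list into a
// delivery plan without re-interpretation.
const impactRank = { critical: 0, high: 1, medium: 2, low: 3 };
const toPlan = (findings: Finding[]) =>
  [...findings].sort((a, b) =>
    a.phase - b.phase ||
    impactRank[a.impact] - impactRank[b.impact] ||
    a.effortHours - b.effortHours
  );
```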
Run the rubric on your own site
Free 60-second pulse at /pulse-check. Full priced audits at /pricing. Sample audit on our own domain at /audit/latest.html, scored on the same rubric we run for clients.