
What is GEO? A 2026 guide

Generative engine optimisation explained: what AI search engines reward, how it differs from SEO, and the seven categories that drive citations.

Emad
Founder & Lead Architect

Open ChatGPT, ask it "who does priced GEO audits in the UK?" and watch what happens. The model doesn't browse the web for ten minutes. It doesn't show you a list of links. It writes one paragraph naming a small number of companies. If your business isn't in that paragraph, the buyer never sees you. They've already moved on to the next question.

That moment, repeated millions of times a day across ChatGPT, Claude, Perplexity, Gemini, and Google AI Overviews, is what generative engine optimisation, or GEO, is about. It is the work of being the source the model reaches for, not the link a user might or might not click.

Why this is a different job from SEO

Classic search engine optimisation has one job: earn a spot among the ten blue links. The user clicks one, leaves Google, lands on your site, and the work pays off. GEO has three jobs stacked on top of each other.

First, an AI crawler has to be able to fetch your page. Google has been crawling sites for twenty-five years; the AI agents only started showing up around 2023, and most websites were never configured with them in mind. Second, when the crawler does arrive, your page needs to contain text the model can lift verbatim, in clean self-contained passages. And third, the model has to trust that citation enough to keep using you when other users ask similar questions later. That trust is built off your own site, on the platforms the model already knows.

The shift in plain terms

AI search is no longer a fringe channel. Google has been rolling out AI Overviews across most English-language queries since 2024. ChatGPT shipped a dedicated web search product in 2024. Perplexity exited 2024 with a citation-first product positioned as a Google alternative. Industry analysts including Gartner have published projections that traditional search volume will decline materially as AI assistants and agents handle a rising share of informational and transactional intent.

The shift matters because the AI answer is often the only thing the user sees. Those buyers never see your Google ranking. They see the answer the model gave them, followed at most by a small row of source citations.

If buyers ask an AI assistant first, an SEO programme that doesn't measure GEO leaves a meaningful gap in its reporting.

What being cited by AI actually means

Picture the engine as a journalist on deadline. It pulls a handful of source pages, skims them for usable passages, and decides which to attribute by name in the final answer. A passage gets cited when three things line up.

  • Crawler access. The page is reachable by GPTBot, ClaudeBot, PerplexityBot, Google-Extended, and the long tail of other agents. Many sites accidentally block one or more of these in default robots configurations. A common example: a CDN's default rules block any user-agent containing "bot", which catches every AI crawler in one stroke. A minimal robots.txt sketch follows this list.
  • Citable structure. The page contains short, self-contained answer passages. A model can lift a 60-word block. It cannot lift a 600-word essay with the claim buried in paragraph nine. The simple test: if a passage were quoted out of context in someone else's article, would it still make sense?
  • Trust signal. The brand is mentioned with consistent metadata across Wikipedia, Reddit, Hacker News, Common Crawl, and the trade directories that match the business archetype. Without an off-site footprint, the model has nothing to corroborate against, and the brand surfaces less often, or not at all.
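
As a concrete illustration of the first point, here is a minimal robots.txt in the shape the fix implies. The crawler tokens shown are the vendors' documented user-agent names at the time of writing, and the Content-Signal line follows the emerging content-signals proposal; treat the whole file as a sketch to verify against current documentation, not a drop-in config.

    # Explicitly admit the major AI crawlers.
    User-agent: GPTBot
    Allow: /

    User-agent: ClaudeBot
    Allow: /

    User-agent: PerplexityBot
    Allow: /

    User-agent: Google-Extended
    Allow: /

    # Keep the existing site-wide policy for everyone else.
    User-agent: *
    Allow: /

    # Emerging content-signals syntax; verify before deploying.
    Content-Signal: search=yes, ai-input=yes, ai-train=no

    Sitemap: https://example.com/sitemap.xml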

How GEO differs from SEO

The mechanics overlap. The success criteria diverge.

  • SEO target: a top-three position on a Google results page. Measured in impressions, clicks, and Core Web Vitals.
  • GEO target: verbatim citation in an AI answer. Measured in citation rate across a curated question set, answer-block ownership, and brand mention recall (a toy sketch of the citation-rate calculation follows this list).
  • Shared floor: crawlability, schema markup, page speed, content depth. AI engines piggyback on Google's index, so weak SEO drags GEO down with it. Strong SEO without GEO leaves money on the table; strong GEO without SEO is rarely possible.
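
For teams that want to put a number on the GEO target, here is a toy Python sketch of the citation-rate calculation. The ask_engine callable is a hypothetical stand-in for whatever engine client you have wrapped yourself; no real API is implied.

    from urllib.parse import urlparse

    def citation_rate(questions, ask_engine, domain):
        # ask_engine: hypothetical callable taking a question string and
        # returning (answer_text, cited_urls). Swap in your own wrapper.
        cited = 0
        for question in questions:
            _answer, urls = ask_engine(question)
            # Naive hostname suffix match; tighten for production use.
            if any((urlparse(u).hostname or "").endswith(domain) for u in urls):
                cited += 1
        return cited / len(questions) if questions else 0.0

Run it monthly over the same curated question set and the trend line is the metric; a single snapshot says little, because engine answers vary from run to run.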

The seven categories that drive GEO outcomes

A Pulse audit scores a site against the seven measurable categories below. Three are visible to the AI crawler the moment it arrives. Three are about what happens off-site. One is the connective tissue that holds the rest together.

  1. AI Citability. Per-passage scoring of every block of prose on the site. Length, claim density, statistic presence, semantic self-containment.
  2. Crawlers and Schema. Which AI agents can fetch the page, which schemas render in the head, whether llms.txt is present and accurate.
  3. Platform Readiness. Per-engine fitness for ChatGPT, Perplexity, Gemini, and AI Overviews. Each has its own preferences for content shape and source variety.
  4. Content E-E-A-T. Trust signals visible to the model: bylines, dated content, named sources, methodology pages.
  5. Brand Authority. Off-site mention surface across Hacker News, Reddit, Wikipedia, Common Crawl, and archetype-relevant directories.
  6. Technical Foundations. The classic SEO floor. Indexability, headers, INP, JavaScript dependency, mobile rendering.
  7. Agent Readiness. Twelve emerging standards covering Link headers, the /.well-known/ linkset, OpenAPI publication, robots.txt content signals, and markdown content negotiation.
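
Because Agent Readiness is the least familiar of the seven, here is a hypothetical HTTP exchange illustrating two of the standards named above: markdown content negotiation and a Link header pointing at a /.well-known/ linkset. The URL and header values are placeholders showing the pattern, not quotations from any one spec.

    GET /what-is-geo HTTP/1.1
    Host: example.com
    Accept: text/markdown, text/html;q=0.8

    HTTP/1.1 200 OK
    Content-Type: text/markdown; charset=utf-8
    Vary: Accept
    Link: <https://example.com/.well-known/linkset>; rel="linkset"

An agent that prefers markdown gets it directly, and the Link header advertises a machine-readable set of the site's key links without the agent having to parse HTML.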

The full rubric, weights, and per-category scoring methodology are documented at /scoring.

What you can do this month

You don't need a retainer to make the first move. Three changes can lift most sites from a low score to a respectable one in a single afternoon of work.

  • Publish an llms.txt file. A single markdown file at the root of your site that names your product, your audience, and your most important pages. AI crawlers consume it as a shortcut to your site map, much the way Googlebot uses XML sitemaps. Ours lives at /llms.txt if you want a working example, and a minimal sketch follows this list.
  • Allow the AI crawlers in robots.txt. GPTBot, ClaudeBot, PerplexityBot, and around fifteen others. Many CMS templates either block them by default or fail to name them. Add explicit allow rules and a Content-Signal directive declaring how your content can be used, as in the robots.txt sketch earlier in this article.
  • Rewrite three landing pages as answer passages. Lead each section with the question a buyer would actually type. Answer it in 60 to 120 words. Include one statistic with a source. AI engines lift these passages as-is.
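
For the first of those fixes, here is a minimal llms.txt in the shape of the llmstxt.org proposal. The company name is a placeholder and the paths mirror the pages mentioned in this article; your own file should name your real product and your real key pages.

    # Example Co
    > Example Co sells fixed-price GEO audits to UK businesses. The
    > links below are the pages an AI assistant should read first.

    ## Key pages
    - [Scoring methodology](https://example.com/scoring): the seven-category rubric and weights
    - [Pricing](https://example.com/pricing): the audit catalogue
    - [Sample audit](https://example.com/audit/latest.html): a full report, updated monthly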

What we measure when we audit a site

A Pulse audit returns a 0 to 100 composite score, seven category sub-scores, an explicit list of citable answer passages, a list of failures with named causes, and a 12-week delivery plan with effort hours and time-to-see windows on every fix. The sample at /audit/latest.html is our own site, scored on the same rubric we run for clients. We update it monthly so the report reads exactly as a client report would.

For a free 60-second pulse on your own domain, try the Pulse Check. For the full audit, the catalogue is at /pricing.