
Before AI rank tracking, fix AI search readiness

Most sites are not ready to be understood, summarised or cited by AI answer engines. PageLens AI is adding an AEO readiness lens to find the fixable gaps first.

Richard Moore · 6 min read

The obvious version of an AI search product is tempting.

Track where you appear in ChatGPT.

Track whether Perplexity cites you.

Track whether Gemini mentions your competitors.

Track a hundred prompts every day and draw a graph.

That product will exist. For enterprise marketing teams, it already does.

But it is not where PageLens AI should start.

The more useful question for most founders, agencies, indie hackers, vibe coders and small SaaS teams is much simpler:

Can AI systems understand, trust and cite your site at all?

That is the layer we are going after.


AI visibility starts with boring basics

Before you worry about ranking in ChatGPT, your site has to clear a more basic bar.

Can an AI crawler access the important pages?

Is OAI-SearchBot allowed if you want to appear in ChatGPT Search?

Is the content actually in the HTML, or does the page depend on a pile of client-side JavaScript before the main copy exists?

Do your headings explain the product?

Does the homepage clearly say what the company does, who it serves, what it costs, where it operates, and why anyone should trust it?

Do you have answer-ready pages for the questions buyers actually ask?

Do your claims have dates, authors, sourceable statements, pricing, use cases, comparisons, or anything an AI answer engine can confidently quote?
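The HTML question above is the easiest of these to check mechanically. A rough sketch in Python (the function name and word threshold are my own, not a PageLens API):

```python
import re

def has_server_rendered_copy(html: str, min_words: int = 50) -> bool:
    """Rough extractability check: does the raw HTML, before any
    JavaScript runs, already contain a meaningful amount of text?"""
    # Remove script and style blocks entirely, then strip remaining tags.
    text = re.sub(r"(?s)<(script|style)[^>]*>.*?</\1>", " ", html)
    text = re.sub(r"<[^>]+>", " ", text)
    return len(text.split()) >= min_words
```

Fetch the page with a plain HTTP client, no headless browser, and run the check on the response body. If it fails, an AI crawler that does not execute JavaScript is seeing an empty shell.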

Most early-stage sites fail here.

Not because the team is careless, but because the web has spent the last ten years optimising for humans skimming hero sections and for Google reading metadata, while AI answer engines need something slightly different: clear, factual, reusable, citeable information.


What we are adding: AI Search / AEO Readiness

We are preparing a new PageLens lens:

AI Search / AEO Readiness

The goal is not to claim that we know your exact rank in every AI platform.

The goal is to tell you whether your site is easy for answer engines to crawl, understand, summarise, cite and trust.

That fits PageLens because it is still the same product promise:

A senior consultant reviewed your site and gave you quick wins.

Only now the consultant is asking:

Would AI answer engines understand and cite this?

The report can check:

  • AI crawlability — whether robots.txt, noindex, status codes or crawler rules block important pages.
  • ChatGPT Search readiness — whether OAI-SearchBot is allowed when a site wants ChatGPT Search visibility.
  • Extractability — whether meaningful content is server-readable and semantically structured.
  • Entity clarity — whether the company, product, audience, pricing, geography and trust signals are obvious.
  • Answerability — whether the page contains concise answers that an AI system can lift into a response.
  • Citation quality — whether claims are factual, attributable, dated or sourceable enough to quote.
  • Structured data — whether organisation, product, FAQ, article, breadcrumb or review schema exists where appropriate.
  • Content gaps — whether the site is missing comparison, alternative, use-case, pricing or FAQ pages.

That is a practical readiness audit, not a black box visibility graph.
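The crawlability and ChatGPT Search checks are mechanical enough to sketch with Python's standard library `urllib.robotparser` (the function name is illustrative, not PageLens code):

```python
from urllib import robotparser

def crawler_allowed(robots_txt: str, user_agent: str,
                    url: str = "https://example.com/") -> bool:
    """Does this robots.txt body let the given crawler fetch the URL?"""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)

# A site that shuts out ChatGPT Search versus one that allows everything:
blocking = "User-agent: OAI-SearchBot\nDisallow: /\n"
permissive = "User-agent: *\nDisallow:\n"
```

Run the same robots.txt body against each crawler you care about (OAI-SearchBot, GPTBot, PerplexityBot and so on) to see at a glance which answer engines you are shutting out.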


The kind of finding this should produce

This is the shape I want:

Issue: Your homepage explains the product emotionally, but does not provide a clean, extractable definition of what PageLens AI does.

Why it matters: AI answer engines need short, factual, reusable statements.

Fix: Add a 40-60 word "What is PageLens AI?" block near the top of the page.
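Concretely, that fix might land on the page as something like this (the copy is illustrative, not final marketing text):

```html
<!-- Extractable definition block near the top of the homepage -->
<section id="what-is-pagelens">
  <h2>What is PageLens AI?</h2>
  <p>
    PageLens AI audits a website the way a senior consultant would,
    producing a prioritised list of quick wins, including whether AI
    answer engines can crawl, understand and cite the site.
  </p>
</section>
```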

That is useful.

It is specific. It is fixable. It does not require pretending we have millions of private prompt logs.

Another example:

Issue: robots.txt blocks OAI-SearchBot.

Why it matters: OpenAI documents OAI-SearchBot as the crawler used for ChatGPT Search. Blocking it can stop pages from being considered for those surfaces.

Fix: If appearing in ChatGPT Search is part of your strategy, update robots.txt to allow OAI-SearchBot on public marketing and content pages.
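In robots.txt terms, that fix is usually a few lines. The paths below are placeholders; keep whatever you already disallow for genuinely private areas:

```
# Let ChatGPT Search's crawler reach public pages
User-agent: OAI-SearchBot
Allow: /

# Existing rules for everyone else stay as they are
User-agent: *
Disallow: /admin/
```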

Again: practical, honest, actionable.


What we are not building first

We are not starting with "track your exact AI ranking everywhere."

That sounds attractive, but it brings a lot of baggage:

  • prompt generation at scale
  • scheduled daily runs
  • expensive APIs or browser automation
  • account/session management
  • localisation
  • competitor tracking
  • historical storage
  • parsing messy answers
  • non-deterministic results
  • platform changes
  • possible scraping and ToS issues

There is a version of this product for large marketing teams. It needs real prompt-volume data, lots of infrastructure, and a careful methodology.

PageLens is better off starting where we already have leverage: auditing the site itself.

Before you measure whether AI recommends you, make sure your site gives AI something clear and credible to recommend.


Why this is perfect for AI-built products

AI-built apps often ship with unclear content.

The UI works. The auth flow exists. The pricing card renders. The homepage sounds plausible.

But when you ask basic questions, the page can be surprisingly hard to quote:

  • What exactly is this product?
  • Who is it for?
  • What does it cost?
  • How is it different from a consultant, an agency, or a generic tool?
  • What proof exists that it works?
  • Where is the pricing explanation?
  • Where are the use cases?
  • What buyer questions does it answer directly?

AI coding tools can generate a nice-looking page without generating the knowledge structure around the product.

That is the gap AEO readiness should surface.

It is not just SEO. It is not just content marketing. It is not just schema.

It is whether the site contains enough clear, crawlable, trustworthy information for an answer engine to represent it accurately.


What good looks like

An AI-search-ready site does not need to become a glossary farm.

It needs a few basics done well:

  • a clear "What is [product]?" explanation
  • a concise "Who is this for?" section
  • pricing explained in plain English
  • FAQs written as real answers, not marketing fragments
  • comparison or alternative pages where buyers expect them
  • organisation and product schema
  • author, company, contact and trust signals
  • crawlable docs, help or use-case pages
  • content that exists in HTML, not only after JavaScript hydration
  • an intentional AI crawler policy
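The schema item, for instance, is often a single JSON-LD object in the page head, inside a `<script type="application/ld+json">` tag. A minimal sketch, where every value is a placeholder:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://example.com",
  "description": "One factual sentence about what the product does and who it serves.",
  "sameAs": ["https://www.linkedin.com/company/example-co"]
}
```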

Most of those are one-day fixes.

That is why this belongs in PageLens. We are not trying to create another abstract dashboard. We are trying to turn uncertainty into a fix list.


The positioning

The phrase I keep coming back to is:

Before you worry about ranking in ChatGPT, make sure AI can actually understand your site.

That is the PageLens angle.

Enterprise AEO platforms can fight over massive prompt datasets and long-running visibility monitoring.

PageLens can own the launch-readiness layer:

  • Is the site crawlable?
  • Is the content extractable?
  • Is the product understandable?
  • Are the answers clear?
  • Are the claims citeable?
  • Are the missing pages obvious?
  • What should you add this week?

That is enough to be valuable today.

And it fits the product we are already building: a practical, evidence-backed report for teams who need to launch with fewer blind spots.

We have added the new AI Search Readiness page and are preparing the worker-side checks now.

— Richard
