Most website audits answer one narrow question.
Is the page fast?
Is the SEO metadata present?
Are the accessibility basics covered?
Are the security headers configured?
Those are useful questions. But none of them is the question a founder, agency, product owner, or AI builder is actually asking before launch.
The real question is simpler and harder:
Is this thing ready for real people?
That is what PageLens AI is built to answer.
Launch readiness is bigger than Lighthouse
Lighthouse is good at what Lighthouse is good at.
It can tell you a lot about a single page: performance, accessibility, SEO hints, and best-practice checks. PageSpeed Insights can tell you whether Google has field data for the URL. Security scanners can inspect headers. SEO crawlers can find missing titles. Accessibility tools can catch obvious markup problems.
The problem is that launch risk does not live in one category.
A product can be fast and still look untrustworthy.
A product can have perfect metadata and still leak private data behind login.
A product can pass a homepage check and still have a broken dashboard.
A product can have beautiful design and still fail on mobile navigation, contrast, tap targets, form labels, cookie behaviour, tracking pixels, or payment copy.
That is why PageLens AI is not trying to be "another speed report." It is a launch readiness report: a single, opinionated review of the things that make a site feel credible, safe, usable, discoverable, and fixable.
What PageLens AI checks
A PageLens report pulls together the categories that normally require several tools, several browser tabs, and a lot of manual judgement.
It reviews:
- Performance — Core Web Vitals-style lab metrics, page weight, request count, third-party impact, render-blocking resources, layout shift, image sizing, and perceived speed.
- SEO — titles, meta descriptions, canonical tags, robots, sitemaps, Open Graph tags, structured data, heading structure, indexability, and search-result quality.
- Accessibility — labels, contrast, keyboard basics, tap targets, alt text, semantic structure, and common WCAG-adjacent problems that make a site harder to use.
- Security — headers, CSP tradeoffs, exposed implementation details, risky defaults, HTTPS hygiene, and obvious signs that a product has been shipped before it has been hardened.
- Privacy and tracking — analytics pixels, third-party scripts, cookie behaviour, consent signals, and the business risk of silently loading too much third-party code.
- UX and conversion — confusing flows, weak calls to action, form friction, unclear empty states, mobile navigation, trust signals, and the small moments where users decide whether to continue.
- Design polish — visual hierarchy, spacing, alignment, readability, contrast, responsive behaviour, and the difference between "it renders" and "it feels launch-ready."
- Content quality — vague copy, missing explanations, inconsistent claims, weak pricing language, and pages that do not answer the questions a buyer actually has.
- Authenticated routes — dashboards, account pages, settings, billing, admin-like screens, and the private product surface most scanners never see.
- AI-agent readiness — Markdown exports, stable rule IDs, evidence, suggested fixes, MCP access, and report structure that a coding agent can actually use.
That breadth matters because users do not experience your product as categories.
They experience the whole thing.
The business value is not the score
The score is useful. It gives you a baseline. It gives you a way to measure progress. It helps you see whether a release moved the product in the right direction.
But the real value is not the number.
The real value is turning vague launch anxiety into a ranked list of fixable work.
Before a launch, most teams have a foggy sense that "we should probably check the site." The hard part is knowing what to check, what matters, what can wait, and what a customer will actually notice.
PageLens turns that into:
- a clear executive summary
- a score broken down by category
- severity-rated findings
- evidence for each issue
- suggested fixes
- screenshots and page context
- quick wins ranked by impact and effort
- persona reviews for different types of user
- exports for stakeholders and AI coding agents
That is a different business asset from a raw audit log.
It tells you where trust is leaking.
It tells you where conversion may be leaking.
It tells you where SEO visibility may be leaking.
It tells you where an AI-generated feature works in the happy path but still feels unfinished to a human.
Why this matters for AI-built apps
AI coding tools have changed the speed of software.
A solo builder can now generate a landing page, dashboard, checkout flow, auth system, admin panel, blog, and settings screen in days. That is extraordinary.
It also creates a new problem: the app can look finished before it has been properly reviewed.
AI-generated products often have the same kinds of issues:
- routes that work but are not properly protected
- forms that submit but are not accessible
- dashboards that render but do not explain themselves
- payment pages that technically work but feel confusing
- mobile layouts that were never seriously tested
- components with tiny contrast or spacing problems
- SEO metadata copied from a template
- security headers accepted without understanding the tradeoff
- tracking scripts added and forgotten
- performance problems hidden behind modern tooling
None of this means AI-built apps are bad. It means they need a different kind of QA loop.
Not a 40-page consultant PDF that nobody reads.
Not a single Lighthouse score.
Not a checklist scattered across five tools.
A launch readiness report that gives the builder, the stakeholder, and the AI coding agent the same structured picture of what still needs attention.
The report is built for humans and agents
One of the unusual things about PageLens AI is that the report is not just a web page.
Every paid scan can become:
- a live report for the person doing the work
- a PDF for stakeholders, clients, or records
- a Markdown export for Claude, Cursor, Codex, or another coding agent
- an MCP connection so an assistant can inspect scans directly
That last part matters more than it sounds.
The report is full of stable rule IDs, evidence strings, page URLs, severity levels, categories, and suggested fixes. That is exactly the shape an AI coding agent needs if you want it to move from "general advice" to "open the files and propose a patch."
The agent can read the report, group issues by file, suggest a plan, and help fix the quick wins.
And if it disagrees with a finding, it can submit structured feedback.
If the finding is real but intentional — a security-header tradeoff, for example — it can record an accepted decision so future reports show the context without hiding the issue or changing the score.
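To make that shape concrete, here is a rough sketch of what a single finding might look like as structured data. Every name below is an assumption pieced together from the fields this post mentions (stable rule IDs, categories, severity levels, evidence, suggested fixes, accepted decisions), not PageLens's actual export schema:

```typescript
// Illustrative only: these names are assumptions based on the fields
// described in this post, not the real PageLens export schema.
type Severity = "critical" | "serious" | "moderate" | "minor";

interface Finding {
  ruleId: string;        // stable across rescans, e.g. "security/missing-csp"
  category: string;      // performance, seo, accessibility, security, ...
  severity: Severity;
  pageUrl: string;       // the page where the issue was observed
  evidence: string;      // the markup, header, or copy that triggered it
  suggestedFix: string;  // a concrete next step, not just a warning
  accepted?: {           // an intentional tradeoff, recorded rather than hidden
    reason: string;
    decidedBy: string;
  };
}

const example: Finding = {
  ruleId: "security/missing-csp",
  category: "security",
  severity: "serious",
  pageUrl: "https://example.com/dashboard",
  evidence: "Response has no Content-Security-Policy header",
  suggestedFix: "Add a CSP, or record an accepted decision explaining the tradeoff",
};
```

Data in that shape is easy for an agent to group by page or file, rank by severity, and verify on a rescan. A prose summary is not.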
That is the loop we care about:
scan → understand → fix → rescan → improve.
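As a minimal sketch of that loop in code, reusing the Finding shape above: runScan and applyFixes here are hypothetical stand-ins, not a real PageLens API, and the target score and round limit are arbitrary.

```typescript
// Hypothetical stand-ins: a real integration would trigger a scan via
// the product and have a human or coding agent work the quick wins.
async function runScan(url: string): Promise<{ score: number; quickWins: Finding[] }> {
  return { score: 0, quickWins: [] }; // placeholder result
}

async function applyFixes(quickWins: Finding[]): Promise<void> {
  // placeholder: group by file, plan, patch, review
}

// scan → understand → fix → rescan → improve, bounded so it terminates
async function improveUntilReady(url: string, target = 90, maxRounds = 5) {
  let report = await runScan(url);                          // scan
  for (let i = 0; i < maxRounds && report.score < target; i++) {
    await applyFixes(report.quickWins);                     // understand + fix
    report = await runScan(url);                            // rescan
  }
  return report;                                            // improve
}
```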
It is not just technical hygiene
Launch readiness is commercial.
Slow pages cost conversions.
Weak SEO costs discovery.
Poor accessibility excludes users and creates legal and reputational risk.
Confusing pricing copy creates support tickets.
Missing security basics make buyers hesitate.
Broken authenticated flows damage trust after conversion, which is worse than losing the user before they sign up.
Too many tracking scripts can slow the product and widen the privacy surface area.
Unclear reports make teams argue about opinions instead of fixing evidence-backed issues.
The business value of PageLens is that it compresses that risk into a report people can actually act on.
For a founder, it is a pre-launch confidence check.
For an agency, it is a handover quality report.
For a product team, it is a release-readiness snapshot.
For an AI builder, it is the missing review layer between "the app works" and "the app is ready."
For a stakeholder, it is proof that someone looked beyond the homepage.
What "comprehensive" should mean
Comprehensive should not mean bloated.
It should not mean dumping every possible warning into a report and calling it insight.
It should mean the report sees enough of the product to make a useful judgement, explains what matters in plain English, and gives you a practical next step.
That is the bar we are aiming for with PageLens AI:
- broad enough to cover the real launch surface
- specific enough to show evidence
- opinionated enough to prioritise
- plain enough for non-specialists
- structured enough for AI agents
- honest enough not to hide findings just because they are awkward
Quite probably the most comprehensive launch readiness report out there is not the one with the most rows.
It is the one that helps you launch with fewer blind spots.
That is what we are building.
If you have an app nearly ready to ship, start with a free instant check or run a full audit from New Audit. The most useful report is the one you run before your first real customers find the problems for you.
— Richard