The full anatomy of an audit you actually trust
Every PageLens AI report packs deterministic engineering findings, eight expert persona reviews, full-page screenshots, and a copy-paste-into-Cursor markdown export — the whole audit, ready to ship through your assistant or your team.
No card required. Takes about 90 seconds.
What you get in every report
A PageLens audit isn't a Lighthouse PDF with a logo. Each report combines a worker-generated, deterministic findings layer (axe-core, WCAG 2.2 checks, Core Web Vitals via web-vitals and CDP) with an AI judgement layer that asks eight different personas to read the same page and tell you what they'd change.
One number, nine dimensions
A single 0–100 PageLens score, computed from severity-weighted findings across SEO, performance, accessibility, security, UX, design, content, headers and errors.
Findings with evidence
Every finding ships with the exact selector, the measurement, the WCAG / Core Web Vitals threshold it tripped, and a copy-able DOM snippet — not just a vague suggestion.
Full-page screenshots
Desktop + mobile captures stitched at full height. Hover any finding to see where it lives on the page — no guessing which div the report means.
Eight persona reviews
Marketer, CRO, UX, Accessibility, Brand, Executive, Performance, SEO. Each gives a one-line headline + score for their lens, so you can hand the right slice to the right stakeholder.
Quick wins, ranked
An impact × effort rank surfaces the 5–10 fixes that move the score most for the least work. Pick a preset (pre-launch, conversion, investor) to re-rank without re-scanning.
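As an illustrative sketch of an impact × effort ranking — the weights and the `Finding` shape below are assumptions for demonstration, not PageLens's actual internals:

```python
from dataclasses import dataclass

# Hypothetical weights: higher severity = more impact, more effort = costlier.
IMPACT = {"CRITICAL": 5, "HIGH": 3, "MEDIUM": 2, "LOW": 1, "INFO": 0}
EFFORT = {"QUICK": 1, "MODERATE": 2, "INVOLVED": 4}

@dataclass
class Finding:
    title: str
    severity: str
    effort: str

def quick_wins(findings, top_n=10):
    """Rank findings by impact per unit of effort, highest first."""
    return sorted(
        findings,
        key=lambda f: IMPACT[f.severity] / EFFORT[f.effort],
        reverse=True,
    )[:top_n]

findings = [
    Finding("Missing alt text on hero image", "HIGH", "QUICK"),
    Finding("Unminified render-blocking CSS", "MEDIUM", "MODERATE"),
    Finding("Rewrite checkout flow", "CRITICAL", "INVOLVED"),
]
print([f.title for f in quick_wins(findings, top_n=2)])
```

A preset (pre-launch, conversion, investor) would simply swap in a different weight table and re-sort — no rescan needed.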
Deluxe-tier extras
Higher tiers add multi-page audits, deep-link auth, scheduled rescans, score-drift alerts, and verified-domain badges you can publish on the live site.
Three ways to put it to work
Same report, three surfaces. Read the audit, paste it into your coding agent, or wire your assistant straight to your scan history over MCP. Pick whichever loop fits how you actually ship.
Open the report in your browser
Severity-grouped findings, full screenshots, persona reviews, share link, optional password gate. The complete picture without leaving the tab.
See an example report
Download Markdown for Cursor / Claude / Codex
One click in the report toolbar. The markdown ships with structured frontmatter, severity-aware headings, and per-finding evidence — drop it into your AI agent and have it work the list.
Read the dogfood-loop write-up
Pipe it through MCP into your assistant
PageLens ships an OAuth 2.1 MCP server. Connect once from Claude Desktop, Cursor or Codex; query findings, quick-wins, and feedback right from chat.
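A sketch of what that connection might look like in Claude Desktop's `claude_desktop_config.json`, proxied through the community `mcp-remote` package — the server URL here is a placeholder, not PageLens's actual endpoint:

```json
{
  "mcpServers": {
    "pagelens": {
      "command": "npx",
      "args": ["mcp-remote", "https://mcp.pagelens.example/mcp"]
    }
  }
}
```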
Get the MCP setup
Built so the score actually means something
The score is a pure function of the findings list and the severity weights — no opaque ranking, no vendor lock-in, no weights that mysteriously shift between scans. Every report stamps the scoring version it was computed under, so a badge you publish today still means the same thing six months later.
Deterministic findings (axe, WCAG, CWV) carry stable rule IDs. AI findings are clearly marked, capped at HIGH unless they cite a measured threshold breach, and quarantined from the score on their first appearance — so a one-shot hallucination can't tank your headline number.
How the score is computed
- rule_id · stable identifier (e.g. SEC-022) for deterministic checks
- severity · CRITICAL → INFO, weighted into the score
- persona · which lens flagged it
- evidence · selector, measurement, threshold, DOM snippet
- suggestion · concrete fix, including Tailwind hints where applicable
- effort · QUICK · MODERATE · INVOLVED — feeds the quick-wins ranker
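The "pure function of the findings list and the severity weights" claim can be sketched like this — the weight values are illustrative assumptions, not PageLens's published scoring table:

```python
# Hypothetical severity weights; the real values live in a versioned
# scoring table stamped into each report.
WEIGHTS = {"CRITICAL": 10, "HIGH": 5, "MEDIUM": 2, "LOW": 1, "INFO": 0}

def score(findings, base=100):
    """Pure function: same findings + same weights -> same 0-100 score."""
    penalty = sum(WEIGHTS[f["severity"]] for f in findings)
    return max(0, base - penalty)

findings = [
    {"rule_id": "SEC-022", "severity": "CRITICAL"},
    {"rule_id": "A11Y-004", "severity": "HIGH"},
    {"rule_id": "SEO-013", "severity": "LOW"},
]
print(score(findings))  # 100 - (10 + 5 + 1) = 84
```

Because nothing but the findings list and the weight table feeds the function, stamping the scoring version onto a report is enough to make any published badge reproducible later.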
See your own report in 90 seconds
Drop in any URL. We'll spin up a real audit — same deterministic engine, same AI personas, same markdown export — and surface the quick wins worth your morning.
No card required. The audit is free; the trial includes the markdown export and one persona review.