One score, nine dimensions

What the PageLens score actually measures.

One number from 0 to 100, computed across 9 audit categories. Deterministic — the same findings always produce the same score. Versioned — every scan records which scoring algorithm it was graded under, so a badge from last year still means what it said.

PageLens AI · 92 · Excellent

Scoring algorithm: PageLens v1

One number you can trend.

Most audit tools hand you a stack of category sub-scores and let you do the prioritisation maths. We compute one top-line number — designed to move when something on your site actually got better or worse — so you can trend it across deploys without rolling your own weighted average.

  • Comparable site-to-site, deploy-to-deploy.
  • Backed by a per-category breakdown — drill down any time.
  • Pinned to a versioned algorithm so historical numbers remain meaningful.

Designed to move with reality.

A single broken hero on mobile drops the score immediately. Fixing it brings it back on the next scan. The weights are tuned so the score doesn't flatline in the 80s for everyone — the spread between "ships clean" and "has real issues" is wide enough that improvement is visible.

  • Every Critical finding takes 10 points off the top.
  • Lows accumulate but never dominate — polish debt is visible without drowning out blockers.
  • Score is clamped at 0 — a truly broken page reads zero, not a misleading negative.

The 9 categories

Every finding is filed under exactly one category. The category drives the colour, the icon and the per-category roll-up — but the score itself is computed across the full pool of findings, regardless of category.

Errors

Console exceptions, broken links, dead images, missing assets — the things that make a visitor distrust the page on first paint.

UX

Navigation clarity, form friction, tap-target sizing, signposting on long pages — the difference between a visitor finishing the task and bouncing.

Accessibility

WCAG-aligned checks for colour contrast, alt text, focus order, semantic landmarks and ARIA. Not just compliance — usability for the 1-in-5 visitors with assistive needs.

SEO

Title and meta hygiene, structured data, canonicals, OG tags, robots and sitemaps. What the search-engine and LLM crawlers see when they look at the page.

Performance

Render-blocking weight, image sizing and format, layout shift, largest contentful paint. The numbers Core Web Vitals actually grades you on.

Design

Visual hierarchy, typographic rhythm, alignment, spacing, polish. The cumulative feel that signals premium vs template.

Content

Headline clarity, value-prop strength, scannability, jargon density, CTA copy. Whether the page actually says what it's for.

Security

TLS posture, cookie flags, mixed content, third-party script risk. Things a security reviewer would flag in a five-minute look.

Security Headers

Content-Security-Policy, HSTS, X-Frame-Options, Referrer-Policy and the rest of the security-headers checklist. Independently graded because it's a clean signal of operational maturity.

How the number is computed

Pure function of the findings list. No black box.

score = max(0, round(100 − Σ severityWeight(finding)))

Walk every finding the audit produced, look up its severity weight (table below), subtract from 100, clamp at zero. That's the entire algorithm — no per-category normalisation, no LLM in the loop on the score itself. The AI does the finding-detection; the score does the maths.

Severity weights

Severity · Points off · When we use it

  • Critical · −10 — Site is materially broken or insecure for real visitors. Examples: TLS missing, console error blocking the page, mixed-content auth form.
  • High · −5 — Visible defect or failure mode that hurts trust or conversion within seconds of landing. Examples: hero image not loading on mobile, primary CTA invisible at low contrast, missing meta description on a money page.
  • Medium · −2 — Real polish issue or measurable degradation, but not enough to bounce a typical visitor. Examples: large unoptimised hero image, redundant H1s, missing OG image.
  • Low · −0.5 — Minor refinement — a real finding, but the page still works fine. Examples: a stray <p> with no margin, an image with redundant alt text, a missing favicon size variant.
  • Info · 0 — Observation surfaced for context. Doesn't move the score at all. Examples: detected platform fingerprint, presence of analytics, URL canonicalisation note.

Worked example: a scan that produces 1 Critical, 2 High, 3 Medium and 8 Low findings scores 100 − (10 + 2×5 + 3×2 + 8×0.5) = 70 — comfortably in the "Good" band, with a clear list of fixes to lift it into "Excellent".
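The rule above is small enough to sketch in a few lines. This is an illustrative re-implementation, not PageLens's actual source — the names `SEVERITY_WEIGHTS` and `compute_score` are ours, and the weights are taken from the table:

```python
# Sketch of the documented scoring rule: subtract each finding's
# severity weight from 100, round, clamp at zero. Illustrative only.
SEVERITY_WEIGHTS = {
    "critical": 10,
    "high": 5,
    "medium": 2,
    "low": 0.5,
    "info": 0,  # Info findings never move the number
}

def compute_score(findings):
    """Pure function of the findings list — same input, same score."""
    penalty = sum(SEVERITY_WEIGHTS[severity] for severity in findings)
    return max(0, round(100 - penalty))

# The worked example: 1 Critical, 2 High, 3 Medium, 8 Low.
findings = ["critical"] + ["high"] * 2 + ["medium"] * 3 + ["low"] * 8
print(compute_score(findings))  # → 70
```

Note the clamp: a page with fifteen Critical findings reads 0, not −50, exactly as the "clamped at 0" bullet describes.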

Grade bands

Numerical scores roll up to a single label so reports can speak human at a glance.

  • 95 · Excellent — Mature production site. Most fixes are nice-to-have polish.
  • 78 · Good — Solid foundation, a handful of medium findings worth a sprint.
  • 60 · Fair — Multiple high-severity findings. Quick Wins view will be busy.
  • 42 · Poor — A blocker or two on top of accumulated polish debt.
  • 18 · Critical — Critical issues — TLS, broken pages, security headers absent.

Bands: 90+ Excellent · 70+ Good · 50+ Fair · 30+ Poor · below 30 Critical.
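The cut-offs translate directly into a threshold lookup. A minimal sketch, assuming only the bands listed above (the function name `grade_band` is illustrative):

```python
# Map a numeric score to its grade label using the documented
# cut-offs: 90+, 70+, 50+, 30+, below 30. Illustrative only.
def grade_band(score):
    if score >= 90:
        return "Excellent"
    if score >= 70:
        return "Good"
    if score >= 50:
        return "Fair"
    if score >= 30:
        return "Poor"
    return "Critical"

for example in (95, 78, 60, 42, 18):
    print(example, grade_band(example))
```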

Versioned, not silently re-graded.

Every scan is stamped with the scoring algorithm version it was computed under (v1 today). When we evolve the weights, old scans keep their original score — your historical badges and report PDFs continue to mean exactly what they meant when they were issued.

Same number on the badge as on the report.

The verified-domain badge embeds the same score the report card shows — there's no "display rounding" or marketing-only number hiding the real one. The badge is also cryptographically signed so it can't be edited after the fact.

Common questions

Is the score deterministic?
Yes. Once a scan's findings are produced, the score is a pure function of the findings list and the SEVERITIES weights. Re-running scoring on the same findings always produces the same number. (Re-running the SCAN can produce different findings — sites change between visits — but that's a different question.)
Why don't you publish per-category sub-scores on the badge?
The badge is a top-line trust signal — one number, one grade. Per-category breakdowns live on the report itself. Publishing nine sub-scores on a badge dilutes the signal: most viewers don't know whether 87 in Performance is good or bad, but everyone understands 92 overall vs 47 overall.
Does the score weight categories differently?
No — every finding contributes via its severity, regardless of which category it sits in. We deliberately don't pre-judge that 'SEO matters more than Accessibility' for everyone; if you want a category-led view, switch to a Lens (Pre-launch, Conversion, Investor, Brand) and the report reorders for you, but the underlying score stays comparable.
Can a perfect site really score 100?
Yes — a site with zero findings at MEDIUM or above and a clean LOW pool scores 100. We see it occasionally on tightly-scoped landing pages. Most production sites land in the 60–90 band where there's always some polish debt to surface.
What if I disagree with a finding?
Every finding has an inline 'flag this' link that sends a structured false-positive report straight into our review queue. Confirmed mis-detections get the rule patched and the next scan reflects it. You're not arguing with a black box.
Does Info severity affect the score?
No. Info findings are surfaced for context — detected platform, declared analytics, canonicalisation notes — and weight zero. They appear in the All Findings inventory but never move the number.

Curious what your score is?

Free 1-page audit, no card needed. Full multi-page scans start at $1.