I've spent a long time in performance engineering — the kind where you're sitting in a war room at a retailer's HQ, staring at waterfall charts, trying to explain to a room full of marketers why their gorgeous new homepage redesign just cost them 11% of mobile conversion.
The redesign looked incredible. The agency that built it had won awards. The hero video was cinematic. The parallax scroll was buttery. The custom typeface was exquisite. And on a 4G connection — which is what 40% of their mobile traffic was actually on — the page took 6.8 seconds to paint anything meaningful.
Nobody in the pitch meeting had mentioned that.
The trade-off is real, and that's fine
I'm not here to tell agencies to stop building beautiful sites. The visual craft is genuinely valuable — it's what wins clients, builds brands, and justifies premium pricing. The problem isn't that the trade-off exists. The problem is that nobody's measuring it.
When an agency hands over a site, the deliverable is typically a Figma-faithful build that matches the approved mock-ups. The client signs off on what they can see: the layout, the typography, the imagery, the interactions. What they can't see — and what nobody in the room is showing them — is the performance cost of every design decision that went into it.
That hero video? 4.2 MB. The parallax effect? A Cumulative Layout Shift score of 0.38 (Google's "poor" threshold is 0.25). The custom font loaded without font-display: swap? Invisible text for 2.3 seconds on the first visit. The five animation libraries the developer pulled in? 380 KB of JavaScript that executes before anything interactive happens.
Each of these is a defensible design choice in isolation. Stacked together, they're a conversion problem.
Why it didn't use to matter
For a long time, this was genuinely fine. If you were building a brand site — a corporate homepage, a campaign microsite, a portfolio — performance was a nice-to-have. The site's job was to look premium. Visitors arrived from a direct link, a QR code on a poster, or a paid ad. Nobody was ranking for keywords. Nobody was measuring checkout conversion. The site was a brochure, and brochures are allowed to be heavy.
That era is ending for two reasons.
First: Google now grades you on speed. Core Web Vitals — Largest Contentful Paint, Cumulative Layout Shift, Interaction to Next Paint — became ranking factors in 2021 (with Interaction to Next Paint replacing First Input Delay as the responsiveness metric in 2024) and have only tightened since. A slow, beautiful site now ranks below a fast, plain one, all else being equal. If the client is paying for SEO alongside the redesign (and increasingly they are), the agency's design choices directly affect the SEO team's ability to deliver.
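The grading itself is mechanical. Google publishes fixed thresholds for each metric, and a page either clears them or it doesn't. A minimal sketch of that classification (the thresholds are Google's published values; the simple pass/fail report format is my illustration):

```python
# Google's published Core Web Vitals thresholds: (good, poor) boundaries.
# Values between the two are "needs improvement".
THRESHOLDS = {
    "LCP": (2.5, 4.0),    # Largest Contentful Paint, seconds
    "CLS": (0.1, 0.25),   # Cumulative Layout Shift, unitless score
    "INP": (200, 500),    # Interaction to Next Paint, milliseconds
}

def grade(metric, value):
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

# The award-winning homepage from the opening: 6.8 s to paint, CLS of 0.38.
print(grade("LCP", 6.8))   # poor
print(grade("CLS", 0.38))  # poor
```

Both numbers from that redesign land firmly in the "poor" bucket, which is the bucket that costs rankings.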
Second: agencies are building e-commerce now. Shopify themes. WooCommerce builds. Headless storefronts on BigCommerce. And in e-commerce, the maths is brutal and well-documented: every 100ms of additional load time costs roughly 1% of conversion. A product page that loads in 5.2 seconds instead of 2.1 seconds isn't a "minor performance issue" — it's a measurable revenue gap that compounds across every session, every day, for as long as the site is live.
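The back-of-envelope arithmetic behind that claim is worth writing down, because it's what you show the client. A sketch, with hypothetical traffic and order-value figures (and the caveat that the linear 1%-per-100ms rule breaks down over large deltas — but it sets the scale):

```python
# Rule of thumb from above: each extra 100 ms costs roughly 1% of baseline
# conversion. All traffic and revenue figures here are hypothetical.
def monthly_revenue_gap(sessions, base_cvr, aov, slow_s, fast_s):
    extra_ms = (slow_s - fast_s) * 1000
    lost_fraction = (extra_ms / 100) * 0.01      # ~1% per 100 ms, linearised
    lost_orders = sessions * base_cvr * lost_fraction
    return lost_orders * aov

# The 5.2 s vs 2.1 s product page: 50k sessions/month, 2.5% CVR, £60 AOV.
gap = monthly_revenue_gap(50_000, 0.025, 60, 5.2, 2.1)
print(f"~£{gap:,.0f} per month")  # ~£23,250 per month
```

Even if the linear rule overstates it by half, that is not a "minor performance issue" line item.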
The trade-off now has a pound sign next to it. And the agency that built the site is on the hook for it, whether or not they know it.
The missing feedback loop
Here's the structural problem: most agency workflows don't have a step where someone measures the cost of the design.
The designer produces the concept. The creative director approves it. The developer builds it — faithfully, pixel-perfectly — in whatever framework the agency uses. It goes to staging, the client reviews it visually, signs it off, and it goes live. At no point does anyone run a structured audit that says: "this animation adds 1.4 seconds to LCP on mobile" or "this font-loading strategy causes invisible text for 2 seconds" or "this third-party chat widget is the heaviest single resource on the page."
The developer might suspect the performance isn't great. They might even have flagged it casually. But there's no artefact — no document, no score, no severity-rated finding list — that makes the cost visible to the designer or the client in a language they understand.
Without that feedback loop, the trade-off stays invisible. And invisible trade-offs don't get managed — they get shipped.
What a feedback loop actually looks like
This is where we think PageLens AI fits into the agency workflow, and I want to be specific about the shape.
Run a scan after the build hits staging, before the client signs off for launch. A $1 Starter scan covers three pages — enough to catch systemic issues. A $15 Professional scan covers up to 25 pages if you need broader coverage.
What you get back isn't a Lighthouse score and a shrug. It's a structured report across 10 audit categories — Performance, SEO, Accessibility, Design, Content, UX, Security, Security Headers, Tracking, and Errors — with severity-rated findings, screenshots at both desktop and mobile widths, and a concrete suggestion for each issue.
The report gives the agency something they've never had: a data layer to put underneath the design conversation.
"We love the hero animation. Here's what it costs: 1.2 seconds of LCP on a mid-tier Android device. Here are three lighter alternatives that preserve the feel — CSS-only transition, a reduced-motion variant, or a static image with a subtle entrance. Client, your call."
That's a different conversation to "the developer says the site is slow." It's specific, it's evidenced, and it leaves the decision with the person who should make it.
The categories that hit agencies hardest
Not all ten categories will be equally relevant to every agency project, but these are the ones we see lighting up most often on agency-built sites:
Performance — The big one. Page weight from unoptimised hero images, render-blocking animations, excessive third-party scripts (chat widgets, analytics, heatmaps, A/B testing tools — each one adds JS to every page). We measure LCP, CLS, and total page weight and tell you which specific resources are the most expensive.
Accessibility — Agencies love overlay text on hero images. White text on a light photograph fails WCAG AA contrast the moment the image gets bright. Portfolio galleries ship without alt text. Custom form designs skip visible labels. Each of these is a finding with a severity rating and a fix.
SEO — Template-generated pages (collection pages, product listings, blog indices) routinely ship with identical or auto-truncated meta descriptions. Structured data (Product schema, FAQ schema, BreadcrumbList) is often missing entirely. These directly affect whether the client's pages show rich snippets in search results.
Tracking — This one surprises agencies. Consent management is a compliance requirement (GDPR, UK PECR, ePrivacy), and it's also a performance issue. An unmanaged tag soup of analytics and ad pixels is often the single heaviest resource category on the page. We check for Google Consent Mode v2, detect which pixels are present, and flag when a consent management platform is missing.
Content — Placeholder copy shipping to production. "Lorem ipsum" in a footer. "Your tagline here" in an OG meta tag. It happens more often than anyone wants to admit, especially on sites where the design was signed off before the copy was final.
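The "which resources are most expensive" question from the Performance category above reduces to sorting by transfer size. In a browser you'd pull this from `performance.getEntriesByType("resource")`; here's a minimal sketch with hard-coded, hypothetical figures:

```python
# Hypothetical resource inventory for a staging page. In a real audit this
# data comes from the browser's Resource Timing API; here it is hard-coded.
resources = [
    ("hero-video.mp4",      4_200_000),
    ("animation-bundle.js",   380_000),
    ("product-grid.jpg",      640_000),
    ("brand-font.woff2",      310_000),
    ("chat-widget.js",        290_000),
]

total = sum(size for _, size in resources)
for name, size in sorted(resources, key=lambda r: r[1], reverse=True)[:3]:
    print(f"{name:22} {size / 1024:7.0f} KB  {size / total:5.1%} of page weight")
```

The point of the ranking isn't the bytes — it's that the single heaviest resource is almost always one identifiable design decision.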
The pitch to make to your client
Here's the thing I wish more agencies understood: the audit isn't a criticism of the work. It's a value-add.
Frame it as part of the deliverable. "We don't just build beautiful — we build audited." Include the PageLens report in the handover deck alongside the design system, the component library, and the CMS documentation. The report says: we didn't just make it look right, we checked that it works right — across performance, SEO, accessibility, security, and compliance.
For e-commerce clients especially, this is a differentiation that matters. The client is choosing between three agencies who all produce beautiful Shopify themes. The agency that hands over a structured audit showing they've measured and optimised the performance trade-offs is the one that earns trust — and the retainer.
And if you're doing ongoing work for the client (theme updates, seasonal campaigns, new collection launches), a re-scan after each deploy gives you a trending score. Did the Black Friday campaign page regress performance? The score tells you before the client's conversion dashboard does.
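Mechanically, a regression gate over those scores is trivial, which is exactly why it should exist. A sketch of the comparison step (the score dictionaries are hypothetical, not PageLens's actual report schema):

```python
# Compare per-category scores from two scans; flag drops beyond a tolerance.
# Score dictionaries are hypothetical, not PageLens's actual report schema.
def regressions(before, after, tolerance=2):
    return {cat: (before[cat], after[cat])
            for cat in before
            if after.get(cat, 0) < before[cat] - tolerance}

pre_launch  = {"Performance": 84, "SEO": 91, "Accessibility": 88}
post_deploy = {"Performance": 71, "SEO": 90, "Accessibility": 88}

for cat, (old, new) in regressions(pre_launch, post_deploy).items():
    print(f"{cat}: {old} -> {new}  (regressed)")  # Performance: 84 -> 71
```

Wire that into the deploy checklist and the Black Friday campaign page can't silently ship a 13-point performance drop.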
For agencies using AI tools
One more angle worth mentioning. We're seeing more agencies adopt AI-assisted development — Cursor, v0, Lovable, Bolt — to speed up builds. The output is often visually strong but ships with the same blind spots every AI tool has: default favicons, missing meta descriptions, icon-only buttons without aria-labels, no Content-Security-Policy.
If that's your workflow, the vibe coder audit and the e-commerce audit pages break down the platform-specific issues we see most often. Same $1 scan, same report — just framed for the specific failure modes of AI-generated code.
The version of this conversation I wish I'd had ten years ago
Back when I was doing performance consulting, the hardest part was never the diagnosis. The waterfall charts were clear. The bottlenecks were obvious. The fix suggestions wrote themselves.
The hard part was getting the information to the right person, in a format they could act on, before the site launched — not six months later when the organic traffic numbers came in and everyone started pointing fingers.
That's what we built PageLens AI to solve. Not just for developers running Lighthouse in a terminal, but for the designer who needs to see that their animation costs 1.4 seconds, the project manager who needs to explain the trade-off to the client, and the agency owner who wants "we build audited" in their pitch deck.
$1 to scan your client's staging site. Five minutes to get the report. The cheapest quality gate you'll ever add to your delivery workflow.
— Richard