Vibe coding

The dogfood loop: how our own Markdown export fixed our own site

We ran PageLens AI against pagelensai.com, fed the Markdown export back into Claude, and shipped a dozen real fixes the same afternoon. Here's the workflow — and why the export exists in the first place.

Anna Moore · 6 min read


A few weeks after we shipped the Markdown export feature, I did something that's harder than it sounds: I pointed PageLens AI at PageLens AI, and then I actually used the output the way we tell customers to use it.

Spoiler: it worked, and I want to walk through what "it worked" actually looked like, because the playbook generalises to any vibe-coded site.

The setup

The premise of PageLens AI has always been "an audit your AI agent can act on." Every report you generate ships in three formats:

  • a live HTML report for humans (with screenshots, severity chips, a story view, the works),
  • a PDF for stakeholders and archival ("the thing you forward to the agency"),
  • and a Markdown export that we built specifically for AI coding agents — no chrome, no screenshots, just findings, stable rule IDs, evidence and suggested fixes, in the format Claude / Cursor / Copilot Workspace / Codex actually parse cleanly.

We dogfood the audit output the same way we tell customers to. I ran a Standard scan against https://www.pagelensai.com, both desktop and mobile, opened the report, and then I clicked Download Markdown — the same button that's there for every paying customer. Score: 80 / Good. 53 findings. The report was honest with us.

What the Markdown actually looks like

If you've never opened one, the file is a few hundred lines of structured Markdown. It opens with the executive summary, the score breakdown, and then for each page it lists every finding in this rough shape:

```markdown
#### High · `SEO-002` · Missing meta description _(−5 pts)_

The /pricing page returns no <meta name="description"> tag.
Search engines are auto-generating snippets from body text.

> Suggestion: add a 140–155 char description that mentions
> "$1 audit", the 9 categories, and "no subscription".
```

That's it. No images, no React. The whole point is that the file is boring — boring is what an LLM needs to act on something without hallucinating. Every finding has a stable RULE-ID, a severity, a description, the actual evidence string, and a suggestion. The same shape, every time, for every category.

The 30-minute loop

Here's what I actually did:

  1. Downloaded the file.
  2. Opened a fresh Claude Code session in the page-lens-ai repo.
  3. Pasted the file in and said, more or less: "Walk through the Quick wins and propose a checklist of code changes I should make. Group them by file."
  4. Claude came back with a 12-item checklist, every item tied to a specific file in the repo.
  5. I authorised the ones I agreed with. Claude opened files, proposed patches, and asked for review where it wasn't sure. I rejected three (one was a false positive, one was already correct, one I disagreed with).
  6. Half an hour later, the changes were committed and on a PR.
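Before handing the whole file to an agent, it can help to eyeball the distribution of findings yourself. A quick sketch, assuming the severities are High/Medium/Low and every finding heading follows the shape from the example earlier (both are assumptions, not a documented format):

```python
import re
from collections import Counter

# Hypothetical triage helper: count findings per severity in a downloaded
# Markdown export. Assumes "#### <Severity> · `RULE-ID` · ..." headings.
def severity_counts(markdown: str) -> Counter:
    return Counter(
        m.group(1)
        for m in re.finditer(
            r"^#### (High|Medium|Low) · `[A-Z]+-\d+`", markdown, re.MULTILINE
        )
    )
```

Thirty seconds of counting tells you whether you're about to review five patches or fifty.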

A handful of representative wins from that session:

  • The OG image's strapline pill said "9 audit categories" which was already in the subhead just below — a duplicate. Claude proposed swapping it for "Markdown for your AI agent", which is funnier and more on-brand. (You're looking at the result every time someone shares one of our pages on Slack.)
  • A touch target on /example-report was 20px tall — min-h-[24px] got slapped on it without my having to remember the rule ID.
  • Six images were missing width / height attributes, which our own scan had flagged as a Cumulative Layout Shift risk. The patch added them in one pass.
  • The /about page metadata title was 64 chars and getting truncated in SERPs. Claude rewrote it to fit under 60 with the brand template.
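Fixes like the missing width/height one are also easy to keep from regressing. A rough sketch of a hypothetical guard for that class of finding — a regex pass for illustration, where a real check would use an HTML parser:

```python
import re

# Hypothetical CLS guard: flag <img> tags lacking explicit width/height
# attributes, the same class of finding the scan surfaced above.
IMG_RE = re.compile(r"<img\b[^>]*>", re.IGNORECASE)

def imgs_missing_dimensions(html: str) -> list[str]:
    """Return every <img> tag that omits a width or height attribute."""
    return [
        tag for tag in IMG_RE.findall(html)
        if "width=" not in tag or "height=" not in tag
    ]
```

Drop something like this into CI and the next undimensioned image never reaches the audit in the first place.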

None of these are interesting fixes. That's the point. They're the kind of thing nobody wants to spend an afternoon manually grepping for, but that you wouldn't want to ship without. The audit found them, the Markdown export described them in a format the agent could act on, and the agent did the legwork.

The findings that didn't get auto-fixed

This is where the workflow earns its keep, because the 12-item checklist Claude generated was honest about what it couldn't do.

Three findings ended up in a separate WORKER-FIX-self-audit-feedback.md document — items where the audit was right that something was off, but where the fix belonged to the AI worker pipeline (the code that produces the audit), not to the marketing site itself.

For example, one of the findings flagged that two unrelated rules in our scan were both being emitted with the rule ID SEO-005. That's a worker bug, not a website bug. Claude correctly read the evidence, said "this isn't something I can fix in the marketing repo — here's a writeup for the team that owns the worker," and we filed it.

I think this is the part that actually surprised me. I'd expected the agent to over-reach and try to fix everything. Instead it triaged. The Markdown gives it enough structure to know which findings are "code change in this repo" vs "you need to talk to a different system about this," and that distinction matters when you're pointing an LLM at a real codebase.

Why we didn't just give it the PDF

This is the question we get most often, so it's worth answering.

The PDF is a beautiful artefact. It's also a binary blob full of screenshots, custom fonts, page breaks, and a cover image. If you paste a PDF (or, worse, a PDF rendered into Markdown by a converter) into an LLM, you get one of two failure modes:

  • The model spends half its context window on layout noise that doesn't help it write code.
  • The model hallucinates rule IDs and selectors that aren't actually in the report, because the conversion mangled them.

The Markdown export is what the PDF would look like if you stripped away everything an agent doesn't need. Same findings, same rule IDs, same evidence — just in a format where every line is signal.

That's why we ship both. The PDF stays the right format for stakeholders and archival. The Markdown is the right format for your AI agent. Same scan; two outputs; one click each.

The shape we want every PageLens user to have

If you're reading this and you've shipped a vibe-coded site recently — Lovable, Bolt, v0, Replit, Cursor, hand-rolled, doesn't matter — the loop we'd love you to try is exactly what I just walked through:

  1. Run a $1 PageLens scan against your live URL (or start with a free 1-page trial).
  2. Open the report. Read the executive summary in your own voice for two minutes.
  3. Click Download Markdown in the report toolbar.
  4. Drop it into your AI coding agent. Ask it to walk through the Quick Wins and propose a patch.
  5. Review and merge. Reject the ones it gets wrong (it will get some wrong; LLMs do that).

The whole loop took us 30 minutes, and the audit itself cost $1 — less than the time I've spent typing this sentence.

We didn't build PageLens AI to sell PDFs. We built it because the half-hour between "I shipped something" and "my agent has a checklist" is the half-hour where the most polish gets bought for the least effort, and nobody else seemed to be packaging that loop in a single button.

If you want to see what the Markdown actually looks like before running a scan, the example report has a Download Markdown button live on the page — same file format, no signup needed.

— Anna