
Use PageLens Markdown with your AI coding agent

How to turn a PageLens report into a practical scan, prompt, patch and re-scan workflow for Cursor, Claude and other coding agents.


An audit report is only useful if it turns into fixes.

That is why PageLens exports Markdown.

PDFs are useful for stakeholders. The live report is useful for humans. Markdown is useful for AI coding agents because it strips the report down to structured findings, evidence, rule IDs and suggested fixes.

The workflow is simple:

Scan. Export Markdown. Give it to your agent. Review patches. Re-scan.

Why Markdown works better than screenshots

AI coding agents need precise text.

Screenshots are useful for visual context, but they do not reliably tell an agent:

  • exact rule ID
  • severity
  • affected URL
  • evidence string
  • suggested fix
  • category
  • whether a finding appears on multiple pages

Markdown does.

It is boring on purpose.
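To make that concrete, here is one way to model a finding once it is parsed out of the export. This is an illustrative sketch in TypeScript, not PageLens's actual schema: the field names and the example rule ID are assumptions based on what the report contains.

// Illustrative shape only. Field names are assumptions,
// not the official PageLens schema.
interface Finding {
  ruleId: string;            // e.g. "img-missing-alt" (hypothetical ID)
  severity: "low" | "medium" | "high" | "critical";
  category: string;          // e.g. "accessibility", "seo"
  url: string;               // affected page
  evidence: string;          // the exact string or snippet flagged
  suggestedFix: string;
  pages?: string[];          // other pages where the same finding appears
}

Every field maps to something an agent can act on without guessing.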

Start with triage

Do not ask the agent to fix everything immediately.

Paste the Markdown export and start with:

Use this PageLens Markdown report as a launch repair checklist.

First, do not edit files.

Group findings into:
- quick wins
- high-impact fixes
- needs product decision
- needs design decision
- needs backend/security review
- likely false positive or intentionally accepted

For each group, explain why it belongs there.

This gives you control before the agent starts changing code.
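If you want a mechanical first pass before the agent weighs in, a few lines of code can pre-sort findings by severity. This is a rough heuristic built on the hypothetical Finding shape above; the "needs decision" buckets still require human and agent judgment.

// Rough pre-sort by severity. Only a starting point:
// the agent and a human still make the real triage calls.
function preTriage(findings: Finding[]): Map<string, Finding[]> {
  const buckets = new Map<string, Finding[]>([
    ["quick wins", []],
    ["high-impact fixes", []],
    ["needs review", []],
  ]);
  for (const f of findings) {
    const key =
      f.severity === "low" ? "quick wins" :
      f.severity === "critical" ? "needs review" :
      "high-impact fixes";
    buckets.get(key)!.push(f);
  }
  return buckets;
}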

Ask for file-level patches

After triage, ask for a targeted plan:

Take the quick wins and high-impact fixes from the PageLens report.

Create a file-by-file implementation plan.
For each proposed edit, include:
- PageLens rule ID
- affected URL or component
- file likely to change
- exact fix
- why the fix is safe
- how to verify after editing

Do not implement until I approve.

This reduces random refactors.

It also creates a review trail from finding to patch.
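That trail is easier to keep if each planned edit lives in a consistent shape. A hypothetical sketch mirroring the fields in the prompt above; keep it in a planning doc, a JSON file, or the PR description.

// One entry per proposed edit, mirroring the prompt's fields.
// Illustrative only, not a PageLens convention.
interface PlanEntry {
  ruleId: string;        // PageLens rule ID
  target: string;        // affected URL or component
  file: string;          // file likely to change
  category: string;      // e.g. "metadata", "seo", "accessibility"
  fix: string;           // the exact fix
  safetyNote: string;    // why the fix is safe
  verification: string;  // how to verify after editing
  approved: boolean;     // flipped by a human, not by the agent
}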

Keep false positives explicit

Some findings will not be fixed.

That is normal.

Maybe the report found an issue that is already handled another way. Maybe the trade-off is intentional. Maybe the current branch has a temporary state. Maybe the scanner needs feedback.

Do not leave those decisions implicit.

Use:

Identify findings we should not implement.

For each one:
- explain why it may be a false positive or accepted risk
- state what evidence supports that decision
- suggest wording for an acknowledgement or scanner feedback note

If you use the PageLens MCP server, you can feed that decision back so the review loop improves over time.
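One lightweight way to keep those decisions explicit is a small acknowledgements file committed next to the code. A sketch of what each entry might record; the file name and shape are made up for illustration.

// e.g. pagelens-accepted.json, typed like this.
// The file name and fields are assumptions, not a PageLens convention.
interface AcceptedFinding {
  ruleId: string;
  url: string;
  reason: "false-positive" | "accepted-risk" | "temporary-state";
  evidence: string;      // what supports the decision
  reviewedBy: string;
  reviewedOn: string;    // ISO date
}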

Implement in small batches

Do not hand a 200-finding report to an agent and say "fix all."

Work in batches:

  1. metadata and SEO quick wins
  2. image and performance quick wins
  3. accessibility labels and tap targets
  4. security headers and route-level changes
  5. authenticated or backend changes

Use this prompt:

Implement only the first batch of approved PageLens fixes.

Keep edits small.
Do not refactor unrelated code.
After editing, list each finding addressed, each file changed, and how to verify it.

Small batches are easier to review and less likely to create new issues.
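In code terms, a batch is just a filter over the approved plan. A sketch building on the hypothetical PlanEntry shape from earlier; the category names are assumptions matching the list above.

declare const plan: PlanEntry[]; // the approved plan from the previous step

// Pull one batch at a time; nothing unapproved ever reaches the agent.
function selectBatch(entries: PlanEntry[], categories: string[]): PlanEntry[] {
  return entries.filter((e) => e.approved && categories.includes(e.category));
}

// Batch 1: metadata and SEO quick wins.
const batchOne = selectBatch(plan, ["metadata", "seo"]);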

Verify locally

After patches:

npm run lint
npm run build

Add stack-specific checks if your app has them:

  • tests
  • type-checking
  • database migration checks
  • Supabase type generation
  • Playwright smoke tests
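If you run these often, a tiny script can chain them and stop on the first failure. A sketch for a Node project, assuming npm scripts named lint, build and test exist in your repo.

// verify.ts: run local checks in order and stop on the first failure.
// The script names are assumptions; swap in whatever your repo defines.
import { execSync } from "node:child_process";

const checks = ["npm run lint", "npm run build", "npm test"];

for (const cmd of checks) {
  console.log(`running: ${cmd}`);
  // execSync throws on a non-zero exit, so the script fails fast.
  execSync(cmd, { stdio: "inherit" });
}

console.log("All local checks passed.");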

Ask:

Review the changes against the original PageLens findings.

For each finding we intended to fix, say:
- fixed
- partially fixed
- not fixed
- needs re-scan

Include the command or manual step that verifies it.

Re-scan the live app

Local verification is necessary but not enough.

PageLens checks the deployed surface. Headers, metadata, assets, scripts and routing can differ between local and production.

After deployment, re-scan.

Look for:

  • fixed findings disappearing
  • severity dropping
  • score improving
  • new issues introduced by the patch
  • mobile-only regressions

The re-scan is what closes the loop.
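If you keep the before and after findings in a structured form, part of that comparison can be mechanical. A sketch reusing the hypothetical Finding shape from earlier, keyed on rule ID plus URL.

// Compare two scans by (ruleId, url) to see what closed and what appeared.
// Assumes both reports were parsed into the Finding shape sketched earlier.
function diffScans(before: Finding[], after: Finding[]) {
  const key = (f: Finding) => `${f.ruleId}@${f.url}`;
  const beforeKeys = new Set(before.map(key));
  const afterKeys = new Set(after.map(key));
  return {
    fixed: before.filter((f) => !afterKeys.has(key(f))),
    introduced: after.filter((f) => !beforeKeys.has(key(f))),
  };
}

Anything in the introduced list deserves the same triage loop as the original report.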

What good looks like

A good PageLens repair loop produces:

  • a small batch of reviewed changes
  • clear mapping from finding to fix
  • no unrelated refactors
  • local checks passing
  • deployed scan improving
  • accepted findings documented

That is how an AI agent becomes useful in QA.

Not by blindly fixing everything.

By turning structured findings into reviewed, verifiable patches.
