
Cursor and Claude prompts to run before launch

Copy-paste prompts for using AI coding agents as launch reviewers, security reviewers, accessibility reviewers and PageLens repair partners.

Beginner · npm run lint · npx tsc --noEmit · npm run build

The same AI agent that helped you build the app can help you review it.

But only if you change the prompt.

"Keep building this feature" and "find the ways this launch could fail" are different jobs.

Before launch, open a fresh agent session and make it review the app from the outside in.

The launch QA prompt

Use this first:

Review this app as an adversarial pre-launch QA reviewer.

Assume the happy path works.
Your job is to find what could embarrass us after launch.

Focus on:
- security and privacy risks
- exposed secrets or browser-side API keys
- missing rate limits
- broken production build assumptions
- accessibility issues
- mobile UX issues
- SEO and Open Graph gaps
- slow pages, large images and heavy scripts
- database access and over-broad API responses
- authenticated route and admin boundary mistakes

Do not edit code yet.
Return a prioritized checklist with severity, why it matters, how to verify it, and the safest fix.

This prompt makes the agent slow down.

It asks for a checklist before code. That matters because the first answer should be judgement, not patches.

The security reviewer prompt

Use this when the app has login, payments, AI credits, user data, uploads or API keys:

Review this app as a security reviewer before launch.

Find:
- exposed secrets
- public environment variable misuse
- API keys in client code
- missing server-side authorization
- non-proxied privileged third-party calls
- routes that need rate limiting
- unsafe logs
- over-broad API responses
- weak security headers
- admin routes or actions exposed to normal users

For each issue, explain the exploit path in plain English and propose the smallest safe fix.

Ask for exploit paths. That forces the agent to explain why each issue matters.
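
The single most common finding here is a privileged API key that ships to the browser. Below is a minimal sketch of the usual fix, assuming a Next.js App Router project; the route name, upstream URL and EXAMPLE_API_KEY environment variable are all hypothetical placeholders, not a prescribed implementation:

```ts
// app/api/summarize/route.ts — hypothetical route name
// The key lives in a server-only env var (no NEXT_PUBLIC_ prefix),
// so it never appears in client bundles.
import { NextResponse } from "next/server";

export async function POST(request: Request) {
  const { text } = await request.json();

  // Basic input check before spending money on the upstream call.
  if (typeof text !== "string" || text.length === 0 || text.length > 4_000) {
    return NextResponse.json({ error: "Invalid input" }, { status: 400 });
  }

  // Hypothetical upstream endpoint; replace with the real provider.
  const upstream = await fetch("https://api.example.com/v1/summarize", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.EXAMPLE_API_KEY}`, // server-only secret
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ text }),
  });

  if (!upstream.ok) {
    return NextResponse.json({ error: "Upstream error" }, { status: 502 });
  }

  // Return only the fields the UI needs, not the raw upstream payload.
  const data = await upstream.json();
  return NextResponse.json({ summary: data.summary });
}
```

The browser only ever talks to your own route, which is also the natural place to add rate limiting and authorization checks.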

The database boundary prompt

Use this when the app has private user data:

Review database access and API responses.

Find any place where:
- a user can read another user's data
- a tenant can access another tenant's data
- a route trusts an ID from the client without ownership checks
- service role or admin access is used unnecessarily
- entire rows are returned when the UI needs only a few fields
- private fields are sent to the browser

Create negative tests for each boundary.

The phrase "negative tests" is important.

You want proof that the wrong user cannot do the wrong thing.
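A negative test can be as small as one request made with the wrong user's credentials. Here is a minimal sketch in Vitest style; the /api/projects route, the fixture ID and the signInAs helper are hypothetical stand-ins for whatever your app and test setup actually use:

```ts
// projects.ownership.test.ts — negative ownership test (sketch)
import { describe, it, expect } from "vitest";
import { signInAs } from "./helpers"; // hypothetical helper returning a session cookie

describe("project ownership boundary", () => {
  it("does not let user B read user A's project", async () => {
    const cookieB = await signInAs("user-b@example.com");

    // "project-owned-by-a" is a fixture ID belonging to user A.
    const res = await fetch("http://localhost:3000/api/projects/project-owned-by-a", {
      headers: { cookie: cookieB },
    });

    // The boundary holds if the wrong user gets 403 or 404, never 200.
    expect([403, 404]).toContain(res.status);
  });
});
```
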

The accessibility and mobile prompt

Use this for UI quality:

Review the app for accessibility and mobile UX issues.

Check:
- form labels and accessible names
- error messages
- keyboard navigation
- focus states
- colour contrast
- image alt text
- mobile tap targets
- mobile menus
- modals and overlays
- layout overflow
- content readability on small screens

Prioritize issues that affect conversion or trust.

This usually finds boring fixes.

Boring fixes are exactly what you want before launch.
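Most of those fixes are one-liners. As a rough illustration of what typically comes out of this review, here is a sketch of a React/TSX form with the boring parts done; the component and field names are illustrative:

```tsx
// EmailSignup.tsx — illustrative accessibility fixes (sketch)
export function EmailSignup({ error }: { error?: string }) {
  return (
    <form>
      {/* Explicit label instead of a placeholder-only input. */}
      <label htmlFor="email">Email address</label>
      <input
        id="email"
        name="email"
        type="email"
        autoComplete="email"
        aria-invalid={Boolean(error)}
        aria-describedby={error ? "email-error" : undefined}
      />

      {/* Error text tied to the field so screen readers announce it. */}
      {error && <p id="email-error">{error}</p>}

      {/* Tap target large enough for mobile (roughly 44x44px or more). */}
      <button type="submit" style={{ minHeight: 44, minWidth: 44 }}>
        Subscribe
      </button>
    </form>
  );
}
```
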

The performance prompt

Use this after the build works:

Review this app for performance risks before launch.

Look for:
- large images
- unoptimized screenshots
- missing image dimensions
- heavy JavaScript
- unnecessary client components
- expensive third-party scripts
- embedded videos or widgets
- slow data fetching
- layout shift risks
- mobile LCP issues

Suggest fixes in order of likely user impact.

Do not let the agent spend a day micro-optimising a footer if the hero image is huge.
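The hero image is usually the first fix. A minimal sketch, assuming a Next.js project using its built-in next/image component; the file path, copy and dimensions are illustrative:

```tsx
// Hero.tsx — serve a sized, optimized hero image instead of a raw screenshot
import Image from "next/image";

export function Hero() {
  return (
    <Image
      src="/hero.png" // illustrative path
      alt="Dashboard showing a pre-launch report"
      width={1200} // explicit dimensions prevent layout shift
      height={630}
      priority // preload: this is the likely LCP element
      sizes="(max-width: 768px) 100vw, 1200px"
    />
  );
}
```
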

The SEO and AI search prompt

Use this for public pages:

Review public pages for SEO and AI search readiness.

For each important page, check:
- title
- meta description
- H1
- canonical
- robots/indexability
- Open Graph preview
- structured data
- product clarity
- pricing clarity
- audience clarity
- answer-engine readability

Suggest copy and metadata improvements.

This is especially useful for AI-built apps because generated landing pages often sound polished but vague.
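If the project is on the Next.js App Router, most of that checklist maps to one metadata export per page. A minimal sketch, with a made-up product name, copy and domain standing in for your own:

```ts
// app/page.tsx (excerpt) — per-page metadata for SEO and Open Graph previews
import type { Metadata } from "next";

export const metadata: Metadata = {
  title: "Acme Notes — shared notes for small teams", // illustrative copy
  description:
    "Say plainly what the product is, who it is for and what it costs.",
  alternates: { canonical: "https://example.com/" }, // illustrative domain
  openGraph: {
    title: "Acme Notes",
    description: "Shared notes for small teams.",
    url: "https://example.com/",
    images: [{ url: "https://example.com/og.png", width: 1200, height: 630 }],
  },
};
```
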

The PageLens Markdown repair prompt

After running PageLens, download the Markdown export and use:

Use this PageLens Markdown report as a repair checklist.

First, group findings into:
- quick wins
- high-impact fixes
- needs human decision
- likely false positive or intentionally accepted

Then propose code changes for the quick wins.
For each proposed change, include:
- file path
- exact issue being fixed
- why the fix is safe
- how to verify it

Do not change code until I approve the grouped plan.

This works because the Markdown export gives the agent stable rule IDs, evidence and suggested fixes without screenshot or PDF noise.

The final gate prompt

Before launch:

Act as the final launch gate.

Review the current state and tell me:
- what is safe to launch
- what is risky but acceptable for MVP
- what must be fixed before launch
- what should be monitored after launch
- what should be added to the next sprint

Be direct. Do not reassure me unless the evidence supports it.

This is the prompt that stops optimism from becoming a bug report.

The workflow

Use the prompts in this order:

  1. Run local commands.
  2. Ask for launch QA review.
  3. Ask for security/database review if the app handles data.
  4. Ask for accessibility/mobile review.
  5. Ask for performance review.
  6. Ask for SEO/AI search review.
  7. Run PageLens.
  8. Feed the Markdown export to your agent.
  9. Approve fixes.
  10. Re-scan.

Your AI agent is not just a builder.

Used correctly, it is also your first launch reviewer.
