Vibe Coding Best Practices
Launch Readiness · 9 min read

Done is not launch-ready

Learn the difference between an AI-built app that works in a demo and one that is ready for real users, customers and crawlers.


An AI-built app can feel finished long before it is safe to launch.

The homepage renders. The buttons click. The auth flow works for your test account. The pricing table looks polished. The demo path is good enough for a screen recording.

That is progress.

It is not proof.

Launch readiness means the product can survive first contact with people who are not you: users on slow phones, customers with password managers, investors opening the link in Slack, crawlers reading metadata, and strangers trying paths your builder never tested.

What done usually means

When founders say an app is done, they often mean:

  • the main page loads locally
  • the happy path works once
  • the design looks close to the prompt
  • the database has the right tables
  • the checkout or signup button exists
  • there are no obvious red errors in development
  • the AI builder stopped complaining

That is a useful milestone, but it is mostly a builder's definition of done.

It says the app exists.

It does not say the app is ready.

What launch-ready means

Launch-ready means the obvious failure modes have been checked before the public finds them.

A launch-ready app has been reviewed for:

  • security headers and browser protections
  • secrets and API keys staying server-side
  • rate limits on expensive or abusable routes
  • privacy, cookie and terms basics
  • accessible forms, labels, contrast and keyboard states
  • production build errors
  • mobile layout and tap target issues
  • image and JavaScript weight
  • metadata, sharing previews and indexability
  • AI search readability
  • authenticated routes and admin boundaries
  • database access rules and over-broad responses
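
The rate-limit item above can be sketched as a minimal fixed-window limiter in front of an expensive route. Everything here — the window size, the request cap, keying by a client id — is an illustrative assumption, not a production design; a real deployment would usually reach for shared storage rather than in-process memory.

```typescript
// Minimal fixed-window rate limiter (sketch). Window and limit are
// illustrative values, not recommendations.
const WINDOW_MS = 60_000;   // one-minute window
const MAX_REQUESTS = 10;    // cap per client per window

const hits = new Map<string, { count: number; windowStart: number }>();

function allowRequest(clientId: string, now: number = Date.now()): boolean {
  const entry = hits.get(clientId);
  // No record yet, or the window has elapsed: start a fresh window.
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(clientId, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS;
}
```

A route handler would call `allowRequest` with something like the caller's IP or API key and return a 429 when it comes back false.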

You do not need a heavyweight enterprise audit before sharing an MVP.

You do need a structured pass that asks: "What would make this look unprepared if someone found it today?"

The command pass

Start with the commands your project already understands.

For many Node, React and Next.js apps, that means:

npm run lint
npx tsc --noEmit
npm run build

These commands answer different questions.

npm run lint asks whether the code follows the project's rules. It can catch accessibility issues, unsafe patterns, unused variables and common mistakes that AI-generated code often leaves behind.

npx tsc --noEmit asks whether your TypeScript contracts still line up. It catches wrong field names, impossible values, missing properties and assumptions about data that may not be true.
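
As a sketch of what that catches, consider a hypothetical `Plan` contract — the shape and field names below are invented for illustration, not taken from any real codebase:

```typescript
// A contract that tsc --noEmit enforces across the codebase.
interface Plan {
  name: string;
  priceCents: number; // integer cents, never a formatted string
}

function monthlyLabel(plan: Plan): string {
  return `${plan.name}: $${(plan.priceCents / 100).toFixed(2)}/mo`;
}

const pro: Plan = { name: "Pro", priceCents: 1900 };
monthlyLabel(pro); // → "Pro: $19.00/mo"

// These lines would fail `npx tsc --noEmit` long before they fail in front of a user:
// monthlyLabel({ name: "Pro" });            // missing property: priceCents
// monthlyLabel({ name: "Pro", price: 19 }); // wrong field name: price
```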

npm run build asks whether production can compile the app. Development mode is forgiving. Production builds are not.

If any of these fail, you are not launch-ready yet.
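
The three commands can be wrapped in one fail-fast gate so a launch never skips a step. This is a sketch that assumes `npm` scripts named `lint` and `build` exist in your `package.json`:

```typescript
// Pre-launch command pass: run each check in order, stop at the first failure.
import { spawnSync } from "node:child_process";

type Check = [command: string, args: string[]];

function runChecks(checks: Check[]): boolean {
  for (const [cmd, args] of checks) {
    // Inherit stdio so lint, type, and build output stays visible.
    const result = spawnSync(cmd, args, { stdio: "inherit" });
    if (result.status !== 0) {
      console.error(`Not launch-ready: "${cmd} ${args.join(" ")}" failed`);
      return false;
    }
  }
  return true;
}

// Usage, as the launch gate:
// runChecks([
//   ["npm", ["run", "lint"]],
//   ["npx", ["tsc", "--noEmit"]],
//   ["npm", ["run", "build"]],
// ]) || process.exit(1);
```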

The agent review pass

After the command pass, ask a separate AI session to review the app as a launch reviewer, not as the builder that just made it.

Use this prompt:

Review this app as a pre-launch QA reviewer.

Assume the happy path works. Look for what could embarrass us after launch:
- security and privacy risks
- exposed secrets or client-side API keys
- missing rate limits
- broken production build assumptions
- accessibility issues
- mobile UX issues
- SEO and sharing preview gaps
- slow pages, large images or heavy scripts
- database access or over-broad API responses
- authenticated route and admin boundary mistakes

Return a prioritized checklist with severity, why it matters, how to verify it, and the safest fix.
Do not change code yet.

This prompt changes the job from "keep building" to "find risk."

That distinction matters. AI builders are often excellent at continuing momentum. They are less naturally adversarial unless you ask them to be.

The PageLens pass

PageLens checks the public side of the app the way an external reviewer sees it.

It cannot see every line of source code, and that is a feature: users cannot either. They experience the rendered site, the headers, the metadata, the mobile layout, the page weight, the accessibility surface, and the trust signals.

Run a scan before launch, then look for:

  • high and critical findings
  • repeated issues across pages
  • mobile-only issues
  • missing metadata on important pages
  • performance problems on commercial pages
  • security headers that are absent or weak
  • accessibility issues that affect forms or CTAs
  • AI search findings that make the product hard to understand

The score is useful, but the findings are the real work.

How to know you are closer

You are moving from done to launch-ready when:

  • the build succeeds cleanly
  • any remaining lint or type-check warnings are explainable
  • no secret values are visible in frontend code
  • expensive routes have protection
  • the main pages have titles, descriptions and share previews
  • mobile users can complete the main action
  • forms have labels and clear errors
  • PageLens quick wins are fixed or intentionally accepted
  • the remaining issues are known trade-offs, not surprises
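
The "no secret values in frontend code" check can be partially automated with a rough pattern scan over your built assets. The patterns and the `dist` directory name below are assumptions, and regexes are only a first pass, not proof:

```typescript
// Heuristic scan for secret-looking strings in built frontend files (sketch).
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

const SECRET_PATTERNS = [
  /sk_live_[0-9a-zA-Z]+/,                  // Stripe-style live secret key
  /AKIA[0-9A-Z]{16}/,                      // AWS access key id
  /-----BEGIN (RSA )?PRIVATE KEY-----/,    // private key material
];

function scanText(text: string): string[] {
  return SECRET_PATTERNS.filter((p) => p.test(text)).map((p) => p.source);
}

function scanDir(dir: string): string[] {
  const findings: string[] = [];
  for (const name of readdirSync(dir)) {
    const path = join(dir, name);
    if (statSync(path).isDirectory()) {
      findings.push(...scanDir(path));
    } else {
      const text = readFileSync(path, "utf8");
      findings.push(...scanText(text).map((m) => `${path}: ${m}`));
    }
  }
  return findings;
}

// Usage: scanDir("dist") — anything it returns belongs server-side.
```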

Launch readiness is not perfection.

It is informed risk.

The habit to build

Before every public launch, run the same loop:

  1. Run local commands.
  2. Ask an AI agent for a risk review.
  3. Run PageLens against the live URL.
  4. Fix the quick wins.
  5. Re-scan.
  6. Share the link.

That loop turns "the app looks done" into "we checked the obvious ways it could fail."

That is the difference your users will feel.
