Why Launchwright

Every one of these had a fix.
None of them got a review.

The vulnerabilities that took down real products weren't sophisticated. They were boring. Textbook. Preventable. The problem wasn't the AI — it was that nobody checked what the AI built before real users touched it.

53%

of developers who shipped AI-generated code later discovered security issues that passed initial review

Developer survey, 2026

69

vulnerabilities found across 15 audited vibe-coded apps — 6 of them critical

Security firm pen test, 2026

170

production apps affected simultaneously by a single inverted access control pattern

CVE-2025-48757

35

new CVEs attributed to AI-generated code in March 2026 alone — up from 6 in January

Georgia Tech Vibe Security Radar

The pattern repeats

Across every documented incident the root causes are the same — AI generates code that works functionally but skips the security fundamentals, database protections, and edge-case handling that experienced developers apply instinctively. The code looks correct. It passes basic tests. It fails when someone actually tries to break it.

In March 2026 alone, Georgia Tech's Vibe Security Radar tracked 35 new CVEs directly attributed to AI-generated code, up from just 6 in January. Researchers estimate the actual number is 5–10× what they currently detect.

None of these incidents required a months-long security engagement to prevent. The Enrichlead auth flaw would have taken an experienced developer 20 minutes to spot. The Moltbook database misconfiguration was a single settings check. The problem isn't the AI — it's that nobody checked before launch.

Documented incidents

Publicly documented cases. Sources attributed. Legal note: this is factual reporting, not accusation.

Critical · Cursor · Auth / Security

Enrichlead

Shut down within 72 hours of launch.

72 hours. That's how long it took to find the door the AI left open.

What would have caught it: Any senior developer reviewing the auth architecture before launch — a 20-minute code review would have flagged client-side authorization as a critical finding immediately.
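The flaw class is worth spelling out: a client-side role check only hides UI, while the server must make the actual decision. A minimal TypeScript sketch of the deny-by-default server-side guard a reviewer would look for (names and shapes are illustrative, not Enrichlead's actual code):

```typescript
// Hypothetical sketch — not Enrichlead's code. The point: authorization
// lives on the server, where the user can't edit it in DevTools.
type User = { id: string; role: "admin" | "member" };

// Deny by default: no user, or wrong role, means no access.
function canAccessAdminApi(user: User | null): boolean {
  return user !== null && user.role === "admin";
}

// A client-side `if (isAdmin) showAdminPanel()` only hides a button —
// it does not stop a crafted request. Only the server check above does.
console.log(canAccessAdminApi({ id: "u1", role: "member" })); // false
console.log(canAccessAdminApi(null));                         // false
console.log(canAccessAdminApi({ id: "u2", role: "admin" }));  // true
```

The key property is the default: anything that is not explicitly allowed is denied.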

Read the full case study →

Critical · Vibe coded with Supabase · Database / Infrastructure

Moltbook

1.5 million API keys exposed in a misconfigured database.

The AI set the defaults. The founder trusted the defaults. 1.5 million keys later, that trust was expensive.

What would have caught it: A 30-minute database configuration review — checking that RLS was enabled and default access was restricted. A standard item on any pre-launch infrastructure checklist.
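That settings check can even be scripted as a checklist item. A hypothetical TypeScript sketch that flags tables shipping without row-level security (the `TableConfig` shape is illustrative, not the Supabase API):

```typescript
// Hypothetical pre-launch check: every table must have row-level
// security enabled before real data goes in. The shape below is
// illustrative, not the Supabase client API.
type TableConfig = { name: string; rlsEnabled: boolean };

// Returns the names of tables that would ship wide open.
function tablesMissingRls(tables: TableConfig[]): string[] {
  return tables.filter((t) => !t.rlsEnabled).map((t) => t.name);
}

const audit = tablesMissingRls([
  { name: "users", rlsEnabled: true },
  { name: "api_keys", rlsEnabled: false }, // the Moltbook-style gap
]);
console.log(audit); // [ 'api_keys' ]
```

A non-empty result blocks the launch — which is the entire point of the check.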

Read the full case study →

Critical · Lovable · Auth / Access Control

CVE-2025-48757

170 production apps affected simultaneously by inverted access control.

The code looked right. It ran fine. It just let in the wrong people.

What would have caught it: End-to-end access control testing: log in as a real user and confirm you can see your own data; make the same request without authentication and confirm it is rejected. A test that would have failed immediately on every affected app.
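The two requests described above map directly onto a smoke test. A minimal sketch, with a hypothetical handler showing the correct deny-by-default behavior the test would check for:

```typescript
// Hypothetical request handler sketch. Correct behavior: reject
// unauthenticated requests, reject requests for someone else's data.
type ReadRequest = { userId: string | null; ownerId: string };

function handleRead(req: ReadRequest): { status: number } {
  if (req.userId === null) return { status: 401 };        // no auth: reject
  if (req.userId !== req.ownerId) return { status: 403 }; // not yours: reject
  return { status: 200 };                                 // your own data: allow
}

// The two-request access control test from the text:
console.log(handleRead({ userId: "alice", ownerId: "alice" }).status); // 200
console.log(handleRead({ userId: null, ownerId: "alice" }).status);    // 401
```

An inverted-check bug flips one of these comparisons, and this test catches it on the first run.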

Read the full case study →

Critical · Various · Multiple

The Pen Test

69 vulnerabilities across 15 apps — all of them textbook, all of them preventable.

69 vulnerabilities. 15 apps. All of them boring. All of them preventable.

What would have caught it: A standard pre-launch code review — the kind none of these 15 founders got.

Read the full case study →

Critical · Lovable · Auth / Data Handling / API

18,000 Users

16 exploitable vulnerabilities in a live app with 18,000 active users.

18,000 users. 16 vulnerabilities. Zero reviews.

What would have caught it: A pre-launch audit before the first user signed up. At 18,000 users the exposure window was already significant — every day without a fix was a day the vulnerabilities were live.

Read the full case study →

Your app deserves a review before real users find what the AI missed.

A Launchwright audit catches the boring, preventable vulnerabilities before they become your incident report. Starting at $299.