41% of code is AI-generated. Who tests it?

3 min read

A study on vibe coding security found that 53% of AI-generated code contains security vulnerabilities. But the second finding is wilder: after five rounds of asking GPT-4o to fix those vulnerabilities, the code had 37% MORE vulnerabilities than it started with. The model confidently introduces new problems while "fixing" old ones.

~41% of committed code is now AI-generated (GitHub data). Everyone's talking about productivity gains. Nobody's talking about verification.

When we're generating code 2-3x faster, the test coverage gap widens just as fast. We're shipping faster than we can verify, and "write tests for your code" doesn't really scale when code volume triples overnight lol

Every $1 spent on testing saves an estimated $3-5 in downstream fixes. But nobody budgets for testing, because it's invisible until something breaks in prod. Tale as old as time...

"Works on my machine" is not the same as "works in production with real users and real payment providers." We're accelerating creation without accelerating verification, and I don't think we've figured out the answer yet.

What's your team actually doing about this? Scaling testing alongside AI code, or just shipping faster and hoping?

Stop babysitting flaky tests

Zerocheck runs E2E tests on every PR with recordings, screenshots, and step traces.

Get a demo