46% of devs distrust AI testing accuracy


Some numbers from the State of Testing 2026 report jumped out at me.

~79% cite AI as the most impactful technology for testing. But 46% straight up distrust AI testing accuracy. And around 30% of GenAI testing projects get abandoned after the POC stage. That's a weird combo.

The finding that surprised me: people who actually use AI tools daily are 4x less concerned about job loss than people who don't. The fear is coming from the sidelines, not from people in the trenches.

My take on the 30% abandonment: most tools optimize for demos, not production. They "heal" selector failures silently, so you can't tell whether the AI just healed away a real bug or fixed a genuinely broken locator. That ambiguity kills trust fast. And honestly, false positives are worse than actual bugs... cry wolf enough times and nobody checks the alerts anymore.
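To make that concrete, here's a minimal sketch of what auditable healing could look like, assuming a Playwright setup. `resolveWithAudit` and the fallback list are hypothetical, not any vendor's actual API; the point is just that a heal gets logged for review instead of vanishing into a green check.

```typescript
import { Page, Locator } from '@playwright/test';

// Hypothetical helper: try the primary selector, fall back to
// alternates, and record every heal instead of swallowing it.
export async function resolveWithAudit(
  page: Page,
  primary: string,
  fallbacks: string[] = [],
): Promise<Locator> {
  for (const selector of [primary, ...fallbacks]) {
    const candidate = page.locator(selector);
    if ((await candidate.count()) > 0) {
      if (selector !== primary) {
        // Surface the heal: a reviewer can now ask whether the
        // locator rotted or the element actually disappeared.
        console.warn(
          `[heal] "${primary}" not found, matched "${selector}" instead`
        );
      }
      return candidate;
    }
  }
  // Nothing matched: fail loudly rather than inventing a match.
  throw new Error(`No selector matched for "${primary}"`);
}
```

In CI you could treat any `[heal]` line as a soft failure that needs human sign-off. That one change turns the silent-heal ambiguity into an explicit review step.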

Has anyone here actually shipped AI-generated tests to production and trusted the results past 6 months? Not a 2-week POC, but tests you actually relied on day to day. What made it stick or fall apart?
