Glossary
Agentic testing is an approach to end-to-end testing in which an AI agent autonomously generates, executes, and maintains the tests. Unlike traditional self-healing tools that capture CSS selectors and try to repair them when they break, agentic testers interact with the application visually — the way a real user would. The agent sees buttons as buttons, forms as forms, and navigation as navigation, without depending on DOM structure, CSS classes, or XPath expressions. When the UI changes, the agent adapts its interaction strategy rather than searching for a replacement selector.
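The difference between selector lookup and intent-based interaction can be sketched in a few lines. The TypeScript below is a toy illustration, not any tool's real implementation: it resolves a step like "click the Submit button" against a simplified accessibility tree by role and visible label, so a renamed CSS class or reshuffled DOM has no effect on the match.

```typescript
// Minimal, hypothetical model of intent-based element resolution.
// Real agentic testers work from screenshots and accessibility data;
// this sketch only shows why the match survives DOM/CSS churn.

interface UiNode {
  role: string;     // what the element *is* (button, textbox, link…)
  name: string;     // its visible label / accessible name
  cssClass: string; // styling detail the agent deliberately ignores
}

// Resolve a user intent ("press Submit") by role + label,
// never by selector. Label comparison is case-insensitive.
function findByIntent(
  tree: UiNode[],
  role: string,
  name: string
): UiNode | undefined {
  return tree.find(
    (n) => n.role === role && n.name.toLowerCase() === name.toLowerCase()
  );
}

// Before a redesign…
const v1: UiNode[] = [
  { role: "button", name: "Submit", cssClass: "btn-primary" },
];
// …and after: class renamed, new element added earlier in the DOM.
// A selector like ".btn-primary" breaks; the intent match does not.
const v2: UiNode[] = [
  { role: "link", name: "Help", cssClass: "nav-link" },
  { role: "button", name: "Submit", cssClass: "cta-2024" },
];

console.log(findByIntent(v1, "button", "Submit")?.cssClass); // "btn-primary"
console.log(findByIntent(v2, "button", "Submit")?.cssClass); // "cta-2024"
```

The same two-argument intent ("button" + "Submit") resolves correctly in both versions of the UI, which is the property selector-based tests lack.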
The testing industry spent a decade on “self-healing” — tools that capture selectors, detect when they break, and guess alternative selectors. This approach creates a circular problem: selectors break, the tool guesses a replacement, and sometimes guesses wrong (creating false positives where tests pass but validate the wrong element). 46% of developers now distrust AI testing accuracy because of this pattern. Agentic testing sidesteps the problem entirely by never creating selectors in the first place. It represents a fundamental architecture shift from “selector management” to “intent-based interaction.”
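The false-positive failure mode described above can be made concrete with a deliberately naive healer. This sketch is illustrative only — no real tool uses logic this crude — but it shows the shape of the problem: when the stored selector breaks, a fuzzy fallback can silently latch onto the wrong element, and the test keeps passing.

```typescript
// Toy sketch of the "heal and hope" failure mode: a broken selector
// is replaced by a guess, and the guess can be wrong.

interface Elem {
  selector: string;
  text: string;
}

// Naive heal heuristic (hypothetical): pick the first element whose
// selector shares a prefix with the broken one.
function heal(dom: Elem[], brokenSelector: string): Elem | undefined {
  const prefix = brokenSelector.slice(0, 5); // ".btn-"
  return dom.find((e) => e.selector.startsWith(prefix));
}

// The app renamed ".btn-submit" to ".btn-confirm" and added a
// ".btn-delete" button earlier in the DOM.
const dom: Elem[] = [
  { selector: ".btn-delete", text: "Delete account" },
  { selector: ".btn-confirm", text: "Submit" },
];

const healed = heal(dom, ".btn-submit");
console.log(healed?.text); // "Delete account" — the test may now "pass"
// while clicking the wrong element: a false positive.
```

The heal succeeds mechanically, so nothing alerts the team; the test suite goes green while exercising the wrong button. That silent success is exactly what erodes trust.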
Most teams still use selector-based automation (Playwright, Cypress) with manual maintenance. Some use self-healing tools (Mabl, Testim) that reduce maintenance but introduce false positives. Newer tools (Momentic, Spur) use intent-based or vision-based approaches but are often described as “black boxes” — engineers can’t see why the AI made specific decisions. The gap is transparency: agents that act autonomously but explain their reasoning.
Zerocheck uses an agentic approach in which tests describe user intent in plain English. The AI agent interacts with the application visually, adapting to UI changes without selectors. Every adaptation is visible and reviewable, with a confidence score attached. When confidence drops below a threshold, the test fails closed instead of silently passing — the opposite of the "heal and hope" approach.
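Failing closed on low confidence can be sketched as a simple gate. The threshold value and the shape of the data below are assumptions for illustration, not Zerocheck's actual API: the point is only that a low-confidence adaptation surfaces as a loud failure rather than a quiet pass.

```typescript
// Hypothetical sketch of failing closed on low confidence.
// THRESHOLD and the Adaptation shape are illustrative assumptions.

interface Adaptation {
  step: string;       // the plain-English intent
  matched: string;    // what the agent decided to interact with
  confidence: number; // 0..1 score for that decision
}

const THRESHOLD = 0.8; // assumed cutoff for this sketch

// Fail closed: below the threshold, the step errors out loudly
// instead of proceeding on a guess ("heal and hope").
function review(a: Adaptation): string {
  if (a.confidence < THRESHOLD) {
    throw new Error(
      `FAILED CLOSED: "${a.step}" matched "${a.matched}" ` +
        `at confidence ${a.confidence} (< ${THRESHOLD})`
    );
  }
  return `ok: "${a.step}" -> "${a.matched}" @ ${a.confidence}`;
}

console.log(
  review({ step: "click Submit", matched: "Submit button", confidence: 0.97 })
);

try {
  review({ step: "open Settings", matched: "gear icon?", confidence: 0.41 });
} catch (e) {
  console.log((e as Error).message); // low confidence surfaces as a failure
}
```

The error message carries the step, the candidate match, and the score, so a reviewer can see exactly which adaptation the agent was unsure about — the transparency gap the previous paragraph describes.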