# Validation
Traditional scanners dump everything they flag into a CSV and leave you to triage. Pentestas inverts that: findings go through an Accuracy Gate before they're persisted, and only survivors are shown.
## The gate
Every raw detection passes through three stages:
### 1. Junk filter

Fast regex-based checks: an SQLi payload that "extracted" `style.css` (a CSS filename) is junk; an XSS reflection that echoed your payload as a JSON-escaped string inside a plain JSON response is not exploitable. Anything that clearly can't be weaponised is dropped here.
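As a rough sketch of this stage, a junk filter can be a handful of cheap regex checks over the detection record before anything heavier runs. The field names (`type`, `extracted`, `context`) and patterns below are hypothetical, not Pentestas internals:

```python
import re

# Hypothetical junk filter: fast, regex-only, no network traffic.
STATIC_ASSET = re.compile(r"\.(css|js|png|svg|woff2?)$", re.IGNORECASE)

def is_junk(finding: dict) -> bool:
    """Drop detections that clearly can't be weaponised."""
    if finding["type"] == "sqli" and STATIC_ASSET.search(finding.get("extracted", "")):
        return True  # "extracted" a static filename, not data
    if finding["type"] == "xss" and finding.get("context") == "json-escaped":
        return True  # payload echoed as an escaped JSON string, never parsed as HTML
    return False
```

The point of keeping this stage regex-only is speed: it runs on every raw detection, so it must cost microseconds, not HTTP round-trips.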
### 2. Second-pass verifier
Each vulnerability class has an orthogonal verifier that uses a different signal than the original detector. Examples:
- SQLi: the original detector sees a SQL error message. The verifier fires a time-based payload (`' AND SLEEP(5)--`) and measures the response delta. If the delay isn't present, it was just a verbose error page.
- XSS: the original detector sees the payload reflected. The verifier renders the page in headless Chromium with a JS hook that fires on script execution. If no alert fires, the payload was string-inserted but never executed.
- SSRF: the original detector sees the target make an outbound request. The verifier uses an OOB-DNS callback server; if the DNS hit arrives, the finding is confirmed. If not, it's dropped.
- IDOR: the original detector sees a 200 on another user's ID. The verifier compares response bodies; if the responses are identical (no user-specific content), it's a cache artifact, not an IDOR.
If the verifier doesn't confirm, the finding is dropped.
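To make the "orthogonal signal" idea concrete, here is a minimal sketch of a time-based SQLi verifier along the lines of the first example. The payload encoding, threshold, and function names are assumptions for illustration, not product code:

```python
import time
import urllib.request

# Illustrative time-based SQLi verifier: the detector saw an error string;
# this re-test relies on a completely different signal (response timing).
SLEEP_SECONDS = 5
THRESHOLD = 4.0  # tolerate network jitter below the full SLEEP delay

def response_time(url: str) -> float:
    start = time.monotonic()
    urllib.request.urlopen(url, timeout=SLEEP_SECONDS + 10).read()
    return time.monotonic() - start

def confirms(baseline: float, delayed: float, threshold: float = THRESHOLD) -> bool:
    # Confirmed only if the injected SLEEP measurably delays the response.
    return (delayed - baseline) >= threshold

def verify_sqli(base_url: str, param: str) -> bool:
    baseline = response_time(f"{base_url}?{param}=1")
    delayed = response_time(
        f"{base_url}?{param}=1%27%20AND%20SLEEP({SLEEP_SECONDS})--%20-"
    )
    return confirms(baseline, delayed)
```

Comparing a delayed request against a baseline (rather than an absolute timeout) is what keeps a slow-but-honest server from being misread as injectable.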
### 3. AI filter (Pro+)

Claude reads the verified finding plus its evidence and judges exploitability in context. It can mark the finding as a false positive (dropping it), keep it as-is (the standard outcome), or raise or lower its severity.
## What "verified" means in the UI

Every finding has a `verified` boolean:

- `true`: the second-pass verifier re-confirmed the finding independently.
- `false`: the detector fired, but the verifier wasn't applicable or didn't confirm. The finding is still persisted (the signal is real enough to investigate) but lacks the higher-confidence badge.

Filter for `verified: true` in the findings list if you only want high-confidence items.
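If you consume findings outside the UI (an export, say), the same filter is a one-liner. The record shape below is a hypothetical illustration of the `verified` flag, not the actual export format:

```python
# Hypothetical exported findings; only the `verified` flag mirrors the UI.
findings = [
    {"id": 1, "type": "sqli", "severity": "HIGH", "verified": True},
    {"id": 2, "type": "xss", "severity": "LOW", "verified": False},
]

# Keep only findings the second-pass verifier re-confirmed.
high_confidence = [f for f in findings if f["verified"]]
```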
## What doesn't get filtered
The gate is tuned to favour precision. Things that survive even when they might be benign:
- Exploitable-in-theory but low impact: e.g. a reflected XSS on a logout endpoint. The probe confirms it; the rating is LOW.
- Missing headers: HSTS, CSP, or X-Frame-Options absent. There can't be a false positive here; the check is a header-presence test.
- Open ports and banners: INFO-level disclosures that aren't vulns on their own.
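The header check really is just a presence test, which is why it can't false-positive. A minimal sketch (the header list matches the bullet above; the function name is illustrative):

```python
# A missing-header "finding" is a pure presence test: the header
# is either in the response or it isn't, so there is nothing to verify.
SECURITY_HEADERS = (
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Frame-Options",
)

def missing_headers(response_headers: dict) -> list:
    present = {name.lower() for name in response_headers}
    return [h for h in SECURITY_HEADERS if h.lower() not in present]
```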
## What gets dropped
- SQL error pages without injection semantics (just a stack trace).
- XSS payload reflections in escaped JSON / quoted HTML attributes that aren't executable.
- Open-redirect probes on URLs that redirect anywhere based on `?next=`.
- Any finding whose "evidence" is empty or obviously templated.
## Tuning

Settings → Scans → Accuracy Gate strictness:
- Aggressive (default): drops anything the verifier doesn't confirm.
- Permissive: persists detector-only signals, clearly marked `unverified`. Use when debugging the scanner.
- Maximum (Pro+): requires both verifier confirmation and AI validation.
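The three levels can be thought of as a persistence predicate over the gate's two confirmation signals. A sketch with hypothetical field and level names:

```python
# Sketch of how the strictness levels might gate persistence;
# field names and level strings are assumptions, not product config.
def should_persist(finding: dict, strictness: str) -> bool:
    if strictness == "permissive":
        return True  # keep detector-only signals, marked unverified
    if strictness == "aggressive":
        return finding["verifier_confirmed"]
    if strictness == "maximum":
        return finding["verifier_confirmed"] and finding["ai_confirmed"]
    raise ValueError(f"unknown strictness: {strictness}")
```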
## Why this matters

Typical open-source scanners produce 70–90% false positives on complex apps: you spend Monday triaging Sunday's scan. Pentestas targets a false-positive rate under 10%; the Accuracy Gate plus the AI filter is how we get there.
## See also

- Severity scale
- Claude analysis: what the AI layer actually does