False-positive filter
A final check that uses the AI layer to cull findings that look suspicious on the wire but pose no real risk in practice.
Order of operations
- Tool detector fires → raw signal produced.
- Accuracy Gate → junk filter + orthogonal verifier. Drops detector-only false positives. See Validation.
- False-positive filter (this page) → only runs on findings that cleared step 2, and only on Pro+ plans.
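The three-step order above can be sketched as a simple pipeline. This is a hypothetical illustration only: the function names (detect, accuracy_gate, ai_fp_filter) and the finding fields are invented, not the product's actual API.

```python
# Hypothetical sketch of the three-step order of operations; all names are invented.

def detect(target):
    # Step 1: tool detector fires and produces raw signals.
    return [{"id": 1, "noise": True}, {"id": 2, "noise": False}]

def accuracy_gate(finding):
    # Step 2: junk filter + orthogonal verifier; drops detector-only false positives.
    return not finding["noise"]

def ai_fp_filter(findings):
    # Step 3 (this page): AI culls remaining false positives.
    return [f for f in findings if not f.get("ai_false_positive")]

def scan(target, plan="pro"):
    findings = detect(target)                            # 1. raw signal
    findings = [f for f in findings if accuracy_gate(f)] # 2. Accuracy Gate
    if plan in ("pro", "enterprise"):                    # 3. Pro+ plans only
        findings = ai_fp_filter(findings)
    return findings
```

The key point the sketch captures: step 3 only ever sees findings that already survived step 2.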
What Claude looks at
For each finding:
- The full request + response.
- The CVSS vector + CWE + OWASP category.
- The target's detected stack (e.g. "this is a Rails app with Devise auth").
- Rule-based description of the detection.
- Validation-step output (e.g. "time-based SQLi payload added 5.2s delay").
And decides:
- Real vulnerability → keep; possibly raise severity if it's worse than the scanner thought.
- False positive → drop, or flag ai_false_positive=true.
- Ambiguous → keep, do not flag; add a caveat to the narrative.
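The per-finding context and the three verdict outcomes can be pictured like this. All field names and both helpers are illustrative assumptions, not the product's schema:

```python
# Sketch of the per-finding inputs and verdict handling; field names are assumptions.

def build_context(finding, target):
    # Everything the AI sees for one finding.
    return {
        "http": finding["request_response"],         # full request + response
        "scoring": (finding["cvss"], finding["cwe"], finding["owasp"]),
        "stack": target["detected_stack"],           # e.g. "Rails app with Devise auth"
        "description": finding["rule_description"],  # rule-based detection description
        "validation": finding["validation_output"],  # e.g. "SQLi payload added 5.2s delay"
    }

def apply_verdict(finding, verdict, rationale, new_severity=None):
    if verdict == "real" and new_severity:
        finding["severity"] = new_severity           # may raise if worse than scored
    elif verdict == "false_positive":
        finding["ai_false_positive"] = True          # hidden by default in the UI
        finding["ai_rationale"] = rationale
    elif verdict == "ambiguous":                     # keep, unflagged, with a caveat
        finding.setdefault("caveats", []).append(rationale)
    return finding
```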
Examples of false positives it catches
- CORS wildcard on a public docs site. Detector flags Access-Control-Allow-Origin: *; verifier sees the app has no auth cookies; AI confirms it's a marketing site with no authenticated surface → drops.
- XSS reflection in a 404 handler that's rendered server-side as plain text with no HTML context. Detector sees reflection; verifier can't fire JS; AI confirms the response is text/plain → drops.
- Stored XSS in a dev-only endpoint visible only to internal users behind a corp SSO. Finding is correct on the wire, but the exposure is tiny; AI keeps it but lowers severity.
- "Missing HSTS" on an endpoint that only serves JSON and is never loaded in a browser. AI drops the finding as inapplicable.
What it does NOT do
- Drop CRITICAL findings without human review. CRITICAL findings may have severity adjusted but are never auto-dropped.
- Invent exculpatory context. The AI only uses evidence it can see: if the evidence says "exploited", it doesn't hand-wave that away.
- Train on your data. See Claude analysis.
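The CRITICAL-severity guarantee above is a hard invariant, sketched here with invented names (apply_fp_verdict and the review queue are assumptions for illustration):

```python
# Sketch of the CRITICAL guard: never auto-drop, only severity-adjust or queue
# for human review. Field names and the review queue are illustrative assumptions.

def apply_fp_verdict(finding, rationale, review_queue):
    if finding["severity"] == "CRITICAL":
        # CRITICAL findings are never auto-dropped; route to human review instead.
        review_queue.append((finding["id"], rationale))
        return finding
    finding["ai_false_positive"] = True
    finding["ai_rationale"] = rationale
    return finding
```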
Displayed in the UI
Findings flagged ai_false_positive=true are hidden by default. Toggle Show AI-filtered to see them with a strikethrough + rationale:
AI rationale: "Target is a static marketing site with no user input surface. Reflected parameter is a template variable resolved server-side and never reaches HTML context → no XSS possible."
If you disagree, click Disagree → the flag is cleared, the finding goes back into the main list, and an audit-logged override is recorded.
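The override behaviour amounts to two side effects: clear the flag and append an audit entry. A minimal sketch, assuming hypothetical field names and a list-backed audit log:

```python
# Sketch of the Disagree override; finding/audit field names are invented.
import datetime

def disagree(finding, user, audit_log):
    # Clear the AI flag so the finding returns to the main list.
    finding.pop("ai_false_positive", None)
    # Record an audit-logged override.
    audit_log.append({
        "action": "ai_fp_override_cleared",
        "finding_id": finding["id"],
        "user": user,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return finding
```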
Tuning
Settings → AI → False-positive filter has three options:
- Strict (default) → AI acts conservatively; drops only clear false positives.
- Aggressive → lowers the confidence bar for FP judgements. Useful when you trust the AI and want a lean findings list.
- Off → the filter is skipped entirely; all findings survive regardless of AI opinion.
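One way to think about the three modes is as confidence thresholds for dropping a finding. The threshold values below are invented for illustration, not the product's actual numbers:

```python
# Sketch: the three filter modes as FP-confidence thresholds (values are invented).

FP_THRESHOLD = {
    "strict": 0.95,      # default: drop only clear false positives
    "aggressive": 0.75,  # lower confidence bar -> leaner findings list
    "off": None,         # filter skipped entirely
}

def should_drop(fp_confidence, mode="strict"):
    threshold = FP_THRESHOLD[mode]
    if threshold is None:        # Off: every finding survives
        return False
    return fp_confidence >= threshold
```

The same borderline judgement (say, 0.90 confidence that a finding is a false positive) is dropped under Aggressive but kept under Strict.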
See also
- Validation → the upstream Accuracy Gate
- Claude analysis