Claude analysis
On Pro+ plans, every scan runs a post-processing pass with Claude (Anthropic's AI model). The goal: bridge the gap between raw scanner output and what a competent pentester would actually write in a report.
What Claude does
Given the raw findings, target context, and stack profile, Claude produces:
- Plain-English narrative: "Any authenticated user can read any other user's billing history by changing the `account_id` path parameter. This is a classic IDOR with no server-side authorization check."
- Business impact paragraph: "Customer support ticket histories include credit card last-4, home addresses, and internal notes marking certain customers as VIP. Public exposure of this data violates PCI DSS 3.4 and likely your Terms of Service."
- Remediation: a stack-specific, code-level fix.
- False-positive flag: if Claude concludes the finding is benign, it's either dropped or kept with `ai_false_positive=true`. See False-positive filter.
- Attack chain synthesis: re-reads all findings together to find non-obvious combinations. See Attack chain synthesis.
- Severity adjustment: raises or lowers severity by one level based on context.
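Taken together, the AI pass turns a raw finding into an enriched record. The sketch below shows one plausible shape; only `ai_false_positive` is a field name documented here, and the other names and values are illustrative:

```python
# Hypothetical shape of a finding before and after the Claude pass.
# Only ai_false_positive is a documented field; the rest are illustrative.
raw_finding = {
    "id": "f-1042",
    "check": "idor",
    "severity": "HIGH",
    "evidence": {"request": "GET /accounts/1337/billing", "response_status": 200},
}

enriched = {
    **raw_finding,
    "severity": "CRITICAL",  # adjusted by at most one level, per the rules above
    "ai_narrative": "Any authenticated user can read any other user's billing history...",
    "ai_business_impact": "Exposure of billing data likely violates PCI DSS 3.4...",
    "ai_remediation": "Add a server-side ownership check before loading the account record.",
    "ai_false_positive": False,  # True would drop the finding or flag it instead
}
```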
What Claude does NOT do
- Invent findings. Every claim Claude makes must reference an actual scanner-produced finding with evidence. The output is post-processing of real data, not hallucination.
- Probe your app. The AI layer runs after the scan is complete. It reads the persisted evidence (the request and response that the scanner captured) and reasons over that. No new traffic hits your app during AI analysis.
- Train on your data. Per our contract with Anthropic, customer data is not used for model training. Inputs are sent with the `no-training` flag.
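The ordering guarantee above (scan first, AI pass second, no new traffic) can be sketched as a simple post-processing loop. The function and client names here are illustrative, not the real Pentestas or Anthropic API:

```python
def ai_pass(findings, client):
    """Sketch of the post-scan AI pass: it only reads evidence the scanner
    already persisted and never issues new HTTP requests at the target."""
    enriched = []
    for f in findings:
        evidence = f["evidence"]           # stored request + response
        result = client.analyse(evidence)  # reasoning over persisted data only
        enriched.append({**f, **result})
    return enriched

class StubClient:
    # Stands in for the Claude call; a real client would also set the
    # no-training flag on every request, per the contract described above.
    def analyse(self, evidence):
        return {"ai_narrative": f"Observed {evidence['request']}"}

out = ai_pass(
    [{"id": "f-1", "evidence": {"request": "GET /accounts/1337/billing"}}],
    StubClient(),
)
```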
Bring-your-own-key (Pro+)
You can use your own Anthropic API key:
Settings → AI → Anthropic API key.
The key is encrypted with your tenant's Fernet key and only decrypted for the duration of the AI pass. Usage counts against your own Anthropic billing, not ours. Useful when:
- You have a negotiated enterprise rate with Anthropic.
- You require all AI traffic to use your own account for compliance.
- You want to hit higher rate limits than the default pool.
Without a BYOK, Pentestas uses a shared pool. Rate limits apply (typically 200 findings per scan can be AI-analysed; the rest fall back to rule-based narratives).
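As a rough illustration of the at-rest handling described above, here is what encrypt-then-decrypt with a per-tenant Fernet key looks like using the `cryptography` library. This is a minimal sketch, not the production code path; key storage and rotation are omitted:

```python
from cryptography.fernet import Fernet

# Per-tenant symmetric key, managed server-side.
tenant_fernet_key = Fernet.generate_key()
f = Fernet(tenant_fernet_key)

# What gets persisted when you save your Anthropic API key in Settings.
stored = f.encrypt(b"example-anthropic-key")

# Decrypted only for the duration of the AI pass.
anthropic_key = f.decrypt(stored)
```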
Which findings get analysed
- All CRITICAL + HIGH findings always.
- MEDIUM findings if the total finding count is under the per-tenant cap.
- LOW/INFO findings: skipped by default; they surface in the report with rule-based narratives only.
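The selection rules above amount to a small filter. A sketch, assuming the shared-pool default of 200 as the per-tenant cap (the function name is hypothetical):

```python
SEVERITY_ALWAYS = {"CRITICAL", "HIGH"}

def select_for_ai(findings, tenant_cap=200):
    """Sketch of the documented selection rules: CRITICAL/HIGH always go to
    Claude; MEDIUM only if the total finding count is under the per-tenant
    cap; LOW/INFO fall back to rule-based narratives."""
    selected = [f for f in findings if f["severity"] in SEVERITY_ALWAYS]
    if len(findings) < tenant_cap:
        selected += [f for f in findings if f["severity"] == "MEDIUM"]
    return selected

findings = [{"severity": s} for s in ("CRITICAL", "HIGH", "MEDIUM", "LOW", "INFO")]
picked = select_for_ai(findings)
```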
Turning it off
Settings → AI → Enable AI analysis → uncheck. All findings still flow through, just without the narrative, remediation, and chain synthesis. Rule-based output (template narratives, default remediation) is still produced.
See also
- Attack chain synthesis
- False-positive filter
- Plans and limits (AI analysis is Pro+)