Cerberus: AI pentesting that literally can't go out of scope
Cerberus launched today, billing itself as the world's first safe AI hacker. Most AI pentest tools try to constrain the agent with prompts and guardrails; Cerberus does it with math. Every action the AI takes has to produce a proof of authorization in the company's custom language, or it simply can't run.
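The proof-carrying idea can be sketched in miniature. Cerberus's proof language isn't public, so everything below (the `Scope`, `Proof`, and `execute` names, and the membership check standing in for a real verifier) is a hypothetical illustration of the pattern, not their API: the executor refuses any action that doesn't arrive with a proof that checks out against the engagement scope.

```python
# Toy sketch of proof-carrying authorization. All names are hypothetical;
# Cerberus's actual proof language and checker are not public.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Scope:
    """Engagement scope: the only hosts the agent may touch."""
    allowed_hosts: frozenset


@dataclass(frozen=True)
class Action:
    verb: str
    host: str


@dataclass(frozen=True)
class Proof:
    """A claim that `action` is authorized under `scope`."""
    action: Action
    scope: Scope

    def verify(self) -> bool:
        # The checker independently re-derives authorization,
        # so a forged or stale proof fails here.
        return self.action.host in self.scope.allowed_hosts


def execute(action: Action, proof: Optional[Proof]) -> str:
    """Run an action only when accompanied by a proof that checks out."""
    if proof is None or proof.action != action or not proof.verify():
        raise PermissionError(f"no valid proof for {action}")
    return f"ran {action.verb} against {action.host}"


scope = Scope(frozenset({"app.example.com"}))
ok = Action("scan", "app.example.com")
bad = Action("scan", "prod.example.com")

print(execute(ok, Proof(ok, scope)))   # in scope: runs
try:
    execute(bad, Proof(bad, scope))    # out of scope: proof fails, refused
except PermissionError as e:
    print(e)
```

The point of the structure is that enforcement lives in the checker, not in the agent's prompt: even a fully compromised agent can only emit actions, and actions without valid proofs never execute.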
You hand it an app, describe what's in scope in plain English, and wait 3-4 hours. Out comes a full pentest report. The team is ex-pentesters who also happen to do type theory research, and it shows: formal verification as the scope enforcement layer is a real departure from 'we'll add more guardrails'.
Pricing is aggressive: $999/year for individuals, $60K for on-prem with custom models. That's cheap for a tool that could replace a mid-tier pentest contract. If the verification holds up, every agent security vendor will need to re-answer the question of how they prove nothing bad happened.
Link: https://c7-security.com/cerberus