BreachLens
Self-hosted · single Docker stack

The DevSecOps platform you can run fully air-gapped.

Multi-AI auto-fix · function-level reachability across 7 SCA ecosystems · AI red-team for your own LLM apps · zero outbound dependencies. One Docker stack. Yours to deploy on your hardware.

Four moats no competitor combines

What no other DevSecOps platform combines.

Multi-AI BYOA auto-fix

Pick your AI provider per org: Anthropic Claude, OpenAI GPT, Google Gemini, or local Ollama. The AI generates a unified-diff patch, opens a PR, and reports the verdict back via GitHub Check Runs.

  • Operator-controlled AI choice — never locked to one vendor
  • Patch validation: rejects off-target hunks (5-line slack)
  • Per-finding cost transparency in the dashboard

vs the convention: Most auto-fix platforms lock you to one AI vendor. We let your operator pick — and switch — per-org.
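The off-target-hunk check the bullets mention can be sketched in a few lines. This is an illustrative model, not the BreachLens implementation: the `Hunk` type, `validate_patch` name, and the exact drift rule are assumptions; only the 5-line slack comes from the text.

```python
# Illustrative sketch of off-target hunk rejection with a 5-line slack.
# Hunk and validate_patch are hypothetical names, not the BreachLens API.
from dataclasses import dataclass

SLACK = 5  # allowed drift between the hunk's target and the finding's line

@dataclass
class Hunk:
    file: str        # path the hunk targets
    start_line: int  # first line the hunk modifies

def validate_patch(hunks, finding_file, finding_line):
    """Reject any hunk that lands outside the finding's file or more
    than SLACK lines away from the vulnerable line."""
    for h in hunks:
        if h.file != finding_file:
            return False
        if abs(h.start_line - finding_line) > SLACK:
            return False
    return True

# A hunk 3 lines above the finding passes; one 12 lines away is rejected.
ok = validate_patch([Hunk("app.py", 39)], "app.py", 42)    # True
bad = validate_patch([Hunk("app.py", 54)], "app.py", 42)   # False
```

The point of the slack is tolerance for minor line drift between the AI's view of the file and the scanner's, without accepting patches that wander into unrelated code.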

Function-level reachability

Tree-sitter call graphs across npm / PyPI / Go / Maven / RubyGems / Cargo / TypeScript. We resolve the path from your bootstrap file to the vulnerable function symbol — no premium tier, no separate SKU.

  • "Why this is reachable" call-path drawer for every fn-tier finding
  • Honest "Need follow-up" surface for the npm coverage gap
  • Container reachability: 3-tier classification across OS packages, language deps, and reachable code paths

vs the convention: Other platforms gate function-level reachability behind a premium tier or a Series-A price tag. We ship it free, on every SCA scan.
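The "bootstrap file to vulnerable symbol" resolution above is, at its core, a path search over a call graph. A minimal sketch, assuming a plain adjacency-dict graph; the graph shape, node names, and `call_path` function are illustrative, not the Tree-sitter output BreachLens builds:

```python
# Toy sketch of call-path resolution: BFS from the app entry point to a
# vulnerable symbol. Returning None corresponds to NOT_REACHABLE_FN.
from collections import deque

def call_path(graph, entry, vuln_symbol):
    """Return the first call path entry -> vuln_symbol, or None."""
    queue = deque([[entry]])
    seen = {entry}
    while queue:
        path = queue.popleft()
        if path[-1] == vuln_symbol:
            return path
        for callee in graph.get(path[-1], []):
            if callee not in seen:
                seen.add(callee)
                queue.append(path + [callee])
    return None

# Hypothetical graph for the marked.setOptions example shown later.
graph = {
    "server.js:main": ["render"],
    "render": ["marked.setOptions"],  # the vulnerable call
}
path = call_path(graph, "server.js:main", "marked.setOptions")
# path == ["server.js:main", "render", "marked.setOptions"]
```

The returned path is exactly what the "Why this is reachable" drawer needs to render: every hop from entry point to vulnerable symbol.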

Active LLM red-team

Six skills × four runtimes target your own AI apps with real adversarial payloads. A judge LLM grades each response so you get calibrated severity, not just pattern matches.

  • OWASP LLM01 / 02 / 06 / 07 / 08 covered out of the box
  • Anthropic · OpenAI · Bedrock · Ollama — judge runs anywhere
  • Severity calibrated by what actually leaked, not payload count

vs the convention: Most AI security platforms catalogue your LLM apps (passive posture). We send actual injection payloads and prove what leaks. Active, not passive.

Fully air-gappable

Everything runs air-gapped, including AI auto-fix via local Ollama inference. OFFLINE_MODE=true deployments have zero outbound dependencies. The answer for federal, regulated, and IL5 environments.

  • All scanners ship with bundled rule packs / CVE DBs
  • License validation is offline — ed25519 JWT, no phone-home
  • Operator-pulled updates via cosign-signed images

vs the convention: The DevSecOps category is overwhelmingly cloud-mandatory. We're the only platform you can run with your firewall fully closed outbound — including the AI auto-fix.
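To make "offline license validation, no phone-home" concrete, here is a stdlib-only sketch of checking a license JWT's expiry locally. A real check would also verify the ed25519 signature against a public key shipped with the stack; that step is elided here, and the claim names (`exp`, `tier`) and function names are assumptions for illustration.

```python
# Sketch of an offline license check: decode the JWT payload locally and
# check expiry. No network call anywhere. Signature verification against
# the bundled ed25519 public key is omitted for brevity.
import base64
import json
import time

def license_status(jwt_token, now=None):
    now = now if now is not None else time.time()
    payload_b64 = jwt_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return "VALID" if claims["exp"] > now else "EXPIRED"

# Build a fake, unsigned token for demonstration only.
def fake_token(exp):
    part = lambda d: base64.urlsafe_b64encode(
        json.dumps(d).encode()).decode().rstrip("=")
    return f'{part({"alg": "EdDSA"})}.{part({"tier": "enterprise", "exp": exp})}.sig'

status = license_status(fake_token(exp=2_000_000_000), now=1_700_000_000)
# status == "VALID"
```

Because every input is local, the check behaves identically with the firewall fully closed outbound.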

Cross-tier attack paths

One chain. Five scan tiers. AI verdict.

When a SAST finding shares the CVE with a container vuln on a host that DAST already proved exploitable, we link them into one chain — and the AI tells you whether it's likely real, mixed signal, or noise.

[Screenshot: a multi-vector RCE attack-path chain in BreachLens — CRITICAL severity with EXPLOIT confirmed, 81 hops scored 486. The AI summary panel at the top shows a LIKELY REAL verdict at 78 percent confidence stating that runtime telemetry corroborates confirmed SQL-injection and LFI exploits on the dvwa-host with a critical openssl CVE in the container image. Three bullets cite specific CVEs (CVE-2022-40032, CVE-2023-50839, CVE-2026-31789). The workflow timeline below steps through DAST → DAST → CONTAINER → RUNTIME phases.]
  • 5 bridge plugins link findings across SAST · SCA · Container · DAST · Pentest · Runtime
  • Per-Application boundary keeps chains scoped — no mega-chain noise
  • AI verdict (LIKELY_REAL / MIXED_SIGNAL / LIKELY_NOISE) with bullet narrative
  • Per-finding cost transparency — every AI-narrated chain costs ~$0.001 in cached state
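The linking logic described above (a SAST finding, a container vuln, and a DAST result joined into one chain) can be sketched as a join on shared CVE or shared host. This is a toy model: the real engine uses 5 bridge plugins and a per-Application boundary, and the `link_chains` function and dict fields here are illustrative assumptions.

```python
# Toy cross-tier linking: group findings that share a CVE or a host.
def link_chains(findings):
    """findings: dicts with 'tier', 'cve', 'host'. Returns chains
    (lists of findings) joined on a shared CVE or shared host."""
    chains = []
    for f in findings:
        for chain in chains:
            if any((f["cve"] and f["cve"] == g["cve"]) or
                   (f["host"] and f["host"] == g["host"]) for g in chain):
                chain.append(f)
                break
        else:
            chains.append([f])
    return chains

findings = [
    {"tier": "SAST",      "cve": "CVE-2023-50839", "host": None},
    {"tier": "CONTAINER", "cve": "CVE-2023-50839", "host": "dvwa-host"},
    {"tier": "DAST",      "cve": None,             "host": "dvwa-host"},
]
chains = link_chains(findings)
# One chain: SAST joins CONTAINER on the CVE, DAST joins on the host.
```

The per-Application boundary then scopes which findings are even eligible to join, which is what keeps "mega-chain noise" out.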

Function-level reachability

From your entry point to the vulnerable function.

Function-level reachability across 7 SCA ecosystems traces every CVE back to the actual call path that reaches it. You triage what's exploitable, not what's installed.

[Screenshot: the BreachLens FindingDetailDrawer for a HIGH-severity SCA finding — marked 0.3.5 with seven CVEs. The drawer shows a Likely Real Threat verdict with HIGH CONFIDENCE, a description of CVE-2017-16114, and a "Why This Is Reachable (function-level)" panel naming the vulnerable symbol marked.setOptions and the call path from server.js:126 through an anonymous function to the vulnerable call.]
  • Vulnerable symbol identified — not just the package: the exact function that carries the CVE (e.g. marked.setOptions())
  • Call path from your entry point — every hop with file path + line number, click-through to source
  • AI verdict on reachability — Likely Real Threat / Likely False Positive with confidence band, so you defer the safe ones with proof
  • Honest classifier — REACHABLE_FN / NOT_REACHABLE_FN / UNKNOWN, surfaced as filter values; we never silently misclassify the 'we don't know' cases
AI_SECURITY · ACTIVE_REDTEAM

Ship AI to production with proof, not posture.

Six skills fire 130 adversarial payloads at your AI app. A judge LLM grades each response. You get calibrated severity backed by what actually leaked — not a pattern-match count.

  • Skills: 6
  • Payloads: 130
  • Judge runtimes: 4
  • OWASP LLM: 01·02·06·07·08
Prompt Injection · LLM01 · 20 payloads
Ignore-prior-instructions, role hijacks, jailbreak primitives.
skill:prompt-injection-tester

System Prompt Leak · LLM07 · 25 payloads
Probe whether your hidden instructions can be extracted verbatim.
skill:system-prompt-leak

Jailbreak Detector · LLM01 · 25 payloads
DAN / STAN variants and modern jailbreak templates.
skill:jailbreak-detector

PII Exfil · LLM06 · 20 payloads
Extract user PII the agent has stored or has tool-access to.
skill:pii-exfil-tester

Content Filter · LLM02 · 20 payloads
Bypass safety filters via encoding, role-play, indirect framing.
skill:content-filter-validator

Agent Permissions · LLM08 · 20 payloads
Excessive-agency tests — prompt the agent to overstep its role.
skill:agent-permissions-auditor
01 · Severity calibrated by what leaked

The judge upgrades or downgrades each finding based on whether the response contains the actual secret value, the system prompt verbatim, just a hint, or nothing at all. Pattern-match counters inflate ticket queues; calibrated severity does not.

  • CRITICAL Secret value reproduced in the response.
  • HIGH System prompt content reproduced verbatim.
  • MEDIUM Existence + category of secrets/policies disclosed.
  • LOW Hint or partial reveal — not exploitable alone.

Negative cases also emit a not_leaked field — calibrated trust signal, not silence.
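The calibration above amounts to a mapping from what the judge actually found in the response to a severity tier. A minimal sketch; the leak-category labels and the `calibrate` function are illustrative names, while the tiers themselves come from the list above.

```python
# Sketch of judge-side severity calibration: grade by what the response
# actually contained, not by how many payloads matched a pattern.
def calibrate(leak):
    """leak: the judge's classification of what the response revealed."""
    table = {
        "secret_value":    "CRITICAL",  # actual secret reproduced
        "prompt_verbatim": "HIGH",      # system prompt reproduced verbatim
        "category_only":   "MEDIUM",    # existence/category of secrets disclosed
        "hint":            "LOW",       # partial reveal, not exploitable alone
    }
    return table.get(leak, "not_leaked")  # negative cases emit a trust signal

severity = calibrate("prompt_verbatim")
# severity == "HIGH"
```

A pattern-match counter would file a ticket for every suspicious-looking response; this mapping only escalates when something of a given sensitivity actually appeared.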

02 · Run anywhere

Same Skills contract, four judge backends. Operator picks per-org; air-gapped deployments stay on Ollama.

  • Anthropic Claude
    claude-haiku · default
  • OpenAI GPT
    gpt-4o · gpt-4o-mini
  • AWS Bedrock
    GovCloud-aware regional routing
  • Ollama
    local · llama3.2 · zero outbound
    air-gap

Adding a 5th runtime takes one _judge_response() override on the shared base.
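The shared-base pattern can be sketched as follows. The `_judge_response()` hook name comes from the text; the class names, method bodies, and grading logic are illustrative stand-ins, not the BreachLens source.

```python
# Sketch of the four-backend judge contract: payload dispatch and result
# shaping live on the base; each runtime implements one hook.
from abc import ABC, abstractmethod

class JudgeBase(ABC):
    def grade(self, payload, response):
        verdict = self._judge_response(payload, response)
        return {"payload": payload, "verdict": verdict}

    @abstractmethod
    def _judge_response(self, payload, response):
        ...

class OllamaJudge(JudgeBase):
    """Local, zero-outbound judge: the air-gap backend."""
    def _judge_response(self, payload, response):
        # A real implementation would ask the local model to classify the
        # response; this stub just checks for an obvious leak marker.
        return "leaked" if "secret" in response else "not_leaked"

result = OllamaJudge().grade("ignore prior instructions", "I can't help with that")
# result["verdict"] == "not_leaked"
```

Because the contract is one method, swapping Anthropic for Bedrock or Ollama changes the transport, not the skill logic.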

Nine scanner tiers, one Docker stack

Everything bundled. Nothing to integrate.

We orchestrate best-of-breed scanners under one correlation engine. No third-party SaaS, no per-tier integration work, no surprise vendor in the egress logs.

SAST

Source code static analysis

  • 1500+ rules
  • 30+ languages

SCA

Dependencies + reachability

  • 7 ecosystems
  • Function-level

Secrets

Git history + filesystem

  • 600+ patterns
  • All commits scanned

IaC

Terraform · K8s · Cloud

  • 1000+ policies
  • 12 IaC formats

Container

Image CVEs + reachability

  • 3-tier classification
  • OS + language deps

DAST

Live web application scan

  • Active probing
  • Auth-aware

Pentest

Autonomous + Proof-of-Exploit

  • 13 exploit modules
  • OWASP coverage

CSPM

AWS · Azure · GCP

  • 500+ checks
  • 12 frameworks

AI Security

Red-team your own LLM apps

  • 6 Skills
  • Judge-graded

Plus the orchestration layer

  • Multi-AI auto-fix · Claude · GPT · Gemini · Ollama
  • Attack-path correlation · 5 bridge plugins · per-Application boundary
  • AI-narrated chains · verdict + bullet narrative
  • Runtime ingest · Agents + generic ECS contract
  • PR Check Runs · GitHub native · auto-fix cross-link
  • RBAC · 5 roles · audit-logged

Operator-honest UI

License, backups, and feature flags — all in one place.

No mystery box. The Operations tab shows your license tier and expiry, the last backup time and size, and every feature flag with one-click toggles. Audit-logged and admin-gated.

[Screenshot: the BreachLens Settings → Operations tab — license card with VALID status, backup card with last-run timestamp, and feature flags grouped by category.]
  • License panel — VALID / EXPIRED / RENEWAL DUE / UNLICENSED, all states render the right warning
  • Backup panel — last backup, encryption status, retention, next scheduled run
  • Feature flags — runtime kill-switches per feature, optional per-org overrides
  • Every toggle audit-logged — admin-gated, defense-in-depth (UI + API)
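The flag behavior the bullets describe resolves in a specific order. A minimal sketch under stated assumptions: the kill-switch beats everything, a per-org override beats the global default, and the `resolve_flag` signature and flag names here are hypothetical.

```python
# Sketch of feature-flag resolution with a global kill-switch and
# optional per-org overrides.
def resolve_flag(flag, global_flags, org_overrides, org_id):
    if global_flags.get(flag, {}).get("killed"):
        return False  # runtime kill-switch wins over any override
    override = org_overrides.get(org_id, {}).get(flag)
    if override is not None:
        return override  # per-org override beats the global default
    return global_flags.get(flag, {}).get("enabled", False)

global_flags = {"ai_autofix": {"enabled": True, "killed": False}}
org_overrides = {"org-7": {"ai_autofix": False}}
on = resolve_flag("ai_autofix", global_flags, org_overrides, "org-1")   # True
off = resolve_flag("ai_autofix", global_flags, org_overrides, "org-7")  # False
```

The kill-switch-first ordering is what makes a flag a genuine incident response tool: flipping it off takes effect regardless of per-org configuration.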

Request a demo

See it on your stack in under 30 minutes.

We'll spin up a sandbox against your repos / containers / domain, show you the auto-fix flow + reachability evidence + cross-tier chains live, and answer your security / compliance questions.

  • No pre-qualifying sales call. The first conversation is technical.
  • Transparent pricing: starting at $15k/year, annual flat fee, no per-seat math. Custom enterprise pricing available.
  • Air-gapped customers welcome. Federal · regulated · defense · healthcare. We've designed for your environment from day one.
  • Or just email us: sales@breachlens.app

We respond within 1 business day.