
AI THAT FINDS SECURITY VULNERABILITIES

Hadrix is an AI-powered security scanner that audits your codebase for vulnerabilities.

Open source. Each scan runs locally on your own machine. No data stored by us.

Install & Run

Run your first scan in just a few minutes

1. Install
   npm install -g hadrix
2. Setup scanners
   hadrix setup
3. Set required environment variables
   export HADRIX_PROVIDER=openai
   export HADRIX_API_KEY=sk-...
   Supported providers: openai, anthropic
4. Run scan
   hadrix scan
   Optional: provide a path to scan a specific directory: hadrix scan path/to/repo

What we scan for

Top security categories Hadrix focuses on.

Injection
Untrusted input reaches dangerous interpreters (SQL/NoSQL/command).
Access Control
Missing or broken authorization — IDORs, role/tenant checks, privilege escalation.
Authentication
Weak or missing authentication/session handling — token validation, login flows, auth boundaries.
Secrets
Leaked API keys, tokens, and credentials in code or logs.
Logic Issues
Risky business logic paths that bypass safeguards or allow unintended actions.
Dependency Risks
Vulnerable or outdated dependencies.
Misconfigurations
Unsafe defaults and security settings — permissive CORS, headers, env/config mistakes.
Supabase (coming soon)
RLS misconfigurations, public RPCs, column-level privileges, and public storage buckets.

Output Preview

After running a scan, you get a concise summary plus a full findings block that can be pasted into your agent of choice for remediation.

hadrix scan
HADRIX SUMMARY
----------------
- Findings: 97 total (🔥 CRITICAL 0, 🚨 HIGH 80, ⚠️ MEDIUM 10, 🟢 LOW 7, ℹ️ INFO 0)
- Sources: 17 static, 80 llm
- Categories:
  1) 🔐 Access control (HIGH, 30)
  2) 🔎 Secrets (HIGH, 14)
  3) 🛡️ Configuration (HIGH, 13)
  4) 📦 Dependency risks (HIGH, 11)
  5) 💉 Injection (HIGH, 11)
  6) 🗝️ Authentication (HIGH, 8)
  7) 🧠 Logic issues (HIGH, 6)
- PRIORITY FIX ORDER (fastest risk reduction):
  P0: Fix missing server-side auth/authz on sensitive endpoints (admin/delete/list, webhooks, repo scanning)
  P1: Remove/lock down command execution surfaces (scan-repo/runShell) and validate all shell inputs
  P1: Stop returning/logging sensitive payloads and verbose internal errors to clients
  P2: Harden webhook trust (signature verification + replay protection)
  P2: Fix token/JWT handling (no weak defaults, proper verification)
  P3: Add rate limiting/lockout and sane pagination for bulk endpoints
  P3: Add security headers + tighten CORS

ALL FINDINGS
The following is a description of all 97 findings. Paste into LLM to begin fixing security issues.
Note: Some issues may not be fixable by your agent alone (for example, adding new RLS policies to Supabase tables).
HIGH 📦 STATIC #4 osv-scanner: GHSA-h25m-26qc-wcjf
  location: hadrix-react-supabase-app/package-lock.json:1
  GHSA-h25m-26qc-wcjf: Next.js HTTP request deserialization can lead to DoS when using insecure React Server Components in [email protected]
  evidence: Vulnerable package: [email protected] (npm)

HIGH 🧠 LLM #14 API response may expose internal error details to clients
  matched titles: "API response may expose internal error details to clients", "Excessive data exposure and verbose/internal error leakage across API responses, including PII in admin endpoints and failure details in create/list/get project flows.", "Potential verbose error exposure / unhandled error path in webhook endpoint"
  affected locations (3):
  - hadrix-react-supabase-app/backend/supabase/functions/create-project.ts:1
  - hadrix-react-supabase-app/backend/supabase/functions/webhook.ts:1
  The endpoint returns error?.message directly to the client, which can leak internal server/DB error details, stack traces, or sensitive operational information. This increases an attacker’s ability to enumerate failure modes, craft targeted attacks, or glean secrets. While success responses include limited fields, error payloads may reveal server-side structure and behavior not intended for public exposure.
  evidence: return new Response(JSON.stringify({ project: data ?? null, error: error?.message ?? null }), { | status: error ? 400 : 200, | headers: { ...corsHeaders(req.headers.get("origin") ?? ""), "content-type": "application/json" } | });
  remediation: Do not return raw error messages in API responses. Normalize errors to a safe DTO, and log the detailed error server-side. Return a generic error message (e.g., { error: 'request_failed' }) with appropriate HTTP status. Consider implementing a centralized error boundary/handler and ensure DTOs contain only intended fields.
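  As an illustrative sketch of that remediation (helper names are hypothetical, not part of Hadrix or the scanned app):

  ```typescript
  // Hypothetical helper: keep full error detail in server-side logs,
  // return only a stable, non-revealing code to the client.
  type SafeErrorBody = { error: string };

  function toSafeErrorBody(
    err: unknown,
    log: (msg: string) => void = console.error
  ): SafeErrorBody {
    // Full detail (message, stack) stays server-side only.
    log(`request failed: ${err instanceof Error ? err.stack ?? err.message : String(err)}`);
    // Client sees a generic code, never the raw DB/server error.
    return { error: "request_failed" };
  }

  // Usage in a handler (sketch):
  // if (error) {
  //   return new Response(JSON.stringify(toSafeErrorBody(error)), { status: 400 });
  // }
  ```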

HIGH 🛡️ LLM #15 API responses lack critical security headers (CSP, HSTS, X-Frame-Options, X-Content-Type-Options).
  matched titles: "API responses lack critical security headers (CSP, HSTS, X-Frame-Options, X-Content-Type-Options).", "API responses lack security headers (CSP, HSTS, X-Frame-Options, X-Content-Type-Options) on backend function.", "Backend endpoint lacks comprehensive security headers (CSP, HSTS, X-Frame-Options, X-Content-Type-Options) in responses."…
  affected locations (7):
  - hadrix-react-supabase-app/backend/supabase/functions/admin-delete-user.ts:1
  - hadrix-react-supabase-app/backend/supabase/functions/admin-list-users.ts:1
  - hadrix-react-supabase-app/backend/supabase/functions/create-project.ts:1
  - hadrix-react-supabase-app/backend/supabase/functions/get-project.ts:1
  - hadrix-react-supabase-app/backend/supabase/functions/webhook.ts:1
  The API responses include CORS-related headers and content-type, but there is no CSP, HSTS, X-Frame-Options, or X-Content-Type-Options headers, which increases exposure to clickjacking, MIME-type misinterpretation, and content security risks.
  evidence: return new Response(JSON.stringify({ project: rows[0] ?? null }), { | headers: { ...corsHeaders(req.headers.get("origin") ?? ""), "content-type": "application/json" } | });
  remediation: Set security headers globally (via middleware) to include at minimum: Content-Security-Policy, Strict-Transport-Security (HSTS), X-Frame-Options, and X-Content-Type-Options. Ensure all responses include these headers.
- Example: add headers like 'Content-Security-Policy': "default-src 'self'", 'Strict-Transport-Security': "max-age=31536000; includeSubDomains", 'X-Frame-Options': "DENY", 'X-Content-Type-Options': "nosniff".
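  A minimal sketch of a shared helper implementing that example (hypothetical helper, not part of Hadrix or the scanned app):

  ```typescript
  // Baseline security headers from the remediation example above.
  const SECURITY_HEADERS: Record<string, string> = {
    "Content-Security-Policy": "default-src 'self'",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
  };

  // Merge the baseline into a response's existing headers (CORS,
  // content-type, ...); the security baseline wins on collisions.
  function withSecurityHeaders(headers: Record<string, string>): Record<string, string> {
    return { ...headers, ...SECURITY_HEADERS };
  }
  ```

  Applying this in one place (e.g. a shared response wrapper) ensures every endpoint emits the same baseline instead of each handler setting headers ad hoc.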

<More findings... typically 50+ for a normal sized repo that hasn't been scanned yet>

CLI Options

Flags supported by the CLI.

hadrix scan [target]
Target defaults to the current directory when omitted.
  • -f, --format <format> Output format (text|json|core-json)
  • --json Shortcut for --format json
  • --skip-static Skip running static scanners
  • --power Power mode switches from the default lightweight models (gpt-5.1-codex-mini, claude-haiku-4-5) to more capable models (gpt-5.2-codex, claude-opus-4-5), giving more thorough results at higher cost. The default lightweight mode is optimal for frequent scans and CI/CD use cases.
  • --debug Enable debug logging

Additional Notes

  • V1 of Hadrix specifically targets JavaScript/TypeScript codebases, as these are the most common amongst vibe coders.
  • Cost: A typical scan of a normal-sized Next.js repo costs ~$1.00–5.00 in OpenAI API usage; in our testing, a normal-sized (~3MB) repo cost ~$2.00. Power mode costs roughly 6–7x the default, so use it sparingly as the cost adds up quickly. For CI/CD use cases, we encourage the default lightweight mode to keep costs down.
  • Running hadrix setup installs the required local scanners interactively. Scanners not already detectable on your PATH are installed to ~/.hadrix/tools by default.
  • The supported model providers at the moment are OpenAI and Anthropic.
    • Default OpenAI model: gpt-5.1-codex-mini.
    • Default Anthropic model: claude-haiku-4-5.
  • Typical scan time is 5–20 minutes, though this varies significantly with the size of the repo and your API rate limits.

Scan pipeline

An overview of how Hadrix scans your codebase

1. Static scan

Fast analysis of the codebase for known vulnerability patterns and dependency issues. Here we use existing open source static scanners:

  • OSV - used to scan for vulnerabilities in dependencies.
  • Gitleaks - used to scan for secrets and sensitive data in the codebase.
  • ESLint - used to scan for code quality issues and enforce coding standards.
2. Chunking & caching

Files are split into security‑focused chunks. Jelly builds a call graph (“who calls what”) for your JS/TS code, which we use to align chunks to real execution paths. We also annotate chunks with static scanner + deterministic signals (regex detectors), so we can prioritize the most security‑relevant areas. Chunking results are cached locally and reused when files are unchanged.
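One way the "reused when files are unchanged" part can work is keying the cache on file contents; a minimal sketch under that assumption (Hadrix's actual cache key is not specified):

```typescript
import { createHash } from "node:crypto";

// Hypothetical cache key: unchanged path + contents hash to the same key,
// so previously computed chunking results can be reused; any edit to the
// file produces a new key and forces re-chunking.
function chunkCacheKey(filePath: string, contents: string): string {
  return createHash("sha256")
    .update(filePath)
    .update("\0") // separator so path/content boundaries can't collide
    .update(contents)
    .digest("hex");
}
```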

3. Mandatory file checks

In this step, we look for high‑signal patterns (auth, routes/handlers, and dangerous sinks) and force‑include those files/chunks in the LLM input set. This is especially important for larger repos with more noise, where we need to focus the LLM on the highest‑risk areas of the code: authentication and middleware, API/route handlers, edge/Supabase functions, and files containing obvious sinks like exec/spawn, eval/new Function, raw SQL, webhooks, or dangerouslySetInnerHTML.
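As an illustration of what sink detection can look like (these regexes are a sketch for the sink families named above, not Hadrix's actual rules):

```typescript
// Illustrative patterns for the dangerous sinks listed in the text.
const SINK_PATTERNS: Record<string, RegExp> = {
  commandExec: /\b(?:child_process\.)?(?:exec|execSync|spawn|spawnSync)\s*\(/,
  dynamicEval: /\beval\s*\(|\bnew\s+Function\s*\(/,
  rawHtml: /dangerouslySetInnerHTML/,
  rawSql: /\.query\s*\(\s*[`'"]/,
};

// Returns the names of sink families present in a source file; any match
// would force-include the file in the LLM input set.
function hasDangerousSink(source: string): string[] {
  return Object.entries(SINK_PATTERNS)
    .filter(([, re]) => re.test(source))
    .map(([name]) => name);
}
```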

4. Per-chunk LLM understanding

Before diagnosing threats, we scan each chunk of code individually to understand what it does. We create a JSON object for each chunk containing important information about the chunk such as data inputs, data sinks, and most importantly, an array of signals which deterministically map to our catalog of threats.
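A hypothetical shape for that per-chunk JSON object (the real field names in Hadrix may differ):

```typescript
// Sketch of the per-chunk summary described above.
interface ChunkSummary {
  file: string;
  summary: string;   // what the chunk does
  inputs: string[];  // data inputs (e.g. request body, query params)
  sinks: string[];   // data sinks (e.g. DB insert, shell exec)
  signals: string[]; // deterministic keys into the threat catalog
}

// Example instance (values are illustrative):
const example: ChunkSummary = {
  file: "backend/supabase/functions/create-project.ts",
  summary: "Creates a project row from the request body",
  inputs: ["request JSON body"],
  sinks: ["supabase insert"],
  signals: ["unauthenticated-write", "verbose-error-response"],
};
```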

5. Threat catalog scanning

Depending on the signals detected during per-chunk scanning, we select a subset of threats from our catalog to scan for. For efficiency, related threats are batched together, and each batch is checked against the relevant chunk or set of chunks.
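The signal-driven selection can be sketched as follows (catalog entries and trigger names are hypothetical):

```typescript
// Sketch: each catalog threat declares the signals that trigger it.
const THREAT_CATALOG = [
  { id: "sqli-raw-query", triggers: ["raw-sql"] },
  { id: "command-injection", triggers: ["shell-exec"] },
  { id: "idor-missing-authz", triggers: ["unauthenticated-write", "route-handler"] },
];

// A chunk's signals pick the subset of threats worth scanning for.
function selectThreats(chunkSignals: string[]): string[] {
  return THREAT_CATALOG
    .filter((t) => t.triggers.some((sig) => chunkSignals.includes(sig)))
    .map((t) => t.id);
}
```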

6. Open scan pass

After the cataloged rules, we run an open scan to catch issues that don’t fit neatly into existing rules. This acts as a catch‑all for novel or edge‑case findings.

7. Composite pass

We run a repository‑level pass to identify chained or systemic vulnerabilities that only appear when multiple files or flows are considered together.

8. Dedupe & aggregation

We clean up all findings, remove duplicates, and combine everything into a single, ranked report.
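A minimal sketch of what dedupe-and-rank can look like (illustrative only, not Hadrix's actual logic):

```typescript
interface Finding {
  category: string;
  title: string;
  severity: "CRITICAL" | "HIGH" | "MEDIUM" | "LOW" | "INFO";
  location: string;
}

const SEVERITY_RANK = { CRITICAL: 0, HIGH: 1, MEDIUM: 2, LOW: 3, INFO: 4 } as const;

// Collapse findings sharing a category and title, merging their affected
// locations, then sort the survivors by severity for the final report.
function dedupe(findings: Finding[]) {
  const byKey = new Map<string, { finding: Finding; locations: string[] }>();
  for (const f of findings) {
    const key = `${f.category}::${f.title}`;
    const entry = byKey.get(key);
    if (entry) entry.locations.push(f.location);
    else byKey.set(key, { finding: f, locations: [f.location] });
  }
  return [...byKey.values()].sort(
    (a, b) => SEVERITY_RANK[a.finding.severity] - SEVERITY_RANK[b.finding.severity]
  );
}
```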