
Building a Semi-Autonomous Bug Bounty System with Claude Code

How I built a multi-agent bug bounty hunting system with evidence-gated progression, RAG-enhanced learning, and safety mechanisms that keep humans in the loop.

Chudi Nnorukam
Dec 28, 2025 · 4 min read


Reconnaissance takes forever. You spend hours on subdomain enumeration, tech stack fingerprinting, and endpoint discovery—only to find the same vulnerabilities you’ve tested before.

I built BugBountyBot to automate the tedious 80% while keeping humans in the loop for the decisions that matter.

The Problem with Manual Hunting

Traditional bug bounty hunting breaks down like this:

Phase            Time Spent   Value Added
Reconnaissance   40%          Low (repetitive)
Testing          30%          Medium (pattern-based)
Validation       15%          High (requires judgment)
Reporting        15%          High (requires clarity)

Most hunters spend 70% of their time on work that could be automated. The high-value phases—validation and reporting—get squeezed because you’re exhausted from the grind.

The Multi-Agent Architecture

BugBountyBot uses four specialized agents, each optimized for its phase:

Recon Agent

  • Subdomain enumeration
  • Tech stack fingerprinting
  • Endpoint discovery
  • Auth flow mapping

Testing Agent

  • IDOR testing
  • Auth bypass attempts
  • XSS payload generation
  • Known CVE scanning

Validator Agent

  • Automated PoC execution
  • Response diff analysis
  • False positive filtering
  • Confidence scoring

Reporter Agent

  • CVSS calculation
  • Platform-specific formatting
  • Evidence packaging
  • PoC code generation

Why Four Agents Instead of One?

A single agent trying to do everything suffers from context dilution. The prompt space needed for effective reconnaissance is completely different from the one needed for vulnerability testing.

Specialized agents can:

  • Use phase-specific prompts without compromise
  • Maintain focused context windows
  • Be tuned independently based on performance
  • Fail in isolation without breaking the pipeline
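
Here is a minimal sketch of how the four agents could be wired together. The Agent interface, the placeholder result types, and where the 0.85 filter sits are illustrative assumptions, not the production code:

interface Agent<In, Out> {
  run(input: In): Promise<Out>;
}

// Placeholder result types for illustration.
interface ReconResult { endpoints: string[]; techStack: string[]; }
interface CandidateFinding { vulnerability: string; confidence: number; evidence: string[]; }
interface Report { title: string; cvss: number; body: string; }

async function runHunt(
  recon: Agent<string, ReconResult>,                        // target -> attack surface
  testing: Agent<ReconResult, CandidateFinding[]>,          // surface -> candidate findings
  validator: Agent<CandidateFinding[], CandidateFinding[]>, // candidates -> scored findings
  reporter: Agent<CandidateFinding[], Report[]>,            // approved findings -> reports
  target: string,
): Promise<Report[]> {
  const surface = await recon.run(target);
  const candidates = await testing.run(surface);
  const scored = await validator.run(candidates);
  // Evidence gate: only high-confidence findings continue (human review sits here in practice).
  const gated = scored.filter((f) => f.confidence >= 0.85);
  return reporter.run(gated);
}

Because each agent only sees the typed output of the previous phase, a failure in one stage stays contained instead of corrupting the whole run.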

Evidence-Gated Progression

The biggest risk in automated hunting is false positives. Submit garbage, and your reputation tanks. Platforms flag your account. Programs stop accepting your reports.

BugBountyBot uses a 0.85 confidence threshold before any finding advances:

interface Finding {
  vulnerability: VulnerabilityType;
  evidence: Evidence[];
  confidence: number; // 0.0 - 1.0
  status: 'pending' | 'validated' | 'rejected';
}

function shouldAdvance(finding: Finding): boolean {
  // Only findings with 0.85+ confidence advance to human review
  return finding.confidence >= 0.85;
}

What Builds Confidence?

The Validator Agent runs multiple checks:

  1. PoC Execution - Does the exploit actually work?
  2. Response Diff Analysis - Is the behavior change meaningful?
  3. False Positive Signatures - Does this match known FP patterns?
  4. Evidence Hashing - Is the evidence reproducible?

Each check contributes to the confidence score. Only when all checks align does a finding hit the 0.85 threshold.
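
A minimal sketch of how those checks could roll up into one score. The weights and field names are my assumptions, chosen so that only a finding passing every check clears 0.85:

interface ValidationChecks {
  pocExecuted: boolean;           // 1. the PoC actually worked
  diffMeaningful: boolean;        // 2. the response diff shows a real behavior change
  matchesFpSignature: boolean;    // 3. matches a known false-positive signature
  evidenceReproducible: boolean;  // 4. evidence hash is stable across reruns
}

function scoreConfidence(checks: ValidationChecks): number {
  // A known false-positive pattern short-circuits the finding to zero.
  if (checks.matchesFpSignature) return 0;
  let score = 0;
  if (checks.pocExecuted) score += 0.4;
  if (checks.diffMeaningful) score += 0.3;
  if (checks.evidenceReproducible) score += 0.3;
  return score; // reaches 0.85 only when all remaining checks pass
}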

The RAG Database

SQLite stores everything the system learns:

-- Knowledge that improves over time
CREATE TABLE knowledge_base (
  pattern TEXT,           -- What worked
  context TEXT,           -- Where it worked
  success_rate REAL,      -- How often it works
  last_used TIMESTAMP
);

CREATE TABLE failure_patterns (
  approach TEXT,          -- What failed
  reason TEXT,            -- Why it failed
  program_id TEXT,        -- Program-specific context
  created_at TIMESTAMP
);

CREATE TABLE false_positive_signatures (
  signature TEXT,         -- What to avoid
  occurrences INTEGER,    -- How often we see it
  last_seen TIMESTAMP
);

Every hunt session adds knowledge:

  • Successful patterns get reinforced
  • Failures get logged with reasons
  • False positives become signatures to filter

After 50 hunts, the system knows which approaches work on which program types. It stops repeating mistakes that wasted your time six months ago.
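
As a sketch of how a session might write back to and read from those tables, here is one possible write path using better-sqlite3. The library choice and function names are my assumptions; the schema is the one above:

import Database from 'better-sqlite3';

const db = new Database('rag.db');

// Reinforce a pattern that worked in this session.
function recordSuccess(pattern: string, context: string, successRate: number): void {
  db.prepare(
    `INSERT INTO knowledge_base (pattern, context, success_rate, last_used)
     VALUES (?, ?, ?, CURRENT_TIMESTAMP)`
  ).run(pattern, context, successRate);
}

// Log an approach that failed so it is not retried blindly next time.
function recordFailure(approach: string, reason: string, programId: string): void {
  db.prepare(
    `INSERT INTO failure_patterns (approach, reason, program_id, created_at)
     VALUES (?, ?, ?, CURRENT_TIMESTAMP)`
  ).run(approach, reason, programId);
}

// Before testing, pull the patterns with the best track record for this context.
function topPatterns(context: string, limit = 5): { pattern: string; success_rate: number }[] {
  return db.prepare(
    `SELECT pattern, success_rate FROM knowledge_base
     WHERE context = ? ORDER BY success_rate DESC LIMIT ?`
  ).all(context, limit) as { pattern: string; success_rate: number }[];
}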

Safety Mechanisms

Automated hunting without safety is a fast path to bans. BugBountyBot includes:

Rate Limiting

Token bucket algorithm per target. Configurable burst size and refill rate. Automatic slowdown when approaching limits.
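
A minimal sketch of that token bucket, one per target. The burst size and refill rate below are illustrative defaults, not the shipped configuration:

class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private burstSize: number, private refillPerSecond: number) {
    this.tokens = burstSize;
  }

  tryConsume(): boolean {
    const elapsedSeconds = (Date.now() - this.lastRefill) / 1000;
    this.tokens = Math.min(this.burstSize, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = Date.now();
    if (this.tokens < 1) return false; // caller backs off instead of sending the request
    this.tokens -= 1;
    return true;
  }
}

// One bucket per target keeps a noisy host from consuming the whole budget.
const buckets = new Map<string, TokenBucket>();

function allowRequest(target: string): boolean {
  if (!buckets.has(target)) {
    buckets.set(target, new TokenBucket(10, 0.5)); // burst of 10, refill ~30 requests/minute
  }
  return buckets.get(target)!.tryConsume();
}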

Scope Validation

Every request validates against program scope before execution. Out-of-scope domains are hard-blocked, not just flagged with a warning.
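
A sketch of that hard block. The wildcard syntax in the scope list is an assumption about how program scope is stored:

// Out-of-scope hosts never leave the process: throw instead of warn.
function inScope(url: string, scope: string[]): boolean {
  const host = new URL(url).hostname;
  return scope.some((entry) =>
    entry.startsWith('*.')
      ? host.endsWith(entry.slice(1)) // "*.example.com" matches any subdomain
      : host === entry                // exact hosts must match exactly
  );
}

function assertInScope(url: string, scope: string[]): void {
  if (!inScope(url, scope)) {
    throw new Error(`Blocked out-of-scope request: ${url}`);
  }
}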

Ban Detection

Monitors for consecutive failures, response time changes, and error patterns that indicate blocking. Triggers automatic cooldown before you get banned.

interface SafetyConfig {
  maxRequestsPerMinute: number;
  burstSize: number;
  cooldownOnConsecutiveFailures: number;
  scopeValidation: 'strict' | 'permissive';
}
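
A sketch of how those signals could feed the cooldown decision, using the SafetyConfig above. The status codes and latency multiplier are assumptions:

class BanDetector {
  private consecutiveFailures = 0;
  private baselineLatencyMs: number | null = null;

  constructor(private config: SafetyConfig) {}

  // Call after every response; true means the target should go on cooldown.
  shouldCooldown(status: number, latencyMs: number): boolean {
    if (this.baselineLatencyMs === null) this.baselineLatencyMs = latencyMs;

    const looksBlocked = status === 403 || status === 429 || status >= 500;
    this.consecutiveFailures = looksBlocked ? this.consecutiveFailures + 1 : 0;

    const latencySpike = latencyMs > this.baselineLatencyMs * 3; // response times climbing sharply
    return this.consecutiveFailures >= this.config.cooldownOnConsecutiveFailures || latencySpike;
  }
}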

Human-in-the-Loop

Every bug bounty platform requires human oversight for submissions. This isn’t a limitation to work around—it’s a feature to design for.

BugBountyBot’s workflow:

  1. Automated phases (Recon → Testing → Validation) run without intervention
  2. 0.85+ findings queue for human review with full evidence
  3. Human approves specific findings for submission
  4. Reporter Agent formats and submits approved findings

You spend your time reviewing validated findings with evidence, not grinding through reconnaissance. The ratio flips: 20% of your time on tedious work, 80% on high-value decisions.
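
A sketch of that handoff from review queue to Reporter Agent, reusing the Finding interface from earlier. The ReviewItem shape and id field are illustrative:

interface ReviewItem {
  id: string;
  finding: Finding;           // already validated at 0.85+ confidence
  evidencePackage: string[];  // everything the reviewer needs to decide
}

// Only findings a human explicitly approved are handed to the Reporter Agent.
function approvedFindings(queue: ReviewItem[], approvedIds: Set<string>): Finding[] {
  return queue
    .filter((item) => approvedIds.has(item.id))
    .map((item) => item.finding);
}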

Checkpoint System

Hunt sessions can span days or weeks. The checkpoint system saves state:

interface Checkpoint {
  sessionId: string;
  phase: 'recon' | 'testing' | 'validation' | 'reporting';
  progress: PhaseProgress;
  findings: Finding[];
  timestamp: Date;
}

Resume any session exactly where you left off. No lost context, no repeated work.
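
A minimal file-backed version of save and resume, reusing the Checkpoint interface above. The real checkpoints.ts lives under /database, so the actual implementation presumably persists to SQLite; the JSON files and path convention here are assumptions for illustration:

import { existsSync, mkdirSync, readFileSync, writeFileSync } from 'node:fs';

function saveCheckpoint(cp: Checkpoint): void {
  mkdirSync('checkpoints', { recursive: true });
  writeFileSync(`checkpoints/${cp.sessionId}.json`, JSON.stringify(cp, null, 2));
}

function resumeSession(sessionId: string): Checkpoint | null {
  const path = `checkpoints/${sessionId}.json`;
  if (!existsSync(path)) return null;
  const cp = JSON.parse(readFileSync(path, 'utf8')) as Checkpoint;
  cp.timestamp = new Date(cp.timestamp); // Date fields deserialize as strings
  return cp;
}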

Results

After building and running BugBountyBot:

Metric                  Before     After
Time on recon           4+ hours   30 mins (review)
False positive rate     ~30%       Under 5%
Findings per session    2-3        8-12 (validated)
Time to first finding   2 days     4 hours

The system doesn’t replace skill—it multiplies it. Your expertise in validation and reporting gets applied to 4x more findings.

Getting Started

BugBountyBot is built with TypeScript, SQLite, and Claude Code integration. The core architecture:

/src
  /agents
    recon.ts          # Passive enumeration
    testing.ts        # Vulnerability detection
    validator.ts      # PoC verification
    reporter.ts       # Report generation
  /database
    rag.ts            # Knowledge storage
    checkpoints.ts    # Session persistence
  /safety
    rate-limit.ts     # Request throttling
    scope.ts          # Scope validation
    ban-detect.ts     # Blocking detection

Start with a single program. Let the RAG database learn. Expand scope as confidence grows.

What’s Next

BugBountyBot v2.0 is in development with methodology-driven hunting:

  • 6-8 week structured hunt phases
  • Feature mapping before testing
  • Scope change monitoring
  • JavaScript file change detection

It's the shift from "run and hope" to a systematic, elite-hunter methodology.



Written by Chudi Nnorukam

I design and deploy agent-based AI automation systems that eliminate manual workflows, scale content, and power recursive learning. Specializing in micro-SaaS tools, content automation, and high-performance web applications.

Related: Why Human-in-the-Loop Beats Full Automation | Portfolio: BugBountyBot

FAQ

What is semi-autonomous bug bounty hunting?

A system where AI handles the tedious phases (reconnaissance, initial testing, validation) while humans make final submission decisions. It automates 80% of the work while maintaining platform compliance.

How does evidence-gating prevent false positives?

Every finding must reach 0.85+ confidence through automated PoC verification, response diff analysis, and false positive signature checking before advancing to human review. Below-threshold findings are logged for learning, not submitted.

Why not fully automate submissions?

Bug bounty platforms like HackerOne explicitly require human oversight for submissions. Automated false positives damage your reputation and can get you banned. The liability also shifts unpredictably with full automation.

What makes multi-agent better than single-agent?

Specialized agents excel at their phase. A Recon agent optimized for passive enumeration is different from a Testing agent tuned for vulnerability detection. Single agents try to do everything and do nothing well.

How does the RAG database improve over time?

It stores successful exploitation patterns, failed approaches, false positive signatures, and per-program context. Each hunt session adds knowledge, so the system gets smarter about what works on specific targets.
