How I Built Unified Bug Bounty Scanning Across HackerOne, Intigriti, and Bugcrowd


How I built unified integration for HackerOne, Intigriti, and Bugcrowd with platform-specific formatters and a shared findings model. Part 4 of 5.

Chudi Nnorukam
Dec 20, 2025 · Updated Feb 16, 2026 · 8 min read

I found the same vulnerability on three different programs. Same IDOR pattern, same impact, same proof-of-concept.

Wrote three completely different reports. HackerOne wanted structured sections with their severity dropdown. Intigriti expected different field names and inline severity justification. Bugcrowd had its own template that matched neither.

That specific tedium—of reformatting the same finding three times—is exactly what automation should eliminate.

Multi-platform bug bounty integration requires a unified internal findings model that transforms to platform-specific formats at submission time. Store vulnerabilities once in a canonical structure with all possible fields. When submitting, platform formatters extract relevant data and restructure it for HackerOne, Intigriti, or Bugcrowd’s expected format. One truth, three presentations.


Why Not Just Use Each Platform’s API Directly?

Using each platform’s API directly forces every layer of your code to know which platform it’s talking to. Testing agents, validation logic, and storage schemas all accumulate platform-specific branches. Complexity compounds quickly with each new platform added. A unified internal model keeps platform awareness isolated to two boundaries: ingestion and submission.

Direct API integration seems simpler at first:

// The naive approach
if (platform === 'hackerone') {
  await hackeroneAPI.submitReport(finding);
} else if (platform === 'intigriti') {
  await intigritiAPI.submitReport(finding);
} else if (platform === 'bugcrowd') {
  await bugcrowdAPI.submitReport(finding);
}

But then every piece of code needs platform awareness. Testing agents need to know which platform. Validation needs platform context. Storage needs platform-specific schemas.

The complexity explodes.

Instead, I built a unified findings model at the core. Every agent works with this model. Platform awareness only exists at two boundaries:

  1. Ingestion: When pulling program scope from platforms
  2. Submission: When sending reports to platforms

Everything between is platform-agnostic.

In part 1, I described the 4-tier agent architecture. The Reporter Agent handles submission—it’s the only agent that knows about platform differences.


What Does the Unified Findings Model Look Like?

The unified findings model is a TypeScript interface containing every field any platform might require: title, description, CVSS vector, severity rating, proof-of-concept steps, screenshots, and HTTP request logs. Not every platform uses every field, but all fields are available so formatters can pull exactly what they need at submission time.

A finding has everything any platform might need:

interface Finding {
  // Core identification
  id: string;
  sessionId: string;
  targetAssetId: string;

  // Vulnerability details
  title: string;
  description: string;          // Markdown supported
  vulnerabilityType: VulnType;  // XSS, IDOR, SQLi, etc.

  // Severity
  cvssVector: string;           // Full CVSS v3.1 vector
  cvssScore: number;            // Calculated from vector
  severity: 'critical' | 'high' | 'medium' | 'low' | 'informational';

  // Proof
  poc: {
    steps: string[];            // Reproduction steps
    curl?: string;              // Raw curl command
    script?: string;            // Python/JS script
  };

  // Evidence
  evidence: {
    screenshots: string[];      // File paths or base64
    requestResponse: string[];  // HTTP exchanges
    hashes: string[];           // SHA-256 for authenticity
  };

  // Metadata
  confidence: number;           // 0.0 - 1.0
  status: FindingStatus;        // new, validating, reviewed, submitted
  createdAt: Date;
  platform?: string;            // Set at submission time
  externalId?: string;          // Platform's report ID after submission
}

This model captures everything. Not every field is used by every platform—but all fields are available for any platform that needs them.

[!NOTE] The CVSS vector is stored as a string (e.g., CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N). The score is calculated from this vector. Storing both allows quick sorting by score while preserving the detailed metric breakdown for triage teams.
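Storing the vector also means individual metrics can be re-derived on demand. A small parser sketch (my own helper, not part of the system):

```typescript
// Hypothetical helper: turn a stored CVSS v3.1 vector string back into a
// metric map, e.g. for re-validating a cached score at review time.
function parseCvssVector(vector: string): Record<string, string> {
  const [prefix, ...parts] = vector.split('/');
  if (prefix !== 'CVSS:3.1') throw new Error(`Unsupported vector: ${prefix}`);
  const metrics: Record<string, string> = {};
  for (const part of parts) {
    const [key, value] = part.split(':'); // "AV:N" -> metrics.AV = "N"
    metrics[key] = value;
  }
  return metrics;
}
```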


How Do Platform-Specific Formatters Work?

Platform-specific formatters implement a common interface with three methods: format, validate, and submit. Each formatter takes a unified Finding object and transforms it into the structure that platform expects. HackerOne needs weakness taxonomy IDs, Intigriti uses different field names, and Bugcrowd requires bounty table entries mapped from your severity rating.

Each platform has a formatter that transforms the unified model:

  1. HackerOne Formatter: Extracts title, description, severity. Maps vulnerability type to HackerOne's weakness taxonomy. Generates their specific JSON structure with `weakness_id`, `severity_rating`, and `impact` fields.
  2. Intigriti Formatter: Different field names: `vulnerability_type` instead of `weakness_id`. Severity uses their specific scale. Evidence uploads are handled differently, and inline severity justification is required.
  3. Bugcrowd Formatter: Bounty table awareness--maps severity to their payout tiers. Unique submission structure. Different file attachment handling.

// Simplified formatter pattern
interface PlatformFormatter {
  format(finding: Finding): PlatformReport;
  validate(report: PlatformReport): ValidationResult;
  submit(report: PlatformReport): Promise<SubmissionResult>;
}

class HackerOneFormatter implements PlatformFormatter {
  format(finding: Finding): HackerOneReport {
    return {
      data: {
        type: 'report',
        attributes: {
          title: finding.title,
          vulnerability_information: finding.description,
          severity_rating: this.mapSeverity(finding.severity),
          weakness_id: this.mapVulnType(finding.vulnerabilityType),
          impact: this.generateImpactStatement(finding),
          // ... platform-specific fields
        }
      }
    };
  }
}

I originally had one giant switch statement for formatting. I told myself "just add another case" was sustainable. By platform #3, the function was 400 lines. Separate formatters saved my sanity.
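The switch collapses into a registry keyed by platform name. A runnable sketch (the two non-HackerOne formatters are stand-ins here; only HackerOneFormatter appears in full above):

```typescript
// Registry pattern replacing the 400-line switch: adding a platform means
// one new entry, not a new branch. Formatter bodies below are illustrative stubs.
interface PlatformReport { [key: string]: unknown }
interface Formatter { format(finding: { title: string }): PlatformReport }

const registry = new Map<string, Formatter>([
  ['hackerone', { format: (f) => ({ data: { attributes: { title: f.title } } }) }],
  ['intigriti', { format: (f) => ({ title: f.title }) }],
  ['bugcrowd', { format: (f) => ({ submission: { caption: f.title } }) }],
]);

function formatFor(platform: string, finding: { title: string }): PlatformReport {
  const formatter = registry.get(platform);
  if (!formatter) throw new Error(`No formatter for ${platform}`);
  return formatter.format(finding);
}
```

Unknown platforms fail loudly at the boundary instead of silently falling through a switch.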


How Does the Budget Manager Prevent Rate Limiting?

The Budget Manager tracks read and write API quotas for each platform separately. Before every API call, agents check canRead() or canWrite(). If the budget for that platform is exhausted, the request waits in a queue until the quota window resets. This prevents rate limit errors across HackerOne, Intigriti, and Bugcrowd simultaneously.

Each platform has different API limits:

  • HackerOne: X requests per minute
  • Intigriti: Different limits, different reset windows
  • Bugcrowd: Yet another set of constraints

The Budget Manager tracks all of them:

class BudgetManager {
  private budgets: Map<string, PlatformBudget>;

  canRead(platform: string): boolean {
    return this.budget(platform).read.remaining > 0;
  }

  canWrite(platform: string): boolean {
    return this.budget(platform).write.remaining > 0;
  }

  consumeRead(platform: string): void {
    this.budget(platform).read.remaining--;
    this.scheduleRefill(platform, 'read');
  }

  async waitForBudget(platform: string, type: 'read' | 'write'): Promise<void> {
    // Poll until this platform's quota for the operation refills
    while (!(type === 'read' ? this.canRead(platform) : this.canWrite(platform))) {
      await sleep(1000);
    }
  }

  private budget(platform: string): PlatformBudget {
    const budget = this.budgets.get(platform);
    if (!budget) throw new Error(`Unknown platform: ${platform}`);
    return budget;
  }
}

Before any API call, agents check with the Budget Manager:

async function fetchProgramScope(platform: string, programId: string) {
  await budgetManager.waitForBudget(platform, 'read');
  budgetManager.consumeRead(platform);

  return await platformAPI.getProgram(programId);
}

This connects to failure-driven learning in part 3. When rate limits hit despite budget management, the failure detector adjusts budget estimates downward.
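The scheduleRefill call isn't shown above. One plausible implementation treats each quota as a fixed window that refills once it elapses; a sketch with an injected clock so it's testable (semantics are my assumption, not the system's actual code):

```typescript
// Fixed-window refill sketch (assumed semantics for scheduleRefill).
// The bucket resets to its full limit once the platform's window elapses.
interface Bucket {
  limit: number;      // max requests per window
  remaining: number;  // requests left in the current window
  windowMs: number;   // quota window length
  resetAt: number;    // timestamp when the window elapses
}

// Returns true if a request may proceed; false means the caller must wait.
function consume(bucket: Bucket, now: number): boolean {
  if (now >= bucket.resetAt) {
    bucket.remaining = bucket.limit;        // window elapsed: refill
    bucket.resetAt = now + bucket.windowMs;
  }
  if (bucket.remaining <= 0) return false;
  bucket.remaining--;
  return true;
}
```

Passing the clock in (rather than calling Date.now() inside) keeps the logic deterministic under test.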


What Is CVSS v3.1 and Why Calculate It Myself?

CVSS v3.1 is the industry-standard formula for scoring vulnerability severity using eight metrics: Attack Vector, Attack Complexity, Privileges Required, User Interaction, Scope, Confidentiality, Integrity, and Availability. Calculating it yourself rather than relying on platform defaults ensures consistent scoring across programs and gives triage teams a verifiable breakdown they can validate independently.

CVSS (Common Vulnerability Scoring System) is the industry standard for severity. Version 3.1 uses 8 metrics:

| Metric | What It Measures |
| --- | --- |
| Attack Vector (AV) | Network, Adjacent, Local, Physical |
| Attack Complexity (AC) | Low, High |
| Privileges Required (PR) | None, Low, High |
| User Interaction (UI) | None, Required |
| Scope (S) | Unchanged, Changed |
| Confidentiality (C) | None, Low, High |
| Integrity (I) | None, Low, High |
| Availability (A) | None, Low, High |

I calculate CVSS myself rather than trusting platform defaults because:

  1. Consistency: Same vulnerability scored the same across platforms
  2. Credibility: Detailed CVSS breakdown shows I understand the impact
  3. Accuracy: Platform auto-scoring often uses simplified heuristics

function calculateCVSS(metrics: CVSSMetrics): { score: number; vector: string } {
  // CVSS v3.1 base score: impact sub-score, then scope-dependent impact
  const iss = 1 - ((1 - metrics.C) * (1 - metrics.I) * (1 - metrics.A));
  const impact = metrics.S === 'unchanged'
    ? 6.42 * iss
    : 7.52 * (iss - 0.029) - 3.25 * Math.pow(iss - 0.02, 15);

  const exploitability = 8.22 * metrics.AV * metrics.AC * metrics.PR * metrics.UI;

  // Zero impact means a zero score; changed scope applies a 1.08 multiplier
  const score = impact <= 0
    ? 0
    : metrics.S === 'unchanged'
      ? roundUp(Math.min(impact + exploitability, 10))
      : roundUp(Math.min(1.08 * (impact + exploitability), 10));

  return { score, vector: buildVectorString(metrics) };
}

[!TIP] Always include the CVSS vector string in reports, not just the score. Triage teams can verify your scoring methodology. “9.8 Critical” is less convincing than “CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H”, which they can validate independently.
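To make that concrete, here is the example vector from the earlier note (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N) scored by hand. The metric weights are the published CVSS v3.1 constants; the code is a standalone check, not the system's implementation:

```typescript
// Scoring CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N step by step.
// Weights come from the CVSS v3.1 specification tables.
const AV = 0.85, AC = 0.77, PR = 0.85, UI = 0.85; // Network / Low / None / None
const C = 0.56, I = 0.56, A = 0.0;                // High / High / None
const iss = 1 - (1 - C) * (1 - I) * (1 - A);      // 0.8064
const impact = 6.42 * iss;                        // ~5.18 (scope unchanged)
const exploitability = 8.22 * AV * AC * PR * UI;  // ~3.89
// CVSS "round up": smallest one-decimal value >= the input (epsilon guards float noise)
const roundUp = (n: number) => Math.ceil(n * 10 - 1e-9) / 10;
const score = roundUp(Math.min(impact + exploitability, 10));
console.log(score); // 9.1
```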


How Does First-Mover Priority Work?

First-mover priority works by monitoring all three platforms for programs that launched within the last 24 hours. Newly detected programs get immediate passive reconnaissance — subdomain enumeration and endpoint discovery — followed by a 2-4 hour delay for scope clarification, then active testing. A freshness score decreases over time, so newer programs always rank higher.

New programs are gold. Less competition. More low-hanging fruit. Higher acceptance rates for initial reports.

My system detects new programs and prioritizes them:

async function checkNewPrograms(): Promise<NewProgram[]> {
  const newPrograms = [];

  for (const platform of ['hackerone', 'intigriti', 'bugcrowd']) {
    const recent = await platformAPI.getRecentPrograms(platform, { hours: 24 });
    const unknown = recent.filter(p => !db.hasProgram(p.id));
    newPrograms.push(...unknown);
  }

  return newPrograms.sort((a, b) => b.freshness - a.freshness);
}

When new programs are detected:

  1. Immediate passive recon: Subdomain enumeration, technology fingerprinting, endpoint discovery. No active testing yet--just understanding the scope.
  2. Scope clarification delay: Wait 2-4 hours. New programs often update scope in the first day. Testing prematurely risks out-of-scope violations.
  3. Active testing begins: After scope stabilizes, Testing Agents engage. Focus on common vulnerability types (IDOR, XSS) that newer programs often have--many of which appear in the [OWASP Top Ten](https://owasp.org/www-project-top-ten/).
  4. Report queuing: Findings queue for human review with a "first-mover" flag. Priority submission to capture bounties before competition.

The freshness score decreases over time. A program launched 1 hour ago gets higher priority than one launched 20 hours ago.
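That decay can be as simple as a linear falloff across the 24-hour detection window. A hypothetical scoring function (the system's actual formula isn't shown in this post):

```typescript
// Linear freshness decay over a 24-hour window (illustrative, not the real scorer).
// 1.0 at launch, 0.0 once the program is a full day old.
function freshness(hoursSinceLaunch: number, windowHours = 24): number {
  return Math.max(0, 1 - hoursSinceLaunch / windowHours);
}
```

With this shape, a program launched 1 hour ago always outranks one launched 20 hours ago, matching the prioritization described above.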


What Are the Platform Authentication Differences?

HackerOne uses HTTP Basic Auth with a username and API token encoded in Base64. Intigriti uses Bearer tokens with an additional API key header. Bugcrowd uses a Token scheme with a custom content type. A credential manager handles storage and automatic refresh for each platform so agents never touch raw secrets directly.

Each platform authenticates differently:

HackerOne: Basic auth with username + API token

const credentials = btoa(username + ':' + apiToken);
const headers = {
  'Authorization': 'Basic ' + credentials,
  'Content-Type': 'application/json'
};

Intigriti: Different OAuth-style flow with refresh tokens

const headers = {
  'Authorization': 'Bearer ' + accessToken,
  'X-API-Key': apiKey
};

Bugcrowd: Yet another structure with API key in header

const headers = {
  'Authorization': 'Token ' + token,
  'Content-Type': 'application/vnd.bugcrowd+json'
};

The credential manager stores these separately and handles refresh for each:

class CredentialManager {
  async getCredentials(platform: string): Promise<Credentials> {
    const creds = await this.loadFromSecureStorage(platform);

    if (this.needsRefresh(creds)) {
      return await this.refresh(platform, creds);
    }

    return creds;
  }
}

This connects to auth error recovery in part 3. When auth fails, the system attempts credential refresh before escalating.
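A needsRefresh check typically compares token expiry against the clock with a safety margin, so tokens refresh before they actually lapse mid-request. A sketch (the field names and margin are my assumptions):

```typescript
// Expiry check with a safety margin; refresh a minute early rather than risk
// a 401 mid-submission. `expiresAtMs` is an assumed field name.
function needsRefresh(expiresAtMs: number, nowMs: number, marginMs = 60_000): boolean {
  return nowMs >= expiresAtMs - marginMs;
}
```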


How Does the Unified Model Handle Platform-Specific Fields?

The unified model includes an optional platformMetadata field that holds platform-specific data alongside the standard fields. Each formatter checks this namespace first and falls back to deriving values from standard fields when platform-specific data is absent. This keeps the core model clean while still supporting every unique field each platform requires.

Some platforms have unique requirements not covered by the base model.

Solution: extensible metadata

interface Finding {
  // ... standard fields ...

  platformMetadata?: {
    hackerone?: {
      weakness_id?: string;      // HackerOne's weakness taxonomy ID
      structured_scope_id?: string;
    };
    intigriti?: {
      submission_type?: string;  // Intigriti-specific field
    };
    bugcrowd?: {
      bounty_table_entry?: string; // Bugcrowd payout tier
    };
  };
}

Formatters check for platform-specific metadata and use it if present. Otherwise, they derive the needed values from standard fields.
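Inside a formatter, that check-then-derive pattern is one optional chain with a fallback. A sketch (the weakness IDs here are illustrative, not HackerOne's real taxonomy values):

```typescript
// Explicit platform metadata wins; standard fields are the fallback.
interface FindingLike {
  vulnerabilityType: string;
  platformMetadata?: { hackerone?: { weakness_id?: string } };
}

// Illustrative default mapping from vulnerability type to a weakness ID.
const DEFAULT_WEAKNESS_IDS: Record<string, string> = { IDOR: '639', XSS: '79' };

function resolveWeaknessId(finding: FindingLike): string {
  return finding.platformMetadata?.hackerone?.weakness_id
    ?? DEFAULT_WEAKNESS_IDS[finding.vulnerabilityType]
    ?? 'other';
}
```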


What’s the Report Submission Flow?

A validated finding moves from the human review queue to a platform formatter, which transforms it into the required structure. The Budget Manager confirms API availability, then the platform API call is made and the external report ID is captured. The finding’s status updates to submitted. Every submission requires human approval before the formatter even runs.

From validated finding to platform submission:

Validated Finding (0.85+ confidence)
  → Human Review Queue
  → [Human approves]
  → Formatter transforms to platform format
  → Budget Manager confirms API availability
  → Platform API submission
  → External ID captured
  → Status set to 'submitted'

All submissions go through human-in-the-loop review (part 5). Automation prepares; humans decide.
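Stitched together, the flow reads as one short pipeline. A runnable sketch with stubs standing in for the real formatter, budget manager, and platform API (no retry or error handling):

```typescript
// End-to-end submission sketch. Stubs below are illustrative; the human gate
// comes first, then formatting, budget check, API call, and status update.
type Status = 'reviewed' | 'submitted';
interface Finding { title: string; status: Status; externalId?: string }

const budget = { write: 1 }; // toy write quota
const formatter = {
  format: (f: Finding) => ({ title: f.title }),
  submit: async (_report: object) => ({ externalId: 'RPT-1' }), // fake platform API
};

async function submitFinding(finding: Finding): Promise<Finding> {
  if (finding.status !== 'reviewed') throw new Error('needs human approval first');
  const report = formatter.format(finding);       // platform-specific shape
  if (budget.write <= 0) throw new Error('write budget exhausted');
  budget.write--;                                 // consume before the call
  const result = await formatter.submit(report);
  finding.externalId = result.externalId;         // capture platform report ID
  finding.status = 'submitted';
  return finding;
}
```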


Where This Series Goes Next

This is part 4 of a 5-part series on building bug bounty automation:

  1. Architecture & Multi-Agent Design
  2. From Detection to Proof: Validation & False Positives
  3. Failure-Driven Learning: Auto-Recovery Patterns
  4. One Tool, Three Platforms: Multi-Platform Integration (you are here)
  5. Human-in-the-Loop: The Ethics of Security Automation

Next up: why humans still make every submission decision, and how mandatory review gates protect researcher reputation.


Maybe platform differences aren’t obstacles. Maybe they’re forcing functions—requiring a cleaner internal model that happens to work anywhere, because it had to.


Written by Chudi Nnorukam

I design and deploy agent-based AI automation systems that eliminate manual workflows, scale content, and power recursive learning. Specializing in micro-SaaS tools, content automation, and high-performance web applications.

FAQ

How does unified findings model work across bug bounty platforms?

Findings are stored in a platform-agnostic format with all possible fields (title, description, CVSS, PoC, evidence). When submitting, platform-specific formatters transform this data to match each platform's required structure and field names.

What are the differences between HackerOne, Intigriti, and Bugcrowd APIs?

HackerOne uses REST API with basic auth and structured severity ratings. Intigriti has different field names and severity scales. Bugcrowd has unique bounty table structures. Each requires specific authentication and report formats.

How does the Budget Manager prevent rate limit violations?

The Budget Manager tracks read/write API quotas per platform. Before each request, it checks canRead() or canWrite(). If the budget is exhausted, the request queues until the quota refills. This prevents hitting rate limits across concurrent operations.

What is CVSS v3.1 and why calculate it for bug bounty reports?

CVSS (Common Vulnerability Scoring System) v3.1 is the industry standard for vulnerability severity. It uses 8 metrics including Attack Vector, Attack Complexity, and Impact scores. Consistent CVSS scoring helps programs triage and increases report credibility.

How does first-mover priority work in bug bounty automation?

The system monitors for newly launched programs. New programs get immediate passive reconnaissance. Active testing queues after 2-4 hours for scope clarification. This captures 'low-hanging fruit' before other researchers find them.

Sources & Further Reading

Sources

  • HackerOne API Documentation: Official API reference for HackerOne integrations.
  • Intigriti API: Official API documentation for Intigriti reporting.
  • Bugcrowd API: Official overview of Bugcrowd programmatic API access.
