Distributed Testing

Run tests across multiple machines or cloud regions

Overview

Fusillade supports distributed load generation for tests that exceed the capacity of a single machine. Run from multiple geographic locations to test global infrastructure. Two execution modes are available:

Local mode - Single machine, up to ~100k workers

Cloud mode - Distributed across regions, unlimited scale
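
Both modes run the same kind of script. The test.js referenced by the commands on this page can be as small as the sketch below, assembled from the snippets elsewhere on this page (as in those snippets, the http client is assumed to be available to the script). Run it locally with fusillade run test.js, or hand the same file to the cloud or a controller as shown in the sections that follow.

// test.js - minimal script used throughout this page (illustrative sketch)
export const options = {
  workers: 100,      // concurrent workers; the CLI examples also set this with -w
  duration: '1m',    // total run time
};

export default function () {
  // One request per iteration
  http.get('https://httpbin.org/get');
}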

Cloud Execution

Run tests on Fusillade Cloud for distributed execution.

# Log in to Cloud

$ fusillade login <your-api-token>
$ fusillade whoami  # Verify authentication

# Run on Cloud

# Run test in the cloud
$ fusillade run test.js --cloud

# Specify region
$ fusillade run test.js --cloud --region us-east-1

# Multiple regions
$ fusillade run test.js --cloud --region us-east-1,eu-west-1,ap-southeast-1

Available Regions

us-east-1 - N. Virginia (default)

us-west-2 - Oregon

eu-west-1 - Ireland

eu-central-1 - Frankfurt

ap-southeast-1 - Singapore

ap-northeast-1 - Tokyo

Multi-Region Testing

Use the cloud.distribution option to control how load is split across regions. With workers: 1000 in the example below, roughly 500 workers run from us-east-1, 300 from eu-west-1, and 200 from ap-southeast-1.

export const options = {
  cloud: {
    distribution: {
      'us-east-1': { percent: 50 },
      'eu-west-1': { percent: 30 },
      'ap-southeast-1': { percent: 20 },
    }
  },
  workers: 1000,
  duration: '5m',
};

export default function() {
  // Traffic distributed across regions
  http.get('https://api.example.com/data');
}

Metrics Aggregation

Metrics from all nodes are automatically aggregated. Each data point is tagged with its region, so thresholds can be set globally or per region:

export const options = {
  thresholds: {
    // Global threshold
    'http_req_duration': ['p95 < 500'],

    // Per-region thresholds
    'http_req_duration{region:us-east-1}': ['p95 < 300'],
    'http_req_duration{region:eu-west-1}': ['p95 < 400'],
    'http_req_duration{region:ap-southeast-1}': ['p95 < 600'],
  }
};

Worker/Controller Architecture

For self-hosted distributed testing, use the worker/controller architecture.

# Start Workers

# On each worker machine
$ fusillade worker --listen 0.0.0.0:8080

# Or connect to controller (Cluster Mode)
$ fusillade worker --connect http://controller:9001

# Start Controller

# Controller orchestrates workers and serves dashboard
$ fusillade controller --listen 0.0.0.0:9000

# Access dashboard at http://controller:9000/dashboard

Controller API Endpoints

GET / - Controller info page

GET /dashboard - Real-time metrics dashboard

GET /api/stats - Current test statistics (JSON)

GET /api/workers - List connected workers

POST /api/dispatch - Dispatch test to all workers

# Dispatch a test

# Check connected workers
curl http://controller:9000/api/workers

# Dispatch test to all workers
curl -X POST http://controller:9000/api/dispatch \
  -H "Content-Type: application/json" \
  -d '{
    "script_content": "export default function() { http.get(\"https://httpbin.org/get\"); }",
    "config": { "workers": 10, "duration_secs": 60 }
  }'
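
The same endpoints can be scripted. A minimal sketch using the built-in fetch in Node 18+ (the controller address is an example; responses are printed exactly as the controller returns them):

// poll-controller.js - drive the controller API from Node 18+ (sketch)
const BASE = process.env.CONTROLLER_URL || 'http://controller:9000';

async function main() {
  // Check which workers are connected before dispatching
  const workers = await fetch(`${BASE}/api/workers`).then((r) => r.json());
  console.log('workers:', JSON.stringify(workers));

  // Dispatch the same payload as the curl example above
  const res = await fetch(`${BASE}/api/dispatch`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      script_content: 'export default function() { http.get("https://httpbin.org/get"); }',
      config: { workers: 10, duration_secs: 60 },
    }),
  });
  console.log('dispatch status:', res.status);

  // Poll aggregated statistics every 5 seconds while the test runs
  setInterval(async () => {
    const stats = await fetch(`${BASE}/api/stats`).then((r) => r.json());
    console.log('stats:', JSON.stringify(stats));
  }, 5000);
}

main().catch(console.error);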

Manual Distribution (OTLP/StatsD)

Without the controller, you can run independent Fusillade instances on multiple machines and aggregate their metrics in a central backend. The --tag flag labels each instance's data points so they remain distinguishable downstream.

# On each load generator

# Machine 1 (50 workers)
$ fusillade run test.js -w 50 --out otlp=http://metrics-server:4317 --tag node=node1

# Machine 2 (50 workers)
$ fusillade run test.js -w 50 --out otlp=http://metrics-server:4317 --tag node=node2

# Machine 3 (50 workers)
$ fusillade run test.js -w 50 --out otlp=http://metrics-server:4317 --tag node=node3

# With StatsD

# Export to central StatsD/Datadog
$ fusillade run test.js -w 100 --out statsd=statsd-server:8125

Container Orchestration

# Docker

# Run in container
docker run -v $(pwd):/scripts fusillade/fusillade run /scripts/test.js -w 100

# Docker Compose with controller + workers
docker-compose up --build
# Access dashboard at http://localhost:9000/dashboard

Kubernetes Architecture

┌─────────────────────────────────────────────────────────────┐
│                    Kubernetes Cluster                        │
│                                                              │
│  ┌──────────────────┐      gRPC (9001)     ┌─────────────┐  │
│  │                  │◄─────────────────────│   Worker    │  │
│  │    Controller    │◄─────────────────────│   Worker    │  │
│  │   (Dashboard)    │◄─────────────────────│   Worker    │  │
│  │     :9000        │      Metrics         │    ...      │  │
│  └────────┬─────────┘                      └──────┬──────┘  │
│           │                                       │         │
│           │ HTTP                          HPA (3-50 pods)   │
│           ▼                                                 │
│    LoadBalancer:80                                          │
└─────────────────────────────────────────────────────────────┘

# Quick Kubernetes Setup

# Create namespace
kubectl apply -f k8s/namespace.yaml

# Deploy controller and workers
kubectl apply -f k8s/controller.yaml -n fusillade
kubectl apply -f k8s/worker.yaml -n fusillade

# Check status
kubectl get pods -n fusillade

# Access dashboard
kubectl port-forward svc/fusillade-controller 9000:9000 -n fusillade
# Open http://localhost:9000/dashboard

Resource Requirements

Controller: CPU 250m-500m, Memory 256Mi-512Mi

Worker: CPU 500m-2000m, Memory 512Mi-2Gi

Workers auto-scale from 3 to 50 replicas based on CPU (70%) and memory (80%) utilization.

CI/CD Integration

# GitHub Actions

name: Load Test
on:
  push:
    branches: [main]

jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Install Fusillade
        run: npm install -g @fusillade-io/fusillade

      - name: Run Load Test
        run: |
          fusillade run tests/load.js \
            --headless \
            --abort-on-fail \
            --export-json results.json \
            --out junit=results.xml

      - name: Upload Results
        uses: actions/upload-artifact@v3
        with:
          name: load-test-results
          path: |
            results.json
            results.xml

# GitLab CI

load_test:
  image: fusillade/fusillade
  stage: test
  script:
    - fusillade run tests/load.js --headless --abort-on-fail --export-json results.json --out junit=results.xml
  artifacts:
    paths:
      - results.json
    reports:
      junit: results.xml

Best Practices

1. Start with local testing before scaling out to distributed execution

2. Ensure all load generators run the same script version

3. Use centralized metrics collection (OTLP, StatsD)

4. Tag metrics with node/region for debugging

5. Monitor load generator health alongside target metrics

6. Use the --memory-safe flag for high-concurrency tests