Overview

Batch processing lets you run the same API endpoint against multiple inputs — from a file or stdin — with parallel execution, rate limiting, and configurable error handling.

Basic Syntax

anysite api <endpoint> --from-file <file> --input-key <key> [OPTIONS]

Input Sources

From File

Provide a list of inputs in a text file (one per line):
# users.txt
satyanadella
jeffweiner08
billgates

anysite api /api/linkedin/user --from-file users.txt --input-key user

From CSV/JSON Files

Batch from structured files — the CLI auto-detects the format:
# From CSV (uses specified column)
anysite api /api/linkedin/company --from-file companies.csv --input-key company

# From JSONL (uses specified field)
anysite api /api/linkedin/user --from-file users.jsonl --input-key user
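
For reference, minimal input files might look like the following (the column and field names are illustrative; --input-key selects which one is used):

# companies.csv
company,industry
microsoft,Software
linkedin,Internet

# users.jsonl
{"user": "satyanadella"}
{"user": "jeffweiner08"}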

From Stdin

Pipe data directly from other commands:
cat urls.txt | anysite api /api/linkedin/user --input-key user --stdin

# Chain with other CLI commands
anysite api /api/linkedin/search/users keywords="CTO" count=50 -q --format jsonl | \
  jq -r '.urn.value' | \
  anysite api /api/linkedin/user --input-key user --stdin
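
Since search results can contain duplicates, a sort -u between the extraction and batch stages (standard shell, nothing anysite-specific) keeps each profile from being fetched twice:

anysite api /api/linkedin/search/users keywords="CTO" count=50 -q --format jsonl | \
  jq -r '.urn.value' | \
  sort -u | \
  anysite api /api/linkedin/user --input-key user --stdin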

Parallel Execution

Control the number of concurrent workers:
anysite api /api/linkedin/user --from-file users.txt --input-key user \
  --parallel 5
Higher parallelism increases throughput, but also the rate at which you consume API tokens. Start with 3-5 workers and adjust based on your plan limits.
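
As a rough model that ignores rate limiting, wall-clock time ≈ (inputs × per-request latency) ÷ workers: assuming 1,000 inputs at about one second per request, --parallel 5 finishes in roughly 200 seconds instead of about 17 minutes serially.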

Rate Limiting

Set a maximum request rate to stay within limits:
anysite api /api/linkedin/user --from-file users.txt --input-key user \
  --parallel 5 --rate-limit "10/s"
Rate limit formats:
  • 10/s — 10 requests per second
  • 100/m — 100 requests per minute
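
The two controls interact: effective throughput is capped by whichever is lower, the rate limit or parallel ÷ per-request latency. For example, with --parallel 5 and --rate-limit "10/s", requests taking an assumed one second each yield about 5 requests per second, so the 10/s ceiling is never reached.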

Error Handling

Configure behavior when individual requests fail:
anysite api /api/linkedin/user --from-file users.txt --input-key user \
  --on-error skip
Strategy   Behavior
stop       Stop the entire batch on first error (default)
skip       Skip the failed input and continue with the rest
retry      Retry failed requests with exponential backoff
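
Since retry uses exponential backoff, it suits transient failures such as timeouts; the flag takes it the same way:

anysite api /api/linkedin/user --from-file users.txt --input-key user \
  --on-error retry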

Progress and Statistics

Progress Tracking

Show a real-time progress bar:
anysite api /api/linkedin/user --from-file users.txt --input-key user \
  --parallel 5 --progress

Batch Statistics

Display summary statistics after completion:
anysite api /api/linkedin/user --from-file users.txt --input-key user \
  --parallel 5 --stats
Output includes total processed, succeeded, failed, and elapsed time.

Output Options

All output format options work with batch processing:
# Save batch results to CSV
anysite api /api/linkedin/user --from-file users.txt --input-key user \
  --parallel 5 --format csv --output results.csv

# JSONL for streaming
anysite api /api/linkedin/user --from-file users.txt --input-key user \
  --parallel 5 --format jsonl --output results.jsonl

# Quiet mode for piping
anysite api /api/linkedin/user --from-file users.txt --input-key user \
  --parallel 5 -q --format jsonl | anysite db insert mydb --table profiles --stdin

Complete Example

Enrich a list of LinkedIn profiles with parallel processing, rate limiting, and error recovery:
anysite api /api/linkedin/user \
  --from-file linkedin_urls.txt \
  --input-key user \
  --parallel 5 \
  --rate-limit "10/s" \
  --on-error skip \
  --progress \
  --stats \
  --format csv \
  --output enriched_profiles.csv
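
A quick sanity check after a run like this (plain shell, not an anysite feature) is to compare line counts, keeping in mind that the CSV output includes a header row and that skipped inputs won't appear:

wc -l linkedin_urls.txt enriched_profiles.csv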

Options Reference

Option         Description
--from-file    Input file path (TXT, CSV, JSON, JSONL)
--input-key    Parameter name to map each input value to
--stdin        Read inputs from stdin instead of a file
--parallel     Number of concurrent workers (default: 1)
--rate-limit   Maximum request rate (e.g., 10/s, 100/m)
--on-error     Error handling strategy: stop, skip, retry (default: stop)
--progress     Show a real-time progress bar
--stats        Display batch statistics after completion

Next Steps