## Documentation Index
Fetch the complete documentation index at: https://docs.anysite.io/llms.txt
Use this file to discover all available pages before exploring further.
## Overview
The Executive Search Agent automates candidate sourcing for executive and senior positions. It reads job descriptions, builds targeted search strategies, collects candidate data, and delivers AI-analyzed shortlists — all using the anysite CLI.
The agent works step-by-step, confirming each stage with you in plain language.
## Workflow

### Step 1: Extract Search Criteria

When you provide a job description (PDF, DOCX, or text), the agent extracts structured search criteria:
Basic Information:
- Position and level (IC / Manager / Director / VP / C-level)
- Company and context (size, industry, stage)
- Geography and timezone requirements
- Compensation range (if specified)
Technical Profile:
- Required technologies/skills
- Preferred technologies/skills
- Anti-patterns (what we’re NOT looking for)
Candidate Profile:
- Target companies for sourcing
- Target industries
- Tenure pattern (stable / any)
- Minimum years of experience
Culture Fit:
- Positive markers
- Red flags
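The extracted criteria amount to a small structured record. A sketch of what such a structure could look like in Python (field names are illustrative, not the agent's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class SearchCriteria:
    """Illustrative container for criteria extracted from a job description."""
    position: str
    level: str                                          # IC / Manager / Director / VP / C-level
    locations: list[str] = field(default_factory=list)
    required_skills: list[str] = field(default_factory=list)
    preferred_skills: list[str] = field(default_factory=list)
    anti_patterns: list[str] = field(default_factory=list)   # what we're NOT looking for
    target_companies: list[str] = field(default_factory=list)
    target_industries: list[str] = field(default_factory=list)
    positive_markers: list[str] = field(default_factory=list)
    red_flags: list[str] = field(default_factory=list)

# Example populated from the VP of Engineering scenario below
criteria = SearchCriteria(
    position="VP of Engineering",
    level="VP",
    locations=["Germany", "United Kingdom"],
    required_skills=["distributed systems", "Kubernetes", "Go", "Rust"],
    anti_patterns=["consulting-only backgrounds"],
    target_companies=["Stripe", "Shopify", "GitHub", "Intercom"],
)
```

Holding the criteria in one place like this makes the confirmation step below a simple read-back of the record.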
The agent presents extracted criteria for your confirmation before proceeding:
Here's what I understood from the document:
Position: VP of Engineering at Acme Corp
Location: Germany / UK (remote OK)
Level: VP / Director
Technical Profile:
- Looking for: distributed systems, Kubernetes, Go/Rust
- Avoiding: consulting-only backgrounds
Target Companies: Stripe, Shopify, GitHub, Intercom
Target Industries: SaaS, DevTools, Fintech
Culture Fit:
- Positive: open-source contributions, technical blog posts
- Red flags: very short tenures (< 1 year)
Is this correct? Anything to change or add?
### Step 2: Find URNs for Filters
Before building the search strategy, the agent finds LinkedIn identifiers needed for precise filtering.
Locations:

```bash
anysite api /api/linkedin/search/locations name="Germany" count=5 --format table
anysite api /api/linkedin/search/locations name="United Kingdom" count=5 --format table
```

Companies:

```bash
anysite api /api/linkedin/company company=intercom --fields "name,urn"
anysite api /api/linkedin/search/companies keywords="Shopify" count=5 --format table
```

Industries:

```bash
anysite api /api/linkedin/search/industries name="Software" count=10 --format table
```
The agent shows the identifiers it found:
Found identifiers for filters:
Locations:
- Germany: urn:li:geo:101282230
- United Kingdom: urn:li:geo:101165590
Companies:
- Intercom: urn:li:company:2491343
- Shopify: urn:li:company:784652
- GitHub: urn:li:company:1418841
Continue with search strategy?
### Step 3: Build Search Strategy

Based on the criteria, the agent proposes a strategy using the appropriate sources:

**Sales Navigator Search** (precise filters, paid)
- When: you need precise filters by level, tenure, or function
- Endpoint: `/api/linkedin/sn_search/users`

**Regular LinkedIn Search** (keywords, free)
- When: a keyword search is sufficient
- Endpoint: `/api/linkedin/search/users`

**Company Employees Search**
- When: you have a list of target companies
- Endpoint: `/api/linkedin/company/employees`
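The choice among these sources can be sketched as a simple decision rule. The helper below is hypothetical and only illustrates the trade-offs described above; the agent's actual selection logic may differ:

```python
def pick_sources(criteria: dict) -> list[str]:
    """Map extracted criteria to candidate-search endpoints (illustrative only)."""
    sources = []
    # Precise seniority/tenure filters call for Sales Navigator (paid).
    if criteria.get("level") in {"Director", "VP", "C-level"} or criteria.get("tenure_pattern") == "stable":
        sources.append("/api/linkedin/sn_search/users")
    else:
        # Otherwise a free keyword search is usually sufficient.
        sources.append("/api/linkedin/search/users")
    # A list of target companies enables direct employee sourcing.
    if criteria.get("target_companies"):
        sources.append("/api/linkedin/company/employees")
    return sources

print(pick_sources({"level": "VP", "target_companies": ["Stripe", "GitHub"]}))
# ['/api/linkedin/sn_search/users', '/api/linkedin/company/employees']
```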
The agent presents its plan:
Proposed search strategy:
Source 1: Sales Navigator search — VP/Director Engineering in Germany/UK
- Filters: level, function, geography, past companies
- Expected result: ~100 profiles
Source 2: Company employees — target companies (Stripe, Shopify, GitHub)
- Filter: engineering department
- Expected result: ~150 profiles
Processing:
- Deduplication by URN
- Full profile enrichment
- LLM analysis for criteria matching
Agree with this approach?
### Step 4: Create Pipeline

After strategy confirmation, the agent dynamically builds a `dataset.yaml` based on:
- Selected search sources
- Extracted criteria
- Found URNs
```yaml
name: vp-engineering-search

sources:
  # Sales Navigator search
  - id: sn_search
    endpoint: /api/linkedin/sn_search/users
    params:
      keywords: "VP Engineering"
      location: ["urn:li:geo:101282230", "urn:li:geo:101165590"]
      current_company: ["urn:li:company:2491343", "urn:li:company:784652"]
      seniority: ["VP", "Director"]
      count: 100
    on_error: skip

  # Company employees search
  - id: target_employees
    endpoint: /api/linkedin/company/employees
    from_file: target_companies.txt
    input_key: companies
    input_template:
      companies: [{ type: company, value: "{value}" }]
    count: 50
    parallel: 3
    on_error: skip

  # Combine and deduplicate
  - id: all_candidates
    type: union
    sources: [sn_search, target_employees]
    dedupe_by: urn.value

  # Enrich with full profiles
  - id: profiles
    endpoint: /api/linkedin/user
    dependency:
      from_source: all_candidates
      field: urn.value
      dedupe: true
    input_key: user
    params:
      with_experience: true
      with_skills: true
      with_education: true
    parallel: 5
    on_error: skip

  # LLM analysis
  - id: analyzed
    type: llm
    dependency:
      from_source: profiles
      field: name
    llm:
      # Extract structured attributes
      - type: enrich
        add:
          - "distributed_systems_years:number"
          - "management_experience_years:number"
          - "tenure_avg_years:number"
          - "has_open_source:boolean"
          - "has_technical_blog:boolean"
        fields: [name, headline, description, experience, skills]

      # Classify fit
      - type: classify
        categories: "strong_fit,good_fit,maybe,not_fit"
        output_column: fit_score
        fields: [headline, experience, skills]

      # Detailed analysis
      - type: generate
        prompt: |
          You are an expert executive recruiter. Analyze this candidate for:
          ROLE: VP of Engineering
          COMPANY: Acme Corp (Series B SaaS, 200 employees)
          REQUIREMENTS: distributed systems, Kubernetes, Go/Rust, team leadership 50+
          CULTURE: open-source, technical depth, stable tenure
          CANDIDATE:
          Name: {name}
          Headline: {headline}
          Location: {location}
          Experience: {experience}
          Skills: {skills}
          Provide:
          ## SCORE: [1-10]
          ## FIT ASSESSMENT (2-3 sentences)
          ## STRENGTHS
          ## CONCERNS
          ## RED FLAGS
          ## RECOMMENDATION: [STRONG PROCEED / PROCEED / MAYBE / PASS]
          ## OUTREACH ANGLE (personalized hook for initial contact)
        output_column: analysis
        temperature: 0.3
        fields: [name, headline, location, experience, skills]

export:
  - type: file
    path: ./output/candidates_{{date}}.csv
    format: csv

storage:
  format: parquet
  path: ./data/
```
The LLM analysis prompt is dynamically generated based on the job description. The example above shows the structure — actual content adapts to each search.
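The `union` step's `dedupe_by: urn.value` behavior can be illustrated in plain Python. This is a behavioral sketch, not the dataset engine's actual implementation; it assumes each record carries a nested `urn.value` field, as in the pipeline above:

```python
def union_dedupe(*source_results: list[dict]) -> list[dict]:
    """Merge candidate lists, keeping the first record seen for each URN."""
    seen: set[str] = set()
    merged: list[dict] = []
    for results in source_results:
        for record in results:
            urn = record["urn"]["value"]   # mirrors dedupe_by: urn.value
            if urn not in seen:
                seen.add(urn)
                merged.append(record)
    return merged

sn = [{"urn": {"value": "u1"}, "name": "Jane"}, {"urn": {"value": "u2"}, "name": "Alex"}]
employees = [{"urn": {"value": "u2"}, "name": "Alex"}, {"urn": {"value": "u3"}, "name": "Maria"}]
print(len(union_dedupe(sn, employees)))  # 3
```

Deduplicating before the enrichment stage matters because every full-profile fetch and LLM call costs money; duplicates across sources should be paid for once.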
### Step 5: Run Collection

The agent always does a dry run first:

```bash
# Preview execution plan
anysite dataset collect dataset.yaml --dry-run

# Full collection after confirmation
anysite dataset collect dataset.yaml
```

It then reports the results:
Collection complete!
Results:
- Sales Navigator search: 87 profiles
- Target company employees: 142 profiles
- After deduplication: 201 unique candidates
- Full profiles enriched: 198 (3 errors skipped)
- LLM analysis complete: 198 records
Show fit_score distribution?
### Step 6: Analysis and Reporting

The agent presents a summary:
Candidate Distribution:
| Fit Score | Count | % |
|--------------|-------|-----|
| strong_fit | 12 | 6% |
| good_fit | 34 | 17% |
| maybe | 68 | 34% |
| not_fit | 84 | 43% |
Top Candidates (strong_fit):
1. Jane Smith — VP Platform Engineering at Stripe, Berlin
Score: 9/10 — 8 years distributed systems, Kubernetes expert, manages 60+ engineers
2. Alex Chen — Director of Engineering at GitHub, London
Score: 8/10 — Active open-source contributor, Go/Rust background, led 40-person team
3. Maria Garcia — VP Engineering at Shopify, Remote (UK)
Score: 8/10 — Built platform team from 10 to 50, strong K8s expertise
What's next?
- Show detailed analysis of top candidates?
- Export to CSV?
- Apply additional filters?
- Set up weekly incremental search?
### Step 7: Incremental Updates

For repeated runs (e.g., weekly candidate refresh):

```bash
anysite dataset collect dataset.yaml --incremental
```
Incremental mode:
- Previously collected: 198 candidates
- New found: 23
- New enriched and analyzed: 23
- Total now: 221
Show only new candidates?
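Conceptually, an incremental run only needs to enrich and analyze URNs that were absent from earlier runs. A rough sketch of that set difference (the real bookkeeping is handled by the anysite dataset engine and its parquet storage, not by code like this):

```python
def new_candidates(previous_urns: set[str], current: list[dict]) -> list[dict]:
    """Return only records whose URN was not collected in earlier runs."""
    return [r for r in current if r["urn"]["value"] not in previous_urns]

previous = {"u1", "u2"}            # URNs from earlier runs
batch = [{"urn": {"value": "u2"}}, {"urn": {"value": "u9"}}]
print(len(new_candidates(previous, batch)))  # 1
```

This is why incremental mode is cheap: in the example above, only 23 of 221 candidates incur new enrichment and LLM costs.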
## Reference Endpoints

### Finding Identifiers

```bash
# Locations
anysite api /api/linkedin/search/locations name="Germany" count=5

# Companies
anysite api /api/linkedin/company company={slug} --fields "name,urn"
anysite api /api/linkedin/search/companies keywords="..." count=10

# Industries
anysite api /api/linkedin/search/industries name="Software" count=10
```

### Candidate Search

```bash
# Sales Navigator (precise filters)
anysite describe /api/linkedin/sn_search/users

# Regular search (keywords)
anysite describe /api/linkedin/search/users

# Company employees
anysite describe /api/linkedin/company/employees
```

### Profiles

```bash
anysite describe /api/linkedin/user
```
## Key Principles

- Always confirm before executing — especially Sales Navigator searches (expensive) and LLM analysis (token costs)
- Adapt the pipeline to each specific role — search sources, filters, LLM extraction fields, and analysis prompts are all dynamic
- Use `--dry-run` before actual collection runs
- Use `--incremental` for repeated runs to avoid re-collecting existing candidates
- Present results in plain language at every step
- Show intermediate results in a readable format — tables for quick scanning, detailed analysis for top candidates