
Overview

MCP v2 replaces 70+ individual tools with 5 universal meta-tools. Any skill, prompt, or agent instruction that references old tool names must be updated. What changed:
- search_linkedin_users(keywords, location) → execute("linkedin", "search", "search_users", {"keywords": ..., "location": ...})
- 70+ tool definitions in context → 5 compact meta-tools
- No caching → built-in cache with query_cache and export_data
- No server-side filtering → server-side filtering, aggregation, and grouping

Tool Mapping

LinkedIn

- search_linkedin_users(keywords, location, count) → execute("linkedin", "search", "search_users", {"keywords": ..., "location": ..., "count": ...})
- get_linkedin_profile(user) → execute("linkedin", "user", "get", {"user": ...})
- get_linkedin_company(company) → execute("linkedin", "company", "get", {"company": ...})
- search_linkedin_companies(keywords, count) → execute("linkedin", "search", "search_companies", {"keywords": ..., "count": ...})
- search_linkedin_jobs(keywords, location, count) → execute("linkedin", "job_search", "search_jobs", {"keywords": ..., "location": ..., "count": ...})
- search_linkedin_posts(keywords, count) → execute("linkedin", "post", "search_posts", {"keywords": ..., "count": ...})
- get_linkedin_user_posts(user) → execute("linkedin", "post", "get_user_posts", {"user": ...})
- find_linkedin_email(user) → execute("linkedin", "email", "find", {"user": ...})
- google_linkedin_search(query, count) → execute("linkedin", "google", "search", {"query": ..., "count": ...})

Twitter/X

- search_twitter_users(query) → execute("twitter", "search", "search_users", {"query": ...})
- get_twitter_user(username) → execute("twitter", "user", "get", {"username": ...})
- get_twitter_user_tweets(username) → execute("twitter", "user_tweets", "get", {"username": ...})

Instagram

- search_instagram_users(query) → execute("instagram", "search", "search_users", {"query": ...})
- get_instagram_user(username) → execute("instagram", "user", "get", {"username": ...})
- get_instagram_post(url) → execute("instagram", "post", "get", {"url": ...})

YouTube

- search_youtube(query, count) → execute("youtube", "search", "search_videos", {"query": ..., "count": ...})
- get_youtube_channel(channel_id) → execute("youtube", "channel", "get", {"channel_id": ...})
- get_youtube_video(video_id) → execute("youtube", "video", "get", {"video_id": ...})

Reddit

- search_reddit(query) → execute("reddit", "search", "search", {"query": ...})
- get_reddit_user(username) → execute("reddit", "user", "get", {"username": ...})
- get_reddit_posts(subreddit) → execute("reddit", "posts", "get", {"subreddit": ...})

YC / SEC / Web

- search_yc_companies(query) → execute("yc", "search", "search", {"query": ...})
- get_yc_company(slug) → execute("yc", "company", "get", {"slug": ...})
- search_sec_filings(query) → execute("sec", "search", "search", {"query": ...})
- get_sec_document(url) → execute("sec", "document", "get", {"url": ...})
- scrape_webpage(url) → execute("webparser", "parse", "parse", {"url": ...})

If an old tool name is not in the mapping above, use discover(source, category) to find the correct endpoint name, then call execute().
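The discover-then-execute fallback can be sketched as follows. This is a hypothetical illustration: `discover` and `execute` are stubbed stand-ins for the real MCP meta-tools, and the response shape (`"endpoints"`) is an assumption for the sake of the example.

```python
# Stubbed stand-ins for the MCP meta-tools; real calls go through the MCP server.
def discover(source, category):
    # Hypothetical catalog; the real tool returns endpoint names and param schemas.
    catalog = {("linkedin", "company"): ["get", "get_posts"]}
    return {"endpoints": catalog.get((source, category), [])}

def execute(source, category, endpoint, params):
    # Hypothetical echo; the real tool performs the request and returns results.
    return {"source": source, "endpoint": endpoint, "params": params}

def execute_with_discover(source, category, endpoint, params):
    """Verify the endpoint name via discover() before calling execute()."""
    known = discover(source, category)["endpoints"]
    if endpoint not in known:
        raise ValueError(f"Unknown endpoint {endpoint!r}; available: {known}")
    return execute(source, category, endpoint, params)

result = execute_with_discover("linkedin", "company", "get", {"company": "anysite"})
```

The same shape applies to any unmapped v1 tool: one `discover()` round-trip to confirm the endpoint, then the `execute()` call.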

Migration Rules

1. Direct tool references → execute()

Before:
Use search_linkedin_users to find people matching the criteria.
After:
Use execute("linkedin", "search", "search_users", {params}) to find people matching the criteria.

2. Unknown endpoint/params → add discover() first

Before:
Search for the company on LinkedIn and get their details.
After:
Use discover("linkedin", "company") to check available endpoints and params.
Then use execute("linkedin", "company", "get", {params}) to get company details.

3. Known endpoint — skip discover()

discover() is only needed when the skill doesn’t know the exact endpoint name or parameter format. If the skill hardcodes specific execute() calls, discover is not required.

4. Multi-step workflows

Before:
1. Use search_linkedin_users to find the person
2. Use get_linkedin_profile to get their full profile
3. Use find_linkedin_email to get their email
After:
1. Use execute("linkedin", "search", "search_users", {"first_name": ..., "last_name": ..., "count": 5})
   to find the person
2. Use execute("linkedin", "user", "get", {"user": "{alias from step 1}"})
   to get their full profile
3. Use execute("linkedin", "email", "find", {"user": "{alias from step 1}"})
   to get their email
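The three-step workflow above can be sketched as chained `execute()` calls. `execute()` is stubbed here with canned data, and the response field names (`results`, `alias`, `email`) are assumptions about the response shape, not a documented schema.

```python
# Stubbed execute(); the real meta-tool dispatches to the MCP server.
def execute(source, category, endpoint, params):
    if endpoint == "search_users":
        return {"results": [{"alias": "jane-doe", "name": "Jane Doe"}]}
    if (category, endpoint) == ("user", "get"):
        return {"alias": params["user"], "headline": "CTO"}
    if (category, endpoint) == ("email", "find"):
        return {"email": f"{params['user']}@example.com"}

# Step 1: find the person
hits = execute("linkedin", "search", "search_users",
               {"first_name": "Jane", "last_name": "Doe", "count": 5})
alias = hits["results"][0]["alias"]

# Step 2: full profile, reusing the alias from step 1
profile = execute("linkedin", "user", "get", {"user": alias})

# Step 3: email, same alias
email = execute("linkedin", "email", "find", {"user": alias})
```

The key migration point: intermediate values (here the alias) flow between `execute()` calls exactly as they did between the old individual tools.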

5. New capabilities — pagination, filtering, export

v2 adds cache-based tools that didn’t exist in v1. Update skills to take advantage of them:

- Pagination — results from execute() include a cache_key. If more data exists:
  get_page(cache_key="{cache_key}", offset=10, limit=10)
- Filtering — filter results without consuming context tokens:
  query_cache(cache_key="{cache_key}", conditions=[
    {"field": "location", "op": "contains", "value": "San Francisco"}
  ])
- Aggregation — calculate statistics server-side:
  query_cache(cache_key="{cache_key}", aggregate={"field": "followers", "op": "avg"}, group_by="industry")
- Export — download full datasets (returns a download URL):
  export_data(cache_key="{cache_key}", output_format="csv")
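The cache workflow can be sketched end to end: one search populates the cache, then pagination and filtering run server-side against the returned cache_key. All behavior below is stubbed in-process; only the call shapes mirror the v2 meta-tools, and the cache contents are invented sample data.

```python
# Hypothetical in-memory stand-in for the server-side cache.
CACHE = {"k1": [{"location": "San Francisco", "followers": 10},
                {"location": "Berlin", "followers": 30}]}

def get_page(cache_key, offset=0, limit=10):
    # Paginate without re-running the original search.
    return CACHE[cache_key][offset:offset + limit]

def query_cache(cache_key, conditions):
    # Filter server-side so unmatched rows never enter the context window.
    rows = CACHE[cache_key]
    for c in conditions:
        if c["op"] == "contains":
            rows = [r for r in rows if c["value"] in r[c["field"]]]
    return rows

page = get_page("k1", offset=1, limit=1)  # second row only
sf = query_cache("k1", conditions=[
    {"field": "location", "op": "contains", "value": "San Francisco"}])
```

The design point is that only the final, filtered slice is returned to the model; the full result set stays in the cache until exported.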

6. Error handling

Before:
If search_linkedin_users returns an error, try with different keywords.
After:
If execute() returns an error with "llm_hint", follow the hint.
If execute() returns {"error": "Source not found", "available_sources": [...]}, check source name.
If execute() returns {"error": "Endpoint not found", "available_endpoints": [...]},
  call discover() to get correct endpoint names.
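The error-dispatch rules above can be sketched as a small handler. The error payload shapes come from this guide; the `handle_error` function itself is a hypothetical skill-side helper, not part of the MCP API.

```python
def handle_error(resp):
    """Decide the next action from a v2 error response (shapes per the guide)."""
    if "llm_hint" in resp:
        return ("follow_hint", resp["llm_hint"])
    if resp.get("error") == "Source not found":
        return ("check_source", resp.get("available_sources", []))
    if resp.get("error") == "Endpoint not found":
        return ("call_discover", resp.get("available_endpoints", []))
    return ("retry", None)

action, detail = handle_error({"error": "Endpoint not found",
                               "available_endpoints": ["get", "search"]})
```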

Migration Checklist

1. Replace tool calls
  • Replace all search_linkedin_*, get_linkedin_* → execute("linkedin", ...)
  • Replace all search_twitter_*, get_twitter_* → execute("twitter", ...)
  • Replace all search_instagram_*, get_instagram_* → execute("instagram", ...)
  • Replace all search_youtube*, get_youtube_* → execute("youtube", ...)
  • Replace all search_reddit*, get_reddit_* → execute("reddit", ...)
  • Replace all search_yc_*, get_yc_* → execute("yc", ...)
  • Replace all search_sec_*, get_sec_* → execute("sec", ...)
  • Replace all scrape_webpage → execute("webparser", "parse", ...)
2. Add new capabilities
  • Add get_page for large result sets
  • Add query_cache for filtering/aggregation
  • Add export_data for file downloads
3. Add discover() where needed
  • Only where endpoint names or params are not known in advance
4. Clean up
  • Remove references to disabled sources (e.g., Crunchbase)
  • Update error handling to the v2 format
5. Test
  • Test each migrated skill end-to-end

Automated Migration

With Claude Code Skill

Instead of migrating manually, use the dedicated anysite-mcp-migration skill for Claude Code. Install it from within Claude Code:
# Add the skills marketplace
/plugin marketplace add https://github.com/anysiteio/agent-skills

# Install the migration skill
/plugin install anysite-mcp-migration@anysite-skills
Quick preview of all available skills:
npx @anysiteio/agent-skills
Usage — ask Claude Code in natural language:
  • “Migrate the skill at /path/to/SKILL.md to v2”
  • “Migrate this prompt to the new anysite MCP” (paste old prompt)
  • “What v1 tools are still in this skill?” (paste text)
The skill automatically:
  1. Scans for all v1 tool references
  2. Replaces with correct execute() calls
  3. Adds discover() only where needed
  4. Adds get_page, query_cache, export_data where beneficial
  5. Removes disabled source references
  6. Updates error handling
  7. Outputs a migration summary

With Any LLM (Auto-Migration Prompt)

Copy the prompt below and paste it along with your old skill text into any LLM:
You are migrating an Anysite MCP skill from v1 (individual tools) to v2 (meta-tools).

RULES:
1. Replace every old tool call with execute(source, category, endpoint, params).
2. Only add discover() if the skill text says "check what's available" or doesn't specify
   exact endpoint/params. If the skill already knows exactly what to call — use execute()
   directly, no discover needed.
3. Add get_page/query_cache/export_data where the skill would benefit from pagination,
   filtering, or file export.
4. Keep the same logical flow — don't change what the skill does, only how it calls tools.
5. Remove references to Crunchbase (disabled source).

TOOL MAPPING:
- search_linkedin_users(...) → execute("linkedin", "search", "search_users", {...})
- get_linkedin_profile(user=X) → execute("linkedin", "user", "get", {"user": X})
- get_linkedin_company(company=X) → execute("linkedin", "company", "get", {"company": X})
- search_linkedin_companies(...) → execute("linkedin", "search", "search_companies", {...})
- search_linkedin_jobs(...) → execute("linkedin", "job_search", "search_jobs", {...})
- search_linkedin_posts(...) → execute("linkedin", "post", "search_posts", {...})
- get_linkedin_user_posts(user=X) → execute("linkedin", "post", "get_user_posts", {"user": X})
- find_linkedin_email(user=X) → execute("linkedin", "email", "find", {"user": X})
- google_linkedin_search(query=X) → execute("linkedin", "google", "search", {"query": X})
- search_twitter_users(query=X) → execute("twitter", "search", "search_users", {"query": X})
- get_twitter_user(username=X) → execute("twitter", "user", "get", {"username": X})
- get_twitter_user_tweets(username=X) → execute("twitter", "user_tweets", "get", {"username": X})
- search_instagram_users(query=X) → execute("instagram", "search", "search_users", {"query": X})
- get_instagram_user(username=X) → execute("instagram", "user", "get", {"username": X})
- get_instagram_post(url=X) → execute("instagram", "post", "get", {"url": X})
- search_youtube(query=X) → execute("youtube", "search", "search_videos", {"query": X})
- get_youtube_channel(channel_id=X) → execute("youtube", "channel", "get", {"channel_id": X})
- get_youtube_video(video_id=X) → execute("youtube", "video", "get", {"video_id": X})
- search_reddit(query=X) → execute("reddit", "search", "search", {"query": X})
- get_reddit_user(username=X) → execute("reddit", "user", "get", {"username": X})
- get_reddit_posts(subreddit=X) → execute("reddit", "posts", "get", {"subreddit": X})
- search_yc_companies(query=X) → execute("yc", "search", "search", {"query": X})
- get_yc_company(slug=X) → execute("yc", "company", "get", {"slug": X})
- search_sec_filings(query=X) → execute("sec", "search", "search", {"query": X})
- get_sec_document(url=X) → execute("sec", "document", "get", {"url": X})
- scrape_webpage(url=X) → execute("webparser", "parse", "parse", {"url": X})

NEW CAPABILITIES (add where useful):
- get_page(cache_key, offset, limit) — paginate large results
- query_cache(cache_key, conditions, sort_by, aggregate, group_by) — filter/sort/aggregate server-side
- export_data(cache_key, output_format) — download as json/csv/jsonl

If an old tool name is not in the mapping above, use discover(source, category) to find
the correct endpoint, then execute().

Now migrate the following skill:

<paste old skill text here>