Model Context Protocol Server for AI Agents
Transport: stdio · Runtime: Python 3.12+
The Model Context Protocol enables AI agents (Claude, etc.) to interact with IATO autonomously — crawling sites, building sitemaps, auditing SEO, and managing content without human intervention.
The MCP server exposes 105 tools, translating each call into IATO REST API requests and handling auth, retries, and error recovery automatically.
No installation or config files needed. Connect directly from Claude Desktop.
Go to Settings → Connectors (or Settings → MCP Servers).
Click "Add custom connector" and enter:

- Name: IATO
- URL: https://iato.ai/mcp
Click "Add" → a browser window will open → log in with your IATO account → click "Authorize". That's it! All 105 MCP tools will appear in Claude Desktop.
Don't have an account? Sign up at iato.ai — free trial includes full MCP access.
If you prefer to use an API key directly, add the following to ~/Library/Application Support/Claude/claude_desktop_config.json:
```json
{
  "mcpServers": {
    "iato": {
      "type": "streamable-http",
      "url": "https://iato.ai/mcp",
      "headers": {
        "Authorization": "Bearer iato_your_api_key"
      }
    }
  }
}
```
Generate an API key at Developers → API Keys in your IATO dashboard.
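Outside Claude Desktop, the key can be sanity-checked with a raw MCP request over streamable HTTP. A minimal sketch: the endpoint and header mirror the config above, the JSON-RPC envelope follows the MCP spec (a real session would first perform an `initialize` handshake):

```python
import json
import urllib.request

API_KEY = "iato_your_api_key"  # from Developers → API Keys

# JSON-RPC 2.0 envelope; MCP's streamable HTTP transport POSTs these to the endpoint
payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}

req = urllib.request.Request(
    "https://iato.ai/mcp",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    },
    method="POST",
)
# urllib.request.urlopen(req) would return the tool list; network call omitted here
```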
For local development, install and run the MCP server via stdio transport:
```bash
# Install
python3 -m venv mcp_venv
source mcp_venv/bin/activate
pip install -r requirements-mcp.txt
```
Add to claude_desktop_config.json:
```json
{
  "mcpServers": {
    "iato": {
      "command": "/path/to/crawler/mcp_venv/bin/python",
      "args": ["-m", "mcp_server.server"],
      "cwd": "/path/to/crawler",
      "env": {
        "IATO_BASE_URL": "https://iato.ai",
        "IATO_API_KEY": "your-api-key"
      }
    }
  }
}
```
Tip: Use the full path to the venv's Python binary — not just python.
┌─────────────┐  Streamable HTTP   ┌──────────────────┐              ┌─────────────┐
│  AI Agent   │ ◄────────────────► │ MCP Server (105) │ ◄──────────► │  IATO API   │
│  (Claude)   │    iato.ai/mcp     │  Python / httpx  │  REST/JSON   │   FastAPI   │
└─────────────┘  (or stdio local)  └──────────────────┘              └─────────────┘
                                                                            │
                                                                     ┌──────▼──────┐
                                                                     │    MySQL    │
                                                                     └─────────────┘
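Conceptually, the translation layer is a lookup from tool name to REST method and path. A simplified sketch, using routes from the tables below (the actual server internals may differ):

```python
# Hypothetical route table: tool name → (HTTP method, path template).
# Entries mirror the module tables below; the real server has ~105 of these.
ROUTES = {
    "list_workspaces":  ("GET", "/api/workspaces"),
    "get_workspace":    ("GET", "/api/workspaces/{id}"),
    "create_workspace": ("POST", "/api/workspaces"),
    "stop_crawl":       ("POST", "/api/crawl/jobs/{id}/cancel"),
}

def build_request(tool: str, args: dict) -> tuple[str, str]:
    """Resolve an MCP tool call to an HTTP method and a concrete URL path."""
    method, template = ROUTES[tool]
    return method, template.format(**args)
```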
Every tool returns a consistent JSON envelope:
```json
{
  "success": true,
  "data": { ... },
  "summary": "Created workspace 'My Site' (ID: 42)",
  "next_actions": [
    { "tool": "start_crawl", "args": {...}, "description": "Begin crawling" }
  ]
}
```
Error responses include recovery_actions with specific tool calls — agents never hit dead ends.
All 105 tools, organized by module. Method badges: GET, POST, PUT, DEL, CLIENT.
| Tool | Method | Endpoint | Description |
|---|---|---|---|
| list_workspaces | GET | /api/workspaces | List all workspaces with optional stats |
| get_workspace | GET | /api/workspaces/{id} | Get details with crawls and sitemaps |
| create_workspace | POST | /api/workspaces | Create a new workspace |
| update_workspace | PUT | /api/workspaces/{id} | Update name or description |
| delete_workspace | DEL | /api/workspaces/{id} | Delete (optionally with contents) |
| Tool | Method | Endpoint | Description |
|---|---|---|---|
| start_crawl | POST | /api/crawl/start | Start crawl with config |
| get_crawl_status | GET | /api/crawl/jobs/{id} | Check progress, pages found, ETA |
| list_crawls | GET | /api/crawl/jobs | List crawls filtered by workspace or status |
| stop_crawl | POST | /api/crawl/jobs/{id}/cancel | Stop a running crawl |
| delete_crawl | DEL | /api/crawl/jobs/{id} | Delete crawl and associated data |
| Tool | Method | Endpoint | Description |
|---|---|---|---|
| list_sitemaps | GET | /api/sitemaps | List all visual sitemaps |
| get_sitemap | GET | /api/sitemaps/{id} | Get sitemap with full node tree |
| create_sitemap | POST | /api/sitemaps | Create from completed crawl |
| Tool | Method | Description |
|---|---|---|
| create_sitemap_node | POST | Add a page or section node |
| update_sitemap_node | PUT | Update title, status, type, notes |
| delete_sitemap_node | DEL | Remove a node |
| move_sitemap_node | PUT | Move to new parent or position |
| duplicate_sitemap_node | POST | Copy with optional children |
| bulk_update_nodes | PUT | Update status/type on multiple nodes |
| bulk_delete_nodes | DEL | Delete multiple nodes at once |
| Tool | Method | Description |
|---|---|---|
| get_page_content | GET | Full page data — HTML, metadata, headers |
| search_pages | GET | Full-text search across crawled pages |
| get_screenshot | GET | Page screenshot (requires JS rendering) |
| Changes | | |
| detect_changes | POST | Compare sitemap to latest crawl |
| apply_changes | POST | Accept changes, add new pages as nodes |
| Export | | |
| export_sitemap | GET | Export as JSON, CSV, standalone HTML, or redirect map (CSV/JSON). Formats: json, csv, html, redirects, redirects_json. Use redirect_code param (301/302) for redirect formats. |
| export_crawl | GET | Export as JSON, CSV, or XML sitemap |
| Tool | Method | Description |
|---|---|---|
| get_taxonomy | GET | Get all categories and tags |
| create_category | POST | Create a category |
| update_category | PUT | Rename a category |
| delete_category | DEL | Delete, clear from nodes |
| assign_category | PUT | Assign category to nodes (bulk) |
| create_tag | POST | Create a colored tag |
| update_tag | PUT | Update label or color |
| delete_tag | DEL | Delete from all nodes |
| assign_tags | PUT | Assign tags to nodes (bulk) |
| bulk_auto_tag | CLIENT | Auto-tag by URL patterns with dry run |
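Since bulk_auto_tag runs CLIENT-side, the pattern matching happens on the agent. A rough sketch of glob-based auto-tagging with a dry run (the rule shape here is illustrative, not the server's schema):

```python
import fnmatch

def auto_tag(urls: list[str], rules: dict[str, str], dry_run: bool = True) -> dict[str, list[str]]:
    """Map tag → matching URLs using glob-style URL patterns."""
    plan = {tag: [u for u in urls if fnmatch.fnmatch(u, pattern)]
            for tag, pattern in rules.items()}
    # With dry_run=True this is a report only; otherwise you would follow up
    # with assign_tags(...) per tag.
    return plan

urls = ["https://ex.com/blog/a", "https://ex.com/shop/x", "https://ex.com/blog/b"]
plan = auto_tag(urls, {"blog": "*/blog/*", "shop": "*/shop/*"})
```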
| Tool | Method | Description |
|---|---|---|
| get_workspace_stats | GET | Aggregate stats — crawls, sitemaps, pages |
| get_crawl_analytics | GET | Pages by status, performance, structure |
| get_content_metrics | GET | Word counts, images, content metrics |
| get_trends | GET | Multi-crawl trend data |
| get_low_performing_pages | GET | Slowest, largest, thinnest pages |
| SEO Audit | | |
| run_seo_audit | GET | Run full SEO audit with scores |
| get_seo_issues | GET | Issues by severity and category |
| get_seo_score | GET | Score breakdown by category |
| get_audit_history | GET | Per-page audit results |
| fix_seo_issue | POST | Apply fix for title/meta/alt issues |
| AI Suggestions | | |
| generate_suggestions | CLIENT | Analyze issues, generate prioritized suggestions |
| prioritize_suggestions | CLIENT | Re-rank by impact, effort, or quick wins |
| apply_suggestion | POST | Apply auto-fixable suggestions |
| dismiss_suggestion | CLIENT | Dismiss with optional reason |
Events: crawl.started, crawl.completed, crawl.failed, changes.detected, and more.
| Tool | Method | Description |
|---|---|---|
| list_webhooks | GET | List all webhooks |
| create_webhook | POST | Create with event subscriptions |
| get_webhook | GET | Details and delivery stats |
| update_webhook | PUT | Update URL, events, or status |
| delete_webhook | DEL | Delete a webhook |
| test_webhook | POST | Send test payload |
| get_webhook_history | GET | Recent delivery attempts |
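On the receiving end, a webhook consumer typically dispatches on the event name. A minimal sketch; the payload shape (`event`, `data.crawl_id`) is an assumption, not a documented contract:

```python
def handle_event(payload: dict) -> str:
    """Route an IATO webhook payload to an action by event type."""
    handlers = {
        "crawl.completed":  lambda p: f"crawl {p['data']['crawl_id']} done",
        "crawl.failed":     lambda p: f"crawl {p['data']['crawl_id']} failed",
        "changes.detected": lambda p: "changes detected; run apply_changes",
    }
    event = payload.get("event", "")
    handler = handlers.get(event)
    return handler(payload) if handler else f"ignored event {event!r}"

msg = handle_event({"event": "crawl.completed", "data": {"crawl_id": "abc123"}})
```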
| Tool | Method | Description |
|---|---|---|
| discover_agents | CLIENT | Find other IATO instances |
| get_agent_capabilities | CLIENT | View tools and rate limits |
| delegate_task | POST | Send tool call to another agent |
| get_delegated_result | CLIENT | Retrieve async task results |
| share_resource | CLIENT | Share crawl/sitemap/report |
Create and manage CSS, XPath, or regex extraction rules to pull structured data from crawled pages. Rules can be tested against live URLs before saving, then attached to crawls or schedules via extraction_rule_ids.
| Tool | Method | Description |
|---|---|---|
| list_extraction_rules | GET | List all extraction rules |
| create_extraction_rule | POST | Create rule (css/xpath/regex, target: text/html/attribute/count) |
| get_extraction_rule | GET | Get rule details by ID |
| update_extraction_rule | PUT | Update rule selector, target, or options |
| delete_extraction_rule | DEL | Delete a rule |
| test_extraction_rule | POST | Test rule against a live URL without saving |
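For the regex rule type, the extraction semantics can be approximated client-side (css/xpath would need an HTML parser). The `target` and `match_all` names mirror the parameters above; the matching logic itself is an assumption:

```python
import re

def apply_regex_rule(html: str, pattern: str, match_all: bool = False):
    """Approximate a regex extraction rule with target=text."""
    if match_all:
        return re.findall(pattern, html)
    m = re.search(pattern, html)
    if not m:
        return None
    # Prefer the first capture group if the pattern defines one
    return m.group(1) if m.groups() else m.group(0)

html = "<h1>Pricing</h1><span class='price'>$9</span><span class='price'>$19</span>"
title  = apply_regex_rule(html, r"<h1>(.*?)</h1>")         # "Pricing"
prices = apply_regex_rule(html, r"\$\d+", match_all=True)  # ["$9", "$19"]
```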
```python
create_workspace(name="Client Site")
start_crawl(url="https://example.com", max_pages=500)
get_crawl_status(crawl_id="abc123")  # poll until complete
create_sitemap(name="Site Map", job_id="abc123")
export_sitemap(sitemap_id=1, format="json")
```
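The "poll until complete" step expands into a small helper. A sketch, where `call_tool` stands in for whatever MCP client you use and the status values are assumptions:

```python
import time

def wait_for_crawl(call_tool, crawl_id: str, interval: float = 5.0, timeout: float = 600.0) -> dict:
    """Poll get_crawl_status until the crawl leaves the running state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = call_tool("get_crawl_status", {"crawl_id": crawl_id})
        if status["data"]["status"] in ("completed", "failed", "cancelled"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"crawl {crawl_id} still running after {timeout}s")
```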
```python
run_seo_audit(crawl_id="abc123")
get_seo_issues(crawl_id="abc123", severity="error")
generate_suggestions(crawl_id="abc123", focus_areas=["seo"])
apply_suggestion(crawl_id="abc123", suggestion_id="sug_1")
get_seo_score(crawl_id="abc123")
```
```python
create_schedule(name="Weekly", url="https://example.com", frequency="weekly")
create_webhook(name="Slack", url="https://hooks.slack.com/...", events=["crawl.completed", "changes.detected"])
compare_crawls(crawl_id_old="abc123", crawl_id_new="def456")
```
```python
test_extraction_rule(url="https://example.com", rule_type="css", selector="h1")
create_extraction_rule(name="Page Title", rule_type="css", selector="h1", target="text")
create_extraction_rule(name="Price", rule_type="css", selector=".price", target="text", match_all=True)
start_crawl(url="https://example.com", extraction_rule_ids="1,2")
get_extracted_data(crawl_id="abc123")
```
```python
start_crawl(url="https://example.com", max_pages=1000)
create_sitemap(name="Migration Plan", job_id="abc123")
# Restructure the sitemap, mark pages as redirects with destination URLs
export_sitemap(sitemap_id=1, format="redirects", redirect_code=301)
# Or get structured JSON:
export_sitemap(sitemap_id=1, format="redirects_json")
```
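The redirects_json export can be converted to server config mechanically. A sketch that emits nginx rewrite rules; the field names `from`, `to`, and `code` are assumptions about the export's JSON shape:

```python
def to_nginx(redirects: list[dict]) -> str:
    """Render a redirect map as nginx rewrite rules."""
    lines = []
    for r in redirects:
        # 301 → permanent, anything else (e.g. 302) → temporary redirect
        flag = "permanent" if r.get("code", 301) == 301 else "redirect"
        lines.append(f"rewrite ^{r['from']}$ {r['to']} {flag};")
    return "\n".join(lines)

rules = to_nginx([
    {"from": "/old-page", "to": "/new-page", "code": 301},
    {"from": "/temp", "to": "/landing", "code": 302},
])
```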
Use IATO together with WordPress.com's MCP server to crawl, audit, and fix your WordPress site — all from Claude Desktop.
Add both connectors in Claude Desktop (Settings → Connectors → Add):
| Connector | URL | Purpose |
|---|---|---|
| IATO | https://iato.ai/mcp | Crawl, audit, analyze SEO & content |
| WordPress | https://public-api.wordpress.com/wpcom/v2/mcp/v1 | Edit posts, update metadata, manage content |
IATO MCP (Analyze)         Claude (Orchestrate)          WordPress MCP (Edit)
        │                           │                             │
Crawl site ──────────────> Identify issues ──────────────> Fix titles
SEO audit ───────────────> Prioritize fixes ─────────────> Update meta
Content gaps ────────────> Generate content ─────────────> Edit posts
Broken links ────────────> Map to pages ─────────────────> Fix links
IATO includes 3 tools that produce WordPress-ready output with post slugs:
| Tool | Description |
|---|---|
| get_wordpress_seo_fixes | SEO issues with current/suggested values mapped to WordPress post slugs |
| get_wordpress_content_gaps | Thin content pages with word counts, missing elements, and recommendations |
| get_wordpress_broken_links | Broken links mapped to source posts with anchor text and fix suggestions |
- "Crawl my-site.com and fix any missing meta descriptions in WordPress"
- "Run an SEO audit on my WordPress site and update the page titles that are too long"
- "Find broken links on my site and fix them in WordPress"
- "Identify thin content pages and expand them in the WordPress editor"
Claude chains tools from both servers automatically. Just describe what you want and it picks the right tools.
| Variable | Default | Description |
|---|---|---|
| IATO_BASE_URL | https://iato.ai | IATO API base URL |
| IATO_API_KEY | required | API key for authentication |
| MCP_REQUEST_TIMEOUT | 30.0 | HTTP timeout in seconds |
| MCP_MAX_RETRIES | 2 | Retry count on transient failures |
| MCP_RETRY_DELAY | 1.0 | Seconds between retries |
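Read at startup, these variables map naturally onto a small settings object. A sketch matching the defaults above:

```python
import os
from dataclasses import dataclass

@dataclass
class Settings:
    base_url: str
    api_key: str
    timeout: float
    max_retries: int
    retry_delay: float

def load_settings(env=os.environ) -> Settings:
    """Build settings from environment variables, using the documented defaults."""
    api_key = env.get("IATO_API_KEY")
    if not api_key:
        raise RuntimeError("IATO_API_KEY is required")
    return Settings(
        base_url=env.get("IATO_BASE_URL", "https://iato.ai"),
        api_key=api_key,
        timeout=float(env.get("MCP_REQUEST_TIMEOUT", "30.0")),
        max_retries=int(env.get("MCP_MAX_RETRIES", "2")),
        retry_delay=float(env.get("MCP_RETRY_DELAY", "1.0")),
    )

cfg = load_settings({"IATO_API_KEY": "iato_xxx"})
```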