Map API

Discover Every URL

Get all URLs on any website in seconds. 10x faster than crawling each page individually.

$0.002 flat fee
Response
// Discovered 847 URLs in 2.3s
{
  "links": [
    { "url": "/", "title": "Home" },
    { "url": "/docs", "title": "Documentation" },
    { "url": "/blog", "title": "Blog" },
    { "url": "/pricing", "title": "Pricing" },
    // ... 843 more URLs
  ],
  "cost": 0.002
}
100K
Max URLs
$0.002
Flat Fee
1-5s
Speed
How It Works

Smart URL Discovery

The API uses an intelligent multi-step approach to find all URLs on a website.

1. Check Sitemap
Fastest method
2. Crawl Links
If no sitemap
3. Return URLs
With page titles
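The sitemap-first approach above can be sketched in a few lines of standard-library Python. This is an illustrative client-side sketch, not the service's actual implementation; the helper names are hypothetical:

```python
import re
from xml.etree import ElementTree


def urls_from_sitemap(sitemap_xml: str) -> list[str]:
    """Extract <loc> entries from a sitemap.xml document."""
    root = ElementTree.fromstring(sitemap_xml)
    # Sitemap elements live in the sitemaps.org namespace.
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    return [loc.text.strip() for loc in root.iterfind(".//sm:loc", ns) if loc.text]


def urls_from_html(html: str) -> list[str]:
    """Fallback: pull href targets out of anchor tags."""
    return re.findall(r'<a[^>]+href=["\']([^"\']+)["\']', html)


def discover(sitemap_xml, html: str) -> list[str]:
    """Prefer the sitemap (fastest); crawl links only if it is missing."""
    if sitemap_xml:
        return urls_from_sitemap(sitemap_xml)
    return urls_from_html(html)
```

The real API additionally resolves relative links and fetches page titles, which this sketch omits.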
Why Map First?

Map returns only URLs and titles — no content extraction. Discover what's on a site, filter the URLs you need, then scrape only those pages. This workflow saves significant time and money.

10x Faster
Titles only, no content extraction
Sitemap First
Checks sitemap.xml automatically
Subdomain Support
Include blog.*, docs.*, api.*
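What "include subdomains" means in practice can be shown with a small host check using urllib.parse — again a client-side sketch under assumed semantics, not the API's own logic:

```python
from urllib.parse import urlparse


def on_site(url: str, root_domain: str, include_subdomains: bool = False) -> bool:
    """True if url is on root_domain (optionally on any of its subdomains)."""
    host = urlparse(url).hostname or ""
    if host == root_domain:
        return True
    # blog.example.com, docs.example.com, ... all end with ".example.com"
    return include_subdomains and host.endswith("." + root_domain)
```

Note the leading dot in the suffix check: without it, an unrelated host like notexample.com would match example.com.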
Workflow

Map → Filter → Crawl

1
MAP
$0.002 flat

Discover all URLs on the website

→ Found 500 URLs
2
FILTER
$0 (your code)

Filter by pattern, keyword, or path

→ Keep 50 blog posts
3
CRAWL
$0.001 × 50 = $0.05

Extract full content from selected pages

→ Get 50 articles
Without Map

Crawl all 500 pages = $0.50

With Map First

Map + Crawl 50 = $0.052

90% savings by mapping first
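The savings figure follows directly from the per-page pricing, using the numbers from the workflow above:

```python
# Crawling everything: 500 pages at $0.001 each
crawl_all = 500 * 0.001            # $0.50

# Map first ($0.002 flat), then crawl only the 50 pages you kept
map_first = 0.002 + 50 * 0.001     # $0.052

savings = 1 - map_first / crawl_all
print(f"${map_first:.3f} vs ${crawl_all:.2f} -> {savings:.0%} saved")
```

The more aggressively you filter, the closer the savings get to the full crawl cost: mapping costs the same whether you keep 5 URLs or 5,000.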
Quick Start

Start Mapping in Minutes

map.py
PYTHON
from llmlayer import LLMLayerClient

client = LLMLayerClient(api_key="...")

# Map entire website
response = client.map(url="https://example.com")

print(f"Found {len(response.links)} pages")
for link in response.links[:10]:
    print(f"  {link['title']}: {link['url']}")
map.ts
TYPESCRIPT
import { LLMLayerClient } from 'llmlayer';

const client = new LLMLayerClient({
  apiKey: process.env.LLMLAYER_API_KEY
});

// Map with keyword filter
const response = await client.map({
  url: 'https://example.com',
  search: 'blog',  // Only URLs with "blog"
  includeSubdomains: true
});

console.log(`Found ${response.links.length} blog pages`);
Map → Filter → Crawl Workflow
PYTHON
# Step 1: Map the site ($0.002)
map_result = client.map(url="https://docs.example.com")
print(f"Discovered {len(map_result.links)} URLs")

# Step 2: Filter URLs you need (free - your code)
api_docs = [
    link['url'] for link in map_result.links
    if '/api/' in link['url']
]
print(f"Found {len(api_docs)} API documentation pages")

# Step 3: Crawl only what you need ($0.001 each)
for url in api_docs[:20]:  # Crawl first 20
    content = client.scrape(url=url, formats=["markdown"])
    print(f"Scraped: {content.title}")
Terminal
cURL
curl -X POST https://api.llmlayer.dev/api/v2/map \
  -H "Authorization: Bearer $LLMLAYER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "includeSubdomains": true, "limit": 10000}'
Reference

API Parameters

POST /api/v2/map

PARAMETER           TYPE      DEFAULT   DESCRIPTION
url*                string    —         Valid HTTP(S) URL to map
search              string    —         Filter URLs containing this keyword
ignoreSitemap       boolean   false     Skip sitemap.xml, force link crawling
includeSubdomains   boolean   false     Include URLs from all subdomains
limit               integer   5000      Max URLs to discover (1-100,000)
timeout             integer   15000     Operation timeout in milliseconds

* Required parameter

Response

Response Format

response.json
{
  "links": [
    {
      "url": "https://example.com/",
      "title": "Home"
    },
    {
      "url": "https://example.com/about",
      "title": "About Us"
    },
    {
      "url": "https://example.com/blog/post-1",
      "title": "First Blog Post"
    }
  ],
  "statusCode": 200,
  "cost": 0.002
}
Response Fields
links

Array of discovered URLs with their page titles

links[].url

Full URL of the discovered page

links[].title

Page title extracted from HTML

statusCode

HTTP status (200 = success)

cost

Always $0.002 regardless of URLs found
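A response in this shape is easy to post-process. As one example (the grouping helper is ours, not part of the SDK), discovered links can be bucketed by their top-level path segment before deciding what to crawl:

```python
from collections import defaultdict
from urllib.parse import urlparse


def group_by_section(response: dict) -> dict:
    """Bucket links by their first path segment ('/' for the site root)."""
    sections = defaultdict(list)
    for link in response["links"]:
        path = urlparse(link["url"]).path
        segment = path.strip("/").split("/")[0] or "/"
        sections[segment].append(link["url"])
    return dict(sections)
```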

Use Cases

Built For

SEO Site Audits

Get complete URL inventory for technical SEO analysis and broken link detection.

Content Inventory

Catalog all pages before migration, redesign, or content strategy planning.

Pre-Crawl Planning

Discover URLs first, filter what you need, then crawl only relevant pages.

Competitor Analysis

Map competitor sites to understand their content structure and strategy.

Sitemap Generation

Create or validate sitemaps for websites without one.

AI Data Pipeline

First step in building datasets — map sites to find content, then extract.

Comparison

Map API vs Crawl API

ASPECT     MAP API                CRAWL API
Returns    URLs + titles only     Full page content
Speed      1-5 seconds            Slower (content extraction)
Cost       $0.002 flat            $0.001 per page
Best for   Discovery & planning   Content extraction
Use Map API when:
  • You need to discover what pages exist
  • Planning which pages to crawl
  • Building a content inventory
  • SEO auditing site structure
Use Crawl API when:
  • You need actual page content
  • Extracting text for AI processing
  • Building search indexes
  • Creating training datasets
$0.002 for up to 100,000 URLs

Map Any Website In Seconds

Discover every URL on any website. 10x faster than crawling. Free credits to start.