Get all URLs on any website in seconds. 10x faster than crawling each page individually.
The API combines sitemap parsing with link crawling to find all URLs on a website.
Map returns only URLs and titles — no content extraction. Discover what's on a site, filter the URLs you need, then scrape only those pages. This workflow saves significant time and money.
1. Discover all URLs on the website
2. Filter by pattern, keyword, or path
3. Extract full content from the selected pages

For example, on a 500-page site where only 50 pages matter:

- Crawl all 500 pages: 500 × $0.001 = $0.50
- Map + crawl 50 pages: $0.002 + 50 × $0.001 = $0.052
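Using the prices quoted on this page ($0.002 per map call, $0.001 per crawled page), the arithmetic behind that comparison works out as follows:

```python
# Cost comparison for a 500-page site where only 50 pages matter.
# Prices taken from this page: $0.001 per crawled page, $0.002 per map call.
PAGES_TOTAL = 500
PAGES_NEEDED = 50
CRAWL_PRICE = 0.001  # per page
MAP_PRICE = 0.002    # flat, per map call

crawl_everything = PAGES_TOTAL * CRAWL_PRICE              # crawl blindly
map_then_crawl = MAP_PRICE + PAGES_NEEDED * CRAWL_PRICE   # discover, filter, crawl

print(f"Crawl all {PAGES_TOTAL} pages: ${crawl_everything:.3f}")
print(f"Map + crawl {PAGES_NEEDED}:   ${map_then_crawl:.3f}")
print(f"Roughly {crawl_everything / map_then_crawl:.1f}x cheaper")
```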
```python
from llmlayer import LLMLayerClient

client = LLMLayerClient(api_key="...")

# Map the entire website
response = client.map(url="https://example.com")

print(f"Found {len(response.links)} pages")
for link in response.links[:10]:
    print(f"  {link['title']}: {link['url']}")
```

```typescript
import { LLMLayerClient } from 'llmlayer';

const client = new LLMLayerClient({
  apiKey: process.env.LLMLAYER_API_KEY
});

// Map with a keyword filter
const response = await client.map({
  url: 'https://example.com',
  search: 'blog', // only URLs containing "blog"
  includeSubdomains: true
});

console.log(`Found ${response.links.length} blog pages`);
```

```python
# Step 1: Map the site ($0.002)
map_result = client.map(url="https://docs.example.com")
print(f"Discovered {len(map_result.links)} URLs")

# Step 2: Filter the URLs you need (free - your code)
api_docs = [
    link['url'] for link in map_result.links
    if '/api/' in link['url']
]
print(f"Found {len(api_docs)} API documentation pages")

# Step 3: Scrape only the pages you need ($0.001 each)
for url in api_docs[:20]:  # first 20 pages
    content = client.scrape(url=url, formats=["markdown"])
    print(f"Scraped: {content.title}")
```

```bash
curl -X POST https://api.llmlayer.dev/api/v2/map \
  -H "Authorization: Bearer $LLMLAYER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "includeSubdomains": true, "limit": 10000}'
```

Endpoint: `POST /api/v2/map`

| PARAMETER | TYPE | DEFAULT | DESCRIPTION |
|---|---|---|---|
| `url`* | string | — | Valid HTTP(S) URL to map |
| `search` | string | — | Return only URLs containing this keyword |
| `ignoreSitemap` | boolean | `false` | Skip sitemap.xml and force link crawling |
| `includeSubdomains` | boolean | `false` | Include URLs from all subdomains |
| `limit` | integer | `5000` | Max URLs to discover (1–100,000) |
| `timeout` | integer | `15000` | Operation timeout in milliseconds |

\* Required parameter
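To make the parameter table concrete, here is a sketch of a request body with every optional field set. The field names come from the table above; this only assembles the JSON payload with the standard library and does not call the API:

```python
import json

# Request body for POST /api/v2/map with every optional parameter set.
# Field names taken from the parameter table.
body = {
    "url": "https://example.com",   # required
    "search": "blog",               # keep only URLs containing "blog"
    "ignoreSitemap": False,         # still try sitemap.xml first
    "includeSubdomains": True,      # look beyond the root domain
    "limit": 10000,                 # stop after 10,000 URLs
    "timeout": 30000,               # 30 s instead of the 15 s default
}
payload = json.dumps(body)
print(payload)
```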
```json
{
  "links": [
    {
      "url": "https://example.com/",
      "title": "Home"
    },
    {
      "url": "https://example.com/about",
      "title": "About Us"
    },
    {
      "url": "https://example.com/blog/post-1",
      "title": "First Blog Post"
    }
  ],
  "statusCode": 200,
  "cost": 0.002
}
```

| FIELD | DESCRIPTION |
|---|---|
| `links` | Array of discovered URLs with their page titles |
| `links[].url` | Full URL of the discovered page |
| `links[].title` | Page title extracted from the HTML |
| `statusCode` | HTTP status (200 = success) |
| `cost` | Always $0.002, regardless of the number of URLs found |
- Get a complete URL inventory for technical SEO analysis and broken-link detection.
- Catalog all pages before a migration, redesign, or content strategy planning.
- Discover URLs first, filter what you need, then crawl only the relevant pages.
- Map competitor sites to understand their content structure and strategy.
- Create or validate sitemaps for websites without one.
- Take the first step in building datasets: map sites to find content, then extract it.
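As a sketch of the sitemap use case above, the snippet below turns a Map response (the `links` array of `{url, title}` objects shown earlier) into a minimal sitemap.xml using only the standard library:

```python
import xml.etree.ElementTree as ET

def build_sitemap(links):
    """Build a minimal sitemap.xml from Map API links ({'url', 'title'} dicts)."""
    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for link in links:
        url_el = ET.SubElement(urlset, "url")
        ET.SubElement(url_el, "loc").text = link["url"]
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            + ET.tostring(urlset, encoding="unicode"))

# Example with the sample response from above
links = [
    {"url": "https://example.com/", "title": "Home"},
    {"url": "https://example.com/about", "title": "About Us"},
]
print(build_sitemap(links))
```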
| ASPECT | MAP API | CRAWL API |
|---|---|---|
| Returns | URLs + titles only | Full page content |
| Speed | 1-5 seconds | Slower (content extraction) |
| Cost | $0.002 flat | $0.001 per page |
| Best for | Discovery & planning | Content extraction |
Discover every URL on any website. 10x faster than crawling. Free credits to start.