Give your ElevenAgents voice and chat agents the ability to scrape, search, and crawl the web in real time using Firecrawl. This guide covers two integration paths:
  1. MCP server — connect the hosted Firecrawl MCP server for zero-code setup.
  2. Server webhook tool — point a custom tool at Firecrawl’s REST API for full control over requests.

Prerequisites

  • A Firecrawl API key.
  • An ElevenLabs account with access to the agents dashboard.

Option 1: Firecrawl MCP Server

The fastest way to give an agent web access. ElevenAgents supports remote MCP servers over SSE, and Firecrawl provides a hosted MCP endpoint.

Add the MCP server

  1. Open the MCP server integrations dashboard in ElevenLabs and click Add Custom MCP Server.
  2. Fill in the following fields:
| Field | Value |
| --- | --- |
| Name | Firecrawl |
| Description | Search, scrape, crawl, and extract content from any website. |
| Server URL | `https://mcp.firecrawl.dev/YOUR_FIRECRAWL_API_KEY/v2/mcp` |
Replace YOUR_FIRECRAWL_API_KEY with your actual key. Treat this URL as a secret — it contains your API key.
  3. Click Add Integration. ElevenLabs will connect to the server and list the available tools (scrape, search, crawl, map, and more).
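Because the MCP server URL embeds your API key, avoid hardcoding it. A minimal sketch (the helper name is hypothetical) that assembles the URL from an environment variable:

```python
import os

def firecrawl_mcp_url() -> str:
    """Build the Firecrawl MCP server URL from an environment variable
    so the embedded API key never ends up committed to source control."""
    key = os.environ["FIRECRAWL_API_KEY"]
    return f"https://mcp.firecrawl.dev/{key}/v2/mcp"
```

Paste the resulting URL into the Server URL field; rotate the key in the Firecrawl dashboard if it ever leaks.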

Attach it to an agent

  1. Create or open an agent in the ElevenAgents dashboard.
  2. In the Agent tab, scroll to Tools and click Add Tool.
  3. Select the Firecrawl MCP server you just created.
  4. Choose an approval mode. For read-only scraping, No Approval is fine. For tools that crawl large sites, consider Fine-Grained Tool Approval so the agent asks before kicking off expensive operations.

Update the system prompt

Add instructions so the agent knows when to use Firecrawl. For example:
```
You are a helpful research assistant. When the user asks about a website,
a company, or any topic that requires up-to-date information, use the
Firecrawl tools to search the web or scrape the relevant page, then
summarize the results.
```

Test it

Press Test AI agent and try a prompt like:
“What does firecrawl.dev do? Go to the site and summarize it for me.”
The agent will call the Firecrawl MCP scrape tool, receive the page markdown, and respond with a spoken summary.

Option 2: Server Webhook Tool

Use this approach when you need precise control over request parameters (formats, headers, timeouts, etc.) or want to call a specific Firecrawl endpoint without exposing the full MCP tool set.

Scrape tool

Create a tool that scrapes a single URL and returns its content as markdown.
  1. In your agent settings, click Add Tool and select Webhook.
  2. Configure the tool:
| Field | Value |
| --- | --- |
| Name | `scrape_website` |
| Description | Scrape content from a URL and return it as clean markdown. |
| Method | POST |
| URL | `https://api.firecrawl.dev/v1/scrape` |
  3. Add a header for authentication:

| Header | Type | Value |
| --- | --- | --- |
| Authorization | Secret | `Bearer YOUR_FIRECRAWL_API_KEY` |

Store the API key as a workspace secret for security.
  4. Add a body parameter:

| Parameter | Type | Description | Required |
| --- | --- | --- | --- |
| `url` | string | The URL to scrape | Yes |
The Firecrawl API returns the page content as markdown by default. The agent receives the JSON response and can use the markdown field to answer questions.
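Put together, the webhook tool issues a request like the one sketched below. The `data.markdown` response path is an assumption based on Firecrawl's v1 API shape; verify it against your response before relying on it.

```python
import json
import urllib.request

FIRECRAWL_SCRAPE_URL = "https://api.firecrawl.dev/v1/scrape"

def build_scrape_request(page_url: str, api_key: str) -> urllib.request.Request:
    """Assemble the POST request the webhook tool sends on the agent's
    behalf, matching the Method, URL, header, and body settings above."""
    return urllib.request.Request(
        FIRECRAWL_SCRAPE_URL,
        data=json.dumps({"url": page_url}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def extract_markdown(response_body: dict) -> str:
    """Pull the markdown out of Firecrawl's JSON response (assumed
    to live at data.markdown); this is what the agent reads."""
    return response_body["data"]["markdown"]
```

Send the built request with `urllib.request.urlopen` (or any HTTP client) and pass the parsed JSON to `extract_markdown`.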

Search tool

Create a tool that searches the web and returns results with scraped content.
  1. Add another Webhook tool with:

| Field | Value |
| --- | --- |
| Name | `search_web` |
| Description | Search the web for a query and return relevant results with page content. |
| Method | POST |
| URL | `https://api.firecrawl.dev/v1/search` |
  2. Add the same Authorization header as above.
  3. Add body parameters:

| Parameter | Type | Description | Required |
| --- | --- | --- | --- |
| `query` | string | The search query | Yes |
| `limit` | number | Maximum number of results to return (default 5) | No |

Update the system prompt

```
You are a knowledgeable assistant with access to web tools.

- Use `scrape_website` when the user gives you a specific URL to read.
- Use `search_web` when the user asks a general question that requires
  finding information online.

Always summarize the information concisely and cite the source URL.
```

Test it

Try asking:
“Search for the latest Next.js features and give me a summary.”
The agent will call search_web, receive results from Firecrawl, and respond with a spoken summary of the findings.

Tips

  • Model selection — ElevenLabs recommends high-intelligence models (GPT-4o mini, Claude 3.5 Sonnet, or later) for reliable tool calling. Smaller models may struggle to extract the correct parameters.
  • Keep prompts specific — Tell the agent exactly when to use each tool. Vague instructions lead to missed or incorrect tool calls.
  • Limit response size — For voice agents, long scraped pages can overwhelm the LLM context. Use `onlyMainContent: true` in scrape options (or instruct the agent to summarize aggressively) to keep responses concise.
  • Tool call sounds — ElevenLabs lets you add ambient audio while a tool runs. This is useful for scrape calls that take a moment — it signals to the user that the agent is working.
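The response-size tip translates to one extra field in the scrape body. A minimal sketch (the helper name is illustrative):

```python
import json

def scrape_body(page_url: str) -> str:
    # onlyMainContent asks Firecrawl to strip navigation, footers, and
    # other boilerplate so less text lands in the voice agent's context.
    return json.dumps({"url": page_url, "onlyMainContent": True})
```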

Resources