FireCrawlLoader#

class langchain_community.document_loaders.firecrawl.FireCrawlLoader(url: str, *, api_key: str | None = None, api_url: str | None = None, mode: Literal['crawl', 'scrape'] = 'crawl', params: dict | None = None)[source]#

Load web pages as Documents using FireCrawl.

Must have the Python package firecrawl installed and a FireCrawl API key. See https://www.firecrawl.dev/ for more.

Initialize with API key and url.

Parameters:
  • url (str) – The URL to crawl or scrape.

  • api_key (str | None) – The Firecrawl API key. If not specified, it is read from the env var FIRECRAWL_API_KEY. Get an API key at https://www.firecrawl.dev/.

  • api_url (str | None) – The Firecrawl API URL. If not specified, it is read from the env var FIRECRAWL_API_URL, or defaults to https://api.firecrawl.dev.

  • mode (Literal['crawl', 'scrape']) – The mode to run the loader in. Default is “crawl”. Options are “scrape” (a single URL) and “crawl” (the URL and all accessible subpages).

  • params (dict | None) – The parameters to pass to the Firecrawl API. Examples include crawlerOptions. For more details, see the mendableai/firecrawl-py repository.
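
A minimal usage sketch, assuming the firecrawl package is installed and FIRECRAWL_API_KEY is set in the environment; the URL shown is illustrative:

    from langchain_community.document_loaders.firecrawl import FireCrawlLoader

    # Scrape a single page; the API key is read from the FIRECRAWL_API_KEY
    # environment variable (or pass api_key=... explicitly).
    loader = FireCrawlLoader(
        url="https://www.firecrawl.dev/",  # illustrative URL
        mode="scrape",
    )

    docs = loader.load()
    print(docs[0].page_content[:200])
    print(docs[0].metadata)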

Methods

__init__(url, *[, api_key, api_url, mode, ...])

Initialize with API key and url.

alazy_load()

A lazy loader for Documents.

aload()

Load data into Document objects.

lazy_load()

A lazy loader for Documents.

load()

Load data into Document objects.

load_and_split([text_splitter])

Load Documents and split into chunks.

__init__(url: str, *, api_key: str | None = None, api_url: str | None = None, mode: Literal['crawl', 'scrape'] = 'crawl', params: dict | None = None)[source]#

Initialize with API key and url.

Parameters:
  • url (str) – The URL to crawl or scrape.

  • api_key (str | None) – The Firecrawl API key. If not specified, it is read from the env var FIRECRAWL_API_KEY. Get an API key at https://www.firecrawl.dev/.

  • api_url (str | None) – The Firecrawl API URL. If not specified, it is read from the env var FIRECRAWL_API_URL, or defaults to https://api.firecrawl.dev.

  • mode (Literal['crawl', 'scrape']) – The mode to run the loader in. Default is “crawl”. Options are “scrape” (a single URL) and “crawl” (the URL and all accessible subpages).

  • params (dict | None) – The parameters to pass to the Firecrawl API. Examples include crawlerOptions. For more details, see the mendableai/firecrawl-py repository.

async alazy_load() → AsyncIterator[Document]#

A lazy loader for Documents.

Return type:

AsyncIterator[Document]

async aload() → List[Document]#

Load data into Document objects.

Return type:

List[Document]
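
A sketch of asynchronous loading, assuming the same environment setup as above and a script-level event loop via asyncio.run:

    import asyncio

    from langchain_community.document_loaders.firecrawl import FireCrawlLoader

    async def main() -> None:
        loader = FireCrawlLoader(url="https://www.firecrawl.dev/", mode="scrape")
        docs = await loader.aload()  # same result as load(), but awaitable
        print(len(docs))

    asyncio.run(main())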

lazy_load() → Iterator[Document][source]#

A lazy loader for Documents.

Return type:

Iterator[Document]
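
A sketch of lazy iteration in crawl mode; the params dict is illustrative (crawlerOptions is the example key named above, but the "limit" option inside it is an assumption, not a confirmed Firecrawl schema):

    loader = FireCrawlLoader(
        url="https://www.firecrawl.dev/",
        mode="crawl",
        params={"crawlerOptions": {"limit": 5}},  # illustrative; see mendableai/firecrawl-py for options
    )

    # Pages are yielded one at a time rather than collected into a list first.
    for doc in loader.lazy_load():
        print(len(doc.page_content), doc.metadata)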

load() → List[Document]#

Load data into Document objects.

Return type:

List[Document]

load_and_split(text_splitter: TextSplitter | None = None) → List[Document]#

Load Documents and split into chunks. Chunks are returned as Documents.

Do not override this method; it should be considered deprecated.

Parameters:

text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.

Returns:

List of Documents.

Return type:

List[Document]
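
A sketch of splitting loaded pages into chunks, assuming the langchain-text-splitters package provides RecursiveCharacterTextSplitter; the chunk sizes are illustrative:

    from langchain_text_splitters import RecursiveCharacterTextSplitter

    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    loader = FireCrawlLoader(url="https://www.firecrawl.dev/", mode="scrape")
    chunks = loader.load_and_split(text_splitter=splitter)
    print(len(chunks))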

Examples using FireCrawlLoader