AI21SemanticTextSplitter

class langchain_ai21.semantic_text_splitter.AI21SemanticTextSplitter(chunk_size: int = 0, chunk_overlap: int = 0, client: Any | None = None, api_key: SecretStr | None = None, api_host: str | None = None, timeout_sec: float | None = None, num_retries: int | None = None, **kwargs: Any)

Splits text into coherent and readable units based on distinct topics and lines, using the AI21 text segmentation API.

Create a new TextSplitter.

Methods

__init__([chunk_size, chunk_overlap, ...])

Create a new TextSplitter.

atransform_documents(documents, **kwargs)

Asynchronously transform a list of documents.

create_documents(texts[, metadatas])

Create documents from a list of texts.

from_huggingface_tokenizer(tokenizer, **kwargs)

Text splitter that uses HuggingFace tokenizer to count length.

from_tiktoken_encoder([encoding_name, ...])

Text splitter that uses tiktoken encoder to count length.

split_documents(documents)

Split documents.

split_text(source)

Split text into multiple components.

split_text_to_documents(source)

Split text into multiple documents.

transform_documents(documents, **kwargs)

Transform sequence of documents by splitting them.

Parameters:
  • chunk_size (int) – Maximum size of the returned chunks. When greater than 0, the semantic segments are merged into chunks of at most this size; the default of 0 returns the segments as produced by the segmentation API.

  • chunk_overlap (int) – Overlap between consecutive chunks when segments are merged.

  • client (Any | None) – An optional pre-configured AI21 client to use instead of creating a new one.

  • api_key (SecretStr | None) – AI21 API key. If not given, it is read from the AI21_API_KEY environment variable.

  • api_host (str | None) – Optional custom host for the AI21 API.

  • timeout_sec (float | None) – Timeout for API requests, in seconds.

  • num_retries (int | None) – Maximum number of retries for failed API requests.

  • kwargs (Any) – Additional keyword arguments passed to the underlying TextSplitter.
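
Example (a minimal usage sketch; assumes the langchain-ai21 package is installed and an AI21 API key is exported as AI21_API_KEY, otherwise pass api_key explicitly):

    from langchain_ai21 import AI21SemanticTextSplitter

    # The API key is read from the AI21_API_KEY environment variable
    # when api_key is not passed explicitly.
    splitter = AI21SemanticTextSplitter()

    text = (
        "Registration for the annual conference is now open. "
        "In unrelated news, the company reported strong quarterly results."
    )
    # Each returned string is a semantically coherent segment of the input.
    for segment in splitter.split_text(text):
        print(segment)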

__init__(chunk_size: int = 0, chunk_overlap: int = 0, client: Any | None = None, api_key: SecretStr | None = None, api_host: str | None = None, timeout_sec: float | None = None, num_retries: int | None = None, **kwargs: Any) → None

Create a new TextSplitter.

Parameters:
  • chunk_size (int) – Maximum size of the returned chunks. When greater than 0, the semantic segments are merged into chunks of at most this size; the default of 0 returns the segments as produced by the segmentation API.

  • chunk_overlap (int) – Overlap between consecutive chunks when segments are merged.

  • client (Any | None) – An optional pre-configured AI21 client to use instead of creating a new one.

  • api_key (SecretStr | None) – AI21 API key. If not given, it is read from the AI21_API_KEY environment variable.

  • api_host (str | None) – Optional custom host for the AI21 API.

  • timeout_sec (float | None) – Timeout for API requests, in seconds.

  • num_retries (int | None) – Maximum number of retries for failed API requests.

  • kwargs (Any) – Additional keyword arguments passed to the underlying TextSplitter.

Return type:

None

async atransform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]

Asynchronously transform a list of documents.

Parameters:
  • documents (Sequence[Document]) – A sequence of Documents to be transformed.

  • kwargs (Any) – Additional keyword arguments passed through to transform_documents.

Returns:

A sequence of transformed Documents.

Return type:

Sequence[Document]
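
Example (a minimal async sketch; the base-class default runs transform_documents in an executor, so the same setup as the synchronous call applies):

    import asyncio

    from langchain_core.documents import Document

    from langchain_ai21 import AI21SemanticTextSplitter

    async def main() -> None:
        splitter = AI21SemanticTextSplitter()
        docs = [Document(page_content="A long report covering several topics...")]
        # One Document per chunk is returned.
        split_docs = await splitter.atransform_documents(docs)
        print(len(split_docs))

    asyncio.run(main())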

create_documents(texts: List[str], metadatas: List[dict] | None = None) → List[Document]

Create documents from a list of texts.

Parameters:
  • texts (List[str]) – The texts to split.

  • metadatas (List[dict] | None) – Optional metadata dicts, one per text, copied onto the documents created from that text.

Return type:

List[Document]
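
Example (a minimal sketch; the metadata values are illustrative only):

    from langchain_ai21 import AI21SemanticTextSplitter

    splitter = AI21SemanticTextSplitter()
    docs = splitter.create_documents(
        texts=["First long text...", "Second long text..."],
        metadatas=[{"source": "a.txt"}, {"source": "b.txt"}],  # hypothetical sources
    )
    # Each resulting chunk carries the metadata of the text it came from.
    for doc in docs:
        print(doc.metadata, doc.page_content[:40])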

classmethod from_huggingface_tokenizer(tokenizer: Any, **kwargs: Any) → TextSplitter

Text splitter that uses HuggingFace tokenizer to count length.

Parameters:
  • tokenizer (Any) – A Hugging Face tokenizer whose encoding is used to count length.

  • kwargs (Any) – Additional keyword arguments passed to the splitter's constructor.

Return type:

TextSplitter
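
Example (a minimal sketch; this classmethod is inherited from TextSplitter and assumes the transformers package is installed):

    from transformers import AutoTokenizer

    from langchain_ai21 import AI21SemanticTextSplitter

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    # Chunk lengths are now counted in GPT-2 tokens rather than characters;
    # chunk_size here is an illustrative value, not a recommended setting.
    splitter = AI21SemanticTextSplitter.from_huggingface_tokenizer(
        tokenizer, chunk_size=100
    )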

classmethod from_tiktoken_encoder(encoding_name: str = 'gpt2', model_name: str | None = None, allowed_special: Literal['all'] | AbstractSet[str] = {}, disallowed_special: Literal['all'] | Collection[str] = 'all', **kwargs: Any) → TS

Text splitter that uses tiktoken encoder to count length.

Parameters:
  • encoding_name (str) – Name of the tiktoken encoding to use.

  • model_name (str | None) – Optional model name from which to derive the encoding; takes precedence over encoding_name.

  • allowed_special (Literal['all'] | AbstractSet[str]) – Special tokens permitted in the text when encoding.

  • disallowed_special (Literal['all'] | Collection[str]) – Special tokens that raise an error when encountered while encoding.

  • kwargs (Any) – Additional keyword arguments passed to the splitter's constructor.

Return type:

TS
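
Example (a minimal sketch; assumes the tiktoken package is installed):

    from langchain_ai21 import AI21SemanticTextSplitter

    # Chunk lengths are counted with the gpt2 tiktoken encoding;
    # chunk_size is an illustrative value.
    splitter = AI21SemanticTextSplitter.from_tiktoken_encoder(
        encoding_name="gpt2", chunk_size=512
    )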

split_documents(documents: Iterable[Document]) → List[Document]

Split documents.

Parameters:

documents (Iterable[Document]) – The documents to split.

Return type:

List[Document]
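
Example (a minimal sketch, assuming an AI21 API key is available in the environment):

    from langchain_core.documents import Document

    from langchain_ai21 import AI21SemanticTextSplitter

    splitter = AI21SemanticTextSplitter()
    docs = [
        Document(
            page_content="A long article mixing several distinct topics...",
            metadata={"source": "example"},  # illustrative metadata
        )
    ]
    # Returns one Document per chunk; each chunk keeps its source's metadata.
    split_docs = splitter.split_documents(docs)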

split_text(source: str) → List[str]

Split text into multiple components.

Parameters:

source (str) – Specifies the text input for text segmentation

Return type:

List[str]
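
Example (a minimal sketch, assuming an AI21 API key is available in the environment):

    from langchain_ai21 import AI21SemanticTextSplitter

    splitter = AI21SemanticTextSplitter()
    segments = splitter.split_text(
        "First, the committee reviewed the annual budget in detail. "
        "On a separate note, the engineering team presented its roadmap."
    )
    # Each element is one semantically coherent segment of the input.
    print(segments)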

split_text_to_documents(source: str) → List[Document]

Split text into multiple documents.

Parameters:

source (str) – Specifies the text input for text segmentation

Return type:

List[Document]
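
Example (a minimal sketch; like split_text, but each segment is returned wrapped in a Document):

    from langchain_ai21 import AI21SemanticTextSplitter

    splitter = AI21SemanticTextSplitter()
    docs = splitter.split_text_to_documents(
        "First, the committee reviewed the annual budget in detail. "
        "On a separate note, the engineering team presented its roadmap."
    )
    for doc in docs:
        print(doc.page_content)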

transform_documents(documents: Sequence[Document], **kwargs: Any) → Sequence[Document]

Transform sequence of documents by splitting them.

Parameters:
  • documents (Sequence[Document]) – A sequence of Documents to be split.

  • kwargs (Any) – Additional keyword arguments.

Return type:

Sequence[Document]
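
Example (a minimal sketch; the synchronous counterpart of atransform_documents):

    from langchain_core.documents import Document

    from langchain_ai21 import AI21SemanticTextSplitter

    splitter = AI21SemanticTextSplitter()
    # One Document per chunk is returned.
    split_docs = splitter.transform_documents(
        [Document(page_content="A long memo covering several distinct topics...")]
    )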