ChatLlamaCpp#

class langchain_community.chat_models.llamacpp.ChatLlamaCpp[source]#

Bases: BaseChatModel

llama.cpp chat model.

To use, you should have the llama-cpp-python library installed, and provide the path to the Llama model as a named parameter to the constructor. Check out: abetlen/llama-cpp-python
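For example, a minimal sketch of constructing and invoking the model (the model path and generation settings below are illustrative assumptions, not required values):

import multiprocessing

from langchain_community.chat_models import ChatLlamaCpp

llm = ChatLlamaCpp(
    model_path="./model.gguf",  # hypothetical path to a local GGUF file
    temperature=0.5,
    n_ctx=2048,
    n_threads=multiprocessing.cpu_count() - 1,
)

print(llm.invoke("Translate 'I love programming' into French.").content)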

Note

ChatLlamaCpp implements the standard Runnable Interface. 🏃

The Runnable Interface has additional methods that are available on runnables, such as with_types, with_retry, assign, bind, get_graph, and more.

param cache: BaseCache | bool | None = None#

Whether to cache the response.

  • If True, will use the global cache.

  • If False, will not use a cache.

  • If None, will use the global cache if it’s set, otherwise no cache.

  • If an instance of BaseCache, will use the provided cache.

Caching is not currently supported for streaming methods of models.
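As a sketch, an explicit cache instance can be supplied instead of relying on the global cache (this assumes InMemoryCache from langchain_core.caches; the model path is hypothetical):

from langchain_core.caches import InMemoryCache
from langchain_community.chat_models import ChatLlamaCpp

llm = ChatLlamaCpp(
    model_path="./model.gguf",  # hypothetical model path
    cache=InMemoryCache(),      # repeated identical prompts reuse the cached response
)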

param callback_manager: BaseCallbackManager | None = None#

Deprecated since version 0.1.7: Use callbacks instead.

Callback manager to add to the run trace.

param callbacks: Callbacks = None#

Callbacks to add to the run trace.

param custom_get_token_ids: Callable[[str], List[int]] | None = None#

Optional encoder to use for counting tokens.

param disable_streaming: bool | Literal['tool_calling'] = False#

Whether to disable streaming for this model.

If streaming is bypassed, then stream()/astream() will defer to invoke()/ainvoke().

  • If True, will always bypass streaming case.

  • If “tool_calling”, will bypass streaming case only when the model is called with a tools keyword argument.

  • If False (default), will always use streaming case if available.

param echo: bool = False#

Whether to echo the prompt.

param f16_kv: bool = True#

Use half-precision for key/value cache.

param grammar: Any = None#

Formal grammar for constraining model outputs. For instance, the grammar can be used to force the model to generate valid JSON or to speak exclusively in emojis. At most one of grammar and grammar_path should be passed in.

param grammar_path: str | Path | None = None#

Path to a .gbnf file that defines a formal grammar for constraining model outputs. For instance, the grammar can be used to force the model to generate valid JSON or to speak exclusively in emojis. At most one of grammar and grammar_path should be passed in.
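For example, a hedged sketch of constraining output with a grammar file (./json.gbnf is a hypothetical path, e.g. the JSON grammar shipped with llama.cpp):

from langchain_community.chat_models import ChatLlamaCpp

llm = ChatLlamaCpp(
    model_path="./model.gguf",   # hypothetical model path
    grammar_path="./json.gbnf",  # hypothetical GBNF grammar forcing valid JSON
)

print(llm.invoke("List three colors as a JSON array.").content)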

param last_n_tokens_size: int = 64#

The number of tokens to look back when applying the repeat_penalty.

param logits_all: bool = False#

Return logits for all tokens, not just the last token.

param logprobs: int | None = None#

The number of logprobs to return. If None, no logprobs are returned.

param lora_base: str | None = None#

The path to the Llama LoRA base model.

param lora_path: str | None = None#

The path to the Llama LoRA. If None, no LoRA is loaded.

param max_tokens: int = 256#

The maximum number of tokens to generate.

param metadata: Dict[str, Any] | None = None#

Metadata to add to the run trace.

param model_kwargs: Dict[str, Any] [Optional]#

Any additional parameters to pass to llama_cpp.Llama.

param model_path: str [Required]#

The path to the Llama model file.

param n_batch: int = 8#

Number of tokens to process in parallel. Should be a number between 1 and n_ctx.

param n_ctx: int = 512#

Token context window.

param n_gpu_layers: int | None = None#

Number of layers to be loaded into GPU memory. Defaults to None.

param n_parts: int = -1#

Number of parts to split the model into. If -1, the number of parts is automatically determined.

param n_threads: int | None = None#

Number of threads to use. If None, the number of threads is automatically determined.

param rate_limiter: BaseRateLimiter | None = None#

An optional rate limiter to use for limiting the number of requests.

param repeat_penalty: float = 1.1#

The penalty to apply to repeated tokens.

param rope_freq_base: float = 10000.0#

Base frequency for rope sampling.

param rope_freq_scale: float = 1.0#

Scale factor for rope sampling.

param seed: int = -1#

Seed. If -1, a random seed is used.

param stop: List[str] | None = None#

A list of strings to stop generation when encountered.

param streaming: bool = True#

Whether to stream the results, token by token.

param suffix: str | None = None#

A suffix to append to the generated text. If None, no suffix is appended.

param tags: List[str] | None = None#

Tags to add to the run trace.

param temperature: float = 0.8#

The temperature to use for sampling.

param top_k: int = 40#

The top-k value to use for sampling.

param top_p: float = 0.95#

The top-p value to use for sampling.

param use_mlock: bool = False#

Force system to keep model in RAM.

param use_mmap: bool = True#

Whether to keep the model loaded in RAM.

param verbose: bool = True#

Print verbose output to stderr.

param vocab_only: bool = False#

Only load the vocabulary, no weights.

__call__(messages: List[BaseMessage], stop: List[str] | None = None, callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None, **kwargs: Any) BaseMessage#

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters:
  • messages (List[BaseMessage]) –

  • stop (List[str] | None) –

  • callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) –

  • kwargs (Any) –
Return type:

BaseMessage

async abatch(inputs: List[Input], config: RunnableConfig | List[RunnableConfig] | None = None, *, return_exceptions: bool = False, **kwargs: Any | None) List[Output]#

Default implementation runs ainvoke in parallel using asyncio.gather.

The default implementation of batch works well for IO bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.

Parameters:
  • inputs (List[Input]) – A list of inputs to the Runnable.

  • config (RunnableConfig | List[RunnableConfig] | None) – A config to use when invoking the Runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

  • return_exceptions (bool) – Whether to return exceptions instead of raising them. Defaults to False.

  • kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.

Returns:

A list of outputs from the Runnable.

Return type:

List[Output]

async abatch_as_completed(inputs: Sequence[Input], config: RunnableConfig | Sequence[RunnableConfig] | None = None, *, return_exceptions: bool = False, **kwargs: Any | None) AsyncIterator[Tuple[int, Output | Exception]]#

Run ainvoke in parallel on a list of inputs, yielding results as they complete.

Parameters:
  • inputs (Sequence[Input]) – A list of inputs to the Runnable.

  • config (RunnableConfig | Sequence[RunnableConfig] | None) – A config to use when invoking the Runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

  • return_exceptions (bool) – Whether to return exceptions instead of raising them. Defaults to False.

  • kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.

Yields:

A tuple of the index of the input and the output from the Runnable.

Return type:

AsyncIterator[Tuple[int, Output | Exception]]

async agenerate(messages: List[List[BaseMessage]], stop: List[str] | None = None, callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None, *, tags: List[str] | None = None, metadata: Dict[str, Any] | None = None, run_name: str | None = None, run_id: UUID | None = None, **kwargs: Any) LLMResult#

Asynchronously pass a sequence of prompts to a model and return generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:
  1. take advantage of batched calls,

  2. need more output from the model than just the top generated value,

  3. build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).

Parameters:
  • messages (List[List[BaseMessage]]) – List of list of messages.

  • stop (List[str] | None) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

  • tags (List[str] | None) – Optional list of tags to associate with the run.

  • metadata (Dict[str, Any] | None) – Optional metadata to associate with the run.

  • run_name (str | None) – Optional name for the run.

  • run_id (UUID | None) – Optional ID for the run.

Returns:

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type:

LLMResult

async agenerate_prompt(prompts: List[PromptValue], stop: List[str] | None = None, callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None, **kwargs: Any) LLMResult#

Asynchronously pass a sequence of prompts and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:
  1. take advantage of batched calls,

  2. need more output from the model than just the top generated value,

  3. build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).

Parameters:
  • prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).

  • stop (List[str] | None) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

Returns:

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type:

LLMResult

async ainvoke(input: LanguageModelInput, config: RunnableConfig | None = None, *, stop: List[str] | None = None, **kwargs: Any) BaseMessage#

Default implementation of ainvoke, calls invoke from a thread.

The default implementation allows usage of async code even if the Runnable did not implement a native async version of invoke.

Subclasses should override this method if they can run asynchronously.

Parameters:
  • input (LanguageModelInput) – The input to the model.

  • config (Optional[RunnableConfig]) – A config to use when invoking the Runnable. Defaults to None.

  • stop (Optional[List[str]]) – Stop words to use when generating. Defaults to None.

  • kwargs (Any) – Additional keyword arguments to pass to the model.

Return type:

BaseMessage

async apredict(text: str, *, stop: Sequence[str] | None = None, **kwargs: Any) str#

Deprecated since version langchain-core==0.1.7: Use ainvoke instead.

Parameters:
  • text (str) –

  • stop (Sequence[str] | None) –

  • kwargs (Any) –

Return type:

str

async apredict_messages(messages: List[BaseMessage], *, stop: Sequence[str] | None = None, **kwargs: Any) BaseMessage#

Deprecated since version langchain-core==0.1.7: Use ainvoke instead.

Parameters:
  • messages (List[BaseMessage]) –

  • stop (Sequence[str] | None) –

  • kwargs (Any) –

Return type:

BaseMessage

async astream(input: LanguageModelInput, config: RunnableConfig | None = None, *, stop: List[str] | None = None, **kwargs: Any) AsyncIterator[BaseMessageChunk]#

Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output.

Parameters:
  • input (LanguageModelInput) – The input to the Runnable.

  • config (Optional[RunnableConfig]) – The config to use for the Runnable. Defaults to None.

  • kwargs (Any) – Additional keyword arguments to pass to the Runnable.

  • stop (Optional[List[str]]) – Stop words to use when generating. Defaults to None.

Yields:

The output of the Runnable.

Return type:

AsyncIterator[BaseMessageChunk]

astream_events(input: Any, config: RunnableConfig | None = None, *, version: Literal['v1', 'v2'], include_names: Sequence[str] | None = None, include_types: Sequence[str] | None = None, include_tags: Sequence[str] | None = None, exclude_names: Sequence[str] | None = None, exclude_types: Sequence[str] | None = None, exclude_tags: Sequence[str] | None = None, **kwargs: Any) AsyncIterator[StandardStreamEvent | CustomStreamEvent]#

Beta

This API is in beta and may change in the future.

Generate a stream of events.

Use to create an iterator over StreamEvents that provide real-time information about the progress of the Runnable, including StreamEvents from intermediate results.

A StreamEvent is a dictionary with the following schema:

  • event: str - Event names are of the format: on_[runnable_type]_(start|stream|end).

  • name: str - The name of the Runnable that generated the event.

  • run_id: str - Randomly generated ID associated with the given execution of the Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.

  • parent_ids: List[str] - The IDs of the parent runnables that generated the event. The root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for the v2 version of the API. The v1 version of the API will return an empty list.

  • tags: Optional[List[str]] - The tags of the Runnable that generated the event.

  • metadata: Optional[Dict[str, Any]] - The metadata of the Runnable that generated the event.

  • data: Dict[str, Any]

Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.

ATTENTION This reference table is for the V2 version of the schema.

| event                | name             | chunk                           | input                                         | output                                          |
|----------------------|------------------|---------------------------------|-----------------------------------------------|-------------------------------------------------|
| on_chat_model_start  | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} |                                                 |
| on_chat_model_stream | [model name]     | AIMessageChunk(content="hello") |                                               |                                                 |
| on_chat_model_end    | [model name]     |                                 | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world")           |
| on_llm_start         | [model name]     |                                 | {'input': 'hello'}                            |                                                 |
| on_llm_stream        | [model name]     | 'Hello'                         |                                               |                                                 |
| on_llm_end           | [model name]     |                                 |                                               | 'Hello human!'                                  |
| on_chain_start       | format_docs      |                                 |                                               |                                                 |
| on_chain_stream      | format_docs      | "hello world!, goodbye world!"  |                                               |                                                 |
| on_chain_end         | format_docs      |                                 | [Document(...)]                               | "hello world!, goodbye world!"                  |
| on_tool_start        | some_tool        |                                 | {"x": 1, "y": "2"}                            |                                                 |
| on_tool_end          | some_tool        |                                 |                                               | {"x": 1, "y": "2"}                              |
| on_retriever_start   | [retriever name] |                                 | {"query": "hello"}                            |                                                 |
| on_retriever_end     | [retriever name] |                                 | {"query": "hello"}                            | [Document(...), ..]                             |
| on_prompt_start      | [template_name]  |                                 | {"question": "hello"}                         |                                                 |
| on_prompt_end        | [template_name]  |                                 | {"question": "hello"}                         | ChatPromptValue(messages: [SystemMessage, ...]) |

In addition to the standard events, users can also dispatch custom events (see example below).

Custom events will only be surfaced in the v2 version of the API!

A custom event has the following format:

| Attribute | Type | Description                                                                                              |
|-----------|------|----------------------------------------------------------------------------------------------------------|
| name      | str  | A user defined name for the event.                                                                       |
| data      | Any  | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |

Here are declarations associated with the standard events shown above:

format_docs:

def format_docs(docs: List[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)

some_tool:

@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}

prompt:

template = ChatPromptTemplate.from_messages(
    [("system", "You are Cat Agent 007"), ("human", "{question}")]
).with_config({"run_name": "my_template", "tags": ["my_template"]})

Example:

from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

events = [
    event async for event in chain.astream_events("hello", version="v2")
]

# will produce the following events (run_id and parent_ids
# have been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]

Example: Dispatch Custom Event

from langchain_core.callbacks.manager import (
    adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig
import asyncio


async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
        config=config # Must be included for python < 3.10
    )
    await asyncio.sleep(1) # Placeholder for some slow operation
    return "Done"

slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)
Parameters:
  • input (Any) – The input to the Runnable.

  • config (RunnableConfig | None) – The config to use for the Runnable.

  • version (Literal['v1', 'v2']) – The version of the schema to use, either 'v2' or 'v1'. Users should use 'v2'. 'v1' is for backwards compatibility and will be deprecated in 0.4.0. No default will be assigned until the API is stabilized. Custom events will only be surfaced in 'v2'.

  • include_names (Sequence[str] | None) – Only include events from runnables with matching names.

  • include_types (Sequence[str] | None) – Only include events from runnables with matching types.

  • include_tags (Sequence[str] | None) – Only include events from runnables with matching tags.

  • exclude_names (Sequence[str] | None) – Exclude events from runnables with matching names.

  • exclude_types (Sequence[str] | None) – Exclude events from runnables with matching types.

  • exclude_tags (Sequence[str] | None) – Exclude events from runnables with matching tags.

  • kwargs (Any) – Additional keyword arguments to pass to the Runnable. These will be passed to astream_log as this implementation of astream_events is built on top of astream_log.

Yields:

An async stream of StreamEvents.

Raises:

NotImplementedError – If the version is not v1 or v2.

Return type:

AsyncIterator[StandardStreamEvent | CustomStreamEvent]

batch(inputs: List[Input], config: RunnableConfig | List[RunnableConfig] | None = None, *, return_exceptions: bool = False, **kwargs: Any | None) List[Output]#

Default implementation runs invoke in parallel using a thread pool executor.

The default implementation of batch works well for IO bound runnables.

Subclasses should override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.

Parameters:
  • inputs (List[Input]) – A list of inputs to the Runnable.

  • config (RunnableConfig | List[RunnableConfig] | None) – A config to use when invoking the Runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Defaults to None.

  • return_exceptions (bool) – Whether to return exceptions instead of raising them. Defaults to False.

  • kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.

Returns:

A list of outputs from the Runnable.
Return type:

List[Output]
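As a sketch (reusing an llm constructed as in the examples above), the default batch fans invoke out over a thread pool; the max_concurrency config key caps the parallelism, which matters for a single local llama.cpp instance:

results = llm.batch(
    ["What is 2 + 2?", "Name a prime number."],
    config={"max_concurrency": 2},  # cap parallel invocations
)
for msg in results:
    print(msg.content)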

batch_as_completed(inputs: Sequence[Input], config: RunnableConfig | Sequence[RunnableConfig] | None = None, *, return_exceptions: bool = False, **kwargs: Any | None) Iterator[Tuple[int, Output | Exception]]#

Run invoke in parallel on a list of inputs, yielding results as they complete.

Parameters:
  • inputs (Sequence[Input]) – A list of inputs to the Runnable.

  • config (RunnableConfig | Sequence[RunnableConfig] | None) – A config to use when invoking the Runnable. Defaults to None.

  • return_exceptions (bool) – Whether to return exceptions instead of raising them. Defaults to False.

  • kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.

Yields:

A tuple of the index of the input and the output from the Runnable.

Return type:

Iterator[Tuple[int, Output | Exception]]

bind_tools(tools: Sequence[Dict[str, Any] | Type[BaseModel] | Callable | BaseTool], *, tool_choice: Dict[str, Dict] | bool | str | None = None, **kwargs: Any) Runnable[PromptValue | str | Sequence[BaseMessage | List[str] | Tuple[str, str] | str | Dict[str, Any]], BaseMessage][source]#

Bind tool-like objects to this chat model.

tool_choice: Does not currently support the “any” or “auto” choices of the OpenAI tool-calling API. To force a particular tool, pass a dict of the form {“type”: “function”, “function”: {“name”: <<tool_name>>}}.

Parameters:
  • tools (Sequence[Dict[str, Any] | Type[BaseModel] | Callable | BaseTool]) –

  • tool_choice (Dict[str, Dict] | bool | str | None) –

  • kwargs (Any) –

Return type:

Runnable[PromptValue | str | Sequence[BaseMessage | List[str] | Tuple[str, str] | str | Dict[str, Any]], BaseMessage]
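A hedged sketch of forcing a specific tool (get_weather is a hypothetical tool defined here for illustration; llm is a ChatLlamaCpp instance as constructed above):

from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city (stub for illustration)."""
    return f"Sunny in {city}"

llm_with_tools = llm.bind_tools(
    [get_weather],
    # "any"/"auto" are unsupported, so name the tool explicitly:
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)

print(llm_with_tools.invoke("What is the weather in Paris?").tool_calls)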

call_as_llm(message: str, stop: List[str] | None = None, **kwargs: Any) str#

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters:
  • message (str) –

  • stop (List[str] | None) –

  • kwargs (Any) –

Return type:

str

configurable_alternatives(which: ConfigurableField, *, default_key: str = 'default', prefix_keys: bool = False, **kwargs: Runnable[Input, Output] | Callable[[], Runnable[Input, Output]]) RunnableSerializable[Input, Output]#

Configure alternatives for Runnables that can be set at runtime.

Parameters:
  • which (ConfigurableField) – The ConfigurableField instance that will be used to select the alternative.

  • default_key (str) – The default key to use if no alternative is selected. Defaults to “default”.

  • prefix_keys (bool) – Whether to prefix the keys with the ConfigurableField id. Defaults to False.

  • **kwargs (Runnable[Input, Output] | Callable[[], Runnable[Input, Output]]) – A dictionary of keys to Runnable instances or callables that return Runnable instances.

Returns:

A new Runnable with the alternatives configured.

Return type:

RunnableSerializable[Input, Output]

from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-3-sonnet-20240229"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI()
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(
        configurable={"llm": "openai"}
    ).invoke("which organization created you?").content
)
configurable_fields(**kwargs: ConfigurableField | ConfigurableFieldSingleOption | ConfigurableFieldMultiOption) RunnableSerializable[Input, Output]#

Configure particular Runnable fields at runtime.

Parameters:

**kwargs (ConfigurableField | ConfigurableFieldSingleOption | ConfigurableFieldMultiOption) – A dictionary of ConfigurableField instances to configure.

Returns:

A new Runnable with the fields configured.

Return type:

RunnableSerializable[Input, Output]

from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print(
    "max_tokens_20: ",
    model.invoke("tell me something about chess").content
)

# max_tokens = 200
print("max_tokens_200: ", model.with_config(
    configurable={"output_token_number": 200}
    ).invoke("tell me something about chess").content
)
generate(messages: List[List[BaseMessage]], stop: List[str] | None = None, callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None, *, tags: List[str] | None = None, metadata: Dict[str, Any] | None = None, run_name: str | None = None, run_id: UUID | None = None, **kwargs: Any) LLMResult#

Pass a sequence of prompts to the model and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:
  1. take advantage of batched calls,

  2. need more output from the model than just the top generated value,

  3. build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).

Parameters:
  • messages (List[List[BaseMessage]]) – List of list of messages.

  • stop (List[str] | None) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

  • tags (List[str] | None) – Optional list of tags to associate with the run.

  • metadata (Dict[str, Any] | None) – Optional metadata to associate with the run.

  • run_name (str | None) – Optional name for the run.

  • run_id (UUID | None) – Optional ID for the run.

Returns:

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type:

LLMResult

generate_prompt(prompts: List[PromptValue], stop: List[str] | None = None, callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None, **kwargs: Any) LLMResult#

Pass a sequence of prompts to the model and return model generations.

This method should make use of batched calls for models that expose a batched API.

Use this method when you want to:
  1. take advantage of batched calls,

  2. need more output from the model than just the top generated value,

  3. build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs. chat models).

Parameters:
  • prompts (List[PromptValue]) – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models).

  • stop (List[str] | None) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.

  • callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.

  • **kwargs (Any) – Arbitrary additional keyword arguments. These are usually passed to the model provider API call.

Returns:

An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.

Return type:

LLMResult

get_num_tokens(text: str) int#

Get the number of tokens present in the text.

Useful for checking if an input fits in a model’s context window.

Parameters:

text (str) – The string input to tokenize.

Returns:

The integer number of tokens in the text.

Return type:

int
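For instance, a small sketch of pre-flight token counting against the context window (llm as constructed above):

prompt = "Summarize the history of chess in one paragraph."
if llm.get_num_tokens(prompt) < llm.n_ctx:  # n_ctx is the token context window
    print(llm.invoke(prompt).content)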

get_num_tokens_from_messages(messages: List[BaseMessage]) int#

Get the number of tokens in the messages.

Useful for checking if an input fits in a model’s context window.

Parameters:

messages (List[BaseMessage]) – The message inputs to tokenize.

Returns:

The sum of the number of tokens across the messages.

Return type:

int

get_token_ids(text: str) List[int]#

Return the ordered ids of the tokens in a text.

Parameters:

text (str) – The string input to tokenize.

Returns:

A list of ids corresponding to the tokens in the text, in the order they occur in the text.

Return type:

List[int]

invoke(input: LanguageModelInput, config: RunnableConfig | None = None, *, stop: List[str] | None = None, **kwargs: Any) BaseMessage#

Transform a single input into an output. Override to implement.

Parameters:
  • input (LanguageModelInput) – The input to the Runnable.

  • config (Optional[RunnableConfig]) – A config to use when invoking the Runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details.

  • stop (Optional[List[str]]) – Stop words to use when generating. Defaults to None.

  • kwargs (Any) – Additional keyword arguments to pass to the model.

Returns:

The output of the Runnable.

Return type:

BaseMessage

predict(text: str, *, stop: Sequence[str] | None = None, **kwargs: Any) str#

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters:
  • text (str) –

  • stop (Sequence[str] | None) –

  • kwargs (Any) –

Return type:

str

predict_messages(messages: List[BaseMessage], *, stop: Sequence[str] | None = None, **kwargs: Any) BaseMessage#

Deprecated since version langchain-core==0.1.7: Use invoke instead.

Parameters:
  • messages (List[BaseMessage]) –

  • stop (Sequence[str] | None) –

  • kwargs (Any) –

Return type:

BaseMessage

stream(input: LanguageModelInput, config: RunnableConfig | None = None, *, stop: List[str] | None = None, **kwargs: Any) Iterator[BaseMessageChunk]#

Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output.

Parameters:
  • input (LanguageModelInput) – The input to the Runnable.

  • config (Optional[RunnableConfig]) – The config to use for the Runnable. Defaults to None.

  • kwargs (Any) – Additional keyword arguments to pass to the Runnable.

  • stop (Optional[List[str]]) – Stop words to use when generating. Defaults to None.

Yields:

The output of the Runnable.

Return type:

Iterator[BaseMessageChunk]
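A minimal streaming sketch (llm as constructed above); each chunk is a BaseMessageChunk whose content arrives token by token:

for chunk in llm.stream("Write a haiku about the sea."):
    print(chunk.content, end="", flush=True)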

to_json() SerializedConstructor | SerializedNotImplemented#

Serialize the Runnable to JSON.

Returns:

A JSON-serializable representation of the Runnable.

Return type:

SerializedConstructor | SerializedNotImplemented

with_structured_output(schema: Dict | Type[BaseModel] | None = None, *, include_raw: bool = False, **kwargs: Any) Runnable[PromptValue | str | Sequence[BaseMessage | List[str] | Tuple[str, str] | str | Dict[str, Any]], Dict | BaseModel][source]#

Model wrapper that returns outputs formatted to match the given schema.

Parameters:
  • schema (Dict | Type[BaseModel] | None) – The output schema as a dict or a Pydantic class. If a Pydantic class then the model output will be an object of that class. If a dict then the model output will be a dict. With a Pydantic class the returned attributes will be validated, whereas with a dict they will not be. If method is “function_calling” and schema is a dict, then the dict must match the OpenAI function-calling spec or be a valid JSON schema with top level ‘title’ and ‘description’ keys specified.

  • include_raw (bool) – If False then only the parsed structured output is returned. If an error occurs during model output parsing it will be raised. If True then both the raw model response (a BaseMessage) and the parsed model response will be returned. If an error occurs during output parsing it will be caught and returned as well. The final output is always a dict with keys “raw”, “parsed”, and “parsing_error”.

  • kwargs (Any) – Any other args to bind to model, self.bind(..., **kwargs).

Returns:

If include_raw is True, then a dict with keys:

  • raw: BaseMessage

  • parsed: Optional[_DictOrPydantic]

  • parsing_error: Optional[BaseException]

If include_raw is False, then just _DictOrPydantic is returned, where _DictOrPydantic depends on the schema:

  • If schema is a Pydantic class, then _DictOrPydantic is the Pydantic class.

  • If schema is a dict, then _DictOrPydantic is a dict.

Return type:

A Runnable that takes any ChatModel input and returns output in the form described above.

Example: Pydantic schema (include_raw=False):
import multiprocessing

from langchain_community.chat_models import ChatLlamaCpp
from langchain_core.pydantic_v1 import BaseModel

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''
    answer: str
    justification: str

llm = ChatLlamaCpp(
    temperature=0.,
    model_path="./SanctumAI-meta-llama-3-8b-instruct.Q8_0.gguf",
    n_ctx=10000,
    n_gpu_layers=4,
    n_batch=200,
    max_tokens=512,
    n_threads=multiprocessing.cpu_count() - 1,
    repeat_penalty=1.5,
    top_p=0.5,
    stop=["<|end_of_text|>", "<|eot_id|>"],
)
structured_llm = llm.with_structured_output(AnswerWithJustification)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")

# -> AnswerWithJustification(
#     answer='They weigh the same',
#     justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'
# )
Example: Pydantic schema (include_raw=True):
import multiprocessing

from langchain_community.chat_models import ChatLlamaCpp
from langchain_core.pydantic_v1 import BaseModel

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''
    answer: str
    justification: str

llm = ChatLlamaCpp(
    temperature=0.,
    model_path="./SanctumAI-meta-llama-3-8b-instruct.Q8_0.gguf",
    n_ctx=10000,
    n_gpu_layers=4,
    n_batch=200,
    max_tokens=512,
    n_threads=multiprocessing.cpu_count() - 1,
    repeat_penalty=1.5,
    top_p=0.5,
    stop=["<|end_of_text|>", "<|eot_id|>"],
)
structured_llm = llm.with_structured_output(AnswerWithJustification, include_raw=True)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> {
#     'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Ao02pnFYXD6GN1yzc0uXPsvF', 'function': {'arguments': '{"answer":"They weigh the same.","justification":"Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ."}', 'name': 'AnswerWithJustification'}, 'type': 'function'}]}),
#     'parsed': AnswerWithJustification(answer='They weigh the same.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'),
#     'parsing_error': None
# }
Example: dict schema (include_raw=False):
import multiprocessing

from langchain_community.chat_models import ChatLlamaCpp
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.utils.function_calling import convert_to_openai_tool

class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''
    answer: str
    justification: str

dict_schema = convert_to_openai_tool(AnswerWithJustification)
llm = ChatLlamaCpp(
    temperature=0.,
    model_path="./SanctumAI-meta-llama-3-8b-instruct.Q8_0.gguf",
    n_ctx=10000,
    n_gpu_layers=4,
    n_batch=200,
    max_tokens=512,
    n_threads=multiprocessing.cpu_count() - 1,
    repeat_penalty=1.5,
    top_p=0.5,
    stop=["<|end_of_text|>", "<|eot_id|>"],
)
structured_llm = llm.with_structured_output(dict_schema)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> {
#     'answer': 'They weigh the same',
#     'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
# }