LLMThought#

class langchain_community.callbacks.streamlit.streamlit_callback_handler.LLMThought(parent_container: DeltaGenerator, labeler: LLMThoughtLabeler, expanded: bool, collapse_on_complete: bool)[source]#

A thought in the LLM’s thought stream.

Initialize the LLMThought.

Parameters:
  • parent_container (DeltaGenerator) – The container we’re writing into.

  • labeler (LLMThoughtLabeler) – The labeler to use for this thought.

  • expanded (bool) – Whether the thought should be expanded by default.

  • collapse_on_complete (bool) – Whether the thought should be collapsed once it completes.
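In application code, `LLMThought` instances are normally created and driven by `StreamlitCallbackHandler` rather than constructed directly. The sketch below is a minimal pure-Python stand-in (`ThoughtSketch` and its fields are illustrative, not part of the API; the real class writes into a Streamlit `DeltaGenerator`) showing how the LLM callbacks stream tokens into a thought and how `collapse_on_complete` affects `complete()`:

```python
from typing import Any, Dict, List, Optional


class ThoughtSketch:
    """Illustrative stand-in for LLMThought: accumulates streamed text
    instead of writing into a Streamlit DeltaGenerator."""

    def __init__(self, expanded: bool, collapse_on_complete: bool) -> None:
        self.expanded = expanded
        self.collapse_on_complete = collapse_on_complete
        self.label = "Thinking..."  # placeholder label while the LLM runs
        self.text = ""

    def on_llm_start(self, serialized: Dict[str, Any], prompts: List[str]) -> None:
        # A new LLM call starts a fresh stream of tokens.
        self.text = ""

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        # Each streamed token is appended to the visible thought text.
        self.text += token

    def complete(self, final_label: Optional[str] = None) -> None:
        # Finish the thought: optionally relabel it and, if configured,
        # collapse its container.
        if final_label is not None:
            self.label = final_label
        if self.collapse_on_complete:
            self.expanded = False


thought = ThoughtSketch(expanded=True, collapse_on_complete=True)
thought.on_llm_start({}, ["What is 2 + 2?"])
for tok in ["2 + 2", " is", " 4."]:
    thought.on_llm_new_token(tok)
thought.complete(final_label="Complete!")
```

In a real Streamlit app this lifecycle is triggered for you: passing `StreamlitCallbackHandler(st.container())` in `callbacks=[...]` creates a thought per LLM call and invokes these methods as events arrive.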

Attributes

container

The container we're writing into.

last_tool

The last tool executed by this thought.

Methods

__init__(parent_container, labeler, ...)

Initialize the LLMThought.

clear()

Remove the thought from the screen.

complete([final_label])

Finish the thought.

on_agent_action(action[, color])

Handle an agent action.

on_llm_end(response, **kwargs)

Handle the end of an LLM run.

on_llm_error(error, **kwargs)

Handle an LLM error.

on_llm_new_token(token, **kwargs)

Handle a new streamed LLM token.

on_llm_start(serialized, prompts)

Handle the start of an LLM run.

on_tool_end(output[, color, ...])

Handle the end of a tool run.

on_tool_error(error, **kwargs)

Handle a tool error.

on_tool_start(serialized, input_str, **kwargs)

Handle the start of a tool run.

__init__(parent_container: DeltaGenerator, labeler: LLMThoughtLabeler, expanded: bool, collapse_on_complete: bool)[source]#

Initialize the LLMThought.

Parameters:
  • parent_container (DeltaGenerator) – The container we’re writing into.

  • labeler (LLMThoughtLabeler) – The labeler to use for this thought.

  • expanded (bool) – Whether the thought should be expanded by default.

  • collapse_on_complete (bool) – Whether the thought should be collapsed once it completes.

clear() → None[source]#

Remove the thought from the screen. A cleared thought can’t be reused.

Return type:

None

complete(final_label: str | None = None) → None[source]#

Finish the thought.

Parameters:

final_label (str | None) – The label to apply to the finished thought, if any.

Return type:

None

on_agent_action(action: AgentAction, color: str | None = None, **kwargs: Any) → Any[source]#

Handle an agent action.

Parameters:
  • action (AgentAction) –

  • color (str | None) –

  • kwargs (Any) –

Return type:

Any

on_llm_end(response: LLMResult, **kwargs: Any) → None[source]#

Handle the end of an LLM run.

Parameters:
  • response (LLMResult) –

  • kwargs (Any) –

Return type:

None

on_llm_error(error: BaseException, **kwargs: Any) → None[source]#

Handle an LLM error.

Parameters:
  • error (BaseException) –

  • kwargs (Any) –

Return type:

None

on_llm_new_token(token: str, **kwargs: Any) → None[source]#

Handle a new streamed LLM token.

Parameters:
  • token (str) –

  • kwargs (Any) –

Return type:

None

on_llm_start(serialized: Dict[str, Any], prompts: List[str]) → None[source]#

Handle the start of an LLM run.

Parameters:
  • serialized (Dict[str, Any]) –

  • prompts (List[str]) –

Return type:

None

on_tool_end(output: Any, color: str | None = None, observation_prefix: str | None = None, llm_prefix: str | None = None, **kwargs: Any) → None[source]#

Handle the end of a tool run.

Parameters:
  • output (Any) –

  • color (str | None) –

  • observation_prefix (str | None) –

  • llm_prefix (str | None) –

  • kwargs (Any) –

Return type:

None

on_tool_error(error: BaseException, **kwargs: Any) → None[source]#

Handle a tool error.

Parameters:
  • error (BaseException) –

  • kwargs (Any) –

Return type:

None

on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]#

Handle the start of a tool run.

Parameters:
  • serialized (Dict[str, Any]) –

  • input_str (str) –

  • kwargs (Any) –

Return type:

None
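The tool callbacks follow the same pattern as the LLM callbacks: `on_tool_start` records which tool is running (surfaced via the `last_tool` attribute, a `ToolRecord` in the real class) and `on_tool_end` appends the tool's observation to the thought. A minimal pure-Python stand-in (`ToolThoughtSketch` and the dict it stores are illustrative, not the library's types):

```python
from typing import Any, Dict, Optional


class ToolThoughtSketch:
    """Illustrative stand-in for LLMThought's tool callbacks."""

    def __init__(self) -> None:
        self.last_tool: Optional[Dict[str, Any]] = None  # real class stores a ToolRecord
        self.text = ""

    def on_tool_start(self, serialized: Dict[str, Any], input_str: str, **kwargs: Any) -> None:
        # Remember the tool being executed so the thought can be labeled after it.
        self.last_tool = {"name": serialized.get("name", "tool"), "input": input_str}

    def on_tool_end(
        self,
        output: Any,
        color: Optional[str] = None,
        observation_prefix: Optional[str] = None,
        llm_prefix: Optional[str] = None,
        **kwargs: Any,
    ) -> None:
        # Append the tool's observation to the visible thought text.
        prefix = observation_prefix or ""
        self.text += f"{prefix}{output}\n"


t = ToolThoughtSketch()
t.on_tool_start({"name": "calculator"}, "2 + 2")
t.on_tool_end("4", observation_prefix="Observation: ")
```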