RunnableMultiActionAgent#
- class langchain.agents.agent.RunnableMultiActionAgent[source]#
 Bases: BaseMultiActionAgent
Agent powered by Runnables.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
- param input_keys_arg: List[str] = []#
 
- param return_keys_arg: List[str] = []#
 
- param runnable: Runnable[dict, List[AgentAction] | AgentFinish] [Required]#
 Runnable to call to get agent actions.
- param stream_runnable: bool = True#
 Whether to stream from the runnable or not.
- If True, the underlying LLM is invoked in a streaming fashion, making it possible to access the individual LLM tokens when using stream_log with the Agent Executor. If False, the LLM is invoked in a non-streaming fashion and individual LLM tokens will not be available in stream_log.
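A minimal construction sketch. The RunnableLambda decision function below is a stand-in for a real prompt | LLM | output-parser pipeline, and the hard-coded AgentFinish is purely illustrative:

.. code-block:: python

    from langchain.agents.agent import RunnableMultiActionAgent
    from langchain_core.agents import AgentFinish
    from langchain_core.runnables import RunnableLambda

    # Placeholder decision function: always finishes immediately instead of
    # calling an LLM to choose tools.
    def _decide(inputs: dict):
        return AgentFinish(return_values={"output": "done"}, log="done")

    agent = RunnableMultiActionAgent(
        runnable=RunnableLambda(_decide),
        # Set to False if token-level access via stream_log is not needed.
        stream_runnable=False,
    )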
- async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None, **kwargs: Any) List[AgentAction] | AgentFinish[source]#
 Asynchronously decide what to do, based on past history and current inputs.
- Parameters:
 intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.
callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to run.
**kwargs (Any) – User inputs.
- Returns:
 Actions specifying what tools to use, or AgentFinish if the agent is done.
- Return type:
 List[AgentAction] | AgentFinish
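A hedged usage sketch, assuming the underlying runnable expects a single "input" key (any other keys the runnable needs can be passed the same way through **kwargs):

.. code-block:: python

    import asyncio

    from langchain_core.agents import AgentFinish

    async def decide_next(agent, intermediate_steps, user_input):
        result = await agent.aplan(
            intermediate_steps=intermediate_steps,
            input=user_input,  # forwarded to the runnable via **kwargs
        )
        if isinstance(result, AgentFinish):
            return result.return_values
        return result  # list of AgentAction objects to execute next

    # e.g. asyncio.run(decide_next(agent, [], "What is 2 + 2?"))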
- get_allowed_tools() List[str] | None#
 Get allowed tools.
- Returns:
 Allowed tools.
- Return type:
 Optional[List[str]]
- plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: List[BaseCallbackHandler] | BaseCallbackManager | None = None, **kwargs: Any) List[AgentAction] | AgentFinish[source]#
 Based on past history and current inputs, decide what to do.
- Parameters:
 intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with the observations.
callbacks (List[BaseCallbackHandler] | BaseCallbackManager | None) – Callbacks to run.
**kwargs (Any) – User inputs.
- Returns:
 Actions specifying what tools to use, or AgentFinish if the agent is done.
- Return type:
 List[AgentAction] | AgentFinish
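A synchronous counterpart of the sketch above, with the same assumption that the runnable expects an "input" key:

.. code-block:: python

    from langchain_core.agents import AgentFinish

    result = agent.plan(intermediate_steps=[], input="What is 2 + 2?")
    if isinstance(result, AgentFinish):
        print(result.return_values)
    else:
        for action in result:  # one or more AgentAction objects
            print(action.tool, action.tool_input)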
- return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) AgentFinish#
 Return response when agent has been stopped due to max iterations.
- Parameters:
 early_stopping_method (str) – Method to use for early stopping.
intermediate_steps (List[Tuple[AgentAction, str]]) – Steps the LLM has taken to date, along with observations.
**kwargs (Any) – User inputs.
- Returns:
 Agent finish object.
- Return type:
 AgentFinish
- Raises:
 ValueError – If early_stopping_method is not supported.
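A short sketch, assuming the "force" early-stopping strategy (which returns a fixed "agent stopped" message without calling the LLM again) and an existing agent/intermediate_steps pair:

.. code-block:: python

    finish = agent.return_stopped_response(
        early_stopping_method="force",
        intermediate_steps=intermediate_steps,
    )
    print(finish.return_values["output"])  # the fixed "agent stopped" message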
- save(file_path: Path | str) None#
 Save the agent.
- Parameters:
 file_path (Path | str) β Path to file to save the agent to.
- Raises:
 NotImplementedError β If agent does not support saving.
ValueError – If file_path is not json or yaml.
- Return type:
 None
Example:

.. code-block:: python

    # If working with agent executor
    agent.agent.save(file_path="path/agent.yaml")
- tool_run_logging_kwargs() Dict#
 Return logging kwargs for tool run.
- Return type:
 Dict
- property input_keys: List[str]#
 Return the input keys.
- Returns:
 List of input keys.
- property return_values: List[str]#
 Return values of the agent.