LLMInputOutputAdapter#

class langchain_community.llms.bedrock.LLMInputOutputAdapter[source]#

Adapter class to prepare inputs from LangChain into the format a Bedrock-hosted LLM expects.

It also provides helper functions to extract the generated text from the model response.

Attributes

provider_to_output_key_map

Methods

__init__()

aprepare_output_stream(provider, response[, ...])

prepare_input(provider, model_kwargs[, ...])

prepare_output(provider, response)

prepare_output_stream(provider, response[, ...])

__init__()#
classmethod aprepare_output_stream(provider: str, response: Any, stop: List[str] | None = None) AsyncIterator[GenerationChunk][source]#
Parameters:
  • provider (str) –

  • response (Any) –

  • stop (List[str] | None) –

Return type:

AsyncIterator[GenerationChunk]
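The async variant yields chunks as the model streams its response. A minimal pure-Python sketch of that pattern (the event shapes, key names, and plain-string chunks below are simplified assumptions — the real method parses Bedrock's event-stream payloads into `GenerationChunk` objects):

```python
# Illustrative sketch only, not the langchain_community implementation.
# A plain list of dicts stands in for Bedrock's streaming response.
import asyncio
from typing import Any, AsyncIterator, Dict, List, Optional

async def aprepare_output_stream(
    response: List[Dict[str, Any]],
    stop: Optional[List[str]] = None,
) -> AsyncIterator[str]:
    """Yield text chunks, halting when a stop sequence appears."""
    for event in response:
        text = event.get("outputText", "")  # hypothetical key name
        if stop and any(s in text for s in stop):
            return
        yield text
        await asyncio.sleep(0)  # yield control between chunks

async def collect(events, stop=None):
    """Helper: drain the async iterator into a list."""
    return [chunk async for chunk in aprepare_output_stream(events, stop)]
```
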

classmethod prepare_input(provider: str, model_kwargs: Dict[str, Any], prompt: str | None = None, system: str | None = None, messages: List[Dict] | None = None) Dict[str, Any][source]#
Parameters:
  • provider (str) –

  • model_kwargs (Dict[str, Any]) –

  • prompt (str | None) –

  • system (str | None) –

  • messages (List[Dict] | None) –

Return type:

Dict[str, Any]
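To make the role of `prepare_input` concrete, here is a hedged sketch of the idea: each Bedrock model provider expects a differently shaped request body, and the adapter branches on the provider name to build it. The key names (`inputText`, `textGenerationConfig`, `prompt`, `messages`) are illustrative assumptions, not the verified bodies for every provider:

```python
# Illustrative sketch of provider-specific request shaping; not the
# actual langchain_community implementation.
from typing import Any, Dict, List, Optional

def prepare_input(
    provider: str,
    model_kwargs: Dict[str, Any],
    prompt: Optional[str] = None,
    messages: Optional[List[Dict]] = None,
) -> Dict[str, Any]:
    """Shape LangChain inputs into a provider-specific request body."""
    if provider == "anthropic" and messages is not None:
        # Messages-API style body (hypothetical shape).
        return {"messages": messages, **model_kwargs}
    if provider == "amazon":
        # Titan-style body: prompt plus a nested generation config.
        return {"inputText": prompt, "textGenerationConfig": model_kwargs}
    # Many providers accept a flat body with a plain "prompt" key.
    return {"prompt": prompt, **model_kwargs}
```

The design point is that callers (the Bedrock LLM wrapper) never see provider differences; they pass a prompt or messages plus `model_kwargs`, and the adapter owns the per-provider body layout.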

classmethod prepare_output(provider: str, response: Any) dict[source]#
Parameters:
  • provider (str) –

  • response (Any) –

Return type:

dict
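The extraction side mirrors this: a provider-to-output-key table (the class exposes a `provider_to_output_key_map` attribute for this purpose) records where each provider's response body stores the generated text. A minimal sketch, with assumed key names and a flat JSON body standing in for the real response object:

```python
# Illustrative sketch of response extraction; key names are assumptions.
import json
from typing import Any, Dict

# Hypothetical mapping of provider -> key holding the generated text.
PROVIDER_TO_OUTPUT_KEY = {
    "amazon": "outputText",
    "anthropic": "completion",
    "cohere": "text",
}

def prepare_output(provider: str, raw_body: bytes) -> Dict[str, Any]:
    """Decode a JSON response body and pull out the generated text."""
    body = json.loads(raw_body)
    key = PROVIDER_TO_OUTPUT_KEY.get(provider)
    if key is None:
        raise ValueError(f"Unsupported provider: {provider}")
    return {"text": body[key], "body": body}
```
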

classmethod prepare_output_stream(provider: str, response: Any, stop: List[str] | None = None, messages_api: bool = False) Iterator[GenerationChunk][source]#
Parameters:
  • provider (str) –

  • response (Any) –

  • stop (List[str] | None) –

  • messages_api (bool) –

Return type:

Iterator[GenerationChunk]
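The synchronous streaming method adds a `messages_api` flag, suggesting it must handle two event formats: legacy completion-style events and Messages-API delta events. A hedged sketch of that branching (event shapes are assumptions; the real method yields `GenerationChunk` objects parsed from Bedrock's event stream):

```python
# Illustrative sketch only; event shapes are hypothetical.
from typing import Any, Dict, Iterator, List, Optional

def prepare_output_stream(
    response: List[Dict[str, Any]],
    stop: Optional[List[str]] = None,
    messages_api: bool = False,
) -> Iterator[str]:
    """Yield text from stream events, honoring stop sequences."""
    for event in response:
        if messages_api:
            # Messages-API style: text arrives inside a nested delta.
            text = event.get("delta", {}).get("text", "")
        else:
            # Completion style: text sits at the top level.
            text = event.get("completion", "")
        if stop and any(s in text for s in stop):
            return
        if text:
            yield text
```
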