load_response_generator

langchain_experimental.autonomous_agents.hugginggpt.repsonse_generator.load_response_generator(llm: BaseLanguageModel) → ResponseGenerator

Load the ResponseGenerator.

Parameters:

llm (BaseLanguageModel) – The language model that backs the response-generation chain.

Return type:

ResponseGenerator
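
Example

A minimal usage sketch. The choice of ChatOpenAI (and a configured OPENAI_API_KEY) is an assumption for illustration; any BaseLanguageModel can be passed instead. The returned ResponseGenerator is normally driven by HuggingGPT, which feeds it the collected task-execution results to compose the final answer.

    # Assumption: langchain-openai is installed and OPENAI_API_KEY is set;
    # any other BaseLanguageModel could be substituted for ChatOpenAI.
    from langchain_openai import ChatOpenAI

    from langchain_experimental.autonomous_agents.hugginggpt.repsonse_generator import (
        load_response_generator,
    )

    llm = ChatOpenAI(temperature=0)
    response_generator = load_response_generator(llm)  # returns a ResponseGenerator
    # HuggingGPT later calls response_generator.generate(...) with the
    # task-execution results to produce the final user-facing response.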

