generate_example

langchain.chains.example_generator.generate_example(examples: List[dict], llm: BaseLanguageModel, prompt_template: PromptTemplate) → str

Return another example given a list of examples for a prompt.

Parameters:
  • examples (List[dict]) – Existing examples to base the new example on.
  • llm (BaseLanguageModel) – Language model used to generate the new example.
  • prompt_template (PromptTemplate) – Prompt template used to format each individual example.

Return type:
str
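
A minimal usage sketch (not part of the official reference; the OpenAI model and the example data below are illustrative assumptions): render a handful of existing examples with a PromptTemplate and ask the LLM to produce one more in the same format.

from langchain.chains.example_generator import generate_example
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI  # illustrative choice; any BaseLanguageModel works

# Template used to render each existing example.
example_prompt = PromptTemplate.from_template("Question: {question}\nAnswer: {answer}")

# Existing examples; dict keys must match the template's input variables.
examples = [
    {"question": "What is 2 + 2?", "answer": "4"},
    {"question": "What is the capital of France?", "answer": "Paris"},
]

llm = OpenAI(temperature=0.7)

# Returns a single new example as a raw string in the same format.
new_example = generate_example(examples, llm, example_prompt)
print(new_example)

In effect the function few-shot prompts the model with the formatted examples and asks it to add another, so the return value is unparsed text in the same shape as the inputs; parse it back into a dict yourself if you need structured output.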

