Models LangChain provides interfaces and integrations for a number of different types of models. LLMs Chat Models
https://api.python.langchain.com/en/latest/models.html
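The two model types share a similar call pattern but differ in their inputs and outputs. A minimal sketch, not part of the reference page above, assuming the openai package is installed and OPENAI_API_KEY is set:

from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# LLMs: plain text in, plain text out.
llm = OpenAI(temperature=0)
completion = llm("Say hello in French.")  # returns a str

# Chat models: a list of messages in, a message out.
chat = ChatOpenAI(temperature=0)
reply = chat([HumanMessage(content="Say hello in French.")])  # returns an AIMessage
print(completion, reply.content)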
Model I/O LangChain provides interfaces and integrations for working with language models. Prompts Models Output Parsers
https://api.python.langchain.com/en/latest/model_io.html
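A hedged sketch of how the three Model I/O pieces compose (a prompt template formats the input, a model generates text, an output parser structures the result). It assumes OPENAI_API_KEY is set and uses CommaSeparatedListOutputParser purely for illustration:

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.output_parsers import CommaSeparatedListOutputParser

# The output parser supplies formatting instructions and parses the raw completion.
parser = CommaSeparatedListOutputParser()
prompt = PromptTemplate(
    template="List three {topic}.\n{format_instructions}",
    input_variables=["topic"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

llm = OpenAI(temperature=0)
raw_output = llm(prompt.format(topic="primary colors"))
print(parser.parse(raw_output))  # e.g. ['red', 'yellow', 'blue']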
Prompts The reference guides here all relate to objects for working with Prompts. Prompt Templates Example Selector
https://api.python.langchain.com/en/latest/prompts.html
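A rough sketch of a prompt template combined with an example selector; the few-shot examples below are invented for illustration, and LengthBasedExampleSelector is just one of the available selectors:

from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector

# Invented few-shot examples.
examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
    {"word": "fast", "antonym": "slow"},
]
example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template="Word: {word}\nAntonym: {antonym}",
)
# The selector drops examples once the formatted prompt would exceed max_length words.
selector = LengthBasedExampleSelector(
    examples=examples, example_prompt=example_prompt, max_length=25
)
prompt = FewShotPromptTemplate(
    example_selector=selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input.",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
)
print(prompt.format(input="bright"))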
Data connection LangChain has a number of modules that help you load, structure, store, and retrieve documents. Document Loaders Document Transformers Embeddings Vector Stores Retrievers
https://api.python.langchain.com/en/latest/data_connection.html
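A hedged end-to-end sketch of how these modules fit together (load, split, embed, store, retrieve). It assumes the faiss-cpu and openai packages, an OPENAI_API_KEY, and a local state_of_the_union.txt file, all of which are placeholders for your own setup:

from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Load a document, split it into overlapping chunks, then embed and index the chunks.
docs = TextLoader("state_of_the_union.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())

# A retriever wraps the vector store behind a simple query interface.
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
relevant_docs = retriever.get_relevant_documents("What was said about the economy?")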
Embeddings Wrappers around embedding modules. class langchain.embeddings.OpenAIEmbeddings(*, client=None, model='text-embedding-ada-002', deployment='text-embedding-ada-002', openai_api_version=None, openai_api_base=None, openai_api_type=None, openai_proxy=None, embedding_ctx_length=8191, openai_api_key=None, openai_organization=None, allowed_special={}, disallowed_special='all', chunk_size=1000, max_retries=6, request_timeout=None, headers=None, tiktoken_model_name=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around OpenAI embedding models. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key or pass it as a named parameter to the constructor. Example from langchain.embeddings import OpenAIEmbeddings openai = OpenAIEmbeddings(openai_api_key="my-api-key") In order to use the library with Microsoft Azure endpoints, you need to set the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION. The OPENAI_API_TYPE must be set to β€˜azure’ and the others correspond to the properties of your endpoint. In addition, the deployment name must be passed as the model parameter. Example import os os.environ["OPENAI_API_TYPE"] = "azure" os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/" os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key" os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview" os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080" from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings( deployment="your-embeddings-deployment-name", model="your-embeddings-model-name", openai_api_base="https://your-endpoint.openai.azure.com/", openai_api_type="azure", ) text = "This is a test query." query_result = embeddings.embed_query(text) Parameters client (Any) – model (str) – deployment (str) – openai_api_version (Optional[str]) – openai_api_base (Optional[str]) – openai_api_type (Optional[str]) – openai_proxy (Optional[str]) – embedding_ctx_length (int) – openai_api_key (Optional[str]) – openai_organization (Optional[str]) – allowed_special (Union[Literal['all'], typing.Set[str]]) – disallowed_special (Union[Literal['all'], typing.Set[str], typing.Sequence[str]]) – chunk_size (int) – max_retries (int) – request_timeout (Optional[Union[float, Tuple[float, float]]]) – headers (Any) – tiktoken_model_name (Optional[str]) – Return type None attribute chunk_size: int = 1000 Maximum number of texts to embed in each batch attribute max_retries: int = 6 Maximum number of retries to make when generating. attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = None Timeout in seconds for the OpenAPI request. attribute tiktoken_model_name: Optional[str] = None The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the embedding model name. However, there are some cases where you may want to use this Embedding class with a model name not supported by tiktoken. This can include when using Azure embeddings or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here. 
async aembed_documents(texts, chunk_size=0)[source] Call out to OpenAI’s embedding endpoint async for embedding search docs. Parameters texts (List[str]) – The list of texts to embed. chunk_size (Optional[int]) – The chunk size of embeddings. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. Return type List[List[float]] async aembed_query(text)[source] Call out to OpenAI’s embedding endpoint async for embedding query text. Parameters text (str) – The text to embed. Returns Embedding for the text. Return type List[float] embed_documents(texts, chunk_size=0)[source] Call out to OpenAI’s embedding endpoint for embedding search docs. Parameters texts (List[str]) – The list of texts to embed. chunk_size (Optional[int]) – The chunk size of embeddings. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Call out to OpenAI’s embedding endpoint for embedding query text. Parameters text (str) – The text to embed. Returns Embedding for the text. Return type List[float] class langchain.embeddings.HuggingFaceEmbeddings(*, client=None, model_name='sentence-transformers/all-mpnet-base-v2', cache_folder=None, model_kwargs=None, encode_kwargs=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around sentence_transformers embedding models. To use, you should have the sentence_transformers python package installed. Example from langchain.embeddings import HuggingFaceEmbeddings model_name = "sentence-transformers/all-mpnet-base-v2" model_kwargs = {'device': 'cpu'} encode_kwargs = {'normalize_embeddings': False} hf = HuggingFaceEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs ) Parameters client (Any) – model_name (str) – cache_folder (Optional[str]) – model_kwargs (Dict[str, Any]) – encode_kwargs (Dict[str, Any]) – Return type None attribute cache_folder: Optional[str] = None Path to store models. Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable. attribute encode_kwargs: Dict[str, Any] [Optional] Key word arguments to pass when calling the encode method of the model. attribute model_kwargs: Dict[str, Any] [Optional] Key word arguments to pass to the model. attribute model_name: str = 'sentence-transformers/all-mpnet-base-v2' Model name to use. embed_documents(texts)[source] Compute doc embeddings using a HuggingFace transformer model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a HuggingFace transformer model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.CohereEmbeddings(*, client=None, model='embed-english-v2.0', truncate=None, cohere_api_key=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around Cohere embedding models. To use, you should have the cohere python package installed, and the environment variable COHERE_API_KEY set with your API key or pass it as a named parameter to the constructor. 
Example from langchain.embeddings import CohereEmbeddings cohere = CohereEmbeddings( model="embed-english-light-v2.0", cohere_api_key="my-api-key" ) Parameters client (Any) – model (str) – truncate (Optional[str]) – cohere_api_key (Optional[str]) – Return type None attribute model: str = 'embed-english-v2.0' Model name to use. attribute truncate: Optional[str] = None Truncate embeddings that are too long from start or end (β€œNONE”|”START”|”END”) embed_documents(texts)[source] Call out to Cohere’s embedding endpoint. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Call out to Cohere’s embedding endpoint. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.ElasticsearchEmbeddings(client, model_id, *, input_field='text_field')[source] Bases: langchain.embeddings.base.Embeddings Wrapper around Elasticsearch embedding models. This class provides an interface to generate embeddings using a model deployed in an Elasticsearch cluster. It requires an Elasticsearch connection object and the model_id of the model deployed in the cluster. In Elasticsearch you need to have an embedding model loaded and deployed. - https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html - https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html Parameters client (MlClient) – model_id (str) – input_field (str) – classmethod from_credentials(model_id, *, es_cloud_id=None, es_user=None, es_password=None, input_field='text_field')[source] Instantiate embeddings from Elasticsearch credentials. Parameters model_id (str) – The model_id of the model deployed in the Elasticsearch cluster. input_field (str) – The name of the key for the input text field in the document. Defaults to β€˜text_field’. es_cloud_id (Optional[str]) – (str, optional): The Elasticsearch cloud ID to connect to. es_user (Optional[str]) – (str, optional): Elasticsearch username. es_password (Optional[str]) – (str, optional): Elasticsearch password. Return type langchain.embeddings.elasticsearch.ElasticsearchEmbeddings Example from langchain.embeddings import ElasticsearchEmbeddings # Define the model ID and input field name (if different from default) model_id = "your_model_id" # Optional, only if different from 'text_field' input_field = "your_input_field" # Credentials can be passed in two ways. Either set the env vars # ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically # pulled in, or pass them in directly as kwargs. embeddings = ElasticsearchEmbeddings.from_credentials( model_id, input_field=input_field, # es_cloud_id="foo", # es_user="bar", # es_password="baz", ) documents = [ "This is an example document.", "Another example document to generate embeddings for.", ] embeddings_generator.embed_documents(documents) classmethod from_es_connection(model_id, es_connection, input_field='text_field')[source] Instantiate embeddings from an existing Elasticsearch connection. This method provides a way to create an instance of the ElasticsearchEmbeddings class using an existing Elasticsearch connection. The connection object is used to create an MlClient, which is then used to initialize the ElasticsearchEmbeddings instance. Args: model_id (str): The model_id of the model deployed in the Elasticsearch cluster. es_connection (elasticsearch.Elasticsearch): An existing Elasticsearch connection object. 
input_field (str, optional): The name of the key for the input text field in the document. Defaults to β€˜text_field’. Returns: ElasticsearchEmbeddings: An instance of the ElasticsearchEmbeddings class. Example from elasticsearch import Elasticsearch from langchain.embeddings import ElasticsearchEmbeddings # Define the model ID and input field name (if different from default) model_id = "your_model_id" # Optional, only if different from 'text_field' input_field = "your_input_field" # Create Elasticsearch connection es_connection = Elasticsearch( hosts=["localhost:9200"], http_auth=("user", "password") ) # Instantiate ElasticsearchEmbeddings using the existing connection embeddings = ElasticsearchEmbeddings.from_es_connection( model_id, es_connection, input_field=input_field, ) documents = [ "This is an example document.", "Another example document to generate embeddings for.", ] embeddings_generator.embed_documents(documents) Parameters model_id (str) – es_connection (Elasticsearch) – input_field (str) – Return type ElasticsearchEmbeddings embed_documents(texts)[source] Generate embeddings for a list of documents. Parameters texts (List[str]) – A list of document text strings to generate embeddings for. Returns A list of embeddings, one for each document in the inputlist. Return type List[List[float]] embed_query(text)[source] Generate an embedding for a single query text. Parameters text (str) – The query text to generate an embedding for. Returns The embedding for the input query text. Return type List[float] class langchain.embeddings.LlamaCppEmbeddings(*, client=None, model_path, n_ctx=512, n_parts=- 1, seed=- 1, f16_kv=False, logits_all=False, vocab_only=False, use_mlock=False, n_threads=None, n_batch=8, n_gpu_layers=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around llama.cpp embedding models. To use, you should have the llama-cpp-python library installed, and provide the path to the Llama model as a named parameter to the constructor. Check out: https://github.com/abetlen/llama-cpp-python Example from langchain.embeddings import LlamaCppEmbeddings llama = LlamaCppEmbeddings(model_path="/path/to/model.bin") Parameters client (Any) – model_path (str) – n_ctx (int) – n_parts (int) – seed (int) – f16_kv (bool) – logits_all (bool) – vocab_only (bool) – use_mlock (bool) – n_threads (Optional[int]) – n_batch (Optional[int]) – n_gpu_layers (Optional[int]) – Return type None attribute f16_kv: bool = False Use half-precision for key/value cache. attribute logits_all: bool = False Return logits for all tokens, not just the last token. attribute n_batch: Optional[int] = 8 Number of tokens to process in parallel. Should be a number between 1 and n_ctx. attribute n_ctx: int = 512 Token context window. attribute n_gpu_layers: Optional[int] = None Number of layers to be loaded into gpu memory. Default None. attribute n_parts: int = -1 Number of parts to split the model into. If -1, the number of parts is automatically determined. attribute n_threads: Optional[int] = None Number of threads to use. If None, the number of threads is automatically determined. attribute seed: int = -1 Seed. If -1, a random seed is used. attribute use_mlock: bool = False Force system to keep model in RAM. attribute vocab_only: bool = False Only load the vocabulary, no weights. embed_documents(texts)[source] Embed a list of documents using the Llama model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. 
Return type List[List[float]] embed_query(text)[source] Embed a query using the Llama model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.HuggingFaceHubEmbeddings(*, client=None, repo_id='sentence-transformers/all-mpnet-base-v2', task='feature-extraction', model_kwargs=None, huggingfacehub_api_token=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around HuggingFaceHub embedding models. To use, you should have the huggingface_hub python package installed, and the environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Example from langchain.embeddings import HuggingFaceHubEmbeddings repo_id = "sentence-transformers/all-mpnet-base-v2" hf = HuggingFaceHubEmbeddings( repo_id=repo_id, task="feature-extraction", huggingfacehub_api_token="my-api-key", ) Parameters client (Any) – repo_id (str) – task (Optional[str]) – model_kwargs (Optional[dict]) – huggingfacehub_api_token (Optional[str]) – Return type None attribute model_kwargs: Optional[dict] = None Key word arguments to pass to the model. attribute repo_id: str = 'sentence-transformers/all-mpnet-base-v2' Model name to use. attribute task: Optional[str] = 'feature-extraction' Task to call the model with. embed_documents(texts)[source] Call out to HuggingFaceHub’s embedding endpoint for embedding search docs. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Call out to HuggingFaceHub’s embedding endpoint for embedding query text. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.ModelScopeEmbeddings(*, embed=None, model_id='damo/nlp_corom_sentence-embedding_english-base')[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around modelscope_hub embedding models. To use, you should have the modelscope python package installed. Example from langchain.embeddings import ModelScopeEmbeddings model_id = "damo/nlp_corom_sentence-embedding_english-base" embed = ModelScopeEmbeddings(model_id=model_id) Parameters embed (Any) – model_id (str) – Return type None attribute model_id: str = 'damo/nlp_corom_sentence-embedding_english-base' Model name to use. embed_documents(texts)[source] Compute doc embeddings using a modelscope embedding model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a modelscope embedding model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.TensorflowHubEmbeddings(*, embed=None, model_url='https://tfhub.dev/google/universal-sentence-encoder-multilingual/3')[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around tensorflow_hub embedding models. To use, you should have the tensorflow_text python package installed. 
Example from langchain.embeddings import TensorflowHubEmbeddings url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3" tf = TensorflowHubEmbeddings(model_url=url) Parameters embed (Any) – model_url (str) – Return type None attribute model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3' Model name to use. embed_documents(texts)[source] Compute doc embeddings using a TensorflowHub embedding model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a TensorflowHub embedding model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.SagemakerEndpointEmbeddings(*, client=None, endpoint_name='', region_name='', credentials_profile_name=None, content_handler, model_kwargs=None, endpoint_kwargs=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around custom Sagemaker Inference Endpoints. To use, you must supply the endpoint name from your deployed Sagemaker model & the region where it is deployed. To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Sagemaker endpoint. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html Parameters client (Any) – endpoint_name (str) – region_name (str) – credentials_profile_name (Optional[str]) – content_handler (langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler) – model_kwargs (Optional[Dict]) – endpoint_kwargs (Optional[Dict]) – Return type None attribute content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required] The content handler class that provides an input and output transform functions to handle formats between LLM and the endpoint. attribute credentials_profile_name: Optional[str] = None The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html attribute endpoint_kwargs: Optional[Dict] = None Optional attributes passed to the invoke_endpoint function. See `boto3`_. docs for more info. .. _boto3: <https://boto3.amazonaws.com/v1/documentation/api/latest/index.html> attribute endpoint_name: str = '' The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region. attribute model_kwargs: Optional[Dict] = None Key word arguments to pass to the model. attribute region_name: str = '' The aws region where the Sagemaker model is deployed, eg. us-west-2. embed_documents(texts, chunk_size=64)[source] Compute doc embeddings using a SageMaker Inference Endpoint. Parameters texts (List[str]) – The list of texts to embed. chunk_size (int) – The chunk size defines how many input texts will be grouped together as request. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. 
Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a SageMaker inference endpoint. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.HuggingFaceInstructEmbeddings(*, client=None, model_name='hkunlp/instructor-large', cache_folder=None, model_kwargs=None, encode_kwargs=None, embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ')[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around sentence_transformers embedding models. To use, you should have the sentence_transformers and InstructorEmbedding python packages installed. Example from langchain.embeddings import HuggingFaceInstructEmbeddings model_name = "hkunlp/instructor-large" model_kwargs = {'device': 'cpu'} encode_kwargs = {'normalize_embeddings': True} hf = HuggingFaceInstructEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs ) Parameters client (Any) – model_name (str) – cache_folder (Optional[str]) – model_kwargs (Dict[str, Any]) – encode_kwargs (Dict[str, Any]) – embed_instruction (str) – query_instruction (str) – Return type None attribute cache_folder: Optional[str] = None Path to store models. Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable. attribute embed_instruction: str = 'Represent the document for retrieval: ' Instruction to use for embedding documents. attribute encode_kwargs: Dict[str, Any] [Optional] Key word arguments to pass when calling the encode method of the model. attribute model_kwargs: Dict[str, Any] [Optional] Key word arguments to pass to the model. attribute model_name: str = 'hkunlp/instructor-large' Model name to use. attribute query_instruction: str = 'Represent the question for retrieving supporting documents: ' Instruction to use for embedding query. embed_documents(texts)[source] Compute doc embeddings using a HuggingFace instruct model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a HuggingFace instruct model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.MosaicMLInstructorEmbeddings(*, endpoint_url='https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ', retry_sleep=1.0, mosaicml_api_token=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around MosaicML’s embedding inference service. To use, you should have the environment variable MOSAICML_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Example from langchain.llms import MosaicMLInstructorEmbeddings endpoint_url = ( "https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict" ) mosaic_llm = MosaicMLInstructorEmbeddings( endpoint_url=endpoint_url, mosaicml_api_token="my-api-key" ) Parameters endpoint_url (str) – embed_instruction (str) – query_instruction (str) – retry_sleep (float) – mosaicml_api_token (Optional[str]) – Return type None attribute embed_instruction: str = 'Represent the document for retrieval: ' Instruction used to embed documents. 
attribute endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict' Endpoint URL to use. attribute query_instruction: str = 'Represent the question for retrieving supporting documents: ' Instruction used to embed the query. attribute retry_sleep: float = 1.0 How long to sleep if a rate limit is encountered. embed_documents(texts)[source] Embed documents using a MosaicML deployed instructor embedding model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Embed a query using a MosaicML deployed instructor embedding model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.SelfHostedEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn, load_fn_kwargs=None, model_reqs=['./', 'torch'], inference_kwargs=None)[source] Bases: langchain.llms.self_hosted.SelfHostedPipeline, langchain.embeddings.base.Embeddings Runs custom embedding models on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Example using a model load function: from langchain.embeddings import SelfHostedEmbeddings from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import runhouse as rh gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") def get_pipeline(): model_id = "facebook/bart-large" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) return pipeline("feature-extraction", model=model, tokenizer=tokenizer) embeddings = SelfHostedEmbeddings( model_load_fn=get_pipeline, hardware=gpu, model_reqs=["./", "torch", "transformers"], ) Example passing in a pipeline path: from langchain.embeddings import SelfHostedHuggingFaceEmbeddings import runhouse as rh import pickle from transformers import pipeline gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") pipeline = pipeline(model="bert-base-uncased", task="feature-extraction") rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(gpu, path="models") embeddings = SelfHostedHuggingFaceEmbeddings.from_pipeline( pipeline="models/pipeline.pkl", hardware=gpu, model_reqs=["./", "torch", "transformers"], ) Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – pipeline_ref (Any) – client (Any) – inference_fn (Callable) – hardware (Any) – model_load_fn (Callable) – load_fn_kwargs (Optional[dict]) – model_reqs (List[str]) – inference_kwargs (Any) – Return type None attribute inference_fn: Callable = <function _embed_documents> Inference function to extract the embeddings on the remote hardware. attribute inference_kwargs: Any = None Any kwargs to pass to the model's inference function. embed_documents(texts)[source] Compute doc embeddings using a HuggingFace transformer model.
Parameters texts (List[str]) – The list of texts to embed.s Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a HuggingFace transformer model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.SelfHostedHuggingFaceEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn=<function load_embedding_model>, load_fn_kwargs=None, model_reqs=['./', 'sentence_transformers', 'torch'], inference_kwargs=None, model_id='sentence-transformers/all-mpnet-base-v2')[source] Bases: langchain.embeddings.self_hosted.SelfHostedEmbeddings Runs sentence_transformers embedding models on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Example from langchain.embeddings import SelfHostedHuggingFaceEmbeddings import runhouse as rh model_name = "sentence-transformers/all-mpnet-base-v2" gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu) Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – pipeline_ref (Any) – client (Any) – inference_fn (Callable) – hardware (Any) – model_load_fn (Callable) – load_fn_kwargs (Optional[dict]) – model_reqs (List[str]) – inference_kwargs (Any) – model_id (str) – Return type None attribute hardware: Any = None Remote hardware to send the inference function to. attribute inference_fn: Callable = <function _embed_documents> Inference function to extract the embeddings. attribute load_fn_kwargs: Optional[dict] = None Key word arguments to pass to the model load function. attribute model_id: str = 'sentence-transformers/all-mpnet-base-v2' Model name to use. attribute model_load_fn: Callable = <function load_embedding_model> Function to load the model remotely on the server. attribute model_reqs: List[str] = ['./', 'sentence_transformers', 'torch'] Requirements to install on hardware to inference the model. class langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn=<function load_embedding_model>, load_fn_kwargs=None, model_reqs=['./', 'InstructorEmbedding', 'torch'], inference_kwargs=None, model_id='hkunlp/instructor-large', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ')[source] Bases: langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings Runs InstructorEmbedding embedding models on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). 
To use, you should have the runhouse python package installed. Example from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings import runhouse as rh model_id = "hkunlp/instructor-large" gpu = rh.cluster(name='rh-a10x', instance_type='A100:1') hf = SelfHostedHuggingFaceInstructEmbeddings( model_id=model_id, hardware=gpu) Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – pipeline_ref (Any) – client (Any) – inference_fn (Callable) – hardware (Any) – model_load_fn (Callable) – load_fn_kwargs (Optional[dict]) – model_reqs (List[str]) – inference_kwargs (Any) – model_id (str) – embed_instruction (str) – query_instruction (str) – Return type None attribute embed_instruction: str = 'Represent the document for retrieval: ' Instruction to use for embedding documents. attribute model_id: str = 'hkunlp/instructor-large' Model name to use. attribute model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch'] Requirements to install on hardware to inference the model. attribute query_instruction: str = 'Represent the question for retrieving supporting documents: ' Instruction to use for embedding query. embed_documents(texts)[source] Compute doc embeddings using a HuggingFace instruct model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a HuggingFace instruct model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.FakeEmbeddings(*, size)[source] Bases: langchain.embeddings.base.Embeddings, pydantic.main.BaseModel Parameters size (int) – Return type None embed_documents(texts)[source] Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] embed_query(text)[source] Embed query text. Parameters text (str) – Return type List[float] class langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper for Aleph Alpha's Asymmetric Embeddings. AA provides you with an endpoint to embed a document and a query. The models were optimized to make the embeddings of documents and the query for a document as similar as possible. To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/ Example from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding embeddings = AlephAlphaAsymmetricSemanticEmbedding() document = "This is a content of the document" query = "What is the content of the document?" doc_result = embeddings.embed_documents([document]) query_result = embeddings.embed_query(query) Parameters client (Any) – model (Optional[str]) – hosting (Optional[str]) – normalize (Optional[bool]) – compress_to_size (Optional[int]) – contextual_control_threshold (Optional[int]) – control_log_additive (Optional[bool]) – aleph_alpha_api_key (Optional[str]) – Return type None attribute aleph_alpha_api_key: Optional[str] = None API key for Aleph Alpha API.
attribute compress_to_size: Optional[int] = 128 Whether the returned embeddings should come back as the original 5120-dimensional vectors or be compressed to 128 dimensions. attribute contextual_control_threshold: Optional[int] = None Attention control parameters only apply to those tokens that have explicitly been set in the request. attribute control_log_additive: Optional[bool] = True Apply controls on prompt items by adding the log(control_factor) to attention scores. attribute hosting: Optional[str] = 'https://api.aleph-alpha.com' Optional parameter that specifies which datacenters may process the request. attribute model: Optional[str] = 'luminous-base' Model name to use. attribute normalize: Optional[bool] = True Whether returned embeddings should be normalized. embed_documents(texts)[source] Call out to Aleph Alpha's asymmetric Document endpoint. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Call out to Aleph Alpha's asymmetric query embedding endpoint. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source] Bases: langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding The symmetric version of Aleph Alpha's semantic embeddings. The main difference is that here, both the documents and queries are embedded with SemanticRepresentation.Symmetric. Example from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding embeddings = AlephAlphaSymmetricSemanticEmbedding() text = "This is a test text" doc_result = embeddings.embed_documents([text]) query_result = embeddings.embed_query(text) Parameters client (Any) – model (Optional[str]) – hosting (Optional[str]) – normalize (Optional[bool]) – compress_to_size (Optional[int]) – contextual_control_threshold (Optional[int]) – control_log_additive (Optional[bool]) – aleph_alpha_api_key (Optional[str]) – Return type None embed_documents(texts)[source] Call out to Aleph Alpha's Document endpoint. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Call out to Aleph Alpha's query embedding endpoint. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] langchain.embeddings.SentenceTransformerEmbeddings alias of langchain.embeddings.huggingface.HuggingFaceEmbeddings class langchain.embeddings.MiniMaxEmbeddings(*, endpoint_url='https://api.minimax.chat/v1/embeddings', model='embo-01', embed_type_db='db', embed_type_query='query', minimax_group_id=None, minimax_api_key=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around MiniMax's embedding inference service. To use, you should have the environment variables MINIMAX_GROUP_ID and MINIMAX_API_KEY set with your group ID and API key, or pass them as named parameters to the constructor. Example from langchain.embeddings import MiniMaxEmbeddings embeddings = MiniMaxEmbeddings() query_text = "This is a test query." query_result = embeddings.embed_query(query_text) document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text]) Parameters endpoint_url (str) – model (str) – embed_type_db (str) – embed_type_query (str) – minimax_group_id (Optional[str]) – minimax_api_key (Optional[str]) – Return type None attribute embed_type_db: str = 'db' For embed_documents attribute embed_type_query: str = 'query' For embed_query attribute endpoint_url: str = 'https://api.minimax.chat/v1/embeddings' Endpoint URL to use. attribute minimax_api_key: Optional[str] = None API Key for MiniMax API. attribute minimax_group_id: Optional[str] = None Group ID for MiniMax API. attribute model: str = 'embo-01' Embeddings model name to use. embed_documents(texts)[source] Embed documents using a MiniMax embedding endpoint. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Embed a query using a MiniMax embedding endpoint. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.BedrockEmbeddings(*, client=None, region_name=None, credentials_profile_name=None, model_id='amazon.titan-e1t-medium', model_kwargs=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Embeddings provider to invoke Bedrock embedding models. To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Bedrock service. Parameters client (Any) – region_name (Optional[str]) – credentials_profile_name (Optional[str]) – model_id (str) – model_kwargs (Optional[Dict]) – Return type None attribute credentials_profile_name: Optional[str] = None The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html attribute model_id: str = 'amazon.titan-e1t-medium' Id of the model to call, e.g., amazon.titan-e1t-medium, this is equivalent to the modelId property in the list-foundation-models api attribute model_kwargs: Optional[Dict] = None Key word arguments to pass to the model. attribute region_name: Optional[str] = None The aws region e.g., us-west-2. Fallsback to AWS_DEFAULT_REGION env variable or region specified in ~/.aws/config in case it is not provided here. embed_documents(texts, chunk_size=1)[source] Compute doc embeddings using a Bedrock model. Parameters texts (List[str]) – The list of texts to embed. chunk_size (int) – Bedrock currently only allows single string inputs, so chunk size is always 1. This input is here only for compatibility with the embeddings interface. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a Bedrock model. Parameters text (str) – The text to embed. Returns Embeddings for the text. 
Return type List[float] class langchain.embeddings.DeepInfraEmbeddings(*, model_id='sentence-transformers/clip-ViT-B-32', normalize=False, embed_instruction='passage: ', query_instruction='query: ', model_kwargs=None, deepinfra_api_token=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around Deep Infra’s embedding inference service. To use, you should have the environment variable DEEPINFRA_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. There are multiple embeddings models available, see https://deepinfra.com/models?type=embeddings. Example from langchain.embeddings import DeepInfraEmbeddings deepinfra_emb = DeepInfraEmbeddings( model_id="sentence-transformers/clip-ViT-B-32", deepinfra_api_token="my-api-key" ) r1 = deepinfra_emb.embed_documents( [ "Alpha is the first letter of Greek alphabet", "Beta is the second letter of Greek alphabet", ] ) r2 = deepinfra_emb.embed_query( "What is the second letter of Greek alphabet" ) Parameters model_id (str) – normalize (bool) – embed_instruction (str) – query_instruction (str) – model_kwargs (Optional[dict]) – deepinfra_api_token (Optional[str]) – Return type None attribute embed_instruction: str = 'passage: ' Instruction used to embed documents. attribute model_id: str = 'sentence-transformers/clip-ViT-B-32' Embeddings model to use. attribute model_kwargs: Optional[dict] = None Other model keyword args attribute normalize: bool = False whether to normalize the computed embeddings attribute query_instruction: str = 'query: ' Instruction used to embed the query. embed_documents(texts)[source] Embed documents using a Deep Infra deployed embedding model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Embed a query using a Deep Infra deployed embedding model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.DashScopeEmbeddings(*, client=None, model='text-embedding-v1', dashscope_api_key=None, max_retries=5)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around DashScope embedding models. To use, you should have the dashscope python package installed, and the environment variable DASHSCOPE_API_KEY set with your API key or pass it as a named parameter to the constructor. Example from langchain.embeddings import DashScopeEmbeddings embeddings = DashScopeEmbeddings(dashscope_api_key="my-api-key") Example import os os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY" from langchain.embeddings.dashscope import DashScopeEmbeddings embeddings = DashScopeEmbeddings( model="text-embedding-v1", ) text = "This is a test query." query_result = embeddings.embed_query(text) Parameters client (Any) – model (str) – dashscope_api_key (Optional[str]) – max_retries (int) – Return type None attribute dashscope_api_key: Optional[str] = None Maximum number of retries to make when generating. embed_documents(texts)[source] Call out to DashScope’s embedding endpoint for embedding search docs. Parameters texts (List[str]) – The list of texts to embed. chunk_size – The chunk size of embeddings. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Call out to DashScope’s embedding endpoint for embedding query text. 
Parameters text (str) – The text to embed. Returns Embedding for the text. Return type List[float] class langchain.embeddings.EmbaasEmbeddings(*, model='e5-large-v2', instruction=None, api_url='https://api.embaas.io/v1/embeddings/', embaas_api_key=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around embaas’s embedding service. To use, you should have the environment variable EMBAAS_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example # Initialise with default model and instruction from langchain.embeddings import EmbaasEmbeddings emb = EmbaasEmbeddings() # Initialise with custom model and instruction from langchain.embeddings import EmbaasEmbeddings emb_model = "instructor-large" emb_inst = "Represent the Wikipedia document for retrieval" emb = EmbaasEmbeddings( model=emb_model, instruction=emb_inst ) Parameters model (str) – instruction (Optional[str]) – api_url (str) – embaas_api_key (Optional[str]) – Return type None attribute api_url: str = 'https://api.embaas.io/v1/embeddings/' The URL for the embaas embeddings API. attribute instruction: Optional[str] = None Instruction used for domain-specific embeddings. attribute model: str = 'e5-large-v2' The model used for embeddings. embed_documents(texts)[source] Get embeddings for a list of texts. Parameters texts (List[str]) – The list of texts to get embeddings for. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Get embeddings for a single text. Parameters text (str) – The text to get embeddings for. Returns List of embeddings. Return type List[float]
https://api.python.langchain.com/en/latest/modules/embeddings.html
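Every class on this page implements the same two-method interface from langchain.embeddings.base.Embeddings, so providers can be swapped without changing downstream code. A brief sketch using HuggingFaceEmbeddings (assumes sentence_transformers is installed; any of the classes above would work the same way given its own credentials or local model):

from langchain.embeddings import HuggingFaceEmbeddings  # any class on this page exposes the same interface

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

doc_vectors = embeddings.embed_documents(["document one", "document two"])  # List[List[float]]
query_vector = embeddings.embed_query("a question")                         # List[float]

assert len(doc_vectors) == 2
assert len(doc_vectors[0]) == len(query_vector)  # documents and queries share one dimensionality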
Memory class langchain.memory.CassandraChatMessageHistory(contact_points, session_id, port=9042, username='cassandra', password='cassandra', keyspace_name='chat_history', table_name='message_store')[source] Bases: langchain.schema.BaseChatMessageHistory Chat message history that stores history in Cassandra. Parameters contact_points (List[str]) – list of ips to connect to Cassandra cluster session_id (str) – arbitrary key that is used to store the messages of a single chat session. port (int) – port to connect to Cassandra cluster username (str) – username to connect to Cassandra cluster password (str) – password to connect to Cassandra cluster keyspace_name (str) – name of the keyspace to use table_name (str) – name of the table to use property messages: List[langchain.schema.BaseMessage] Retrieve the messages from Cassandra add_message(message)[source] Append the message to the record in Cassandra Parameters message (langchain.schema.BaseMessage) – Return type None clear()[source] Clear session memory from Cassandra Return type None class langchain.memory.ChatMessageHistory(*, messages=[])[source] Bases: langchain.schema.BaseChatMessageHistory, pydantic.main.BaseModel Parameters messages (List[langchain.schema.BaseMessage]) – Return type None attribute messages: List[langchain.schema.BaseMessage] = [] add_message(message)[source] Add a self-created message to the store Parameters message (langchain.schema.BaseMessage) – Return type None clear()[source] Remove all messages from the store Return type None class langchain.memory.CombinedMemory(*, memories)[source] Bases: langchain.schema.BaseMemory Class for combining multiple memories’ data together. Parameters memories (List[langchain.schema.BaseMemory]) – Return type None attribute memories: List[langchain.schema.BaseMemory] [Required] For tracking all the memories that should be accessed. clear()[source] Clear context from this session for every memory. Return type None load_memory_variables(inputs)[source] Load all vars from sub-memories. Parameters inputs (Dict[str, Any]) – Return type Dict[str, str] save_context(inputs, outputs)[source] Save context from this session for every memory. Parameters inputs (Dict[str, Any]) – outputs (Dict[str, str]) – Return type None property memory_variables: List[str] All the memory variables that this instance provides. class langchain.memory.ConversationBufferMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', memory_key='history')[source] Bases: langchain.memory.chat_memory.BaseChatMemory Buffer for storing conversation memory. Parameters chat_memory (langchain.schema.BaseChatMessageHistory) – output_key (Optional[str]) – input_key (Optional[str]) – return_messages (bool) – human_prefix (str) – ai_prefix (str) – memory_key (str) – Return type None attribute ai_prefix: str = 'AI' attribute human_prefix: str = 'Human' load_memory_variables(inputs)[source] Return history buffer. Parameters inputs (Dict[str, Any]) – Return type Dict[str, Any] property buffer: Any String buffer of memory. class langchain.memory.ConversationBufferWindowMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', memory_key='history', k=5)[source] Bases: langchain.memory.chat_memory.BaseChatMemory Buffer for storing conversation memory. 
Parameters chat_memory (langchain.schema.BaseChatMessageHistory) – output_key (Optional[str]) – input_key (Optional[str]) – return_messages (bool) – human_prefix (str) – ai_prefix (str) – memory_key (str) – k (int) – Return type None attribute ai_prefix: str = 'AI' attribute human_prefix: str = 'Human' attribute k: int = 5 load_memory_variables(inputs)[source] Return history buffer. Parameters inputs (Dict[str, Any]) – Return type Dict[str, str] property buffer: List[langchain.schema.BaseMessage] String buffer of memory. class langchain.memory.ConversationEntityMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', llm, entity_extraction_prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True), entity_summarization_prompt=PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human. 
If you are writing the summary for the first time, return a single sentence.\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\n\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\n\nFull conversation history (for context):\n{history}\n\nEntity to summarize:\n{entity}\n\nExisting summary of {entity}:\n{summary}\n\nLast line of conversation:\nHuman: {input}\nUpdated summary:', template_format='f-string', validate_template=True), entity_cache=[], k=3, chat_history_key='history', entity_store=None)[source] Bases: langchain.memory.chat_memory.BaseChatMemory Entity extractor & summarizer memory. Extracts named entities from the recent chat history and generates summaries. With a swapable entity store, persisting entities across conversations. Defaults to an in-memory entity store, and can be swapped out for a Redis, SQLite, or other entity store. Parameters chat_memory (langchain.schema.BaseChatMessageHistory) – output_key (Optional[str]) – input_key (Optional[str]) – return_messages (bool) – human_prefix (str) – ai_prefix (str) – llm (langchain.base_language.BaseLanguageModel) – entity_extraction_prompt (langchain.prompts.base.BasePromptTemplate) – entity_summarization_prompt (langchain.prompts.base.BasePromptTemplate) – entity_cache (List[str]) – k (int) – chat_history_key (str) – entity_store (langchain.memory.entity.BaseEntityStore) – Return type None attribute ai_prefix: str = 'AI' attribute chat_history_key: str = 'history' attribute entity_cache: List[str] = [] attribute entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. 
I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True) attribute entity_store: langchain.memory.entity.BaseEntityStore [Optional] attribute entity_summarization_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\n\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\n\nFull conversation history (for context):\n{history}\n\nEntity to summarize:\n{entity}\n\nExisting summary of {entity}:\n{summary}\n\nLast line of conversation:\nHuman: {input}\nUpdated summary:', template_format='f-string', validate_template=True) attribute human_prefix: str = 'Human' attribute k: int = 3 attribute llm: langchain.base_language.BaseLanguageModel [Required] clear()[source] Clear memory contents. Return type None load_memory_variables(inputs)[source] Returns chat history and all generated entities with summaries if available, and updates or clears the recent entity cache. New entity name can be found when calling this method, before the entity summaries are generated, so the entity cache values may be empty if no entity descriptions are generated yet. Parameters inputs (Dict[str, Any]) – Return type Dict[str, Any] save_context(inputs, outputs)[source] Save context from this conversation history to the entity store. Generates a summary for each entity in the entity cache by prompting the model, and saves these summaries to the entity store. Parameters inputs (Dict[str, Any]) – outputs (Dict[str, str]) – Return type None property buffer: List[langchain.schema.BaseMessage] Access chat memory messages. class langchain.memory.ConversationKGMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, k=2, human_prefix='Human', ai_prefix='AI', kg=None, knowledge_extraction_prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nConversation history:\nPerson #1: Did you hear aliens landed in Area 51?\nAI: No, I didn't hear that. 
What do you know about Area 51?\nPerson #1: It's a secret military base in Nevada.\nAI: What do you know about Nevada?\nLast line of conversation:\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: Hello.\nAI: Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversation:\nPerson #1: I'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\nLast line of conversation:\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:", template_format='f-string', validate_template=True), entity_extraction_prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True), llm, summary_message_cls=<class 'langchain.schema.SystemMessage'>, memory_key='history')[source] Bases: langchain.memory.chat_memory.BaseChatMemory Knowledge graph memory for storing conversation memory. 
Integrates with external knowledge graph to store and retrieve information about knowledge triples in the conversation. Parameters chat_memory (langchain.schema.BaseChatMessageHistory) – output_key (Optional[str]) – input_key (Optional[str]) – return_messages (bool) – k (int) – human_prefix (str) – ai_prefix (str) – kg (langchain.graphs.networkx_graph.NetworkxEntityGraph) – knowledge_extraction_prompt (langchain.prompts.base.BasePromptTemplate) – entity_extraction_prompt (langchain.prompts.base.BasePromptTemplate) – llm (langchain.base_language.BaseLanguageModel) – summary_message_cls (Type[langchain.schema.BaseMessage]) – memory_key (str) – Return type None attribute ai_prefix: str = 'AI' attribute entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True) attribute human_prefix: str = 'Human' attribute k: int = 2 attribute kg: langchain.graphs.networkx_graph.NetworkxEntityGraph [Optional] attribute knowledge_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. 
The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nConversation history:\nPerson #1: Did you hear aliens landed in Area 51?\nAI: No, I didn't hear that. What do you know about Area 51?\nPerson #1: It's a secret military base in Nevada.\nAI: What do you know about Nevada?\nLast line of conversation:\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: Hello.\nAI: Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversation:\nPerson #1: I'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\nLast line of conversation:\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:", template_format='f-string', validate_template=True) attribute llm: langchain.base_language.BaseLanguageModel [Required] attribute summary_message_cls: Type[langchain.schema.BaseMessage] = <class 'langchain.schema.SystemMessage'> Number of previous utterances to include in the context. clear()[source] Clear memory contents. Return type None get_current_entities(input_string)[source] Parameters input_string (str) – Return type List[str] get_knowledge_triplets(input_string)[source] Parameters input_string (str) – Return type List[langchain.graphs.networkx_graph.KnowledgeTriple] load_memory_variables(inputs)[source] Return history buffer. Parameters inputs (Dict[str, Any]) – Return type Dict[str, Any] save_context(inputs, outputs)[source] Save context from this conversation to buffer. Parameters inputs (Dict[str, Any]) – outputs (Dict[str, str]) – Return type None class langchain.memory.ConversationStringBufferMemory(*, human_prefix='Human', ai_prefix='AI', buffer='', output_key=None, input_key=None, memory_key='history')[source] Bases: langchain.schema.BaseMemory Buffer for storing conversation memory. Parameters human_prefix (str) – ai_prefix (str) – buffer (str) – output_key (Optional[str]) – input_key (Optional[str]) – memory_key (str) – Return type None attribute ai_prefix: str = 'AI' Prefix to use for AI generated responses. attribute buffer: str = '' attribute human_prefix: str = 'Human' attribute input_key: Optional[str] = None attribute output_key: Optional[str] = None clear()[source] Clear memory contents. Return type None load_memory_variables(inputs)[source] Return history buffer. Parameters inputs (Dict[str, Any]) – Return type Dict[str, str] save_context(inputs, outputs)[source] Save context from this conversation to buffer. 
Parameters inputs (Dict[str, Any]) – outputs (Dict[str, str]) – Return type None property memory_variables: List[str] Will always return list of memory variables. :meta private: class langchain.memory.ConversationSummaryBufferMemory(*, human_prefix='Human', ai_prefix='AI', llm, prompt=PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True), summary_message_cls=<class 'langchain.schema.SystemMessage'>, chat_memory=None, output_key=None, input_key=None, return_messages=False, max_token_limit=2000, moving_summary_buffer='', memory_key='history')[source] Bases: langchain.memory.chat_memory.BaseChatMemory, langchain.memory.summary.SummarizerMixin Buffer with summarizer for storing conversation memory. Parameters human_prefix (str) – ai_prefix (str) – llm (langchain.base_language.BaseLanguageModel) – prompt (langchain.prompts.base.BasePromptTemplate) – summary_message_cls (Type[langchain.schema.BaseMessage]) – chat_memory (langchain.schema.BaseChatMessageHistory) – output_key (Optional[str]) – input_key (Optional[str]) – return_messages (bool) – max_token_limit (int) – moving_summary_buffer (str) – memory_key (str) – Return type None attribute max_token_limit: int = 2000 attribute memory_key: str = 'history' attribute moving_summary_buffer: str = '' clear()[source] Clear memory contents. Return type None load_memory_variables(inputs)[source] Return history buffer. Parameters inputs (Dict[str, Any]) – Return type Dict[str, Any] prune()[source] Prune buffer if it exceeds max token limit Return type None save_context(inputs, outputs)[source] Save context from this conversation to buffer. Parameters inputs (Dict[str, Any]) – outputs (Dict[str, str]) – Return type None property buffer: List[langchain.schema.BaseMessage] class langchain.memory.ConversationSummaryMemory(*, human_prefix='Human', ai_prefix='AI', llm, prompt=PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. 
The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True), summary_message_cls=<class 'langchain.schema.SystemMessage'>, chat_memory=None, output_key=None, input_key=None, return_messages=False, buffer='', memory_key='history')[source] Bases: langchain.memory.chat_memory.BaseChatMemory, langchain.memory.summary.SummarizerMixin Conversation summarizer to memory. Parameters human_prefix (str) – ai_prefix (str) – llm (langchain.base_language.BaseLanguageModel) – prompt (langchain.prompts.base.BasePromptTemplate) – summary_message_cls (Type[langchain.schema.BaseMessage]) – chat_memory (langchain.schema.BaseChatMessageHistory) – output_key (Optional[str]) – input_key (Optional[str]) – return_messages (bool) – buffer (str) – memory_key (str) – Return type None attribute buffer: str = '' clear()[source] Clear memory contents. Return type None classmethod from_messages(llm, chat_memory, *, summarize_step=2, **kwargs)[source] Parameters llm (langchain.base_language.BaseLanguageModel) – chat_memory (langchain.schema.BaseChatMessageHistory) – summarize_step (int) – kwargs (Any) – Return type langchain.memory.summary.ConversationSummaryMemory load_memory_variables(inputs)[source] Return history buffer. Parameters inputs (Dict[str, Any]) – Return type Dict[str, Any] save_context(inputs, outputs)[source] Save context from this conversation to buffer. Parameters inputs (Dict[str, Any]) – outputs (Dict[str, str]) – Return type None class langchain.memory.ConversationTokenBufferMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', llm, memory_key='history', max_token_limit=2000)[source] Bases: langchain.memory.chat_memory.BaseChatMemory Buffer for storing conversation memory. Parameters chat_memory (langchain.schema.BaseChatMessageHistory) – output_key (Optional[str]) – input_key (Optional[str]) – return_messages (bool) – human_prefix (str) – ai_prefix (str) – llm (langchain.base_language.BaseLanguageModel) – memory_key (str) – max_token_limit (int) – Return type None attribute ai_prefix: str = 'AI' attribute human_prefix: str = 'Human' attribute llm: langchain.base_language.BaseLanguageModel [Required] attribute max_token_limit: int = 2000 attribute memory_key: str = 'history' load_memory_variables(inputs)[source] Return history buffer. Parameters inputs (Dict[str, Any]) – Return type Dict[str, Any] save_context(inputs, outputs)[source] Save context from this conversation to buffer. Pruned. Parameters inputs (Dict[str, Any]) – outputs (Dict[str, str]) – Return type None property buffer: List[langchain.schema.BaseMessage] String buffer of memory. class langchain.memory.CosmosDBChatMessageHistory(cosmos_endpoint, cosmos_database, cosmos_container, session_id, user_id, credential=None, connection_string=None, ttl=None, cosmos_client_kwargs=None)[source] Bases: langchain.schema.BaseChatMessageHistory Chat history backed by Azure CosmosDB. Parameters cosmos_endpoint (str) – cosmos_database (str) – cosmos_container (str) – session_id (str) – user_id (str) – credential (Any) – connection_string (Optional[str]) – ttl (Optional[int]) – cosmos_client_kwargs (Optional[dict]) – prepare_cosmos()[source] Prepare the CosmosDB client. Use this function or the context manager to make sure your database is ready. 
Return type None load_messages()[source] Retrieve the messages from Cosmos Return type None add_message(message)[source] Add a self-created message to the store Parameters message (langchain.schema.BaseMessage) – Return type None upsert_messages()[source] Update the cosmosdb item. Return type None clear()[source] Clear session memory from this memory and cosmos. Return type None class langchain.memory.DynamoDBChatMessageHistory(table_name, session_id, endpoint_url=None)[source] Bases: langchain.schema.BaseChatMessageHistory Chat message history that stores history in AWS DynamoDB. This class expects that a DynamoDB table with name table_name and a partition Key of SessionId is present. Parameters table_name (str) – name of the DynamoDB table session_id (str) – arbitrary key that is used to store the messages of a single chat session. endpoint_url (Optional[str]) – URL of the AWS endpoint to connect to. This argument is optional and useful for test purposes, like using Localstack. If you plan to use AWS cloud service, you normally don’t have to worry about setting the endpoint_url. property messages: List[langchain.schema.BaseMessage] Retrieve the messages from DynamoDB add_message(message)[source] Append the message to the record in DynamoDB Parameters message (langchain.schema.BaseMessage) – Return type None clear()[source] Clear session memory from DynamoDB Return type None class langchain.memory.FileChatMessageHistory(file_path)[source] Bases: langchain.schema.BaseChatMessageHistory Chat message history that stores history in a local file. Parameters file_path (str) – path of the local file to store the messages. property messages: List[langchain.schema.BaseMessage] Retrieve the messages from the local file add_message(message)[source] Append the message to the record in the local file Parameters message (langchain.schema.BaseMessage) – Return type None clear()[source] Clear session memory from the local file Return type None class langchain.memory.InMemoryEntityStore(*, store={})[source] Bases: langchain.memory.entity.BaseEntityStore Basic in-memory entity store. Parameters store (Dict[str, Optional[str]]) – Return type None attribute store: Dict[str, Optional[str]] = {} clear()[source] Delete all entities from store. Return type None delete(key)[source] Delete entity value from store. Parameters key (str) – Return type None exists(key)[source] Check if entity exists in store. Parameters key (str) – Return type bool get(key, default=None)[source] Get entity value from store. Parameters key (str) – default (Optional[str]) – Return type Optional[str] set(key, value)[source] Set entity value in store. Parameters key (str) – value (Optional[str]) – Return type None class langchain.memory.MomentoChatMessageHistory(session_id, cache_client, cache_name, *, key_prefix='message_store:', ttl=None, ensure_cache_exists=True)[source] Bases: langchain.schema.BaseChatMessageHistory Chat message history cache that uses Momento as a backend. See https://gomomento.com/ Parameters session_id (str) – cache_client (momento.CacheClient) – cache_name (str) – key_prefix (str) – ttl (Optional[timedelta]) – ensure_cache_exists (bool) – classmethod from_client_params(session_id, cache_name, ttl, *, configuration=None, auth_token=None, **kwargs)[source] Construct cache from CacheClient parameters. 
Parameters session_id (str) – cache_name (str) – ttl (timedelta) – configuration (Optional[momento.config.Configuration]) – auth_token (Optional[str]) – kwargs (Any) – Return type MomentoChatMessageHistory property messages: list[langchain.schema.BaseMessage] Retrieve the messages from Momento. Raises SdkException – Momento service or network error Exception – Unexpected response Returns List of cached messages Return type list[BaseMessage] add_message(message)[source] Store a message in the cache. Parameters message (BaseMessage) – The message object to store. Raises SdkException – Momento service or network error. Exception – Unexpected response. Return type None clear()[source] Remove the session’s messages from the cache. Raises SdkException – Momento service or network error. Exception – Unexpected response. Return type None class langchain.memory.MongoDBChatMessageHistory(connection_string, session_id, database_name='chat_history', collection_name='message_store')[source] Bases: langchain.schema.BaseChatMessageHistory Chat message history that stores history in MongoDB. Parameters connection_string (str) – connection string to connect to MongoDB session_id (str) – arbitrary key that is used to store the messages of a single chat session. database_name (str) – name of the database to use collection_name (str) – name of the collection to use property messages: List[langchain.schema.BaseMessage] Retrieve the messages from MongoDB add_message(message)[source] Append the message to the record in MongoDB Parameters message (langchain.schema.BaseMessage) – Return type None clear()[source] Clear session memory from MongoDB Return type None class langchain.memory.MotorheadMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, url='https://api.getmetal.io/v1/motorhead', session_id, context=None, api_key=None, client_id=None, timeout=3000, memory_key='history')[source] Bases: langchain.memory.chat_memory.BaseChatMemory Parameters chat_memory (langchain.schema.BaseChatMessageHistory) – output_key (Optional[str]) – input_key (Optional[str]) – return_messages (bool) – url (str) – session_id (str) – context (Optional[str]) – api_key (Optional[str]) – client_id (Optional[str]) – timeout (int) – memory_key (str) – Return type None attribute api_key: Optional[str] = None attribute client_id: Optional[str] = None attribute context: Optional[str] = None attribute session_id: str [Required] attribute url: str = 'https://api.getmetal.io/v1/motorhead' delete_session()[source] Delete a session Return type None async init()[source] Return type None load_memory_variables(values)[source] Return key-value pairs given the text input to the chain. If None, return all memories Parameters values (Dict[str, Any]) – Return type Dict[str, Any] save_context(inputs, outputs)[source] Save context from this conversation to buffer. Parameters inputs (Dict[str, Any]) – outputs (Dict[str, str]) – Return type None property memory_variables: List[str] Input keys this memory class will load dynamically. class langchain.memory.PostgresChatMessageHistory(session_id, connection_string='postgresql://postgres:mypassword@localhost/chat_history', table_name='message_store')[source] Bases: langchain.schema.BaseChatMessageHistory Chat message history stored in a Postgres database. 
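Example (a minimal usage sketch; the session id and connection string below are placeholders and the sketch assumes a reachable Postgres instance):

from langchain.memory import PostgresChatMessageHistory
from langchain.schema import AIMessage, HumanMessage

history = PostgresChatMessageHistory(
    session_id="example-session",
    connection_string="postgresql://postgres:mypassword@localhost/chat_history",
)
history.add_message(HumanMessage(content="hi!"))
history.add_message(AIMessage(content="hello, how can I help?"))
print(history.messages)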
Parameters session_id (str) – connection_string (str) – table_name (str) – property messages: List[langchain.schema.BaseMessage] Retrieve the messages from PostgreSQL add_message(message)[source] Append the message to the record in PostgreSQL Parameters message (langchain.schema.BaseMessage) – Return type None clear()[source] Clear session memory from PostgreSQL Return type None class langchain.memory.ReadOnlySharedMemory(*, memory)[source] Bases: langchain.schema.BaseMemory A memory wrapper that is read-only and cannot be changed. Parameters memory (langchain.schema.BaseMemory) – Return type None attribute memory: langchain.schema.BaseMemory [Required] clear()[source] Nothing to clear, got a memory like a vault. Return type None load_memory_variables(inputs)[source] Load memory variables from memory. Parameters inputs (Dict[str, Any]) – Return type Dict[str, str] save_context(inputs, outputs)[source] Nothing should be saved or changed Parameters inputs (Dict[str, Any]) – outputs (Dict[str, str]) – Return type None property memory_variables: List[str] Return memory variables. class langchain.memory.RedisChatMessageHistory(session_id, url='redis://localhost:6379/0', key_prefix='message_store:', ttl=None)[source] Bases: langchain.schema.BaseChatMessageHistory Chat message history stored in a Redis database. Parameters session_id (str) – url (str) – key_prefix (str) – ttl (Optional[int]) – property key: str Construct the record key to use property messages: List[langchain.schema.BaseMessage] Retrieve the messages from Redis add_message(message)[source] Append the message to the record in Redis Parameters message (langchain.schema.BaseMessage) – Return type None clear()[source] Clear session memory from Redis Return type None class langchain.memory.RedisEntityStore(session_id='default', url='redis://localhost:6379/0', key_prefix='memory_store', ttl=86400, recall_ttl=259200, *args, redis_client=None)[source] Bases: langchain.memory.entity.BaseEntityStore Redis-backed Entity store. Entities get a TTL of 1 day by default, and that TTL is extended by 3 days every time the entity is read back. Parameters session_id (str) – url (str) – key_prefix (str) – ttl (Optional[int]) – recall_ttl (Optional[int]) – args (Any) – redis_client (Any) – Return type None attribute key_prefix: str = 'memory_store' attribute recall_ttl: Optional[int] = 259200 attribute redis_client: Any = None attribute session_id: str = 'default' attribute ttl: Optional[int] = 86400 clear()[source] Delete all entities from store. Return type None delete(key)[source] Delete entity value from store. Parameters key (str) – Return type None exists(key)[source] Check if entity exists in store. Parameters key (str) – Return type bool get(key, default=None)[source] Get entity value from store. Parameters key (str) – default (Optional[str]) – Return type Optional[str] set(key, value)[source] Set entity value in store. Parameters key (str) – value (Optional[str]) – Return type None property full_key_prefix: str class langchain.memory.SQLChatMessageHistory(session_id, connection_string, table_name='message_store')[source] Bases: langchain.schema.BaseChatMessageHistory Chat message history stored in an SQL database. 
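Example (a sketch assuming a SQLAlchemy-style connection string; here a local SQLite file is used as the backing database and the session id is a placeholder):

from langchain.memory import SQLChatMessageHistory
from langchain.schema import AIMessage, HumanMessage

history = SQLChatMessageHistory(
    session_id="example-session",
    connection_string="sqlite:///chat_history.db",
)
history.add_message(HumanMessage(content="remember that my cat is named Miso"))
history.add_message(AIMessage(content="noted!"))
print(history.messages)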
Parameters session_id (str) – connection_string (str) – table_name (str) – property messages: List[langchain.schema.BaseMessage] Retrieve all messages from db add_message(message)[source] Append the message to the record in db Parameters message (langchain.schema.BaseMessage) – Return type None clear()[source] Clear session memory from db Return type None class langchain.memory.SQLiteEntityStore(session_id='default', db_file='entities.db', table_name='memory_store', *args)[source] Bases: langchain.memory.entity.BaseEntityStore SQLite-backed Entity store Parameters session_id (str) – db_file (str) – table_name (str) – args (Any) – Return type None attribute session_id: str = 'default' attribute table_name: str = 'memory_store' clear()[source] Delete all entities from store. Return type None delete(key)[source] Delete entity value from store. Parameters key (str) – Return type None exists(key)[source] Check if entity exists in store. Parameters key (str) – Return type bool get(key, default=None)[source] Get entity value from store. Parameters key (str) – default (Optional[str]) – Return type Optional[str] set(key, value)[source] Set entity value in store. Parameters key (str) – value (Optional[str]) – Return type None property full_table_name: str class langchain.memory.SimpleMemory(*, memories={})[source] Bases: langchain.schema.BaseMemory Simple memory for storing context or other bits of information that shouldn’t ever change between prompts. Parameters memories (Dict[str, Any]) – Return type None attribute memories: Dict[str, Any] = {} clear()[source] Nothing to clear, got a memory like a vault. Return type None load_memory_variables(inputs)[source] Return key-value pairs given the text input to the chain. If None, return all memories Parameters inputs (Dict[str, Any]) – Return type Dict[str, str] save_context(inputs, outputs)[source] Nothing should be saved or changed, my memory is set in stone. Parameters inputs (Dict[str, Any]) – outputs (Dict[str, str]) – Return type None property memory_variables: List[str] Input keys this memory class will load dynamically. class langchain.memory.VectorStoreRetrieverMemory(*, retriever, memory_key='history', input_key=None, return_docs=False)[source] Bases: langchain.schema.BaseMemory Class for a VectorStore-backed memory object. Parameters retriever (langchain.vectorstores.base.VectorStoreRetriever) – memory_key (str) – input_key (Optional[str]) – return_docs (bool) – Return type None attribute input_key: Optional[str] = None Key name to index the inputs to load_memory_variables. attribute memory_key: str = 'history' Key name to locate the memories in the result of load_memory_variables. attribute retriever: langchain.vectorstores.base.VectorStoreRetriever [Required] VectorStoreRetriever object to connect to. attribute return_docs: bool = False Whether or not to return the result of querying the database directly. clear()[source] Nothing to clear. Return type None load_memory_variables(inputs)[source] Return history buffer. Parameters inputs (Dict[str, Any]) – Return type Dict[str, Union[List[langchain.schema.Document], str]] save_context(inputs, outputs)[source] Save context from this conversation to buffer. Parameters inputs (Dict[str, Any]) – outputs (Dict[str, str]) – Return type None property memory_variables: List[str] The list of keys emitted from the load_memory_variables method. 
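Example (a sketch that backs the memory with an empty FAISS index; it assumes the faiss package is installed, an OpenAI API key is available for the embeddings, and the embedding size of 1536 matches the default OpenAI embedding model):

import faiss

from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.memory import VectorStoreRetrieverMemory
from langchain.vectorstores import FAISS

embedding_fn = OpenAIEmbeddings().embed_query
index = faiss.IndexFlatL2(1536)  # empty index; dimensionality assumed to match the embedding model
vectorstore = FAISS(embedding_fn, index, InMemoryDocstore({}), {})

memory = VectorStoreRetrieverMemory(retriever=vectorstore.as_retriever(search_kwargs={"k": 1}))
memory.save_context({"input": "My favorite sport is soccer"}, {"output": "good to know"})
# Later, the most relevant stored snippet is returned under the "history" key.
print(memory.load_memory_variables({"input": "what sport should I watch?"})["history"])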
class langchain.memory.ZepChatMessageHistory(session_id, url='http://localhost:8000')[source] Bases: langchain.schema.BaseChatMessageHistory A ChatMessageHistory implementation that uses Zep as a backend. Recommended usage: # Set up Zep Chat History zep_chat_history = ZepChatMessageHistory( session_id=session_id, url=ZEP_API_URL, ) # Use a standard ConversationBufferMemory to encapsulate the Zep chat history memory = ConversationBufferMemory( memory_key="chat_history", chat_memory=zep_chat_history ) Zep provides long-term conversation storage for LLM apps. The server stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs. For server installation instructions and more, see: https://getzep.github.io/ This class is a thin wrapper around the zep-python package. Additional Zep functionality is exposed via the zep_summary and zep_messages properties. For more information on the zep-python package, see: https://github.com/getzep/zep-python Parameters session_id (str) – url (str) – Return type None property messages: List[langchain.schema.BaseMessage] Retrieve messages from Zep memory property zep_messages: List[Message] Retrieve the underlying Zep Message objects from Zep memory property zep_summary: Optional[str] Retrieve summary from Zep memory add_message(message)[source] Append the message to the Zep memory history Parameters message (langchain.schema.BaseMessage) – Return type None search(query, metadata=None, limit=None)[source] Search Zep memory for messages matching the query Parameters query (str) – metadata (Optional[Dict]) – limit (Optional[int]) – Return type List[MemorySearchResult] clear()[source] Clear session memory from Zep. Note that Zep is long-term storage for memory and this is not advised unless you have specific data retention requirements. Return type None
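Building on the recommended usage above, a small sketch of the search method (assumes a running Zep server and that messages have already been added to the session; the query string is illustrative):

# Semantic search over the session's stored messages
results = zep_chat_history.search("sports the user likes", limit=3)
for result in results:
    print(result)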
https://api.python.langchain.com/en/latest/modules/memory.html
7702ed66-abcf-4761-ad44-02001b146132
Output Parsers class langchain.output_parsers.BooleanOutputParser(*, true_val='YES', false_val='NO')[source] Bases: langchain.schema.BaseOutputParser[bool] Parameters true_val (str) – false_val (str) – Return type None attribute false_val: str = 'NO' attribute true_val: str = 'YES' parse(text)[source] Parse the output of an LLM call to a boolean. Parameters text (str) – output of language model Returns boolean Return type bool class langchain.output_parsers.CombiningOutputParser(*, parsers)[source] Bases: langchain.schema.BaseOutputParser Class to combine multiple output parsers into one. Parameters parsers (List[langchain.schema.BaseOutputParser]) – Return type None attribute parsers: List[langchain.schema.BaseOutputParser] [Required] get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(text)[source] Parse the output of an LLM call. Parameters text (str) – Return type Dict[str, Any] class langchain.output_parsers.CommaSeparatedListOutputParser[source] Bases: langchain.output_parsers.list.ListOutputParser Parse out comma separated lists. Return type None get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(text)[source] Parse the output of an LLM call. Parameters text (str) – Return type List[str] class langchain.output_parsers.DatetimeOutputParser(*, format='%Y-%m-%dT%H:%M:%S.%fZ')[source] Bases: langchain.schema.BaseOutputParser[datetime.datetime] Parameters format (str) – Return type None attribute format: str = '%Y-%m-%dT%H:%M:%S.%fZ' get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(response)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Parameters text – output of language model response (str) – Returns structured output Return type datetime.datetime class langchain.output_parsers.EnumOutputParser(*, enum)[source] Bases: langchain.schema.BaseOutputParser Parameters enum (Type[enum.Enum]) – Return type None attribute enum: Type[enum.Enum] [Required] get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(response)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Parameters text – output of language model response (str) – Returns structured output Return type Any class langchain.output_parsers.GuardrailsOutputParser(*, guard=None, api=None, args=None, kwargs=None)[source] Bases: langchain.schema.BaseOutputParser Parameters guard (Any) – api (Optional[Callable]) – args (Any) – kwargs (Any) – Return type None attribute api: Optional[Callable] = None attribute args: Any = None attribute guard: Any = None attribute kwargs: Any = None classmethod from_rail(rail_file, num_reasks=1, api=None, *args, **kwargs)[source] Parameters rail_file (str) – num_reasks (int) – api (Optional[Callable]) – args (Any) – kwargs (Any) – Return type langchain.output_parsers.rail_parser.GuardrailsOutputParser classmethod from_rail_string(rail_str, num_reasks=1, api=None, *args, **kwargs)[source] Parameters rail_str (str) – num_reasks (int) – api (Optional[Callable]) – args (Any) – kwargs (Any) – Return type langchain.output_parsers.rail_parser.GuardrailsOutputParser get_format_instructions()[source] Instructions on how the LLM output should be formatted. 
Return type str parse(text)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Parameters text (str) – output of language model Returns structured output Return type Dict class langchain.output_parsers.ListOutputParser[source] Bases: langchain.schema.BaseOutputParser Class to parse the output of an LLM call to a list. Return type None abstract parse(text)[source] Parse the output of an LLM call. Parameters text (str) – Return type List[str] class langchain.output_parsers.OutputFixingParser(*, parser, retry_chain)[source] Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] Wraps a parser and tries to fix parsing errors. Parameters parser (langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]) – retry_chain (langchain.chains.llm.LLMChain) – Return type None attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] [Required] attribute retry_chain: langchain.chains.llm.LLMChain [Required] classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'error', 'instructions'], output_parser=None, partial_variables={}, template='Instructions:\n--------------\n{instructions}\n--------------\nCompletion:\n--------------\n{completion}\n--------------\n\nAbove, the Completion did not satisfy the constraints given in the Instructions.\nError:\n--------------\n{error}\n--------------\n\nPlease try again. Please only respond with an answer that satisfies the constraints laid out in the Instructions:', template_format='f-string', validate_template=True))[source] Parameters llm (langchain.base_language.BaseLanguageModel) – parser (langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]) – prompt (langchain.prompts.base.BasePromptTemplate) – Return type langchain.output_parsers.fix.OutputFixingParser[langchain.output_parsers.fix.T] get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(completion)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Parameters text – output of language model completion (str) – Returns structured output Return type langchain.output_parsers.fix.T class langchain.output_parsers.PydanticOutputParser(*, pydantic_object)[source] Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.pydantic.T] Parameters pydantic_object (Type[langchain.output_parsers.pydantic.T]) – Return type None attribute pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required] get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(text)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Parameters text (str) – output of language model Returns structured output Return type langchain.output_parsers.pydantic.T class langchain.output_parsers.RegexDictParser(*, regex_pattern="{}:\\s?([^.'\\n']*)\\.?", output_key_to_format, no_update_value=None)[source] Bases: langchain.schema.BaseOutputParser Class to parse the output into a dictionary. 
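Example (a sketch; the keys, format labels, and sample text are illustrative, and the default regex_pattern is used):

from langchain.output_parsers import RegexDictParser

parser = RegexDictParser(
    output_key_to_format={"action": "Action", "action_input": "Action Input"},
)
text = "Action: search. Action Input: current weather in SF."
print(parser.parse(text))
# expected roughly: {'action': 'search', 'action_input': 'current weather in SF'}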
Parameters regex_pattern (str) – output_key_to_format (Dict[str, str]) – no_update_value (Optional[str]) – Return type None attribute no_update_value: Optional[str] = None attribute output_key_to_format: Dict[str, str] [Required] attribute regex_pattern: str = "{}:\\s?([^.'\\n']*)\\.?" parse(text)[source] Parse the output of an LLM call. Parameters text (str) – Return type Dict[str, str] class langchain.output_parsers.RegexParser(*, regex, output_keys, default_output_key=None)[source] Bases: langchain.schema.BaseOutputParser Class to parse the output into a dictionary. Parameters regex (str) – output_keys (List[str]) – default_output_key (Optional[str]) – Return type None attribute default_output_key: Optional[str] = None attribute output_keys: List[str] [Required] attribute regex: str [Required] parse(text)[source] Parse the output of an LLM call. Parameters text (str) – Return type Dict[str, str] class langchain.output_parsers.ResponseSchema(*, name, description, type='string')[source] Bases: pydantic.main.BaseModel Parameters name (str) – description (str) – type (str) – Return type None attribute description: str [Required] attribute name: str [Required] attribute type: str = 'string' class langchain.output_parsers.RetryOutputParser(*, parser, retry_chain)[source] Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] Wraps a parser and tries to fix parsing errors. Does this by passing the original prompt and the completion to another LLM, and telling it the completion did not satisfy criteria in the prompt. Parameters parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) – retry_chain (langchain.chains.llm.LLMChain) – Return type None attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required] attribute retry_chain: langchain.chains.llm.LLMChain [Required] classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nPlease try again:', template_format='f-string', validate_template=True))[source] Parameters llm (langchain.base_language.BaseLanguageModel) – parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) – prompt (langchain.prompts.base.BasePromptTemplate) – Return type langchain.output_parsers.retry.RetryOutputParser[langchain.output_parsers.retry.T] get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(completion)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Parameters text – output of language model completion (str) – Returns structured output Return type langchain.output_parsers.retry.T parse_with_prompt(completion, prompt_value)[source] Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. 
Parameters completion (str) – output of language model prompt – prompt value prompt_value (langchain.schema.PromptValue) – Returns structured output Return type langchain.output_parsers.retry.T class langchain.output_parsers.RetryWithErrorOutputParser(*, parser, retry_chain)[source] Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] Wraps a parser and tries to fix parsing errors. Does this by passing the original prompt, the completion, AND the error that was raised to another language model and telling it that the completion did not work, and raised the given error. Differs from RetryOutputParser in that this implementation provides the error that was raised back to the LLM, which in theory should give it more information on how to fix it. Parameters parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) – retry_chain (langchain.chains.llm.LLMChain) – Return type None attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required] attribute retry_chain: langchain.chains.llm.LLMChain [Required] classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'error', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nDetails: {error}\nPlease try again:', template_format='f-string', validate_template=True))[source] Parameters llm (langchain.base_language.BaseLanguageModel) – parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) – prompt (langchain.prompts.base.BasePromptTemplate) – Return type langchain.output_parsers.retry.RetryWithErrorOutputParser[langchain.output_parsers.retry.T] get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(completion)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Parameters text – output of language model completion (str) – Returns structured output Return type langchain.output_parsers.retry.T parse_with_prompt(completion, prompt_value)[source] Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion (str) – output of language model prompt – prompt value prompt_value (langchain.schema.PromptValue) – Returns structured output Return type langchain.output_parsers.retry.T class langchain.output_parsers.StructuredOutputParser(*, response_schemas)[source] Bases: langchain.schema.BaseOutputParser Parameters response_schemas (List[langchain.output_parsers.structured.ResponseSchema]) – Return type None attribute response_schemas: List[langchain.output_parsers.structured.ResponseSchema] [Required] classmethod from_response_schemas(response_schemas)[source] Parameters response_schemas (List[langchain.output_parsers.structured.ResponseSchema]) – Return type langchain.output_parsers.structured.StructuredOutputParser get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(text)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Parameters text (str) – output of language model Returns structured output Return type Any
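Example (a sketch; the schema names and the simulated model output are illustrative):

from langchain.output_parsers import ResponseSchema, StructuredOutputParser

schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="source", description="source used to answer the question"),
]
parser = StructuredOutputParser.from_response_schemas(schemas)

# get_format_instructions() returns text to embed in the prompt; the model is asked
# to respond with a json markdown code block matching the schemas.
format_instructions = parser.get_format_instructions()

simulated_model_output = '```json\n{"answer": "Paris", "source": "a geography textbook"}\n```'
print(parser.parse(simulated_model_output))
# expected roughly: {'answer': 'Paris', 'source': 'a geography textbook'}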
https://api.python.langchain.com/en/latest/modules/output_parsers.html
392b7fe6-4662-40df-96c9-d94e21dcb8f8
"Tools\nCore toolkit implementations.\nclass langchain.tools.AIPluginTool(*, name, description, a(...TRUNCATED)
https://api.python.langchain.com/en/latest/modules/tools.html
18521a36-3fee-40f9-86fe-e77cfb18d8be
"Callbacks\nCallback handlers that allow listening to events in LangChain.\nclass langchain.callb(...TRUNCATED)
https://api.python.langchain.com/en/latest/modules/callbacks.html
fb470470-dfc6-4e38-8a18-8c5a864d5999
"Document Loaders\nAll different types of document loaders.\nclass langchain.document_loaders.Acr(...TRUNCATED)
https://api.python.langchain.com/en/latest/modules/document_loaders.html
