Models
LangChain provides interfaces and integrations for a number of different types of models.
LLMs
Chat Models
https://api.python.langchain.com/en/latest/models.html
Model I/O
LangChain provides interfaces and integrations for working with language models.
Prompts
Models
Output Parsers
https://api.python.langchain.com/en/latest/model_io.html
Prompts
The reference guides here all relate to objects for working with Prompts.
Prompt Templates
Example Selector
https://api.python.langchain.com/en/latest/prompts.html
Data connection
LangChain has a number of modules that help you load, structure, store, and retrieve documents.
Document Loaders
Document Transformers
Embeddings
Vector Stores
Retrievers
https://api.python.langchain.com/en/latest/data_connection.html
Embeddings
Wrappers around embedding modules.
class langchain.embeddings.OpenAIEmbeddings(*, client=None, model='text-embedding-ada-002', deployment='text-embedding-ada-002', openai_api_version=None, openai_api_base=None, openai_api_type=None, openai_proxy=None, embedding_ctx_length=8191, openai_api_key=None, openai_organization=None, allowed_special={}, disallowed_special='all', chunk_size=1000, max_retries=6, request_timeout=None, headers=None, tiktoken_model_name=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around OpenAI embedding models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="my-api-key")
In order to use the library with Microsoft Azure endpoints, you need to set
the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION.
The OPENAI_API_TYPE must be set to "azure" and the others correspond to
the properties of your endpoint.
In addition, the deployment name must be passed as the model parameter.
Example
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080"
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
deployment="your-embeddings-deployment-name",
model="your-embeddings-model-name",
openai_api_base="https://your-endpoint.openai.azure.com/",
openai_api_type="azure",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
Parameters
client (Any) β
model (str) β
deployment (str) β
openai_api_version (Optional[str]) β
openai_api_base (Optional[str]) β
openai_api_type (Optional[str]) β
openai_proxy (Optional[str]) β
embedding_ctx_length (int) β
openai_api_key (Optional[str]) β
openai_organization (Optional[str]) β
allowed_special (Union[Literal['all'], typing.Set[str]]) β
disallowed_special (Union[Literal['all'], typing.Set[str], typing.Sequence[str]]) β
chunk_size (int) β
max_retries (int) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
headers (Any) β
tiktoken_model_name (Optional[str]) β
Return type
None
attribute chunk_size: int = 1000ο
Maximum number of texts to embed in each batch
attribute max_retries: int = 6ο
Maximum number of retries to make when generating.
attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = Noneο
Timeout in seconds for the OpenAI request.
attribute tiktoken_model_name: Optional[str] = Noneο
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
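For illustration, here is a minimal sketch of pinning tiktoken_model_name when pointing OpenAIEmbeddings at an OpenAI-compatible endpoint whose model name tiktoken does not recognize; the base URL and model names below are placeholders, not real endpoints.
from langchain.embeddings import OpenAIEmbeddings

# Hypothetical OpenAI-compatible endpoint serving a model tiktoken does not know.
embeddings = OpenAIEmbeddings(
    openai_api_base="https://your-provider.example.com/v1",  # placeholder base URL
    openai_api_key="my-api-key",
    model="your-custom-embedding-model",  # placeholder model name
    tiktoken_model_name="text-embedding-ada-002",  # count tokens with a tokenizer tiktoken knows
)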
async aembed_documents(texts, chunk_size=0)[source]ο
Call out to OpenAI's embedding endpoint async for embedding search docs.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size (Optional[int]) β The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
async aembed_query(text)[source]ο
Call out to OpenAI's embedding endpoint async for embedding query text.
Parameters
text (str) β The text to embed.
Returns
Embedding for the text.
Return type
List[float]
embed_documents(texts, chunk_size=0)[source]ο
Call out to OpenAI's embedding endpoint for embedding search docs.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size (Optional[int]) β The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to OpenAI's embedding endpoint for embedding query text.
Parameters
text (str) β The text to embed.
Returns
Embedding for the text.
Return type
List[float]
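As a rough usage sketch (assuming OPENAI_API_KEY is set, or a key is passed in as below), the async variants can be awaited like this:
import asyncio

from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(openai_api_key="my-api-key")

async def main():
    # Embed several documents and a query via the async endpoints.
    doc_vectors = await embeddings.aembed_documents(
        ["First document.", "Second document."]
    )
    query_vector = await embeddings.aembed_query("What is in the documents?")
    print(len(doc_vectors), len(query_vector))

asyncio.run(main())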
class langchain.embeddings.HuggingFaceEmbeddings(*, client=None, model_name='sentence-transformers/all-mpnet-base-v2', cache_folder=None, model_kwargs=None, encode_kwargs=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers python package installed.
Example
from langchain.embeddings import HuggingFaceEmbeddings
model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': False}
hf = HuggingFaceEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
Parameters
client (Any) β
model_name (str) β
cache_folder (Optional[str]) β
model_kwargs (Dict[str, Any]) β
encode_kwargs (Dict[str, Any]) β
Return type
None
attribute cache_folder: Optional[str] = Noneο
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
attribute encode_kwargs: Dict[str, Any] [Optional]ο
Key word arguments to pass when calling the encode method of the model.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Key word arguments to pass to the model.
attribute model_name: str = 'sentence-transformers/all-mpnet-base-v2'ο
Model name to use.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a HuggingFace transformer model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
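A minimal sketch of putting the returned vectors to work for similarity ranking; numpy and the example sentences are illustrative and not part of the API:
import numpy as np

from langchain.embeddings import HuggingFaceEmbeddings

hf = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

docs = [
    "LangChain wraps many embedding providers behind one interface.",
    "Bananas are a good source of potassium.",
]
doc_vectors = np.array(hf.embed_documents(docs))
query_vector = np.array(hf.embed_query("Which library wraps embedding models?"))

# Cosine similarity between the query and each document.
scores = doc_vectors @ query_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
print(sorted(zip(scores, docs), reverse=True))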
class langchain.embeddings.CohereEmbeddings(*, client=None, model='embed-english-v2.0', truncate=None, cohere_api_key=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around Cohere embedding models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import CohereEmbeddings
cohere = CohereEmbeddings(
model="embed-english-light-v2.0", cohere_api_key="my-api-key"
)
Parameters
client (Any) β
model (str) β
truncate (Optional[str]) β
cohere_api_key (Optional[str]) β
Return type
None
attribute model: str = 'embed-english-v2.0'ο
Model name to use.
attribute truncate: Optional[str] = Noneο
Truncate embeddings that are too long from start or end ("NONE"|"START"|"END")
embed_documents(texts)[source]ο
Call out to Cohere's embedding endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to Cohere's embedding endpoint.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
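A short usage sketch; the API key is a placeholder and the truncate setting is one of the allowed values listed above:
from langchain.embeddings import CohereEmbeddings

cohere = CohereEmbeddings(cohere_api_key="my-api-key", truncate="END")
doc_vectors = cohere.embed_documents(["First document.", "Second document."])
query_vector = cohere.embed_query("A question about the documents.")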
class langchain.embeddings.ElasticsearchEmbeddings(client, model_id, *, input_field='text_field')[source]ο
Bases: langchain.embeddings.base.Embeddings
Wrapper around Elasticsearch embedding models.
This class provides an interface to generate embeddings using a model deployed
in an Elasticsearch cluster. It requires an Elasticsearch connection object
and the model_id of the model deployed in the cluster.
In Elasticsearch you need to have an embedding model loaded and deployed.
- https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html
- https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html
Parameters
client (MlClient) β
model_id (str) β
input_field (str) β
classmethod from_credentials(model_id, *, es_cloud_id=None, es_user=None, es_password=None, input_field='text_field')[source]ο
Instantiate embeddings from Elasticsearch credentials.
Parameters
model_id (str) β The model_id of the model deployed in the Elasticsearch
cluster.
input_field (str) β The name of the key for the input text field in the
document. Defaults to 'text_field'.
es_cloud_id (Optional[str]) β (str, optional): The Elasticsearch cloud ID to connect to.
es_user (Optional[str]) β (str, optional): Elasticsearch username.
es_password (Optional[str]) β (str, optional): Elasticsearch password.
Return type
langchain.embeddings.elasticsearch.ElasticsearchEmbeddings
Example
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Credentials can be passed in two ways. Either set the env vars
# ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically
# pulled in, or pass them in directly as kwargs.
embeddings = ElasticsearchEmbeddings.from_credentials(
model_id,
input_field=input_field,
# es_cloud_id="foo",
# es_user="bar",
# es_password="baz",
)
documents = [
"This is an example document.",
"Another example document to generate embeddings for.",
]
document_vectors = embeddings.embed_documents(documents)
classmethod from_es_connection(model_id, es_connection, input_field='text_field')[source]ο
Instantiate embeddings from an existing Elasticsearch connection.
This method provides a way to create an instance of the ElasticsearchEmbeddings
class using an existing Elasticsearch connection. The connection object is used
to create an MlClient, which is then used to initialize the
ElasticsearchEmbeddings instance.
Args:
model_id (str): The model_id of the model deployed in the Elasticsearch cluster.
es_connection (elasticsearch.Elasticsearch): An existing Elasticsearch
connection object. input_field (str, optional): The name of the key for the
input text field in the document. Defaults to 'text_field'.
Returns:
ElasticsearchEmbeddings: An instance of the ElasticsearchEmbeddings class.
Example
from elasticsearch import Elasticsearch
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Create Elasticsearch connection
es_connection = Elasticsearch(
hosts=["localhost:9200"], http_auth=("user", "password")
)
# Instantiate ElasticsearchEmbeddings using the existing connection
embeddings = ElasticsearchEmbeddings.from_es_connection(
model_id,
es_connection,
input_field=input_field,
)
documents = [
"This is an example document.",
"Another example document to generate embeddings for.",
]
document_vectors = embeddings.embed_documents(documents)
Parameters
model_id (str) β
es_connection (Elasticsearch) β
input_field (str) β
Return type
ElasticsearchEmbeddings
embed_documents(texts)[source]ο
Generate embeddings for a list of documents.
Parameters
texts (List[str]) β A list of document text strings to generate embeddings
for.
Returns
A list of embeddings, one for each document in the input list.
Return type
List[List[float]]
embed_query(text)[source]ο
Generate an embedding for a single query text.
Parameters
text (str) β The query text to generate an embedding for.
Returns
The embedding for the input query text.
Return type
List[float]
class langchain.embeddings.LlamaCppEmbeddings(*, client=None, model_path, n_ctx=512, n_parts=- 1, seed=- 1, f16_kv=False, logits_all=False, vocab_only=False, use_mlock=False, n_threads=None, n_batch=8, n_gpu_layers=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around llama.cpp embedding models.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: https://github.com/abetlen/llama-cpp-python
Example
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model.bin")
Parameters
client (Any) β
model_path (str) β
n_ctx (int) β
n_parts (int) β
seed (int) β
f16_kv (bool) β
logits_all (bool) β
vocab_only (bool) β
use_mlock (bool) β
n_threads (Optional[int]) β
n_batch (Optional[int]) β
n_gpu_layers (Optional[int]) β
Return type
None
attribute f16_kv: bool = Falseο
Use half-precision for key/value cache.
attribute logits_all: bool = Falseο
Return logits for all tokens, not just the last token.
attribute n_batch: Optional[int] = 8ο
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
attribute n_ctx: int = 512ο
Token context window.
attribute n_gpu_layers: Optional[int] = Noneο
Number of layers to be loaded into gpu memory. Default None.
attribute n_parts: int = -1ο
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
attribute n_threads: Optional[int] = Noneο
Number of threads to use. If None, the number
of threads is automatically determined.
attribute seed: int = -1ο
Seed. If -1, a random seed is used.
attribute use_mlock: bool = Falseο
Force system to keep model in RAM.
attribute vocab_only: bool = Falseο
Only load the vocabulary, no weights.
embed_documents(texts)[source]ο
Embed a list of documents using the Llama model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using the Llama model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
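A minimal sketch showing how the tuning attributes above are typically passed; the model path is a placeholder for a local model file:
from langchain.embeddings import LlamaCppEmbeddings

llama = LlamaCppEmbeddings(
    model_path="/path/to/model.bin",  # placeholder: path to a local model file
    n_ctx=2048,     # larger token context window than the 512 default
    n_threads=8,    # fix the thread count instead of auto-detecting
    n_batch=512,    # tokens processed in parallel; should be <= n_ctx
)
vector = llama.embed_query("A short test sentence.")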
class langchain.embeddings.HuggingFaceHubEmbeddings(*, client=None, repo_id='sentence-transformers/all-mpnet-base-v2', task='feature-extraction', model_kwargs=None, huggingfacehub_api_token=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around HuggingFaceHub embedding models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import HuggingFaceHubEmbeddings
repo_id = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceHubEmbeddings(
repo_id=repo_id,
task="feature-extraction",
huggingfacehub_api_token="my-api-key",
)
Parameters
client (Any) β
repo_id (str) β
task (Optional[str]) β
model_kwargs (Optional[dict]) β
huggingfacehub_api_token (Optional[str]) β
Return type
None
attribute model_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model.
attribute repo_id: str = 'sentence-transformers/all-mpnet-base-v2'ο
Model name to use.
attribute task: Optional[str] = 'feature-extraction'ο
Task to call the model with.
embed_documents(texts)[source]ο
Call out to HuggingFaceHub's embedding endpoint for embedding search docs.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to HuggingFaceHub's embedding endpoint for embedding query text.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.ModelScopeEmbeddings(*, embed=None, model_id='damo/nlp_corom_sentence-embedding_english-base')[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around modelscope_hub embedding models.
To use, you should have the modelscope python package installed.
Example
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embed = ModelScopeEmbeddings(model_id=model_id)
Parameters
embed (Any) β
model_id (str) β
Return type
None
attribute model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'ο
Model name to use.
embed_documents(texts)[source]ο
Compute doc embeddings using a modelscope embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a modelscope embedding model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.TensorflowHubEmbeddings(*, embed=None, model_url='https://tfhub.dev/google/universal-sentence-encoder-multilingual/3')[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around tensorflow_hub embedding models.
To use, you should have the tensorflow_text python package installed.
Example
from langchain.embeddings import TensorflowHubEmbeddings
url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
tf = TensorflowHubEmbeddings(model_url=url)
Parameters
embed (Any) β
model_url (str) β
Return type
None
attribute model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'ο
Model name to use.
embed_documents(texts)[source]ο
Compute doc embeddings using a TensorflowHub embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a TensorflowHub embedding model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.SagemakerEndpointEmbeddings(*, client=None, endpoint_name='', region_name='', credentials_profile_name=None, content_handler, model_kwargs=None, endpoint_kwargs=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
Parameters
client (Any) β
endpoint_name (str) β
region_name (str) β
credentials_profile_name (Optional[str]) β
content_handler (langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler) β
model_kwargs (Optional[Dict]) β
endpoint_kwargs (Optional[Dict]) β
Return type
None
attribute content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]ο
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
attribute credentials_profile_name: Optional[str] = Noneο
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
attribute endpoint_kwargs: Optional[Dict] = Noneο
Optional attributes passed to the invoke_endpoint
function. See the boto3 docs for more info:
https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
attribute endpoint_name: str = ''ο
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
attribute model_kwargs: Optional[Dict] = Noneο
Key word arguments to pass to the model.
attribute region_name: str = ''ο
The AWS region where the Sagemaker model is deployed, e.g., us-west-2.
embed_documents(texts, chunk_size=64)[source]ο
Compute doc embeddings using a SageMaker Inference Endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size (int) β The chunk size defines how many input texts will
be grouped together as request. If None, will use the
chunk size specified by the class.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a SageMaker inference endpoint.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
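Because content_handler is required, a sketch of a handler is shown below. The endpoint name, region, and the request/response JSON shapes are assumptions; the exact payload keys depend on how your model was deployed, so check them against your endpoint and LangChain version.
import json
from typing import Dict, List

from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, inputs: List[str], model_kwargs: Dict) -> bytes:
        # Assumption: the endpoint expects {"text_inputs": [...]} plus model kwargs.
        return json.dumps({"text_inputs": inputs, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        # Assumption: the endpoint returns {"embedding": [[...], ...]}.
        # The handler receives the raw response body, hence the read() call.
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["embedding"]

embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="my-embedding-endpoint",   # placeholder endpoint name
    region_name="us-west-2",
    credentials_profile_name="default",
    content_handler=ContentHandler(),
)
doc_vectors = embeddings.embed_documents(["hello", "world"])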
class langchain.embeddings.HuggingFaceInstructEmbeddings(*, client=None, model_name='hkunlp/instructor-large', cache_folder=None, model_kwargs=None, encode_kwargs=None, embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ')[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers
and InstructorEmbedding python packages installed.
Example
from langchain.embeddings import HuggingFaceInstructEmbeddings
model_name = "hkunlp/instructor-large"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': True}
hf = HuggingFaceInstructEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
Parameters
client (Any) β
model_name (str) β
cache_folder (Optional[str]) β
model_kwargs (Dict[str, Any]) β
encode_kwargs (Dict[str, Any]) β
embed_instruction (str) β
query_instruction (str) β
Return type
None
attribute cache_folder: Optional[str] = Noneο
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
attribute embed_instruction: str = 'Represent the document for retrieval: 'ο
Instruction to use for embedding documents.
attribute encode_kwargs: Dict[str, Any] [Optional]ο
Key word arguments to pass when calling the encode method of the model.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Key word arguments to pass to the model.
attribute model_name: str = 'hkunlp/instructor-large'ο
Model name to use.
attribute query_instruction: str = 'Represent the question for retrieving supporting documents: 'ο
Instruction to use for embedding query.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a HuggingFace instruct model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
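Since the instructions are plain strings, they can be tailored to a domain; a sketch with purely illustrative instructions:
from langchain.embeddings import HuggingFaceInstructEmbeddings

hf = HuggingFaceInstructEmbeddings(
    model_name="hkunlp/instructor-large",
    # Illustrative domain-specific instructions; any strings work here.
    embed_instruction="Represent the financial report paragraph for retrieval: ",
    query_instruction="Represent the financial question for retrieving supporting paragraphs: ",
)
doc_vectors = hf.embed_documents(["Revenue grew 12% year over year."])
query_vector = hf.embed_query("How much did revenue grow?")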
class langchain.embeddings.MosaicMLInstructorEmbeddings(*, endpoint_url='https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ', retry_sleep=1.0, mosaicml_api_token=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around MosaicML's embedding inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import MosaicMLInstructorEmbeddings
endpoint_url = (
"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict"
)
mosaic_llm = MosaicMLInstructorEmbeddings(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
Parameters
endpoint_url (str) β
embed_instruction (str) β
query_instruction (str) β
retry_sleep (float) β
mosaicml_api_token (Optional[str]) β
Return type
None
attribute embed_instruction: str = 'Represent the document for retrieval: 'ο
Instruction used to embed documents.
attribute endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict'ο
Endpoint URL to use.
attribute query_instruction: str = 'Represent the question for retrieving supporting documents: 'ο
Instruction used to embed the query.
attribute retry_sleep: float = 1.0ο
How long to sleep (in seconds) if a rate limit is encountered.
embed_documents(texts)[source]ο
Embed documents using a MosaicML deployed instructor embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using a MosaicML deployed instructor embedding model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.SelfHostedEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn, load_fn_kwargs=None, model_reqs=['./', 'torch'], inference_kwargs=None)[source]ο
Bases: langchain.llms.self_hosted.SelfHostedPipeline, langchain.embeddings.base.Embeddings
Runs custom embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example using a model load function:
from langchain.embeddings import SelfHostedEmbeddings
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
def get_pipeline():
model_id = "facebook/bart-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
embeddings = SelfHostedEmbeddings(
model_load_fn=get_pipeline,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing in a pipeline path:
from langchain.embeddings import SelfHostedHFEmbeddings
import pickle
import runhouse as rh
from transformers import pipeline
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
pipeline = pipeline(model="bert-base-uncased", task="feature-extraction")
rh.blob(pickle.dumps(pipeline),
path="models/pipeline.pkl").save().to(gpu, path="models")
embeddings = SelfHostedHFEmbeddings.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
inference_kwargs (Any) β
Return type
None
attribute inference_fn: Callable = <function _embed_documents>ο
Inference function to extract the embeddings on the remote hardware.
attribute inference_kwargs: Any = Noneο
Any kwargs to pass to the model's inference function.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts (List[str]) – The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a HuggingFace transformer model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.SelfHostedHuggingFaceEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn=<function load_embedding_model>, load_fn_kwargs=None, model_reqs=['./', 'sentence_transformers', 'torch'], inference_kwargs=None, model_id='sentence-transformers/all-mpnet-base-v2')[source]ο
Bases: langchain.embeddings.self_hosted.SelfHostedEmbeddings
Runs sentence_transformers embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
inference_kwargs (Any) β
model_id (str) β
Return type
None
attribute hardware: Any = Noneο
Remote hardware to send the inference function to.
attribute inference_fn: Callable = <function _embed_documents>ο
Inference function to extract the embeddings.
attribute load_fn_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model load function.
attribute model_id: str = 'sentence-transformers/all-mpnet-base-v2'ο
Model name to use.
attribute model_load_fn: Callable = <function load_embedding_model>ο
Function to load the model remotely on the server.
attribute model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']ο
Requirements to install on hardware to inference the model.
class langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn=<function load_embedding_model>, load_fn_kwargs=None, model_reqs=['./', 'InstructorEmbedding', 'torch'], inference_kwargs=None, model_id='hkunlp/instructor-large', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ')[source]ο
Bases: langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings
Runs InstructorEmbedding embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings
import runhouse as rh
model_name = "hkunlp/instructor-large"
gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')
hf = SelfHostedHuggingFaceInstructEmbeddings(
model_name=model_name, hardware=gpu)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
inference_kwargs (Any) β
model_id (str) β
embed_instruction (str) β
query_instruction (str) β
Return type
None
attribute embed_instruction: str = 'Represent the document for retrieval: 'ο
Instruction to use for embedding documents.
attribute model_id: str = 'hkunlp/instructor-large'ο
Model name to use.
attribute model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']ο
Requirements to install on hardware to inference the model.
attribute query_instruction: str = 'Represent the question for retrieving supporting documents: 'ο
Instruction to use for embedding query.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a HuggingFace instruct model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.FakeEmbeddings(*, size)[source]ο
Bases: langchain.embeddings.base.Embeddings, pydantic.main.BaseModel
Parameters
size (int) β
Return type
None
embed_documents(texts)[source]ο
Embed search docs.
Parameters
texts (List[str]) β
Return type
List[List[float]]
embed_query(text)[source]ο
Embed query text.
Parameters
text (str) β
Return type
List[float]
class langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper for Aleph Alpha's Asymmetric Embeddings.
AA provides you with an endpoint to embed a document and a query.
The models were optimized to make the embeddings of documents and
the query for a document as similar as possible.
To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/
Example
from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding
embeddings = AlephAlphaAsymmetricSemanticEmbedding()
document = "This is a content of the document"
query = "What is the content of the document?"
doc_result = embeddings.embed_documents([document])
query_result = embeddings.embed_query(query)
Parameters
client (Any) β
model (Optional[str]) β
hosting (Optional[str]) β
normalize (Optional[bool]) β
compress_to_size (Optional[int]) β
contextual_control_threshold (Optional[int]) β
control_log_additive (Optional[bool]) β
aleph_alpha_api_key (Optional[str]) β
Return type
None
attribute aleph_alpha_api_key: Optional[str] = Noneο
API key for Aleph Alpha API.
attribute compress_to_size: Optional[int] = 128ο
Whether the returned embeddings should be the original 5120-dimensional vectors
or compressed to 128 dimensions.
attribute contextual_control_threshold: Optional[int] = Noneο
Attention control parameters only apply to those tokens that have
explicitly been set in the request.
attribute control_log_additive: Optional[bool] = Trueο
Apply controls on prompt items by adding the log(control_factor)
to attention scores.
attribute hosting: Optional[str] = 'https://api.aleph-alpha.com'ο
Optional parameter that specifies which datacenters may process the request.
attribute model: Optional[str] = 'luminous-base'ο
Model name to use.
attribute normalize: Optional[bool] = Trueο
Whether returned embeddings should be normalized.
embed_documents(texts)[source]ο
Call out to Aleph Alpha's asymmetric Document endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to Aleph Alpha's asymmetric query embedding endpoint.
Parameters
text (str) – The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source]ο
Bases: langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding
The symmetric version of Aleph Alpha's semantic embeddings.
The main difference is that here, both the documents and
queries are embedded with SemanticRepresentation.Symmetric.
Example
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding
embeddings = AlephAlphaSymmetricSemanticEmbedding()
text = "This is a test text"
doc_result = embeddings.embed_documents([text])
query_result = embeddings.embed_query(text)
Parameters
client (Any) β
model (Optional[str]) β
hosting (Optional[str]) β
normalize (Optional[bool]) β
compress_to_size (Optional[int]) β
contextual_control_threshold (Optional[int]) β
control_log_additive (Optional[bool]) β
aleph_alpha_api_key (Optional[str]) β
Return type
None
embed_documents(texts)[source]ο
Call out to Aleph Alpha's Document endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to Aleph Alpha's symmetric query embedding endpoint.
Parameters
text (str) – The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
langchain.embeddings.SentenceTransformerEmbeddings
alias of langchain.embeddings.huggingface.HuggingFaceEmbeddings
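Since this is only an alias, both names construct the same object; a minimal sketch:
from langchain.embeddings import SentenceTransformerEmbeddings

# Identical to HuggingFaceEmbeddings(model_name=...), just under another name.
st = SentenceTransformerEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
vector = st.embed_query("An aliased embedding class.")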
class langchain.embeddings.MiniMaxEmbeddings(*, endpoint_url='https://api.minimax.chat/v1/embeddings', model='embo-01', embed_type_db='db', embed_type_query='query', minimax_group_id=None, minimax_api_key=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around MiniMax's embedding inference service.
To use, you should have the environment variables MINIMAX_GROUP_ID and
MINIMAX_API_KEY set with your API token, or pass it as a named parameter to
the constructor.
Example
from langchain.embeddings import MiniMaxEmbeddings
embeddings = MiniMaxEmbeddings()
query_text = "This is a test query."
query_result = embeddings.embed_query(query_text)
document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])
Parameters
endpoint_url (str) β
model (str) β
embed_type_db (str) β
embed_type_query (str) β
minimax_group_id (Optional[str]) β
minimax_api_key (Optional[str]) β
Return type
None
attribute embed_type_db: str = 'db'ο
For embed_documents
attribute embed_type_query: str = 'query'ο
For embed_query
attribute endpoint_url: str = 'https://api.minimax.chat/v1/embeddings'ο
Endpoint URL to use.
attribute minimax_api_key: Optional[str] = Noneο
API Key for MiniMax API.
attribute minimax_group_id: Optional[str] = Noneο
Group ID for MiniMax API.
attribute model: str = 'embo-01'ο
Embeddings model name to use.
embed_documents(texts)[source]ο
Embed documents using a MiniMax embedding endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using a MiniMax embedding endpoint.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.BedrockEmbeddings(*, client=None, region_name=None, credentials_profile_name=None, model_id='amazon.titan-e1t-medium', model_kwargs=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Embeddings provider to invoke Bedrock embedding models.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Bedrock service.
Parameters
client (Any) β
region_name (Optional[str]) β
credentials_profile_name (Optional[str]) β
model_id (str) β
model_kwargs (Optional[Dict]) β
Return type
None
attribute credentials_profile_name: Optional[str] = Noneο
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
attribute model_id: str = 'amazon.titan-e1t-medium'ο
Id of the model to call, e.g., amazon.titan-e1t-medium, this is
equivalent to the modelId property in the list-foundation-models api
attribute model_kwargs: Optional[Dict] = Noneο
Key word arguments to pass to the model.
attribute region_name: Optional[str] = Noneο
The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable
or the region specified in ~/.aws/config if it is not provided here.
embed_documents(texts, chunk_size=1)[source]ο
Compute doc embeddings using a Bedrock model.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size (int) β Bedrock currently only allows single string
inputs, so chunk size is always 1. This input is here
only for compatibility with the embeddings interface.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a Bedrock model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
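A minimal instantiation sketch; the profile name is a placeholder and, if region_name is omitted, the region is resolved from the environment as described above:
from langchain.embeddings import BedrockEmbeddings

bedrock = BedrockEmbeddings(
    region_name="us-west-2",
    credentials_profile_name="my-profile",  # placeholder profile from ~/.aws/credentials
    model_id="amazon.titan-e1t-medium",
)
doc_vectors = bedrock.embed_documents(["hello", "world"])
query_vector = bedrock.embed_query("hello world")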
class langchain.embeddings.DeepInfraEmbeddings(*, model_id='sentence-transformers/clip-ViT-B-32', normalize=False, embed_instruction='passage: ', query_instruction='query: ', model_kwargs=None, deepinfra_api_token=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around Deep Infra's embedding inference service.
To use, you should have the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
There are multiple embeddings models available,
see https://deepinfra.com/models?type=embeddings.
Example
from langchain.embeddings import DeepInfraEmbeddings
deepinfra_emb = DeepInfraEmbeddings(
model_id="sentence-transformers/clip-ViT-B-32",
deepinfra_api_token="my-api-key"
)
r1 = deepinfra_emb.embed_documents(
[
"Alpha is the first letter of Greek alphabet",
"Beta is the second letter of Greek alphabet",
]
)
r2 = deepinfra_emb.embed_query(
"What is the second letter of Greek alphabet"
)
Parameters
model_id (str) β
normalize (bool) β
embed_instruction (str) β
query_instruction (str) β
model_kwargs (Optional[dict]) β
deepinfra_api_token (Optional[str]) β
Return type
None
attribute embed_instruction: str = 'passage: 'ο
Instruction used to embed documents.
attribute model_id: str = 'sentence-transformers/clip-ViT-B-32'ο
Embeddings model to use.
attribute model_kwargs: Optional[dict] = Noneο
Other model keyword args
attribute normalize: bool = Falseο
whether to normalize the computed embeddings
attribute query_instruction: str = 'query: 'ο
Instruction used to embed the query.
embed_documents(texts)[source]ο
Embed documents using a Deep Infra deployed embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using a Deep Infra deployed embedding model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.DashScopeEmbeddings(*, client=None, model='text-embedding-v1', dashscope_api_key=None, max_retries=5)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around DashScope embedding models.
To use, you should have the dashscope python package installed, and the
environment variable DASHSCOPE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(dashscope_api_key="my-api-key")
Example
import os
os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY"
from langchain.embeddings.dashscope import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(
model="text-embedding-v1",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
Parameters
client (Any) β
model (str) β
dashscope_api_key (Optional[str]) β
max_retries (int) β
Return type
None
attribute dashscope_api_key: Optional[str] = Noneο
API key for the DashScope API.
embed_documents(texts)[source]ο
Call out to DashScope's embedding endpoint for embedding search docs.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size β The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to DashScope's embedding endpoint for embedding query text.
Parameters
text (str) β The text to embed.
Returns
Embedding for the text.
Return type
List[float]
class langchain.embeddings.EmbaasEmbeddings(*, model='e5-large-v2', instruction=None, api_url='https://api.embaas.io/v1/embeddings/', embaas_api_key=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around embaas's embedding service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Initialise with default model and instruction
from langchain.embeddings import EmbaasEmbeddings
emb = EmbaasEmbeddings()
# Initialise with custom model and instruction
from langchain.embeddings import EmbaasEmbeddings
emb_model = "instructor-large"
emb_inst = "Represent the Wikipedia document for retrieval"
emb = EmbaasEmbeddings(
model=emb_model,
instruction=emb_inst
)
Parameters
model (str) β
instruction (Optional[str]) β
api_url (str) β
embaas_api_key (Optional[str]) β
Return type
None
attribute api_url: str = 'https://api.embaas.io/v1/embeddings/'ο
The URL for the embaas embeddings API.
attribute instruction: Optional[str] = Noneο
Instruction used for domain-specific embeddings.
attribute model: str = 'e5-large-v2'ο
The model used for embeddings.
embed_documents(texts)[source]ο
Get embeddings for a list of texts.
Parameters
texts (List[str]) β The list of texts to get embeddings for.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Get embeddings for a single text.
Parameters
text (str) β The text to get embeddings for.
Returns
List of embeddings.
Return type
List[float]
https://api.python.langchain.com/en/latest/modules/embeddings.html
Memory
class langchain.memory.CassandraChatMessageHistory(contact_points, session_id, port=9042, username='cassandra', password='cassandra', keyspace_name='chat_history', table_name='message_store')[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history that stores history in Cassandra.
Parameters
contact_points (List[str]) β list of ips to connect to Cassandra cluster
session_id (str) β arbitrary key that is used to store the messages
of a single chat session.
port (int) β port to connect to Cassandra cluster
username (str) β username to connect to Cassandra cluster
password (str) β password to connect to Cassandra cluster
keyspace_name (str) β name of the keyspace to use
table_name (str) β name of the table to use
property messages: List[langchain.schema.BaseMessage]ο
Retrieve the messages from Cassandra
add_message(message)[source]ο
Append the message to the record in Cassandra
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Clear session memory from Cassandra
Return type
None
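A usage sketch, assuming a reachable Cassandra node at the contact point below; the address and session id are placeholders:
from langchain.memory import CassandraChatMessageHistory
from langchain.schema import AIMessage, HumanMessage

history = CassandraChatMessageHistory(
    contact_points=["127.0.0.1"],      # placeholder: address of a Cassandra node
    session_id="user-42-session-1",    # placeholder: key for one chat session
    username="cassandra",
    password="cassandra",
)
history.add_message(HumanMessage(content="Hi, what can you do?"))
history.add_message(AIMessage(content="I can remember this conversation."))
print(history.messages)  # messages read back from Cassandra
history.clear()          # wipe this session's history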
class langchain.memory.ChatMessageHistory(*, messages=[])[source]ο
Bases: langchain.schema.BaseChatMessageHistory, pydantic.main.BaseModel
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
None
attribute messages: List[langchain.schema.BaseMessage] = []ο
add_message(message)[source]ο
Add a self-created message to the store
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Remove all messages from the store
Return type
None
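A minimal in-memory sketch of the same interface:
from langchain.memory import ChatMessageHistory
from langchain.schema import AIMessage, HumanMessage

history = ChatMessageHistory()
history.add_message(HumanMessage(content="What is the capital of France?"))
history.add_message(AIMessage(content="Paris."))
print(history.messages)  # [HumanMessage(...), AIMessage(...)]
history.clear()          # empties the message list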
class langchain.memory.CombinedMemory(*, memories)[source]ο
Bases: langchain.schema.BaseMemory
Class for combining multiple memories' data together.
Parameters
memories (List[langchain.schema.BaseMemory]) β
Return type
None
attribute memories: List[langchain.schema.BaseMemory] [Required]ο
For tracking all the memories that should be accessed.
clear()[source]ο
Clear context from this session for every memory.
Return type
None
load_memory_variables(inputs)[source]ο
Load all vars from sub-memories.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, str]
save_context(inputs, outputs)[source]ο
Save context from this session for every memory.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property memory_variables: List[str]ο
All the memory variables that this instance provides.
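A sketch of combining two buffer memories; the memory_key values are arbitrary but must not clash across the sub-memories:
from langchain.memory import (
    CombinedMemory,
    ConversationBufferMemory,
    ConversationBufferWindowMemory,
)

full = ConversationBufferMemory(memory_key="full_history", input_key="input")
recent = ConversationBufferWindowMemory(memory_key="recent_history", input_key="input", k=2)
memory = CombinedMemory(memories=[full, recent])

memory.save_context({"input": "Hi there"}, {"output": "Hello!"})
print(memory.memory_variables)           # ["full_history", "recent_history"]
print(memory.load_memory_variables({}))  # both buffers, keyed by their memory_key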
class langchain.memory.ConversationBufferMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', memory_key='history')[source]ο
Bases: langchain.memory.chat_memory.BaseChatMemory
Buffer for storing conversation memory.
Parameters
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]) β
input_key (Optional[str]) β
return_messages (bool) β
human_prefix (str) β
ai_prefix (str) β
memory_key (str) β
Return type
None
attribute ai_prefix: str = 'AI'ο
attribute human_prefix: str = 'Human'ο
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Any]
property buffer: Anyο
String buffer of memory.
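A usage sketch showing the round trip through save_context (inherited from the base chat memory) and load_memory_variables:
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "Hi, I am Ada."}, {"output": "Nice to meet you, Ada!"})
memory.save_context({"input": "What is my name?"}, {"output": "Your name is Ada."})

# Returns the transcript under the "history" key, e.g.
# {'history': 'Human: Hi, I am Ada.\nAI: Nice to meet you, Ada!\nHuman: ...'}
print(memory.load_memory_variables({}))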
class langchain.memory.ConversationBufferWindowMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', memory_key='history', k=5)[source]ο
Bases: langchain.memory.chat_memory.BaseChatMemory
Buffer for storing conversation memory.
Parameters
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]) β
input_key (Optional[str]) β
return_messages (bool) β
human_prefix (str) β
ai_prefix (str) β
memory_key (str) β
k (int) β
Return type
None
attribute ai_prefix: str = 'AI'ο
attribute human_prefix: str = 'Human'ο
attribute k: int = 5ο
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, str]
property buffer: List[langchain.schema.BaseMessage]ο
String buffer of memory.
class langchain.memory.ConversationEntityMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', llm, entity_extraction_prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True), entity_summarization_prompt=PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\n\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\n\nFull conversation history (for context):\n{history}\n\nEntity to summarize:\n{entity}\n\nExisting summary of {entity}:\n{summary}\n\nLast line of conversation:\nHuman: {input}\nUpdated summary:', template_format='f-string', validate_template=True), entity_cache=[], k=3, chat_history_key='history', entity_store=None)[source]ο
Bases: langchain.memory.chat_memory.BaseChatMemory
Entity extractor & summarizer memory.
Extracts named entities from the recent chat history and generates summaries.
With a swappable entity store, entities can be persisted across conversations.
Defaults to an in-memory entity store, and can be swapped out for a Redis,
SQLite, or other entity store.
Parameters
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]) β
input_key (Optional[str]) β
return_messages (bool) β
human_prefix (str) β
ai_prefix (str) β
llm (langchain.base_language.BaseLanguageModel) β
entity_extraction_prompt (langchain.prompts.base.BasePromptTemplate) β
entity_summarization_prompt (langchain.prompts.base.BasePromptTemplate) β
entity_cache (List[str]) β
k (int) β
chat_history_key (str) β
entity_store (langchain.memory.entity.BaseEntityStore) β
Return type
None
attribute ai_prefix: str = 'AI'ο
attribute chat_history_key: str = 'history'ο
attribute entity_cache: List[str] = []ο
attribute entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True)ο
attribute entity_store: langchain.memory.entity.BaseEntityStore [Optional]ο
attribute entity_summarization_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['entity', 'summary', 'history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant helping a human keep track of facts about relevant people, places, and concepts in their life. Update the summary of the provided entity in the "Entity" section based on the last line of your conversation with the human. If you are writing the summary for the first time, return a single sentence.\nThe update should only include facts that are relayed in the last line of conversation about the provided entity, and should only contain facts about the provided entity.\n\nIf there is no new information about the provided entity or the information is not worth noting (not an important or relevant fact to remember long-term), return the existing summary unchanged.\n\nFull conversation history (for context):\n{history}\n\nEntity to summarize:\n{entity}\n\nExisting summary of {entity}:\n{summary}\n\nLast line of conversation:\nHuman: {input}\nUpdated summary:', template_format='f-string', validate_template=True)ο
attribute human_prefix: str = 'Human'ο
attribute k: int = 3ο
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
clear()[source]ο
Clear memory contents.
Return type
None
load_memory_variables(inputs)[source]ο
Returns the chat history and all generated entities with summaries, if available,
and updates or clears the recent entity cache.
New entity names can be found when calling this method, before the entity
summaries are generated, so the entity cache values may be empty if no entity
descriptions have been generated yet.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Any]
save_context(inputs, outputs)[source]ο
Save context from this conversation history to the entity store.
Generates a summary for each entity in the entity cache by prompting
the model, and saves these summaries to the entity store.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property buffer: List[langchain.schema.BaseMessage]ο
Access chat memory messages.
class langchain.memory.ConversationKGMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, k=2, human_prefix='Human', ai_prefix='AI', kg=None, knowledge_extraction_prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nConversation history:\nPerson #1: Did you hear aliens landed in Area 51?\nAI: No, I didn't hear that. What do you know about Area 51?\nPerson #1: It's a secret military base in Nevada.\nAI: What do you know about Nevada?\nLast line of conversation:\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: Hello.\nAI: Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversation:\nPerson #1: I'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\nLast line of conversation:\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:", template_format='f-string', validate_template=True), entity_extraction_prompt=PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! 
What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True), llm, summary_message_cls=<class 'langchain.schema.SystemMessage'>, memory_key='history')[source]ο
Bases: langchain.memory.chat_memory.BaseChatMemory
Knowledge graph memory for storing conversation memory.
Integrates with an external knowledge graph to store and retrieve
information about knowledge triples in the conversation.
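Example (a minimal usage sketch; assumes the openai package is installed and OPENAI_API_KEY is set):
from langchain.llms import OpenAI
from langchain.memory import ConversationKGMemory

memory = ConversationKGMemory(llm=OpenAI(temperature=0))
memory.save_context({"input": "say hi to sam"}, {"output": "who is sam"})
memory.save_context({"input": "sam is a friend"}, {"output": "okay"})
# Returns knowledge about entities mentioned in the input, e.g. facts about "sam".
memory.load_memory_variables({"input": "who is sam"})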
Parameters
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]) β
input_key (Optional[str]) β
return_messages (bool) β
k (int) β
human_prefix (str) β
ai_prefix (str) β
kg (langchain.graphs.networkx_graph.NetworkxEntityGraph) β
knowledge_extraction_prompt (langchain.prompts.base.BasePromptTemplate) β
entity_extraction_prompt (langchain.prompts.base.BasePromptTemplate) β
llm (langchain.base_language.BaseLanguageModel) β
summary_message_cls (Type[langchain.schema.BaseMessage]) β
memory_key (str) β
Return type
None
attribute ai_prefix: str = 'AI'ο
attribute entity_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='You are an AI assistant reading the transcript of a conversation between an AI and a human. Extract all of the proper nouns from the last line of conversation. As a guideline, a proper noun is generally capitalized. You should definitely extract all names and places.\n\nThe conversation history is provided just in case of a coreference (e.g. "What do you know about him" where "him" is defined in a previous line) -- ignore items mentioned there that are not in the last line.\n\nReturn the output as a single comma-separated list, or NONE if there is nothing of note to return (e.g. the user is just issuing a greeting or having a simple conversation).\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff.\nOutput: Langchain\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: how\'s it going today?\nAI: "It\'s going great! How about you?"\nPerson #1: good! busy working on Langchain. lots to do.\nAI: "That sounds like a lot of work! What kind of things are you doing to make Langchain better?"\nLast line:\nPerson #1: i\'m trying to improve Langchain\'s interfaces, the UX, its integrations with various products the user might want ... a lot of stuff. I\'m working with Person #2.\nOutput: Langchain, Person #2\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:', template_format='f-string', validate_template=True)ο
attribute human_prefix: str = 'Human'ο
attribute k: int = 2ο
Number of previous utterances to include in the context.
attribute kg: langchain.graphs.networkx_graph.NetworkxEntityGraph [Optional]ο
attribute knowledge_extraction_prompt: langchain.prompts.base.BasePromptTemplate = PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template="You are a networked intelligence helping a human track knowledge triples about all relevant people, things, concepts, etc. and integrating them with your knowledge stored within your weights as well as that stored in a knowledge graph. Extract all of the knowledge triples from the last line of conversation. A knowledge triple is a clause that contains a subject, a predicate, and an object. The subject is the entity being described, the predicate is the property of the subject that is being described, and the object is the value of the property.\n\nEXAMPLE\nConversation history:\nPerson #1: Did you hear aliens landed in Area 51?\nAI: No, I didn't hear that. What do you know about Area 51?\nPerson #1: It's a secret military base in Nevada.\nAI: What do you know about Nevada?\nLast line of conversation:\nPerson #1: It's a state in the US. It's also the number 1 producer of gold in the US.\n\nOutput: (Nevada, is a, state)<|>(Nevada, is in, US)<|>(Nevada, is the number 1 producer of, gold)\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: Hello.\nAI: Hi! How are you?\nPerson #1: I'm good. How are you?\nAI: I'm good too.\nLast line of conversation:\nPerson #1: I'm going to the store.\n\nOutput: NONE\nEND OF EXAMPLE\n\nEXAMPLE\nConversation history:\nPerson #1: What do you know about Descartes?\nAI: Descartes was a French philosopher, mathematician, and scientist who lived in the 17th century.\nPerson #1: The Descartes I'm referring to is a standup comedian and interior designer from Montreal.\nAI: Oh yes, He is a comedian and an interior designer. He has been in the industry for 30 years. His favorite food is baked bean pie.\nLast line of conversation:\nPerson #1: Oh huh. I know Descartes likes to drive antique scooters and play the mandolin.\nOutput: (Descartes, likes to drive, antique scooters)<|>(Descartes, plays, mandolin)\nEND OF EXAMPLE\n\nConversation history (for reference only):\n{history}\nLast line of conversation (for extraction):\nHuman: {input}\n\nOutput:", template_format='f-string', validate_template=True)ο
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
attribute summary_message_cls: Type[langchain.schema.BaseMessage] = <class 'langchain.schema.SystemMessage'>ο
clear()[source]ο
Clear memory contents.
Return type
None
get_current_entities(input_string)[source]ο
Parameters
input_string (str) β
Return type
List[str]
get_knowledge_triplets(input_string)[source]ο
Parameters
input_string (str) β
Return type
List[langchain.graphs.networkx_graph.KnowledgeTriple]
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Any]
save_context(inputs, outputs)[source]ο
Save context from this conversation to buffer.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
class langchain.memory.ConversationStringBufferMemory(*, human_prefix='Human', ai_prefix='AI', buffer='', output_key=None, input_key=None, memory_key='history')[source]ο
Bases: langchain.schema.BaseMemory
Buffer for storing conversation memory.
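Example (a minimal usage sketch):
from langchain.memory import ConversationStringBufferMemory

memory = ConversationStringBufferMemory()
memory.save_context({"input": "hi"}, {"output": "whats up"})
# The whole conversation is returned as a single string under "history".
memory.load_memory_variables({})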
Parameters
human_prefix (str) β
ai_prefix (str) β
buffer (str) β
output_key (Optional[str]) β
input_key (Optional[str]) β
memory_key (str) β
Return type
None
attribute ai_prefix: str = 'AI'ο
Prefix to use for AI generated responses.
attribute buffer: str = ''ο
attribute human_prefix: str = 'Human'ο
attribute input_key: Optional[str] = Noneο
attribute output_key: Optional[str] = Noneο
clear()[source]ο
Clear memory contents.
Return type
None
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, str]
save_context(inputs, outputs)[source]ο
Save context from this conversation to buffer.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property memory_variables: List[str]ο
Will always return list of memory variables.
:meta private:
class langchain.memory.ConversationSummaryBufferMemory(*, human_prefix='Human', ai_prefix='AI', llm, prompt=PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True), summary_message_cls=<class 'langchain.schema.SystemMessage'>, chat_memory=None, output_key=None, input_key=None, return_messages=False, max_token_limit=2000, moving_summary_buffer='', memory_key='history')[source]ο
Bases: langchain.memory.chat_memory.BaseChatMemory, langchain.memory.summary.SummarizerMixin
Buffer with summarizer for storing conversation memory.
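Example (a minimal usage sketch; assumes the openai and tiktoken packages are installed and OPENAI_API_KEY is set):
from langchain.llms import OpenAI
from langchain.memory import ConversationSummaryBufferMemory

memory = ConversationSummaryBufferMemory(llm=OpenAI(temperature=0), max_token_limit=40)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
# Older turns above the token limit are condensed into moving_summary_buffer.
memory.load_memory_variables({})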
Parameters
human_prefix (str) β
ai_prefix (str) β
llm (langchain.base_language.BaseLanguageModel) β
prompt (langchain.prompts.base.BasePromptTemplate) β
summary_message_cls (Type[langchain.schema.BaseMessage]) β
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]) β
input_key (Optional[str]) β
return_messages (bool) β
max_token_limit (int) β
moving_summary_buffer (str) β
memory_key (str) β
Return type
None
attribute max_token_limit: int = 2000ο
attribute memory_key: str = 'history'ο
attribute moving_summary_buffer: str = ''ο
clear()[source]ο
Clear memory contents.
Return type
None
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Any]
prune()[source]ο
Prune buffer if it exceeds max token limit
Return type
None
save_context(inputs, outputs)[source]ο
Save context from this conversation to buffer.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property buffer: List[langchain.schema.BaseMessage]ο
class langchain.memory.ConversationSummaryMemory(*, human_prefix='Human', ai_prefix='AI', llm, prompt=PromptTemplate(input_variables=['summary', 'new_lines'], output_parser=None, partial_variables={}, template='Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial intelligence is a force for good?\nAI: Because artificial intelligence will help humans reach their full potential.\n\nNew summary:\nThe human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential.\nEND OF EXAMPLE\n\nCurrent summary:\n{summary}\n\nNew lines of conversation:\n{new_lines}\n\nNew summary:', template_format='f-string', validate_template=True), summary_message_cls=<class 'langchain.schema.SystemMessage'>, chat_memory=None, output_key=None, input_key=None, return_messages=False, buffer='', memory_key='history')[source]ο
Bases: langchain.memory.chat_memory.BaseChatMemory, langchain.memory.summary.SummarizerMixin
Conversation summary memory that progressively summarizes the conversation history.
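Example (a minimal usage sketch; assumes the openai package is installed and OPENAI_API_KEY is set):
from langchain.llms import OpenAI
from langchain.memory import ConversationSummaryMemory

memory = ConversationSummaryMemory(llm=OpenAI(temperature=0))
memory.save_context({"input": "hi"}, {"output": "whats up"})
# "history" now holds a running summary rather than the raw transcript.
memory.load_memory_variables({})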
Parameters
human_prefix (str) β
ai_prefix (str) β
llm (langchain.base_language.BaseLanguageModel) β
prompt (langchain.prompts.base.BasePromptTemplate) β
summary_message_cls (Type[langchain.schema.BaseMessage]) β
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]) β
input_key (Optional[str]) β
return_messages (bool) β
buffer (str) β
memory_key (str) β
Return type
None
attribute buffer: str = ''ο
clear()[source]ο
Clear memory contents.
Return type
None
classmethod from_messages(llm, chat_memory, *, summarize_step=2, **kwargs)[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
chat_memory (langchain.schema.BaseChatMessageHistory) β
summarize_step (int) β
kwargs (Any) β
Return type
langchain.memory.summary.ConversationSummaryMemory
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Any]
save_context(inputs, outputs)[source]ο
Save context from this conversation to buffer.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
class langchain.memory.ConversationTokenBufferMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, human_prefix='Human', ai_prefix='AI', llm, memory_key='history', max_token_limit=2000)[source]ο
Bases: langchain.memory.chat_memory.BaseChatMemory
Buffer for storing conversation memory, keeping only the most recent messages within a token limit.
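Example (a minimal usage sketch; the llm is used to count tokens, so the openai and tiktoken packages and OPENAI_API_KEY are assumed):
from langchain.llms import OpenAI
from langchain.memory import ConversationTokenBufferMemory

memory = ConversationTokenBufferMemory(llm=OpenAI(), max_token_limit=60)
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.load_memory_variables({})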
Parameters
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]) β
input_key (Optional[str]) β
return_messages (bool) β
human_prefix (str) β
ai_prefix (str) β
llm (langchain.base_language.BaseLanguageModel) β
memory_key (str) β
max_token_limit (int) β
Return type
None
attribute ai_prefix: str = 'AI'ο
attribute human_prefix: str = 'Human'ο
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
attribute max_token_limit: int = 2000ο
attribute memory_key: str = 'history'ο
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Any]
save_context(inputs, outputs)[source]ο
Save context from this conversation to buffer, pruning the buffer if it exceeds the max token limit.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property buffer: List[langchain.schema.BaseMessage]ο
String buffer of memory.
class langchain.memory.CosmosDBChatMessageHistory(cosmos_endpoint, cosmos_database, cosmos_container, session_id, user_id, credential=None, connection_string=None, ttl=None, cosmos_client_kwargs=None)[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat history backed by Azure CosmosDB.
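Example (a minimal sketch; the endpoint, database, container and credential values are placeholders, and the azure-cosmos package is assumed to be installed):
from langchain.memory import CosmosDBChatMessageHistory

history = CosmosDBChatMessageHistory(
    cosmos_endpoint="https://<your-account>.documents.azure.com:443/",
    cosmos_database="chat_db",
    cosmos_container="messages",
    session_id="session-1",
    user_id="user-1",
    credential="<your-cosmos-key>",
)
history.prepare_cosmos()  # ensure the database and container are ready
history.add_user_message("hi!")
history.add_ai_message("hello, how can I help?")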
Parameters
cosmos_endpoint (str) β
cosmos_database (str) β
cosmos_container (str) β
session_id (str) β
user_id (str) β
credential (Any) β
connection_string (Optional[str]) β
ttl (Optional[int]) β
cosmos_client_kwargs (Optional[dict]) β
prepare_cosmos()[source]ο
Prepare the CosmosDB client.
Use this function or the context manager to make sure your database is ready.
Return type
None
load_messages()[source]ο
Retrieve the messages from Cosmos
Return type
None
add_message(message)[source]ο
Add a self-created message to the store
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
upsert_messages()[source]ο
Update the cosmosdb item.
Return type
None
clear()[source]ο
Clear session memory from this memory and cosmos.
Return type
None
class langchain.memory.DynamoDBChatMessageHistory(table_name, session_id, endpoint_url=None)[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history that stores history in AWS DynamoDB.
This class expects that a DynamoDB table with name table_name
and a partition key of SessionId is present.
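Example (a minimal sketch; the table name and session id are placeholders, and a table with a SessionId partition key plus configured AWS credentials are assumed):
from langchain.memory import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="session-1")
history.add_user_message("hi!")
history.add_ai_message("whats up?")
history.messages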
Parameters
table_name (str) β name of the DynamoDB table
session_id (str) β arbitrary key that is used to store the messages
of a single chat session.
endpoint_url (Optional[str]) β URL of the AWS endpoint to connect to. This argument
is optional and useful for test purposes, like using Localstack.
If you plan to use AWS cloud service, you normally don't have to
worry about setting the endpoint_url.
property messages: List[langchain.schema.BaseMessage]ο
Retrieve the messages from DynamoDB
add_message(message)[source]ο
Append the message to the record in DynamoDB
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Clear session memory from DynamoDB
Return type
None
class langchain.memory.FileChatMessageHistory(file_path)[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history that stores history in a local file.
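Example (a minimal usage sketch; the file path is an arbitrary placeholder):
from langchain.memory import FileChatMessageHistory

history = FileChatMessageHistory("chat_history.json")
history.add_user_message("hi!")
history.add_ai_message("hello, how can I help?")
history.messages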
Parameters
file_path (str) β path of the local file to store the messages.
property messages: List[langchain.schema.BaseMessage]ο
Retrieve the messages from the local file
add_message(message)[source]ο
Append the message to the record in the local file
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Clear session memory from the local file
Return type
None
class langchain.memory.InMemoryEntityStore(*, store={})[source]ο
Bases: langchain.memory.entity.BaseEntityStore
Basic in-memory entity store.
Parameters
store (Dict[str, Optional[str]]) β
Return type
None
attribute store: Dict[str, Optional[str]] = {}ο
clear()[source]ο
Delete all entities from store.
Return type
None
delete(key)[source]ο
Delete entity value from store.
Parameters
key (str) β
Return type
None
exists(key)[source]ο
Check if entity exists in store.
Parameters
key (str) β
Return type
bool
get(key, default=None)[source]ο
Get entity value from store.
Parameters
key (str) β
default (Optional[str]) β
Return type
Optional[str]
set(key, value)[source]ο
Set entity value in store.
Parameters
key (str) β
value (Optional[str]) β
Return type
None
class langchain.memory.MomentoChatMessageHistory(session_id, cache_client, cache_name, *, key_prefix='message_store:', ttl=None, ensure_cache_exists=True)[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history cache that uses Momento as a backend.
See https://gomomento.com/
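Example (a minimal sketch; assumes the momento package is installed, a Momento auth token is available to the client, and the cache name is a placeholder):
from datetime import timedelta

from langchain.memory import MomentoChatMessageHistory

history = MomentoChatMessageHistory.from_client_params(
    "session-1",
    "langchain-demo-cache",
    timedelta(days=1),
)
history.add_user_message("hi!")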
Parameters
session_id (str) β
cache_client (momento.CacheClient) β
cache_name (str) β
key_prefix (str) β
ttl (Optional[timedelta]) β
ensure_cache_exists (bool) β
classmethod from_client_params(session_id, cache_name, ttl, *, configuration=None, auth_token=None, **kwargs)[source]ο
Construct cache from CacheClient parameters.
Parameters
session_id (str) β
cache_name (str) β
ttl (timedelta) β
configuration (Optional[momento.config.Configuration]) β
auth_token (Optional[str]) β
kwargs (Any) β
Return type
MomentoChatMessageHistory
property messages: list[langchain.schema.BaseMessage]ο
Retrieve the messages from Momento.
Raises
SdkException β Momento service or network error
Exception β Unexpected response
Returns
List of cached messages
Return type
list[BaseMessage]
add_message(message)[source]ο
Store a message in the cache.
Parameters
message (BaseMessage) β The message object to store.
Raises
SdkException β Momento service or network error.
Exception β Unexpected response.
Return type
None
clear()[source]ο
Remove the session's messages from the cache.
Raises
SdkException β Momento service or network error.
Exception β Unexpected response.
Return type
None
class langchain.memory.MongoDBChatMessageHistory(connection_string, session_id, database_name='chat_history', collection_name='message_store')[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history that stores history in MongoDB.
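Example (a minimal sketch; the connection string and session id are placeholders, and the pymongo package is assumed to be installed):
from langchain.memory import MongoDBChatMessageHistory

history = MongoDBChatMessageHistory(
    connection_string="mongodb://localhost:27017/",
    session_id="session-1",
)
history.add_user_message("hi!")
history.messages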
Parameters
connection_string (str) β connection string to connect to MongoDB
session_id (str) β arbitrary key that is used to store the messages
of a single chat session.
database_name (str) β name of the database to use
collection_name (str) β name of the collection to use
property messages: List[langchain.schema.BaseMessage]ο
Retrieve the messages from MongoDB
add_message(message)[source]ο
Append the message to the record in MongoDB
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Clear session memory from MongoDB
Return type
None
class langchain.memory.MotorheadMemory(*, chat_memory=None, output_key=None, input_key=None, return_messages=False, url='https://api.getmetal.io/v1/motorhead', session_id, context=None, api_key=None, client_id=None, timeout=3000, memory_key='history')[source]ο
Bases: langchain.memory.chat_memory.BaseChatMemory
Parameters
chat_memory (langchain.schema.BaseChatMessageHistory) β
output_key (Optional[str]) β
input_key (Optional[str]) β
return_messages (bool) β
url (str) β
session_id (str) β
context (Optional[str]) β
api_key (Optional[str]) β
client_id (Optional[str]) β
timeout (int) β
memory_key (str) β
Return type
None
attribute api_key: Optional[str] = Noneο
attribute client_id: Optional[str] = Noneο
attribute context: Optional[str] = Noneο
attribute session_id: str [Required]ο
attribute url: str = 'https://api.getmetal.io/v1/motorhead'ο
delete_session()[source]ο
Delete a session
Return type
None
async init()[source]ο
Return type
None
load_memory_variables(values)[source]ο
Return key-value pairs given the text input to the chain.
If None, return all memories
Parameters
values (Dict[str, Any]) β
Return type
Dict[str, Any]
save_context(inputs, outputs)[source]ο
Save context from this conversation to buffer.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property memory_variables: List[str]ο
Input keys this memory class will load dynamically.
class langchain.memory.PostgresChatMessageHistory(session_id, connection_string='postgresql://postgres:mypassword@localhost/chat_history', table_name='message_store')[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history stored in a Postgres database.
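Example (a minimal sketch; the connection string mirrors the default above and is a placeholder for a reachable Postgres instance):
from langchain.memory import PostgresChatMessageHistory

history = PostgresChatMessageHistory(
    session_id="session-1",
    connection_string="postgresql://postgres:mypassword@localhost/chat_history",
)
history.add_user_message("hi!")
history.add_ai_message("whats up?")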
Parameters
session_id (str) β
connection_string (str) β
table_name (str) β
property messages: List[langchain.schema.BaseMessage]ο
Retrieve the messages from PostgreSQL
add_message(message)[source]ο
Append the message to the record in PostgreSQL
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Clear session memory from PostgreSQL
Return type
None
class langchain.memory.ReadOnlySharedMemory(*, memory)[source]ο
Bases: langchain.schema.BaseMemory
A memory wrapper that is read-only and cannot be changed.
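Example (a minimal usage sketch showing a shared memory exposed read-only to a tool or sub-chain):
from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory

memory = ConversationBufferMemory(memory_key="chat_history")
# Tools or sub-chains can read the shared history but cannot write to it.
readonly_memory = ReadOnlySharedMemory(memory=memory)
readonly_memory.load_memory_variables({})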
Parameters
memory (langchain.schema.BaseMemory) β
Return type
None
attribute memory: langchain.schema.BaseMemory [Required]ο
clear()[source]ο
Nothing to clear, got a memory like a vault.
Return type
None
load_memory_variables(inputs)[source]ο
Load memory variables from memory.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, str]
save_context(inputs, outputs)[source]ο
Nothing should be saved or changed
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property memory_variables: List[str]ο
Return memory variables.
class langchain.memory.RedisChatMessageHistory(session_id, url='redis://localhost:6379/0', key_prefix='message_store:', ttl=None)[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history stored in a Redis database.
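Example (a minimal sketch; the URL points at a local Redis instance and the redis package is assumed to be installed):
from langchain.memory import RedisChatMessageHistory

history = RedisChatMessageHistory(session_id="session-1", url="redis://localhost:6379/0")
history.add_user_message("hi!")
history.add_ai_message("whats up?")
history.messages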
Parameters
session_id (str) β
url (str) β
key_prefix (str) β
ttl (Optional[int]) β
property key: strο
Construct the record key to use
property messages: List[langchain.schema.BaseMessage]ο
Retrieve the messages from Redis
add_message(message)[source]ο
Append the message to the record in Redis
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Clear session memory from Redis
Return type
None
class langchain.memory.RedisEntityStore(session_id='default', url='redis://localhost:6379/0', key_prefix='memory_store', ttl=86400, recall_ttl=259200, *args, redis_client=None)[source]ο
Bases: langchain.memory.entity.BaseEntityStore
Redis-backed Entity store. Entities get a TTL of 1 day by default, and
that TTL is extended by 3 days every time the entity is read back.
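Example (a minimal sketch wiring the store into ConversationEntityMemory; assumes a local Redis instance and an OpenAI key):
from langchain.llms import OpenAI
from langchain.memory import ConversationEntityMemory, RedisEntityStore

entity_store = RedisEntityStore(session_id="session-1", url="redis://localhost:6379/0")
memory = ConversationEntityMemory(llm=OpenAI(temperature=0), entity_store=entity_store)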
Parameters
session_id (str) β
url (str) β
key_prefix (str) β
ttl (Optional[int]) β
recall_ttl (Optional[int]) β
args (Any) β
redis_client (Any) β
Return type
None
attribute key_prefix: str = 'memory_store'ο
attribute recall_ttl: Optional[int] = 259200ο
attribute redis_client: Any = Noneο
attribute session_id: str = 'default'ο
attribute ttl: Optional[int] = 86400ο
clear()[source]ο
Delete all entities from store.
Return type
None
delete(key)[source]ο
Delete entity value from store.
Parameters
key (str) β
Return type
None
exists(key)[source]ο
Check if entity exists in store.
Parameters
key (str) β
Return type
bool
get(key, default=None)[source]ο
Get entity value from store.
Parameters
key (str) β
default (Optional[str]) β
Return type
Optional[str]
set(key, value)[source]ο
Set entity value in store.
Parameters
key (str) β
value (Optional[str]) β
Return type
None
property full_key_prefix: strο
class langchain.memory.SQLChatMessageHistory(session_id, connection_string, table_name='message_store')[source]ο
Bases: langchain.schema.BaseChatMessageHistory
Chat message history stored in an SQL database.
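Example (a minimal sketch; the SQLAlchemy-style connection string is a placeholder):
from langchain.memory import SQLChatMessageHistory

history = SQLChatMessageHistory(
    session_id="session-1",
    connection_string="sqlite:///chat_history.db",
)
history.add_user_message("hi!")
history.messages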
Parameters
session_id (str) β
connection_string (str) β
table_name (str) β
property messages: List[langchain.schema.BaseMessage]ο
Retrieve all messages from db
add_message(message)[source]ο
Append the message to the record in db
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
clear()[source]ο
Clear session memory from db
Return type
None
class langchain.memory.SQLiteEntityStore(session_id='default', db_file='entities.db', table_name='memory_store', *args)[source]ο
Bases: langchain.memory.entity.BaseEntityStore
SQLite-backed Entity store
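Example (a minimal sketch wiring the store into ConversationEntityMemory; the db file is a placeholder and an OpenAI key is assumed):
from langchain.llms import OpenAI
from langchain.memory import ConversationEntityMemory, SQLiteEntityStore

entity_store = SQLiteEntityStore(db_file="entities.db")
memory = ConversationEntityMemory(llm=OpenAI(temperature=0), entity_store=entity_store)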
Parameters
session_id (str) β
db_file (str) β
table_name (str) β
args (Any) β
Return type
None
attribute session_id: str = 'default'ο
attribute table_name: str = 'memory_store'ο
clear()[source]ο
Delete all entities from store.
Return type
None
delete(key)[source]ο
Delete entity value from store.
Parameters
key (str) β
Return type
None
exists(key)[source]ο
Check if entity exists in store.
Parameters
key (str) β
Return type
bool
get(key, default=None)[source]ο
Get entity value from store.
Parameters
key (str) β
default (Optional[str]) β
Return type
Optional[str]
set(key, value)[source]ο
Set entity value in store.
Parameters
key (str) β
value (Optional[str]) β
Return type
None
property full_table_name: strο
class langchain.memory.SimpleMemory(*, memories={})[source]ο
Bases: langchain.schema.BaseMemory
Simple memory for storing context or other bits of information that shouldn't
ever change between prompts.
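Example (a minimal usage sketch; the memory keys and values are arbitrary placeholders):
from langchain.memory import SimpleMemory

memory = SimpleMemory(memories={"budget": "100 GBP", "language": "English"})
memory.load_memory_variables({})  # always returns the same fixed key-value pairs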
Parameters
memories (Dict[str, Any]) β
Return type
None
attribute memories: Dict[str, Any] = {}ο
clear()[source]ο
Nothing to clear, got a memory like a vault.
Return type
None
load_memory_variables(inputs)[source]ο
Return key-value pairs given the text input to the chain.
If None, return all memories
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, str]
save_context(inputs, outputs)[source]ο
Nothing should be saved or changed, my memory is set in stone.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property memory_variables: List[str]ο
Input keys this memory class will load dynamically.
class langchain.memory.VectorStoreRetrieverMemory(*, retriever, memory_key='history', input_key=None, return_docs=False)[source]ο
Bases: langchain.schema.BaseMemory
Class for a VectorStore-backed memory object.
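Example (a minimal sketch; FAISS is used here only as an illustrative vector store, so the faiss-cpu and openai packages and OPENAI_API_KEY are assumed):
from langchain.embeddings import OpenAIEmbeddings
from langchain.memory import VectorStoreRetrieverMemory
from langchain.vectorstores import FAISS

vectorstore = FAISS.from_texts(["placeholder"], OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 1})
memory = VectorStoreRetrieverMemory(retriever=retriever)
memory.save_context({"input": "My favorite food is pizza"}, {"output": "noted"})
# The most relevant stored snippets are returned under "history".
memory.load_memory_variables({"input": "What food do I like?"})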
Parameters
retriever (langchain.vectorstores.base.VectorStoreRetriever) β
memory_key (str) β
input_key (Optional[str]) β
return_docs (bool) β
Return type
None
attribute input_key: Optional[str] = Noneο
Key name to index the inputs to load_memory_variables.
attribute memory_key: str = 'history'ο
Key name to locate the memories in the result of load_memory_variables.
attribute retriever: langchain.vectorstores.base.VectorStoreRetriever [Required]ο
VectorStoreRetriever object to connect to.
attribute return_docs: bool = Falseο
Whether or not to return the result of querying the database directly.
clear()[source]ο
Nothing to clear.
Return type
None
load_memory_variables(inputs)[source]ο
Return history buffer.
Parameters
inputs (Dict[str, Any]) β
Return type
Dict[str, Union[List[langchain.schema.Document], str]]
save_context(inputs, outputs)[source]ο
Save context from this conversation to buffer.
Parameters
inputs (Dict[str, Any]) β
outputs (Dict[str, str]) β
Return type
None
property memory_variables: List[str]ο
The list of keys emitted from the load_memory_variables method.
class langchain.memory.ZepChatMessageHistory(session_id, url='http://localhost:8000')[source]ο
Bases: langchain.schema.BaseChatMessageHistory
A ChatMessageHistory implementation that uses Zep as a backend.
Recommended usage:
from langchain.memory import ConversationBufferMemory, ZepChatMessageHistory

# Set up Zep Chat History (session_id and ZEP_API_URL are defined elsewhere)
zep_chat_history = ZepChatMessageHistory(
    session_id=session_id,
    url=ZEP_API_URL,
)
# Use a standard ConversationBufferMemory to encapsulate the Zep chat history
memory = ConversationBufferMemory(
    memory_key="chat_history", chat_memory=zep_chat_history
)
Zep provides long-term conversation storage for LLM apps. The server stores,
summarizes, embeds, indexes, and enriches conversational AI chat
histories, and exposes them via simple, low-latency APIs.
For server installation instructions and more, see: https://getzep.github.io/
This class is a thin wrapper around the zep-python package. Additional
Zep functionality is exposed via the zep_summary and zep_messages
properties.
For more information on the zep-python package, see:
https://github.com/getzep/zep-python
Parameters
session_id (str) β
url (str) β
Return type
None
property messages: List[langchain.schema.BaseMessage]ο
Retrieve messages from Zep memory
property zep_messages: List[Message]ο
Retrieve messages from Zep memory
property zep_summary: Optional[str]ο
Retrieve summary from Zep memory
add_message(message)[source]ο
Append the message to the Zep memory history
Parameters
message (langchain.schema.BaseMessage) β
Return type
None
search(query, metadata=None, limit=None)[source]ο
Search Zep memory for messages matching the query
Parameters
query (str) β
metadata (Optional[Dict]) β
limit (Optional[int]) β
Return type
List[MemorySearchResult]
clear()[source]ο
Clear session memory from Zep. Note that Zep is long-term storage for memory
and this is not advised unless you have specific data retention requirements.
Return type
None | https://api.python.langchain.com/en/latest/modules/memory.html |
7702ed66-abcf-4761-ad44-02001b146132 | Output Parsersο
class langchain.output_parsers.BooleanOutputParser(*, true_val='YES', false_val='NO')[source]ο
Bases: langchain.schema.BaseOutputParser[bool]
Parameters
true_val (str) β
false_val (str) β
Return type
None
attribute false_val: str = 'NO'ο
attribute true_val: str = 'YES'ο
parse(text)[source]ο
Parse the output of an LLM call to a boolean.
Parameters
text (str) β output of language model
Returns
boolean
Return type
bool
class langchain.output_parsers.CombiningOutputParser(*, parsers)[source]ο
Bases: langchain.schema.BaseOutputParser
Class to combine multiple output parsers into one.
Parameters
parsers (List[langchain.schema.BaseOutputParser]) β
Return type
None
attribute parsers: List[langchain.schema.BaseOutputParser] [Required]ο
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(text)[source]ο
Parse the output of an LLM call.
Parameters
text (str) β
Return type
Dict[str, Any]
class langchain.output_parsers.CommaSeparatedListOutputParser[source]ο
Bases: langchain.output_parsers.list.ListOutputParser
Parse out comma separated lists.
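Example (a minimal usage sketch):
from langchain.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
parser.get_format_instructions()  # asks the model to answer as a comma separated list
parser.parse("red, green, blue")  # -> ['red', 'green', 'blue']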
Return type
None
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(text)[source]ο
Parse the output of an LLM call.
Parameters
text (str) β
Return type
List[str]
class langchain.output_parsers.DatetimeOutputParser(*, format='%Y-%m-%dT%H:%M:%S.%fZ')[source]ο
Bases: langchain.schema.BaseOutputParser[datetime.datetime]
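Example (a minimal usage sketch using the default format):
from langchain.output_parsers import DatetimeOutputParser

parser = DatetimeOutputParser()  # default format: '%Y-%m-%dT%H:%M:%S.%fZ'
parser.parse("2023-07-04T12:30:00.000000Z")  # -> datetime.datetime(2023, 7, 4, 12, 30)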
Parameters
format (str) β
Return type
None
attribute format: str = '%Y-%m-%dT%H:%M:%S.%fZ'ο
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(response)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
response (str) β output of language model
Returns
structured output
Return type
datetime.datetime
class langchain.output_parsers.EnumOutputParser(*, enum)[source]ο
Bases: langchain.schema.BaseOutputParser
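Example (a minimal usage sketch; the Color enum is an arbitrary illustration):
from enum import Enum

from langchain.output_parsers import EnumOutputParser

class Color(Enum):
    RED = "red"
    GREEN = "green"
    BLUE = "blue"

parser = EnumOutputParser(enum=Color)
parser.parse("green")  # -> Color.GREEN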
Parameters
enum (Type[enum.Enum]) β
Return type
None
attribute enum: Type[enum.Enum] [Required]ο
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(response)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
response (str) β output of language model
Returns
structured output
Return type
Any
class langchain.output_parsers.GuardrailsOutputParser(*, guard=None, api=None, args=None, kwargs=None)[source]ο
Bases: langchain.schema.BaseOutputParser
Parameters
guard (Any) β
api (Optional[Callable]) β
args (Any) β
kwargs (Any) β
Return type
None
attribute api: Optional[Callable] = Noneο
attribute args: Any = Noneο
attribute guard: Any = Noneο
attribute kwargs: Any = Noneο
classmethod from_rail(rail_file, num_reasks=1, api=None, *args, **kwargs)[source]ο
Parameters
rail_file (str) β
num_reasks (int) β
api (Optional[Callable]) β
args (Any) β
kwargs (Any) β
Return type
langchain.output_parsers.rail_parser.GuardrailsOutputParser
classmethod from_rail_string(rail_str, num_reasks=1, api=None, *args, **kwargs)[source]ο
Parameters
rail_str (str) β
num_reasks (int) β
api (Optional[Callable]) β
args (Any) β
kwargs (Any) β
Return type
langchain.output_parsers.rail_parser.GuardrailsOutputParser
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(text)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text (str) β output of language model
Returns
structured output
Return type
Dict
class langchain.output_parsers.ListOutputParser[source]ο
Bases: langchain.schema.BaseOutputParser
Class to parse the output of an LLM call to a list.
Return type
None
abstract parse(text)[source]ο
Parse the output of an LLM call.
Parameters
text (str) β
Return type
List[str]
class langchain.output_parsers.OutputFixingParser(*, parser, retry_chain)[source]ο
Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]
Wraps a parser and tries to fix parsing errors.
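Example (a minimal sketch; base_parser and bad_output are placeholders for an existing parser, e.g. a PydanticOutputParser, and a malformed completion; assumes an OpenAI key):
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import OutputFixingParser

# `base_parser` is any existing output parser, e.g. a PydanticOutputParser.
fixing_parser = OutputFixingParser.from_llm(parser=base_parser, llm=ChatOpenAI())
# If base_parser.parse(bad_output) raises, the LLM is asked to repair the output.
fixing_parser.parse(bad_output)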
Parameters
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]) β
retry_chain (langchain.chains.llm.LLMChain) β
Return type
None
attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] [Required]ο
attribute retry_chain: langchain.chains.llm.LLMChain [Required]ο
classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'error', 'instructions'], output_parser=None, partial_variables={}, template='Instructions:\n--------------\n{instructions}\n--------------\nCompletion:\n--------------\n{completion}\n--------------\n\nAbove, the Completion did not satisfy the constraints given in the Instructions.\nError:\n--------------\n{error}\n--------------\n\nPlease try again. Please only respond with an answer that satisfies the constraints laid out in the Instructions:', template_format='f-string', validate_template=True))[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]) β
prompt (langchain.prompts.base.BasePromptTemplate) β
Return type
langchain.output_parsers.fix.OutputFixingParser[langchain.output_parsers.fix.T]
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(completion)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
completion (str) β output of language model
Returns
structured output
Return type
langchain.output_parsers.fix.T
class langchain.output_parsers.PydanticOutputParser(*, pydantic_object)[source]ο
Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.pydantic.T]
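Example (a minimal usage sketch; the Joke model is an arbitrary illustration):
from pydantic import BaseModel, Field

from langchain.output_parsers import PydanticOutputParser

class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

parser = PydanticOutputParser(pydantic_object=Joke)
parser.get_format_instructions()  # JSON-schema-based instructions to embed in a prompt
parser.parse('{"setup": "Why did the chicken cross the road?", "punchline": "To get to the other side."}')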
Parameters
pydantic_object (Type[langchain.output_parsers.pydantic.T]) β
Return type
None
attribute pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required]ο
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(text)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text (str) β output of language model
Returns
structured output
Return type
langchain.output_parsers.pydantic.T
class langchain.output_parsers.RegexDictParser(*, regex_pattern="{}:\\s?([^.'\\n']*)\\.?", output_key_to_format, no_update_value=None)[source]ο
Bases: langchain.schema.BaseOutputParser
Class to parse the output into a dictionary.
Parameters
regex_pattern (str) β
output_key_to_format (Dict[str, str]) β
no_update_value (Optional[str]) β
Return type
None
attribute no_update_value: Optional[str] = Noneο
attribute output_key_to_format: Dict[str, str] [Required]ο
attribute regex_pattern: str = "{}:\\s?([^.'\\n']*)\\.?"ο
parse(text)[source]ο
Parse the output of an LLM call.
Parameters
text (str) β
Return type
Dict[str, str]
class langchain.output_parsers.RegexParser(*, regex, output_keys, default_output_key=None)[source]ο
Bases: langchain.schema.BaseOutputParser
Class to parse the output into a dictionary.
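Example (a minimal usage sketch; the regex and keys are arbitrary illustrations):
from langchain.output_parsers import RegexParser

parser = RegexParser(
    regex=r"Score: (\d+)\nExplanation: (.*)",
    output_keys=["score", "explanation"],
)
parser.parse("Score: 8\nExplanation: mostly correct")
# -> {'score': '8', 'explanation': 'mostly correct'}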
Parameters
regex (str) β
output_keys (List[str]) β
default_output_key (Optional[str]) β
Return type
None
attribute default_output_key: Optional[str] = Noneο
attribute output_keys: List[str] [Required]ο
attribute regex: str [Required]ο
parse(text)[source]ο
Parse the output of an LLM call.
Parameters
text (str) β
Return type
Dict[str, str]
class langchain.output_parsers.ResponseSchema(*, name, description, type='string')[source]ο
Bases: pydantic.main.BaseModel
Parameters
name (str) β
description (str) β
type (str) β
Return type
None
attribute description: str [Required]ο
attribute name: str [Required]ο
attribute type: str = 'string'ο
class langchain.output_parsers.RetryOutputParser(*, parser, retry_chain)[source]ο
Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]
Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt and the completion to another
LLM, and telling it the completion did not satisfy criteria in the prompt.
Parameters
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) β
retry_chain (langchain.chains.llm.LLMChain) β
Return type
None
attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]ο
attribute retry_chain: langchain.chains.llm.LLMChain [Required]ο
classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nPlease try again:', template_format='f-string', validate_template=True))[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) β
prompt (langchain.prompts.base.BasePromptTemplate) β
Return type
langchain.output_parsers.retry.RetryOutputParser[langchain.output_parsers.retry.T]
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(completion)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
completion (str) β output of language model
Returns
structured output
Return type
langchain.output_parsers.retry.T
parse_with_prompt(completion, prompt_value)[source]ο
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion (str) β output of language model
prompt_value (langchain.schema.PromptValue) β prompt value
Returns
structured output
Return type
langchain.output_parsers.retry.T
class langchain.output_parsers.RetryWithErrorOutputParser(*, parser, retry_chain)[source]ο
Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]
Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt, the completion, AND the error
that was raised to another language model and telling it that the completion
did not work, and raised the given error. Differs from RetryOutputParser
in that this implementation provides the error that was raised back to the
LLM, which in theory should give it more information on how to fix it.
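Example (a minimal sketch; base_parser is a placeholder for an existing parser, e.g. a PydanticOutputParser, and an OpenAI key is assumed):
from langchain.llms import OpenAI
from langchain.output_parsers import RetryWithErrorOutputParser
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Answer the user query.\n{format_instructions}\n{query}\n")
prompt_value = prompt.format_prompt(
    format_instructions=base_parser.get_format_instructions(),
    query="Tell me a joke.",
)
retry_parser = RetryWithErrorOutputParser.from_llm(parser=base_parser, llm=OpenAI(temperature=0))
retry_parser.parse_with_prompt('{"not": "valid"}', prompt_value)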
Parameters
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) β
retry_chain (langchain.chains.llm.LLMChain) β
Return type
None
attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]ο
attribute retry_chain: langchain.chains.llm.LLMChain [Required]ο
classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'error', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nDetails: {error}\nPlease try again:', template_format='f-string', validate_template=True))[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) β
prompt (langchain.prompts.base.BasePromptTemplate) β
Return type
langchain.output_parsers.retry.RetryWithErrorOutputParser[langchain.output_parsers.retry.T]
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(completion)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
completion (str) β output of language model
Returns
structured output
Return type
langchain.output_parsers.retry.T
parse_with_prompt(completion, prompt_value)[source]ο
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion (str) β output of language model
prompt_value (langchain.schema.PromptValue) β prompt value
Returns
structured output
Return type
langchain.output_parsers.retry.T
class langchain.output_parsers.StructuredOutputParser(*, response_schemas)[source]ο
Bases: langchain.schema.BaseOutputParser
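Example (a minimal usage sketch; the schemas and parsed string are arbitrary illustrations):
from langchain.output_parsers import ResponseSchema, StructuredOutputParser

schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="source", description="source used to answer the question"),
]
parser = StructuredOutputParser.from_response_schemas(schemas)
parser.get_format_instructions()  # asks the model for a ```json code block with these keys
parser.parse('```json\n{"answer": "Paris", "source": "geography textbook"}\n```')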
Parameters
response_schemas (List[langchain.output_parsers.structured.ResponseSchema]) β
Return type
None
attribute response_schemas: List[langchain.output_parsers.structured.ResponseSchema] [Required]ο
classmethod from_response_schemas(response_schemas)[source]ο
Parameters
response_schemas (List[langchain.output_parsers.structured.ResponseSchema]) β
Return type
langchain.output_parsers.structured.StructuredOutputParser
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(text)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text (str) β output of language model
Returns
structured output
Return type
Any | https://api.python.langchain.com/en/latest/modules/output_parsers.html |