    exclude – fields to exclude from the new model; as with values, this takes precedence over include
    update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
    deep – set to True to make a deep copy of the model
    Returns: new model instance

dict(**kwargs: Any) → Dict
    Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult
    Run the LLM on the given prompt and input.

generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult
    Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) → int
    Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int
    Get the number of tokens in the messages.
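As a quick illustration of how these pydantic helpers fit together, the sketch below copies an existing LLM instance with one changed field and dumps its parameters. The model and field values are assumptions for illustration, not values taken from this reference.

    # Minimal sketch (assumed values); requires WRITER_API_KEY to be set.
    from langchain.llms import Writer

    llm = Writer(model_id="palmyra-base", temperature=1.0)

    # copy() duplicates the model; update= changes fields without re-validation,
    # so only pass values you already trust.
    cooler_llm = llm.copy(update={"temperature": 0.3})

    # dict() returns the LLM's parameters as a plain dictionary.
    print(cooler_llm.dict())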
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
    Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

save(file_path: Union[pathlib.Path, str]) → None
    Save the LLM.
    Parameters: file_path – Path to file to save the LLM to.
    Example:
        llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns and localns.

pydantic model langchain.llms.Writer
    Wrapper around Writer large language models.
    To use, you should have the environment variable WRITER_API_KEY set with your API key.
    Example:
        from langchain import Writer
        writer = Writer(model_id="palmyra-base")
    Validators:
        raise_deprecation » all fields
        set_verbose » verbose
        validate_environment » all fields

field base_url: Optional[str] = None
    Base URL to use; if None, it is chosen based on the model name.
field beam_search_diversity_rate: float = 1.0
    Only applies to beam search, i.e. when the beam width is >1. A higher value encourages beam search to return a more diverse set of candidates.
field beam_width: Optional[int] = None
    The number of concurrent candidates to keep track of during beam search.
field length: int = 256
    The maximum number of tokens to generate in the completion.
field length_pentaly: float = 1.0
    Only applies to beam search, i.e. when the beam width is >1. Larger values penalize long candidates more heavily, thus preferring shorter candidates.
field logprobs: bool = False
    Whether to return log probabilities.
field model_id: str = 'palmyra-base'
    Model name to use.
field random_seed: int = 0
    The model generates random results. Changing the random seed alone will produce a different response with similar characteristics. It is possible to reproduce results by fixing the random seed (assuming all other hyperparameters are also fixed).
field repetition_penalty: float = 1.0
    Penalizes repeated tokens according to frequency.
field stop: Optional[List[str]] = None
    Sequences at which completion generation will stop.
field temperature: float = 1.0
    What sampling temperature to use.
field tokens_to_generate: int = 24
    Max number of tokens to generate.
field top_k: int = 1
    The number of highest probability vocabulary tokens to keep for top-k filtering.
field top_p: float = 1.0
    Total probability mass of tokens to consider at each step.

__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → str
    Check cache and run the LLM on the given prompt and input.
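To show how these fields are typically combined in practice, here is a hedged sketch of constructing and calling the wrapper; the parameter values and prompt are illustrative assumptions, not recommendations from this reference.

    from langchain.llms import Writer

    # Assumes WRITER_API_KEY is set in the environment; values below are illustrative.
    writer = Writer(
        model_id="palmyra-base",
        temperature=0.7,
        tokens_to_generate=64,
        top_p=0.9,
        stop=["\n\n"],
    )

    # __call__ checks the cache and runs the LLM on a single prompt string.
    print(writer("Write a tagline for a mountain bike brand."))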
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult
    Run the LLM on the given prompt and input.

async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult
    Take in a list of prompt values and return an LLMResult.

classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model
    Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set since it adds all passed values.

copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model
    Duplicate a model, optionally choose which fields to include, exclude and change.
    Parameters:
        include – fields to include in the new model
        exclude – fields to exclude from the new model; as with values, this takes precedence over include
        update – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
        deep – set to True to make a deep copy of the model
    Returns: new model instance

dict(**kwargs: Any) → Dict
    Return a dictionary of the LLM.

generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult
    Run the LLM on the given prompt and input.
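The following hedged sketch shows the shape of the LLMResult that generate() returns for a batch of prompts; the prompts themselves are made-up examples.

    from langchain.llms import Writer

    # Assumes WRITER_API_KEY is set; prompts are illustrative.
    writer = Writer(model_id="palmyra-base")
    result = writer.generate(["Name a sock company.", "Name a kayak company."])

    # result.generations contains one list of Generation objects per input prompt.
    for prompt_generations in result.generations:
        print(prompt_generations[0].text)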
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None) → langchain.schema.LLMResult
    Take in a list of prompt values and return an LLMResult.

get_num_tokens(text: str) → int
    Get the number of tokens present in the text.

get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int
    Get the number of tokens in the messages.

json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode
    Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().

save(file_path: Union[pathlib.Path, str]) → None
    Save the LLM.
    Parameters: file_path – Path to file to save the LLM to.
    Example:
        llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns: Any) → None
    Try to update ForwardRefs on fields based on this Model, globalns and localns.
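Since save() writes the LLM's configuration to a YAML or JSON file, a natural round trip is to reload it later. The sketch below assumes LangChain's load_llm helper supports this wrapper and uses a local file name; both are illustrative assumptions.

    from langchain.llms import Writer
    from langchain.llms.loading import load_llm  # assumed helper for reloading saved LLMs

    llm = Writer(model_id="palmyra-base")

    # Persist the configuration (not the API key) ...
    llm.save(file_path="llm.yaml")

    # ... and rebuild an equivalent instance from the file later.
    restored = load_llm("llm.yaml")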
Embeddings

Wrappers around embedding modules.

pydantic model langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding
    Wrapper for Aleph Alpha's Asymmetric Embeddings. AA provides you with an endpoint to embed a document and a query. The models were optimized to make the embeddings of documents and the query for a document as similar as possible. To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/
    Example:
        from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding
        embeddings = AlephAlphaAsymmetricSemanticEmbedding()
        document = "This is the content of the document"
        query = "What is the content of the document?"
        doc_result = embeddings.embed_documents([document])
        query_result = embeddings.embed_query(query)

field compress_to_size: Optional[int] = 128
    Should the returned embeddings come back as the original 5120-dimensional vector, or should they be compressed to 128 dimensions.
field contextual_control_threshold: Optional[int] = None
    Attention control parameters only apply to those tokens that have explicitly been set in the request.
field control_log_additive: Optional[bool] = True
    Apply controls on prompt items by adding the log(control_factor) to attention scores.
field hosting: Optional[str] = 'https://api.aleph-alpha.com'
    Optional parameter that specifies which datacenters may process the request.
field model: Optional[str] = 'luminous-base'
    Model name to use.
field normalize: Optional[bool] = True
    Should returned embeddings be normalized.

embed_documents(texts: List[str]) → List[List[float]]
    Call out to Aleph Alpha's asymmetric Document endpoint.
    Parameters: texts – The list of texts to embed.
    Returns: List of embeddings, one for each text.

embed_query(text: str) → List[float]
    Call out to Aleph Alpha's asymmetric query embedding endpoint.
    Parameters: text – The text to embed.
    Returns: Embeddings for the text.

pydantic model langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding
    The symmetric version of Aleph Alpha's semantic embeddings. The main difference is that here, both the documents and queries are embedded with a SemanticRepresentation.Symmetric.

embed_documents(texts: List[str]) → List[List[float]]
    Call out to Aleph Alpha's Document endpoint.
    Parameters: texts – The list of texts to embed.
    Returns: List of embeddings, one for each text.

embed_query(text: str) → List[float]
    Call out to Aleph Alpha's query embedding endpoint.
    Parameters: text – The text to embed.
    Returns: Embeddings for the text.

pydantic model langchain.embeddings.CohereEmbeddings
    Wrapper around Cohere embedding models.
    To use, you should have the cohere python package installed, and the environment variable COHERE_API_KEY set with your API key, or pass it as a named parameter to the constructor.
    Example:
        from langchain.embeddings import CohereEmbeddings
        cohere = CohereEmbeddings(model="medium", cohere_api_key="my-api-key")

field model: str = 'large'
    Model name to use.
field truncate: Optional[str] = None
    Truncate embeddings that are too long from start or end ("NONE"|"START"|"END").

embed_documents(texts: List[str]) → List[List[float]]
    Call out to Cohere's embedding endpoint.
    Parameters: texts – The list of texts to embed.
    Returns: List of embeddings, one for each text.

embed_query(text: str) → List[float]
    Call out to Cohere's embedding endpoint.
    Parameters: text – The text to embed.
    Returns: Embeddings for the text.
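A hedged sketch of actually producing vectors with the Cohere wrapper follows; the model name, API key, and texts are placeholders, not values from this reference.

    from langchain.embeddings import CohereEmbeddings

    cohere = CohereEmbeddings(model="large", cohere_api_key="my-api-key")  # placeholder key

    doc_vectors = cohere.embed_documents(["First document.", "Second document."])
    query_vector = cohere.embed_query("Which document comes first?")

    # One embedding per document; the query embedding is a single vector.
    print(len(doc_vectors), len(query_vector))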
pydantic model langchain.embeddings.FakeEmbeddings

embed_documents(texts: List[str]) → List[List[float]]
    Embed search docs.

embed_query(text: str) → List[float]
    Embed query text.

pydantic model langchain.embeddings.HuggingFaceEmbeddings
    Wrapper around sentence_transformers embedding models.
    To use, you should have the sentence_transformers python package installed.
    Example:
        from langchain.embeddings import HuggingFaceEmbeddings
        model_name = "sentence-transformers/all-mpnet-base-v2"
        model_kwargs = {'device': 'cpu'}
        hf = HuggingFaceEmbeddings(model_name=model_name, model_kwargs=model_kwargs)

field cache_folder: Optional[str] = None
    Path to store models. Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable.
field encode_kwargs: Dict[str, Any] [Optional]
    Keyword arguments to pass when calling the encode method of the model.
field model_kwargs: Dict[str, Any] [Optional]
    Keyword arguments to pass to the model.
field model_name: str = 'sentence-transformers/all-mpnet-base-v2'
    Model name to use.

embed_documents(texts: List[str]) → List[List[float]]
    Compute doc embeddings using a HuggingFace transformer model.
    Parameters: texts – The list of texts to embed.
    Returns: List of embeddings, one for each text.

embed_query(text: str) → List[float]
    Compute query embeddings using a HuggingFace transformer model.
    Parameters: text – The text to embed.
    Returns: Embeddings for the text.

pydantic model langchain.embeddings.HuggingFaceHubEmbeddings
    Wrapper around HuggingFaceHub embedding models.
    To use, you should have the huggingface_hub python package installed, and the environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass it as a named parameter to the constructor.
    Example:
        from langchain.embeddings import HuggingFaceHubEmbeddings
        repo_id = "sentence-transformers/all-mpnet-base-v2"
        hf = HuggingFaceHubEmbeddings(
            repo_id=repo_id,
            task="feature-extraction",
            huggingfacehub_api_token="my-api-key",
        )

field model_kwargs: Optional[dict] = None
    Keyword arguments to pass to the model.
field repo_id: str = 'sentence-transformers/all-mpnet-base-v2'
    Model name to use.
field task: Optional[str] = 'feature-extraction'
    Task to call the model with.

embed_documents(texts: List[str]) → List[List[float]]
    Call out to HuggingFaceHub's embedding endpoint for embedding search docs.
    Parameters: texts – The list of texts to embed.
    Returns: List of embeddings, one for each text.

embed_query(text: str) → List[float]
    Call out to HuggingFaceHub's embedding endpoint for embedding query text.
    Parameters: text – The text to embed.
    Returns: Embeddings for the text.

pydantic model langchain.embeddings.HuggingFaceInstructEmbeddings
    Wrapper around sentence_transformers embedding models.
    To use, you should have the sentence_transformers and InstructorEmbedding python packages installed.
    Example:
        from langchain.embeddings import HuggingFaceInstructEmbeddings
        model_name = "hkunlp/instructor-large"
        model_kwargs = {'device': 'cpu'}
        hf = HuggingFaceInstructEmbeddings(
            model_name=model_name,
            model_kwargs=model_kwargs
        )

field cache_folder: Optional[str] = None
    Path to store models. Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable.
field embed_instruction: str = 'Represent the document for retrieval: '
    Instruction to use for embedding documents.
field model_kwargs: Dict[str, Any] [Optional]
    Keyword arguments to pass to the model.
field model_name: str = 'hkunlp/instructor-large'
    Model name to use.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '
    Instruction to use for embedding the query.

embed_documents(texts: List[str]) → List[List[float]]
    Compute doc embeddings using a HuggingFace instruct model.
    Parameters: texts – The list of texts to embed.
    Returns: List of embeddings, one for each text.

embed_query(text: str) → List[float]
    Compute query embeddings using a HuggingFace instruct model.
    Parameters: text – The text to embed.
    Returns: Embeddings for the text.
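Because this class prefixes documents and queries with different instructions, it can help to see both calls side by side. In the hedged sketch below the instruction strings simply restate the defaults and the texts are invented.

    from langchain.embeddings import HuggingFaceInstructEmbeddings

    hf = HuggingFaceInstructEmbeddings(
        model_name="hkunlp/instructor-large",
        embed_instruction="Represent the document for retrieval: ",
        query_instruction="Represent the question for retrieving supporting documents: ",
    )

    # Documents are embedded with embed_instruction; the query uses query_instruction.
    doc_vectors = hf.embed_documents(["LangChain wraps many embedding providers."])
    query_vector = hf.embed_query("Which providers does LangChain wrap?")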
pydantic model langchain.embeddings.LlamaCppEmbeddings
    Wrapper around llama.cpp embedding models.
    To use, you should have the llama-cpp-python library installed, and provide the path to the Llama model as a named parameter to the constructor. Check out: abetlen/llama-cpp-python
    Example:
        from langchain.embeddings import LlamaCppEmbeddings
        llama = LlamaCppEmbeddings(model_path="/path/to/model.bin")

field f16_kv: bool = False
    Use half-precision for key/value cache.
field logits_all: bool = False
    Return logits for all tokens, not just the last token.
field n_batch: Optional[int] = 8
    Number of tokens to process in parallel. Should be a number between 1 and n_ctx.
field n_ctx: int = 512
    Token context window.
field n_parts: int = -1
    Number of parts to split the model into. If -1, the number of parts is automatically determined.
field n_threads: Optional[int] = None
    Number of threads to use. If None, the number of threads is automatically determined.
field seed: int = -1
    Seed. If -1, a random seed is used.
field use_mlock: bool = False
    Force system to keep model in RAM.
field vocab_only: bool = False
    Only load the vocabulary, no weights.

embed_documents(texts: List[str]) → List[List[float]]
    Embed a list of documents using the Llama model.
    Parameters: texts – The list of texts to embed.
    Returns: List of embeddings, one for each text.

embed_query(text: str) → List[float]
    Embed a query using the Llama model.
    Parameters: text – The text to embed.
    Returns: Embeddings for the text.
pydantic model langchain.embeddings.OpenAIEmbeddings
    Wrapper around OpenAI embedding models.
    To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key, or pass it as a named parameter to the constructor.
    Example:
        from langchain.embeddings import OpenAIEmbeddings
        openai = OpenAIEmbeddings(openai_api_key="my-api-key")

    In order to use the library with Microsoft Azure endpoints, you need to set the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and optionally OPENAI_API_VERSION environment variables. The OPENAI_API_TYPE must be set to 'azure' and the others correspond to the properties of your endpoint. In addition, the deployment name must be passed as the model parameter.
    Example:
        import os
        os.environ["OPENAI_API_TYPE"] = "azure"
        os.environ["OPENAI_API_BASE"] = "https://your-endpoint.openai.azure.com/"
        os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"

        from langchain.embeddings.openai import OpenAIEmbeddings
        embeddings = OpenAIEmbeddings(
            deployment="your-embeddings-deployment-name",
            model="your-embeddings-model-name",
            api_base="https://your-endpoint.openai.azure.com/",
            api_type="azure",
        )
        text = "This is a test query."
        query_result = embeddings.embed_query(text)

field chunk_size: int = 1000
    Maximum number of texts to embed in each batch.
field max_retries: int = 6
    Maximum number of retries to make when generating.
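To make the batching behaviour concrete, here is a hedged usage sketch; the texts and the smaller chunk_size are illustrative, and the class default is used whenever the argument is omitted.

    from langchain.embeddings import OpenAIEmbeddings

    # Assumes OPENAI_API_KEY is set in the environment.
    embeddings = OpenAIEmbeddings()

    texts = ["doc one", "doc two", "doc three"]

    # Requests are sent in batches; chunk_size caps how many texts go into each batch.
    doc_vectors = embeddings.embed_documents(texts, chunk_size=2)
    query_vector = embeddings.embed_query("which doc mentions two?")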
embed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]]
    Call out to OpenAI's embedding endpoint for embedding search docs.
    Parameters:
        texts – The list of texts to embed.
        chunk_size – The chunk size of embeddings. If None, will use the chunk size specified by the class.
    Returns: List of embeddings, one for each text.

embed_query(text: str) → List[float]
    Call out to OpenAI's embedding endpoint for embedding query text.
    Parameters: text – The text to embed.
    Returns: Embedding for the text.

pydantic model langchain.embeddings.SagemakerEndpointEmbeddings
    Wrapper around custom Sagemaker Inference Endpoints.
    To use, you must supply the endpoint name from your deployed Sagemaker model and the region where it is deployed.
    To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
    If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Sagemaker endpoint. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html

field content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]
    The content handler class that provides the input and output transform functions to handle formats between the LLM and the endpoint.
field credentials_profile_name: Optional[str] = None
    The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used.
    See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
field endpoint_kwargs: Optional[Dict] = None
    Optional attributes passed to the invoke_endpoint function. See the boto3 docs for more info: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
field endpoint_name: str = ''
    The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region.
field model_kwargs: Optional[Dict] = None
    Keyword arguments to pass to the model.
field region_name: str = ''
    The AWS region where the Sagemaker model is deployed, e.g. us-west-2.

embed_documents(texts: List[str], chunk_size: int = 64) → List[List[float]]
    Compute doc embeddings using a SageMaker Inference Endpoint.
    Parameters:
        texts – The list of texts to embed.
        chunk_size – The chunk size defines how many input texts will be grouped together as one request. If None, will use the chunk size specified by the class.
    Returns: List of embeddings, one for each text.

embed_query(text: str) → List[float]
    Compute query embeddings using a SageMaker inference endpoint.
    Parameters: text – The text to embed.
    Returns: Embeddings for the text.
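Because content_handler is required but no example is shown here, the following is a rough, assumption-laden sketch of what a handler and endpoint wrapper might look like. The endpoint name, region, and JSON payload keys are invented and depend entirely on how your model was deployed; only the EmbeddingsContentHandler import path is taken from the field type above.

    import json
    from typing import List
    from langchain.embeddings import SagemakerEndpointEmbeddings
    from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

    class ContentHandler(EmbeddingsContentHandler):
        content_type = "application/json"
        accepts = "application/json"

        def transform_input(self, prompts: List[str], model_kwargs: dict) -> bytes:
            # Payload shape is an assumption; match it to your deployed model.
            return json.dumps({"text_inputs": prompts, **model_kwargs}).encode("utf-8")

        def transform_output(self, output) -> List[List[float]]:
            # output is the HTTP response body returned by the endpoint.
            response = json.loads(output.read().decode("utf-8"))
            return response["embedding"]  # key name is hypothetical

    embeddings = SagemakerEndpointEmbeddings(
        endpoint_name="my-embeddings-endpoint",  # hypothetical endpoint name
        region_name="us-west-2",
        content_handler=ContentHandler(),
    )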
pydantic model langchain.embeddings.SelfHostedEmbeddings
    Runs custom embedding models on self-hosted remote hardware.
    Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.).
    To use, you should have the runhouse python package installed.
    Example using a model load function:
        from langchain.embeddings import SelfHostedEmbeddings
        from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
        import runhouse as rh

        gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")

        def get_pipeline():
            model_id = "facebook/bart-large"
            tokenizer = AutoTokenizer.from_pretrained(model_id)
            model = AutoModelForCausalLM.from_pretrained(model_id)
            return pipeline("feature-extraction", model=model, tokenizer=tokenizer)

        embeddings = SelfHostedEmbeddings(
            model_load_fn=get_pipeline,
            hardware=gpu,
            model_reqs=["./", "torch", "transformers"],
        )
    Example passing in a pipeline path:
        import pickle
        from langchain.embeddings import SelfHostedHFEmbeddings
        import runhouse as rh
        from transformers import pipeline

        gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
        pipeline = pipeline(model="bert-base-uncased", task="feature-extraction")
        rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(gpu, path="models")
        embeddings = SelfHostedHFEmbeddings.from_pipeline(
            pipeline="models/pipeline.pkl",
            hardware=gpu,
            model_reqs=["./", "torch", "transformers"],
        )
    Validators:
        raise_deprecation » all fields
        set_verbose » verbose

field inference_fn: Callable = <function _embed_documents>
    Inference function to extract the embeddings on the remote hardware.
field inference_kwargs: Any = None
    Any kwargs to pass to the model's inference function.
embed_documents(texts: List[str]) → List[List[float]]
    Compute doc embeddings using a HuggingFace transformer model.
    Parameters: texts – The list of texts to embed.
    Returns: List of embeddings, one for each text.

embed_query(text: str) → List[float]
    Compute query embeddings using a HuggingFace transformer model.
    Parameters: text – The text to embed.
    Returns: Embeddings for the text.

pydantic model langchain.embeddings.SelfHostedHuggingFaceEmbeddings
    Runs sentence_transformers embedding models on self-hosted remote hardware.
    Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.).
    To use, you should have the runhouse python package installed.
    Example:
        from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
        import runhouse as rh
        model_name = "sentence-transformers/all-mpnet-base-v2"
        gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
        hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)
    Validators:
        raise_deprecation » all fields
        set_verbose » verbose

field hardware: Any = None
    Remote hardware to send the inference function to.
field inference_fn: Callable = <function _embed_documents>
    Inference function to extract the embeddings.
field load_fn_kwargs: Optional[dict] = None
    Keyword arguments to pass to the model load function.
field model_id: str = 'sentence-transformers/all-mpnet-base-v2'
    Model name to use.
field model_load_fn: Callable = <function load_embedding_model>
    Function to load the model remotely on the server.
field model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']
    Requirements to install on hardware to inference the model.

pydantic model langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings
    Runs InstructorEmbedding embedding models on self-hosted remote hardware.
    Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.).
    To use, you should have the runhouse python package installed.
    Example:
        from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings
        import runhouse as rh
        model_name = "hkunlp/instructor-large"
        gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')
        hf = SelfHostedHuggingFaceInstructEmbeddings(model_name=model_name, hardware=gpu)
    Validators:
        raise_deprecation » all fields
        set_verbose » verbose

field embed_instruction: str = 'Represent the document for retrieval: '
    Instruction to use for embedding documents.
field model_id: str = 'hkunlp/instructor-large'
    Model name to use.
field model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']
    Requirements to install on hardware to inference the model.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '
    Instruction to use for embedding the query.
embed_documents(texts: List[str]) → List[List[float]]
    Compute doc embeddings using a HuggingFace instruct model.
    Parameters: texts – The list of texts to embed.
    Returns: List of embeddings, one for each text.

embed_query(text: str) → List[float]
    Compute query embeddings using a HuggingFace instruct model.
    Parameters: text – The text to embed.
    Returns: Embeddings for the text.

langchain.embeddings.SentenceTransformerEmbeddings
    Alias of langchain.embeddings.huggingface.HuggingFaceEmbeddings.

pydantic model langchain.embeddings.TensorflowHubEmbeddings
    Wrapper around tensorflow_hub embedding models.
    To use, you should have the tensorflow_text python package installed.
    Example:
        from langchain.embeddings import TensorflowHubEmbeddings
        url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
        tf = TensorflowHubEmbeddings(model_url=url)

field model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'
    Model URL to use.

embed_documents(texts: List[str]) → List[List[float]]
    Compute doc embeddings using a TensorflowHub embedding model.
    Parameters: texts – The list of texts to embed.
    Returns: List of embeddings, one for each text.

embed_query(text: str) → List[float]
    Compute query embeddings using a TensorflowHub embedding model.
    Parameters: text – The text to embed.
    Returns: Embeddings for the text.
Quickstart Guide

Contents
    Installation
    Environment Setup
    Building a Language Model Application: LLMs
        LLMs: Get predictions from a language model
        Prompt Templates: Manage prompts for LLMs
        Chains: Combine LLMs and prompts in multi-step workflows
        Agents: Dynamically Call Chains Based on User Input
        Memory: Add State to Chains and Agents
    Building a Language Model Application: Chat Models
        Get Message Completions from a Chat Model
        Chat Prompt Templates
        Chains with Chat Models
        Agents with Chat Models
        Memory: Add State to Chains and Agents

This tutorial gives you a quick walkthrough of building an end-to-end language model application with LangChain.

Installation

To get started, install LangChain with the following command:

    pip install langchain
    # or
    conda install langchain -c conda-forge

Environment Setup

Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc.

For this example, we will be using OpenAI's APIs, so we will first need to install their SDK:

    pip install openai

We will then need to set the environment variable in the terminal:

    export OPENAI_API_KEY="..."

Alternatively, you could do this from inside the Jupyter notebook (or Python script):

    import os
    os.environ["OPENAI_API_KEY"] = "..."

Building a Language Model Application: LLMs

Now that we have installed LangChain and set up our environment, we can start building our language model application.

LangChain provides many modules that can be used to build language model applications. Modules can be combined to create more complex applications, or used individually for simple applications.

LLMs: Get predictions from a language model
The most basic building block of LangChain is calling an LLM on some input. Let's walk through a simple example of how to do this. For this purpose, let's pretend we are building a service that generates a company name based on what the company makes.

In order to do this, we first need to import the LLM wrapper.

    from langchain.llms import OpenAI

We can then initialize the wrapper with any arguments. In this example, we probably want the outputs to be MORE random, so we'll initialize it with a HIGH temperature.

    llm = OpenAI(temperature=0.9)

We can now call it on some input!

    text = "What would be a good company name for a company that makes colorful socks?"
    print(llm(text))

    Feetful of Fun

For more details on how to use LLMs within LangChain, see the LLM getting started guide.

Prompt Templates: Manage prompts for LLMs

Calling an LLM is a great first step, but it's just the beginning. Normally when you use an LLM in an application, you are not sending user input directly to the LLM. Instead, you are probably taking user input and constructing a prompt, and then sending that to the LLM.

For example, in the previous example, the text we passed in was hardcoded to ask for a name for a company that made colorful socks. In this imaginary service, what we would want to do is take only the user input describing what the company does, and then format the prompt with that information.

This is easy to do with LangChain! First let's define the prompt template:

    from langchain.prompts import PromptTemplate

    prompt = PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    )
template="What is a good name for a company that makes {product}?", ) Let’s now see how this works! We can call the .format method to format it. print(prompt.format(product="colorful socks")) What is a good name for a company that makes colorful socks? For more details, check out the getting started guide for prompts. Chains: Combine LLMs and prompts in multi-step workflows# Up until now, we’ve worked with the PromptTemplate and LLM primitives by themselves. But of course, a real application is not just one primitive, but rather a combination of them. A chain in LangChain is made up of links, which can be either primitives like LLMs or other chains. The most core type of chain is an LLMChain, which consists of a PromptTemplate and an LLM. Extending the previous example, we can construct an LLMChain which takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM. from langchain.prompts import PromptTemplate from langchain.llms import OpenAI llm = OpenAI(temperature=0.9) prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?", ) We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM: from langchain.chains import LLMChain chain = LLMChain(llm=llm, prompt=prompt) Now we can run that chain only specifying the product! chain.run("colorful socks") # -> '\n\nSocktastic!' There we go! There’s the first chain - an LLM Chain.
There we go! There's the first chain - an LLM Chain. This is one of the simpler types of chains, but understanding how it works will set you up well for working with more complex chains.

For more details, check out the getting started guide for chains.

Agents: Dynamically Call Chains Based on User Input

So far the chains we've looked at run in a predetermined order. Agents no longer do: they use an LLM to determine which actions to take and in what order. An action can either be using a tool and observing its output, or returning to the user.

When used correctly agents can be extremely powerful. In this tutorial, we show you how to easily use agents through the simplest, highest level API.

In order to load agents, you should understand the following concepts:

    Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. The interface for a tool is currently a function that is expected to have a string as an input, with a string as an output.
    LLM: The language model powering the agent.
    Agent: The agent to use. This should be a string that references a supported agent class. Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see the documentation for custom agents (coming soon).

Agents: For a list of supported agents and their specifications, see here.
Tools: For a list of predefined tools and their specifications, see here.

For this example, you will also need to install the SerpAPI Python package.

    pip install google-search-results

And set the appropriate environment variables.

    import os
    os.environ["SERPAPI_API_KEY"] = "..."
Now we can get started!

    from langchain.agents import load_tools
    from langchain.agents import initialize_agent
    from langchain.agents import AgentType
    from langchain.llms import OpenAI

    # First, let's load the language model we're going to use to control the agent.
    llm = OpenAI(temperature=0)

    # Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
    tools = load_tools(["serpapi", "llm-math"], llm=llm)

    # Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
    agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

    # Now let's test it out!
    agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")

    > Entering new AgentExecutor chain...
    I need to find the temperature first, then use the calculator to raise it to the .023 power.
    Action: Search
    Action Input: "High temperature in SF yesterday"
    Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...
    Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.
    Action: Calculator
    Action Input: 57^.023
    Observation: Answer: 1.0974509573251117
    Thought: I now know the final answer
    Final Answer: The high temperature in SF yesterday in Fahrenheit raised to the .023 power is 1.0974509573251117.
    > Finished chain.

Memory: Add State to Chains and Agents

So far, all the chains and agents we've gone through have been stateless. But often, you may want a chain or agent to have some concept of "memory" so that it may remember information about its previous interactions. The clearest and simplest example of this is when designing a chatbot - you want it to remember previous messages so it can use context from them to have a better conversation. This would be a type of "short-term memory". On the more complex side, you could imagine a chain/agent remembering key pieces of information over time - this would be a form of "long-term memory". For more concrete ideas on the latter, see this awesome paper.

LangChain provides several specially created chains just for this purpose. This notebook walks through using one of those chains (the ConversationChain) with two different types of memory.

By default, the ConversationChain has a simple type of memory that remembers all previous inputs/outputs and adds them to the context that is passed. Let's take a look at using this chain (setting verbose=True so we can see the prompt).

    from langchain import OpenAI, ConversationChain

    llm = OpenAI(temperature=0)
    conversation = ConversationChain(llm=llm, verbose=True)

    output = conversation.predict(input="Hi there!")
    print(output)

    > Entering new chain...
    Prompt after formatting:
    The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

    Current conversation:
    Human: Hi there!
    AI:

    > Finished chain.
    ' Hello! How are you today?'

    output = conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
    print(output)
    > Entering new chain...
    Prompt after formatting:
    The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

    Current conversation:
    Human: Hi there!
    AI: Hello! How are you today?
    Human: I'm doing well! Just having a conversation with an AI.
    AI:

    > Finished chain.
    " That's great! What would you like to talk about?"

Building a Language Model Application: Chat Models

Similarly, you can use chat models instead of LLMs. Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.

Chat model APIs are fairly new, so we are still figuring out the correct abstractions.

Get Message Completions from a Chat Model

You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage – ChatMessage takes in an arbitrary role parameter. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.

    from langchain.chat_models import ChatOpenAI
    from langchain.schema import (
        AIMessage,
        HumanMessage,
        SystemMessage
    )

    chat = ChatOpenAI(temperature=0)

You can get completions by passing in a single message.
    chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
    # -> AIMessage(content="J'aime programmer.", additional_kwargs={})

You can also pass in multiple messages for OpenAI's gpt-3.5-turbo and gpt-4 models.

    messages = [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love programming.")
    ]
    chat(messages)
    # -> AIMessage(content="J'aime programmer.", additional_kwargs={})

You can go one step further and generate completions for multiple sets of messages using generate. This returns an LLMResult with an additional message parameter:

    batch_messages = [
        [
            SystemMessage(content="You are a helpful assistant that translates English to French."),
            HumanMessage(content="Translate this sentence from English to French. I love programming.")
        ],
        [
            SystemMessage(content="You are a helpful assistant that translates English to French."),
            HumanMessage(content="Translate this sentence from English to French. I love artificial intelligence.")
        ],
    ]
    result = chat.generate(batch_messages)
    result
    # -> LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}})

You can recover things like token usage from this LLMResult:
    result.llm_output['token_usage']
    # -> {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}

Chat Prompt Templates

Similar to LLMs, you can make use of templating by using a MessagePromptTemplate. You can build a ChatPromptTemplate from one or more MessagePromptTemplates. You can use ChatPromptTemplate's format_prompt – this returns a PromptValue, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an LLM or chat model.

For convenience, there is a from_template method exposed on the template. If you were to use this template, this is what it would look like:

    from langchain.chat_models import ChatOpenAI
    from langchain.prompts.chat import (
        ChatPromptTemplate,
        SystemMessagePromptTemplate,
        HumanMessagePromptTemplate,
    )

    chat = ChatOpenAI(temperature=0)

    template = "You are a helpful assistant that translates {input_language} to {output_language}."
    system_message_prompt = SystemMessagePromptTemplate.from_template(template)
    human_template = "{text}"
    human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

    chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

    # get a chat completion from the formatted messages
    chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
    # -> AIMessage(content="J'aime programmer.", additional_kwargs={})

Chains with Chat Models

The LLMChain discussed in the above section can be used with chat models as well:

    from langchain.chat_models import ChatOpenAI
    from langchain import LLMChain
    from langchain.prompts.chat import (
        ChatPromptTemplate,
        SystemMessagePromptTemplate,
        HumanMessagePromptTemplate,
    )

    chat = ChatOpenAI(temperature=0)

    template = "You are a helpful assistant that translates {input_language} to {output_language}."
    system_message_prompt = SystemMessagePromptTemplate.from_template(template)
    human_template = "{text}"
    human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
    chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

    chain = LLMChain(llm=chat, prompt=chat_prompt)
    chain.run(input_language="English", output_language="French", text="I love programming.")
    # -> "J'aime programmer."

Agents with Chat Models

Agents can also be used with chat models; you can initialize one using AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION as the agent type.

    from langchain.agents import load_tools
    from langchain.agents import initialize_agent
    from langchain.agents import AgentType
    from langchain.chat_models import ChatOpenAI
    from langchain.llms import OpenAI

    # First, let's load the language model we're going to use to control the agent.
    chat = ChatOpenAI(temperature=0)

    # Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
    llm = OpenAI(temperature=0)
    tools = load_tools(["serpapi", "llm-math"], llm=llm)

    # Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
    agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
    # Now let's test it out!
    agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")

    > Entering new AgentExecutor chain...
    Thought: I need to use a search engine to find Olivia Wilde's boyfriend and a calculator to raise his age to the 0.23 power.
    Action:
    {
      "action": "Search",
      "action_input": "Olivia Wilde boyfriend"
    }
    Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
    Thought: I need to use a search engine to find Harry Styles' current age.
    Action:
    {
      "action": "Search",
      "action_input": "Harry Styles age"
    }
    Observation: 29 years
    Thought: Now I need to calculate 29 raised to the 0.23 power.
    Action:
    {
      "action": "Calculator",
      "action_input": "29^0.23"
    }
    Observation: Answer: 2.169459462491557
    Thought: I now know the final answer.
    Final Answer: 2.169459462491557

    > Finished chain.
    '2.169459462491557'

Memory: Add State to Chains and Agents

You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.
    from langchain.prompts import (
        ChatPromptTemplate,
        MessagesPlaceholder,
        SystemMessagePromptTemplate,
        HumanMessagePromptTemplate
    )
    from langchain.chains import ConversationChain
    from langchain.chat_models import ChatOpenAI
    from langchain.memory import ConversationBufferMemory

    prompt = ChatPromptTemplate.from_messages([
        SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
        MessagesPlaceholder(variable_name="history"),
        HumanMessagePromptTemplate.from_template("{input}")
    ])

    llm = ChatOpenAI(temperature=0)
    memory = ConversationBufferMemory(return_messages=True)
    conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)

    conversation.predict(input="Hi there!")
    # -> 'Hello! How can I assist you today?'

    conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
    # -> "That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?"

    conversation.predict(input="Tell me about yourself.")
    # -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?"