343962c61522-8
Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
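As an illustration of the generate_prompt contract described above, here is a minimal sketch (not from the official docs): it assumes an OPENAI_API_KEY environment variable and uses an illustrative model name and prompts.

.. code-block:: python

    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate

    # Illustrative model and settings; assumes OPENAI_API_KEY is set in the environment.
    llm = OpenAI(model_name="text-davinci-003", temperature=0)

    # PromptTemplate.format_prompt() returns a PromptValue, the input type generate_prompt expects.
    template = PromptTemplate.from_template("Write a one-line tagline for a {product}.")
    prompt_values = [template.format_prompt(product=p) for p in ("toaster", "kayak")]

    result = llm.generate_prompt(prompt_values, stop=["\n\n"])
    for gens in result.generations:   # one list of candidate Generations per input prompt
        print(gens[0].text)
    print(result.llm_output)          # provider-specific extras such as token usage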
343962c61522-9
This method allows to get an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows to get an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]][source]¶ Get the sub prompts for llm call.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
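A short sketch of the token-counting helpers above; the model name and example strings are assumptions, and an OPENAI_API_KEY is expected in the environment.

.. code-block:: python

    from langchain.llms import OpenAI
    from langchain.schema import HumanMessage, SystemMessage

    llm = OpenAI(model_name="text-davinci-003")  # assumes OPENAI_API_KEY is set

    text = "LangChain composes calls to language models."
    print(llm.get_num_tokens(text))  # token count for a plain string (uses tiktoken for OpenAI models)

    messages = [
        SystemMessage(content="You are a terse assistant."),
        HumanMessage(content="Summarize Hamlet in one sentence."),
    ]
    # Sum of the token counts across all messages.
    print(llm.get_num_tokens_from_messages(messages))

    # The serialization namespace, e.g. ["langchain", "llms", "openai"].
    print(OpenAI.get_lc_namespace())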
343962c61522-10
Get the sub prompts for llm call. get_token_ids(text: str) → List[int][source]¶ Get the token IDs using the tiktoken package. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool[source]¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
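A sketch of invoke() with a RunnableConfig passed as a plain dict; the tags, metadata, and prompt are illustrative assumptions.

.. code-block:: python

    from langchain.llms import OpenAI

    llm = OpenAI(model_name="text-davinci-003", temperature=0)  # assumes OPENAI_API_KEY is set

    # invoke() takes a string (or a PromptValue / message list) and returns the completion string.
    answer = llm.invoke(
        "Q: What is the capital of France?\nA:",
        config={"tags": ["demo"], "metadata": {"purpose": "invoke-example"}},
        stop=["\n"],
    )
    print(answer)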
343962c61522-11
Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. max_tokens_for_prompt(prompt: str) → int[source]¶ Calculate the maximum number of tokens possible to generate for a prompt. Parameters prompt – The prompt to pass into the model. Returns The maximum number of tokens to generate for a prompt. Example max_tokens = openai.max_tokens_for_prompt("Tell me a joke.") static modelname_to_contextsize(modelname: str) → int[source]¶ Calculate the maximum number of tokens possible to generate for a model. Parameters modelname – The model name we want to know the context size for. Returns The maximum context size. Example max_tokens = openai.modelname_to_contextsize("text-davinci-003") classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
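A sketch of the two context-size helpers documented above; the model name is illustrative and the exact context size depends on the model.

.. code-block:: python

    from langchain.llms import OpenAI

    # Static helper: no instance (or API key) needed to look up a model's context window.
    print(OpenAI.modelname_to_contextsize("text-davinci-003"))

    llm = OpenAI(model_name="text-davinci-003")  # assumes OPENAI_API_KEY is set
    # Context size minus the prompt's token count = room left for the completion.
    print(llm.max_tokens_for_prompt("Tell me a joke."))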
343962c61522-12
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text,use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
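A sketch of stream() based on the description above; the prompt is an illustrative assumption and an OPENAI_API_KEY is expected in the environment.

.. code-block:: python

    from langchain.llms import OpenAI

    llm = OpenAI(model_name="text-davinci-003", temperature=0)  # assumes OPENAI_API_KEY is set

    # stream() yields string chunks; per the docstring above, the base implementation
    # falls back to a single chunk (the full invoke() result) unless the subclass
    # supports true streaming.
    for chunk in llm.stream("Write a haiku about the sea."):
        print(chunk, end="", flush=True)
    print()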
343962c61522-13
to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id,
lang/api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
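A sketch of with_fallbacks(); both model choices and the exception tuple are illustrative assumptions.

.. code-block:: python

    from langchain.llms import OpenAI

    primary = OpenAI(model_name="text-davinci-003")   # assumed primary model
    backup = OpenAI(model_name="text-curie-001")      # assumed cheaper fallback

    # The returned runnable tries `primary` first, then `backup`, whenever one of the
    # listed exception types is raised.
    robust_llm = primary.with_fallbacks([backup], exceptions_to_handle=(Exception,))
    print(robust_llm.invoke("Say hello in French."))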
343962c61522-14
type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. property InputType: TypeAlias¶ Get the input type for this runnable. property OutputType: Type[str]¶ Get the output type for this runnable. property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict[str, Any]¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”}
lang/api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
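A sketch of with_retry() using the parameters documented above; the prompt and retry settings are illustrative.

.. code-block:: python

    from langchain.llms import OpenAI

    llm = OpenAI(model_name="text-davinci-003")  # assumes OPENAI_API_KEY is set

    # Retry transient failures up to three times, with exponential backoff plus jitter.
    retrying_llm = llm.with_retry(
        retry_if_exception_type=(Exception,),
        wait_exponential_jitter=True,
        stop_after_attempt=3,
    )
    print(retrying_llm.invoke("Name three prime numbers."))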
343962c61522-15
property max_context_size: int¶ Get max context size for this model. property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.openai.BaseOpenAI.html
79933c6a99ac-0
langchain.llms.vertexai.completion_with_retry¶ langchain.llms.vertexai.completion_with_retry(llm: VertexAI, *args: Any, run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any) → Any[source]¶ Use tenacity to retry the completion call.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.completion_with_retry.html
a8af0bb016aa-0
langchain.llms.gooseai.GooseAI¶ class langchain.llms.gooseai.GooseAI[source]¶ Bases: LLM GooseAI large language models. To use, you should have the openai Python package installed, and the environment variable GOOSEAI_API_KEY set with your API key. Any parameters that are valid to be passed to the openai.create call can be passed in, even if not explicitly saved on this class. Example from langchain.llms import GooseAI gooseai = GooseAI(model_name="gpt-neo-20b") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param client: Any = None¶ param frequency_penalty: float = 0¶ Penalizes repeated tokens according to frequency. param gooseai_api_key: Optional[pydantic.types.SecretStr] = None¶ Constraints type = string writeOnly = True format = password param logit_bias: Optional[Dict[str, float]] [Optional]¶ Adjust the probability of specific tokens being generated. param max_tokens: int = 256¶ The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param min_tokens: int = 1¶ The minimum number of tokens to generate in the completion. param model_kwargs: Dict[str, Any] [Optional]¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
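Expanding on the class example above, a hedged sketch of constructing GooseAI with a few of the documented parameters; the key handling, parameter values, and prompt are illustrative.

.. code-block:: python

    import os
    from langchain.llms import GooseAI

    # GooseAI reads its key from the GOOSEAI_API_KEY environment variable.
    os.environ["GOOSEAI_API_KEY"] = "..."  # placeholder, not a real key

    llm = GooseAI(
        model_name="gpt-neo-20b",   # the class default
        temperature=0.7,
        max_tokens=128,
        frequency_penalty=0.2,      # penalize tokens that repeat frequently
    )
    print(llm("Write a two-sentence product description for a solar lantern."))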
a8af0bb016aa-1
Holds any model parameters valid for create call not explicitly specified. param model_name: str = 'gpt-neo-20b'¶ Model name to use param n: int = 1¶ How many completions to generate for each prompt. param presence_penalty: float = 0¶ Penalizes repeated tokens. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: float = 0.7¶ What sampling temperature to use param top_p: float = 1¶ Total probability mass of tokens to consider at each step. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
a8af0bb016aa-2
e.g., if the underlying runnable uses an API which supports a batch mode. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
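A sketch of the asynchronous batched call described above; the prompts and stop sequence are illustrative, and GOOSEAI_API_KEY is assumed to be set.

.. code-block:: python

    import asyncio
    from langchain.llms import GooseAI

    async def main() -> None:
        llm = GooseAI(model_name="gpt-neo-20b")
        # agenerate() sends several prompts in one asynchronous, batched call and
        # returns an LLMResult, mirroring the synchronous generate().
        result = await llm.agenerate(
            ["Define entropy in one sentence.", "Define enthalpy in one sentence."],
            stop=["\n\n"],
        )
        for gens in result.generations:
            print(gens[0].text)

    asyncio.run(main())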
a8af0bb016aa-3
functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the topcandidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the topcandidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
a8af0bb016aa-4
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
a8af0bb016aa-5
input is still being generated. batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
lang/api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
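A sketch of bind() and batch() as documented above; the stop sequence, prompts, and concurrency limit are illustrative assumptions.

.. code-block:: python

    from langchain.llms import GooseAI

    llm = GooseAI(model_name="gpt-neo-20b")  # assumes GOOSEAI_API_KEY is set

    # bind() returns a new runnable with the given kwargs pre-applied to every call.
    short_llm = llm.bind(stop=["\n"])

    # batch() runs invoke() over the inputs in parallel using a thread pool;
    # return_exceptions=True keeps one failing input from aborting the whole batch.
    outputs = short_llm.batch(
        ["The color of the sky is", "The capital of Japan is"],
        config={"max_concurrency": 2},
        return_exceptions=True,
    )
    print(outputs)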
a8af0bb016aa-6
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. classmethod from_orm(obj: Any) → Model¶ generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
a8af0bb016aa-7
Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows you to get an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
a8af0bb016aa-8
For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows to get an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occurin the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Transform a single input into an output. Override to implement. Parameters
lang/api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
a8af0bb016aa-9
Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
a8af0bb016aa-10
classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text,use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”)
lang/api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
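A sketch of save() and its counterpart loader; the file name is illustrative, and reloading assumes the provider type is registered with langchain.llms.loading.load_llm.

.. code-block:: python

    from langchain.llms import GooseAI
    from langchain.llms.loading import load_llm

    llm = GooseAI(model_name="gpt-neo-20b", temperature=0.7)  # assumes GOOSEAI_API_KEY is set

    # Serialize the LLM configuration to disk; the extension selects YAML or JSON.
    llm.save(file_path="goose_llm.yaml")

    # Restore it later; this assumes the provider is known to load_llm.
    restored = load_llm("goose_llm.yaml")
    print(restored.model_name)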
a8af0bb016aa-11
.. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters
lang/api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
a8af0bb016aa-12
Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
a8af0bb016aa-13
Bind input and output types to a Runnable, returning a new Runnable. property InputType: TypeAlias¶ Get the input type for this runnable. property OutputType: Type[str]¶ Get the output type for this runnable. property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {“openai_api_key”: “OPENAI_API_KEY”} property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model. Examples using GooseAI¶ GooseAI
lang/api.python.langchain.com/en/latest/llms/langchain.llms.gooseai.GooseAI.html
97cd3865470f-0
langchain.llms.edenai.EdenAI¶ class langchain.llms.edenai.EdenAI[source]¶ Bases: LLM Wrapper around Eden AI models. To use, you should have the environment variable EDENAI_API_KEY set with your API token. You can find your token here: https://app.edenai.run/admin/account/settings. The feature and subfeature parameters are required, but any other model parameters can also be passed in with the format params={model_param: value, …}. For the API reference, check the Eden AI documentation: http://docs.edenai.co. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param base_url: str = 'https://api.edenai.run/v2'¶ param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param edenai_api_key: Optional[str] = None¶ param feature: Literal['text', 'image'] = 'text'¶ Which generative feature to use; text by default. param max_tokens: Optional[int] = None¶ Constraints minimum = 0 param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model: Optional[str] = None¶ Model name for the above provider (e.g. 'text-davinci-003' for openai). Available models are shown on https://docs.edenai.co/ under 'available providers'. param model_kwargs: Dict[str, Any] [Optional]¶ Extra parameters. param params: Dict[str, Any] [Optional]¶ DEPRECATED: use temperature, max_tokens and resolution directly. Optional parameters to pass to the api.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html
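A hedged sketch of constructing EdenAI with the parameters documented above; the provider, settings, and prompt are illustrative, and EDENAI_API_KEY is assumed to be set.

.. code-block:: python

    import os
    from langchain.llms import EdenAI

    os.environ["EDENAI_API_KEY"] = "..."  # placeholder; the key can also be passed via edenai_api_key

    llm = EdenAI(
        provider="openai",   # which underlying generative provider Eden AI should route to
        feature="text",      # class default; "image" is the other documented option
        temperature=0.2,
        max_tokens=250,
    )
    print(llm("Explain what a vector database is in two sentences."))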
97cd3865470f-1
param provider: str [Required]¶ Generative provider to use (e.g. openai, stabilityai, cohere, google, etc.) param resolution: Optional[Literal['256x256', '512x512', '1024x1024']] = None¶ param stop_sequences: Optional[List[str]] = None¶ Stop sequences to use. param subfeature: Literal['generation'] = 'generation'¶ Subfeature of the above feature; generation by default. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: Optional[float] = None¶ Constraints minimum = 0 maximum = 1 param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently;
lang/api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html
97cd3865470f-2
e.g., if the underlying runnable uses an API which supports a batch mode. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html
97cd3865470f-3
functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the topcandidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the topcandidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html
97cd3865470f-4
first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html
97cd3865470f-5
input is still being generated. batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
lang/api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html
97cd3865470f-6
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. classmethod from_orm(obj: Any) → Model¶ generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html
97cd3865470f-7
Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows you to get an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html
97cd3865470f-8
For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows to get an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in order they occurin the text. static get_user_agent() → str[source]¶ invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html
97cd3865470f-9
Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html
97cd3865470f-10
classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specifictypes of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text,use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path=”path/llm.yaml”)
lang/api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html
97cd3865470f-11
.. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters
lang/api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html
97cd3865470f-12
Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html
97cd3865470f-13
Bind input and output types to a Runnable, returning a new Runnable. property InputType: TypeAlias¶ Get the input type for this runnable. property OutputType: Type[str]¶ Get the output type for this runnable. property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {"openai_api_key": "OPENAI_API_KEY"} property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model. Examples using EdenAI¶ Eden AI
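For orientation, a hedged sketch of combining an EdenAI LLM with the Runnable helpers documented above (the constructor arguments shown are assumptions; EDENAI_API_KEY is expected in the environment):
.. code-block:: python
    from langchain.llms import EdenAI

    llm = EdenAI(provider="openai")  # assumed minimal configuration
    # with_retry / with_fallbacks return new Runnables; the original llm is unchanged.
    robust_llm = llm.with_retry(stop_after_attempt=3).with_fallbacks([llm])
    print(robust_llm.invoke("Summarize what Eden AI provides in one sentence."))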
lang/api.python.langchain.com/en/latest/llms/langchain.llms.edenai.EdenAI.html
abc571a5ff66-0
langchain_experimental.llms.jsonformer_decoder.import_jsonformer¶ langchain_experimental.llms.jsonformer_decoder.import_jsonformer() → jsonformer[source]¶ Lazily import jsonformer.
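A minimal usage sketch (assuming the jsonformer package is installed; if it is missing, the helper raises an error with installation instructions):
.. code-block:: python
    from langchain_experimental.llms.jsonformer_decoder import import_jsonformer

    jsonformer = import_jsonformer()  # returns the lazily imported jsonformer module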
lang/api.python.langchain.com/en/latest/llms/langchain_experimental.llms.jsonformer_decoder.import_jsonformer.html
c83584502c94-0
langchain.llms.minimax.MinimaxCommon¶ class langchain.llms.minimax.MinimaxCommon[source]¶ Bases: BaseModel Common parameters for Minimax large language models. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param max_tokens: int = 256¶ Denotes the number of tokens to predict per generation. param minimax_api_host: Optional[str] = None¶ param minimax_api_key: Optional[str] = None¶ param minimax_group_id: Optional[str] = None¶ param model: str = 'abab5.5-chat'¶ Model name to use. param model_kwargs: Dict[str, Any] [Optional]¶ Holds any model parameters valid for create call not explicitly specified. param temperature: float = 0.7¶ A non-negative float that tunes the degree of randomness in generation. param top_p: float = 0.95¶ Total probability mass of tokens to consider at each step. classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model
lang/api.python.langchain.com/en/latest/llms/langchain.llms.minimax.MinimaxCommon.html
c83584502c94-1
Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.minimax.MinimaxCommon.html
c83584502c94-2
classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
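As a hedged illustration, these shared parameters are typically supplied through the Minimax LLM class, which reuses the fields above (the credential values and environment-variable names are assumptions):
.. code-block:: python
    from langchain.llms import Minimax

    llm = Minimax(
        model="abab5.5-chat",
        max_tokens=256,
        temperature=0.7,
        top_p=0.95,
        minimax_api_key="...",    # or the MINIMAX_API_KEY environment variable (assumed name)
        minimax_group_id="...",   # or the MINIMAX_GROUP_ID environment variable (assumed name)
    )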
lang/api.python.langchain.com/en/latest/llms/langchain.llms.minimax.MinimaxCommon.html
b2021a78bf52-0
langchain.llms.openai.acompletion_with_retry¶ async langchain.llms.openai.acompletion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], run_manager: Optional[AsyncCallbackManagerForLLMRun] = None, **kwargs: Any) → Any[source]¶ Use tenacity to retry the async completion call.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.openai.acompletion_with_retry.html
e38848f1443f-0
langchain.llms.vertexai.acompletion_with_retry¶ async langchain.llms.vertexai.acompletion_with_retry(llm: VertexAI, *args: Any, run_manager: Optional[AsyncCallbackManagerForLLMRun] = None, **kwargs: Any) → Any[source]¶ Use tenacity to retry the completion call.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.acompletion_with_retry.html
c9896efd112e-0
langchain.llms.anyscale.create_llm_result¶ langchain.llms.anyscale.create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int], model_name: str) → LLMResult[source]¶ Create the LLMResult from the choices and prompts.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.anyscale.create_llm_result.html
eed0054d9789-0
langchain.llms.openai.update_token_usage¶ langchain.llms.openai.update_token_usage(keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any]) → None[source]¶ Update token usage.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.openai.update_token_usage.html
06a231346fbf-0
langchain.llms.aviary.get_completions¶ langchain.llms.aviary.get_completions(model: str, prompt: str, use_prompt_format: bool = True, version: str = '') → Dict[str, Union[str, float, int]][source]¶ Get completions from Aviary models.
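A hedged call sketch (the model name is illustrative, a running Aviary backend is assumed, and the key used to read the generated text from the returned dict is an assumption):
.. code-block:: python
    from langchain.llms.aviary import get_completions

    result = get_completions(model="amazon/LightGPT", prompt="What is a language model?")
    print(result["generated_text"])  # assumed key in the returned dict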
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aviary.get_completions.html
7e67ca242254-0
langchain.llms.bittensor.NIBittensorLLM¶ class langchain.llms.bittensor.NIBittensorLLM[source]¶ Bases: LLM NIBittensor LLMs. NIBittensorLLM is created by Neural Internet (https://neuralinternet.ai/), powered by Bittensor, a decentralized network of many different AI models. To analyze API keys and logs of your usage, visit https://api.neuralinternet.ai/api-keys and https://api.neuralinternet.ai/logs Example from langchain.llms import NIBittensorLLM llm = NIBittensorLLM() Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param system_prompt: Optional[str] = None¶ System prompt to supply to the model before every prompt. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param top_responses: Optional[int] = 0¶ Set top_responses to get the top N miner responses for one request. Responses may be delayed; don't use in production. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶
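Building on the Example above, a hedged sketch of configuring the documented parameters and making a single call (the prompt and system prompt are illustrative):
.. code-block:: python
    from langchain.llms import NIBittensorLLM

    llm = NIBittensorLLM(system_prompt="You are a helpful assistant that answers concisely.")
    print(llm("What is Bittensor?"))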
lang/api.python.langchain.com/en/latest/llms/langchain.llms.bittensor.NIBittensorLLM.html
7e67ca242254-1
Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value,
lang/api.python.langchain.com/en/latest/llms/langchain.llms.bittensor.NIBittensorLLM.html
7e67ca242254-2
need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the topcandidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed
lang/api.python.langchain.com/en/latest/llms/langchain.llms.bittensor.NIBittensorLLM.html
7e67ca242254-3
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the topcandidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.bittensor.NIBittensorLLM.html
7e67ca242254-4
This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.bittensor.NIBittensorLLM.html
7e67ca242254-5
Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. classmethod from_orm(obj: Any) → Model¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.bittensor.NIBittensorLLM.html
7e67ca242254-6
classmethod from_orm(obj: Any) → Model¶ generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.bittensor.NIBittensorLLM.html
7e67ca242254-7
functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output. get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows to get an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.bittensor.NIBittensorLLM.html
7e67ca242254-8
Get a pydantic model that can be used to validate output of the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows getting an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in the order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶ Is this class serializable?
lang/api.python.langchain.com/en/latest/llms/langchain.llms.bittensor.NIBittensorLLM.html
7e67ca242254-9
classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.bittensor.NIBittensorLLM.html
7e67ca242254-10
Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.bittensor.NIBittensorLLM.html
7e67ca242254-11
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.bittensor.NIBittensorLLM.html
7e67ca242254-12
fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. property InputType: TypeAlias¶ Get the input type for this runnable. property OutputType: Type[str]¶ Get the input type for this runnable. property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.bittensor.NIBittensorLLM.html
7e67ca242254-13
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example,{“openai_api_key”: “OPENAI_API_KEY”} property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model. Examples using NIBittensorLLM¶ NIBittensor Bittensor
lang/api.python.langchain.com/en/latest/llms/langchain.llms.bittensor.NIBittensorLLM.html
c1d6f9d875e0-0
langchain.llms.vertexai.is_codey_model¶ langchain.llms.vertexai.is_codey_model(model_name: str) → bool[source]¶ Check whether the model name refers to a Codey model. Parameters model_name – The model name to check. Returns True if the model name is a Codey model.
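A brief illustration (the model names are assumptions about typical Vertex AI identifiers):
.. code-block:: python
    from langchain.llms.vertexai import is_codey_model

    is_codey_model("code-bison")  # True – a Codey code-generation model
    is_codey_model("text-bison")  # False – a PaLM text model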
lang/api.python.langchain.com/en/latest/llms/langchain.llms.vertexai.is_codey_model.html
53dcc2adc77b-0
langchain.llms.aviary.get_models¶ langchain.llms.aviary.get_models() → List[str][source]¶ List available models
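A hedged sketch (a reachable Aviary backend is assumed, typically configured through environment variables such as AVIARY_URL and AVIARY_TOKEN – names assumed):
.. code-block:: python
    from langchain.llms.aviary import get_models

    models = get_models()  # list of model identifiers served by the backend
    print(models)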
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aviary.get_models.html
4cc413362a29-0
langchain.llms.fireworks.completion_with_retry_batching¶ langchain.llms.fireworks.completion_with_retry_batching(llm: Fireworks, use_retry: bool, *, run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any) → Any[source]¶ Use tenacity to retry the completion call.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.fireworks.completion_with_retry_batching.html
93542bfc799f-0
langchain.llms.aviary.AviaryBackend¶ class langchain.llms.aviary.AviaryBackend(backend_url: str, bearer: str)[source]¶ Aviary backend. backend_url¶ The URL for the Aviary backend. Type str bearer¶ The bearer token for the Aviary backend. Type str Attributes backend_url bearer Methods __init__(backend_url, bearer) from_env() __init__(backend_url: str, bearer: str) → None¶ classmethod from_env() → AviaryBackend[source]¶
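A hedged construction sketch (the environment-variable names read by from_env are assumptions):
.. code-block:: python
    from langchain.llms.aviary import AviaryBackend

    backend = AviaryBackend(backend_url="http://localhost:8000", bearer="Bearer <token>")
    # or build it from the environment (AVIARY_URL / AVIARY_TOKEN assumed):
    backend = AviaryBackend.from_env()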
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aviary.AviaryBackend.html
4bee6d12759f-0
langchain.llms.huggingface_pipeline.HuggingFacePipeline¶ class langchain.llms.huggingface_pipeline.HuggingFacePipeline[source]¶ Bases: BaseLLM HuggingFace Pipeline API. To use, you should have the transformers python package installed. Only supports text-generation, text2text-generation and summarization for now. Example using from_model_id:

from langchain.llms import HuggingFacePipeline

hf = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 10},
)

Example passing pipeline in directly:

from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
    "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
hf = HuggingFacePipeline(pipeline=pipe)

Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param batch_size: int = 4¶ Batch size to use when passing multiple documents to generate. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model_id: str = 'gpt2'¶ Model name to use. param model_kwargs: Optional[dict] = None¶ Keyword arguments passed to the model. param pipeline_kwargs: Optional[dict] = None¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html
4bee6d12759f-1
Keyword arguments passed to the model. param pipeline_kwargs: Optional[dict] = None¶ Keyword arguments passed to the pipeline. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html
4bee6d12759f-2
Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html
4bee6d12759f-3
the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the topcandidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the topcandidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html
4bee6d12759f-4
Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html
4bee6d12759f-5
e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include
lang/api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html
4bee6d12759f-6
exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. classmethod from_model_id(model_id: str, task: str, device: Optional[int] = - 1, device_map: Optional[str] = None, model_kwargs: Optional[dict] = None, pipeline_kwargs: Optional[dict] = None, batch_size: int = 4, **kwargs: Any) → HuggingFacePipeline[source]¶ Construct the pipeline object from model_id and task. classmethod from_orm(obj: Any) → Model¶ generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html
4bee6d12759f-7
Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language modeltype (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each inputprompt and additional model provider-specific output. get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows to get an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the
lang/api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html
4bee6d12759f-8
For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate output of the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows getting an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in the order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Transform a single input into an output. Override to implement. Parameters
lang/api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html
4bee6d12759f-9
Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html
4bee6d12759f-10
classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml")
lang/api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html
4bee6d12759f-11
.. code-block:: python llm.save(file_path=”path/llm.yaml”) classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters
lang/api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html
4bee6d12759f-12
Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html
4bee6d12759f-13
Bind input and output types to a Runnable, returning a new Runnable. property InputType: TypeAlias¶ Get the input type for this runnable. property OutputType: Type[str]¶ Get the output type for this runnable. property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {"openai_api_key": "OPENAI_API_KEY"} property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model. Examples using HuggingFacePipeline¶ Hugging Face Hugging Face Local Pipelines RELLM JSONFormer
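To round out the entry, a hedged sketch of using a HuggingFacePipeline instance inside a prompt chain (the model, prompt, and question are illustrative):
.. code-block:: python
    from langchain.llms import HuggingFacePipeline
    from langchain.prompts import PromptTemplate

    hf = HuggingFacePipeline.from_model_id(
        model_id="gpt2", task="text-generation", pipeline_kwargs={"max_new_tokens": 10}
    )
    prompt = PromptTemplate.from_template("Question: {question}\nAnswer:")
    chain = prompt | hf  # BaseLLM is a Runnable, so it composes with LCEL
    print(chain.invoke({"question": "What is electroencephalography?"}))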
lang/api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_pipeline.HuggingFacePipeline.html
3308dda14ad3-0
langchain.llms.javelin_ai_gateway.Params¶ class langchain.llms.javelin_ai_gateway.Params[source]¶ Bases: BaseModel Parameters for the Javelin AI Gateway LLM. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param max_tokens: Optional[int] = None¶ param stop: Optional[List[str]] = None¶ param temperature: float = 0.0¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance
lang/api.python.langchain.com/en/latest/llms/langchain.llms.javelin_ai_gateway.Params.html
3308dda14ad3-1
deep – set to True to make a deep copy of the model Returns new model instance dict(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False) → DictStrAny¶ Generate a dictionary representation of the model, optionally specifying which fields to include or exclude. classmethod from_orm(obj: Any) → Model¶ json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.javelin_ai_gateway.Params.html
3308dda14ad3-2
classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶ classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶
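A short sketch of constructing and serializing this pydantic model, using only the fields documented above; the concrete values are illustrative.

.. code-block:: python

    from langchain.llms.javelin_ai_gateway import Params

    # Only documented fields are set: temperature, stop, and max_tokens.
    params = Params(temperature=0.2, max_tokens=128, stop=["\n\n"])

    # Standard pydantic helpers listed in this reference.
    print(params.dict())                               # plain-dict view of the settings
    cooler = params.copy(update={"temperature": 0.0})  # duplicate with one field changed
    print(cooler.json())                               # JSON string, e.g. for logging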
lang/api.python.langchain.com/en/latest/llms/langchain.llms.javelin_ai_gateway.Params.html
cb6b2e428eb2-0
langchain.llms.aleph_alpha.AlephAlpha¶ class langchain.llms.aleph_alpha.AlephAlpha[source]¶ Bases: LLM Aleph Alpha large language models. To use, you should have the aleph_alpha_client python package installed, and the environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass it as a named parameter to the constructor. Parameters are explained more in depth here: https://github.com/Aleph-Alpha/aleph-alpha-client/blob/c14b7dd2b4325c7da0d6a119f6e76385800e097b/aleph_alpha_client/completion.py#L10 Example from langchain.llms import AlephAlpha aleph_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param aleph_alpha_api_key: Optional[str] = None¶ API key for Aleph Alpha API. param best_of: Optional[int] = None¶ returns the one with the “best of” results (highest log probability per token) param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param completion_bias_exclusion: Optional[Sequence[str]] = None¶ param completion_bias_exclusion_first_token_only: bool = False¶ Only consider the first token for the completion_bias_exclusion. param completion_bias_inclusion: Optional[Sequence[str]] = None¶ param completion_bias_inclusion_first_token_only: bool = False¶ param contextual_control_threshold: Optional[float] = None¶ If set to None, attention control parameters only apply to those tokens that have
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
cb6b2e428eb2-1
If set to None, attention control parameters only apply to those tokens that have explicitly been set in the request. If set to a non-None value, control parameters are also applied to similar tokens. param control_log_additive: Optional[bool] = True¶ True: apply control by adding the log(control_factor) to attention scores. False: (attention_scores - attention_scores.min(-1)) * control_factor param disable_optimizations: Optional[bool] = False¶ param echo: bool = False¶ Echo the prompt in the completion. param frequency_penalty: float = 0.0¶ Penalizes repeated tokens according to frequency. param host: str = 'https://api.aleph-alpha.com'¶ The hostname of the API host. The default one is “https://api.aleph-alpha.com”. param hosting: Optional[str] = None¶ Determines in which datacenters the request may be processed. You can either set the parameter to “aleph-alpha” or omit it (defaulting to None). Not setting this value, or setting it to None, gives us maximal flexibility in processing your request in our own datacenters and on servers hosted with other providers. Choose this option for maximal availability. Setting it to “aleph-alpha” allows us to only process the request in our own datacenters. Choose this option for maximal data privacy. param log_probs: Optional[int] = None¶ Number of top log probabilities to be returned for each generated token. param logit_bias: Optional[Dict[int, float]] = None¶ The logit bias allows you to influence the likelihood of generating tokens. param maximum_tokens: int = 64¶ The maximum number of tokens to be generated. param metadata: Optional[Dict[str, Any]] = None¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
cb6b2e428eb2-2
param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param minimum_tokens: Optional[int] = 0¶ Generate at least this number of tokens. param model: Optional[str] = 'luminous-base'¶ Model name to use. param n: int = 1¶ How many completions to generate for each prompt. param nice: bool = False¶ Setting this to True, will signal to the API that you intend to be nice to other users by de-prioritizing your request below concurrent ones. param penalty_bias: Optional[str] = None¶ Penalty bias for the completion. param penalty_exceptions: Optional[List[str]] = None¶ List of strings that may be generated without penalty, regardless of other penalty settings param penalty_exceptions_include_stop_sequences: Optional[bool] = None¶ Should stop_sequences be included in penalty_exceptions. param presence_penalty: float = 0.0¶ Penalizes repeated tokens. param raw_completion: bool = False¶ Force the raw completion of the model to be returned. param repetition_penalties_include_completion: bool = True¶ Flag deciding whether presence penalty or frequency penalty are updated from the completion. param repetition_penalties_include_prompt: Optional[bool] = False¶ Flag deciding whether presence penalty or frequency penalty are updated from the prompt. param request_timeout_seconds: int = 305¶ Client timeout that will be set for HTTP requests in the requests library’s API calls. Server will close all requests after 300 seconds with an internal server error. param sequence_penalty: float = 0.0¶ param sequence_penalty_min_length: int = 2¶ param stop_sequences: Optional[List[str]] = None¶ Stop sequences to use. param tags: Optional[List[str]] = None¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
cb6b2e428eb2-3
Stop sequences to use. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param temperature: float = 0.0¶ A non-negative float that tunes the degree of randomness in generation. param tokens: Optional[bool] = False¶ return tokens of completion. param top_k: int = 0¶ Number of most likely tokens to consider at each step. param top_p: float = 0.0¶ Total probability mass of tokens to consider at each step. param total_retries: int = 8¶ The number of retries made in case requests fail with certain retryable status codes. If the last retry fails a corresponding exception is raised. Note, that between retries an exponential backoff is applied, starting with 0.5 s after the first retry and doubling for each retry made. So with the default setting of 8 retries a total wait time of 63.5 s is added between the retries. param use_multiplicative_frequency_penalty: bool = False¶ param use_multiplicative_presence_penalty: Optional[bool] = False¶ Flag deciding whether presence penalty is applied multiplicatively (True) or additively (False). param use_multiplicative_sequence_penalty: bool = False¶ param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input.
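Putting the parameters above together, a hedged construction example; the API key, sampling values, and prompt are placeholders rather than recommended settings.

.. code-block:: python

    from langchain.llms import AlephAlpha

    llm = AlephAlpha(
        aleph_alpha_api_key="my-api-key",  # or set ALEPH_ALPHA_API_KEY in the environment
        model="luminous-base",
        maximum_tokens=64,
        temperature=0.0,
        stop_sequences=["\n\n"],
    )

    # __call__ checks the cache and runs the model on a single prompt string.
    answer = llm("Q: What is the largest planet in the solar system?\nA:")
    print(answer)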
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
cb6b2e428eb2-4
Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Asynchronously pass a sequence of prompts and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value,
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
cb6b2e428eb2-5
need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. async ainvoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Default implementation of ainvoke, calls invoke from a thread. The default implementation allows usage of async code even if the runnable did not implement a native async version of invoke. Subclasses should override this method if they can run asynchronously. async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Asynchronously pass a string to the model and return a string prediction. Use this method when calling pure text generation models and only the top candidate generation is needed. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
cb6b2e428eb2-6
**kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Asynchronously pass messages to the model and return a message prediction. Use this method when calling chat models and only the top candidate generation is needed. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. async astream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → AsyncIterator[str]¶ Default implementation of astream, which calls ainvoke. Subclasses should override this method if they support streaming output. async astream_log(input: Any, config: Optional[RunnableConfig] = None, *, diff: bool = True, include_names: Optional[Sequence[str]] = None, include_types: Optional[Sequence[str]] = None, include_tags: Optional[Sequence[str]] = None, exclude_names: Optional[Sequence[str]] = None, exclude_types: Optional[Sequence[str]] = None, exclude_tags: Optional[Sequence[str]] = None, **kwargs: Optional[Any]) → Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]¶ Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
cb6b2e428eb2-7
This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state. async atransform(input: AsyncIterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → AsyncIterator[Output]¶ Default implementation of atransform, which buffers input and calls astream. Subclasses should override this method if they can start producing output while input is still being generated. batch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs invoke in parallel using a thread pool executor. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. bind(**kwargs: Any) → Runnable[Input, Output]¶ Bind arguments to a Runnable, returning a new Runnable. config_schema(*, include: Optional[Sequence[str]] = None) → Type[BaseModel]¶ The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the configurable_fields and configurable_alternatives methods. Parameters include – A list of fields to include in the config schema. Returns A pydantic model that can be used to validate config.
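A sketch combining batch() with the async streaming helpers described above. It assumes the llm instance from the earlier construction example and a context where an event loop can be started; the prompts and concurrency setting are illustrative.

.. code-block:: python

    import asyncio

    # batch() fans invoke() out over a thread pool; results come back in input order.
    prompts = ["Name one noble gas:", "Name one prime number:"]
    results = llm.batch(prompts, config={"max_concurrency": 2})
    print(results)

    async def stream_answer() -> None:
        # astream() falls back to a single ainvoke() call unless the provider
        # implements true streaming, so this may yield one chunk only.
        async for chunk in llm.astream("Write one short sentence about the sea."):
            print(chunk, end="", flush=True)

    asyncio.run(stream_answer())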
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
cb6b2e428eb2-8
Returns A pydantic model that can be used to validate config. configurable_alternatives(which: ConfigurableField, default_key: str = 'default', **kwargs: Union[Runnable[Input, Output], Callable[[], Runnable[Input, Output]]]) → RunnableSerializable[Input, Output]¶ configurable_fields(**kwargs: Union[ConfigurableField, ConfigurableFieldSingleOption, ConfigurableFieldMultiOption]) → RunnableSerializable[Input, Output]¶ classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model¶ Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = ‘allow’ was set since it adds all passed values copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model¶ Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include – fields to include in new model exclude – fields to exclude from new model, as with values this takes precedence over include update – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep – set to True to make a deep copy of the model Returns new model instance dict(**kwargs: Any) → Dict¶ Return a dictionary of the LLM. classmethod from_orm(obj: Any) → Model¶
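configurable_fields() is easiest to see with a concrete field. Below is a sketch that marks temperature as runtime-configurable; it assumes ConfigurableField is importable from langchain.schema.runnable in your installed version, and the field id is arbitrary.

.. code-block:: python

    from langchain.schema.runnable import ConfigurableField

    configurable_llm = llm.configurable_fields(
        temperature=ConfigurableField(
            id="llm_temperature",
            name="LLM temperature",
            description="Sampling temperature passed to the Aleph Alpha API.",
        )
    )

    # Override the field per call through the standard `configurable` config key.
    creative = configurable_llm.with_config(configurable={"llm_temperature": 0.7})
    print(creative.invoke("Invent a name for a sailboat:"))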
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
cb6b2e428eb2-9
classmethod from_orm(obj: Any) → Model¶ generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input. generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, **kwargs: Any) → LLMResult¶ Pass a sequence of prompts to the model and return model generations. This method should make use of batched calls for models that expose a batched API. Use this method when you want to: take advantage of batched calls, need more output from the model than just the top generated value, are building chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models). Parameters prompts – List of PromptValues. A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessages for chat models). stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. callbacks – Callbacks to pass through. Used for executing additional functionality, such as logging or streaming, throughout generation.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
cb6b2e428eb2-10
functionality, such as logging or streaming, throughout generation. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output. get_input_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶ Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows you to get an input schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate input. classmethod get_lc_namespace() → List[str]¶ Get the namespace of the langchain object. For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”] get_num_tokens(text: str) → int¶ Get the number of tokens present in the text. Useful for checking if an input will fit in a model’s context window. Parameters text – The string input to tokenize. Returns The integer number of tokens in the text. get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶ Get the number of tokens in the messages. Useful for checking if an input will fit in a model’s context window. Parameters messages – The message inputs to tokenize. Returns The sum of the number of tokens across the messages. get_output_schema(config: Optional[RunnableConfig] = None) → Type[BaseModel]¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
cb6b2e428eb2-11
Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows you to get an output schema for a specific configuration. Parameters config – A config to use when generating the schema. Returns A pydantic model that can be used to validate output. get_token_ids(text: str) → List[int]¶ Return the ordered ids of the tokens in a text. Parameters text – The string input to tokenize. Returns A list of ids corresponding to the tokens in the text, in the order they occur in the text. invoke(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → str¶ Transform a single input into an output. Override to implement. Parameters input – The input to the runnable. config – A config to use when invoking the runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys. Please refer to the RunnableConfig for more details. Returns The output of the runnable. classmethod is_lc_serializable() → bool¶ Is this class serializable?
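A brief sketch of the token-counting and invoke() helpers just described, reusing the llm instance from the earlier example; note that the default get_num_tokens implementation relies on a local tokenizer, so the count is an approximation for provider-hosted models.

.. code-block:: python

    prompt = "Q: What is the boiling point of water in Celsius?\nA:"

    # Check that the prompt fits comfortably in the context window before calling.
    print(f"prompt uses {llm.get_num_tokens(prompt)} tokens")

    # invoke() accepts a plain string and honors stop sequences and tracing config.
    completion = llm.invoke(prompt, stop=["\n"], config={"tags": ["docs-example"]})
    print(completion)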
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
cb6b2e428eb2-12
classmethod is_lc_serializable() → bool¶ Is this class serializable? json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode¶ Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). classmethod lc_id() → List[str]¶ A unique identifier for this class for serialization purposes. The unique identifier is a list of strings that describes the path to the object. map() → Runnable[List[Input], List[Output]]¶ Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. classmethod parse_file(path: Union[str, Path], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ classmethod parse_obj(obj: Any) → Model¶ classmethod parse_raw(b: Union[str, bytes], *, content_type: unicode = None, encoding: unicode = 'utf8', proto: Protocol = None, allow_pickle: bool = False) → Model¶ predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶ Pass a single string input to the model and return a string prediction.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
cb6b2e428eb2-13
Pass a single string input to the model and return a string prediction. Use this method when passing in raw text. If you want to pass in specific types of chat messages, use predict_messages. Parameters text – String input to pass to the model. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a string. predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ Pass a message sequence to the model and return a message prediction. Use this method when passing in chat messages. If you want to pass in raw text, use predict. Parameters messages – A sequence of chat messages corresponding to a single model input. stop – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings. **kwargs – Arbitrary additional keyword arguments. These are usually passed to the model provider API call. Returns Top model prediction as a message. save(file_path: Union[Path, str]) → None¶ Save the LLM. Parameters file_path – Path to file to save the LLM to. Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod schema(by_alias: bool = True, ref_template: unicode = '#/definitions/{model}') → DictStrAny¶ classmethod schema_json(*, by_alias: bool = True, ref_template: unicode = '#/definitions/{model}', **dumps_kwargs: Any) → unicode¶
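The save() example above can be rounded out with a load step. This sketch assumes load_llm is available from langchain.llms.loading in your version and that the path is writable; the filename is arbitrary.

.. code-block:: python

    from langchain.llms.loading import load_llm

    # Persist the configured LLM to a YAML file.
    llm.save("aleph_alpha.yaml")

    # Later, reconstruct an equivalent instance from the saved configuration.
    restored = load_llm("aleph_alpha.yaml")
    print(restored.predict("Say hello in French:", stop=["\n"]))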
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
cb6b2e428eb2-14
stream(input: Union[PromptValue, str, List[BaseMessage]], config: Optional[RunnableConfig] = None, *, stop: Optional[List[str]] = None, **kwargs: Any) → Iterator[str]¶ Default implementation of stream, which calls invoke. Subclasses should override this method if they support streaming output. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ transform(input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) → Iterator[Output]¶ Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated. classmethod update_forward_refs(**localns: Any) → None¶ Try to update ForwardRefs on fields based on this Model, globalns and localns. classmethod validate(value: Any) → Model¶ with_config(config: Optional[RunnableConfig] = None, **kwargs: Any) → Runnable[Input, Output]¶ Bind config to a Runnable, returning a new Runnable. with_fallbacks(fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (<class 'Exception'>,)) → RunnableWithFallbacksT[Input, Output]¶ Add fallbacks to a runnable, returning a new Runnable. Parameters fallbacks – A sequence of runnables to try if the original runnable fails. exceptions_to_handle – A tuple of exception types to handle. Returns A new Runnable that will try the original runnable, and then each fallback in order, upon failures.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
cb6b2e428eb2-15
fallback in order, upon failures. with_listeners(*, on_start: Optional[Listener] = None, on_end: Optional[Listener] = None, on_error: Optional[Listener] = None) → Runnable[Input, Output]¶ Bind lifecycle listeners to a Runnable, returning a new Runnable. on_start: Called before the runnable starts running, with the Run object. on_end: Called after the runnable finishes running, with the Run object. on_error: Called if the runnable throws an error, with the Run object. The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run. with_retry(*, retry_if_exception_type: ~typing.Tuple[~typing.Type[BaseException], ...] = (<class 'Exception'>,), wait_exponential_jitter: bool = True, stop_after_attempt: int = 3) → Runnable[Input, Output]¶ Create a new Runnable that retries the original runnable on exceptions. Parameters retry_if_exception_type – A tuple of exception types to retry on wait_exponential_jitter – Whether to add jitter to the wait time between retries stop_after_attempt – The maximum number of attempts to make before giving up Returns A new Runnable that retries the original runnable on exceptions. with_types(*, input_type: Optional[Type[Input]] = None, output_type: Optional[Type[Output]] = None) → Runnable[Input, Output]¶ Bind input and output types to a Runnable, returning a new Runnable. property InputType: TypeAlias¶ Get the input type for this runnable. property OutputType: Type[str]¶ Get the output type for this runnable. property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
cb6b2e428eb2-16
property config_specs: List[langchain.schema.runnable.utils.ConfigurableFieldSpec]¶ List configurable fields for this runnable. property input_schema: Type[pydantic.main.BaseModel]¶ The type of input this runnable accepts specified as a pydantic model. property lc_attributes: Dict¶ List of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_secrets: Dict[str, str]¶ A map of constructor argument names to secret ids. For example, {"openai_api_key": "OPENAI_API_KEY"} property output_schema: Type[pydantic.main.BaseModel]¶ The type of output this runnable produces specified as a pydantic model. Examples using AlephAlpha¶ Aleph Alpha
lang/api.python.langchain.com/en/latest/llms/langchain.llms.aleph_alpha.AlephAlpha.html
417552ad3e52-0
langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint¶ class langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint[source]¶ Bases: LLM, BaseModel Azure ML Online Endpoint models. Example azure_llm = AzureMLOnlineEndpoint( endpoint_url="https://<your-endpoint>.<your_region>.inference.ml.azure.com/score", endpoint_api_key="my-api-key", content_formatter=content_formatter, ) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param cache: Optional[bool] = None¶ param callback_manager: Optional[BaseCallbackManager] = None¶ param callbacks: Callbacks = None¶ param content_formatter: Any = None¶ The content formatter that provides an input and output transform function to handle formats between the LLM and the endpoint param deployment_name: str = ''¶ Deployment Name for Endpoint. NOT REQUIRED to call endpoint. Should be passed to constructor or specified as env var AZUREML_DEPLOYMENT_NAME. param endpoint_api_key: str = ''¶ Authentication Key for Endpoint. Should be passed to constructor or specified as env var AZUREML_ENDPOINT_API_KEY. param endpoint_url: str = ''¶ URL of pre-existing Endpoint. Should be passed to constructor or specified as env var AZUREML_ENDPOINT_URL. param metadata: Optional[Dict[str, Any]] = None¶ Metadata to add to the run trace. param model_kwargs: Optional[dict] = None¶ Keyword arguments to pass to the model. param tags: Optional[List[str]] = None¶ Tags to add to the run trace. param verbose: bool [Optional]¶ Whether to print out response text.
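An end-to-end construction sketch using the environment variables named in the parameter docs above. DollyContentFormatter is assumed to be one of the formatter classes shipped in this module; verify the class name in your installed version and pick the formatter that matches the model deployed behind the endpoint. The model_kwargs keys are model-specific placeholders.

.. code-block:: python

    import os

    from langchain.llms.azureml_endpoint import (
        AzureMLOnlineEndpoint,
        DollyContentFormatter,  # assumed formatter; match it to your deployment
    )

    azure_llm = AzureMLOnlineEndpoint(
        endpoint_url=os.environ["AZUREML_ENDPOINT_URL"],
        endpoint_api_key=os.environ["AZUREML_ENDPOINT_API_KEY"],
        deployment_name=os.environ.get("AZUREML_DEPLOYMENT_NAME", ""),
        content_formatter=DollyContentFormatter(),
        model_kwargs={"temperature": 0.3, "max_tokens": 128},  # placeholder settings
    )

    print(azure_llm("Summarize the benefits of managed online endpoints in one sentence."))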
lang/api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html
417552ad3e52-1
param verbose: bool [Optional]¶ Whether to print out response text. __call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, metadata: Optional[Dict[str, Any]] = None, **kwargs: Any) → str¶ Check Cache and run the LLM on the given prompt and input. async abatch(inputs: List[Union[PromptValue, str, List[BaseMessage]]], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Any) → List[str]¶ Default implementation runs ainvoke in parallel using asyncio.gather. The default implementation of batch works well for IO bound runnables. Subclasses should override this method if they can batch more efficiently; e.g., if the underlying runnable uses an API which supports a batch mode. async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Union[List[BaseCallbackHandler], BaseCallbackManager, None, List[Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]]]] = None, *, tags: Optional[Union[List[str], List[List[str]]]] = None, metadata: Optional[Union[Dict[str, Any], List[Dict[str, Any]]]] = None, run_name: Optional[Union[str, List[str]]] = None, **kwargs: Any) → LLMResult¶ Run the LLM on the given prompt and input.
lang/api.python.langchain.com/en/latest/llms/langchain.llms.azureml_endpoint.AzureMLOnlineEndpoint.html