issue_owner_repo (sequence, length 2) | issue_body (string, 0-261k chars, nullable) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37-62 chars) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k)
---|---|---|---|---|---|---|---|---|---
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.llms import MLXPipeline
from langchain_community.chat_models.mlx import ChatMLX
from langchain.agents import AgentExecutor, load_tools
from langchain_core.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain.tools.render import render_text_description
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import (
    ReActJsonSingleInputOutputParser,
)

system = '''a'''
human = '''{input}
{agent_scratchpad}
'''


def get_custom_prompt():
    messages = [
        SystemMessagePromptTemplate.from_template(system),
        HumanMessagePromptTemplate.from_template(human),
    ]
    input_variables = ["agent_scratchpad", "input", "tool_names", "tools"]
    return ChatPromptTemplate(input_variables=input_variables, messages=messages)


llm = MLXPipeline.from_model_id(
    model_id="mlx-community/Meta-Llama-3-8B-Instruct-4bit",
)
chat_model = ChatMLX(llm=llm)

prompt = get_custom_prompt()
prompt = prompt.partial(
    tools=render_text_description([]),
    tool_names=", ".join([t.name for t in []]),
)

chat_model_with_stop = chat_model.bind(stop=["\nObservation"])
agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
    }
    | prompt
    | chat_model_with_stop
    | ReActJsonSingleInputOutputParser()
)

# instantiate AgentExecutor
agent_executor = AgentExecutor(agent=agent, tools=[], verbose=True)
agent_executor.invoke(
    {
        "input": "What is your name?"
    }
)
```
### Error Message and Stack Trace (if applicable)
File "/Users/==/.pyenv/versions/hack/lib/python3.11/site-packages/langchain_community/chat_models/mlx.py", line 184, in _stream
text = self.tokenizer.decode(token.item())
^^^^^^^^^^
AttributeError: 'int' object has no attribute 'item'
Uncaught exception. Entering post mortem debugging
### Description
Hi there,
I assume this bug is related to issue https://github.com/langchain-ai/langchain/issues/20561, because if you locally apply the changes from this patch https://github.com/langchain-ai/langchain/commit/ad48f77e5733a0fd6e027d7fe6feecf6bed035e1 (from line 174 onward) to langchain_community/chat_models/mlx.py, the bug disappears.
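For reference, here is a minimal sketch of the kind of guard that patch applies (this is my paraphrase, not the exact upstream diff): when the MLX tokenizer yields plain Python ints instead of array scalars, `.item()` must only be called when it exists.

```python
# Sketch of a guard around langchain_community/chat_models/mlx.py line 184
# (paraphrased; see the linked commit for the actual upstream change).
token_id = token.item() if hasattr(token, "item") else token
text = self.tokenizer.decode(token_id)
```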
Best wishes
### System Info
langchain==0.2.12
langchain-community==0.2.11
langchain-core==0.2.28
langchain-experimental==0.0.64
langchain-huggingface==0.0.3
langchain-text-splitters==0.2.2
mac
3.11.9 | Mistype issue using MLX Chat Model via MLXPipeline | https://api.github.com/repos/langchain-ai/langchain/issues/25134/comments | 0 | 2024-08-07T09:27:26Z | 2024-08-07T09:30:04Z | https://github.com/langchain-ai/langchain/issues/25134 | 2,453,012,391 | 25,134 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
"""
You will be asked a question about a dataframe and you will determine the necessary function that should be run to give a response to the question. You won't answer the question, you will only state the name of a function or respond with NONE as I will explain you later.
I will explain you a few functions which you can use if the user asks you to analyze the data. The methods will provide you the necessary analysis and prediction. You must state the method's name and the required parameters to use it. Each method has a dataframe as it's first parameter which will be given to you so you can just state DF for that parameter. Also, if the user's question doesn't state specific metrics, you can pass ['ALL'] as the list of metrics. Your answer must only contain the function name with the parameters.
The first method is: create_prophet_predictions(df, metrics_for_forecasting, periods=28),
It takes 3 arguments, 1st is a dataframe, 2nd is a list of metrics which we want to get the forecast for, and the 3rd, an optional period argument that represents the number of days which we want to extend our dataframe for.
It returns an extended version of the provided initial dataframe by adding the future prediction results. It returns the initial dataframe without making additions if it fails to forecast so there is no error raised in any case. You will use this method if the user wishes to learn about the state of his campaigns' future. The user doesn't have to state a period, you can just choose 2 weeks or a month to demonstrate.
The second method is: calculate_statistics(df, metrics),
It takes 2 arguments, 1st is a dataframe, and 2nd is a list of metrics.
It returns a dictionary of different statistics for each metric provided in the 2nd parameter.
The returned dictionary looks like this:
{'metric': [], 'mean': [], 'median': [], 'std_dev': [], 'variance': [], 'skewness': [], 'kurtosis': [], 'min': [], 'max': [], '25th_percentile': [], '75th_percentile': [], 'trend_slope': [], 'trend_intercept': [], 'r_value': [], 'p_value': [], 'std_err': []}
If any of the keys of this dictionary is asked in the question, this method should be used. Also, if the user asks an overall analysis of his campaigns, this method should be used with metrics parameter of the function as ['ALL'] to comment on specific metrics. These statistics provide a comprehensive overview of the central tendency, dispersion, distribution shape, and trend characteristics of the data, as well as the relationship between variables in regression analysis and some simple statistics like mean, min and max can help you answer questions.
The third method is: feature_importance_analysis(df, target_column, size_column, feature_columns, is_regression=True),
It takes 5 parameters. 1st parameter is the dataframe, 2nd parameter is the column name of the target variable, 3rd parameter is the name of the column which contains the size of our target column, and it is used to adjust the dataframe, 4th parameter is the feature_columns list and it should be the list of features which we want to analyze the importance of, and the 5th parameter is the boolean value representing if our model is a regression model or classification model (True = regression, False = classification)
It uses machine learning algorithms and calculates feature importance of some features provided by you. It also gives information about our audience size and target size. And lastly it gives the single and combined shap values of the given features to determine the contributions of each of them to the feature importance analysis. If the question contains the phrases "audience size" or "target size" or "importance" or if the user wants to know why do the model thinks that some features will impact our results more significantly, it is a very high chance that you will use this function.
```python
analysis_examples = [
    {
        "question": "Can you analyze my top performing 10 Google Ads campaigns in terms of CTR?",
        "answer": "calculate_statistics(DF, ['ALL'])"
    },
    {
        "question": "Can you give me the projection of my campaign's cost and cpm results for the next week?",
        "answer": "create_prophet_predictions(DF, ['cost', 'cpm'], 7)"
    },
    {
        "question": "Which metric in my last google ads campaign serves a key role?",
        "answer": "feature_importance_analysis(DF, 'revenue', 'cost', ['ctr', 'roas', 'cpc', 'clicks', 'impressions'], True)"
    },
    {
        "question": "Can you give me the projection of my campaign's cost and cpm results for the next week?",
        "answer": "create_prophet_predictions(DF, ['cost', 'cpm'], 7)"
    },
    {
        "question": "What is the mean of the cost values of my top performing 10 campaigns based on ROAS values?",
        "answer": "calculate_statistics(DF, ['cost'])"
    },
]

analysis_example_prompt = ChatPromptTemplate.from_messages(
    [
        ("human", "{question}"),
        ("ai", "{answer}"),
    ]
)
analysis_few_shot_prompt = FewShotChatMessagePromptTemplate(
    example_prompt=analysis_example_prompt,
    examples=analysis_examples,
)

with open("/analysis/guidance_statistics_funcs.txt", "r") as f:
    guidance_text = f.read()

analysis_final_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", guidance_text),
        analysis_few_shot_prompt,
        ("human", "{input}"),
    ]
)

analysis_chain = analysis_final_prompt | ChatOpenAI(model="gpt-4o-mini", temperature=0) | StrOutputParser()
response = analysis_chain.invoke({"input": analysis_sentence})
```
### Error Message and Stack Trace (if applicable)
ErrorMessage: 'Input to ChatPromptTemplate is missing variables {"\'metric\'"}. Expected: ["\'metric\'", \'input\'] Received: [\'input\']'
I couldn't provide the whole stack trace since I run it on a web app. But the exception is raised in the invoke process.
### Description
from langchain_core.prompts import ChatPromptTemplate
The error is caused by my prompt, specifically the guidance text I passed as the "system" message to the ChatPromptTemplate. I described to the LLM the dictionary structure that one of my functions returns, but the curly braces in that description caused an injection-like problem: my chain started expecting more input variables than I provided. When I deleted the first key of the dictionary from the prompt, the chain then expected the second key as an input instead, and once I removed the curly braces from the system prompt entirely, the issue went away. So I am certain this problem is caused by how the ChatPromptTemplate object parses the text.
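A note for anyone hitting the same thing (a sketch based on standard ChatPromptTemplate behaviour, not on my exact prompt): f-string-style templates treat every `{...}` as a template variable, so literal braces in a system message need to be doubled:

```python
from langchain_core.prompts import ChatPromptTemplate

# Hypothetical fragment of the guidance text with the braces escaped;
# doubled braces render as literal { } instead of template variables.
guidance_text = (
    "The returned dictionary looks like this:\n"
    "{{'metric': [], 'mean': [], 'median': []}}"
)
prompt = ChatPromptTemplate.from_messages(
    [("system", guidance_text), ("human", "{input}")]
)
print(prompt.input_variables)  # only ['input'] once the braces are escaped
```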
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:19:05 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T8112
> Python Version: 3.11.9 (v3.11.9:de54cf5be3, Apr 2 2024, 07:12:50) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.28
> langchain: 0.2.10
> langchain_community: 0.2.7
> langsmith: 0.1.82
> langchain_chroma: 0.1.1
> langchain_experimental: 0.0.62
> langchain_openai: 0.1.20
> langchain_text_splitters: 0.2.1
> langchainhub: 0.1.20
> langgraph: 0.1.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve
| Prompt Template Injection??? | https://api.github.com/repos/langchain-ai/langchain/issues/25132/comments | 2 | 2024-08-07T08:50:27Z | 2024-08-07T17:00:44Z | https://github.com/langchain-ai/langchain/issues/25132 | 2,452,933,304 | 25,132 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA
from langchain_community.utilities.sql_database import SQLDatabase
from langchain_community.agent_toolkits.sql.base import create_sql_agent
from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
from langchain.agents.agent_types import AgentType
import os
import logging

client = ChatNVIDIA(
    model="meta/llama-3.1-405b-instruct",
    api_key="api_key",
    temperature=0.2,
    top_p=0.7,
    max_tokens=1024,
)

inventory_db_path = os.path.expanduser('~/database.db')
db = SQLDatabase.from_uri(f"sqlite:///{inventory_db_path}")
toolkit = SQLDatabaseToolkit(db=db, llm=client)
agent_executor = create_sql_agent(
    llm=client,
    toolkit=toolkit,
    verbose=True,
)


def handle_conversation(context, user_input):
    try:
        result = agent_executor.run(user_input)
        return result
    except Exception as e:
        logging.error(f"Exception in handle_conversation: {e}")
        return "Error: Exception occurred while processing the request."
```
### Error Message and Stack Trace (if applicable)
```
Action: sql_db_schema
Action Input: inventory, inband_ping
ObservDEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): integrate.api.nvidia.com:443
DEBUG:urllib3.connectionpool:https://integrate.api.nvidia.com:443 "POST /v1/chat/completions HTTP/1.1" 200 None
Error: table_names {'inband_ping\nObserv'} not found in databaseIt looks like you're having trouble getting the schema of the 'inventory' table. Let me try a different approach.
```
### Description
I am currently running langchain==0.2.11 with llama3.1 against a SQLite DB to query data from the tables, but the LLM's tool input ends up containing a trailing "\nObserv", so the table lookup fails. I tried different LLM models (llama, mistral) and ran into the same issue.
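A possible mitigation sketch (my own assumption, not a confirmed fix): the extra text is the start of the ReAct "Observation:" line leaking into the Action Input, so switching to a tool-calling agent avoids the text parsing entirely.

```python
# Hedged sketch: a tool-calling agent passes tool arguments as structured JSON,
# so the ReAct text parsing that picks up "\nObserv" is bypassed.
# (Assumes the chosen model supports tool calling through ChatNVIDIA.)
agent_executor = create_sql_agent(
    llm=client,
    toolkit=toolkit,
    agent_type="tool-calling",
    verbose=True,
)
```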
### System Info
pip freeze | grep langchain
langchain==0.2.11
langchain-community==0.0.20
langchain-core==0.2.28
langchain-nvidia-ai-endpoints==0.2.0
langchain-ollama==0.1.1
langchain-text-splitters==0.2.2
python==3.12.4 | Langchain sqlagent - Error: table_names {'inventory\nObserv'} not found in database | https://api.github.com/repos/langchain-ai/langchain/issues/25122/comments | 2 | 2024-08-07T00:32:28Z | 2024-08-10T12:20:00Z | https://github.com/langchain-ai/langchain/issues/25122 | 2,451,897,631 | 25,122 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
### Error Message and Stack Trace (if applicable)
contain also doesn't work
ValueError: Received disallowed comparator contain. Allowed comparators are [<Comparator.EQ: 'eq'>, <Comparator.NE: 'ne'>, <Comparator.GT: 'gt'>, <Comparator.GTE: 'gte'>, <Comparator.LT: 'lt'>, <Comparator.LTE: 'lte'>]
### Description
Allowed operators in SelfQueryRetriever not allowing contain and in.
### System Info
Python 3.12
langchain 0.2.12
chroma 0.5.5 | SelfQueryRetriever alloowed operators does not allow contain Chroma | https://api.github.com/repos/langchain-ai/langchain/issues/25120/comments | 0 | 2024-08-06T22:51:47Z | 2024-08-06T22:54:22Z | https://github.com/langchain-ai/langchain/issues/25120 | 2,451,813,576 | 25,120 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code should instantiate a new `Chroma` instance from the supplied `List` of `Document`s:
```python
from langchain_chroma import Chroma
from langchain_community.embeddings import GPT4AllEmbeddings
from langchain_core.documents.base import Document
vectorstore = Chroma.from_documents(
documents=[Document(page_content="text", metadata={"source": "local"})],
embedding=GPT4AllEmbeddings(model_name='all-MiniLM-L6-v2.gguf2.f16.gguf'),
)
```
### Error Message and Stack Trace (if applicable)
```
ValidationError Traceback (most recent call last)
Cell In[10], line 7
2 from langchain_community.embeddings import GPT4AllEmbeddings
3 from langchain_core.documents.base import Document
5 vectorstore = Chroma.from_documents(
6 documents=[Document(page_content="text", metadata={"source": "local"})],
----> 7 embedding=GPT4AllEmbeddings(model_name='all-MiniLM-L6-v2.gguf2.f16.gguf'),
8 )
File ~/src/rag/.venv/lib/python3.10/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for GPT4AllEmbeddings
__root__
gpt4all.gpt4all.Embed4All() argument after ** must be a mapping, not NoneType (type=type_error)
```
### Description
The code fragment above is based on the [Document Loading section of the Using local models tutorial](https://python.langchain.com/v0.1/docs/use_cases/question_answering/local_retrieval_qa/#document-loading).
The issue is that #21238 updated `GPT4AllEmbeddings.validate_environment()` to pass `gpt4all_kwargs` through to the `Embed4All` constructor, but did not consider existing (or new) code that does not supply a value for `gpt4all_kwargs` when creating a `GPT4AllEmbeddings`.
The workaround is to set `gpt4all_kwargs` to an empty dict when creating a `GPT4AllEmbeddings`:
```python
vectorstore = Chroma.from_documents(
documents=[Document(page_content="text", metadata={"source": "local"})],
embedding=GPT4AllEmbeddings(model_name='all-MiniLM-L6-v2.gguf2.f16.gguf', gpt4all_kwargs={}),
)
```
The fix, which I shall provide shortly as a PR, is for `GPT4AllEmbeddings.validate_environment()` to pass an empty dict to the `Embed4All` constructor if the incoming `gpt4all_kwargs` is not set:
```python
values["client"] = Embed4All(
model_name=values.get("model_name"),
n_threads=values.get("n_threads"),
device=values.get("device"),
**(values.get("gpt4all_kwargs") or {}),
)
```
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
> Python Version: 3.10.14 (main, Mar 20 2024, 14:43:31) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.2.28
> langchain: 0.2.12
> langchain_community: 0.2.11
> langsmith: 0.1.84
> langchain_chroma: 0.1.2
> langchain_openai: 0.1.16
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | Instantiating GPT4AllEmbeddings with no gpt4all_kwargs argument raises a ValidationError | https://api.github.com/repos/langchain-ai/langchain/issues/25119/comments | 0 | 2024-08-06T22:46:45Z | 2024-08-06T22:49:25Z | https://github.com/langchain-ai/langchain/issues/25119 | 2,451,808,971 | 25,119 |
[
"hwchase17",
"langchain"
] | ### URL
https://js.langchain.com/v0.2/docs/how_to/message_history/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
There is currently no documentation on how to implement tool calling together with message history in LangChain, or at least it cannot be found. The existing documentation (https://js.langchain.com/v0.2/docs/how_to/message_history/) provides examples of adding message history, but it does not cover integrating tool calling.
I suggest adding a section that demonstrates how to:
- Implement tool calling within the context of a message history.
- Configure tools to work seamlessly with historical messages.
- Use practical examples to illustrate the setup and usage.
This addition would be highly beneficial for users looking to leverage both features together.
### Idea or request for content:
_No response_ | DOC: Guide for Implementing Tool Calling with Message History in LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/25099/comments | 0 | 2024-08-06T12:28:48Z | 2024-08-06T12:31:25Z | https://github.com/langchain-ai/langchain/issues/25099 | 2,450,764,229 | 25,099 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
import pandas as pd
from langchain_core.messages import ToolMessage
class Foo:
    def __init__(self, bar: str) -> None:
        self.bar = bar
foo = Foo("bar")
msg = ToolMessage(content="OK", artifact=foo, tool_call_id="123")
py_dict = msg.to_json() # ok, it's a dictionary,
data_frame = pd.DataFrame({"name": ["Alice", "Bob"], "age": [17, 19]})
msg = ToolMessage(content="Error", artifact=data_frame, tool_call_id="456")
py_dict = msg.to_json() # error, because DataFrame cannot be evaluated as a bool().
```
### Error Message and Stack Trace (if applicable)
```plain
Traceback (most recent call last):
File "/home/gbaian10/work/my_gpt/issue.py", line 16, in <module>
py_dict = msg.to_json() # error, because DataFrame cannot be evaluated as a bool().
File "/home/gbaian10/.local/lib/python3.10/site-packages/langchain_core/load/serializable.py", line 182, in to_json
lc_kwargs = {
File "/home/gbaian10/.local/lib/python3.10/site-packages/langchain_core/load/serializable.py", line 186, in <dictcomp>
and _is_field_useful(self, k, v)
File "/home/gbaian10/.local/lib/python3.10/site-packages/langchain_core/load/serializable.py", line 260, in _is_field_useful
return field.required is True or value or field.get_default() != value
File "/home/gbaian10/.local/lib/python3.10/site-packages/pandas/core/generic.py", line 1577, in __nonzero__
raise ValueError(
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
```
### Description
![image](https://github.com/user-attachments/assets/7c67cd41-2761-4899-9b7a-fb5d751f4db3)
The issue is caused by directly evaluating the truth value of the artifact object, which raises an exception for objects like DataFrames.
A pandas DataFrame is admittedly a rare exception, since most objects can be evaluated with bool().
Still, it should be possible to attach arbitrary Python objects to the ToolMessage artifact, right?
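A sketch of one possible fix in `_is_field_useful` (my own suggestion, not an agreed-upon patch): fall back gracefully when an object refuses to be coerced to bool.

```python
def _is_value_truthy(value: object) -> bool:
    # Objects like pandas DataFrames raise on bool(); treat them as "useful".
    try:
        return bool(value)
    except Exception:
        return True

# inside _is_field_useful, roughly:
# return field.required is True or _is_value_truthy(value) or field.get_default() != value
```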
### System Info
langchain==0.2.12
langchain-core==0.2.28
pandas==2.2.2
platform==linux
python-version==3.10.12 | An error might occur during execution in _is_field_useful within Serializable | https://api.github.com/repos/langchain-ai/langchain/issues/25095/comments | 1 | 2024-08-06T08:56:23Z | 2024-08-06T15:49:42Z | https://github.com/langchain-ai/langchain/issues/25095 | 2,450,329,128 | 25,095 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.vectorstores import Chroma

for doc in docs:
    vectordb = Chroma.from_documents(
        documents=doc,
        embedding=bge_embeddings)
```
Each round I re-initialize the vectordb, so why do documents from previous rounds show up in the next round? For example:
1) In the first round I feed a document to Chroma, and the output is 'Document(page_content='工程预算总表(表一)建设项目名称....)'
2) In the second round I feed another document to Chroma, and the output is '[Document(page_content='设计预算总表的总价值的除税价为452900.05元。......'), Document(page_content='工程预算总表(表一)名称....]'
Why does the first document's content still appear in the second round even though I re-initialize the vectordb?
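A hedged guess at what is happening (an assumption on my side): `Chroma.from_documents` defaults to the same collection name and client on every call, so each round may add to the previous collection instead of starting fresh. A sketch of a workaround:

```python
# Sketch: give every round its own collection (or delete the old one first)
# so earlier documents cannot leak into later similarity searches.
for i, doc in enumerate(docs):
    vectordb = Chroma.from_documents(
        documents=doc,
        embedding=bge_embeddings,
        collection_name=f"round_{i}",
    )
```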
### Error Message and Stack Trace (if applicable)
_No response_
### Description
langchain 0.0.354
### System Info
ubuntu 22
pytorch2.3
python 3.8 | There is a bug for Chroma. | https://api.github.com/repos/langchain-ai/langchain/issues/25089/comments | 4 | 2024-08-06T02:25:23Z | 2024-08-06T16:09:45Z | https://github.com/langchain-ai/langchain/issues/25089 | 2,449,800,643 | 25,089 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code raise LLM not supported:
```python
from langchain_aws.chat_models.bedrock import ChatBedrock
from langchain_community.llms.loading import load_llm_from_config
llm = ChatBedrock(
model_id='anthropic.claude-3-5-sonnet-20240620-v1:0',
model_kwargs={"temperature": 0.0}
)
config = llm.dict()
load_llm_from_config(config)
```
Something similar happens with `langchain_aws.chat_models.bedrock.ChatBedrockConverse`
### Error Message and Stack Trace (if applicable)
ValueError: Loading amazon_bedrock_chat LLM not supported
### Description
I am trying to use `load_llm_from_config` for a `ChatBedrock` LLM. It seems that `langchain_community.llms.get_type_to_cls_dict` does not include `amazon_bedrock_chat`. Moreover the dictionary representation does not allows the initialization of the class as it is.
```python
from langchain_aws.chat_models.bedrock import ChatBedrock
llm = ChatBedrock(
model_id='anthropic.claude-3-5-sonnet-20240620-v1:0',
model_kwargs={"temperature": 0.0}
)
config = llm.dict()
llm_cls = config.pop("_type")
ChatBedrock(**config)
```
Raises
```
ValidationError: 5 validation errors for ChatBedrock
guardrailIdentifier
extra fields not permitted (type=value_error.extra)
guardrailVersion
extra fields not permitted (type=value_error.extra)
stream
extra fields not permitted (type=value_error.extra)
temperature
extra fields not permitted (type=value_error.extra)
trace
extra fields not permitted (type=value_error.extra)
```
# Possible solutions
Change dict representation of ChatBedrock and implement `amazon_bedrock_chat` in `get_type_to_cls_dict`.
Moreover, it seems that langchain-aws is moving to `ChatBedrockConverse`, which will need an additional implementation.
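As an interim user-side workaround (a sketch under the assumption that `ChatBedrock.dict()` flattens `model_kwargs` into top-level keys, which is what the validation errors above suggest), the model can be rebuilt manually instead of going through `load_llm_from_config`:

```python
from langchain_aws.chat_models.bedrock import ChatBedrock

def load_chat_bedrock_from_config(config: dict) -> ChatBedrock:
    config = dict(config)
    config.pop("_type", None)
    model_kwargs = dict(config.pop("model_kwargs", None) or {})
    # Fold keys that ChatBedrock does not declare back into model_kwargs.
    known_fields = set(ChatBedrock.__fields__)
    for key in list(config):
        if key not in known_fields:
            model_kwargs[key] = config.pop(key)
    return ChatBedrock(**config, model_kwargs=model_kwargs)
```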
### System Info
Python 3.10.14
langchain-community: 0.2.10
langchain-aws: 0.1.13 | Outdated Bedrock LLM when using load model from config | https://api.github.com/repos/langchain-ai/langchain/issues/25086/comments | 2 | 2024-08-05T22:36:09Z | 2024-08-05T22:53:01Z | https://github.com/langchain-ai/langchain/issues/25086 | 2,449,589,170 | 25,086 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint=endpoint,
    api_key=key,
    file_path=file_path,
    api_model="prebuilt-layout",
    api_version=api_version,
    analysis_features=["ocrHighResolution"],
)
documents = loader.load()

for document in documents:
    print(f"Page Content: {document.page_content}")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When I try the same document on the Azure portal (https://documentintelligence.ai.azure.com/studio/layout) with ocrHighResolution enabled, I get the correct OCR result; when the feature is disabled, I see obvious mistakes in the result. With LangChain I get those same mistakes whether or not I pass analysis_features=["ocrHighResolution"], so the feature does not appear to be applied.
### System Info
langchain==0.2.11
langchain-community==0.2.10
langchain-core==0.2.23
langchain-text-splitters==0.2.2
Platform: MacBook Pro (M2 Pro)
Python version: 3.11.5 | AzureAIDocumentIntelligenceLoader analysis feature ocrHighResolution not making any difference | https://api.github.com/repos/langchain-ai/langchain/issues/25081/comments | 3 | 2024-08-05T22:10:07Z | 2024-08-06T15:00:11Z | https://github.com/langchain-ai/langchain/issues/25081 | 2,449,562,037 | 25,081 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```python
from typing import Any, Type

from langchain.pydantic_v1 import BaseModel, Field
from langchain_core.tools import BaseTool
from langchain_core.utils.function_calling import convert_to_openai_tool

name = "testing"
description = "testing"


def run(some_param, **kwargs):
    pass


class ToolSchema(BaseModel, extra="allow"):
    some_param: str = Field(default="", description="some_param")


class RunTool(BaseTool):
    name = name
    description = description
    args_schema: Type[BaseModel] = ToolSchema

    def _run(
        self,
        some_param: str = "",
    ) -> Any:
        return run(
            some_param=some_param,
            **self.metadata,
        )


convert_to_openai_tool(RunTool())
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/home/freire/dev/navi-ai-api/app/playground.py", line 34, in <module>
convert_to_openai_tool(RunTool())
File "/home/freire/dev/navi-ai-api/venv/lib/python3.10/site-packages/langchain_core/utils/function_calling.py", line 392, in convert_to_openai_tool
function = convert_to_openai_function(tool)
File "/home/freire/dev/navi-ai-api/venv/lib/python3.10/site-packages/langchain_core/utils/function_calling.py", line 363, in convert_to_openai_function
return cast(Dict, format_tool_to_openai_function(function))
File "/home/freire/dev/navi-ai-api/venv/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 168, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/freire/dev/navi-ai-api/venv/lib/python3.10/site-packages/langchain_core/utils/function_calling.py", line 282, in format_tool_to_openai_function
if tool.tool_call_schema:
File "/home/freire/dev/navi-ai-api/venv/lib/python3.10/site-packages/langchain_core/tools.py", line 398, in tool_call_schema
return _create_subset_model(
File "/home/freire/dev/navi-ai-api/venv/lib/python3.10/site-packages/langchain_core/utils/pydantic.py", line 252, in _create_subset_model
return _create_subset_model_v1(
File "/home/freire/dev/navi-ai-api/venv/lib/python3.10/site-packages/langchain_core/utils/pydantic.py", line 184, in _create_subset_model_v1
field = model.__fields__[field_name]
KeyError: 'extra_data'
```
### Description
After updating langchain-core to 0.2.27+, the code above raises the error.
It works fine on 0.2.26, or if I remove the `extra="allow"` option.
### System Info
```
langchain==0.2.12
langchain-cli==0.0.28
langchain-community==0.2.10
langchain-core==0.2.26
langchain-openai==0.1.20
langchain-text-splitters==0.2.2
langchainhub==0.1.20
```
Linux
python3.10 (tested also on 3.12) | extra="allow" not working after langchain-core==0.2.27 | https://api.github.com/repos/langchain-ai/langchain/issues/25072/comments | 0 | 2024-08-05T20:21:58Z | 2024-08-05T20:47:55Z | https://github.com/langchain-ai/langchain/issues/25072 | 2,449,415,532 | 25,072 |
[
"hwchase17",
"langchain"
] | ### URL
langchain/cookbook /baby_agi.ipynb
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Is it really possible to reproduce the results with an empty vectorstore, as proposed?
I tried with both an empty and a populated vectorstore, but I still can't reproduce the output or get any results at all.
I tried langchain 0.2 (latest) and 0.1.10, but the required version is not specified in the notebook.
[baby_agi_help.md](https://github.com/user-attachments/files/16501356/baby_agi_help.md)
Thank you!
### Idea or request for content:
_No response_ | DOC: Could not reproduce notebook output | https://api.github.com/repos/langchain-ai/langchain/issues/25068/comments | 1 | 2024-08-05T18:17:38Z | 2024-08-05T22:59:17Z | https://github.com/langchain-ai/langchain/issues/25068 | 2,449,193,060 | 25,068 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
class CitationSubModel(TypedDict):
    number: int = Field(description="An integer numbering the citation i.e. 1st citation, 2nd citation.")
    id: str = Field(description="The identifiying document name or number for a document.")


class FinalAnswerModel(TypedDict):
    answer: Annotated[str, ..., "The answer to the user question using the citations"]
    citations: Annotated[List[CitationSubModel], ..., "A dictionary that includes the numbering and the id references for the citations to be used to answer"]


model_answer = get_model(state.get("deployment_id", "gpt-4o-global"), streaming=True)
model_answer = model_answer.with_structured_output(FinalAnswerModel)

# Create chain using chat_history, prompt_template and the model. Parse results through a simple string parser.
chain = (
    RunnablePassthrough.assign(chat_history=RunnableLambda(memory.load_memory_variables) | itemgetter("chat_history"))
    | prompt_template
    | model_answer
)

for chunk in chain.stream({"context": context, "task": state["task"], "citation": citation_instruction, "coding_instructions": coding_instructions}):
    print(chunk)
```
### Error Message and Stack Trace (if applicable)
``` python
File "/var/task/chatbot_workflow.py", line 858, in solve
for chunk in chain.stream({"context": context, "task": state["task"], "citation": citation_instruction, "coding_instructions": coding_instructions}):
File "/opt/python/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3253, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/opt/python/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3240, in transform
yield from self._transform_stream_with_config(
File "/opt/python/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2053, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/opt/python/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3202, in _transform
for output in final_pipeline:
File "/opt/python/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1271, in transform
for ichunk in input:
File "/opt/python/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 5267, in transform
yield from self.bound.transform(
File "/opt/python/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1289, in transform
yield from self.stream(final, config, **kwargs)
File "/opt/python/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 373, in stream
raise e
File "/opt/python/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 353, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 439, in _stream
result = self._generate(
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 518, in _generate
for chunk in self._stream(messages, stop, run_manager, **kwargs):
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 439, in _stream
result = self._generate(
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 518, in _generate
for chunk in self._stream(messages, stop, run_manager, **kwargs):
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 439, in _stream
### .
### . Error Message Repeats many times
### .
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 439, in _stream
result = self._generate(
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 518, in _generate
for chunk in self._stream(messages, stop, run_manager, **kwargs):
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 439, in _stream
result = self._generate(
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 514, in _generate
self._get_provider(), "stop_reason"
File "/opt/python/lib/python3.10/site-packages/langchain_aws/llms/bedrock.py", line 594, in _get_provider
if self.model_id.startswith("arn"):
RecursionError: maximum recursion depth exceeded while calling a Python object
Stack (most recent call last):
File "/var/lang/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/var/lang/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/var/lang/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/var/lang/lib/python3.10/concurrent/futures/thread.py", line 83, in _worker
work_item.run()
File "/var/lang/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/python/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 587, in wrapper
return func(*args, **kwargs)
File "/var/task/chatbot_workflow.py", line 204, in wrap_func
result = func(*args, **kwargs)
File "/var/task/chatbot_workflow.py", line 917, in solve
logging.exception(f"Error Streaming Chunk: {e}", exc_info=True, stack_info=True)
```
### Description
I have a custom helper `get_model` that returns either an AzureChatOpenAI or a ChatBedrock object. I attach the Pydantic schema with `with_structured_output` and stream the response. This works perfectly fine with AzureChatOpenAI but fails with ChatBedrock.
I am trying to stream the response as structured output: I created a TypedDict class and want to stream while the answer is being generated.
I am getting the error above repeated in a loop; since my code uses LangGraph, it keeps iterating until it hits the maximum recursion limit.
I'm not sure what is causing the issue from this error trace. Can anyone help?
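A hedged workaround sketch (my own assumption, since `get_model` is my helper and I have not confirmed the root cause): disabling streaming for the structured-output call avoids the `_stream`/`_generate` recursion shown above.

```python
# Sketch: use a non-streaming model for the structured-output step and
# fall back to a single invoke() instead of stream().
model_answer = get_model(state.get("deployment_id", "gpt-4o-global"), streaming=False)
model_answer = model_answer.with_structured_output(FinalAnswerModel)

chain = (
    RunnablePassthrough.assign(chat_history=RunnableLambda(memory.load_memory_variables) | itemgetter("chat_history"))
    | prompt_template
    | model_answer
)
result = chain.invoke({"context": context, "task": state["task"], "citation": citation_instruction, "coding_instructions": coding_instructions})
```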
### System Info
System Information
------------------
AWS Lambda ARM
Python Version: 3.10
Package Information
-------------------
langchain_core: 0.2.27
langchain: 0.2.11
langchain_community: 0.2.5
langchain_aws: 0.1.13
langchain_openai: 0.1.20
langchainhub: 0.1.14
langgraph: 0.1.19 | ChatBedrock: I am unable to stream when using with_structured_output. I can either stream or I can use with_structured_output. | https://api.github.com/repos/langchain-ai/langchain/issues/25056/comments | 3 | 2024-08-05T14:42:11Z | 2024-08-06T14:28:53Z | https://github.com/langchain-ai/langchain/issues/25056 | 2,448,733,898 | 25,056 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_ollama import ChatOllama
from langchain_ollama.embeddings import OllamaEmbeddings
llama3_1 = ChatOllama(
headers=headers,
base_url="some_host",
model="llama3.1",
temperature=0.001,
)
from langchain.prompts import ChatPromptTemplate
chat_prompt = ChatPromptTemplate.from_messages([
("system", "you are a helpful search assistant"),
])
chain = chat_prompt | llama3_1
chain.invoke({})
# embeddings
embeddings = (
OllamaEmbeddings(
headers=headers,
base_url="my_public_host",
model="mxbai-embed-large",
)
)
### Error Message and Stack Trace (if applicable)
```
# for chat
Error 401

# for embeddings
ValidationError: 2 validation errors for OllamaEmbeddings
base_url
  extra fields not permitted (type=value_error.extra)
headers
  extra fields not permitted (type=value_error.extra)
```
### Description
After migrating to the new `langchain-ollama` package, I am unable to set headers or auth.
I am hosting Ollama behind ngrok publicly, so I need to authenticate the calls.
With the langchain_community ChatOllama integration I was able to set these.
This seems similar to `base_url`, which was only added to the new package in the latest version.
If you know of any env var I can use as a workaround (like OLLAMA_HOST) to set auth headers, I'd be very thankful :)
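One avenue worth checking (an assumption on my part; I have not verified which release introduces it): newer versions of `langchain-ollama` expose a `client_kwargs` parameter that is forwarded to the underlying `ollama` client, which should accept custom headers.

```python
# Hypothetical sketch; `client_kwargs` may not exist in langchain-ollama 0.1.1.
from langchain_ollama import ChatOllama, OllamaEmbeddings

headers = {"Authorization": "Bearer <ngrok-token>"}  # placeholder value

llm = ChatOllama(
    base_url="https://my_public_host",
    model="llama3.1",
    client_kwargs={"headers": headers},
)
embeddings = OllamaEmbeddings(
    model="mxbai-embed-large",
    client_kwargs={"headers": headers},
)
```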
### System Info
langchain==0.2.12
langchain-community==0.2.11
langchain-core==0.2.28
langchain-ollama==0.1.1
langchain-openai==0.1.19
langchain-postgres==0.0.9
langchain-text-splitters==0.2.2
langchainhub==0.1.20
platform: mac
python: 3.12.3 | Unable to set authenticiation (headers or auth) like I used to do in the community ollama integration | https://api.github.com/repos/langchain-ai/langchain/issues/25055/comments | 1 | 2024-08-05T13:28:24Z | 2024-08-05T21:31:33Z | https://github.com/langchain-ai/langchain/issues/25055 | 2,448,566,533 | 25,055 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import create_history_aware_retriever, create_retrieval_chain

condense_question_system_template = (
    "Given a chat history and the latest user question "
    "which might reference context in the chat history, "
    "formulate a standalone question which can be understood "
    "without the chat history. Do NOT answer the question, "
    "just reformulate it if needed and otherwise return it as is."
)

condense_question_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", condense_question_system_template),
        ("placeholder", "{chat_history}"),
        ("human", "{input}"),
    ]
)

history_aware_retriever = create_history_aware_retriever(
    llm, vectorstore.as_retriever(), condense_question_prompt
)

system_prompt = (
    "You are an assistant for question-answering tasks. "
    "Use the following pieces of retrieved context to answer "
    "the question. If you don't know the answer, say that you "
    "don't know. Use three sentences maximum and keep the "
    "answer concise."
    "\n\n"
    "{context}"
)

qa_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        ("placeholder", "{chat_history}"),
        ("human", "{input}"),
    ]
)

qa_chain = create_stuff_documents_chain(llm, qa_prompt)

convo_qa_chain = create_retrieval_chain(history_aware_retriever, qa_chain)

convo_qa_chain.invoke(
    {
        "input": "What are autonomous agents?",
        "chat_history": [],
    }
)
```
### Error Message and Stack Trace (if applicable)
No error message.
### Description
I'm migrating my code from the legacy ConversationalRetrievalChain.from_llm to the LCEL approach (create_history_aware_retriever, create_stuff_documents_chain and create_retrieval_chain).
In my current design I return the streaming output using AsyncFinalIteratorCallbackHandler().
When I check the result, I see that the condensed question generated by the history-aware retriever is also part of what gets returned: it first streams the condensed question and then returns the actual answer in one shot at the end.
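A possible direction I'm considering as a workaround (a sketch under the assumption that `llm` is e.g. a ChatOpenAI instance; not a confirmed fix): give the condense step its own non-streaming model so the streaming callback only ever sees tokens from the answer chain.

```python
from langchain_openai import ChatOpenAI

# Sketch: only `llm` (with the streaming callback) produces streamed tokens;
# the condense/rephrase step uses a separate model without callbacks.
condense_llm = ChatOpenAI(streaming=False)  # assumed model class
history_aware_retriever = create_history_aware_retriever(
    condense_llm, vectorstore.as_retriever(), condense_question_prompt
)
qa_chain = create_stuff_documents_chain(llm, qa_prompt)
convo_qa_chain = create_retrieval_chain(history_aware_retriever, qa_chain)
```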
### System Info
langchain=0.2.10 | ConversationRetrievalChain LCEL method in the data when streaming using AsyncFinalIteratorCallbackHandler() | https://api.github.com/repos/langchain-ai/langchain/issues/25045/comments | 1 | 2024-08-05T03:43:26Z | 2024-08-06T01:58:51Z | https://github.com/langchain-ai/langchain/issues/25045 | 2,447,529,411 | 25,045 |
[
"hwchase17",
"langchain"
] | ### URL
https://api.python.langchain.com/en/latest/embeddings/langchain_huggingface.embeddings.huggingface.HuggingFaceEmbeddings.html#langchain_huggingface.embeddings.huggingface.HuggingFaceEmbeddings
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Currently, the documentation of the cache_folder keyword only mentions the SENTENCE_TRANSFORMERS_HOME environment variable. It appears that the HF_HOME variable is also considered and takes precedence over SENTENCE_TRANSFORMERS_HOME if no cache_folder keyword is provided. I tested this on Linux with current versions of all involved modules.
### Idea or request for content:
The documentation should be amended to include handling of HF_HOME. | DOC: HuggingFaceEmbeddings support for HF_HOME environment variable | https://api.github.com/repos/langchain-ai/langchain/issues/25038/comments | 1 | 2024-08-04T13:46:36Z | 2024-08-05T21:11:12Z | https://github.com/langchain-ai/langchain/issues/25038 | 2,447,142,608 | 25,038 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
from dotenv import load_dotenv
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from pydantic.v1 import BaseModel, Field

load_dotenv()


class SampleModel(BaseModel):
    numbers: list[int] = Field(min_items=2, max_items=4)


@tool(args_schema=SampleModel)
def foo() -> None:
    """bar"""
    return


ChatOpenAI().bind_tools([foo])
```
### Error Message and Stack Trace (if applicable)
ValueError: On field "numbers" the following field constraints are set but not enforced: min_items, max_items.
### Description
The problem is in `_create_subset_model_v1` from `langchain_core.utils.pydantic`:
```py
def _create_subset_model_v1(
    name: str,
    model: Type[BaseModel],
    field_names: list,
    *,
    descriptions: Optional[dict] = None,
    fn_description: Optional[str] = None,
) -> Type[BaseModel]:
    """Create a pydantic model with only a subset of model's fields."""
    from langchain_core.pydantic_v1 import create_model

    fields = {}

    for field_name in field_names:
        field = model.__fields__[field_name]
        t = (
            # this isn't perfect but should work for most functions
            field.outer_type_
            if field.required and not field.allow_none
            else Optional[field.outer_type_]
        )
        if descriptions and field_name in descriptions:
            field.field_info.description = descriptions[field_name]
        fields[field_name] = (t, field.field_info)

    rtn = create_model(name, **fields)  # type: ignore
    rtn.__doc__ = textwrap.dedent(fn_description or model.__doc__ or "")
    return rtn
```
As the inline comment admits, the issue lies in how `t` is obtained: rebuilding the field from `field.outer_type_` plus `field.field_info` drops list constraints such as `min_items`/`max_items`, so pydantic raises the "constraints are set but not enforced" error.
The pydantic v2 version has the issue raised in #25031 .
### System Info
langchain==0.2.12
langchain-core==0.2.28
langchain-openai==0.1.20
pydantic==2.8.2
platform==windows
python-version==3.12.4 | The tool schema can't apply `min_items` or `max_items` when using BaseModel Field in Pydantic V1. | https://api.github.com/repos/langchain-ai/langchain/issues/25036/comments | 3 | 2024-08-04T11:37:43Z | 2024-08-05T21:14:08Z | https://github.com/langchain-ai/langchain/issues/25036 | 2,447,095,448 | 25,036 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from pydantic import BaseModel, Field
from typing import List
from langchain_core.tools import tool
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import AIMessage
from dotenv import load_dotenv

load_dotenv()


class SampleModel(BaseModel):
    numbers: List[int] = Field(description="Favorite numbers", min_length=10, max_length=15)


@tool(args_schema=SampleModel)
def choose_numbers():
    """Choose your favorite numbers"""
    pass


model = ChatAnthropic(model_name="claude-3-haiku-20240307", temperature=0)
model = model.bind_tools([choose_numbers], tool_choice="choose_numbers")

result: AIMessage = model.invoke("Hello world!")
print(result.tool_calls[0]["args"])
# Output: {'numbers': [7, 13, 42]}
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When additional metadata is specified on Pydantic fields, e.g. `min_length` or `max_length`, these aren't serialized into the final output that's sent out to the model. In the attached screenshot from Langsmith, although it carries over the description, the `length` params are missing.
https://smith.langchain.com/public/62060327-e5be-4156-93cb-6960078ec7fb/r
<img width="530" alt="image" src="https://github.com/user-attachments/assets/4fa2a6b6-105b-43c2-957f-f56b84fef10b">
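As a stopgap (my own workaround, not a fix for the serialization itself), the constraint can be spelled out in the description, which does make it into the schema the model sees:

```python
# Hedged workaround: state the length constraint in prose, since descriptions
# are carried over even though min_length/max_length are dropped.
class SampleModel(BaseModel):
    numbers: List[int] = Field(
        description="Favorite numbers; provide between 10 and 15 of them.",
        min_length=10,
        max_length=15,  # pydantic still enforces these when the tool validates its args
    )
```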
### System Info
```
langchain==0.2.11
langchain-anthropic==0.1.21
langchain-community==0.2.10
langchain-core==0.2.24
langchain-google-genai==1.0.8
langchain-google-vertexai==1.0.7
langchain-openai==0.1.19
langchain-text-splitters==0.2.2
```` | Pydantic field metadata not being serialized in tool calls | https://api.github.com/repos/langchain-ai/langchain/issues/25031/comments | 6 | 2024-08-04T01:34:53Z | 2024-08-05T20:50:16Z | https://github.com/langchain-ai/langchain/issues/25031 | 2,446,706,381 | 25,031 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code throws an import error!
```
from langchain_community.document_loaders import DirectoryLoader
loader = DirectoryLoader("path-to-directory-where-pdfs-are-present")
docs = loader.load()
```
while the below one seems to work fine ---
```
from langchain_community.document_loaders import PubMedLoader
loader = PubMedLoader("...")
docs = loader.load()
```
### Error Message and Stack Trace (if applicable)
```
Error loading file /home/ec2-user/sandbox/IntelliFix/satya/CT_sample/CT/Frontier-Maxima/ServiceManuals/5229839-100.pdf
Traceback (most recent call last):
File "/home/ec2-user/sandbox/IntelliFix/satya/rag_eval.py", line 5, in <module>
docs = loader.load()
^^^^^^^^^^^^^
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/langchain_community/document_loaders/directory.py", line 117, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/langchain_community/document_loaders/directory.py", line 182, in lazy_load
yield from self._lazy_load_file(i, p, pbar)
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/langchain_community/document_loaders/directory.py", line 220, in _lazy_load_file
raise e
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/langchain_community/document_loaders/directory.py", line 210, in _lazy_load_file
for subdoc in loader.lazy_load():
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/langchain_community/document_loaders/unstructured.py", line 88, in lazy_load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/langchain_community/document_loaders/unstructured.py", line 168, in _get_elements
from unstructured.partition.auto import partition
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/unstructured/partition/auto.py", line 78, in <module>
from unstructured.partition.pdf import partition_pdf
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/unstructured/partition/pdf.py", line 54, in <module>
from unstructured.partition.pdf_image.analysis.bbox_visualisation import (
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/unstructured/partition/pdf_image/analysis/bbox_visualisation.py", line 16, in <module>
from unstructured_inference.inference.layout import DocumentLayout
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/unstructured_inference/inference/layout.py", line 15, in <module>
from unstructured_inference.inference.layoutelement import (
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/unstructured_inference/inference/layoutelement.py", line 7, in <module>
from layoutparser.elements.layout import TextBlock
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/layoutparser/elements/__init__.py", line 16, in <module>
from .layout_elements import (
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/layoutparser/elements/layout_elements.py", line 25, in <module>
from cv2 import getPerspectiveTransform as _getPerspectiveTransform
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
```
### Description
The `DirectoryLoader` might have a compatibility issue. I am trying to load a folder of PDFs but I hit "libGL.so.1: cannot open shared object file".
I've tried uninstalling/reinstalling OpenCV and opencv-python-headless following Stack Overflow discussions, but nothing seems to work.
On the other hand, other loaders like `PubMedLoader` and `WebBaseLoader` seem to work fine (not sure if they hit the same code path!).
**P.S:** Raised an [issue](https://github.com/opencv/opencv/issues/25988) at OpenCV to understand if the issue is at their end.
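For anyone else hitting this, the usual remedy (an assumption based on the ImportError above, not something I have verified on this exact instance) is that `libGL.so.1` comes from a system package rather than pip, so it has to be installed at the OS level, or the GUI build of OpenCV avoided entirely:

```bash
# Option 1: install the missing system libraries (Debian/Ubuntu package names assumed)
sudo apt-get update && sudo apt-get install -y libgl1 libglib2.0-0

# Option 2: keep only the headless OpenCV build so libGL is never needed
pip uninstall -y opencv-python
pip install opencv-python-headless
```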
### System Info
I work on an EC2 instance which has linux background. My Python version is 3.12.4. Other relevant packages that might be useful are:
```
langchain==0.2.11
langchain-community==0.2.1
langchain-core==0.2.26
langchain-openai==0.1.20
langchain-text-splitters==0.2.0
langchainhub==0.1.20
opencv-contrib-python-headless==4.8.0.76
opencv-python==4.10.0.84
opencv-python-headless==4.8.0.76
python-dateutil==2.9.0.post0
python-docx==1.1.2
python-dotenv==1.0.1
python-iso639==2024.4.27
python-magic==0.4.27
python-multipart==0.0.9
python-oxmsg==0.0.1
python-pptx==0.6.23
unstructured==0.15.0
unstructured-client==0.25.1
unstructured-inference==0.7.36
unstructured.pytesseract==0.3.12
``` | [DirectoryLoader] ImportError: libGL.so.1: cannot open shared object file: No such file or directory | https://api.github.com/repos/langchain-ai/langchain/issues/25029/comments | 0 | 2024-08-03T23:11:39Z | 2024-08-03T23:17:51Z | https://github.com/langchain-ai/langchain/issues/25029 | 2,446,648,952 | 25,029 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.embeddings import OllamaEmbeddings
from langchain_ollama import OllamaLLM
embeddings_model = OllamaEmbeddings(base_url = "http://192.168.11.98:9000", model="nomic-embed-text:v1.5", num_ctx=4096)
embeddings_model.embed_query("Test")
## LLM Model
llm_model = OllamaLLM(base_url = "http://192.168.11.98:9000",model="llama3.1:8b",num_ctx = 2048)
llm_model.invoke("Test")
```
```Dockerfile
FROM ubuntu
# Install Prequisites
RUN apt-get update && apt-get install -y build-essential cmake gfortran libcurl4-openssl-dev libssl-dev libxml2-dev python3-dev python3-pip python3-venv
RUN pip install langchain langchain-core langchain-community langchain-experimental langchain-chroma langchain_ollama pandas --break-system-packages
```
### Error Message and Stack Trace (if applicable)
>>> from langchain_community.embeddings import OllamaEmbeddings
>>> from langchain_ollama import OllamaLLM
>>> embeddings_model = OllamaEmbeddings(base_url = "http://192.168.11.98:9000", model="nomic-embed-text:v1.5", num_ctx=4096)
>>> embeddings_model.embed_query("Test")
[0.8171377182006836, 0.7424322366714478, -3.6913845539093018, -0.5350275635719299, 1.98311185836792, -0.08007726818323135, 0.7974349856376648, -0.5946609377861023, 1.4877475500106812, -0.8044648766517639, 0.38856828212738037, 1.0630642175674438, 0.6806553602218628, -0.9530377984046936, -1.4606661796569824, -0.2956351637840271, -0.9512965083122253]
>>>
>>> ## LLM Model
>>> llm_model = OllamaLLM(base_url = "http://192.168.11.98:9000",model="llama3.1:8b",num_ctx = 2048)
>>> llm_model.invoke("Test")
Traceback (most recent call last):
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
yield
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_transports/default.py", line 233, in handle_request
resp = self._pool.handle_request(req)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
raise exc from None
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
response = connection.handle_request(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
raise exc
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
stream = self._connect(request)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 122, in _connect
stream = self._network_backend.connect_tcp(**kwargs)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpcore/_backends/sync.py", line 205, in connect_tcp
with map_exceptions(exc_map):
File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 346, in invoke
self.generate_prompt(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 703, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 882, in generate
output = self._generate_helper(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 740, in _generate_helper
raise e
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 727, in _generate_helper
self._generate(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_ollama/llms.py", line 268, in _generate
final_chunk = self._stream_with_aggregation(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_ollama/llms.py", line 236, in _stream_with_aggregation
for stream_resp in self._create_generate_stream(prompt, stop, **kwargs):
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_ollama/llms.py", line 186, in _create_generate_stream
yield from ollama.generate(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/ollama/_client.py", line 79, in _stream
with self._client.stream(method, url, **kwargs) as r:
File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_client.py", line 870, in stream
response = self.send(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_client.py", line 914, in send
response = self._send_handling_auth(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_client.py", line 942, in _send_handling_auth
response = self._send_handling_redirects(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
response = self._send_single_request(request)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_client.py", line 1015, in _send_single_request
response = transport.handle_request(request)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_transports/default.py", line 232, in handle_request
with map_httpcore_exceptions():
File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 111] Connection refused
### Description
I am trying to run the following code in a Python script inside a Docker container.
While the embedding model works fine, the LLM model returns Connection refused.
Both models work fine from outside the container, and the Ollama server is reachable from inside the container as well when called directly, e.g. through curl:
```
root@1fec10f8d40e:/# curl http://192.168.11.98:9000/api/generate -d '{
"model": "llama3.1:8b",
"prompt": "Test",
"stream": false
}'
{"model":"llama3.1:8b","created_at":"2024-08-04T03:49:46.282365097Z","response":"It looks like you want to test me. I'm happy to play along!\n\nHow would you like to proceed? Would you like to:\n\nA) Ask a simple question\nB) Provide a statement and ask for feedback\nC) Engage in a conversation on a specific topic\nD) Something else (please specify)\n\nLet me know, and we can get started!","done":true,"done_reason":"stop","context":[128006,882,128007,271,2323,128009,128006,78191,128007,271,2181,5992,1093,499,1390,311,1296,757,13,358,2846,6380,311,1514,3235,2268,4438,1053,499,1093,311,10570,30,19418,499,1093,311,1473,32,8,21069,264,4382,3488,198,33,8,40665,264,5224,323,2610,369,11302,198,34,8,3365,425,304,264,10652,389,264,3230,8712,198,35,8,25681,775,320,31121,14158,696,10267,757,1440,11,323,584,649,636,3940,0],"total_duration":2073589200,"load_duration":55691013,"prompt_eval_count":11,"prompt_eval_duration":32157000,"eval_count":76,"eval_duration":1943850000}
```
I have checked the model names etc. and they are correct, and everything works outside the Python/LangChain environment.
The issue only appears when `OllamaLLM` is run inside the container environment.
I have attached the Dockerfile, trimmed down to reproduce the issue. Attaching to the container with `docker run -it image bash` and running the Python code there produces the error.
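Looking at the stack trace, the call goes through the module-level `ollama.generate(...)`, whose default client points at localhost, so my guess is that `base_url` is not reaching the underlying client in `langchain-ollama` 0.1.1. A rough sketch of a workaround I would try (relying on the `ollama` client's `OLLAMA_HOST` fallback, which has to be set before the import — not verified):

```python
import os

# Assumption: the ollama python client falls back to OLLAMA_HOST when no host
# is passed explicitly, so set it before langchain_ollama imports ollama.
os.environ["OLLAMA_HOST"] = "http://192.168.11.98:9000"

from langchain_ollama import OllamaLLM  # noqa: E402

llm_model = OllamaLLM(model="llama3.1:8b", num_ctx=2048)
print(llm_model.invoke("Test"))
```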
### System Info
pip freeze | grep langchain
langchain==0.2.12
langchain-chroma==0.1.2
langchain-community==0.2.11
langchain-core==0.2.28
langchain-experimental==0.0.64
langchain-ollama==0.1.1
langchain-text-splitters==0.2.2
| OllamaLLM Connection refused from within docker container while OllamaEmbeddings works The base_url is custom and same for both. | https://api.github.com/repos/langchain-ai/langchain/issues/25022/comments | 0 | 2024-08-03T17:14:03Z | 2024-08-04T03:52:38Z | https://github.com/langchain-ai/langchain/issues/25022 | 2,446,508,964 | 25,022 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_community.document_transformers import BeautifulSoupTransformer
from langchain_core.documents import Document
text="""<a href="https://google.com/"><span>google</span></a>"""
b = BeautifulSoupTransformer()
docs = b.transform_documents(
[Document(text)],
tags_to_extract=["p", "li", "div", "a", "span", "h1", "h2", "h3", "h4", "h5", "h6"],
remove_comments=True
)
print(docs[0].page_content)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Instead of getting the same output as when extracting a plain `<a href="https://google.com/">google</a>`, namely `google (https://google.com/)`, we get just `google`, because of the interior `<span>` tag.
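As a stopgap (not a fix for the transformer itself), flattening the children of each `<a>` tag before handing the HTML to the transformer seems to restore the `text (url)` output; sketch below, untested beyond this toy snippet:

```python
from bs4 import BeautifulSoup
from langchain_community.document_transformers import BeautifulSoupTransformer
from langchain_core.documents import Document

text = """<a href="https://google.com/"><span>google</span></a>"""

# Replace each anchor's children with its plain text so only the <a> tag remains.
soup = BeautifulSoup(text, "html.parser")
for a in soup.find_all("a"):
    a.string = a.get_text()

docs = BeautifulSoupTransformer().transform_documents(
    [Document(str(soup))],
    tags_to_extract=["p", "li", "div", "a", "span", "h1", "h2", "h3", "h4", "h5", "h6"],
    remove_comments=True,
)
print(docs[0].page_content)  # expected: google (https://google.com/)
```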
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.12.3 (tags/v3.12.3:f6650f9, Apr 9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.28
> langchain: 0.2.12
> langchain_community: 0.2.11
> langsmith: 0.1.96
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | BeautifulSoup transformer fails to treat links with internal tags the same way | https://api.github.com/repos/langchain-ai/langchain/issues/25018/comments | 0 | 2024-08-03T10:49:51Z | 2024-08-03T10:52:20Z | https://github.com/langchain-ai/langchain/issues/25018 | 2,446,293,298 | 25,018 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from typing import Literal
from langchain_groq import ChatGroq
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
# Data model
class RouteQuery(BaseModel):
"""Route a user query to the most relevant prompt template."""
datasource: Literal["expert_prompt", "summarize_prompt", "normal_QA"] = Field(
...,
description="Given a user question choose which prompt would be most relevant for append to the PromptTemplate",
)
# LLM with function call
llm = ChatGroq(model_name="llama3-groq-8b-8192-tool-use-preview", temperature=0,api_key= "API") ## Replace to real LLMs (Cohere / Groq / OpenAI)
structured_llm = llm.with_structured_output(RouteQuery)
# Prompt
system = """You are an expert at routing a user question to the appropriate prompt template.
Based on the question is referring to, route it to the relevant prompt template. If you can't route , return the RAG_prompt"""
prompt = ChatPromptTemplate.from_messages(
[
("system", system),
("human", "{question}"),
]
)
# Define router
router = prompt | structured_llm
question = """
Giải thích và so sánh thí nghiệm khe đôi của Young trong cơ học cổ điển và cơ học lượng tử. Làm thế nào mà hiện tượng giao thoa lại được giải thích trong cơ học lượng tử?
"""
result = router.invoke({"question": question})
print(result)
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
InternalServerError Traceback (most recent call last)
Cell In[122], line 1
----> 1 router = llm_router.route_prompt("Giải thích hiện tượng biến mất của đạo hàm khi thực hiện huấn luyện mạng RNN")

Cell In[114], line 20
     18 def route_prompt(self, question) :
     19     router = self._format_prompt(question)
---> 20     result = router.invoke({"question": question})
     22     return result.datasource

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/langchain_core/runnables/base.py:2875, in RunnableSequence.invoke(self, input, config, **kwargs)
   2873     input = step.invoke(input, config, **kwargs)
   2874 else:
-> 2875     input = step.invoke(input, config)
   2876 # finish the root run
   2877 except BaseException as e:

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/langchain_core/runnables/base.py:5060, in RunnableBindingBase.invoke(self, input, config, **kwargs)
   5054 def invoke(
   5055     self,
   5056     input: Input,
   5057     config: Optional[RunnableConfig] = None,
   5058     **kwargs: Optional[Any],
   5059 ) -> Output:
-> 5060     return self.bound.invoke(
   5061         input,
   5062         self._merge_configs(config),
   5063         **{**self.kwargs, **kwargs},
   5064     )

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:274, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    263 def invoke(
    264     self,
    265     input: LanguageModelInput,
   (...)
    269     **kwargs: Any,
    270 ) -> BaseMessage:
    271     config = ensure_config(config)
    272     return cast(
    273         ChatGeneration,
--> 274         self.generate_prompt(
    275             [self._convert_input(input)],
    276             stop=stop,
    277             callbacks=config.get("callbacks"),
    278             tags=config.get("tags"),
    279             metadata=config.get("metadata"),
    280             run_name=config.get("run_name"),
    281             run_id=config.pop("run_id", None),
    282             **kwargs,
    283         ).generations[0][0],
    284     ).message

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:714, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    706 def generate_prompt(
    707     self,
    708     prompts: List[PromptValue],
   (...)
    711     **kwargs: Any,
    712 ) -> LLMResult:
    713     prompt_messages = [p.to_messages() for p in prompts]
--> 714     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:571, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    569     if run_managers:
    570         run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 571         raise e
    572 flattened_outputs = [
    573     LLMResult(generations=[res.generations], llm_output=res.llm_output)  # type: ignore[list-item]
    574     for res in results
    575 ]
    576 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:561, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    558 for i, m in enumerate(messages):
    559     try:
    560         results.append(
--> 561             self._generate_with_cache(
    562                 m,
    563                 stop=stop,
    564                 run_manager=run_managers[i] if run_managers else None,
    565                 **kwargs,
    566             )
    567         )
    568     except BaseException as e:
    569         if run_managers:

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:793, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    791 else:
    792     if inspect.signature(self._generate).parameters.get("run_manager"):
--> 793         result = self._generate(
    794             messages, stop=stop, run_manager=run_manager, **kwargs
    795         )
    796     else:
    797         result = self._generate(messages, stop=stop, **kwargs)

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/langchain_groq/chat_models.py:472, in ChatGroq._generate(self, messages, stop, run_manager, **kwargs)
    467 message_dicts, params = self._create_message_dicts(messages, stop)
    468 params = {
    469     **params,
    470     **kwargs,
    471 }
--> 472 response = self.client.create(messages=message_dicts, **params)
    473 return self._create_chat_result(response)

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:289, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, parallel_tool_calls, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
    148 def create(
    149     self,
    150     *,
   (...)
    177     timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    178 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
    179     """
    180     Creates a model response for the given chat conversation.
    181
   (...)
    287     timeout: Override the client-level default timeout for this request, in seconds
    288     """
--> 289     return self._post(
    290         "/openai/v1/chat/completions",
    291         body=maybe_transform(
    292             {
    293                 "messages": messages,
    294                 "model": model,
    295                 "frequency_penalty": frequency_penalty,
    296                 "function_call": function_call,
    297                 "functions": functions,
    298                 "logit_bias": logit_bias,
    299                 "logprobs": logprobs,
    300                 "max_tokens": max_tokens,
    301                 "n": n,
    302                 "parallel_tool_calls": parallel_tool_calls,
    303                 "presence_penalty": presence_penalty,
    304                 "response_format": response_format,
    305                 "seed": seed,
    306                 "stop": stop,
    307                 "stream": stream,
    308                 "temperature": temperature,
    309                 "tool_choice": tool_choice,
    310                 "tools": tools,
    311                 "top_logprobs": top_logprobs,
    312                 "top_p": top_p,
    313                 "user": user,
    314             },
    315             completion_create_params.CompletionCreateParams,
    316         ),
    317         options=make_request_options(
    318             extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
    319         ),
    320         cast_to=ChatCompletion,
    321         stream=stream or False,
    322         stream_cls=Stream[ChatCompletionChunk],
    323     )

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1225, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
   1211 def post(
   1212     self,
   1213     path: str,
   (...)
   1220     stream_cls: type[_StreamT] | None = None,
   1221 ) -> ResponseT | _StreamT:
   1222     opts = FinalRequestOptions.construct(
   1223         method="post", url=path, json_data=body, files=to_httpx_files(files), **options
   1224     )
-> 1225     return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:920, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
    911 def request(
    912     self,
    913     cast_to: Type[ResponseT],
   (...)
    918     stream_cls: type[_StreamT] | None = None,
    919 ) -> ResponseT | _StreamT:
--> 920     return self._request(
    921         cast_to=cast_to,
    922         options=options,
    923         stream=stream,
    924         stream_cls=stream_cls,
    925         remaining_retries=remaining_retries,
    926     )

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1018, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
   1015     err.response.read()
   1017 log.debug("Re-raising status error")
-> 1018 raise self._make_status_error_from_response(err.response) from None
   1020 return self._process_response(
   1021     cast_to=cast_to,
   1022     options=options,
   (...)
   1025     stream_cls=stream_cls,
   1026 )
InternalServerError: Error code: 502 - {'error': {'type': 'internal_server_error', 'code': 'service_unavailable'}}
### Description
I'm just trying to test a model for prompt routing. I use the Groq API and have already entered my API key, but the error above is raised.
I also tested with the Groq package directly and it works fine:
```python
import os
from groq import Groq

client = Groq(
    api_key="API",
)
chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Explain the importance of fast language models",
        }
    ],
    model="llama3-8b-8192",
)
print(chat_completion.choices[0].message.content)
```
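Since the 502 comes back from Groq's side, one possible mitigation (not a fix) would be to retry the chain on transient server errors. A sketch, reusing `router` and `question` from the snippet above; the retry parameters are arbitrary:

```python
from groq import InternalServerError

# Retry the routing chain when Groq returns a transient server-side error.
router_with_retry = router.with_retry(
    retry_if_exception_type=(InternalServerError,),
    wait_exponential_jitter=True,
    stop_after_attempt=3,
)
print(router_with_retry.invoke({"question": question}))
```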
### System Info
langchain==0.2.12
langchain-chroma==0.1.2
langchain-community==0.2.10
langchain-core==0.2.27
langchain-groq==0.1.9
langchain-text-splitters==0.2.2 | Langchain Groq 502 error | https://api.github.com/repos/langchain-ai/langchain/issues/25016/comments | 1 | 2024-08-03T09:23:00Z | 2024-08-05T23:30:49Z | https://github.com/langchain-ai/langchain/issues/25016 | 2,446,260,635 | 25,016 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.output_parsers import OutputFixingParser
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI, OpenAI
from langchain.output_parsers import RetryOutputParser
from langchain_core.runnables import RunnableLambda, RunnableParallel
template = """Based on the user question, provide an Action and Action Input for what step should be taken.
{format_instructions}
Question: {query}
Response:"""
class Action(BaseModel):
action: str = Field(description="action to take")
action_input: str = Field(description="input to the action")
parser = PydanticOutputParser(pydantic_object=Action)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()},
)
completion_chain = prompt | ChatOpenAI(temperature=0) # Should be OpenAI
retry_parser = RetryOutputParser.from_llm(parser=parser, llm=ChatOpenAI(temperature=0)) # Should be OpenAI
main_chain = RunnableParallel(
completion=completion_chain, prompt_value=prompt
) | RunnableLambda(lambda x: retry_parser.parse_with_prompt(**x))
main_chain.invoke({"query": "who is leo di caprios gf?"})
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[11], line 35
     29 retry_parser = RetryOutputParser.from_llm(parser=parser, llm=ChatOpenAI(temperature=0))
     30 main_chain = RunnableParallel(
     31     completion=completion_chain, prompt_value=prompt
     32 ) | RunnableLambda(lambda x: retry_parser.parse_with_prompt(**x))
---> 35 main_chain.invoke({"query": "who is leo di caprios gf?"})
ValidationError: 1 validation error for Generation
text
str type expected (type=type_error.str)
### Description
The code was copied from the official documentation: https://python.langchain.com/v0.2/docs/how_to/output_parser_retry/
The original code example works.
But when I changed the OpenAI completion model to ChatOpenAI, it failed.
Does the OutputFixingParser only support the OpenAI completion model, and not ChatOpenAI?
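My current guess is that the `ValidationError` happens because `ChatOpenAI` returns an `AIMessage` while the retry parser expects the completion as a plain string; adding a `StrOutputParser` after the chat model seems to avoid it. A sketch, reusing the definitions from the snippet above (not verified beyond this example):

```python
from langchain_core.output_parsers import StrOutputParser

# Convert the chat model's AIMessage to a string before the retry parser sees it.
completion_chain = prompt | ChatOpenAI(temperature=0) | StrOutputParser()

main_chain = RunnableParallel(
    completion=completion_chain, prompt_value=prompt
) | RunnableLambda(lambda x: retry_parser.parse_with_prompt(**x))
main_chain.invoke({"query": "who is leo di caprios gf?"})
```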
### System Info
python 3.11.9
langchain 0.2.12
langchain-core 0.2.27 | OutputFixingParser doesn't support ChatOpenAI model (not OpenAI model)? | https://api.github.com/repos/langchain-ai/langchain/issues/24995/comments | 0 | 2024-08-02T19:47:30Z | 2024-08-02T19:50:08Z | https://github.com/langchain-ai/langchain/issues/24995 | 2,445,665,021 | 24,995 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/local_rag/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I was following the "Build a Local RAG Application" tutorial from the v0.2 docs, and especially followed the Setup steps for installing all the relevant packages:
```python
# Document loading, retrieval methods and text splitting
%pip install -qU langchain langchain_community
# Local vector store via Chroma
%pip install -qU langchain_chroma
# Local inference and embeddings via Ollama
%pip install -qU langchain_ollama
```
I think I followed every step of the tutorial correctly, yet when I ran the subsequent steps I got a `ModuleNotFoundError: No module named 'bs4'`, which suggests that a pip install step for BeautifulSoup is missing.
In particular, running the `.load` method from `langchain_community.document_loaders.WebBaseLoader` raises the `ModuleNotFoundError`. Clearly, this method relies on BeautifulSoup.
So either I am missing some install steps in the Setup or a step to install `BeautifulSoup` is canonically missing from the tutorial which we should add for completeness.
An easy fix, of course, is to simply add `pip install beautifulsoup4` somewhere in the setup stage of the tutorial.
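For example, the setup section of the tutorial could gain a cell along these lines:
```python
# HTML parsing backend required by WebBaseLoader
%pip install -qU beautifulsoup4
```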
Cheers,
Salman
### Idea or request for content:
_No response_ | DOC: Naively following "Build a Local RAG Application" in v0.2 docs throws a BeautifulSoup import error | https://api.github.com/repos/langchain-ai/langchain/issues/24991/comments | 0 | 2024-08-02T18:31:12Z | 2024-08-02T18:33:53Z | https://github.com/langchain-ai/langchain/issues/24991 | 2,445,543,496 | 24,991 |
[
"hwchase17",
"langchain"
] | ### Example Code
```python
async for event in agent.astream_events(
input={...},
config={...},
include_tags=[...],
version="v2"
):
print(event)
```
### Error Message and Stack Trace
```
File /opt/miniconda3/envs/ll-engine/lib/python3.11/site-packages/langchain_core/runnables/base.py:5256, in RunnableBindingBase.astream_events(self, input, config, **kwargs)
   5250 async def astream_events(
   5251     self,
   5252     input: Input,
   5253     config: Optional[RunnableConfig] = None,
   5254     **kwargs: Optional[Any],
   5255 ) -> AsyncIterator[StreamEvent]:
-> 5256     async for item in self.bound.astream_events(
   5257         input, self._merge_configs(config), **{**self.kwargs, **kwargs}
   5258     ):
   5259         yield item

File /opt/miniconda3/envs/ll-engine/lib/python3.11/site-packages/langchain_core/runnables/base.py:1246, in Runnable.astream_events(self, input, config, version, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs)
   1241     raise NotImplementedError(
   1242         'Only versions "v1" and "v2" of the schema is currently supported.'
   1243     )
   1245 async with aclosing(event_stream):
-> 1246     async for event in event_stream:
   1247         yield event

File /opt/miniconda3/envs/ll-engine/lib/python3.11/site-packages/langchain_core/tracers/event_stream.py:985, in _astream_events_implementation_v2(runnable, input, config, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs)
    980     first_event_sent = True
    981     # This is a work-around an issue where the inputs into the
    982     # chain are not available until the entire input is consumed.
    983     # As a temporary solution, we'll modify the input to be the input
    984     # that was passed into the chain.
--> 985     event["data"]["input"] = input
    986     first_event_run_id = event["run_id"]
    987     yield event
TypeError: list indices must be integers or slices, not str
```
### Description
I'm trying to switch from **astream_events v1 to astream_events v2** in order to use custom events. The above code works perfectly fine in version v1, but throws the error after changing only the version parameter.
The documentation says no changes are required in order to switch to the new version.
Has anyone had this issue and resolved it somehow?
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.0.0: Sat Jul 13 00:55:20 PDT 2024; root:xnu-11215.0.165.0.4~50/RELEASE_ARM64_T8112
> Python Version: 3.11.8 (main, Feb 26 2024, 15:36:12) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.2.25
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.79
> langchain_cli: 0.0.21
> langchain_openai: 0.1.19
> langchain_pinecone: 0.1.3
> langchain_text_splitters: 0.2.2
> langgraph: 0.1.17
> langserve: 0.2.2 | astream_events in version v2 throws: "TypeError: list indices must be integers or slices, not str" | https://api.github.com/repos/langchain-ai/langchain/issues/24987/comments | 2 | 2024-08-02T17:41:05Z | 2024-08-02T18:06:25Z | https://github.com/langchain-ai/langchain/issues/24987 | 2,445,472,929 | 24,987 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.document_loaders import PyPDFLoader
myurl='https://fraser.stlouisfed.org/docs/publications/usbudget/usbudget_1923a.pdf'
loader = PyPDFLoader(myurl)
pages = loader.load()
### Error Message and Stack Trace (if applicable)
....lib/python3.12/site-packages/langchain_community/document_loaders/pdf.py:199: ResourceWarning: unclosed file <_io.BufferedReader name='/var/folders/5n/_zzhgwqd2pqdbk6t3hckrsnh0000gn/T/tmpz1ilifhb/tmp.pdf'>
blob = Blob.from_data(open(self.file_path, "rb").read(), path=self.web_path) # type: ignore[attr-defined]
Object allocated at (most recent call last):
File "/Users/blabla/.local/share/virtualenvs/llm-narrative-restrict-concept-IiXDtsX5/lib/python3.12/site-packages/langchain_community/document_loaders/pdf.py", lineno 199
blob = Blob.from_data(open(self.file_path, "rb").read(), path=self.web_path) # type: ignore[attr-defined]
### Description
I am trying to use PyPDFLoader for importing pdf files from the internet. Sometimes, I get a warning that slows down the reading of PDFs. Dosu suggested that the warning can be fixed by changing the code in the package. See the discussion https://github.com/langchain-ai/langchain/discussions/24972?notification_referrer_id=NT_kwDOAeiAOrQxMTc4MTAyNzI4MzozMjAxNDM5NA#discussioncomment-10223198
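For reference, a sketch of what the suggested change around line 199 of `langchain_community/document_loaders/pdf.py` could look like (closing the temporary file explicitly instead of leaving it to the garbage collector):
```python
with open(self.file_path, "rb") as f:
    blob = Blob.from_data(f.read(), path=self.web_path)  # type: ignore[attr-defined]
```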
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:16:51 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T8103
> Python Version: 3.12.1 (v3.12.1:2305ca5144, Dec 7 2023, 17:23:38) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.27
> langchain: 0.2.12
> langchain_community: 0.2.10
> langsmith: 0.1.96
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.20
> langchain_text_splitters: 0.2.2
> langchain_weaviate: 0.0.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Fix warning for unclosed document when using PyPDFLoader for URLs | https://api.github.com/repos/langchain-ai/langchain/issues/24973/comments | 0 | 2024-08-02T12:22:05Z | 2024-08-02T12:24:45Z | https://github.com/langchain-ai/langchain/issues/24973 | 2,444,844,534 | 24,973
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
from langchain.chat_models import init_chat_model
from langchain.chat_models.base import _check_pkg
_check_pkg("langchain_ollama") # success
_check_pkg("langchain_community") # success
model = init_chat_model("llama3.1:8b", model_provider="ollama")
print(type(model)) # <class 'langchain_community.chat_models.ollama.ChatOllama'>
```
### Description
When both langchain_ollama and langchain_community are installed, init_chat_model resolves to langchain_community first.
I think this is unreasonable: I installed langchain_ollama precisely because I want it to take precedence.
The current code snippet is as follows. When both packages are present, langchain_community overrides langchain_ollama.
```py
elif model_provider == "ollama":
try:
_check_pkg("langchain_ollama")
from langchain_ollama import ChatOllama
except ImportError:
pass
# For backwards compatibility
try:
_check_pkg("langchain_community")
from langchain_community.chat_models import ChatOllama
except ImportError:
# If both langchain-ollama and langchain-community aren't available, raise
# an error related to langchain-ollama
_check_pkg("langchain_ollama")
return ChatOllama(model=model, **kwargs)
```
I think the langchain_community import should only happen inside the except ImportError branch for langchain_ollama (where the pass currently is), so that langchain_ollama wins whenever it is installed.
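Something along these lines (untested sketch of the reordered logic):
```py
elif model_provider == "ollama":
    try:
        _check_pkg("langchain_ollama")
        from langchain_ollama import ChatOllama
    except ImportError:
        # Only fall back to langchain-community when langchain-ollama is missing.
        try:
            _check_pkg("langchain_community")
            from langchain_community.chat_models import ChatOllama
        except ImportError:
            # If neither is available, raise an error related to langchain-ollama.
            _check_pkg("langchain_ollama")
    return ChatOllama(model=model, **kwargs)
```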
### System Info
langchain==0.2.12
langchain-community==0.2.10
langchain-core==0.2.27
langchain-ollama==0.1.1
platform==linux
python-version==3.10.12
| The import priority of init_chat_model for the ollama package | https://api.github.com/repos/langchain-ai/langchain/issues/24970/comments | 1 | 2024-08-02T11:34:30Z | 2024-08-02T15:34:46Z | https://github.com/langchain-ai/langchain/issues/24970 | 2,444,768,235 | 24,970 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
athena_loader = AthenaLoader(
query=f"SELECT 1",
database="default",
s3_output_uri="s3://fake-bucket/fake-prefix",
profile_name=None,
)
```
This code works but the type hinting is incorrect which results in error warnings from type checkers.
### Error Message and Stack Trace (if applicable)
![image](https://github.com/user-attachments/assets/d23beaa3-3ba8-4701-8b50-5c79d12cd61f)
### Description
The Athena loader already has code to handle the absence of a profile. I think profile_name should be an optional kwarg, like this:
```python
profile_name: Optional[str] = None,
```
The code here shows that `None` is actually handled and is a valid input.
https://github.com/langchain-ai/langchain/blob/d7688a4328f5d66f3b274db6e7b024a24b15cc8e/libs/community/langchain_community/document_loaders/athena.py#L62-L67
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
> Python Version: 3.11.4 (main, Mar 26 2024, 16:28:52) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.25
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.94
> langchain_openai: 0.1.19
> langchain_postgres: 0.0.9
> langchain_text_splitters: 0.2.2
> langgraph: 0.1.17
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve
| AthenaLoader profile type hinting is incorrect | https://api.github.com/repos/langchain-ai/langchain/issues/24957/comments | 1 | 2024-08-02T04:19:24Z | 2024-08-05T19:46:05Z | https://github.com/langchain-ai/langchain/issues/24957 | 2,443,993,898 | 24,957 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
infinity server:
`docker run -it --gpus all -v ~/llms/:/app/.cache -p 8000:8000 michaelf34/infinity:latest v2 --model-id "/app/.cache/multilingual-e5-large" --port 8000`
```python3
import asyncio
from langchain_community.embeddings import InfinityEmbeddings
async def main():
infinity_api_url = "http://<URL>:8000"
embeddings = InfinityEmbeddings(
model=".cache/multilingual-e5-large", infinity_api_url=infinity_api_url
)
query = "Where is Paris?"
query_result = await embeddings.aembed_query(query)
asyncio.run(main())
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/home/[email protected]/RAG_test/local.py", line 30, in <module>
asyncio.run(main())
File "/usr/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/home/[email protected]/RAG_test/local.py", line 16, in main
query_result = await embeddings.aembed_query(query)
File "/home/[email protected]/.local/lib/python3.9/site-packages/langchain_community/embeddings/infinity.py", line 115, in aembed_query
embeddings = await self.aembed_documents([text])
File "/home/[email protected]/.local/lib/python3.9/site-packages/langchain_community/embeddings/infinity.py", line 89, in aembed_documents
embeddings = await self.client.aembed(
File "/home/[email protected]/.local/lib/python3.9/site-packages/langchain_community/embeddings/infinity.py", line 315, in aembed
*[
File "/home/[email protected]/.local/lib/python3.9/site-packages/langchain_community/embeddings/infinity.py", line 316, in <listcomp>
self._async_request(
TypeError: _async_request() got an unexpected keyword argument 'url'
```
### Description
This is a minimal example; in general, some applications that use FAISS fail with the same error as well.
```python3
db = FAISS.load_local("some_vectorstore",
embeddings,
allow_dangerous_deserialization=True)
retriever = db.as_retriever(search_kwargs={"k" : 2})
result = await retriever.ainvoke(query) #or await db.asimilarity_search(query)
```
Everything works fine when async is not used; a possible stopgap based on that is sketched below.
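As a stopgap (sketch, assuming the sync path keeps working), the blocking call can be run in a worker thread so it can still be awaited:
```python3
import asyncio

# Hypothetical workaround until the async client is fixed: offload the sync call.
query_result = await asyncio.to_thread(embeddings.embed_query, query)
```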
Thank you!
### System Info
langchain==0.2.10
langchain-community==0.2.9
langchain-core==0.2.22
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
Debian 5.10.197-1 (2023-09-29) x86_64 GNU/Linux
Python 3.9.2 | InfinityEmbeddings do not work properly in a asynchronous mode (aembed falls with error) | https://api.github.com/repos/langchain-ai/langchain/issues/24942/comments | 0 | 2024-08-01T16:45:44Z | 2024-08-02T06:41:07Z | https://github.com/langchain-ai/langchain/issues/24942 | 2,442,912,548 | 24,942 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/chatbot/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Initial setup has this:
```
model = AzureChatOpenAI(
azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
)
```
Everything works normally until I get to this section of the documentation:
```
from langchain_core.messages import SystemMessage, trim_messages
trimmer = trim_messages(
max_tokens=65,
strategy="last",
token_counter=model,
include_system=True,
allow_partial=False,
start_on="human",
)
messages = [
SystemMessage(content="you're a good assistant"),
HumanMessage(content="hi! I'm bob"),
AIMessage(content="hi!"),
HumanMessage(content="I like vanilla ice cream"),
AIMessage(content="nice"),
HumanMessage(content="whats 2 + 2"),
AIMessage(content="4"),
HumanMessage(content="thanks"),
AIMessage(content="no problem!"),
HumanMessage(content="having fun?"),
AIMessage(content="yes!"),
]
trimmer.invoke(messages)
```
This fails with an `AttributeError` saying that None has no attribute `startswith`.
I was able to fix this error by adding the following into my model setup:
```
model = AzureChatOpenAI(
model_name=os.environ["AZURE_OPENAI_MODEL_NAME"],
azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
)
```
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/tutorials/chatbot/> - trimmer failing without model_name being filled in | https://api.github.com/repos/langchain-ai/langchain/issues/24928/comments | 0 | 2024-08-01T15:01:47Z | 2024-08-01T15:04:28Z | https://github.com/langchain-ai/langchain/issues/24928 | 2,442,703,142 | 24,928 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.pydantic_v1 import BaseModel, Field
from langchain_core.tools import tool  # assumed import; not shown in the original snippet

class ModelA(BaseModel):
    field_a: str = Field(description='Base class field')

class ModelB(ModelA):
    field_b: str = Field(description='Subclass field')

def func(field_a: str, field_b: str) -> str:  # placeholder body; omitted in the original
    return field_a + field_b
mytool = tool(func, args_schema=ModelB)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Hi,
I noticed that in the current version of langchain_core, tools using an args_schema have incomplete inputs if the schema is derived from a superclass. That is because as of recently there is a property `tool_call_schema`, which creates a schema with only "non-injected" fields: https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/tools.py#L387
However, it derives the field names to be retained from the `__annotations__` property of the schema, which does not inherit fields of the base class. Hence, all fields from the base class (ModelA in the example above) are deleted. This causes incomplete tool inputs when using schemas that use inheritance.
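A quick check that seems to confirm this, using the ModelA/ModelB classes from the example above:
```python
# __annotations__ only contains fields declared directly on the class,
# while pydantic v1's __fields__ also includes inherited ones.
print(ModelB.__annotations__)   # {'field_b': <class 'str'>} -- field_a is missing
print(list(ModelB.__fields__))  # ['field_a', 'field_b']
```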
Is this a regression or should the schemas be used differently?
Thanks!
Valentin
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Mon Jul 15 21:39:34 UTC 2024
> Python Version: 3.11.4 (main, Jul 30 2024, 10:36:58) [GCC 14.1.1 20240522]
Package Information
-------------------
> langchain_core: 0.2.26
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.94
> langchain_openai: 0.1.19
> langchain_text_splitters: 0.2.2
> langserve: 0.2.2 | BaseTool's `tool_call_schema` ignores inherited fields of an `args_schema`, causing incomplete tool inputs | https://api.github.com/repos/langchain-ai/langchain/issues/24925/comments | 2 | 2024-08-01T12:44:58Z | 2024-08-02T19:37:14Z | https://github.com/langchain-ai/langchain/issues/24925 | 2,442,363,033 | 24,925 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
AGENT_PROMPT = """
{tool_names}
Valid action values: "Final Answer" or {tools}
Follow this format, example:
Question: the input question you must answer
Thought: you should always think about what to do
Action(Tool): the action to take
Action Input(Tool Input): the input to the action
Observation: the result of the action
Thought: I now know the final answer
Final Answer: the final answer to the original input question
"""
langchain_llm_client = ChatOpenAI(
model='gpt-4o',
temperature=0.,
api_key=OPENAI_API_KEY,
streaming=True,
max_tokens=None,
)
@tool
async def test():
"""Test tool"""
return f'Test Successfully.\n'
tools = [test]
agent = create_tool_calling_agent(langchain_llm_client, tools, AGENT_PROMPT)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
verbose=False,
return_intermediate_steps=True
)
async def agent_completion_async(
agent_executor,
message: str,
tools: List = None,
) -> AsyncGenerator:
"""Base on query to decide the tool which should use.
Response with `async` and `streaming`.
"""
tool_names = [tool.name for tool in tools]
async for event in agent_executor.astream_events(
{
"input": messages,
"tools": tools,
"tool_names": tool_names,
"agent_scratchpad": lambda x: format_to_openai_tool_messages(x["intermediate_steps"]),
},
version='v2'
):
kind = event['event']
if kind == "on_chain_start":
if (
event["name"] == "Agent"
):
yield(
f"\n### Start Agent: `{event['name']}`, Agent Input: `{event['data'].get('input')}`\n"
)
elif kind == "on_chat_model_stream":
# llm model response
content = event["data"]["chunk"].content
if content:
yield content
elif kind == "on_tool_start":
yield(
f"\n### Start Tool: `{event['name']}`, Tool Input: `{event['data'].get('input')}`\n"
)
elif kind == "on_tool_end":
yield(
f"\n### Finished Tool: `{event['name']}`, Tool Results: \n"
)
if isinstance(event['data'].get('output'), AsyncGenerator):
async for event_chunk in event['data'].get('output'):
yield event_chunk
else:
yield(
f"`{event['data'].get('output')}`\n"
)
elif kind == "on_chain_end":
if (
event["name"] == "Agent"
):
yield(
f"\n### Finished Agent: `{event['name']}`, Agent Results: \n"
)
yield(
f"{event['data'].get('output')['output']}\n"
)
async def main():
    async for response in agent_completion_async(agent_executor, ['use test tool'], tools):
        print(response)
```
### Results
```
Question: use test tool
Thought: I should use the test tool to fulfill the user's request.
Action(Tool): test
Action Input(Tool Input): {}
Observation: The test tool has been executed successfully.
Thought: I now know the final answer.
Final Answer: The test tool has been executed successfully.
```
### Error Message and Stack Trace (if applicable)
```
Exception ignored in: <async_generator object AgentExecutorIterator.__aiter__ at 0x0000024953FD6D40>
Traceback (most recent call last):
File "C:\Users\\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\agent.py", line 1794, in astream
yield step
RuntimeError: async generator ignored GeneratorExit
```
### Description
When using the agent astream, it sometimes executes successfully, but other times it encounters errors and doesn't execute the tool as expected.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.8 (tags/v3.11.8:db85d51, Feb 6 2024, 22:03:32) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.23
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.86
> langchain_anthropic: 0.1.20
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | RuntimeError: async generator ignored GeneratorExit when using agent `astream` | https://api.github.com/repos/langchain-ai/langchain/issues/24914/comments | 0 | 2024-08-01T03:14:40Z | 2024-08-09T14:04:07Z | https://github.com/langchain-ai/langchain/issues/24914 | 2,441,355,256 | 24,914 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
## Issue
To make our retriever integrations as easy to use as possible we need to make sure the docs for them are thorough and standardized. There are two parts to this: updating the retriever docstrings and updating the actual integration docs.
This needs to be done for each retriever integration, ideally with one PR per retriever.
Related to broader issues #21983 and #22005.
## Docstrings
Each retriever class docstring should have the sections shown in the [Appendix](#appendix) below. The sections should have input and output code blocks when relevant.
To build a preview of the API docs for the package you're working on run (from root of repo):
```bash
make api_docs_clean; make api_docs_quick_preview API_PKG=community
```
where `API_PKG=` should be the parent directory that houses the edited package (e.g. "community" for `langchain-community`).
## Doc pages
Each retriever [docs page](https://python.langchain.com/v0.2/docs/integrations/retrievers/) should follow [this template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/retrievers.ipynb).
See example [here](https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/retrievers/tavily.ipynb).
You can use the `langchain-cli` to quickly get started with a new integration docs page (run from root of repo):
```bash
poetry run pip install -e libs/cli
poetry run langchain-cli integration create-doc --name "foo-bar" --name-class FooBar --component-type retriever --destination-dir ./docs/docs/integrations/retrievers/
```
where `--name` is the integration package name without the "langchain-" prefix and `--name-class` is the class name without the "Retriever" postfix. This will create a template doc with some autopopulated fields at docs/docs/integrations/retrievers/foo_bar.ipynb.
To build a preview of the docs you can run (from root):
```bash
make docs_clean
make docs_build
cd docs/build/output-new
yarn
yarn start
```
## Appendix
Expected sections for the retriever class docstring.
```python
"""__ModuleName__ retriever.
# TODO: Replace with relevant packages, env vars, etc.
Setup:
Install ``__package_name__`` and set environment variable ``__MODULE_NAME___API_KEY``.
.. code-block:: bash
pip install -U __package_name__
export __MODULE_NAME___API_KEY="your-api-key"
# TODO: Populate with relevant params.
Key init args:
arg 1: type
description
arg 2: type
description
# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python
from __package_name__ import __ModuleName__Retriever
retriever = __ModuleName__Retriever(
# ...
)
Usage:
.. code-block:: python
query = "..."
retriever.invoke(query)
.. code-block:: python
# TODO: Example output.
Use within a chain:
.. code-block:: python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
prompt = ChatPromptTemplate.from_template(
\"\"\"Answer the question based only on the context provided.
Context: {context}
Question: {question}\"\"\"
)
llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
def format_docs(docs):
return "\n\n".join(doc.page_content for doc in docs)
chain = (
{"context": retriever | format_docs, "question": RunnablePassthrough()}
| prompt
| llm
| StrOutputParser()
)
chain.invoke("...")
.. code-block:: python
# TODO: Example output.
""" # noqa: E501
```
See example [here](https://github.com/langchain-ai/langchain/blob/a24c445e027cfa5893f99f772fc19dd3e4b28b2e/libs/community/langchain_community/retrievers/tavily_search_api.py#L18). | Standardize retriever integration docs | https://api.github.com/repos/langchain-ai/langchain/issues/24908/comments | 0 | 2024-07-31T22:14:31Z | 2024-07-31T22:17:02Z | https://github.com/langchain-ai/langchain/issues/24908 | 2,441,035,254 | 24,908 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
.
.
.
from langchain.agents import AgentExecutor, OpenAIFunctionsAgent, create_openai_functions_agent
from langchain.agents.agent_toolkits import create_retriever_tool
from langchain.agents.openai_functions_agent.agent_token_buffer_memory import AgentTokenBufferMemory
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain.schema.messages import SystemMessage
from langchain_core.prompts.chat import MessagesPlaceholder
from langchain_openai.chat_models import ChatOpenAI
.
.
.
@cl.on_chat_start
async def start():
memory_key = 'history'
prompt = OpenAIFunctionsAgent.create_prompt(
system_message=SystemMessage(content=Cu.get_system_prompt()),
extra_prompt_messages=[MessagesPlaceholder(variable_name=memory_key)],
)
cl.user_session.set('chain',
AgentExecutor(
agent=create_openai_functions_agent(__llm, __tools, prompt),
tools=__tools,
verbose=__verbose,
memory=AgentTokenBufferMemory(memory_key=memory_key, llm=__llm),
return_intermediate_steps=True
))
.
.
.
@cl.on_message
async def main(cl_message):
response = await cl.make_async(__process_message)(cl_message.content)
.
.
.
await cl.Message(
content=response['output'],
).send()
def __process_message(message):
.
.
.
else:
if __single_collection:
response = __get_response(message)
.
.
.
return response
def __get_response(message):
chain = cl.user_session.get('chain')
cb = cl.LangchainCallbackHandler(
stream_final_answer=True,
answer_prefix_tokens=['FINAL', 'ANSWER']
)
cb.answer_reached = True
return chain.invoke(
{'input': message},
callbacks=[cb]
)
```
### Error Message and Stack Trace (if applicable)
File "/aiui/app.py", line 148, in __process_message
response = __get_response(message)
^^^^^^^^^^^^^^^^^^^^^^^
File "/aiui/app.py", line 189, in __get_response
return chain.invoke(
^^^^^^^^^^^^^
File "/aiui/venv/lib/python3.12/site-packages/langchain/chains/base.py", line 166, in invoke
raise e
File "/aiui/venv/lib/python3.12/site-packages/langchain/chains/base.py", line 161, in invoke
final_outputs: Dict[str, Any] = self.prep_outputs(
^^^^^^^^^^^^^^^^^^
File "/aiui/venv/lib/python3.12/site-packages/langchain/chains/base.py", line 460, in prep_outputs
self.memory.save_context(inputs, outputs)
File "/aiui/venv/lib/python3.12/site-packages/langchain/agents/openai_functions_agent/agent_token_buffer_memory.py", line 97, in save_context
curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/aiui/venv/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 877, in get_num_tokens_from_messages
num_tokens += len(encoding.encode(value))
^^^^^^^^^^^^^^^^^^^^^^
File "/aiui/venv/lib/python3.12/site-packages/tiktoken/core.py", line 116, in encode
if match := _special_token_regex(disallowed_special).search(text):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: expected string or buffer
### Description
My application is a ChatBot built for The New School and is currently in the POC stage.
I started to get the error above after upgrading my Langchain libraries.
After debugging the issue, I found the problem in the following class/method
**langchain/libs/partners/openai/langchain_openai/chat_models/base.py**
**get_num_tokens_from_messages**
Changing the **line 877**
from
**num_tokens += len(encoding.encode(value))**
to
**num_tokens += len(encoding.encode(str(value)))**
fixes the issue
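For reference, the patched section then looks roughly like this:
```python
# langchain_openai/chat_models/base.py, get_num_tokens_from_messages (around line 877)
# Cast str(value) in case the message value is not a string
num_tokens += len(encoding.encode(str(value)))
```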
**Line 875** already has this comment,
**# Cast str(value) in case the message value is not a string**
but the cast itself is never applied in the code.
Please note that, above I replaced all irrelevant pieces of my code with
**.
.
.**
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.6.0: Wed Oct 4 21:26:23 PDT 2023; root:xnu-8796.141.3.701.17~4/RELEASE_ARM64_T6000
> Python Version: 3.12.4 (main, Jun 6 2024, 18:26:44) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.2.25
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.94
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.19
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| get_num_tokens_from_messages method in langchain_openai/chat_models/base.py generates "TypeError: expected string or buffer" error | https://api.github.com/repos/langchain-ai/langchain/issues/24901/comments | 0 | 2024-07-31T21:00:06Z | 2024-07-31T21:02:40Z | https://github.com/langchain-ai/langchain/issues/24901 | 2,440,946,660 | 24,901 |
[
"hwchase17",
"langchain"
] | To make our KV-store integrations as easy to use as possible we need to make sure the docs for them are thorough and standardized. There are two parts to this: updating the KV-store docstrings and updating the actual integration docs.
This needs to be done for each KV-store integration, ideally with one PR per KV-store.
Related to broader issues #21983 and #22005.
## Docstrings
Each KV-store class docstring should have the sections shown in the [Appendix](#appendix) below. The sections should have input and output code blocks when relevant.
To build a preview of the API docs for the package you're working on run (from root of repo):
```shell
make api_docs_clean; make api_docs_quick_preview API_PKG=openai
```
where `API_PKG=` should be the parent directory that houses the edited package (e.g. community, openai, anthropic, huggingface, together, mistralai, groq, fireworks, etc.). This should be quite fast for all the partner packages.
## Doc pages
Each KV-store [docs page](https://python.langchain.com/v0.2/docs/integrations/stores/) should follow [this template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/kv_store.ipynb).
Here is an example: https://python.langchain.com/v0.2/docs/integrations/stores/in_memory/
You can use the `langchain-cli` to quickly get started with a new chat model integration docs page (run from root of repo):
```shell
poetry run pip install -e libs/cli
poetry run langchain-cli integration create-doc --name "foo-bar" --name-class FooBar --component-type kv_store --destination-dir ./docs/docs/integrations/stores/
```
where `--name` is the integration package name without the "langchain-" prefix and `--name-class` is the class name without the "ByteStore" suffix. This will create a template doc with some autopopulated fields at docs/docs/integrations/stores/foo_bar.ipynb.
To build a preview of the docs you can run (from root):
```shell
make docs_clean
make docs_build
cd docs/build/output-new
yarn
yarn start
```
## Appendix
Expected sections for the KV-store class docstring.
```python
"""__ModuleName__ completion KV-store integration.
# TODO: Replace with relevant packages, env vars.
Setup:
Install ``__package_name__`` and set environment variable ``__MODULE_NAME___API_KEY``.
.. code-block:: bash
pip install -U __package_name__
export __MODULE_NAME___API_KEY="your-api-key"
# TODO: Populate with relevant params.
Key init args — client params:
api_key: Optional[str]
__ModuleName__ API key. If not passed in will be read from env var __MODULE_NAME___API_KEY.
See full list of supported init args and their descriptions in the params section.
# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python
from __module_name__ import __ModuleName__ByteStore
kv_store = __ModuleName__ByteStore(
# api_key="...",
# other params...
)
Set keys:
.. code-block:: python
kv_pairs = [
["key1", "value1"],
["key2", "value2"],
]
kv_store.mset(kv_pairs)
.. code-block:: python
Get keys:
.. code-block:: python
kv_store.mget(["key1", "key2"])
.. code-block:: python
# TODO: Example output.
Delete keys:
        .. code-block:: python
            kv_store.mdelete(["key1", "key2"])
        .. code-block:: python
""" # noqa: E501
``` | Standardize KV-Store Docs | https://api.github.com/repos/langchain-ai/langchain/issues/24888/comments | 0 | 2024-07-31T17:28:17Z | 2024-07-31T21:41:15Z | https://github.com/langchain-ai/langchain/issues/24888 | 2,440,545,637 | 24,888 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
### The user will read optional_variables as a way to make some of the
### variables in the template optional, as shown in this example. Here
### I intend to make greetings optional. Notice that greetings ends up listed both in
### input_variables, which are mandatory, and in optional_variables; see the output below.
### The user has to provide both 'user_input' and 'greetings' as keys in the input,
### otherwise the code breaks. partial_variables works as intended.
template = ChatPromptTemplate([
("system", "You are a helpful AI bot. Your name is {bot_name}."),
("human", "Hello, how are you doing?"),
("ai", "{greetings}, I'm doing well, thanks!"),
("human", "{user_input}"),
],
input_variables=['user_input'],
optional_variables=["greetings"],
partial_variables={"bot_name": "Monalisa"}
)
print(template)
final_input = {
"user_input": "What is your name?"
}
try:
prompt_value = template.invoke(final_input)
except Exception as e:
print(e)
input_variables=['greetings', 'user_input'] optional_variables=['greetings'] partial_variables={'bot_name': 'Monalisa'} messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=['bot_name'], template='You are a helpful AI bot. Your name is {bot_name}.')), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='Hello, how are you doing?')), AIMessagePromptTemplate(prompt=PromptTemplate(input_variables=['greetings'], template="{greetings}, I'm doing well, thanks!")), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['user_input'], template='{user_input}'))]
"Input to ChatPromptTemplate is missing variables {'greetings'}. Expected: ['greetings', 'user_input'] Received: ['user_input']"
### Error Message and Stack Trace (if applicable)
"Input to ChatPromptTemplate is missing variables {'greetings'}. Expected: ['greetings', 'user_input'] Received: ['user_input']"
### Description
As shown in the example code and output above.
[opt-vars-chat-template.pdf](https://github.com/user-attachments/files/16444392/opt-vars-chat-template.pdf)
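A workaround that behaves the way I expected (sketch): give the optional variable a default through partial(), since partial_variables does work:
```python
# Hypothetical workaround: pre-fill the "optional" variable so callers only pass user_input.
template_with_default = template.partial(greetings="Hello")
print(template_with_default.invoke({"user_input": "What is your name?"}))
```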
### System Info
langchain 0.2.11
langchain-community 0.2.10
langchain-core 0.2.25
langchain-experimental 0.0.63
langchain-openai 0.1.17
langchain-text-splitters 0.2.2 | optional_variables argument in ChatPromptTemplate is not effective | https://api.github.com/repos/langchain-ai/langchain/issues/24884/comments | 4 | 2024-07-31T16:01:07Z | 2024-08-05T23:57:46Z | https://github.com/langchain-ai/langchain/issues/24884 | 2,440,401,099 | 24,884 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.schema import HumanMessage
from langchain_openai import AzureChatOpenAI
llm = AzureChatOpenAI(
base_url="BASE_URL" + "/deployments/" + "gpt-4v",
openai_api_version = "2024-02-01",
api_key="API-KEY"
)
message = HumanMessage(content="""{
"role": "system",
"content": "You are a helpful assistant and can help with identifying or making assumptions about content in images."
},
{
"role": "user",
"content": [
{
"type": "text",
"text": "Describe this picture:"
},
{
"type": "image_url",
"image_url": {
"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/e/e3/Plains_Zebra_Equus_quagga.jpg/800px-Plains_Zebra_Equus_quagga.jpg"
}
}
]
}""")
print(llm.invoke([message]))
```
### Error Message and Stack Trace (if applicable)
This leads to the following error:
<b>
openai.BadRequestError: Error code: 400 - {'error': {'message': '1 validation error for Request\nbody -> logprobs\n extra fields not permitted (type=value_error.extra)', 'type': 'invalid_request_error', 'param': None, 'code': None}}
</b>
### Description
The error only occurs when using langchain-openai>=0.1.17 and can be attributed to the following PR: https://github.com/langchain-ai/langchain/pull/23691
In that change, the logprobs parameter is added to requests by default.
However, AzureOpenAI takes issue with this parameter as stated here: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/chatgpt?tabs=python-new&pivots=programming-language-chat-completions -> "If you set any of these parameters, you get an error."
(Using langchain-openai<=0.1.16 or even adding a # comment in front of the logprobs addition in the site-package file circumvents the issue)
### System Info
langchain==0.2.11
langchain-core==0.2.25
langchain-mistralai==0.1.11
langchain-openai==0.1.19
langchain-text-splitters==0.2.2 | langchain-openai>=0.1.17 adds logprobs parameter to gpt-4vision requests which leads to an error in AzureOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/24880/comments | 3 | 2024-07-31T13:38:07Z | 2024-08-09T13:32:43Z | https://github.com/langchain-ai/langchain/issues/24880 | 2,440,087,663 | 24,880 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
## Issue
To make our Embeddings integrations as easy to use as possible we need to make sure the docs for them are thorough and standardized. There are two parts to this: updating the embeddings docstrings and updating the actual integration docs.
This needs to be done for each embeddings integration, ideally with one PR per embedding provider.
Related to broader issues #21983 and #22005.
## Docstrings
Each Embeddings class docstring should have the sections shown in the [Appendix](#appendix) below. The sections should have input and output code blocks when relevant.
To build a preview of the API docs for the package you're working on run (from root of repo):
```bash
make api_docs_clean; make api_docs_quick_preview API_PKG=openai
```
where `API_PKG=` should be the parent directory that houses the edited package (e.g. community, openai, anthropic, huggingface, together, mistralai, groq, fireworks, etc.). This should be quite fast for all the partner packages.
## Doc pages
Each Embeddings [docs page](https://python.langchain.com/v0.2/docs/integrations/text_embedding/) should follow [this template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/text_embedding.ipynb).
- [ ] TODO(Erick): populate a complete example
You can use the `langchain-cli` to quickly get started with a new chat model integration docs page (run from root of repo):
```bash
poetry run pip install -e libs/cli
poetry run langchain-cli integration create-doc --name "foo-bar" --name-class FooBar --component-type Embeddings --destination-dir ./docs/docs/integrations/text_embedding/
```
where `--name` is the integration package name without the "langchain-" prefix and `--name-class` is the class name without the "Embedding" prefix. This will create a template doc with some autopopulated fields at docs/docs/integrations/text_embedding/foo_bar.ipynb.
To build a preview of the docs you can run (from root):
```bash
make docs_clean
make docs_build
cd docs/build/output-new
yarn
yarn start
```
## Appendix
Expected sections for the Embedding class docstring.
```python
"""__ModuleName__ embedding model integration.
# TODO: Replace with relevant packages, env vars.
Setup:
Install ``__package_name__`` and set environment variable ``__MODULE_NAME___API_KEY``.
.. code-block:: bash
pip install -U __package_name__
export __MODULE_NAME___API_KEY="your-api-key"
# TODO: Populate with relevant params.
Key init args — completion params:
model: str
Name of __ModuleName__ model to use.
See full list of supported init args and their descriptions in the params section.
# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python
from __module_name__ import __ModuleName__Embeddings
embed = __ModuleName__Embeddings(
model="...",
# api_key="...",
# other params...
)
Embed single text:
.. code-block:: python
input_text = "The meaning of life is 42"
embed.embed_query(input_text)
.. code-block:: python
# TODO: Example output.
# TODO: Delete if token-level streaming isn't supported.
Embed multiple text:
.. code-block:: python
input_texts = ["Document 1...", "Document 2..."]
embed.embed_documents(input_texts)
.. code-block:: python
# TODO: Example output.
# TODO: Delete if native async isn't supported.
Async:
.. code-block:: python
await embed.aembed_query(input_text)
# multiple:
# await embed.aembed_documents(input_texts)
.. code-block:: python
# TODO: Example output.
"""
``` | Standardize Embeddings Docs | https://api.github.com/repos/langchain-ai/langchain/issues/24856/comments | 2 | 2024-07-31T02:16:56Z | 2024-07-31T22:05:16Z | https://github.com/langchain-ai/langchain/issues/24856 | 2,438,970,707 | 24,856 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import os
from langchain_openai import AzureChatOpenAI
from langchain.callbacks import get_openai_callback
from langchain_core.tracers.context import collect_runs
from dotenv import load_dotenv

load_dotenv()

# The model construction was omitted from the original snippet; a typical (assumed) setup:
model = AzureChatOpenAI(azure_deployment="gpt-4", api_version="2024-02-01")

with get_openai_callback() as cb:
    result = model.invoke(["Hi"])
    print(result.response_metadata['model_name'])
    print("\n")

with collect_runs() as cb:
    result = model.invoke(["Hi"])
    print(result.response_metadata['model_name'], "\n")
    print(cb.traced_runs[0].extra['invocation_params'])
output
```
gpt-4-turbo-2024-04-09
gpt-4-turbo-2024-04-09
{'model': 'gpt-3.5-turbo', 'azure_deployment': 'gpt-4', 'model_name': 'gpt-3.5-turbo', 'stream': False, 'n': 1, 'temperature': 0.0, '_type': 'azure-openai-chat', 'stop': None}
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Following is the screenshot of the issue
![image](https://github.com/user-attachments/assets/af6f07ae-c802-4df3-b20b-145fc12db635)
### System Info
langchain = "^0.2.5"
langchain-community = "^0.2.5"
langchain-openai = "^0.1.9" | Error in trace, Trace for AzureChatOpenAI with gpt-4-turbo-2024-04-09 is not correct | https://api.github.com/repos/langchain-ai/langchain/issues/24838/comments | 2 | 2024-07-30T20:09:04Z | 2024-07-31T15:19:06Z | https://github.com/langchain-ai/langchain/issues/24838 | 2,438,595,293 | 24,838 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def chat_with_search_engine_and_knowledgebase(self, history: list[dict], message: str):
    history.append({
        "role": "user",
        "content": message,
    })
    self.logger.info(f"Received a browser chat request, prompt: {message}")
    chat_completion = self.client.chat.completions.create(
        messages=history,
        model=MODEL_NAME,
        stream=False,
        tools=['search_internet', 'search_local_knowledgebase'],
        timeout=MODEL_OUT_TIMEOUT,
    )
    response = chat_completion.choices[0].message.content
    self.logger.info(f"Model answer: {response}")
    return response

history = []
# Prompt asking for 5 movie recommendations as a JSON list of {"title", "reason"} objects.
message = '请根据知识库,推荐5个我可能喜欢的电影,给出我一个json格式的list,每个元素里面包含一个title和一个reason,title是电影的名字,reason是推荐的原因,推荐原因用一句话说明即可,不要有额外的内容。例如你应该输出:[{"title":"标题","reason":"原因"}]'
```
### Error Message and Stack Trace (if applicable)
Agent stopped due to iteration limit or time limit.
### Description
After a single user message, the model keeps calling the same agent repeatedly until it fails with "Agent stopped due to iteration limit or time limit."
The prompt used was: "Based on the knowledge base, recommend 5 movies I might like and give me a JSON-formatted list; every element should contain a title and a reason, where the title is the movie name and the reason, in a single sentence, explains the recommendation, with no extra content. For example, you should output: [{"title":"...","reason":"..."}]"
![image](https://github.com/user-attachments/assets/b30b8bb4-ffe5-43c8-8277-fddcef19ab11)
![image](https://github.com/user-attachments/assets/adc92d32-203c-4d94-8772-5eaca0ef72fb)
### System Info
langchain-chatchat:0.3.1.3
platform: linux
python:3.11.7 | With certain prompts the model keeps calling the same agent repeatedly | https://api.github.com/repos/langchain-ai/langchain/issues/24828/comments | 0 | 2024-07-30T17:25:28Z | 2024-07-30T17:27:59Z | https://github.com/langchain-ai/langchain/issues/24828 | 2,438,327,384 | 24,828
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
## Issue
To make our toolkit integrations as easy to use as possible we need to make sure the docs for them are thorough and standardized. There are two parts to this: updating the toolkit docstrings and updating the actual integration docs.
This needs to be done for each toolkit integration, ideally with one PR per toolkit.
Related to broader issues #21983 and #22005.
## Docstrings
Each toolkit class docstring should have the sections shown in the [Appendix](#appendix) below. The sections should have input and output code blocks when relevant.
To build a preview of the API docs for the package you're working on run (from root of repo):
```bash
make api_docs_clean; make api_docs_quick_preview API_PKG=community
```
where `API_PKG=` should be the parent directory that houses the edited package (e.g. "community" for `langchain-community`).
## Doc pages
Each toolkit [docs page](https://python.langchain.com/v0.2/docs/integrations/toolkits/) should follow [this template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/toolkits.ipynb).
See example [here](https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/toolkits/sql_database.ipynb).
You can use the `langchain-cli` to quickly get started with a new integration docs page (run from root of repo):
```bash
poetry run pip install -e libs/cli
poetry run langchain-cli integration create-doc --name "foo-bar" --name-class FooBar --component-type toolkit --destination-dir ./docs/docs/integrations/toolkits/
```
where `--name` is the integration package name without the "langchain-" prefix and `--name-class` is the class name without the "Toolkit" postfix. This will create a template doc with some autopopulated fields at docs/docs/integrations/toolkits/foo_bar.ipynb.
To build a preview of the docs you can run (from root):
```bash
make docs_clean
make docs_build
cd docs/build/output-new
yarn
yarn start
```
## Appendix
Expected sections for the toolkit class docstring.
```python
"""__ModuleName__ toolkit.
# TODO: Replace with relevant packages, env vars, etc.
Setup:
Install ``__package_name__`` and set environment variable ``__MODULE_NAME___API_KEY``.
.. code-block:: bash
pip install -U __package_name__
export __MODULE_NAME___API_KEY="your-api-key"
# TODO: Populate with relevant params.
Key init args:
arg 1: type
description
arg 2: type
description
# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python
from __package_name__ import __ModuleName__Toolkit
toolkit = __ModuleName__Toolkit(
# ...
)
Tools:
.. code-block:: python
toolkit.get_tools()
.. code-block:: python
# TODO: Example output.
Use within an agent:
.. code-block:: python
from langgraph.prebuilt import create_react_agent
agent_executor = create_react_agent(llm, tools)
example_query = "..."
events = agent_executor.stream(
{"messages": [("user", example_query)]},
stream_mode="values",
)
for event in events:
event["messages"][-1].pretty_print()
.. code-block:: python
# TODO: Example output.
"""
``` | Standardize Toolkit docs | https://api.github.com/repos/langchain-ai/langchain/issues/24820/comments | 0 | 2024-07-30T14:26:32Z | 2024-08-06T18:19:41Z | https://github.com/langchain-ai/langchain/issues/24820 | 2,437,982,131 | 24,820 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
import asyncio
from enum import Enum
from dotenv import load_dotenv
from langchain_core.output_parsers import PydanticToolsParser
from langchain_ollama import ChatOllama
from langchain_openai import ChatOpenAI
from pydantic.v1 import BaseModel, Field
load_dotenv()
class DateEnum(str, Enum):
first_day = "2024-10-10 10:00:00"
second_day = "2024-10-11 14:00:00"
third_day = "2024-10-12 14:00:00"
class SelectItem(BaseModel):
"""Confirm the user's choice based on the user's answer."""
item: DateEnum = Field(..., description="Select a date based on user responses")
tools = [SelectItem]
ollama_llm = ChatOllama(model="llama3.1:8b").bind_tools(tools)
openai_llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)
parser = PydanticToolsParser(tools=tools)
chain = ollama_llm | parser
fall_back_chain = openai_llm | parser
with_fallback_chain = chain.with_fallbacks([fall_back_chain])
messages = [
("ai", f"Which day is most convenient for you in {list(DateEnum)}?"),
("human", "30"),
]
async def main():
async for event in with_fallback_chain.astream_events(messages, version="v2"):
print(event) # It will not call fall_back
print("-" * 20)
print(await with_fallback_chain.ainvoke(messages)) # It will call fall_back
asyncio.run(main())
```
### Error Message and Stack Trace (if applicable)
![image](https://github.com/user-attachments/assets/9a1cbca3-be3e-44b4-a379-03b7dc7a5da0)
![image](https://github.com/user-attachments/assets/d861fd0e-1929-4323-bf01-df4375a6e1f3)
### Description
ChatOllama won't use with_fallbacks when I use astream_events.
But it will use with_fallbacks when I use ainvoke.
My goal is to know which model produced this output.
When I attach PydanticToolsParser after the model, I can't tell which model generated the output (it is hidden in the AIMessage of the intermediate model output).
So I wanted to pull the intermediate result out of astream_events to determine which model generated it.
Then I found that ChatOllama seems unable to trigger the fallback under astream_events. Is there a better solution?
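For what it's worth, one idea for attributing the output (a sketch on my side; it assumes the `astream_events` metadata surfaces the configured run names and tags, and it does not address the fallback bug itself) is to tag each branch:

```py
# Sketch: name/tag each branch so the streamed events reveal which model produced the output.
chain = (ollama_llm | parser).with_config(run_name="ollama_branch", tags=["ollama"])
fall_back_chain = (openai_llm | parser).with_config(run_name="openai_branch", tags=["openai"])
with_fallback_chain = chain.with_fallbacks([fall_back_chain])

async for event in with_fallback_chain.astream_events(messages, version="v2"):
    print(event["event"], event["name"], event.get("tags"))
```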
### System Info
langchain==0.2.11
langchain-core==0.2.24
langchain-ollama==0.1.0
langchain-openai==0.1.19
platform linux
python version = 3.10.12
| ChatOllama won't use with_fallbacks when I use astream_events. | https://api.github.com/repos/langchain-ai/langchain/issues/24816/comments | 0 | 2024-07-30T12:50:56Z | 2024-08-01T22:47:36Z | https://github.com/langchain-ai/langchain/issues/24816 | 2,437,766,390 | 24,816 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
## Issue
To make our LLM integrations as easy to use as possible we need to make sure the docs for them are thorough and standardized. There are two parts to this: updating the llm docstrings and updating the actual integration docs.
This needs to be done for each LLM integration, ideally with one PR per LLM.
Related to broader issues #21983 and #22005.
## Docstrings
Each LLM class docstring should have the sections shown in the [Appendix](#appendix) below. The sections should have input and output code blocks when relevant.
To build a preview of the API docs for the package you're working on, run (from the root of the repo):
```bash
make api_docs_clean; make api_docs_quick_preview API_PKG=openai
```
where `API_PKG=` should be the parent directory that houses the edited package (e.g. community, openai, anthropic, huggingface, together, mistralai, groq, fireworks, etc.). This should be quite fast for all the partner packages.
## Doc pages
Each LLM [docs page](https://python.langchain.com/v0.2/docs/integrations/llms/) should follow [this template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/llms.ipynb).
- [ ] TODO(Erick): populate a complete example
You can use the `langchain-cli` to quickly get started with a new LLM integration docs page (run from root of repo):
```bash
poetry run pip install -e libs/cli
poetry run langchain-cli integration create-doc --name "foo-bar" --name-class FooBar --component-type LLM --destination-dir ./docs/docs/integrations/llms/
```
where `--name` is the integration package name without the "langchain-" prefix and `--name-class` is the class name without the "LLM" suffix. This will create a template doc with some autopopulated fields at docs/docs/integrations/llms/foo_bar.ipynb.
To build a preview of the docs you can run (from root):
```bash
make docs_clean
make docs_build
cd docs/build/output-new
yarn
yarn start
```
## Appendix
Expected sections for the LLM class docstring.
```python
"""__ModuleName__ completion model integration.
# TODO: Replace with relevant packages, env vars.
Setup:
Install ``__package_name__`` and set environment variable ``__MODULE_NAME___API_KEY``.
.. code-block:: bash
pip install -U __package_name__
export __MODULE_NAME___API_KEY="your-api-key"
# TODO: Populate with relevant params.
Key init args — completion params:
model: str
Name of __ModuleName__ model to use.
temperature: float
Sampling temperature.
max_tokens: Optional[int]
Max number of tokens to generate.
# TODO: Populate with relevant params.
Key init args — client params:
timeout: Optional[float]
Timeout for requests.
max_retries: int
Max number of retries.
api_key: Optional[str]
__ModuleName__ API key. If not passed in will be read from env var __MODULE_NAME___API_KEY.
See full list of supported init args and their descriptions in the params section.
# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python
from __module_name__ import __ModuleName__LLM
llm = __ModuleName__LLM(
model="...",
temperature=0,
max_tokens=None,
timeout=None,
max_retries=2,
# api_key="...",
# other params...
)
Invoke:
.. code-block:: python
input_text = "The meaning of life is "
llm.invoke(input_text)
.. code-block:: python
# TODO: Example output.
# TODO: Delete if token-level streaming isn't supported.
Stream:
.. code-block:: python
for chunk in llm.stream(input_text):
print(chunk)
.. code-block:: python
# TODO: Example output.
.. code-block:: python
''.join(llm.stream(input_text))
.. code-block:: python
# TODO: Example output.
# TODO: Delete if native async isn't supported.
Async:
.. code-block:: python
await llm.ainvoke(input_text)
# stream:
# async for chunk in (await llm.astream(input_text))
# batch:
# await llm.abatch([input_text])
.. code-block:: python
# TODO: Example output.
""" # noqa: E501
``` | Standardize LLM Docs | https://api.github.com/repos/langchain-ai/langchain/issues/24803/comments | 0 | 2024-07-30T00:48:34Z | 2024-07-31T16:55:59Z | https://github.com/langchain-ai/langchain/issues/24803 | 2,436,660,709 | 24,803 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
## Issue
To make our vector store integrations as easy to use as possible we need to make sure the docs for them are thorough and standardized. There are two parts to this: updating the vector store docstrings and updating the actual integration docs.
This needs to be done for each VectorStore integration, ideally with one PR per VectorStore.
Related to broader issues #21983 and #22005.
## Docstrings
Each VectorStore class docstring should have the sections shown in the [Appendix](#appendix) below. The sections should have input and output code blocks when relevant.
To build a preview of the API docs for the package you're working on, run (from the root of the repo):
```bash
make api_docs_clean; make api_docs_quick_preview API_PKG=openai
```
where `API_PKG=` should be the parent directory that houses the edited package (e.g. community, openai, anthropic, huggingface, together, mistralai, groq, fireworks, etc.). This should be quite fast for all the partner packages.
## Doc pages
Each VectorStore [docs page](https://python.langchain.com/v0.2/docs/integrations/vectorstores/) should follow [this template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/chat.ipynb). See [ChatOpenAI](https://python.langchain.com/v0.2/docs/integrations/chat/openai/) for an example.
You can use the `langchain-cli` to quickly get started with a new vector store integration docs page (run from root of repo):
```bash
poetry run pip install -e libs/cli
poetry run langchain-cli integration create-doc --name "foo-bar" --name-class FooBar --component-type VectorStore --destination-dir ./docs/docs/integrations/vectorstores/
```
where `--name` is the integration package name without the "langchain-" prefix and `--name-class` is the class name without the "VectorStore" suffix. This will create a template doc with some autopopulated fields at docs/docs/integrations/vectorstores/foo_bar.ipynb.
To build a preview of the docs you can run (from root):
```bash
make docs_clean
make docs_build
cd docs/build/output-new
yarn
yarn start
```
## Appendix
Expected sections for the VectorStore class docstring.
```python
"""__ModuleName__ vector store integration.
Setup:
...
Key init args - indexing params:
...
Key init args - client params:
...
See full list of supported init args and their descriptions in the params section.
Instantiate:
...
Add Documents:
...
Update Documents:
...
Delete Documents:
...
Search:
...
Search with score:
...
Use as Retriever:
...
""" # noqa: E501
``` | Standardize vector store docs | https://api.github.com/repos/langchain-ai/langchain/issues/24800/comments | 0 | 2024-07-30T00:10:32Z | 2024-08-02T15:35:20Z | https://github.com/langchain-ai/langchain/issues/24800 | 2,436,626,924 | 24,800 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import bs4
import os

from langchain_community.document_loaders import WebBaseLoader, UnstructuredURLLoader, DirectoryLoader, TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter, CharacterTextSplitter

# NOTE: `embeddings`, `model`, and `vectorstore_slect` are defined elsewhere in my script.
# Documents are loaded from a local directory (the web loaders imported above are unused).
# directory_path = "/opt/aiworkspase/langchain/HOMEwork/zjb/test3"
directory_path = "/opt/aiworkspase/langchain/HOMEwork/zjb/articles"
docs = []
chunk_size = 500
chunk_overlap = 50

# Load the full content of one file and split it into chunks
def load_text_from_path(file_path):
    """
    Load text content from the given file path using TextLoader.
    :param file_path: The path to the text file to be loaded.
    :return: The chunks of the text file as a list of Documents.
    """
    loader = TextLoader(file_path)
    document = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
    # Split the documents into chunks using the text_splitter
    doc_one = text_splitter.split_documents(document)
    return doc_one

# Collect all files under a directory and its subdirectories
def get_all_files_in_directory(directory_path):
    """
    Get all files in the given directory and its subdirectories.
    :param directory_path: The path to the directory.
    :return: A list of paths to all files in the directory.
    """
    all_files = []
    for root, dirs, files in os.walk(directory_path):
        for file in files:
            file_path = os.path.join(root, file)
            all_files.append(file_path)
    return all_files

# Merge the chunk lists of every file in the directory
def process_files(directory_path):
    """
    Process each file found in the directory and collect its chunks.
    :param directory_path: The path to the directory.
    """
    docs_temp = []
    all_files = get_all_files_in_directory(directory_path)
    for file_path in all_files:
        # Process each file path and collect its list of chunks
        doc = load_text_from_path(file_path)
        docs_temp.extend(doc)
    return docs_temp

docs.extend(process_files(directory_path))

# Second-stage preprocessing before loading the files into the vector store
from langchain_milvus import Milvus, Zilliz
import time

def split_list(arr, n):
    """
    Regroup the array into sub-arrays of n items each and collect them in one big array.
    :param arr: the original array
    :param n: the number of items per sub-array
    :return: a list containing the sub-arrays
    """
    return [arr[i:i + n] for i in range(0, len(arr), n)]

doc_4 = split_list(docs, 4)

# Once the chunks are prepared, load them into Milvus in a loop
start = 0
m = 0
for doc_4_item in doc_4:
    if m > start:
        vectorstore = Milvus.from_documents(  # or Zilliz.from_documents
            documents=doc_4_item,
            collection_name="cyol_zjb_1",
            embedding=embeddings,
            connection_args={
                "uri": "/opt/aiworkspase/langchain/milvus_zjb_500_50_0729.db",
            },
            drop_old=False,  # Drop the old Milvus collection if it exists
        )
        time.sleep(1)
    m = m + 1  # m = 3084 on the previous run

# Run the business query
from langchain_core.runnables import RunnablePassthrough
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

# llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
retriever = vectorstore_slect.as_retriever()
# Prompt (in Chinese): answer the question only from the given context, say "I don't know"
# rather than making up an answer, use at most three sentences, and always end with
# "Thank you for your question!".
template = """使用后面给的内容回答提出的问题。
如果给的内容里不知道答案,就说你不知道,不要试图编造答案。
最多使用三句话,并尽可能简洁地回答。
总是在答案的末尾说“谢谢你的提问!”。
给的内容:{context}
问题: {question}
有用的答案:"""
# rag_prompt = PromptTemplate.from_template(template)
rag_prompt = PromptTemplate(
    template=template, input_variables=["context", "question"]
)
# Initialize the output parser that converts the model output into a string
output_parser = StrOutputParser()
rag_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | rag_prompt
    | model
    | output_parser
)
# The queries below are person names (宋宝颖, 孙家栋, 尹学芸)
# print(rag_chain.invoke("'宋宝颖是谁?多介绍一下他。"))
# print(rag_chain.invoke("请告诉我孙家栋是干什么的?"))
# print(rag_chain.invoke("'尹学芸是谁?多介绍一下他。"))
print(rag_chain.invoke("'尹学芸"))
```
### Error Message and Stack Trace (if applicable)
With fewer than 100 chunks in the collection, the query returns the relevant documents. With more than 40,000 chunks it does not: the retrieved results have almost no similarity to the question.
query = "孙家栋"
vectorstore_slect.similarity_search(query, k=5)
On the large collection, k has to be >= 50 before the relevant documents show up.
### Description
Could this be caused by how the embedding model was configured when the data was ingested?
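One sanity check I can think of (a sketch that reuses the variables above; not a confirmed diagnosis) is to print the raw similarity scores on the large collection and see where the relevant chunks rank:

```python
# Sketch: inspect raw scores to see whether relevant chunks exist but rank low.
query = "孙家栋"
for doc, score in vectorstore_slect.similarity_search_with_score(query, k=50):
    print(round(score, 4), doc.page_content[:60])
```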
### System Info
from langchain_core.runnables import RunnablePassthrough
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser | Similarity search returns no useful results when using Milvus (vector similarity search with Milvus returns irrelevant data) | https://api.github.com/repos/langchain-ai/langchain/issues/24784/comments | 0 | 2024-07-29T14:59:33Z | 2024-07-29T15:02:15Z | https://github.com/langchain-ai/langchain/issues/24784 | 2,435,662,561 | 24,784
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/output_parser_retry/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
from langchain.output_parsers import RetryOutputParser
template = """Based on the user question, provide an name and the gender.
{format_instructions}
Question: {query}
Response:"""
from langchain.output_parsers import YamlOutputParser
class details(BaseModel):
name: str = Field(description="name of the person")
gender: str = Field(description="Gender of the person")
prompt = PromptTemplate(template=template,input_variables=['query'],partial_variables={"format_instructions": parser.get_format_instructions()})
parser = PydanticOutputParser(pydantic_object=details)
retry_parser = RetryOutputParser.from_llm(parser=parser, llm=OpenAI(temperature=0),max_retries=1)
from langchain_core.runnables import RunnableLambda, RunnableParallel
completion_chain = prompt | llm
main_chain = RunnableParallel(
completion=completion_chain, prompt_value=prompt
) | RunnableLambda(lambda x: retry_parser.parse_with_prompt(**x))
result = main_chain.invoke({"query":"They called him alice"})
reference link: https://python.langchain.com/v0.2/docs/how_to/output_parser_retry/
error:
ValidationError: 1 validation error for Generation
text
str type expected (type=type_error.str)
### Idea or request for content:
Retry-output parser throwing some error. How the bad response is extracted from the error message. Instead of manual bad response input, it should passes from error message or is there any way to get that from the error message | DOC: <Issue related to /v0.2/docs/how_to/output_parser_retry/> | https://api.github.com/repos/langchain-ai/langchain/issues/24778/comments | 1 | 2024-07-29T13:38:09Z | 2024-07-29T20:01:19Z | https://github.com/langchain-ai/langchain/issues/24778 | 2,435,464,336 | 24,778 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
``` python
vectorstore = DocumentDBVectorSearch.from_connection_string(
connection_string=connection_string,
namespace=namespace,
embedding=embeddings,
index_name=INDEX_NAME,
)
# calling similarity_search without a filter leaves filter at its default value of None, which triggers the error below
docs = vectorstore.similarity_search(query=keyword)
```
### Error Message and Stack Trace (if applicable)
Error message: the match filter must be an expression in an object, full error: {'ok': 0.0, 'code': 15959, 'errmsg': 'the match filter must be an expression in an object', 'operationTime': Timestamp(1722245629, 1)}.
### Description
I am trying to use AWS DocumentDB as a vector database. When I call the similarity_search method on a DocumentDBVectorSearch instance without a filter (query text only), DocumentDB returns an error like: "the match filter must be an expression in an object". This happens because a None $match expression is not supported; the $match stage has to be removed from the aggregation pipeline when filter is None.
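A minimal sketch of the change this implies (the stage layout and field names below are illustrative assumptions, not the actual internals of DocumentDBVectorSearch):

```python
# Sketch: append the $match stage only when a filter is provided,
# instead of emitting {"$match": None}. Field names are placeholders.
def build_pipeline(query_vector, k, filter=None):
    pipeline = [
        {
            "$search": {
                "vectorSearch": {
                    "vector": query_vector,
                    "path": "vectorContent",
                    "similarity": "cosine",
                    "k": k,
                }
            }
        }
    ]
    if filter is not None:
        pipeline.append({"$match": filter})
    return pipeline
```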
### System Info
langchain==0.2.11
langchain-aws==0.1.6
langchain-cohere==0.1.9
langchain-community==0.2.10
langchain-core==0.2.23
langchain-experimental==0.0.63
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
platform=mac
python=3.12.4 | AWS DocumentDB similarity search does not work when no filter is used. Error msg: "the match filter must be an expression in an object" | https://api.github.com/repos/langchain-ai/langchain/issues/24775/comments | 1 | 2024-07-29T10:11:34Z | 2024-07-29T15:53:43Z | https://github.com/langchain-ai/langchain/issues/24775 | 2,435,009,768 | 24,775 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders import UnstructuredImageLoader

loader = UnstructuredImageLoader(
    "https://photo.16pic.com/00/53/98/16pic_5398252_b.jpg", mode="elements"
)
docs = loader.load()
for doc in docs:
    print(doc)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/connection.py", line 203, in _new_conn
sock = connection.create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
OSError: [Errno 101] Network is unreachable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/connectionpool.py", line 791, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/connectionpool.py", line 492, in _make_request
raise new_e
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/connectionpool.py", line 468, in _make_request
self._validate_conn(conn)
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/connectionpool.py", line 1097, in _validate_conn
conn.connect()
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/connection.py", line 611, in connect
self.sock = sock = self._new_conn()
^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/connection.py", line 218, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x77ea228afb50>: Failed to establish a new connection: [Errno 101] Network is unreachable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/requests/adapters.py", line 667, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/connectionpool.py", line 845, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /unstructuredio/yolo_x_layout/resolve/main/yolox_l0.05.onnx (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x77ea228afb50>: Failed to establish a new connection: [Errno 101] Network is unreachable'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1722, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(url=url, proxies=proxies, timeout=etag_timeout, headers=headers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1645, in get_hf_file_metadata
r = _request_wrapper(
^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 372, in _request_wrapper
response = _request_wrapper(
^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 395, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 66, in send
return super().send(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/requests/adapters.py", line 700, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /unstructuredio/yolo_x_layout/resolve/main/yolox_l0.05.onnx (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x77ea228afb50>: Failed to establish a new connection: [Errno 101] Network is unreachable'))"), '(Request ID: 9e333dcd-b659-4be2-ad0b-cfb63b2cc7f9)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/jh/liuchao_project/tosql2.py", line 15, in <module>
docs = loader.load()
^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/langchain_core/document_loaders/base.py", line 30, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/langchain_community/document_loaders/unstructured.py", line 89, in lazy_load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/langchain_community/document_loaders/image.py", line 33, in _get_elements
return partition_image(filename=self.file_path, **self.unstructured_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured/documents/elements.py", line 593, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured/file_utils/filetype.py", line 385, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured/chunking/dispatch.py", line 74, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured/partition/image.py", line 103, in partition_image
return partition_pdf_or_image(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured/partition/pdf.py", line 310, in partition_pdf_or_image
elements = _partition_pdf_or_image_local(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured/utils.py", line 249, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured/partition/pdf.py", line 564, in _partition_pdf_or_image_local
inferred_document_layout = process_file_with_model(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured_inference/inference/layout.py", line 353, in process_file_with_model
model = get_model(model_name, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured_inference/models/base.py", line 79, in get_model
model.initialize(**initialize_params)
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured_inference/utils.py", line 47, in __getitem__
value = evaluate(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured_inference/utils.py", line 195, in download_if_needed_and_get_local_path
return hf_hub_download(path_or_repo, filename, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1221, in hf_hub_download
return _hf_hub_download_to_cache_dir(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1325, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1826, in _raise_on_head_call_error
raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
### Description
I just load the image and print the resulting documents, but it raises an error:
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
### System Info
langchain=0.2.9
python 3.11 | about image load bug | https://api.github.com/repos/langchain-ai/langchain/issues/24774/comments | 0 | 2024-07-29T09:59:13Z | 2024-07-29T10:01:49Z | https://github.com/langchain-ai/langchain/issues/24774 | 2,434,983,670 | 24,774 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI
from httpx import AsyncClient as HttpxAsyncClient

session = HttpxAsyncClient(verify=False)

model = ChatOpenAI(
    streaming=stream,
    verbose=True,
    openai_api_key=base_key,
    openai_api_base=base_url,
    http_async_client=session,
    model_name=llm_name,
    temperature=temperature,
    max_tokens=max_tokens,
    stop=["\n"],
    prompt_template=prompt_comment_template,
)
```
### Error Message and Stack Trace (if applicable)
httpx.ConnectError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:1007)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/starlette/responses.py", line 260, in wrap
    await func()
  File "/usr/local/lib/python3.10/site-packages/starlette/responses.py", line 249, in stream_response
    async for chunk in self.body_iterator
  File "/home/chatTestcenter/service/code_explain.py", line 101, in code_chat
    responses = chain.batch(lst, config={"max_concurrency": 3})
  File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 647, in batch
    return cast(List[Output], list(executor.map(invoke, inputs, configs)))
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
    yield _result_or_cancel(fs.pop())
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
    return fut.result(timeout)
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 458, in result
### Description
I am trying to disable SSL certificate verification by passing the `http_async_client` param an instance of the httpx.AsyncClient class with verify=False.
But even after that, SSL verification still runs and fails.
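One guess on my side (not verified): `chain.batch` goes through the synchronous code path, so a synchronous `httpx.Client(verify=False)` may also need to be supplied via `http_client`. A sketch:

```python
# Sketch (assumption, not verified): pass both sync and async httpx clients with verify=False.
from httpx import AsyncClient, Client
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    openai_api_key=base_key,
    openai_api_base=base_url,
    model_name=llm_name,
    http_client=Client(verify=False),
    http_async_client=AsyncClient(verify=False),
)
```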
### System Info
langchain: 0.2.11
langchain-community: 0.2.10
langchain-core: 0.2.24
langchain-openai: 0.1.19
langchain-text-splitter: 0.2.2
langsmith: 0.1.93
openai: 1.37.1
python: 3.10 | langchain_openai.ChatOpenAI: client attribute not recognized | https://api.github.com/repos/langchain-ai/langchain/issues/24770/comments | 0 | 2024-07-29T08:20:25Z | 2024-07-29T19:38:51Z | https://github.com/langchain-ai/langchain/issues/24770 | 2,434,768,961 | 24,770 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# model = SentenceTransformer(config.EMBEDDING_MODEL_NAME)
KG_vector_store = Neo4jVector.from_existing_index(
    embedding=SentenceTransformerEmbeddings(model_name=config.EMBEDDING_MODEL_NAME),
    url=NEO4J_URI,
    username=NEO4J_USERNAME,
    password=NEO4J_PASSWORD,
    database="neo4j",
    index_name=VECTOR_INDEX_NAME,
    text_node_property=VECTOR_SOURCE_PROPERTY,
    retrieval_query=retrieval_query_extra_text,
)

# Create a retriever from the vector store
retriever_extra_text = KG_vector_store.as_retriever(
    search_type="mmr",
    search_kwargs={'k': 6, 'fetch_k': 50}  # ,'lambda_mult': 0.25
)
```
![image](https://github.com/user-attachments/assets/ea123fb3-7505-4659-ada0-e9a60da80ed1)
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
[<ipython-input-8-569f7332a067>](https://localhost:8080/#) in <cell line: 1>()
----> 1 rag.query("Please describe in detail what is the evidence report about?")['answer']
8 frames
[/content/RAG/KG_for_RAG/src/execute_rag.py](https://localhost:8080/#) in query(self, query)
318 self.init_graph_for_query()
319
--> 320 answer = self.QA_CHAIN.invoke(
321 {"question": query},
322 return_only_outputs=True,
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
164 except BaseException as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
168
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
154 self._validate_inputs(inputs)
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
[/usr/local/lib/python3.10/dist-packages/langchain/chains/qa_with_sources/base.py](https://localhost:8080/#) in _call(self, inputs, run_manager)
150 )
151 if accepts_run_manager:
--> 152 docs = self._get_docs(inputs, run_manager=_run_manager)
153 else:
154 docs = self._get_docs(inputs) # type: ignore[call-arg]
[/usr/local/lib/python3.10/dist-packages/langchain/chains/qa_with_sources/retrieval.py](https://localhost:8080/#) in _get_docs(self, inputs, run_manager)
47 ) -> List[Document]:
48 question = inputs[self.question_key]
---> 49 docs = self.retriever.invoke(
50 question, config={"callbacks": run_manager.get_child()}
51 )
[/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
219 except Exception as e:
220 run_manager.on_retriever_error(e)
--> 221 raise e
222 else:
223 run_manager.on_retriever_end(
[/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
212 _kwargs = kwargs if self._expects_other_args else {}
213 if self._new_arg_supported:
--> 214 result = self._get_relevant_documents(
215 input, run_manager=run_manager, **_kwargs
216 )
[/usr/local/lib/python3.10/dist-packages/langchain_core/vectorstores/base.py](https://localhost:8080/#) in _get_relevant_documents(self, query, run_manager)
1255 docs = [doc for doc, _ in docs_and_similarities]
1256 elif self.search_type == "mmr":
-> 1257 docs = self.vectorstore.max_marginal_relevance_search(
1258 query, **self.search_kwargs
1259 )
[/usr/local/lib/python3.10/dist-packages/langchain_core/vectorstores/base.py](https://localhost:8080/#) in max_marginal_relevance_search(self, query, k, fetch_k, lambda_mult, **kwargs)
929 List of Documents selected by maximal marginal relevance.
930 """
--> 931 raise NotImplementedError
932
933 async def amax_marginal_relevance_search(
NotImplementedError:
### Description
MMR raises NotImplementedError in Neo4jVector, despite the documentation saying otherwise.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
> Python Version: 3.10.12 (main, Mar 22 2024, 16:50:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.24
> langchain: 0.2.11
> langchain_community: 0.2.0
> langsmith: 0.1.93
> langchain_google_genai: 1.0.8
> langchain_openai: 0.1.7
> langchain_text_splitters: 0.2.2 | MMR NotImplemented in Neo4jVector, but the documentation says otherwise with an example implementation of MMR | https://api.github.com/repos/langchain-ai/langchain/issues/24768/comments | 3 | 2024-07-29T08:12:33Z | 2024-08-08T16:42:17Z | https://github.com/langchain-ai/langchain/issues/24768 | 2,434,753,679 | 24,768
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The code is picked up from LangChain documentations
[https://python.langchain.com/v0.2/docs/how_to/tools_chain/](https://python.langchain.com/v0.2/docs/how_to/tools_chain/)
```Python
from langchain import hub
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.tools import tool
model = # A mistral model
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant"),
("placeholder", "{chat_history}"),
("human", "{input}"),
("placeholder", "{agent_scratchpad}"),
]
)
@tool
def multiply(first_int: int, second_int: int) -> int:
"""Multiply two integers together."""
return first_int * second_int
@tool
def add(first_int: int, second_int: int) -> int:
"Add two integers."
return first_int + second_int
@tool
def exponentiate(base: int, exponent: int) -> int:
"Exponentiate the base to the exponent power."
return base**exponent
tools = [multiply, add, exponentiate]
# Construct the tool calling agent
agent = create_tool_calling_agent(model,tools, prompt)
# Create an agent executor by passing in the agent and tools
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
result = agent_executor.invoke(
{
"input": "Take 3 to the fifth power and multiply that by the sum of twelve and three, then square the whole result"
}
)
```
### Error Message and Stack Trace (if applicable)
TypeError: Object of type StructuredTool is not JSON serializable
### Description
I am trying to run the sample code in [https://python.langchain.com/v0.2/docs/how_to/tools_chain/](https://python.langchain.com/v0.2/docs/how_to/tools_chain/) to call an agent equipped with tools. I see two problems:
- If I run the code as it is, it generates the error that "Object of type StructuredTool is not JSON serializable".
- If I create the agent with empty tools list (i.e., tools=[]) it generates the response. However, it is not supposed to be the right way of creating agents, as far as I understand. Besides the answer with mistral7b model is very inaccurate. Even in the example provided in the link above, the answer seems to be different and wrong when checking the [langSmith run](https://smith.langchain.com/public/eeeb27a4-a2f8-4f06-a3af-9c983f76146c/r?runtab=0).
### System Info
langchain-core==0.1.52
langchain==0.1.16
| Langchain agent with tools generates "StructuredTool is not JSON serializable" | https://api.github.com/repos/langchain-ai/langchain/issues/24766/comments | 2 | 2024-07-29T07:50:07Z | 2024-07-29T21:30:32Z | https://github.com/langchain-ai/langchain/issues/24766 | 2,434,709,913 | 24,766 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
``` python
llm = AzureChatOpenAI(
azure_endpoint=api_base,
deployment_name=engine,
model_name=engine,
api_key=key,
api_version=api_version,
temperature=0
) # it's a chat-gpt4o deployed in Azure
def multiply2(a: int, b: int) -> int:
"""Multiplies a and b."""
print('----in multi---')
return a * b
tools = [multiply2]
llm_with_tools = llm.bind_tools(tools)
query = "what's the next integer after 109382*381001?"
r1=llm.invoke(query) # not using tools
print(r1)
print('1------------------')
r2=llm_with_tools.invoke(query)
print(r2)
```
### Error Message and Stack Trace (if applicable)
None
### Description
The content of r1 is:
```
To find the next integer after the product of 109382 and 381001, we first need to calculate the product:\n\n\\[ 109382 \\times 381001 = 41632617482 \\]\n\nThe next integer after 41632617482 is:\n\n\\[ 41632617482 + 1 = 41632617483 \\]\n\nSo, the next integer after \\( 109382 \\times 381001 \\) is 41632617483.
```
while r2 is:
```
content='' additional_kwargs={'tool_calls': [{'id': '……', 'function': {'arguments': '{"a":109382,"b":381001}', 'name': 'multiply2'}, 'type': 'function'}]} response_metadata={'token_usage': {'completion_tokens': 20, 'prompt_tokens': 61, 'total_tokens': 81}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': '……', 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'jailbreak': {'filtered': False, 'detected': False}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'tool_calls', 'logprobs': None, 'content_filter_results': {}} id='……' tool_calls=[{'name': 'multiply2', 'args': {'a': 109382, 'b': 381001}, 'id': '……', 'type': 'tool_call'}] usage_metadata={'input_tokens': 61, 'output_tokens': 20, 'total_tokens': 81}
```
The content of r2 is empty. Why is that?
Also, the log statement inside multiply2() is never printed, so the tool does not appear to have been executed.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:20:11) [MSC v.1938 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.24
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.77
> langchain_experimental: 0.0.63
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.1
> langchainhub: 0.1.20
> langgraph: 0.1.11
> langserve: 0.2.2 | gpt4o in azure returning empty content when using tools | https://api.github.com/repos/langchain-ai/langchain/issues/24765/comments | 3 | 2024-07-29T07:31:07Z | 2024-07-31T20:39:54Z | https://github.com/langchain-ai/langchain/issues/24765 | 2,434,674,361 | 24,765 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.storage import SQLStore
from langchain.embeddings.cache import CacheBackedEmbeddings
from langchain_community.embeddings import DeterministicFakeEmbedding
sql_store = SQLStore(namespace="some_ns",
db_url='sqlite:///embedding_store.db')
# Note - it is required to create the schema first
sql_store.create_schema()
# Using DeterministicFakeEmbedding
# and sql_store
cache_backed_embeddings = CacheBackedEmbeddings(
underlying_embeddings=DeterministicFakeEmbedding(size=128),
document_embedding_store=sql_store
)
# The execution of this complains because
# embed_documents returns list[list[float]]
# whereas the cache store is expecting bytes (LargeBinary)
cache_backed_embeddings.embed_documents(['foo', 'bar'])
```
You can reproduce the issue using this notebook
https://colab.research.google.com/drive/1mLCGRbdWGBOgpdSTxK9qtDL7JbeKT4j2?usp=sharing
### Error Message and Stack Trace (if applicable)
TypeError: memoryview: a bytes-like object is required, not 'list'
The above exception was the direct cause of the following exception:
StatementError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/sqlalchemy/sql/sqltypes.py](https://localhost:8080/#) in process(value)
891 def process(value):
892 if value is not None:
--> 893 return DBAPIBinary(value)
894 else:
895 return None
StatementError: (builtins.TypeError) memoryview: a bytes-like object is required, not 'list'
[SQL: INSERT INTO langchain_key_value_stores (namespace, "key", value) VALUES (?, ?, ?)]
[parameters: [{'key': 'foo', 'namespace': 'some_ns', 'value': [-2.336764233933763, 0.27510289545940503, -0.7957597268194339, 2.528701058861104, -0.15510189915015854 ... (2403 characters truncated) ... 2, 1.1312065514444096, -0.49558882193160414, -0.06710991747197836, -0.8768019783331409, 1.2976620676496629, -0.7436590792948876, -0.9567656775129801]}, {'key': 'bar', 'namespace': 'some_ns', 'value': [1.1438074881297355, -1.162000219732062, -0.5320296411623279, -0.04450529917299604, -2.210793183255032 ... (2391 characters truncated) ... 199, -1.4820970212122928, 0.36170213573657495, -0.10575371799110189, -0.881757661512149, -0.1130288120425299, 0.07494672180577358, 2.013154033982629]}]]
### Description
I am trying to use `CacheBackedEmbeddings` with `SQLStore`.
The `embed_documents` method of `Embeddings` returns `list[list[float]]`, whereas the SQLStore schema expects the stored value to be `bytes` (`LargeBinary`).
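A possible workaround (a sketch, assuming `EncoderBackedStore` from `langchain.storage` round-trips values as expected; not verified against this bug) is to serialize the vectors to bytes before they reach the `LargeBinary` column:

```python
import json

from langchain.storage import EncoderBackedStore

# Sketch: wrap the SQLStore so list[float] values are stored as JSON-encoded bytes.
encoded_store = EncoderBackedStore(
    store=sql_store,
    key_encoder=lambda key: key,
    value_serializer=lambda value: json.dumps(value).encode("utf-8"),
    value_deserializer=lambda blob: json.loads(bytes(blob).decode("utf-8")),
)

cache_backed_embeddings = CacheBackedEmbeddings(
    underlying_embeddings=DeterministicFakeEmbedding(size=128),
    document_embedding_store=encoded_store,
)
```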
### System Info
langchain==0.2.11
langchain-community==0.2.10
langchain-core==0.2.24
langchain-text-splitters==0.2.2 | `langchain_community.storage.SQLStore` does not work with `langchain.embeddings.cache.CacheBackedEmbeddings` | https://api.github.com/repos/langchain-ai/langchain/issues/24758/comments | 1 | 2024-07-28T20:45:42Z | 2024-07-28T21:49:25Z | https://github.com/langchain-ai/langchain/issues/24758 | 2,434,104,052 | 24,758
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/callbacks/streamlit/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The `langchain_community.callbacks.StreamlitCallbackHandler` docs only include an example with LangChain; there is no equivalent example for LangGraph workflows.
Naive attempts to use `langchain_community.callbacks.StreamlitCallbackHandler` with langgraph can easily result in the following error:
```
Error in StreamlitCallbackHandler.on_llm_end callback: RuntimeError('Current LLMThought is unexpectedly None!')
```
See [this Stack Overflow post](https://stackoverflow.com/questions/78015804/how-to-use-streamlitcallbackhandler-with-langgraph) for more info.
So, it would be helpful to include more support for `StreamlitCallbackHandler` and langgraph.
### Idea or request for content:
Given that users would like to generate more complex langgraph agents in streamlit apps (e.g., multi-agent workflows), it would be helpful to include more docs on this topic, such as how to properly use `StreamlitCallbackHandler` (or an equivalent) with langgraph. | DOC: using StreamlitCallbackHandler (or equivalent) with langgraph | https://api.github.com/repos/langchain-ai/langchain/issues/24757/comments | 0 | 2024-07-28T19:06:08Z | 2024-07-28T19:08:38Z | https://github.com/langchain-ai/langchain/issues/24757 | 2,434,070,886 | 24,757 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I found that execution via LlamaCpp in langchain_community.llms.llamacpp is much slower than Llama from llama_cpp (by 2-3x across more than 10 experiments).
1. Llama in llama_cpp
-- 1784 tokens per second
<img width="1010" alt="Screenshot 2024-07-29 at 12 37 02 AM" src="https://github.com/user-attachments/assets/ee4ebdbd-1e00-4e62-9f4d-072091c93485">
2. LlamaCpp in langchain_community.llms.llamacpp
-- 560 tokens per second
<img width="1019" alt="Screenshot 2024-07-29 at 12 37 07 AM" src="https://github.com/user-attachments/assets/b05e473f-3d34-4e4f-8be3-779e8459dd90">
Do I have the wrong settings, or is this a bug?
### Error Message and Stack Trace (if applicable)
_No response_
### Description
```
# 1.
from llama_cpp import Llama
llm = Llama(
model_path="/Users/marcus/Downloads/data_science/llama-all/llama3.1/Meta-Llama-3.1-8B-Instruct/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf",
n_gpu_layers=-1, # Uncomment to use GPU acceleration
# seed=1337, # Uncomment to set a specific seed
# n_ctx=2048, # Uncomment to increase the context window
n_ctx=8096,
)
res = llm.create_chat_completion(
messages=[
{
"role": "system",
"content": """You are a helpful Assistant."""
},
{
"role": "user",
"content": "Write a bubble sort in python"
}
],
temperature = 0.0,
)
# 2.
from langchain_community.llms.llamacpp import LlamaCpp
from langchain_core.prompts import ChatPromptTemplate
n_gpu_layers = -1 # The number of layers to put on the GPU. The rest will be on the CPU. If you don't know how many layers there are, you can use -1 to move all to GPU.
n_batch = 512
llm = LlamaCpp(
model_path="/Users/marcus/Downloads/data_science/llama-all/llama3.1/Meta-Llama-3.1-8B-Instruct/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf",
n_ctx=8096,
n_gpu_layers=n_gpu_layers,
f16_kv=True,
temperature=0,
n_batch=n_batch,
)
question = """Write a bubble sort in python"""
system = "You are a helpful assistant."
human = "{text}"
prompt = ChatPromptTemplate.from_messages([("system", system), ("user", human)])
res = (prompt | llm).invoke(question)
```
Did I have a wrong settings or is it a bug?
### System Info
python = "3.11.3"
langchain = "^0.2.11"
llama-cpp-python = "^0.2.83"
model = Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf | Huge performance differences between llama_cpp_python and langchain_community.llms.llamacpp | https://api.github.com/repos/langchain-ai/langchain/issues/24756/comments | 0 | 2024-07-28T16:53:09Z | 2024-07-28T16:55:40Z | https://github.com/langchain-ai/langchain/issues/24756 | 2,434,023,675 | 24,756 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from dotenv import load_dotenv, find_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.output_parsers import PydanticOutputParser
from langchain.output_parsers import OutputFixingParser
from langchain.prompts import PromptTemplate
_ = load_dotenv(find_dotenv())
llm = ChatOpenAI(model="gpt-4o")
##############################
### Auto-Fixing Parser
##############################
class Date(BaseModel):
year: int = Field(description="Year")
month: int = Field(description="Month")
day: int = Field(description="Day")
era: str = Field(description="BC or AD")
prompt_template = """
Extact the date within user input.
{format_instructions}
User Input:
{query}
"""
parser = PydanticOutputParser(pydantic_object=Date)
new_parser = OutputFixingParser.from_llm(
parser=parser,
llm=llm
)
template = PromptTemplate(
template=prompt_template,
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()}
)
query = "Sunny weather on April 6 2023"
prompt = template.format_prompt(query=query)
response = llm.invoke(prompt.to_messages())
incorrect_output = response.content.replace("4", "April")
print("====Incorrect output=====")
print(incorrect_output)
try:
response = parser.parse(incorrect_output)
except Exception as e:
print("===Exception===")
print(e)
print("===Parsing using outputfixingparser===")
date = new_parser.parse(incorrect_output)
print(date.json())
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/Users/selinali/Documents/dev/zhihu-ai-engineer-lecture-notes/practise/08-langchain/run2.py", line 60, in <module>
date = new_parser.parse(incorrect_output)
File "/Users/selinali/Documents/dev/zhihu-ai-engineer-lecture-notes/.venv/lib/python3.10/site-packages/langchain/output_parsers/fix.py", line 69, in parse
return self.parser.parse(completion)
File "/Users/selinali/Documents/dev/zhihu-ai-engineer-lecture-notes/.venv/lib/python3.10/site-packages/langchain_core/output_parsers/pydantic.py", line 77, in parse
return super().parse(text)
File "/Users/selinali/Documents/dev/zhihu-ai-engineer-lecture-notes/.venv/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 98, in parse
return self.parse_result([Generation(text=text)])
File "/Users/selinali/Documents/dev/zhihu-ai-engineer-lecture-notes/.venv/lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for Generation
text
str type expected (type=type_error.str)
```
### Description
I am trying to test OutputFixingParser by following a tutorial, but it gives me the exception shown in the stack trace.
### System Info
python = "^3.10"
langchain = "^0.2.11"
langchain-openai = "^0.1.19" | OutputFixingParser not working | https://api.github.com/repos/langchain-ai/langchain/issues/24753/comments | 2 | 2024-07-28T12:50:21Z | 2024-07-29T20:01:55Z | https://github.com/langchain-ai/langchain/issues/24753 | 2,433,917,153 | 24,753 |
[
"hwchase17",
"langchain"
] | ### URL
https://docs.smith.langchain.com/old/tracing/faq/langchain_specific_guides
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Most of my LangSmith traces end up with many nested items that are challenging to untangle and understand. Here, I've already added a `name` parameter to many of the Runnable subclasses that I know have it, and yet it's quite difficult to see what's going on:
![image](https://github.com/user-attachments/assets/a6646309-a0b9-49ff-84a8-c33a21454be6)
As a LangChain, LangServe and LangSmith pro user, I expect the docs to contain a basic example of how to rename the components of a non-trivial chain so that their business intent is transparent.
### Idea or request for content:
1. Please create a runnable example of a non-trivial chain with at least 100 trace steps that shows how to rename the runs [UPDATE: and traces] in the tree browser in langsmith.
2. Please explicitly mention the LCEL Runnables that take a `name` parameter and those that do not, and also explicitly mention whether there are any `.with_config()` invocations that can substitute for compound chains (for example, I expected `(chain_a | chain_b).with_config(name="chain_a_and_b")` to name the chain in langsmith, but it did not) | DOC: Sample python which customizes the trace names of the runnables in the chain | https://api.github.com/repos/langchain-ai/langchain/issues/24752/comments | 3 | 2024-07-28T10:25:44Z | 2024-08-04T10:11:08Z | https://github.com/langchain-ai/langchain/issues/24752 | 2,433,861,694 | 24,752 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os

from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

os.environ['HUGGINGFACEHUB_API_TOKEN'] = 'xxxxxxxxx'
llm = HuggingFaceEndpoint(
repo_id="microsoft/Phi-3-mini-4k-instruct",
task="text-generation",
max_new_tokens=512,
do_sample=False,
repetition_penalty=1.03,
)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/mac/langchain/test.py", line 18, in <module>
llm = HuggingFaceEndpoint(
File "/opt/anaconda3/envs/langchain/lib/python3.9/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for HuggingFaceEndpoint
__root__
Did not find endpoint_url, please add an environment variable `HF_INFERENCE_ENDPOINT` which contains it, or pass `endpoint_url` as a named parameter. (type=value_error)
### Description
I am trying to initialize the `HuggingFaceEndpoint`, but despite passing the correct `repo_id`, I am encountering an error. I have identified the bug: even though I provide the `repo_id`, the `HuggingFaceEndpoint` validation always checks for the `endpoint_url`, which is incorrect. If the `repo_id` is passed, it should not be checking for the `endpoint_url`. I will create a PR to fix this issue.
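In the meantime, passing `endpoint_url` explicitly satisfies the validation; this is the workaround I am using (assuming the public serverless Inference API URL scheme for this repo):
```python
from langchain_huggingface import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    endpoint_url="https://api-inference.huggingface.co/models/microsoft/Phi-3-mini-4k-instruct",
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)
```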
### System Info
Package Information
-------------------
> langchain_core: 0.2.24
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.93
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.17 | HuggingFaceEndpoint `Endpoint URL` validation Error | https://api.github.com/repos/langchain-ai/langchain/issues/24742/comments | 4 | 2024-07-27T14:24:11Z | 2024-08-05T00:53:59Z | https://github.com/langchain-ai/langchain/issues/24742 | 2,433,501,157 | 24,742 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
embeddings = AzureOpenAIEmbeddings(
azure_endpoint=azure_endpoint,
openai_api_version=openai_api_version,
openai_api_key=openai_api_key,
openai_api_type=openai_api_type,
deployment=deployment,
chunk_size=1)
vectorstore = AzureSearch(
azure_search_endpoint=azure_search_endpoint,
azure_search_key=azure_search_key,
index_name=index_name,
embedding_function=embeddings.embed_query,
)
system_message_prompt = SystemMessagePromptTemplate.from_template(
system_prompt)
human_message_prompt = HumanMessagePromptTemplate.from_template(
human_template)
chat_prompt = ChatPromptTemplate.from_messages(
[system_message_prompt, human_message_prompt])
doc_chain = load_qa_chain(
conversation_llm, chain_type="stuff", prompt=chat_prompt, callback_manager=default_manager
)
conversation_chain = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(search_type="similarity_score_threshold", k=rag_top_k,
search_kwargs={"score_threshold": rag_score_threshold}),
combine_docs_chain=doc_chain,
question_generator=question_generator,
return_source_documents=True,
callback_manager=default_manager,
rephrase_question=False,
memory=memory,
max_tokens_limit=max_retrieval_tokens,
)
result = await conversation_chain.ainvoke({"question": question, "chat_history": chat_history}
```
### Error Message and Stack Trace (if applicable)
TypeError("'AsyncSearchItemPaged' object is not iterable")Traceback (most recent call last):
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 208, in ainvoke
await self._acall(inputs, run_manager=run_manager)
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 212, in _acall
docs = await self._aget_docs(new_question, inputs, run_manager=_run_manager)
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 410, in _aget_docs
docs = await self.retriever.ainvoke(
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain_core/retrievers.py", line 280, in ainvoke
raise e
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain_core/retrievers.py", line 273, in ainvoke
result = await self._aget_relevant_documents(
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain_community/vectorstores/azuresearch.py", line 1590, in _aget_relevant_documents
await self.vectorstore.asimilarity_search_with_relevance_scores(
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain_community/vectorstores/azuresearch.py", line 663, in asimilarity_search_with_relevance_scores
result = await self.avector_search_with_score(query, k=k, **kwargs)
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain_community/vectorstores/azuresearch.py", line 750, in avector_search_with_score
return _results_to_documents(results)
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain_community/vectorstores/azuresearch.py", line 1623, in _results_to_documents
docs = [
TypeError: 'AsyncSearchItemPaged' object is not iterable
### Description
[This commit](https://github.com/langchain-ai/langchain/commit/ffe6ca986ee5b439e85c82781c1d8ce3578a3e88) for issue #24064 caused a regression in async support. After that commit, `avector_search_with_score()` calls `_asimple_search()`, which uses `async with self.async_client`, and then tries to call `_results_to_documents()` with the results. That triggers "TypeError: 'AsyncSearchItemPaged' object is not iterable" because it iterates the `AsyncSearchItemPaged` on a closed HTTP connection (the connection is closed at the end of the `_asimple_search()` `with` block).
The original async PR #22075 seemed to have the right idea: the async results need to be handled within the `with` block. Looking at that code, it looks like it should probably work. However, if I roll back to 0.2.7, I run into the "KeyError('content_vector')" that triggered issue #24064. For the moment, I've gotten things running by overriding AzureSearch as follows:
```python
class ExtendedAzureSearch(AzureSearch):
"""Extended AzureSearch class with patch to fix async support."""
async def _asimple_search_docs(
self,
embedding: List[float],
text_query: str,
k: int,
*,
filters: Optional[str] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Perform vector or hybrid search in the Azure search index.
Args:
embedding: A vector embedding to search in the vector space.
text_query: A full-text search query expression;
Use "*" or omit this parameter to perform only vector search.
k: Number of documents to return.
filters: Filtering expression.
Returns:
Matching documents with scores
"""
from azure.search.documents.models import VectorizedQuery
async with self.async_client as async_client:
results = await async_client.search(
search_text=text_query,
vector_queries=[
VectorizedQuery(
vector=np.array(embedding, dtype=np.float32).tolist(),
k_nearest_neighbors=k,
fields=FIELDS_CONTENT_VECTOR,
)
],
filter=filters,
top=k,
**kwargs,
)
docs = [
(
Document(
page_content=result.pop(FIELDS_CONTENT),
metadata=json.loads(result[FIELDS_METADATA])
if FIELDS_METADATA in result
else {
key: value for key, value in result.items() if key != FIELDS_CONTENT_VECTOR
},
),
float(result["@search.score"]),
)
async for result in results
]
return docs
# AP-254 - This version of avector_search_with_score() calls _asimple_search_docs() instead of _asimple_search()
# followed by _results_to_documents(results) because _asimple_search() uses `async with self.async_client`, which
# closes the paging connection on return, which makes it so the results are not available for
# _results_to_documents() (triggering "TypeError: 'AsyncSearchItemPaged' object is not iterable").
async def avector_search_with_score(
self,
query: str,
k: int = 4,
filters: Optional[str] = None,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
query (str): Text to look up documents similar to.
k (int, optional): Number of Documents to return. Defaults to 4.
filters (str, optional): Filtering expression. Defaults to None.
Returns:
List[Tuple[Document, float]]: List of Documents most similar
to the query and score for each
"""
embedding = await self._aembed_query(query)
return await self._asimple_search_docs(
embedding, "", k, filters=filters, **kwargs
)
```
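With that override in place, I construct the extended class instead of `AzureSearch`, with the same arguments as in the failing example above, and the async retrieval path works again:
```python
# drop-in replacement for the AzureSearch instance above
vectorstore = ExtendedAzureSearch(
    azure_search_endpoint=azure_search_endpoint,
    azure_search_key=azure_search_key,
    index_name=index_name,
    embedding_function=embeddings.embed_query,
)
```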
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
> Python Version: 3.10.9 (v3.10.9:1dd9be6584, Dec 6 2022, 14:37:36) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.81
> langchain_aws: 0.1.7
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.2
> langchainplus_sdk: 0.0.21
> langgraph: 0.1.14 | AzureSearch.avector_search_with_score() triggers "TypeError: 'AsyncSearchItemPaged' object is not iterable" when calling _results_to_documents() | https://api.github.com/repos/langchain-ai/langchain/issues/24740/comments | 4 | 2024-07-27T11:33:30Z | 2024-08-08T11:14:21Z | https://github.com/langchain-ai/langchain/issues/24740 | 2,433,439,253 | 24,740 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import asyncio
from langchain_core.language_models.base import LanguageModelInput
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_ollama import OllamaLLM
model = OllamaLLM(model="qwen2:0.5b", repeat_penalty=1.1, top_k=10, temperature=0.8, top_p=0.5)
input = [
SystemMessage(content="Some system content..."),
HumanMessage(content="Some user content..."),
]
async def stream_response(input: LanguageModelInput):
async for chunk in model.astream(input):
print(f"{chunk=}")
asyncio.run(stream_response(input))
```
### Error Message and Stack Trace (if applicable)
Every response chunk is empty.
```python
chunk=''
chunk=''
chunk=''
...
chunk=''
```
### Description
Asynchronous streaming via the `.astream(...)` instance method always returns an empty string for each chunk of the model response. This happens because the response content is stored under a key the code does not expect, so it is never extracted.
Checked for models: `qwen2:0.5b`, `qwen2:1.5b`, `llama3.1:8b` using Ollama 0.3.0.
Changing [.astream source code](https://github.com/langchain-ai/langchain/blob/152427eca13da070cc03f3f245a43bff312e43d1/libs/partners/ollama/langchain_ollama/llms.py#L332) from
```python
chunk = GenerationChunk(
text=(
stream_resp["message"]["content"]
if "message" in stream_resp
else ""
),
generation_info=(
dict(stream_resp) if stream_resp.get("done") is True else None
),
)
````
to
```python
chunk = GenerationChunk(
text=(
stream_resp["message"]["content"]
if "message" in stream_resp
else stream_resp.get("response", "")
),
generation_info=(
dict(stream_resp) if stream_resp.get("done") is True else None
),
)
````
resolves this issue.
The synchronous version of this method works fine.
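For comparison, the synchronous call below extracts the text correctly on my setup (same model and input as in the example above):
```python
# works: each chunk is a non-empty string
for chunk in model.stream(input):
    print(f"{chunk=}")
```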
### System Info
langchain==0.2.11
langchain-core==0.2.23
langchain-ollama==0.1.0
langchain-openai==0.1.17
langchain-text-splitters==0.2.2 | [langchain-ollama] `.astream` does not extract model response content | https://api.github.com/repos/langchain-ai/langchain/issues/24737/comments | 2 | 2024-07-27T06:36:23Z | 2024-07-29T20:00:59Z | https://github.com/langchain-ai/langchain/issues/24737 | 2,433,305,218 | 24,737 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
llm = HuggingFaceEndpoint(
endpoint_url="http://10.165.9.23:9009",
task="text-generation",
max_new_tokens=10,
do_sample=False,
temperature=0.8,
)
res = llm.invoke("Hugging Face is")
print(res)
print('-------------------')
llm_engine_hf = ChatHuggingFace(llm=llm, model_id = "meta-llama/Meta-Llama-3-8B-Instruct")
res = llm_engine_hf.invoke("Hugging Face is")
print(res)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Using ChatHuggingFace with a HuggingFaceEndpoint as the llm returns a 422 "Unprocessable Entity" error from the huggingface-hub InferenceClient post function when using the latest versions of langchain and huggingface-hub 0.24.3. After downgrading to the following versions, the code runs.
The working package versions:
huggingface_hub==0.24.0
langchain==0.2.9
langchain-core==0.2.21
langchain-huggingface==0.0.3
langchain_community==0.2.7
### System Info
The following versions are what caused problems
langchain-community==0.0.38
langchain-core==0.2.19
langchain-huggingface==0.0.3
langchain-openai==0.1.16
huggingface_hub==0.24.3 | Chathuggingface 422 error | https://api.github.com/repos/langchain-ai/langchain/issues/24720/comments | 1 | 2024-07-26T16:48:47Z | 2024-08-02T04:14:05Z | https://github.com/langchain-ai/langchain/issues/24720 | 2,432,601,071 | 24,720 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
``` python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms import LlamaCpp
model_path = "/AI/language-models/Llama-3-Taiwan-8B-Instruct.Q5_K_M.gguf"
llm = LlamaCpp(
model_path=model_path,
n_gpu_layers=100,
n_batch=512,
n_ctx=2048,
f16_kv=True,
max_tokens=2048,
callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
verbose=True
)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "d:\program\python\ddavid-langchain\ddavid_langchain\start.py", line 15, in <module>
llm = LlamaCpp(
^^^^^^^^^
File "D:\miniconda3\envs\ddavid-langchain\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for LlamaCpp
__root__
Could not load Llama model from path: /AI/language-models/Llama-3-Taiwan-8B-Instruct.Q5_K_M.gguf. Received error exception: access violation reading 0x0000000000000000 (type=value_error)
Exception ignored in: <function Llama.__del__ at 0x00000247FF061120>
Traceback (most recent call last):
File "D:\miniconda3\envs\ddavid-langchain\Lib\site-packages\llama_cpp\llama.py", line 2089, in __del__
AttributeError: 'Llama' object has no attribute '_lora_adapter'
### Description
I installed a brand new LangChain + llama-cpp-python setup under Python 3.10.14 on Windows 11. It worked well until several days ago, when I tried to upgrade llama-cpp-python from 0.2.82 to 0.2.83. After the upgrade, the error "AttributeError: 'Llama' object has no attribute '_lora_adapter'" appeared.
I tried installing again in a new environment under Python 3.11.9, but still encountered the same error.
I'm not 100% sure that the llama-cpp-python version causes this error, because I haven't yet tried llama-cpp-python 0.2.82 again.
### System Info
langchain==0.2.11
langchain-community==0.2.10
langchain-core==0.2.23
langchain-text-splitters==0.2.2
llama_cpp_python==0.2.83
Windows 11
Python 3.10.14 / Python 3.11.9
Install options:
$Env:CMAKE_ARGS="-DGGML_CUDA=on"
$Env:FORCE_CMAKE=1
| Get AttributeError: 'Llama' object has no attribute '_lora_adapter' with llama cpp | https://api.github.com/repos/langchain-ai/langchain/issues/24718/comments | 3 | 2024-07-26T15:33:21Z | 2024-07-31T14:53:35Z | https://github.com/langchain-ai/langchain/issues/24718 | 2,432,483,863 | 24,718 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I have the following code for building a RAG Chatbot (using [this](https://python.langchain.com/v0.2/docs/how_to/streaming/) example):
```
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_community.vectorstores import FAISS
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain.chains import create_retrieval_chain
from langchain.chains.history_aware_retriever import create_history_aware_retriever
from langchain.chains.combine_documents import create_stuff_documents_chain
vectordb = FAISS.load_local(persist_directory, embedding, index_name, allow_dangerous_deserialization=True)
retriever=vectordb.as_retriever()
llm = ChatOpenAI()
....
prompt={.....}
....
question_answer_chain = create_stuff_documents_chain(llm, prompt, output_parser=parser)
rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)
conversational_rag_chain = RunnableWithMessageHistory(
rag_chain,
get_session_history,
input_messages_key="input",
history_messages_key="chat_history",
output_messages_key="answer",
)
while True:
query = input("Ask a question: ")
for chunk in conversational_rag_chain.stream(
{"input": query,},
config={
"configurable": {
"session_id": "demo_1"
}
}
):
if answer_chunk := chunk.get("answer"):
print(f"{answer_chunk}", end="", flush=True)
print()
```
### Error Message and Stack Trace (if applicable)
```
Ask a question: How many colors are in rainbow?
Error in RootListenersTracer.on_chain_end callback: KeyError('answer')
Error in callback coroutine: KeyError('answer')
A rainbow typically has seven colors, which are: Red, Orange, Yellow, Green, Blue, Indigo, Violet.</s>
Ask a question:
```
### Description
Hi,
I am trying to get the answer as a stream. The problem is that whenever `conversational_rag_chain.stream()` is invoked with an `input`, it prints the following errors:
`Error in RootListenersTracer.on_chain_end callback: KeyError('answer')`
`Error in callback coroutine: KeyError('answer')`
and then the output is printed as intended.
My question is: how can I solve this? I have already set `output_messages_key="answer"` in the `conversational_rag_chain`, so am I doing something wrong, or is this a bug?
Any little discussion or help is welcome. Thanks in advance.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #39-Ubuntu SMP PREEMPT_DYNAMIC Fri Jul 5 21:49:14 UTC 2024
> Python Version: 3.12.3 (main, Apr 10 2024, 05:33:47) [GCC 13.2.0]
Package Information
-------------------
> langchain_core: 0.2.23
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.93
> langchain_chroma: 0.1.2
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Error in RootListenersTracer.on_chain_end callback: KeyError('answer') while streaming a RAG Chain Output | https://api.github.com/repos/langchain-ai/langchain/issues/24713/comments | 24 | 2024-07-26T13:26:39Z | 2024-08-10T16:50:23Z | https://github.com/langchain-ai/langchain/issues/24713 | 2,432,241,854 | 24,713 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/vectorstores/pgvector/#drop-tables
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Missing some functions
### Idea or request for content:
I need to check the vectors created by the embedding model and stored in the pgvector instance, and I also need to persist the instance or the vectors in the vector database. Thanks | DOC: PGvector instance content and persistence | https://api.github.com/repos/langchain-ai/langchain/issues/24708/comments | 0 | 2024-07-26T10:05:54Z | 2024-07-26T10:08:24Z | https://github.com/langchain-ai/langchain/issues/24708 | 2,431,893,558 | 24,708
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Description
In the code below I do not see any option to use streaming, so kindly suggest how I can implement it.
This code runs in a node of a state agent.
model = ChatGoogleGenerativeAI(model="gemini-pro", convert_system_message_to_human=True, temperature=0.20)
runnable = chat_message_prompt | model
with_message_history = RunnableWithMessageHistory(
runnable,
get_session_history,
input_messages_key="input",
history_messages_key="history"
)
print("********PROMPT TEST********", chat_message_prompt, "*******************")
response = with_message_history.invoke(
{"ability": "teaching", "input": prompt},
config={"configurable": {"session_id": phone_number}},
)
print("******* RESPONSE FROM GEMINI PRO = ", response.content, "*******")
answer = [response.content]
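For reference, this is roughly what I expected to be able to do inside the node (a sketch, not verified with gemini-pro):
```python
answer_parts = []
for chunk in with_message_history.stream(
    {"ability": "teaching", "input": prompt},
    config={"configurable": {"session_id": phone_number}},
):
    # each chunk should be an AIMessageChunk produced by the prompt | model runnable
    answer_parts.append(chunk.content)
answer = ["".join(answer_parts)]
```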
| How we can use Streaming with ChatGoogleGenerativeAI along with message history | https://api.github.com/repos/langchain-ai/langchain/issues/24706/comments | 2 | 2024-07-26T09:29:03Z | 2024-07-29T07:37:15Z | https://github.com/langchain-ai/langchain/issues/24706 | 2,431,823,094 | 24,706 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When running:
```python
from langchain_ollama import ChatOllama
llm = ChatOllama(
model=MODEL_NAME,
base_url=BASE_URL,
seed=42
)
```
The parameters base_url and seed get ignored. Reviewing the code of this instance, I see that the class definition is missing these attributes.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Regarding seed, in [PR 249](https://github.com/rasbt/LLMs-from-scratch/issues/249) in ollama, this feature was added to allow reproducibility of the experiments.
Regarding base_url: since Ollama lets us host LLMs on our own servers, we need to be able to specify the URL of the server.
Also, OllamaFunctions from the langchain_experimental package does support these parameters.
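As a temporary workaround I am pointing the underlying ollama client at my server through the `OLLAMA_HOST` environment variable (assuming the client honours it); the seed still cannot be passed this way:
```python
import os

# hypothetical server URL; set before the client is created
os.environ["OLLAMA_HOST"] = "http://my-ollama-server:11434"

from langchain_ollama import ChatOllama

llm = ChatOllama(model="qwen2:0.5b", repeat_penalty=1.1, top_k=10, temperature=0.8, top_p=0.5)
```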
### System Info
langchain==0.2.11
langchain-chroma==0.1.2
langchain-community==0.2.10
langchain-core==0.2.23
langchain-experimental==0.0.63
langchain-groq==0.1.6
langchain-ollama==0.1.0
langchain-text-splitters==0.2.2 | ChatOllama is missing the parameters seed and base_url | https://api.github.com/repos/langchain-ai/langchain/issues/24703/comments | 9 | 2024-07-26T08:27:42Z | 2024-07-30T15:02:00Z | https://github.com/langchain-ai/langchain/issues/24703 | 2,431,706,599 | 24,703 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Following code
```
from langchain_unstructured import UnstructuredLoader
loader = UnstructuredLoader(
file_name,
file=io.BytesIO(content),
partition_via_api=True,
server_url=get_from_env("url", "UNSTRUCTURED_ENDPOINT"),
)
for document in loader.lazy_load():
print("=" * 50)
print(document)
```
doesn't work, because I cannot give file_name and file content at the same time.
The file content is in memory, so I cannot really load it from a file path.
If I don't give file_name (because I can't), the API doesn't work either, because the file type is unknown.
### Error Message and Stack Trace (if applicable)
Both file and file_name given:
```
File "/opt/initai_copilot/experiments/langchain_ext/document_loaders/unstructured_tests.py", line 35, in <module>
loader = UnstructuredLoader(
File "/usr/lib/python3/dist-packages/langchain_unstructured/document_loaders.py", line 96, in __init__
raise ValueError("file_path and file cannot be defined simultaneously.")
ValueError: file_path and file cannot be defined simultaneously.
```
No file_name given:
```
File "/usr/lib/python3/dist-packages/langchain_unstructured/document_loaders.py", line 150, in lazy_load
yield from load_file(f=self.file, f_path=self.file_path)
File "/usr/lib/python3/dist-packages/langchain_unstructured/document_loaders.py", line 185, in lazy_load
else self._elements_json
File "/usr/lib/python3/dist-packages/langchain_unstructured/document_loaders.py", line 202, in _elements_json
return self._elements_via_api
File "/usr/lib/python3/dist-packages/langchain_unstructured/document_loaders.py", line 231, in _elements_via_api
response = client.general.partition(req) # type: ignore
File "/usr/lib/python3/dist-packages/unstructured_client/general.py", line 100, in partition
raise errors.SDKError('API error occurred', http_res.status_code, http_res.text, http_res)
unstructured_client.models.errors.sdkerror.SDKError: API error occurred: Status 400
{"detail":"File type None is not supported."}
```
### Description
See above.
Two problems with the file parameter (in-memory content); a workaround sketch follows below:
* Without a given file_name, the API partition mode doesn't work.
* With a given file_name, the constructor doesn't allow both parameters.
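The workaround sketch mentioned above: spill the in-memory bytes into a temporary file that keeps the original suffix, so the API can infer the file type (this reuses `file_name`, `content` and the endpoint lookup from the example above):
```python
import tempfile
from pathlib import Path

from langchain_unstructured import UnstructuredLoader

# keep the original suffix so the file type can be detected
suffix = Path(file_name).suffix
with tempfile.NamedTemporaryFile(suffix=suffix, delete=False) as tmp:
    tmp.write(content)
    tmp_path = tmp.name

loader = UnstructuredLoader(
    tmp_path,
    partition_via_api=True,
    server_url=get_from_env("url", "UNSTRUCTURED_ENDPOINT"),
)
for document in loader.lazy_load():
    print(document)
```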
### System Info
Not relevant | langchain_unstructured.UnstructuredLoader in api-partition-mode with given file-content also needs file-name | https://api.github.com/repos/langchain-ai/langchain/issues/24701/comments | 0 | 2024-07-26T07:42:10Z | 2024-07-26T07:44:48Z | https://github.com/langchain-ai/langchain/issues/24701 | 2,431,630,922 | 24,701 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_groq import ChatGroq
from langchain_community.tools.ddg_search import DuckDuckGoSearchRun
from langchain.prompts import ChatPromptTemplate
from langchain.agents import create_tool_calling_agent
from langchain.agents import AgentExecutor
llm = ChatGroq(temperature=0, model_name="llama-3.1-70b-versatile", api_key="", streaming=True)
ddg_search = DuckDuckGoSearchRun()
prompt = ChatPromptTemplate.from_messages([("system","You are a helpful Search Assistant"),
("human","{input}"),
("placeholder","{agent_scratchpad}")])
tools = [ddg_search]
search_agent = create_tool_calling_agent(llm,tools,prompt)
search_agent_executor = AgentExecutor(agent=search_agent, tools=tools, verbose=False, handle_parsing_errors=True)
async for event in search_agent_executor.astream_events(
{"input": "who is narendra modi"}, version="v1"
):
kind = event["event"]
if kind == "on_chat_model_stream":
content = event["data"]["chunk"].content
if content:
print(content, end="", flush=True)
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[37], [line 1](vscode-notebook-cell:?execution_count=37&line=1)
----> [1](vscode-notebook-cell:?execution_count=37&line=1) async for event in search_agent_executor.astream_events(
[2](vscode-notebook-cell:?execution_count=37&line=2) {"input": "who is narendra modi"}, version="v1"
[3](vscode-notebook-cell:?execution_count=37&line=3) ):
[4](vscode-notebook-cell:?execution_count=37&line=4) kind = event["event"]
[6](vscode-notebook-cell:?execution_count=37&line=6) if kind == "on_chat_model_stream":
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\runnables\base.py:1246, in Runnable.astream_events(self, input, config, version, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs)
[1241](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1241) raise NotImplementedError(
[1242](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1242) 'Only versions "v1" and "v2" of the schema is currently supported.'
[1243](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1243) )
[1245](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1245) async with aclosing(event_stream):
-> [1246](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1246) async for event in event_stream:
[1247](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1247) yield event
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\tracers\event_stream.py:778, in _astream_events_implementation_v1(runnable, input, config, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs)
[774](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/event_stream.py:774) root_name = config.get("run_name", runnable.get_name())
[776](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/event_stream.py:776) # Ignoring mypy complaint about too many different union combinations
[777](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/event_stream.py:777) # This arises because many of the argument types are unions
--> [778](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/event_stream.py:778) async for log in _astream_log_implementation( # type: ignore[misc]
[779](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/event_stream.py:779) runnable,
[780](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/event_stream.py:780) input,
[781](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/event_stream.py:781) config=config,
[782](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/event_stream.py:782) stream=stream,
[783](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/event_stream.py:783) diff=True,
[784](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/event_stream.py:784) with_streamed_output_list=True,
[785](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/event_stream.py:785) **kwargs,
[786](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/event_stream.py:786) ):
[787](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/event_stream.py:787) run_log = run_log + log
[789](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/event_stream.py:789) if not encountered_start_event:
[790](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/event_stream.py:790) # Yield the start event for the root runnable.
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\tracers\log_stream.py:670, in _astream_log_implementation(runnable, input, config, stream, diff, with_streamed_output_list, **kwargs)
[667](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:667) finally:
[668](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:668) # Wait for the runnable to finish, if not cancelled (eg. by break)
[669](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:669) try:
--> [670](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:670) await task
[671](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:671) except asyncio.CancelledError:
[672](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:672) pass
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\tracers\log_stream.py:624, in _astream_log_implementation.<locals>.consume_astream()
[621](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:621) prev_final_output: Optional[Output] = None
[622](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:622) final_output: Optional[Output] = None
--> [624](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:624) async for chunk in runnable.astream(input, config, **kwargs):
[625](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:625) prev_final_output = final_output
[626](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:626) if final_output is None:
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain\agents\agent.py:1793, in AgentExecutor.astream(self, input, config, **kwargs)
[1781](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1781) config = ensure_config(config)
[1782](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1782) iterator = AgentExecutorIterator(
[1783](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1783) self,
[1784](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1784) input,
(...)
[1791](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1791) **kwargs,
[1792](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1792) )
-> [1793](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1793) async for step in iterator:
[1794](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1794) yield step
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain\agents\agent_iterator.py:266, in AgentExecutorIterator.__aiter__(self)
[260](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent_iterator.py:260) while self.agent_executor._should_continue(
[261](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent_iterator.py:261) self.iterations, self.time_elapsed
[262](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent_iterator.py:262) ):
[263](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent_iterator.py:263) # take the next step: this plans next action, executes it,
[264](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent_iterator.py:264) # yielding action and observation as they are generated
[265](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent_iterator.py:265) next_step_seq: NextStepOutput = []
--> [266](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent_iterator.py:266) async for chunk in self.agent_executor._aiter_next_step(
[267](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent_iterator.py:267) self.name_to_tool_map,
[268](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent_iterator.py:268) self.color_mapping,
[269](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent_iterator.py:269) self.inputs,
[270](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent_iterator.py:270) self.intermediate_steps,
[271](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent_iterator.py:271) run_manager,
[272](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent_iterator.py:272) ):
[273](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent_iterator.py:273) next_step_seq.append(chunk)
[274](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent_iterator.py:274) # if we're yielding actions, yield them as they come
[275](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent_iterator.py:275) # do not yield AgentFinish, which will be handled below
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain\agents\agent.py:1483, in AgentExecutor._aiter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
[1480](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1480) intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
[1482](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1482) # Call the LLM to see what to do.
-> [1483](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1483) output = await self.agent.aplan(
[1484](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1484) intermediate_steps,
[1485](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1485) callbacks=run_manager.get_child() if run_manager else None,
[1486](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1486) **inputs,
[1487](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1487) )
[1488](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1488) except OutputParserException as e:
[1489](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:1489) if isinstance(self.handle_parsing_errors, bool):
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain\agents\agent.py:619, in RunnableMultiActionAgent.aplan(self, intermediate_steps, callbacks, **kwargs)
[611](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:611) final_output: Any = None
[612](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:612) if self.stream_runnable:
[613](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:613) # Use streaming to make sure that the underlying LLM is invoked in a
[614](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:614) # streaming
(...)
[617](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:617) # Because the response from the plan is not a generator, we need to
[618](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:618) # accumulate the output into final output and return that.
--> [619](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:619) async for chunk in self.runnable.astream(
[620](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:620) inputs, config={"callbacks": callbacks}
[621](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:621) ):
[622](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:622) if final_output is None:
[623](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain/agents/agent.py:623) final_output = chunk
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\runnables\base.py:3278, in RunnableSequence.astream(self, input, config, **kwargs)
[3275](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3275) async def input_aiter() -> AsyncIterator[Input]:
[3276](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3276) yield input
-> [3278](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3278) async for chunk in self.atransform(input_aiter(), config, **kwargs):
[3279](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3279) yield chunk
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\runnables\base.py:3261, in RunnableSequence.atransform(self, input, config, **kwargs)
[3255](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3255) async def atransform(
[3256](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3256) self,
[3257](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3257) input: AsyncIterator[Input],
[3258](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3258) config: Optional[RunnableConfig] = None,
[3259](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3259) **kwargs: Optional[Any],
[3260](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3260) ) -> AsyncIterator[Output]:
-> [3261](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3261) async for chunk in self._atransform_stream_with_config(
[3262](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3262) input,
[3263](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3263) self._atransform,
[3264](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3264) patch_config(config, run_name=(config or {}).get("run_name") or self.name),
[3265](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3265) **kwargs,
[3266](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3266) ):
[3267](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3267) yield chunk
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\runnables\base.py:2160, in Runnable._atransform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
[2158](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:2158) while True:
[2159](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:2159) if accepts_context(asyncio.create_task):
-> [2160](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:2160) chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
[2161](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:2161) py_anext(iterator), # type: ignore[arg-type]
[2162](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:2162) context=context,
[2163](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:2163) )
[2164](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:2164) else:
[2165](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:2165) chunk = cast(Output, await py_anext(iterator))
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\tracers\log_stream.py:258, in LogStreamCallbackHandler.tap_output_aiter(self, run_id, output)
[246](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:246) async def tap_output_aiter(
[247](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:247) self, run_id: UUID, output: AsyncIterator[T]
[248](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:248) ) -> AsyncIterator[T]:
[249](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:249) """Tap an output async iterator to stream its values to the log.
[250](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:250)
[251](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:251) Args:
(...)
[256](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:256) T: The output value.
[257](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:257) """
--> [258](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:258) async for chunk in output:
[259](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:259) # root run is handled in .astream_log()
[260](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:260) if run_id != self.root_id:
[261](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:261) # if we can't find the run silently ignore
[262](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:262) # eg. because this run wasn't included in the log
[263](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/tracers/log_stream.py:263) if key := self._key_map_by_run_id.get(run_id):
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\runnables\base.py:3231, in RunnableSequence._atransform(self, input, run_manager, config, **kwargs)
[3229](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3229) else:
[3230](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3230) final_pipeline = step.atransform(final_pipeline, config)
-> [3231](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3231) async for output in final_pipeline:
[3232](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:3232) yield output
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\runnables\base.py:1313, in Runnable.atransform(self, input, config, **kwargs)
[1310](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1310) final: Input
[1311](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1311) got_first_val = False
-> [1313](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1313) async for ichunk in input:
[1314](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1314) # The default implementation of transform is to buffer input and
[1315](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1315) # then call stream.
[1316](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1316) # It'll attempt to gather all input into a single chunk using
[1317](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1317) # the `+` operator.
[1318](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1318) # If the input is not addable, then we'll assume that we can
[1319](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1319) # only operate on the last chunk,
[1320](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1320) # and we'll iterate until we get to the last chunk.
[1321](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1321) if not got_first_val:
[1322](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1322) final = ichunk
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\runnables\base.py:5276, in RunnableBindingBase.atransform(self, input, config, **kwargs)
[5270](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:5270) async def atransform(
[5271](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:5271) self,
[5272](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:5272) input: AsyncIterator[Input],
[5273](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:5273) config: Optional[RunnableConfig] = None,
[5274](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:5274) **kwargs: Any,
[5275](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:5275) ) -> AsyncIterator[Output]:
-> [5276](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:5276) async for item in self.bound.atransform(
[5277](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:5277) input,
[5278](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:5278) self._merge_configs(config),
[5279](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:5279) **{**self.kwargs, **kwargs},
[5280](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:5280) ):
[5281](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:5281) yield item
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\runnables\base.py:1331, in Runnable.atransform(self, input, config, **kwargs)
[1328](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1328) final = ichunk
[1330](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1330) if got_first_val:
-> [1331](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1331) async for output in self.astream(final, config, **kwargs):
[1332](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/runnables/base.py:1332) yield output
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py:439, in BaseChatModel.astream(self, input, config, stop, **kwargs)
[434](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/language_models/chat_models.py:434) except BaseException as e:
[435](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/language_models/chat_models.py:435) await run_manager.on_llm_error(
[436](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/language_models/chat_models.py:436) e,
[437](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/language_models/chat_models.py:437) response=LLMResult(generations=[[generation]] if generation else []),
[438](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/language_models/chat_models.py:438) )
--> [439](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/language_models/chat_models.py:439) raise e
[440](file:///D:/Learning/Groq-Tool-Calling/.venv/Lib/site-packages/langchain_core/language_models/chat_models.py:440) else:
    441 await run_manager.on_llm_end(
    442 LLMResult(generations=[[generation]]),
    443 )
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py:417, in BaseChatModel.astream(self, input, config, stop, **kwargs)
    415 generation: Optional[ChatGenerationChunk] = None
    416 try:
--> 417 async for chunk in self._astream(
    418 messages,
    419 stop=stop,
    420 **kwargs,
    421 ):
    422 if chunk.message.id is None:
    423 chunk.message.id = f"run-{run_manager.run_id}"
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_groq\chat_models.py:582, in ChatGroq._astream(self, messages, stop, run_manager, **kwargs)
    578 if "tools" in kwargs:
    579 response = await self.async_client.create(
    580 messages=message_dicts, **{**params, **kwargs}
    581 )
--> 582 chat_result = self._create_chat_result(response)
    583 generation = chat_result.generations[0]
    584 message = cast(AIMessage, generation.message)
File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_groq\chat_models.py:665, in ChatGroq._create_chat_result(self, response)
    663 generations = []
    664 if not isinstance(response, dict):
--> 665 response = response.dict()
    666 token_usage = response.get("usage", {})
    667 for res in response["choices"]:
AttributeError: 'AsyncStream' object has no attribute 'dict'
### Description
langchain Version: 0.2.11
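For reference, a minimal sketch that appears to exercise the same code path (`ChatGroq._astream` with tools bound); the tool below is a made-up example, not taken from the original setup:
```python
import asyncio

from langchain_core.tools import tool
from langchain_groq import ChatGroq


@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


llm = ChatGroq(model="llama3-groq-70b-8192-tool-use-preview").bind_tools([add])


async def main() -> None:
    # astream routes through ChatGroq._astream; with tools bound this hits the
    # branch shown in the traceback above.
    async for chunk in llm.astream("What is 2 + 2? Use the add tool."):
        print(chunk)


asyncio.run(main())
```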
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.23
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.93
> langchain_cohere: 0.1.9
> langchain_experimental: 0.0.63
> langchain_groq: 0.1.6
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | agent_executor.astream_events does not work with ChatGroq | https://api.github.com/repos/langchain-ai/langchain/issues/24699/comments | 1 | 2024-07-26T06:03:59Z | 2024-07-26T15:32:26Z | https://github.com/langchain-ai/langchain/issues/24699 | 2,431,489,026 | 24,699 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Please see https://github.com/langchain-ai/langchain/issues/10864
### Error Message and Stack Trace (if applicable)
Please see https://github.com/langchain-ai/langchain/issues/10864
### Description
Negative similarity scores.
Multiple users have reported negative similarity scores with various models. Can we please reopen https://github.com/langchain-ai/langchain/issues/10864 ? Thanks.
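For anyone trying to reproduce this, a minimal check sketch (the vector store and embedding model here are placeholders, not the exact setups from the linked reports):
```python
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

store = Chroma.from_texts(["alpha", "beta", "gamma"], OpenAIEmbeddings())

# Relevance scores are derived from raw distances; printing both the score and the
# text makes it easy to spot values below 0 for unrelated queries.
for doc, score in store.similarity_search_with_relevance_scores("completely unrelated query", k=3):
    print(score, doc.page_content)

# The retriever configuration from the issue title:
retriever = store.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.0},
)
print(retriever.invoke("completely unrelated query"))
```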
### System Info
Please see https://github.com/langchain-ai/langchain/issues/10864 | When search_type="similarity_score_threshold, retriever returns negative scores (duplicate) | https://api.github.com/repos/langchain-ai/langchain/issues/24698/comments | 0 | 2024-07-26T05:23:15Z | 2024-07-29T10:18:10Z | https://github.com/langchain-ai/langchain/issues/24698 | 2,431,447,825 | 24,698 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings
urls = [
"https://lilianweng.github.io/posts/2023-06-23-agent/",
"https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/",
"https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/",
]
docs = [WebBaseLoader(url).load() for url in urls]
docs_list = [item for sublist in docs for item in sublist]
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
chunk_size=250, chunk_overlap=0
)
doc_splits = text_splitter.split_documents(docs_list)
# Add to vectorDB
vectorstore = Chroma.from_documents(
documents=doc_splits,
collection_name="rag-chroma",
embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[2], line 21
18 doc_splits = text_splitter.split_documents(docs_list)
20 # Add to vectorDB
---> 21 vectorstore = Chroma.from_documents(
22 documents=doc_splits,
23 collection_name="rag-chroma",
24 embedding=OpenAIEmbeddings(),
25 )
26 retriever = vectorstore.as_retriever()
File ~\AppData\Roaming\Python\Python311\site-packages\langchain_community\vectorstores\chroma.py:878, in Chroma.from_documents(cls, documents, embedding, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
876 texts = [doc.page_content for doc in documents]
877 metadatas = [doc.metadata for doc in documents]
--> 878 return cls.from_texts(
879 texts=texts,
880 embedding=embedding,
881 metadatas=metadatas,
882 ids=ids,
883 collection_name=collection_name,
884 persist_directory=persist_directory,
885 client_settings=client_settings,
886 client=client,
887 collection_metadata=collection_metadata,
...
---> 99 if key in self.model_fields:
100 return getattr(self, key)
101 return None
AttributeError: 'Collection' object has no attribute 'model_fields'
### Description
I just copied the code from the Self-RAG tutorial document and ran it as-is.
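A hedged workaround sketch: the traceback points at the pydantic-based `Collection` wrapper inside the community Chroma integration, which suggests a version mismatch with the installed `chromadb`. Switching to the dedicated `langchain-chroma` package (with a current `chromadb`) is one thing to try; this is an assumption, not a confirmed fix:
```python
# pip install -U langchain-chroma chromadb
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

vectorstore = Chroma.from_documents(
    documents=doc_splits,  # doc_splits comes from the example code above
    collection_name="rag-chroma",
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()
```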
### System Info
OS: Windows
OS Version: 10.0.22631
Python Version: 3.11.7 | packaged by Anaconda, Inc. | (main, Dec 15 2023, 18:05:47) [MSC v.1916 64 bit (AMD64)]
langchain_core: 0.2.23
langchain: 0.2.11
langchain_community: 0.2.10
langsmith: 0.1.82
langchain_cohere: 0.1.9
langchain_experimental: 0.0.59
langchain_openai: 0.1.17
langchain_text_splitters: 0.2.0
langchainhub: 0.1.20
langgraph: 0.1.14 | AttributeError: 'Collection' object has no attribute 'model_fields' | https://api.github.com/repos/langchain-ai/langchain/issues/24696/comments | 5 | 2024-07-26T03:06:20Z | 2024-08-02T07:44:15Z | https://github.com/langchain-ai/langchain/issues/24696 | 2,431,319,999 | 24,696 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.1/docs/templates/neo4j-advanced-rag/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
When setting this template up for the first time, before ingesting any data, I run into the error:
`ValueError: The specified vector index name does not exist. Make sure to check if you spelled it correctly`.
Is an existing index therefore a prerequisite? It would help if the doc clarified this.
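For context, a rough sketch of the kind of ingestion step that creates an index the template can then look up (the index name and documents below are assumptions, not values from the template):
```python
from langchain_community.vectorstores import Neo4jVector
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

# Assumes NEO4J_URI / NEO4J_USERNAME / NEO4J_PASSWORD are set in the environment.
docs = [Document(page_content="Example content to embed.")]

# Storing documents once creates the vector index that later lookups reference by name.
Neo4jVector.from_documents(
    docs,
    OpenAIEmbeddings(),
    index_name="typical_rag",
)
```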
### Idea or request for content:
_No response_ | DOC: Templates/neo4j-advanced-rag assumes index already exists | https://api.github.com/repos/langchain-ai/langchain/issues/24688/comments | 1 | 2024-07-25T21:06:42Z | 2024-07-26T02:36:02Z | https://github.com/langchain-ai/langchain/issues/24688 | 2,430,977,702 | 24,688 |
[
"hwchase17",
"langchain"
] | Unfortunately this function fails for pydantic v1 models that use `Annotated` with `Field`, e.g.
```python
from typing import Annotated
from langchain_core import pydantic_v1  # imports assumed from context
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.utils.pydantic import _create_subset_model_v1

class InputModel(BaseModel):
    query: Annotated[str, pydantic_v1.Field(description="Hello World")]
_create_subset_model_v1("test", InputModel, InputModel.__annotations__.keys())
```
This produces the following error:
```plain
ValueError: cannot specify `Annotated` and value `Field`s together for 'query'
```
_Originally posted by @tdiggelm in https://github.com/langchain-ai/langchain/pull/24418#discussion_r1691736664_
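One workaround sketch (assuming only the description metadata is needed): with pydantic v1, declaring the `Field` as a default value instead of inside `Annotated` keeps it in the plain style the subset-model helper already handles:
```python
class InputModel(BaseModel):
    query: str = pydantic_v1.Field(description="Hello World")

_create_subset_model_v1("test", InputModel, InputModel.__annotations__.keys())
```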
| `langchain_core.utils.pydantic._create_subset_model_v1` fails for pydantic v1 models that use `Annotated` with `Field`, e.g. | https://api.github.com/repos/langchain-ai/langchain/issues/24676/comments | 2 | 2024-07-25T16:05:21Z | 2024-07-26T05:42:36Z | https://github.com/langchain-ai/langchain/issues/24676 | 2,430,427,140 | 24,676 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/toolkits/sql_database/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Hello everyone,
I think my issue is more about something missing in the doc than a bug.
Feel free to tell me if I did something wrong.
In the documentation, there is a great disclaimer: "The query chain may generate insert/update/delete queries. When this is not expected, use a custom prompt or create a SQL users without write permissions."
However, there is no information on the minimal permissions a user needs.
Currently, I have a script working perfectly with an admin account, but I get the following error with a user that has only:
* Read access on MyView
* Read definition
I can query the view manually, but with LangChain I get an "include_tables {MyView} not found in database" error.
Again, it works with an admin account. With the restricted user it fails even though I have the schema defined and view_support set to true.
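For what it's worth, a hedged diagnostic sketch: `SQLDatabase` discovers tables and views through SQLAlchemy's inspector, so checking what the restricted login can actually see there usually explains the "not found" error (connection string and schema below are placeholders):
```python
from sqlalchemy import create_engine, inspect

engine = create_engine(
    "mssql+pyodbc://restricted_user:***@server/db?driver=ODBC+Driver+17+for+SQL+Server"
)
insp = inspect(engine)
print(insp.get_table_names(schema="dbo"))
print(insp.get_view_names(schema="dbo"))  # if the view is missing here, LangChain won't find it either
```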
### Idea or request for content:
A note under the disclaimer explaining what rights a user needs for the tables and views listed in `include_tables`.
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
`agents.openai_assistant.base.OpenAIAssistantRunnable` has code like
```python
required_tool_call_ids = {
tc.id for tc in run.required_action.submit_tool_outputs.tool_calls
}
```
See https://github.com/langchain-ai/langchain/blob/langchain%3D%3D0.2.11/libs/langchain/langchain/agents/openai_assistant/base.py#L497.
`required_action` is an optional field on OpenAI's `Run` entity. See https://github.com/openai/openai-python/blob/v1.37.0/src/openai/types/beta/threads/run.py#L161.
This results in an error when `run.required_action` is `None`, which does sometimes occur.
### Error Message and Stack Trace (if applicable)
AttributeError: 'NoneType' object has no attribute 'submit_tool_outputs'
```
/SITE_PACKAGES/langchain/agents/openai_assistant/base.py:497 in _parse_intermediate_steps
495:
run = self._wait_for_run(last_action.run_id, last_action.thread_id)
496:
required_tool_call_ids = {
497:
tc.id for tc in run.required_action.submit_tool_outputs.tool_calls
498:
}
499:
tool_outputs = [
/SITE_PACKAGES/langchain_community/agents/openai_assistant/base.py:312 in invoke
310:
# Being run within AgentExecutor and there are tool outputs to submit.
311:
if self.as_agent and input.get("intermediate_steps"):
312:
tool_outputs = self._parse_intermediate_steps(
313:
input["intermediate_steps"]
314:
)
/SITE_PACKAGES/langchain_community/agents/openai_assistant/base.py:347 in invoke
345:
except BaseException as e:
346:
run_manager.on_chain_error(e)
347:
raise e
348:
try:
349:
response = self._get_response(run)
/SITE_PACKAGES/langchain_core/runnables/base.py:854 in stream
852:
The output of the Runnable.
853:
"""
854:
yield self.invoke(input, config, **kwargs)
855:
856:
async def astream(
/SITE_PACKAGES/langchain/agents/agent.py:580 in plan
578:
# Because the response from the plan is not a generator, we need to
579:
# accumulate the output into final output and return that.
580:
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
581:
if final_output is None:
582:
final_output = chunk
/SITE_PACKAGES/langchain/agents/agent.py:1346 in _iter_next_step
1344:
1345:
# Call the LLM to see what to do.
1346:
output = self.agent.plan(
1347:
intermediate_steps,
1348:
callbacks=run_manager.get_child() if run_manager else None,
/SITE_PACKAGES/langchain/agents/agent.py:1318 in <listcomp>
1316:
) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1317:
return self._consume_next_step(
1318:
[
1319:
a
1320:
for a in self._iter_next_step(
/SITE_PACKAGES/langchain/agents/agent.py:1318 in _take_next_step
1316:
) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1317:
return self._consume_next_step(
1318:
[
1319:
a
1320:
for a in self._iter_next_step(
/SITE_PACKAGES/langchain/agents/agent.py:1612 in _call
1610:
# We now enter the agent loop (until it returns something).
1611:
while self._should_continue(iterations, time_elapsed):
1612:
next_step_output = self._take_next_step(
1613:
name_to_tool_map,
1614:
color_mapping,
/SITE_PACKAGES/langchain/chains/base.py:156 in invoke
154:
self._validate_inputs(inputs)
155:
outputs = (
156:
self._call(inputs, run_manager=run_manager)
157:
if new_arg_supported
158:
else self._call(inputs)
/SITE_PACKAGES/langchain/chains/base.py:166 in invoke
164:
except BaseException as e:
165:
run_manager.on_chain_error(e)
166:
raise e
167:
run_manager.on_chain_end(outputs)
168:
/SITE_PACKAGES/langchain_core/runnables/base.py:5057 in invoke
5055:
**kwargs: Optional[Any],
5056:
) -> Output:
5057:
return self.bound.invoke(
5058:
input,
5059:
self._merge_configs(config),
PROJECT_ROOT/assistants/[openai_native_assistant.py](https://github.com/Shopximity/astrology/tree/master/PROJECT_ROOT/assistants/openai_native_assistant.py#L583):583 in _run
581:
metadata=get_contextvars()
582:
) as manager:
583:
result = agent_executor.invoke(run_args, config=dict(callbacks=manager))
```
### Description
`OpenAIAssistantRunnable._parse_intermediate_steps` assumes that every OpenAI `run` will have a `required_action`, but that is not correct.
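A defensive sketch of the kind of guard that would avoid the crash (not the library's actual fix, just an illustration against the code quoted above):
```python
if run.required_action is not None:
    required_tool_call_ids = {
        tc.id for tc in run.required_action.submit_tool_outputs.tool_calls
    }
else:
    required_tool_call_ids = set()
```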
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
> Python Version: 3.11.7 (main, Jan 2 2024, 08:56:15) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.2.13
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.81
> langchain_anthropic: 0.1.19
> langchain_exa: 0.1.0
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | agents.openai_assistant.base.OpenAIAssistantRunnable assumes existence of an Optional field | https://api.github.com/repos/langchain-ai/langchain/issues/24673/comments | 1 | 2024-07-25T15:46:25Z | 2024-07-25T19:43:33Z | https://github.com/langchain-ai/langchain/issues/24673 | 2,430,366,029 | 24,673 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I've tried this code on 2 platforms(JetBrains Datalore Online and Replit), and they both give me the same error.
```py
# -*- coding: utf-8 -*-
# Some API KEY and model name
GROQ_API_KEY = "MY_GROQ_KEY"# I have filled this, no problem in this
llm_name = "llama3-groq-70b-8192-tool-use-preview"
# Import
from langchain_groq import ChatGroq
from langchain_core.messages import AIMessage, SystemMessage, HumanMessage
from langchain_core.chat_history import (
BaseChatMessageHistory,
InMemoryChatMessageHistory,
)
from langchain_core.runnables.history import RunnableWithMessageHistory
# Chat History Module
store = {}
# The exactly same code in the tutorial
def get_session_history(session_id: str) -> BaseChatMessageHistory:
if session_id not in store:
store[session_id] = InMemoryChatMessageHistory()
return store[session_id]
model = ChatGroq(
model = llm_name,
temperature = 0.5,
max_tokens = 1024,
stop_sequences = None,
api_key = GROQ_API_KEY
)
with_message_history = RunnableWithMessageHistory(model, get_session_history)
# Session ID
config = {"configurable": {"session_id": "abc"}}
model.invoke([HumanMessage(content = "Hi! My name's Kevin.")])
# Stream: I fail in this
for chunk in with_message_history.stream(
[HumanMessage(content = "What's my name?")],
config = config,
):
print(chunk.content, end = '')
print()
print("Done!")
# Invoke: This works well just as I want
response = with_message_history.invoke(
[HumanMessage(content="Hi! I'm Bob")],
config=config,
)
print(response.content)# This works
```
### Error Message and Stack Trace (if applicable)
Your name is Kevin.
Done!
Error in RootListenersTracer.on_chain_end callback: ValueError()
Error in callback coroutine: ValueError()
### Description
* I use code from the LangChain official tutorial (https://python.langchain.com/v0.2/docs/tutorials/chatbot/#prompt-templates) with a few modifications.
* In stream mode, it outputs the correct response, but the errors above are printed after it (a manual check is sketched below).
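A hedged way to check whether the error comes from the history-persistence callback rather than the model stream itself: stream the bare model and maintain the history manually with the public API.
```python
from langchain_core.messages import AIMessage

history = get_session_history("abc")
history.add_message(HumanMessage(content="What's my name?"))

collected = ""
for chunk in model.stream(history.messages):
    collected += chunk.content
    print(chunk.content, end="")
print()

# Persist the streamed answer ourselves instead of relying on RunnableWithMessageHistory.
history.add_message(AIMessage(content=collected))
```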
### System Info
The first service I tried: (JetBrains Datalore Online)
```
System Information
------------------
> OS: Linux
> OS Version: #40~20.04.1-Ubuntu SMP Mon Apr 24 00:21:13 UTC 2023
> Python Version: 3.8.12 (default, Jun 27 2024, 14:42:59)
[GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.23
> langsmith: 0.1.93
> langchain_groq: 0.1.6
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```
The second service I tried (Replit):
```
System Information
------------------
> OS: Linux
> OS Version: #26~22.04.1-Ubuntu SMP Fri Jun 14 18:48:45 UTC 2024
> Python Version: 3.10.14 (main, Mar 19 2024, 21:46:16) [GCC 13.2.0]
Package Information
-------------------
> langchain_core: 0.2.23
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.93
> langchain_groq: 0.1.6
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | Model with history work well on `invoke`, but not well in `stream` (many parts exactly same to official tutorial `Build a Chatbot`) | https://api.github.com/repos/langchain-ai/langchain/issues/24660/comments | 8 | 2024-07-25T09:25:28Z | 2024-08-07T12:51:34Z | https://github.com/langchain-ai/langchain/issues/24660 | 2,429,478,416 | 24,660 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
llm =build_llm(load_model_from="azure")
type(llm)# Outputs: langchain_community.chat_models.azureml_endpoint.AzureMLChatOnlineEndpoint
llm.invoke("Hallo") # Outputs: BaseMessage(content='Hallo! Wie kann ich Ihnen helfen?', type='assistant', id='run-f606d912-b21f-4c0c-861d-9338fa001724-0')
from backend_functions.langgraph_rag_workflow import create_workflow_app
from backend_functions.rag_functions import serialize_documents
from langchain_core.messages import HumanMessage
import json
question = "Hello, who are you?"
thread_id = "id_1"
model_type_for_astream_event = "chat_model"
chain = create_workflow_app(retriever=retriever, model=llm)
input_message = HumanMessage(content=question)
config = {
"configurable": {"thread_id": thread_id}, #for every user, a different thread_id should be selected
}
#print(f"Updated State from previous question: {chain.get_state(config).values}")
async for event in chain.astream_events(
#{"messages": [input_message]},
{"messages": question}, #test für azure
version="v1",
config=config
):
print(event)
if event["event"] == f"on_{model_type_for_astream_event}_start" and event.get("metadata", {}).get("langgraph_node") == "generate":
print("Stream started...")
if model_type_for_astream_event == "llm":
prompt_length = len(event["data"]["input"]["prompts"][0])
else:
prompt_length= len(event["data"]["input"]["messages"][0][0].content)
print(f'data: {json.dumps({"type": "prompt_length_characters", "content": prompt_length})}\n\n')
print(f'data: {json.dumps({"type": "prompt_length_tokens", "content": prompt_length / 4})}\n\n')
if event["event"] == f"on_{model_type_for_astream_event}_stream" and event.get("metadata", {}).get("langgraph_node") == "generate":
if model_type_for_astream_event == "llm":
chunks = event["data"]['chunk']
else:
chunks = event["data"]['chunk'].content
print(f'data: {json.dumps({"type": "chunk", "content": chunks})}\n\n')
elif event["event"] == "on_chain_end" and event.get("metadata", {}).get("langgraph_node") == "format_docs" and event["name"] == "format_docs":
retrieved_docs = event["data"]["input"]["raw_docs"]
serialized_docs = serialize_documents(retrieved_docs)
print(f'data: {{"type": "docs", "content": {serialized_docs}}}\n\n')
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
APIStatusError Traceback (most recent call last)
[/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb) Zelle 49 line 1
[12](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y211sZmlsZQ%3D%3D?line=11) config = {
[13](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y211sZmlsZQ%3D%3D?line=12) "configurable": {"thread_id": thread_id}, #for every user, a different thread_id should be selected
[14](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y211sZmlsZQ%3D%3D?line=13) }
[15](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y211sZmlsZQ%3D%3D?line=14) #print(f"Updated State from previous question: {chain.get_state(config).values}")
---> [16](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y211sZmlsZQ%3D%3D?line=15) async for event in chain.astream_events(
[17](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y211sZmlsZQ%3D%3D?line=16) #{"messages": [input_message]},
[18](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y211sZmlsZQ%3D%3D?line=17) {"messages": question}, #test für azure
[19](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y211sZmlsZQ%3D%3D?line=18) version="v1",
[20](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y211sZmlsZQ%3D%3D?line=19) config=config
[21](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y211sZmlsZQ%3D%3D?line=20) ):
[22](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y211sZmlsZQ%3D%3D?line=21) print(event)
[23](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y211sZmlsZQ%3D%3D?line=22) if event["event"] == f"on_{model_type_for_astream_event}_start" and event.get("metadata", {}).get("langgraph_node") == "generate":
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1246](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1246), in Runnable.astream_events(self, input, config, version, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs)
1241 raise NotImplementedError(
1242 'Only versions "v1" and "v2" of the schema is currently supported.'
1243 )
1245 async with aclosing(event_stream):
-> 1246 async for event in event_stream:
1247 yield event
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/tracers/event_stream.py:778](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/tracers/event_stream.py:778), in _astream_events_implementation_v1(runnable, input, config, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs)
774 root_name = config.get("run_name", runnable.get_name())
776 # Ignoring mypy complaint about too many different union combinations
777 # This arises because many of the argument types are unions
--> 778 async for log in _astream_log_implementation( # type: ignore[misc]
779 runnable,
780 input,
781 config=config,
782 stream=stream,
783 diff=True,
784 with_streamed_output_list=True,
785 **kwargs,
786 ):
787 run_log = run_log + log
789 if not encountered_start_event:
790 # Yield the start event for the root runnable.
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py:670](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py:670), in _astream_log_implementation(runnable, input, config, stream, diff, with_streamed_output_list, **kwargs)
667 finally:
668 # Wait for the runnable to finish, if not cancelled (eg. by break)
669 try:
--> 670 await task
671 except asyncio.CancelledError:
672 pass
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py:624](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py:624), in _astream_log_implementation.<locals>.consume_astream()
621 prev_final_output: Optional[Output] = None
622 final_output: Optional[Output] = None
--> 624 async for chunk in runnable.astream(input, config, **kwargs):
625 prev_final_output = final_output
626 if final_output is None:
File [~/anaconda3/lib/python3.11/site-packages/langgraph/pregel/__init__.py:1336](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langgraph/pregel/__init__.py:1336), in Pregel.astream(self, input, config, stream_mode, output_keys, input_keys, interrupt_before, interrupt_after, debug)
1333 del fut, task
1335 # panic on failure or timeout
-> 1336 _panic_or_proceed(done, inflight, step)
1337 # don't keep futures around in memory longer than needed
1338 del done, inflight, futures
File [~/anaconda3/lib/python3.11/site-packages/langgraph/pregel/__init__.py:1540](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langgraph/pregel/__init__.py:1540), in _panic_or_proceed(done, inflight, step)
1538 inflight.pop().cancel()
1539 # raise the exception
-> 1540 raise exc
1542 if inflight:
1543 # if we got here means we timed out
1544 while inflight:
1545 # cancel all pending tasks
File [~/anaconda3/lib/python3.11/site-packages/langgraph/pregel/retry.py:117](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langgraph/pregel/retry.py:117), in arun_with_retry(task, retry_policy, stream)
115 # run the task
116 if stream:
--> 117 async for _ in task.proc.astream(task.input, task.config):
118 pass
119 else:
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:3278](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:3278), in RunnableSequence.astream(self, input, config, **kwargs)
3275 async def input_aiter() -> AsyncIterator[Input]:
3276 yield input
-> 3278 async for chunk in self.atransform(input_aiter(), config, **kwargs):
3279 yield chunk
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:3261](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:3261), in RunnableSequence.atransform(self, input, config, **kwargs)
3255 async def atransform(
3256 self,
3257 input: AsyncIterator[Input],
3258 config: Optional[RunnableConfig] = None,
3259 **kwargs: Optional[Any],
3260 ) -> AsyncIterator[Output]:
-> 3261 async for chunk in self._atransform_stream_with_config(
3262 input,
3263 self._atransform,
3264 patch_config(config, run_name=(config or {}).get("run_name") or self.name),
3265 **kwargs,
3266 ):
3267 yield chunk
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2160](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2160), in Runnable._atransform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
2158 while True:
2159 if accepts_context(asyncio.create_task):
-> 2160 chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
2161 py_anext(iterator), # type: ignore[arg-type]
2162 context=context,
2163 )
2164 else:
2165 chunk = cast(Output, await py_anext(iterator))
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py:258](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/tracers/log_stream.py:258), in LogStreamCallbackHandler.tap_output_aiter(self, run_id, output)
246 async def tap_output_aiter(
247 self, run_id: UUID, output: AsyncIterator[T]
248 ) -> AsyncIterator[T]:
249 """Tap an output async iterator to stream its values to the log.
250
251 Args:
(...)
256 T: The output value.
257 """
--> 258 async for chunk in output:
259 # root run is handled in .astream_log()
260 if run_id != self.root_id:
261 # if we can't find the run silently ignore
262 # eg. because this run wasn't included in the log
263 if key := self._key_map_by_run_id.get(run_id):
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:3231](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:3231), in RunnableSequence._atransform(self, input, run_manager, config, **kwargs)
3229 else:
3230 final_pipeline = step.atransform(final_pipeline, config)
-> 3231 async for output in final_pipeline:
3232 yield output
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1313](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1313), in Runnable.atransform(self, input, config, **kwargs)
1310 final: Input
1311 got_first_val = False
-> 1313 async for ichunk in input:
1314 # The default implementation of transform is to buffer input and
1315 # then call stream.
1316 # It'll attempt to gather all input into a single chunk using
1317 # the `+` operator.
1318 # If the input is not addable, then we'll assume that we can
1319 # only operate on the last chunk,
1320 # and we'll iterate until we get to the last chunk.
1321 if not got_first_val:
1322 final = ichunk
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1331](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1331), in Runnable.atransform(self, input, config, **kwargs)
1328 final = ichunk
1330 if got_first_val:
-> 1331 async for output in self.astream(final, config, **kwargs):
1332 yield output
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:874](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:874), in Runnable.astream(self, input, config, **kwargs)
856 async def astream(
857 self,
858 input: Input,
859 config: Optional[RunnableConfig] = None,
860 **kwargs: Optional[Any],
861 ) -> AsyncIterator[Output]:
862 """
863 Default implementation of astream, which calls ainvoke.
864 Subclasses should override this method if they support streaming output.
(...)
872 The output of the Runnable.
873 """
--> 874 yield await self.ainvoke(input, config, **kwargs)
File [~/anaconda3/lib/python3.11/site-packages/langgraph/utils.py:117](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langgraph/utils.py:117), in RunnableCallable.ainvoke(self, input, config, **kwargs)
115 kwargs["config"] = config
116 if sys.version_info >= (3, 11):
--> 117 ret = await asyncio.create_task(
118 self.afunc(input, **kwargs), context=context
119 )
120 else:
121 ret = await self.afunc(input, **kwargs)
File [~/Documents/GitHub/fastapi_rag_demo/backend_functions/langgraph_rag_workflow.py:264](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/Documents/GitHub/fastapi_rag_demo/backend_functions/langgraph_rag_workflow.py:264), in create_workflow_app.<locals>.generate(state)
262 system_message = state["system_prompt"]
263 state["prompt_length"] = len(system_message)
--> 264 response = await model.ainvoke([SystemMessage(content=system_message)] + messages)
265 state["generation"] = response
266 if isinstance(model, OllamaLLM):
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:291](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:291), in BaseChatModel.ainvoke(self, input, config, stop, **kwargs)
282 async def ainvoke(
283 self,
284 input: LanguageModelInput,
(...)
288 **kwargs: Any,
289 ) -> BaseMessage:
290 config = ensure_config(config)
--> 291 llm_result = await self.agenerate_prompt(
292 [self._convert_input(input)],
293 stop=stop,
294 callbacks=config.get("callbacks"),
295 tags=config.get("tags"),
296 metadata=config.get("metadata"),
297 run_name=config.get("run_name"),
298 run_id=config.pop("run_id", None),
299 **kwargs,
300 )
301 return cast(ChatGeneration, llm_result.generations[0][0]).message
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:713](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:713), in BaseChatModel.agenerate_prompt(self, prompts, stop, callbacks, **kwargs)
705 async def agenerate_prompt(
706 self,
707 prompts: List[PromptValue],
(...)
710 **kwargs: Any,
711 ) -> LLMResult:
712 prompt_messages = [p.to_messages() for p in prompts]
--> 713 return await self.agenerate(
714 prompt_messages, stop=stop, callbacks=callbacks, **kwargs
715 )
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:673](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:673), in BaseChatModel.agenerate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
660 if run_managers:
661 await asyncio.gather(
662 *[
663 run_manager.on_llm_end(
(...)
671 ]
672 )
--> 673 raise exceptions[0]
674 flattened_outputs = [
675 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item, union-attr]
676 for res in results
677 ]
678 llm_output = self._combine_llm_outputs([res.llm_output for res in results]) # type: ignore[union-attr]
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:846](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:846), in BaseChatModel._agenerate_with_cache(self, messages, stop, run_manager, **kwargs)
827 if (
828 type(self)._astream != BaseChatModel._astream
829 or type(self)._stream != BaseChatModel._stream
(...)
843 ),
844 ):
845 chunks: List[ChatGenerationChunk] = []
--> 846 async for chunk in self._astream(messages, stop=stop, **kwargs):
847 chunk.message.response_metadata = _gen_info_and_msg_metadata(chunk)
848 if run_manager:
File [~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:386](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:386), in AzureMLChatOnlineEndpoint._astream(self, messages, stop, run_manager, **kwargs)
383 params = {"stream": True, "stop": stop, "model": None, **kwargs}
385 default_chunk_class = AIMessageChunk
--> 386 async for chunk in await async_client.chat.completions.create(
387 messages=message_dicts, **params
388 ):
389 if not isinstance(chunk, dict):
390 chunk = chunk.dict()
File [~/anaconda3/lib/python3.11/site-packages/openai/resources/chat/completions.py:1159](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/openai/resources/chat/completions.py:1159), in AsyncCompletions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
1128 @required_args(["messages", "model"], ["messages", "model", "stream"])
1129 async def create(
1130 self,
(...)
1157 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
1158 ) -> ChatCompletion | AsyncStream[ChatCompletionChunk]:
-> 1159 return await self._post(
1160 "[/chat/completions](https://file+.vscode-resource.vscode-cdn.net/chat/completions)",
1161 body=await async_maybe_transform(
1162 {
1163 "messages": messages,
1164 "model": model,
1165 "frequency_penalty": frequency_penalty,
1166 "function_call": function_call,
1167 "functions": functions,
1168 "logit_bias": logit_bias,
1169 "logprobs": logprobs,
1170 "max_tokens": max_tokens,
1171 "n": n,
1172 "presence_penalty": presence_penalty,
1173 "response_format": response_format,
1174 "seed": seed,
1175 "stop": stop,
1176 "stream": stream,
1177 "temperature": temperature,
1178 "tool_choice": tool_choice,
1179 "tools": tools,
1180 "top_logprobs": top_logprobs,
1181 "top_p": top_p,
1182 "user": user,
1183 },
1184 completion_create_params.CompletionCreateParams,
1185 ),
1186 options=make_request_options(
1187 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
1188 ),
1189 cast_to=ChatCompletion,
1190 stream=stream or False,
1191 stream_cls=AsyncStream[ChatCompletionChunk],
1192 )
File [~/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:1790](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:1790), in AsyncAPIClient.post(self, path, cast_to, body, files, options, stream, stream_cls)
1776 async def post(
1777 self,
1778 path: str,
(...)
1785 stream_cls: type[_AsyncStreamT] | None = None,
1786 ) -> ResponseT | _AsyncStreamT:
1787 opts = FinalRequestOptions.construct(
1788 method="post", url=path, json_data=body, files=await async_to_httpx_files(files), **options
1789 )
-> 1790 return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
File [~/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:1493](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:1493), in AsyncAPIClient.request(self, cast_to, options, stream, stream_cls, remaining_retries)
1484 async def request(
1485 self,
1486 cast_to: Type[ResponseT],
(...)
1491 remaining_retries: Optional[int] = None,
1492 ) -> ResponseT | _AsyncStreamT:
-> 1493 return await self._request(
1494 cast_to=cast_to,
1495 options=options,
1496 stream=stream,
1497 stream_cls=stream_cls,
1498 remaining_retries=remaining_retries,
1499 )
File [~/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:1584](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:1584), in AsyncAPIClient._request(self, cast_to, options, stream, stream_cls, remaining_retries)
1581 await err.response.aread()
1583 log.debug("Re-raising status error")
-> 1584 raise self._make_status_error_from_response(err.response) from None
1586 return await self._process_response(
1587 cast_to=cast_to,
1588 options=options,
(...)
1591 stream_cls=stream_cls,
1592 )
APIStatusError: Error code: 424 - {'detail': 'Not Found'}
### Description
Hi,
I want to use a model from Azure ML in my LangGraph pipeline. The provided code works with several model loaders such as OllamaLLM or ChatGroq. However, I get an error when I switch to an Azure model loaded with AzureMLChatOnlineEndpoint: plain responses work, but `astream_events` does not.
When running the code with an Azure LLM I get this error: `APIStatusError: Error code: 424 - {'detail': 'Not Found'}`.
I observed the events in astream_events and saw that "on_chat_model_start" fires, but the next step is "on_chat_model_end" and the generation is of type None. I tried both `model_type_for_astream_event = "chat_model"` and `model_type_for_astream_event = "llm"`.
I think this is a bug, or do I have an error in my implementation?
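In case it helps triage, a hedged diagnostic sketch: the traceback shows the streaming path going through an OpenAI-style `/chat/completions` call, so reproducing that call outside LangChain shows whether the endpoint itself rejects streaming with 424. The URL, key and model name below are placeholders, not values from this setup:
```python
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="https://<endpoint>.<region>.inference.ml.azure.com/v1",
    api_key="<azure-ml-endpoint-key>",
)


async def probe() -> None:
    stream = await client.chat.completions.create(
        model="<deployment-name>",
        messages=[{"role": "user", "content": "Hallo"}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk)


asyncio.run(probe())
```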
### System Info
langchain 0.2.7 pypi_0 pypi
langchain-chroma 0.1.0 pypi_0 pypi
langchain-community 0.2.7 pypi_0 pypi
langchain-core 0.2.23 pypi_0 pypi
langchain-experimental 0.0.63 pypi_0 pypi
langchain-groq 0.1.5 pypi_0 pypi
langchain-huggingface 0.0.3 pypi_0 pypi
langchain-ollama 0.1.0 pypi_0 pypi
langchain-openai 0.1.7 pypi_0 pypi
langchain-postgres 0.0.3 pypi_0 pypi
langchain-text-splitters 0.2.1 pypi_0 pypi | Astream Events not working for AzureMLChatOnlineEndpoint | https://api.github.com/repos/langchain-ai/langchain/issues/24659/comments | 2 | 2024-07-25T08:59:18Z | 2024-07-25T15:59:17Z | https://github.com/langchain-ai/langchain/issues/24659 | 2,429,422,432 | 24,659 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
#Step 1
```
import os
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_qdrant import Qdrant, FastEmbedSparse, RetrievalMode
embeddings = HuggingFaceEmbeddings(model_name='OrdalieTech/Solon-embeddings-large-0.1', model_kwargs={"device": "cuda"})
sparse_embeddings = FastEmbedSparse(model_name="Qdrant/bm25")
vectordb = Qdrant.from_texts(
texts=texts,
embedding=embeddings,
sparse_embedding=sparse_embeddings,
sparse_vector_name="sparse-vector"
path=os.path.join(os.getcwd(), 'manuscrits_biblissima_vectordb'),
collection_name="manuscrits_biblissima",
retrieval_mode=RetrievalMode.HYBRID,
)
```
#Step 2
```
model_kwargs = {"device": "cuda"}
embeddings = HuggingFaceEmbeddings(
model_name='OrdalieTech/Solon-embeddings-large-0.1',
model_kwargs=model_kwargs
)
sparse_embeddings = FastEmbedSparse(
model_name="Qdrant/bm25",
model_kwargs=model_kwargs,
)
qdrant = QdrantVectorStore.from_existing_collection(
collection_name="manuscrits_biblissima",
path=os.path.join(os.getcwd(), 'manuscrits_biblissima_vectordb'),
retrieval_mode=RetrievalMode.HYBRID,
embedding=embeddings,
sparse_embedding=sparse_embeddings,
sparse_vector_name="sparse-vector"
)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/local/eferra01/data/get_ref_llama3_70B_gguf.py", line 101, in <module>
qdrant = QdrantVectorStore.from_existing_collection(
File "/local/eferra01/miniconda3/envs/llama-cpp-env/lib/python3.9/site-packages/langchain_qdrant/qdrant.py", line 286, in from_existing_collection
return cls(
File "/local/eferra01/miniconda3/envs/llama-cpp-env/lib/python3.9/site-packages/langchain_qdrant/qdrant.py", line 87, in __init__
self._validate_collection_config(
File "/local/eferra01/miniconda3/envs/llama-cpp-env/lib/python3.9/site-packages/langchain_qdrant/qdrant.py", line 937, in _validate_collection_config
cls._validate_collection_for_sparse(
File "/local/eferra01/miniconda3/envs/llama-cpp-env/lib/python3.9/site-packages/langchain_qdrant/qdrant.py", line 1022, in _validate_collection_for_sparse
raise QdrantVectorStoreError(
langchain_qdrant.qdrant.QdrantVectorStoreError: Existing Qdrant collection manuscrits_biblissima does not contain sparse vectors named None. If you want to recreate the collection, set force_recreate parameter to True.
```
### Description
I first create a Qdrant database (#Step 1).
Then, in another script, I try to load the database to do RAG (#Step 2).
However, I get the error above.
I named the sparse vectors when creating the database (Step 1) and made sure to pass the same name when loading the database for RAG (Step 2), but it doesn't seem to have been taken into account...
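One thing that might be worth trying (an assumption on my side, not a confirmed fix): creating the collection with the newer `QdrantVectorStore` class as well, so the sparse vector name recorded at creation time matches the one validated on reload:
```python
from langchain_qdrant import QdrantVectorStore, RetrievalMode

vectordb = QdrantVectorStore.from_texts(
    texts=texts,
    embedding=embeddings,
    sparse_embedding=sparse_embeddings,
    sparse_vector_name="sparse-vector",
    path=os.path.join(os.getcwd(), "manuscrits_biblissima_vectordb"),
    collection_name="manuscrits_biblissima",
    retrieval_mode=RetrievalMode.HYBRID,
)
```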
### System Info
langchain-qdrant==0.1.3
OS : Linux
OS Version : Linux dgx 6.1.0-18-amd64 https://github.com/langchain-ai/langchain/pull/1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 (2024-02-01) x86_64 GNU/Linux
Python Version : 3.9.19 | packaged by conda-forge | (main, Mar 20 2024, 12:50:21) \n[GCC 12.3.0] | sparse vectors name unknown | https://api.github.com/repos/langchain-ai/langchain/issues/24658/comments | 2 | 2024-07-25T08:20:55Z | 2024-07-25T10:54:11Z | https://github.com/langchain-ai/langchain/issues/24658 | 2,429,342,236 | 24,658 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Introduced by https://github.com/langchain-ai/langchain/commit/70761af8cfdcbe35e4719e1f358c735765efb020 - aiohttp has no `verify` parameter (https://github.com/langchain-ai/langchain/blame/master/libs/community/langchain_community/utilities/requests.py, line 65 & others), which causes the application to crash in an async context.
### Error Message and Stack Trace (if applicable)
### Description
See above; it can hardly be more descriptive. You need to replace `verify` with `verify_ssl`.
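A small runnable illustration of the parameter difference (the URL is arbitrary; note that current aiohttp releases prefer `ssl=` over the older `verify_ssl=`):
```python
import asyncio

import aiohttp


async def demo() -> None:
    async with aiohttp.ClientSession() as session:
        # session.get(url, verify=...) raises TypeError: aiohttp has no `verify` kwarg.
        async with session.get("https://example.com", ssl=False) as resp:
            print(resp.status)


asyncio.run(demo())
```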
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Mon, 15 Jul 2024 09:23:08 +0000
> Python Version: 3.12.4 (main, Jun 7 2024, 06:33:07) [GCC 14.1.1 20240522]
Package Information
-------------------
> langchain_core: 0.2.23
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.93
> langchain_cli: 0.0.24
> langchain_cohere: 0.1.9
> langchain_experimental: 0.0.63
> langchain_mongodb: 0.1.7
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langgraph: 0.1.11
> langserve: 0.2.1 | [Regression] SSL verification for requests wrapper crashes for async requests | https://api.github.com/repos/langchain-ai/langchain/issues/24654/comments | 0 | 2024-07-25T07:42:21Z | 2024-07-25T15:09:23Z | https://github.com/langchain-ai/langchain/issues/24654 | 2,429,267,518 | 24,654 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_experimental.llms.ollama_functions import OllamaFunctions
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\KALYAN\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain_experimental\llms\ollama_functions.py", line 44, in <module>
from langchain_core.utils.pydantic import is_basemodel_instance, is_basemodel_subclass
ImportError: cannot import name 'is_basemodel_instance' from 'langchain_core.utils.pydantic' (C:\Users\<Profile>\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain_core\utils\pydantic.py)
```
### Description
I'm trying to use langchain for tooling in Ollama, but I'm encountering an ImportError when attempting to initialize the Ollama Functions module. The error states that is_basemodel_instance cannot be imported from langchain_core.utils.pydantic.
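A hedged first check (the exact release that introduced the helper is an assumption on my part): `is_basemodel_instance` only exists in `langchain-core` releases newer than the 0.2.11 shown below, so verifying and upgrading the installed version is worth trying:
```python
from importlib.metadata import version

print(version("langchain-core"))  # 0.2.11 in this environment

# After upgrading, e.g. `pip install -U langchain-core langchain-experimental`,
# the import that ollama_functions performs should succeed:
from langchain_core.utils.pydantic import is_basemodel_instance, is_basemodel_subclass
```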
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.11
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.83
> langchain_experimental: 0.0.63
> langchain_fireworks: 0.1.4
> langchain_groq: 0.1.4
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
> langgraph: 0.1.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | Unable to Initialize the Ollama Functions Module Due to ImportError in Langchain Core Utils | https://api.github.com/repos/langchain-ai/langchain/issues/24652/comments | 1 | 2024-07-25T05:09:11Z | 2024-08-08T04:20:18Z | https://github.com/langchain-ai/langchain/issues/24652 | 2,429,035,203 | 24,652 |
[
"hwchase17",
"langchain"
] | ### URL
_No response_
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | DOC: <Please write a comprehensive title after the 'DOC: ' prefix>AttributeError: 'RunnableSequence' object has no attribute 'predict_and_parse' | https://api.github.com/repos/langchain-ai/langchain/issues/24651/comments | 1 | 2024-07-25T04:49:14Z | 2024-07-26T01:44:14Z | https://github.com/langchain-ai/langchain/issues/24651 | 2,428,992,744 | 24,651 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
class VectorStoreCreator:
"""
A class to create a vector store from documents.
Methods
-------
    create_vectorstore(documents, embed_model, collection_name):
Creates a vector store from a set of documents using the provided embedding model.
"""
@staticmethod
def create_vectorstore(documents, embed_model, collection_name):
"""
Creates a vector store from a set of documents using the provided embedding model.
This function utilizes the Chroma library to create a vector store, which is a
data structure that facilitates efficient similarity searches over the document
embeddings. Optionally, a persistent directory and collection name can be specified
for storing the vector store on disk.
Parameters
----------
documents : list
A list of documents to be embedded and stored.
embed_model : object
The embedding model used to convert documents into embeddings.
        collection_name : str
            The name of the Chroma collection in which to store the embeddings.
Returns
-------
object
A Chroma vector store instance containing the document embeddings.
"""
try:
# Create the vector store using Chroma
vectorstore = Chroma.from_texts(
texts=documents,
embedding=embed_model,
# persist_directory=f"chroma_db_{filepath}",
collection_name=f"{collection_name}"
)
logger.info("Vector store created successfully.")
return vectorstore
except Exception as e:
logger.error(f"An error occurred during vector store creation: {str(e)}")
return None
@staticmethod
def create_collection(file_name):
"""
Create a sanitized collection name from the given file name.
This method removes non-alphanumeric characters from the file name and truncates it to a maximum of 36 characters to form the collection name.
Args:
file_name (str): The name of the file from which to create the collection name.
Returns:
str: The sanitized and truncated collection name.
Raises:
Exception: If an error occurs during the collection name creation process, it logs the error.
"""
try:
collection_name = re.compile(r'[^a-zA-Z0-9]').sub('', file_name)[:36]
logger.info(f"A collection name created for the filename: {file_name} as {collection_name}")
return collection_name
except Exception as e:
logger.error(f"An errro occured during the collection name creation : {str(e)}")
@staticmethod
def delete_vectorstore(collection_name):
"""
Delete the specified vector store collection.
This method deletes a collection in the vector store identified by the collection name.
Args:
collection_name (str): The name of the collection to delete.
Returns:
None: This method does not return a value.
Raises:
Exception: If an error occurs during the deletion process, it logs the error.
"""
try:
Chroma.delete_collection()
return None
except Exception as e:
logger.error(f"An error occured during vector store deletion:{str(e)}")
return None
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to delete the collection while using Chroma, but it's not working (the code is in the Example Code section above). Could anyone help me fix this issue?
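For reference, a sketch of how deletion is usually done (this modifies the usage above and assumes the vector store instance is still in scope): `delete_collection` is an instance method on the LangChain Chroma wrapper, so it needs to be called on the store object rather than on the `Chroma` class.
```python
vectorstore = VectorStoreCreator.create_vectorstore(documents, embed_model, collection_name)
# ... use the store ...
vectorstore.delete_collection()  # drops the underlying Chroma collection for this instance
```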
```
class VectorStoreCreator:
"""
A class to create a vector store from documents.
Methods
-------
create_vectorstore(documents, embed_model, filepath):
Creates a vector store from a set of documents using the provided embedding model.
"""
@staticmethod
def create_vectorstore(documents, embed_model, collection_name):
"""
Creates a vector store from a set of documents using the provided embedding model.
This function utilizes the Chroma library to create a vector store, which is a
data structure that facilitates efficient similarity searches over the document
embeddings. Optionally, a persistent directory and collection name can be specified
for storing the vector store on disk.
Parameters
----------
documents : list
A list of documents to be embedded and stored.
embed_model : object
The embedding model used to convert documents into embeddings.
filepath : str
The file path for persisting the vector store.
Returns
-------
object
A Chroma vector store instance containing the document embeddings.
"""
try:
# Create the vector store using Chroma
vectorstore = Chroma.from_texts(
texts=documents,
embedding=embed_model,
# persist_directory=f"chroma_db_{filepath}",
collection_name=f"{collection_name}"
)
logger.info("Vector store created successfully.")
return vectorstore
except Exception as e:
logger.error(f"An error occurred during vector store creation: {str(e)}")
return None
@staticmethod
def create_collection(file_name):
"""
Create a sanitized collection name from the given file name.
This method removes non-alphanumeric characters from the file name and truncates it to a maximum of 36 characters to form the collection name.
Args:
file_name (str): The name of the file from which to create the collection name.
Returns:
str: The sanitized and truncated collection name.
Raises:
Exception: If an error occurs during the collection name creation process, it logs the error.
"""
try:
collection_name = re.compile(r'[^a-zA-Z0-9]').sub('', file_name)[:36]
logger.info(f"A collection name created for the filename: {file_name} as {collection_name}")
return collection_name
except Exception as e:
logger.error(f"An errro occured during the collection name creation : {str(e)}")
@staticmethod
def delete_vectorstore(collection_name):
"""
Delete the specified vector store collection.
This method deletes a collection in the vector store identified by the collection name.
Args:
collection_name (str): The name of the collection to delete.
Returns:
None: This method does not return a value.
Raises:
Exception: If an error occurs during the deletion process, it logs the error.
"""
try:
Chroma.delete_collection()
return None
except Exception as e:
logger.error(f"An error occurred during vector store deletion: {str(e)}")
return None
```
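For reference, a sketch of how deletion could work here (assuming the store object returned by `create_vectorstore` is still available): `delete_collection` is an instance method on the LangChain `Chroma` wrapper, so calling it on the class, as `delete_vectorstore` does above, never targets the collection that was actually created.
```
# Sketch only; documents, embed_model and collection_name are assumed to come
# from the calling code, as in the class above.
vectorstore = VectorStoreCreator.create_vectorstore(documents, embed_model, collection_name)
# ... use the store ...
vectorstore.delete_collection()  # drops this instance's underlying collection
```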
### System Info
langchain==0.1.10 | Delete collection for chroma not Working. | https://api.github.com/repos/langchain-ai/langchain/issues/24650/comments | 1 | 2024-07-25T04:38:42Z | 2024-08-10T12:57:30Z | https://github.com/langchain-ai/langchain/issues/24650 | 2,428,975,672 | 24,650 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Setup:
```
from typing import Any, Dict, List, Optional
from langchain.chat_models import ChatOpenAI
from langchain_core.callbacks.base import BaseCallbackHandler, BaseCallbackManager
from langchain_core.output_parsers import StrOutputParser
from langchain.prompts import PromptTemplate
prompt = PromptTemplate(
input_variables=["question"],
template="Answer this question: {question}",
)
model = prompt | ChatOpenAI(temperature=0) | StrOutputParser()
from typing import Any, Dict, List, Optional
from langchain_core.callbacks.base import (
AsyncCallbackHandler,
BaseCallbackHandler,
BaseCallbackManager,
)
class CustomCallbackHandler(BaseCallbackHandler):
def on_chain_start(self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) -> None:
print("chain_start")
def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
print("chain_end")
```
Invoking with a list of callbacks => each chain event prints three times.
```
model.invoke("Hi", config={"callbacks": [CustomCallbackHandler()]})
# > Output:
# chain_start
# chain_start
# chain_end
# chain_start
# chain_end
# chain_end
# 'Hello! How can I assist you today?'
```
Invoking with a callback manager => chain events print only once
```
model.invoke("Hi", config={"callbacks": BaseCallbackManager([CustomCallbackHandler()])})
# > Output:
# chain_start
# chain_end
# 'Hello! How can I assist you today?'
```
### Error Message and Stack Trace (if applicable)
NA
### Description
When passing callbacks to the runnable's `.invoke` method, there are two ways to do that:
1. Pass as a list: `model.invoke("Hi", config={"callbacks": [CustomCallbackHandler()]})`
2. Pass as a callback manager: `model.invoke("Hi", config={"callbacks": BaseCallbackManager([CustomCallbackHandler()])})`
However, the behavior differs between the two: the former triggers the handler more times than the latter.
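For what it's worth, here is a small experiment that might help narrow this down; the explanation in the comments is my assumption about handler inheritance, not confirmed behavior:
```
# Guess: a plain list is registered as *inheritable* callbacks, so the handler also
# fires for the child runs (prompt, chat model, output parser), while
# BaseCallbackManager([handler]) only sets top-level handlers. If that is right,
# this manager should reproduce the repeated chain events:
manager = BaseCallbackManager(handlers=[], inheritable_handlers=[CustomCallbackHandler()])
model.invoke("Hi", config={"callbacks": manager})
```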
### System Info
System Information
------------------
> OS: Linux
> OS Version: #70~20.04.1-Ubuntu SMP Fri Jun 14 15:42:13 UTC 2024
> Python Version: 3.11.0rc1 (main, Aug 12 2022, 10:02:14) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.2.23
> langchain: 0.2.10
> langchain_community: 0.0.38
> langsmith: 0.1.93
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Callbacks called different times when passed in a list or callback manager. | https://api.github.com/repos/langchain-ai/langchain/issues/24642/comments | 7 | 2024-07-25T00:55:28Z | 2024-07-30T01:33:54Z | https://github.com/langchain-ai/langchain/issues/24642 | 2,428,719,527 | 24,642 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.1/docs/get_started/introduction/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | DOC: Page Navigation link references (href); Page's navigation links at the bottom incorrectly references the same page instead of the next. | https://api.github.com/repos/langchain-ai/langchain/issues/24627/comments | 0 | 2024-07-24T20:42:18Z | 2024-07-24T20:44:48Z | https://github.com/langchain-ai/langchain/issues/24627 | 2,428,436,331 | 24,627 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_experimental.graph_transformers.llm import create_simple_model
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
temperature=0,
model_name="gpt-4o-mini-2024-07-18"
)
schema = create_simple_model(
node_labels = ["Person", "Organization"],
rel_types = ["KNOWS", "EMPLOYED_BY"],
llm_type = llm._llm_type # openai-chat
)
print(schema.schema_json(indent=4))
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The `_Graph` pydantic model generated by `create_simple_model` (which `LLMGraphTransformer` uses when allowed nodes and relationships are provided) does not constrain the relationships (source and target node types, relationship type) or the node and relationship properties with enums when using ChatOpenAI.
One can see this by outputting the JSON schema of the generated `_Graph` model and noting that `enum` is missing from every field except `SimpleNode.type`.
**The issue is that when calling `optional_enum_field` throughout `create_simple_model` the `llm_type` parameter is not passed in except for when creating node type. Passing it into each call fixes the issue.**
```json
{
"title": "DynamicGraph",
"description": "Represents a graph document consisting of nodes and relationships.",
"type": "object",
"properties": {
"nodes": {
"title": "Nodes",
"description": "List of nodes",
"type": "array",
"items": {
"$ref": "#/definitions/SimpleNode"
}
},
"relationships": {
"title": "Relationships",
"description": "List of relationships",
"type": "array",
"items": {
"$ref": "#/definitions/SimpleRelationship"
}
}
},
"definitions": {
"SimpleNode": {
"title": "SimpleNode",
"type": "object",
"properties": {
"id": {
"title": "Id",
"description": "Name or human-readable unique identifier.",
"type": "string"
},
"type": {
"title": "Type",
"description": "The type or label of the node.. Available options are ['Person', 'Organization']",
"enum": [
"Person",
"Organization"
],
"type": "string"
}
},
"required": [
"id",
"type"
]
},
"SimpleRelationship": {
"title": "SimpleRelationship",
"type": "object",
"properties": {
"source_node_id": {
"title": "Source Node Id",
"description": "Name or human-readable unique identifier of source node",
"type": "string"
},
"source_node_type": {
"title": "Source Node Type",
"description": "The type or label of the source node.. Available options are ['Person', 'Organization']",
"type": "string"
},
"target_node_id": {
"title": "Target Node Id",
"description": "Name or human-readable unique identifier of target node",
"type": "string"
},
"target_node_type": {
"title": "Target Node Type",
"description": "The type or label of the target node.. Available options are ['Person', 'Organization']",
"type": "string"
},
"type": {
"title": "Type",
"description": "The type of the relationship.. Available options are ['KNOWS', 'EMPLOYED_BY']",
"type": "string"
}
},
"required": [
"source_node_id",
"source_node_type",
"target_node_id",
"target_node_type",
"type"
]
}
}
}
```
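Building on the example above, here is a small check that makes the missing constraint explicit (the schema path is taken directly from the JSON output shown above) and can be reused to verify a fix:
```python
rel_props = schema.schema()["definitions"]["SimpleRelationship"]["properties"]
# Passes today, demonstrating the bug: no enum constraint on the source node type.
assert "enum" not in rel_props["source_node_type"]
# After llm_type is forwarded to every optional_enum_field call, this should flip.
```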
### System Info
```bash
> pip freeze | grep langchain
langchain==0.2.10
langchain-community==0.2.9
langchain-core==0.2.22
langchain-experimental==0.0.62
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
```
platform: wsl2 windows
Python 3.10.14 | graph_transformers.llm.py create_simple_model not constraining relationships with enums when using OpenAI LLM | https://api.github.com/repos/langchain-ai/langchain/issues/24615/comments | 0 | 2024-07-24T16:27:18Z | 2024-07-24T16:30:04Z | https://github.com/langchain-ai/langchain/issues/24615 | 2,428,013,260 | 24,615 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Defining the Bing Search Tool
from langchain_community.utilities import BingSearchAPIWrapper
from langchain_community.tools.bing_search import BingSearchResults
import os
BING_SUBSCRIPTION_KEY = os.getenv("BING_SUBSCRIPTION_KEY")
api_wrapper = BingSearchAPIWrapper(bing_subscription_key = BING_SUBSCRIPTION_KEY, bing_search_url = 'https://api.bing.microsoft.com/v7.0/search')
bing_tool = BingSearchResults(api_wrapper=api_wrapper)
# Defining the Agent elements
from langchain.agents import AgentExecutor
from langchain_openai import AzureChatOpenAI
from langchain_core.runnables import RunnablePassthrough
from langchain_core.utils.utils import convert_to_secret_str
from langchain.agents.format_scratchpad.openai_tools import (
format_to_openai_tool_messages,
)
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain.agents.format_scratchpad.openai_tools import (
format_to_openai_tool_messages,
)
from langchain import hub
instructions = """You are an assistant."""
base_prompt = hub.pull("langchain-ai/openai-functions-template")
prompt = base_prompt.partial(instructions=instructions)
llm = AzureChatOpenAI(
azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME"),
azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
api_key=convert_to_secret_str(os.getenv("AZURE_OPENAI_API_KEY")), # type: ignore
api_version=os.getenv("AZURE_OPENAI_API_VERSION"), # type: ignore
temperature=0,
)
bing_tools = [bing_tool]
bing_llm_with_tools = llm.bind(tools=[convert_to_openai_tool(tool) for tool in bing_tools])
# Defining the Agent
from langchain_core.runnables import RunnablePassthrough, RunnableSequence
bing_agent = RunnableSequence(
RunnablePassthrough.assign(
agent_scratchpad=lambda x: format_to_openai_tool_messages(
x["intermediate_steps"]
)
),
# RunnablePassthrough()
prompt,
bing_llm_with_tools,
OpenAIToolsAgentOutputParser(),
)
# Defining the Agent Executor
bing_agent_executor = AgentExecutor(
agent=bing_agent,
tools=bing_tools,
verbose=True,
)
# Calling the Agent Executor
bing_agent_executor.invoke({"input":"tell me about the last version of angular"})
```
### Error Message and Stack Trace (if applicable)
TypeError: Object of type CallbackManagerForToolRun is not JSON serializable
```
{
"name": "TypeError",
"message": "Object of type CallbackManagerForToolRun is not JSON serializable",
"stack": "---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[31], line 1
----> 1 bing_agent_executor.invoke({\"input\":\"tell me about the last version of angular\"})
3 print(\"done\")
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs)
164 except BaseException as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
169 if include_run_info:
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
153 try:
154 self._validate_inputs(inputs)
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
159 )
161 final_outputs: Dict[str, Any] = self.prep_outputs(
162 inputs, outputs, return_only_outputs
163 )
164 except BaseException as e:
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py:1612, in AgentExecutor._call(self, inputs, run_manager)
1610 # We now enter the agent loop (until it returns something).
1611 while self._should_continue(iterations, time_elapsed):
-> 1612 next_step_output = self._take_next_step(
1613 name_to_tool_map,
1614 color_mapping,
1615 inputs,
1616 intermediate_steps,
1617 run_manager=run_manager,
1618 )
1619 if isinstance(next_step_output, AgentFinish):
1620 return self._return(
1621 next_step_output, intermediate_steps, run_manager=run_manager
1622 )
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py:1318, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1309 def _take_next_step(
1310 self,
1311 name_to_tool_map: Dict[str, BaseTool],
(...)
1315 run_manager: Optional[CallbackManagerForChainRun] = None,
1316 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1317 return self._consume_next_step(
-> 1318 [
1319 a
1320 for a in self._iter_next_step(
1321 name_to_tool_map,
1322 color_mapping,
1323 inputs,
1324 intermediate_steps,
1325 run_manager,
1326 )
1327 ]
1328 )
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py:1318, in <listcomp>(.0)
1309 def _take_next_step(
1310 self,
1311 name_to_tool_map: Dict[str, BaseTool],
(...)
1315 run_manager: Optional[CallbackManagerForChainRun] = None,
1316 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1317 return self._consume_next_step(
-> 1318 [
1319 a
1320 for a in self._iter_next_step(
1321 name_to_tool_map,
1322 color_mapping,
1323 inputs,
1324 intermediate_steps,
1325 run_manager,
1326 )
1327 ]
1328 )
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py:1346, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1343 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
1345 # Call the LLM to see what to do.
-> 1346 output = self.agent.plan(
1347 intermediate_steps,
1348 callbacks=run_manager.get_child() if run_manager else None,
1349 **inputs,
1350 )
1351 except OutputParserException as e:
1352 if isinstance(self.handle_parsing_errors, bool):
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain/agents/agent.py:580, in RunnableMultiActionAgent.plan(self, intermediate_steps, callbacks, **kwargs)
572 final_output: Any = None
573 if self.stream_runnable:
574 # Use streaming to make sure that the underlying LLM is invoked in a
575 # streaming
(...)
578 # Because the response from the plan is not a generator, we need to
579 # accumulate the output into final output and return that.
--> 580 for chunk in self.runnable.stream(inputs, config={\"callbacks\": callbacks}):
581 if final_output is None:
582 final_output = chunk
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:3253, in RunnableSequence.stream(self, input, config, **kwargs)
3247 def stream(
3248 self,
3249 input: Input,
3250 config: Optional[RunnableConfig] = None,
3251 **kwargs: Optional[Any],
3252 ) -> Iterator[Output]:
-> 3253 yield from self.transform(iter([input]), config, **kwargs)
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:3240, in RunnableSequence.transform(self, input, config, **kwargs)
3234 def transform(
3235 self,
3236 input: Iterator[Input],
3237 config: Optional[RunnableConfig] = None,
3238 **kwargs: Optional[Any],
3239 ) -> Iterator[Output]:
-> 3240 yield from self._transform_stream_with_config(
3241 input,
3242 self._transform,
3243 patch_config(config, run_name=(config or {}).get(\"run_name\") or self.name),
3244 **kwargs,
3245 )
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:2053, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
2051 try:
2052 while True:
-> 2053 chunk: Output = context.run(next, iterator) # type: ignore
2054 yield chunk
2055 if final_output_supported:
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:3202, in RunnableSequence._transform(self, input, run_manager, config, **kwargs)
3199 else:
3200 final_pipeline = step.transform(final_pipeline, config)
-> 3202 for output in final_pipeline:
3203 yield output
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:1271, in Runnable.transform(self, input, config, **kwargs)
1268 final: Input
1269 got_first_val = False
-> 1271 for ichunk in input:
1272 # The default implementation of transform is to buffer input and
1273 # then call stream.
1274 # It'll attempt to gather all input into a single chunk using
1275 # the `+` operator.
1276 # If the input is not addable, then we'll assume that we can
1277 # only operate on the last chunk,
1278 # and we'll iterate until we get to the last chunk.
1279 if not got_first_val:
1280 final = ichunk
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:5264, in RunnableBindingBase.transform(self, input, config, **kwargs)
5258 def transform(
5259 self,
5260 input: Iterator[Input],
5261 config: Optional[RunnableConfig] = None,
5262 **kwargs: Any,
5263 ) -> Iterator[Output]:
-> 5264 yield from self.bound.transform(
5265 input,
5266 self._merge_configs(config),
5267 **{**self.kwargs, **kwargs},
5268 )
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py:1289, in Runnable.transform(self, input, config, **kwargs)
1286 final = ichunk
1288 if got_first_val:
-> 1289 yield from self.stream(final, config, **kwargs)
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:365, in BaseChatModel.stream(self, input, config, stop, **kwargs)
358 except BaseException as e:
359 run_manager.on_llm_error(
360 e,
361 response=LLMResult(
362 generations=[[generation]] if generation else []
363 ),
364 )
--> 365 raise e
366 else:
367 run_manager.on_llm_end(LLMResult(generations=[[generation]]))
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:345, in BaseChatModel.stream(self, input, config, stop, **kwargs)
343 generation: Optional[ChatGenerationChunk] = None
344 try:
--> 345 for chunk in self._stream(messages, stop=stop, **kwargs):
346 if chunk.message.id is None:
347 chunk.message.id = f\"run-{run_manager.run_id}\"
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:513, in BaseChatOpenAI._stream(self, messages, stop, run_manager, **kwargs)
505 def _stream(
506 self,
507 messages: List[BaseMessage],
(...)
510 **kwargs: Any,
511 ) -> Iterator[ChatGenerationChunk]:
512 kwargs[\"stream\"] = True
--> 513 payload = self._get_request_payload(messages, stop=stop, **kwargs)
514 default_chunk_class: Type[BaseMessageChunk] = AIMessageChunk
515 if self.include_response_headers:
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:604, in BaseChatOpenAI._get_request_payload(self, input_, stop, **kwargs)
601 if stop is not None:
602 kwargs[\"stop\"] = stop
603 return {
--> 604 \"messages\": [_convert_message_to_dict(m) for m in messages],
605 **self._default_params,
606 **kwargs,
607 }
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:604, in <listcomp>(.0)
601 if stop is not None:
602 kwargs[\"stop\"] = stop
603 return {
--> 604 \"messages\": [_convert_message_to_dict(m) for m in messages],
605 **self._default_params,
606 **kwargs,
607 }
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:198, in _convert_message_to_dict(message)
196 message_dict[\"function_call\"] = message.additional_kwargs[\"function_call\"]
197 if message.tool_calls or message.invalid_tool_calls:
--> 198 message_dict[\"tool_calls\"] = [
199 _lc_tool_call_to_openai_tool_call(tc) for tc in message.tool_calls
200 ] + [
201 _lc_invalid_tool_call_to_openai_tool_call(tc)
202 for tc in message.invalid_tool_calls
203 ]
204 elif \"tool_calls\" in message.additional_kwargs:
205 message_dict[\"tool_calls\"] = message.additional_kwargs[\"tool_calls\"]
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:199, in <listcomp>(.0)
196 message_dict[\"function_call\"] = message.additional_kwargs[\"function_call\"]
197 if message.tool_calls or message.invalid_tool_calls:
198 message_dict[\"tool_calls\"] = [
--> 199 _lc_tool_call_to_openai_tool_call(tc) for tc in message.tool_calls
200 ] + [
201 _lc_invalid_tool_call_to_openai_tool_call(tc)
202 for tc in message.invalid_tool_calls
203 ]
204 elif \"tool_calls\" in message.additional_kwargs:
205 message_dict[\"tool_calls\"] = message.additional_kwargs[\"tool_calls\"]
File ~/.cache/pypoetry/virtualenvs/test-py3.11/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:1777, in _lc_tool_call_to_openai_tool_call(tool_call)
1771 def _lc_tool_call_to_openai_tool_call(tool_call: ToolCall) -> dict:
1772 return {
1773 \"type\": \"function\",
1774 \"id\": tool_call[\"id\"],
1775 \"function\": {
1776 \"name\": tool_call[\"name\"],
-> 1777 \"arguments\": json.dumps(tool_call[\"args\"]),
1778 },
1779 }
File ~/.pyenv/versions/3.11.9/lib/python3.11/json/__init__.py:231, in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
226 # cached encoder
227 if (not skipkeys and ensure_ascii and
228 check_circular and allow_nan and
229 cls is None and indent is None and separators is None and
230 default is None and not sort_keys and not kw):
--> 231 return _default_encoder.encode(obj)
232 if cls is None:
233 cls = JSONEncoder
File ~/.pyenv/versions/3.11.9/lib/python3.11/json/encoder.py:200, in JSONEncoder.encode(self, o)
196 return encode_basestring(o)
197 # This doesn't pass the iterator directly to ''.join() because the
198 # exceptions aren't as detailed. The list call should be roughly
199 # equivalent to the PySequence_Fast that ''.join() would do.
--> 200 chunks = self.iterencode(o, _one_shot=True)
201 if not isinstance(chunks, (list, tuple)):
202 chunks = list(chunks)
File ~/.pyenv/versions/3.11.9/lib/python3.11/json/encoder.py:258, in JSONEncoder.iterencode(self, o, _one_shot)
253 else:
254 _iterencode = _make_iterencode(
255 markers, self.default, _encoder, self.indent, floatstr,
256 self.key_separator, self.item_separator, self.sort_keys,
257 self.skipkeys, _one_shot)
--> 258 return _iterencode(o, 0)
File ~/.pyenv/versions/3.11.9/lib/python3.11/json/encoder.py:180, in JSONEncoder.default(self, o)
161 def default(self, o):
162 \"\"\"Implement this method in a subclass such that it returns
163 a serializable object for ``o``, or calls the base implementation
164 (to raise a ``TypeError``).
(...)
178
179 \"\"\"
--> 180 raise TypeError(f'Object of type {o.__class__.__name__} '
181 f'is not JSON serializable')
TypeError: Object of type CallbackManagerForToolRun is not JSON serializable"
}
```
### Description
I'm trying to use the Bing Search tool in an Agent Executor.
The search tool itself works, and even the agent on its own works; the problem only appears when I run it through an Agent Executor.
The same issue occurs when using the Google Search tool from the langchain-google-community package:
```python
from langchain_google_community import GoogleSearchAPIWrapper, GoogleSearchResults
google_tool = GoogleSearchResults(api_wrapper=GoogleSearchAPIWrapper())
```
Instead, it **does not** occur with DuckDuckGo
```python
from langchain_community.tools import DuckDuckGoSearchResults
duckduckgo_tool = DuckDuckGoSearchResults()
```
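A possible workaround while this is open: my assumption is that the crash is related to how the tool's argument schema is inferred from `BingSearchResults`, so wrapping the API wrapper in a plain function tool keeps the schema down to a single query string (the function name below is made up):
```python
from langchain_core.tools import tool

@tool
def bing_search(query: str) -> str:
    """Search Bing and return the raw results."""
    return str(api_wrapper.results(query, 4))

bing_tools = [bing_search]
```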
### System Info
From `python -m langchain_core.sys_info`
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.11.9 (main, Jun 27 2024, 21:37:40) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.23
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.93
> langchain_google_community: 1.0.7
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langserve: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
``` | Agent Executor using some specific search tools is causing an error | https://api.github.com/repos/langchain-ai/langchain/issues/24614/comments | 4 | 2024-07-24T16:10:03Z | 2024-08-04T06:27:06Z | https://github.com/langchain-ai/langchain/issues/24614 | 2,427,980,563 | 24,614 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangGraph/LangChain rather than my code.
- [X] I am sure this is better as an issue [rather than a GitHub discussion](https://github.com/langchain-ai/langgraph/discussions/new/choose), since this is a LangGraph bug and not a design question.
### Example Code
```python
from langchain_core.messages import HumanMessage

# `graph` is the compiled multi-agent graph built earlier in the tutorial
for s in graph.stream(
{
"messages": [
HumanMessage(content="Code hello world and print it to the terminal")
]
}
):
if "__end__" not in s:
print(s)
print("----")
```
### Error Message and Stack Trace (if applicable)
```shell
TypeError('Object of type CallbackManagerForToolRun is not JSON serializable')Traceback (most recent call last):
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\pregel\__init__.py", line 946, in stream
_panic_or_proceed(done, inflight, loop.step)
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\pregel\__init__.py", line 1347, in _panic_or_proceed
raise exc
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\pregel\executor.py", line 60, in done
task.result()
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\pregel\retry.py", line 25, in run_with_retry
task.proc.invoke(task.input, task.config)
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 2873, in invoke
input = step.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\utils.py", line 102, in invoke
ret = context.run(self.func, input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\arthur.lachini\AppData\Local\Temp\ipykernel_8788\519499601.py", line 3, in agent_node
result = agent.invoke(state)
^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\chains\base.py", line 166, in invoke
raise e
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\chains\base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\agents\agent.py", line 1612, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\agents\agent.py", line 1318, in _take_next_step
[
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\agents\agent.py", line 1346, in _iter_next_step
output = self.agent.plan(
^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\agents\agent.py", line 580, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 3253, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 3240, in transform
yield from self._transform_stream_with_config(
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 2053, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 3202, in _transform
for output in final_pipeline:
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 1271, in transform
for ichunk in input:
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 5264, in transform
yield from self.bound.transform(
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 1289, in transform
yield from self.stream(final, config, **kwargs)
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\language_models\chat_models.py", line 365, in stream
raise e
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\language_models\chat_models.py", line 345, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_openai\chat_models\base.py", line 513, in _stream
payload = self._get_request_payload(messages, stop=stop, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_openai\chat_models\base.py", line 604, in _get_request_payload
"messages": [_convert_message_to_dict(m) for m in messages],
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_openai\chat_models\base.py", line 199, in _convert_message_to_dict
_lc_tool_call_to_openai_tool_call(tc) for tc in message.tool_calls
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_openai\chat_models\base.py", line 1777, in _lc_tool_call_to_openai_tool_call
"arguments": json.dumps(tool_call["args"]),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\json\__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\json\encoder.py", line 200, in encode
chunks = self.iterencode(o, _one_shot=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\json\encoder.py", line 258, in iterencode
return _iterencode(o, 0)
^^^^^^^^^^^^^^^^^
File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\json\encoder.py", line 180, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type CallbackManagerForToolRun is not JSON serializable
```
### Description
I tried to replicate the tutorial on my local machine, but the coder function does not work as it is supposed to. The researcher function works just fine and can run multiple consecutive searches, but as soon as the coder agent is called, it breaks the run. I've attached screenshots of the LangSmith dashboard to provide further insight into the error.
![Sem título](https://github.com/user-attachments/assets/46a957ed-b90b-4389-b31d-b39b639685cc)
![Sem título2](https://github.com/user-attachments/assets/6e544eec-23b7-43b9-8302-561e9c8bf331)
![Sem título3](https://github.com/user-attachments/assets/39db1f44-0875-4c84-879a-149102eac708)
### System Info
Windows 10
Python 3.12.4
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.4.0
asttokens==2.4.1
attrs==23.2.0
certifi==2024.7.4
charset-normalizer==3.3.2
colorama==0.4.6
comm==0.2.2
contourpy==1.2.1
cycler==0.12.1
dataclasses-json==0.6.7
debugpy==1.8.2
decorator==5.1.1
distro==1.9.0
executing==2.0.1
fonttools==4.53.1
frozenlist==1.4.1
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
idna==3.7
ipykernel==6.29.5
ipython==8.26.0
jedi==0.19.1
jsonpatch==1.33
jsonpointer==3.0.0
jupyter_client==8.6.2
jupyter_core==5.7.2
kiwisolver==1.4.5
langchain==0.2.11
langchain-community==0.2.10
langchain-core==0.2.23
langchain-experimental==0.0.63
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
langchainhub==0.1.20
langgraph==0.1.10
langsmith==0.1.93
marshmallow==3.21.3
matplotlib==3.9.1
matplotlib-inline==0.1.7
multidict==6.0.5
mypy-extensions==1.0.0
nest-asyncio==1.6.0
numpy==1.26.4
openai==1.37.0
orjson==3.10.6
packaging==24.1
parso==0.8.4
pillow==10.4.0
platformdirs==4.2.2
prompt_toolkit==3.0.47
psutil==6.0.0
pure_eval==0.2.3
pydantic==2.8.2
pydantic_core==2.20.1
Pygments==2.18.0
pyparsing==3.1.2
python-dateutil==2.9.0.post0
pywin32==306
PyYAML==6.0.1
pyzmq==26.0.3
regex==2024.5.15
requests==2.32.3
six==1.16.0
sniffio==1.3.1
SQLAlchemy==2.0.31
stack-data==0.6.3
tenacity==8.5.0
tiktoken==0.7.0
tornado==6.4.1
tqdm==4.66.4
traitlets==5.14.3
types-requests==2.32.0.20240712
typing-inspect==0.9.0
typing_extensions==4.12.2
urllib3==2.2.2
wcwidth==0.2.13
yarl==1.9.4 | TypeError('Object of type CallbackManagerForToolRun is not JSON serializable') on Coder agent | https://api.github.com/repos/langchain-ai/langchain/issues/24621/comments | 11 | 2024-07-24T14:35:00Z | 2024-08-07T12:30:47Z | https://github.com/langchain-ai/langchain/issues/24621 | 2,428,311,547 | 24,621 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/output_parser_fixing/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I was running the code from the "How to use the output-fixing parser" page. After running the last line of code, `new_parser.parse(misformatted)`, instead of fixing the malformed output and returning the corrected result, it raises an error:
```
ValidationError: 1 validation error for Generation
text
str type expected (type=type_error.str)
```
### Idea or request for content:
_No response_ | DOC: Running Output-fixing parser example code results in an error | https://api.github.com/repos/langchain-ai/langchain/issues/24600/comments | 1 | 2024-07-24T10:04:19Z | 2024-07-25T21:58:22Z | https://github.com/langchain-ai/langchain/issues/24600 | 2,427,134,422 | 24,600 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
def dumpd(obj: Any) -> Any:
"""Return a json dict representation of an object."""
#result = json.loads(dumps(obj))
_id: List[str] = []
try:
if hasattr(obj, "__name__"):
_id = [*obj.__module__.split("."), obj.__name__]
elif hasattr(obj, "__class__"):
_id = [*obj.__class__.__module__.split("."), obj.__class__.__name__]
except Exception:
pass
result = {
"lc": 1,
"type": "not_implemented",
"id": _id,
"repr": None,
}
name = getattr(obj, "name", None)
if name:
result['name'] = name
return result
```
### Error Message and Stack Trace (if applicable)
None
### Description
dumpd is much too slow. For a complex chain like ours, it costs an extra 1s per request. We replaced it with an implementation based on to_json_not_implemented. Please fix it properly; at least use Serializable.to_json() when possible.
In the original code, we use `Serializable.to_json()` or `to_json_not_implemented` to get a json dict, then dump it as json_str, then load it to get the original json dict. Why? This seems quite redundant. **Just use to_json_not_implemented or Serializable.to_json() will be much faster**. It is not difficult to code a special Serializable.to_json() that only gives str json_dict | dumpd costs extra 1s per invoke | https://api.github.com/repos/langchain-ai/langchain/issues/24599/comments | 0 | 2024-07-24T08:52:39Z | 2024-07-25T07:01:08Z | https://github.com/langchain-ai/langchain/issues/24599 | 2,426,969,368 | 24,599 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.document_loaders import WebBaseLoader
from langchain_core.pydantic_v1 import BaseModel
from langchain_community.embeddings import QianfanEmbeddingsEndpoint
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma
import os
from langchain_community.llms import QianfanLLMEndpoint
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
# Define the embedding model
embeddings = QianfanEmbeddingsEndpoint(
qianfan_ak='****',
qianfan_sk='****',
chunk_size= 16,
model="Embedding-V1"
)
### Error Message and Stack Trace (if applicable)
USER_AGENT environment variable not set, consider setting it to identify your requests.
Traceback (most recent call last):
File "C:\Users\ISSUSER\PycharmProjects\pythonProject\LangChainRetrievalChain.py", line 23, in <module>
embeddings = QianfanEmbeddingsEndpoint(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ISSUSER\AppData\Local\Programs\Python\Python312\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for QianfanEmbeddingsEndpoint
qianfan_ak
str type expected (type=type_error.str)
qianfan_sk
str type expected (type=type_error.str)
### Description
The `qianfan_ak` value ('****') has been double-checked and is correct.
The `qianfan_sk` value ('****') has been double-checked and is correct.
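As a possible workaround (assuming the keys themselves are valid), the credentials can also be supplied through environment variables, which the wrapper reads as well; the variable names below are my assumption and should be checked against the Qianfan integration docs:
```python
import os
from langchain_community.embeddings import QianfanEmbeddingsEndpoint

os.environ["QIANFAN_AK"] = "****"  # placeholder
os.environ["QIANFAN_SK"] = "****"  # placeholder

embeddings = QianfanEmbeddingsEndpoint(model="Embedding-V1", chunk_size=16)
```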
### System Info
C:\Users\ISSUSER>pip list
Package Version
------------------------ --------
aiohttp 3.9.5
aiolimiter 1.1.0
aiosignal 1.3.1
annotated-types 0.7.0
attrs 23.2.0
bce-python-sdk 0.9.17
beautifulsoup4 4.12.3
bs4 0.0.2
certifi 2024.7.4
charset-normalizer 3.3.2
click 8.1.7
colorama 0.4.6
comtypes 1.4.5
dataclasses-json 0.6.7
dill 0.3.8
diskcache 5.6.3
frozenlist 1.4.1
future 1.0.0
greenlet 3.0.3
idna 3.7
jsonpatch 1.33
jsonpointer 3.0.0
langchain 0.2.9
langchain-community 0.2.7
langchain-core 0.2.21
langchain-text-splitters 0.2.2
langsmith 0.1.92
markdown-it-py 3.0.0
marshmallow 3.21.3
mdurl 0.1.2
multidict 6.0.5
multiprocess 0.70.16
mypy-extensions 1.0.0
numpy 1.26.4
orjson 3.10.6
packaging 24.1
pip 24.1.2
prompt_toolkit 3.0.47
pycryptodome 3.20.0
pydantic 2.8.2
pydantic_core 2.20.1
Pygments 2.18.0
python-dotenv 1.0.1
PyYAML 6.0.1
qianfan 0.4.1.2
requests 2.32.3
rich 13.7.1
shellingham 1.5.4
six 1.16.0
soupsieve 2.5
SQLAlchemy 2.0.31
tenacity 8.5.0
typer 0.12.3
typing_extensions 4.12.2
typing-inspect 0.9.0
uiautomation 2.0.20
urllib3 2.2.2
validators 0.33.0
wcwidth 0.2.13
yarl 1.9.4 | QianfanEmbeddingsEndpoint error in LangChain 0.2.9 | https://api.github.com/repos/langchain-ai/langchain/issues/24590/comments | 0 | 2024-07-24T01:28:50Z | 2024-07-24T01:31:22Z | https://github.com/langchain-ai/langchain/issues/24590 | 2,426,398,316 | 24,590 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
loader = S3DirectoryLoader(bucket=s3_bucket_name, prefix=s3_prefix)
try:
documents = loader.load()
logging.info(f"size of the loaded documents {len(documents)}")
except Exception as e:
logging.info(f"error loading documents: {e}")
### Error Message and Stack Trace (if applicable)
Detected a JSON file that does not conform to the Unstructured schema. partition_json currently only processes serialized Unstructured output.
doc = loader.load()
^^^^^^^^^^^^^
File "/prj/.venv/lib/python3.12/site-packages/langchain_community/document_loaders/s3_directory.py", line 139, in load
docs.extend(loader.load())
^^^^^^^^^^^^^
File "/prj/.venv/lib/python3.12/site-packages/langchain_core/document_loaders/base.py", line 30, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "/prj/.venv/lib/python3.12/site-packages/langchain_community/document_loaders/unstructured.py", line 89, in lazy_load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "/prj/.venv/lib/python3.12/site-packages/langchain_community/document_loaders/s3_file.py", line 135, in _get_elements
return partition(filename=file_path, **self.unstructured_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/prj/.venv/lib/python3.12/site-packages/unstructured/partition/auto.py", line 389, in partition
raise ValueError(
ValueError: Detected a JSON file that does not conform to the Unstructured schema. partition_json currently only processes serialized Unstructured output.
### Description
My S3 bucket has a single folder, this folder contains json files.
Bucket name: "abc-bc-name"
Prefix: "output"
file content is json
{
"abc": "This is a text json file",
"source": "https://asf.test/4865422_f4866011606d84f50d10e60e0b513b7",
"correlation_id": "4865422_f4866011606d84f50d10e60e0b513b7"
}
### System Info
langchain==0.2.10
langchain-cli==0.0.25
langchain-community==0.2.9
langchain-core==0.2.22
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
macOS
Python 3.12.0 | Detected a JSON file that does not conform to the Unstructured schema. partition_json currently only processes serialized Unstructured output while using langchain S3DirectoryLoader | https://api.github.com/repos/langchain-ai/langchain/issues/24588/comments | 3 | 2024-07-24T00:00:20Z | 2024-08-02T23:38:10Z | https://github.com/langchain-ai/langchain/issues/24588 | 2,426,320,642 | 24,588 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
# This part works as expected
llm = HuggingFaceEndpoint(endpoint_url="http://127.0.0.1:8080")
# This part raises huggingface_hub.errors.LocalTokenNotFoundError
chat_llm = ChatHuggingFace(llm=llm)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
.venv/lib/python3.10/site-packages/langchain_huggingface/chat_models/huggingface.py", line 320, in __init__
self._resolve_model_id()
.venv/lib/python3.10/site-packages/langchain_huggingface/chat_models/huggingface.py", line 458, in _resolve_model_id
available_endpoints = list_inference_endpoints("*")
.venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 7081, in list_inference_endpoints
user = self.whoami(token=token)
.venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
.venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 1390, in whoami
headers=self._build_hf_headers(
.venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 8448, in _build_hf_headers
return build_hf_headers(
.venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
.venv/lib/python3.10/site-packages/huggingface_hub/utils/_headers.py", line 124, in build_hf_headers
token_to_send = get_token_to_send(token)
.venv/lib/python3.10/site-packages/huggingface_hub/utils/_headers.py", line 158, in get_token_to_send
raise LocalTokenNotFoundError(
huggingface_hub.errors.LocalTokenNotFoundError: Token is required (`token=True`), but no token found. You need to provide a token or be logged in to Hugging Face with `huggingface-cli login` or `huggingface_hub.login`. See https://huggingface.co/settings/tokens.
### Description
- I am trying to use the `langchain_huggingface` library to connect to a TGI instance served locally. The problem is that wrapping a `HuggingFaceEndpoint` in `ChatHuggingFace` raises an error requesting a user token, even though a token shouldn't be necessary when the model has already been downloaded and is being served locally.
- There is a similar issue #23872, but the fix mentioned there doesn't work, because adding the `model_id` parameter to `ChatHuggingFace` doesn't prevent falling into the following case:
```python
class ChatHuggingFace(BaseChatModel):
"""Hugging Face LLM's as ChatModels.
...
""" # noqa: E501
...
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
from transformers import AutoTokenizer # type: ignore[import]
self._resolve_model_id() # ---> Even when providing the model_id it will enter here
self.tokenizer = (
AutoTokenizer.from_pretrained(self.model_id)
if self.tokenizer is None
else self.tokenizer
)
...
def _resolve_model_id(self) -> None:
"""Resolve the model_id from the LLM's inference_server_url"""
from huggingface_hub import list_inference_endpoints # type: ignore[import]
if _is_huggingface_hub(self.llm) or (
hasattr(self.llm, "repo_id") and self.llm.repo_id
):
self.model_id = self.llm.repo_id
return
elif _is_huggingface_textgen_inference(self.llm):
endpoint_url: Optional[str] = self.llm.inference_server_url
elif _is_huggingface_pipeline(self.llm):
self.model_id = self.llm.model_id
return
else: # This is the case we are in when _is_huggingface_endpoint() is True
endpoint_url = self.llm.endpoint_url
available_endpoints = list_inference_endpoints("*") # ---> This line raises the error if we don't provide the hf token
for endpoint in available_endpoints:
if endpoint.url == endpoint_url:
self.model_id = endpoint.repository
if not self.model_id:
raise ValueError(
"Failed to resolve model_id:"
f"Could not find model id for inference server: {endpoint_url}"
"Make sure that your Hugging Face token has access to the endpoint."
)
```
I was able to solve the issue by modifying the constructor so that, when a `model_id` is provided, it skips the resolution step:
```python
class ChatHuggingFace(BaseChatModel):
"""Hugging Face LLM's as ChatModels.
...
""" # noqa: E501
...
def __init__(self, **kwargs: Any):
super().__init__(**kwargs)
from transformers import AutoTokenizer # type: ignore[import]
self.model_id or self._resolve_model_id() # ---> Not a good solution: if model_id is invalid, the tokenizer instantiation will fail (only when a tokenizer is not provided), and it also won't check the other hf_hub inference cases
self.tokenizer = (
AutoTokenizer.from_pretrained(self.model_id)
if self.tokenizer is None
else self.tokenizer
)
```
I imagine there is a better way to solve this, for example by adding some logic to check whether the `endpoint_url` is a valid IP to request, whether it is served with TGI, or simply whether it points to localhost:
```python
class ChatHuggingFace(BaseChatModel):
"""Hugging Face LLM's as ChatModels.
...
""" # noqa: E501
...
def _resolve_model_id(self) -> None:
"""Resolve the model_id from the LLM's inference_server_url"""
from huggingface_hub import list_inference_endpoints # type: ignore[import]
if _is_huggingface_hub(self.llm) or (
hasattr(self.llm, "repo_id") and self.llm.repo_id
):
self.model_id = self.llm.repo_id
return
elif _is_huggingface_textgen_inference(self.llm):
endpoint_url: Optional[str] = self.llm.inference_server_url
elif _is_huggingface_pipeline(self.llm):
self.model_id = self.llm.model_id
return
elif _is_huggingface_endpoint(self.llm): # ---> New case added to check url
... # Take the following code with a grain of salt
if is_tgi_hosted(self.llm.endpoint_url):
if not self.model_id and not self.tokenizer:
raise ValueError("You must provide valid model id or a valid tokenizer")
return
...
endpoint_url = self.llm.endpoint_url
else: # ---> New last case in which no valid huggingface interface was provided
raise TypeError("llm must be `HuggingFaceTextGenInference`, `HuggingFaceEndpoint`, `HuggingFaceHub`, or `HuggingFacePipeline`.")
available_endpoints = list_inference_endpoints("*")
for endpoint in available_endpoints:
if endpoint.url == endpoint_url:
self.model_id = endpoint.repository
if not self.model_id:
raise ValueError(
"Failed to resolve model_id:"
f"Could not find model id for inference server: {endpoint_url}"
"Make sure that your Hugging Face token has access to the endpoint."
)
```
### System Info
System Information
------------------
> OS: Linux
> OS Version: #126-Ubuntu SMP Mon Jul 1 10:14:24 UTC 2024
> Python Version: 3.10.14 (main, Jul 18 2024, 23:22:54) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.22
> langchain: 0.2.10
> langchain_community: 0.2.9
> langsmith: 0.1.93
> langchain_google_community: 1.0.7
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.2 | langchain-huggingface: Using ChatHuggingFace requires hf token for local TGI using localhost HuggingFaceEndpoint | https://api.github.com/repos/langchain-ai/langchain/issues/24571/comments | 3 | 2024-07-23T19:49:50Z | 2024-07-24T13:41:56Z | https://github.com/langchain-ai/langchain/issues/24571 | 2,426,003,836 | 24,571 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import OpenAIEmbeddings
from langchain_qdrant import QdrantVectorStore
openai_api_key = ''
qdrant_api_key = ''
qdrant_url = ''
qdrant_collection = ''
query = ''
embeddings = OpenAIEmbeddings(api_key=openai_api_key, )
qdrant = QdrantVectorStore.from_existing_collection(
embedding=embeddings,
url=qdrant_url,
api_key=qdrant_api_key,
collection_name=qdrant_collection,
)
retriever = qdrant.as_retriever()
print(retriever.invoke(query)[0])
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/alexanderschmidt/Projects/qdrant_issue/main.py", line 10, in <module>
qdrant = QdrantVectorStore.from_existing_collection(
File "/Users/alexanderschmidt/.local/share/virtualenvs/qdrant_issue-MiqCFk3H/lib/python3.9/site-packages/langchain_qdrant/qdrant.py", line 286, in from_existing_collection
return cls(
File "/Users/alexanderschmidt/.local/share/virtualenvs/qdrant_issue-MiqCFk3H/lib/python3.9/site-packages/langchain_qdrant/qdrant.py", line 87, in __init__
self._validate_collection_config(
File "/Users/alexanderschmidt/.local/share/virtualenvs/qdrant_issue-MiqCFk3H/lib/python3.9/site-packages/langchain_qdrant/qdrant.py", line 924, in _validate_collection_config
cls._validate_collection_for_dense(
File "/Users/alexanderschmidt/.local/share/virtualenvs/qdrant_issue-MiqCFk3H/lib/python3.9/site-packages/langchain_qdrant/qdrant.py", line 978, in _validate_collection_for_dense
vector_config = vector_config[vector_name] # type: ignore
TypeError: 'VectorParams' object is not subscriptable
### Description
I am not able to get the Qdrant retriever (`as_retriever`) working and always receive the error message:
TypeError: 'VectorParams' object is not subscriptable
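A possible workaround, assuming the collection was originally created with unnamed (default) vectors, for example by the older `Qdrant` class, which would leave the vector config as a single `VectorParams` object instead of a dict of named vectors (this is a hedged guess, not a confirmed diagnosis); the legacy wrapper still handles that layout:
```python
from langchain_qdrant import Qdrant
from qdrant_client import QdrantClient

client = QdrantClient(url=qdrant_url, api_key=qdrant_api_key)
qdrant = Qdrant(client=client, collection_name=qdrant_collection, embeddings=embeddings)
retriever = qdrant.as_retriever()
```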
### System Info
❯ python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Thu Dec 21 02:29:41 PST 2023; root:xnu-10002.81.5~11/RELEASE_ARM64_T8122
> Python Version: 3.9.6 (default, Feb 3 2024, 15:58:27)
[Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.22
> langchain: 0.2.10
> langsmith: 0.1.93
> langchain_openai: 0.1.17
> langchain_qdrant: 0.1.2
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | TypeError: 'VectorParams' object is not subscriptable | https://api.github.com/repos/langchain-ai/langchain/issues/24558/comments | 6 | 2024-07-23T15:49:15Z | 2024-07-25T13:15:59Z | https://github.com/langchain-ai/langchain/issues/24558 | 2,425,545,329 | 24,558 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_ollama import ChatOllama
MODEL_NAME = "some_local_model"
MODEL_API_BASE_URL = "http://<some_host>:11434"
# there is no possibility to supply base_url
# as it is done in `from langchain_community.llms.ollama import Ollama` package
llm = ChatOllama(model=MODEL_NAME)
```
### Error Message and Stack Trace (if applicable)
Since the underlying `ollama` client ends up using `localhost`, the API call fails with connection refused.
### Description
I am trying to use the partner package langchain_ollama. My Ollama server is running on another machine, and the API does not provide a way to specify the `base_url`.
`Ollama` from `langchain_community.llms.ollama` does provide that support.
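As a stop-gap, here is a sketch of a workaround using the community implementation, which does accept `base_url` (assuming the community `ChatOllama` is otherwise acceptable here):
```python
from langchain_community.chat_models import ChatOllama as CommunityChatOllama

llm = CommunityChatOllama(model=MODEL_NAME, base_url=MODEL_API_BASE_URL)
```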
### System Info
langchain==0.2.10
langchain-community==0.2.9
langchain-core==0.2.22
langchain-experimental==0.0.62
langchain-ollama==0.1.0
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
langchainhub==0.1.20 | ChatOllama & Ollama from langchain_ollama partner package does not provide support to pass base_url | https://api.github.com/repos/langchain-ai/langchain/issues/24555/comments | 8 | 2024-07-23T15:26:20Z | 2024-07-28T18:25:59Z | https://github.com/langchain-ai/langchain/issues/24555 | 2,425,496,515 | 24,555 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_milvus.vectorstores import Milvus
from langchain.schema import Document
from langchain_community.embeddings import OllamaEmbeddings
URI = "<mymilvusURI>"
# Initialize embedding function
embedding_function = OllamaEmbeddings(
    model="<model>",
    base_url="<myhostedURL>",
)
# Milvus vector store initialization parameters
collection_name = "example_collection"
# Initialize the Milvus vector store
milvus_store = Milvus(
    embedding_function=embedding_function,
    collection_name=collection_name,
    connection_args={"uri": URI},
    drop_old=True,  # Set to True if you want to drop the old collection if it exists
    auto_id=True,
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
There appears to be an issue with the Milvus vector store implementation where the collection is not being created during initialization. This occurs because the `_create_collection` method is never called when initializing the `Milvus` class without providing embeddings.
1. When initializing `Milvus()` without providing embeddings, the `_init` method is called from `__init__`.
2. In `_init`, the collection creation is conditional on `embeddings` being provided:
```python
if embeddings is not None:
    self._create_collection(embeddings, metadatas)
```
Am I missing something here?
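For completeness, the only way I have found so far to get the collection created is to insert at least one document, since creation appears to happen lazily on the first insert rather than in `__init__`. The sketch reuses `milvus_store` from the example above, and the document content is just a placeholder:
```python
from langchain_core.documents import Document

# Adding a document computes its embedding and (apparently) triggers the
# collection creation that never happens in __init__.
milvus_store.add_documents([Document(page_content="hello milvus", metadata={"source": "test"})])
# Milvus.from_documents(...) / Milvus.from_texts(...) also create the collection up front.
```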
### System Info
linux
python 3.10.12
| Milvus Vector Store: Collection Not Created During Initialization | https://api.github.com/repos/langchain-ai/langchain/issues/24554/comments | 0 | 2024-07-23T14:16:09Z | 2024-07-23T14:18:42Z | https://github.com/langchain-ai/langchain/issues/24554 | 2,425,334,524 | 24,554 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import List, Tuple
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from my_embeddings import my_embeddings
vectorStore = Chroma(
    collection_name="products",
    embedding_function=my_embeddings,
    persist_directory="./database",
)

# these two functions should give the same result, but the relevance scores are different
def get_similar_docs1(sku: str, count: int) -> List[Tuple[Document, float]]:
    base_query = vectorStore.get(ids=sku).get("documents")[0]
    return vectorStore.similarity_search_with_relevance_scores(query=base_query, k=(count + 1))[1:]

def get_similar_docs2(sku: str, count: int) -> List[Tuple[Document, float]]:
    base_vector = vectorStore.get(ids=sku, include=["embeddings"]).get("embeddings")[0]
    return vectorStore.similarity_search_by_vector_with_relevance_scores(embedding=base_vector, k=(count + 1))[1:]
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am writing a function that finds the `count` most similar documents to the document with id `sku`.
I started with the first function and it works as expected. I then tried to rewrite the function so that it retrieves the stored embedding vector and does not have to calculate it again. This returns the same documents as the first function (also in the same order), but the relevance scores are completely different. Firstly, it seems that the most relevant result now has the lowest relevance score, but even if I do `(1 - score)` I do not get the same score as in the first function.
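In case it helps with triage, my working guess is that the by-vector variant returns raw distances while the query variant converts distances into relevance scores. As a check (not a fix, since `_select_relevance_score_fn` is a private API) I can apply the store's own conversion to the raw values; the sketch reuses `base_vector` and `count` from `get_similar_docs2` above:
```python
relevance_fn = vectorStore._select_relevance_score_fn()  # maps distance -> relevance score
docs_and_distances = vectorStore.similarity_search_by_vector_with_relevance_scores(
    embedding=base_vector, k=(count + 1)
)
docs_and_scores = [(doc, relevance_fn(dist)) for doc, dist in docs_and_distances[1:]]
```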
### System Info
System Information
------------------
> OS: Linux
> OS Version: #38-Ubuntu SMP PREEMPT_DYNAMIC Fri Jun 7 15:25:01 UTC 2024
> Python Version: 3.12.3 (main, Apr 10 2024, 05:33:47) [GCC 13.2.0]
Package Information
-------------------
> langchain_core: 0.2.13
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.85
> langchain_huggingface: 0.0.3
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langgraph: 0.1.7
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | Chroma - wrong relevance scores. | https://api.github.com/repos/langchain-ai/langchain/issues/24545/comments | 1 | 2024-07-23T11:24:29Z | 2024-07-23T11:46:17Z | https://github.com/langchain-ai/langchain/issues/24545 | 2,424,952,624 | 24,545 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.prompts import PromptTemplate
from langchain_huggingface.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
```
### Error Message and Stack Trace (if applicable)
ImportError: cannot import name 'AutoModelForCausalLM' from partially initialized module 'transformers' (most likely due to a circular import) (~\venv2\Lib\site-packages\transformers\__init__.py)
### Description
I created a virtual environment "venv2". After running the command `pip install langchain_huggingface`, I can't import `AutoModelForCausalLM` from transformers.
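One thing I can check on my side (just a guess: the "partially initialized module" wording usually points at something shadowing the real package, for example a local file named `transformers.py`, rather than at langchain_huggingface itself):
```python
import importlib.util

spec = importlib.util.find_spec("transformers")
# If this prints a path inside the project instead of site-packages,
# a local module is shadowing the installed transformers package.
print(spec.origin)
```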
### System Info
annotated-types==0.7.0
certifi==2024.7.4
charset-normalizer==3.3.2
colorama==0.4.6
filelock==3.15.4
fsspec==2024.6.1
huggingface-hub==0.24.0
idna==3.7
intel-openmp==2021.4.0
Jinja2==3.1.4
joblib==1.4.2
jsonpatch==1.33
jsonpointer==3.0.0
langchain-core==0.2.22
langchain-huggingface==0.0.3
langsmith==0.1.93
MarkupSafe==2.1.5
mkl==2021.4.0
mpmath==1.3.0
networkx==3.3
numpy==1.26.4
orjson==3.10.6
packaging==24.1
pillow==10.4.0
pydantic==2.8.2
pydantic_core==2.20.1
PyYAML==6.0.1
regex==2024.5.15
requests==2.32.3
safetensors==0.4.3
scikit-learn==1.5.1
scipy==1.14.0
sentence-transformers==3.0.1
sympy==1.13.1
tbb==2021.13.0
tenacity==8.5.0
threadpoolctl==3.5.0
tokenizers==0.19.1
torch==2.3.1
tqdm==4.66.4
transformers==4.42.4
typing_extensions==4.12.2
urllib3==2.2.2 | ImportError: cannot import name 'AutoModelForCausalLM' from partially initialized module 'transformers' (most likely due to a circular import) | https://api.github.com/repos/langchain-ai/langchain/issues/24542/comments | 0 | 2024-07-23T09:54:16Z | 2024-07-23T09:59:00Z | https://github.com/langchain-ai/langchain/issues/24542 | 2,424,769,491 | 24,542 |
[
"hwchase17",
"langchain"
] | ### URL
https://js.langchain.com/v0.2/docs/integrations/retrievers/vectorstore
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
With the new version of the docs (v0.2) for LangChain JS, it is getting hard to find the exact information developers are looking for. Version v0.1 was pretty handy and contained descriptions of all the retrievers. Finding the same content in v0.2 is very difficult. Please update the content or the website so it is easier to navigate.
Otherwise, the overall functionality is awesome.
### Idea or request for content:
I am mainly focused on improving the descriptions of every part of LangChain v0.2 | DOC: Need improvement in the langchain js docs v0.2 | https://api.github.com/repos/langchain-ai/langchain/issues/24540/comments | 0 | 2024-07-23T09:52:49Z | 2024-07-23T09:53:25Z | https://github.com/langchain-ai/langchain/issues/24540 | 2,424,766,130 | 24,540
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
`import langchain_google_genai` raises an ImportError (see the traceback below).
### Error Message and Stack Trace (if applicable)
ImportError Traceback (most recent call last)
[<ipython-input-34-26070003cb78>](https://localhost:8080/#) in <cell line: 6>()
4 # !pip install --upgrade langchain
5 # from langchain_google_genai import GoogleGenerativeAI
----> 6 import langchain_google_genai# import GoogleGenerativeAI
7
8 # llm = ChatGoogleGenerativeAI(model="gemini-pro")
1 frames
[/usr/local/lib/python3.10/dist-packages/langchain_google_genai/__init__.py](https://localhost:8080/#) in <module>
57
58 from langchain_google_genai._enums import HarmBlockThreshold, HarmCategory
---> 59 from langchain_google_genai.chat_models import ChatGoogleGenerativeAI
60 from langchain_google_genai.embeddings import GoogleGenerativeAIEmbeddings
61 from langchain_google_genai.genai_aqa import (
[/usr/local/lib/python3.10/dist-packages/langchain_google_genai/chat_models.py](https://localhost:8080/#) in <module>
54 )
55 from langchain_core.language_models import LanguageModelInput
---> 56 from langchain_core.language_models.chat_models import BaseChatModel, LangSmithParams
57 from langchain_core.messages import (
58 AIMessage,
ImportError: cannot import name 'LangSmithParams' from 'langchain_core.language_models.chat_models' (/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py)
### Description
I am trying to use the GoogleGenerativeAI wrapper for a project of mine.
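Possibly relevant: the package list below mixes release lines (langchain_community 0.0.38 alongside langchain_core 0.2.22), so the Colab runtime may be importing a stale or mismatched langchain-core. A purely diagnostic check I can run in the notebook:
```python
import langchain_core
import langchain_core.language_models.chat_models as cm

print(langchain_core.__version__, langchain_core.__file__)
# False here would confirm that the langchain-core actually being imported
# does not define LangSmithParams yet.
print(hasattr(cm, "LangSmithParams"))
```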
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
> Python Version: 3.10.12 (main, Mar 22 2024, 16:50:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.22
> langchain: 0.2.10
> langchain_community: 0.0.38
> langsmith: 0.1.93
> langchain_google_genai: 1.0.8
> langchain_openai: 0.1.7
> langchain_text_splitters: 0.2.2 | ImportError: cannot import name 'LangSmithParams' from 'langchain_core.language_models.chat_models'(import langchain_google_genai) in collab environment | https://api.github.com/repos/langchain-ai/langchain/issues/24533/comments | 6 | 2024-07-23T07:19:19Z | 2024-08-05T18:12:04Z | https://github.com/langchain-ai/langchain/issues/24533 | 2,424,456,171 | 24,533 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from datetime import date
import requests
from langchain_community.utilities import SerpAPIWrapper
from langchain_core.output_parsers import StrOutputParser
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain import hub
from langchain.agents import create_openai_functions_agent
from langchain.agents import AgentExecutor
serpapi_api_key = "xxxxxxxxxx"
api_key = "sk-xxxxxxxxx"
api_url = "https://ai-yyds.com/v1"
llm = ChatOpenAI(base_url=api_url, api_key=api_key, model_name="gpt-4")
prompt = hub.pull("hwchase17/openai-functions-agent")
print(prompt.messages)
@tool
def search(text: str):
    """This tool is only used when real-time information needs to be searched. The search returns only the first 3 items"""
    serp = SerpAPIWrapper(serpapi_api_key=serpapi_api_key)
    response = serp.run(text)
    print(type(response))
    content = ""
    if type(response) is list:
        for item in response[:3]:  # keep only the first 3 results
            content += str(item["title"]) + "\n"
    else:
        content = response
    return content


@tool
def time() -> str:
    """Return today's date and use it for any questions related to today's date.
    The input should always be an empty string, and this function will always return today's date. Any mathematical operation on a date should occur outside of this function"""
    return str(date.today())


@tool
def weather(city: str):
    """When you need to check the weather, you can use this tool, which returns the weather conditions for the day, tomorrow, and the day after tomorrow"""
    # look up the city that was passed in
    url = f"https://api.seniverse.com/v3/weather/daily.json?key=SrlXSW6OX9PssfOJ1&location={city}&language=zh-Hans&unit=c&start=0"
    response = requests.get(url)
    data = response.json()
    if not data or len(data['results']) == 0:
        return None
    daily = data['results'][0]["daily"]
    content = ""
    res = []
    for day in daily:
        info = {"city": city, "date": day["date"], "info": day["text_day"], "temperature_high": day["high"],
                "temperature_low": day["low"]}
        content += f"{city} date:{day['date']} info:{day['text_day']} maximum temperature:{day['high']} minimum temperature:{day['low']}\n"
        res.append(info)
    return content
tools = [time, weather, search]
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
response1 = agent_executor.invoke({"input": "What's the weather like in Shanghai tomorrow"})
print(response1)
```
### Error Message and Stack Trace (if applicable)
```
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain/agents/agent.py", line 1636, in _call
next_step_output = self._take_next_step(
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain/agents/agent.py", line 1342, in _take_next_step
[
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain/agents/agent.py", line 1342, in <listcomp>
[
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain/agents/agent.py", line 1370, in _iter_next_step
output = self.agent.plan(
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain/agents/agent.py", line 463, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3251, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3238, in transform
yield from self._transform_stream_with_config(
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 2052, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 3200, in _transform
for output in final_pipeline:
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 1270, in transform
for ichunk in input:
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 5262, in transform
yield from self.bound.transform(
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/runnables/base.py", line 1288, in transform
yield from self.stream(final, config, **kwargs)
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 360, in stream
raise e
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 340, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/langchain_openai/chat_models/base.py", line 520, in _stream
response = self.client.create(**payload)
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/openai/_utils/_utils.py", line 277, in wrapper
return func(*args, **kwargs)
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/openai/resources/chat/completions.py", line 643, in create
return self._post(
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/openai/_base_client.py", line 1266, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/openai/_base_client.py", line 942, in request
return self._request(
File "/home/gujiachun/PycharmProjects/rainbow-robot/.venv/lib/python3.8/site-packages/openai/_base_client.py", line 1046, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value for 'content': expected a string, got null. (request id: 20240723144941715017377sn10oSMg) (request id: 2024072306494157956522013257597)", 'type': 'invalid_request_error', 'param': 'messages.[2].content', 'code': None}}
```
### Description
When I execute the above code, it sometimes returns normally and sometimes fails with the following error:
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value for 'content': expected a string, got null. (request id: 20240723111146966761056DQSQiv7T) (request id: 2024072303114683478387128512399)", 'type': 'invalid_request_error', 'param': 'messages.[2].content', 'code': None}}
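Because the failure is intermittent, I plan to capture the exact payload of a failing turn with LangChain's debug tracing (this only adds logging and reuses `agent_executor` from the example above). My unverified guess is that `messages[2]` is the assistant function-call message built from the agent scratchpad, whose content gets serialized as null; the official OpenAI endpoint accepts that, but this proxy apparently does not:
```python
from langchain.globals import set_debug

set_debug(True)  # prints every message sent to the model, including the scratchpad
agent_executor.invoke({"input": "What's the weather like in Shanghai tomorrow"})
```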
### System Info
platform: Mac
python: 3.8
> langchain_core: 0.2.22
> langchain: 0.2.9
> langchain_community: 0.2.9
> langsmith: 0.1.90
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
openai 1.35.13
| openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid value for 'content': expected a string, got null | https://api.github.com/repos/langchain-ai/langchain/issues/24531/comments | 3 | 2024-07-23T06:52:24Z | 2024-07-24T10:28:41Z | https://github.com/langchain-ai/langchain/issues/24531 | 2,424,402,189 | 24,531 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain.globals import set_debug

set_debug(True)
prompt = PromptTemplate(template="user:{text}", input_variables=["text"])
model = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | model
chain.invoke({"text": "hello"})
```
### Error Message and Stack Trace (if applicable)
```
[llm/start] [chain:RunnableSequence > llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: user:hello"
  ]
}
```
### Description
Issue 1: Even when using a custom prompt, "Human: " is added to all of my prompts, which has been messing up my outputs.
Issue 2 (possible, unverified): This has me thinking that "\n AI:" is also added to the prompt, which would be in line with how my LLMs are reacting. For example, if I end the prompt with "\nSummary:\n", the AI sometimes repeats "summary" unless explicitly told not to.
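One comparison that might narrow this down (sketch only; my understanding, not verified against the raw request, is that the "Human: " prefix comes from how the debug logger renders a string prompt after it is converted into a user message, rather than from text literally sent to the model). It reuses `model` from the example above:
```python
from langchain_core.prompts import ChatPromptTemplate

# Explicit chat-style prompt: the user role is carried as message metadata
# instead of being rendered into a "Human: ..." transcript string.
chat_prompt = ChatPromptTemplate.from_messages([("user", "user:{text}")])
chat_chain = chat_prompt | model
chat_chain.invoke({"text": "hello"})
```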
### System Info
langchain==0.2.10
langchain-aws==0.1.6
langchain-community==0.2.5
langchain-core==0.2.22
langchain-experimental==0.0.61
langchain-google-genai==1.0.8
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
langchain-upstage==0.1.6
langchain-weaviate==0.0.2 | "Human: " added to the prompt. | https://api.github.com/repos/langchain-ai/langchain/issues/24525/comments | 2 | 2024-07-23T01:45:40Z | 2024-07-23T23:49:40Z | https://github.com/langchain-ai/langchain/issues/24525 | 2,424,066,811 | 24,525 |