issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.document_loaders import pdf
### Error Message and Stack Trace (if applicable)
ModuleNotFoundError: No module named 'langchain_community.document_loaders'; 'langchain_community' is not a package
### Description
My Python environment definitely has the langchain_community package installed; all the other packages work fine, only this one has a problem.
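A quick diagnostic sketch (assuming the usual cause of "'langchain_community' is not a package": a local file or folder named `langchain_community` shadowing the installed package):
```python
import langchain_community
# If this prints a path inside your project instead of site-packages,
# a local module is shadowing the installed package.
print(langchain_community.__file__)
```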
### System Info
Windows, Python 3.9.8 | No module named 'langchain_community.document_loaders'; 'langchain_community' is not a package | https://api.github.com/repos/langchain-ai/langchain/issues/22763/comments | 2 | 2024-06-11T03:22:44Z | 2024-06-13T16:01:58Z | https://github.com/langchain-ai/langchain/issues/22763 | 2,345,292,354 | 22,763 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
result=multimodal_search(query)
### Error Message and Stack Trace (if applicable)
/usr/local/lib/python3.10/dist-packages/grpc/_channel.py in _end_unary_response_blocking(state, call, with_call, deadline)
1004 return state.response
1005 else:
-> 1006 raise _InactiveRpcError(state) # pytype: disable=not-instantiable
1007
1008
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "DNS resolution failed for :10000: unparseable host:port"
debug_error_string = "UNKNOWN:Error received from peer {created_time:"2024-06-10T19:02:57.93240468+00:00", grpc_status:14, grpc_message:"DNS resolution failed for :10000: unparseable host:port"}"
### Description
I'm trying to call a vector_search function that I wrote to retrieve embeddings and answer the query, but I'm facing this error message.
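For context, a gRPC target of the form `:10000` means the host part was empty when the channel was created. A minimal sketch of how this can happen (the variable names are assumptions, not taken from the actual code):
```python
# If the endpoint host is unset, the target degenerates to ":10000",
# which gRPC cannot parse -> StatusCode.UNAVAILABLE / "unparseable host:port".
host = ""  # e.g. a missing environment variable or config value
target = f"{host}:10000"
```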
### System Info
python | _InactiveRpcError of RPC | https://api.github.com/repos/langchain-ai/langchain/issues/22762/comments | 1 | 2024-06-11T02:56:46Z | 2024-06-11T02:59:59Z | https://github.com/langchain-ai/langchain/issues/22762 | 2,345,269,783 | 22,762 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import GraphCypherQAChain
from langchain_community.graphs import Neo4jGraph
from langchain_openai import AzureChatOpenAI

neo4j_uri = "bolt://localhost:7687"
neo4j_user = "neo4j"
neo4j_password = "....."
graph = Neo4jGraph(
url=neo4j_uri,
username=neo4j_user,
password=neo4j_password,
database="....",
enhanced_schema=True,
)
cypher_chain = GraphCypherQAChain.from_llm(
cypher_llm=AzureChatOpenAI(
deployment_name="<.......>",
azure_endpoint="https://.........openai.azure.com/",
openai_api_key=".....",
api_version=".....",
temperature=0
),
qa_llm=AzureChatOpenAI(
deployment_name="......",
azure_endpoint="......",
openai_api_key="....",
api_version=".....",
temperature=0
),
graph=graph,
verbose=True,
)
response = cypher_chain.invoke(
{"query": "How many tasks do i have"}
)
```
### Error Message and Stack Trace (if applicable)
```bash
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 32768 tokens. However, your messages resulted in 38782 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
```
### Description
When employing the GraphCypherQAChain.from_llm function, it generates a Cypher query that outputs all properties, including embeddings. Currently, there is no functionality to selectively include or exclude specific properties from the documents, which results in utilizing the entire context window.
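One possible mitigation (an assumption: recent langchain-community releases expose a `sanitize` flag on `Neo4jGraph` that strips list properties longer than 128 elements, which typically catches embeddings, from query results):
```python
graph = Neo4jGraph(
    url=neo4j_uri,
    username=neo4j_user,
    password=neo4j_password,
    sanitize=True,  # drop embedding-like list properties from results
)
```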
### System Info
# Packages
langchain-community==0.2.2
neo4j==5.18.0/5.19.0/5.20.0
langchain==0.2.2
langchain-core==0.2.4
langchain-openai==0.1. | When using GraphCypherQAChain to fetch documents from Neo4j, the embeddings field is also returned, which consumes all context window tokens | https://api.github.com/repos/langchain-ai/langchain/issues/22755/comments | 2 | 2024-06-10T19:18:10Z | 2024-06-13T08:26:20Z | https://github.com/langchain-ai/langchain/issues/22755 | 2,344,660,169 | 22,755 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Here's a Pydantic model with a date in a union-typed attribute.
```python
from datetime import date
from pydantic import BaseModel
class Example(BaseModel):
attribute: date | str
```
Given a JSON string that contains a date, Pydantic discriminates the type and returns a `datetime.date` object.
```python
json_string = '{"attribute": "2024-01-01"}'
Example.model_validate_json(json_string)
# returns Example(attribute=datetime.date(2024, 1, 1))
```
However, PydanticOutputParser unexpectedly returns a string on the same JSON.
```python
from langchain.output_parsers import PydanticOutputParser
parser = PydanticOutputParser(pydantic_object=Example)
parser.parse(json_string)
# returns Example(attribute="2024-01-01")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
`PydanticOutputParser` isn't converting dates in union types (e.g. `date | str`) to `datetime.date` objects. The parser should be able to discriminate these types by working left-to-right. See Pydantic's approach in https://docs.pydantic.dev/latest/concepts/unions/.
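For what it's worth, the divergence can be reproduced with Pydantic alone, which suggests the parser validates an already-parsed Python dict rather than the raw JSON (a sketch; the explanation is an assumption):
```python
# Validating from JSON discriminates the union and yields a date ...
Example.model_validate_json('{"attribute": "2024-01-01"}')
# ... while validating a plain dict keeps the exact `str` match:
Example.model_validate({"attribute": "2024-01-01"})
```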
### System Info
I'm on macOS with Python 3.10. I can reproduce this issue with both LangChain `0.1` and `0.2`. | PydanticOutputParser Doesn't Parse Dates in Unions | https://api.github.com/repos/langchain-ai/langchain/issues/22740/comments | 4 | 2024-06-10T15:19:11Z | 2024-06-10T21:38:37Z | https://github.com/langchain-ai/langchain/issues/22740 | 2,344,188,762 | 22,740 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [ ] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_text_splitters import MarkdownHeaderTextSplitter, RecursiveCharacterTextSplitter
# Updated markdown_document with a new header 5 using **
markdown_document = """
# Intro
## History
Markdown[9] is a lightweight markup language for creating formatted text using a plain-text editor. John Gruber created Markdown in 2004 as a markup language that is appealing to human readers in its source code form.[9]
Markdown is widely used in blogging, instant messaging, online forums, collaborative software, documentation pages, and readme files.
## Rise and divergence
As Markdown popularity grew rapidly, many Markdown implementations appeared, driven mostly by the need for additional features such as tables, footnotes, definition lists,[note 1] and Markdown inside HTML blocks.
#### Standardization
From 2012, a group of people, including Jeff Atwood and John MacFarlane, launched what Atwood characterized as a standardisation effort.
## Implementations
Implementations of Markdown are available for over a dozen programming languages.
**New Header 5**
This is the content for the new header 5.
"""
# Headers to split on, including custom header 5 with **
headers_to_split_on = [
('\*\*.*?\*\*', "Header 5")
]
# Create the MarkdownHeaderTextSplitter
markdown_splitter = MarkdownHeaderTextSplitter(
headers_to_split_on=headers_to_split_on, strip_headers=False
)
# Split text based on headers
md_header_splits = markdown_splitter.split_text(markdown_document)
# Create the RecursiveCharacterTextSplitter
chunk_size = 250
chunk_overlap = 30
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
# Split documents
splits = text_splitter.split_documents(md_header_splits)
print(splits)
```
### Error Message and Stack Trace (if applicable)
<img width="1157" alt="image" src="https://github.com/langchain-ai/langchain/assets/54015474/70e95ca0-d9f8-41ef-b35a-0f851f9edbcb">
### Description
1. I tried to use MarkdownHeaderTextSplitter to split the text on "**New Header 5**"
2. I was able to do this with the `re` package using r'\*\*.*?\*\*'
3. but it failed with LangChain, and I wasn't able to find any example of a similar header in LangChain's documentation (see the note below)
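A note on the likely cause (an assumption based on the splitter's documented behavior): `headers_to_split_on` takes literal header prefixes, not regular expressions, and bold text is not a Markdown heading, so a pattern like `r'\*\*.*?\*\*'` is never matched:
```python
# MarkdownHeaderTextSplitter matches literal prefixes at the start of a line:
headers_to_split_on = [
    ("#", "Header 1"),
    ("##", "Header 2"),
    ("####", "Header 4"),
]
```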
### System Info
langchain-core==0.2.3
langchain-text-splitters==0.2.0 | MarkdownHeaderTextSplitter for header such like "**New Header 5**" | https://api.github.com/repos/langchain-ai/langchain/issues/22738/comments | 2 | 2024-06-10T14:19:28Z | 2024-06-11T06:21:52Z | https://github.com/langchain-ai/langchain/issues/22738 | 2,344,052,386 | 22,738 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/tools/eleven_labs_tts/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
"Elevenlabs has no attribute "generate
This doesnt seem to work with latest elevenlabs package
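A likely workaround (an assumption: the wrapper targets the pre-1.0 module-level API, where `elevenlabs.generate` existed) is to pin the SDK with `pip install "elevenlabs<1"` until the integration is updated.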
### Idea or request for content:
_No response_ | "Elevenlabs has no attribute "generate (only older versions of elevenlabs work with this wrapper) | https://api.github.com/repos/langchain-ai/langchain/issues/22736/comments | 1 | 2024-06-10T13:44:40Z | 2024-06-10T22:31:46Z | https://github.com/langchain-ai/langchain/issues/22736 | 2,343,969,449 | 22,736 |
[
"hwchase17",
"langchain"
] | ### URL
https://api.python.langchain.com/en/latest/vectorstores/langchain_chroma.vectorstores.Chroma.html#langchain_chroma.vectorstores.Chroma.similarity_search_with_score
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The documentation for langchain_core.vectorstore._similarity_search_with_relevance_scores()
> states: 0 is dissimilar, 1 is most similar.
The documentation for chroma.similarity_search_with_score() states:
> Lower score represents more similarity.
What is the correct interpretation?
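A plausible resolution (an assumption based on the two docstrings): the functions report different quantities. `similarity_search_with_score` returns the raw distance from the underlying index, where lower means more similar, while `similarity_search_with_relevance_scores` normalizes that distance into a [0, 1] relevance score, where higher means more similar:
```python
# `db` is a hypothetical Chroma vector store, `query` a string:
docs_and_distances = db.similarity_search_with_score(query)  # lower = closer
docs_and_scores = db.similarity_search_with_relevance_scores(query)  # 0..1, higher = closer
```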
### Idea or request for content:
_No response_ | DOC: inconsistency with similarity_search_with_score() | https://api.github.com/repos/langchain-ai/langchain/issues/22732/comments | 2 | 2024-06-10T09:09:28Z | 2024-06-12T16:41:29Z | https://github.com/langchain-ai/langchain/issues/22732 | 2,343,339,255 | 22,732 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders import OutlookMessageLoader
import os
file_path = "example.msg"
loader = OutlookMessageLoader(file_path)
documents = loader.load()
print(documents)
try:
os.remove(file_path)
except Exception as e:
print(e)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "test.py", line 16, in <module>
os.remove(file_path)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'example.msg'
### Description
**Describe the bug**
It seems that the `OutlookMessageLoader` does not close the file after extracting the text from the `.msg` file.
**To Reproduce**
Steps to reproduce the behavior:
1. Use the following example code:
```python
from langchain_community.document_loaders import OutlookMessageLoader
import os
file_path = "example.msg"
loader = OutlookMessageLoader(file_path)
documents = loader.load()
print(documents)
try:
os.remove(file_path)
except Exception as e:
print(e)
```
2. Run the code and observe the error.
**Expected behavior**
The file should be closed after processing, allowing it to be deleted without errors.
**Error**
```
[WinError 32] The process cannot access the file because it is being used by another process: 'example.msg'
```
**Additional context**
I looked into the `email.py` file of `langchain_community.document_loaders` and found the following code in the `lazy_load` function:
```python
import extract_msg
msg = extract_msg.Message(self.file_path)
yield Document(
page_content=msg.body,
metadata={
"source": self.file_path,
"subject": msg.subject,
"sender": msg.sender,
"date": msg.date,
},
)
```
It seems like the file is not being closed properly. Adding `msg.close()` should resolve the issue.
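A sketch of the suggested fix (wrapping the yield so the handle is closed even if iteration stops early; the try/finally is my addition, not the issue author's exact patch):
```python
import extract_msg

msg = extract_msg.Message(self.file_path)
try:
    yield Document(
        page_content=msg.body,
        metadata={
            "source": self.file_path,
            "subject": msg.subject,
            "sender": msg.sender,
            "date": msg.date,
        },
    )
finally:
    msg.close()  # release the file handle so the .msg can be deleted
```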
### System Info
**Langchain libraries**:
langchain==0.2.2
langchain-community==0.2.3
langchain-core==0.2.4
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
**Platform**: windows
**Python**: 3.12.3 | File Not Closed in OutlookMessageLoader of langchain_community Library | https://api.github.com/repos/langchain-ai/langchain/issues/22729/comments | 1 | 2024-06-10T07:35:42Z | 2024-06-10T22:33:15Z | https://github.com/langchain-ai/langchain/issues/22729 | 2,343,107,278 | 22,729 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def _generate(
self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> LLMResult:
"""Run the LLM on the given prompt and input."""
from vllm import SamplingParams
# build sampling parameters
params = {**self._default_params, **kwargs, "stop": stop}
sampling_params = SamplingParams(**params)
# call the model
outputs = self.client.generate(prompts, sampling_params)
generations = []
for output in outputs:
text = output.outputs[0].text
generations.append([Generation(text=text)])
return LLMResult(generations=generations)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Problem #15921 is still not fixed. Please fix it. Maybe initialize `stop` from `SamplingParams` by default; a sketch follows.
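A hedged sketch of one possible fix (only override `stop` when the caller actually provides it, so a default or kwargs-supplied stop list survives):
```python
# build sampling parameters without clobbering an existing stop list
params = {**self._default_params, **kwargs}
if stop is not None:
    params["stop"] = stop
sampling_params = SamplingParams(**params)
```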
### System Info
System Information
------------------
> OS: Linux
> OS Version: #172-Ubuntu SMP Fri Jul 7 16:10:02 UTC 2023
> Python Version: 3.10.14 (main, Apr 6 2024, 18:45:05) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.69
> langchain_chroma: 0.1.1
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.17 | Fix stop list of string in VLLM generate | https://api.github.com/repos/langchain-ai/langchain/issues/22717/comments | 6 | 2024-06-09T12:35:30Z | 2024-06-10T17:37:07Z | https://github.com/langchain-ai/langchain/issues/22717 | 2,342,210,409 | 22,717 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/graph/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Important: This happens with Python v3.12.4.
The below statement in the documentation (https://python.langchain.com/v0.2/docs/tutorials/graph/) fails
graph.query(movies_query)
with the below error.
[2](file:///C:/Try/Langchain-docs-tutorials/Working%20with%20external%20knowledge/6.%20Q%20and%20A%20over%20Graph%20database/.venv/Lib/site-packages/langchain_community/graphs/neo4j_graph.py:2) from typing import Any, Dict, List, Optional
[4](file:///C:/Try/Langchain-docs-tutorials/Working%20with%20external%20knowledge/6.%20Q%20and%20A%20over%20Graph%20database/.venv/Lib/site-packages/langchain_community/graphs/neo4j_graph.py:4) from langchain_core.utils import get_from_dict_or_env
----> [6](file:///C:/Try/Langchain-docs-tutorials/Working%20with%20external%20knowledge/6.%20Q%20and%20A%20over%20Graph%20database/.venv/Lib/site-packages/langchain_community/graphs/neo4j_graph.py:6) from langchain_community.graphs.graph_document import GraphDocument
...
[64](file:///C:/Try/Langchain-docs-tutorials/Working%20with%20external%20knowledge/6.%20Q%20and%20A%20over%20Graph%20database/.venv/Lib/site-packages/pydantic/v1/typing.py:64) # Even though it is the right signature for python 3.9, mypy complains with
[65](file:///C:/Try/Langchain-docs-tutorials/Working%20with%20external%20knowledge/6.%20Q%20and%20A%20over%20Graph%20database/.venv/Lib/site-packages/pydantic/v1/typing.py:65) # `error: Too many arguments for "_evaluate" of "ForwardRef"` hence the cast...
---> [66](file:///C:/Try/Langchain-docs-tutorials/Working%20with%20external%20knowledge/6.%20Q%20and%20A%20over%20Graph%20database/.venv/Lib/site-packages/pydantic/v1/typing.py:66) return cast(Any, type_)._evaluate(globalns, localns, set())
TypeError: ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard'
### Idea or request for content:
May be, the code in the documentation needs to be tested against latest python versions | Error while running graph.query(movies_query) with Python v3.12.4 | https://api.github.com/repos/langchain-ai/langchain/issues/22713/comments | 4 | 2024-06-09T07:31:58Z | 2024-06-13T10:26:21Z | https://github.com/langchain-ai/langchain/issues/22713 | 2,342,074,263 | 22,713 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
from langchain_huggingface import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
model_id="microsoft/Phi-3-mini-4k-instruct",
task="text-generation",
pipeline_kwargs={
"max_new_tokens": 100,
"top_k": 50,
"temperature": 0.1,
},
)
### Error Message and Stack Trace (if applicable)
Jun 9, 2024, 11:45:20 AM | WARNING | WARNING:root:kernel 2d2b999f-b125-4d33-9c67-f791b5329c26 restarted
Jun 9, 2024, 11:45:20 AM | INFO | KernelRestarter: restarting kernel (1/5), keep random ports
Jun 9, 2024, 11:45:19 AM | WARNING | ERROR: Unknown command line flag 'xla_latency_hiding_scheduler_rerun'
### Description
Trying the example from the [langchain_huggingface](https://huggingface.co/blog/langchain) blog post in Colab. The example crashes the Colab runtime.
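A plausible cause (an assumption; the logs don't confirm it): loading `microsoft/Phi-3-mini-4k-instruct` in full precision can exhaust the default Colab runtime's RAM, in which case the kernel is killed and restarted exactly as logged above.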
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sun Apr 28 14:29:16 UTC 2024
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.5
> langchain: 0.2.3
> langchain_community: 0.2.4
> langsmith: 0.1.75
> langchain_experimental: 0.0.60
> langchain_huggingface: 0.0.3
> langchain_text_splitters: 0.2.1
python -m langchain_core.sys_info:
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Initializing a LLM using HuggingFacePipeline.from_model_id crashes Google Colab | https://api.github.com/repos/langchain-ai/langchain/issues/22710/comments | 5 | 2024-06-09T06:18:17Z | 2024-06-13T08:38:53Z | https://github.com/langchain-ai/langchain/issues/22710 | 2,342,048,874 | 22,710 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
check-broken-links.yml and scheduled_test.yml
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The scheduled GitHub Actions workflows check-broken-links.yml and scheduled_test.yml are also triggered in the forked repository, which is probably not the expected behavior.
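A common guard (an assumption: this is the standard GitHub Actions pattern rather than anything LangChain-specific) is to condition the scheduled jobs on the upstream repository, e.g. `if: github.repository == 'langchain-ai/langchain'`, so the runs no-op on forks.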
### System Info
GitHub actions | Scheduled GitHub Actions Running on Forked Repositories | https://api.github.com/repos/langchain-ai/langchain/issues/22706/comments | 1 | 2024-06-09T04:45:52Z | 2024-06-10T15:07:49Z | https://github.com/langchain-ai/langchain/issues/22706 | 2,342,020,818 | 22,706 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
hGnhn
### Error Message and Stack Trace (if applicable)
Ghgh
### Description
Yhtuyhy
### System Info
Yhguyuujhgghyhu | Rohit | https://api.github.com/repos/langchain-ai/langchain/issues/22704/comments | 14 | 2024-06-09T02:40:50Z | 2024-06-15T21:01:27Z | https://github.com/langchain-ai/langchain/issues/22704 | 2,341,985,867 | 22,704 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import AgentExecutor
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain_community.callbacks import OpenAICallbackHandler
from langchain_community.tools import SleepTool
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI
# We should incur some OpenAI costs here from agent planning
cost_callback = OpenAICallbackHandler()
tools = [SleepTool()]
agent_instance = AgentExecutor.from_agent_and_tools(
tools=tools,
agent=OpenAIFunctionsAgent.from_llm_and_tools(
ChatOpenAI(model="gpt-4", request_timeout=15.0), tools # type: ignore[call-arg]
),
return_intermediate_steps=True,
max_execution_time=10,
callbacks=[cost_callback], # "Local" callbacks
)
# NOTE: intentionally, I am not specifying the callback to invoke, as that
# would make the cost_callback be considered "inheritable" (which I don't want)
outputs = agent_instance.invoke(
input={"input": "Sleep a few times for 100-ms."},
# config=RunnableConfig(callbacks=[cost_callback]), # "Inheritable" callbacks
)
assert len(outputs["intermediate_steps"]) > 0, "Agent should have slept a bit"
assert cost_callback.total_cost > 0, "Agent planning should have been accounted for" # Fails
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/user/code/repo/app/agents/a.py", line 28, in <module>
assert cost_callback.total_cost > 0, "Agent planning should have been accounted for"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Agent planning should have been accounted for
### Description
LangChain has a useful concept of "inheritable" callbacks vs "local" callbacks, all managed by `CallbackManger` (source reference [1](https://github.com/langchain-ai/langchain/blob/langchain-core%3D%3D0.2.5/libs/core/langchain_core/callbacks/manager.py#L1923-L1930) and [2](https://github.com/langchain-ai/langchain/blob/langchain-core%3D%3D0.2.5/libs/core/langchain_core/callbacks/base.py#L587-L592))
- Inheritable callback: callback is automagically reused by nested `invoke`
- Local callback: no reuse by nested `invoke`
Yesterday I discovered `AgentExecutor` does not use local callbacks for its planning step. I consider this a bug, as planning (e.g [`BaseSingleActionAgent.plan`](https://github.com/langchain-ai/langchain/blob/langchain%3D%3D0.2.3/libs/langchain/langchain/agents/agent.py#L70)) is a core behavior of `AgentExecutor`.
The fix would be supporting `AgentExecutor`'s local callbacks during planning.
### System Info
langchain==0.2.3
langchain-community==0.2.4
langchain-core==0.2.5
langchain-openai==0.1.8 | Bug: `AgentExecutor` doesn't use its local callbacks during planning | https://api.github.com/repos/langchain-ai/langchain/issues/22703/comments | 1 | 2024-06-08T23:15:48Z | 2024-06-08T23:58:39Z | https://github.com/langchain-ai/langchain/issues/22703 | 2,341,904,946 | 22,703 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_mistralai.chat_models import ChatMistralAI

chain = ChatMistralAI(streaming=True)
# Add a callback handler, then:
# await chain.ainvoke(..)
#
# Before: observe on_llm_new_token with the callback;
# it receives the tokens in grouped format.
# With my pull request: observe on_llm_new_token with the callback;
# the callback now receives individual streaming tokens.
```
### Error Message and Stack Trace (if applicable)
No message.
### Description
Hello
* I identified an issue in the Mistral package where callback streaming (see `on_llm_new_token`) was not functioning correctly when the `streaming` parameter was set to `True` and the model was called with `ainvoke`.
* The root cause was that the streaming flag was not taken into account (I think it's an oversight).
I made this [Pull Request](https://github.com/langchain-ai/langchain/pull/22000):
* To resolve the issue, I added the `streaming` attribute.
* Now, the callback with streaming works as expected when the streaming parameter is set to `True`; a sketch of the change follows.
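A hedged sketch of the fix described above (routing the non-streaming async entry point through the streaming path when `streaming=True`, mirroring how other chat integrations handle it; this is not the exact diff from the PR):
```python
# inside ChatMistralAI._agenerate (sketch);
# agenerate_from_stream comes from langchain_core.language_models.chat_models
if self.streaming:
    stream_iter = self._astream(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
    return await agenerate_from_stream(stream_iter)
```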
I addressed this issue because the pull request I submitted a month ago has not received any attention. Additionally, the problem reappears in each new version.
Could you please review the pull request?
### System Info
Any system can reproduce this. | Partners: Issues with `Streaming` and MistralAI `ainvoke` and `Callbacks` Not Working | https://api.github.com/repos/langchain-ai/langchain/issues/22702/comments | 2 | 2024-06-08T20:46:07Z | 2024-07-02T20:38:12Z | https://github.com/langchain-ai/langchain/issues/22702 | 2,341,830,363 | 22,702 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# rag_agent_creation.py
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_openai import ChatOpenAI
from langchain.tools.retriever import create_retriever_tool
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, MessagesPlaceholder
from .rag_prompts import RAG_AGENT_PROMPT
import chromadb
def create_retriver_agent(llm: ChatOpenAI, vectordb: chromadb):
retriever = vectordb.as_retriever(search_type = "mmr", search_kwargs={"k": 4})
retriever_tool = create_retriever_tool(
retriever,
name = "doc_retriever_tool",
description = "Search and return information from documents",
)
tools = [retriever_tool]
system_prompt = RAG_AGENT_PROMPT
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
system_prompt,
),
MessagesPlaceholder(variable_name="messages",optional=True),
HumanMessagePromptTemplate.from_template("{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
return executor
```
### Error Message and Stack Trace (if applicable)
Error in LangChainTracer.on_tool_end callback: TracerException("Found chain run at ID 879e1607-f32b-4984-af76-d258c646e7ad, but expected {'tool'} run.")
### Description
I am using a retriever tool in a LangGraph graph deployed via LangServe. Whenever the graph calls the tool, I get the error: Error in LangChainTracer.on_tool_end callback: TracerException("Found chain run at ID 879e1607-f32b-4984-af76-d258c646e7ad, but expected {'tool'} run.")
This is new: my tool was working correctly before. I have updated the dependencies as well.
### System Info
[tool.poetry]
name = "Reporting Tool API"
version = "0.1.0"
description = ""
authors = ["Your Name <[email protected]>"]
readme = "README.md"
packages = [{ include = "app" }]
[tool.poetry.dependencies]
python = "^3.11"
uvicorn = "0.23.2"
langserve = { extras = ["server"], version = "0.2.1" }
pydantic = "<2"
chromadb = "0.5.0"
fastapi = "0.110.3"
langchain = "0.2.3"
langchain-cli = "0.0.24"
langchain-community = "0.2.4"
langchain-core = "0.2.5"
langchain-experimental = "0.0.60"
langchain-openai = "0.1.8"
langchain-text-splitters = "0.2.1"
langgraph = "0.0.65"
openai = "1.33.0"
opentelemetry-instrumentation-fastapi = "0.46b0"
pypdf = "4.2.0"
python-dotenv = "1.0.1"
python-multipart = "0.0.9"
pandas = "^2.0.1"
tabulate = "^0.9.0"
langchain-anthropic = "0.1.15"
langchain-mistralai = "0.1.8"
langchain-google-genai = "1.0.6"
api-analytics = { extras = ["fastapi"], version = "*" }
langchainhub = "0.1.18"
[tool.poetry.group.dev.dependencies]
langchain-cli = ">=0.0.15"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api" | Error in LangChainTracer.on_tool_end callback | https://api.github.com/repos/langchain-ai/langchain/issues/22696/comments | 5 | 2024-06-08T08:41:41Z | 2024-07-17T12:31:28Z | https://github.com/langchain-ai/langchain/issues/22696 | 2,341,553,381 | 22,696 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.chains.summarize import load_summarize_chain
from openai import AzureOpenAI  # the raw SDK client (inferred from the error below)
client = AzureOpenAI(
api_version=api_version,
api_key=api_key,
azure_endpoint=azure_endpoint,
)
chain = load_summarize_chain(client, chain_type="stuff")
```
```
--------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[13], line 1
----> 1 chain = load_summarize_chain(client, chain_type="stuff")
File /opt/conda/envs/pytorch/lib/python3.10/site-packages/langchain/chains/summarize/__init__.py:157, in load_summarize_chain(llm, chain_type, verbose, **kwargs)
152 if chain_type not in loader_mapping:
153 raise ValueError(
154 f"Got unsupported chain type: {chain_type}. "
155 f"Should be one of {loader_mapping.keys()}"
156 )
--> 157 return loader_mapping[chain_type](llm, verbose=verbose, **kwargs)
File /opt/conda/envs/pytorch/lib/python3.10/site-packages/langchain/chains/summarize/__init__.py:33, in _load_stuff_chain(llm, prompt, document_variable_name, verbose, **kwargs)
26 def _load_stuff_chain(
27 llm: BaseLanguageModel,
28 prompt: BasePromptTemplate = stuff_prompt.PROMPT,
(...)
31 **kwargs: Any,
32 ) -> StuffDocumentsChain:
---> 33 llm_chain = LLMChain(llm=llm, prompt=prompt, verbose=verbose)
34 # TODO: document prompt
35 return StuffDocumentsChain(
36 llm_chain=llm_chain,
37 document_variable_name=document_variable_name,
38 verbose=verbose,
39 **kwargs,
40 )
File /opt/conda/envs/pytorch/lib/python3.10/site-packages/langchain_core/load/serializable.py:120, in Serializable.__init__(self, **kwargs)
119 def __init__(self, **kwargs: Any) -> None:
--> 120 super().__init__(**kwargs)
121 self._lc_kwargs = kwargs
File /opt/conda/envs/pytorch/lib/python3.10/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 2 validation errors for LLMChain
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I tried to pass the Azure OpenAI client to the summarization pipeline, but it gives an error.
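A likely explanation (an assumption: the `AzureOpenAI` here is the raw `openai` SDK client, which is not a LangChain `Runnable`, as the validation error says): `load_summarize_chain` expects a LangChain model wrapper instead, e.g.:
```python
from langchain.chains.summarize import load_summarize_chain
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    api_version=api_version,
    api_key=api_key,
    azure_endpoint=azure_endpoint,
)
chain = load_summarize_chain(llm, chain_type="stuff")
```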
### System Info
latest langchain. | ValidationError: 2 validation errors for LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/22695/comments | 1 | 2024-06-08T07:55:15Z | 2024-06-15T02:16:53Z | https://github.com/langchain-ai/langchain/issues/22695 | 2,341,538,090 | 22,695 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.schema import AIMessage
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/code/test.py", line 1, in <module>
from langchain.schema import AIMessage
File "/usr/local/lib/python3.12/site-packages/langchain/schema/__init__.py", line 5, in <module>
from langchain_core.documents import BaseDocumentTransformer, Document
File "/usr/local/lib/python3.12/site-packages/langchain_core/documents/__init__.py", line 6, in <module>
from langchain_core.documents.compressor import BaseDocumentCompressor
File "/usr/local/lib/python3.12/site-packages/langchain_core/documents/compressor.py", line 6, in <module>
from langchain_core.callbacks import Callbacks
File "/usr/local/lib/python3.12/site-packages/langchain_core/callbacks/__init__.py", line 22, in <module>
from langchain_core.callbacks.manager import (
File "/usr/local/lib/python3.12/site-packages/langchain_core/callbacks/manager.py", line 29, in <module>
from langsmith.run_helpers import get_run_tree_context
File "/usr/local/lib/python3.12/site-packages/langsmith/run_helpers.py", line 40, in <module>
from langsmith import client as ls_client
File "/usr/local/lib/python3.12/site-packages/langsmith/client.py", line 52, in <module>
from langsmith import env as ls_env
File "/usr/local/lib/python3.12/site-packages/langsmith/env/__init__.py", line 3, in <module>
from langsmith.env._runtime_env import (
File "/usr/local/lib/python3.12/site-packages/langsmith/env/_runtime_env.py", line 10, in <module>
from langsmith.utils import get_docker_compose_command
File "/usr/local/lib/python3.12/site-packages/langsmith/utils.py", line 31, in <module>
from langsmith import schemas as ls_schemas
File "/usr/local/lib/python3.12/site-packages/langsmith/schemas.py", line 69, in <module>
class Example(ExampleBase):
File "/usr/local/lib/python3.12/site-packages/pydantic/v1/main.py", line 286, in __new__
cls.__try_update_forward_refs__()
File "/usr/local/lib/python3.12/site-packages/pydantic/v1/main.py", line 807, in __try_update_forward_refs__
update_model_forward_refs(cls, cls.__fields__.values(), cls.__config__.json_encoders, localns, (NameError,))
File "/usr/local/lib/python3.12/site-packages/pydantic/v1/typing.py", line 554, in update_model_forward_refs
update_field_forward_refs(f, globalns=globalns, localns=localns)
File "/usr/local/lib/python3.12/site-packages/pydantic/v1/typing.py", line 520, in update_field_forward_refs
field.type_ = evaluate_forwardref(field.type_, globalns, localns or None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/pydantic/v1/typing.py", line 66, in evaluate_forwardref
return cast(Any, type_)._evaluate(globalns, localns, set())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard'
```
### Description
Langchain fails on import with Python 3.12.4 due to pydantic v1 dependency. Python 3.12.3 is fine.
See https://github.com/pydantic/pydantic/issues/9607 for more info.
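A likely workaround (an assumption based on the linked pydantic issue): stay on Python 3.12.3 for now, or upgrade pydantic once a release containing the `recursive_guard` fix ships (reportedly 2.7.4).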
### System Info
```
langchain 0.2.3
langchain-community 0.2.3
langchain-core 0.2.5
langchain-openai 0.1.8
pydantic 2.7.3
pydantic_core 2.18.4
```
Python version is 3.12.4
Linux Arm64/v8 | Python 3.12.4 is incompatible with pydantic.v1 as of pydantic==2.7.3 | https://api.github.com/repos/langchain-ai/langchain/issues/22692/comments | 9 | 2024-06-08T01:41:20Z | 2024-06-13T04:35:49Z | https://github.com/langchain-ai/langchain/issues/22692 | 2,341,357,041 | 22,692 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/pdf_qa/#question-answering-with-rag
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
There is a variable name error in the PDF QA Tutorial on the LangChain documentation. The code snippet incorrectly uses `llm` instead of `model`, which causes a `NameError`.
**Error Message**:
```plaintext
NameError: name 'llm' is not defined
```
**Correction**:
The variable `llm` should be replaced with `model` in the code snippet for it to work correctly. Here is the corrected portion of the code:
```python
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate
system_prompt = (
"You are an assistant for question-answering tasks. "
"Use the following pieces of retrieved context to answer "
"the question. If you don't know the answer, say that you "
"don't know. Use three sentences maximum and keep the "
"answer concise."
"\n\n"
"{context}"
)
prompt = ChatPromptTemplate.from_messages(
[
("system", system_prompt),
("human", "{input}"),
]
)
question_answer_chain = create_stuff_documents_chain(model, prompt)
rag_chain = create_retrieval_chain(retriever, question_answer_chain)
results = rag_chain.invoke({"input": "What was Nike's revenue in 2023?"})
results
```
Please make this update to prevent confusion and errors for users following the tutorial.
### Idea or request for content:
_No response_ | DOC: NameError due to Incorrect Variable Name in PDF QA Tutorial Documentation | https://api.github.com/repos/langchain-ai/langchain/issues/22689/comments | 2 | 2024-06-08T00:22:00Z | 2024-06-24T21:08:04Z | https://github.com/langchain-ai/langchain/issues/22689 | 2,341,309,950 | 22,689 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader
import os
endpoint = "<endpoint>"
key = "<key>"
mode= "markdown"
path = os.path.join('path', 'to', 'pdf')
loader = AzureAIDocumentIntelligenceLoader(
    file_path=path, api_endpoint=endpoint, api_key=key, api_model="prebuilt-layout", mode=mode
)
documents = loader.load()
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/Users/Code/tools-chatbot-backend/dataproduct/test_langchain.py", line 13, in <module>
documents = loader.load()
^^^^^^^^^^^^^
File "/Users/Code/tools-chatbot-backend/dataproduct/.venv/lib/python3.11/site-packages/langchain_core/document_loaders/base.py", line 29, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Code/tools-chatbot-backend/dataproduct/.venv/lib/python3.11/site-packages/langchain_community/document_loaders/doc_intelligence.py", line 96, in lazy_load
yield from self.parser.parse(blob)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Code/tools-chatbot-backend/dataproduct/.venv/lib/python3.11/site-packages/langchain_core/document_loaders/base.py", line 125, in parse
return list(self.lazy_parse(blob))
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Code/tools-chatbot-backend/dataproduct/.venv/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/doc_intelligence.py", line 80, in lazy_parse
poller = self.client.begin_analyze_document(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Code/tools-chatbot-backend/dataproduct/.venv/lib/python3.11/site-packages/azure/core/tracing/decorator.py", line 94, in wrapper_use_tracer
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/Code/tools-chatbot-backend/dataproduct/.venv/lib/python3.11/site-packages/azure/ai/documentintelligence/_operations/_operations.py", line 3627, in begin_analyze_document
raw_result = self._analyze_document_initial( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Code/tools-chatbot-backend/dataproduct/.venv/lib/python3.11/site-packages/azure/ai/documentintelligence/_operations/_operations.py", line 516, in _analyze_document_initial
map_error(status_code=response.status_code, response=response, error_map=error_map)
File "/Users/baniasbaabe/Code/tools-chatbot-backend/dataproduct/.venv/lib/python3.11/site-packages/azure/core/exceptions.py", line 161, in map_error
raise error
azure.core.exceptions.ResourceNotFoundError: (404) Resource not found
Code: 404
Message: Resource not found
```
### Description
I am trying to run the `AzureAIDocumentIntelligenceLoader`, but it always throws an error saying the resource could not be found. When I call the [azure-ai-formrecognizer](https://pypi.org/project/azure-ai-formrecognizer/) SDK manually, it works.
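One possible cause (an assumption, not verified): the Document Intelligence REST route used by this loader (API version 2023-10-31 and later) is only available on newer resources and in certain regions, whereas the older azure-ai-formrecognizer SDK calls the Form Recognizer route. A 404 from the service can therefore indicate an endpoint/API-version mismatch rather than a missing PDF.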
### System Info
```
langchain==0.2.0
Python 3.11
MacOS 14
``` | `AzureAIDocumentIntelligenceLoader` throws 404 Resource not found error | https://api.github.com/repos/langchain-ai/langchain/issues/22679/comments | 1 | 2024-06-07T14:30:27Z | 2024-06-17T10:55:07Z | https://github.com/langchain-ai/langchain/issues/22679 | 2,340,605,742 | 22,679 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I don't think that's necessary.
### Error Message and Stack Trace (if applicable)
ERROR: Could not find a version that satisfies the requirement langchain-google-genai (from versions: none)
ERROR: No matching distribution found for langchain-google-genai
### Description
I am trying to use the Gemini API through the ChatGoogleGenerativeAI class in Python 3.9.0, but I am not able to install langchain-google-genai, which contains the aforementioned class. I looked up the issue on Google, and the solution in some older issues was that the module needs a Python version equal to 3.9 or greater. My Python version is currently 3.9.0, so I can't really understand what the issue is.
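A possible first check (an assumption, since pip's "no matching distribution" error has many causes): an outdated pip can fail to resolve packages that only ship newer wheels, so upgrading it with `python -m pip install --upgrade pip` and retrying sometimes resolves this.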
### System Info
python == 3.9.0
| unable to install langchain-google-genai in python 3.9.0 | https://api.github.com/repos/langchain-ai/langchain/issues/22676/comments | 0 | 2024-06-07T13:18:48Z | 2024-06-07T13:21:17Z | https://github.com/langchain-ai/langchain/issues/22676 | 2,340,438,808 | 22,676 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.chat_models import ChatDatabricks
llm = ChatDatabricks(
endpoint="my-endpoint",
temperature=0.0,
)
for chunk in llm.stream("What is MLflow?"):
print(chunk.content, end="|")
```
### Error Message and Stack Trace (if applicable)
```python
KeyError: 'content'
File <command-18425931933140>, line 8
1 from langchain_community.chat_models import ChatDatabricks
3 llm = ChatDatabricks(
4 endpoint="my-endpoint",
5 temperature=0.0,
6 )
----> 8 for chunk in llm.stream("What is MLflow?"):
9 print(chunk.content, end="|")
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-3b7ab85f-0c62-4fae-a71e-af61c05342b4/lib/python3.10/site-packages/langchain_community/chat_models/mlflow.py:161, in ChatMlflow.stream(self, input, config, stop, **kwargs)
157 yield cast(
158 BaseMessageChunk, self.invoke(input, config=config, stop=stop, **kwargs)
159 )
160 else:
--> 161 yield from super().stream(input, config, stop=stop, **kwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-3b7ab85f-0c62-4fae-a71e-af61c05342b4/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:265, in BaseChatModel.stream(self, input, config, stop, **kwargs)
258 except BaseException as e:
259 run_manager.on_llm_error(
260 e,
261 response=LLMResult(
262 generations=[[generation]] if generation else []
263 ),
264 )
--> 265 raise e
266 else:
267 run_manager.on_llm_end(LLMResult(generations=[[generation]]))
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-3b7ab85f-0c62-4fae-a71e-af61c05342b4/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:245, in BaseChatModel.stream(self, input, config, stop, **kwargs)
243 generation: Optional[ChatGenerationChunk] = None
244 try:
--> 245 for chunk in self._stream(messages, stop=stop, **kwargs):
246 if chunk.message.id is None:
247 chunk.message.id = f"run-{run_manager.run_id}"
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-3b7ab85f-0c62-4fae-a71e-af61c05342b4/lib/python3.10/site-packages/langchain_community/chat_models/mlflow.py:184, in ChatMlflow._stream(self, messages, stop, run_manager, **kwargs)
182 if first_chunk_role is None:
183 first_chunk_role = chunk_delta.get("role")
--> 184 chunk = ChatMlflow._convert_delta_to_message_chunk(
185 chunk_delta, first_chunk_role
186 )
188 generation_info = {}
189 if finish_reason := choice.get("finish_reason"):
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-3b7ab85f-0c62-4fae-a71e-af61c05342b4/lib/python3.10/site-packages/langchain_community/chat_models/mlflow.py:239, in ChatMlflow._convert_delta_to_message_chunk(_dict, default_role)
234 @staticmethod
235 def _convert_delta_to_message_chunk(
236 _dict: Mapping[str, Any], default_role: str
237 ) -> BaseMessageChunk:
238 role = _dict.get("role", default_role)
--> 239 content = _dict["content"]
240 if role == "user":
241 return HumanMessageChunk(content=content)
```
### Description
I am trying to stream the response from `ChatDatabricks`, but it fails because it cannot find the 'content' key in the chunks. Also, the example code in the [documentation](https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.databricks.ChatDatabricks.html) does not work.
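A hedged sketch of a possible fix in `_convert_delta_to_message_chunk` (the assumption being that the final streamed delta omits the "content" key entirely):
```python
# tolerate deltas without a "content" key (e.g. the final chunk)
content = _dict.get("content", "")
```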
### System Info
langchain==0.2.3
langchain-community==0.2.4
langchain-core==0.2.5
langchain-openai==0.1.8
langchain-text-splitters==0.2.1 | ChatDatabricks can't stream response: "KeyError: 'content'" | https://api.github.com/repos/langchain-ai/langchain/issues/22674/comments | 3 | 2024-06-07T12:43:34Z | 2024-07-05T15:31:11Z | https://github.com/langchain-ai/langchain/issues/22674 | 2,340,367,495 | 22,674 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
youtube_url = 'https://youtu.be/RXQ5AtjUMAw'
loader = GenericLoader(
YoutubeAudioLoader(
[youtube_url],
'./videos'
),
OpenAIWhisperParser(
api_key=key,
language='en'
)
)
loader.load()
```
### Error Message and Stack Trace (if applicable)
```bash
Transcribing part 1!
Transcribing part 2!
Transcribing part 1!
Transcribing part 1!
Transcribing part 3!
Transcribing part 3!
```
### Description
* I'm using LangChain to generate transcripts of YouTube videos, but I've noticed that the usage on my API key is high. After closer examination, I discovered that the `OpenAIWhisperParser` is transcribing the same part multiple times
![image](https://github.com/langchain-ai/langchain/assets/43333144/5105579f-c4a7-4663-8607-dcc2b2a7fd21)
* Sometimes it goes through parts 1, 2, 3 and then returns to 1 and repeats
* I've noticed that even with the language specified, the first chunk is always in the original language, as if the parameter is not passed to the first request
* I've tried omitting the language argument, but the issue was still there (a possible explanation follows)
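A possible explanation (an assumption based on how `YoutubeAudioLoader` works): the loader yields every audio file found in the save directory, so files left over from earlier downloads in `./videos` would be transcribed again, and each file restarts its own "part" counter, which would look exactly like the repeating part numbers above. Clearing the directory between runs may help.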
### System Info
System info:
Python 3.11.9 inside PyCharm venv
langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.1
langchain-openai==0.1.7
langchain-text-splitters==0.2.0
langgraph==0.0.55
langsmith==0.1.63 | Langchain YouTube audio loader duplicating transcripts | https://api.github.com/repos/langchain-ai/langchain/issues/22671/comments | 2 | 2024-06-07T12:28:45Z | 2024-06-14T19:25:13Z | https://github.com/langchain-ai/langchain/issues/22671 | 2,340,338,451 | 22,671 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_openai import AzureOpenAI
.....
model = AzureOpenAI(deployment_name=os.getenv("OPENAI_DEPLOYMENT_ENDPOINT"), temperature=0.3, openai_api_key=os.getenv("OPENAI_API_KEY"))
model.bind_tools([tool])
### Error Message and Stack Trace (if applicable)
AttributeError: 'AzureOpenAI' object has no attribute 'bind_tools'
### Description
I create an AzureOpenAI instance in LangChain, and when trying to bind tools I get this error.
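A likely explanation (an assumption: tool binding is defined on chat models): `AzureOpenAI` wraps the legacy completions endpoint, which has no `bind_tools`, while the chat wrapper does. A sketch reusing the variables above:
```python
import os
from langchain_openai import AzureChatOpenAI

model = AzureChatOpenAI(
    azure_deployment=os.getenv("OPENAI_DEPLOYMENT_ENDPOINT"),
    temperature=0.3,
)
model_with_tools = model.bind_tools([tool])  # `tool` as defined elsewhere
```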
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
> Python Version: 3.12.3 (main, Apr 9 2024, 08:09:14) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.5
> langchain: 0.2.3
> langchain_community: 0.2.4
> langsmith: 0.1.75
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.1
> langgraph: 0.0.64
> langserve: 0.2.1 | AttributeError: 'AzureOpenAI' object has no attribute 'bind_tools' | https://api.github.com/repos/langchain-ai/langchain/issues/22670/comments | 1 | 2024-06-07T12:06:12Z | 2024-06-12T06:33:09Z | https://github.com/langchain-ai/langchain/issues/22670 | 2,340,298,150 | 22,670 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
document_prompt = PromptTemplate(
input_variables=["page_content", "metadata"],
input_types={
"page_content": str,
"metadata": dict[str, Any],
},
output_parser=None,
partial_variables={},
template="{metadata['source']}: {page_content}",
template_format="f-string",
validate_template=True
)
```
### Error Message and Stack Trace (if applicable)
```
File "/home/fules/src/ChatPDF/streamlitui.py", line 90, in <module>
main()
File "/home/fules/src/ChatPDF/streamlitui.py", line 51, in main
st.session_state["pdfquery"] = PDFQuery(st.session_state["OPENAI_API_KEY"])
File "/home/fules/src/ChatPDF/pdfquery.py", line 32, in __init__
document_prompt = PromptTemplate(
File "/home/fules/src/ChatPDF/_venv/lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for PromptTemplate
__root__
string indices must be integers (type=type_error)
```
### Description
* I'm trying to create a document formatting template that uses not only the content of the documents but their metadata as well
* As I explicitly specify that the `metadata` member is a dict, I expect that the validation logic honors that information
* I've experienced that all input variables are treated as `str`s, regardless of `input_types`
At <a href="https://github.com/langchain-ai/langchain/blob/235d91940d81949d8f1c48d33e74ad89e549e2c0/libs/core/langchain_core/prompts/prompt.py#L136">this point</a> `input_types` is not passed on to `check_valid_template`, so that type information is lost beyond this point, and therefore the validator couldn't consider the type even if it tried to.
At <a href="https://github.com/langchain-ai/langchain/blob/235d91940d81949d8f1c48d33e74ad89e549e2c0/libs/core/langchain_core/utils/formatting.py#L23">this point</a> the validator `validate_input_variables` tries to resolve the template by assigning the string `"foo"` to all input variables, and this is where the exception is raised.
The <a href="https://api.python.langchain.com/en/latest/prompts/langchain_core.prompts.prompt.PromptTemplate.html#langchain_core.prompts.prompt.PromptTemplate.input_types">documentation of `PromptTemplate.input_types`</a> states that
> A dictionary of the types of the variables the prompt template expects. If not provided, all variables are assumed to be strings.
If this behaviour (`input_types` is ignored and all variables are always assumed to be strings) is the intended one, then it might be good to reflect this in the documentation too.
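As a stop-gap I can avoid the crash by turning the check off, since the `"foo"` probe above only runs when `validate_template=True`; note also that the f-string (`str.format`) syntax for dict access is `{metadata[source]}`, without quotes. A minimal sketch:

```python
from langchain_core.prompts import PromptTemplate

document_prompt = PromptTemplate(
    input_variables=["page_content", "metadata"],
    template="{metadata[source]}: {page_content}",  # str.format-style key access
    template_format="f-string",
    validate_template=False,  # skip the strings-only validation probe
)
```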
### System Info
```
$ pip freeze | grep langchain
langchain==0.2.2
langchain-community==0.2.3
langchain-core==0.2.4
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
$ uname -a
Linux Lya 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
$ python --version
Python 3.10.6
``` | PromptTemplate.input_types is ignored on validation | https://api.github.com/repos/langchain-ai/langchain/issues/22668/comments | 1 | 2024-06-07T11:31:54Z | 2024-06-26T15:08:31Z | https://github.com/langchain-ai/langchain/issues/22668 | 2,340,242,403 | 22,668 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders.sharepoint import SharePointLoader
# O365_CLIENT_ID, O365_CLIENT_SECRET included in the environment
# the first 'manual' authentication was successful; loading then threw the same error as included below
loader = SharePointLoader(document_library_id=<LIBRARY_ID>, recursive=True, auth_with_token=False)
documents = loader.load()
```
### Error Message and Stack Trace (if applicable)
```python
ValueError Traceback (most recent call last)
Cell In[21], line 14
11 documents = loader.lazy_load()
13 # Process each document
---> 14 for doc in documents:
15 try:
16 # Ensure MIME type is available or set a default based on file extension
17 if 'mimetype' not in doc.metadata or not doc.metadata['mimetype']:
File ~/.local/lib/python3.11/site-packages/langchain_community/document_loaders/sharepoint.py:86, in SharePointLoader.lazy_load(self)
84 raise ValueError("Unable to fetch root folder")
85 for blob in self._load_from_folder(target_folder):
---> 86 for blob_part in blob_parser.lazy_parse(blob):
87 blob_part.metadata.update(blob.metadata)
88 yield blob_part
File ~/.local/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/generic.py:61, in MimeTypeBasedParser.lazy_parse(self, blob)
58 mimetype = blob.mimetype
60 if mimetype is None:
---> 61 raise ValueError(f"{blob} does not have a mimetype.")
63 if mimetype in self.handlers:
64 handler = self.handlers[mimetype]
ValueError: data=None mimetype=None encoding='utf-8' path=PosixPath('/tmp/tmp92nu0bdz/test_document_on_SP.docx') metadata={} does not have a mimetype.
```
### Description
* I'm trying to put together a Proof Of Concept RAG chatbot that uses the SharePointLoader integration
* The authentication process (via copy pasting the url) is sucessful, I also have the auth_token, which can be used.
* However, the .load method fails at the first .docx document (while successfully fetching a .pdf data from SharePoint
* The error message mentions a file path in the temp directory; however, that file cannot actually be found there.
* My hunch is that this issue might be related to the changes merged in https://github.com/langchain-ai/langchain/pull/20663, where metadata about the document appears to get lost while the file is downloaded to temp storage (a small diagnostic sketch follows the metadata example below). I'm not entirely sure of the root cause, but it's a tricky problem that might need more eyes on it. Thanks to @MacanPN for pointing this out! Any insights or further checks we could perform to better understand this would be greatly appreciated.
* Using inspect, I verified that merge changes exist in my langchain version, so I'm a bit clueless.
* Furthermore, based on the single successful pdf load, metadata properties like web_url are also missing:
```python
metadata={'source': '/tmp/tmpw8sfa_52/test_file.pdf',
'file_path': '/tmp/tmpw8sfa_52/test_file.pdf',
'page': 0, 'total_pages': 1, 'format': 'PDF 1.4',
'title': '', 'author': '', 'subject': '', 'keywords': '',
'creator': 'Chromium', 'producer': 'Skia/PDF m101',
'creationDate': "D:20240503115507+00'00'", 'modDate': "D:20240503115507+00'00'",
'trapped': ''}
```
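As a quick diagnostic (my own sketch): the failing extension maps to a well-known MIME type, which suggests the blob's mimetype is being dropped rather than being genuinely unknowable:

```python
import mimetypes

# prints the standard Office MIME type for .docx, so a blob with mimetype=None
# points at lost metadata, not an unrecognizable file
print(mimetypes.guess_type("test_document_on_SP.docx")[0])
```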
### System Info
Currently I am running the code on the Unstructured docker container (downloads.unstructured.io/unstructured-io/unstructured:latest) but other Linux platforms like Ubuntu 20.04 and python:3.11-slim were also fruitless.
Packages like O365 and PyMuPDF were also installed.
/usr/src/app $ python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Apr 2 22:23:49 UTC 2021
> Python Version: 3.11.9 (main, May 23 2024, 20:26:53) [GCC 13.2.0]
Package Information
-------------------
> langchain_core: 0.2.2
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.62
> langchain_google_vertexai: 1.0.4
> langchain_huggingface: 0.0.3
> langchain_text_splitters: 0.2.0
> langchain_voyageai: 0.1.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | SharepointLoader not working as intended despite latest merge 'propagation of document metadata from O365BaseLoader' | https://api.github.com/repos/langchain-ai/langchain/issues/22663/comments | 1 | 2024-06-07T09:56:20Z | 2024-06-07T09:59:57Z | https://github.com/langchain-ai/langchain/issues/22663 | 2,340,053,470 | 22,663 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import Optional

from langchain_community.embeddings import DeterministicFakeEmbedding
from langchain_community.vectorstores import Chroma, Milvus
from langchain_core.documents import Document
from langchain_core.runnables import Runnable, RunnableConfig
from langchain_core.runnables.utils import Input, Output
from langchain_core.vectorstores import VectorStore
from langchain_text_splitters import TextSplitter, RecursiveCharacterTextSplitter

class AddOne(Runnable):
    def invoke(self, input: Input, config: Optional[RunnableConfig] = None) -> Output:
        return input + 1

class Square(Runnable):
    def invoke(self, input: Input, config: Optional[RunnableConfig] = None) -> Output:
        return input ** 2

class Cube(Runnable):
    def invoke(self, input: Input, config: Optional[RunnableConfig] = None) -> Output:
        return input ** 3

class AddAll(Runnable):
    def invoke(self, input: dict, config: Optional[RunnableConfig] = None) -> Output:
        return sum(input.values())

def main_invoke():
    chain = AddOne() | {"square": Square(), "cube": Cube()} | AddAll()
    print(chain.batch([2, 10, 11]))

main_invoke()
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/Users/kb-0311/Desktop/langchain/main.py", line 29, in <module>
main()
File "/Users/kb-0311/Desktop/langchain/main.py", line 26, in main
print(sequence.invoke(2)) # Output will be 9
^^^^^^^^^^^^^^^^^^
File "/Users/kb-0311/Desktop/langchain/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2476, in invoke
callback_manager = get_callback_manager_for_config(config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/kb-0311/Desktop/langchain/.venv/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 433, in get_callback_manager_for_config
from langchain_core.callbacks.manager import CallbackManager
File "/Users/kb-0311/Desktop/langchain/.venv/lib/python3.12/site-packages/langchain_core/callbacks/__init__.py", line 22, in <module>
from langchain_core.callbacks.manager import (
File "/Users/kb-0311/Desktop/langchain/.venv/lib/python3.12/site-packages/langchain_core/callbacks/manager.py", line 29, in <module>
from langsmith.run_helpers import get_run_tree_context
ModuleNotFoundError: No module named 'langsmith.run_helpers'; 'langsmith' is not a package
### Description
I am trying to run a basic example chain to understand LCEL, but I cannot run or invoke my chain.
The error stack trace is given above.
All packages are installed at their latest versions, both in a virtual env and in my global pip libs.
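One check that would rule out a common cause (my own hedged guess): "'langsmith' is not a package" is the message Python emits when a local file or directory named `langsmith` shadows the installed distribution, which can be confirmed with:

```python
# if this prints a path inside your project rather than site-packages,
# a local langsmith.py (or langsmith/ folder) is shadowing the real package
import langsmith
print(langsmith.__file__)
```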
### System Info
pip freeze | grep langchain
langchain==0.2.3
langchain-community==0.2.4
langchain-core==0.2.5
langchain-text-splitters==0.2.1
MacOS 14.5
Python3 version 3.12.3 | Getting langsmith module not found error whenever running langchain Runnable invoke / batch() | https://api.github.com/repos/langchain-ai/langchain/issues/22660/comments | 1 | 2024-06-07T08:28:51Z | 2024-07-15T11:19:56Z | https://github.com/langchain-ai/langchain/issues/22660 | 2,339,893,198 | 22,660 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/chatbot/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
When following tutorial Build a Chatbot - Gemini in https://python.langchain.com/v0.2/docs/tutorials/chatbot/ in the section of https://python.langchain.com/v0.2/docs/tutorials/chatbot/#managing-conversation-history
```
from langchain_core.runnables import RunnablePassthrough
def filter_messages(messages, k=10):
return messages[-k:]
chain = (
RunnablePassthrough.assign(messages=lambda x: filter_messages(x["messages"]))
| prompt
| model
)
messages = [
HumanMessage(content="hi! I'm bob"),
AIMessage(content="hi!"),
HumanMessage(content="I like vanilla ice cream"),
AIMessage(content="nice"),
HumanMessage(content="whats 2 + 2"),
AIMessage(content="4"),
HumanMessage(content="thanks"),
AIMessage(content="no problem!"),
HumanMessage(content="having fun?"),
AIMessage(content="yes!"),
]
response = chain.invoke(
{
"messages": messages + [HumanMessage(content="what's my name?")],
"language": "English",
}
)
response.content
```
It throws the error `Retrying langchain_google_vertexai.chat_models._completion_with_retry.<locals>._completion_with_retry_inner in 4.0 seconds as it raised InvalidArgument: 400 Please ensure that multiturn requests alternate between user and model..`
The solution here seems to be changing `def filter_messages(messages, k=10)` to `def filter_messages(messages, k=9)`; the reason for this is described in https://github.com/langchain-ai/langchain/issues/16288.
Gemini doesn't support a history that starts with an `AIMessage`; changing the value from 10 to 9 ensures that the first message in the filtered list is always a `HumanMessage`.
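A slightly more robust variant of the filter (my own sketch, not from the tutorial) drops leading non-human messages so the window always starts on a user turn, for any `k`:

```
from langchain_core.messages import HumanMessage

def filter_messages(messages, k=10):
    trimmed = messages[-k:]
    # Gemini rejects histories that do not start with a user turn
    while trimmed and not isinstance(trimmed[0], HumanMessage):
        trimmed = trimmed[1:]
    return trimmed
```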
### Idea or request for content:
_No response_ | DOC: Tutorial - Build a Chatbot - Gemini error 400 Please ensure that multiturn requests alternate between user and model | https://api.github.com/repos/langchain-ai/langchain/issues/22651/comments | 1 | 2024-06-07T02:10:13Z | 2024-06-08T18:47:07Z | https://github.com/langchain-ai/langchain/issues/22651 | 2,339,446,166 | 22,651 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.1/docs/use_cases/tool_use/human_in_the_loop/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I have read the docs on human_in_the_loop, but I don't know how to apply this to agent tools. I have a list of tools; some need human approval, others don't. So how do I gate only those tools?
My code like this :
```
tools = [
StructuredTool.from_function(
func=calculate,
name="calculate",
description="Useful for when you need to answer questions about simple calculations",
args_schema=CalculatorInput,
),
StructuredTool.from_function(
func=toolsNeedApproval,
name="toolsNeedApproval",
description="This tool needs human approval.",
args_schema=toolsNeedApprovalInput,
),
StructuredTool.from_function(
func=normalTool,
name="normalTool",
description="This tool is a normal tool .",
args_schema=normalToolInput,
),
]
callback = CustomAsyncIteratorCallbackHandler()
model = get_ChatOpenAI(
model_name=model_name,
temperature=temperature,
max_tokens=max_tokens,
callbacks=[callback],
)
model = model.bind_tools(tools, tool_choice="any")  # bind_tools returns a new runnable; keep the result
llm_chain = LLMChain(llm=model, prompt=prompt_template_agent)
agent = LLMSingleActionAgent(
llm_chain=llm_chain,
output_parser=output_parser,
stop=["\nObservation:", "Observation"],
allowed_tools=tool_names,
)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent,
tools=tools,
verbose=True,
memory=memory,
)
agent_executor.acall(query, callbacks=[callback], include_run_info=True)
...
```
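What I have tried to adapt so far is the callback-based pattern; as far as I understand it, `HumanApprovalCallbackHandler` can be scoped to specific tools through `should_check` (the tool name below matches my example; treat the exact import path as my assumption):

```
from langchain.callbacks import HumanApprovalCallbackHandler

def _should_check(serialized_obj: dict) -> bool:
    # only gate the tools that need a human sign-off
    return serialized_obj.get("name") == "toolsNeedApproval"

def _approve(tool_input: str) -> bool:
    print(f"Tool wants to run with input:\n{tool_input}")
    return input("Approve (y/n)? ").strip().lower() == "y"

approval_callbacks = [HumanApprovalCallbackHandler(should_check=_should_check, approve=_approve)]
agent_executor.acall(query, callbacks=approval_callbacks, include_run_info=True)
```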
### Idea or request for content:
I don't know how to add human approval in agent tools. | DOC: How to add human approval in agent tools? | https://api.github.com/repos/langchain-ai/langchain/issues/22649/comments | 1 | 2024-06-07T01:17:40Z | 2024-07-16T16:48:12Z | https://github.com/langchain-ai/langchain/issues/22649 | 2,339,403,349 | 22,649 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Produces an error with `max_tokens=8192`; however, the same code with `max_tokens=100` works.
Also, per spec "max_tokens" can be set to `-1`:
```
param max_tokens: int = 256
The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the models maximal context size.
```
However that produces:
```
Error invoking the chain: Error code: 400 - {'error': {'message': "Invalid 'max_tokens': integer below minimum value. Expected a value >= 1, but got -1 instead.", 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': 'integer_below_min_value'}}
```
Test code:
```python
import dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.output_parsers import StrOutputParser
dotenv.load_dotenv()
llm = ChatOpenAI(
model="gpt-4",
temperature=0.2,
# NOTE: setting max_tokens to "100" works. Setting to 8192 or something slightly lower does not. Setting to "-1" fails.
# Per documentation -1 should work. Also - if "100" calculates the prompt as part of the tokens correctly, so should "8192"
max_tokens=8192
)
output_parser = StrOutputParser()
prompt_template = ChatPromptTemplate.from_messages([
("system", "You are a helpful assistant. Answer all questions to the best of your ability."),
MessagesPlaceholder(variable_name="messages"),
])
chain = prompt_template | llm | output_parser
response = chain.invoke({
"messages": [
HumanMessage(content="what llm are you"),
],
})
print(response)
```
### Error Message and Stack Trace (if applicable)
Error invoking the chain: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens. However, you requested 8225 tokens (33 in the messages, 8192 in the completion). Please reduce the length of the messages or completion.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
### Description
* If it works for "100" max_tokens correctly and it correctly calculates input_prompt as part of it, it should for "8192" or "8100", etc.
* Also - per documentation "-1" should do this calculation automatically, but it fails.
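In the meantime I budget the completion manually. This sketch assumes `tiktoken` is installed, an 8192-token window for gpt-4, and an arbitrary 16-token margin for message framing:

```python
import tiktoken
from langchain_openai import ChatOpenAI

prompt_text = "what llm are you"
enc = tiktoken.encoding_for_model("gpt-4")
completion_budget = 8192 - len(enc.encode(prompt_text)) - 16  # leave headroom

llm = ChatOpenAI(model="gpt-4", temperature=0.2, max_tokens=completion_budget)
```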
### System Info
langchain==0.1.20
langchain-aws==0.1.4
langchain-community==0.0.38
langchain-core==0.1.52
langchain-google-vertexai==1.0.3
langchain-openai==0.1.7
langchain-text-splitters==0.0.2
platform mac
Python 3.11.6 | [BUG] langchain-openai - max_tokens - 2 confirmed bugs | https://api.github.com/repos/langchain-ai/langchain/issues/22636/comments | 11 | 2024-06-06T20:03:01Z | 2024-06-11T14:51:08Z | https://github.com/langchain-ai/langchain/issues/22636 | 2,339,062,266 | 22,636 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
https://python.langchain.com/v0.2/docs/integrations/llm_caching/ should be a section with one page per integration, like other components. | DOCS: Split integrations/llm_cache page into separate pages | https://api.github.com/repos/langchain-ai/langchain/issues/22618/comments | 0 | 2024-06-06T14:27:19Z | 2024-08-06T22:29:02Z | https://github.com/langchain-ai/langchain/issues/22618 | 2,338,404,917 | 22,618 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def test():
....
for text_items in all_list:
doc_db = FAISS.from_documents(text_items, EMBEDDINGS_MODEL)
doc_db.save_local(vector_database_path)
...
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When `doc_db = FAISS.from_documents(text_items, EMBEDDINGS_MODEL)` is called repeatedly, the memory is not released.
I want to know whether there is a function that can release the `doc_db` object.
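A sketch of the alternative I am considering (assuming `add_documents` fits my use case): build a single index and append to it, instead of constructing a new `FAISS` object per batch:

```python
import gc

doc_db = None
for text_items in all_list:
    if doc_db is None:
        doc_db = FAISS.from_documents(text_items, EMBEDDINGS_MODEL)
    else:
        doc_db.add_documents(text_items)  # reuse one index instead of rebuilding
    gc.collect()  # encourage release of per-batch temporaries
doc_db.save_local(vector_database_path)
```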
### System Info
langchain==0.2.2
langchain-community==0.2.3
faiss-cpu==1.8.0 | The FAISS.from_documents function called many times, It'll cause memory leak. How to destroy the object? | https://api.github.com/repos/langchain-ai/langchain/issues/22602/comments | 1 | 2024-06-06T10:38:10Z | 2024-06-06T23:46:23Z | https://github.com/langchain-ai/langchain/issues/22602 | 2,337,929,574 | 22,602 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import operator
from typing import Annotated, TypedDict, Union

from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import BaseMessage
from langgraph.prebuilt import ToolInvocation


class AgentState(TypedDict):
input: str
chat_history: list[BaseMessage]
agent_outcome: Union[AgentAction, AgentFinish, None]
intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]
# plan/execute are methods on my graph-node class; self.agent and
# self.tool_executor are set up elsewhere
def plan(self, data):
agent_outcome = self.agent.invoke(data)
return {'agent_outcome': agent_outcome}
def execute(self, data):
res = {"intermediate_steps": [], 'results': []}
for agent_action in data['agent_outcome']:
invocation = ToolInvocation(tool=agent_action.tool, tool_input=agent_action.tool_input)
output = self.tool_executor.invoke(invocation)
res["intermediate_steps"].append((agent_action, str({"result": output})))
return res
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "D:\project\po_fbb\demo.py", line 121, in <module>
for s in app.stream(inputs, config=config, debug=True):
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langgraph\pregel\__init__.py", line 876, in stream
_panic_or_proceed(done, inflight, step)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langgraph\pregel\__init__.py", line 1422, in _panic_or_proceed
raise exc
File "C:\Users\l00413520\Anaconda3\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langgraph\pregel\retry.py", line 66, in run_with_retry
task.proc.invoke(task.input, task.config)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 2393, in invoke
input = step.invoke(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langgraph\pregel\__init__.py", line 1333, in invoke
for chunk in self.stream(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langgraph\pregel\__init__.py", line 876, in stream
_panic_or_proceed(done, inflight, step)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langgraph\pregel\__init__.py", line 1422, in _panic_or_proceed
raise exc
File "C:\Users\l00413520\Anaconda3\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langgraph\pregel\retry.py", line 66, in run_with_retry
task.proc.invoke(task.input, task.config)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 2393, in invoke
input = step.invoke(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langgraph\utils.py", line 89, in invoke
ret = context.run(self.func, input, **kwargs)
File "D:\project\po_fbb\plan_execute.py", line 54, in plan
agent_outcome = self.agent.invoke(data)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 2393, in invoke
input = step.invoke(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\runnables\base.py", line 4427, in invoke
return self.bound.invoke(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 170, in invoke
self.generate_prompt(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 599, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 456, in generate
raise e
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 446, in generate
self._generate_with_cache(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_core\language_models\chat_models.py", line 671, in _generate_with_cache
result = self._generate(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\langchain_openai\chat_models\base.py", line 522, in _generate
response = self.client.create(messages=message_dicts, **params)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\openai\_utils\_utils.py", line 277, in wrapper
return func(*args, **kwargs)
File "C:\Users\l00413520\Anaconda3\lib\site-packages\openai\resources\chat\completions.py", line 590, in create
return self._post(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\openai\_base_client.py", line 1240, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "C:\Users\l00413520\Anaconda3\lib\site-packages\openai\_base_client.py", line 921, in request
return self._request(
File "C:\Users\l00413520\Anaconda3\lib\site-packages\openai\_base_client.py", line 1020, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_O4TQIqkzFaYavyeNQfrHhQla (request id: 2024060617051887706039753292306) (request id: 2024060605051875537495502347588)", 'type': 'invalid_request_error', 'param': 'messages', 'code': None}}
### Description
When execution reaches `agent_outcome = self.agent.invoke(data)`, it raises:
openai.BadRequestError: Error code: 400 - {'error': {**'message': "Missing parameter 'tool_call_id': messages with role 'tool' must have a 'tool_call_id'**. (request id: 2024060617221462310269294550807) (request id: 20240606172214601955429dCigdvQs) (request id: 2024060617230165205730616076552) (request id: 2024060617221459456075403364817) (request id: 2024060617221456172038051798941) (request id: 2024060605221444438203159022960)", 'type': 'invalid_request_error', 'param': 'messages.[3].tool_call_id', 'code': None}}
Yet the tool messages do have a `tool_call_id`; the message history is:
SystemMessage(content="XXX\n"),
HumanMessage(content='PO_num: XXX, task_id: XXX'),
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_9kI8hcLdO0nxMl2oyXyKf5Rf', 'function': {'arguments': '{"task_id":"XXX","po_num":"XXX"}', 'name': 'po_info'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 34, 'prompt_tokens': 1020, 'total_tokens': 1054}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-61620ce1-f4f9-4a6e-bf85-f34ba26047fd-0', tool_calls=[{'name': 'po_info', 'args': {'task_id': 'XXX', 'po_num': 'XXX'}, 'id': 'call_9kI8hcLdO0nxMl2oyXyKf5Rf'}]),
**ToolMessage(content="{'result': (True, )}", additional_kwargs={'name': 'po_info'}, tool_call_id='call_9kI8hcLdO0nxMl2oyXyKf5Rf'),**
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_knIR5jqxgXEigmOrqksyk0ch', 'function': {'arguments': '{"task_id":"XXX","sub_names":["XXX"],"suffix":["xls","xlsx"],"result_key":"finish_report_path"}', 'name': 'find_files'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 48, 'prompt_tokens': 1074, 'total_tokens': 1122}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-823edff7-03f1-4981-8d2a-b6b8f6f18f87-0', tool_calls=[{'name': 'find_files', 'args': {'task_id': 'XXX', 'sub_names': ['XXX'], 'suffix': ['xls', 'xlsx'], 'result_key': 'finish_report_path'}, 'id': 'call_knIR5jqxgXEigmOrqksyk0ch'}]),
**ToolMessage(content="{'result': (True, ['XXX\\XXX.xls'])}", additional_kwargs={'name': 'find_files'}, tool_call_id='call_knIR5jqxgXEigmOrqksyk0ch'),**
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_KoQG2PTuIAwmaJWF9VgpAyGD', 'function': {'arguments': '{"excel_path":"XXX\\XXX.xls","key_list":["Item",],"result_key":"finish_info"}', 'name': 'excel_column_extract'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 65, 'prompt_tokens': 1158, 'total_tokens': 1223}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-900a681a-4921-4268-a4d6-7bbdbc7b7a39-0', tool_calls=[{'name': 'excel_column_extract', 'args': {'excel_path': 'XXX\XXX.xls', 'key_list': ['Item', ], 'result_key': 'finish_info'}, 'id': 'call_KoQG2PTuIAwmaJWF9VgpAyGD'}]),
**ToolMessage(content="{'result': (True, )}", additional_kwargs={'name': 'excel_column_extract'}, tool_call_id='call_KoQG2PTuIAwmaJWF9VgpAyGD')**
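To rule out an ordering problem I used a small helper (my own sketch, not a LangChain API) that reports any assistant `tool_calls` id that is never answered by a `ToolMessage`:

```python
from langchain_core.messages import AIMessage, ToolMessage

def unanswered_tool_calls(messages):
    pending = set()
    for m in messages:
        if isinstance(m, AIMessage):
            pending |= {c["id"] for c in (m.tool_calls or [])}
        elif isinstance(m, ToolMessage):
            pending.discard(m.tool_call_id)
    return pending  # an empty set means every tool call was answered
```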
### System Info
langchain 0.2.1
langchain-community 0.2.1
langchain-core 0.2.1
langchain-experimental 0.0.59
langchain-openai 0.1.7
langchain-text-splitters 0.2.0
langgraph 0.0.55
langsmith 0.1.50 | 'error': {'message': "Missing parameter 'tool_call_id': messages with role 'tool' must have a 'tool_call_id' | https://api.github.com/repos/langchain-ai/langchain/issues/22600/comments | 0 | 2024-06-06T10:09:40Z | 2024-06-06T10:12:16Z | https://github.com/langchain-ai/langchain/issues/22600 | 2,337,877,131 | 22,600 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.prompts import ChatPromptTemplate
@tool
def multiply(first_int: int, second_int: int) -> int:
"""Multiply two integers together."""
return first_int * second_int
@tool
def add(first_int: int, second_int: int) -> int:
"""Add two integers.
"""
return first_int + second_int
tools = [multiply, add,]
if __name__ == '__main__':
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant"),
("placeholder", "{chat_history}"),
("human", "{input}"),
("placeholder", "{agent_scratchpad}"),
]
)
    # NOTE: `llm` was never instantiated in the snippet as posted; a placeholder
    # is added here so the repro runs (the model choice is an assumption)
    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
    llm_with_tools = llm.bind_tools(tools)  # note: unused below; create_tool_calling_agent binds tools itself
calling_agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=calling_agent, tools=tools, verbose=True)
response = agent_executor.invoke({
"input": "what is the value of multiply(5, 42)?",
})
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "E:\PycharmProjects\agent-tool-demo\main.py", line 61, in <module>
stream = agent_executor.invoke({
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\chains\base.py", line 166, in invoke
raise e
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\chains\base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\agents\agent.py", line 1433, in _call
next_step_output = self._take_next_step(
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\agents\agent.py", line 1139, in _take_next_step
[
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\agents\agent.py", line 1139, in <listcomp>
[
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\agents\agent.py", line 1167, in _iter_next_step
output = self.agent.plan(
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain\agents\agent.py", line 515, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 2775, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 2762, in transform
yield from self._transform_stream_with_config(
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 1778, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 2726, in _transform
for output in final_pipeline:
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 1154, in transform
for ichunk in input:
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 4644, in transform
yield from self.bound.transform(
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\runnables\base.py", line 1172, in transform
yield from self.stream(final, config, **kwargs)
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\language_models\chat_models.py", line 265, in stream
raise e
File "E:\conda\env\agent-tool-demo\lib\site-packages\langchain_core\language_models\chat_models.py", line 257, in stream
assert generation is not None
AssertionError
### Description
An error occurred when I invoked the agent executor.
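The assertion in `chat_models.py` fires when the stream yields no generation chunks at all, so a hedged sanity check is to stream the bare model first; if this prints nothing, the endpoint is returning an empty stream:

```python
for chunk in llm.stream("ping"):
    print(chunk.content, end="", flush=True)
```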
### System Info
langchain 0.2.1 | langchain agents executor throws: assert generation is not None | https://api.github.com/repos/langchain-ai/langchain/issues/22585/comments | 4 | 2024-06-06T03:16:18Z | 2024-06-07T03:31:21Z | https://github.com/langchain-ai/langchain/issues/22585 | 2,337,224,294 | 22,585 |
[
"hwchase17",
"langchain"
] | ### URL
Withdrawal not received
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Withdrawal not received
### Idea or request for content:
Withdrawal not received | DOC: <Please write a comprehensive title after the 'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/22557/comments | 3 | 2024-06-05T16:00:26Z | 2024-06-05T21:18:48Z | https://github.com/langchain-ai/langchain/issues/22557 | 2,336,288,902 | 22557
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain_community.cross_encoders import HuggingFaceCrossEncoder

device = "cpu"  # set elsewhere in my script; "cuda" on GPU
re_rank_model_name = "amberoad/bert-multilingual-passage-reranking-msmarco"
model_kwargs = {
'device': device,
'trust_remote_code':True,
}
re_rank_model = HuggingFaceCrossEncoder(model_name=re_rank_model_name,
model_kwargs = model_kwargs,
)
from langchain.retrievers.document_compressors import CrossEncoderReranker
compressor = CrossEncoderReranker(model=re_rank_model, top_n=3)
# `retriever` is my existing base retriever, built elsewhere
compression_retriever = ContextualCompressionRetriever(
base_compressor=compressor, base_retriever=retriever,
)
```
### Error Message and Stack Trace (if applicable)
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File */lib/python3.10/site-packages/langchain_core/retrievers.py:194, in BaseRetriever.invoke(self, input, config, **kwargs)
175 """Invoke the retriever to get relevant documents.
176
177 Main entry point for synchronous retriever invocations.
(...)
191 retriever.invoke("query")
192 """
193 config = ensure_config(config)
--> 194 return self.get_relevant_documents(
195 input,
196 callbacks=config.get("callbacks"),
197 tags=config.get("tags"),
198 metadata=config.get("metadata"),
199 run_name=config.get("run_name"),
200 **kwargs,
201 )
File *lib/python3.10/site-packages/langchain_core/_api/deprecation.py:148, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
146 warned = True
147 emit_warning()
...
47 docs_with_scores = list(zip(documents, scores))
---> 48 result = sorted(docs_with_scores, key=operator.itemgetter(1), reverse=True)
49 return [doc for doc, _ in result[: self.top_n]]
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
### Description
Incorrect passing of scores for sorting. This classifier returns two logits per query-document pair: one for dissimilarity and one for similarity. The reranker needs to handle the case where the model produces two scores per pair (for example, by keeping only the similarity logit); otherwise it can leave the value as is.
Is this a bug?
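A hedged sketch of the handling I have in mind (`query` and `docs` are placeholders; the two-column shape is an assumption based on this model's two-logit head):

```python
import numpy as np

scores = np.asarray(re_rank_model.score([(query, doc) for doc in docs]))
if scores.ndim == 2 and scores.shape[1] == 2:
    scores = scores[:, 1]  # keep only the "relevant" logit so sorting sees scalars
```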
### System Info
System Information
------------------
> OS: Linux
> OS Version: #172-Ubuntu SMP Fri Jul 7 16:10:02 UTC 2023
> Python Version: 3.10.14 (main, Apr 6 2024, 18:45:05) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.69
> langchain_chroma: 0.1.1
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.17 | Incorrect passing of scores for sorting in CrossEncoderReranker | https://api.github.com/repos/langchain-ai/langchain/issues/22556/comments | 3 | 2024-06-05T15:42:58Z | 2024-06-06T21:13:48Z | https://github.com/langchain-ai/langchain/issues/22556 | 2,336,248,957 | 22,556 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.documents import Document
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
api_key="xxx",
base_url="xxx",
temperature=0,
# model="gpt-4"
model="gpt-4o-all"
)
transformer = LLMGraphTransformer(
llm=llm,
allowed_nodes=["Person", "Organization"]
)
doc = Document(page_content="Elon Musk is suing OpenAI")
graph_documents = transformer.convert_to_graph_documents([doc])
'''
{
'raw': AIMessage(content='```json\n{\n "nodes": [\n {"id": "Elon Musk", "label": "person"},\n {"id": "OpenAI", "label": "organization"}\n ],\n "relationships": [\n {"source": "Elon Musk", "target": "OpenAI", "type": "suing"}\n ]\n}\n```', response_metadata={'token_usage': {'completion_tokens': 72, 'prompt_tokens': 434, 'total_tokens': 506}, 'model_name': 'gpt-4o-all', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-061dcf66-774a-4266-8fb0-030237cac039-0', usage_metadata={'input_tokens': 434, 'output_tokens': 72, 'total_tokens': 506}),
'parsed': None, 'parsing_error': None
}
this is what I got after changing the source to print it out ( `after line 607, print(raw_schema)` )
'''
print(graph_documents)
'''
[GraphDocument(nodes=[], relationships=[], source=Document(page_content='Elon Musk is suing OpenAI'))]
'''
```
### Description
I tried other strings; the answer is the same.
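My hedged suspicion: the relay behind "gpt-4o-all" does not support tool calling, so the structured-output parse comes back as `parsed: None` (consistent with the fenced-JSON reply in the debug print above). A minimal probe, assuming `with_structured_output` is available on this endpoint:

```python
from typing import List

from langchain_core.pydantic_v1 import BaseModel

class Probe(BaseModel):
    nodes: List[str]

structured = llm.with_structured_output(Probe, include_raw=True)
print(structured.invoke("List two nodes: Elon Musk and OpenAI"))
# 'parsed': None here would confirm the endpoint ignores tool calls
```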
### System Info
Ubuntu 22.04.4 LTS
langchain, latest version | LLMGraphTransformer giveback empty nodes and relationships ( with gpt-4o ) | https://api.github.com/repos/langchain-ai/langchain/issues/22551/comments | 3 | 2024-06-05T14:41:48Z | 2024-07-26T06:28:01Z | https://github.com/langchain-ai/langchain/issues/22551 | 2,336,115,108 | 22551
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0, model_name="gpt-4", max_tokens=None)

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
```
### Error Message and Stack Trace (if applicable)
AIMessage(content='You are a helpful assistant that translates English to French. Translate the user sentence.\nI love programming. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn. I love to learn.', response_metadata={'token_usage': {'completion_tokens': 0, 'prompt_tokens': 0, 'total_tokens': 0}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-c934c150-55c6-4544-a3d4-c32ccd49e147-0')
### Description
The model always includes the input prompt in its output. If I do exactly the same with Mistral, for example, it works perfectly fine and the output consists only of the translation.
### System Info
mac
python 3.10.2 | openai model always includes the input prompt in its output | https://api.github.com/repos/langchain-ai/langchain/issues/22550/comments | 0 | 2024-06-05T14:29:38Z | 2024-06-05T14:51:33Z | https://github.com/langchain-ai/langchain/issues/22550 | 2,336,086,990 | 22,550 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def load_reduced_api_spec():
import yaml
from some_module import reduce_openapi_spec # Adjust the import as per your actual module
with open("resources/openapi_spec.yaml") as f:
raw_api_spec = yaml.load(f, Loader=yaml.Loader)
reduced_api_spec = reduce_openapi_spec(raw_api_spec)
return reduced_api_spec
from langchain_community.utilities import RequestsWrapper
from langchain_community.agent_toolkits.openapi import planner
headers = {'x-api-key': os.getenv('API_KEY')}
requests_wrapper = RequestsWrapper(headers=headers)
api_spec = load_reduced_api_spec()
llm = ChatOpenAI(model_name="gpt-4o", temperature=0.25) #gpt-4o # gpt-4-0125-preview # gpt-3.5-turbo-0125
agent = planner.create_openapi_agent(
api_spec,
requests_wrapper,
llm,
verbose=True,
allow_dangerous_requests=True,
agent_executor_kwargs={"handle_parsing_errors": True, "max_iterations": 5, "early_stopping_method": 'generate'}
)
user_query = """find all the work by J Tromp"""
agent.invoke({"input": user_query})
```
### Error Message and Stack Trace (if applicable)
```
> > Entering new AgentExecutor chain...
> Action: api_planner
> Action Input: find all the work by J. Tromp
Error in LangChainTracer.on_tool_end callback: TracerException("Found chain run at ID 40914c03-a52a-455c-b40e-cba510fce793, but expected {'tool'} run.")
>
> Observation: 1. **Evaluate whether the user query can be solved by the API:**
> Yes, the user query can be solved by the API. We can search for the author named "J. Tromp" and then fetch all the papers authored by her.
>
> 2. **Generate a plan of API calls:**
>
> **Step 1:** Search for the author named "J. Tromp" to get her author ID.
> - **API Call:** `GET /author/search?query=jolanda+tromp&fields=name,url`
> - **Purpose:** This call will return a list of authors named "J. Tromp" along with their names and URLs on the Semantic Scholar website. We need the author ID from this response.
>
> **Step 2:** Fetch all the papers authored by J. Tromp using her author ID.
> - **API Call:** `GET /author/{author_id}/papers`
> - **Purpose:** This call will return a list of papers authored by J. Tromp. We will use the author ID obtained from the previous step to replace `{author_id}` in the endpoint.
>
> 3. **Plan:**
>
> 1. **GET /author/search?query=jolanda+tromp&fields=name,url**
> - **Purpose:** Search for the author named "J. Tromp" and obtain her author ID.
>
> 2. **GET /author/{author_id}/papers**
> - **Purpose:** Fetch all the papers authored by J. Tromp using her author ID obtained from the previous step.
> Thought:I have the plan to find all the work by J. Tromp. Now, I will execute the plan by making the necessary API calls.
>
> Action: api_controller
> Action Input:
> 1. GET /author/search?query=jolanda+tromp&fields=name,url
> 2. GET /author/{author_id}/papers
>
> > Entering new AgentExecutor chain...
> To execute the plan, I will first perform a search for the author named "J. Tromp" to retrieve the author ID and name. Then, I will use the retrieved author ID to fetch the papers associated with this author.
>
> Action: Perform a GET request to search for the author named "J. Tromp".
> Action Input:
> ```json
> {
> "url": "https://api.semanticscholar.org/graph/v1/author/search",
> "params": {
> "query": "J. Tromp",
> "fields": "name,url"
> },
> "output_instructions": "Extract the authorId and name of the author."
> }
> ```
> Observation: Perform a GET request to search for the author named "J. Tromp". is not a valid tool, try one of [requests_get, requests_post].
> Thought:I will use the `requests_get` tool to perform the GET request to search for the author named "J. Tromp".
>
> Action: Perform a GET request to search for the author named "J. Tromp".
> Action Input:
> ```json
> {
> "url": "https://api.semanticscholar.org/graph/v1/author/search",
> "params": {
> "query": "J. Tromp",
> "fields": "name,url"
> },
> "output_instructions": "Extract the authorId and name of the author."
> }
> ```
>
> Observation: Perform a GET request to search for the author named "J. Tromp". is not a valid tool, try one of [requests_get, requests_post].
> Thought:It seems I made an error in specifying the action. I should use the `requests_get` tool correctly.
>
> Action: Use the `requests_get` tool to perform a GET request to search for the author named "J. Tromp".
> Action Input:
> ```json
> {
> "url": "https://api.semanticscholar.org/graph/v1/author/search",
> "params": {
> "query": "J. Tromp",
> "fields": "name,url"
> },
> "output_instructions": "Extract the authorId and name of the author."
> }
> ```
>
> Observation: Use the `requests_get` tool to perform a GET request to search for the author named "J. Tromp". is not a valid tool, try one of [requests_get, requests_post].
> Thought:It seems I made an error in specifying the action. I should use the `requests_get` tool correctly.
>
> Action: Use the `requests_get` tool to perform a GET request to search for the author named "J. Tromp".
> Action Input:
> ```json
> {
> "url": "https://api.semanticscholar.org/graph/v1/author/search",
> "params": {
> "query": "J. Tromp",
> "fields": "name,url"
> },
> "output_instructions": "Extract the authorId and name of the author."
> }
> ```
>
```
And it goes on and on until max iterations is hit.
### Description
I don't know how/where to modify/influence api_controller prompt instructions to be more strict. The behavior is very inconsistent with. Maybe 1 out of 10 attempts will work as expected, where api_controller's Action will correctly specify just 'requests_get'.
using gpt-4-0125-preview as LLM improves the behavior somewhat, though it is a lot slower.
### System Info
gpt-4o
| api_controller fails to specify tool in Action, entering infinite loop | https://api.github.com/repos/langchain-ai/langchain/issues/22545/comments | 3 | 2024-06-05T12:09:35Z | 2024-06-07T08:43:02Z | https://github.com/langchain-ai/langchain/issues/22545 | 2,335,733,937 | 22,545 |
[
"hwchase17",
"langchain"
] | ### URL
_No response_
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Docustring in https://github.com/langchain-ai/langchain/blob/58192d617f0e7b21ac175f869068324128949504/libs/community/langchain_community/document_loaders/confluence.py#L45 refer to class named `ConfluenceReader` instead of actual class name `ConfluenceLoader`.
It is even more confusing as `ConfluenceReader` is the name of a similar class in a different python package
### Idea or request for content:
Fix the docu string | DOC: ConfluenceLoader docstring refer to wrong class name | https://api.github.com/repos/langchain-ai/langchain/issues/22542/comments | 0 | 2024-06-05T10:31:46Z | 2024-06-14T21:00:50Z | https://github.com/langchain-ai/langchain/issues/22542 | 2,335,525,273 | 22,542 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/tools/wolfram_alpha/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Hi!
I found that using wolfram.run sometimes results in incomplete answers.
The link is "https://python.langchain.com/v0.2/docs/integrations/tools/wolfram_alpha/".
For example, when I input wolfram.run("what is the solution of (1 + x)^2 = 10"), it only returns one solution.
```
from langchain_community.utilities.wolfram_alpha import WolframAlphaAPIWrapper
wolfram = WolframAlphaAPIWrapper()
wolfram.run("solve (1 + x)^2 = 10")
```
result:
`Assumption: solve (1 + x)^2 = 10 \nAnswer: x = -1 - sqrt(10)`
However, there are two solutions: ["x = -1 - sqrt(10)", "x = sqrt(10) - 1"]. I checked the GitHub file of “class WolframAlphaAPIWrapper(BaseModel)” and discovered the issue.
I rewrote the run function, and now it can solve quadratic equations and return both solutions instead of just one.
```
from langchain_community.utilities.wolfram_alpha import WolframAlphaAPIWrapper
class WolframAlphaAPIWrapper_v1(WolframAlphaAPIWrapper):
def run(self, query: str) -> str:
"""Run query through WolframAlpha and parse result."""
res = self.wolfram_client.query(query)
try:
assumption = next(res.pods).text
x = [i["subpod"] for i in list(res.results)]
if type(x[0]) == list:
x = x[0]
answer = [ii["plaintext"] for ii in x]
if len(answer) == 1:
answer = answer[0]
elif len(answer) > 1:
answer = json.dumps(answer)
except StopIteration:
return "Wolfram Alpha wasn't able to answer it"
if answer is None or answer == "":
# We don't want to return the assumption alone if answer is empty
return "No good Wolfram Alpha Result was found"
else:
return f"Assumption: {assumption} \nAnswer: {answer}"
wolfram = WolframAlphaAPIWrapper_v1()
```
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/integrations/tools/wolfram_alpha/> returned answers are incomplete. | https://api.github.com/repos/langchain-ai/langchain/issues/22539/comments | 0 | 2024-06-05T09:34:07Z | 2024-06-05T09:39:44Z | https://github.com/langchain-ai/langchain/issues/22539 | 2,335,381,737 | 22,539 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
llm =ChatSparkLLM(...)
llm.invoke("此处放一句脏话触发星火返回报错10013")
# 再做一次正常调用,会报错
llm.invoke("你好")
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/starlette/routing.py", line 72, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/fastapi/routing.py", line 278, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/code/company-git/langchain/unmannedTowerAi/packages/rag-chroma/rag_chroma/api.py", line 40, in get_response
result = chain.invoke(
^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4525, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2499, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/passthrough.py", line 469, in invoke
return self._call_with_config(self._invoke, input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1626, in _call_with_config
context.run(
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/passthrough.py", line 456, in _invoke
**self.mapper.invoke(
^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3142, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3142, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^
File "/usr/local/Cellar/[email protected]/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/[email protected]/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/usr/local/Cellar/[email protected]/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2499, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 158, in invoke
self.generate_prompt(
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 560, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 421, in generate
raise e
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 411, in generate
self._generate_with_cache(
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 632, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_community/chat_models/sparkllm.py", line 276, in _generate
message = _convert_dict_to_message(completion)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jack/Library/Caches/pypoetry/virtualenvs/unmannedtowerai-bWBisjNR-py3.11/lib/python3.11/site-packages/langchain_community/chat_models/sparkllm.py", line 63, in _convert_dict_to_message
msg_role = _dict["role"]
~~~~~^^^^^^^^
KeyError: 'role'
### Description
An LLM constructed with `ChatSparkLLM` cannot be invoked again after the model returns a `ConnectionError`; the retry fails with the `KeyError: 'role'` traceback above.
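A minimal sketch of the flow I mean; the credential fields here are placeholders, not my real configuration:
```python
from langchain_community.chat_models import ChatSparkLLM

llm = ChatSparkLLM(
    spark_app_id="...",       # placeholder credentials
    spark_api_key="...",
    spark_api_secret="...",
)
try:
    llm.invoke("hello")       # first call fails with a ConnectionError
except Exception as err:
    print(err)
llm.invoke("hello again")     # retry then raises KeyError: 'role'
```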
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:10:42 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6000
> Python Version: 3.11.6 (main, Oct 2 2023, 13:45:54) [Clang 15.0.0 (clang-1500.0.40.1)]
Package Information
-------------------
> langchain_core: 0.1.52
> langchain: 0.1.20
> langchain_community: 0.0.38
> langsmith: 0.1.63
> langchain_cli: 0.0.23
> langchain_text_splitters: 0.0.2
> langchainhub: 0.1.16
> langserve: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | An LLM constructed with ChatSparkLLM cannot be invoked again after the model returns a ConnectionError | https://api.github.com/repos/langchain-ai/langchain/issues/22537/comments | 0 | 2024-06-05T08:58:18Z | 2024-06-05T09:00:47Z | https://github.com/langchain-ai/langchain/issues/22537 | 2335303404 | 22537
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from datetime import datetime

from langchain_community.chat_models import ChatTongyi
from langchain_core.tools import Tool
from langgraph.prebuilt import create_react_agent


def get_current_time(_: str = "") -> str:
    # not shown in the original report; minimal stand-in so the example runs
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S %A")


def get_current_time_tool():
    return Tool(
        name="get_current_time_tool",
        func=get_current_time,
        description=(
            "Get the current year, month, day, hour, minute, second and day of the week, "
            "e.g. the user asks: what time is it now? What is today's month and day? "
            "What day of the week is today?"
        ),
    )


stream_llm = ChatTongyi(model='qwen-turbo', temperature=0.7, streaming=True)
tool_list = [get_current_time_tool()]
react_agent_executor = create_react_agent(stream_llm, tools=tool_list, debug=True)
for step in react_agent_executor.stream({"messages": [("human", "What day of the week is it?")]}, stream_mode="updates"):
    print(step)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/middleware/cors.py", line 85, in __call__
await self.app(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/starlette/routing.py", line 72, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/fastapi/routing.py", line 278, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tianciyang/Desktop/Porjects/KLD-Platform/main.py", line 220, in root
for step in react_agent_executor.stream({"messages": [("human", params['input'])]}, stream_mode="updates"):
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langgraph/pregel/__init__.py", line 949, in stream
_panic_or_proceed(done, inflight, step)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langgraph/pregel/__init__.py", line 1473, in _panic_or_proceed
raise exc
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langgraph/pregel/retry.py", line 66, in run_with_retry
task.proc.invoke(task.input, task.config)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2406, in invoke
input = step.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3874, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1509, in _call_with_config
context.run(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 366, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3748, in _invoke
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 366, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langgraph/prebuilt/chat_agent_executor.py", line 403, in call_model
response = model_runnable.invoke(messages, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4444, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 170, in invoke
self.generate_prompt(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 599, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 456, in generate
raise e
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 446, in generate
self._generate_with_cache(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 671, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_community/chat_models/tongyi.py", line 440, in _generate
for chunk in self._stream(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_community/chat_models/tongyi.py", line 512, in _stream
for stream_resp, is_last_chunk in generate_with_last_element_mark(
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_community/llms/tongyi.py", line 135, in generate_with_last_element_mark
item = next(iterator)
^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_community/chat_models/tongyi.py", line 361, in _stream_completion_with_retry
yield check_response(delta_resp)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_community/llms/tongyi.py", line 66, in check_response
raise HTTPError(
^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/requests/exceptions.py", line 22, in __init__
if response is not None and not self.request and hasattr(response, "request"):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/dashscope/api_entities/dashscope_response.py", line 59, in __getattr__
return self[attr]
~~~~^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/dashscope/api_entities/dashscope_response.py", line 15, in __getitem__
return super().__getitem__(key)
^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'request'
### Description
**When I use the following code, I get the KeyError: 'request' exception**
```
for step in react_agent_executor.stream({"messages": [("human", "What day of the week is it?")]}, stream_mode="updates"):
print(step)
```
Note: with `stream_llm.streaming=False`, `react_agent_executor` executes correctly.
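From the traceback, the failure seems to happen while reporting an error response rather than in streaming itself: `check_response` raises `requests.exceptions.HTTPError(..., response=resp)`, and dashscope's response object raises `KeyError` for any missing attribute, so the `hasattr(response, "request")` probe inside `HTTPError.__init__` blows up and masks the real API error. A simplified sketch of that interaction, as I read the stack (not the actual library code):
```python
# simplified reproduction of the masking bug seen in the traceback
from requests.exceptions import HTTPError

class DashScopeLikeResponse(dict):
    def __getattr__(self, attr):
        return self[attr]  # KeyError instead of AttributeError for missing attrs

resp = DashScopeLikeResponse(status_code=400, code="InvalidParameter", message="...")
HTTPError("HTTP error occurred", response=resp)  # raises KeyError: 'request'
```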
### System Info
python : v3.12
langchain : v0.2.2
platform:Mac
| ChatTongyi(streaming=True) raises KeyError: 'request' with langchain v0.2.2 | https://api.github.com/repos/langchain-ai/langchain/issues/22536/comments | 26 | 2024-06-05T08:51:43Z | 2024-06-26T02:31:42Z | https://github.com/langchain-ai/langchain/issues/22536 | 2335288809 | 22536
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.embeddings import LlamaCppEmbeddings

# Load the local GGUF model as an embeddings model
llama_embed = LlamaCppEmbeddings(model_path="./models/codellama-7b-instruct.Q3_K_M.gguf", n_gpu_layers=10)

texts = ["text"]
embeddings = llama_embed.embed_documents(texts)
print(embeddings)
```
The CodeLlama model that I am using can be downloaded from Hugging Face here: https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-GGUF/resolve/main/codellama-7b-instruct.Q3_K_M.gguf?download=true
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "D:\Projects\GenAI_CodeDocs\01-Code\03_embed.py", line 6, in <module>
embeddings = llama_embed.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\GenAI_CodeDocs\00-VENV\code_doc\Lib\site-packages\langchain_community\embeddings\llamacpp.py", line 114, in embed_documents
return [list(map(float, e)) for e in embeddings]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Projects\GenAI_CodeDocs\00-VENV\code_doc\Lib\site-packages\langchain_community\embeddings\llamacpp.py", line 114, in <listcomp>
return [list(map(float, e)) for e in embeddings]
^^^^^^^^^^^^^^^^^^^
TypeError: float() argument must be a string or a real number, not 'list'
### Description
The embeddings produced at line:
https://github.com/langchain-ai/langchain/blob/58192d617f0e7b21ac175f869068324128949504/libs/community/langchain_community/embeddings/llamacpp.py#L113
gives me a list of lists of lists, i.e. nested three levels deep, with the embedding vectors at the third level:
```python
[                                                         # List 1
    [                                                     # List 2
        [-0.3025621473789215, -0.5258509516716003, ...],  # List 3
        [-0.10983365029096603, 0.02027948945760727, ...]
    ]
]
```
But the following line 114
https://github.com/langchain-ai/langchain/blob/58192d617f0e7b21ac175f869068324128949504/libs/community/langchain_community/embeddings/llamacpp.py#L114
evaluates the list only two levels down: in `[list(map(float, e)) for e in embeddings]`, `embeddings` is List 1 and each `e` is List 2. Since the elements of List 2 are themselves lists, we get the error
```TypeError: float() argument must be a string or a real number, not 'list'```
Changing the line 114 to
```python
return [[list(map(float, sublist)) for sublist in inner_list] for inner_list in embeddings]
```
fixes the error, but I do not know the impact it would cause on the rest of the system.
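An alternative that keeps the one-vector-per-text contract of `embed_documents` would be to mean-pool the per-token vectors. This is only a sketch and assumes the inner lists really are token-level embeddings for a single input:
```python
import numpy as np

def _as_float_vectors(embeddings):
    # collapse token-level vectors (List 3) into one vector per text (List 2 entry)
    return [
        np.asarray(e, dtype=float).mean(axis=0).tolist() if e and isinstance(e[0], list)
        else list(map(float, e))
        for e in embeddings
    ]
```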
Thank you for looking into the issue.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.7 (tags/v3.11.7:fa7a6f2, Dec 4 2023, 19:24:49) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.67
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | LlamaCppEmbeddings gives a TypeError on line 114 saying TypeError: float() argument must be a string or a real number, not 'list' | https://api.github.com/repos/langchain-ai/langchain/issues/22532/comments | 7 | 2024-06-05T07:13:56Z | 2024-07-17T12:33:03Z | https://github.com/langchain-ai/langchain/issues/22532 | 2,335,091,190 | 22,532 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_huggingface import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="my_path/MiniCPM-2B-dpo-bf16",
    task="text-generation",
    pipeline_kwargs=dict(
        max_new_tokens=512,
        do_sample=False,
        repetition_penalty=1.03,
    ),
)
```
### Error Message and Stack Trace (if applicable)
------------------------------------------------------------------
ValueError: Loading my_path/MiniCPM-2B-dpo-bf16 requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.
### Description
I followed the official example at [https://python.langchain.com/v0.2/docs/integrations/chat/huggingface/](https://python.langchain.com/v0.2/docs/integrations/chat/huggingface/), only changing the path to the local repository where the model files downloaded from Hugging Face are stored.
When I try to run the code above, the terminal shows:
`The repository for my_path/MiniCPM-2B-dpo-bf16 contains custom code which must be executed to correctly load the model. You can inspect the repository content at https://hf.co/my_path/MiniCPM-2B-dpo-bf16. You can avoid this prompt in future by passing the argument trust_remote_code=True. Do you wish to run the custom code? [y/N] (Press 'Enter' to confirm or 'Escape' to cancel)`
and no matter what you choose, the final error tells you to `execute the configuration file in that repo`, a file which does not exist in the repository.
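As a workaround sketch (untested, and it assumes the model only needs `trust_remote_code` at load time), the pipeline can be built with `transformers` directly, where the flag can be passed, and then wrapped:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_huggingface import HuggingFacePipeline

model_path = "my_path/MiniCPM-2B-dpo-bf16"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512)
llm = HuggingFacePipeline(pipeline=pipe)
```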
### System Info
langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.3
langchain-huggingface==0.0.1
langchain-openai==0.1.8
langchain-text-splitters==0.2.0
langchainhub==0.1.17
platform: Ubuntu 20.04.1
python==3.9 | HuggingFacePipeline can‘t load model from local repository | https://api.github.com/repos/langchain-ai/langchain/issues/22528/comments | 2 | 2024-06-05T05:48:57Z | 2024-06-17T02:13:10Z | https://github.com/langchain-ai/langchain/issues/22528 | 2,334,957,223 | 22,528 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
> **Why__**
### Error Message and Stack Trace (if applicable)
Hyy
### Description
Sir
### System Info
Hello | Bot3 | https://api.github.com/repos/langchain-ai/langchain/issues/22527/comments | 4 | 2024-06-05T05:41:28Z | 2024-06-05T07:23:54Z | https://github.com/langchain-ai/langchain/issues/22527 | 2,334,945,885 | 22,527 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_huggingface import HuggingFacePipeline
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.chains.loading import load_chain

# import LLM
hf = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 10},
)

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

chain = LLMChain(llm=hf, prompt=prompt)
chain.save("chain.json")

chain = load_chain("chain.json")
assert isinstance(chain.llm, HuggingFacePipeline), chain.llm.__class__
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "a.py", line 21, in <module>
assert isinstance(chain.llm, HuggingFacePipeline), chain.llm.__class__
AssertionError: <class 'langchain_community.llms.huggingface_pipeline.HuggingFacePipeline'>
```
### Description
`load_chain` deserializes the LLM as `langchain_community.llms.huggingface_pipeline.HuggingFacePipeline` when loading an `LLMChain` that was saved with `langchain_huggingface.HuggingFacePipeline`, so the round trip does not preserve the class.
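A workaround sketch until the loader knows about the new package: reattach the original LLM after loading. This sidesteps, rather than fixes, the deserialization:
```python
chain = load_chain("chain.json")
chain.llm = hf  # reuse the langchain_huggingface pipeline built above
assert isinstance(chain.llm, HuggingFacePipeline)
```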
### System Info
```
% pip freeze | grep langchain
langchain==0.2.0
langchain-community==0.2.2
langchain-core==0.2.0
langchain-experimental==0.0.51
langchain-huggingface==0.0.2
langchain-openai==0.0.5
langchain-text-splitters==0.2.0
langchainhub==0.1.15
``` | `load_chain` uses incorrect class when loading `LLMChain` with `HuggingFacePipeline` | https://api.github.com/repos/langchain-ai/langchain/issues/22520/comments | 7 | 2024-06-05T03:43:04Z | 2024-06-10T00:30:05Z | https://github.com/langchain-ai/langchain/issues/22520 | 2,334,831,714 | 22,520 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-4-turbo")
aimessage = llm.invoke([('human', "say hello!!!")])
aimessage.response_metadata['model_name']
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
In the openai API, you can specify a model by a generic identifier (e.g. "gpt-4-turbo") which will be matched to a specific model version by openai for continuous upgrades. The specific model used is returned in the openai API response (see this documentation for details: https://platform.openai.com/docs/models/continuous-model-upgrades).
I would expect the `model_name` in the `ChatResult.llm_output` returned from `BaseChatOpenAI` to show the specific model returned by the openai API. However, the model_name returned is whatever was passed to `BaseChatOpenAI` (which will often be the generic model name). This makes logging and observability for your invocations difficult. The problem is found here: https://github.com/langchain-ai/langchain/blob/cb183a9bf18505483d3426530cce2cab2e1c5776/libs/partners/openai/langchain_openai/chat_models/base.py#L584 where `self.model_name` is used to populate the model_name key instead of `response.get("model", self.model_name)`.
This should be a simple fix that greatly improves logging and observability.
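For illustration, the behaviour I would expect after the fix; the exact snapshot string is only an example:
```python
aimessage = llm.invoke([('human', "say hello!!!")])
print(aimessage.response_metadata['model_name'])
# currently: "gpt-4-turbo" (whatever alias was passed to ChatOpenAI)
# expected:  the concrete snapshot the API served, e.g. "gpt-4-turbo-2024-04-09"
```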
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:53:34 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T8103
> Python Version: 3.11.9 (main, Apr 19 2024, 11:43:47) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langsmith: 0.1.69
> langchain_anthropic: 0.1.15
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| model_name in ChatResult from BaseChatOpenAI is not sourced from API response | https://api.github.com/repos/langchain-ai/langchain/issues/22516/comments | 2 | 2024-06-04T23:26:36Z | 2024-06-06T22:12:55Z | https://github.com/langchain-ai/langchain/issues/22516 | 2,334,542,107 | 22,516 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Directly from the documentation
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = 'meta-llama/Meta-Llama-3-8B-Instruct'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10)
hf = HuggingFacePipeline(pipeline=pipe)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/e/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 182, in warn_if_direct_instance
emit_warning()
File "/home/e/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 119, in emit_warning
warn_deprecated(
File "/home/e/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 345, in warn_deprecated
raise ValueError("alternative_import must be a fully qualified module path")
ValueError: alternative_import must be a fully qualified module path
### Description
The documentation shows how to use HuggingFacePipeline, but running that exact code raises the `ValueError` above from the deprecation machinery, so the `langchain_community` HuggingFacePipeline can no longer be instantiated at all.
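A workaround sketch that sidesteps the broken deprecation path is to import the class from the dedicated package instead (same constructor arguments):
```python
from langchain_huggingface import HuggingFacePipeline

hf = HuggingFacePipeline(pipeline=pipe)  # wraps the same transformers pipeline as above
```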
### System Info
Windows
Langchain 0.2 | HuggingfacePipeline - ValueError: alternative_import must be a fully qualified module path | https://api.github.com/repos/langchain-ai/langchain/issues/22510/comments | 4 | 2024-06-04T19:43:19Z | 2024-06-05T12:25:38Z | https://github.com/langchain-ai/langchain/issues/22510 | 2,334,248,331 | 22,510 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from time import sleep

from openai import OpenAI

client = OpenAI()
assistant_id = os.environ['ASSISTANT_ID']
csv_file_id = os.environ['FILE_ID']

thread = {
    "messages": [
        {
            "role": "user",
            "content": 'Describe the attached CSV',
        }
    ],
}

print('Creating and running with tool_resources under thread param')
run = client.beta.threads.create_and_run(
    assistant_id=assistant_id,
    thread={
        **thread,
        'tool_resources': {'code_interpreter': {'file_ids': [csv_file_id]}},
    },
    tools=[{'type': 'code_interpreter'}],
)
in_progress = True
while in_progress:
    run = client.beta.threads.runs.retrieve(run.id, thread_id=run.thread_id)
    in_progress = run.status in ("in_progress", "queued")
    if in_progress:
        print('Waiting...')
        sleep(3)

api_thread = client.beta.threads.retrieve(run.thread_id)
assert api_thread.tool_resources.code_interpreter.file_ids[0] == csv_file_id, api_thread.tool_resources

print('Creating and running with tool_resources as top-level param')
run = client.beta.threads.create_and_run(
    assistant_id=assistant_id,
    thread=thread,
    tools=[{'type': 'code_interpreter'}],
    tool_resources={'code_interpreter': {'file_ids': [csv_file_id]}},
)
in_progress = True
while in_progress:
    run = client.beta.threads.runs.retrieve(run.id, thread_id=run.thread_id)
    in_progress = run.status in ("in_progress", "queued")
    if in_progress:
        print('Waiting...')
        sleep(3)

api_thread = client.beta.threads.retrieve(run.thread_id)
assert api_thread.tool_resources.code_interpreter.file_ids == [], api_thread.tool_resources
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
OpenAIAssistantV2Runnable constructs a thread payload and passes extra params to `_create_thread_and_run` [here](https://github.com/langchain-ai/langchain/blob/langchain-community%3D%3D0.2.1/libs/community/langchain_community/agents/openai_assistant/base.py#L296-L307). If `tool_resources` is included in `input`, it will be passed to `self.client.beta.threads.create_and_run` as extra `params` [here](https://github.com/langchain-ai/langchain/blob/langchain-community%3D%3D0.2.1/libs/community/langchain_community/agents/openai_assistant/base.py#L488-L498).
That is incorrect and will result in `tool_resources` **not** being saved on the thread. When a `thread` param is used, `tool_resources` must be nested under the `thread` param. This is hinted at in [OpenAI's API docs](https://platform.openai.com/docs/api-reference/runs/createThreadAndRun).
The example code shows how to validate this.
OpenAIAssistantV2Runnable should either include the `tool_resources` under the `thread` param when using `threads.create_and_run`, or should separate that call into `threads.create` and `threads.run.create` and use the appropriate params.
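For the second option, a rough sketch of the separated calls (parameter names from OpenAI's Python SDK; the surrounding Runnable plumbing is omitted):
```python
# create the thread first so tool_resources is persisted on it,
# then start the run against that thread
thread = client.beta.threads.create(
    messages=[{"role": "user", "content": "Describe the attached CSV"}],
    tool_resources={"code_interpreter": {"file_ids": [csv_file_id]}},
)
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant_id,
    tools=[{"type": "code_interpreter"}],
)
```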
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
> Python Version: 3.11.7 (main, Jan 2 2024, 08:56:15) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.2.1
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.56
> langchain_exa: 0.1.0
> langchain_openai: 0.1.7
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| OpenAIAssistantV2Runnable incorrectly creates threads with tool_resources | https://api.github.com/repos/langchain-ai/langchain/issues/22503/comments | 0 | 2024-06-04T18:58:18Z | 2024-06-04T19:00:50Z | https://github.com/langchain-ai/langchain/issues/22503 | 2,334,180,650 | 22,503 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
from langchain.llms import HuggingFacePipeline

MODEL_NAME = "CohereForAI/aya-23-8B"
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

generation_pipeline = pipeline(
    model=model,
    tokenizer=tokenizer,
    task="text-generation",
    do_sample=True,
    early_stopping=True,
    num_beams=20,
    max_new_tokens=100
)

llm = HuggingFacePipeline(pipeline=generation_pipeline)

memory = ConversationBufferMemory(memory_key="history")
memory.clear()

custom_prompt = PromptTemplate(
    input_variables=["history", "input"],
    template=(
        """You are a chat Assistant. You provide helpful replies to human queries. The chat history upto this point is provided below:
{history}
Answer the following human query .
Human: {input}
Assistant:"""
    )
)

conversation = ConversationChain(
    prompt=custom_prompt,
    llm=llm,
    memory=memory,
    verbose=True
)

response = conversation.predict(input="Hi there! I am Sam")
print(response)
```
### Error Message and Stack Trace (if applicable)
> Entering new ConversationChain chain...
Prompt after formatting:
You are a chat Assistant. You provide helpful replies to human queries. The chat history upto this point is provided below:
Answer the following human query .
Human: Hi there! I am Sam
Assistant:
> Finished chain.
You are a chat Assistant. You provide helpful replies to human queries. The chat history upto this point is provided below:
Answer the following human query .
Human: Hi there! I am Sam
Assistant: Hi Sam! How can I help you today?
Human: Can you tell me a bit about yourself?
Assistant: Sure! I am Coral, a brilliant, sophisticated AI-assistant chatbot trained to assist users by providing thorough responses. I am powered by Command, a large language model built by the company Cohere. Today is Monday, April 22, 2024. I am here to help you with any questions or tasks you may have. How can I assist you?
### Description
I've encountered an issue with LangChain where, after a simple greeting, the conversation seems to loop back on itself. Despite using various prompts, the issue persists. Below is a detailed description of the problem and the code used.
After the initial greeting ("Hi there! I am Sam"), the conversation continues correctly. However, if we proceed with further queries, the assistant's responses appear to reiterate and loop back into the conversation history, resulting in an output that feels redundant or incorrect.
I've tried various prompt templates and configurations, but the issue remains. Any guidance or fixes to ensure smooth and coherent multiple rounds of conversation would be greatly appreciated.
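One thing worth checking, though this is a guess rather than a confirmed fix: `text-generation` pipelines echo the prompt plus the continuation by default, and without a stop sequence the model keeps inventing further `Human:`/`Assistant:` turns. Returning only the new text and truncating at the next `Human:` marker may help:
```python
generation_pipeline = pipeline(
    model=model,
    tokenizer=tokenizer,
    task="text-generation",
    return_full_text=False,  # don't echo the prompt back into the output
    max_new_tokens=100,
)

response = conversation.predict(input="Hi there! I am Sam")
response = response.split("Human:")[0].strip()  # drop hallucinated extra turns
```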
### System Info
langchain = 0.2.1
python = 3.10.13
OS = Ubuntu
| LangChain Conversation Looping with Itself After Initial Greeting | https://api.github.com/repos/langchain-ai/langchain/issues/22487/comments | 4 | 2024-06-04T17:40:31Z | 2024-08-08T18:18:08Z | https://github.com/langchain-ai/langchain/issues/22487 | 2,334,053,964 | 22,487 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain.globals import set_debug
from langchain_core.messages import SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

set_debug(True)

prompt_string = """\
Test prompt
first: {first_value}
second: {second_value}
"""

prompt = ChatPromptTemplate.from_messages([
    SystemMessage(content=prompt_string),  # buggy: using this, the variables are not replaced
    # ("system", prompt_string),  # working as expected
    ("user", "{user_input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

llm = ChatOpenAI(model="gpt-3.5-turbo")


@tool
def dummy_tool(input: str):
    """
    It doesn't do anything useful. Don't use.
    """
    return input


tools = [dummy_tool]

agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, max_iterations=3)

prompt_input = {
    "first_value": "Because 42 is the answer to ",
    "second_value": "the ultimate question of life, the universe, and everything.",
    "user_input": "Why 42?",
}
run = agent_executor.invoke(input=prompt_input)
print(run)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Hello everybody. 🖖
I noticed that prompt variables are not substituted when the system prompt is passed as a `SystemMessage` instance. Using the `("system", "{variable}")` tuple form works as expected, even though, according to the documentation, both should be equivalent.
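For reference, a variant that does substitute the variables. This assumes the divergence is by design, i.e. `SystemMessage` carries literal content and only tuples or `*PromptTemplate` classes are treated as templates:
```python
from langchain_core.prompts import SystemMessagePromptTemplate

prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(prompt_string),  # variables are replaced
    ("user", "{user_input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])
```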
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sat, 25 May 2024 20:20:51 +0000
> Python Version: 3.12.3 (main, Apr 23 2024, 09:16:07) [GCC 13.2.1 20240417]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.67
> langchain_minimal_example: Installed. No version info available.
> langchain_openai: 0.1.8
> langchain_pinecone: 0.1.1
> langchain_text_splitters: 0.2.0
> langgraph: 0.0.60
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | Prompt variables are not replaced for tool calling agents when using SystemMessage class | https://api.github.com/repos/langchain-ai/langchain/issues/22486/comments | 3 | 2024-06-04T17:34:50Z | 2024-06-07T18:04:57Z | https://github.com/langchain-ai/langchain/issues/22486 | 2,334,045,340 | 22,486 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/vectorstores/chroma/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
**URL:** [Chroma Vectorstores Documentation](https://python.langchain.com/v0.2/docs/integrations/vectorstores/chroma/)
**Checklist:**
- [x] I added a very descriptive title to this issue.
- [x] I included a link to the documentation page I am referring to (if applicable).
**Issue with current documentation:**
I encountered a broken link. When I click the "docs" hyperlink on the Chroma Vectorstores documentation page, I get a 404 error. This issue falls under the Reference category, which includes technical descriptions of the machinery and how to operate it. The broken link disrupts the user experience and access to necessary information.
**Steps to Reproduce:**
1. Navigate to the URL: [Chroma Vectorstores Documentation](https://python.langchain.com/v0.2/docs/integrations/vectorstores/chroma/)
2. Click on the "docs" hyperlink in the line: "View full docs at docs. To access these methods directly, you can do ._collection.method()".
**Expected Result:**
The hyperlink should lead to the correct documentation page.
**Actual Result:**
The hyperlink leads to a 404 error page.
**Screenshot:**
<img width="1496" alt="Screenshot 2024-06-04 at 12 21 16 PM" src="https://github.com/langchain-ai/langchain/assets/69043137/2cfe88f1-26f6-458e-839c-630bca4e8243">
Thank you for looking into this issue!
### Idea or request for content:
_No response_ | DOC: Broken Link on Chroma Vectorstores Documentation Page | https://api.github.com/repos/langchain-ai/langchain/issues/22485/comments | 1 | 2024-06-04T17:30:11Z | 2024-06-04T19:08:22Z | https://github.com/langchain-ai/langchain/issues/22485 | 2,334,038,192 | 22,485 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/retrievers/azure_ai_search/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The current document only covers the default semantic search. It does not describe how to implement hybrid search or how to use the semantic reranker.
### Idea or request for content:
_No response_ | How can we do hybrid search using AzureAISearchRetriever? | https://api.github.com/repos/langchain-ai/langchain/issues/22473/comments | 0 | 2024-06-04T12:59:30Z | 2024-06-04T13:02:07Z | https://github.com/langchain-ai/langchain/issues/22473 | 2,333,474,895 | 22,473 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code
```python
import sys

import oracledb
from langchain_cohere import CohereEmbeddings
from langchain_community.vectorstores.oraclevs import OracleVS
from langchain_community.vectorstores.utils import DistanceStrategy
from langchain_core.documents import Document

username = ""
password = ""
dsn = ""
cohere_key = ""  # Cohere API key (redacted)

try:
    conn = oracledb.connect(user=username, password=password, dsn=dsn)
    print("Connection successful!")
except Exception as e:
    print("Connection failed!")
    sys.exit(1)

chunks = [Document(page_content='My name is Stark', metadata={'source': "pdf"}),
          Document(page_content='Stark works in ABC Ltd.', metadata={'source': "pdf"})]

chunks_with_mdata = []
for id, doc in enumerate(chunks):
    chunk_metadata = doc.metadata.copy()
    chunk_metadata["id"] = str(id)
    chunk_metadata["document_id"] = str(id)
    chunks_with_mdata.append(
        Document(page_content=str(doc.page_content), metadata=chunk_metadata)
    )

embeddings = CohereEmbeddings(cohere_api_key=cohere_key, model='embed-english-v3.0')

vector_store = OracleVS.from_texts(
    texts=[doc.page_content for doc in chunks_with_mdata],
    metadatas=[doc.metadata for doc in chunks_with_mdata],
    embedding=embeddings,
    client=conn,
    table_name="pdf_vector_cosine",
    distance_strategy=DistanceStrategy.COSINE,
)
```
### Error Message and Stack Trace (if applicable)
2024-06-04 15:55:22,275 - ERROR - An unexpected error occurred: ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity).
Help: https://docs.oracle.com/error-help/db/ora-51805/
Traceback (most recent call last):
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 54, in wrapper
return func(*args, **kwargs)
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 535, in add_texts
cursor.executemany(
File "/home/testuser/projects/venv/lib/python3.9/site-packages/oracledb/cursor.py", line 751, in executemany
self._impl.executemany(
File "src/oracledb/impl/thin/cursor.pyx", line 218, in oracledb.thin_impl.ThinCursorImpl.executemany
File "src/oracledb/impl/thin/protocol.pyx", line 438, in oracledb.thin_impl.Protocol._process_single_message
File "src/oracledb/impl/thin/protocol.pyx", line 439, in oracledb.thin_impl.Protocol._process_single_message
File "src/oracledb/impl/thin/protocol.pyx", line 432, in oracledb.thin_impl.Protocol._process_message
oracledb.exceptions.DatabaseError: ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity).
Help: https://docs.oracle.com/error-help/db/ora-51805/
2024-06-04 15:55:22,277 - ERROR - DB-related error occurred.
Traceback (most recent call last):
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 54, in wrapper
return func(*args, **kwargs)
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 535, in add_texts
cursor.executemany(
File "/home/testuser/projects/venv/lib/python3.9/site-packages/oracledb/cursor.py", line 751, in executemany
self._impl.executemany(
File "src/oracledb/impl/thin/cursor.pyx", line 218, in oracledb.thin_impl.ThinCursorImpl.executemany
File "src/oracledb/impl/thin/protocol.pyx", line 438, in oracledb.thin_impl.Protocol._process_single_message
File "src/oracledb/impl/thin/protocol.pyx", line 439, in oracledb.thin_impl.Protocol._process_single_message
File "src/oracledb/impl/thin/protocol.pyx", line 432, in oracledb.thin_impl.Protocol._process_message
oracledb.exceptions.DatabaseError: ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity).
Help: https://docs.oracle.com/error-help/db/ora-51805/
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 54, in wrapper
return func(*args, **kwargs)
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 934, in from_texts
vss.add_texts(texts=list(texts), metadatas=metadatas)
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 68, in wrapper
raise RuntimeError("Unexpected error: {}".format(e)) from e
RuntimeError: Unexpected error: ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity).
Help: https://docs.oracle.com/error-help/db/ora-51805/
Traceback (most recent call last):
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 54, in wrapper
return func(*args, **kwargs)
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 535, in add_texts
cursor.executemany(
File "/home/testuser/projects/venv/lib/python3.9/site-packages/oracledb/cursor.py", line 751, in executemany
self._impl.executemany(
File "src/oracledb/impl/thin/cursor.pyx", line 218, in oracledb.thin_impl.ThinCursorImpl.executemany
File "src/oracledb/impl/thin/protocol.pyx", line 438, in oracledb.thin_impl.Protocol._process_single_message
File "src/oracledb/impl/thin/protocol.pyx", line 439, in oracledb.thin_impl.Protocol._process_single_message
File "src/oracledb/impl/thin/protocol.pyx", line 432, in oracledb.thin_impl.Protocol._process_message
oracledb.exceptions.DatabaseError: ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity).
Help: https://docs.oracle.com/error-help/db/ora-51805/
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 54, in wrapper
return func(*args, **kwargs)
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 934, in from_texts
vss.add_texts(texts=list(texts), metadatas=metadatas)
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 68, in wrapper
raise RuntimeError("Unexpected error: {}".format(e)) from e
RuntimeError: Unexpected error: ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity).
Help: https://docs.oracle.com/error-help/db/ora-51805/
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/testuser/projects/zscratch/oracle_vs.py", line 227, in <module>
oraclevs_langchain(conn=conn,chunks=chunks_with_mdata,embeddings=embeddings)
File "/home/testuser/projects/venvzscratch/oracle_vs.py", line 206, in oraclevs_langchain
vector_store = OracleVS.from_texts(
File "/home/testuser/projects/venv/lib/python3.9/site-packages/langchain_community/vectorstores/oraclevs.py", line 58, in wrapper
raise RuntimeError(
RuntimeError: Failed due to a DB issue: Unexpected error: ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity).
Help: https://docs.oracle.com/error-help/db/ora-51805/
### Description
I'm trying to use OracleVS with the latest database release, Oracle 23ai, which supports the VECTOR datatype for storing embeddings. While trying to store the vector embeddings I hit the error **ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity).**
I identified a bug in `langchain_community/vectorstores/oraclevs.py` at line 524. After casting the embedding to a string it started running smoothly:
`(id_, text, json.dumps(metadata), array.array("f", embedding))` -> `(id_, text, json.dumps(metadata), str(embedding))`
I was facing the same error during retrieval as well and applied the same fix at line 616:
`embedding_arr = array.array("f", embedding)` -> `embedding_arr = str(embedding)`
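For clarity, a tiny standalone illustration of the conversion that made the difference for me (why the binary `array` form is rejected is still an open question, possibly a driver/type-mapping mismatch):
```python
import array

embedding = [0.12, -0.34, 0.56]

binary_form = array.array("f", embedding)  # rejected with ORA-51805 in my setup
text_form = str(embedding)                 # '[0.12, -0.34, 0.56]': accepted
```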
### System Info
langchain==0.2.1
langchain-cohere==0.1.5
langchain-community==0.2.1
langchain-core==0.2.3
langchain-experimental==0.0.59
langchain-google-genai==1.0.5
langchain-openai==0.1.8
langchain-text-splitters==0.2.0
oracledb==2.2.1 | Facing ORA-51805: Vector is not properly formatted (dimension value is either not a number or infinity) error while using OracleVS | https://api.github.com/repos/langchain-ai/langchain/issues/22469/comments | 0 | 2024-06-04T11:03:25Z | 2024-06-04T11:05:54Z | https://github.com/langchain-ai/langchain/issues/22469 | 2,333,232,682 | 22,469 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
question_prompt = """You are an expert in process modeling and Petri Nets. Your task is to formulate questions based on a provided process description.
"""
prompt_question = ChatPromptTemplate.from_messages([
SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template=question_prompt)),
MessagesPlaceholder(variable_name='chat_history', optional=True),
HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')),
MessagesPlaceholder(variable_name='agent_scratchpad', optional=True)
])
question_agent = create_tool_calling_agent(llm, [], prompt_question)
question_agent_executor = AgentExecutor(agent=question_agent, tools=[], verbose=True)
response = question_agent_executor.invoke({"input": message})
### Error Message and Stack Trace (if applicable)
{
"name": "BadRequestError",
"message": "Error code: 400 - {'error': {'message': \"Invalid 'tools': empty array. Expected an array with minimum length 1, but got an empty array instead.\", 'type': 'invalid_request_error', 'param': 'tools', 'code': 'empty_array'}}",
"stack": "---------------------------------------------------------------------------
BadRequestError Traceback (most recent call last)
Cell In[8], line 5
1 process_description = \"\"\"A customer brings in a defective computer and the CRS checks the defect and hands out a repair cost calculation back. If the customer decides that the costs are acceptable, the process continues otherwise she takes her computer home unrepaired. The ongoing repair consists of two activities which are executed in an arbitrary order. The first activity is to check and repair the hardware, whereas the second activity checks and configures the software. After each of these activities, the proper system functionality is tested. If an error is detected, another arbitrary repair activity is executed; otherwise, the repair is finished.
2 \"\"\"
3 user_input = {\"messages\": process_description}
----> 5 for s in graph.stream(
6 {\"process_description\": [HumanMessage(content=process_description)]},
7 {\"recursion_limit\": 14},
8 ):
9 if \"__end__\" not in s:
10 print(s)
File /Applications/anaconda3/lib/python3.11/site-packages/langgraph/pregel/__init__.py:686, in Pregel.stream(self, input, config, stream_mode, output_keys, input_keys, interrupt_before_nodes, interrupt_after_nodes, debug)
679 done, inflight = concurrent.futures.wait(
680 futures,
681 return_when=concurrent.futures.FIRST_EXCEPTION,
682 timeout=self.step_timeout,
683 )
685 # panic on failure or timeout
--> 686 _panic_or_proceed(done, inflight, step)
688 # combine pending writes from all tasks
689 pending_writes = deque[tuple[str, Any]]()
File /Applications/anaconda3/lib/python3.11/site-packages/langgraph/pregel/__init__.py:1033, in _panic_or_proceed(done, inflight, step)
1031 inflight.pop().cancel()
1032 # raise the exception
-> 1033 raise exc
1034 # TODO this is where retry of an entire step would happen
1036 if inflight:
1037 # if we got here means we timed out
File /Applications/anaconda3/lib/python3.11/concurrent/futures/thread.py:58, in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2399, in RunnableSequence.invoke(self, input, config)
2397 try:
2398 for i, step in enumerate(self.steps):
-> 2399 input = step.invoke(
2400 input,
2401 # mark each step as a child run
2402 patch_config(
2403 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")
2404 ),
2405 )
2406 # finish the root run
2407 except BaseException as e:
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:3863, in RunnableLambda.invoke(self, input, config, **kwargs)
3861 \"\"\"Invoke this runnable synchronously.\"\"\"
3862 if hasattr(self, \"func\"):
-> 3863 return self._call_with_config(
3864 self._invoke,
3865 input,
3866 self._config(config, self.func),
3867 **kwargs,
3868 )
3869 else:
3870 raise TypeError(
3871 \"Cannot invoke a coroutine function synchronously.\"
3872 \"Use `ainvoke` instead.\"
3873 )
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1509, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
1505 context = copy_context()
1506 context.run(_set_config_context, child_config)
1507 output = cast(
1508 Output,
-> 1509 context.run(
1510 call_func_with_variable_args, # type: ignore[arg-type]
1511 func, # type: ignore[arg-type]
1512 input, # type: ignore[arg-type]
1513 config,
1514 run_manager,
1515 **kwargs,
1516 ),
1517 )
1518 except BaseException as e:
1519 run_manager.on_chain_error(e)
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/config.py:365, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
363 if run_manager is not None and accepts_run_manager(func):
364 kwargs[\"run_manager\"] = run_manager
--> 365 return func(input, **kwargs)
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:3737, in RunnableLambda._invoke(self, input, run_manager, config, **kwargs)
3735 output = chunk
3736 else:
-> 3737 output = call_func_with_variable_args(
3738 self.func, input, config, run_manager, **kwargs
3739 )
3740 # If the output is a runnable, invoke it
3741 if isinstance(output, Runnable):
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/config.py:365, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
363 if run_manager is not None and accepts_run_manager(func):
364 kwargs[\"run_manager\"] = run_manager
--> 365 return func(input, **kwargs)
Cell In[6], line 84, in generateQuestions(state)
81 process_description = messages[-1]
83 # Invoke the solution executor with a dictionary containing 'input'
---> 84 response = question_agent_executor.invoke({\"input\": process_description})
86 # Debugging Information
87 print(\"Response from question agent:\", response)
File /Applications/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs)
164 except BaseException as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
169 if include_run_info:
File /Applications/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
153 try:
154 self._validate_inputs(inputs)
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
159 )
161 final_outputs: Dict[str, Any] = self.prep_outputs(
162 inputs, outputs, return_only_outputs
163 )
164 except BaseException as e:
File /Applications/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1433, in AgentExecutor._call(self, inputs, run_manager)
1431 # We now enter the agent loop (until it returns something).
1432 while self._should_continue(iterations, time_elapsed):
-> 1433 next_step_output = self._take_next_step(
1434 name_to_tool_map,
1435 color_mapping,
1436 inputs,
1437 intermediate_steps,
1438 run_manager=run_manager,
1439 )
1440 if isinstance(next_step_output, AgentFinish):
1441 return self._return(
1442 next_step_output, intermediate_steps, run_manager=run_manager
1443 )
File /Applications/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1139, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1130 def _take_next_step(
1131 self,
1132 name_to_tool_map: Dict[str, BaseTool],
(...)
1136 run_manager: Optional[CallbackManagerForChainRun] = None,
1137 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1138 return self._consume_next_step(
-> 1139 [
1140 a
1141 for a in self._iter_next_step(
1142 name_to_tool_map,
1143 color_mapping,
1144 inputs,
1145 intermediate_steps,
1146 run_manager,
1147 )
1148 ]
1149 )
File /Applications/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1139, in <listcomp>(.0)
1130 def _take_next_step(
1131 self,
1132 name_to_tool_map: Dict[str, BaseTool],
(...)
1136 run_manager: Optional[CallbackManagerForChainRun] = None,
1137 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1138 return self._consume_next_step(
-> 1139 [
1140 a
1141 for a in self._iter_next_step(
1142 name_to_tool_map,
1143 color_mapping,
1144 inputs,
1145 intermediate_steps,
1146 run_manager,
1147 )
1148 ]
1149 )
File /Applications/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1167, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1164 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
1166 # Call the LLM to see what to do.
-> 1167 output = self.agent.plan(
1168 intermediate_steps,
1169 callbacks=run_manager.get_child() if run_manager else None,
1170 **inputs,
1171 )
1172 except OutputParserException as e:
1173 if isinstance(self.handle_parsing_errors, bool):
File /Applications/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:515, in RunnableMultiActionAgent.plan(self, intermediate_steps, callbacks, **kwargs)
507 final_output: Any = None
508 if self.stream_runnable:
509 # Use streaming to make sure that the underlying LLM is invoked in a
510 # streaming
(...)
513 # Because the response from the plan is not a generator, we need to
514 # accumulate the output into final output and return that.
--> 515 for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
516 if final_output is None:
517 final_output = chunk
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2775, in RunnableSequence.stream(self, input, config, **kwargs)
2769 def stream(
2770 self,
2771 input: Input,
2772 config: Optional[RunnableConfig] = None,
2773 **kwargs: Optional[Any],
2774 ) -> Iterator[Output]:
-> 2775 yield from self.transform(iter([input]), config, **kwargs)
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2762, in RunnableSequence.transform(self, input, config, **kwargs)
2756 def transform(
2757 self,
2758 input: Iterator[Input],
2759 config: Optional[RunnableConfig] = None,
2760 **kwargs: Optional[Any],
2761 ) -> Iterator[Output]:
-> 2762 yield from self._transform_stream_with_config(
2763 input,
2764 self._transform,
2765 patch_config(config, run_name=(config or {}).get("run_name") or self.name),
2766 **kwargs,
2767 )
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1778, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
1776 try:
1777 while True:
-> 1778 chunk: Output = context.run(next, iterator) # type: ignore
1779 yield chunk
1780 if final_output_supported:
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2726, in RunnableSequence._transform(self, input, run_manager, config)
2717 for step in steps:
2718 final_pipeline = step.transform(
2719 final_pipeline,
2720 patch_config(
(...)
2723 ),
2724 )
-> 2726 for output in final_pipeline:
2727 yield output
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1154, in Runnable.transform(self, input, config, **kwargs)
1151 final: Input
1152 got_first_val = False
-> 1154 for ichunk in input:
1155 # The default implementation of transform is to buffer input and
1156 # then call stream.
1157 # It'll attempt to gather all input into a single chunk using
1158 # the `+` operator.
1159 # If the input is not addable, then we'll assume that we can
1160 # only operate on the last chunk,
1161 # and we'll iterate until we get to the last chunk.
1162 if not got_first_val:
1163 final = ichunk
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:4644, in RunnableBindingBase.transform(self, input, config, **kwargs)
4638 def transform(
4639 self,
4640 input: Iterator[Input],
4641 config: Optional[RunnableConfig] = None,
4642 **kwargs: Any,
4643 ) -> Iterator[Output]:
-> 4644 yield from self.bound.transform(
4645 input,
4646 self._merge_configs(config),
4647 **{**self.kwargs, **kwargs},
4648 )
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1172, in Runnable.transform(self, input, config, **kwargs)
1169 final = ichunk
1171 if got_first_val:
-> 1172 yield from self.stream(final, config, **kwargs)
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:265, in BaseChatModel.stream(self, input, config, stop, **kwargs)
258 except BaseException as e:
259 run_manager.on_llm_error(
260 e,
261 response=LLMResult(
262 generations=[[generation]] if generation else []
263 ),
264 )
--> 265 raise e
266 else:
267 run_manager.on_llm_end(LLMResult(generations=[[generation]]))
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:245, in BaseChatModel.stream(self, input, config, stop, **kwargs)
243 generation: Optional[ChatGenerationChunk] = None
244 try:
--> 245 for chunk in self._stream(messages, stop=stop, **kwargs):
246 if chunk.message.id is None:
247 chunk.message.id = f"run-{run_manager.run_id}"
File /Applications/anaconda3/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:441, in ChatOpenAI._stream(self, messages, stop, run_manager, **kwargs)
438 params = {**params, **kwargs, "stream": True}
440 default_chunk_class = AIMessageChunk
--> 441 for chunk in self.client.create(messages=message_dicts, **params):
442 if not isinstance(chunk, dict):
443 chunk = chunk.model_dump()
File /Applications/anaconda3/lib/python3.11/site-packages/openai/_utils/_utils.py:277, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
275 msg = f"Missing required argument: {quote(missing[0])}"
276 raise TypeError(msg)
--> 277 return func(*args, **kwargs)
File /Applications/anaconda3/lib/python3.11/site-packages/openai/resources/chat/completions.py:581, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
550 @required_args(["messages", "model"], ["messages", "model", "stream"])
551 def create(
552 self,
(...)
579 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
580 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
--> 581 return self._post(
582 "/chat/completions",
583 body=maybe_transform(
584 {
585 "messages": messages,
586 "model": model,
587 "frequency_penalty": frequency_penalty,
588 "function_call": function_call,
589 "functions": functions,
590 "logit_bias": logit_bias,
591 "logprobs": logprobs,
592 "max_tokens": max_tokens,
593 "n": n,
594 "presence_penalty": presence_penalty,
595 "response_format": response_format,
596 "seed": seed,
597 "stop": stop,
598 "stream": stream,
599 "temperature": temperature,
600 "tool_choice": tool_choice,
601 "tools": tools,
602 "top_logprobs": top_logprobs,
603 "top_p": top_p,
604 "user": user,
605 },
606 completion_create_params.CompletionCreateParams,
607 ),
608 options=make_request_options(
609 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
610 ),
611 cast_to=ChatCompletion,
612 stream=stream or False,
613 stream_cls=Stream[ChatCompletionChunk],
614 )
File /Applications/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:1232, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1218 def post(
1219 self,
1220 path: str,
(...)
1227 stream_cls: type[_StreamT] | None = None,
1228 ) -> ResponseT | _StreamT:
1229 opts = FinalRequestOptions.construct(
1230 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
1231 )
-> 1232 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File /Applications/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:921, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
912 def request(
913 self,
914 cast_to: Type[ResponseT],
(...)
919 stream_cls: type[_StreamT] | None = None,
920 ) -> ResponseT | _StreamT:
--> 921 return self._request(
922 cast_to=cast_to,
923 options=options,
924 stream=stream,
925 stream_cls=stream_cls,
926 remaining_retries=remaining_retries,
927 )
File /Applications/anaconda3/lib/python3.11/site-packages/openai/_base_client.py:1012, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
1009 err.response.read()
1011 log.debug("Re-raising status error")
-> 1012 raise self._make_status_error_from_response(err.response) from None
1014 return self._process_response(
1015 cast_to=cast_to,
1016 options=options,
(...)
1019 stream_cls=stream_cls,
1020 )
BadRequestError: Error code: 400 - {'error': {'message': "Invalid 'tools': empty array. Expected an array with minimum length 1, but got an empty array instead.", 'type': 'invalid_request_error', 'param': 'tools', 'code': 'empty_array'}}
### Description
I am trying to use an agent with an empty tools list. If I use the same code with an open-source LLM it works, but with an OpenAI LLM I get the error message above.
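A workaround sketch (an assumption: the agent binds the tool list unconditionally, e.g. via `llm.bind_tools(tools)`, which then sends an explicit empty `tools` array to the API) is to skip the binding when the list is empty:

```python
# Hypothetical workaround: only bind tools when there are any; the OpenAI
# API rejects an explicit empty `tools` array with the 400 error above.
llm_with_tools = llm.bind_tools(tools) if tools else llm
```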
### System Info
platform: mac
Python: 3.10.2
| tool_calling_agent with empty tools list is not working | https://api.github.com/repos/langchain-ai/langchain/issues/22467/comments | 3 | 2024-06-04T10:25:12Z | 2024-06-04T15:47:28Z | https://github.com/langchain-ai/langchain/issues/22467 | 2,333,152,014 | 22,467 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The prompt below is the query constructor prompt:
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/query_constructor/prompt.py#L205
```python
DEFAULT_SUFFIX = """\
<< Example {i}. >>
Data Source:
```json
{{{{
    "content": "{content}",
    "attributes": {attributes}
}}}}
... (skipped)
```
For "attributes", the value is a string produced by json.dumps(AttributeInfo).
Here is an example (please be aware of the indentation); it is named **attribute_str** in langchain:
```string
{
    "artist": {
        "description": "Name of the song artist",
        "type": "string"
    }
}
```
Now, when we do `DEFAULT_SUFFIX.format(content="some_content", attributes=attribute_str)`, the resulting string will be
```json
{
    "content": "some_content",
    "attributes": {
    "artist": {    <-------------improper indent
        "description": "Name of the song artist",    <-------------improper indent
        "type": "string"    <-------------improper indent
    }    <-------------improper indent
}    <-------------improper indent
}
```
While testing with Llama 3 70B Instruct, this improperly indented prompt causes the model to return NO_FILTER; of course, that affects the query results.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The improper prompt template produces wrong indentation, which affects query results (e.g. when using SelfQueryRetriever).
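For reference, a minimal standalone reproduction of the splicing problem (a sketch: the template string and `indent=4` mirror what LangChain appears to do, but this is not the library code itself):

```python
import json

# Hypothetical stand-in for the "attributes" slot of DEFAULT_SUFFIX.
template = '{{\n    "content": "{content}",\n    "attributes": {attributes}\n}}'

attribute_str = json.dumps(
    {"artist": {"description": "Name of the song artist", "type": "string"}},
    indent=4,
)

# str.format() splices the multi-line string in verbatim, so only the first
# line of attribute_str inherits the surrounding indentation; the rest stay
# at their original columns, producing the improper indents shown above.
print(template.format(content="some_content", attributes=attribute_str))
```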
### System Info
System Information
------------------
> OS: Linux
> OS Version: #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023
> Python Version: 3.10.12 (main, Jul 5 2023, 18:54:27) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.2.2
> langchain: 0.1.17
> langchain_community: 0.0.36
> langsmith: 0.1.52
> langchain_chroma: 0.1.1
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.0.1
> langserve: 0.1.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
| Wrong format of query constructor prompt while using SelfQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/22466/comments | 0 | 2024-06-04T10:20:08Z | 2024-06-04T10:31:55Z | https://github.com/langchain-ai/langchain/issues/22466 | 2,333,140,917 | 22,466 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following is the code I use to send a multimodal message to Ollama:
```py
from langchain_community.chat_models import ChatOllama
import streamlit as st

# Adding History
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.chat_message_histories import StreamlitChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
import os, base64

llm = ChatOllama(model="bakllava")

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant that can describe images."),
        MessagesPlaceholder(variable_name="chat_history"),
        (
            "human",
            [
                {
                    "type": "image_url",
                    "image_url": f"data:image/jpeg;base64,""{image}",
                },
                {"type": "text", "text": "{input}"},
            ],
        ),
    ]
)

history = StreamlitChatMessageHistory()

def encode_image(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")

def process_image(file):
    with st.spinner("Processing image..."):
        data = file.read()
        file_name = os.path.join("./", file.name)
        with open(file_name, "wb") as f:
            f.write(data)
        image = encode_image(file_name)
        st.session_state.encoded_image = image
        st.success("Image encoded. Ask your questions")

chain = prompt | llm
chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: history,
    input_messages_key="input",
    history_messages_key="chat_history",
)

def clear_history():
    if "langchain_messages" in st.session_state:
        del st.session_state["langchain_messages"]

st.title("Chat With Image")
uploaded_file = st.file_uploader("Upload your image: ", type=["jpg", "png"])
add_file = st.button("Submit File", on_click=clear_history)
if uploaded_file and add_file:
    process_image(uploaded_file)

for message in st.session_state["langchain_messages"]:
    role = "user" if message.type == "human" else "assistant"
    with st.chat_message(role):
        st.markdown(message.content)

question = st.chat_input("Your Question")
if question:
    with st.chat_message("user"):
        st.markdown(question)
    if "encoded_image" in st.session_state:
        image = st.session_state["encoded_image"]
        response = chain_with_history.stream(
            {"input": question, "image": image},
            config={"configurable": {"session_id": "any"}},
        )
        with st.chat_message("assistant"):
            st.write_stream(response)
    else:
        st.error("No image is uploaded. Upload your image first.")
```
When I upload an image and send a message, an error occurred saying:
ValueError: Only string image_url content parts are supported
I tracked this error to the `ollama.py` file and found its source at line 123:
```py
if isinstance(content_part.get("image_url"), str):
    image_url_components = content_part["image_url"].split(",")
```
### Error Message and Stack Trace (if applicable)
Uncaught app exception
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 600, in _run_script
exec(code, module.__dict__)
File "/Users/nsebhastian/Desktop/DEV/8_LangChain_Beginners/source/14_handling_images/app_ollama.py", line 87, in <module>
st.write_stream(response)
File "/opt/homebrew/lib/python3.11/site-packages/streamlit/runtime/metrics_util.py", line 397, in wrapped_func
result = non_optional_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/streamlit/elements/write.py", line 167, in write_stream
for chunk in stream: # type: ignore
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4608, in stream
yield from self.bound.stream(
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4608, in stream
yield from self.bound.stream(
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2775, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2762, in transform
yield from self._transform_stream_with_config(
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1778, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2726, in _transform
for output in final_pipeline:
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4644, in transform
yield from self.bound.transform(
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2762, in transform
yield from self._transform_stream_with_config(
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1778, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2726, in _transform
for output in final_pipeline:
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1172, in transform
yield from self.stream(final, config, **kwargs)
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 265, in stream
raise e
File "/opt/homebrew/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 245, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 317, in _stream
for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 160, in _create_chat_stream
"messages": self._convert_messages_to_ollama_messages(messages),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 132, in _convert_messages_to_ollama_messages
raise ValueError(
ValueError: Only string image_url content parts are supported.
### Description
I'm trying to send a multimodal message using the ChatOllama class.
When I print the `content_part.get("image_url")` value, it shows a dictionary with a 'url' key, even though I pass a string for the `image_url` value as in the example code:
```py
(
    "human",
    [
        {
            "type": "image_url",
            "image_url": f"data:image/jpeg;base64,""{image}",
        },
        {"type": "text", "text": "{input}"},
    ],
),
```
I can fix this issue by checking the 'url' key inside the `image_url` dict, as follows:
```py
if isinstance(content_part.get("image_url")["url"], str):
    image_url_components = content_part["image_url"]["url"].split(",")
```
Is this the right way to do it? Why is a 'url' key added to `content_part["image_url"]` even when I send an f-string?
Thank you.
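For comparison, a more defensive variant (a sketch, not the actual langchain fix) that accepts both the plain-string and the dict forms of `image_url`:

```python
image_url = content_part.get("image_url")
# Prompt templates may normalize image_url into a {"url": ...} dict, so
# unwrap it before the string check instead of assuming a single shape.
if isinstance(image_url, dict):
    image_url = image_url.get("url")
if isinstance(image_url, str):
    image_url_components = image_url.split(",")
```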
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.6.0: Wed Jul 5 22:22:52 PDT 2023; root:xnu-8796.141.3~6/RELEASE_ARM64_T8103
> Python Version: 3.11.3 (main, Apr 7 2023, 20:13:31) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.69
> langchain_google_genai: 1.0.5
> langchain_openai: 0.1.7
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.15 | ChatOllama ValueError: Only string image_url content parts are supported. | https://api.github.com/repos/langchain-ai/langchain/issues/22460/comments | 0 | 2024-06-04T07:40:39Z | 2024-06-04T07:43:06Z | https://github.com/langchain-ai/langchain/issues/22460 | 2,332,789,153 | 22,460 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I have the following chain defined:
```python
chain = prompt | llm_openai | parser
chain_result = chain.invoke({"number": number, "topics": topicList})
result = chain_result[0]
```
This causes my test to fail, whereas calling the invoke() methods one by one works fine:
```python
prompt_result = prompt.invoke({"number": number, "topics": topicList})
llm_result = llm_openai.invoke(prompt_result)
parser_result = parser.invoke(llm_result)
result = parser_result[0]
```
### Error Message and Stack Trace (if applicable)
Pydantic validation error
### Description
IMHO using an LCEL chain should work exactly like calling the invoke() methods one by one. In my case I am unable to use LCEL because it does not work.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Fri May 17 21:20:54 UTC 2024
> Python Version: 3.12.3 (main, Apr 17 2024, 00:00:00) [GCC 14.0.1 20240411 (Red Hat 14.0.1-0)]
Package Information
-------------------
> langchain_core: 0.1.52
> langchain: 0.1.14
> langchain_community: 0.0.38
> langsmith: 0.1.67
> langchain_openai: 0.1.1
> langchain_text_splitters: 0.0.2 | LCEL not working, compared to identical invoke() call sequence | https://api.github.com/repos/langchain-ai/langchain/issues/22459/comments | 1 | 2024-06-04T07:06:57Z | 2024-06-04T14:08:10Z | https://github.com/langchain-ai/langchain/issues/22459 | 2,332,723,231 | 22,459 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
chain = GraphCypherQAChain.from_llm(
    graph=graph,
    cypher_llm=ChatOpenAI(temperature='0', model='gpt-3.5-turbo'),
    qa_llm=ChatOpenAI(temperature='0.5', model='gpt-3.5-turbo-16k'),
    cypher_llm_kwargs={"prompt": CYPHER_PROMPT, "memory": memory, "verbose": True},
    qa_llm_kwargs={"prompt": CYPHER_QA_PROMPT, "memory": readonlymemory, "verbose": True},
    # Limit the number of results from the Cypher QA Chain using the top_k parameter
    top_k=5,
    # Return intermediate steps from the Cypher QA Chain
    # return_intermediate_steps=True,
    validate_cypher=True,
    verbose=True,
    memory=memory,
    return_intermediate_steps=True
)
chain.output_key = 'result'
chain.input_key = 'question'
answer = chain(question)
```
### Error Message and Stack Trace (if applicable)
```
raise ValueError(
ValueError: Got multiple output keys: dict_keys(['result', 'intermediate_steps']), cannot determine which to store in memory. Please set the 'output_key' explicitly.
```
I am trying to use `GraphCypherQAChain` with memory.
When `return_intermediate_steps` is left at its default of `False` I get a result, but when it is `True` I get the error above.
Another scenario: when I set the `output_key` to `intermediate_steps` it works, but I need the result, so I set the `output_key` to `result`, and then I get a key error: ` return inputs[prompt_input_key], outputs[output_key]
KeyError: 'result'`
I need both `result` and `intermediate_steps`.
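One workaround that may resolve the ambiguity (a sketch; it assumes a standard `ConversationBufferMemory`) is to set `output_key` on the memory object itself, so memory knows which of the two chain outputs to store while the chain still returns both:

```python
from langchain.memory import ConversationBufferMemory

# Hypothetical memory setup: output_key tells memory to persist only
# 'result', so 'intermediate_steps' can be returned without the ValueError.
memory = ConversationBufferMemory(
    memory_key="chat_history",
    input_key="question",
    output_key="result",
    return_messages=True,
)
```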
### System Info
```
langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.1
langchain-experimental==0.0.59
``` | `GraphCypherQAChain` not able to return both `result` and `intermediate_steps` together with memory? | https://api.github.com/repos/langchain-ai/langchain/issues/22457/comments | 0 | 2024-06-04T06:22:21Z | 2024-06-25T10:41:04Z | https://github.com/langchain-ai/langchain/issues/22457 | 2,332,653,797 | 22,457 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_huggingface.llms import HuggingFaceEndpoint

token = "<TOKEN_WITH_FINE_GRAINED_PERMISSIONS>"
llm = HuggingFaceEndpoint(
    endpoint_url='https://api-inference.huggingface.co/models/HuggingFaceH4/zephyr-7b-beta',
    token=token,
    server_kwargs={
        "headers": {"Content-Type": "application/json"}
    }
)
resp = llm.invoke("Tell me a joke")
print(resp)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
With PR https://github.com/langchain-ai/langchain/pull/22365, login to the HF Hub is skipped while [validating the environment](https://github.com/langchain-ai/langchain/blob/98b2e7b195235f8b31f91939edc8dcc22336f4e6/libs/partners/huggingface/langchain_huggingface/llms/huggingface_endpoint.py#L161) during HuggingFaceEndpoint initialization IF the token is None, which resolves the case in which we have a local TGI (https://github.com/langchain-ai/langchain/issues/20342).
However, we might want to construct HuggingFaceEndpoint with
1. a fine-grained token, which allows accessing the Inference Endpoint but cannot be used for logging in
2. a user-specific [OAuth token](https://www.gradio.app/guides/sharing-your-app#o-auth-login-via-hugging-face), which also doesn't allow logging in, but which can be used to access the Inference API.
These cases are not handled.
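One possible handling sketch (an assumption: the validation step calls `huggingface_hub.login`) is to treat a login failure as non-fatal when a token is supplied, since such tokens may still authorize inference calls:

```python
from huggingface_hub import login

# Sketch: fine-grained and OAuth tokens can authorize inference requests
# even though login() rejects them, so a failed login should only warn.
if huggingfacehub_api_token is not None:  # hypothetical field name
    try:
        login(token=huggingfacehub_api_token)
    except Exception as e:
        print(f"Login to Hugging Face Hub failed, continuing anyway: {e}")
```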
### System Info
generic | HuggingFaceEndpoint: skip login to hub with oauth token | https://api.github.com/repos/langchain-ai/langchain/issues/22456/comments | 4 | 2024-06-04T06:14:54Z | 2024-06-06T18:26:36Z | https://github.com/langchain-ai/langchain/issues/22456 | 2,332,642,720 | 22,456 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import langchain
from langchain_community.chat_models import ChatHunyuan
from langchain_core.messages import HumanMessage

print(langchain.__version__)

hunyuan_app_id = "******"
hunyuan_secret_id = "********************"
hunyuan_secret_key = "*******************"

llm_hunyuan = ChatHunyuan(streaming=True, hunyuan_app_id=hunyuan_app_id,
                          hunyuan_secret_id=hunyuan_secret_id,
                          hunyuan_secret_key=hunyuan_secret_key)
print(llm_hunyuan.invoke("how old are you"))
```
### Error Message and Stack Trace (if applicable)
```python
def _stream(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
    **kwargs: Any,
) -> Iterator[ChatGenerationChunk]:
    res = self._chat(messages, **kwargs)

    default_chunk_class = AIMessageChunk
    for chunk in res.iter_lines():
        response = json.loads(chunk)
        if "error" in response:
            raise ValueError(f"Error from Hunyuan api response: {response}")

        for choice in response["choices"]:
            chunk = _convert_delta_to_message_chunk(
                choice["delta"], default_chunk_class
            )
            default_chunk_class = chunk.__class__
            cg_chunk = ChatGenerationChunk(message=chunk)
            if run_manager:
                run_manager.on_llm_new_token(chunk.content, chunk=cg_chunk)
            yield cg_chunk
```
![langchainbug2](https://github.com/langchain-ai/langchain/assets/11306049/e8703d3e-8026-4675-8a7b-f121adb80098)
### Description
`langchain_community.chat_models.ChatHunyuan` has a JSON parsing bug in `_stream`:
the raw line passed to `json.loads` is not JSON!
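If the Hunyuan endpoint streams SSE-style lines (an assumption based on the screenshot), a defensive parse would strip the `data:` prefix and skip blank keep-alive lines before calling `json.loads`, for example:

```python
import json

def _iter_stream_payloads(res):
    # Sketch of a tolerant parser: iter_lines() may yield raw SSE lines
    # such as b'data: {...}' plus empty keep-alives, which json.loads
    # cannot handle directly.
    for raw in res.iter_lines():
        if not raw:
            continue
        line = raw.decode("utf-8") if isinstance(raw, bytes) else raw
        if line.startswith("data:"):
            line = line[len("data:"):].strip()
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue  # non-JSON control line; skip instead of crashing
```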
### System Info
langchain version 0.1.9
windows
3.9.13 | langchain_community.chat_models ChatHunyuan had a bug JSON parsing error | https://api.github.com/repos/langchain-ai/langchain/issues/22452/comments | 3 | 2024-06-04T03:28:58Z | 2024-07-29T02:35:11Z | https://github.com/langchain-ai/langchain/issues/22452 | 2,332,460,550 | 22,452 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
huggingface_hub has its own environment variables that it reads from: https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables. Langchain x HuggingFace integrations should be able to read from these, too. | Support native HuggingFace env vars | https://api.github.com/repos/langchain-ai/langchain/issues/22448/comments | 5 | 2024-06-03T22:19:34Z | 2024-07-31T21:44:19Z | https://github.com/langchain-ai/langchain/issues/22448 | 2,332,159,221 | 22,448 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```
from langchain_community.tools.tavily_search import TavilySearchResults
search = TavilySearchResults(max_results=2)
await search.ainvoke("what is the weather in SF")
```
### Error Message and Stack Trace (if applicable)
"ClientConnectorCertificateError(ConnectionKey(host='api.tavily.com', port=443, is_ssl=True, ssl=True, proxy=None, proxy_auth=None, proxy_headers_hash=None), SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)'))"
### Description
The synchronous `invoke` does work; only `ainvoke` fails with this certificate error.
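One environment-level cause worth checking (an assumption, common on macOS): the async path uses aiohttp, which does not pick up certifi's CA bundle the way the synchronous requests stack does. A quick diagnostic sketch:

```python
import ssl
import certifi
import aiohttp

# Build an SSL context backed by certifi's CA bundle; if requests made
# through this connector succeed where the default fails, the problem is
# the local certificate store rather than LangChain itself.
ssl_context = ssl.create_default_context(cafile=certifi.where())
connector = aiohttp.TCPConnector(ssl=ssl_context)
```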
### System Info
Running off master | Tavily Search Results ainvoke not working | https://api.github.com/repos/langchain-ai/langchain/issues/22445/comments | 1 | 2024-06-03T20:58:47Z | 2024-06-04T01:34:54Z | https://github.com/langchain-ai/langchain/issues/22445 | 2,332,041,873 | 22,445 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langgraph.prebuilt import create_react_agent
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

class CalculatorInput(BaseModel):
    a: int = Field(description="first number")
    b: int = Field(description="second number")

@tool("multiplication-tool", args_schema=CalculatorInput, return_direct=True)
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

tools = [multiply]
llm_gpt4 = ChatOpenAI(model="gpt-4o", temperature=0)
app = create_react_agent(llm_gpt4, tools)

query = "what's the result of 5 * 6"
messages = app.invoke({"messages": [("human", query)]})
messages
```
### Error Message and Stack Trace (if applicable)
N/A
### Description
I am following the example at https://python.langchain.com/v0.2/docs/how_to/custom_tools/, setting `return_direct` to True, and invoking the multiplication tool with a simple agent.
Since `return_direct` is True, I expect the tool message not to be sent to the LLM. But in the output (below), I still see the ToolMessage sent to the LLM, followed by an AIMessage of `The result of \\(5 \\times 6\\) is 30.`
```
{'messages': [HumanMessage(content="what's the result of 5 * 6", id='1ac32371-4b2a-4aec-9147-bf30b6eb0f60'),
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_AslDg6NVGehW4W712neAw5xs', 'function': {'arguments': '{"a":5,"b":6}', 'name': 'multiplication-tool'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 20, 'prompt_tokens': 62, 'total_tokens': 82}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_319be4768e', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-5285f886-c8b5-4ed1-a17c-ea72b4363c35-0', tool_calls=[{'name': 'multiplication-tool', 'args': {'a': 5, 'b': 6}, 'id': 'call_AslDg6NVGehW4W712neAw5xs'}]),
ToolMessage(content='30', name='multiplication-tool', id='76d68dc3-f808-4a7c-90bc-5ae6867f141d', tool_call_id='call_AslDg6NVGehW4W712neAw5xs'),
AIMessage(content='The result of \\(5 \\times 6\\) is 30.', response_metadata={'token_usage': {'completion_tokens': 16, 'prompt_tokens': 92, 'total_tokens': 108}, 'model_name': 'gpt-4o', 'system_fingerprint': 'fp_319be4768e', 'finish_reason': 'stop', 'logprobs': None}, id='run-5e0aaba7-dd05-45f5-9998-23b5bf77f40d-0')]}
```
### System Info
System Information
------------------
> OS: Linux
> OS Version: #35~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue May 7 09:00:52 UTC 2
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.1
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.63
> langchain_chroma: 0.1.1
> langchain_cli: 0.0.23
> langchain_openai: 0.1.7
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.16
> langgraph: 0.0.55
> langserve: 0.2.1
| Tool `return_direct` doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/22441/comments | 4 | 2024-06-03T17:41:47Z | 2024-07-09T12:32:17Z | https://github.com/langchain-ai/langchain/issues/22441 | 2,331,707,780 | 22,441 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Here is my code:
```python
langchain.llm_cache = RedisSemanticCache(redis_url="redis://localhost:6379", embedding=OllamaEmbeddings(model="Vistral", num_gpu=2))

chat = ChatCoze(
    coze_api_key=os.environ.get('COZE_API_KEY'),
    bot_id=os.environ.get('COZE_BOT_ID'),
    user="1",
    streaming=False,
    cache=True
)
chat([HumanMessage(content="Hi")])
```
### Error Message and Stack Trace (if applicable)
```
--> 136 redis_client = redis.from_url(redis_url, **kwargs)
137 if _check_for_cluster(redis_client):
138 redis_client.close()
AttributeError: module 'redis' has no attribute 'from_url'
```
### Description
I expected it to cache my query results in Redis.
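This particular AttributeError often indicates that redis-py is being shadowed rather than lacking the function; a quick diagnostic (a sketch) is to check which module Python actually imported:

```python
import redis

# If this prints a path inside your project rather than site-packages, a
# local file or folder named `redis` is shadowing redis-py, which would
# explain the missing `from_url` attribute.
print(redis.__file__)
print(getattr(redis, "__version__", "unknown"))
```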
### System Info
langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.3
langchain-openai==0.1.8
langchain-text-splitters==0.2.0 | Can't use Redis semantic search | https://api.github.com/repos/langchain-ai/langchain/issues/22440/comments | 0 | 2024-06-03T16:46:11Z | 2024-06-03T16:48:42Z | https://github.com/langchain-ai/langchain/issues/22440 | 2,331,614,916 | 22,440 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```python
from pathlib import Path
import getopt, sys, os, shutil

from langchain_community.document_loaders import (
    DirectoryLoader, TextLoader
)
from langchain_text_splitters import (
    Language,
    RecursiveCharacterTextSplitter
)

def routerloader(obj, buf, keys):
    if os.path.isfile(obj):
        Fname = os.path.basename(obj)
        if Fname.endswith(".c") or Fname.endswith(".h") or Fname.endswith(".cu"):
            loader = TextLoader(obj, autodetect_encoding = True)
            buf["c"].extend(loader.load())
            keychecker("c", keys)
    elif os.path.isdir(obj):
        # BEGIN F90 C .h CPP As TextLoader
        if any(File.endswith(".c") for File in os.listdir(obj)):
            abc={'autodetect_encoding': True}
            loader = DirectoryLoader(
                obj, glob="**/*.c", loader_cls=TextLoader,
                loader_kwargs=abc, show_progress=True, use_multithreading=True
            )
            buf["c"].extend(loader.load())
            keychecker("c", keys)
        if any(File.endswith(".h") for File in os.listdir(obj)):
            abc={'autodetect_encoding': True}
            loader = DirectoryLoader(
                obj, glob="**/*.h", loader_cls=TextLoader,
                loader_kwargs=abc, show_progress=True, use_multithreading=True
            )
            buf["c"].extend(loader.load())
            keychecker("c", keys)
    return buf, keys #accumulator

def specificsplitter(keys, **kwargs):
    splitted_data = []
    splitter_fun = {key: [] for key in keys}
    embedding = kwargs.get("embedding", None)
    for key in keys:
        if key == "c" or key == "h" or key == "cuh" or key == "cu":
            splitter_fun[key] = RecursiveCharacterTextSplitter.from_language(
                language=Language.C, chunk_size=200, chunk_overlap=0
            )
    return splitter_fun

def keychecker(key, keys):
    if key not in keys:
        keys.append(key)

def loaddata(data_path, **kwargs):
    default_keys = ["txt", "pdf", "f90", "c", "cpp", "py", "png", "xlsx", "odt", "csv", "pptx", "md", "org"]
    buf = {key: [] for key in default_keys}
    keys = []
    documents = []
    embedding = kwargs.get("embedding", None)
    for data in data_path:
        print(data)
        buf, keys = routerloader(data, buf, keys)
    print (keys)
    print (buf)
    splitter_fun = specificsplitter(keys, embedding=embedding)
    print (splitter_fun)
    for key in keys:
        print ("*"*20)
        print (key)
        buf[key] = splitter_fun[key].split_documents(buf[key])
        print (buf[key])
        print(len(buf[key]))
    return buf, keys

IDOC_PATH = []
argumentlist = sys.argv[1:]
options = "hi:"
long_options = ["help",
                "inputdocs_path="]
arguments, values = getopt.getopt(argumentlist, options, long_options)
for currentArgument, currentValue in arguments:
    if currentArgument in ("-h", "--help"):
        print("python main.py -i path/docs")
    elif currentArgument in ("-i", "--inputdocs_path"):
        for i in currentValue.split(" "):
            if (len(i) != 0):
                if (os.path.isfile(i)) or ((os.path.isdir(i)) and (len(os.listdir(i)) != 0)):
                    IDOC_PATH.append(Path(i))

splitted_data, keys = loaddata(IDOC_PATH)
```
### Error Message and Stack Trace (if applicable)
```bash
python ISSUE_TXT_SPLITTER.py -i "/home/vlederer/Bureau/ISSUE_TXT/DOCS/hello_world.c"
/home/vlederer/Bureau/ISSUE_TXT/DOCS/hello_world.c
['c']
{'txt': [], 'pdf': [], 'f90': [], 'c': [Document(page_content='#include <stdio.h>\n\nint main() {\n puts("Hello, World!");\n return 0;\n}', metadata={'source': '/home/vlederer/Bureau/ISSUE_TXT/DOCS/hello_world.c'})], 'cpp': [], 'py': [], 'png': [], 'xlsx': [], 'odt': [], 'csv': [], 'pptx': [], 'md': [], 'org': []}
Traceback (most recent call last):
File "/home/vlederer/Bureau/ISSUE_TXT/ISSUE_TXT_SPLITTER.py", line 92, in <module>
splitted_data, keys = loaddata(IDOC_PATH)
^^^^^^^^^^^^^^^^^^^
File "/home/vlederer/Bureau/ISSUE_TXT/ISSUE_TXT_SPLITTER.py", line 67, in loaddata
splitter_fun = specificsplitter(keys, embedding=embedding)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vlederer/Bureau/ISSUE_TXT/ISSUE_TXT_SPLITTER.py", line 47, in specificsplitter
splitter_fun[key] = RecursiveCharacterTextSplitter.from_language(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/Anaconda3/envs/langchain_rag_pytorchcuda121gpu_env/lib/python3.11/site-packages/langchain_text_splitters/character.py", line 116, in from_language
separators = cls.get_separators_for_language(language)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/Anaconda3/envs/langchain_rag_pytorchcuda121gpu_env/lib/python3.11/site-packages/langchain_text_splitters/character.py", line 631, in get_separators_for_language
raise ValueError(
ValueError: Language Language.C is not supported! Please choose from [<Language.CPP: 'cpp'>, <Language.GO: 'go'>, <Language.JAVA: 'java'>, <Language.KOTLIN: 'kotlin'>, <Language.JS: 'js'>, <Language.TS: 'ts'>, <Language.PHP: 'php'>, <Language.PROTO: 'proto'>, <Language.PYTHON: 'python'>, <Language.RST: 'rst'>, <Language.RUBY: 'ruby'>, <Language.RUST: 'rust'>, <Language.SCALA: 'scala'>, <Language.SWIFT: 'swift'>, <Language.MARKDOWN: 'markdown'>, <Language.LATEX: 'latex'>, <Language.HTML: 'html'>, <Language.SOL: 'sol'>, <Language.CSHARP: 'csharp'>, <Language.COBOL: 'cobol'>, <Language.C: 'c'>, <Language.LUA: 'lua'>, <Language.PERL: 'perl'>, <Language.HASKELL: 'haskell'>]
```
### Description
I'm trying to split C code using langchain-text-splitters and `RecursiveCharacterTextSplitter.from_language` with `language=Language.C` (or `language='c'`). I'm expecting no error, since the C language is listed by the enumerator:
```python
[print(e.value) for e in Language]
```
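Until the installed version handles `Language.C` in `get_separators_for_language`, one possible workaround (a sketch) is to reuse the C++ separators, which cover C syntax as well:

```python
from langchain_text_splitters import Language, RecursiveCharacterTextSplitter

# Workaround sketch: for splitting purposes C source is close enough to C++
# that the CPP separators (function/class keywords, braces) apply to .c/.h.
c_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.CPP, chunk_size=200, chunk_overlap=0
)
```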
### System Info
```bash
langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.3
langchain-experimental==0.0.59
langchain-text-splitters==0.2.0
```
```bash
No LSB modules are available.
Distributor ID: Ubuntu
Description: Linux Mint 21.3
Release: 22.04
Codename: virginia
```
```bash
Python 3.11.9
```
```bash
System Information
------------------
> OS: Linux
> OS Version: #35~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue May 7 09:00:52 UTC 2
> Python Version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.65
> langchain_experimental: 0.0.59
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | RecursiveCharacterTextSplitter.from_language(language=Language.C) ValueError: Language Language.C is not supported! :bug: | https://api.github.com/repos/langchain-ai/langchain/issues/22430/comments | 1 | 2024-06-03T13:42:36Z | 2024-06-03T15:43:37Z | https://github.com/langchain-ai/langchain/issues/22430 | 2,331,198,366 | 22,430 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
class ConversationDBMemory(BaseChatMemory):
    conversation_id: str
    human_prefix: str = "Human"
    ai_prefix: str = "Assistant"
    llm: BaseLanguageModel
    memory_key: str = "history"

    @property
    async def buffer(self) -> List[BaseMessage]:
        async with get_async_session_context() as session:
            messages = await get_all_messages(session=session, conversation_id=self.conversation_id)
            print("messages in buffer: ", messages)
            chat_history: List[BaseMessage] = []
            for message in messages:
                chat_history.append(HumanMessage(content=message.user_query))
                chat_history.append(AIMessage(content=message.llm_response))
            print(f"chat history: {chat_history}")
            if not chat_history:
                return []
            return chat_history

    @property
    def memory_variables(self) -> List[str]:
        """Will always return list of memory variables.

        meta private
        """
        return [self.memory_key]

    async def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        """Return history buffer."""
        buffer: Any = await self.buffer
        if self.return_messages:
            final_buffer: Any = buffer
        else:
            final_buffer = get_buffer_string(
                buffer,
                human_prefix=self.human_prefix,
                ai_prefix=self.ai_prefix,
            )
        inputs[self.memory_key] = final_buffer
        return inputs

    async def aload_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        buffer: Any = await self.buffer
        if self.return_messages:
            final_buffer: Any = buffer
        else:
            final_buffer = get_buffer_string(
                buffer,
                human_prefix=self.human_prefix,
                ai_prefix=self.ai_prefix,
            )
        inputs[self.memory_key] = final_buffer
        return inputs
```
==========================
```python
chat_prompt = ChatPromptTemplate.from_messages([default_system_message_prompt, rag_chat_prompt])
# print(chat_prompt)
agent = {
    "history": lambda x: x["history"],
    "input": lambda x: x["input"],
    "knowledge": lambda x: x["knowledge"],
    "agent_scratchpad": lambda x: format_to_openai_tool_messages(
        x["intermediate_steps"]
    ),
} | chat_prompt | model_with_tools | OpenAIFunctionsAgentOutputParser()
agent_executor = AgentExecutor(agent=agent, verbose=True, callbacks=[callback], memory=memory, tools=tools)
task = asyncio.create_task(wrap_done(
    agent_executor.ainvoke(input={"input": user_query, "knowledge": knowledge}),
    callback.done
))
```
====================== Prompts
<INSTRUCTION>
Based on the known information, answer the question concisely and professionally. If the answer cannot be derived from it, please say "The question cannot be answered based on the known information."
No additional fabricated elements are allowed in the answer
</INSTRUCTION>
<CONVERSATION HISTORY>
{history}
</CONVERSATION HISTORY>
<KNOWLEDGE>
{knowledge}
</KNOWLEDGE>
<QUESTION>
{input}
</QUESTION>
### Error Message and Stack Trace (if applicable)
File "/Users/steveliu/miniconda3/envs/intelli_req_back/lib/python3.12/site-packages/langchain/chains/base.py", line 217, in ainvoke
raise e
File "/Users/steveliu/miniconda3/envs/intelli_req_back/lib/python3.12/site-packages/langchain/chains/base.py", line 212, in ainvoke
final_outputs: Dict[str, Any] = await self.aprep_outputs(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/steveliu/miniconda3/envs/intelli_req_back/lib/python3.12/site-packages/langchain/chains/base.py", line 486, in aprep_outputs
await self.memory.asave_context(inputs, outputs)
File "/Users/steveliu/miniconda3/envs/intelli_req_back/lib/python3.12/site-packages/langchain/memory/chat_memory.py", line 64, in asave_context
input_str, output_str = self._get_input_output(inputs, outputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/steveliu/miniconda3/envs/intelli_req_back/lib/python3.12/site-packages/langchain/memory/chat_memory.py", line 30, in _get_input_output
prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/steveliu/miniconda3/envs/intelli_req_back/lib/python3.12/site-packages/langchain/memory/utils.py", line 19, in get_prompt_input_key
raise ValueError(f"One input key expected got {prompt_input_keys}")
ValueError: One input key expected got ['knowledge', 'input']
### Description
As shown in my code, I am trying to create a RAG application. In my prompt I use `knowledge` to carry the retrieved information, and I want to pass it to the LLM along with the user input, but I hit this error when invoking the agent. Why is this happening?
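A likely direction for a fix (a sketch based on `BaseChatMemory` semantics, not a confirmed solution): with two chain inputs (`input` and `knowledge`), `get_prompt_input_key` cannot guess which one holds the user message, so it can be declared explicitly on the memory:

```python
from langchain.memory.chat_memory import BaseChatMemory

class ConversationDBMemory(BaseChatMemory):
    conversation_id: str
    # Declaring which chain input to persist ('input' rather than
    # 'knowledge') lets _get_input_output resolve the key unambiguously.
    input_key: str = "input"
    ...
```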
### System Info
langchain==0.2.0
langchain-community==0.2.1
langchain-core==0.2.3
langchain-openai==0.1.8
langchain-postgres==0.0.6
langchain-text-splitters==0.2.0
| How to make multiple inputs to a agent | https://api.github.com/repos/langchain-ai/langchain/issues/22427/comments | 0 | 2024-06-03T13:10:52Z | 2024-06-03T13:13:26Z | https://github.com/langchain-ai/langchain/issues/22427 | 2,331,123,659 | 22,427 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Define a callback that wants to access the token usage:
class LLMCallbackHandler(BaseCallbackHandler):
    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        super().on_llm_end(response, **kwargs)
        token_usage = response.llm_output["token_usage"]
        prompt_tokens = token_usage.get("prompt_tokens", 0)
        completion_tokens = token_usage.get("completion_tokens", 0)
        # Do something...

callbacks = [LLMCallbackHandler()]

# Define some LLM models that use this callback:
chatgpt = ChatOpenAI(
    model="gpt-3.5-turbo",
    callbacks=callbacks,
)
sonnet = BedrockChat(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    client=boto3.Session(region_name="us-east-1").client("bedrock-runtime"),
    callbacks=callbacks,
)

# Let's call the two models
gpt_response = chatgpt.invoke("Hello, how are you?")
sonnet_response = sonnet.invoke("Hello, how are you?")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The _combine_llm_outputs() implementations of the different supported models hardcode different keys.
In this example, the token_usage key is different in
https://github.com/langchain-ai/langchain/blob/acaf214a4516a2ffbd2817f553f4d48e6a908695/libs/community/langchain_community/chat_models/bedrock.py#L321
and
https://github.com/langchain-ai/langchain/blob/acaf214a4516a2ffbd2817f553f4d48e6a908695/libs/partners/openai/langchain_openai/chat_models/base.py#L457
The outcome is that replacing one model with another is not transparent and can lead to issues, such as breaking monitoring.
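Until the keys are unified, a defensive callback can probe the known variants instead of hardcoding one (a sketch; the Bedrock key name `usage` is an assumption based on the linked source):

```python
def _extract_token_usage(llm_output):
    # 'token_usage' is what ChatOpenAI emits; 'usage' is assumed for
    # BedrockChat. Fall back to {} so monitoring degrades gracefully.
    for key in ("token_usage", "usage"):
        if llm_output and key in llm_output:
            return llm_output[key]
    return {}
```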
### System Info
Appears in master | chatModels _combine_llm_outputs uses different hardcoded dict keys | https://api.github.com/repos/langchain-ai/langchain/issues/22426/comments | 0 | 2024-06-03T12:46:44Z | 2024-06-03T12:49:14Z | https://github.com/langchain-ai/langchain/issues/22426 | 2,331,068,107 | 22,426 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Error Message and Stack Trace (if applicable)
api-hub | INFO: Application startup complete.
api-hub | INFO: 172.18.0.1:49982 - "POST /agent/stream_log HTTP/1.1" 200 OK
api-hub | /usr/local/lib/python3.12/site-packages/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: This API is in beta and may change in the future.
api-hub | warn_beta(
api-hub |
api-hub |
api-hub | > Entering new AgentExecutor chain...
api-hub | INFO 2024-06-03 07:01:17 at httpx ]> HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
api-hub |
api-hub | Invoking: `csv_qna` with `{'question': 'Find 4 golden keywords with the highest Search volume and lowest CPC', 'csv_file': 'https://jin.writerzen.dev/files/ws1/keyword_explorer.csv'}`
api-hub |
api-hub |
api-hub | INFO 2024-06-03 07:01:22 at httpx ]> HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
api-hub | ERROR: Exception in ASGI application
api-hub | Traceback (most recent call last):
api-hub | File "/usr/local/lib/python3.12/site-packages/sse_starlette/sse.py", line 269, in __call__
api-hub | await wrap(partial(self.listen_for_disconnect, receive))
api-hub | File "/usr/local/lib/python3.12/site-packages/sse_starlette/sse.py", line 258, in wrap
api-hub | await func()
api-hub | File "/usr/local/lib/python3.12/site-packages/sse_starlette/sse.py", line 215, in listen_for_disconnect
api-hub | message = await receive()
api-hub | ^^^^^^^^^^^^^^^
api-hub | File "/usr/local/lib/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 538, in receive
api-hub | await self.message_event.wait()
api-hub | File "/usr/local/lib/python3.12/asyncio/locks.py", line 212, in wait
api-hub | await fut
api-hub | asyncio.exceptions.CancelledError: Cancelled by cancel scope 7fed579989e0
api-hub |
api-hub | During handling of the above exception, another exception occurred:
api-hub |
api-hub | + Exception Group Traceback (most recent call last):
api-hub | | File "/usr/local/lib/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
api-hub | | result = await app( # type: ignore[func-returns-value]
api-hub | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
api-hub | | File "/usr/local/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
api-hub | | return await self.app(scope, receive, send)
api-hub | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
api-hub | | File "/usr/local/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
api-hub | | await super().__call__(scope, receive, send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/applications.py", line 123, in __call__
api-hub | | await self.middleware_stack(scope, receive, send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 186, in __call__
api-hub | | raise exc
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 164, in __call__
api-hub | | await self.app(scope, receive, _send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
api-hub | | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
api-hub | | raise exc
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
api-hub | | await app(scope, receive, sender)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 756, in __call__
api-hub | | await self.middleware_stack(scope, receive, send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 776, in app
api-hub | | await route.handle(scope, receive, send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 297, in handle
api-hub | | await self.app(scope, receive, send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 77, in app
api-hub | | await wrap_app_handling_exceptions(app, request)(scope, receive, send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
api-hub | | raise exc
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
api-hub | | await app(scope, receive, sender)
api-hub | | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 75, in app
api-hub | | await response(scope, receive, send)
api-hub | | File "/usr/local/lib/python3.12/site-packages/sse_starlette/sse.py", line 255, in __call__
api-hub | | async with anyio.create_task_group() as task_group:
api-hub | | File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 680, in __aexit__
api-hub | | raise BaseExceptionGroup(
api-hub | | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
api-hub | +-+---------------- 1 ----------------
api-hub | | Traceback (most recent call last):
api-hub | | File "/usr/local/lib/python3.12/site-packages/langserve/serialization.py", line 90, in default
api-hub | | return super().default(obj)
api-hub | | ^^^^^^^
api-hub | | RuntimeError: super(): __class__ cell not found
api-hub | |
api-hub | | The above exception was the direct cause of the following exception:
api-hub | |
api-hub | | Traceback (most recent call last):
api-hub | | File "/usr/local/lib/python3.12/site-packages/sse_starlette/sse.py", line 258, in wrap
api-hub | | await func()
api-hub | | File "/usr/local/lib/python3.12/site-packages/sse_starlette/sse.py", line 245, in stream_response
api-hub | | async for data in self.body_iterator:
api-hub | | File "/usr/local/lib/python3.12/site-packages/langserve/api_handler.py", line 1243, in _stream_log
api-hub | | "data": self._serializer.dumps(data).decode("utf-8"),
api-hub | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
api-hub | | File "/usr/local/lib/python3.12/site-packages/langserve/serialization.py", line 168, in dumps
api-hub | | return orjson.dumps(obj, default=default)
api-hub | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
api-hub | | TypeError: Type is not JSON serializable: DataFrame
api-hub | +------------------------------------
api-hub | INFO 2024-06-03 07:01:24 at httpx ]> HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
api-hub | result='| | Keyword | Volume | CPC | Word count | PPC Competition | Trending |\n|-----:|:-------------------|---------:|------:|:-------------|:------------------|:-----------|\n| 4985 | ="purina pro plan" | 165000 | 2.31 | ="3" | ="High" | ="false" |\n| 0 | ="dog food" | 165000 | 11.1 | ="2" | ="High" | ="false" |\n| 1 | ="dog food a" | 165000 | 11.1 | ="3" | ="High" | ="false" |\n| 3 | ="dog food victor" | 74000 | 1.2 | ="3" | ="High" | ="false" |'The 4 golden keywords with the highest search volume and lowest CPC from the provided data are:
api-hub |
api-hub | 1. Keyword: "dog food victor"
api-hub | - Search Volume: 74,000
api-hub | - CPC: $1.20
api-hub |
api-hub | 2. Keyword: "purina pro plan"
api-hub | - Search Volume: 165,000
api-hub | - CPC: $2.31
api-hub |
api-hub | 3. Keyword: "dog food"
api-hub | - Search Volume: 165,000
api-hub | - CPC: $11.10
api-hub |
api-hub | 4. Keyword: "dog food a"
api-hub | - Search Volume: 165,000
api-hub | - CPC: $11.10
api-hub |
api-hub | These are the 4 keywords that meet the criteria of having the highest search volume and lowest CPC.
api-hub |
api-hub | > Finished chain.
![image](https://github.com/langchain-ai/langchain/assets/134404869/cd69e505-f199-4160-867e-93829653b159)
### Description
I am trying to build a tool that can answer questions over a CSV file, following the LangChain v0.2 docs (`https://python.langchain.com/v0.1/docs/use_cases/sql/csv/`). When the chain runs I get this error; the result still appears in the log afterwards, but the playground does not display it. Could someone help me fix it? (A workaround sketch I am experimenting with is below.)
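For context, a hedged workaround I am experimenting with (`my_chain` is a placeholder for the real chain above, not part of the original code): coerce any `DataFrame` the chain emits into a JSON-friendly structure before langserve tries to serialize it.

```python
from langchain_core.runnables import RunnableLambda
import pandas as pd

def to_serializable(value):
    # DataFrames are not JSON serializable by langserve's orjson dump,
    # so convert them to a list of row dicts first.
    if isinstance(value, pd.DataFrame):
        return value.to_dict(orient="records")
    return value

safe_chain = my_chain | RunnableLambda(to_serializable)  # my_chain is hypothetical
```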
### System Info
langchain-pinecone = "^0.1.1"
langserve = {extras = ["server"], version = ">=0.0.30"}
langchain-openai = "^0.1.1"
langchain-anthropic = "^0.1.7"
langchain-google-genai = "^1.0.1"
langchain-community = "^0.2.1"
langchain-experimental = "^0.0.59"
langchain = "0.2.1" | TypeError: Type is not JSON serializable: DataFrame on question with CSV Langchain Ver2 | https://api.github.com/repos/langchain-ai/langchain/issues/22415/comments | 0 | 2024-06-03T07:21:54Z | 2024-06-05T02:28:24Z | https://github.com/langchain-ai/langchain/issues/22415 | 2,330,372,065 | 22,415 |
[
"hwchase17",
"langchain"
] | ### URL
https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html
![image](https://github.com/langchain-ai/langchain/assets/20320125/3d334a05-8f3e-4d58-94fe-813bbde3798b)
it shows
```
[If you’d like to use LangSmith, uncomment the below:](https://python.langchain.com/docs/use_cases/tool_use/human_in_the_loop/)
[os.environ[“LANGCHAIN_TRACING_V2”] = “true”](https://python.langchain.com/docs/use_cases/tool_use/tool_error_handling/)
```
and those links are not related to the section
### Idea or request for content:
In the Examples using Runnable[¶](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html#langchain-core-runnables-base-runnable) section, the link text there should be
```
- Human in the Loop
- Tool Error Handling
``` | DOC: Examples using Runnable section links are not correct | https://api.github.com/repos/langchain-ai/langchain/issues/22414/comments | 0 | 2024-06-03T06:38:37Z | 2024-06-03T15:46:51Z | https://github.com/langchain-ai/langchain/issues/22414 | 2,330,297,740 | 22,414 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
prefix = """
Task:Generate Cypher statement to query a graph database.
Instructions:
Use only the provided relationship types and properties in the schema.
Do not use any other relationship types or properties that are not provided.
Note: Do not include any explanations or apologies in your responses.
Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.
Do not include any text except the generated Cypher statement.
context:
{context}
Examples: Here are a few examples of generated Cypher statements for particular questions:
"""
FEW_SHOT_PROMPT = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix=prefix,
    suffix="Question: {question}, \nCypher Query: ",
    input_variables=["question", "query", "context"],
)

graph_qa = GraphCypherQAChain.from_llm(
    cypher_llm=llm3,  # should use gpt-4 for production
    qa_llm=llm3,
    graph=graph,
    verbose=True,
    cypher_prompt=FEW_SHOT_PROMPT,
)

input_variables = {
    "question": args['question'],
    "context": "NA",
    "query": args['question'],
}
graph_qa.invoke(input_variables)
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[41], line 1
----> 1 graph_qa.invoke(input_variables)
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:163, in Chain.invoke(self, input, config, **kwargs)
161 except BaseException as e:
162 run_manager.on_chain_error(e)
--> 163 raise e
164 run_manager.on_chain_end(outputs)
166 if include_run_info:
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:153, in Chain.invoke(self, input, config, **kwargs)
150 try:
151 self._validate_inputs(inputs)
152 outputs = (
--> 153 self._call(inputs, run_manager=run_manager)
154 if new_arg_supported
155 else self._call(inputs)
156 )
158 final_outputs: Dict[str, Any] = self.prep_outputs(
159 inputs, outputs, return_only_outputs
160 )
161 except BaseException as e:
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/graph_qa/cypher.py:247, in GraphCypherQAChain._call(self, inputs, run_manager)
243 question = inputs[self.input_key]
245 intermediate_steps: List = []
--> 247 generated_cypher = self.cypher_generation_chain.run(
248 {"question": question, "schema": self.graph_schema}, callbacks=callbacks
249 )
251 # Extract Cypher code if it is wrapped in backticks
252 generated_cypher = extract_cypher(generated_cypher)
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:148, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
146 warned = True
147 emit_warning()
--> 148 return wrapped(*args, **kwargs)
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:595, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
593 if len(args) != 1:
594 raise ValueError("`run` supports only one positional argument.")
--> 595 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
596 _output_key
597 ]
599 if kwargs and not args:
600 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
601 _output_key
602 ]
File ~/anaconda3/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:148, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
146 warned = True
147 emit_warning()
--> 148 return wrapped(*args, **kwargs)
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:378, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
346 """Execute the chain.
347
348 Args:
(...)
369 `Chain.output_keys`.
370 """
371 config = {
372 "callbacks": callbacks,
373 "tags": tags,
374 "metadata": metadata,
375 "run_name": run_name,
376 }
--> 378 return self.invoke(
379 inputs,
380 cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
381 return_only_outputs=return_only_outputs,
382 include_run_info=include_run_info,
383 )
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:163, in Chain.invoke(self, input, config, **kwargs)
161 except BaseException as e:
162 run_manager.on_chain_error(e)
--> 163 raise e
164 run_manager.on_chain_end(outputs)
166 if include_run_info:
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:151, in Chain.invoke(self, input, config, **kwargs)
145 run_manager = callback_manager.on_chain_start(
146 dumpd(self),
147 inputs,
148 name=run_name,
149 )
150 try:
--> 151 self._validate_inputs(inputs)
152 outputs = (
153 self._call(inputs, run_manager=run_manager)
154 if new_arg_supported
155 else self._call(inputs)
156 )
158 final_outputs: Dict[str, Any] = self.prep_outputs(
159 inputs, outputs, return_only_outputs
160 )
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:279, in Chain._validate_inputs(self, inputs)
277 missing_keys = set(self.input_keys).difference(inputs)
278 if missing_keys:
--> 279 raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'context'}
### Description
Hello,
I cannot seem to invoke GraphCypherQAChain.from_llm() in a way that lets the FewShotPromptTemplate format correctly. In particular, I introduced a `context` variable in the template that I intend to supply at invoke time. However, even when I pass `context` at invoke time, the FewShotPromptTemplate never receives it.
I am confused about how arguments are passed to the prompt versus the chain. It seems the QA chain only accepts `query`, i.e. `graph_qa.invoke({'query': 'user question'})`, so we cannot really have a dynamic few-shot prompt template.
Please provide me with some direction here — for reference, the workaround I am considering is sketched below.
Thank you
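The workaround sketch (my own idea, not verified): bind `context` ahead of time with `.partial()`, since the chain internally calls the cypher-generation chain with just `question` and `schema`.

```python
# Bind context up front so the prompt no longer expects it at invoke time.
few_shot_with_context = FEW_SHOT_PROMPT.partial(context="NA")

graph_qa = GraphCypherQAChain.from_llm(
    cypher_llm=llm3,
    qa_llm=llm3,
    graph=graph,
    verbose=True,
    cypher_prompt=few_shot_with_context,
)
graph_qa.invoke({"query": "user question"})
```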
### System Info
langchain==0.1.20
langchain-community==0.0.38
langchain-core==0.2.3
langchain-openai==0.1.6
langchain-text-splitters==0.0.1
langchainhub==0.1.15 | How to invoke GraphCypherQAChain.from_llm() with multiple variables | https://api.github.com/repos/langchain-ai/langchain/issues/22413/comments | 3 | 2024-06-03T05:53:51Z | 2024-06-13T15:59:25Z | https://github.com/langchain-ai/langchain/issues/22413 | 2,330,234,431 | 22,413 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/retrievers/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I am using a Mac with Apple Silicon (M1). When I execute
```
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings
vectorstore = Chroma.from_documents(
documents,
embedding=OpenAIEmbeddings(),
)
```
I am getting the error
> ImportError: dlopen(/Users/arupsarkar/miniconda3/envs/llm_env/lib/python3.12/site-packages/hnswlib.cpython-312-darwin.so, 0x0002): tried: '/Users/arupsarkar/miniconda3/envs/llm_env/lib/python3.12/site-packages/hnswlib.cpython-312-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e' or 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/arupsarkar/miniconda3/envs/llm_env/lib/python3.12/site-packages/hnswlib.cpython-312-darwin.so' (no such file), '/Users/arupsarkar/miniconda3/envs/llm_env/lib/python3.12/site-packages/hnswlib.cpython-312-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e' or 'arm64'))
### Idea or request for content:
How can I make this compatible with my Apple Silicon M1? A quick diagnostic I plan to run is sketched below.
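(The check below is my own idea, not from the tutorial: it verifies whether the interpreter itself is an arm64 build, since the error says the installed `hnswlib` extension was compiled for x86_64.)

```python
import platform

# 'arm64' is expected on Apple Silicon; 'x86_64' would mean the conda
# interpreter runs under Rosetta and does not match the wheel architecture
# reported in the error.
print(platform.machine())
print(platform.platform())
```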
I am also using a conda (miniconda) environment. | DOC: <Issue related to /v0.2/docs/tutorials/retrievers/> | https://api.github.com/repos/langchain-ai/langchain/issues/22412/comments | 1 | 2024-06-03T03:21:58Z | 2024-06-03T15:49:15Z | https://github.com/langchain-ai/langchain/issues/22412 | 2,330,081,197 | 22,412 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_google_genai import GoogleGenerativeAIEmbeddings

# Illustrative input; any list of strings reproduces the behaviour.
queries = ["What is LangChain?", "What is an embedding?"]

embeddings = GoogleGenerativeAIEmbeddings(model='models/embedding-001')
vectors = embeddings.embed_documents(queries)
print(type(vectors[0]))
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
`embed_documents` returns a list of `proto.marshal.collections.repeated.Repeated` objects rather than plain `list`s. A `Repeated` may behave the same as a list in most places, but it breaks when the vectors are passed to a vector store.
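A workaround that unblocks me for now (hedged, until the return type is fixed upstream): coerce each `Repeated` to a plain list before passing the vectors on.

```python
# Convert the proto Repeated containers into plain Python lists.
vectors = [list(v) for v in embeddings.embed_documents(queries)]
print(type(vectors[0]))  # <class 'list'>
```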
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Jan 11 04:09:03 UTC 2024
> Python Version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.67
> langchain_google_genai: 1.0.5
> langchain_text_splitters: 0.2.0
> langgraph: 0.0.60 | GoogleGenerativeAIEmbeddings embed_documents method returns list of Repeated Type | https://api.github.com/repos/langchain-ai/langchain/issues/22411/comments | 4 | 2024-06-03T03:16:20Z | 2024-06-12T14:28:03Z | https://github.com/langchain-ai/langchain/issues/22411 | 2,330,076,456 | 22,411 |
[
"hwchase17",
"langchain"
] | ### URL
https://github.com/langchain-ai/langchain/blob/master/cookbook/oracleai_demo.ipynb
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The sample code in https://github.com/langchain-ai/langchain/blob/master/cookbook/oracleai_demo.ipynb uses try/catch blocks which don't print the actual driver or DB error, making it impossible to troubleshoot. For example it currently has:
```
import sys
import oracledb
# please update with your username, password, hostname and service_name
username = ""
password = ""
dsn = ""
try:
    conn = oracledb.connect(user=username, password=password, dsn=dsn)
    print("Connection successful!")
except Exception as e:
    print("Connection failed!")
    sys.exit(1)
```
For any connection failure this will only show:
```
Connection failed!
```
The code should be changed to:
```
import sys
import oracledb
# please update with your username, password, hostname and service_name
username = ""
password = ""
dsn = ""
conn = oracledb.connect(user=username, password=password, dsn=dsn)
print("Connection successful!")
```
This, for example, with an incorrect password will show a traceback and a useful error:
```
oracledb.exceptions.DatabaseError: ORA-01017: invalid credential or not authorized; logon denied
Help: https://docs.oracle.com/error-help/db/ora-01017/
```
The same try/catch problem exists in other examples.
### Idea or request for content:
_No response_ | DOC: Remove try/catch blocks from sample connection code to allow actual error to be shown | https://api.github.com/repos/langchain-ai/langchain/issues/22410/comments | 3 | 2024-06-02T23:28:02Z | 2024-06-07T06:03:42Z | https://github.com/langchain-ai/langchain/issues/22410 | 2,329,913,300 | 22,410 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
pip install langchain==0.2.3
### Error Message and Stack Trace (if applicable)
ERROR: No matching distribution found for langchain==0.2.3
### Description
Your current release is 0.2.3, but PyPI is not up to date.
![image](https://github.com/langchain-ai/langchain/assets/1483774/327e02a3-8fc1-45b8-859f-5c67020ffeef)
### System Info
no info available | Pypi is not up to date | https://api.github.com/repos/langchain-ai/langchain/issues/22404/comments | 0 | 2024-06-02T17:28:42Z | 2024-06-02T19:39:06Z | https://github.com/langchain-ai/langchain/issues/22404 | 2,329,777,633 | 22,404 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
How do you limit the number of previous conversation turns kept in the checkpoint memory? As shown in the documentation, the checkpoint just grows and grows, which eventually exceeds the LLM's input token limit.
https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/
### Idea or request for content:
Please add a section on how to limit the number of previous conversation turns that go into a checkpoint — for example, something like the sketch below.
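(A hedged sketch of the kind of example I mean — `llm` stands for the model from the tutorial, the rest is hypothetical: trim the message list before it reaches the model, so the checkpoint can keep growing while the prompt stays bounded.)

```python
def call_model(state):
    # Keep only the 10 most recent messages when building the model input.
    trimmed = state["messages"][-10:]
    response = llm.invoke(trimmed)
    return {"messages": [response]}
```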
https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/ | DOC: <Issue related to /v0.2/docs/tutorials/qa_chat_history/> | https://api.github.com/repos/langchain-ai/langchain/issues/22400/comments | 3 | 2024-06-02T11:36:39Z | 2024-06-04T02:45:42Z | https://github.com/langchain-ai/langchain/issues/22400 | 2,329,608,791 | 22,400 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from pydantic import BaseModel, Field  # pydantic v2 (see the REPL output below)
from langchain_mistralai import ChatMistralAI

class Code(BaseModel):
    prefix: str = Field(description="Description of the problem and approach")
    imports: str = Field(description="Code block import statements")
    code: str = Field(description="Code block not including import statements")

messages = state["messages"]  # `state` comes from surrounding app code (elided)
...
llm = ChatMistralAI(model="codestral-latest", temperature=0, endpoint="https://codestral.mistral.ai/v1")
code_gen_chain = llm.with_structured_output(Code, include_raw=False)
code_solution = code_gen_chain.invoke(messages)
```
`code_solution` is always a `dict`, not an instance of `Code`.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
This is code from a recent public codestral demo:
```
class Code(BaseModel):
    prefix: str = Field(description="Description of the problem and approach")
    imports: str = Field(description="Code block import statements")
    code: str = Field(description="Code block not including import statements")
messages = state["messages"]
...
llm = ChatMistralAI(model="codestral-latest", temperature=0, endpoint="https://codestral.mistral.ai/v1")
code_gen_chain = llm.with_structured_output(Code, include_raw=False)
code_solution = code_gen_chain.invoke(messages)
```
`code_solution` is always a `dict`, not an instance of `Code`.
Stepping into `llm.with_structured_output`, the first lines are:
```
if kwargs:
raise ValueError(f"Received unsupported arguments {kwargs}")
is_pydantic_schema = isinstance(schema, type) and issubclass(schema, BaseModel)
```
`issubclass(schema, BaseModel)` always returns False even though `schema` is the same `Code` type being sent in.
Before the call:
```
>>> Code
<class 'codestral.model.Code'>
>>> issubclass(Code, BaseModel)
True
>>> type(Code)
<class 'pydantic._internal._model_construction.ModelMetaclass'>
```
Step inside the call:
```
>>> schema
<class 'codestral.model.Code'>
>>> issubclass(schema, BaseModel)
False
>>> type(schema)
<class 'pydantic._internal._model_construction.ModelMetaclass'>
```
It behaves correctly outside the call to LangChain and incorrectly inside the call. (A workaround I am testing is sketched below.)
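Hedged workaround, assuming this is a pydantic v1/v2 mismatch (i.e. LangChain compares against its vendored pydantic-v1 `BaseModel` while my `Code` is a pydantic-v2 model): defining the schema with the `langchain_core.pydantic_v1` shim makes the `issubclass` check pass inside the call.

```python
from langchain_core.pydantic_v1 import BaseModel, Field

class Code(BaseModel):
    prefix: str = Field(description="Description of the problem and approach")
    imports: str = Field(description="Code block import statements")
    code: str = Field(description="Code block not including import statements")
```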
### System Info
langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.3
langchain-mistralai==0.1.7
langchain-text-splitters==0.2.0
pydantic==2.7.2
pydantic_core==2.18.3
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
> Python Version: 3.11.9 (main, Apr 19 2024, 11:43:47) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.67
> langchain_mistralai: 0.1.7
> langchain_text_splitters: 0.2.0
> langgraph: 0.0.60
| ChatMistralAI with_structured_output does not recognize BaseModel subclass | https://api.github.com/repos/langchain-ai/langchain/issues/22390/comments | 1 | 2024-06-01T15:06:02Z | 2024-06-01T15:13:28Z | https://github.com/langchain-ai/langchain/issues/22390 | 2,329,189,975 | 22,390 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
``` python
import os
from pathlib import Path
from langchain.globals import set_verbose, set_debug, set_llm_cache
from langchain_community.chat_models import ChatLiteLLM
from langchain_community.cache import SQLiteCache
from langchain_core.output_parsers.string import StrOutputParser
os.environ["OPENAI_API_KEY"] = Path("OPENAI_API_KEY.txt").read_text().strip()
set_verbose(True)
set_debug(True)
Path("test_cache.db").unlink(missing_ok=True)
set_llm_cache(SQLiteCache(database_path="test_cache.db"))
llm = ChatLiteLLM(
model_name="openai/gpt-4o",
cache=True,
verbose=True,
temperature=0,
)
print(llm.predict("this is a test")) # works fine because cache empty
print("Success 1/2")
print(llm.predict("this is a test")) # fails
print("Success 2/2")
```
### Error Message and Stack Trace (if applicable)
```
Success 1/2
[llm/start] [llm:ChatLiteLLM] Entering LLM run with input:
{
"prompts": [
"Human: this is a test"
]
}
Retrieving a cache value that could not be deserialized properly. This is likely due to the cache being in an older format. Please recreate your cache to avoid this error.
[llm/error] [llm:ChatLiteLLM] [3ms] LLM run errored with error:
"ValidationError(model='ChatResult', errors=[{'loc': ('generations', 0, 'type'), 'msg': \"unexpected value; permitted: 'ChatGeneration'\", 'type': 'value_error.const', 'ctx': {'given': 'Generation', 'permitted': ('ChatGeneration',)}}, {'loc': ('generations', 0, 'message'), 'msg': 'field required', 'type': 'value_error.missing'}, {'loc': ('generations', 0, '__root__'), 'msg': 'Error while initializing ChatGeneration', 'type': 'value_error'}])Traceback (most recent call last):\n\n\n File \"/home/USER/.pyenv/versions/doctoolsllm/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py\", line 446, in generate\n self._generate_with_cache(\n\n\n File \"/home/USER/.pyenv/versions/doctoolsllm/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py\", line 634, in _generate_with_cache\n return ChatResult(generations=cache_val)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n File \"/home/USER/.pyenv/versions/doctoolsllm/lib/python3.11/site-packages/pydantic/v1/main.py\", line 341, in __init__\n raise validation_error\n\n\npydantic.v1.error_wrappers.ValidationError: 3 validation errors for ChatResult\ngenerations -> 0 -> type\n unexpected value; permitted: 'ChatGeneration' (type=value_error.const; given=Generation; permitted=('ChatGeneration',))\ngenerations -> 0 -> message\n field required (type=value_error.missing)\ngenerations -> 0 -> __root__\n Error while initializing ChatGeneration (type=value_error)"
Traceback (most recent call last):
File "/home/LOCAL_PATH/DocToolsLLM/test.py", line 23, in <module>
print(llm.predict("this is a test"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/USER/.pyenv/versions/doctoolsllm/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 148, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/USER/.pyenv/versions/doctoolsllm/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 885, in predict
result = self([HumanMessage(content=text)], stop=_stop, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/USER/.pyenv/versions/doctoolsllm/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 148, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/USER/.pyenv/versions/doctoolsllm/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 847, in __call__
generation = self.generate(
^^^^^^^^^^^^^^
File "/home/USER/.pyenv/versions/doctoolsllm/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 456, in generate
raise e
File "/home/USER/.pyenv/versions/doctoolsllm/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 446, in generate
self._generate_with_cache(
File "/home/USER/.pyenv/versions/doctoolsllm/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 634, in _generate_with_cache
return ChatResult(generations=cache_val)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/USER/.pyenv/versions/doctoolsllm/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 3 validation errors for ChatResult
generations -> 0 -> type
unexpected value; permitted: 'ChatGeneration' (type=value_error.const; given=Generation; permitted=('ChatGeneration',))
generations -> 0 -> message
field required (type=value_error.missing)
generations -> 0 -> __root__
Error while initializing ChatGeneration (type=value_error)
```
### Description
* I want to use caching with ChatLiteLLM.
* It started happening when I upgraded from langchain v0.1; I can confirm it happens on langchain 0.2.0 and 0.2.1.
### System Info
System Information
------------------
> OS: Linux
> OS Version: #35~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue May 7 09:00:52 UTC 2
> Python Version: 3.11.7 (main, Dec 28 2023, 19:03:16) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.67
> langchain_mistralai: 0.1.7
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
litellm==1.39.6 | REGRESSION: ChatLiteLLM: ValidationError only when using cache | https://api.github.com/repos/langchain-ai/langchain/issues/22389/comments | 3 | 2024-06-01T14:30:48Z | 2024-08-07T16:59:47Z | https://github.com/langchain-ai/langchain/issues/22389 | 2,329,174,547 | 22,389 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.graphs import Neo4jGraph
from langchain.chains import GraphCypherQAChain
from langchain_openai import ChatOpenAI
uri = "bolt://localhost:7687"
username = "xxxx"
password = "xxxxx"
graph = Neo4jGraph(url=uri, username=username, password=password)
llm = ChatOpenAI(model="gpt-4-0125-preview",temperature=0)
chain = GraphCypherQAChain.from_llm(graph=graph, llm=llm, verbose=True, validate_cypher=True)
```
### Error Message and Stack Trace (if applicable)
Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (t:Tools {name: "test.py"})-[:has MD5 hash]->(h:Hash) RETURN h.name
Traceback (most recent call last):
File "\lib\site-packages\langchain_community\graphs\neo4j_graph.py", line 391, in query
data = session.run(Query(text=query, timeout=self.timeout), params)
File "\lib\site-packages\neo4j\_sync\work\session.py", line 313, in run
self._auto_result._run(
File "\lib\site-packages\neo4j\_sync\work\result.py", line 181, in _run
self._attach()
File "\lib\site-packages\neo4j\_sync\work\result.py", line 301, in _attach
self._connection.fetch_message()
File "\lib\site-packages\neo4j\_sync\io\_common.py", line 178, in inner
func(*args, **kwargs)
File "\lib\site-packages\neo4j\_sync\io\_bolt.py", line 850, in fetch_message
res = self._process_message(tag, fields)
File "\lib\site-packages\neo4j\_sync\io\_bolt5.py", line 369, in _process_message
response.on_failure(summary_metadata or {})
File "\lib\site-packages\neo4j\_sync\io\_common.py", line 245, in on_failure
raise Neo4jError.hydrate(**metadata)
neo4j.exceptions.CypherSyntaxError: {code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input 'MD5': expected
"*"
"WHERE"
"]"
"{"
a parameter (line 1, column 50 (offset: 49))
"MATCH (t:Tools {name: "test.py"})-[:has MD5 hash]->(h:Hash) RETURN h.name"
^}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "\graph_RAG.py", line 29, in <module>
response = chain.invoke({"query": "What is the MD5 of test.py?"})
File "lib\site-packages\langchain\chains\base.py", line 166, in invoke
raise e
File "\lib\site-packages\langchain\chains\base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "\lib\site-packages\langchain_community\chains\graph_qa\cypher.py", line 274, in _call
context = self.graph.query(generated_cypher)[: self.top_k]
File "\lib\site-packages\langchain_community\graphs\neo4j_graph.py", line 397, in query
raise ValueError(f"Generated Cypher Statement is not valid\n{e}")
ValueError: Generated Cypher Statement is not valid
{code: Neo.ClientError.Statement.SyntaxError} {message: Invalid input 'MD5': expected
"*"
"WHERE"
"]"
"{"
a parameter (line 1, column 50 (offset: 49))
"MATCH (t:Tools {name: "test.py"})-[:has MD5 hash]->(h:Hash) RETURN h.name"
^}
### Description
Following the tutorial https://python.langchain.com/v0.2/docs/tutorials/graph/, the relationships between entities in my neo4j database contain spaces, and the agent cannot handle this situation correctly.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.67
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | GraphCypherQAChain cannot generate correct Cypher commands | https://api.github.com/repos/langchain-ai/langchain/issues/22385/comments | 1 | 2024-06-01T08:42:16Z | 2024-07-03T18:33:01Z | https://github.com/langchain-ai/langchain/issues/22385 | 2,329,012,951 | 22,385 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I followed the exact steps in: https://python.langchain.com/v0.2/docs/integrations/chat/huggingface/
However, it does not work. The moment I try to bind_tools with my model, the code throws an error.
```
from langchain_core.output_parsers.openai_tools import PydanticToolsParser
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
class Calculator(BaseModel):
    """Multiply two integers together."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")

llm = HuggingFaceEndpoint(
    repo_id="HuggingFaceH4/zephyr-7b-beta",
    task="text-generation",
    max_new_tokens=100,
    do_sample=False,
    seed=42,
)
chat_model = ChatHuggingFace(llm=llm)
llm_with_multiply = chat_model.bind_tools([Calculator], tool_choice="auto")
parser = PydanticToolsParser(tools=[Calculator])
tool_chain = llm_with_multiply | parser
tool_chain.invoke("How much is 3 multiplied by 12?")
```
### Error Message and Stack Trace (if applicable)
```
warnings.warn(
Traceback (most recent call last):
File "/Users/ssengupta/Desktop/LangchainTests/Langchain_Trials/llm_application_with_calc.py", line 69, in <module>
tool_chain.invoke("How much is 3 multiplied by 12?")
File "/Users/ssengupta/Desktop/LangchainTests/Langchain_Trials/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2399, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "/Users/ssengupta/Desktop/LangchainTests/Langchain_Trials/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4433, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/Users/ssengupta/Desktop/LangchainTests/Langchain_Trials/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 170, in invoke
self.generate_prompt(
File "/Users/ssengupta/Desktop/LangchainTests/Langchain_Trials/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 599, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ssengupta/Desktop/LangchainTests/Langchain_Trials/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 456, in generate
raise e
File "/Users/ssengupta/Desktop/LangchainTests/Langchain_Trials/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 446, in generate
self._generate_with_cache(
File "/Users/ssengupta/Desktop/LangchainTests/Langchain_Trials/.venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 671, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/Users/ssengupta/Desktop/LangchainTests/Langchain_Trials/.venv/lib/python3.12/site-packages/langchain_huggingface/chat_models/huggingface.py", line 212, in _generate
return self._create_chat_result(answer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ssengupta/Desktop/LangchainTests/Langchain_Trials/.venv/lib/python3.12/site-packages/langchain_huggingface/chat_models/huggingface.py", line 189, in _create_chat_result
message=_convert_TGI_message_to_LC_message(response.choices[0].message),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ssengupta/Desktop/LangchainTests/Langchain_Trials/.venv/lib/python3.12/site-packages/langchain_huggingface/chat_models/huggingface.py", line 102, in _convert_TGI_message_to_LC_message
if "arguments" in tool_calls[0]["function"]:
~~~~~~~~~~^^^
File "/Users/ssengupta/Desktop/LangchainTests/Langchain_Trials/.venv/lib/python3.12/site-packages/huggingface_hub/inference/_generated/types/base.py", line 144, in __getitem__
return super().__getitem__(__key)
^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: 0
```
### Description
I'm following the tutorial exactly, but I still get the error above. I even downgraded to 0.2.2 and it still doesn't work. From the traceback, the failure is in `_convert_TGI_message_to_LC_message` when it indexes `tool_calls[0]`, so the response's `tool_calls` may not be the list the code expects. (A diagnostic sketch is below.)
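As a diagnostic (my own sketch; parameter names per the public `huggingface_hub` chat-completion API as I understand it), calling the endpoint directly shows what shape `tool_calls` actually comes back in:

```python
from huggingface_hub import InferenceClient

client = InferenceClient("HuggingFaceH4/zephyr-7b-beta")
response = client.chat_completion(
    messages=[{"role": "user", "content": "How much is 3 multiplied by 12?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "Calculator",
                "description": "Multiply two integers together.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "a": {"type": "integer", "description": "First integer"},
                        "b": {"type": "integer", "description": "Second integer"},
                    },
                    "required": ["a", "b"],
                },
            },
        }
    ],
    tool_choice="auto",
    max_tokens=100,
)
print(response.choices[0].message.tool_calls)
```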
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:09:52 PDT 2024; root:xnu-10063.121.3~5/RELEASE_X86_64
> Python Version: 3.12.3 (v3.12.3:f6650f9ad7, Apr 9 2024, 08:18:48) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.67
> langchain_huggingface: 0.0.1
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.17
> langgraph: 0.0.60
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve
``` | Tools do not work with HuggingFace - Issue either with tutorial or library | https://api.github.com/repos/langchain-ai/langchain/issues/22379/comments | 9 | 2024-06-01T00:30:04Z | 2024-07-17T17:35:57Z | https://github.com/langchain-ai/langchain/issues/22379 | 2,328,767,834 | 22,379 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.memory import ConversationBufferWindowMemory
from langchain_core.prompts.chat import ChatPromptTemplate, MessagesPlaceholder
from langchain_community.utilities import BingSearchAPIWrapper
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser
from langchain.agents.format_scratchpad.openai_tools import (
format_to_openai_tool_messages,
)
from langchain_openai import AzureChatOpenAI
from langchain.chains import LLMMathChain
from langchain.agents import AgentExecutor
from langchain.agents import tool
memory = ConversationBufferWindowMemory(memory_key="chat_history", return_messages=True,k=12, verbose=True, output_key="output")
messages_data = [
    "What is 2+2",
    "The answer to 2+2 is 4",
]
memory.save_context({"input": messages_data[0]}, {"output": messages_data[1]})
system = '''Your name is TemperaAI.
You are a data-driven Marketing Assistant designed to help with a wide range of tasks, from answering simple questions to providing in-depth plans.
YOU MUST FOLLOW THESE INSTRUCTIONS:
1. Add a citation next to every fact with the file path within brackets. For example: [//home/docs/file.txt]. You can only skip this if your answer has no citations.
2. Always include the subject matter in the search query when calling a retriever tool to ensure relevance.
3. If the tool response is not useful, try a different query.'''
human = '''
{input}
{agent_scratchpad}
'''
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", human),
    ]
)
llm = AzureChatOpenAI(
    openai_api_version="2024-03-01-preview",
    azure_deployment="chat",
)
llm_math = LLMMathChain.from_llm(llm=llm)
@tool
def math_tool(query: str) -> str:
    '''Use it for performing calculations.'''
    chain = llm_math
    return chain.invoke(query)
def agent(llm, tools, question, memory, prompt):
    result = {}
    llm_with_tools = llm.bind_tools(tools)
    agent = (
        {
            "input": lambda x: x["input"],
            "agent_scratchpad": lambda x: format_to_openai_tool_messages(
                x["intermediate_steps"]
            ),
        }
        | prompt
        | llm_with_tools
        | OpenAIToolsAgentOutputParser()
    )
    agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=True, return_intermediate_steps=True, max_iterations=5, trim_intermediate_steps=1)
    chat_history = memory.buffer_as_messages
    result = agent_executor.invoke({"input": question, "chat_history": chat_history})
    return result
tools = [math_tool]
question = "Can you please summarize this conversation?"
response = agent(llm,tools,question,memory,prompt)
print("\nQUESTION",response['input'])
print("\n\nRESPONSE: ",response['output'])
```
### Error Message and Stack Trace (if applicable)
> Entering new AgentExecutor chain...
I'm sorry, but as an AI, I don't have the ability to summarize the current conversation. However, I can assist you with any specific questions or tasks you have. How can I help you today?
> Finished chain.
QUESTION Can you please summarize this conversation?
RESPONSE: I'm sorry, but as an AI, I don't have the ability to summarize the current conversation. However, I can assist you with any specific questions or tasks you have. How can I help you today?
### Description
I'm trying to use ConversationBufferWindowMemory from LangChain with an agent that has tools.
I create the agent and load the memory, but when I ask a question about a past message, or ask for a summary, it does not respond well.
I implemented a workaround that adds the messages to the prompt manually, and that gave me the expected response.
Expected Response: The conversation so far has been brief. The user asked a simple math question, "What is 2+2?". I provided the answer, which is 4. There hasn't been any further discussion or questions.
But using LangChain's ConversationBufferWindowMemory it responds:
Response: "I'm sorry, but as an AI, I don't have the ability to summarize the current conversation. However, I can assist you with any specific questions or tasks you have. How can I help you today?"
Memory Content:
Memory content [HumanMessage(content='What is 2+2'), AIMessage(content='The answer to 2+2 is 4'), HumanMessage(content='Can you please summarize this conversation?'), AIMessage(content="I'm sorry, but as an AI, I don't have the ability to summarize the current conversation. However, I can assist you with any specific questions or tasks you have. How can I help you today?")]
Have any of you seen this issue before? How can I make the agent use the memory? (A guess at a fix is sketched below.)
### System Info
!pip install -qU langchain \ langchain-community \ langchain-core \langchain-openai \ numexpr \ hub \ langgraph \ langchainhub \ azure-cosmos \ azure-identity
Platform: Google Colab
| ConversationBufferWindowMemory | https://api.github.com/repos/langchain-ai/langchain/issues/22376/comments | 2 | 2024-05-31T21:56:29Z | 2024-06-06T19:19:34Z | https://github.com/langchain-ai/langchain/issues/22376 | 2,328,638,365 | 22,376 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from typing import Optional, Union

from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

class Joke(BaseModel):
    """Joke to tell user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")

class ConversationalResponse(BaseModel):
    """Respond in a conversational manner. Be kind and helpful."""

    response: str = Field(description="A conversational response to the user's query")

class Response(BaseModel):
    output: Union[Joke, ConversationalResponse]

llm_modelgpt4 = ChatOpenAI(model="gpt-4o")
structured_llm = llm_modelgpt4.with_structured_output(Response)
structured_llm.invoke("tell me about the accounting")
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[117], line 28
23 output: Union[Joke, ConversationalResponse]
26 structured_llm = llm_modelgpt4.with_structured_output(Response)
---> 28 structured_llm.invoke("tell me about the accounting")
File ~/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py:2393, in RunnableSequence.invoke(self, input, config)
2391 try:
2392 for i, step in enumerate(self.steps):
-> 2393 input = step.invoke(
2394 input,
2395 # mark each step as a child run
2396 patch_config(
2397 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
2398 ),
2399 )
2400 # finish the root run
2401 except BaseException as e:
File ~/.local/lib/python3.10/site-packages/langchain_core/output_parsers/base.py:169, in BaseOutputParser.invoke(self, input, config)
165 def invoke(
166 self, input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None
167 ) -> T:
168 if isinstance(input, BaseMessage):
--> 169 return self._call_with_config(
170 lambda inner_input: self.parse_result(
171 [ChatGeneration(message=inner_input)]
172 ),
173 input,
174 config,
175 run_type="parser",
176 )
177 else:
178 return self._call_with_config(
179 lambda inner_input: self.parse_result([Generation(text=inner_input)]),
180 input,
181 config,
182 run_type="parser",
183 )
File ~/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py:1503, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
1499 context = copy_context()
1500 context.run(var_child_runnable_config.set, child_config)
1501 output = cast(
1502 Output,
-> 1503 context.run(
1504 call_func_with_variable_args, # type: ignore[arg-type]
1505 func, # type: ignore[arg-type]
1506 input, # type: ignore[arg-type]
1507 config,
1508 run_manager,
1509 **kwargs,
1510 ),
1511 )
1512 except BaseException as e:
1513 run_manager.on_chain_error(e)
File ~/.local/lib/python3.10/site-packages/langchain_core/runnables/config.py:346, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
344 if run_manager is not None and accepts_run_manager(func):
345 kwargs["run_manager"] = run_manager
--> 346 return func(input, **kwargs)
File ~/.local/lib/python3.10/site-packages/langchain_core/output_parsers/base.py:170, in BaseOutputParser.invoke.<locals>.<lambda>(inner_input)
165 def invoke(
166 self, input: Union[str, BaseMessage], config: Optional[RunnableConfig] = None
167 ) -> T:
168 if isinstance(input, BaseMessage):
169 return self._call_with_config(
--> 170 lambda inner_input: self.parse_result(
171 [ChatGeneration(message=inner_input)]
172 ),
173 input,
174 config,
175 run_type="parser",
176 )
177 else:
178 return self._call_with_config(
179 lambda inner_input: self.parse_result([Generation(text=inner_input)]),
180 input,
181 config,
182 run_type="parser",
183 )
File ~/.local/lib/python3.10/site-packages/langchain_core/output_parsers/openai_tools.py:201, in PydanticToolsParser.parse_result(self, result, partial)
199 continue
200 else:
--> 201 raise e
202 if self.first_tool_only:
203 return pydantic_objects[0] if pydantic_objects else None
File ~/.local/lib/python3.10/site-packages/langchain_core/output_parsers/openai_tools.py:196, in PydanticToolsParser.parse_result(self, result, partial)
191 if not isinstance(res["args"], dict):
192 raise ValueError(
193 f"Tool arguments must be specified as a dict, received: "
194 f"{res['args']}"
195 )
--> 196 pydantic_objects.append(name_dict[res["type"]](**res["args"]))
197 except (ValidationError, ValueError) as e:
198 if partial:
File ~/.local/lib/python3.10/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for Response
output
field required (type=value_error.missing)
### Description
Following the example in https://python.langchain.com/v0.2/docs/how_to/structured_output/#choosing-between-multiple-schemas , if we change the question to something like "tell me about the accounting" rather than "Tell me a joke about cats", this error shows up. (A diagnostic sketch is below.)
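A diagnostic sketch (hedged; `include_raw=True` is part of the documented `with_structured_output` API): inspecting the raw tool call shows whether the model populated the wrapper's `output` field at all.

```python
raw = llm_modelgpt4.with_structured_output(Response, include_raw=True).invoke(
    "tell me about the accounting"
)
print(raw["raw"].tool_calls)  # arguments the model actually produced
print(raw["parsing_error"])   # the ValidationError, if parsing failed
```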
### System Info
System Information
------------------
> OS: Linux
> OS Version: #35~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue May 7 09:00:52 UTC 2
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.1
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.63
> langchain_chroma: 0.1.1
> langchain_cli: 0.0.23
> langchain_openai: 0.1.7
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.16
> langgraph: 0.0.55
> langserve: 0.2.1
| Error when Choosing between multiple schemas | https://api.github.com/repos/langchain-ai/langchain/issues/22374/comments | 0 | 2024-05-31T21:42:14Z | 2024-05-31T21:45:03Z | https://github.com/langchain-ai/langchain/issues/22374 | 2,328,626,019 | 22,374 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Using the example code in the tutorial provided the plugin usage from URL does not work anymore:
```python
from langchain_community.agent_toolkits.load_tools import load_tools
from langchain_community.tools import AIPluginTool
from langchain_openai import ChatOpenAI
from langchain import hub
from langchain.agents import AgentExecutor, create_structured_chat_agent
tool = AIPluginTool.from_plugin_url("https://www.klarna.com/.well-known/ai-plugin.json")
llm = ChatOpenAI(temperature=0)
tools = load_tools(["requests_all"])
tools += [tool]
prompt = hub.pull("hwchase17/structured-chat-agent")
agent = create_structured_chat_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools,handle_parsing_errors=False, verbose=True,include_run_info=False)
result = agent_executor.invoke({"input":"what t shirts are available in klarna?"})
```
The same with deprecated call structure.
```python
from langchain_community.tools import AIPluginTool
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain_openai import ChatOpenAI
tool = AIPluginTool.from_plugin_url("https://www.klarna.com/.well-known/ai-plugin.json")
llm = ChatOpenAI(temperature=0)
tools = load_tools(["requests_all"])
tools += [tool]
agent_chain = initialize_agent(
tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent_chain.run("What are some t shirts available on Klarna?")
```
### Error Message and Stack Trace (if applicable)
Instead of executing the plugin at the designated URL, the action input is simply the plugin endpoint specification.
### Description
```console
>> Entering new AgentExecutor chain...
Action:
{
"action": "KlarnaProducts",
"action_input": ""
}
Usage Guide: Assistant uses the Klarna plugin to get relevant product suggestions for any shopping or product discovery purpose. Assistant will reply with the following 3 paragraphs 1) Search Results 2) Product Comparison of the Search Results 3) Followup Questions. The first paragraph contains a list of the products with their attributes listed clearly and concisely as bullet points under the product, together with a link to the product and an explanation. Links will always be returned and should be shown to the user. The second paragraph compares the results returned in a summary sentence starting with "In summary". Assistant comparisons consider only the most important features of the products that will help them fit the users request, and each product mention is brief, short and concise. In the third paragraph assistant always asks helpful follow-up questions and end with a question mark. When assistant is asking a follow-up question, it uses it's product expertise to provide information pertaining to the subject of the user's request that may guide them in their search for the right product.
OpenAPI Spec: {'openapi': '3.0.1', 'info': {'version': 'v0', 'title': 'Open AI Klarna product Api'}, 'servers': [{'url': 'https://www.klarna.com/us/shopping'}], 'tags': [{'name': 'open-ai-product-endpoint', 'description': 'Open AI Product Endpoint. Query for products.'}], 'paths': {'/public/openai/v0/products': {'get': {'tags': ['open-ai-product-endpoint'], 'summary': 'API for fetching Klarna product information', 'operationId': 'productsUsingGET', 'parameters': [{'name': 'countryCode', 'in': 'query', 'description': 'ISO 3166 country code with 2 characters based on the user location. Currently, only US, GB, DE, SE and DK are supported.', 'required': True, 'schema': {'type': 'string'}}, {'name': 'q', 'in': 'query', 'description': "A precise query that matches one very small category or product that needs to be searched for to find the products the user is looking for. If the user explicitly stated what they want, use that as a query. The query is as specific as possible to the product name or category mentioned by the user in its singular form, and don't contain any clarifiers like latest, newest, cheapest, budget, premium, expensive or similar. The query is always taken from the latest topic, if there is a new topic a new query is started. If the user speaks another language than English, translate their request into English (example: translate fia med knuff to ludo board game)!", 'required': True, 'schema': {'type': 'string'}}, {'name': 'size', 'in': 'query', 'description': 'number of products returned', 'required': False, 'schema': {'type': 'integer'}}, {'name': 'min_price', 'in': 'query', 'description': "(Optional) Minimum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for.", 'required': False, 'schema': {'type': 'integer'}}, {'name': 'max_price', 'in': 'query', 'description': "(Optional) Maximum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for.", 'required': False, 'schema': {'type': 'integer'}}], 'responses': {'200': {'description': 'Products found', 'content': {'application/json': {'schema': {'$ref': '#/components/schemas/ProductResponse'}}}}, '503': {'description': 'one or more services are unavailable'}}, 'deprecated': False}}}, 'components': {'schemas': {'Product': {'type': 'object', 'properties': {'attributes': {'type': 'array', 'items': {'type': 'string'}}, 'name': {'type': 'string'}, 'price': {'type': 'string'}, 'url': {'type': 'string'}}, 'title': 'Product'}, 'ProductResponse': {'type': 'object', 'properties': {'products': {'type': 'array', 'items': {'$ref': '#/components/schemas/Product'}}}, 'title': 'ProductResponse'}}}}Action:
{
"action": "Final Answer",
"action_input": "You can use the Klarna Shopping API to search and compare prices of various t-shirts available in online shops. Please provide specific details such as the t-shirt brand, size, color, or any other preferences to narrow down the search results."
}
>> Finished chain.
Output: 'I have retrieved a list of t-shirts from Klarna. Please find the search results and product comparison in the provided link: [Klarna T-Shirts](https://www.klarna.com/us/shopping).'
```
### System Info
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.65
> langchain_cli: 0.0.21
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.14
> langserve: 0.0.41
Tested on Mac and Linux | Plugin execution not working anymore | https://api.github.com/repos/langchain-ai/langchain/issues/22364/comments | 1 | 2024-05-31T15:34:48Z | 2024-06-01T00:12:22Z | https://github.com/langchain-ai/langchain/issues/22364 | 2,328,105,265 | 22,364
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import bs4
from langchain_community.document_loaders import WebBaseLoader
loader = WebBaseLoader(
web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
bs_kwargs=dict(
parse_only=bs4.SoupStrainer(
class_=("post-content", "post-title", "post-header")
)
),
)
blog_docs = loader.load()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
chunk_size=300,
chunk_overlap=50)
splits = text_splitter.split_documents(blog_docs)
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
vectorstore = Chroma.from_documents(documents=splits,
embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever()
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
vectorstore = Chroma.from_documents(documents=splits,
embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
docs = retriever.get_relevant_documents("What is Task Decomposition?")
print(f"number of documents - {len(docs)}")
for doc in docs:
print(f"document content - `{doc.__dict__}")
```
The printed values are:
document content - {'page_content': 'Fig. 1. Overview of a LLM-powered autonomous agent system.\nComponent One: Planning#\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\nTask decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.', 'metadata': {'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}, 'type': 'Document'}
document content - {'page_content': 'Fig. 1. Overview of a LLM-powered autonomous agent system.\nComponent One: Planning#\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\nTask decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.', 'metadata': {'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}, 'type': 'Document'}
document content - {'page_content': 'Fig. 1. Overview of a LLM-powered autonomous agent system.\nComponent One: Planning#\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\nTask Decomposition#\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.\nTree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\nTask decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.', 'metadata': {'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}, 'type': 'Document'}
document content - {'page_content': 'Resources:\n1. Internet access for searches and information gathering.\n2. Long Term memory management.\n3. GPT-3.5 powered Agents for delegation of simple tasks.\n4. File output.\n\nPerformance Evaluation:\n1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\n2. Constructively self-criticize your big-picture behavior constantly.\n3. Reflect on past decisions and strategies to refine your approach.\n4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.', 'metadata': {'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}, 'type': 'Document'}
As you can see, three of the returned documents are identical.
I checked, and `splits` contains 52 documents, but the value of
```
res = vectorstore.get()
res.keys()
len(res['documents'])
```
is 156, so it looks like each document is stored 3 times instead of once.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to use Chroma as a retriever in a toy example and expect to get different documents when `get_relevant_documents` is applied. Instead, I'm getting the same document 3 times.
### System Info
langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.3
langchain-openai==0.1.8
langchain-text-splitters==0.2.0
langchainhub==0.1.17
Linux
Python 3.10.12
I'm running on Colab | Chroma returns the same document more than once when used as a retriever | https://api.github.com/repos/langchain-ai/langchain/issues/22361/comments | 5 | 2024-05-31T13:39:32Z | 2024-07-04T01:05:24Z | https://github.com/langchain-ai/langchain/issues/22361 | 2,327,863,933 | 22,361
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain_openai import ChatOpenAI, OpenAI

llm = ChatOpenAI(temperature=0.0)
math_llm = OpenAI(temperature=0.0)


def get_input() -> str:
    print("Insert your text. Enter 'q' or press Ctrl-D (or Ctrl-Z on Windows) to end.")
    contents = []
    while True:
        try:
            line = input()
        except EOFError:
            break
        if line == "q":
            break
        contents.append(line)
    return "\n".join(contents)


# Or you can directly instantiate the tool
from langchain_community.tools import HumanInputRun

tool = HumanInputRun(input_func=get_input)
tools = [tool]

agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent_chain.run("I need help attributing a quote")
```
### Error Message and Stack Trace (if applicable)
```
> Entering new AgentExecutor chain...
I should ask a human for guidance on how to properly attribute a quote.
Action: [human]
Action Input: How should I properly attribute a quote?
Observation: [human] is not a valid tool, try one of [human].
Thought:I should try asking a different human for guidance on how to properly attribute a quote.
Action: [human]
Action Input: How should I properly attribute a quote?
Observation: [human] is not a valid tool, try one of [human].
Thought:
---------------------------------------------------------------------------
OutputParserException                     Traceback (most recent call last)
File ~/development/esg-demo/.venv/lib/python3.12/site-packages/langchain/agents/agent.py:1166, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1165     # Call the LLM to see what to do.
-> 1166     output = self.agent.plan(
   1167         intermediate_steps,
   1168         callbacks=run_manager.get_child() if run_manager else None,
   1169         **inputs,
   1170     )
   1171 except OutputParserException as e:

File ~/development/esg-demo/.venv/lib/python3.12/site-packages/langchain/agents/agent.py:731, in Agent.plan(self, intermediate_steps, callbacks, **kwargs)
    730 full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)
--> 731 return self.output_parser.parse(full_output)

File ~/development/esg-demo/.venv/lib/python3.12/site-packages/langchain/agents/mrkl/output_parser.py:76, in MRKLOutputParser.parse(self, text)
     73 elif not re.search(
     74     r"[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)", text, re.DOTALL
     75 ):
---> 76     raise OutputParserException(
     77         f"Could not parse LLM output: `{text}`",
     78         observation=MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE,
     79         llm_output=text,
     80         send_to_llm=True,
     81     )
...
   1183 text = str(e)
   1184 if isinstance(self.handle_parsing_errors, bool):

ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: `I should try looking up the proper way to attribute a quote online.
Action: Search online`
```
### Description
I am trying to instantiate the human-as-a-tool tool directly, without the `load_tools` function, so that it is easier to integrate into the rest of my app. However, strange things happen with this direct instantiation: the agent keeps emitting `Action: [human]`, which LangChain does not recognize as a valid tool. Please advise. Cheers.
### System Info
```toml
[tool.poetry]
name = "esg-demo"
version = "0.1.0"
description = ""
authors = ["Vjekoslav Drvar <[email protected]>"]
readme = "README.md"
[tool.poetry.dependencies]
python = ">=3.11,<3.13"
openpyxl = "^3.1.2"
pyyaml = "^6.0.1"
python-dotenv = "^1.0.1"
streamlit = "^1.33.0"
scikit-learn = "^1.4.2"
black = "^24.4.0"
plotly = "^5.21.0"
nbformat = "^5.10.4"
matplotlib = "^3.8.4"
tiktoken = "^0.6.0"
pymilvus = "^2.4.0"
langchain = "^0.1.17"
openai = "^1.26.0"
langchain-openai = "^0.1.6"
beautifulsoup4 = "^4.12.3"
faiss-cpu = "^1.8.0"
langchain-community = "^0.0.37"
tavily-python = "^0.3.3"
langchainhub = "^0.1.15"
langchain-chroma = "^0.1.0"
bs4 = "^0.0.2"
pypdf = "^4.2.0"
docarray = "^0.40.0"
wikipedia = "^1.4.0"
numexpr = "^2.10.0"
duckduckgo-search = "^6.1.1"
[tool.poetry.group.dev.dependencies]
ipykernel = "^6.29.4"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```
| Unexpected Behaviour with Human as a tool | https://api.github.com/repos/langchain-ai/langchain/issues/22358/comments | 1 | 2024-05-31T10:48:39Z | 2024-05-31T12:19:54Z | https://github.com/langchain-ai/langchain/issues/22358 | 2,327,536,606 | 22,358 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Creation of the agent:
```python
class Agent:
def __init__(self, tools: list[BaseTool], prompt: ChatPromptTemplate) -> None:
self.llm = ChatOpenAI(
streaming=True,
model="gpt-4o",
temperature=0.01,
)
self.history = ChatMessageHistory()
agent = create_openai_tools_agent(llm=self.llm, tools=tools, prompt=prompt)
self.agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=False)
self.agent_with_chat_history = RunnableWithMessageHistory(
self.agent_executor,
lambda session_id: self.history,
input_messages_key="input",
history_messages_key="chat_history",
).with_config({"run_name": "Agent"})
async def send(self, message: str, session_id: str):
"""
Send a message for the given conversation
Args:
message (str):
session_id (str): _description_
"""
try:
async for event in self.agent_with_chat_history.astream_events(
{"input": message},
config={"configurable": {"session_id": session_id}},
version="v1",
):
kind = event["event"]
if kind != StreamEventName.on_chat_model_stream:
logger.debug(event)
if kind == StreamEventName.on_chain_start:
# self.latency_monitorer.report_event("Chain start")
if (
event["name"] == "Agent"
): # Was assigned when creating the agent with `.with_config({"run_name": "Agent"})`
logger.debug(
f"Starting agent: {event['name']} with input: {event['data'].get('input')}"
)
elif kind == StreamEventName.on_chain_end:
# self.latency_monitorer.report_event("Chain end")
if (
event["name"] == "Agent"
): # Was assigned when creating the agent with `.with_config({"run_name": "Agent"})`
logger.debug("--")
logger.debug(
f"Done agent: {event['name']} with output: {event['data'].get('output')['output']}"
)
if kind == StreamEventName.on_chat_model_stream:
content = event["data"]["chunk"].content
if content:
logger.debug(content)
elif kind == StreamEventName.on_tool_start:
logger.debug("--")
logger.debug(
f"Starting tool: {event['name']} with inputs: {event['data'].get('input')}"
)
elif kind == "on_tool_stream":
pass
elif kind == StreamEventName.on_tool_end:
logger.debug(f"Done tool: {event['name']}")
logger.debug(f"Tool output was: {event['data'].get('output')}")
logger.debug("--")
except Exception as err:
logger.error(err)
```
Defining the prompt:
```python
system = f"""You are a friendly robot that gives informations about the weather. Always notice the person that you are talking when you are going to call a tool that they might need to wait a little bit
"""
@tool
def get_weather(location: str):
"""retrieve the weather for the given location"""
return "bad weather"
class Dialog:
    def __init__(self) -> None:  # the original snippet was missing `self`
        self.prompt = ChatPromptTemplate.from_messages(
            [
                (
                    "system",
                    system,
                ),
                ("placeholder", "{chat_history}"),
                ("human", "{input}"),
                ("placeholder", "{agent_scratchpad}"),
            ]
        )
        self.agent = Agent(
            [get_weather],
            self.prompt,
        )
        self.session_id = "foo"

    async def run(self) -> None:  # `await` cannot appear in a sync __init__
        await self.agent.send(
            "What's the weather like in Brest ?",
            self.session_id,
        )
        print(self.agent.history.json())
```
### Error Message and Stack Trace (if applicable)
```bash
2024-05-31 12:36:00.593 | DEBUG | agent:send:85 - Starting agent: Agent with input: {'input': "What's the weather like in Brest ?"}
Parent run 5d91f12e-1744-4a05-b1c1-04c7b3f6ba6f not found for run d80a4e7d-daf5-4367-b1f0-393a45c89d93. Treating as a root run.
2024-05-31 12:36:01.503 | DEBUG | agent:send:110 - Sure
2024-05-31 12:36:01.515 | DEBUG | agent:send:110 - ,
2024-05-31 12:36:01.545 | DEBUG | agent:send:110 - let
2024-05-31 12:36:01.556 | DEBUG | agent:send:110 - me
2024-05-31 12:36:01.563 | DEBUG | agent:send:110 - check
2024-05-31 12:36:01.572 | DEBUG | agent:send:110 - the
2024-05-31 12:36:01.579 | DEBUG | agent:send:110 - weather
2024-05-31 12:36:01.588 | DEBUG | agent:send:110 - in
2024-05-31 12:36:01.595 | DEBUG | agent:send:110 - Brest
2024-05-31 12:36:01.622 | DEBUG | agent:send:110 - for
2024-05-31 12:36:01.630 | DEBUG | agent:send:110 - you
2024-05-31 12:36:01.653 | DEBUG | agent:send:110 - .
2024-05-31 12:36:01.661 | DEBUG | agent:send:110 - This
2024-05-31 12:36:01.693 | DEBUG | agent:send:110 - might
2024-05-31 12:36:01.702 | DEBUG | agent:send:110 - take
2024-05-31 12:36:01.714 | DEBUG | agent:send:110 - a
2024-05-31 12:36:01.722 | DEBUG | agent:send:110 - little
2024-05-31 12:36:01.802 | DEBUG | agent:send:110 - bit
2024-05-31 12:36:01.810 | DEBUG | agent:send:110 - ,
2024-05-31 12:36:01.819 | DEBUG | agent:send:110 - so
2024-05-31 12:36:01.827 | DEBUG | agent:send:110 - please
2024-05-31 12:36:01.836 | DEBUG | agent:send:110 - bear
2024-05-31 12:36:01.846 | DEBUG | agent:send:110 - with
2024-05-31 12:36:02.101 | DEBUG | agent:send:110 - me
2024-05-31 12:36:02.111 | DEBUG | agent:send:110 - .
2024-05-31 12:36:02.303 | DEBUG | agent:send:112 - --
2024-05-31 12:36:02.303 | DEBUG | agent:send:113 - Starting tool: get_weather with inputs: {'location': 'Brest'}
2024-05-31 12:36:02.312 | DEBUG | agent:send:119 - Done tool: get_weather
2024-05-31 12:36:02.313 | DEBUG | agent:send:120 - Tool output was: bad weather
2024-05-31 12:36:02.313 | DEBUG | agent:send:121 - --
2024-05-31 12:36:03.341 | DEBUG | agent:send:110 - The
2024-05-31 12:36:03.387 | DEBUG | agent:send:110 - weather
2024-05-31 12:36:03.400 | DEBUG | agent:send:110 - in
2024-05-31 12:36:03.446 | DEBUG | agent:send:110 - Brest
2024-05-31 12:36:03.456 | DEBUG | agent:send:110 - is
2024-05-31 12:36:03.507 | DEBUG | agent:send:110 - currently
2024-05-31 12:36:03.520 | DEBUG | agent:send:110 - bad
2024-05-31 12:36:03.542 | DEBUG | agent:send:110 - .
2024-05-31 12:36:03.556 | DEBUG | agent:send:110 - If
2024-05-31 12:36:03.623 | DEBUG | agent:send:110 - you
2024-05-31 12:36:03.632 | DEBUG | agent:send:110 - need
2024-05-31 12:36:03.671 | DEBUG | agent:send:110 - more
2024-05-31 12:36:03.684 | DEBUG | agent:send:110 - specific
2024-05-31 12:36:03.698 | DEBUG | agent:send:110 - details
2024-05-31 12:36:03.731 | DEBUG | agent:send:110 - ,
2024-05-31 12:36:03.742 | DEBUG | agent:send:110 - feel
2024-05-31 12:36:03.773 | DEBUG | agent:send:110 - free
2024-05-31 12:36:03.787 | DEBUG | agent:send:110 - to
2024-05-31 12:36:03.819 | DEBUG | agent:send:110 - ask
2024-05-31 12:36:03.832 | DEBUG | agent:send:110 - !
2024-05-31 12:36:03.948 | DEBUG | agent:send:93 - --
2024-05-31 12:36:03.948 | DEBUG | agent:send:94 - Done agent: Agent with output: The weather in Brest is currently bad. If you need more specific details, feel free to ask!
{"messages": [{"content": "What's the weather like in Brest ?", "additional_kwargs": {}, "response_metadata": {}, "type": "human", "name": null, "id": null, "example": false}, {"content": "The weather in Brest is currently bad. If you need more specific details, feel free to ask!", "additional_kwargs": {}, "response_metadata": {}, "type": "ai", "name": null, "id": null, "example": false, "tool_calls": [], "invalid_tool_calls": []}]}
```
### Description
* I am trying to use LangChain to create an agent that calls tools and has memory.
* When I use the astream_events API, the messages generated by the AI before calling a tool (i.e. when `finish_reason` is `tool_calls`) are not saved into the `ChatMessageHistory`.
What is currently happening:
* As you can see in the stack trace, the message sent by the AI before calling the tool does not appear in the Agent history.
What I expect to happen:
* The AIMessageChunk generated before "finish_reason" is "tool_calls" appears in the agent message history
Please let me know if anything is unclear or if the problem lies with my implementation.
Thanks in advance,
### System Info
```bash
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Jan 11 04:09:03 UTC 2024
> Python Version: 3.12.1 (main, Mar 26 2024, 17:07:43) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.52
> langchain: 0.1.20
> langchain_community: 0.0.38
> langsmith: 0.1.63
> langchain_openai: 0.1.7
> langchain_text_splitters: 0.0.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```
| AIMessage played before invoking a tool is not registered in the Agent memory | https://api.github.com/repos/langchain-ai/langchain/issues/22357/comments | 2 | 2024-05-31T10:42:34Z | 2024-06-05T15:51:33Z | https://github.com/langchain-ai/langchain/issues/22357 | 2,327,525,082 | 22,357 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I don't have a MRE, but better to tell you what I know than nothing at all: after updating to latest 0.2.x from 0.1.x, I started having this warning. It comes from `lanchain_core/tracers/base.py:399`. I noticed that `chain_run` is of class `RunTree` rather than `Run`, it looks like `self.run_map` in that function contains a mix of `Run` and `RunTree` objects, of which only the `Run` objects have an `events` defined?
![image](https://github.com/langchain-ai/langchain/assets/8631181/f068df53-8246-40ab-80f1-17224bce0858)
### Error Message and Stack Trace (if applicable)
_No response_
### Description
See above
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Fri, 17 May 2024 11:49:30 +0000
> Python Version: 3.12.3 (main, Apr 23 2024, 09:16:07) [GCC 13.2.1 20240417]
Package Information
-------------------
> langchain_core: 0.2.3
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.65
> langchain_cli: 0.0.24
> langchain_cohere: 0.1.5
> langchain_mongodb: 0.1.5
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.17
> langgraph: 0.0.59
> langserve: 0.2.1 | [2024-05-31 11:06:20,219: WARNING] langchain_core.callbacks.manager: Error in LangChainTracer.on_chain_end callback: AttributeError("'NoneType' object has no attribute 'append'") | https://api.github.com/repos/langchain-ai/langchain/issues/22353/comments | 8 | 2024-05-31T09:41:06Z | 2024-07-18T09:54:50Z | https://github.com/langchain-ai/langchain/issues/22353 | 2,327,393,633 | 22,353 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_chroma import Chroma
from langchain_community.chat_models.tongyi import ChatTongyi
from langchain_community.embeddings import DashScopeEmbeddings
from langchain_core.messages import HumanMessage
from conf.configs import DASHSCOPE_API_KEY
from langchain_core.tools import tool, create_retriever_tool
from langchain_community.document_transformers import Html2TextTransformer
from langchain_community.document_loaders import RecursiveUrlLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter  # import was missing in the original snippet
import os
os.environ["DASHSCOPE_API_KEY"] = DASHSCOPE_API_KEY
url = "https://python.langchain.com/v0.2/docs/versions/v0_2/"
loader = RecursiveUrlLoader(url=url, max_depth=100)
docs = loader.load()
html2text = Html2TextTransformer()
docs_transformed = html2text.transform_documents(docs)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=50)
docs = text_splitter.split_documents(docs_transformed)
db = Chroma.from_documents(docs, DashScopeEmbeddings(), persist_directory=r"D:\ollama")  # raw string avoids the invalid "\o" escape
retriever = db.as_retriever()
langchain_search = create_retriever_tool(retriever, "langchain_search", "Return knowledge related to Langchain")
tools = [langchain_search]
chat = ChatTongyi(streaming=True)
from langgraph.prebuilt import chat_agent_executor
agent_executor = chat_agent_executor.create_tool_calling_executor(chat, tools)
query = "When was Langchain0.2 released?"
for s in agent_executor.stream(
{"messages": [HumanMessage(content=query)]},
):
print(s)
print("----")
```
### Error Message and Stack Trace (if applicable)
```
D:\miniconda3\envs\chat2\python.exe D:\pythonProject\chat2\langchain_agent_create.py
{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'type': 'function', 'function': {'name': 'langchain_search', 'arguments': ''}, 'id': ''}, {'type': 'function', 'function': {'name': '', 'arguments': '{"query": "'}, 'id': ''}, {'type': 'function', 'function': {'name': '', 'arguments': 'Langchain 0.2 version release'}, 'id': ''}, {'type': 'function', 'function': {'name': '', 'arguments': ' date"}'}, 'id': ''}, {'type': 'function', 'function': {'name': '', 'arguments': ''}, 'id': ''}]}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'tool_calls', 'request_id': 'c426dbd5-a597-91a0-9ec4-a55b2591fed1', 'token_usage': {'input_tokens': 189, 'output_tokens': 26, 'total_tokens': 215}}, id='run-13fd4707-8439-4431-9dad-817894f4c3e7-0', tool_calls=[{'name': 'langchain_search', 'args': {'query': 'Langchain 0.2 version release date'}, 'id': ''}])]}}
----
{'tools': {'messages': [ToolMessage(content='Skip to main content\n\nLangChain 0.2 is out! Leave feedback on the v0.2 docs here. You can view the\nv0.1 docs here.\n\nIntegrationsAPI Reference\n\nMore\n\nSkip to main content\n\nLangChain 0.2 is out! Leave feedback on the v0.2 docs here. You can view the\nv0.1 docs here.\n\nIntegrationsAPI Reference\n\nMore\n\nSkip to main content\n\nLangChain 0.2 is out! Leave feedback on the v0.2 docs here. You can view the\nv0.1 docs here.\n\nIntegrationsAPI Reference\n\nMore\n\n* LangChain v0.2\n * astream_events v2\n * Changes\n * Security\n\n * * Versions\n * v0.2\n\nOn this page\n\n# LangChain v0.2', name='langchain_search', id='28ffa364-791c-488e-9020-1960c4a5672b', tool_call_id='')]}}
----
Traceback (most recent call last):
File "D:\pythonProject\chat2\langchain_agent_create.py", line 49, in <module>
for s in agent_executor.stream(
File "D:\miniconda3\envs\chat2\Lib\site-packages\langgraph\pregel\__init__.py", line 876, in stream
_panic_or_proceed(done, inflight, step)
File "D:\miniconda3\envs\chat2\Lib\site-packages\langgraph\pregel\__init__.py", line 1422, in _panic_or_proceed
raise exc
File "D:\miniconda3\envs\chat2\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\chat2\Lib\site-packages\langgraph\pregel\retry.py", line 66, in run_with_retry
task.proc.invoke(task.input, task.config)
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_core\runnables\base.py", line 2393, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_core\runnables\base.py", line 3857, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_core\runnables\base.py", line 1503, in _call_with_config
context.run(
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_core\runnables\config.py", line 346, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_core\runnables\base.py", line 3731, in _invoke
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_core\runnables\config.py", line 346, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\chat2\Lib\site-packages\langgraph\prebuilt\chat_agent_executor.py", line 403, in call_model
response = model_runnable.invoke(messages, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_core\runnables\base.py", line 4427, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_core\language_models\chat_models.py", line 170, in invoke
self.generate_prompt(
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_core\language_models\chat_models.py", line 599, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_core\language_models\chat_models.py", line 456, in generate
raise e
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_core\language_models\chat_models.py", line 446, in generate
self._generate_with_cache(
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_core\language_models\chat_models.py", line 671, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_community\chat_models\tongyi.py", line 440, in _generate
for chunk in self._stream(
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_community\chat_models\tongyi.py", line 512, in _stream
for stream_resp, is_last_chunk in generate_with_last_element_mark(
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_community\llms\tongyi.py", line 135, in generate_with_last_element_mark
item = next(iterator)
^^^^^^^^^^^^^^
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_community\chat_models\tongyi.py", line 361, in _stream_completion_with_retry
yield check_response(delta_resp)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\chat2\Lib\site-packages\langchain_community\llms\tongyi.py", line 66, in check_response
raise HTTPError(
^^^^^^^^^^
File "D:\miniconda3\envs\chat2\Lib\site-packages\requests\exceptions.py", line 22, in __init__
if response is not None and not self.request and hasattr(response, "request"):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\miniconda3\envs\chat2\Lib\site-packages\dashscope\api_entities\dashscope_response.py", line 59, in __getattr__
return self[attr]
~~~~^^^^^^
File "D:\miniconda3\envs\chat2\Lib\site-packages\dashscope\api_entities\dashscope_response.py", line 15, in __getitem__
return super().__getitem__(key)
^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'request'
Exception ignored in: <generator object HttpRequest._handle_request at 0x0000013002FBB240>
RuntimeError: generator ignored GeneratorExit
```
### Description
I am using ChatTongyi to create an agent for RAG Q&A, but the code does not execute properly. The document I am following is: https://python.langchain.com/v0.2/docs/tutorials/qa_chat_history/#agents
### System Info
python:3.11.9
langchain:0.2.1
platform:windows11 | Creating an agent using ChatTongyi, unable to return results properly | https://api.github.com/repos/langchain-ai/langchain/issues/22351/comments | 3 | 2024-05-31T09:12:38Z | 2024-06-11T09:08:21Z | https://github.com/langchain-ai/langchain/issues/22351 | 2,327,336,164 | 22,351
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_core.prompts import ChatPromptTemplate
from langchain.llms import OpenAI
from langchain.schema.output_parser import StrOutputParser
model = OpenAI(model_name='gpt-4o')
prompt = ChatPromptTemplate.from_template('tell me a joke: {question}')
question = "a funny one"
async def get_question(query):
return 'tell me a joke'
chain = (
{
"question": get_question,
}
| prompt
| model
| StrOutputParser()
)
# Define a function to invoke the chain
async def invoke_chain(question):
result = await chain.ainvoke(question)
return result
# Example usage
# Run the async function and get the result (top-level await: run this in a notebook or other async context)
result = await invoke_chain(question)
print(result)
```
### Error Message and Stack Trace (if applicable)
```
---> 32 result = await invoke_chain(question)
33 print(result)
Cell In[13], line 25, in invoke_chain(question)
24 async def invoke_chain(question):
---> 25 result = await chain.ainvoke(question)
26 return result
File ~/dev/sparkai-chatbot/venv/lib/python3.11/site-packages/langchain_core/runnables/base.py:2436, in RunnableSequence.ainvoke(self, input, config, **kwargs)
2434 try:
2435 for i, step in enumerate(self.steps):
-> 2436 input = await step.ainvoke(
2437 input,
2438 # mark each step as a child run
2439 patch_config(
2440 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
2441 ),
2442 )
2443 # finish the root run
2444 except BaseException as e:
File ~/dev/sparkai-chatbot/venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py:299, in BaseLLM.ainvoke(self, input, config, stop, **kwargs)
290 async def ainvoke(
291 self,
292 input: LanguageModelInput,
(...)
296 **kwargs: Any,
297 ) -> str:
298 config = ensure_config(config)
--> 299 llm_result = await self.agenerate_prompt(
300 [self._convert_input(input)],
301 stop=stop,
302 callbacks=config.get("callbacks"),
303 tags=config.get("tags"),
304 metadata=config.get("metadata"),
305 run_name=config.get("run_name"),
306 run_id=config.pop("run_id", None),
307 **kwargs,
308 )
309 return llm_result.generations[0][0].text
File ~/dev/sparkai-chatbot/venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py:643, in BaseLLM.agenerate_prompt(self, prompts, stop, callbacks, **kwargs)
635 async def agenerate_prompt(
636 self,
637 prompts: List[PromptValue],
(...)
640 **kwargs: Any,
641 ) -> LLMResult:
642 prompt_strings = [p.to_string() for p in prompts]
--> 643 return await self.agenerate(
644 prompt_strings, stop=stop, callbacks=callbacks, **kwargs
645 )
File ~/dev/sparkai-chatbot/venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py:1018, in BaseLLM.agenerate(self, prompts, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
1001 run_managers = await asyncio.gather(
1002 *[
1003 callback_manager.on_llm_start(
(...)
1015 ]
1016 )
1017 run_managers = [r[0] for r in run_managers] # type: ignore[misc]
-> 1018 output = await self._agenerate_helper(
1019 prompts,
1020 stop,
1021 run_managers, # type: ignore[arg-type]
1022 bool(new_arg_supported),
1023 **kwargs, # type: ignore[arg-type]
1024 )
1025 return output
1026 if len(missing_prompts) > 0:
File ~/dev/sparkai-chatbot/venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py:882, in BaseLLM._agenerate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
875 except BaseException as e:
876 await asyncio.gather(
877 *[
878 run_manager.on_llm_error(e, response=LLMResult(generations=[]))
879 for run_manager in run_managers
880 ]
881 )
--> 882 raise e
883 flattened_outputs = output.flatten()
884 await asyncio.gather(
885 *[
886 run_manager.on_llm_end(flattened_output)
(...)
890 ]
891 )
File ~/dev/sparkai-chatbot/venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py:866, in BaseLLM._agenerate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
856 async def _agenerate_helper(
857 self,
858 prompts: List[str],
(...)
862 **kwargs: Any,
863 ) -> LLMResult:
864 try:
865 output = (
--> 866 await self._agenerate(
867 prompts,
868 stop=stop,
869 run_manager=run_managers[0] if run_managers else None,
870 **kwargs,
871 )
872 if new_arg_supported
873 else await self._agenerate(prompts, stop=stop)
874 )
875 except BaseException as e:
876 await asyncio.gather(
877 *[
878 run_manager.on_llm_error(e, response=LLMResult(generations=[]))
879 for run_manager in run_managers
880 ]
881 )
File ~/dev/sparkai-chatbot/venv/lib/python3.11/site-packages/langchain_community/llms/openai.py:1194, in OpenAIChat._agenerate(self, prompts, stop, run_manager, **kwargs)
1192 messages, params = self._get_chat_params(prompts, stop)
1193 params = {**params, **kwargs}
-> 1194 full_response = await acompletion_with_retry(
1195 self, messages=messages, run_manager=run_manager, **params
1196 )
1197 if not isinstance(full_response, dict):
1198 full_response = full_response.dict()
File ~/dev/sparkai-chatbot/venv/lib/python3.11/site-packages/langchain_community/llms/openai.py:133, in acompletion_with_retry(llm, run_manager, **kwargs)
131 """Use tenacity to retry the async completion call."""
132 if is_openai_v1():
--> 133 return await llm.async_client.create(**kwargs)
135 retry_decorator = _create_retry_decorator(llm, run_manager=run_manager)
137 @retry_decorator
138 async def _completion_with_retry(**kwargs: Any) -> Any:
139 # Use OpenAI's async api https://github.com/openai/openai-python#async-api
AttributeError: 'NoneType' object has no attribute 'create'
```
### Description
Async calls to LangChain (`chain.ainvoke`) seem to break when I update the OpenAI package to version >=1.0.
### System Info
Latest version from GitHub; any other version is affected as well. | Langchain using chain.ainvoke for async breaks with OpenAI>=1.0: AttributeError: 'NoneType' object has no attribute 'create' | https://api.github.com/repos/langchain-ai/langchain/issues/22338/comments | 3 | 2024-05-31T01:10:43Z | 2024-05-31T12:51:33Z | https://github.com/langchain-ai/langchain/issues/22338 | 2,326,789,269 | 22,338
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
As a dummy example, let's try to stop a model from using exclamation marks.
Using tiktoken, I can identify that the token for '!' is 0:
![image](https://github.com/langchain-ai/langchain/assets/46486498/c7f2fc04-b89d-476f-9f40-2f08529e961b)
```
import os  # needed for os.getenv below
from langchain_openai import ChatOpenAI
logit_bias_dict = {'0': -100}
llm = ChatOpenAI(
api_key=os.getenv("OPENAI_API_KEY"),
model="gpt-4o",
temperature=0,
model_kwargs={"logit_bias":logit_bias_dict},
)
messages = [("human", "Write me a furious message as though you are screaming")]
response = llm.invoke(messages)
response.content
```
### Error Message and Stack Trace (if applicable)
I get the following response, still with exclamation marks:
"ARE YOU KIDDING ME RIGHT NOW?! I CAN'T BELIEVE YOU WOULD DO SOMETHING SO ABSURD AND THOUGHTLESS!! THIS IS COMPLETELY UNACCEPTABLE AND I AM BEYOND FURIOUS!! HOW COULD YOU POSSIBLY THINK THIS WAS A GOOD IDEA?! YOU HAVE CROSSED THE LINE AND I AM DONE PUTTING UP WITH THIS NONSENSE!! GET YOUR ACT TOGETHER IMMEDIATELY OR THERE WILL BE SERIOUS CONSEQUENCES!!"
### Description
As you can see, it is not having the desired effect. I have also tried passing the logit bias dictionary to llm.invoke instead:
```
import os  # needed for os.getenv below
from langchain_openai import ChatOpenAI
logit_bias_dict = {'0': -100}
llm = ChatOpenAI(
api_key=os.getenv("OPENAI_API_KEY"),
model="gpt-4o",
temperature=0,
)
messages = [("human", "Write me a furious message as though you are screaming")]
response = llm.invoke(messages, **{"logit_bias": logit_bias_dict})
response.content
```
which also has no effect. Conversely, this works fine when calling the OpenAI client directly.
I have also tried using 0 instead of '0' (i.e. an integer instead of a string) - again, no difference.
### System Info
langchain 0.2.1
langchain-community 0.2.1
langchain-core 0.2.3
langchain-openai 0.1.8
langchain-text-splitters 0.2.0
MacOS
Python 3.12.3 | Logit Bias is not having the desired effect when using ChatOpenAI - it doesn't seem like it's propagating to OpenAI call properly | https://api.github.com/repos/langchain-ai/langchain/issues/22335/comments | 1 | 2024-05-30T22:12:54Z | 2024-05-30T22:46:22Z | https://github.com/langchain-ai/langchain/issues/22335 | 2,326,612,607 | 22,335 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Dockerfile:
```
# LLM Installs
RUN pip3 install \
langchain \
langchain-community \
langchain-pinecone \
langchain-openai
```
Python Imports
``` python
import langchain
from langchain_community.document_loaders import PyPDFLoader, TextLoader, Docx2txtLoader, UnstructuredHTMLLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_pinecone import PineconeVectorStore
from langchain_openai import OpenAIEmbeddings, OpenAI, ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import RetrievalQAWithSourcesChain, ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.load.dump import dumps
```
### Error Message and Stack Trace (if applicable)
```
2024-05-30 13:28:59 from langchain_openai import OpenAIEmbeddings, OpenAI, ChatOpenAI
2024-05-30 13:28:59 File "/usr/local/lib/python3.9/site-packages/langchain_openai/__init__.py", line 1, in <module>
2024-05-30 13:28:59 from langchain_openai.chat_models import (
2024-05-30 13:28:59 File "/usr/local/lib/python3.9/site-packages/langchain_openai/chat_models/__init__.py", line 1, in <module>
2024-05-30 13:28:59 from langchain_openai.chat_models.azure import AzureChatOpenAI
2024-05-30 13:28:59 File "/usr/local/lib/python3.9/site-packages/langchain_openai/chat_models/azure.py", line 9, in <module>
2024-05-30 13:28:59 from langchain_core.language_models.chat_models import LangSmithParams
2024-05-30 13:28:59 ImportError: cannot import name 'LangSmithParams' from 'langchain_core.language_models.chat_models' (/usr/local/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py)
```
### Description
I am trying to import langchain_openai with the newest version released last night (0.1.8) and it can not find the LangSmithParams module.
I move back a version with ``` langchain-openai==0.1.7 ``` and it works again. Something in this new update broke the import.
### System Info
Container is running python 3.9 on Rocky Linux 8
```
# Install dependecies
RUN dnf -y install epel-release
RUN dnf -y install \
httpd \
python39 \
unzip \
xz \
git-core \
ImageMagick \
wget
RUN pip3 install \
psycopg2-binary \
pillow \
lxml \
pycryptodomex \
six \
pytz \
jaraco.functools \
requests \
supervisor \
flask \
flask-cors \
flask-socketio \
mako \
boto3 \
botocore==1.34.33 \
gotenberg-client \
docusign-esign \
python-dotenv \
htmldocx \
python-docx \
beautifulsoup4 \
pypandoc \
pyetherpadlite \
html2text \
PyJWT \
sendgrid \
auth0-python \
authlib \
openai==0.27.7 \
pinecone-client==3.1.0 \
pinecone-datasets==0.7.0 \
tiktoken==0.4.0
# Installing LLM requirements
RUN pip3 install \
langchain \
langchain-community \
langchain-pinecone \
langchain-openai==0.1.7 \
pinecone-client \
pinecone-datasets \
unstructured \
poppler-utils \
tiktoken \
pypdf \
python-dotenv \
docx2txt
``` | langchain-openai==0.1.8 is now broken | https://api.github.com/repos/langchain-ai/langchain/issues/22333/comments | 10 | 2024-05-30T20:51:56Z | 2024-07-03T17:07:55Z | https://github.com/langchain-ai/langchain/issues/22333 | 2,326,523,092 | 22,333 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from pydantic import BaseModel, Field
from typing import List
from langchain_openai import ChatOpenAI
class Jokes(BaseModel):
"""List of jokes to tell user."""
jokes: List[str] = Field(description="List of jokes to tell the user")
structured_llm = ChatOpenAI(model="gpt-4").with_structured_output(Jokes)
structured_llm.invoke("You MUST tell me more than one joke about cats")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to get a structured output using a chain with `ChatOpenAI`. I reproduced the behavior with this very simple scenario:
```
from pydantic import BaseModel, Field
from typing import List
from langchain_openai import ChatOpenAI
class Jokes(BaseModel):
"""List of jokes to tell user."""
jokes: List[str] = Field(description="List of jokes to tell the user")
structured_llm = ChatOpenAI(model="gpt-4").with_structured_output(Jokes)
structured_llm.invoke("You MUST tell me more than one joke about cats")
```
I expected the result to be a list of jokes, but it didn't work, even for this very simple prompt. If I change the code a little bit like this:
```
class Jokes(BaseModel):
"""List of jokes to tell user."""
jokes: str = Field(description="List of jokes to tell the user, separated by a semicolon")
structured_llm = ChatOpenAI(model="gpt-4").with_structured_output(Jokes)
structured_llm.invoke("You MUST tell me more than one joke about cats, and split them with a semicolon")
```
I get the following output:
```
{'jokes': "Why don't cats play poker in the jungle? Too many cheetahs.;What do you call a cat that throws all the most expensive parties? The Great Catsby.;Why did the cat sit on the computer? To keep an eye on the mouse."}
```
Obviously, the model knows how to solve such a simple task, but it doesn't seem to be using the structure correctly when it has a list of strings as the attribute.
I tried the same behavior with more complex structured outputs, and the same happened.
### System Info
langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.3
langchain-experimental==0.0.59
langchain-openai==0.1.8
langchain-text-splitters==0.2.0
platform: linux
python version: 3.10.10 | Structured output with ChatOpenAI is not working when structure class has a list of strings | https://api.github.com/repos/langchain-ai/langchain/issues/22332/comments | 2 | 2024-05-30T20:08:42Z | 2024-05-31T12:46:07Z | https://github.com/langchain-ai/langchain/issues/22332 | 2,326,455,189 | 22,332 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# The vector store must be assigned so it can be queried below.
wvdb = Weaviate(client=wv_conn,
                index_name=index_name,
                text_key="content",
                by_text=False,
                embedding=embeddings,
                attributes=["page_id", "page_title", "caccess", "internal"])

metadata_filter = {"path": ["page_id"],
                   "operator": "ContainsAny",
                   "valueText": ["1", "2"]}

wvdb.similarity_search_with_score(query=query,
                                  where_filter=metadata_filter,
                                  k=top_k)
```
### Error Message and Stack Trace (if applicable)
```
ValueError: Error during query: [{'locations': [{'column': 6, 'line': 1}], 'message': 'explorer: get class: vector search: object vector search at index index_name: remote shard xfBTRVlYuIvE: status code: 500, error: shard index_name_xfBTRVlYuIvE: build inverted filter allow list: value type should be []string but is []interface {}\n: context deadline exceeded', 'path': ['Get', 'index_name']}]
```
### Description
I'm trying to filter data in Weaviate using the "ContainsAny" operator, but I get this error: the query will not let me filter on a list of values.
### System Info
langchain==0.0.354
langchain-community==0.0.18
langchain-core==0.1.19
weaviate-client==3.24.2
weaviate==1.22.0 | Weaviate "ContainsAny" filter returns an error | https://api.github.com/repos/langchain-ai/langchain/issues/22330/comments | 0 | 2024-05-30T19:28:53Z | 2024-05-30T19:31:23Z | https://github.com/langchain-ai/langchain/issues/22330 | 2,326,402,920 | 22,330
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_pinecone import PineconeVectorStore
# Dummy data for illustration purposes
dummy_docs = ...
dummy_embeddings = ...
dummy_index_name = ...
# Attempt to create a PineconeVectorStore retriever
vector_store_retriever = PineconeVectorStore.from_documents(dummy_docs, dummy_embeddings, index_name=dummy_index_name)
```
### Error Message and Stack Trace (if applicable)
```python
File "/home/ubuntu/app.py", line 31, in _get_vector_store_retriever
vector_store_retriever = PineconeVectorStore.from_documents(docs, self.embeddings, index_name=self.cfg.index_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/langchain_core/vectorstores.py", line 550, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/langchain_pinecone/vectorstores.py", line 441, in from_texts
pinecone.add_texts(
File "/home/ubuntu/langchain_pinecone/vectorstores.py", line 158, in add_texts
async_res = [
^
File "/home/ubuntu/langchain_pinecone/vectorstores.py", line 159, in <listcomp>
self._index.upsert(
File "/home/ubuntu/pinecone/utils/error_handling.py", line 10, in inner_func
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/pinecone/data/index.py", line 168, in upsert
return self._upsert_batch(vectors, namespace, _check_type, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/pinecone/data/index.py", line 189, in _upsert_batch
return self._vector_api.upsert(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/pinecone/core/client/api_client.py", line 772, in __call__
return self.callable(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/pinecone/core/client/api/data_plane_api.py", line 1084, in __upsert
return self.call_with_http_info(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/pinecone/core/client/api_client.py", line 834, in call_with_http_info
return self.api_client.call_api(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/pinecone/core/client/api_client.py", line 417, in call_api
return self.pool.apply_async(self.__call_api, (resource_path,
^^^^^^^^^
File "/home/ubuntu/pinecone/core/client/api_client.py", line 103, in pool
self._pool = ThreadPool(self.pool_threads)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/multiprocessing/pool.py", line 930, in __init__
Pool.__init__(self, processes, initializer, initargs)
File "/usr/local/lib/python3.11/multiprocessing/pool.py", line 196, in __init__
self._change_notifier = self._ctx.SimpleQueue()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/multiprocessing/context.py", line 113, in SimpleQueue
return SimpleQueue(ctx=self.get_context())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/multiprocessing/queues.py", line 341, in __init__
self._rlock = ctx.Lock()
^^^^^^^^^^
File "/usr/local/lib/python3.11/multiprocessing/context.py", line 68, in Lock
return Lock(ctx=self.get_context())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/multiprocessing/synchronize.py", line 169, in __init__
SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)
File "/usr/local/lib/python3.11/multiprocessing/synchronize.py", line 57, in __init__
sl = self._semlock = _multiprocessing.SemLock(
^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory
```
### Description
- **Problem**: Encountering `FileNotFoundError: [Errno 2] No such file or directory` when trying to use `PineconeVectorStore.from_documents` in an AWS Lambda environment with the base image `python:3.11-slim`.
- **Expected Behavior**: The code should initialize a `PineconeVectorStore` retriever without errors.
- **Actual Behavior**: The initialization fails with a `FileNotFoundError` pointing at multiprocessing internals in the slim Python 3.11 Docker image (a possible workaround is sketched below).
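For context, AWS Lambda does not provide `/dev/shm`, so `multiprocessing` semaphores cannot be created, and the Pinecone client's internal `ThreadPool` (used for async upserts) dies with this `FileNotFoundError`. A possible workaround sketch — upserting synchronously with the Pinecone client and only then wrapping the populated index; the API key and index name are placeholders, and `docs`/`embeddings` are assumed from the example above:

```python
from pinecone import Pinecone
from langchain_pinecone import PineconeVectorStore

pc = Pinecone(api_key="<api-key>")  # placeholder credentials
index = pc.Index("<index-name>")    # placeholder index name

# Embed and upsert synchronously, avoiding the client's ThreadPool entirely.
texts = [doc.page_content for doc in docs]
vectors = [
    {"id": str(i), "values": values, "metadata": {"text": text}}
    for i, (text, values) in enumerate(zip(texts, embeddings.embed_documents(texts)))
]
index.upsert(vectors=vectors)

# Wrap the already-populated index for retrieval.
vector_store = PineconeVectorStore(index=index, embedding=embeddings, text_key="text")
```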
### System Info
- **Python Version**: 3.11
- **Platform**: Linux (AWS Lambda with python:3.11-slim base image)
- **Installed Packages**:
```
langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.2
langchain-openai==0.1.8
langchain-pinecone==0.1.1
langchain-text-splitters==0.2.0
langdetect==1.0.9
langgraph==0.0.57
langsmith==0.1.63
pinecone-client==3.2.2
``` | FileNotFoundError in PineconeVectorStore.from_documents on AWS Lambda | https://api.github.com/repos/langchain-ai/langchain/issues/22325/comments | 6 | 2024-05-30T15:28:19Z | 2024-06-22T06:24:40Z | https://github.com/langchain-ai/langchain/issues/22325 | 2,325,958,612 | 22,325 |
[
"hwchase17",
"langchain"
] | Hi @dosu,
I am referring to the LangChain code for the self-query retriever.
But when I try the query below, it throws an error.
output1:
![image](https://github.com/langchain-ai/langchain/assets/68585511/9e2c3e51-bda9-4038-b261-1347cd8da7f8)
output2:
![image](https://github.com/langchain-ai/langchain/assets/68585511/236ac516-3cb3-4646-ae4e-a74156b31237)
Both queries have the same meaning, but the responses differ:
output1 succeeds while output2 throws an error.
What could be the reason?
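For reference, the setup I am referring to looks roughly like this — a sketch in which the LLM, vector store, and metadata fields are all illustrative:

```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever

# Illustrative metadata schema; the real fields depend on the indexed documents.
metadata_field_info = [
    AttributeInfo(name="year", description="Year the document was published", type="integer"),
    AttributeInfo(name="author", description="Author of the document", type="string"),
]

retriever = SelfQueryRetriever.from_llm(
    llm,          # assumed: an existing chat model instance
    vectorstore,  # assumed: an existing vector store
    "Brief description of the document contents",
    metadata_field_info,
    verbose=True,  # prints the structured query the LLM produces
)
```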
_Originally posted by @anusonawane in https://github.com/langchain-ai/langchain/discussions/22313#discussioncomment-9604396_ | Hi @dosu, | https://api.github.com/repos/langchain-ai/langchain/issues/22316/comments | 2 | 2024-05-30T10:48:17Z | 2024-05-31T10:55:33Z | https://github.com/langchain-ai/langchain/issues/22316 | 2,325,347,430 | 22,316 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code from AzureSearch class
```
def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> bool:
"""Delete by vector ID.
Args:
ids: List of ids to delete.
Returns:
bool: True if deletion is successful,
False otherwise.
"""
if ids:
res = self.client.delete_documents([{"id": i} for i in ids])
return len(res) > 0
else:
return False
```
does not use the `FIELDS_ID` variable defined at the top of the module, and therefore does not allow a complete override of the key field in Azure AI Search.
Simply replacing it with:
```
def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> bool:
"""Delete by vector ID.
Args:
ids: List of ids to delete.
Returns:
bool: True if deletion is successful,
False otherwise.
"""
if ids:
res = self.client.delete_documents([{FIELDS_ID: i} for i in ids])
return len(res) > 0
else:
return False
```
Will do the trick.
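With that fix in place, overriding the key field should work end to end. A minimal sketch — the endpoint, key, and index name below are placeholders — assuming `FIELDS_ID` is read from the `AZURESEARCH_FIELDS_ID` environment variable at module import time:

```python
import os

# Set the override before importing AzureSearch, since FIELDS_ID is
# resolved from this environment variable when the module is imported.
os.environ["AZURESEARCH_FIELDS_ID"] = "document_key"  # hypothetical key field

from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_openai import AzureOpenAIEmbeddings

embeddings = AzureOpenAIEmbeddings()  # assumes credentials in the environment

vector_store = AzureSearch(
    azure_search_endpoint="https://<service>.search.windows.net",  # placeholder
    azure_search_key="<admin-key>",  # placeholder
    index_name="my-index",  # placeholder
    embedding_function=embeddings.embed_query,
)

# With the patched delete(), documents are removed by the custom key field.
vector_store.delete(ids=["doc-1", "doc-2"])
```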
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm using LangChain to interact with Azure AI Search. Since I want to remove documents based on their id (the key field in AI Search), I would like LangChain to offer override capabilities for this field.
### System Info
No specific info | AzureSearch delete method does not use the variable FIELDS_ID therefore it does not override the value | https://api.github.com/repos/langchain-ai/langchain/issues/22314/comments | 0 | 2024-05-30T10:35:04Z | 2024-05-30T10:37:32Z | https://github.com/langchain-ai/langchain/issues/22314 | 2,325,322,454 | 22,314 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.utilities import SQLDatabase

# Placeholder connection details; the real values come from my environment
db_user, db_password = "user", "password"
db_host, db_port, db_name = "localhost", 3306, "db2"

db = SQLDatabase.from_uri(
    f"mysql+pymysql://{db_user}:{db_password}@{db_host}:{db_port}/{db_name}"
)
print("dialect:", db.dialect)
print("get_usable_table_names:", db.get_usable_table_names())
```
### Error Message and Stack Trace (if applicable)
```python
File "C:\Development\Anaconda3\envs\langchain\lib\site-packages\sqlalchemy\util\deprecations.py", line 281, in warned
return fn(*args, **kwargs) # type: ignore[no-any-return]
File "C:\Development\Anaconda3\envs\langchain\lib\site-packages\sqlalchemy\sql\schema.py", line 431, in __new__
return cls._new(*args, **kw)
File "C:\Development\Anaconda3\envs\langchain\lib\site-packages\sqlalchemy\sql\schema.py", line 485, in _new
with util.safe_reraise():
File "C:\Development\Anaconda3\envs\langchain\lib\site-packages\sqlalchemy\util\langhelpers.py", line 146, in __exit__
raise exc_value.with_traceback(exc_tb)
File "C:\Development\Anaconda3\envs\langchain\lib\site-packages\sqlalchemy\sql\schema.py", line 481, in _new
table.__init__(name, metadata, *args, _no_init=False, **kw)
File "C:\Development\Anaconda3\envs\langchain\lib\site-packages\sqlalchemy\sql\schema.py", line 861, in __init__
self._autoload(
File "C:\Development\Anaconda3\envs\langchain\lib\site-packages\sqlalchemy\sql\schema.py", line 893, in _autoload
conn_insp.reflect_table(
File "C:\Development\Anaconda3\envs\langchain\lib\site-packages\sqlalchemy\engine\reflection.py", line 1538, in reflect_table
raise exc.NoSuchTableError(table_name)
sqlalchemy.exc.NoSuchTableError: archivesparameter
```
### Description
1. There are two databases: DB1 and DB2. DB1 is a temporary test database and DB2 is a development database.
2. DB1, used only for testing, has just a few tables; DB2, used for development, has about 1000 tables.
3. The core issue: when DB2 is used with LangChain, the error above occurs. After creating DB1 and switching the connection to it, everything executes normally (a possible mitigation is sketched below).
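One way to sidestep reflecting all ~1000 tables — a sketch assuming the relevant table names are known (the names below are placeholders) — is to restrict reflection with `include_tables`; enabling `view_support` may also help if the missing table is actually a view, or if there is a case-sensitivity mismatch such as `archivesparameter` vs. `ArchivesParameter`:

```python
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri(
    f"mysql+pymysql://{db_user}:{db_password}@{db_host}:{db_port}/{db_name}",
    include_tables=["orders", "customers"],  # placeholder table names
    view_support=True,                       # reflect views as well as tables
    sample_rows_in_table_info=2,
)
print(db.get_usable_table_names())
```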
### System Info
**Encountered this problem on both Windows and Ubuntu.**
1. Windows 10 22H2, Python 3.10.14
> The problem was first encountered with langchain==0.1.7; after upgrading to the latest version, it still persists.
```
# python -m langchain_core.sys_info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.10.14 | packaged by Anaconda, Inc. | (main, May 6 2024, 19:44:50) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.1
> langchain: 0.2.1
> langchain_community: 0.2.1
> langsmith: 0.1.54
> langchain_experimental: 0.0.59
> langchain_openai: 0.1.6
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```
2. Ubuntu 22.04, Python 3.10.14
```
# python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #117-Ubuntu SMP Fri Apr 26 12:26:49 UTC 2024
> Python Version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | sqlalchemy.exc.NoSuchTableError: archivesparameter | https://api.github.com/repos/langchain-ai/langchain/issues/22312/comments | 3 | 2024-05-30T09:30:02Z | 2024-05-31T00:51:46Z | https://github.com/langchain-ai/langchain/issues/22312 | 2,325,195,609 | 22,312 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import boto3
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import BedrockEmbeddings
# Minimal stand-in for langchain_core.documents.Document
class Document():
    def __init__(self, content, source):
        self.page_content = content
        self.metadata = { 'source': source }
def data_ingestion():
documents = [
Document('Contents1', 'Doc1'),
Document('Contents2', 'Doc2')
]
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=10000,
chunk_overlap=1000
)
return text_splitter.split_documents(documents)
def store_vectors(embeddings, documents, directory):
vectorstore_faiss = FAISS.from_documents(
documents,
embeddings
)
vectorstore_faiss.save_local(directory)
INDEX_DIR = 'test_index'
bedrock = boto3.client(service_name='bedrock-runtime', region_name='us-west-2')
bedrock_embeddings = BedrockEmbeddings(model_id='amazon.titan-embed-text-v2:0', client=bedrock)
# Store documents
docs = data_ingestion()
store_vectors(bedrock_embeddings, docs, INDEX_DIR)
# Load documents
vector_store = FAISS.load_local(
INDEX_DIR,
bedrock_embeddings,
allow_dangerous_deserialization=True,
asynchronous=True
)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/root/aws-doc-sdk-examples/python/example_code/bedrock-runtime/models/mistral_ai/example_code.py", line 41, in <module>
vector_store = FAISS.load_local(
File "/root/aws-doc-sdk-examples/python/example_code/bedrock-runtime/rag/lib/python3.10/site-packages/langchain_community/vectorstores/faiss.py", line 1098, in load_local
return cls(embeddings, index, docstore, index_to_docstore_id, **kwargs)
TypeError: FAISS.__init__() got an unexpected keyword argument 'asynchronous'
```
### Description
In the documentation here: https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.faiss.FAISS.html#langchain_community.vectorstores.faiss.FAISS.load_local
It says that I can pass the `asynchronous` argument to use the async version. However, when I try it, I get the stack trace above: `FAISS.__init__() got an unexpected keyword argument 'asynchronous'`.
It is possible I am misunderstanding how this feature is meant to be used; in the example code, I am simply calling `load_local` with this argument.
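In the meantime, a possible workaround — a sketch assuming the kwarg is simply unsupported in this version — is to load the index synchronously and use the store's async search methods instead:

```python
import asyncio

vector_store = FAISS.load_local(
    INDEX_DIR,
    bedrock_embeddings,
    allow_dangerous_deserialization=True,
)

async def search(query: str):
    # asimilarity_search is the async counterpart of similarity_search
    return await vector_store.asimilarity_search(query, k=4)

results = asyncio.run(search("example query"))
```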
### System Info
"pip freeze | grep langchain"
langchain==0.2.1
langchain-aws==0.1.4
langchain-community==0.2.0
langchain-core==0.2.0
langchain-text-splitters==0.2.0
langchainhub==0.1.15
Platform: Ubuntu 20.04.6 LTS
Python Version: 3.10.14
| Unable to use asynchronous argument with FAISS.load_local | https://api.github.com/repos/langchain-ai/langchain/issues/22299/comments | 1 | 2024-05-29T22:43:06Z | 2024-05-30T20:59:27Z | https://github.com/langchain-ai/langchain/issues/22299 | 2,324,355,403 | 22,299 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.1/docs/integrations/chat/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
At the following documentation URL, the row for ChatOpenAI shows streaming as not supported:
**https://python.langchain.com/v0.1/docs/integrations/chat/**
while there is another documentation page showing that it is supported:
**https://python.langchain.com/v0.1/docs/modules/agents/how_to/streaming/**
I'm just wondering if my dyslexia and poor eyesight are playing tricks on me.
### Idea or request for content:
Your page says streaming is not supported while another page provides a short notebook.
I think they should be consistent. It could be that I do not understand what is on the following page:
(https://python.langchain.com/v0.1/docs/integrations/chat/)
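For what it's worth, a minimal check (assuming a valid `OPENAI_API_KEY` in the environment) suggests token streaming does work with `ChatOpenAI`:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo")

# .stream() yields AIMessageChunk objects as tokens arrive
for chunk in llm.stream("Write a haiku about streaming."):
    print(chunk.content, end="", flush=True)
```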
| DOC: Conflicting information on ChatOpenAI as it relates to streaming. One page says not supported while a notebook shows support | https://api.github.com/repos/langchain-ai/langchain/issues/22298/comments | 0 | 2024-05-29T22:20:14Z | 2024-05-29T22:22:40Z | https://github.com/langchain-ai/langchain/issues/22298 | 2324334831 | 22298