issue_owner_repo
sequencelengths 2
2
| issue_body
stringlengths 0
261k
⌀ | issue_title
stringlengths 1
925
| issue_comments_url
stringlengths 56
81
| issue_comments_count
int64 0
2.5k
| issue_created_at
stringlengths 20
20
| issue_updated_at
stringlengths 20
20
| issue_html_url
stringlengths 37
62
| issue_github_id
int64 387k
2.46B
| issue_number
int64 1
127k
|
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/extraction_examples/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I attempted the example provided in the given link. While it executed flawlessly with OpenAI, I encountered a `TypeError` when running it with Cohere.
The error message was: `TypeError: BaseCohere.chat() received an unexpected keyword argument 'method'`.
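For context, here is a minimal sketch of the kind of call that appears to trigger this; the schema and the `method` keyword are illustrative assumptions, not the exact code from the docs page:
```python
from langchain_cohere import ChatCohere
from langchain_core.pydantic_v1 import BaseModel


class Person(BaseModel):
    name: str


llm = ChatCohere(model="command-r")
# The extra keyword seems to be forwarded all the way down to
# BaseCohere.chat(), which does not accept it, hence the TypeError.
structured_llm = llm.with_structured_output(Person, method="function_calling")
structured_llm.invoke("My name is Alice.")
```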
### Idea or request for content:
I believe the documentation is out of date with the current Cohere chat integration and requires some amendments. | DOC: <Issue related to /v0.2/docs/how_to/extraction_examples/> | https://api.github.com/repos/langchain-ai/langchain/issues/23396/comments | 2 | 2024-06-25T11:29:23Z | 2024-06-26T11:11:20Z | https://github.com/langchain-ai/langchain/issues/23396 | 2,372,440,766 | 23,396 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

if __name__ == "__main__":
    # Wrong behaviour: the literal regex \s ends up in the output
    # instead of the regular space
    splitter_no_keep = RecursiveCharacterTextSplitter(
        separators=[r"\s"],
        keep_separator=False,
        is_separator_regex=True,
        chunk_size=15,
        chunk_overlap=0,
        strip_whitespace=False)
    assert splitter_no_keep.split_text("Hello world")[0] == r"Hello\sworld"

    # Expected behaviour: the regular space is kept
    splitter_keep = RecursiveCharacterTextSplitter(
        separators=[r"\s"],
        keep_separator=True,
        is_separator_regex=True,
        chunk_size=15,
        chunk_overlap=0,
        strip_whitespace=False)
    assert splitter_keep.split_text("Hello world")[0] == r"Hello world"
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to use the `langchain` library to split a text using regex separators. I expect the output strings to contain the original separators, but when the `keep_separator` flag is `False`, the merged output contains the regex pattern itself instead of the original matched separator.
Possible code pointer where the problem might be coming from: [libs/text-splitters/langchain_text_splitters/character.py#L98](https://github.com/langchain-ai/langchain/blob/master/libs/text-splitters/langchain_text_splitters/character.py#L98)
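For illustration, here is a standalone approximation of the suspected merge behaviour (a simplification, not the library's actual code):
```python
import re

# When keep_separator=False, the splitter re-joins chunks using the
# separator string it was configured with. For a regex separator that is
# the pattern text itself, not the matched text.
separator = r"\s"
splits = re.split(separator, "Hello world")  # ['Hello', 'world']
merged = separator.join(splits)
print(merged)  # Hello\sworld  (the pattern leaks into the output)
```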
### System Info
langchain==0.2.5
langchain-core==0.2.9
langchain-text-splitters==0.2.1
Platform: Apple M1 Pro
macOS: 14.5 (23F79)
python version: Python 3.12.3
| RecursiveCharacterTextSplitter uses regex value instead of original separator when merging and keep_separator is false | https://api.github.com/repos/langchain-ai/langchain/issues/23394/comments | 2 | 2024-06-25T09:39:09Z | 2024-06-25T13:26:20Z | https://github.com/langchain-ai/langchain/issues/23394 | 2,372,195,599 | 23,394 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
1. Install the required packages: `!pip install --upgrade langchain e2b langchain-community`
2. Set up the environment variables for E2B and OpenAI API keys.
3. Run the following python code:
```python
from langchain_community.tools import E2BDataAnalysisTool
import os
from langchain.agents import AgentType, initialize_agent
from langchain_openai import ChatOpenAI
os.environ["E2B_API_KEY"] = "<E2B_API_KEY>"
os.environ["OPENAI_API_KEY"] = "<OPENAI_API_KEY>"
def save_artifact(artifact):
print("New matplotlib chart generated:", artifact.name)
file = artifact.download()
basename = os.path.basename(artifact.name)
with open(f"./charts/{basename}", "wb") as f:
f.write(file)
e2b_data_analysis_tool = E2BDataAnalysisTool(
env_vars={"MY_SECRET": "secret_value"},
on_stdout=lambda stdout: print("stdout:", stdout),
on_stderr=lambda stderr: print("stderr:", stderr),
on_artifact=save_artifact,
)
```
### Error Message and Stack Trace (if applicable)
Error Message
_ImportError: cannot import name 'DataAnalysis' from 'e2b' (c:\Users\sarthak kaushik\OneDrive\Desktop\Test_Project_Python\e2b\myenv\Lib\site-packages\e2b\__init__.py)
The above exception was the direct cause of the following exception:
ImportError: Unable to import e2b, please install with `pip install e2b`_
### Description
When trying to use the _**E2BDataAnalysisTool**_ from the _**langchain_community.tools**_ module, I'm encountering an **ImportError**. The error suggests that the `DataAnalysis` class cannot be imported from the `e2b` package.
**Expected Behavior:**
The E2BDataAnalysisTool should initialize without any import errors.
**Additional Context**
I have already installed the e2b package as suggested in the error message, but the issue persists.
**Possible Solution**
It seems that there might be a discrepancy between the expected structure of the e2b package and what's actually installed. Could there be a version mismatch or a change in the package structure that hasn't been reflected in the LangChain community tools?
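A quick way to test the version-mismatch hypothesis (stdlib only; the only assumption is the class name taken from the traceback):
```python
from importlib.metadata import version

import e2b

print(version("e2b"))  # installed e2b release
# False here means the installed release no longer exposes the class that
# langchain_community's E2BDataAnalysisTool tries to import.
print(hasattr(e2b, "DataAnalysis"))
```
If `DataAnalysis` is missing, pinning an older `e2b` release may restore the import; the exact compatible version would need to be confirmed against the e2b changelog.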
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.12.3
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.82
> langchain_openai: 0.1.9
> langchain_text_splitters: 0.2.1 | E2B DataAnalysisTool() function not working correctly | https://api.github.com/repos/langchain-ai/langchain/issues/23392/comments | 3 | 2024-06-25T09:26:19Z | 2024-07-27T18:24:58Z | https://github.com/langchain-ai/langchain/issues/23392 | 2,372,167,852 | 23,392 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.messages import AIMessage
from langchain_openai import ChatOpenAI
c = ChatOpenAI()
test = AIMessage(content='Hello, this is a test AI message that contains text and tool use.', tool_calls=[{'name': 'test_tool_call', 'args': {}, 'id': 'test_tool_call_id'}])
c.get_num_tokens_from_messages(messages = [test])
```
### Error Message and Stack Trace (if applicable)
```
ValueError Traceback (most recent call last)
Cell In[4], [line 10](vscode-notebook-cell:?execution_count=4&line=10)
[6](vscode-notebook-cell:?execution_count=4&line=6) c = ChatOpenAI()
[8](vscode-notebook-cell:?execution_count=4&line=8) test = AIMessage(content='Hello, this is a test AI message that contains text and tool use.', tool_calls=[{'name': 'test_tool_call', 'args': {}, 'id': 'test_tool_call_id'}])
---> [10](vscode-notebook-cell:?execution_count=4&line=10) c.get_num_tokens_from_messages(messages = [test])
File ~/.cache/pypoetry/virtualenvs/learnwise-chat-RkYLlhmr-py3.12/lib/python3.12/site-packages/langchain_openai/chat_models/base.py:777, in BaseChatOpenAI.get_num_tokens_from_messages(self, messages)
[775](https://file+.vscode-resource.vscode-cdn.net/home/ldorigo/Dropbox/projects/learnwise/learnwise-monorepo/~/.cache/pypoetry/virtualenvs/learnwise-chat-RkYLlhmr-py3.12/lib/python3.12/site-packages/langchain_openai/chat_models/base.py:775) num_tokens += _count_image_tokens(*image_size)
[776](https://file+.vscode-resource.vscode-cdn.net/home/ldorigo/Dropbox/projects/learnwise/learnwise-monorepo/~/.cache/pypoetry/virtualenvs/learnwise-chat-RkYLlhmr-py3.12/lib/python3.12/site-packages/langchain_openai/chat_models/base.py:776) else:
--> [777](https://file+.vscode-resource.vscode-cdn.net/home/ldorigo/Dropbox/projects/learnwise/learnwise-monorepo/~/.cache/pypoetry/virtualenvs/learnwise-chat-RkYLlhmr-py3.12/lib/python3.12/site-packages/langchain_openai/chat_models/base.py:777) raise ValueError(
[778](https://file+.vscode-resource.vscode-cdn.net/home/ldorigo/Dropbox/projects/learnwise/learnwise-monorepo/~/.cache/pypoetry/virtualenvs/learnwise-chat-RkYLlhmr-py3.12/lib/python3.12/site-packages/langchain_openai/chat_models/base.py:778) f"Unrecognized content block type\n\n{val}"
[779](https://file+.vscode-resource.vscode-cdn.net/home/ldorigo/Dropbox/projects/learnwise/learnwise-monorepo/~/.cache/pypoetry/virtualenvs/learnwise-chat-RkYLlhmr-py3.12/lib/python3.12/site-packages/langchain_openai/chat_models/base.py:779) )
[780](https://file+.vscode-resource.vscode-cdn.net/home/ldorigo/Dropbox/projects/learnwise/learnwise-monorepo/~/.cache/pypoetry/virtualenvs/learnwise-chat-RkYLlhmr-py3.12/lib/python3.12/site-packages/langchain_openai/chat_models/base.py:780) else:
[781](https://file+.vscode-resource.vscode-cdn.net/home/ldorigo/Dropbox/projects/learnwise/learnwise-monorepo/~/.cache/pypoetry/virtualenvs/learnwise-chat-RkYLlhmr-py3.12/lib/python3.12/site-packages/langchain_openai/chat_models/base.py:781) # Cast str(value) in case the message value is not a string
[782](https://file+.vscode-resource.vscode-cdn.net/home/ldorigo/Dropbox/projects/learnwise/learnwise-monorepo/~/.cache/pypoetry/virtualenvs/learnwise-chat-RkYLlhmr-py3.12/lib/python3.12/site-packages/langchain_openai/chat_models/base.py:782) # This occurs with function messages
[783](https://file+.vscode-resource.vscode-cdn.net/home/ldorigo/Dropbox/projects/learnwise/learnwise-monorepo/~/.cache/pypoetry/virtualenvs/learnwise-chat-RkYLlhmr-py3.12/lib/python3.12/site-packages/langchain_openai/chat_models/base.py:783) num_tokens += len(encoding.encode(value))
ValueError: Unrecognized content block type
{'type': 'function', 'id': 'test_tool_call_id', 'function': {'name': 'test_tool_call', 'arguments': '{}'}}
```
### Description
Caused by the new `isinstance` block in https://github.com/langchain-ai/langchain/pull/23147/files
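Until the regression is fixed, a rough workaround is to flatten tool calls into plain text before counting, since only non-string content blocks hit the unrecognized-type branch (note this only approximates the true token count):
```python
from langchain_core.messages import AIMessage
from langchain_openai import ChatOpenAI

c = ChatOpenAI()
test = AIMessage(
    content="Hello, this is a test AI message that contains text and tool use.",
    tool_calls=[{"name": "test_tool_call", "args": {}, "id": "test_tool_call_id"}],
)

# Copy the message with its tool calls serialized into the text content,
# so the token counter only ever sees a plain string.
flattened = AIMessage(content=f"{test.content} {test.tool_calls}")
print(c.get_num_tokens_from_messages([flattened]))
```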
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu, 13 Jun 2024 16:25:55 +0000
> Python Version: 3.12.4 (main, Jun 7 2024, 06:33:07) [GCC 14.1.1 20240522]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.81
> langchain_cli: 0.0.24
> langchain_cohere: 0.1.8
> langchain_mongodb: 0.1.6
> langchain_openai: 0.1.9
> langchain_text_splitters: 0.2.1
> langchainhub: 0.1.20
> langgraph: 0.1.1
> langserve: 0.2.1 | [Regression] ChatOpenAI.get_num_tokens_from_messages breaks with tool calls since version 0.1.9 | https://api.github.com/repos/langchain-ai/langchain/issues/23388/comments | 1 | 2024-06-25T07:59:41Z | 2024-06-25T20:27:48Z | https://github.com/langchain-ai/langchain/issues/23388 | 2,371,973,835 | 23,388 |
[
"hwchase17",
"langchain"
] | Proposal for a new feature below by @baptiste-pasquier
### Checked
- [X] I searched existing ideas and did not find a similar one
- [X] I added a very descriptive title
- [X] I've clearly described the feature request and motivation for it
### Feature request
Add the ability to filter out documents with a similarity score less than a score_threshold in the `MultiVectorRetriever`.
### Motivation
The `VectorStoreRetriever` base class has a `"similarity_score_threshold"` option for `search_type`, which adds the ability to filter out any documents with a similarity score less than a score_threshold by calling the `.similarity_search_with_relevance_scores()` method instead of `.similarity_search()`.
This feature is not implemented in the `MultiVectorRetriever` class.
### Proposal (If applicable)
In the `_get_relevant_documents` method of `MultiVectorRetriever`
Replace:
https://github.com/langchain-ai/langchain/blob/b20c2640dac79551685b8aba095ebc6125df928c/libs/langchain/langchain/retrievers/multi_vector.py#L63-L68
With:
```python
if self.search_type == "similarity":
sub_docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
elif self.search_type == "similarity_score_threshold":
sub_docs_and_similarities = (
self.vectorstore.similarity_search_with_relevance_scores(
query, **self.search_kwargs
)
)
sub_docs = [sub_doc for sub_doc, _ in sub_docs_and_similarities]
elif self.search_type == "mmr":
sub_docs = self.vectorstore.max_marginal_relevance_search(
query, **self.search_kwargs
)
else:
raise ValueError(f"search_type of {self.search_type} not allowed.")
```
As in the `VectorStoreRetriever` base class:
https://github.com/langchain-ai/langchain/blob/b20c2640dac79551685b8aba095ebc6125df928c/libs/core/langchain_core/vectorstores.py#L673-L687
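With the proposed change in place, usage would look something like this (a sketch that assumes the option is wired through exactly as above; `vectorstore` and `docstore` come from your existing setup):
```python
from langchain.retrievers.multi_vector import MultiVectorRetriever

retriever = MultiVectorRetriever(
    vectorstore=vectorstore,  # vector store holding the sub-documents
    docstore=docstore,        # store holding the parent documents
    search_type="similarity_score_threshold",
    search_kwargs={"k": 4, "score_threshold": 0.5},
)
docs = retriever.invoke("my query")  # sub-docs under 0.5 relevance are filtered out
```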
_Originally posted by @baptiste-pasquier in https://github.com/langchain-ai/langchain/discussions/19404_ | Add "similarity_score_threshold" option for MultiVectorRetriever class | https://api.github.com/repos/langchain-ai/langchain/issues/23387/comments | 2 | 2024-06-25T07:42:59Z | 2024-06-26T16:52:57Z | https://github.com/langchain-ai/langchain/issues/23387 | 2,371,940,278 | 23,387 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def insert_data_into_vector_db(file_path, source_column=None):
"""
Main function to get data from the source, create embeddings, and insert them into the database.
"""
output = None
logger.info("Embedding Started...")
logger.info(f"Collection Name: {COLLECTION_NAME}")
split_docs, err = fetch_data_from_source(file_path, source_column)
#failure
if split_docs is None:
print('None')
else:
print('value')
for docs in split_docs:
print(docs)
PGVector.from_documents(
embedding=EMBEDDINGS_FUNCTION,
documents=split_docs,
collection_name=COLLECTION_NAME,
connection_string=CONNECTION_STRING,
use_jsonb=True
)
if err:
output = f"Embedding failed with the error - {err}"
logger.error(output)
else:
output = "Embedding Completed..."
logger.info(output)
return output
```
### Error Message and Stack Trace (if applicable)
"Traceback (most recent call last):\n File \"/var/task/app.py\", line 216, in lambda_handler\n return handle_valid_file(event, safe_filename, file_path, file_content)\n File \"/var/task/app.py\", line 133, in handle_valid_file\n body = handle_uploaded_file_success(\n File \"/var/task/app.py\", line 90, in handle_uploaded_file_success\n output = insert_data_into_vector_db(file_path, source_column)\n File \"/var/task/ai/embeddings/create.py\", line 128, in insert_data_into_vector_db\n PGVector.from_documents(\n File \"/var/task/langchain_community/vectorstores/pgvector.py\", line 1139, in from_documents\n return cls.from_texts(\n File \"/var/task/langchain_community/vectorstores/pgvector.py\", line 1009, in from_texts\n embeddings = embedding.embed_documents(list(texts))\n File \"/var/task/langchain_community/embeddings/baichuan.py\", line 111, in embed_documents\n return self._embed(texts)\n File \"/var/task/langchain_community/embeddings/baichuan.py\", line 85, in _embed\n response.raise_for_status()\n File \"/var/task/requests/models.py\", line 1024, in raise_for_status\n raise HTTPError(http_error_msg, response=self)\nrequests.exceptions.HTTPError: 400 Client Error: for url: http://api.baichuan-ai.com/v1/embeddings"
### Description
I'm encountering a `400 Client Error` when attempting to embed documents using `langchain_community.embeddings` with `PGVector`, preventing successful embedding.
In a separate script where I didn't use `PGVector`, document embedding worked properly. This suggests there may be an issue specifically with `PGVector`.
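Since the traceback shows the failure inside `embedding.embed_documents(...)` (called by `PGVector.from_texts`), with the 400 coming back from the Baichuan endpoint, a way to isolate it is to call the embedding function directly on the same texts `PGVector` would send. A diagnostic sketch, reusing `EMBEDDINGS_FUNCTION` and `split_docs` from the snippet above:
```python
# If this reproduces the 400, the problem is in the embedding request
# (e.g. batch size, empty or oversized chunks), not in PGVector itself.
texts = [doc.page_content for doc in split_docs]
print(len(texts), min(len(t) for t in texts), max(len(t) for t in texts))
EMBEDDINGS_FUNCTION.embed_documents(texts)
```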
### System Info
langchain==0.2.5
langchain-community==0.2.5
langchain-core==0.2.9
langchain-openai==0.1.9
langchain-text-splitters==0.2.1
Platform : Ubuntu
Python Version : 3.10.13 | PGVector.from_documents is not working | https://api.github.com/repos/langchain-ai/langchain/issues/23386/comments | 0 | 2024-06-25T07:16:33Z | 2024-06-29T15:24:23Z | https://github.com/langchain-ai/langchain/issues/23386 | 2,371,888,192 | 23,386 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/extraction_examples/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Getting the following error:
```
Traceback (most recent call last):
File "Z:\llm_images\extract_info.py", line 148, in <module>
response = chain.invoke(
^^^^^^^^^^^^^
File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\runnables\base.py", line 2504, in invoke
input = step.invoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\output_parsers\base.py", line 169, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\runnables\base.py", line 1598, in _call_with_config
context.run(
File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\runnables\config.py", line 380, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\output_parsers\base.py", line 170, in <lambda>
lambda inner_input: self.parse_result(
^^^^^^^^^^^^^^^^^^
File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\output_parsers\openai_tools.py", line 196, in parse_result
pydantic_objects.append(name_dict[res["type"]](**res["args"]))
~~~~~~~~~^^^^^^^^^^^^^
KeyError: 'Person'
```
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/how_to/extraction_examples/> | https://api.github.com/repos/langchain-ai/langchain/issues/23383/comments | 11 | 2024-06-25T06:07:12Z | 2024-06-25T09:22:16Z | https://github.com/langchain-ai/langchain/issues/23383 | 2,371,735,246 | 23,383 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Below is the example code to reproduce the issue:
```
def fetch_config_from_header(config: Dict[str, Any], req: Request) -> Dict[str, Any]:
""" All supported types: 'name', 'cache', 'verbose', 'callbacks', 'tags', 'metadata', 'custom_get_token_ids', 'callback_manager', 'client', 'async_client', 'model_name', 'temperature', 'model_kwargs', 'openai_api_key', 'openai_api_base', 'openai_organization', 'openai_proxy', 'request_timeout', 'max_retries', 'streaming', 'n', 'max_tokens', 'tiktoken_model_name', 'default_headers', 'default_query', 'http_client', 'http_async_client']"""
config = config.copy()
configurable = config.get("configurable", {})
if "x-model-name" in req.headers:
configurable["model_name"] = req.headers["x-model-name"]
else:
raise HTTPException(401, "No model name provided")
if "x-api-key" in req.headers:
configurable["default_headers"] = {
"Content-Type":"application/json",
"api-key": req.headers["x-api-key"]
}
else:
raise HTTPException(401, "No API key provided")
if "x-model-kwargs" in req.headers:
configurable["model_kwargs"] = json.loads(req.headers["x-model-kwargs"])
else:
raise HTTPException(401, "No model arguments provided")
configurable["openai_api_base"] = f"https://someendpoint.com/{req.headers['x-model-name']}"
config["configurable"] = configurable
return config
chat_model = ChatOpenAI(
model_name = "some_model",
model_kwargs = {},
default_headers = {},
openai_api_key = "placeholder",
openai_api_base = "placeholder").configurable_fields(
model_name = ConfigurableField(id="model_name"),
model_kwargs = ConfigurableField(id="model_kwargs"),
default_headers = ConfigurableField(id="default_headers"),
openai_api_base = ConfigurableField(id="openai_api_base"),
)
chain = prompt_template | chat_model | StrOutputParser()
add_routes(
app,
chain.with_types(input_type=InputChat),
path="/some_chain",
disabled_endpoints=["playground"],
per_req_config_modifier=fetch_config_from_header,
)
```
### Error Message and Stack Trace (if applicable)
I attached only the relevant part of the traceback
```
Traceback (most recent call last):
File "/venv/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 399, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__ await super().__call__(scope, receive, send)
File "/venv/lib/python3.12/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/venv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/venv/lib/python3.12/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/venv/lib/python3.12/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/venv/lib/python3.12/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/venv/lib/python3.12/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/venv/lib/python3.12/site-packages/starlette/routing.py", line 72, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/fastapi/routing.py", line 278, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langserve/server.py", line 530, in invoke
return await api_handler.invoke(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langserve/api_handler.py", line 835, in invoke
output = await invoke_coro
^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4585, in ainvoke
return await self.bound.ainvoke(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2541, in ainvoke
input = await step.ainvoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langchain_core/runnables/configurable.py", line 123, in ainvoke
return await runnable.ainvoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 191, in ainvoke
llm_result = await self.agenerate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 611, in agenerate_prompt
return await self.agenerate(
^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 570, in agenerate
raise exceptions[0]
File "/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 757, in _agenerate_with_cache
result = await self._agenerate(
^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 667, in _agenerate
response = await self.async_client.create(messages=message_dicts, **params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
### Description
In https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/language_models/chat_models.py, `kwargs` still contains the expected information inside `agenerate_prompt()`, as shown below.
```python
async def agenerate_prompt(
self,
prompts: List[PromptValue],
stop: Optional[List[str]] = None,
callbacks: Callbacks = None,
**kwargs: Any,
) -> LLMResult:
prompt_messages = [p.to_messages() for p in prompts]
return await self.agenerate(
prompt_messages, stop=stop, callbacks=callbacks, **kwargs
)
```
Values of `prompt_messages` and `kwargs` in `agenerate_prompt()`:
```
langchain_core.language_models.chat_model.py BaseChatModel.agenerate_prompt
prompt_messages: [[SystemMessage(content='some messages')]]
kwargs: {'tags': [], 'metadata': {'__useragent': 'python-requests/2.32.3', '__langserve_version': '0.2.2', '__langserve_endpoint': 'invoke', 'model_name': 'some_model', 'openai_api_base': 'https://someendpoint.com/some_model', 'run_name': None, 'run_id': None}
```
However, by the time the body of `agenerate()` runs (called from `agenerate_prompt()`), `kwargs` is empty, as shown below.
```python
async def agenerate(
self,
messages: List[List[BaseMessage]],
stop: Optional[List[str]] = None,
callbacks: Callbacks = None,
*,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
run_name: Optional[str] = None,
run_id: Optional[uuid.UUID] = None,
**kwargs: Any,
) -> LLMResult:
"""Asynchronously pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
1. take advantage of batched calls,
2. need more output from the model than just the top generated value,
3. are building chains that are agnostic to the underlying language model
type (e.g., pure text completion models vs chat models).
Args:
messages: List of list of messages.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks: Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns:
An LLMResult, which contains a list of candidate Generations for each input
prompt and additional model provider-specific output.
"""
params = self._get_invocation_params(stop=stop, **kwargs)
```
Values of `params` and `kwargs` in `agenerate()`:
```
langchain_core.language_models.chat_models.py BaseChatModel.agenerate
params: {'model': 'some_model', 'model_name': 'some_model', 'stream': False, 'n': 1, 'temperature': 0.7, 'user': 'some_user', '_type': 'openai-chat', 'stop': None}
kwargs: {}
```
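This is consistent with the signature shown above: `tags`, `metadata`, `run_name`, and `run_id` are declared as explicit keyword-only parameters of `agenerate()`, so they are consumed by the signature itself and never land in `**kwargs`. The per-request configurable values (`model_name`, `openai_api_base`) travel inside `metadata`, so they appear to be dropped before `params` is built, which would explain why the placeholder `openai_api_base` is still used when the request is sent.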
### System Info
```
langchain==0.2.5
langchain-community==0.2.5
langchain-core==0.2.9
langchain-experimental==0.0.60
langchain-openai==0.1.9
langchain-text-splitters==0.2.1
langgraph==0.1.1
langserve==0.2.2
langsmith==0.1.82
openai==1.35.3
platform = linux
python version = 3.12.4
``` | BaseChatModel.agenerate_prompt() not passing kwargs correctly to BaseChatModel.agenerate() | https://api.github.com/repos/langchain-ai/langchain/issues/23381/comments | 1 | 2024-06-25T04:48:29Z | 2024-06-25T09:51:30Z | https://github.com/langchain-ai/langchain/issues/23381 | 2,371,644,357 | 23,381 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_aws import ChatBedrock
from langchain_experimental.llms.anthropic_functions import AnthropicFunctions
from dotenv import load_dotenv
load_dotenv()
# Initialize the LLM with the required parameters
llm = ChatBedrock(
model_id="anthropic.claude-3-haiku-20240307-v1:0",
model_kwargs={"temperature": 0.1},
region_name="us-east-1"
)
# Initialize AnthropicFunctions with the LLM
base_model = AnthropicFunctions(llm=llm)
# Define the function parameters for the model
functions = [
{
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
},
},
"required": ["location"],
},
}
]
# Bind the functions to the model without causing keyword conflicts
model = base_model.bind(
functions=functions,
function_call={"name": "get_current_weather"}
)
# Invoke the model with the provided input
res = model.invoke("What's the weather in San Francisco?")
# Extract and print the function call from the response
function_call = res.additional_kwargs.get("function_call")
print("function_call", function_call)
### Error Message and Stack Trace (if applicable)
TypeError: langchain_core.language_models.chat_models.BaseChatModel.generate_prompt() got multiple values for keyword argument 'callbacks'
### Description
I am trying to use function calling with Anthropic models through Bedrock. Any help fixing this would be appreciated.
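If updating does not help, a possible alternative is to skip the experimental `AnthropicFunctions` wrapper and use `ChatBedrock`'s own `bind_tools`, which recent `langchain-aws` releases provide for Claude 3 models (whether your installed version supports it is an assumption to verify):
```python
from langchain_aws import ChatBedrock

llm = ChatBedrock(
    model_id="anthropic.claude-3-haiku-20240307-v1:0",
    model_kwargs={"temperature": 0.1},
    region_name="us-east-1",
)

# Reuse the same `functions` schema from above as the tool definition.
llm_with_tools = llm.bind_tools(functions)
res = llm_with_tools.invoke("What's the weather in San Francisco?")
print(res.tool_calls)
```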
### System Info
I use the latest released versions. | TypeError: langchain_core.language_models.chat_models.BaseChatModel.generate_prompt() got multiple values for keyword argument 'callbacks' | https://api.github.com/repos/langchain-ai/langchain/issues/23379/comments | 2 | 2024-06-25T04:08:08Z | 2024-07-08T17:14:54Z | https://github.com/langchain-ai/langchain/issues/23379 | 2,371,597,422 | 23,379 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/sql_qa/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
```python
from langchain_community.tools.sql_database.tool import QuerySQLDataBaseTool
execute_query = QuerySQLDataBaseTool(db=db)
write_query = create_sql_query_chain(llm, db)
chain = write_query | execute_query
chain.invoke({"question": "How many employees are there"})
```
The result generated by `write_query` is not pure SQL (the model wraps the query in extra text), so `execute_query` will report an error.
My workaround is to add an extraction step between the two:
```python
import re

def extract_sql(txt: str) -> str:
    code_block_pattern = r'```sql(.*?)```'
    code_blocks = re.findall(code_block_pattern, txt, re.DOTALL)
    if code_blocks:
        return code_blocks[0]
    else:
        return ""

chain = write_query | extract_sql | execute_query
```
(Plain functions in an LCEL pipe are automatically coerced to `RunnableLambda`, so `extract_sql` can sit directly in the chain.)
The prompt is also updated to instruct the model to return its query in this format:
```sql
write sql here
```
### Idea or request for content:
_No response_ | DOC:Some questions about Execute SQL query | https://api.github.com/repos/langchain-ai/langchain/issues/23378/comments | 1 | 2024-06-25T02:35:56Z | 2024-07-08T02:06:44Z | https://github.com/langchain-ai/langchain/issues/23378 | 2,371,502,868 | 23,378 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI
from langchain_community.graphs import RdfGraph
from langchain.chains import GraphSparqlQAChain
end_point = "https://brickschema.org/schema/1.0.3/Brick.ttl"
graph = RdfGraph(query_endpoint=end_point, standard="ttl")
```
### Error Message and Stack Trace (if applicable)
```
ValueError: Invalid standard. Supported standards are: ('rdf', 'rdfs', 'owl').
```
### Description
There isn't any support for `RdfGraph` to read the `.ttl` serialization format.
`.ttl` stands for Terse RDF Triple Language (also known as Turtle), another way to serialize RDF that is often preferred for its human-readable syntax.
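That said, looking at the current `RdfGraph` signature, `standard` appears to select the ontology vocabulary (`rdf`/`rdfs`/`owl`) rather than the file format; there is a separate `serialization` parameter, and a remote `.ttl` file would be loaded via `source_file` rather than `query_endpoint`. A hedged sketch (parameter names taken from the signature, behaviour to be confirmed):
```python
from langchain_community.graphs import RdfGraph

# source_file accepts a local path or URL; serialization tells rdflib how
# to parse it; standard stays one of rdf/rdfs/owl.
graph = RdfGraph(
    source_file="https://brickschema.org/schema/1.0.3/Brick.ttl",
    serialization="ttl",
    standard="rdf",
)
```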
### System Info
```
pip install langchain==0.2.5
pip install langchain-openai==0.1.9
pip install rdflib==7.0.0
``` | When will langchain_community.graphs.RdfGraph support reading .ttl serialization format ? | https://api.github.com/repos/langchain-ai/langchain/issues/23372/comments | 1 | 2024-06-24T21:09:44Z | 2024-07-17T20:15:40Z | https://github.com/langchain-ai/langchain/issues/23372 | 2,371,098,095 | 23,372 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/tools/google_search/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The current documentation for `langchain_community.utilities.google_search.GoogleSearchAPIWrapper` appears to be outdated. The class `GoogleSearchAPIWrapper` has been marked as deprecated since version 0.0.33, yet the page still provides detailed setup instructions, which may confuse users.
### Idea or request for content:
- Clearly mark the class as deprecated at the beginning of the documentation.
- Remove or simplify the setup instructions to reflect the deprecated status.
- Provide alternative recommendations or direct users to updated tools and methods if available. | Issue related to /v0.2/docs/integrations/tools/google_search/ | https://api.github.com/repos/langchain-ai/langchain/issues/23371/comments | 0 | 2024-06-24T21:08:11Z | 2024-06-24T22:23:01Z | https://github.com/langchain-ai/langchain/issues/23371 | 2,371,095,092 | 23,371 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.callbacks.base import BaseCallbackHandler


class CustomCallbackHandler(BaseCallbackHandler):
    def on_agent_finish(self, finish, **kwargs):
        if "Agent stopped due to iteration limit or time limit" in finish.return_values.get("output", ""):
            finish.return_values["output"] = (
                "I'm having difficulty finding an answer. Please rephrase your question."
            )


class ChatBot:
    def __init__(self, llm, tasks):
        self.llm = llm
        self.tasks = tasks
        self.agent_executor = self._init_agent()

    def _init_agent(self):
        tools = get_tools()  # helper that returns the tool list (defined elsewhere)
        prompt = get_prompt()  # agent prompt, defined elsewhere (create_tool_calling_agent requires one)
        agent = create_tool_calling_agent(llm=self.llm, tools=tools, prompt=prompt)
        return AgentExecutor(agent=agent, tools=tools, callbacks=[CustomCallbackHandler()])

    def send_message(self, message):
        if not message.strip():
            return "You didn't ask a question. How can I assist you further?"
        response = self.agent_executor.invoke({"input": message})["output"]
        if "tool was called" in response:
            # The response claims a tool was used, even though no tool
            # callbacks fired. This is the inconsistency being reported.
            print("Tool call indicated but not verified")
        return response


# Usage
llm = "mock_llm_instance"  # replace with an actual chat model instance
tasks = ["task1", "task2"]
chatbot = ChatBot(llm, tasks)
response = chatbot.send_message("Trigger tool call")
print(response)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The chatbot sometimes responds with a message indicating that a tool was called, even though the tool was not actually executed. This inconsistency suggests an issue within the Langchain library, particularly in the agent's tool-calling mechanism.
**Steps to Reproduce:**
1. Initiate a session with the chatbot.
2. Send a message that should trigger a tool call.
3. Observe the response: it sometimes indicates the tool was called, but in reality, it wasn't.
**Expected Behavior:**
1. When the chatbot indicates that a tool was called, the tool should actually be executed and return a result.
**Actual Behavior:**
1. The chatbot occasionally indicates that a tool was called without actually executing it.
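One way to make that verification concrete is to record actual tool executions with a callback and cross-check them against the response text. A sketch (wired into the executor the same way as the handler in the example above):
```python
from langchain.callbacks.base import BaseCallbackHandler


class ToolAuditHandler(BaseCallbackHandler):
    """Records every real tool invocation so the caller can verify the
    model's claim that a tool was used."""

    def __init__(self):
        self.tool_runs = []

    def on_tool_start(self, serialized, input_str, **kwargs):
        # Fires only when the executor actually runs a tool.
        self.tool_runs.append(serialized.get("name"))
```
If the response claims a tool was called but `tool_runs` stays empty, the call was hallucinated.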
### System Info
langchain==0.2.3
langchain-community==0.0.33
langchain-core==0.2.4
langchain-openai==0.1.8
langchain-text-splitters==0.2.0
langchainhub==0.1.15
| Langchain hallucinates and responds without actually calling the tool | https://api.github.com/repos/langchain-ai/langchain/issues/23365/comments | 2 | 2024-06-24T18:54:19Z | 2024-06-24T19:03:03Z | https://github.com/langchain-ai/langchain/issues/23365 | 2,370,889,775 | 23,365 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [ ] I used the GitHub search to find a similar question and didn't find it.
- [ ] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import json
import gradio as gr
import typing_extensions
import os
import boto3
from langchain.prompts.prompt import PromptTemplate
from langchain_community.graphs import Neo4jGraph
from langchain.chains import GraphCypherQAChain
from langchain.memory import ConversationBufferMemory
from langchain.globals import set_debug
from langchain_community.chat_models import BedrockChat
set_debug(True)
LANGCHAIN_TRACING_V2="true" # false
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGCHAIN_PROJECT=os.getenv("LANGCHAIN_PROJECT")
LANGCHAIN_API_KEY=os.getenv("LANGCHAIN_API_KEY")
# pass
REGION_NAME = os.getenv("AWS_REGION")
SERVICE_NAME = os.getenv("AWS_SERVICE_NAME")
AWS_ACCESS_KEY_ID = os.getenv("AWS_ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY")
NEO4J_URI = os.getenv("NEO4J_URI")
NEO4J_USERNAME = os.getenv("NEO4J_USERNAME")
NEO4J_PASSWORD = os.getenv("NEO4J_PASSWORD")
bedrock = boto3.client(
service_name=SERVICE_NAME,
region_name=REGION_NAME,
endpoint_url=f'https://{SERVICE_NAME}.{REGION_NAME}.amazonaws.com',
aws_access_key_id = AWS_ACCESS_KEY_ID,
aws_secret_access_key = AWS_SECRET_ACCESS_KEY
)
CYPHER_GENERATION_TEMPLATE = """You are an expert Neo4j Cypher translator who understands the question in english and convert to Cypher strictly based on the Neo4j Schema provided and following the instructions below:
<instructions>
* Generate Cypher query compatible ONLY for Neo4j Version 5
* Do not use EXISTS, SIZE keywords in the cypher. Use alias when using the WITH keyword
* Please do not use same variable names for different nodes and relationships in the query.
* Use only Nodes and relationships mentioned in the schema
* Always enclose the Cypher output inside 3 backticks (```)
* Always do a case-insensitive and fuzzy search for any properties related search. Eg: to search for a Person name use `toLower(c.name) contains 'neo4j'`
* Cypher is NOT SQL. So, do not mix and match the syntaxes.
* Every Cypher query always starts with a MATCH keyword.
* Always do fuzzy search for any properties related search. Eg: when the user asks for "karn" instead of "karna", make sure to search for a Person name using `toLower(c.name) contains 'karn'`
* Always understand the gender of the Person node and map relationship accordingly. Eg: when asked Who is Karna married to, search for HUSBAND_OF relationship coming out of Karna instead of WIFE_OF relationship.
</instructions>
Schema:
<schema>
{schema}
</schema>
The samples below follow the instructions and the schema mentioned above. So, please follow the same when you generate the cypher:
<samples>
Human: Who is the husband of Kunti?
Assistant: ```MATCH (p:Person)-[:WIFE_OF]->(husband:Person) WHERE toLower(p.name) contains "kunti" RETURN husband.name```
Human: Who are the parents of Karna?
Assistant: ```MATCH (p1:Person)<-[:FATHER_OF]-(father:Person) OPTIONAL MATCH (p2:Person)<-[:MOTHER_OF]-(mother:Person) WHERE toLower(p1.name) contains "karna" OR toLower(p2.name) contains "karna" RETURN coalesce(father.name, mother.name) AS parent_name```
Human: Who is Kunti married to?
Assistant: ```MATCH (p:Person)-[:WIFE_OF]->(husband:Person) WHERE toLower(p.name) contains "kunti" RETURN husband.name```
Human: Who killed Ghatotakach?
Assistant: ```MATCH (killer:Person)-[:KILLED]->(p:Person) WHERE toLower(p.name) contains "ghatotakach" RETURN killer.name```
Human: Who are the siblings of Karna?
Assistant: ```MATCH (p1:Person)<-[:FATHER_OF]-(father)-[:FATHER_OF]->(sibling) WHERE sibling <> p1 and toLower(p1.name) contains "karna" RETURN sibling.name AS SiblingName UNION MATCH (p2:Person)<-[:MOTHER_OF]-(mother)-[:MOTHER_OF]->(sibling) WHERE sibling <> p2 and toLower(p2.name) contains "karna" RETURN sibling.name AS SiblingName```
Human: Tell me the names of top 5 characters in Mahabharata.
Assistant: ```MATCH (p:Person) WITH p, COUNT(*) AS rel_count RETURN p, COUNT(*) AS rel_count ORDER BY rel_count DESC LIMIT 5```
</samples>
Human: {question}
Assistant:
"""
CYPHER_GENERATION_PROMPT = PromptTemplate(
input_variables=["schema","question"], validate_template=True, template=CYPHER_GENERATION_TEMPLATE
)
graph = Neo4jGraph(
url=NEO4J_URI,
username=NEO4J_USERNAME,
password=NEO4J_PASSWORD,
)
llm = BedrockChat(
model_id="anthropic.claude-v2",
client=bedrock,
model_kwargs = {
"temperature":0,
"top_k":1, "top_p":0.1,
"anthropic_version":"bedrock-2023-05-31",
"max_tokens_to_sample": 2048
}
)
chain = GraphCypherQAChain.from_llm(
llm,
graph=graph,
cypher_prompt=CYPHER_GENERATION_PROMPT,
verbose=True,
return_direct=True
)
def chat(que):
r = chain.invoke(que)
print(r)
summary_prompt_tpl = f"""Human:
Fact: {json.dumps(r['result'])}
* Summarise the above fact as if you are answering this question "{r['query']}"
* When the fact is not empty, assume the question is valid and the answer is true
* Do not return helpful or extra text or apologies
* Just return summary to the user. DO NOT start with Here is a summary
* List the results in rich text format if there are more than one results
Assistant:
"""
return llm.invoke(summary_prompt_tpl).content
memory = ConversationBufferMemory(memory_key = "chat_history", return_messages = True)
def chat_response(input_text,history):
try:
return chat(input_text)
except:
# a bit of protection against exposed error messages
# we could log these situations in the backend to revisit later in development
return "I'm sorry, there was an error retrieving the information you requested."
# Define your custom CSS
custom_css = """
/* Custom CSS for the chat interface */
.gradio-container {
# background: #f0f0f0; /* Change background color */
border: 0;
border-radius: 15px; /* Add border radius */
}
.primary.svelte-cmf5ev{
background: linear-gradient(90deg, #9848FC 0%, #DC8855 100%);
# background-clip: text;
# -webkit-background-clip: text;
# -webkit-text-fill-color: transparent;
}
.v-application .secondary{
background-color: #EEEEEE !important
}
# /* Custom CSS for the chat input */
# .gradio-chat-input input[type="text"] {
# background-color: #ffffff; /* Change input background color */
# border-radius: 5px; /* Add border radius */
# border: 1px solid #cccccc; /* Change border color */
# }
# /* Custom CSS for the chat button */
# .gradio-chat-input button {
# # background-color: #ff0000; /* Change button background color */
# # border-radius: 5px; /* Add border radius */
# # color: #ffffff; /* Change text color */
# background: linear-gradient(90deg, #9848FC 0%, #DC8855 100%);
# background-clip: text;
# -webkit-background-clip: text;
# -webkit-text-fill-color: transparent;
# }
"""
interface = gr.ChatInterface(fn = chat_response,
theme = "soft",
chatbot = gr.Chatbot(height=430),
undo_btn = None,
clear_btn = "\U0001F5D1 Clear Chat",
css=custom_css,
examples = ["Who killed Ghatotakach?",
"Who are the parents of Karna?",
"Who are the kids of Kunti?",
"Who are the siblings of Karna?",
"Tell me the names of top 5 characters in Mahabharata.",
"Why did the Mahabharata war happen?",
"Who killed Karna, and why?",
"Why did the Pandavas have to go live in the forest for 12 years?",
"How did the Pandavas receive knowledge from sages and saintly persons during their time in the forest?",
#"What were the specific austerities that Arjuna had to perform in the Himalayan mountains to please Lord Shiva?",
#"How did Lord Krishna's presence in the forest affect the Pandavas' experience during their exile?",
"What were the specific challenges and difficulties that Yudhisthira and his brothers faced in their daily lives as inhabitants of the forest?",
#"How did Bhima cope with the challenges of living as an ascetic in the forest? Did he face any particular difficulties or struggles during their time in exile?"
])
# Launch the interface
interface.launch(share=True)
```
### Error Message and Stack Trace (if applicable)
"ValueError('Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #: subject must not be valid against schema {\"required\":[\"messages\"]}#: extraneous key [max_tokens_to_sample] is not permitted, please reformat your input and try again.')Traceback (most recent call last):\n\n\n File \"/usr/local/lib/python3.10/site-packages/langchain_community/llms/bedrock.py\", line 545, in _prepare_input_and_invoke\n response = self.client.invoke_model(**request_options)\n\n\n File \"/usr/local/lib/python3.10/site-packages/botocore/client.py\", line 565, in _api_call\n return self._make_api_call(operation_name, kwargs)\n\n\n File \"/usr/local/lib/python3.10/site-packages/botocore/client.py\", line 1021, in _make_api_call\n raise error_class(parsed_response, operation_name)\n\n\nbotocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #: subject must not be valid against schema {\"required\":[\"messages\"]}#: extraneous key [max_tokens_to_sample] is not permitted, please reformat your input and try again.\n\n\n\nDuring handling of the above exception, another exception occurred:\n\n\n\nTraceback (most recent call last):\n\n\n File \"/usr/local/lib/python3.10/site-packages/langchain/chains/base.py\", line 156, in invoke\n self._call(inputs, run_manager=run_manager)\n\n\n File \"/usr/local/lib/python3.10/site-packages/langchain_community/chains/graph_qa/cypher.py\", line 316, in _call\n generated_cypher = self.cypher_generation_chain.run(\n\n\n File \"/usr/local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py\", line 168, in warning_emitting_wrapper\n return wrapped(*args, **kwargs)\n\n\n File \"/usr/local/lib/python3.10/site-packages/langchain/chains/base.py\", line 600, in run\n return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[\n\n\n File \"/usr/local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py\", line 168, in warning_emitting_wrapper\n return wrapped(*args, **kwargs)\n\n\n File \"/usr/local/lib/python3.10/site-packages/langchain/chains/base.py\", line 383, in __call__\n return self.invoke(\n\n\n File \"/usr/local/lib/python3.10/site-packages/langchain/chains/base.py\", line 166, in invoke\n raise e\n\n\n File \"/usr/local/lib/python3.10/site-packages/langchain/chains/base.py\", line 156, in invoke\n self._call(inputs, run_manager=run_manager)\n\n\n File \"/usr/local/lib/python3.10/site-packages/langchain/chains/llm.py\", line 126, in _call\n response = self.generate([inputs], run_manager=run_manager)\n\n\n File \"/usr/local/lib/python3.10/site-packages/langchain/chains/llm.py\", line 138, in generate\n return self.llm.generate_prompt(\n\n\n File \"/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py\", line 599, in generate_prompt\n return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)\n\n\n File \"/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py\", line 456, in generate\n raise e\n\n\n File \"/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py\", line 446, in generate\n self._generate_with_cache(\n\n\n File \"/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py\", line 671, in _generate_with_cache\n result = self._generate(\n\n\n File 
\"/usr/local/lib/python3.10/site-packages/langchain_community/chat_models/bedrock.py\", line 300, in _generate\n completion, usage_info = self._prepare_input_and_invoke(\n\n\n File \"/usr/local/lib/python3.10/site-packages/langchain_community/llms/bedrock.py\", line 552, in _prepare_input_and_invoke\n raise ValueError(f\"Error raised by bedrock service: {e}\")\n\n\nValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #: subject must not be valid against schema {\"required\":[\"messages\"]}#: extraneous key [max_tokens_to_sample] is not permitted, please reformat your input and try again."
### Description
I am trying to integrate Bedrock Runtime with Langchain but it is continuously failing with `ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #: subject must not be valid against schema {\"required\":[\"messages\"]}#: extraneous key [max_tokens_to_sample] is not permitted, please reformat your input and try again.`
This seems like the issue mentioned [here](https://repost.aws/questions/QUcTMammSKSL-mTcrz-JW4OA/bedrock-error-when-calling) and is somewhat related to the issue mentioned [here](https://github.com/langchain-ai/langchain/issues/11130). However, I am unable to find any code example of implementing Bedrock Runtime with LangChain, or of how to update the prompt and LLM accordingly. Any help in this regard would be much appreciated.
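A hedged reading of the validation error: the request body is being validated against Bedrock's Messages API schema (`required: ["messages"]`), which accepts `max_tokens` rather than the legacy completion-style `max_tokens_to_sample`. If the intent is to run a Claude 3 model, something like the following may avoid the error (the parameter set follows the Anthropic Messages API on Bedrock and should be verified against the Bedrock model parameter docs):
```python
llm = BedrockChat(
    model_id="anthropic.claude-3-haiku-20240307-v1:0",  # any Messages-API model
    client=bedrock,
    model_kwargs={
        "temperature": 0,
        "top_k": 1,
        "top_p": 0.1,
        "max_tokens": 2048,  # Messages API name; replaces max_tokens_to_sample
    },
)
```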
### System Info
neo4j-driver
gradio==4.29.0
langchain==0.2.5
awscli
langchain-community
langchain-aws
botocore
boto3 | ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #: subject must not be valid against schema {\"required\":[\"messages\"]}#: extraneous key [max_tokens_to_sample] is not permitted, please reformat your input and try again. | https://api.github.com/repos/langchain-ai/langchain/issues/23352/comments | 2 | 2024-06-24T13:20:30Z | 2024-06-25T03:43:12Z | https://github.com/langchain-ai/langchain/issues/23352 | 2,370,242,732 | 23,352 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/chat/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/integrations/chat/> | https://api.github.com/repos/langchain-ai/langchain/issues/23346/comments | 0 | 2024-06-24T08:43:45Z | 2024-06-24T08:46:12Z | https://github.com/langchain-ai/langchain/issues/23346 | 2,369,618,335 | 23,346 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/tools/wikidata/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The current documentation does not cover the positional arguments required by `WikidataAPIWrapper`: it needs the arguments `wikidata_mw` and `wikidata_rest`.
Link to faulty documentation page: https://python.langchain.com/v0.2/docs/integrations/tools/wikidata/
### Idea or request for content:
Please update the documentation with the required arguments, and explain where to obtain them. If needed, I can work on it as well. Thanks! | DOC: Missing positional arguments in Wikidata Langchain documentation. Path: /v0.2/docs/integrations/tools/wikidata/ | https://api.github.com/repos/langchain-ai/langchain/issues/23344/comments | 1 | 2024-06-24T07:59:20Z | 2024-06-27T06:04:49Z | https://github.com/langchain-ai/langchain/issues/23344 | 2,369,519,742 | 23,344 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/providers/replicate/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
This page doesn't include import statements, e.g. `from langchain_community.llms import Replicate`.
### Idea or request for content:
Add that import line at the top of the page. | The documentation for the Integration for Replicate is missing import statements | https://api.github.com/repos/langchain-ai/langchain/issues/23342/comments | 0 | 2024-06-24T07:28:40Z | 2024-06-24T07:31:10Z | https://github.com/langchain-ai/langchain/issues/23342 | 2,369,458,309 | 23,342 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_experimental.text_splitter import SemanticChunker
from langchain_huggingface import HuggingFaceEmbeddings

text_splitter = SemanticChunker(HuggingFaceEmbeddings(), breakpoint_threshold_type="gradient")
```
### Error Message and Stack Trace (if applicable)
```
File "/Users/guertethiaf/Documents/jamstack/muniai/anabondsbackend/main.py", line 33, in chunk_text_semantically
text_splitter = SemanticChunker(HuggingFaceEmbeddings(), breakpoint_threshold_type="gradient")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_experimental/text_splitter.py", line 124, in __init__
self.breakpoint_threshold_amount = BREAKPOINT_DEFAULTS[
^^^^^^^^^^^^^^^^^^^^
KeyError: 'gradient'
```
### Description
When trying to use the `SemanticChunker` with `"gradient"` as the `breakpoint_threshold_type`, I noticed it always raised a `KeyError`.
After checking `/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_experimental/text_splitter.py` I noticed the option wasn't present.
The option is present in the repository (it was merged about a week ago, I believe) but has not yet been included in a released version.
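Until a new `langchain-experimental` release ships, one possible workaround (the subdirectory path is an assumption based on the repository layout) is installing the package straight from source:
```
pip install "git+https://github.com/langchain-ai/langchain.git#subdirectory=libs/experimental"
```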
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:27 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T8103
> Python Version: 3.12.4 (v3.12.4:8e8a4baf65, Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.81
> langchain_experimental: 0.0.61
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.9
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| The gradient option in SemanticChunker (langchain_experimental) is not available when installing from pip | https://api.github.com/repos/langchain-ai/langchain/issues/23340/comments | 1 | 2024-06-23T20:03:37Z | 2024-06-24T11:40:37Z | https://github.com/langchain-ai/langchain/issues/23340 | 2,368,869,398 | 23,340 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code creates a Ranker object that downloads the model into a custom directory, and then initializes FlashrankRerank with it.
```python
from flashrank import Ranker
from langchain.retrievers.document_compressors import FlashrankRerank

def create_ms_marco_mini_llm():
    # logger and model_dir are defined elsewhere in the application
    logger.info("Download model ms_marco_mini_llm")
    model_name = "ms-marco-MiniLM-L-12-v2"
    ranker = Ranker(model_name=model_name, max_length=1024, cache_dir=model_dir)
    return FlashrankRerank(client=ranker, model=model_name)
```
The validate_environment method in FlashrankRerank then creates a second Ranker object, which downloads the model again into a different directory:
```python
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
try:
from flashrank import Ranker
except ImportError:
raise ImportError(
"Could not import flashrank python package. "
"Please install it with `pip install flashrank`."
)
values["model"] = values.get("model", DEFAULT_MODEL_NAME)
values["client"] = Ranker(model_name=values["model"])
return values
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
FlashrankRerank should check whether the client is already initialized before creating a `Ranker`.
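A sketch of the suggested guard, reusing the names from the snippet above:
```python
values["model"] = values.get("model", DEFAULT_MODEL_NAME)
if not values.get("client"):
    # Only build a default Ranker when the caller did not supply one
    values["client"] = Ranker(model_name=values["model"])
```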
### System Info
langchain==0.2.5
langchain-anthropic==0.1.15
langchain-aws==0.1.6
langchain-community==0.2.5
langchain-core==0.2.7
langchain-experimental==0.0.61
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
openinference-instrumentation-langchain==0.1.19 | FlashrankRerank validate_environment creates a Ranker client even if a custom client is passed | https://api.github.com/repos/langchain-ai/langchain/issues/23338/comments | 1 | 2024-06-23T18:43:30Z | 2024-06-24T11:11:21Z | https://github.com/langchain-ai/langchain/issues/23338 | 2,368,826,147 | 23,338 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# https://python.langchain.com/v0.2/docs/how_to/message_history/
from langchain_community.chat_message_histories import SQLChatMessageHistory
from langchain_core.messages import HumanMessage
from langchain_core.runnables.history import RunnableWithMessageHistory

def get_session_history(session_id):
    return SQLChatMessageHistory(session_id, "sqlite:///memory.db")

# llm is the model set up earlier in the guide
runnable_with_history = RunnableWithMessageHistory(
    llm,
    get_session_history,
)

runnable_with_history.invoke(
    [HumanMessage(content="hi - im bob!")],
    config={"configurable": {"session_id": "1"}},
)

runnable_with_history.invoke(
    [HumanMessage(content="what is my name?")],
    config={"configurable": {"session_id": "1"}},
)
```
### Error Message and Stack Trace (if applicable)
Error in RootListenersTracer.on_llm_end callback: AttributeError("'str' object has no attribute 'type'")
<img width="1122" alt="image" src="https://github.com/langchain-ai/langchain/assets/31367145/1db9de46-0822-4c78-80e1-6a6ea8633f21">
### Description
I'm following the guide at https://python.langchain.com/v0.2/docs/how_to/message_history/ and got this error.
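In case it helps triage, my guess (unverified) is that the callback fails when the runnable's output is a plain string rather than a message, i.e. when `llm` is a completion-style model instead of a chat model:
```python
# Assumption: the guide expects a chat model, whose output is an AIMessage.
# A completion-style LLM returns a plain str, which the history callback
# apparently cannot convert into a message ("'str' object has no attribute 'type'").
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo")  # message output instead of str
```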
### System Info
From `conda list`:
langchain 0.2.1 pypi_0 pypi
langchain-chroma 0.1.1 pypi_0 pypi
langchain-community 0.2.1 pypi_0 pypi
langchain-core 0.2.2 pypi_0 pypi
langchain-openai 0.1.8 pypi_0 pypi
langchain-text-splitters 0.2.0 pypi_0 pypi
langgraph 0.0.66 pypi_0 pypi
langserve 0.2.2 pypi_0 pypi
langsmith 0.1.63 pyhd8ed1ab_0 conda-forge | Error in RootListenersTracer.on_llm_end callback: AttributeError("'str' object has no attribute 'type'") | https://api.github.com/repos/langchain-ai/langchain/issues/23311/comments | 2 | 2024-06-23T08:47:47Z | 2024-06-23T09:59:18Z | https://github.com/langchain-ai/langchain/issues/23311 | 2,368,450,526 | 23,311 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```python
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import DuckDB
db = DuckDB.from_texts(
['a', 'v', 'asdfadf', '893yhrfa'],
HuggingFaceEmbeddings())
```
### Error Message and Stack Trace (if applicable)
```
CatalogException Traceback (most recent call last)
Cell In[3], line 2
1 texts = ['a', 'v', 'asdfadf', '893yhrfa']
----> 2 db = DuckDB.from_texts(texts,
3 HuggingFaceEmbeddings())
4 db.similarity_search('ap', k=5)
File /opt/conda/lib/python3.10/site-packages/langchain_community/vectorstores/duckdb.py:287, in DuckDB.from_texts(cls, texts, embedding, metadatas, **kwargs)
278 instance = DuckDB(
279 connection=connection,
280 embedding=embedding,
(...)
284 table_name=table_name,
285 )
286 # Add texts and their embeddings to the DuckDB vector store
--> 287 instance.add_texts(texts, metadatas=metadatas, **kwargs)
289 return instance
File /opt/conda/lib/python3.10/site-packages/langchain_community/vectorstores/duckdb.py:194, in DuckDB.add_texts(self, texts, metadatas, **kwargs)
191 if have_pandas:
192 # noinspection PyUnusedLocal
193 df = pd.DataFrame.from_dict(data) # noqa: F841
--> 194 self._connection.execute(
195 f"INSERT INTO {self._table_name} SELECT * FROM df",
196 )
197 return ids
CatalogException: Catalog Error: Table with name df does not exist!
Did you mean "pg_am"?
LINE 1: INSERT INTO embeddings SELECT * FROM df
```
### Description
* I am trying to use DuckDB as a vector store
* I expect to get a vector store instance connected to DuckDB
* Instead, it throws an error on initialization
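A possible workaround sketch, registering the DataFrame explicitly instead of relying on DuckDB's replacement scan; whether the scan is the actual root cause is an assumption:
```python
# Inside DuckDB.add_texts, instead of referencing the local `df` by name:
self._connection.register("df", df)  # make the DataFrame visible to the connection
self._connection.execute(
    f"INSERT INTO {self._table_name} SELECT * FROM df",
)
self._connection.unregister("df")
```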
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Tue Dec 19 13:14:11 UTC 2023
> Python Version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.81
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | SQL problem in langchain-community `langchain_community.vectorstores.duckdb`:194 | https://api.github.com/repos/langchain-ai/langchain/issues/23308/comments | 3 | 2024-06-23T02:21:20Z | 2024-07-01T13:12:07Z | https://github.com/langchain-ai/langchain/issues/23308 | 2,368,150,297 | 23,308 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/chatbot/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
It would be nice to understand what these "Parent run 181a1f04-9176-4837-80e8-ce74866775a2 not found for run ad402c5a-8341-4c62-ac58-cdf923b3b9ec. Treating as a root run." messages mean.
Are they harmless?
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/tutorials/chatbot/> | https://api.github.com/repos/langchain-ai/langchain/issues/23307/comments | 1 | 2024-06-22T16:46:20Z | 2024-06-25T18:59:03Z | https://github.com/langchain-ai/langchain/issues/23307 | 2,367,902,893 | 23,307 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os

from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import tool
from langchain_google_vertexai import ChatVertexAI

# Vanna import paths assumed:
from vanna.chromadb import ChromaDB_VectorStore
from vanna.google import GoogleGeminiChat

class MyVanna(ChromaDB_VectorStore, GoogleGeminiChat):
    def __init__(self, config=None):
        ChromaDB_VectorStore.__init__(self, config=config)
        GoogleGeminiChat.__init__(self, config=config)

vn = MyVanna(config={
    "path": "../...",
    "api_key": os.getenv("GOOGLE_API_KEY"),
    "model_name": "gemini-1.5-pro",
})

@tool('vanna_tool')
def vanna_tool(qa: str):
    "....."
    ....
    return .....

class Response(BaseModel):
    """The format for your final response by the agent should be in a Json format agent_response and sql_query"""
    agent_response: str = Field(description="""..... """)
    sql_query: str = Field("", description="The full sql query returned by the `vanna_tool` agent and '' if no response.")

tools = [vanna_tool]
llm = ChatVertexAI(model="gemini-pro")
...  # prompt (a ChatPromptTemplate) is built here, elided
parser = PydanticOutputParser(pydantic_object=Response)
prompt.partial_variables = {"format_instructions": parser.get_format_instructions()}
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    return_intermediate_steps=True
)
```
### Error Message and Stack Trace (if applicable)
in case of OpenAI (correct output):
```
> Entering new AgentExecutor chain...
Invoking: `vanna_tool` with `{'qa': "Display the '...' column values for the week between 2 to 8 in 2024."}`
...
```
in case of Gemini (random output, without ever invoking `vanna_tool`):
````
> Entering new AgentExecutor chain...
```json
{"agent_response": "Here are the ... values for weeks 2 to 8 in 2024:\n\n```\n opec \nweek \n2 77680000 \n3 77900000 \n4 78110000 \n5 78310000 \n6 78500000 \n7 78680000 \n```\n\nI hope this is helpful!", "sql_query": "SELECT ...FROM ...WHERE week BETWEEN 2 AND 8 AND YEAR_ = 2024;"}
````
### Description
When asking a question that should invoke `vanna_tool`, it works with OpenAI or Mistral, but with any Gemini model the agent produces a random response without ever invoking `vanna_tool`.
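A quick way to isolate the problem might be to call the tool-bound model directly, outside the agent, and inspect whether Gemini emits structured tool calls at all (sketch; the question text is just an example):
```python
llm_with_tools = llm.bind_tools(tools)
msg = llm_with_tools.invoke("Display the column values for the weeks between 2 and 8 in 2024.")
# An empty list here means the model answered in plain text instead of calling the tool
print(msg.tool_calls)
```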
### System Info
langchain==0.2.5
langchain-community==0.2.5
langchain-core==0.2.9
langchain-experimental==0.0.61
langchain-openai==0.1.9
langchain-google-genai==1.0.6
langchain-google-vertexai==1.0.5
google-generativeai==0.5.4 | AgentExecutor is not choosing the tool (vanna_tool) when using any Gemini model (it is working properly when using OpenAI or Mistral) | https://api.github.com/repos/langchain-ai/langchain/issues/23298/comments | 2 | 2024-06-22T07:58:50Z | 2024-06-22T09:42:35Z | https://github.com/langchain-ai/langchain/issues/23298 | 2,367,677,918 | 23,298 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import List

from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

msg = "what queries must i run?"

class Step(BaseModel):
    step_name: str = Field(description="...")
    tool_to_use: str = Field(description="...")
    tool_input: str = Field(description="...")
    depends_on: List[str] = Field(description="...")

class PlanOutput(BaseModel):
    task: str = Field(description="...")
    steps: List[Step] = Field(description="...")

parser = JsonOutputParser(pydantic_object=PlanOutput)
llm = ChatOpenAI(...)
chain = (
    ChatPromptTemplate.from_messages(
        [("user", "...{input} Your output must follow this format: {format}")]
    )
    | llm
    | parser
)
chain.invoke({"format": parser.get_format_instructions(), "input": msg})
```
### Error Message and Stack Trace (if applicable)
2024-06-22 11:21:03,116 - agent.py - 90 - ERROR - Traceback (most recent call last):
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/output_parsers/json.py", line 66, in parse_result
return parse_json_markdown(text)
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/utils/json.py", line 147, in parse_json_markdown
return _parse_json(json_str, parser=parser)
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/utils/json.py", line 160, in _parse_json
return parser(json_str)
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/utils/json.py", line 120, in parse_partial_json
return json.loads(s, strict=strict)
File "/usr/lib/python3.9/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/usr/lib/python3.9/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.9/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 6 column 22 (char 109)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/mnt/d/python_projects/azure-openai-qa-bot/nat-sql/src/agent.py", line 69, in talk
for s in ap.app.stream({"task": inp, 'session_id': sid}, config=args):
File "/root/classifier/.venv/lib/python3.9/site-packages/langgraph/pregel/__init__.py", line 963, in stream
_panic_or_proceed(done, inflight, step)
File "/root/classifier/.venv/lib/python3.9/site-packages/langgraph/pregel/__init__.py", line 1489, in _panic_or_proceed
raise exc
File "/usr/lib/python3.9/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/root/classifier/.venv/lib/python3.9/site-packages/langgraph/pregel/retry.py", line 66, in run_with_retry
task.proc.invoke(task.input, task.config)
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2399, in invoke
input = step.invoke(
File "/root/classifier/.venv/lib/python3.9/site-packages/langgraph/utils.py", line 95, in invoke
ret = context.run(self.func, input, **kwargs)
File "/mnt/d/python_projects/azure-openai-qa-bot/nat-sql/src/action_plan.py", line 138, in _plan_steps
plan = self.planner.invoke({"task": state['task'], 'chat_history': hist if not self.no_mem else [],
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2399, in invoke
input = step.invoke(
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/output_parsers/base.py", line 169, in invoke
return self._call_with_config(
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1509, in _call_with_config
context.run(
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/runnables/config.py", line 365, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/output_parsers/base.py", line 170, in <lambda>
lambda inner_input: self.parse_result(
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/output_parsers/json.py", line 69, in parse_result
raise OutputParserException(msg, llm_output=text) from e
langchain_core.exceptions.OutputParserException: Invalid json output: ```json
{
"task": "what Queries tmust i run",
"steps": [
{
"step_name": "Step#1",
"tool_to_use": Document_Search_Tool,
"tool_input": "What queries must I run?",
"depends_on": []
}
]
}
```
### Description
Sometimes, despite adding a JSON output parser to the LLM chain, the LLM encloses the generated JSON within ```` ```json ... ``` ```` fences.
This causes the JSON output parser to fail. It would be nice if the parser could check for this enclosure and remove it before parsing the JSON.
![image](https://github.com/langchain-ai/langchain/assets/20237354/069e5da1-423a-4bdc-bcc6-cf2fb0ced809)
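As a stopgap, a small pre-parsing step can strip the fences before the parser runs. A sketch (the regex and its placement are my own, not a LangChain API):
```python
import re

from langchain_core.runnables import RunnableLambda

def strip_json_fences(message):
    """Strip a leading json code fence, if present."""
    text = message.content
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    return match.group(1) if match else text

# chain = prompt | llm | RunnableLambda(strip_json_fences) | parser
```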
### System Info
```
langchain==0.2.1
langchain-chroma==0.1.1
langchain-cli==0.0.24
langchain-community==0.2.1
langchain-core==0.2.3
langchain-openai==0.1.8
langchain-text-splitters==0.2.0
langchain-visualizer==0.0.33
langchainhub==0.1.19
``` | JsonOutputParser fails at times when the LLM encloses the output JSON within ``` json ... ``` | https://api.github.com/repos/langchain-ai/langchain/issues/23297/comments | 1 | 2024-06-22T06:06:18Z | 2024-06-23T09:37:38Z | https://github.com/langchain-ai/langchain/issues/23297 | 2,367,595,218 | 23,297 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/multimodal_prompts/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The examples in this page show the image as part of the message as:
```json
{"type" : "image_url", "image_url" : "data:image/jpeg;base64,{image_data}"}
```
However, this will result in a 400 response from OpenAI because the `image_url` value must be an object and not a string. This is the proper schema:
```json
{"type" : "image_url", "image_url" : {"url" : "data:image/jpeg;base64,{image_data}"}}
```
### Idea or request for content:
Rewrite the documentation examples to have the proper schema. I believe the other [multi modal page](https://python.langchain.com/v0.2/docs/how_to/multimodal_inputs/) has the correct one.
[OpenAI documentation reference](https://platform.openai.com/docs/guides/vision/quick-start) | DOC: <Issue related to /v0.2/docs/how_to/multimodal_prompts/> Image message structure is incorrect for OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/23294/comments | 2 | 2024-06-22T01:35:54Z | 2024-06-24T15:58:52Z | https://github.com/langchain-ai/langchain/issues/23294 | 2,367,440,695 | 23,294 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from this tutorial: https://python.langchain.com/v0.2/docs/tutorials/sql_qa/
```python
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_core.messages import HumanMessage, SystemMessage
from langgraph.prebuilt import create_react_agent

# db (SQLDatabase) and llm are set up earlier, as in the tutorial
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
tools = toolkit.get_tools()

SQL_PREFIX = """You are an agent designed to interact with a SQL database.
Given an input question, create a syntactically correct SQLite query to run, then look at the results of the query and return the answer.
Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most 5 results.
You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for the relevant columns given the question.
You have access to tools for interacting with the database.
Only use the below tools. Only use the information returned by the below tools to construct your final answer.
You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.
DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.
To start you should ALWAYS look at the tables in the database to see what you can query.
Do NOT skip this step.
Then you should query the schema of the most relevant tables."""

system_message = SystemMessage(content=SQL_PREFIX)
agent_executor = create_react_agent(llm, tools, messages_modifier=system_message)

for s in agent_executor.stream(
    {"messages": [HumanMessage(content="Which country's customers spent the most?")]}
):
    print(s)
    print("----")
```
### Error Message and Stack Trace (if applicable)
The LLM returns hallucinated query results, but the generated SQL query looks reasonable; no error messages were surfaced.
### Description
I am using the above code to create a SQL agent. The code runs and generates reasonable SQL queries, but the query results are all hallucinated rather than actual results from the database. I'm wondering how the agent is connected to the db, since the agent's arguments don't include `db`, and why the `sql_db_query` tool doesn't execute against the SQL database.
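As far as I can tell, the connection travels inside the tools rather than the agent: each tool built by `SQLDatabaseToolkit` holds a reference to `db`. A quick check (sketch):
```python
for t in tools:
    print(t.name, "-", type(t).__name__)
# Expected (approximately): sql_db_query -> QuerySQLDataBaseTool, which wraps `db`
```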
### System Info
langchain==0.2.5
langchain-aws==0.1.6
langchain-chroma==0.1.0
langchain-community==0.2.5
langchain-core==0.2.7
langchain-experimental==0.0.51
langchain-text-splitters==0.2.1
@dosu | V2.0 create_react_agent doesn't execute generated query on sql database | https://api.github.com/repos/langchain-ai/langchain/issues/23293/comments | 2 | 2024-06-22T01:30:15Z | 2024-06-28T01:45:09Z | https://github.com/langchain-ai/langchain/issues/23293 | 2,367,431,738 | 23,293 |
[
"hwchase17",
"langchain"
] | > @TheJerryChang will it also stop the llm`s completion process? I am using langchain llama cpp with conversationalretrievalchain
Did you solve this? It seems I'm facing the same issue.
> > @TheJerryChang will it also stop the llm`s completion process? I am using langchain llama cpp with conversationalretrievalchain
>
> For my use case, Im using Chat model and did not try with completion process. How do you do the streaming? Did you use chain.stream()? I personally think as long as it supports calling chain.stream() to proceed the streaming, it then could be interrupted by raising an exception during the iteration
I'm using the .astream_events method of AgentExecutor. I'm trying to figure out how to stream an agent response but also be able to cancel it before it finishes generating.
_Originally posted by @Spider-netizen in https://github.com/langchain-ai/langchain/issues/11959#issuecomment-1975388015_
Hi,
Can this be achieved?
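For context, this is roughly what I'm attempting; whether cancelling the consumer task actually stops llama.cpp's generation is exactly my question:
```python
import asyncio

async def consume(agent_executor, inputs):
    async for event in agent_executor.astream_events(inputs, version="v1"):
        ...  # handle/stream each event to the client

task = asyncio.create_task(consume(agent_executor, {"input": "hi"}))
# Later, when the user clicks "stop":
task.cancel()
```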
Thanks. | Cancelling Llama CPP Generation Using Agent's astream_events | https://api.github.com/repos/langchain-ai/langchain/issues/23282/comments | 1 | 2024-06-21T20:51:36Z | 2024-07-26T04:27:17Z | https://github.com/langchain-ai/langchain/issues/23282 | 2,367,232,107 | 23,282 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Dockerfile:
```
# LLM Installs
RUN pip3 install \
langchain \
langchain-community \
langchain-pinecone \
langchain-openai
```
Python Imports
``` python
import langchain
from langchain_community.document_loaders import PyPDFLoader, TextLoader, Docx2txtLoader, UnstructuredHTMLLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_pinecone import PineconeVectorStore
from langchain_openai import OpenAIEmbeddings, OpenAI, ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import RetrievalQAWithSourcesChain, ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.load.dump import dumps
```
### Error Message and Stack Trace (if applicable)
```
2024-05-30 13:28:59 from langchain_openai import OpenAIEmbeddings, OpenAI, ChatOpenAI
2024-05-30 13:28:59 File "/usr/local/lib/python3.9/site-packages/langchain_openai/__init__.py", line 1, in <module>
2024-05-30 13:28:59 from langchain_openai.chat_models import (
2024-05-30 13:28:59 File "/usr/local/lib/python3.9/site-packages/langchain_openai/chat_models/__init__.py", line 1, in <module>
2024-05-30 13:28:59 from langchain_openai.chat_models.azure import AzureChatOpenAI
2024-05-30 13:28:59 File "/usr/local/lib/python3.9/site-packages/langchain_openai/chat_models/azure.py", line 9, in <module>
2024-05-30 13:28:59 from langchain_core.language_models.chat_models import LangSmithParams
2024-05-30 13:28:59 ImportError: cannot import name 'LangSmithParams' from 'langchain_core.language_models.chat_models' (/usr/local/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py)
```
### Description
I am trying to import langchain_openai with the newest version released last night (0.1.8) and it cannot find the `LangSmithParams` module.
Moving back a version with ``` langchain-openai==0.1.7 ``` makes it work again. Something in this new update broke the import.
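If it helps others hitting this: my guess (unverified) is a version mismatch between `langchain-openai` 0.1.8 and an older `langchain-core` that predates `LangSmithParams`, so upgrading core alongside it (`pip3 install -U langchain-core`) may be an alternative to pinning `langchain-openai==0.1.7`.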
### System Info
Container is running python 3.9 on Rocky Linux 8
```
# Install dependencies
RUN dnf -y install epel-release
RUN dnf -y install \
httpd \
python39 \
unzip \
xz \
git-core \
ImageMagick \
wget
RUN pip3 install \
psycopg2-binary \
pillow \
lxml \
pycryptodomex \
six \
pytz \
jaraco.functools \
requests \
supervisor \
flask \
flask-cors \
flask-socketio \
mako \
boto3 \
botocore==1.34.33 \
gotenberg-client \
docusign-esign \
python-dotenv \
htmldocx \
python-docx \
beautifulsoup4 \
pypandoc \
pyetherpadlite \
html2text \
PyJWT \
sendgrid \
auth0-python \
authlib \
openai==0.27.7 \
pinecone-client==3.1.0 \
pinecone-datasets==0.7.0 \
tiktoken==0.4.0
# Installing LLM requirements
RUN pip3 install \
langchain \
langchain-community \
langchain-pinecone \
langchain-openai==0.1.7 \
pinecone-client \
pinecone-datasets \
unstructured \
poppler-utils \
tiktoken \
pypdf \
python-dotenv \
docx2txt
``` | langchain-openai==0.1.8 is completely broken | https://api.github.com/repos/langchain-ai/langchain/issues/23278/comments | 1 | 2024-06-21T19:24:45Z | 2024-07-24T19:05:47Z | https://github.com/langchain-ai/langchain/issues/23278 | 2,367,126,718 | 23,278 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Dockerfile:
```
# LLM Installs
RUN pip3 install \
langchain \
langchain-community \
langchain-pinecone \
langchain-openai
```
Python Imports
``` python
import langchain
from langchain_community.document_loaders import PyPDFLoader, TextLoader, Docx2txtLoader, UnstructuredHTMLLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_pinecone import PineconeVectorStore
from langchain_openai import OpenAIEmbeddings, OpenAI, ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import RetrievalQAWithSourcesChain, ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.load.dump import dumps
```
### Error Message and Stack Trace (if applicable)
```
2024-05-30 13:28:59 from langchain_openai import OpenAIEmbeddings, OpenAI, ChatOpenAI
2024-05-30 13:28:59 File "/usr/local/lib/python3.9/site-packages/langchain_openai/__init__.py", line 1, in <module>
2024-05-30 13:28:59 from langchain_openai.chat_models import (
2024-05-30 13:28:59 File "/usr/local/lib/python3.9/site-packages/langchain_openai/chat_models/__init__.py", line 1, in <module>
2024-05-30 13:28:59 from langchain_openai.chat_models.azure import AzureChatOpenAI
2024-05-30 13:28:59 File "/usr/local/lib/python3.9/site-packages/langchain_openai/chat_models/azure.py", line 9, in <module>
2024-05-30 13:28:59 from langchain_core.language_models.chat_models import LangSmithParams
2024-05-30 13:28:59 ImportError: cannot import name 'LangSmithParams' from 'langchain_core.language_models.chat_models' (/usr/local/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py)
```
### Description
I am trying to import langchain_openai with the newest version released last night (0.1.8) and it cannot find the `LangSmithParams` module.
Moving back a version with ``` langchain-openai==0.1.7 ``` makes it work again. Something in this new update broke the import.
### System Info
Container is running python 3.9 on Rocky Linux 8
```
# Install dependencies
RUN dnf -y install epel-release
RUN dnf -y install \
httpd \
python39 \
unzip \
xz \
git-core \
ImageMagick \
wget
RUN pip3 install \
psycopg2-binary \
pillow \
lxml \
pycryptodomex \
six \
pytz \
jaraco.functools \
requests \
supervisor \
flask \
flask-cors \
flask-socketio \
mako \
boto3 \
botocore==1.34.33 \
gotenberg-client \
docusign-esign \
python-dotenv \
htmldocx \
python-docx \
beautifulsoup4 \
pypandoc \
pyetherpadlite \
html2text \
PyJWT \
sendgrid \
auth0-python \
authlib \
openai==0.27.7 \
pinecone-client==3.1.0 \
pinecone-datasets==0.7.0 \
tiktoken==0.4.0
# Installing LLM requirements
RUN pip3 install \
langchain \
langchain-community \
langchain-pinecone \
langchain-openai==0.1.7 \
pinecone-client \
pinecone-datasets \
unstructured \
poppler-utils \
tiktoken \
pypdf \
python-dotenv \
docx2txt
``` | langchain-openai==0.1.8 is now broken | https://api.github.com/repos/langchain-ai/langchain/issues/23277/comments | 1 | 2024-06-21T19:11:03Z | 2024-06-21T19:11:33Z | https://github.com/langchain-ai/langchain/issues/23277 | 2,367,109,149 | 23,277 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
test | test | https://api.github.com/repos/langchain-ai/langchain/issues/23274/comments | 0 | 2024-06-21T18:49:28Z | 2024-07-01T15:04:50Z | https://github.com/langchain-ai/langchain/issues/23274 | 2,367,082,015 | 23,274 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Imports assumed from context:
import os
from configparser import ConfigParser
from typing import Mapping

import httpx
from dotenv import load_dotenv
from langchain_openai import AzureOpenAIEmbeddings

def set_embed(self, query: str) -> None:
    # Load model configurations
    load_dotenv(self.model_conf)

    # Load app configurations
    config = ConfigParser(interpolation=None)
    config.read('app.ini')
    apim_params = config['apim']

    # Set env variables
    os.environ["OPENAI_API_KEY"] = apim_params['OPENAI_API_KEY']

    # Set emb_model name variables
    # embed_model = os.getenv('EMBEDDING_MODEL_TYPE')
    embed_model = os.getenv('MODEL_TYPE_EMBEDDING')
    print(embed_model)

    # Set apim request parameters
    params: Mapping[str, str] = {
        'api-version': os.getenv('OPENAI_API_VERSION')
    }
    headers: Mapping[str, str] = {
        'Content-Type': apim_params['CONTENT_TYPE'],
        'Ocp-Apim-Subscription-Key': os.getenv('OCP-APIM-SUBSCRIPTION-KEY')
    }
    client = httpx.Client(
        base_url=os.getenv('AZURE_OPENAI_ENDPOINT'),
        params=params,
        headers=headers,
        verify=apim_params['CERT_PATH']
    )
    print(client.params)
    print(client.headers)

    try:
        # Load embedding model
        self.embed = AzureOpenAIEmbeddings(
            model='text-embedding-ada-002',
            azure_deployment=embed_model,
            chunk_size=2048,
            http_client=client)
        print(self.embed)
        result = self.embed.embed_query(query)
        print(f'{embed_model} model initialized')
    except Exception as e:
        raise Exception(f'ApimUtils-set_embed : Error while initializing embedding model - {e}')
```
### Error Message and Stack Trace (if applicable)
ApimUtils-set_embed : Error while initializing embedding model - Error code: 400 - {'statusCode': 400, 'message': "Unable to parse and estimate tokens from incoming request. Please ensure incoming request is of one of the following types: 'Chat Completion', 'Completion', 'Embeddings' and works with current prompt estimation mode of 'Auto'."}
### Description
When using the `AzureOpenAIEmbeddings` class with our Azure APIM in front of our Azure OpenAI services, requests break inside our APIM policy, which captures/calculates prompt/completion tokens from the request. We believe this is because `AzureOpenAIEmbeddings` sends a list of token integers, e.g. `b'{"input": [[3923, 374, 279, 4611, 96462, 46295, 58917, 30]], "model": "text-embedding-ada-002", "encoding_format": "base64"}'`, instead of a `[str]` built from the query text.
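If the tokenized input is indeed the trigger, `check_embedding_ctx_length=False` should make the client send the raw string instead of pre-tokenized integer lists; the parameter exists on the embeddings class, but that it satisfies the APIM policy is an assumption:
```python
self.embed = AzureOpenAIEmbeddings(
    model='text-embedding-ada-002',
    azure_deployment=embed_model,
    chunk_size=2048,
    check_embedding_ctx_length=False,  # send raw text, skip client-side tokenization
    http_client=client)
```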
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.5
> langchain: 0.2.3
> langchain_community: 0.2.4
> langsmith: 0.1.75
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.1 | LangChain AzureOpenAIEmbeddings issue when passing list of ints vs [str] | https://api.github.com/repos/langchain-ai/langchain/issues/23268/comments | 2 | 2024-06-21T16:08:45Z | 2024-07-10T11:18:04Z | https://github.com/langchain-ai/langchain/issues/23268 | 2,366,841,828 | 23,268 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Gemini now allows a developer to create a context cache with the system instructions, contents, tools, and model information already set, and then reference this context as part of a standard query. It must be explicitly cached (ie - it is not automatic as part of a request or reply) and a cache expiration can be set (and later changed).
It does not appear to be supported in Vertex AI at this time.
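For reference, the raw `google-generativeai` usage looks roughly like this; a sketch based on the AI Studio docs linked below, with placeholder content, and exact signatures may differ by version:
```python
import datetime

import google.generativeai as genai
from google.generativeai import caching

cache = caching.CachedContent.create(
    model="models/gemini-1.5-flash-001",
    system_instruction="You answer questions about the attached transcript.",
    contents=[long_transcript],          # placeholder document
    ttl=datetime.timedelta(minutes=30),  # explicit expiration, can be updated later
)

model = genai.GenerativeModel.from_cached_content(cached_content=cache)
response = model.generate_content("Summarize section 2.")
```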
Open issues:
* Best paradigm to add to cache or integrate with LangChain history system
* Best paradigm to reference
References:
* AI Studio / genai: https://ai.google.dev/gemini-api/docs/caching?lang=python
* LangChain.js: https://github.com/langchain-ai/langchainjs/issues/5841 | google-genai [feature]: Context Caching | https://api.github.com/repos/langchain-ai/langchain/issues/23259/comments | 1 | 2024-06-21T12:50:57Z | 2024-08-06T16:51:24Z | https://github.com/langchain-ai/langchain/issues/23259 | 2,366,484,658 | 23,259 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
``` python
import os
import json
from pathlib import Path
from langchain_community.cache import SQLiteCache
from typing import Callable, List
model_list = [
'ChatAnthropic', # <- has several instance of this bug, not only SQLiteCache
'ChatBaichuan',
'ChatCohere',
'ChatCoze',
'ChatDeepInfra',
'ChatEverlyAI',
'ChatFireworks',
'ChatFriendli',
'ChatGooglePalm',
'ChatHunyuan',
'ChatLiteLLM',
'ChatOctoAI',
'ChatOllama',
'ChatOpenAI',
'ChatPerplexity',
'ChatYuan2',
'ChatZhipuAI'
# Below are the models I didn't test, as well as the reason why I haven't
# 'ChatAnyscale', # needs a model name
# 'ChatDatabricks', # needs some params
# 'ChatHuggingFace', # needs a modelname
# 'ChatJavelinAIGateway', # needs some params
# 'ChatKinetica', # not installed
# 'ChatKonko', # not installed
# 'ChatLiteLLMRouter', # needs router arg
# 'ChatLlamaCpp', #needs some params
# 'ChatMLflowAIGateway', # not installed
# 'ChatMaritalk', # needs some params
# 'ChatMlflow', # not installed
# 'ChatMLX', # needs some params
# 'ChatPremAI', # not installed
# 'ChatSparkLLM', # issue with api key
# 'ChatTongyi', # not installed
# 'ChatVertexAI', # not insalled
# 'ChatYandexGPT', # needs some params
]
# import the models
for m in model_list:
exec(f"from langchain_community.chat_models import {m}")
# set fake api keys
for m in model_list:
backend = m[4:].upper()
os.environ[f"{backend}_API_KEY"] = "aaaaaa"
os.environ[f"{backend}_API_TOKEN"] = "aaaaaa"
os.environ[f"{backend}_TOKEN"] = "aaaaaa"
os.environ["GOOGLE_API_KEY"] = "aaaaaa"
os.environ["HUNYUAN_APP_ID"] = "aaaaaa"
os.environ["HUNYUAN_SECRET_ID"] = "aaaaaa"
os.environ["HUNYUAN_SECRET_KEY"] = "aaaaaa"
os.environ["PPLX_API_KEY"] = "aaaaaa"
os.environ["IFLYTEK_SPARK_APP_ID"] = "aaaaaa"
os.environ["SPARK_API_KEY"] = "aaaaaa"
os.environ["DASHSCOPE_API_KEY"] = "aaaaaa"
os.environ["YC_API_KEY"] = "aaaaaa"
# create two brand new cache
Path("test_cache.db").unlink(missing_ok=True)
c1 = SQLiteCache(database_path="test_cache.db")
c2 = SQLiteCache(database_path="test_cache.db")
def recur_dict_check(val: dict) -> List[str]:
"find which object is causing the issue"
found = []
for k, v in val.items():
if " object at " in str(v):
if isinstance(v, dict):
found.append(recur_dict_check(v))
else:
found.append(v)
# flatten the list
out = []
for f in found:
if isinstance(f, list):
out.extend(f)
else:
out.append(f)
assert out
out = [str(o) for o in out]
return out
def check(chat_model: Callable, verbose: bool = False) -> bool:
"check a given chatmodel"
llm1 = chat_model(
cache=c1,
)
llm2 = chat_model(
cache=c2,
)
backend = llm1.get_lc_namespace()[-1]
str1 = llm1._get_llm_string().split("---")[0]
str2 = llm2._get_llm_string().split("---")[0]
if verbose:
print(f"LLM1:\n{str1}")
print(f"LLM2:\n{str2}")
if str1 == str2:
print(f"{backend.title()} does not have the bug")
return True
else:
print(f"{backend.title()} HAS the bug")
j1, j2 = json.loads(str1), json.loads(str2)
assert j1.keys() == j2.keys()
diff1 = recur_dict_check(j1)
diff2 = recur_dict_check(j2)
assert len(diff1) == len(diff2)
diffs = [str(v).split("object at ")[0] for v in diff1 + diff2]
assert all(diffs.count(elem) == 2 for elem in diffs)
print(f"List of buggy objects for model {backend.title()}:")
for d in diff1:
print(f" - {d}")
# for k, v in j1
return False
failed = []
for model in model_list:
if not check(locals()[model]):
failed.append(model)
print(f"The culprit is at least SQLiteCache repr string:\n{c1}\n{c2}")
c1.__class__.__repr__ = lambda x=None : "<langchain_community.cache.SQLiteCache>"
c2.__class__.__repr__ = lambda x=None : "<langchain_community.cache.SQLiteCache>"
print(f"Now fixed:\n{c1}\n{c2}\n")
# Anthropic still has issues
assert not check(locals()["ChatAnthropic"])
for model in failed:
if model == "ChatAnthropic": # anthropic actually has more issues!
continue
assert check(locals()[model]), model
print("Fixed it for most models!")
print(f"Models with the issue: {len(failed)} / {len(model_list)}")
for f in failed:
print(f" - {f}")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Being affected by [this bug](https://github.com/langchain-ai/langchain/issues/22389) in [my DocToolsLLM project](https://github.com/thiswillbeyourgithub/DocToolsLLM/), I ended up using ChatOpenAI directly whenever the requested model is from OpenAI, instead of ChatLiteLLM for all models.
The other day I noticed that my SQLiteCache was getting systematically ignored, but only by ChatOpenAI, and I ended up figuring out the culprit:
- To know whether a value is present in the cache, the prompt AND a string characterizing the LLM are used.
- The method used to characterize the LLM is `_get_llm_string()`.
- This method's implementation is inconsistent across chat models, causing its output to contain the unfiltered `__repr__` of objects such as the cache, callbacks, etc.
- The issue is that for many instances, the `__repr__` returns something like `<langchain_community.cache.SQLiteCache object at SOME_ADDRESS>`, which differs between otherwise identical configurations.
- I found that manually setting the `__repr__` on the superclass of those objects is a viable workaround.
To help you fix this ASAP I wrote a loop that checks all chat models and tells you which instance is causing the issue.
python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #35~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue May 7 09:00:52 UTC 2
> Python Version: 3.11.7 (main, Jun 12 2024, 12:57:34) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.7
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.77
> langchain_mistralai: 0.1.8
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | BUG: Many chat models never uses SQLiteCache because of the cache instance's __repr__ method changes! | https://api.github.com/repos/langchain-ai/langchain/issues/23257/comments | 6 | 2024-06-21T10:57:12Z | 2024-06-21T15:22:28Z | https://github.com/langchain-ai/langchain/issues/23257 | 2,366,287,565 | 23,257 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
``` python
from dotenv import load_dotenv
import streamlit as st
from langchain_community.document_loaders import PyPDFLoader, WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains import ConversationalRetrievalChain, StuffDocumentsChain, LLMChain
from langchain.memory import ConversationBufferMemory
import os
load_dotenv()
def load_documents(global_pdf_path, external_pdf_path=None, input_url=None):
""" This functionality of loading global PDF knowledge base is currently placed inside load_documents function
which is not a feasible approach has to perform the global pdf load only once hence the below funcion need
to be placed somewhere else"""
# Load the global internal knowledge base PDF
global_loader = PyPDFLoader(global_pdf_path)
global_docs = global_loader.load()
documents = global_docs
if external_pdf_path:
# Load the external input knowledge base PDF
external_loader = PyPDFLoader(external_pdf_path)
external_docs = external_loader.load()
documents += external_docs
if input_url:
# Load URL content
url_loader = WebBaseLoader(input_url)
url_docs = url_loader.load()
documents += url_docs
# Split the documents into smaller chunks
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
split_docs = text_splitter.split_documents(documents)
return split_docs
def create_vector_store(documents):
embeddings = OpenAIEmbeddings(api_key=os.environ['OPENAI_API_KEY'])
vector_store = FAISS.from_documents(documents, embeddings)
return vector_store
def get_LLM_response(query, task, content_type):
llm = ChatOpenAI(api_key=os.environ['OPENAI_API_KEY'])
# Create a prompt template
prompt = ChatPromptTemplate.from_template(f"""You are a Marketing Assistant
<context>
{{context}}
</context>
Question: {{input}}""")
# Create document chain
document_chain = StuffDocumentsChain(
llm_chain=LLMChain(llm=llm, prompt=prompt),
document_variable_name="context"
)
retriever = vector_store.as_retriever()
question_generator_template = PromptTemplate(
input_variables=[
"chat_history",
"input_key",
],
template= (
"""
Combine the chat history and follow up question into a standalone question.
Chat History: {chat_history}
Follow up question: {question}
""")
)
question_generator_chain = LLMChain(
llm=llm,
prompt=question_generator_template,
)
# Create retrieval chain
retrieval_chain = ConversationalRetrievalChain(
combine_docs_chain=document_chain,
question_generator=question_generator_chain,
retriever=retriever,
memory=ConversationBufferMemory(memory_key="chat_history", input_key="")
)
# Get the response
response = retrieval_chain.invoke({"question": query, "context": documents, "input": ""})
return response["answer"]
# Code for Frontend begins here
st.set_page_config(page_title="Linkenite AI", page_icon='🤖', layout='centered', initial_sidebar_state='collapsed')
st.header("🤖Linkenite Marketing Assistant")
prompt = st.text_input("Enter the prompt")
task = st.selectbox("Please select the input you want to provide", ('PDF', 'URL'), key=1)
content_type = st.selectbox("Select the type of content you want to generate", ("Blog", "Technical Blog", "Whitepaper", "Case Studies", "LinkedIn Post", "Social Media Post"), key=2)
input_file = None
input_url = None
if task == 'PDF':
input_file = st.file_uploader("Upload a PDF file", type="pdf")
# Work on tracking the path of the uploaded pdf
elif task == 'URL':
input_url = st.text_input("Enter the URL")
submit = st.button("Generate")
if submit and (input_file or input_url):
global_pdf_path = "input_kb.pdf"
external_pdf_path = None
if input_file:
# The input pdf file's path has to be used below inplace of "input_kb.pdf"
with open("input_kb.pdf", "wb") as f:
f.write(input_file.read())
external_pdf_path = "input_kb.pdf"
documents = load_documents(global_pdf_path, external_pdf_path, input_url)
vector_store = create_vector_store(documents)
context = " ".join([doc.page_content for doc in documents])
response = get_LLM_response(prompt, context, vector_store)
st.write(response)
def set_bg_from_url(url, opacity=1):
# Set background image using HTML and CSS
st.markdown(
f"""
<style>
body {{
background: url('{url}') no-repeat center center fixed;
background-size: cover;
opacity: {opacity};
}}
</style>
""",
unsafe_allow_html=True
)
# Set background image from URL
set_bg_from_url("https://cdn.create.vista.com/api/media/medium/231856778/stock-photo-smartphone-laptop-black-background-marketing-lettering-icons?token=", opacity=0.775)
```
### Error Message and Stack Trace (if applicable)
(venv) PS C:\Users\User\Desktop\Linkenite\MarketingAI MVP> streamlit run apporiginal.py
USER_AGENT environment variable not set, consider setting it to identify your requests.
C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain_core\_api\deprecation.py:139: LangChainDeprecationWarning: The class `LLMChain` was deprecated in LangChain 0.1.17 and will be removed in 0.3.0. Use RunnableSequence, e.g., `prompt | llm` instead.
warn_deprecated(
C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain_core\_api\deprecation.py:139: LangChainDeprecationWarning: The class `ConversationalRetrievalChain` was deprecated in LangChain 0.1.17 and will be removed in 0.3.0. Use create_history_aware_retriever together with create_retrieval_chain (see example in docstring) instead.
warn_deprecated(
2024-06-21 12:54:19.810 Uncaught app exception
Traceback (most recent call last):
File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 589, in _run_script
exec(code, module.__dict__)
File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\apporiginal.py", line 147, in <module>
response = get_LLM_response(prompt, context, vector_store)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\apporiginal.py", line 109, in get_LLM_response
response = retrieval_chain.invoke({"question": query, "context": documents, "input": ""})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain\chains\base.py", line 166, in invoke
raise e
File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain\chains\base.py", line 161, in invoke
final_outputs: Dict[str, Any] = self.prep_outputs(
^^^^^^^^^^^^^^^^^^
File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain\chains\base.py", line 460, in prep_outputs
self.memory.save_context(inputs, outputs)
File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain\memory\chat_memory.py", line 55, in save_context
input_str, output_str = self._get_input_output(inputs, outputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
##File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain\memory\chat_memory.py", line 51, in _get_input_output
return inputs[prompt_input_key], outputs[output_key]
~~~~~~^^^^^^^^^^^^^^^^^^
##KeyError: ''
### Description
I am trying to invoke a retrieval chain with three parameters, `{"question": query, "context": documents, "input": ""}`, and the result throws a KeyError related to `output_key`.
When I passed `output_key` as well, like `{"question": query, "context": documents, "input": "", "output_key": ""}`, it gives another error.
The error comes from line 51 of `langchain/memory/chat_memory.py`:
in _get_input_output
return inputs[prompt_input_key], outputs[output_key]
~~~~~~^^^^^^^^^^^^^^^^^^
KeyError: ''
Stopping...
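A possible fix sketch, pointing the memory and the invoke call at the chain's actual keys instead of empty strings; the key names come from `ConversationalRetrievalChain`'s defaults and are untested here:
```python
memory = ConversationBufferMemory(
    memory_key="chat_history",
    input_key="question",   # the chain's input key
    output_key="answer",    # the chain's output key
    return_messages=True,
)
# ...build the chain with this memory as before, then:
response = retrieval_chain.invoke({"question": query})
```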
### System Info
"pip freeze | grep langchain"
platform (windows)
python version (3.12.2)
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.12.4 (tags/v3.12.4:8e8a4ba, Jun 6 2024, 19:30:16) [MSC v.1940 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.77
> langchain_chroma: 0.1.1
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | KeyError: '' : output_key not found [in _get_input_output return inputs[prompt_input_key], outputs[output_key]] | https://api.github.com/repos/langchain-ai/langchain/issues/23255/comments | 1 | 2024-06-21T09:54:31Z | 2024-06-24T13:26:06Z | https://github.com/langchain-ai/langchain/issues/23255 | 2,366,174,937 | 23,255 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/document_loaders/chatgpt_loader/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
When we run this code with the downloaded conversations.json file, we get the following error
---
File "C:\New_LLM_Camp\myenv\Lib\site-packages\langchain_community\document_loaders\chatgpt.py", line 54, in load
concatenate_rows(messages[key]["message"], title)
File "C:\New_LLM_Camp\myenv\Lib\site-packages\langchain_community\document_loaders\chatgpt.py", line 25, in concatenate_rows
date = datetime.datetime.fromtimestamp(message["create_time"]).strftime(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object cannot be interpreted as an integer
---
The format of the export file for ChatGPT transcripts seems to be changing from what is defined in chatgpt.py.
Also, it seems to only work for text-only chats, and not for chats with images.
---
[chatgpt.py]
text = "".join(
[
concatenate_rows(messages[key]["message"], title)
for idx, key in enumerate(messages)
if not (
idx == 0
and messages[key]["message"]["author"]["role"] == "system"
)
]
)
---
### Idea or request for content:
_No response_ | Error when running ChatGPTLoader | https://api.github.com/repos/langchain-ai/langchain/issues/23252/comments | 0 | 2024-06-21T08:57:03Z | 2024-06-21T08:59:39Z | https://github.com/langchain-ai/langchain/issues/23252 | 2,366,065,661 | 23,252 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
In text_splitter.py (SemanticChunker)
```python
def _calculate_sentence_distances(
self, single_sentences_list: List[str]
) -> Tuple[List[float], List[dict]]:
"""Split text into multiple components."""
_sentences = [
{"sentence": x, "index": i} for i, x in enumerate(single_sentences_list)
]
sentences = combine_sentences(_sentences, self.buffer_size)
embeddings = self.embeddings.embed_documents(
[x["combined_sentence"] for x in sentences]
)
for i, sentence in enumerate(sentences):
sentence["combined_sentence_embedding"] = embeddings[i] << Failed here since embeddings size is less than i at a later point
return calculate_cosine_distances(sentences)
```
### Error Message and Stack Trace (if applicable)
```python
Traceback (most recent call last):
File "/Users/A72281951/telly/venv/ingestion/lib/python3.10/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/Users/A72281951/telly/venv/ingestion/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/A72281951/telly/venv/ingestion/lib/python3.10/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/Users/A72281951/telly/telly-backend/ingestion/main.py", line 132, in start
store.load_data_to_db(configured_spaces)
File "/Users/A72281951/telly/telly-backend/ingestion/common/utils.py", line 70, in wrapper
value = func(*args, **kwargs)
File "/Users/A72281951/telly/telly-backend/ingestion/agent/store/db.py", line 86, in load_data_to_db
for docs in self.ingest_data(spaces):
File "/Users/A72281951/telly/telly-backend/ingestion/agent/store/db.py", line 77, in ingest_data
documents.extend(self.chunker.split_documents(docs))
File "/Users/A72281951/telly/venv/ingestion/lib/python3.10/site-packages/langchain_experimental/text_splitter.py", line 258, in split_documents
return self.create_documents(texts, metadatas=metadatas)
File "/Users/A72281951/telly/venv/ingestion/lib/python3.10/site-packages/langchain_experimental/text_splitter.py", line 243, in create_documents
for chunk in self.split_text(text):
File "/Users/A72281951/telly/venv/ingestion/lib/python3.10/site-packages/langchain_experimental/text_splitter.py", line 201, in split_text
distances, sentences = self._calculate_sentence_distances(single_sentences_list)
File "/Users/A72281951/telly/venv/ingestion/lib/python3.10/site-packages/langchain_experimental/text_splitter.py", line 186, in _calculate_sentence_distances
sentence["combined_sentence_embedding"] = embeddings[i]
IndexError: list index out of range
```
### Description
* I am trying to chunk a list of documents and it fails with this
* I am using SemanticChunker from langchain-experimental~=0.0.61
* breakpoint_threshold = percentile and breakpoint_threshold amount = 95.0
### System Info
langchain==0.2.5
langchain-community==0.2.5
langchain-core==0.2.9
langchain-experimental==0.0.61
langchain-google-vertexai==1.0.5
langchain-postgres==0.0.8
langchain-text-splitters==0.2.1
Mac M3
Python 3.10.14 | SemanticChunker: list index out of range | https://api.github.com/repos/langchain-ai/langchain/issues/23250/comments | 7 | 2024-06-21T08:04:16Z | 2024-07-05T08:55:34Z | https://github.com/langchain-ai/langchain/issues/23250 | 2,365,969,512 | 23,250 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_groq import ChatGroq  # import added so the snippet is self-contained

llama3_groq_model = ChatGroq(
    temperature=0, model="llama3-70b-8192", groq_api_key="gsk_"  # key truncated; model taken from the response metadata below
)


def run_tools(query):
    # `serp_search` is a custom search tool defined elsewhere
    serp_search_tool = serp_search
    tools = [serp_search_tool]
    tools_by_name = {tool.name: tool for tool in tools}
    tool_calls = []
    while True:  # single-shot loop, kept as in my original code
        model = llama3_groq_model
        llm_with_tool = model.bind_tools(tools)
        res = llm_with_tool.invoke(query)
        tool_calls = res.tool_calls
        break
    if tool_calls:
        name = tool_calls[-1]['name']
        args = tool_calls[-1]['args']
        print(f'Running Tool {name}...')
        rs = tools_by_name[name].invoke(args)
    else:
        rs = res.content
        name = ''
        args = {}
    return {'result': rs, 'last_tool_calls': tool_calls}
```
### Error Message and Stack Trace (if applicable)
Expected response:
content='' additional_kwargs={'tool_calls': [{'id': 'call_7d3a', 'function': {'arguments': '{"keyword":"đài quảng bình"}', 'name': 'serp_search'}, 'type': 'function'}]} response_metadata={'token_usage': {'completion_time': 0.128152054, 'completion_tokens': 47, 'prompt_time': 0.197270744, 'prompt_tokens': 932, 'queue_time': None, 'total_time': 0.32542279799999996, 'total_tokens': 979}, 'model_name': 'llama3-70b-8192', 'system_fingerprint': 'fp_c1a4bcec29', 'finish_reason': 'tool_calls', 'logprobs': None} id='run-faa4fae3-93ab-4a13-8e5b-9e2269c1594f-0' tool_calls=[{'name': 'serp_search', 'args': {'keyword': 'đài quảng bình'}, 'id': 'call_7d3a'}]
Unexpected response:
content='assistant<|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|>' response_metadata={'token_usage': {'completion_time': 11.079835881, 'completion_tokens': 4000, 'prompt_time': 0.165631979, 'prompt_tokens': 935, 'queue_time': None, 'total_time': 11.24546786, 'total_tokens': 4935}, 'model_name': 'llama3-70b-8192', 'system_fingerprint': 'fp_2f30b0b571', 'finish_reason': 'length', 'logprobs': None} id='run-a89e17f6-bea2-4db6-8969-94820098f2dc-0'
### Description
When the model calls a tool, it sometimes returns the response as expected, but sometimes it responds like the error above.
The `<|start_header_id|>` token is duplicated many times and generation takes too long to finish. I have to wait for the response, check it, and rerun the process to get the right response; a guard-and-retry sketch of that manual check is below.
Not only that, the success rate of a normal chain involving a prompt and parser is low too. The good thing is that LangGraph does the rerun, so I don't have to worry much about it, but it still takes a lot of time.
I only encountered this problem since yesterday. Before that, it worked flawlessly.
I updated langchain, langgraph, langsmith and langchain_groq to the latest versions yesterday; I think that caused the problem.
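A hedged sketch of the manual "check and rerun" step I currently do by hand (my own workaround idea, not a fix — the guard simply re-invokes when the runaway header token shows up):
```python
def invoke_with_guard(llm_with_tool, query, max_attempts=3):
    """Re-invoke when the model emits the runaway header tokens."""
    res = None
    for _ in range(max_attempts):
        res = llm_with_tool.invoke(query)
        if "<|start_header_id|>" not in (res.content or ""):
            break
    return res
```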
### System Info
aiohttp==3.9.5
aiosignal==1.3.1
alembic==1.13.1
annotated-types==0.6.0
anthropic==0.28.1
anyio==4.3.0
appdirs==1.4.4
asgiref==3.8.1
asttokens==2.4.1
async-timeout==4.0.3
attrs==23.2.0
Babel==2.15.0
backoff==2.2.1
bcrypt==4.1.3
beautifulsoup4==4.12.3
blinker==1.8.2
boto3==1.34.127
botocore==1.34.127
Brotli==1.1.0
bs4==0.0.2
build==1.2.1
cachetools==5.3.3
catalogue==2.0.10
certifi==2024.2.2
cffi==1.16.0
charset-normalizer==3.3.2
chroma-hnswlib==0.7.3
chromadb==0.4.24
click==8.1.7
coloredlogs==15.0.1
comm==0.2.2
courlan==1.1.0
crewai==0.28.8
crewai-tools==0.2.3
cryptography==42.0.6
dataclasses-json==0.6.5
dateparser==1.2.0
debugpy==1.8.1
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
deprecation==2.1.0
dirtyjson==1.0.8
distro==1.9.0
docstring-parser==0.15
embedchain==0.1.102
exceptiongroup==1.2.1
executing==2.0.1
faiss-cpu==1.8.0
faiss-gpu==1.7.2
fast-pytorch-kmeans==0.2.0.1
fastapi==0.110.3
filelock==3.14.0
flatbuffers==24.3.25
free-proxy==1.1.1
frozenlist==1.4.1
fsspec==2024.3.1
git-python==1.0.3
gitdb==4.0.11
GitPython==3.1.43
google==3.0.0
google-ai-generativelanguage==0.6.4
google-api-core==2.19.0
google-api-python-client==2.133.0
google-auth==2.29.0
google-auth-httplib2==0.2.0
google-cloud-aiplatform==1.50.0
google-cloud-bigquery==3.21.0
google-cloud-core==2.4.1
google-cloud-resource-manager==1.12.3
google-cloud-storage==2.16.0
google-crc32c==1.5.0
google-generativeai==0.5.4
google-resumable-media==2.7.0
googleapis-common-protos==1.63.0
gptcache==0.1.43
graphviz==0.20.3
greenlet==3.0.3
groq==0.5.0
grpc-google-iam-v1==0.13.0
grpcio==1.63.0
grpcio-status==1.62.2
h11==0.14.0
html2text==2024.2.26
htmldate==1.8.1
httpcore==1.0.5
httplib2==0.22.0
httptools==0.6.1
httpx==0.27.0
huggingface-hub==0.23.0
humanfriendly==10.0
idna==3.7
importlib-metadata==7.0.0
importlib_resources==6.4.0
iniconfig==2.0.0
instructor==0.5.2
ipykernel==6.29.4
ipython==8.24.0
itsdangerous==2.2.0
jedi==0.19.1
Jinja2==3.1.3
jiter==0.4.2
jmespath==1.0.1
joblib==1.4.2
jsonpatch==1.33
jsonpointer==2.4
jupyter_client==8.6.1
jupyter_core==5.7.2
jusText==3.0.0
kubernetes==29.0.0
lancedb==0.5.7
langchain==0.2.5
langchain-anthropic==0.1.11
langchain-aws==0.1.3
langchain-chroma==0.1.0
langchain-community==0.2.5
langchain-core==0.2.8
langchain-experimental==0.0.60
langchain-google-genai==1.0.3
langchain-groq==0.1.5
langchain-openai==0.1.6
langchain-text-splitters==0.2.1
langchainhub==0.1.18
langgraph==0.0.57
langsmith==0.1.80
lark==1.1.9
llama-index==0.10.36
llama-index-agent-openai==0.2.4
llama-index-cli==0.1.12
llama-index-embeddings-openai==0.1.9
llama-index-indices-managed-llama-cloud==0.1.6
llama-index-llms-openai==0.1.18
llama-index-multi-modal-llms-openai==0.1.5
llama-index-program-openai==0.1.6
llama-index-question-gen-openai==0.1.3
llama-index-readers-file==0.1.22
llama-index-readers-llama-parse==0.1.4
llama-parse==0.4.2
lxml==5.1.1
Mako==1.3.3
markdown-it-py==3.0.0
MarkupSafe==2.1.5
marshmallow==3.21.2
matplotlib-inline==0.1.7
mdurl==0.1.2
minify_html==0.15.0
mmh3==4.1.0
monotonic==1.6
mpmath==1.3.0
multidict==6.0.5
mutagen==1.47.0
mypy-extensions==1.0.0
nest-asyncio==1.6.0
networkx==3.3
nodeenv==1.8.0
numpy==1.26.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.20.5
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.1.105
oauthlib==3.2.2
onnxruntime==1.17.3
openai==1.25.1
opentelemetry-api==1.24.0
opentelemetry-exporter-otlp-proto-common==1.24.0
opentelemetry-exporter-otlp-proto-grpc==1.24.0
opentelemetry-exporter-otlp-proto-http==1.24.0
opentelemetry-instrumentation==0.45b0
opentelemetry-instrumentation-asgi==0.45b0
opentelemetry-instrumentation-fastapi==0.45b0
opentelemetry-proto==1.24.0
opentelemetry-sdk==1.24.0
opentelemetry-semantic-conventions==0.45b0
opentelemetry-util-http==0.45b0
orjson==3.10.2
outcome==1.3.0.post0
overrides==7.7.0
packaging==23.2
pandas==2.2.2
parso==0.8.4
pexpect==4.9.0
pillow==10.3.0
platformdirs==4.2.1
playwright==1.43.0
pluggy==1.5.0
posthog==3.5.0
prompt-toolkit==3.0.43
proto-plus==1.23.0
protobuf==4.25.3
psutil==5.9.8
ptyprocess==0.7.0
pulsar-client==3.5.0
pure-eval==0.2.2
py==1.11.0
pyarrow==16.0.0
pyarrow-hotfix==0.6
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycparser==2.22
pycryptodomex==3.20.0
pydantic==2.7.1
pydantic_core==2.18.2
pyee==11.1.0
PyGithub==1.59.1
Pygments==2.18.0
PyJWT==2.8.0
pylance==0.9.18
PyNaCl==1.5.0
pyparsing==3.1.2
pypdf==4.2.0
PyPika==0.48.9
pyproject_hooks==1.1.0
pyright==1.1.361
pysbd==0.3.4
PySocks==1.7.1
pytest==8.2.0
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pytube==15.0.0
pytz==2024.1
PyYAML==6.0.1
pyzmq==26.0.3
random-user-agent==1.0.1
rank-bm25==0.2.2
ratelimiter==1.2.0.post0
redis==5.0.4
regex==2023.12.25
requests==2.31.0
requests-file==2.0.0
requests-oauthlib==2.0.0
retry==0.9.2
rich==13.7.1
rsa==4.9
s3transfer==0.10.1
safetensors==0.4.3
schema==0.7.7
scikit-learn==1.4.2
scipy==1.13.0
selenium==4.20.0
semver==3.0.2
sentence-transformers==2.7.0
shapely==2.0.4
six==1.16.0
smmap==5.0.1
sniffio==1.3.1
sortedcontainers==2.4.0
soupsieve==2.5
SQLAlchemy==2.0.29
stack-data==0.6.3
starlette==0.37.2
striprtf==0.0.26
sympy==1.12
tavily-python==0.3.3
tenacity==8.2.3
threadpoolctl==3.5.0
tiktoken==0.6.0
tld==0.13
tldextract==5.1.2
tokenizers==0.19.1
tomli==2.0.1
torch==2.3.0
tornado==6.4
tqdm==4.66.4
trafilatura==1.9.0
traitlets==5.14.3
transformers==4.40.1
trio==0.25.0
trio-websocket==0.11.1
triton==2.3.0
typer==0.9.4
types-requests==2.32.0.20240602
typing-inspect==0.9.0
typing_extensions==4.11.0
tzdata==2024.1
tzlocal==5.2
ujson==5.9.0
undetected-playwright==0.3.0
uritemplate==4.1.1
urllib3==2.2.1
uuid6==2024.1.12
uvicorn==0.29.0
uvloop==0.19.0
watchfiles==0.21.0
wcwidth==0.2.13
websocket-client==1.8.0
websockets==12.0
wrapt==1.16.0
wsproto==1.2.0
yarl==1.9.4
youtube-transcript-api==0.6.2
yt-dlp==2023.12.30
zipp==3.18.1
platform: Ubuntu 22.04 LTS
python version: 3.10.12 | Running Groq using llama3 model keep getting un-formatted output | https://api.github.com/repos/langchain-ai/langchain/issues/23248/comments | 2 | 2024-06-21T07:28:20Z | 2024-06-27T08:20:11Z | https://github.com/langchain-ai/langchain/issues/23248 | 2,365,908,695 | 23,248 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from dotenv import load_dotenv
from langchain_core.globals import set_debug
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
set_debug(True)
load_dotenv()
model = ChatOpenAI(
    api_key=os.getenv('OPENAI_API_KEY'),
    base_url=os.getenv('OPENAI_BASE_URL'),
    model="gpt-3.5-turbo"
)

messages = [
    SystemMessage(content="Translate the following from English into Italian"),
    HumanMessage(content="hi!"),
]

if __name__ == "__main__":
    print(model.invoke(messages))
```
### Error Message and Stack Trace (if applicable)
```
[llm/start] [llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: Translate the following from English into Italian\nHuman: hi!"
]
}
```
### Description
Shouldn't it be something like this, preserving the role of each message?
```
[llm/start] [llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    {"System": "Translate the following from English into Italian"},
    {"Human": "hi!"}
  ]
}
```
### System Info
I have tried it in two conda envs:
---
langchain 0.2.5
windows 11
python Python 3.11.9
---
langchain 0.1.10
windows 11
python Python 3.11.7 | Misleading logs | https://api.github.com/repos/langchain-ai/langchain/issues/23239/comments | 2 | 2024-06-21T00:13:49Z | 2024-06-21T01:57:47Z | https://github.com/langchain-ai/langchain/issues/23239 | 2,365,465,210 | 23,239 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
https://github.com/langchain-ai/langchain/blob/bf7763d9b0210d182409d35f538ddb97c9d2c0ad/libs/core/langchain_core/tools.py#L291-L310
### Error Message and Stack Trace (if applicable)
_No response_
### Description
My function looks like `f(a: str, b: list[str])`.
When the LLM returns a string that looks like `"'A', ['B', 'C']"`,
this code ends up calling `input_args.validate({'a': "'A', ['B', 'C']"})`, which can never pass.
But simply calling `input_args.validate(tool_input)` works fine. A small illustration follows.
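A small illustration of the failure mode, using a hypothetical schema matching the signature above (`FArgs` is my stand-in for `input_args`):
```python
from typing import List

from langchain_core.pydantic_v1 import BaseModel


class FArgs(BaseModel):  # hypothetical input_args schema for f(a: str, b: list[str])
    a: str
    b: List[str]


tool_input = "'A', ['B', 'C']"
# What _parse_input effectively does: wrap the whole string under the first key.
FArgs.validate({"a": tool_input})  # raises ValidationError: field "b" is required
```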
### System Info
langchain 0.2.5
langchain_core 0.2.7
windows
python 3.12 | Wrong parse _parse_input when tool_input is str. | https://api.github.com/repos/langchain-ai/langchain/issues/23230/comments | 0 | 2024-06-20T17:55:55Z | 2024-06-20T17:58:27Z | https://github.com/langchain-ai/langchain/issues/23230 | 2,364,969,551 | 23,230 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
embedding = ZhipuAIEmbeddings(
    api_key="xxx"
)
text = "This is a test query."
query_result = embedding.embed_query(text)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/Users/edy/PycharmProjects/Jupyter-Notebook/langchain_V_0_2_0/vecotr_stores_and_retrievers2.py", line 35, in <module>
query_result = embedding.embed_query(text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/edy/PycharmProjects/Jupyter-Notebook/venv/lib/python3.11/site-packages/langchain_community/embeddings/zhipuai.py", line 60, in embed_query
resp = self.embed_documents([text])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/edy/PycharmProjects/Jupyter-Notebook/venv/lib/python3.11/site-packages/langchain_community/embeddings/zhipuai.py", line 74, in embed_documents
resp = self._client.embeddings.create(model=self.model, input=texts)
^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'FieldInfo' object has no attribute 'embeddings'
```
### Description
The `_client` field in the ZhipuAIEmbeddings class cannot be initialized correctly: as the traceback shows, it remains the raw `FieldInfo` default at runtime.
Renaming `_client` to `client` and updating the `embed_documents` function fixes it.
Example:
```python
client: Any = Field(default=None, exclude=True)

# inside the root validator:
values["client"] = ZhipuAI(api_key=values["api_key"])

def embed_documents(self, texts: List[str]) -> List[List[float]]:
    resp = self.client.embeddings.create(model=self.model, input=texts)
    embeddings = [r.embedding for r in resp.data]
    return embeddings
```
### System Info
langchain==0.2.5
langchain-chroma==0.1.1
langchain-community==0.2.5
langchain-core==0.2.9
langchain-experimental==0.0.61
langchain-text-splitters==0.2.1
langchain-weaviate==0.0.2
platform mac
python version 3.11 | The ZhipuAIEmbeddings class is not working. | https://api.github.com/repos/langchain-ai/langchain/issues/23215/comments | 0 | 2024-06-20T09:35:13Z | 2024-06-20T13:04:52Z | https://github.com/langchain-ai/langchain/issues/23215 | 2,363,985,738 | 23,215 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
test | test | https://api.github.com/repos/langchain-ai/langchain/issues/23195/comments | 0 | 2024-06-19T20:17:33Z | 2024-06-21T19:27:23Z | https://github.com/langchain-ai/langchain/issues/23195 | 2,363,068,260 | 23,195 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
We want to add docstring linting to langchain-core, langchain, langchain-text-splitters, and partner packages. This requires adding the following to each package's pyproject.toml:
```toml
[tool.ruff.lint]
select = [
...
"D", # pydocstyle
]
[tool.ruff.lint.pydocstyle]
convention = "google"
[tool.ruff.per-file-ignores]
"tests/**" = ["D"] # ignore docstring checks for tests
```
This will likely cause a number of new linting errors, which will then need to be fixed. There should be a separate PR for each package. Here's a reference for langchain-openai (linting errors have not yet been fixed): https://github.com/langchain-ai/langchain/pull/23187
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.chat_history import BaseChatMessageHistory
```
### Error Message and Stack Trace (if applicable)
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/christophebornet/Library/Caches/pypoetry/virtualenvs/ragstack-ai-langchain-F6idWdWf-py3.9/lib/python3.9/site-packages/langchain_core/chat_history.py", line 29, in <module>
from langchain_core.runnables import run_in_executor
File "/Users/christophebornet/Library/Caches/pypoetry/virtualenvs/ragstack-ai-langchain-F6idWdWf-py3.9/lib/python3.9/site-packages/langchain_core/runnables/__init__.py", line 39, in <module>
from langchain_core.runnables.history import RunnableWithMessageHistory
File "/Users/christophebornet/Library/Caches/pypoetry/virtualenvs/ragstack-ai-langchain-F6idWdWf-py3.9/lib/python3.9/site-packages/langchain_core/runnables/history.py", line 16, in <module>
from langchain_core.chat_history import BaseChatMessageHistory
ImportError: cannot import name 'BaseChatMessageHistory' from partially initialized module 'langchain_core.chat_history' (most likely due to a circular import) (/Users/christophebornet/Library/Caches/pypoetry/virtualenvs/ragstack-ai-langchain-F6idWdWf-py3.9/lib/python3.9/site-packages/langchain_core/chat_history.py)
```
### Description
On latest master, importing BaseChatMessageHistory fails because of a circular dependency. (it works with the very recent langchain-core 0.2.9)
I suspect this is caused by : https://github.com/langchain-ai/langchain/pull/23136
### System Info
langchain==0.2.5
langchain-astradb==0.3.3
langchain-community==0.2.5
langchain-core @ git+https://github.com/langchain-ai/langchain.git@4fe8403bfbb81e7780179a3b164aa22c694e2ece#subdirectory=libs/core
langchain-openai==0.1.8
langchain-text-splitters==0.2.1 | Crash due to circular dependency on BaseChatMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/23175/comments | 2 | 2024-06-19T14:21:37Z | 2024-06-19T17:43:37Z | https://github.com/langchain-ai/langchain/issues/23175 | 2,362,515,421 | 23,175 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain_community.chat_models.moonshot import MoonshotChat
os.environ["MOONSHOT_API_KEY"] = "{my_api_key}
chat = MoonshotChat()
```
### Error Message and Stack Trace (if applicable)
File "/foo/bar/venv/lib/python3.12/site-packages/langchain_community/chat_models/moonshot.py", line 45, in validate_environment
"api_key": values["moonshot_api_key"].get_secret_value(),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'get_secret_value'
### Description
![image](https://github.com/langchain-ai/langchain/assets/16267732/45ba709c-e59f-4c50-b906-0acd56fe9983)
![image](https://github.com/langchain-ai/langchain/assets/16267732/7cb7d84a-2e7d-4aa9-aa55-c79334bfe616)
`get_from_dict_or_env` returns a `SecretStr` when the api_key is set through the constructor (`MoonshotChat(api_key=...)`), but when the api_key is set through the OS environment, as described in the [docs](https://python.langchain.com/v0.2/docs/integrations/chat/moonshot/), it returns a plain `str`.
So the exception is raised: `AttributeError: 'str' object has no attribute 'get_secret_value'`.
Solution: convert the result of `get_from_dict_or_env` to `SecretStr` when it is a plain `str`; a sketch follows.
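A possible fix sketch for `validate_environment` (hedged: I assume `convert_to_secret_str` from `langchain_core.utils` is available in this langchain-core version):
```python
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env

values["moonshot_api_key"] = convert_to_secret_str(
    get_from_dict_or_env(values, "moonshot_api_key", "MOONSHOT_API_KEY")
)
```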
### System Info
AttributeError: 'str' object has no attribute 'get_secret_value' | MoonshotChat fails when setting the moonshot_api_key through the OS environment. | https://api.github.com/repos/langchain-ai/langchain/issues/23174/comments | 0 | 2024-06-19T14:13:12Z | 2024-06-19T16:28:25Z | https://github.com/langchain-ai/langchain/issues/23174 | 2,362,496,250 | 23,174 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code, using AzureSearch as the vector store for Azure Cognitive Search, always gives me an "Invalid json output" error:
```python
import os
from langchain_openai import AzureChatOpenAI
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_openai import AzureOpenAIEmbeddings
from typing import List
from langchain.chains import LLMChain
from langchain.output_parsers import PydanticOutputParser
from langchain_core.prompts import PromptTemplate
from pydantic import BaseModel, Field
from langchain_community.vectorstores.azuresearch import AzureSearch

api_key = os.getenv("AZURE_OPENAI_API_KEY")
api_version = os.getenv("AZURE_API_VERSION")
azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
vector_store_password = os.getenv("AZURE_SEARCH_ADMIN_KEY")
vector_store_address = os.getenv("AZURE_SEARCH_ENDPOINT")
service_name = os.getenv("AZURE_SERVICE_NAME")

azure_deployment = "text-embedding-ada-002"
azure_openai_api_version = api_version
azure_endpoint = azure_endpoint
azure_openai_api_key = api_key

embeddings: AzureOpenAIEmbeddings = AzureOpenAIEmbeddings(
    azure_deployment=azure_deployment,
    openai_api_version=azure_openai_api_version,
    azure_endpoint=azure_endpoint,
    api_key=azure_openai_api_key,
)

index_name: str = "test-index"
vector_store: AzureSearch = AzureSearch(
    azure_search_endpoint=vector_store_address,
    azure_search_key=vector_store_password,
    index_name=index_name,
    embedding_function=embeddings.embed_query,
)

model = os.getenv('modelName')


# Output parser will split the LLM result into a list of queries
class LineList(BaseModel):
    # "lines" is the key (attribute name) of the parsed output
    lines: List[str] = Field(description="Lines of text")


class LineListOutputParser(PydanticOutputParser):
    def __init__(self) -> None:
        super().__init__(pydantic_object=LineList)

    def parse(self, text: str) -> LineList:
        lines = text.strip().split("\n")
        return LineList(lines=lines)


output_parser = LineListOutputParser()

QUERY_PROMPT = PromptTemplate(
    input_variables=["question"],
    template="""You are an AI language model assistant. Your task is to generate five
    different versions of the given user question to retrieve relevant documents from a vector
    database. By generating multiple perspectives on the user question, your goal is to help
    the user overcome some of the limitations of the distance-based similarity search.
    Provide these alternative questions separated by newlines.
    Original question: {question}""",
)

llm = AzureChatOpenAI(temperature=0, api_key=api_key, api_version=api_version, model=model)
llm_chain = LLMChain(llm=llm, prompt=QUERY_PROMPT, output_parser=output_parser)

retriever = MultiQueryRetriever(
    retriever=vector_store.as_retriever(), llm_chain=llm_chain, parser_key="lines"
)

print(type(retriever.llm_chain.output_parser))
print(retriever.llm_chain.output_parser)

unique_docs = retriever.invoke(query="What is Llama-2?")
print(unique_docs)
```
### Error Message and Stack Trace (if applicable)
Exception has occurred: OutputParserException
langchain_core.exceptions.OutputParserException: Invalid json output: Can you provide information on Llama-2?
Could you explain the concept of Llama-2?
What does Llama-2 refer to?
File Python\Python312\Lib\json\decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
^^^^^^^^^^^^^^^^^^^^^^
StopIteration: 0
During handling of the above exception, another exception occurred:
File "Python\Python312\Lib\site-packages\langchain_core\output_parsers\json.py", line 66, in parse_result
return parse_json_markdown(text)
^^^^^^^^^^^^^^^^^^^^^^^^^
File \Python\Python312\Lib\site-packages\langchain_core\utils\json.py", line 147, in parse_json_markdown
return _parse_json(json_str, parser=parser)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File \Python\Python312\Lib\site-packages\langchain_core\utils\json.py", line 160, in _parse_json
return parser(json_str)
^^^^^^^^^^^^^^^^
File Python\Python312\Lib\site-packages\langchain_core\utils\json.py", line 120, in parse_partial_json
return json.loads(s, strict=strict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File \Python\Python312\Lib\json\__init__.py", line 359, in loads
return cls(**kw).decode(s)
^^^^^^^^^^^^^^^^^^^
File \Python\Python312\Lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File \Python\Python312\Lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
The above exception was the direct cause of the following exception:
File Python\Python312\Lib\site-packages\langchain_core\output_parsers\json.py", line 69, in parse_result
raise OutputParserException(msg, llm_output=text) from e
File Python\Python312\Lib\site-packages\langchain_core\output_parsers\pydantic.py", line 60, in parse_result
json_object = super().parse_result(result)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File Python\Python312\Lib\site-packages\langchain\chains\llm.py", line 284, in create_outputs
self.output_key: self.output_parser.parse_result(generation),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File Python\Python312\Lib\site-packages\langchain\chains\llm.py", line 127, in _call
return self.create_outputs(response)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File Python\Python312\Lib\site-packages\langchain\chains\base.py", line 166, in invoke
raise e
File Python\Python312\Lib\site-packages\langchain\chains\base.py", line 166, in invoke
raise e
File Python\Python312\Lib\site-packages\langchain\retrievers\multi_query.py", line 182, in generate_queries
response = self.llm_chain.invoke(
^^^^^^^^^^^^^^^^^^^^^^
File Python\Python312\Lib\site-packages\langchain\retrievers\multi_query.py", line 165, in _get_relevant_documents
queries = self.generate_queries(query, run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File Python\Python312\Lib\site-packages\langchain_core\retrievers.py", line 221, in invoke
raise e
File Python\Python312\Lib\site-packages\langchain_core\retrievers.py", line 221, in invoke
raise e
File Python\Python312\Lib\site-packages\langchain_core\retrievers.py", line 355, in get_relevant_documents
return self.invoke(query, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File Python\Python312\Lib\site-packages\langchain_core\_api\deprecation.py", line 168, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
unique_docs = retriever.get_relevant_documents(query="What is Llama-2")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File Python\Python312\Lib\site-packages\langchain_core\output_parsers\json.py", line 69, in parse_result
raise OutputParserException(msg, llm_output=text) from e
langchain_core.exceptions.OutputParserException: Invalid json output: Can you provide information on Llama-2?
Could you explain the concept of Llama-2?
What does Llama-2 refer to?
```
### Description
I'm trying to use MultiQueryRetriever with the vector store of Azure Cognitive Search. I'm following the example explained in the LangChain documentation, https://python.langchain.com/v0.1/docs/modules/data_connection/retrievers/MultiQueryRetriever/, and used the AzureSearch vector store. Is there anything I'm missing when using this vector store with MultiQueryRetriever? I always see the error `langchain_core.exceptions.OutputParserException: Invalid json output`. A workaround sketch I'm considering is below.
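A hedged workaround sketch: the traceback shows `PydanticOutputParser.parse_result()` running, which tries to `json.loads()` the raw completion before my overridden `parse()` is ever reached. Deriving the parser from `BaseOutputParser` instead should avoid that path — this is my reading of the stack trace, not a confirmed fix:
```python
from typing import List

from langchain_core.output_parsers import BaseOutputParser


class LineListOutputParser(BaseOutputParser[List[str]]):
    """Parse the completion into a list of non-empty lines."""

    def parse(self, text: str) -> List[str]:
        lines = text.strip().split("\n")
        return list(filter(None, lines))
```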
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.12.2 (tags/v3.12.2:6abddd9, Feb 6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.5
> langchain_community: 0.0.13
> langsmith: 0.1.80
> langchain_chroma: 0.1.1
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.1
> langgraph: 0.0.69
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | MultiQuery Retriever Using AzureSearch vector store always returns a invalid json format error | https://api.github.com/repos/langchain-ai/langchain/issues/23171/comments | 2 | 2024-06-19T14:02:04Z | 2024-06-23T06:50:53Z | https://github.com/langchain-ai/langchain/issues/23171 | 2,362,472,977 | 23,171 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Using the below code for semantic cache
```
from langchain.globals import set_llm_cache
from langchain_openai import OpenAI
from langchain.cache import RedisSemanticCache
from langchain_huggingface import HuggingFaceEmbeddings
import time, os
from langchain_openai import AzureChatOpenAI
llm = AzureChatOpenAI(<my credentials>)
huggingface_embedding = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
set_llm_cache(
    RedisSemanticCache(redis_url="redis://127.0.0.1:6379", embedding=huggingface_embedding)
)
question = "What is capital of Japan?"
res = llm.invoke(question)
```
I installed both the Redis DB and the Redis Python client:
```redis-5.0.6```
```redis-cli 7.2.5```
Still, I get the following error:
```
[BUG]ValueError: Redis failed to connect: Redis cannot be used as a vector database without RediSearch >=2.4Please head to https://redis.io/docs/stack/search/quick_start/to know more about installing the RediSearch module within Redis Stack.
```
But the strange thing is, there is no **2.4** version of the **RediSearch** Python client available on PyPI. (A small diagnostic sketch is below.)
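A hedged diagnostic sketch — my assumption is that the "RediSearch >= 2.4" in the error refers to a module loaded on the Redis *server* (shipped with Redis Stack), which would explain why no matching package exists on PyPI; listing the server's modules should confirm:
```python
import redis

r = redis.Redis.from_url("redis://127.0.0.1:6379")
print(r.module_list())  # the semantic cache expects a "search" module >= 2.4 here
```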
### Error Message and Stack Trace (if applicable)
_No response_
### Description
See the Example Code section above — the code, installed versions, and the error are exactly as shown there.
### System Info
```
python 3.9
ubuntu machine
langchain==0.1.12
langchain-community==0.0.36
langchain-core==0.2.9
``` | [BUG]ValueError: Redis failed to connect: Redis cannot be used as a vector database without RediSearch >=2.4Please head to https://redis.io/docs/stack/search/quick_start/to know more about installing the RediSearch module within Redis Stack | https://api.github.com/repos/langchain-ai/langchain/issues/23168/comments | 1 | 2024-06-19T11:19:13Z | 2024-06-19T11:47:18Z | https://github.com/langchain-ai/langchain/issues/23168 | 2,362,099,035 | 23,168 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain_core.runnables import ConfigurableField
class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")


model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=2, api_key="API-KEY").configurable_fields(
    temperature=ConfigurableField(id="temperature", name="Temperature", description="The temperature of the model")
)
structured_llm = model.with_structured_output(Joke)
## This line does not raise an exception meaning the temperature field is not passed to the llm
structured_llm.with_config(configurable={"temperature" : 20}).invoke("Tell me a joke about cats")
## This raises exception as expected, as temperature is above 2
model.with_config(configurable={"temperature" : 20}).invoke("Tell me a joke about cats")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Using `.with_config(configurable={...})` together with `.with_structured_output()` in ChatOpenAI breaks propagation of the configurable fields.
This issue might exist in other provider implementations too.
### System Info
langchain==0.2.3
langchain-anthropic==0.1.15
langchain-community==0.2.4
langchain-core==0.2.5
langchain-google-vertexai==1.0.5
langchain-openai==0.1.8
langchain-text-splitters==0.2.1 | ChatOpenAI with_structured output breaks Runnable ConfigurableFields | https://api.github.com/repos/langchain-ai/langchain/issues/23167/comments | 3 | 2024-06-19T11:15:24Z | 2024-06-21T10:49:35Z | https://github.com/langchain-ai/langchain/issues/23167 | 2,362,092,354 | 23,167 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following JSONLoader code is taken from the LangChain documentation:
```javascript
import { JSONLoader } from "langchain/document_loaders/fs/json";

const loader = new JSONLoader("src/document_loaders/example_data/example.json");

const docs = await loader.load();
```
### Error Message and Stack Trace (if applicable)
I am trying to have a conversation with JSON files: I load the JSON file's content into a `docs` variable, then perform the required steps to ask questions about it using the OpenAI API and LangChain. It is unable to understand the context and also fails to identify the properties and their values.
Here is my JSON file:
```json
{
  "name": "OpenAI",
  "description": "A research and deployment company focused on AI.",
  "endpoints": [
    {
      "path": "/completions",
      "method": "POST",
      "required_parameters": ["model", "prompt"]
    },
    {
      "path": "/edits",
      "method": "POST",
      "required_parameters": ["model", "input", "instruction"]
    }
  ]
}
```
### Description
I asked it the question:
"What are the method and required_parameters in the /completions endpoint?"
Output:
![image](https://github.com/langchain-ai/langchain/assets/91243935/6f4bb605-987b-4df3-9fe6-c190a80218c2)
### System Info
Node js
Langchain
Windows | JSON Loader is not working as expected. | https://api.github.com/repos/langchain-ai/langchain/issues/23166/comments | 0 | 2024-06-19T10:37:20Z | 2024-06-19T10:39:51Z | https://github.com/langchain-ai/langchain/issues/23166 | 2,362,004,538 | 23,166 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
# Check retrieval
query = "What are the EV / NTM and NTM rev growth for MongoDB, Cloudflare, and Datadog?"
docs = retriever_multi_vector_img.invoke(query, limit=6)
# We get 4 docs
len(docs)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
`retriever_multi_vector_img.invoke(query)` no longer offers a way to limit or increase the number of docs returned and subsequently passed to the LLM. This defaults to 4, and no information on the issue can be found.
You can see the incorrect use in this cookbook: https://github.com/langchain-ai/langchain/blob/master/cookbook/Multi_modal_RAG.ipynb
```
# Check retrieval
query = "What are the EV / NTM and NTM rev growth for MongoDB, Cloudflare, and Datadog?"
docs = retriever_multi_vector_img.invoke(query, limit=6)
# We get 4 docs
len(docs)
```
Where 6 was the limit and 4 docs were returned. How can we enforce that more docs are returned? (A hedged workaround sketch follows.)
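A hedged workaround sketch — my assumption is that `MultiVectorRetriever` exposes a `search_kwargs` dict that is forwarded to the underlying vectorstore search, while the `limit=6` kwarg passed to `invoke()` is silently ignored; note that raising `k` for the sub-document search can still yield fewer parent docs after deduplication:
```python
retriever_multi_vector_img.search_kwargs = {"k": 6}  # k for the sub-document search
docs = retriever_multi_vector_img.invoke(query)
```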
### System Info
Using langchain_core 0.2.8 | Can't Specify Top-K retrieved Documents in Multimodal Retrievers using Invoke() | https://api.github.com/repos/langchain-ai/langchain/issues/23158/comments | 1 | 2024-06-19T03:37:39Z | 2024-07-01T07:01:34Z | https://github.com/langchain-ai/langchain/issues/23158 | 2,361,196,976 | 23,158 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from typing import Optional
import json


# Schema for structured response
class AuditorOpinion(BaseModel):
    opinion: Optional[str] = Field(
        None,
        description="The auditor's opinion on the financial statements. Values are: 'Unqualified Opinion', "
        "'Qualified Opinion', 'Adverse Opinion', 'Disclaimer of Opinion'.",
    )


def load_markdown_file(file_path):
    with open(file_path, 'r') as file:
        return file.read()


path = "data/auditor_opinion_1.md"
markdown_text = load_markdown_file(path)

# Prompt template
prompt = PromptTemplate.from_template(
    """
what is the auditor's opinion
Human: {question}
AI: """
)

# Chain
llm = OllamaFunctions(model="llama3", format="json", temperature=0)
structured_llm = llm.with_structured_output(AuditorOpinion)
chain = prompt | structured_llm

alex = chain.invoke(markdown_text)
response_dict = alex.dict()

# Serialize the dictionary to a JSON string with indentation for readability
readable_json = json.dumps(response_dict, indent=2, ensure_ascii=False)

# Print the readable JSON
print(readable_json)
```
### Error Message and Stack Trace (if applicable)
```
langchain_experimental/llms/ollama_functions.py", line 400, in _generate
raise ValueError(
ValueError: 'llama3' did not respond with valid JSON.
```
### Description
Trying to get structured output from markdown text using with_structured_output
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
> Python Version: 3.9.6 (default, Feb 3 2024, 15:58:27)
[Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.79
> langchain_experimental: 0.0.61
> langchain_google_genai: 1.0.5
> langchain_google_vertexai: 1.0.4
> langchain_mistralai: 0.1.8
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.16
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| /langchain_experimental/llms/ollama_functions.py", line 400, in _generate raise ValueError( ValueError: 'llama3' did not respond with valid JSON. | https://api.github.com/repos/langchain-ai/langchain/issues/23156/comments | 2 | 2024-06-19T02:34:35Z | 2024-06-24T06:09:34Z | https://github.com/langchain-ai/langchain/issues/23156 | 2,361,114,911 | 23,156 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/installation/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Was very useful.
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/how_to/installation/> | https://api.github.com/repos/langchain-ai/langchain/issues/23140/comments | 0 | 2024-06-18T22:28:56Z | 2024-06-18T22:31:22Z | https://github.com/langchain-ai/langchain/issues/23140 | 2,360,849,759 | 23,140 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/memory/zep_memory/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Hi, since this is in the v0.2 docs, this example should be written using the current LangChain Expression Language syntax. Or is the OSS version of Zep not compatible with LCEL? It's kind of confusing. Thanks
### Idea or request for content:
_No response_ | Out of date with LangChain Expression Language. DOC: <Issue related to /v0.2/docs/integrations/memory/zep_memory/> | https://api.github.com/repos/langchain-ai/langchain/issues/23129/comments | 0 | 2024-06-18T18:11:25Z | 2024-06-18T18:13:54Z | https://github.com/langchain-ai/langchain/issues/23129 | 2,360,427,696 | 23,129 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I have inserted documents into a Cosmos DB with the NoSQL API; the insertion works well. The documents contain metadata (one of the fields is `claim_id`). I want to run a search over a subset of documents by filtering on `claim_id`.
Here is the code, but it doesn't seem to work: it always returns results without taking the filtering into account, and `k` keeps its default value of 4.
```python
retriever = vector_search.as_retriever(
    search_type='similarity',
    search_kwargs={
        'k': 3,
        'filter': {"claim_id": 1}
    }
)

from langchain.chains import RetrievalQA

qa_stuff = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    verbose=True,
    return_source_documents=True,
)

query = "what is prompt engineering?"
response = qa_stuff.invoke(query)
print(response)
```
### Error Message and Stack Trace (if applicable)
no error, but unexpected behavior
### Description
I want to query only documents that have `claim_id=1` in their metadata.
The returned results show that the filtering does not work; it seems to be ignored.
### System Info
ai21==2.6.0
ai21-tokenizer==0.10.0
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.4.0
asttokens==2.4.1
attrs==23.2.0
azure-core==1.30.2
azure-cosmos==4.7.0
certifi==2024.6.2
charset-normalizer==3.3.2
colorama==0.4.6
comm==0.2.2
dataclasses-json==0.6.7
debugpy==1.8.1
decorator==5.1.1
distro==1.9.0
executing==2.0.1
filelock==3.15.1
frozenlist==1.4.1
fsspec==2024.6.0
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
huggingface-hub==0.23.4
idna==3.7
ipykernel==6.29.4
ipython==8.25.0
jedi==0.19.1
jsonpatch==1.33
jsonpointer==3.0.0
jupyter_client==8.6.2
jupyter_core==5.7.2
langchain==0.2.5
langchain-community==0.2.5
langchain-core==0.2.7
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
langsmith==0.1.77
marshmallow==3.21.3
matplotlib-inline==0.1.7
multidict==6.0.5
mypy-extensions==1.0.0
nest-asyncio==1.6.0
numpy==1.26.4
openai==1.34.0
orjson==3.10.5
packaging==24.1
parso==0.8.4
platformdirs==4.2.2
prompt_toolkit==3.0.47
psutil==5.9.8
pure-eval==0.2.2
pydantic==2.7.4
pydantic_core==2.18.4
Pygments==2.18.0
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pywin32==306
PyYAML==6.0.1
pyzmq==26.0.3
regex==2024.5.15
requests==2.32.3
sentencepiece==0.2.0
six==1.16.0
sniffio==1.3.1
SQLAlchemy==2.0.30
stack-data==0.6.3
tenacity==8.4.1
tiktoken==0.7.0
tokenizers==0.19.1
tornado==6.4.1
tqdm==4.66.4
traitlets==5.14.3
typing-inspect==0.9.0
typing_extensions==4.12.2
urllib3==2.2.2
wcwidth==0.2.13
yarl==1.9.4
| Cannot filter with metadata with azure_cosmos_db_no_sql | https://api.github.com/repos/langchain-ai/langchain/issues/23089/comments | 1 | 2024-06-18T16:01:06Z | 2024-06-20T08:52:36Z | https://github.com/langchain-ai/langchain/issues/23089 | 2,360,210,529 | 23,089 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.vectorstores.weaviate import Weaviate
vectorstore = Weaviate(
    client=client,
    index_name="coll_summary",
    text_key="summary"
)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[19], line 1
----> 1 vectorstore = Weaviate(
      2     client=client,
      3     index_name="coll_summary",
      4     text_key="summary"
      5 )

File ~/weaviate.py:105, in Weaviate.__init__(self, client, index_name, text_key, embedding, attributes, relevance_score_fn, by_text)
    100 raise ImportError(
    101     "Could not import weaviate python package. "
    102     "Please install it with `pip install weaviate-client`."
    103 )
    104 if not isinstance(client, weaviate.Client):
--> 105     raise ValueError(
    106         f"client should be an instance of weaviate.Client, got {type(client)}"
    107     )
    108 self._client = client
    109 self._index_name = index_name

ValueError: client should be an instance of weaviate.Client, got <class 'weaviate.client.WeaviateClient'>
```
### Description
It seems that a Weaviate client is now the class `weaviate.client.WeaviateClient` (the v4 client), not `weaviate.Client`. This means the instantiation of the vector store fails its type check. (A possible workaround sketch is below.)
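A possible workaround sketch (hedged): the separate `langchain-weaviate` package appears to target the v4 client; the constructor arguments below are assumed to mirror the community class's arguments:
```python
from langchain_weaviate.vectorstores import WeaviateVectorStore

vectorstore = WeaviateVectorStore(
    client=client,  # a weaviate.WeaviateClient (v4)
    index_name="coll_summary",
    text_key="summary",
)
```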
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 20.6.0: Thu Jul 6 22:12:47 PDT 2023; root:xnu-7195.141.49.702.12~1/RELEASE_X86_64
> Python Version: 3.11.3 (main, May 24 2024, 22:45:35) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.8
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.79
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```
weaviate client library: `4.6.1` | Typechecking for `weaviate.Client` no longer up to date with class name in `weaviate-client`? | https://api.github.com/repos/langchain-ai/langchain/issues/23088/comments | 1 | 2024-06-18T15:48:25Z | 2024-08-05T02:46:51Z | https://github.com/langchain-ai/langchain/issues/23088 | 2,360,183,842 | 23,088 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/time_weighted_vectorstore/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
It is not noted in the documentation that TimeWeightedVectorStoreRetriever appears to work only with the FAISS vector DB (if that is indeed the case?).
### Idea or request for content:
_No response_ | DOC: TimeWeightedVectorStoreRetriever works with FAISS VectorDB only | https://api.github.com/repos/langchain-ai/langchain/issues/23077/comments | 1 | 2024-06-18T12:25:05Z | 2024-07-09T18:47:21Z | https://github.com/langchain-ai/langchain/issues/23077 | 2,359,738,996 | 23,077 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import pandas as pd
from langchain.agents import create_react_agent, AgentExecutor
from langchain_experimental.tools.python.tool import PythonAstREPLTool
from langchain.prompts import PromptTemplate
from langchain_google_vertexai import VertexAI  # the duplicate langchain.llms import was removed

# --- Create a Sample DataFrame ---
df = pd.DataFrame({"Age": [25, 30, 35, 40], "Value": [10, 20, 30, 40]})

# --- Initialize Vertex AI ---
llm = VertexAI(model_name="gemini-pro", temperature=0)

# --- Tool ---
python_tool = PythonAstREPLTool(locals={"df": df})

# --- Chain-of-Thought Prompt Template (translated from French) ---
prompt_template = """You are an expert Pandas assistant.
You must answer questions using **only** the following format for **each step** of your reasoning:

Thought: [your reasoning here]
Action: [the tool you want to use]
Action Input: [the code for the tool to execute]
Observation: [the result of executing the code]

Tools: {tool_names} {tools}

**These keywords must never be translated or transformed:**
- Action:
- Thought:
- Action Input:
- Observation:

Here are the columns available in the DataFrame: {df.columns}

Question: {input}
Thought: To answer this question, I first need to find the name of the column that contains the ages.
Action: python_repl_ast
Action Input: print(df.columns)
Observation:
Thought: Now that I have the list of columns, I can use the 'Age' column and the Pandas mean() function to compute the average of the ages.
Action: python_repl_ast
Action Input: print(df['Age'].mean())
Observation:

{agent_scratchpad}"""

prompt = PromptTemplate(
    input_variables=["input", "agent_scratchpad", "df.columns"], template=prompt_template
)

# --- Create ReAct agent ---
react_agent = create_react_agent(
    llm=llm, tools=[python_tool], prompt=prompt, stop_sequence=False
)

# --- Agent Executor ---
agent_executor = AgentExecutor(
    agent=react_agent,
    tools=[python_tool],
    verbose=True,
    handle_parsing_errors=True,
    max_iterations=5,
)

# --- Main Execution Loop ---
test_questions = ["Calculate the average of the ages"]

for question in test_questions:
    print(f"Question: {question}")
    try:
        response = agent_executor.invoke(
            {"input": question, "df": df, "df.columns": df.columns}
        )
        print(f"Answer: {response['output']}")
    except Exception as e:
        print(f"An error occurred: {e}")
```
### Error Message and Stack Trace (if applicable)
** is not a valid tool, try one of [python_repl_ast].
### Description
I am encountering a persistent issue where the React agent fails to recognize and utilize the "python_repl_ast" tool correctly, despite it being defined in the list of tools.
Steps to Reproduce:
1. Define a Pandas DataFrame.
2. Initialize VertexAI from langchain_google_vertexai.
3. Create the `python_repl_ast` tool using PythonAstREPLTool, passing the DataFrame.
4. Define a prompt that includes instructions for the agent to use python_repl_ast to perform a calculation on the DataFrame (e.g., calculate the mean of a column).
5. Create the React agent using `create_react_agent`, passing the tool.
6. Run the agent with a question related to the DataFrame.
Expected Behavior:
The agent should correctly interpret the "Action" and "Action Input" instructions in the prompt, execute the Python code using python_repl_ast and return the result in the "Observation" section.
Actual Behavior:
The agent repeatedly returns the error message "** python_repl_ast
** is not a valid tool, try one of [python_repl_ast]."
### System Info
Windows, vs code, python 3.10, langchain 0.2.2 | React Agent Fails to Recognize "python_repl_ast" Tool | https://api.github.com/repos/langchain-ai/langchain/issues/23076/comments | 0 | 2024-06-18T12:12:31Z | 2024-06-18T12:15:09Z | https://github.com/langchain-ai/langchain/issues/23076 | 2,359,714,784 | 23,076 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from azure.search.documents.indexes.models import (
    SemanticSearch,
    SemanticConfiguration,
    SemanticPrioritizedFields,
    SemanticField,
    ScoringProfile,
    SearchableField,
    SearchField,
    SearchFieldDataType,
    SimpleField,
    TextWeights,
)

index_fields = [
    SimpleField(
        name="id",
        type=SearchFieldDataType.String,
        key=True,
        filterable=True,
    ),
    SearchableField(
        name="content",
        type=SearchFieldDataType.String,
        searchable=True,
    ),
    SearchField(
        name="content_vector",
        type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
        searchable=True,
        vector_search_dimensions=len(embeddings.embed_query("Text")),
        vector_search_configuration="default",
        vector_search_profile_name="my-vector-profile",
    ),
    SearchableField(
        name="metadata",
        type=SearchFieldDataType.String,
        searchable=True,
    ),
]
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "d:\kf-ds-genai-python\kf_skybot_env\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
exec(code, module.__dict__)
File "D:\kf-ds-genai-python\skybot\pages\Skybot_Home.py", line 42, in <module>
init_index()
File "D:\kf-ds-genai-python\skybot\vectorstore.py", line 87, in init_index
vector_store: AzureSearch = AzureSearch(
File "d:\kf-ds-genai-python\kf_skybot_env\lib\site-packages\langchain_community\vectorstores\azuresearch.py", line 310, in __init__
self.client = _get_search_client(
File "d:\kf-ds-genai-python\kf_skybot_env\lib\site-packages\langchain_community\vectorstores\azuresearch.py", line 220, in _get_search_client
index_client.create_index(index)
File "d:\kf-ds-genai-python\kf_skybot_env\lib\site-packages\azure\core\tracing\decorator.py", line 94, in wrapper_use_tracer
return func(*args, **kwargs)
File "d:\kf-ds-genai-python\kf_skybot_env\lib\site-packages\azure\search\documents\indexes\_search_index_client.py", line 219, in create_index
result = self._client.indexes.create(patched_index, **kwargs)
return func(*args, **kwargs)
File "d:\kf-ds-genai-python\kf_skybot_env\lib\site-packages\azure\search\documents\indexes\_generated\operations\_indexes_operations.py", line 402, in create
raise HttpResponseError(response=response, model=error)
azure.core.exceptions.HttpResponseError: (InvalidRequestParameter) The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchProfile' set.
Code: InvalidRequestParameter
Message: The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchProfile' set.
Exception Details: (InvalidField) The vector field 'content_vector' must have the property 'vectorSearchProfile' set. Parameters: definition
Code: InvalidField
Message: The vector field 'content_vector' must have the property 'vectorSearchProfile' set. Parameters: definition
```
### Description
I'm using the latest versions: langchain==0.2.5 and azure-search-documents==11.4.0. When I try to create an Azure Search index with custom field definitions, I get the error "The vector field 'content_vector' must have the property 'vectorSearchProfile' set." This error did not occur with older versions of langchain and azure-search-documents, but I need the latest versions for certain features and can't get around the issue.
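A hedged sketch of a likely fix, assuming recent `langchain_community` builds the index with a default HNSW profile named `"myHnswProfile"` (worth verifying in `langchain_community/vectorstores/azuresearch.py` for your installed version): the custom vector field must reference a profile that actually exists on the index, and the pre-11.4 `vector_search_configuration` kwarg is obsolete:

```python
from azure.search.documents.indexes.models import SearchField, SearchFieldDataType

content_vector_field = SearchField(
    name="content_vector",
    type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
    searchable=True,
    vector_search_dimensions=len(embeddings.embed_query("Text")),
    # Must name a VectorSearchProfile defined on the index; "myHnswProfile"
    # is assumed to be the default profile AzureSearch creates.
    vector_search_profile_name="myHnswProfile",
)
```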
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19041
> Python Version: 3.9.2 (tags/v3.9.2:1a79785, Feb 19 2021, 13:44:55) [MSC v.1928 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.7
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.77
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
@hwchase17 | The vector field 'content_vector' must have the property 'vectorSearchProfile' set. | https://api.github.com/repos/langchain-ai/langchain/issues/23070/comments | 0 | 2024-06-18T07:33:33Z | 2024-06-18T07:36:55Z | https://github.com/langchain-ai/langchain/issues/23070 | 2,359,155,891 | 23,070 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/rag/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The error says the object the LLM receives has a type without a `shape` attribute:
```
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate
system_prompt = (
"You are an assistant for question-answering tasks. "
"Use the following pieces of retrieved context to answer "
"the question. If you don't know the answer, say that you "
"don't know. Use three sentences maximum and keep the "
"answer concise."
"\n\n"
"{context}"
)
prompt = ChatPromptTemplate.from_messages(
[
("system", system_prompt),
("human", "{input}"),
]
)
question_answer_chain = create_stuff_documents_chain(llm, prompt)
rag_chain = create_retrieval_chain(retriever, question_answer_chain)
response = rag_chain.invoke({"input": "What is Task Decomposition?"})
print(response["answer"])
```
The traceback looks like this:
```
File "/home/desir/PycharmProjects/pdf_parse/rag/create_stuff_chain.py", line 28, in <module>
response = rag_chain.invoke({"input": "文章主旨"})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4573, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2504, in invoke
input = step.invoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/passthrough.py", line 469, in invoke
return self._call_with_config(self._invoke, input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1598, in _call_with_config
context.run(
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 380, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/passthrough.py", line 456, in _invoke
**self.mapper.invoke(
^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3149, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3149, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4573, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2504, in invoke
input = step.invoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3976, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1598, in _call_with_config
context.run(
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 380, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3844, in _invoke
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 380, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/transformers/models/mistral/modeling_mistral.py", line 1139, in forward
outputs = self.model(
^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/transformers/models/mistral/modeling_mistral.py", line 937, in forward
batch_size, seq_length = input_ids.shape
^^^^^^^^^^^^^^^
```
AttributeError: 'ChatPromptValue' object has no attribute 'shape'
```
My model is loaded from local disk:
```
import os
import time
import gc
import torch
print(torch.version.cuda)
gc.collect()
torch.cuda.empty_cache()
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
os.environ['HTTP_PROXY'] = 'http://127.0.0.1:7890'
os.environ['HTTPS_PROXY'] = 'http://127.0.0.1:7890'
cache_dir = os.path.expanduser("~/.mistral")
cache_mistral_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=cache_dir)
cache_mistral_model = AutoModelForCausalLM.from_pretrained(cache_dir)
```
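A hedged reading of the traceback: the chain is piping the raw `transformers` model, so it receives a `ChatPromptValue` instead of token ids. One sketch of loading the local checkpoint as a LangChain-compatible LLM instead (assumes `langchain_huggingface` is installed and that a local path is accepted where a hub id would be):

```python
from langchain_huggingface import HuggingFacePipeline

# Wraps model + tokenizer in a text-generation pipeline so the chain can
# pass prompt strings rather than ChatPromptValue objects.
llm = HuggingFacePipeline.from_model_id(
    model_id=cache_dir,  # the local ~/.mistral checkpoint loaded above
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 256},
)
```

Then build the chain with `llm` instead of `cache_mistral_model`.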
### Idea or request for content:
How can I modify the data before passing it to the LLM? | DOC: <Issue related to /v0.2/docs/tutorials/rag/> | https://api.github.com/repos/langchain-ai/langchain/issues/23066/comments | 2 | 2024-06-18T05:56:36Z | 2024-06-27T01:13:51Z | https://github.com/langchain-ai/langchain/issues/23066 | 2,358,977,746 | 23,066
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_experimental.llms.ollama_functions import OllamaFunctions
# Schema for structured response
class Person(BaseModel):
name: str = Field(description="The person's name", required=True)
height: float = Field(description="The person's height", required=True)
hair_color: str = Field(description="The person's hair color")
# Prompt template
prompt = PromptTemplate.from_template(
"""Alex is 5 feet tall.
Claudia is 1 feet taller than Alex and jumps higher than him.
Claudia is a brunette and Alex is blonde.
Human: {question}
AI: """
)
# Chain
llm = OllamaFunctions(model="phi3", format="json", temperature=0)
structured_llm = llm.with_structured_output(Person)
chain = prompt | structured_llm
alex = chain.invoke("Describe Alex")
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Trying to extract structured output via `with_structured_output`; the call fails with the `ValueError` in the title instead of returning a `Person`.
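A hedged note on a likely cause: `with_structured_output` on `OllamaFunctions` requires the model to emit a tool call, and smaller models such as `phi3` often answer with plain text instead, which raises this error. One sketch of a workaround; the model choice is an assumption and depends on what your Ollama install serves:

```python
# Swap in a model with more reliable function-calling behavior (assumption).
llm = OllamaFunctions(model="llama3", format="json", temperature=0)
structured_llm = llm.with_structured_output(Person)
```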
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
> Python Version: 3.9.6 (default, Feb 3 2024, 15:58:27)
[Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.8
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.79
> langchain_experimental: 0.0.61
> langchain_google_genai: 1.0.5
> langchain_google_vertexai: 1.0.4
> langchain_mistralai: 0.1.8
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.16
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| ValueError: `tool_calls` missing from AIMessage: {message} | https://api.github.com/repos/langchain-ai/langchain/issues/23065/comments | 3 | 2024-06-18T05:20:22Z | 2024-08-06T15:06:10Z | https://github.com/langchain-ai/langchain/issues/23065 | 2,358,931,924 | 23,065 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
chain = LLMChain(
    llm=self.bedrock.llm,
    prompt=self.prompt_template,
)
chain_result = chain.predict(statement=text).strip()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm facing an issue similar to #3512.
I'm using LangChain in a Flask app hosted in an Azure Web App, calling the Anthropic Claude 3 Haiku model on AWS Bedrock.
The first LangChain request takes about 2 minutes to return; subsequent requests return quickly. After about 7 idle minutes, the first request is slow again.
I can't reproduce this issue locally; it only happens in the Azure environment.
When testing with the boto3 AWS Python SDK directly, every request returns fast, with no issues.
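One hedged way to narrow this down is to hand LangChain the same pre-built boto3 client that behaves well, so client construction and connection reuse are taken out of the equation (assumes the Bedrock wrapper accepts a `client` kwarg, which recent `langchain_community` versions do; region and timeouts below are illustrative):

```python
import boto3
from botocore.config import Config
from langchain_community.chat_models import BedrockChat

# Reuse one client with explicit timeouts so slowness can be attributed to
# networking/credential lookup rather than LangChain itself.
bedrock_client = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1",  # assumption: set to your region
    config=Config(connect_timeout=5, read_timeout=120, retries={"max_attempts": 2}),
)
llm = BedrockChat(
    model_id="anthropic.claude-3-haiku-20240307-v1:0",
    client=bedrock_client,
)
```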
### System Info
langchain==0.2.3
linux slim-bookworm
python:3.12.3
container image: python:3.12.3-slim-bookworm
| Request Timeout / Taking too long | https://api.github.com/repos/langchain-ai/langchain/issues/23060/comments | 2 | 2024-06-18T00:00:54Z | 2024-06-18T09:27:15Z | https://github.com/langchain-ai/langchain/issues/23060 | 2,358,523,743 | 23,060 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/vectorstores/azure_cosmos_db_no_sql/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The current code does not work: it fails on document insertion with the following error:
`TypeError: AzureCosmosDBNoSqlVectorSearch._from_kwargs() missing 1 required keyword-only argument: 'cosmos_database_properties'`
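A hedged workaround sketch: pass the missing keyword explicitly. The parameter and variable names are meant to mirror the documentation page and are assumptions; the expected keys of `cosmos_database_properties` aren't documented, so the empty dict is also an assumption:

```python
vectorstore = AzureCosmosDBNoSqlVectorSearch.from_documents(
    docs,
    embedding=openai_embeddings,
    cosmos_client=cosmos_client,
    database_name=database_name,
    container_name=container_name,
    vector_embedding_policy=vector_embedding_policy,
    indexing_policy=indexing_policy,
    cosmos_container_properties={"partition_key": partition_key},
    cosmos_database_properties={},  # the kwarg the TypeError complains about
)
```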
### Idea or request for content:
_No response_ | DOC: Azure Cosmos DB No SQL | https://api.github.com/repos/langchain-ai/langchain/issues/23018/comments | 2 | 2024-06-17T20:48:31Z | 2024-06-22T17:26:58Z | https://github.com/langchain-ai/langchain/issues/23018 | 2,358,230,408 | 23,018 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
_No response_ | Add image token counting to ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/23000/comments | 3 | 2024-06-17T20:29:37Z | 2024-06-19T17:41:48Z | https://github.com/langchain-ai/langchain/issues/23000 | 2,358,200,622 | 23,000 |
[
"hwchase17",
"langchain"
] | ### URL
https://github.com/langchain-ai/langchain/blob/c6b7db6587c5397e320b84cbd7cd25c7c4b743e5/docs/docs/how_to/toolkits.mdx#L4
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The Link "Integration" points to [a wrong url](https://github.com/langchain-ai/langchain/blob/c6b7db6587c5397e320b84cbd7cd25c7c4b743e5/docs/integrations/toolkits).
[this is](https://github.com/langchain-ai/langchain/tree/c6b7db6587c5397e320b84cbd7cd25c7c4b743e5/docs/docs/integrations/toolkits) the Correct URL (/docs/docs instead of /docs/)
### Idea or request for content:
_No response_ | DOC: wrong link URL | https://api.github.com/repos/langchain-ai/langchain/issues/22992/comments | 1 | 2024-06-17T18:31:46Z | 2024-06-18T06:41:38Z | https://github.com/langchain-ai/langchain/issues/22992 | 2,357,979,748 | 22,992 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/tool_calling/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The following example should not be included on this page: the `llm_with_tools` instantiation examples use different LLMs that are not all compatible with `PydanticToolsParser`. This creates confusion and wastes time.

```python
from langchain_core.output_parsers.openai_tools import PydanticToolsParser

chain = llm_with_tools | PydanticToolsParser(tools=[Multiply, Add])
chain.invoke(query)
```
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/how_to/tool_calling/> | https://api.github.com/repos/langchain-ai/langchain/issues/22989/comments | 3 | 2024-06-17T16:50:08Z | 2024-06-22T15:54:26Z | https://github.com/langchain-ai/langchain/issues/22989 | 2,357,794,583 | 22,989 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.embeddings import OCIGenAIEmbeddings
m = OCIGenAIEmbeddings(
model_id="MY EMBEDDING MODEL",
compartment_id="MY COMPARTMENT ID"
)
response = m.embed_documents([str(n) for n in range(0, 100)])
```
### Error Message and Stack Trace (if applicable)
(stack trace is sanitized to remove identifying information)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "oci_generative_ai.py", line 192, in embed_documents
response = self.client.embed_text(invocation_obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "generative_ai_inference_client.py", line 298, in embed_text
return retry_strategy.make_retrying_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "retry.py", line 308, in make_retrying_call
response = func_ref(*func_args, **func_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "base_client.py", line 535, in call_api
response = self.request(request, allow_control_chars, operation_name, api_reference_link)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "circuitbreaker.py", line 159, in wrapper
return call(function, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "circuitbreaker.py", line 170, in call
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "base_client.py", line 726, in request
self.raise_service_error(request, response, service_code, message, operation_name, api_reference_link, target_service, request_endpoint, client_version, timestamp, deserialized_data)
File "base_client.py", line 891, in raise_service_error
raise exceptions.ServiceError(
oci.exceptions.ServiceError: {'target_service': 'generative_ai_inference', 'status': 400, 'code': '400', 'opc-request-id': 'XYZ', 'message': 'Inputs must be provided, support inputs array size less than 96.', 'operation_name': 'embed_text', 'timestamp': 'XYZ', 'client_version': 'Oracle-PythonSDK/2.128.2', 'request_endpoint': 'XYZ', 'logging_tips': 'To get more info on the failing request, refer to https://docs.oracle.com/en-us/iaas/tools/python/latest/logging.html for ways to log the request/response details.', 'troubleshooting_tips': "See https://docs.oracle.com/iaas/Content/API/References/apierrors.htm#apierrors_400__400_400 for more information about resolving this error. Also see https://docs.oracle.com/iaas/api/#/en/generative-ai-inference/20231130/EmbedTextResult/EmbedText for details on this operation's requirements. If you are unable to resolve this generative_ai_inference issue, please contact Oracle support and provide them this full error message."}
### Description
The OCI embeddings service has a maximum batch size of 96; input lists longer than that receive a service error from the embedding endpoint. This can be fixed by adding a batching parameter to the embedding class and updating the `embed_documents` function like so:
```python
def embed_documents(self, texts: List[str]) -> List[List[float]]:
"""Call out to OCIGenAI's embedding endpoint.
Args:
texts: The list of texts to embed.
Returns:
List of embeddings, one for each text.
"""
from oci.generative_ai_inference import models
if self.model_id.startswith(CUSTOM_ENDPOINT_PREFIX):
serving_mode = models.DedicatedServingMode(endpoint_id=self.model_id)
else:
serving_mode = models.OnDemandServingMode(model_id=self.model_id)
embeddings = []
def split_texts():
for i in range(0, len(texts), self.batch_size):
yield texts[i:i + self.batch_size]
for chunk in split_texts():
invocation_obj = models.EmbedTextDetails(
serving_mode=serving_mode,
compartment_id=self.compartment_id,
truncate=self.truncate,
inputs=chunk,
)
response = self.client.embed_text(invocation_obj)
embeddings.extend(response.data.embeddings)
return embeddings
```
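The patch above references `self.batch_size`, which would also need declaring on the class (e.g. a pydantic field `batch_size: int = 96`; name and default are assumptions matching the service limit). A runnable sketch of just the chunking logic:

```python
from typing import Iterator, List

BATCH_SIZE = 96  # OCI embed_text accepts at most 96 inputs per call

def batched(texts: List[str], size: int = BATCH_SIZE) -> Iterator[List[str]]:
    """Yield successive chunks of at most `size` texts."""
    for i in range(0, len(texts), size):
        yield texts[i : i + size]

# 100 inputs -> chunks of 96 and 4, each under the service limit.
assert [len(c) for c in batched([str(n) for n in range(100)])] == [96, 4]
```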
### System Info
Mac (x86)
```shell
% python --version
Python 3.12.2
% pip freeze |grep langchain
-e git+ssh://[email protected]/langchain-ai/langchain.git@c437b1aab734d5f15d4bdcd4a2d989808d244ad9#egg=langchain&subdirectory=libs/langchain
-e git+ssh://[email protected]/langchain-ai/langchain.git@c437b1aab734d5f15d4bdcd4a2d989808d244ad9#egg=langchain_community&subdirectory=libs/community
-e git+ssh://[email protected]/langchain-ai/langchain.git@c437b1aab734d5f15d4bdcd4a2d989808d244ad9#egg=langchain_core&subdirectory=libs/core
-e git+ssh://[email protected]/langchain-ai/langchain.git@c437b1aab734d5f15d4bdcd4a2d989808d244ad9#egg=langchain_experimental&subdirectory=libs/experimental
-e git+ssh://[email protected]/langchain-ai/langchain.git@c437b1aab734d5f15d4bdcd4a2d989808d244ad9#egg=langchain_openai&subdirectory=libs/partners/openai
-e git+ssh://[email protected]/langchain-ai/langchain.git@c437b1aab734d5f15d4bdcd4a2d989808d244ad9#egg=langchain_text_splitters&subdirectory=libs/text-splitters
``` | OCI Embeddings service should use batch size 96 by default. | https://api.github.com/repos/langchain-ai/langchain/issues/22985/comments | 0 | 2024-06-17T15:29:15Z | 2024-06-17T15:31:49Z | https://github.com/langchain-ai/langchain/issues/22985 | 2,357,634,791 | 22,985 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
pip install --upgrade --quiet pymilvus[model] langchain-milvus
### Error Message and Stack Trace (if applicable)
![image](https://github.com/langchain-ai/langchain/assets/119028382/895bc83e-a774-4528-b14b-d6950ba6224d)
### Description
Cannot install langchain-milvus; the error is shown in the screenshot above.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.5
> langchain: 0.2.3
> langchain_community: 0.2.4
> langsmith: 0.1.77
> langchain_cli: 0.0.23
> langchain_google_cloud_sql_mysql: 0.2.2
> langchain_google_vertexai: 1.0.5
> langchain_openai: 0.1.6
> langchain_text_splitters: 0.2.1
> langchainhub: 0.1.17
> langserve: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | langchain-milvus install error | https://api.github.com/repos/langchain-ai/langchain/issues/22976/comments | 1 | 2024-06-17T11:33:00Z | 2024-06-17T12:42:37Z | https://github.com/langchain-ai/langchain/issues/22976 | 2,357,114,648 | 22,976 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
import requests
import yaml
# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
from langchain_community.agent_toolkits.openapi import planner
from langchain_openai.chat_models import ChatOpenAI
from langchain_community.agent_toolkits.openapi.spec import reduce_openapi_spec
from langchain.requests import RequestsWrapper
from requests.packages.urllib3.exceptions import InsecureRequestWarning
# Disable SSL warnings
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
import certifi
os.environ['SSL_CERT_FILE'] = certifi.where()
print(os.environ.get('NO_PROXY'))
with open("swagger.yaml") as f:
data = yaml.load(f, Loader=yaml.FullLoader)
swagger_api_spec = reduce_openapi_spec(data)
def construct_superset_aut_headers(url=None):
import requests
url = "https://your-superset-url/api/v1/security/login"
payload = {
"username": "your-username",
"password": "your-password",
"provider": "db",
"refresh": True
}
headers = {
"Content-Type": "application/json"
}
response = requests.post(url, json=payload, headers=headers, verify=False)
data = response.json()
return {"Authorization": f"Bearer {data['access_token']}"}
llm = ChatOpenAI(model='gpt-4o')
swagger_requests_wrapper = RequestsWrapper(headers=construct_superset_aut_headers())
superset_agent = planner.create_openapi_agent(swagger_api_spec, swagger_requests_wrapper, llm, allow_dangerous_requests=True, handle_parsing_errors=True)
superset_agent.run(
"Tell me the number and types of charts and dashboards available."
)
```
### Error Message and Stack Trace (if applicable)
Entering new AgentExecutor chain...
Action: api_planner
Action Input: I need to find the right API calls to get the number and types of charts and dashboards available.
Observation: 1. **Evaluate whether the user query can be solved by the API documented below:**
...
Observation: Use the `requests_get` tool to retrieve a list of charts. is not a valid tool, try one of [requests_get, requests_post].
Thought: To proceed with the plan, I will first retrieve a list of charts using the **GET /api/v1/chart/** endpoint and extract the necessary information.
...
Plan:
1. Retrieve a list of charts using the **GET /api/v1/chart/** endpoint.
2. Extract the count of charts and their IDs.
3. Retrieve a list of dashboards using the **GET /api/v1/dashboard/** endpoint.
4. Extract the count of dashboards and their IDs.
...
Action: Use the `requests_get` tool to retrieve a list of charts.
Action Input:
{
"url": "https://your-superset-url/api/v1/chart/",
"params": {},
"output_instructions": "Extract the count of charts and ids of the charts"
}
...
Traceback (most recent call last):
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/connectionpool.py", line 467, in _make_request
self._validate_conn(conn)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/connectionpool.py", line 1099, in _validate_conn
conn.connect()
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/connection.py", line 653, in connect
sock_and_verified = _ssl_wrap_socket_and_match_hostname(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/connection.py", line 806, in _ssl_wrap_socket_and_match_hostname
ssl_sock = ssl_wrap_socket(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/util/ssl_.py", line 465, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/util/ssl_.py", line 509, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "~/anaconda3/envs/superset/lib/python3.11/ssl.py", line 517, in wrap_socket
return self.sslsocket_class._create(
File "~/anaconda3/envs/superset/lib/python3.11/ssl.py", line 1104, in _create
self.do_handshake()
File "~/anaconda3/envs/superset/lib/python3.11/ssl.py", line 1382, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)
...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/connectionpool.py", line 491, in _make_request
raise new_e
urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)
...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/requests/adapters.py", line 589, in send
resp = conn.urlopen(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/connectionpool.py", line 847, in urlopen
retries = retries.increment(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='your-superset-url', port=443): Max retries exceeded with url: /api/v1/chart/ (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))
...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "~/git/forPR/superset/openapi-agent.py", line 46, in <module>
superset_agent.run(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 148, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/chains/base.py", line 600, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 148, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/chains/base.py", line 383, in __call__
return self.invoke(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/chains/base.py", line 166, in invoke
raise e
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/agents/agent.py", line 1433, in _call
next_step_output = self._take_next_step(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/agents/agent.py", line 1139, in _take_next_step
[
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/agents/agent.py", line 1139, in <listcomp>
[
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/agents/agent.py", line 1224, in _iter_next_step
yield self._perform_agent_action(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/agents/agent.py", line 1246, in _perform_agent_action
observation = tool.run(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain_core/tools.py", line 452, in run
raise e
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain_core/tools.py", line 413, in run
else context.run(self._run, *tool_args, **tool_kwargs)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain_community/agent_toolkits/openapi/planner.py", line 88, in _run
str, self.requests_wrapper.get(data["url"], params=data_params)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain_community/utilities/requests.py", line 154, in get
return self._get_resp_content(self.requests.get(url, **kwargs))
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain_community/utilities/requests.py", line 31, in get
return requests.get(url, headers=self.headers, auth=self.auth, verify=self.verify, **kwargs)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/requests/adapters.py", line 620, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='your-superset-url', port=443): Max retries exceeded with url: /api/v1/chart/ (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))
### Description
I am building an agent that calls an API by reading the Swagger spec of a server whose TLS certificate chain is broken.
With the code structured as above, I hit the error shown.
Fixing the certificate would of course be the best solution,
but it would be even better if a temporary escape hatch were provided through an option.
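A hedged sketch of such an option: the traceback shows the wrapper already forwards `self.verify` into `requests.get`, so constructing it with `verify=False` should skip certificate checks (insecure; for testing only), assuming your installed version exposes that field:

```python
from langchain.requests import RequestsWrapper

# Disables TLS verification for every request the agent makes (insecure).
swagger_requests_wrapper = RequestsWrapper(
    headers=construct_superset_aut_headers(),
    verify=False,
)
```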
### System Info
langchain==0.2.4
langchain-community==0.2.4
langchain-core==0.2.6
langchain-openai==0.1.8
langchain-text-splitters==0.2.1 | RequestsWrapper initialization for API Endpoint where SSL authentication fails | https://api.github.com/repos/langchain-ai/langchain/issues/22975/comments | 0 | 2024-06-17T11:31:32Z | 2024-06-18T03:12:43Z | https://github.com/langchain-ai/langchain/issues/22975 | 2,357,111,081 | 22,975 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
import tenacity
```
### Error Message and Stack Trace (if applicable)
```
>>> import tenacity
...
lib/python3.11/site-packages/tenacity/__init__.py", line 653, in <module>
    from tenacity.asyncio import AsyncRetrying  # noqa:E402,I100
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'tenacity.asyncio'
```
### Description
tenacity 8.4.0 has a packaging bug: importing it fails with the error above, which breaks langchain-core's tenacity dependency.
https://github.com/langchain-ai/langchain/blob/892bd4c29be34c0cc095ed178be6d60c6858e2ec/libs/core/pyproject.toml#L15
https://github.com/jd/tenacity/issues/471
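A hedged, temporary workaround until the constraint or tenacity itself is fixed is to pin the previous release (the exact bound is an assumption):

```
tenacity<8.4.0
```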
### System Info
- Python 3.11
- tenacity 8.4.0 | It will occurred error if a dependent library `tenacity` is upgraded to `8.4.0`. | https://api.github.com/repos/langchain-ai/langchain/issues/22972/comments | 35 | 2024-06-17T08:20:25Z | 2024-06-18T14:34:29Z | https://github.com/langchain-ai/langchain/issues/22972 | 2,356,709,439 | 22,972 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/rag/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
My code:

```python
from langchain_core.prompts import PromptTemplate

template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "感谢提问!" at the end of the answer,总是用中文回答问题,可以使用英语描述专业词汇.

{context}

Question: {question}

Helpful Answer:"""
custom_rag_prompt = PromptTemplate.from_template(template)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | custom_rag_prompt
    | cache_mistral_model
    | StrOutputParser()
)

while True:
    user_input = input("请输入问题或命令(输入 q 退出): ")
    if user_input.lower() == "q":
        break
    for chunk in rag_chain.stream(user_input):
        print(chunk, end="", flush=True)
```

The error says:

```
Traceback (most recent call last):
File "/home/desir/PycharmProjects/pdf_parse/rag/cohere.py", line 141, in <module>
for chunk in rag_chain.stream(user_input):
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2873, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2860, in transform
yield from self._transform_stream_with_config(
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1865, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2822, in _transform
for output in final_pipeline:
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/output_parsers/transform.py", line 50, in transform
yield from self._transform_stream_with_config(
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1829, in _transform_stream_with_config
final_input: Optional[Input] = next(input_for_tracing, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4057, in transform
for output in self._transform_stream_with_config(
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1865, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4025, in _transform
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 380, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/transformers/models/mistral/modeling_mistral.py", line 1139, in forward
outputs = self.model(
^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/transformers/models/mistral/modeling_mistral.py", line 937, in forward
batch_size, seq_length = input_ids.shape
^^^^^^^^^^^^^^^
AttributeError: 'StringPromptValue' object has no attribute 'shape'
```
What is happening here, please? (A guess is sketched below.)
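A hedged guess from the traceback: `cache_mistral_model` is a raw `transformers` model, so the chain hands it a `StringPromptValue` instead of token ids. A sketch of wrapping it as a LangChain LLM; `cache_mistral_tokenizer` is assumed to be the tokenizer loaded alongside the model (not shown above):

```python
from transformers import pipeline
from langchain_huggingface import HuggingFacePipeline

# The chain will now pass prompt strings, which the pipeline tokenizes itself.
llm = HuggingFacePipeline(pipeline=pipeline(
    "text-generation",
    model=cache_mistral_model,
    tokenizer=cache_mistral_tokenizer,
    max_new_tokens=256,
))
```

Then use `llm` in place of `cache_mistral_model` in `rag_chain`.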
### Idea or request for content:
I don't know. | DOC: <Issue related to /v0.2/docs/tutorials/rag/> | https://api.github.com/repos/langchain-ai/langchain/issues/22971/comments | 3 | 2024-06-17T07:30:48Z | 2024-06-19T05:01:46Z | https://github.com/langchain-ai/langchain/issues/22971 | 2,356,605,589 | 22,971
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Using the standard TileDB code located in `tiledb.py`.
### Error Message and Stack Trace (if applicable)
there's no specific error to provide
### Description
The TileDB vector store source code doesn't appear to have a `from_documents` method, despite the online instructions saying it does.
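A hedged observation that may explain this: `from_documents` is defined on the `VectorStore` base class and inherited, so it won't appear in `tiledb.py` itself but should still be callable. A usage sketch; the keyword arguments are assumptions, so check `TileDB.from_texts` for the exact signature in your version:

```python
from langchain_community.vectorstores import TileDB

# from_documents is inherited from VectorStore and delegates to from_texts.
db = TileDB.from_documents(
    docs,
    embeddings,
    index_uri="/tmp/tiledb_index",  # assumed parameter name
    index_type="FLAT",              # assumed parameter name
)
```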
### System Info
windows 10 | Why doesn't the TileDB vector store implementation have a "from_documents" method when the instructions say it does... | https://api.github.com/repos/langchain-ai/langchain/issues/22964/comments | 3 | 2024-06-17T02:01:06Z | 2024-06-17T09:34:03Z | https://github.com/langchain-ai/langchain/issues/22964 | 2,356,180,425 | 22,964 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/sql_qa/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I am quite new to SQL agents; however, I think there should be a line of code where the new `retriever_tool` is added to the list of tools available to the agent. I can't see how these two blocks of code work together:
```
retriever_tool = create_retriever_tool(
retriever,
name="search_proper_nouns",
description=description,
)
```
AND
```
agent = create_react_agent(llm, tools, messages_modifier=system_message)
```
without an intermediate step like `tools.append(retriever_tool)` or something to that effect.
If I am wrong, please explain how the agent will know about `retriever_tool`; a guess at the missing step is sketched below.
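A hedged sketch of that step, mirroring how the tutorial's `tools` list is typically assembled (an assumption, not a quote from the docs):

```python
tools.append(retriever_tool)  # make the proper-noun retriever available
agent = create_react_agent(llm, tools, messages_modifier=system_message)
```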
Arindam
### Idea or request for content:
_No response_ | DOC: There seems to be a code line missing from the given example to connect `retriever_tool` to the `tools` list <Issue related to /v0.2/docs/tutorials/sql_qa/> | https://api.github.com/repos/langchain-ai/langchain/issues/22963/comments | 1 | 2024-06-17T00:14:48Z | 2024-06-17T12:57:18Z | https://github.com/langchain-ai/langchain/issues/22963 | 2,356,069,720 | 22,963 |
[
"hwchase17",
"langchain"
] | ### URL
https://api.python.langchain.com/en/latest/tools/langchain_community.tools.ddg_search.tool.DuckDuckGoSearchResults.html
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The documentation indicates that JSON should be returned by this tool (https://python.langchain.com/v0.2/docs/integrations/tools/ddg/), but it actually returns a plain string that is not in JSON format (and also can't be parsed reliably through simple text search alone).
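In the meantime, a hedged workaround: the underlying API wrapper already exposes structured results as a list of dicts (keys typically `snippet`, `title`, `link`; worth verifying against your installed version):

```python
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper

wrapper = DuckDuckGoSearchAPIWrapper()
structured = wrapper.results("langchain", max_results=5)  # list[dict[str, str]]
```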
### Idea or request for content:
Adjust the tool to return either JSON or a Python object with the results (title, URL, etc.); the latter is my personal preference. | DuckDuckGo Search tool does not return JSON format | https://api.github.com/repos/langchain-ai/langchain/issues/22961/comments | 2 | 2024-06-16T21:22:02Z | 2024-08-10T23:41:59Z | https://github.com/langchain-ai/langchain/issues/22961 | 2,355,988,551 | 22,961
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[68], line 2
1 # Build graph
----> 2 from langgraph.graph import END, StateGraph
4 workflow = StateGraph(GraphState)
5 # Define the nodes
File ~/Desktop/Dev/AI-Agent/ai-agent-env/lib/python3.9/site-packages/langgraph/graph/__init__.py:1
----> 1 from langgraph.graph.graph import END, START, Graph
2 from langgraph.graph.message import MessageGraph, MessagesState, add_messages
3 from langgraph.graph.state import StateGraph
File ~/Desktop/Dev/AI-Agent/ai-agent-env/lib/python3.9/site-packages/langgraph/graph/graph.py:31
29 from langgraph.constants import END, START, TAG_HIDDEN, Send
30 from langgraph.errors import InvalidUpdateError
---> 31 from langgraph.pregel import Channel, Pregel
32 from langgraph.pregel.read import PregelNode
33 from langgraph.pregel.types import All
File ~/Desktop/Dev/AI-Agent/ai-agent-env/lib/python3.9/site-packages/langgraph/pregel/__init__.py:46
36 from langchain_core.runnables.base import Input, Output, coerce_to_runnable
37 from langchain_core.runnables.config import (
38 RunnableConfig,
39 ensure_config,
(...)
44 patch_config,
45 )
---> 46 from langchain_core.runnables.utils import (
47 ConfigurableFieldSpec,
48 create_model,
49 get_unique_config_specs,
50 )
51 from langchain_core.tracers._streaming import _StreamingCallbackHandler
52 from typing_extensions import Self
ImportError: cannot import name 'create_model' from 'langchain_core.runnables.utils' (/Users/UserName/Desktop/Dev/AI-Agent/ai-agent-env/lib/python3.9/site-packages/langchain_core/runnables/utils.py)
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I was trying to import `StateGraph` from `langgraph.graph`, and it kept failing because `create_model` is not available in `langchain_core.runnables.utils`.
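A hedged note: this import error usually points at mismatched package versions rather than a code bug; the pinned `langchain-community==0.0.13` below is far older than the other packages. Upgrading and aligning versions is the usual fix (exact minimum versions are an assumption):

```
pip install -U langchain-core langchain-community langgraph
```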
### System Info
langchain==0.2.5
langchain-community==0.0.13
langchain-core==0.2.7
langchain-text-splitters==0.2.1 | Cannot import name 'create_model' from 'langchain_core.runnables.utils' | https://api.github.com/repos/langchain-ai/langchain/issues/22956/comments | 4 | 2024-06-16T12:18:50Z | 2024-07-01T14:21:23Z | https://github.com/langchain-ai/langchain/issues/22956 | 2,355,726,106 | 22,956 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Directly from the documentation:
```
URI = "./milvus_demo.db"
vector_db = Milvus.from_documents(
docs,
embeddings,
connection_args={"uri": URI},
)
```
### Error Message and Stack Trace (if applicable)
```
ERROR:langchain_community.vectorstores.milvus:Invalid Milvus URI: ./milvus_demo.db
Traceback (most recent call last):
File "/home/erik/RAGMeUp/server/server.py", line 13, in <module>
raghelper = RAGHelper(logger)
^^^^^^^^^^^^^^^^^
File "/home/erik/RAGMeUp/server/RAGHelper.py", line 113, in __init__
self.loadData()
File "/home/erik/RAGMeUp/server/RAGHelper.py", line 258, in loadData
vector_db = Milvus.from_documents(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/erik/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 550, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/erik/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_community/vectorstores/milvus.py", line 1010, in from_texts
vector_db = cls(
^^^^
File "/home/erik/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 183, in warn_if_direct_instance
return wrapped(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/erik/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_community/vectorstores/milvus.py", line 206, in __init__
self.alias = self._create_connection_alias(connection_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/erik/anaconda3/envs/rag/lib/python3.11/site-packages/langchain_community/vectorstores/milvus.py", line 254, in _create_connection_alias
raise ValueError("Invalid Milvus URI: %s", uri)
ValueError: ('Invalid Milvus URI: %s', './milvus_demo.db')
```
### Description
* Milvus should work with a local file DB via Milvus Lite.
* A local connection URI (e.g. `./milvus_demo.db`) is rejected and triggers the error above.
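A hedged sketch of a workaround: the traceback shows the older `langchain_community` implementation in use, while the newer `langchain-milvus` package (already in the dependency list below) is the one that accepts Milvus Lite file URIs; an assumption worth verifying for your versions:

```python
from langchain_milvus import Milvus  # instead of langchain_community.vectorstores

vector_db = Milvus.from_documents(
    docs,
    embeddings,
    connection_args={"uri": "./milvus_demo.db"},  # local Milvus Lite file
)
```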
### System Info
```
langchain==0.2.2
langchain-community==0.2.2
langchain-core==0.2.4
langchain-huggingface==0.0.2
langchain-milvus==0.1.1
langchain-postgres==0.0.6
langchain-text-splitters==0.2.1
milvus-lite==2.4.7
pymilvus==2.4.3
``` | Invalid Milvus URI when using Milvus lite with local DB | https://api.github.com/repos/langchain-ai/langchain/issues/22953/comments | 1 | 2024-06-16T09:10:07Z | 2024-06-16T09:19:33Z | https://github.com/langchain-ai/langchain/issues/22953 | 2,355,586,348 | 22,953 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following is the code using **PydanticOutputParser**; LangChain fails to parse the LLM output.
```
# Imports assumed for a runnable repro:
import os
from typing import List

from pydantic import BaseModel, Field, field_validator
from langchain_community.llms import HuggingFaceHub
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import PromptTemplate

HUGGINGFACEHUB_API_TOKEN = os.getenv("HUGGINGFACEHUB_API_TOKEN")
repo_id = "mistralai/Mistral-7B-Instruct-v0.3"
model_kwargs = {
"max_new_tokens": 60,
"max_length": 200,
"temperature": 0.1,
"timeout": 6000
}
# Using HuggingFaceHub
llm = HuggingFaceHub(
repo_id=repo_id,
huggingfacehub_api_token = HUGGINGFACEHUB_API_TOKEN,
model_kwargs = model_kwargs,
)
# Define your desired data structure.
class Suggestions(BaseModel):
words: List[str] = Field(description="list of substitute words based on context")
# Throw error in case of receiving a numbered-list from API
@field_validator('words')
def not_start_with_number(cls, field):
for item in field:
if item[0].isnumeric():
raise ValueError("The word can not start with numbers!")
return field
parser = PydanticOutputParser(pydantic_object=Suggestions)
prompt_template = """
Offer a list of suggestions to substitute the specified target_word based on the context.
{format_instructions}
target_word={target_word}
context={context}
"""
prompt_input_variables = ["target_word", "context"]
partial_variables = {"format_instructions":parser.get_format_instructions()}
prompt = PromptTemplate(
template=prompt_template,
input_variables=prompt_input_variables,
partial_variables=partial_variables
)
model_input = prompt.format_prompt(
target_word="behaviour",
context="The behaviour of the students in the classroom was disruptive and made it difficult for the teacher to conduct the lesson."
)
output = llm(model_input.to_string())
parser.parse(output)
```
When trying to fix the error using **OutputFixingParser**, another error occurred; the code is below:
```
outputfixing_parser = OutputFixingParser.from_llm(parser=parser,llm=llm)
print(outputfixing_parser)
outputfixing_parser.parse(output)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
File [~\Desktop\llmai\llm_deep\Lib\site-packages\langchain_core\output_parsers\pydantic.py:33](~/Desktop/llmai/llm_deep/Lib/site-packages/langchain_core/output_parsers/pydantic.py:33), in PydanticOutputParser._parse_obj(self, obj)
[32](~/Desktop/llmai/llm_deep/Lib/site-packages/langchain_core/output_parsers/pydantic.py:32) if issubclass(self.pydantic_object, pydantic.BaseModel):
---> [33](~/Desktop/llmai/llm_deep/Lib/site-packages/langchain_core/output_parsers/pydantic.py:33) return self.pydantic_object.model_validate(obj)
[34](~/Desktop/llmai/llm_deep/Lib/site-packages/langchain_core/output_parsers/pydantic.py:34) elif issubclass(self.pydantic_object, pydantic.v1.BaseModel):
File [~\Desktop\llmai\llm_deep\Lib\site-packages\pydantic\main.py:551](~/Desktop/llmai/llm_deep/Lib/site-packages/pydantic/main.py:551), in BaseModel.model_validate(cls, obj, strict, from_attributes, context)
[550](~/Desktop/llmai/llm_deep/Lib/site-packages/pydantic/main.py:550) __tracebackhide__ = True
--> [551](~/Desktop/llmai/llm_deep/Lib/site-packages/pydantic/main.py:551) return cls.__pydantic_validator__.validate_python(
[552](~/Desktop/llmai/llm_deep/Lib/site-packages/pydantic/main.py:552) obj, strict=strict, from_attributes=from_attributes, context=context
[553](~/Desktop/llmai/llm_deep/Lib/site-packages/pydantic/main.py:553) )
ValidationError: 1 validation error for Suggestions
words
Field required [type=missing, input_value={'properties': {'words': ..., 'required': ['words']}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.7/v/missing
During handling of the above exception, another exception occurred:
OutputParserException Traceback (most recent call last)
Cell In[284], [line 1](vscode-notebook-cell:?execution_count=284&line=1)
----> [1](vscode-notebook-cell:?execution_count=284&line=1) parser.parse(output)
File [~\Desktop\llmai\llm_deep\Lib\site-packages\langchain_core\output_parsers\pydantic.py:64](~/Desktop/llmai/llm_deep/Lib/site-packages/langchain_core/output_parsers/pydantic.py:64), in PydanticOutputParser.parse(self, text)
...
OutputParserException: Failed to parse Suggestions from completion {"properties": {"words": {"description": "list of substitute words based on context", "items": {"type": "string"}, "title": "Words", "type": "array"}}, "required": ["words"]}. Got: 1 validation error for Suggestions
words
Field required [type=missing, input_value={'properties': {'words': ..., 'required': ['words']}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.7/v/missing
```
Error when using **OutputFixingParser**
```
---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
File ~\Desktop\llmai\llm_deep\Lib\site-packages\langchain_core\output_parsers\pydantic.py:33, in PydanticOutputParser._parse_obj(self, obj)
     32 if issubclass(self.pydantic_object, pydantic.BaseModel):
---> 33     return self.pydantic_object.model_validate(obj)
     34 elif issubclass(self.pydantic_object, pydantic.v1.BaseModel):

File ~\Desktop\llmai\llm_deep\Lib\site-packages\pydantic\main.py:551, in BaseModel.model_validate(cls, obj, strict, from_attributes, context)
    550 __tracebackhide__ = True
--> 551 return cls.__pydantic_validator__.validate_python(
    552     obj, strict=strict, from_attributes=from_attributes, context=context
    553 )

ValidationError: 1 validation error for Suggestions
  Input should be a valid dictionary or instance of Suggestions [type=model_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.7/v/model_type

During handling of the above exception, another exception occurred:

OutputParserException                     Traceback (most recent call last)
Cell In[265], line 1
----> 1 outputfixing_parser.parse(output)

File ~\Desktop\llmai\llm_deep\Lib\site-packages\langchain\output_parsers\fix.py:62, in OutputFixingParser.parse(self, completion)
     60 except OutputParserException as e:
...
     44 try:
OutputParserException: Failed to parse Suggestions from completion null. Got: 1 validation error for Suggestions
  Input should be a valid dictionary or instance of Suggestions [type=model_type, input_value=None, input_type=NoneType]
    For further information visit https://errors.pydantic.dev/2.7/v/model_type
```
### Description
The output parser should be able to parse the LLM output and extract the JSON it produces, yielding an object like the one shown below:
```
Suggestions(words=["conduct", "misconduct", "actions", "antics", "performance", "demeanor", "attitude", "behavior", "manner", "pupil actions"])
```
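For reference, a minimal sketch of the kind of setup that produces this failure; the `Suggestions` model and prompt wiring are reconstructed from the traceback and the format instructions visible in the completion, not copied from the original code:

```python
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import PromptTemplate
from pydantic import BaseModel, Field


class Suggestions(BaseModel):
    words: list[str] = Field(description="list of substitute words based on context")


parser = PydanticOutputParser(pydantic_object=Suggestions)

prompt = PromptTemplate(
    template=(
        "Offer a list of suggestions to substitute the word '{target_word}' "
        "in this context: {context}\n{format_instructions}"
    ),
    input_variables=["target_word", "context"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

# output = llm.invoke(prompt.format(target_word=..., context=...))
# parser.parse(output)
```

Note that the "completion" in the traceback is the JSON *schema* from the format instructions rather than an instance of it (there is no top-level `"words"` key), which is why validation reports `words: Field required`. The parser behaves correctly; the model simply echoed the schema back.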
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:27:10) [MSC v.1938 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.4
> langchain: 0.2.2
> langchain_community: 0.2.4
> langsmith: 0.1.73
> langchain_google_community: 1.0.5
> langchain_huggingface: 0.0.3
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Failed to parse Suggestions from completion | https://api.github.com/repos/langchain-ai/langchain/issues/22952/comments | 2 | 2024-06-16T07:10:57Z | 2024-06-18T11:22:23Z | https://github.com/langchain-ai/langchain/issues/22952 | 2,355,491,461 | 22,952 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
chain.py
```
import os
from langchain_google_genai import ChatGoogleGenerativeAI

google_api: str = os.environ["GOOGLE_API_KEY"]
vertex_model: str = os.environ["vertex_model"]
llm = ChatGoogleGenerativeAI(
    temperature=1.0,
    model=vertex_model,
    google_api_key=google_api,
    safety_settings=safety_settings_NONE,  # assumed to be defined elsewhere in the project
)
```
server.py
```
from fastapi import Depends, FastAPI, HTTPException, Request

app = FastAPI()  # assumed; created elsewhere in the project

@app.post("/admin/ases-ai/{instance_id}/content-generate/invoke", include_in_schema=True)
async def ai_route(instance_id: str, token: str = Depends(validate_token), request: Request = None):
    instance_id = token["holder"]
    try:
        path = f"/admin/ases-ai/{instance_id}/question-generate/pppk/invoke"
        # invoke_api, soal_pppk_chain, set_langfuse_config and validate_token
        # are project helpers defined elsewhere
        response = await invoke_api(
            api_chain=soal_pppk_chain.with_config(config=set_langfuse_config(instance_id=instance_id)),
            path=path,
            request=request,
        )
        return response
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Status code: 500, Error: {str(e)}")
```
### Error Message and Stack Trace (if applicable)
`httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.`
### Description
I am trying to run `langchain serve`. When calling the API (specifically the POST route above), the request fails with `httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.`
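For context, `httpx` raises exactly this error when a request is sent to a bare path that has no scheme or host. Since the `invoke_api` helper is not shown, here is a hedged sketch of the likely failure mode and fix; the helper name, port, and `base_url` are assumptions:

```python
import httpx

path = "/admin/ases-ai/123/question-generate/pppk/invoke"


async def broken(payload: dict) -> dict:
    async with httpx.AsyncClient() as client:
        # raises httpx.UnsupportedProtocol: the URL is just a path,
        # with no http:// or https:// scheme to resolve it against
        resp = await client.post(path, json=payload)
        return resp.json()


async def fixed(payload: dict) -> dict:
    # give the client a base_url (or pass an absolute URL) so the
    # relative path resolves to a full http(s) address
    async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
        resp = await client.post(path, json=payload)
        return resp.json()
```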
### System Info
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.7.0
anyio==3.7.1
attrs==23.2.0
backoff==2.2.1
cachetools==5.3.3
certifi==2024.6.2
cffi==1.16.0
charset-normalizer==3.3.2
click==8.1.7
colorama==0.4.6
cryptography==42.0.7
dataclasses-json==0.6.7
dnspython==2.6.1
fastapi==0.110.3
frozenlist==1.4.1
gitdb==4.0.11
GitPython==3.1.43
google-ai-generativelanguage==0.6.4
google-api-core==2.19.0
google-api-python-client==2.133.0
google-auth==2.30.0
google-auth-httplib2==0.2.0
google-cloud-discoveryengine==0.11.12
google-generativeai==0.5.4
googleapis-common-protos==1.63.1
grpcio==1.64.1
grpcio-status==1.62.2
h11==0.14.0
httpcore==1.0.5
httplib2==0.22.0
httpx==0.27.0
httpx-sse==0.4.0
idna==3.7
jsonpatch==1.33
jsonpointer==3.0.0
jsonschema==4.22.0
jsonschema-specifications==2023.12.1
langchain==0.2.5
langchain-cli==0.0.25
langchain-community==0.2.5
langchain-core==0.2.7
langchain-google-genai==1.0.6
langchain-mongodb==0.1.6
langchain-text-splitters==0.2.1
langfuse==2.36.1
langserve==0.2.2
langsmith==0.1.77
libcst==1.4.0
markdown-it-py==3.0.0
marshmallow==3.21.3
mdurl==0.1.2
multidict==6.0.5
mypy-extensions==1.0.0
numpy==1.26.4
orjson==3.10.5
packaging==23.2
pipdeptree==2.22.0
proto-plus==1.23.0
protobuf==4.25.3
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycparser==2.22
pydantic==2.7.4
pydantic_core==2.18.4
Pygments==2.18.0
PyJWT==2.3.0
pymongo==4.7.2
pyparsing==3.1.2
pypdf==4.2.0
pyproject-toml==0.0.10
python-dotenv==1.0.1
python-multipart==0.0.9
PyYAML==6.0.1
referencing==0.35.1
requests==2.32.3
rfc3986==1.5.0
rich==13.7.1
rpds-py==0.18.1
rsa==4.9
shellingham==1.5.4
smmap==5.0.1
sniffio==1.3.1
SQLAlchemy==2.0.30
sse-starlette==1.8.2
starlette==0.37.2
tenacity==8.3.0
toml==0.10.2
tomlkit==0.12.5
tqdm==4.66.4
typer==0.9.4
typing-inspect==0.9.0
typing_extensions==4.12.2
uritemplate==4.1.1
urllib3==2.2.1
uvicorn==0.23.2
wrapt==1.16.0
yarl==1.9.4 | httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol | https://api.github.com/repos/langchain-ai/langchain/issues/22951/comments | 0 | 2024-06-16T06:50:13Z | 2024-06-16T06:52:45Z | https://github.com/langchain-ai/langchain/issues/22951 | 2,355,482,036 | 22,951 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/streaming/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
![image](https://github.com/langchain-ai/langchain/assets/91041770/80cdc4c6-38ad-4e7c-891e-f745c46ec4b5)
https://python.langchain.com/v0.2/docs/how_to/streaming/#chains
### Idea or request for content:
It looks like `streaming` is misspelled as `sreaming`.
The typo appears at the end of the `Chains` section, under `Using stream events`. | Wrong spell in DOC: <Issue related to /v0.2/docs/how_to/streaming/> | https://api.github.com/repos/langchain-ai/langchain/issues/22935/comments | 0 | 2024-06-15T09:03:02Z | 2024-06-15T09:13:18Z | https://github.com/langchain-ai/langchain/issues/22935 | 2,354,680,156 | 22,935
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/extraction_examples/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I was having a problem using `ChatMistralAI` for extraction with reference examples, so I followed the how-to page exactly. Without examples it works fine, but when I add the examples as described here:
https://python.langchain.com/v0.2/docs/how_to/extraction_examples/#with-examples-
I get the following error:
```
HTTPStatusError: Error response 400 while fetching https://api.mistral.ai/v1/chat/completions: {"object":"error","message":"Unexpected role 'user' after role 'tool'","type":"invalid_request_error","param":null,"code":null}
```
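Reading the 400, Mistral's chat endpoint appears to reject any conversation in which a user turn directly follows a tool turn, and the few-shot example messages from the how-to produce exactly that sequence. A hedged sketch of a possible workaround, inserting a short `AIMessage` after each `ToolMessage` (this is my reading of the error, not an official fix, and the confirmation text is arbitrary):

```python
from langchain_core.messages import AIMessage, BaseMessage, ToolMessage


def patch_examples(messages: list[BaseMessage]) -> list[BaseMessage]:
    """Insert an AI turn after every tool turn so that a 'user' role
    never immediately follows a 'tool' role in the request."""
    patched: list[BaseMessage] = []
    for i, msg in enumerate(messages):
        patched.append(msg)
        next_is_human = i + 1 < len(messages) and messages[i + 1].type == "human"
        if isinstance(msg, ToolMessage) and next_is_human:
            patched.append(AIMessage(content="Noted."))
    return patched
```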
### Idea or request for content:
_No response_ | MistralAI Extraction How-To (with examples) throws an error | https://api.github.com/repos/langchain-ai/langchain/issues/22928/comments | 4 | 2024-06-14T23:49:43Z | 2024-06-26T11:15:59Z | https://github.com/langchain-ai/langchain/issues/22928 | 2,354,262,911 | 22,928 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/chat/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The page https://python.langchain.com/v0.2/docs/integrations/chat/#advanced-features shows a red cross for `ChatOllama` under JSON mode. However, https://python.langchain.com/v0.2/docs/concepts/#structured-output says: "Some models, such as (...) Ollama support a feature called JSON mode." The examples at https://python.langchain.com/v0.2/docs/integrations/chat/ollama/#extraction also demonstrate that JSON mode exists.
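For reference, the community `ChatOllama` model exposes JSON mode through its `format` parameter, which the extraction examples rely on. A minimal sketch (the model name is an assumption):

```python
from langchain_community.chat_models import ChatOllama

llm = ChatOllama(model="llama3", format="json", temperature=0)
msg = llm.invoke(
    "Return a JSON object with keys 'city' and 'country' describing: Paris"
)
print(msg.content)  # e.g. {"city": "Paris", "country": "France"}
```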
### Idea or request for content:
Insert green checkbox for Ollama/JSON on https://python.langchain.com/v0.2/docs/integrations/chat/#advanced-features | DOC: <Issue related to /v0.2/docs/integrations/chat/> Ollama JSON mode seems to be marked incorrectly as NO | https://api.github.com/repos/langchain-ai/langchain/issues/22910/comments | 1 | 2024-06-14T17:48:39Z | 2024-06-14T23:27:56Z | https://github.com/langchain-ai/langchain/issues/22910 | 2,353,826,349 | 22,910 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from pymilvus import (
    Collection,
    CollectionSchema,
    DataType,
    FieldSchema,
    WeightedRanker,
    connections,
)

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_milvus.retrievers import MilvusCollectionHybridSearchRetriever
from langchain_milvus.utils.sparse import BM25SparseEmbedding

# from langchain_openai import ChatOpenAI, OpenAIEmbeddings
import logging

logger = logging.getLogger("gunicorn.error")

texts = [
    "In 'The Whispering Walls' by Ava Moreno, a young journalist named Sophia uncovers a decades-old conspiracy hidden within the crumbling walls of an ancient mansion, where the whispers of the past threaten to destroy her own sanity.",
    "In 'The Last Refuge' by Ethan Blackwood, a group of survivors must band together to escape a post-apocalyptic wasteland, where the last remnants of humanity cling to life in a desperate bid for survival.",
    "In 'The Memory Thief' by Lila Rose, a charismatic thief with the ability to steal and manipulate memories is hired by a mysterious client to pull off a daring heist, but soon finds themselves trapped in a web of deceit and betrayal.",
    "In 'The City of Echoes' by Julian Saint Clair, a brilliant detective must navigate a labyrinthine metropolis where time is currency, and the rich can live forever, but at a terrible cost to the poor.",
    "In 'The Starlight Serenade' by Ruby Flynn, a shy astronomer discovers a mysterious melody emanating from a distant star, which leads her on a journey to uncover the secrets of the universe and her own heart.",
    "In 'The Shadow Weaver' by Piper Redding, a young orphan discovers she has the ability to weave powerful illusions, but soon finds herself at the center of a deadly game of cat and mouse between rival factions vying for control of the mystical arts.",
    "In 'The Lost Expedition' by Caspian Grey, a team of explorers ventures into the heart of the Amazon rainforest in search of a lost city, but soon finds themselves hunted by a ruthless treasure hunter and the treacherous jungle itself.",
    "In 'The Clockwork Kingdom' by Augusta Wynter, a brilliant inventor discovers a hidden world of clockwork machines and ancient magic, where a rebellion is brewing against the tyrannical ruler of the land.",
    "In 'The Phantom Pilgrim' by Rowan Welles, a charismatic smuggler is hired by a mysterious organization to transport a valuable artifact across a war-torn continent, but soon finds themselves pursued by deadly assassins and rival factions.",
    "In 'The Dreamwalker's Journey' by Lyra Snow, a young dreamwalker discovers she has the ability to enter people's dreams, but soon finds herself trapped in a surreal world of nightmares and illusions, where the boundaries between reality and fantasy blur.",
]

from langchain_openai import AzureOpenAIEmbeddings

dense_embedding_func: AzureOpenAIEmbeddings = AzureOpenAIEmbeddings(
    azure_deployment="************",
    openai_api_version="************",
    azure_endpoint="*******************",
    api_key="************************",
)
# dense_embedding_func = OpenAIEmbeddings()
dense_dim = len(dense_embedding_func.embed_query(texts[1]))
# logger.info(f"DENSE DIM - {dense_dim}")
print("DENSE DIM")
print(dense_dim)

sparse_embedding_func = BM25SparseEmbedding(corpus=texts)
sparse_embedding = sparse_embedding_func.embed_query(texts[1])
print("SPARSE EMBEDDING")
print(sparse_embedding)

# connections.connect(uri=CONNECTION_URI)
connections.connect(
    host="**************",  # Replace with your Milvus server IP
    port="***********",
    user="**************",
    password="***************",
    db_name="*****************",
)
print("CONNECTED")

pk_field = "doc_id"
dense_field = "dense_vector"
sparse_field = "sparse_vector"
text_field = "text"

fields = [
    FieldSchema(
        name=pk_field,
        dtype=DataType.VARCHAR,
        is_primary=True,
        auto_id=True,
        max_length=100,
    ),
    FieldSchema(name=dense_field, dtype=DataType.FLOAT_VECTOR, dim=dense_dim),
    FieldSchema(name=sparse_field, dtype=DataType.SPARSE_FLOAT_VECTOR),
    FieldSchema(name=text_field, dtype=DataType.VARCHAR, max_length=65_535),
]
schema = CollectionSchema(fields=fields, enable_dynamic_field=False)
collection = Collection(
    name="IntroductionToTheNovels", schema=schema, consistency_level="Strong"
)
print("SCHEMA CREATED")

dense_index = {"index_type": "FLAT", "metric_type": "IP"}
collection.create_index("dense_vector", dense_index)
sparse_index = {"index_type": "SPARSE_INVERTED_INDEX", "metric_type": "IP"}
collection.create_index("sparse_vector", sparse_index)
print("INDEX CREATED")
collection.flush()
print("FLUSHED")

entities = []
for text in texts:
    entity = {
        dense_field: dense_embedding_func.embed_documents([text])[0],
        sparse_field: sparse_embedding_func.embed_documents([text])[0],
        text_field: text,
    }
    entities.append(entity)
print("ENTITIES")

collection.insert(entities)
print("INSERTED")
collection.load()
print("LOADED")

sparse_search_params = {"metric_type": "IP"}
dense_search_params = {"metric_type": "IP", "params": {}}
retriever = MilvusCollectionHybridSearchRetriever(
    collection=collection,
    rerank=WeightedRanker(0.5, 0.5),
    anns_fields=[dense_field, sparse_field],
    field_embeddings=[dense_embedding_func, sparse_embedding_func],
    field_search_params=[dense_search_params, sparse_search_params],
    top_k=3,
    text_field=text_field,
)
print("RETRIEVER CREATED")

documents = retriever.invoke("What are the story about ventures?")
print(documents)
```
### Error Message and Stack Trace (if applicable)
```
RPC error: [create_index], <MilvusException: (code=1100, message=create index on 104 field is not supported: invalid parameter[expected=supported field][actual=create index on 104 field])>, <Time:{'RPC start': '2024-06-14 13:38:35.242645', 'RPC error': '2024-06-14 13:38:35.247294'}>
```
### Description
I am trying to use hybrid search with a Milvus database through the `langchain-milvus` library, but creating the index for the sparse vector field fails with:

`RPC error: [create_index], <MilvusException: (code=1100, message=create index on 104 field is not supported: invalid parameter[expected=supported field][actual=create index on 104 field])>`

I have also tried creating the collection directly with `MilvusClient`, and it raises the same error.

We committed to this hybrid-search implementation after finding it in LangChain's documentation, and we are now stuck mid-project, so please resolve this as soon as possible.
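One thing worth checking: `DataType.SPARSE_FLOAT_VECTOR` and the `SPARSE_INVERTED_INDEX` index type were only added in Milvus server 2.4, and an older server refuses the field with a message of exactly this shape. A hedged check of the server version (this is a guess at the root cause, not a confirmed diagnosis; connection details are placeholders):

```python
from pymilvus import connections, utility

connections.connect(host="localhost", port="19530")
# Sparse vector fields require a 2.4.x (or newer) server; 2.3.x and
# older cannot index a SPARSE_FLOAT_VECTOR field.
print(utility.get_server_version())
```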
### System Info
pip freeze | grep langchain -
langchain-core==0.2.6
langchain-milvus==0.1.1
langchain-openai==0.1.8
----------------
Platform - linux
----------------
python version - 3.11.7
-----------------------------
python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #73~20.04.1-Ubuntu SMP Mon May 6 09:43:44 UTC 2024
> Python Version: 3.11.7 (main, Dec 8 2023, 18:56:57) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.2.6
> langsmith: 0.1.77
> langchain_milvus: 0.1.1
> langchain_openai: 0.1.8
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | RPC error: [create_index], <MilvusException: (code=1100, message=create index on 104 field is not supported: invalid parameter[expected=supported field][actual=create index on 104 field])>, <Time:{'RPC start': '2024-06-14 13:38:35.242645', 'RPC error': '2024-06-14 13:38:35.247294'}> | https://api.github.com/repos/langchain-ai/langchain/issues/22901/comments | 1 | 2024-06-14T14:22:14Z | 2024-06-18T07:03:54Z | https://github.com/langchain-ai/langchain/issues/22901 | 2,353,491,955 | 22,901 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
"ollama = Ollama(model=vicuna)
print(ollama.invoke("why is the sky blue"))
DATA_PATH = '/home/lamia/arenault/test_ollama_container/advancedragtest/dataPV'
DB_FAISS_PATH = 'vectorstore1/db_faiss'
loader = DirectoryLoader(DATA_PATH,
glob='*.pdf',
loader_cls=PyPDFLoader)
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000,
chunk_overlap=250)
texts = text_splitter.split_documents(documents)
embeddings = FastEmbedEmbeddings(model_name="sentence-transformers/paraphrase-multilingual-mpnet-base-v2")
db = FAISS.from_documents(texts, embeddings)
db.save_local(DB_FAISS_PATH)
question="What is said during the meeting ? "
docs = db.similarity_search(question)
len(docs)
qachain=RetrievalQA.from_chain_type(ollama, retriever=db.as_retriever())
res = qachain.invoke({"query": question})
print(res['result']) "
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
  File "/home/lamia/user/test_ollama_container/advancedragtest/rp1.py", line 53, in <module>
    db = FAISS.from_documents(texts, embeddings)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lamia/user/.local/lib/python3.12/site-packages/langchain_core/vectorstores.py", line 550, in from_documents
    return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lamia/user/.local/lib/python3.12/site-packages/langchain_community/vectorstores/faiss.py", line 930, in from_texts
    embeddings = embedding.embed_documents(texts)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lamia/user/.local/lib/python3.12/site-packages/langchain_community/embeddings/fastembed.py", line 107, in embed_documents
    return [e.tolist() for e in embeddings]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lamia/user/.local/lib/python3.12/site-packages/fastembed/text/text_embedding.py", line 95, in embed
    yield from self.model.embed(documents, batch_size, parallel, **kwargs)
  File "/home/lamia/user/.local/lib/python3.12/site-packages/fastembed/text/onnx_embedding.py", line 268, in embed
    yield from self._embed_documents(
  File "/home/lamia/user/.local/lib/python3.12/site-packages/fastembed/text/onnx_text_model.py", line 105, in _embed_documents
    yield from self._post_process_onnx_output(self.onnx_embed(batch))
                                              ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lamia/user/.local/lib/python3.12/site-packages/fastembed/text/onnx_text_model.py", line 75, in onnx_embed
    model_output = self.model.run(self.ONNX_OUTPUT_NAMES, onnx_input)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/lamia/user/.local/lib/python3.12/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 220, in run
    return self._sess.run(output_names, input_feed, run_options)

2024-06-13 14:16:16.791673276 [E:onnxruntime:Default, env.cc:228 ThreadMain] pthread_setaffinity_np failed for thread: 6235, index: 29, mask: {30, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
2024-06-13 14:16:16.795639259 [E:onnxruntime:Default, env.cc:228 ThreadMain] pthread_setaffinity_np failed for thread: 6237, index: 31, mask: {32, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
[... the same pthread_setaffinity_np error repeats for every worker thread (threads 6206-6240, indices 0-34) ...]
2024-06-13 14:16:17.058635376 [E:onnxruntime:Default, env.cc:228 ThreadMain] pthread_setaffinity_np failed for thread: 6206, index: 0, mask: {1, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
```
### Description
Hey there, I am quite new to this and would be very grateful if you could propose a way to solve this!

Here is what I tried (without success):

```python
import os

os.environ["OMP_NUM_THREADS"] = "4"

# Now import ONNX Runtime and other libraries
import onnxruntime as ort
```

I also tried:

```python
import onnxruntime as ort

sess_options = ort.SessionOptions()
sess_options.intra_op_num_threads = 15
sess_options.inter_op_num_threads = 15
```

I am running my code in a Singularity container.

I would be incredibly grateful for any help. Thanks a lot.
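One workaround worth trying: the error message itself says to "specify the number of threads explicitly", and `FastEmbedEmbeddings` exposes a `threads` parameter that is forwarded to the underlying fastembed/onnxruntime session. A hedged sketch (the thread count is arbitrary, and I have not verified this inside a Singularity container):

```python
from langchain_community.embeddings import FastEmbedEmbeddings

embeddings = FastEmbedEmbeddings(
    model_name="sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
    threads=4,  # pin an explicit thread count so onnxruntime need not set CPU affinity
)
```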
### System Info
python : 3.12
pip freeze
aiohttp==3.9.5
aiosignal==1.3.1
anaconda-anon-usage @ file:///croot/anaconda-anon-usage_1710965072196/work
annotated-types==0.7.0
anyio @ file:///home/conda/feedstock_root/build_artifacts/anyio_1708355285029/work
archspec @ file:///croot/archspec_1709217642129/work
argon2-cffi @ file:///home/conda/feedstock_root/build_artifacts/argon2-cffi_1692818318753/work
argon2-cffi-bindings @ file:///home/conda/feedstock_root/build_artifacts/argon2-cffi-bindings_1695386549414/work
arrow @ file:///home/conda/feedstock_root/build_artifacts/arrow_1696128962909/work
asgiref==3.8.1
asttokens @ file:///home/conda/feedstock_root/build_artifacts/asttokens_1698341106958/work
async-lru @ file:///home/conda/feedstock_root/build_artifacts/async-lru_1690563019058/work
attrs @ file:///home/conda/feedstock_root/build_artifacts/attrs_1704011227531/work
Babel @ file:///home/conda/feedstock_root/build_artifacts/babel_1702422572539/work
backoff==2.2.1
bcrypt==4.1.3
beautifulsoup4 @ file:///home/conda/feedstock_root/build_artifacts/beautifulsoup4_1705564648255/work
bleach @ file:///home/conda/feedstock_root/build_artifacts/bleach_1696630167146/work
boltons @ file:///work/perseverance-python-buildout/croot/boltons_1698851177130/work
Brotli @ file:///croot/brotli-split_1714483155106/work
build==1.2.1
cached-property @ file:///home/conda/feedstock_root/build_artifacts/cached_property_1615209429212/work
cachetools==5.3.3
certifi @ file:///home/conda/feedstock_root/build_artifacts/certifi_1707022139797/work/certifi
cffi @ file:///croot/cffi_1714483155441/work
chardet==5.2.0
charset-normalizer==3.3.2
chroma-hnswlib==0.7.3
chromadb==0.5.0
click==8.1.7
coloredlogs==15.0.1
comm @ file:///home/conda/feedstock_root/build_artifacts/comm_1710320294760/work
conda @ file:///home/conda/feedstock_root/build_artifacts/conda_1715631928597/work
conda-content-trust @ file:///croot/conda-content-trust_1714483159009/work
conda-libmamba-solver @ file:///croot/conda-libmamba-solver_1706733287605/work/src
conda-package-handling @ file:///croot/conda-package-handling_1714483155348/work
conda_package_streaming @ file:///work/perseverance-python-buildout/croot/conda-package-streaming_1698847176583/work
contourpy @ file:///home/conda/feedstock_root/build_artifacts/contourpy_1712429918028/work
cryptography @ file:///croot/cryptography_1714660666131/work
cycler @ file:///home/conda/feedstock_root/build_artifacts/cycler_1696677705766/work
dataclasses-json==0.6.6
debugpy @ file:///home/conda/feedstock_root/build_artifacts/debugpy_1707444401483/work
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1641555617451/work
deepdiff==7.0.1
defusedxml @ file:///home/conda/feedstock_root/build_artifacts/defusedxml_1615232257335/work
Deprecated==1.2.14
dirtyjson==1.0.8
diskcache==5.6.3
distro @ file:///croot/distro_1714488253808/work
dnspython==2.6.1
email_validator==2.1.1
emoji==2.12.1
entrypoints @ file:///home/conda/feedstock_root/build_artifacts/entrypoints_1643888246732/work
exceptiongroup @ file:///home/conda/feedstock_root/build_artifacts/exceptiongroup_1704921103267/work
executing @ file:///home/conda/feedstock_root/build_artifacts/executing_1698579936712/work
faiss-cpu==1.8.0
fastapi==0.111.0
fastapi-cli==0.0.4
fastembed==0.3.0
fastjsonschema @ file:///home/conda/feedstock_root/build_artifacts/python-fastjsonschema_1703780968325/work/dist
filelock==3.14.0
filetype==1.2.0
FlashRank==0.2.5
flatbuffers==24.3.25
fonttools @ file:///home/conda/feedstock_root/build_artifacts/fonttools_1717209197958/work
fqdn @ file:///home/conda/feedstock_root/build_artifacts/fqdn_1638810296540/work/dist
frozendict @ file:///home/conda/feedstock_root/build_artifacts/frozendict_1715092752354/work
frozenlist==1.4.1
fsspec==2024.6.0
google-auth==2.30.0
googleapis-common-protos==1.63.1
greenlet==3.0.3
groq==0.8.0
grpcio==1.64.1
grpcio-tools==1.64.1
h11 @ file:///home/conda/feedstock_root/build_artifacts/h11_1664132893548/work
h2 @ file:///home/conda/feedstock_root/build_artifacts/h2_1634280454336/work
hpack==4.0.0
httpcore @ file:///home/conda/feedstock_root/build_artifacts/httpcore_1711596990900/work
httptools==0.6.1
httpx @ file:///home/conda/feedstock_root/build_artifacts/httpx_1708530890843/work
huggingface-hub==0.23.3
humanfriendly==10.0
hyperframe @ file:///home/conda/feedstock_root/build_artifacts/hyperframe_1619110129307/work
idna @ file:///croot/idna_1714398848350/work
importlib_metadata @ file:///home/conda/feedstock_root/build_artifacts/importlib-metadata_1710971335535/work
importlib_resources @ file:///home/conda/feedstock_root/build_artifacts/importlib_resources_1711040877059/work
ipykernel @ file:///home/conda/feedstock_root/build_artifacts/ipykernel_1708996548741/work
ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1717182742060/work
isoduration @ file:///home/conda/feedstock_root/build_artifacts/isoduration_1638811571363/work/dist
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1696326070614/work
Jinja2 @ file:///home/conda/feedstock_root/build_artifacts/jinja2_1715127149914/work
joblib==1.4.2
json5 @ file:///home/conda/feedstock_root/build_artifacts/json5_1712986206667/work
jsonpatch @ file:///croot/jsonpatch_1714483231291/work
jsonpath-python==1.0.6
jsonpointer==2.1
jsonschema @ file:///home/conda/feedstock_root/build_artifacts/jsonschema-meta_1714573116818/work
jsonschema-specifications @ file:///tmp/tmpkv1z7p57/src
jupyter-events @ file:///home/conda/feedstock_root/build_artifacts/jupyter_events_1710805637316/work
jupyter-lsp @ file:///home/conda/feedstock_root/build_artifacts/jupyter-lsp-meta_1712707420468/work/jupyter-lsp
jupyter_client @ file:///home/conda/feedstock_root/build_artifacts/jupyter_client_1716472197302/work
jupyter_core @ file:///home/conda/feedstock_root/build_artifacts/jupyter_core_1710257406420/work
jupyter_server @ file:///home/conda/feedstock_root/build_artifacts/jupyter_server_1717122053158/work
jupyter_server_terminals @ file:///home/conda/feedstock_root/build_artifacts/jupyter_server_terminals_1710262634903/work
jupyterlab @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_1716470278966/work
jupyterlab_pygments @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_pygments_1707149102966/work
jupyterlab_server @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_server-split_1716433953404/work
kiwisolver @ file:///home/conda/feedstock_root/build_artifacts/kiwisolver_1695379925569/work
kubernetes==30.1.0
langchain==0.2.2
langchain-community==0.2.3
langchain-core==0.2.4
langchain-groq==0.1.5
langchain-text-splitters==0.2.1
langdetect==1.0.9
langsmith==0.1.74
libmambapy @ file:///croot/mamba-split_1714483352891/work/libmambapy
llama-index-core==0.10.43.post1
llama-index-readers-file==0.1.23
llama-parse==0.4.4
llama_cpp_python==0.2.67
llamaindex-py-client==0.1.19
loguru==0.7.2
lxml==5.2.2
Markdown==3.6
markdown-it-py @ file:///home/conda/feedstock_root/build_artifacts/markdown-it-py_1686175045316/work
MarkupSafe @ file:///home/conda/feedstock_root/build_artifacts/markupsafe_1706899920239/work
marshmallow==3.21.3
matplotlib @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-suite_1715976243782/work
matplotlib-inline @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-inline_1713250518406/work
mdurl @ file:///home/conda/feedstock_root/build_artifacts/mdurl_1704317613764/work
menuinst @ file:///croot/menuinst_1714510563922/work
mistune @ file:///home/conda/feedstock_root/build_artifacts/mistune_1698947099619/work
mmh3==4.1.0
monotonic==1.6
mpmath==1.3.0
multidict==6.0.5
munkres==1.1.4
mypy-extensions==1.0.0
nbclient @ file:///home/conda/feedstock_root/build_artifacts/nbclient_1710317608672/work
nbconvert @ file:///home/conda/feedstock_root/build_artifacts/nbconvert-meta_1714477135335/work
nbformat @ file:///home/conda/feedstock_root/build_artifacts/nbformat_1712238998817/work
nest_asyncio @ file:///home/conda/feedstock_root/build_artifacts/nest-asyncio_1705850609492/work
networkx==3.3
nltk==3.8.1
notebook_shim @ file:///home/conda/feedstock_root/build_artifacts/notebook-shim_1707957777232/work
numpy @ file:///home/conda/feedstock_root/build_artifacts/numpy_1707225359967/work/dist/numpy-1.26.4-cp312-cp312-linux_x86_64.whl#sha256=031b7d6b2e5e604d9e21fc21be713ebf28ce133ec872dce6de006742d5e49bab
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.19.3
nvidia-nvjitlink-cu12==12.5.40
nvidia-nvtx-cu12==12.1.105
oauthlib==3.2.2
ollama==0.2.1
onnx==1.16.1
onnxruntime==1.18.0
onnxruntime-gpu==1.18.0
openai==1.31.2
opentelemetry-api==1.25.0
opentelemetry-exporter-otlp-proto-common==1.25.0
opentelemetry-exporter-otlp-proto-grpc==1.25.0
opentelemetry-instrumentation==0.46b0
opentelemetry-instrumentation-asgi==0.46b0
opentelemetry-instrumentation-fastapi==0.46b0
opentelemetry-proto==1.25.0
opentelemetry-sdk==1.25.0
opentelemetry-semantic-conventions==0.46b0
opentelemetry-util-http==0.46b0
ordered-set==4.1.0
orjson==3.10.3
overrides @ file:///home/conda/feedstock_root/build_artifacts/overrides_1706394519472/work
packaging @ file:///croot/packaging_1710807400464/work
pandas @ file:///home/conda/feedstock_root/build_artifacts/pandas_1715897630316/work
pandocfilters @ file:///home/conda/feedstock_root/build_artifacts/pandocfilters_1631603243851/work
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1712320355065/work
pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1706113125309/work
pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1602536217715/work
pillow @ file:///croot/pillow_1714398848491/work
pkgutil_resolve_name @ file:///home/conda/feedstock_root/build_artifacts/pkgutil-resolve-name_1694617248815/work
platformdirs @ file:///work/perseverance-python-buildout/croot/platformdirs_1701732573265/work
pluggy @ file:///work/perseverance-python-buildout/croot/pluggy_1698805497733/work
ply @ file:///home/conda/feedstock_root/build_artifacts/ply_1712242996588/work
portalocker==2.8.2
posthog==3.5.0
prometheus_client @ file:///home/conda/feedstock_root/build_artifacts/prometheus_client_1707932675456/work
prompt_toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1717583537988/work
protobuf==4.25.3
psutil @ file:///home/conda/feedstock_root/build_artifacts/psutil_1705722396628/work
ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1609419310487/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pure-eval @ file:///home/conda/feedstock_root/build_artifacts/pure_eval_1642875951954/work
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycosat @ file:///croot/pycosat_1714510623388/work
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
pydantic==2.7.3
pydantic_core==2.18.4
Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1714846767233/work
pyparsing @ file:///home/conda/feedstock_root/build_artifacts/pyparsing_1709721012883/work
pypdf==4.2.0
PyPDF2==3.0.1
PyPika==0.48.9
pyproject_hooks==1.1.0
PyQt5==5.15.10
PyQt5-sip @ file:///work/perseverance-python-buildout/croot/pyqt-split_1698847927472/work/pyqt_sip
PySocks @ file:///work/perseverance-python-buildout/croot/pysocks_1698845478203/work
PyStemmer==2.2.0.1
python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1709299778482/work
python-dotenv==1.0.1
python-iso639==2024.4.27
python-json-logger @ file:///home/conda/feedstock_root/build_artifacts/python-json-logger_1677079630776/work
python-magic==0.4.27
python-multipart==0.0.9
pytz @ file:///home/conda/feedstock_root/build_artifacts/pytz_1706886791323/work
PyYAML @ file:///home/conda/feedstock_root/build_artifacts/pyyaml_1695373450623/work
pyzmq @ file:///home/conda/feedstock_root/build_artifacts/pyzmq_1715024373784/work
qdrant-client==1.9.1
rapidfuzz==3.9.3
referencing @ file:///home/conda/feedstock_root/build_artifacts/referencing_1714619483868/work
regex==2024.5.15
requests @ file:///croot/requests_1707355572290/work
requests-oauthlib==2.0.0
requests-toolbelt==1.0.0
rfc3339-validator @ file:///home/conda/feedstock_root/build_artifacts/rfc3339-validator_1638811747357/work
rfc3986-validator @ file:///home/conda/feedstock_root/build_artifacts/rfc3986-validator_1598024191506/work
rich @ file:///home/conda/feedstock_root/build_artifacts/rich-split_1709150387247/work/dist
rpds-py @ file:///home/conda/feedstock_root/build_artifacts/rpds-py_1715089993456/work
rsa==4.9
ruamel.yaml @ file:///work/perseverance-python-buildout/croot/ruamel.yaml_1698863605521/work
safetensors==0.4.3
scikit-learn==1.5.0
scipy==1.13.1
Send2Trash @ file:///home/conda/feedstock_root/build_artifacts/send2trash_1712584999685/work
sentence-transformers==3.0.1
setuptools==69.5.1
shellingham==1.5.4
sip @ file:///home/conda/feedstock_root/build_artifacts/sip_1697300425834/work
six @ file:///home/conda/feedstock_root/build_artifacts/six_1620240208055/work
sniffio @ file:///home/conda/feedstock_root/build_artifacts/sniffio_1708952932303/work
snowballstemmer==2.2.0
soupsieve @ file:///home/conda/feedstock_root/build_artifacts/soupsieve_1693929250441/work
SQLAlchemy==2.0.30
stack-data @ file:///home/conda/feedstock_root/build_artifacts/stack_data_1669632077133/work
starlette==0.37.2
striprtf==0.0.26
sympy==1.12.1
tabulate==0.9.0
tenacity==8.3.0
terminado @ file:///home/conda/feedstock_root/build_artifacts/terminado_1710262609923/work
threadpoolctl==3.5.0
tiktoken==0.7.0
tinycss2 @ file:///home/conda/feedstock_root/build_artifacts/tinycss2_1713974937325/work
tokenizers==0.19.1
tomli @ file:///home/conda/feedstock_root/build_artifacts/tomli_1644342247877/work
torch==2.2.0
tornado @ file:///home/conda/feedstock_root/build_artifacts/tornado_1708363096407/work
tqdm @ file:///croot/tqdm_1714567712644/work
traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1713535121073/work
transformers==4.41.2
triton==2.2.0
truststore @ file:///work/perseverance-python-buildout/croot/truststore_1701735771625/work
typer==0.12.3
types-python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/types-python-dateutil_1710589910274/work
typing-inspect==0.9.0
typing-utils @ file:///home/conda/feedstock_root/build_artifacts/typing_utils_1622899189314/work
typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1717287769032/work
tzdata @ file:///home/conda/feedstock_root/build_artifacts/python-tzdata_1707747584337/work
ujson==5.10.0
unstructured==0.14.4
unstructured-client==0.23.0
uri-template @ file:///home/conda/feedstock_root/build_artifacts/uri-template_1688655812972/work/dist
urllib3 @ file:///croot/urllib3_1707770551213/work
uvicorn==0.30.1
uvloop==0.19.0
watchfiles==0.22.0
wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1704731205417/work
webcolors @ file:///home/conda/feedstock_root/build_artifacts/webcolors_1717667289718/work
webencodings @ file:///home/conda/feedstock_root/build_artifacts/webencodings_1694681268211/work
websocket-client @ file:///home/conda/feedstock_root/build_artifacts/websocket-client_1713923384721/work
websockets==12.0
wheel==0.43.0
wrapt==1.16.0
yarl==1.9.4
| [E:onnxruntime:Default, env.cc:228 ThreadMain] pthread_setaffinity_np failed for thread: 8353, index: 0, mask: {1, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set. | https://api.github.com/repos/langchain-ai/langchain/issues/22898/comments | 0 | 2024-06-14T13:11:49Z | 2024-06-14T13:23:00Z | https://github.com/langchain-ai/langchain/issues/22898 | 2,353,353,463 | 22,898 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import warnings
from typing import Iterator, Optional, Union

import numpy as np
from langchain_core.documents import Document
from langchain_community.document_loaders.base import BaseBlobParser
from langchain_community.document_loaders.blob_loaders import Blob
# These constants and the OCR helper live in the same module as the
# original parser (langchain_community.document_loaders.parsers.pdf).
from langchain_community.document_loaders.parsers.pdf import (
    _PDF_FILTER_WITH_LOSS,
    _PDF_FILTER_WITHOUT_LOSS,
    extract_from_images_with_rapidocr,
)


class PyPDFParser(BaseBlobParser):
    """Load `PDF` using `pypdf`."""

    def __init__(
        self, password: Optional[Union[str, bytes]] = None, extract_images: bool = False
    ):
        self.password = password
        self.extract_images = extract_images

    def lazy_parse(self, blob: Blob) -> Iterator[Document]:  # type: ignore[valid-type]
        """Lazily parse the blob."""
        import pypdf

        self.pdf_blob = blob
        with blob.as_bytes_io() as pdf_file_obj:  # type: ignore[attr-defined]
            pdf_reader = pypdf.PdfReader(pdf_file_obj, password=self.password)
            yield from [
                Document(
                    page_content=page.extract_text()
                    + self._extract_images_from_page(page),
                    metadata={"source": blob.source, "page": page_number},  # type: ignore[attr-defined]
                )
                for page_number, page in enumerate(pdf_reader.pages)
            ]

    def _extract_images_from_page(self, page: "pypdf._page.PageObject") -> str:
        """Extract images from page and get the text with RapidOCR."""
        if not self.extract_images or "/XObject" not in page["/Resources"].keys():
            return ""

        xObject = page["/Resources"]["/XObject"].get_object()  # type: ignore
        images = []
        for obj in xObject:
            # print(f"obj: {xObject[obj]}")
            if xObject[obj]["/Subtype"] == "/Image":
                if xObject[obj].get("/Filter"):
                    if isinstance(xObject[obj]["/Filter"], str):
                        if xObject[obj]["/Filter"][1:] in _PDF_FILTER_WITHOUT_LOSS:
                            height, width = xObject[obj]["/Height"], xObject[obj]["/Width"]
                            try:
                                images.append(
                                    np.frombuffer(
                                        xObject[obj].get_data(), dtype=np.uint8
                                    ).reshape(height, width, -1)
                                )
                            except Exception:
                                if xObject[obj]["/Filter"][1:] == "CCITTFaxDecode":
                                    # Fall back to PyMuPDF (fitz) for CCITT-encoded
                                    # images that pypdf cannot reshape.
                                    import fitz

                                    with self.pdf_blob.as_bytes_io() as pdf_file_obj:  # type: ignore[attr-defined]
                                        with fitz.open("pdf", pdf_file_obj.read()) as doc:
                                            pix = doc.load_page(
                                                page.page_number
                                            ).get_pixmap(
                                                matrix=fitz.Matrix(1, 1),
                                                colorspace=fitz.csGRAY,
                                            )
                                            images.append(pix.tobytes())
                                else:
                                    warnings.warn(f"Reshape Error: {xObject[obj]}")
                        elif xObject[obj]["/Filter"][1:] in _PDF_FILTER_WITH_LOSS:
                            images.append(xObject[obj].get_data())
                        else:
                            warnings.warn(
                                f"Unknown PDF Filter: {xObject[obj]['/Filter'][1:]}"
                            )
                    elif isinstance(xObject[obj]["/Filter"], list):
                        for xObject_filter in xObject[obj]["/Filter"]:
                            if xObject_filter[1:] in _PDF_FILTER_WITHOUT_LOSS:
                                height, width = xObject[obj]["/Height"], xObject[obj]["/Width"]
                                try:
                                    images.append(
                                        np.frombuffer(
                                            xObject[obj].get_data(), dtype=np.uint8
                                        ).reshape(height, width, -1)
                                    )
                                except Exception:
                                    if xObject[obj]["/Filter"][1:] == "CCITTFaxDecode":
                                        import fitz

                                        with self.pdf_blob.as_bytes_io() as pdf_file_obj:  # type: ignore[attr-defined]
                                            with fitz.open("pdf", pdf_file_obj.read()) as doc:
                                                # original used page.number here;
                                                # pypdf's attribute is page_number
                                                pix = doc.load_page(
                                                    page.page_number
                                                ).get_pixmap(
                                                    matrix=fitz.Matrix(1, 1),
                                                    colorspace=fitz.csGRAY,
                                                )
                                                images.append(pix.tobytes())
                                    else:
                                        warnings.warn(f"Reshape Error: {xObject[obj]}")
                                break
                            elif xObject_filter[1:] in _PDF_FILTER_WITH_LOSS:
                                images.append(xObject[obj].get_data())
                                break
                            else:
                                warnings.warn(f"Unknown PDF Filter: {xObject_filter[1:]}")
                else:
                    warnings.warn("Can Not Find PDF Filter!")
        return extract_from_images_with_rapidocr(images)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When I use langchain-community, OCR on images embedded in some PDFs raises errors. I added some handling to the `PyPDFParser` class (based on the source code), which works around the problem for now. Maintainers may want to check whether this handling should be added in a future release. The complete `PyPDFParser` class is shown in the Example Code above.
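For context, a hedged sketch of how the patched parser above would be exercised; the directory path is a placeholder, and the `fitz` fallback additionally requires `pymupdf` to be installed:

```python
from langchain_community.document_loaders.blob_loaders import FileSystemBlobLoader
from langchain_community.document_loaders.generic import GenericLoader

loader = GenericLoader(
    blob_loader=FileSystemBlobLoader(path="./pdfs", glob="*.pdf"),
    blob_parser=PyPDFParser(extract_images=True),  # the patched class above
)
docs = loader.load()
```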
### System Info
langchain-community==0.2.4 | When using langchain-community, some PDF images will report errors during OCR | https://api.github.com/repos/langchain-ai/langchain/issues/22892/comments | 0 | 2024-06-14T11:02:04Z | 2024-06-14T11:04:33Z | https://github.com/langchain-ai/langchain/issues/22892 | 2,353,121,701 | 22,892 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
## Issue
To make our document loader integrations as easy to use as possible we need to make sure the docs for them are thorough and standardized. There are two parts to this: updating the document loader docstrings and updating the actual integration docs.
This needs to be done for each DocumentLoader integration, ideally with one PR per DocumentLoader.
Related to broader issues https://github.com/langchain-ai/langchain/issues/21983 and https://github.com/langchain-ai/langchain/issues/22005.
## Docstrings
Each DocumentLoader class docstring should have the sections shown in the [Appendix](#appendix) below. The sections should have input and output code blocks when relevant. See RecursiveUrlLoader [docstrings](https://github.com/langchain-ai/langchain/blob/869523ad728e6b76d77f170cce13925b4ebc3c1e/libs/community/langchain_community/document_loaders/recursive_url_loader.py#L54) and [corresponding API reference](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.recursive_url_loader.RecursiveUrlLoader.html) for an example.
## Doc Pages
Each DocumentLoader [docs page](https://python.langchain.com/v0.2/docs/integrations/document_loaders/) should follow [this template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/document_loaders.ipynb). See [RecursiveUrlLoader](https://python.langchain.com/v0.2/docs/integrations/document_loaders/recursive_url/) for an example.
You can use the `langchain-cli` to quickly get started with a new document loader integration docs page (run from root of repo):
```bash
poetry run pip install -e libs/cli
poetry run langchain-cli integration create-doc --name "foo-bar" --name-class FooBar --component-type "DocumentLoader" --destination-dir ./docs/docs/integrations/document_loaders/
```
where `--name` is the integration package name without the "langchain-" prefix and `--name-class` is the class name without the "Loader" suffix. This will create a template doc with some autopopulated fields at docs/docs/integrations/document_loaders/foo_bar.ipynb.
To build a preview of the docs you can run (from root):
```bash
make docs_clean
make docs_build
cd docs/build/output-new
yarn
yarn start
```
## Appendix
"""
__ModuleName__ document loader integration
# TODO: Replace with relevant packages, env vars.
Setup:
Install ``__package_name__`` and set environment variable ``__MODULE_NAME___API_KEY``.
.. code-block:: bash
pip install -U __package_name__
export __MODULE_NAME___API_KEY="your-api-key"
# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python
from __module_name__ import __ModuleName__Loader
loader = __ModuleName__Loader(
url = "https://docs.python.org/3.9/",
# otherparams = ...
)
Load:
.. code-block:: python
docs = loader.load()
print(docs[0].page_content[:100])
print(docs[0].metadata)
.. code-block:: python
TODO: Example output
# TODO: Delete if async load is not implemented
Async load:
.. code-block:: python
docs = await loader.aload()
print(docs[0].page_content[:100])
print(docs[0].metadata)
.. code-block:: python
TODO: Example output
Lazy load:
.. code-block:: python
docs = []
docs_lazy = loader.lazy_load()
# async variant:
# docs_lazy = await loader.alazy_load()
for doc in docs_lazy:
docs.append(doc)
print(docs[0].page_content[:100])
print(docs[0].metadata)
.. code-block:: python
TODO: Example output
""" | Standardize DocumentLoader docstrings and integration docs | https://api.github.com/repos/langchain-ai/langchain/issues/22866/comments | 1 | 2024-06-13T21:10:15Z | 2024-07-31T21:46:26Z | https://github.com/langchain-ai/langchain/issues/22866 | 2,352,072,105 | 22,866 |
[
"hwchase17",
"langchain"
] | ### URL
_No response_
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | Standardize DocumentLoader docstrings and integration docs | https://api.github.com/repos/langchain-ai/langchain/issues/22856/comments | 0 | 2024-06-13T18:22:34Z | 2024-06-13T19:57:10Z | https://github.com/langchain-ai/langchain/issues/22856 | 2,351,793,656 | 22,856 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from flask import abort  # assumption: `abort` comes from Flask in this app

from langchain.memory import ConversationBufferWindowMemory
from langchain_community.chat_message_histories.redis import RedisChatMessageHistory

try:
    message_history = RedisChatMessageHistory(
        session_id="12345678", url="redis://localhost:6379", ttl=600
    )
except Exception as e:
    abort(500, f"Error occurred: {str(e)}")

# pdf_docsearch is an existing vector store built elsewhere
retriever = pdf_docsearch.as_retriever(search_type="similarity", search_kwargs={"k": 4})
memory = ConversationBufferWindowMemory(
    memory_key="chat_history",
    chat_memory=message_history,
    input_key="question",
    output_key="answer",
    return_messages=True,
    k=20,
)
```
### Error Message and Stack Trace (if applicable)
```
Error occurred: 'cluster_enabled'
```
### Description
I'm working on implementing long-term memory for a chatbot using LangChain and a Redis database. However, I'm facing issues with the Redis client connection, particularly in the `redis.py` helper, where the cluster info appears to be missing when the server runs in standalone mode.
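The message looks like a `KeyError: 'cluster_enabled'` raised inside LangChain's Redis client helper, which reads that key from the server's `INFO` response; some Redis-compatible servers on Windows do not report it. A hedged way to confirm (the key-name check is my reading of the symptom, not a verified root cause):

```python
import redis

r = redis.Redis.from_url("redis://localhost:6379")
info = r.info()
# langchain_community's Redis helper indexes info["cluster_enabled"];
# if the key is missing here, that lookup raises KeyError('cluster_enabled')
print(info.get("cluster_enabled", "<missing>"))
```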
### System Info
Python 3.10
langchain-core 0.1.43
langchain-community 0.0.32 | RedisChatMessageHistory encountering issues in Redis standalone mode on Windows. | https://api.github.com/repos/langchain-ai/langchain/issues/22845/comments | 1 | 2024-06-13T10:37:38Z | 2024-07-25T20:04:17Z | https://github.com/langchain-ai/langchain/issues/22845 | 2,350,800,284 | 22,845 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Context:
* `MessagesPlaceholder` can be optional or non-optional
* Current LangChain API for input variables doesn't distinguish between possible input variables vs. required input variables
See: https://github.com/langchain-ai/langchain/pull/21640
## Requirements
* get_input_schema() should reflect optional and required inputs
* Expose another property to fetch either all required or all possible input variables (with an explanation of why this is the correct approach); alternatively, delegate to `get_input_schema()` and make the semantics of `input_variables` clear (e.g., all possible values)
```python
from langchain.chains import LLMChain
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages(
    [MessagesPlaceholder("history", optional=True), ("user", "{input}")]
)
model = ChatOpenAI()
chain = LLMChain(llm=model, prompt=prompt)
chain({"input": "what is your name"})

prompt.get_input_schema()
```
| Spec out API for all required vs. all possible input variables | https://api.github.com/repos/langchain-ai/langchain/issues/22832/comments | 2 | 2024-06-12T20:14:59Z | 2024-07-17T21:34:51Z | https://github.com/langchain-ai/langchain/issues/22832 | 2,349,606,960 | 22,832 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
https://github.com/langchain-ai/langchain/pull/15659 (introduced in `langchain-community==0.0.20`) removed the document id from AzureSearch retrieved Documents, which was a breaking change. Was there a reason this was done? If not, let's add it back. | Add Document ID back to AzureSearch Documents | https://api.github.com/repos/langchain-ai/langchain/issues/22827/comments | 1 | 2024-06-12T17:11:45Z | 2024-06-12T18:07:37Z | https://github.com/langchain-ai/langchain/issues/22827 | 2,349,293,411 | 22,827 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/rag/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
In section **5. Retrieval and Generation: Generate** under [Built-in chains](https://python.langchain.com/v0.2/docs/tutorials/rag/#built-in-chains) there is an error in the code example:
from **langchain.chains** import create_retrieval_chain
should be changed to
from **langchain.chains.retrieval** import create_retrieval_chain
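For copy-paste reference, the corrected import line as proposed (whether the top-level `langchain.chains` re-export also works may depend on the installed version):

```python
from langchain.chains.retrieval import create_retrieval_chain
```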
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/tutorials/rag/> | https://api.github.com/repos/langchain-ai/langchain/issues/22826/comments | 11 | 2024-06-12T17:04:38Z | 2024-06-13T14:03:57Z | https://github.com/langchain-ai/langchain/issues/22826 | 2,349,275,655 | 22,826 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
# Context
Currently, LangChain supports Pydantic 2 only through the v1 namespace.
The plan is to transition to Pydantic 2 with release 0.3.0 of LangChain, and drop support for Pydantic 1.
LangChain has around 1,000 pydantic objects across different packages. While LangChain uses a number of deprecated features, one of the harder things to update is the usage of a vanilla `@root_validator()` (which is used ~250 times across the code base).
The goal of this issue is to do as much preliminary work as possible to help prepare for the migration from pydantic v1 to pydantic 2.
To help prepare for the migration, we'll need to refactor each occurrence of a vanilla `root_validator()` to one of the following 3 variants (depending on what makes sense in the context of the model):
1. `root_validator(pre=True)` -- pre initialization validator
2. `root_validator(pre=False, skip_on_failure=True)` -- post initialization validator
3. `root_validator(pre=True)` AND `root_validator(pre=False, skip_on_failure=True)` to include both pre initialization and post initialization validation.
## Guidelines
- Pre-initialization is most useful for **creating defaults** for values, especially when the defaults cannot be supplied per field individually.
- Post-initialization is most useful for doing more complex validation, especially one that involves multiple fields.
## What not to do
* Do **NOT** upgrade to `model_validator`. We're trying to break the work into small chunks that can be done while we're still using Pydantic v1 functionality!
* Do **NOT** create `field_validators` when doing the refactor.
## Simple Example
```python
class Foo(BaseModel):
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
values["api_key"] = get_from_dict_or_env(
values, "some_api_key", "SOME_API_KEY", default=""
)
if values["temperature"] is not None and not 0 <= values["temperature"] <= 1:
raise ValueError("temperature must be in the range [0.0, 1.0]")
return values
```
# After refactor
```python
class Foo(BaseModel):
@root_validator(pre=True)
def pre_init(cls, values):
# Logic for setting defaults goes in the pre_init validator.
# While in some cases, the logic could be pulled into the `Field` definition
# directly, it's perfectly fine for this refactor to keep the changes minimal
# and just move the logic into the pre_init validator.
values["api_key"] = get_from_dict_or_env(
values, "some_api_key", "SOME_API_KEY", default=""
)
return values
@root_validator(pre=False, skip_on_failure=True)
    def post_init(cls, values):
# Post init validation works with an object that is already initialized
# so it can access the fields and their values (e.g., temperature).
# if this logic were part of the pre_init validator, it would raise
# a KeyError exception since `temperature` does not exist in the values
# dictionary at that point.
if values["temperature"] is not None and not 0 <= values["temperature"] <= 1:
raise ValueError("temperature must be in the range [0.0, 1.0]")
return values
```
## Example Refactors
Here are some actual for the refactors https://gist.github.com/eyurtsev/be30ddbc54dcdc02f98868eacb24b2a1
If you're feeling especially creative, you could take the example refactors and an LLM chain built with an appropriate prompt, and attempt to fix this code automatically using LLMs!
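A minimal sketch of that idea, with an illustrative prompt and model choice (both are placeholders, not a prescribed setup):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You refactor Pydantic v1 @root_validator() usage into "
     "@root_validator(pre=True) and/or "
     "@root_validator(pre=False, skip_on_failure=True). "
     "Do NOT upgrade to model_validator or introduce field_validators."),
    ("user", "Refactor this code:\n\n{code}"),
])

# LCEL pipeline: prompt -> model -> plain string output.
refactor_chain = prompt | ChatOpenAI(model="gpt-4o") | StrOutputParser()
print(refactor_chain.invoke({"code": open("some_module.py").read()}))
```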
## Vanilla `root_validator`
- [x] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/agent_toolkits/connery/toolkit.py#L22
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/agents/openai_assistant/base.py#L212
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chains/llm_requests.py#L62
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/anyscale.py#L104
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/azure_openai.py#L108
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/baichuan.py#L145
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/baidu_qianfan_endpoint.py#L174
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/coze.py#L119
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/dappier.py#L78
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/deepinfra.py#L240
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/edenai.py#L303
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/ernie.py#L111
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/fireworks.py#L115
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/google_palm.py#L263
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/huggingface.py#L79
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/hunyuan.py#L193
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/jinachat.py#L220
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/kinetica.py#L344
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/konko.py#L87
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/litellm.py#L242
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/moonshot.py#L28
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/octoai.py#L50
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/openai.py#L277
- [x] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/pai_eas_endpoint.py#L70
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/premai.py#L229
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/solar.py#L40
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/sparkllm.py#L198
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/tongyi.py#L276
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/vertexai.py#L227
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/yuan2.py#L168
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/chat_models/zhipuai.py#L264
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/cross_encoders/sagemaker_endpoint.py#L98
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/document_compressors/dashscope_rerank.py#L35
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/document_compressors/volcengine_rerank.py#L42
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/document_loaders/apify_dataset.py#L52
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/aleph_alpha.py#L83
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/anyscale.py#L36
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/awa.py#L19
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/azure_openai.py#L57
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/baidu_qianfan_endpoint.py#L51
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/bedrock.py#L76
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/clarifai.py#L51
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/cohere.py#L57
- [x] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/dashscope.py#L113
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/deepinfra.py#L62
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/edenai.py#L38
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/embaas.py#L64
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/ernie.py#L34
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/fastembed.py#L57
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/gigachat.py#L80
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/google_palm.py#L65
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/gpt4all.py#L31
- [x] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/huggingface_hub.py#L55
- [x] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/jina.py#L33
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/laser.py#L44
- [x] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/llamacpp.py#L65
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/llm_rails.py#L39
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/localai.py#L196
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/minimax.py#L87
- [x] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/mosaicml.py#L49
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/nemo.py#L61
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/nlpcloud.py#L33
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/oci_generative_ai.py#L88
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/octoai_embeddings.py#L41
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/openai.py#L285
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/premai.py#L35
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/sagemaker_endpoint.py#L118
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/sambanova.py#L45
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/solar.py#L83
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/vertexai.py#L36
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/volcengine.py#L46
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/embeddings/yandex.py#L78
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/ai21.py#L76
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/aleph_alpha.py#L170
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/anthropic.py#L77
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/anthropic.py#L188
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/anyscale.py#L95
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/aphrodite.py#L160
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/baichuan.py#L34
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/baidu_qianfan_endpoint.py#L79
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/bananadev.py#L66
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/beam.py#L98
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/bedrock.py#L392
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/bedrock.py#L746
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/cerebriumai.py#L65
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/clarifai.py#L56
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/cohere.py#L98
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/ctransformers.py#L60
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/ctranslate2.py#L53
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/deepinfra.py#L46
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/deepsparse.py#L58
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/edenai.py#L75
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/exllamav2.py#L61
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/fireworks.py#L64
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/friendli.py#L69
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/gigachat.py#L116
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/google_palm.py#L110
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/gooseai.py#L89
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/gpt4all.py#L130
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/huggingface_endpoint.py#L165
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/huggingface_hub.py#L64
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/huggingface_text_gen_inference.py#L137
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/llamacpp.py#L136
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/manifest.py#L19
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/minimax.py#L74
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/moonshot.py#L82
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/mosaicml.py#L67
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/nlpcloud.py#L59
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/oci_data_science_model_deployment_endpoint.py#L50
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/oci_generative_ai.py#L73
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/octoai_endpoint.py#L69
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/opaqueprompts.py#L41
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/openai.py#L272
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/openai.py#L821
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/openai.py#L1028
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/openlm.py#L19
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/pai_eas_endpoint.py#L55
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/petals.py#L89
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/pipelineai.py#L68
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/predictionguard.py#L56
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/replicate.py#L100
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/rwkv.py#L100
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/sagemaker_endpoint.py#L251
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/sambanova.py#L243
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/sambanova.py#L756
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/solar.py#L71
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/sparkllm.py#L57
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/stochasticai.py#L61
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/symblai_nebula.py#L68
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/tongyi.py#L201
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/vertexai.py#L226
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/vertexai.py#L413
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/vllm.py#L76
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/volcengine_maas.py#L55
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/watsonxllm.py#L118
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/writer.py#L72
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/yandex.py#L77
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/retrievers/arcee.py#L73
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/retrievers/google_cloud_documentai_warehouse.py#L51
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/retrievers/pinecone_hybrid_search.py#L139
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/retrievers/qdrant_sparse_vector_retriever.py#L52
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/retrievers/thirdai_neuraldb.py#L113
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/tools/connery/service.py#L23
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/tools/connery/tool.py#L66
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/apify.py#L22
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/arcee.py#L54
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/arxiv.py#L75
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/asknews.py#L27
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/awslambda.py#L37
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/bibtex.py#L43
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/cassandra_database.py#L485
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/clickup.py#L326
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/dalle_image_generator.py#L92
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/dataforseo_api_search.py#L45
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/dataherald.py#L29
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/duckduckgo_search.py#L44
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/github.py#L45
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/gitlab.py#L37
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/golden_query.py#L31
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/google_finance.py#L32
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/google_jobs.py#L32
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/google_lens.py#L38
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/google_places_api.py#L46
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/google_scholar.py#L53
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/google_search.py#L72
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/google_serper.py#L49
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/google_trends.py#L36
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/jira.py#L23
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/merriam_webster.py#L35
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/outline.py#L30
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/polygon.py#L20
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/pubmed.py#L51
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/reddit_search.py#L33
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/rememberizer.py#L16
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/searchapi.py#L35
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/searx_search.py#L232
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/semanticscholar.py#L53
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/serpapi.py#L60
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/stackexchange.py#L22
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/tensorflow_datasets.py#L63
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/twilio.py#L51
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/wikidata.py#L95
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/wikipedia.py#L29
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/wolfram_alpha.py#L28
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/azuresearch.py#L1562
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/experimental/langchain_experimental/open_clip/open_clip.py#L17
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/agent.py#L773
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/agent.py#L977
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/agent.py#L991
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/openai_assistant/base.py#L213
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/base.py#L228
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/combine_documents/map_rerank.py#L109
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/conversation/base.py#L48
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/conversational_retrieval/base.py#L483
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/elasticsearch_database/base.py#L59
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/moderation.py#L43
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/qa_with_sources/vector_db.py#L64
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/retrieval_qa/base.py#L287
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/retrieval_qa/base.py#L295
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/router/llm_router.py#L27
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/sequential.py#L155
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/memory/buffer.py#L85
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/memory/summary.py#L76
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/memory/summary_buffer.py#L46
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/output_parsers/combining.py#L18
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/output_parsers/enum.py#L15
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/document_compressors/embeddings_filter.py#L48
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/ai21/langchain_ai21/ai21_base.py#L21
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/ai21/langchain_ai21/chat_models.py#L71
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/anthropic/langchain_anthropic/chat_models.py#L599
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/anthropic/langchain_anthropic/llms.py#L77
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/anthropic/langchain_anthropic/llms.py#L161
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/fireworks/langchain_fireworks/chat_models.py#L322
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/fireworks/langchain_fireworks/embeddings.py#L27
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/groq/langchain_groq/chat_models.py#L170
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py#L325
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/huggingface/langchain_huggingface/embeddings/huggingface_endpoint.py#L49
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/huggingface/langchain_huggingface/llms/huggingface_endpoint.py#L160
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/ibm/langchain_ibm/embeddings.py#L68
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/ibm/langchain_ibm/llms.py#L128
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/mistralai/langchain_mistralai/chat_models.py#L432
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/mistralai/langchain_mistralai/embeddings.py#L67
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/chat_models/azure.py#L115
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/chat_models/base.py#L364
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/embeddings/azure.py#L64
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/embeddings/base.py#L229
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/llms/azure.py#L87
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/llms/base.py#L156
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/together/langchain_together/chat_models.py#L74
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/together/langchain_together/embeddings.py#L143
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/together/langchain_together/llms.py#L86
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/upstage/langchain_upstage/chat_models.py#L82
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/upstage/langchain_upstage/embeddings.py#L145
- [ ] https://github.com/langchain-ai/langchain/blob/master/libs/partners/voyageai/langchain_voyageai/embeddings.py#L51
| Prepare for pydantic 2 migration by refactoring vanilla @root_validator() usage | https://api.github.com/repos/langchain-ai/langchain/issues/22819/comments | 1 | 2024-06-12T14:09:36Z | 2024-07-05T16:25:26Z | https://github.com/langchain-ai/langchain/issues/22819 | 2,348,881,003 | 22,819 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.InsufficientPrivilege) permission denied to create extension "vector"
HINT: Must be superuser to create this extension.
[SQL: BEGIN;SELECT pg_advisory_xact_lock(1573678846307946496);CREATE EXTENSION IF NOT EXISTS vector;COMMIT;]
(Background on this error at: https://sqlalche.me/e/20/f405)
### Error Message and Stack Trace (if applicable)
ERROR:Failed to create vector extension: (psycopg2.errors.InsufficientPrivilege) permission denied to create extension "vector"
HINT: Must be superuser to create this extension.
[SQL: BEGIN;SELECT pg_advisory_xact_lock(1573678846307946496);CREATE EXTENSION IF NOT EXISTS vector;COMMIT;]
(Background on this error at: https://sqlalche.me/e/20/f405)
2024-06-12,16:45:07 start_local:828 - ERROR:Exception on /codebits/api/v1/parse [POST]
### Description
ERROR:Failed to create vector extension: (psycopg2.errors.InsufficientPrivilege) permission denied to create extension "vector"
HINT: Must be superuser to create this extension.
[SQL: BEGIN;SELECT pg_advisory_xact_lock(1573678846307946496);CREATE EXTENSION IF NOT EXISTS vector;COMMIT;]
(Background on this error at: https://sqlalche.me/e/20/f405)
2024-06-12,16:45:07 start_local:828 - ERROR:Exception on /codebits/api/v1/parse [POST]
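A hedged sketch of the usual fix: the `vector` extension must be created once per database by a superuser; the application can then connect with its normal, unprivileged role. The connection string and credentials below are placeholders:

```python
from sqlalchemy import create_engine, text

# Run once as a superuser (placeholder credentials); after this, the
# unprivileged app role no longer needs to create the extension.
admin_engine = create_engine(
    "postgresql+psycopg2://postgres:password@localhost:5432/mydb"
)
with admin_engine.connect() as conn:
    conn.execute(text("CREATE EXTENSION IF NOT EXISTS vector"))
    conn.commit()
```

Depending on the installed version, the PGVector store may also accept a `create_extension=False` argument so the application never attempts `CREATE EXTENSION` itself; treat that flag as version-dependent.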
### System Info
<img width="733" alt="Screenshot 2024-06-12 at 4 55 43 PM" src="https://github.com/langchain-ai/langchain/assets/108388565/74ce6f4f-491c-41b0-98f6-e0859745aa5a">
macOS
python 3.12 | I am getting this error | https://api.github.com/repos/langchain-ai/langchain/issues/22811/comments | 4 | 2024-06-12T11:27:58Z | 2024-06-13T05:23:01Z | https://github.com/langchain-ai/langchain/issues/22811 | 2,348,531,542 | 22,811 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_huggingface import (
ChatHuggingFace,
HuggingFacePipeline,
)
chat_llm = ChatHuggingFace(
llm=HuggingFacePipeline.from_model_id(
model_id="path/to/your/local/model", # I downloaded Meta-Llama-3-8B
task="text-generation",
device_map="auto",
model_kwargs={"temperature": 0.0, "local_files_only": True},
)
)
```
### Error Message and Stack Trace (if applicable)
```bash
src/resources/predictor.py:55: in load
self.llm = ChatHuggingFace(
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/langchain_huggingface/chat_models/huggingface.py:169: in __init__
self._resolve_model_id()
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/langchain_huggingface/chat_models/huggingface.py:295: in _resolve_model_id
available_endpoints = list_inference_endpoints("*")
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/huggingface_hub/hf_api.py:7081: in list_inference_endpoints
user = self.whoami(token=token)
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:114: in _inner_fn
return fn(*args, **kwargs)
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/huggingface_hub/hf_api.py:1390: in whoami
headers=self._build_hf_headers(
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/huggingface_hub/hf_api.py:8448: in _build_hf_headers
return build_hf_headers(
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:114: in _inner_fn
return fn(*args, **kwargs)
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/huggingface_hub/utils/_headers.py:124: in build_hf_headers
token_to_send = get_token_to_send(token)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
token = True
def get_token_to_send(token: Optional[Union[bool, str]]) -> Optional[str]:
"""Select the token to send from either `token` or the cache."""
# Case token is explicitly provided
if isinstance(token, str):
return token
# Case token is explicitly forbidden
if token is False:
return None
# Token is not provided: we get it from local cache
cached_token = get_token()
# Case token is explicitly required
if token is True:
if cached_token is None:
> raise LocalTokenNotFoundError(
"Token is required (`token=True`), but no token found. You"
" need to provide a token or be logged in to Hugging Face with"
" `huggingface-cli login` or `huggingface_hub.login`. See"
" https://huggingface.co/settings/tokens."
)
E huggingface_hub.errors.LocalTokenNotFoundError: Token is required (`token=True`), but no token found. You need to provide a token or be logged in to Hugging Face with `huggingface-cli login` or `huggingface_hub.login`. See https://huggingface.co/settings/tokens.
/opt/poetry-cache/virtualenvs/sagacify-example-llm-8EXZSVYp-py3.10/lib/python3.10/site-packages/huggingface_hub/utils/_headers.py:158: LocalTokenNotFoundError
```
### Description
I am trying to use the `langchain-huggingface` library to instantiate a `ChatHuggingFace` object with a `HuggingFacePipeline` `llm` parameter which targets a locally downloaded model (here, `Meta-Llama-3-8B`).
I expect the instantiation to work fine even though I don't have a HuggingFace token setup in my environment as I use a local model.
Instead, the instantiation fails because it tries to read a token in order to list the available endpoints under my HuggingFace account.
After investigation, I think this line of code should be at line 456 instead of line 443 in file `langchain/libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py`
```python
def _resolve_model_id(self) -> None:
"""Resolve the model_id from the LLM's inference_server_url"""
from huggingface_hub import list_inference_endpoints # type: ignore[import]
available_endpoints = list_inference_endpoints("*") # Line 443: This line is not at the right place
if _is_huggingface_hub(self.llm) or (
hasattr(self.llm, "repo_id") and self.llm.repo_id
):
self.model_id = self.llm.repo_id
return
elif _is_huggingface_textgen_inference(self.llm):
endpoint_url: Optional[str] = self.llm.inference_server_url
elif _is_huggingface_pipeline(self.llm):
self.model_id = self.llm.model_id
return # My code lies in this case where it does not use available endpoints
else:
endpoint_url = self.llm.endpoint_url
# Line 456: The line should be here instead
for endpoint in available_endpoints:
if endpoint.url == endpoint_url:
self.model_id = endpoint.repository
if not self.model_id:
raise ValueError(
"Failed to resolve model_id:"
f"Could not find model id for inference server: {endpoint_url}"
"Make sure that your Hugging Face token has access to the endpoint."
)
```
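Until the early `list_inference_endpoints()` call is moved, one possible stopgap (not a real fix, since the token is not actually needed for a local pipeline) is to make any Hugging Face token visible to `huggingface_hub` so the unnecessary `whoami` lookup succeeds:

```python
import os

# Stopgap only: satisfies the spurious token check for local models.
os.environ["HF_TOKEN"] = "hf_..."  # placeholder token
```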
### System Info
```bash
huggingface-hub 0.23.2 Client library to download and publish models, datasets and other repos on the huggingface.co hub
langchain 0.2.1 Building applications with LLMs through composability
langchain-core 0.2.2 Building applications with LLMs through composability
langchain-huggingface 0.0.3 An integration package connecting Hugging Face and LangChain
langchain-text-splitters 0.2.0 LangChain text splitting utilities
sentence-transformers 3.0.0 Multilingual text embeddings
tokenizers 0.19.1
transformers 4.41.2 State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
```
platform: `linux`
python: `Python 3.10.12` | ChatHuggingFace using local model with HuggingFacePipeline wrongly checks for available inference endpoints | https://api.github.com/repos/langchain-ai/langchain/issues/22804/comments | 8 | 2024-06-12T07:52:24Z | 2024-07-30T07:53:26Z | https://github.com/langchain-ai/langchain/issues/22804 | 2,348,079,651 | 22,804 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from typing import Any
from langchain.chains import LLMChain
from langchain_core.callbacks import AsyncCallbackHandler
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-1.5-pro", streaming=True, max_tokens=2524)
default_chain = LLMChain(
    prompt=DEFAULT_PROMPT,  # DEFAULT_PROMPT is defined elsewhere in the app
    llm=llm,
    verbose=False,
)
await default_chain.ainvoke(
    {"input": rephrased_question["text"]}, config={"callbacks": [callback]}
)

class StreamingCallback(AsyncCallbackHandler):
    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        """Rewrite on_llm_new_token to send each new token to the client."""
        await self.send(token)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I have set up a LangChain chain as shown above, where `callback` is an instance of a handler class that implements `on_llm_new_token`.
To call the chain I use `ainvoke`.
If I use the Anyscale LLM class or `VLLMOpenAI`, the response is streamed correctly; with Google, however, this is not the case.
Is there a bug in my code? Is there some other parameter I should pass to `ChatGoogleGenerativeAI`, or does Google not support streaming?
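As a sanity check independent of `LLMChain` and the callback handler, the runnable `astream` API can show whether the model streams at all; a minimal sketch reusing the `llm` from above (run inside an async function):

```python
async def demo() -> None:
    async for chunk in llm.astream("Tell me a short joke"):
        print(chunk.content, end="", flush=True)
```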
### System Info
langchain 0.1.0
langchain-community 0.0.11
langchain-core 0.1.9
langchain-google-genai 1.0.1
langchainhub 0.1.15
langsmith 0.0.92
| ChatGoogleGenerativeAI does not support streaming | https://api.github.com/repos/langchain-ai/langchain/issues/22802/comments | 2 | 2024-06-12T06:36:40Z | 2024-06-12T08:31:54Z | https://github.com/langchain-ai/langchain/issues/22802 | 2,347,931,743 | 22,802 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/sql_qa/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
In the section https://python.langchain.com/v0.2/docs/tutorials/sql_qa/#dealing-with-high-cardinality-columns, after `retriever_tool` is defined, it should be added to the `tools` list with `tools.append(retriever_tool)` (as shown below); otherwise the agent will not know that `retriever_tool` exists.
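In code, the proposed one-line addition right after the tutorial's `retriever_tool` definition:

```python
tools.append(retriever_tool)
```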
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/tutorials/sql_qa/> | https://api.github.com/repos/langchain-ai/langchain/issues/22798/comments | 1 | 2024-06-12T05:12:55Z | 2024-06-17T12:57:18Z | https://github.com/langchain-ai/langchain/issues/22798 | 2,347,825,260 | 22,798 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import langchain
from langchain_community.chat_models import ChatHunyuan

print(langchain.__version__)

hunyuan_app_id = "******"
hunyuan_secret_id = "*********"
hunyuan_secret_key = "*************"

llm_hunyuan = ChatHunyuan(
    streaming=True,
    hunyuan_app_id=hunyuan_app_id,
    hunyuan_secret_id=hunyuan_secret_id,
    hunyuan_secret_key=hunyuan_secret_key,
)
# "你好啊" means "Hello" -- any message containing Chinese characters
# triggers the signature error below.
print(llm_hunyuan.invoke("你好啊"))
### Error Message and Stack Trace (if applicable)
ValueError: Error from Hunyuan api response: {'note': '以上内容为AI生成,不代表开发者立场,请勿删除或修改本标记', 'choices': [{'finish_reason': 'stop'}], 'created': '1718155233', 'id': '12390d63-7be5-4dbe-b567-183f3067bc75', 'usage': {'prompt_tokens': 0, 'completion_tokens': 0, 'total_tokens': 0}, 'error': {'code': 2001, 'message': '鉴权失败:[request id:12390d63-7be5-4dbe-b567-183f3067bc75]signature calculated is different from client signature'}}
### Description
Hunyuan requests whose messages include Chinese characters fail with a signature error.
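A hedged illustration of the likely root cause (this is not the integration's actual code): the HMAC signature must be computed over the exact UTF-8 bytes of the request body. If the body is serialized with `ensure_ascii=True` (or signed over a differently encoded string) while the server signs the raw UTF-8 payload, Chinese characters make the two signatures diverge:

```python
import base64
import hashlib
import hmac
import json

payload = {"messages": [{"role": "user", "content": "你好啊"}]}
# Serialize Chinese characters as raw UTF-8, not \uXXXX escapes,
# so the signed bytes match the bytes actually sent.
body = json.dumps(payload, ensure_ascii=False, separators=(",", ":"))
secret_key = "your-secret-key"  # placeholder
signature = base64.b64encode(
    hmac.new(secret_key.encode("utf-8"), body.encode("utf-8"), hashlib.sha1).digest()
).decode("utf-8")
print(signature)
```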
### System Info
langchain version 0.1.9
Windows
3.9.13 | hunyuan message include chinese signature error | https://api.github.com/repos/langchain-ai/langchain/issues/22795/comments | 0 | 2024-06-12T03:31:57Z | 2024-06-12T03:34:28Z | https://github.com/langchain-ai/langchain/issues/22795 | 2,347,725,322 | 22,795 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
```python
from pydantic.v1 import BaseModel
from pydantic import BaseModel as BaseModelV2
class Answer(BaseModel):
answer: str
class Answer2(BaseModelV2):
""""The answer."""
answer: str
from langchain_openai import ChatOpenAI
model = ChatOpenAI()
model.with_structured_output(Answer).invoke('the answer is foo') # <-- Returns pydantic object
model.with_structured_output(Answer2).invoke('the answer is foo') # <--- Returns dict
``` | with_structured_output format depends on whether we're using pydantic proper or pydantic.v1 namespace | https://api.github.com/repos/langchain-ai/langchain/issues/22782/comments | 2 | 2024-06-11T18:30:58Z | 2024-06-14T21:54:27Z | https://github.com/langchain-ai/langchain/issues/22782 | 2,347,043,210 | 22,782 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.output_parsers import StrOutputParser
dotenv.load_dotenv()
llm = ChatOpenAI(
model="gpt-4",
temperature=0.2,
# NOTE: setting max_tokens to "100" works. Setting to 8192 or something slightly lower does not.
max_tokens=8160
)
output_parser = StrOutputParser()
prompt_template = ChatPromptTemplate.from_messages([
("system", "You are a helpful assistant. Answer all questions to the best of your ability."),
MessagesPlaceholder(variable_name="messages"),
])
chain = prompt_template | llm | output_parser
response = chain.invoke({
"messages": [
HumanMessage(content="what llm are you 1? what llm are you 2? what llm are you 3? what llm are you 4? what llm are you 5? what llm are you 6?"),
],
})
print(response)
```
### Error Message and Stack Trace (if applicable)
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens. However, you requested 8235 tokens (75 in the messages, 8160 in the completion). Please reduce the length of the messages or completion.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
### Description
`max_tokens` does not correctly account for the user prompt.
If you specify a `max_tokens` of, say, `100`, it appears to "account for it" (not really, but it gives a result), simply because there is spare room in the context window to expand into. With any given prompt, it will produce the expected result.
However, if you specify a `max_tokens` close to the model's context limit (for GPT-4, e.g. 8192 or 8160), the request fails with the error above.
This means `max_tokens` is effectively not implemented correctly.
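A rough sketch of the requested behavior: derive the completion budget from the prompt's token count rather than hard-coding `max_tokens`. `get_num_tokens` is an existing method on LangChain language models; the 8192 context size is GPT-4's, and the safety margin is a guess:

```python
context_window = 8192
prompt_text = "what llm are you 1? what llm are you 2? ..."
counter = ChatOpenAI(model="gpt-4")
prompt_tokens = counter.get_num_tokens(prompt_text)

# Reserve a small margin for chat-format overhead (system tokens etc.).
llm_dynamic = ChatOpenAI(model="gpt-4", max_tokens=context_window - prompt_tokens - 32)
```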
### System Info
langchain==0.1.20
langchain-aws==0.1.4
langchain-community==0.0.38
langchain-core==0.1.52
langchain-google-vertexai==1.0.3
langchain-openai==0.1.7
langchain-text-splitters==0.0.2
platform mac
Python 3.11.6 | [FEATURE REQUEST] langchain-openai - max_tokens (vs max_context?) ability to use full LLM contexts and account for user-messages automatically. | https://api.github.com/repos/langchain-ai/langchain/issues/22778/comments | 2 | 2024-06-11T14:48:51Z | 2024-06-12T16:30:29Z | https://github.com/langchain-ai/langchain/issues/22778 | 2,346,632,699 | 22,778 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_huggingface.llms import HuggingFacePipeline
tokenizer = AutoTokenizer.from_pretrained('microsoft/Phi-3-mini-128k-instruct')
model = AutoModelForCausalLM.from_pretrained('microsoft/Phi-3-mini-128k-instruct', device_map='cuda:0', trust_remote_code=True)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=100, top_k=50, temperature=0.1, do_sample=True)
llm = HuggingFacePipeline(pipeline=pipe)
print(llm.model_id)
# 'gpt2' (expected 'microsoft/Phi-3-mini-128k-instruct')
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
As mentioned by documentation,
> [They (HuggingFacePipeline) can also be loaded by passing in an existing transformers pipeline directly](https://python.langchain.com/v0.2/docs/integrations/llms/huggingface_pipelines/)
But it seems the implementation is incomplete, because the `model_id` attribute always shows `gpt2` no matter what model you load. Since the example in the documentation uses `gpt2` as the sample model, which is also the default, the bug is not visible at first review. But if you try another model from the Hugging Face Hub (for example, the code above), you can see the problem.
Only `gpt2` will be shown, no matter which pipeline you use to initialize `HuggingFacePipeline`.
That said, the correct model does seem to be loaded: if you invoke the model with a prompt, it generates the expected response.
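A possible workaround while the bug stands: `model_id` is an ordinary constructor field on `HuggingFacePipeline`, so it can be set explicitly alongside the prebuilt pipeline (sketch):

```python
llm = HuggingFacePipeline(
    pipeline=pipe,
    model_id="microsoft/Phi-3-mini-128k-instruct",  # override the gpt2 default
)
print(llm.model_id)  # 'microsoft/Phi-3-mini-128k-instruct'
```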
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sun Apr 28 14:29:16 UTC 2024
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.5
> langchain: 0.2.3
> langchain_community: 0.2.4
> langsmith: 0.1.76
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Passing transformer's pipeline to HuggingFacePipeline does not initialize the HuggingFacePipeline correctly. | https://api.github.com/repos/langchain-ai/langchain/issues/22776/comments | 0 | 2024-06-11T14:08:11Z | 2024-06-22T23:31:54Z | https://github.com/langchain-ai/langchain/issues/22776 | 2,346,538,588 | 22,776 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I'm using the code from the LangChain docs verbatim
```python
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader
file_path = "<filepath>"
endpoint = "<endpoint>"
key = "<key>"
loader = AzureAIDocumentIntelligenceLoader(
api_endpoint=endpoint,
api_key=key,
file_path=file_path,
api_model="prebuilt-layout",
mode="page",
)
documents = loader.load()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
* I'm trying to use the Azure Document Intelligence loader to read my pdf files.
* Using the `markdown` mode I only get the first page of the pdf loaded.
* If I use any other mode (page, single) I will get at most pages 1 and 2.
* I expect every page of the PDF to be returned as a `Document` object (see the diagnostic sketch below).
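A quick diagnostic sketch that makes the problem visible by counting the returned documents and inspecting their metadata (the `page` metadata key is an assumption and may vary by mode):

```python
docs = loader.load()
print(f"{len(docs)} documents returned")
for doc in docs:
    print(doc.metadata.get("page"), len(doc.page_content))
```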
### System Info
langchain==0.2.3
langchain-community==0.2.4
langchain-core==0.2.5
langchain-text-splitters==0.2.1
platform: mac
python version: 3.12.3 | AzureAIDocumentIntelligenceLoader does not load all PDF pages | https://api.github.com/repos/langchain-ai/langchain/issues/22775/comments | 2 | 2024-06-11T12:04:38Z | 2024-06-23T13:36:25Z | https://github.com/langchain-ai/langchain/issues/22775 | 2,346,246,655 | 22,775 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.prompts import PipelinePromptTemplate, PromptTemplate
from langchain.agents import create_react_agent
from langchain.tools import Tool
from langchain_openai import ChatOpenAI
llm = ChatOpenAI()
full_template = """{agent-introduction}
{agent-instructions}
"""
full_prompt = PromptTemplate.from_template(full_template)
introduction_template = """You are impersonating {person}."""
introduction_prompt = PromptTemplate.from_template(introduction_template)
instructions_template = """Answer the following questions as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
Thought:{agent_scratchpad}"""
instructions_prompt = PromptTemplate.from_template(instructions_template)
input_prompts = [
("agent-introduction", introduction_prompt),
("agent-instructions", instructions_prompt),
]
pipeline_prompt = PipelinePromptTemplate(
final_prompt=full_prompt, pipeline_prompts=input_prompts
)
tools = [
Tool.from_function(
name="General Chat",
description="For general chat not covered by other tools",
func=llm.invoke,
return_direct=True
)
]
agent = create_react_agent(llm, tools, pipeline_prompt)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "C:\Users\martin.ohanlon.neo4j\Documents\neo4j-graphacademy\llm-cb-env\llm-chatbot-python\test_prompt.py", line 57, in <module>
agent = create_react_agent(llm, tools, pipeline_prompt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\martin.ohanlon.neo4j\Documents\neo4j-graphacademy\llm-cb-env\Lib\site-packages\langchain\agents\react\agent.py", line 116, in create_react_agent
prompt = prompt.partial(
^^^^^^^^^^^^^^^
File "C:\Users\martin.ohanlon.neo4j\Documents\neo4j-graphacademy\llm-cb-env\Lib\site-packages\langchain_core\prompts\base.py", line 188, in partial
return type(self)(**prompt_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\martin.ohanlon.neo4j\Documents\neo4j-graphacademy\llm-cb-env\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for PipelinePromptTemplate
__root__
Found overlapping input and partial variables: {'tools', 'tool_names'} (type=value_error)
```
### Description
`create_react_agent` raises a `Found overlapping input and partial variables: {'tools', 'tool_names'} (type=value_error)` error when passed a `PipelinePromptTemplate`.
I am composing an agent prompt with `PipelinePromptTemplate`; when I pass the composed prompt to `create_react_agent`, I get the error above. The example code above replicates it.
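As a possible workaround (a sketch on my part, not verified against the library internals), the pipeline can be flattened into a single `PromptTemplate` before it is handed to `create_react_agent`, passing the ReAct variables through as literal placeholders:
```python
# Hypothetical workaround: pre-render the pipeline into one flat template.
# The agent's own variables are passed through as literal "{...}" strings,
# so the resulting PromptTemplate still exposes them as input variables.
flat_prompt = PromptTemplate.from_template(
    pipeline_prompt.format(
        person="Albert Einstein",  # example value; note it gets baked in
        tools="{tools}",
        tool_names="{tool_names}",
        input="{input}",
        agent_scratchpad="{agent_scratchpad}",
    )
)

agent = create_react_agent(llm, tools, flat_prompt)
```
The obvious downside is that `person` is baked in at flattening time rather than remaining a runtime variable, so this is a stopgap rather than a fix.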
### System Info
langchain==0.2.3
langchain-community==0.2.4
langchain-core==0.2.5
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
langchainhub==0.1.18
Windows
Python 3.12.0 | create_react_agent validation error when using PipelinePromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/22774/comments | 0 | 2024-06-11T11:34:12Z | 2024-06-11T11:36:59Z | https://github.com/langchain-ai/langchain/issues/22774 | 2,346,186,615 | 22,774 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
not applicable
### Error Message and Stack Trace (if applicable)
not applicable
### Description
The `DocumentDBVectorSearch` docs mention it supports metadata filtering:
https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.documentdb.DocumentDBVectorSearch.html#langchain_community.vectorstores.documentdb.DocumentDBVectorSearch.as_retriever
However, unless I misunderstand the code, I don't think it actually does.
I see that `VectorStoreRetriever._get_relevant_documents` passes `search_kwargs` to the similarity search of the underlying vector store,
but nothing in the code of `DocumentDBVectorSearch` uses `search_kwargs` at all.
In my project we need to review the relevant parts of open-source software to make sure it really meets our requirements. So if this is not a bug, and the feature is indeed implemented somewhere else, could anybody please clarify how metadata filtering in `DocumentDBVectorSearch` is implemented? The check I used is sketched below.
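For reviewers wanting to confirm this without a DocumentDB cluster, a read-only check is to inspect the search signatures directly (a minimal sketch; I am assuming the import path from current `langchain_community`):
```python
import inspect

from langchain_community.vectorstores.documentdb import DocumentDBVectorSearch

# If metadata filtering were wired through, we would expect a filter /
# pre_filter parameter (or **kwargs that are actually consumed) here.
print(inspect.signature(DocumentDBVectorSearch.similarity_search))
print(inspect.signature(DocumentDBVectorSearch.similarity_search_with_score))
```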
### System Info
not applicable | DocumentDBVectorSearch and metadata filtering | https://api.github.com/repos/langchain-ai/langchain/issues/22770/comments | 9 | 2024-06-11T09:09:54Z | 2024-06-17T07:51:53Z | https://github.com/langchain-ai/langchain/issues/22770 | 2,345,840,847 | 22,770 |