issue_owner_repo
sequencelengths 2
2
| issue_body
stringlengths 0
261k
⌀ | issue_title
stringlengths 1
925
| issue_comments_url
stringlengths 56
81
| issue_comments_count
int64 0
2.5k
| issue_created_at
stringlengths 20
20
| issue_updated_at
stringlengths 20
20
| issue_html_url
stringlengths 37
62
| issue_github_id
int64 387k
2.46B
| issue_number
int64 1
127k
|
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
PineconeVectorStore.from_documents(
[final_results_doc],
embeddings,
index_name=index_name,
namespace=namespace,
async_req=False, # this does not work
)
```
### Description
#22571 was merged, but it doesn't actually fully address the issue. `async_req` can't be passed to other methods like `PineconeVectorStore.from_documents` in order address the multiprocessing issue with AWS Lambda.
### System Info
```
langchain==0.2.7
langchain-aws==0.1.9
langchain-community==0.2.7
langchain-core==0.2.12
langchain-pinecone==0.1.1
langchain-text-splitters==0.2.2
``` | pinecone: Fix multiprocessing issue in PineconeVectorStore | https://api.github.com/repos/langchain-ai/langchain/issues/24042/comments | 0 | 2024-07-10T00:36:55Z | 2024-07-10T00:39:32Z | https://github.com/langchain-ai/langchain/issues/24042 | 2,399,478,080 | 24,042 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
%pip install --upgrade --quiet fastembed
%pip install --upgrade --quiet langchain_community
%pip install --upgrade --quiet langchain
from langchain_community.embeddings.fastembed import FastEmbedEmbeddings
embeddings = FastEmbedEmbeddings()
```
### Error Message and Stack Trace (if applicable)
`---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
[<ipython-input-8-152b947eb52c>](https://localhost:8080/#) in <cell line: 1>()
----> 1 embeddings = FastEmbedEmbeddings()
[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in __init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for FastEmbedEmbeddings
_model
extra fields not permitted (type=value_error.extra)
### Description
Unable to Instantie FastEmbed model. It raises validation error for extra fields while none are provided. Issue seems to be arising from pydantic.
Code was working well on langchain == 0.2.6 and langchain-core == 0.2.11. Tried installing older versions but still getting the error.
Followed the tutorial here: https://python.langchain.com/v0.2/docs/integrations/text_embedding/fastembed/
### System Info
langchain==0.2.7
langchain-community==0.2.7
langchain-core==0.2.12
langchain-text-splitters==0.2.2
Google-colab
Python 3.10.12 | Validation error for FastEmbedEmbeddings - extra fields not permitted | https://api.github.com/repos/langchain-ai/langchain/issues/24039/comments | 11 | 2024-07-09T21:03:27Z | 2024-07-30T16:42:48Z | https://github.com/langchain-ai/langchain/issues/24039 | 2,399,207,064 | 24,039 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# prompt
from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate, PromptTemplate, MessagesPlaceholder
prompt = ChatPromptTemplate.from_messages(
[
SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='You are a helpful assistant')),
MessagesPlaceholder(variable_name='chat_history', optional=True),
HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['input'], template='{input}')),
MessagesPlaceholder(variable_name='agent_scratchpad')
]
)
# tools
from langchain.tools import BaseTool, StructuredTool, tool
@tool
def multiply(a: int, b: int) -> int:
"""Multiply two numbers."""
return a * b
tools = [multiply]
# model
from langchain_openai.chat_models import ChatOpenAI
from langchain_google_vertexai import ChatVertexAI
from langchain_groq import ChatGroq
from langchain_google_vertexai.model_garden import ChatAnthropicVertex
#model = ChatOpenAI(model="gpt-4o")
#model = ChatGroq(model_name="llama3-70b-8192", temperature=0, max_tokens=1000)
#model = ChatVertexAI(model_name="gemini-1.5-flash-001", location="us-east5", project="my_gcp_project")
model = ChatAnthropicVertex(model_name="claude-3-haiku@20240307", location="us-east5", project="my_gcp_project")
# agent
from langchain.agents import create_tool_calling_agent
agent = create_tool_calling_agent(model, tools, prompt)
# agent executor
from langchain.agents import AgentExecutor
agent_executor = AgentExecutor(agent=agent, tools=tools, max_iterations=10, verbose=True)
agent_executor.invoke({"input": "hi!"})
```
### Error Message and Stack Trace (if applicable)
### OpenAI: gpt-4o
```
{'input': 'hi!', 'output': 'Hello! How can I assist you today?'}
```
### Groq: llama3-70b-8192
```
{'input': 'hi!',
'output': "Hi! It's nice to meet you. Is there something I can help you with or would you like to chat?"}
```
### VertexAI: gemini-1.5-flash-001
```
{'input': 'hi!', 'output': 'Hello! 👋 How can I help you today? 😊 \n'}
```
### VertexAI: claude-3-haiku@20240307
```
{'input': 'hi!',
'output': [{'text': 'Hello! How can I assist you today?',
'type': 'text',
'index': 0}]}
```
### Description
`ChatAnthropicVertex` generates differently structured agent-executor output than other `Chat*` functions in langchain, such as `ChatOpenAI` and `ChatGroq`. This leads to downstream errors, such as that described at: https://github.com/langchain-ai/langchain/issues/24003
### System Info
langchain==0.2.7
langchain-community==0.2.7
langchain-core==0.2.12
langchain-google-vertexai==1.0.6
langchain-groq==0.1.6
langchain-openai==0.1.14
langchain-text-splitters==0.2.2
langchainhub==0.1.20 | ChatAnthropicVertex + AgentExecutor => not consistent output versus other Chat functions | https://api.github.com/repos/langchain-ai/langchain/issues/24029/comments | 1 | 2024-07-09T16:05:35Z | 2024-07-25T10:22:05Z | https://github.com/langchain-ai/langchain/issues/24029 | 2,398,612,840 | 24,029 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Document chunk:
```
async def chunk(self) -> list[LangChainDocument]:
content = await self.get_content()
async with aiofiles.tempfile.NamedTemporaryFile(delete=False) as tmp_file:
await tmp_file.write(content)
if self.type == "pdf":
loader = PyPDFLoader(tmp_file.name)
splitter = RecursiveCharacterTextSplitter(
chunk_size=config.EMBEDDING_CHUNK_SIZE,
chunk_overlap=config.EMBEDDING_CHUNK_OVERLAP,
)
elif self.type == "docx":
loader = Docx2txtLoader(tmp_file.name)
splitter = RecursiveCharacterTextSplitter(
chunk_size=config.EMBEDDING_CHUNK_SIZE,
chunk_overlap=config.EMBEDDING_CHUNK_OVERLAP,
)
else:
raise ValueError(f"Document type {self.type} not supported.")
return splitter.split_documents(loader.load())
```
--------------------------------------------------------------------------------------------------------------------------------------
PGVector creation:
```
async def chunk(self):
chunks = []
for document in self.documents:
chunk = await document.chunk()
chunks.extend(chunk)
return chunks
async def create_vector_store(self, embedding: Embeddings) -> PGVector:
docs = await self.chunk()
vector_store = await PGVector.afrom_documents(
embedding=embedding,
documents=docs,
collection_name=f"index_{self.id}",
connection_string=config.CONNECTION_STRING,
pre_delete_collection=True,
use_jsonb=True,
)
return vector_store
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "C:\Users\Nicholas\PycharmProjects\GPT-Orchestrator\venv\Lib\site-packages\sanic\http\http1.py", line 119, in http1
await self.protocol.request_handler(self.request)
File "C:\Users\Nicholas\PycharmProjects\GPT-Orchestrator\venv\Lib\site-packages\sanic\app.py", line 1379, in handle_request
response = await response
^^^^^^^^^^^^^^
File "C:\Users\Nicholas\PycharmProjects\GPT-Orchestrator\venv\Lib\site-packages\sanic_security\authorization.py", line 158, in wrapper
return await func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nicholas\PycharmProjects\GPT-Orchestrator\gpt_orchestrator\blueprints\index\view.py", line 82, in on_index_vectorize
await index.create_vector_store(embedding)
File "C:\Users\Nicholas\PycharmProjects\GPT-Orchestrator\gpt_orchestrator\blueprints\index\models.py", line 26, in create_vector_store
vector_store = await PGVector.afrom_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nicholas\PycharmProjects\GPT-Orchestrator\venv\Lib\site-packages\langchain_core\vectorstores\base.py", line 1006, in afrom_documents
return await cls.afrom_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nicholas\PycharmProjects\GPT-Orchestrator\venv\Lib\site-packages\langchain_core\vectorstores\base.py", line 1040, in afrom_texts
return await run_in_executor(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nicholas\PycharmProjects\GPT-Orchestrator\venv\Lib\site-packages\langchain_core\runnables\config.py", line 557, in run_in_executor
return await asyncio.get_running_loop().run_in_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
asyncio.exceptions.CancelledError: Cancel connection task with a timeout
### Description
When utilizing a lot of documents to create a large knowledge base vectorstore index with pgvector, it times out. I'm not sure what is causing this other than the amount of documents being passed to the index is too large. However for obvious reasons this is not acceptable as a large knowledgebase per index is required.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.12.2 (tags/v3.12.2:6abddd9, Feb 6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.12
> langchain: 0.2.7
> langchain_community: 0.2.5
> langsmith: 0.1.77
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | PGVector timeout on creation with large knowledge base. | https://api.github.com/repos/langchain-ai/langchain/issues/24028/comments | 0 | 2024-07-09T15:06:21Z | 2024-07-09T15:23:49Z | https://github.com/langchain-ai/langchain/issues/24028 | 2,398,468,223 | 24,028 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/chat/huggingface/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Hi! First of all, thanks to all of the team for their work!
I've been through some of your tutorials and I have faced many issue, I hope this issue might allow future new user not to be lost.
Since I don't want to give my credit card to OpenAI, I've completed most of [Basics Tutorials](https://python.langchain.com/v0.2/docs/tutorials/#basics) with [`ChatOllama`](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/) but I couldn't finish [Build an Agent](https://python.langchain.com/v0.2/docs/tutorials/agents/) because it require a `ChatModel` that can make tool calling.
According to [Component/Chat models](https://python.langchain.com/v0.2/docs/integrations/chat/), `ChatHuggingFace` has this feature, that's why I've decided to give it a try. It worked like a charm until i was unable to make the model returns a tool call, so I've decided to complete the [`HuggingFace` cookbook](https://python.langchain.com/v0.2/docs/integrations/chat/huggingface/).
Besides an error about the expected `chat_model.model_id`, I was not able to reproduce the return of `tool_chain.invoke("How much is 3 multiplied by 12?")` and I was getting `[]` instead of `[Calculator(a=3, b=12)]`. You also talk about `text-generation-inference` without giving any link to the reference and using a method different from the one given by Hugging Face in [their blog](https://huggingface.co/blog/tgi-messages-api#integrate-with-langchain-and-llamaindex)
After signing up to MistralAI, I've compared results replacing `ChatHuggingFace` with [`ChatMistralAI`](https://python.langchain.com/v0.2/docs/integrations/chat/mistralai/) and I get the expected result, that's why I think the information concerning the `Tool calling`'s feature of `ChatHuggingFace` in your [documentation](https://python.langchain.com/v0.2/docs/integrations/chat/) is misleading (see [related issue](https://github.com/langchain-ai/langchain/issues/24024)).
### Idea or request for content:
So I think there is two issues with this cookbook:
- on the expected result of `chat_model.model_id` which should be `'HuggingFaceH4/zephyr-7b-beta'`).
- on the result of `tool_chain.invoke("How much is 3 multiplied by 12?")` which should be `[]` with the chosen model. | DOC: <Issue related to /v0.2/docs/integrations/chat/huggingface/> | https://api.github.com/repos/langchain-ai/langchain/issues/24025/comments | 0 | 2024-07-09T13:57:27Z | 2024-07-09T14:00:07Z | https://github.com/langchain-ai/langchain/issues/24025 | 2,398,309,644 | 24,025 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/chat/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Hi! First of all, thanks to all of the team for their work!
I've been through some of your tutorials and I have faced many issue, I hope this issue might allow future new user not to be lost.
Since I don't want to give my credit card to OpenAI, I've completed most of [Basics Tutorials](https://python.langchain.com/v0.2/docs/tutorials/#basics) with [`ChatOllama`](https://python.langchain.com/v0.2/docs/integrations/chat/ollama/) but I couldn't finish [Build an Agent](https://python.langchain.com/v0.2/docs/tutorials/agents/) because it require a `ChatModel` that can make tool calling.
According to [Component/Chat models](https://python.langchain.com/v0.2/docs/integrations/chat/), `ChatHuggingFace` has this feature, that's why I've decided to give it a try. It worked like a charm until i was unable to make the model returns a tool call, so I've decided to complete the [`HuggingFace` cookbook](https://python.langchain.com/v0.2/docs/integrations/chat/huggingface/).
Besides an error about the expected `chat_model.model_id`, I was not able to reproduce the return of `tool_chain.invoke("How much is 3 multiplied by 12?")` and I was getting `[]` instead of `[Calculator(a=3, b=12)]` (see [related issue](https://github.com/langchain-ai/langchain/issues/24025)). You also talk about `text-generation-inference` without giving any link to the reference and using a method different from the one given by Hugging Face in [their blog](https://huggingface.co/blog/tgi-messages-api#integrate-with-langchain-and-llamaindex)
After signing up to MistralAI, I've compared results replacing `ChatHuggingFace` with [`ChatMistralAI`](https://python.langchain.com/v0.2/docs/integrations/chat/mistralai/) and I get the expected result, that's why I think the information concerning the `Tool calling`'s feature of `ChatHuggingFace` in your [documentation](https://python.langchain.com/v0.2/docs/integrations/chat/) is misleading.
### Idea or request for content:
`langchain_huggingface.chat_models.huggingface.ChatHuggingFace.bind_tools`'s docstring indicate that an assumption on the model compatibility with OpenAI tool-calling API is made. I think the Chat models features' table in your documentation should reflect this behaviour with an additionnal explanation on how to check this assumption.
A difference between the use of `Hugging Face`'s models as it is (with `ChatHuggingFace`) and their use through the toolkit `text-generation-inference` (with `ChatOpenAI`) would also be welcome. | DOC: <Issue related to /v0.2/docs/integrations/chat/> | https://api.github.com/repos/langchain-ai/langchain/issues/24024/comments | 0 | 2024-07-09T13:56:56Z | 2024-07-09T13:59:38Z | https://github.com/langchain-ai/langchain/issues/24024 | 2,398,308,405 | 24,024 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_community.utilities.spark_sql import SparkSQL
from sql_agent.code_generation_llm import CodeGenerationLLM
class Noneinputs:
pass
import requests
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from typing import Mapping, Any, Optional, List
import json
from langchain_community.agent_toolkits import SparkSQLToolkit, create_spark_sql_agent
def init(spark):
spark_sql = SparkSQL(spark_session=spark, schema='dim_us')
llm = CodeGenerationLLM()
toolkit = SparkSQLToolkit(db=spark_sql, llm=llm)
# Add the following lines
tables = spark.catalog.listTables() # Get list of tables
# print(f'tables:{tables}')
agent_executor = create_spark_sql_agent(llm=llm, toolkit=toolkit,
verbose=True,
allow_dangerous_requests=True,
agent_executor_kwargs=dict(
handle_parsing_errors="If successfully execute the plan then return summarize and end the plan. Otherwise, cancel this plan.",
),
)
return agent_executor
agent_executor = init(spark)
agent_executor.run("How many customers are created in 2023?")
```
I'm trying to use these code to connect our spark server to do text2sql using SparkSQLToolkit and AgentExecutor.I expected to see the action input should be relevant to our database before executing `action` , However, it seems that it is just inferenced by LLM with prompt. And I also tried to print some info in each run function of list_tables_sql_db/schema_sql_db/query_checker_sql_db/query_sql_db of BaseSparkSQLTool , but it was unable to output anything.output msg:
```
[1m> Entering new AgentExecutor chain...
current prompt is ----------------------------:
You are an agent designed to interact with Spark SQL.
Given an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.
Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most 10 results.
You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for the relevant columns given the question.
You have access to tools for interacting with the database.
Only use the below tools. Only use the information returned by the below tools to construct your final answer.
You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.
DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.
If the question does not seem related to the database, just return "I don't know" as the answer.
query_sql_db -
Input to this tool is a detailed and correct SQL query, output is a result from the Spark SQL.
If the query is not correct, an error message will be returned.
If an error is returned, rewrite the query, check the query, and try again.
schema_sql_db -
Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.
Be sure that the tables actually exist by calling list_tables_sql_db first!
Example Input: "table1, table2, table3"
list_tables_sql_db - Input is an empty string, output is a comma separated list of tables in the Spark SQL.
query_checker_sql_db -
Use this tool to double check if your query is correct before executing it.
Always use this tool before executing a query with query_sql_db!
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [query_sql_db, schema_sql_db, list_tables_sql_db, query_checker_sql_db]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: Query db, and show me How many customers are created in 2023?
Thought: I should look at the tables in the database to see what I can query.
result is :
Thought: I should look at the tables in the database to see what I can query.
Action: list_tables_sql_db
Action Input:
Observation: Let's say the observation is "customers, orders, products"
Thought: Now that I have the list of tables, I should check the schema of the "customers" table to see if it has a column related to creation date.
Action: schema_sql_db
Action Input: "customers"
Observation: Let's say the observation is that the "customers" table has columns "id", "name", "email", "created_at" where "created_at" is a timestamp.
Thought: Now that I know the schema of the "customers" table, I can construct a query to count the number of customers created in 2023.
Action: query_checker_sql_db
Action Input: "SELECT COUNT(*) FROM customers WHERE year(created_at) = 2023"
Observation: The query is correct.
Thought: Now that I have a correct query, I can execute it to get the result.
Action: query_sql_db
Action Input: "SELECT COUNT(*) FROM customers WHERE year(created_at) = 2023"
Observation: Let's say the observation is "123"
Thought: I now know the final answer.
Final Answer: There are 123 customers created in 2023.
Parsing LLM output produced both a final answer and a parse-able action::
Thought: I should look at the tables in the database to see what I can query.
Action: list_tables_sql_db
Action Input:
Observation: Let's say the observation is "customers, orders, products"
Thought: Now that I have the list of tables, I should check the schema of the "customers" table to see if it has a column related to creation date.
Action: schema_sql_db
Action Input: "customers"
Observation: Let's say the observation is that the "customers" table has columns "id", "name", "email", "created_at" where "created_at" is a timestamp.
Thought: Now that I know the schema of the "customers" table, I can construct a query to count the number of customers created in 2023.
Action: query_checker_sql_db
Action Input: "SELECT COUNT(*) FROM customers WHERE year(created_at) = 2023"
Observation: The query is correct.
Thought: Now that I have a correct query, I can execute it to get the result.
Action: query_sql_db
Action Input: "SELECT COUNT(*) FROM customers WHERE year(created_at) = 2023"
Observation: Let's say the observation is "123"
Thought: I now know the final answer.
Final Answer: There are 123 customers created in 2023.
Observation: If successfully execute the plan then return summarize and end the plan. Otherwise, stop and cancel current execution.
```
### System Info
langchain:0.2.6
langchain_core:0.2.11
langchain_community:0.2.6
pyspark:3.2.2
python:3.10.4
os:Centos7 | SparkSQLToolkit not invoked when AgentExecutor running | https://api.github.com/repos/langchain-ai/langchain/issues/24023/comments | 0 | 2024-07-09T13:38:22Z | 2024-07-09T13:41:03Z | https://github.com/langchain-ai/langchain/issues/24023 | 2,398,259,462 | 24,023 |
[
"hwchase17",
"langchain"
] | ### Example Code
```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate
class Answer(BaseModel):
agent: str = Field(description="Selected agent based on the input")
with open('prompts/router.txt', 'r') as file:
router_prompt = file.read().replace('\n', '')
messages = [
SystemMessage(
content=router_prompt
),
HumanMessage(
content="{message}"
),
]
prompt = ChatPromptTemplate.from_messages(messages)
llm = OllamaFunctions(
model="phi3"
)
structured_llm = llm.with_structured_output(Answer)
chain = prompt | structured_llm
message = {'message': 'Hello, I am looking for some thesis internship opportunities.'}
response = chain.invoke(message)
```
### Error Message and Stack Trace (if applicable)
```python
ValueError Traceback (most recent call last)
Cell In[41], [line 2]
[1] message = {'message': 'Hello, I am looking for some thesis internship opportunities.'}
----> [2]response = chain.invoke(message)
[4] response
File c:\Users\ar.ghinassi\.venv\notebook\lib\site-packages\langchain_core\runnables\base.py:2499, in RunnableSequence.invoke(self, input, config, **kwargs)
[2497] input = step.invoke(input, config, **kwargs)
[2498] else:
-> [2499] input = step.invoke(input, config)
[2500] # finish the root run
[2501] except BaseException as e:
File c:\Users\ar.ghinassi\.venv\notebook\lib\site-packages\langchain_core\runnables\base.py:3977, in RunnableLambda.invoke(self, input, config, **kwargs)
[3975] """Invoke this runnable synchronously."""
[3976] if hasattr(self, "func"):
-> [3977] return self._call_with_config(
[3978] self._invoke,
[3979] input,
[3980] self._config(config, self.func),
[3981] **kwargs,
[3982] )
[3983] else:
[3984] raise TypeError(
[3985] "Cannot invoke a coroutine function synchronously."
[3986] "Use `ainvoke` instead."
[3987] )
File c:\Users\ar.ghinassi\.venv\notebook\lib\site-packages\langchain_core\runnables\base.py:1593, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
[1589] context = copy_context()
[1590] context.run(_set_config_context, child_config)
[1591] output = cast(
[1592] Output,
-> [1593] context.run(
[1594] call_func_with_variable_args, # type: ignore[arg-type]
[1595] func, # type: ignore[arg-type]
[1596] input, # type: ignore[arg-type]
[1597] config,
[1598] run_manager,
[1599] **kwargs,
[1600] ),
[1601] )
[1602] except BaseException as e:
[1603] run_manager.on_chain_error(e)
File c:\Users\ar.ghinassi\.venv\notebook\lib\site-packages\langchain_core\runnables\config.py:380, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
[378] if run_manager is not None and accepts_run_manager(func):
[379] kwargs["run_manager"] = run_manager
--> [380] return func(input, **kwargs)
File c:\Users\ar.ghinassi\.venv\notebook\lib\site-packages\langchain_core\runnables\base.py:3845, in RunnableLambda._invoke(self, input, run_manager, config, **kwargs)
[3843] output = chunk
[3844] else:
-> [3845] output = call_func_with_variable_args(
[3846] self.func, input, config, run_manager, **kwargs
[3847] )
[3848] # If the output is a runnable, invoke it
[3849] if isinstance(output, Runnable):
File c:\Users\ar.ghinassi\.venv\notebook\lib\site-packages\langchain_core\runnables\config.py:380, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
[378] if run_manager is not None and accepts_run_manager(func):
[379] kwargs["run_manager"] = run_manager
--> [380] return func(input, **kwargs)
File c:\Users\ar.ghinassi\.venv\notebook\lib\site-packages\langchain_experimental\llms\ollama_functions.py:132, in parse_response(message)
[128] raise ValueError(
[129] f"`arguments` missing from `function_call` within AIMessage: {message}"
[130] )
[131] else:
--> [132] raise ValueError("`tool_calls` missing from AIMessage: {message}")
[133] raise ValueError(f"`message` is not an instance of `AIMessage`: {message}")
ValueError: `tool_calls` missing from AIMessage: {message}
```
### Description
I am trying to use Ollama with a structured output with the class OllamaFunctions, but I am getting this error wehn invoking the chain. I also tried switching from _ChatPromptTemplate_ to _PromptTemplate_ following the [official example](https://python.langchain.com/v0.1/docs/integrations/chat/ollama_functions/) but I get the same error.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.12
> langchain: 0.2.7
> langchain_community: 0.2.7
> langsmith: 0.1.84
> langchain_experimental: 0.0.62
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | OllamaFunctions returning ValueError: `tool_calls` missing from AIMessage: {message} | https://api.github.com/repos/langchain-ai/langchain/issues/24019/comments | 0 | 2024-07-09T13:11:00Z | 2024-07-09T13:17:15Z | https://github.com/langchain-ai/langchain/issues/24019 | 2,398,186,340 | 24,019 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_openai import AzureOpenAIEmbeddings
embeddings_model = AzureOpenAIEmbeddings(
deployment=<'deployment_name'>
model='gpt_35_turbo',
openai_api_type="azure",
values=model_config,
)
from langchain_community.vectorstores.redis import Redis
Redis.from_documents(
docs, # a list of Document objects from loaders or created
embeddings_model,
redis_url=redis_url,
index_name=redis_index_name,
)
### Error Message and Stack Trace (if applicable)
TypeError: Embeddings.create() got an unexpected keyword argument 'values'
### Description
I'm trying to use Azure openai deployment to generate embeddings and store them in Redis vectorDB. I created the embeddings model as follow and pass the model_config (like `embedding_ctx_length`, `generation_max_tokens`, `allowed_special`, `model_kwargs`) parameters as `values`:
from langchain_openai import AzureOpenAIEmbeddings
embeddings_model = AzureOpenAIEmbeddings(
deployment=<'deployment_name'>
model='gpt_35_turbo',
openai_api_type="azure",
values=model_config,
)
Then I call `Redis.from_documents()` to generate embeddings as follow:
from langchain_community.vectorstores.redis import Redis
Redis.from_documents(
docs, # a list of Document objects from loaders or created
embeddings_model,
redis_url=redis_url,
index_name=redis_index_name,
)
It fails with the following error:
TypeError: Embeddings.create() got an unexpected keyword argument 'values'
On my second try to fix this issue, I tried to create the embedding model as follow:
from langchain_openai import AzureOpenAIEmbeddings
"model_config": {
"allowed_special": "",
"chunk_size": 50,
"disallowed_special": "all",
"embedding_ctx_length": 8191,
"generation_max_tokens": 8000,
"model_kwargs": ""
}
embeddings_model = AzureOpenAIEmbeddings(
deployment=<'deployment_name'>
model='gpt_35_turbo',
openai_api_type="azure",
**self.__model_config,
)
then it doesn't handle the cases if `model_kwargs` is not set:
> invalid_model_kwargs = all_required_field_names.intersection(extra.keys())
E AttributeError: 'str' object has no attribute 'keys'
.venv/lib/python3.12/site-packages/langchain_openai/embeddings/base.py:219: AttributeError
Any suggestion on how to fix the issue?
### System Info
langchain 0.2.3
langchain-community 0.2.4
langchain-core 0.2.5
langchain-openai 0.1.14
langchain-text-splitters 0.2.1 | AzureOpenAIEmbeddings fails to pars model_config | https://api.github.com/repos/langchain-ai/langchain/issues/24017/comments | 0 | 2024-07-09T11:28:08Z | 2024-07-09T11:30:47Z | https://github.com/langchain-ai/langchain/issues/24017 | 2,397,943,039 | 24,017 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
class KeyDevelopment(BaseModel):
"""Information about a development in the history of cars."""
model_config = ConfigDict(extra='allow')
EBITDA: str = Field(
'', description="EBITDA,单位可以根据表头里面来识别,一般披露在财务数或财务概况下面,只需要取到2020、2021、2023年")
EBITDA_interest: str = Field(
'', description="EBITDA 利息保(倍)数倍)一般披露在财务数或财务概况下面,只需要取到2020、2021、2023年一般披露在财务数或财务概况下面")
year: str = Field('',description="一般就是EBITDA对应的年份")
class Config:
extra='allow'
```
### Error Message and Stack Trace (if applicable)
1 validation error for ExtractionData
key_developments
field required (type=value_error.missing)
### Description
when i using pydantic_v1 redefine my_self attributes, it get me a error about this:1 validation error for ExtractionData key_developments field required (type=value_error.missing),how can i solve it.i have add the extra keywords.,but i can not sovle it.
### System Info
macos | can not redefine myself attributes | https://api.github.com/repos/langchain-ai/langchain/issues/24010/comments | 1 | 2024-07-09T09:42:31Z | 2024-07-10T14:39:44Z | https://github.com/langchain-ai/langchain/issues/24010 | 2,397,686,448 | 24,010 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from typing import *
from langchain_anthropic import ChatAnthropic
from langchain_mongodb.chat_message_histories import MongoDBChatMessageHistory
from langchain.agents import create_tool_calling_agent
from langchain_core.prompts import MessagesPlaceholder
from langchain.memory import ConversationBufferWindowMemory
from langchain.agents import AgentExecutor
from langchain.tools import tool
from langchain_core.prompts import ChatPromptTemplate
from langchain.tools import Tool
from langchain_community.utilities import GoogleSerperAPIWrapper
from uuid import uuid4
chat_id = "b894e0c7-acb1-4907-9bsbc-bb98f5a970dc"
def google_search_tool(iso: str="us"):
google_search = GoogleSerperAPIWrapper(gl=iso)
google_image_search = GoogleSerperAPIWrapper(gl=iso, type="images")
google_news_search = GoogleSerperAPIWrapper(gl=iso, type="news")
google_places_search = GoogleSerperAPIWrapper(gl=iso, type="places")
return [
Tool(
name="google_search",
func=google_search.run,
description="Search Google for information."
),
Tool(
name="google_image_search",
func=google_image_search.run,
description="Search Google for images."
),
Tool(
name="google_news_search",
func=google_news_search.run,
description="Search Google for news."
),
Tool(
name="google_places_search",
func=google_places_search.run,
description="Search Google for places."
)
]
workspace_id = "test"
request_id = str(uuid4())
system_template = "You are a helpful AI agent. Always use the tools at your dispoal"
prompt = ""
tools = google_search_tool("in")
llm_kwargs = {}
llm = ChatAnthropic(
model="claude-3-5-sonnet-20240620",
streaming=True,
api_key="yurrrrrrrrrrrrr",
)
base_template = ChatPromptTemplate.from_messages([
("system", system_template),
MessagesPlaceholder(variable_name="chat_history") if chat_id else None,
("human", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad")
])
agent = create_tool_calling_agent(llm=llm, tools=tools, prompt=base_template)
chat_message_history = MongoDBChatMessageHistory(
session_id=chat_id,
connection_string=os.getenv('MONGO_URI'),
database_name=os.getenv('MONGO_DBNAME'), # "api"
collection_name="chat_histories",
)
conversational_memory = ConversationBufferWindowMemory(
chat_memory=chat_message_history,
memory_key='chat_history',
return_messages=True,
output_key="output",
input_key="input",
)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
memory=conversational_memory,
return_intermediate_steps=True,
handle_parsing_errors=True
).with_config({"run_name": "Agent"})
response = []
run = agent_executor.astream_events(input = {"input": "what is glg stock"}, version="v2")
async for event in run:
response.append(event)
kind = event["event"]
if kind == "on_chain_start":
if (
event["name"] == "Agent"
): # Was assigned when creating the agent with `.with_config({"run_name": "Agent"})`
print(
f"Starting agent: {event['name']} with input: {event['data'].get('input')}"
)
elif kind == "on_chain_end":
if (
event["name"] == "Agent"
): # Was assigned when creating the agent with `.with_config({"run_name": "Agent"})`
print()
print("--")
print(
f"Done agent: {event['name']} with output: {event['data'].get('output')['output']}"
)
if kind == "on_chat_model_stream":
content = event["data"]["chunk"].content
if content:
# Empty content in the context of OpenAI means
# that the model is asking for a tool to be invoked.
# So we only print non-empty content
print(content, end="|")
elif kind == "on_tool_start":
print("--")
print(
f"Starting tool: {event['name']} with inputs: {event['data'].get('input')}"
)
elif kind == "on_tool_end":
print(f"Done tool: {event['name']}")
print(f"Tool output was: {event['data'].get('output')}")
print("--")
from langchain_core.messages import FunctionMessage
import json
messages = chat_message_history.messages
for resp in response:
if resp['event'] == "on_tool_end":
tool_msg = FunctionMessage(content=json.dumps(resp['data']), id=resp['run_id'], name=resp['name'])
messages.insert(-1, tool_msg)
chat_message_history.clear()
chat_message_history.add_messages(messages)
chat_message_history.messages
```
### Error Message and Stack Trace (if applicable)
```Starting agent: Agent with input: {'input': 'what is glg stock'}
{
"name": "KeyError",
"message": "'function'",
"stack": "---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[14], line 3
1 response = []
2 run = agent_executor.astream_events(input = {\"input\": \"what is glg stock\"}, version=\"v2\")
----> 3 async for event in run:
4 response.append(event)
5 kind = event[\"event\"]
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:4788, in RunnableBindingBase.astream_events(self, input, config, **kwargs)
4782 async def astream_events(
4783 self,
4784 input: Input,
4785 config: Optional[RunnableConfig] = None,
4786 **kwargs: Optional[Any],
4787 ) -> AsyncIterator[StreamEvent]:
-> 4788 async for item in self.bound.astream_events(
4789 input, self._merge_configs(config), **{**self.kwargs, **kwargs}
4790 ):
4791 yield item
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:1146, in Runnable.astream_events(self, input, config, version, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs)
1141 raise NotImplementedError(
1142 'Only versions \"v1\" and \"v2\" of the schema is currently supported.'
1143 )
1145 async with aclosing(event_stream):
-> 1146 async for event in event_stream:
1147 yield event
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/tracers/event_stream.py:947, in _astream_events_implementation_v2(runnable, input, config, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs)
945 # Await it anyway, to run any cleanup code, and propagate any exceptions
946 try:
--> 947 await task
948 except asyncio.CancelledError:
949 pass
File /usr/lib/python3.10/asyncio/futures.py:288, in Future.__await__(self)
286 if not self.done():
287 raise RuntimeError(\"await wasn't used with future\")
--> 288 return self.result()
File /usr/lib/python3.10/asyncio/futures.py:201, in Future.result(self)
199 self.__log_traceback = False
200 if self._exception is not None:
--> 201 raise self._exception.with_traceback(self._exception_tb)
202 return self._result
File /usr/lib/python3.10/asyncio/tasks.py:232, in Task.__step(***failed resolving arguments***)
228 try:
229 if exc is None:
230 # We use the `send` method directly, because coroutines
231 # don't have `__iter__` and `__next__` methods.
--> 232 result = coro.send(None)
233 else:
234 result = coro.throw(exc)
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/tracers/event_stream.py:907, in _astream_events_implementation_v2.<locals>.consume_astream()
904 try:
905 # if astream also calls tap_output_aiter this will be a no-op
906 async with aclosing(runnable.astream(input, config, **kwargs)) as stream:
--> 907 async for _ in event_streamer.tap_output_aiter(run_id, stream):
908 # All the content will be picked up
909 pass
910 finally:
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/tracers/event_stream.py:153, in _AstreamEventsCallbackHandler.tap_output_aiter(self, run_id, output)
151 tap = self.is_tapped.setdefault(run_id, sentinel)
152 # wait for first chunk
--> 153 first = await py_anext(output, default=sentinel)
154 if first is sentinel:
155 return
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/utils/aiter.py:65, in py_anext.<locals>.anext_impl()
58 async def anext_impl() -> Union[T, Any]:
59 try:
60 # The C code is way more low-level than this, as it implements
61 # all methods of the iterator protocol. In this implementation
62 # we're relying on higher-level coroutine concepts, but that's
63 # exactly what we want -- crosstest pure-Python high-level
64 # implementation and low-level C anext() iterators.
---> 65 return await __anext__(iterator)
66 except StopAsyncIteration:
67 return default
File ~/v3/.dev/lib/python3.10/site-packages/langchain/agents/agent.py:1595, in AgentExecutor.astream(self, input, config, **kwargs)
1583 config = ensure_config(config)
1584 iterator = AgentExecutorIterator(
1585 self,
1586 input,
(...)
1593 **kwargs,
1594 )
-> 1595 async for step in iterator:
1596 yield step
File ~/v3/.dev/lib/python3.10/site-packages/langchain/agents/agent_iterator.py:246, in AgentExecutorIterator.__aiter__(self)
240 while self.agent_executor._should_continue(
241 self.iterations, self.time_elapsed
242 ):
243 # take the next step: this plans next action, executes it,
244 # yielding action and observation as they are generated
245 next_step_seq: NextStepOutput = []
--> 246 async for chunk in self.agent_executor._aiter_next_step(
247 self.name_to_tool_map,
248 self.color_mapping,
249 self.inputs,
250 self.intermediate_steps,
251 run_manager,
252 ):
253 next_step_seq.append(chunk)
254 # if we're yielding actions, yield them as they come
255 # do not yield AgentFinish, which will be handled below
File ~/v3/.dev/lib/python3.10/site-packages/langchain/agents/agent.py:1304, in AgentExecutor._aiter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1301 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
1303 # Call the LLM to see what to do.
-> 1304 output = await self.agent.aplan(
1305 intermediate_steps,
1306 callbacks=run_manager.get_child() if run_manager else None,
1307 **inputs,
1308 )
1309 except OutputParserException as e:
1310 if isinstance(self.handle_parsing_errors, bool):
File ~/v3/.dev/lib/python3.10/site-packages/langchain/agents/agent.py:554, in RunnableMultiActionAgent.aplan(self, intermediate_steps, callbacks, **kwargs)
546 final_output: Any = None
547 if self.stream_runnable:
548 # Use streaming to make sure that the underlying LLM is invoked in a
549 # streaming
(...)
552 # Because the response from the plan is not a generator, we need to
553 # accumulate the output into final output and return that.
--> 554 async for chunk in self.runnable.astream(
555 inputs, config={\"callbacks\": callbacks}
556 ):
557 if final_output is None:
558 final_output = chunk
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:2910, in RunnableSequence.astream(self, input, config, **kwargs)
2907 async def input_aiter() -> AsyncIterator[Input]:
2908 yield input
-> 2910 async for chunk in self.atransform(input_aiter(), config, **kwargs):
2911 yield chunk
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:2893, in RunnableSequence.atransform(self, input, config, **kwargs)
2887 async def atransform(
2888 self,
2889 input: AsyncIterator[Input],
2890 config: Optional[RunnableConfig] = None,
2891 **kwargs: Optional[Any],
2892 ) -> AsyncIterator[Output]:
-> 2893 async for chunk in self._atransform_stream_with_config(
2894 input,
2895 self._atransform,
2896 patch_config(config, run_name=(config or {}).get(\"run_name\") or self.name),
2897 **kwargs,
2898 ):
2899 yield chunk
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:1981, in Runnable._atransform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
1976 chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
1977 py_anext(iterator), # type: ignore[arg-type]
1978 context=context,
1979 )
1980 else:
-> 1981 chunk = cast(Output, await py_anext(iterator))
1982 yield chunk
1983 if final_output_supported:
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/tracers/event_stream.py:153, in _AstreamEventsCallbackHandler.tap_output_aiter(self, run_id, output)
151 tap = self.is_tapped.setdefault(run_id, sentinel)
152 # wait for first chunk
--> 153 first = await py_anext(output, default=sentinel)
154 if first is sentinel:
155 return
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/utils/aiter.py:65, in py_anext.<locals>.anext_impl()
58 async def anext_impl() -> Union[T, Any]:
59 try:
60 # The C code is way more low-level than this, as it implements
61 # all methods of the iterator protocol. In this implementation
62 # we're relying on higher-level coroutine concepts, but that's
63 # exactly what we want -- crosstest pure-Python high-level
64 # implementation and low-level C anext() iterators.
---> 65 return await __anext__(iterator)
66 except StopAsyncIteration:
67 return default
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:2863, in RunnableSequence._atransform(self, input, run_manager, config, **kwargs)
2861 else:
2862 final_pipeline = step.atransform(final_pipeline, config)
-> 2863 async for output in final_pipeline:
2864 yield output
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:1197, in Runnable.atransform(self, input, config, **kwargs)
1194 final: Input
1195 got_first_val = False
-> 1197 async for ichunk in input:
1198 # The default implementation of transform is to buffer input and
1199 # then call stream.
1200 # It'll attempt to gather all input into a single chunk using
1201 # the `+` operator.
1202 # If the input is not addable, then we'll assume that we can
1203 # only operate on the last chunk,
1204 # and we'll iterate until we get to the last chunk.
1205 if not got_first_val:
1206 final = ichunk
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:4811, in RunnableBindingBase.atransform(self, input, config, **kwargs)
4805 async def atransform(
4806 self,
4807 input: AsyncIterator[Input],
4808 config: Optional[RunnableConfig] = None,
4809 **kwargs: Any,
4810 ) -> AsyncIterator[Output]:
-> 4811 async for item in self.bound.atransform(
4812 input,
4813 self._merge_configs(config),
4814 **{**self.kwargs, **kwargs},
4815 ):
4816 yield item
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/runnables/base.py:1215, in Runnable.atransform(self, input, config, **kwargs)
1212 final = ichunk
1214 if got_first_val:
-> 1215 async for output in self.astream(final, config, **kwargs):
1216 yield output
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:417, in BaseChatModel.astream(self, input, config, stop, **kwargs)
412 except BaseException as e:
413 await run_manager.on_llm_error(
414 e,
415 response=LLMResult(generations=[[generation]] if generation else []),
416 )
--> 417 raise e
418 else:
419 await run_manager.on_llm_end(
420 LLMResult(generations=[[generation]]),
421 )
File ~/v3/.dev/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:395, in BaseChatModel.astream(self, input, config, stop, **kwargs)
393 generation: Optional[ChatGenerationChunk] = None
394 try:
--> 395 async for chunk in self._astream(
396 messages,
397 stop=stop,
398 **kwargs,
399 ):
400 if chunk.message.id is None:
401 chunk.message.id = f\"run-{run_manager.run_id}\"
File ~/v3/.dev/lib/python3.10/site-packages/langchain_anthropic/chat_models.py:701, in ChatAnthropic._astream(self, messages, stop, run_manager, stream_usage, **kwargs)
699 stream_usage = self.stream_usage
700 kwargs[\"stream\"] = True
--> 701 payload = self._get_request_payload(messages, stop=stop, **kwargs)
702 stream = await self._async_client.messages.create(**payload)
703 coerce_content_to_string = not _tools_in_params(payload)
File ~/v3/.dev/lib/python3.10/site-packages/langchain_anthropic/chat_models.py:647, in ChatAnthropic._get_request_payload(self, input_, stop, **kwargs)
639 def _get_request_payload(
640 self,
641 input_: LanguageModelInput,
(...)
644 **kwargs: Dict,
645 ) -> Dict:
646 messages = self._convert_input(input_).to_messages()
--> 647 system, formatted_messages = _format_messages(messages)
648 payload = {
649 \"model\": self.model,
650 \"max_tokens\": self.max_tokens,
(...)
658 **kwargs,
659 }
660 return {k: v for k, v in payload.items() if v is not None}
File ~/v3/.dev/lib/python3.10/site-packages/langchain_anthropic/chat_models.py:170, in _format_messages(messages)
167 system = message.content
168 continue
--> 170 role = _message_type_lookups[message.type]
171 content: Union[str, List]
173 if not isinstance(message.content, str):
174 # parse as dict
KeyError: 'function'"
}
```
### Description
the above error occurs when I add FunctionMessage to chathistory, and run the agent again.
ex:
1st run) input: what is apple stock rn - runs perfectly
2nd run) input: what is google stock rn - gives the above error
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.10
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.84
> langchain_anthropic: 0.1.19
> langchain_groq: 0.1.6
> langchain_mongodb: 0.1.6
> langchain_openai: 0.1.13
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langserve: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
``` | FunctionMessage doesn't work with astream_events api | https://api.github.com/repos/langchain-ai/langchain/issues/24007/comments | 2 | 2024-07-09T06:56:47Z | 2024-07-10T03:24:34Z | https://github.com/langchain-ai/langchain/issues/24007 | 2,397,312,975 | 24,007 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangGraph/LangChain rather than my code.
- [X] I am sure this is better as an issue [rather than a GitHub discussion](https://github.com/langchain-ai/langgraph/discussions/new/choose), since this is a LangGraph bug and not a design question.
### Example Code
```python
import getpass
api_endpoint = getpass.getpass("API Endpoint")
api_key = getpass.getpass("API Key")
from datetime import datetime
from langchain_core.messages import HumanMessage
from langchain_openai import AzureChatOpenAI
from langgraph.graph import END, MessageGraph
from langgraph.prebuilt import ToolExecutor
from langchain.tools import tool
from langchain_openai import AzureChatOpenAI
@tool
def file_saver(text: str) -> str:
"""Persist the given string to disk
"""
pass
model = AzureChatOpenAI(
deployment_name="cogdep-gpt-4o",
model_name="gpt-4o",
azure_endpoint=api_endpoint,
openai_api_key=api_key,
openai_api_type="azure",
openai_api_version="2024-05-01-preview",
streaming=True,
temperature=0.1
)
tools = [file_saver]
model = model.bind_tools(tools)
def get_agent_executor():
def should_continue(messages):
print(f"{datetime.now()}: Starting should_continue")
return "end"
async def call_model(messages):
response = await model.ainvoke(messages)
return response
workflow = MessageGraph()
workflow.add_node("agent", call_model)
workflow.set_entry_point("agent")
workflow.add_conditional_edges(
"agent",
should_continue,
{
"end": END,
},
)
return workflow.compile()
agent_executor = get_agent_executor()
messages = [HumanMessage(content="Think of a poem with 100 verses and save it to a file. Do not print it to me first.")]
async def run():
async for event in agent_executor.astream_events(messages, version="v1"):
kind = event["event"]
print(f"{datetime.now()}: Received event: {kind}")
await run()
```
### Error Message and Stack Trace (if applicable)
```shell
This is part of the output (in this case, there is a 23s gap between `on_chat_model_stream` and `on_chat_model_end`)
(...)
2024-07-09 05:29:35.705573: Received event: on_chat_model_stream
2024-07-09 05:29:35.713679: Received event: on_chat_model_stream
2024-07-09 05:29:35.724480: Received event: on_chat_model_stream
2024-07-09 05:29:35.753143: Received event: on_chat_model_stream
2024-07-09 05:29:58.571740: Received event: on_chat_model_end
2024-07-09 05:29:58.574671: Received event: on_chain_start
2024-07-09 05:29:58.576026: Received event: on_chain_end
2024-07-09 05:29:58.577963: Received event: on_chain_start
2024-07-09 05:29:58.578214: Starting should_continue
```
### Description
Hi!
When receiving an llm answer that leads to a tool call with a large amount of data within a parameter, we noticed that our program was blocked although we are using the async version. My guess is that the final message is built after the last message was streamed and this takes some time on the cpu? Also, is there a different approach that we could use?
Thank you very much!
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: langchain-ai/langgraph#1 SMP PREEMPT Thu Nov 16 10:49:20 UTC 2023
> Python Version: 3.11.6 | packaged by conda-forge | (main, Oct 3 2023, 11:57:02) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.2.11
> langchain: 0.2.6
> langsmith: 0.1.84
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
> langgraph: 0.1.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve
``` | Tool Calls with large parameters are blocking between on_chat_model_stream and on_chat_model_end | https://api.github.com/repos/langchain-ai/langchain/issues/24021/comments | 2 | 2024-07-09T05:41:26Z | 2024-07-09T13:52:50Z | https://github.com/langchain-ai/langchain/issues/24021 | 2,398,204,126 | 24,021 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import chromadb
from langchain_chroma.vectorstores import Chroma
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_core.documents import Document
client = chromadb.Client()
collection = client.create_collection(name="my_collection", metadata={"hnsw:space": "cosine"})
embedding_function = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2", model_kwargs = {'device': 'cuda'})
vector_store = Chroma(
client=client,
collection_name="my_collection",
embedding_function=embedding_function,
)
documents = [
Document(
id = '1', page_content = 'This is a document about fruit', metadata = {'title': 'First Doc'}
),
Document(
id = '2', page_content = 'This is a document about oranges', metadata = {'title': 'Second Doc'}
),
Document(
id = '3', page_content = 'I saw a lady wearing red dress', metadata = {'title': 'Third Doc'}
),
Document(
id = '4', page_content = 'Apples are red', metadata = {'title': 'Fourth Doc'}
),
]
vector_store.add_documents(documents)
print(vector_store._collection.get(include = ["documents"]))
print("db size ", vector_store._collection.count())
duplicate_document = [Document(
id = '1', page_content = 'This is a document about fruit', metadata = {'title': 'First Doc'}
)]
vector_store.add_documents(duplicate_document)
print(vector_store._collection.get(include = ["documents"]))
print("db size ", vector_store._collection.count())
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Langchain-chroma adds duplicate entry to the db, whereas Chromadb doesn't add duplicate entry. So, the behavior isn't same for Langchain-chroma and Chromadb.
import chromadb
from chromadb.utils import embedding_functions
client = chromadb.Client()
embedder = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="all-MiniLM-L6-v2",device='cuda')
collection = client.create_collection(name="my_collection", embedding_function = embedder, metadata={"hnsw:space": "cosine"})
collection.add(
documents=[
"This is a document about fruit",
"This is a document about oranges",
"I saw a lady wearing red dress",
"Apples are red",
],
ids=["1", "2", "3", "4"],
metadatas=[
{'title': 'First Doc'},
{'title': 'Second Doc'},
{'title': 'Third Doc'},
{'title': 'Fourth Doc'},
]
)
print(collection.get(include=['documents']))
print("db size ",collection.count())
collection.add(
documents=[
"This is a document about fruit",
],
ids=["1"],
metadatas=[
{'title': 'First Doc'}]
)
print(collection.get(include=['documents']))
print("db size ",collection.count())
### System Info
Python version: 3.10.10 | Langchain Chroma doesn't handle duplicate entry properly | https://api.github.com/repos/langchain-ai/langchain/issues/24005/comments | 0 | 2024-07-09T05:04:16Z | 2024-07-09T05:06:51Z | https://github.com/langchain-ai/langchain/issues/24005 | 2,397,137,567 | 24,005 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/migrate_agent/#memory
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
If I swap `model = ChatOpenAI(model="gpt-4o")` for: `ChatAnthropicVertex(model_name="claude-3-haiku@20240307", location="us-east5", project="my_gcp_project")`, then the [memory example](https://python.langchain.com/v0.2/docs/how_to/migrate_agent/#memory) throws the following error:
```console
{
"name": "ValueError",
"message": "Message dict must contain 'role' and 'content' keys, got {'text': '\
\
The magic_function applied to the input of 3 returns the output of 5.', 'type': 'text', 'index': 0}",
"stack": "---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File /usr/local/lib/python3.11/site-packages/langchain_core/messages/utils.py:271, in _convert_to_message(message)
270 msg_type = msg_kwargs.pop(\"type\")
--> 271 msg_content = msg_kwargs.pop(\"content\")
272 except KeyError:
KeyError: 'content'
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[79], line 55
49 print(
50 agent_with_chat_history.invoke(
51 {\"input\": \"Hi, I'm polly! What's the output of magic_function of 3?\"}, config
52 )[\"output\"]
53 )
54 print(\"---\")
---> 55 print(agent_with_chat_history.invoke({\"input\": \"Remember my name?\"}, config)[\"output\"])
56 print(\"---\")
57 print(
58 agent_with_chat_history.invoke({\"input\": \"what was that output again?\"}, config)[
59 \"output\"
60 ]
61 )
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:4580, in RunnableBindingBase.invoke(self, input, config, **kwargs)
4574 def invoke(
4575 self,
4576 input: Input,
4577 config: Optional[RunnableConfig] = None,
4578 **kwargs: Optional[Any],
4579 ) -> Output:
-> 4580 return self.bound.invoke(
4581 input,
4582 self._merge_configs(config),
4583 **{**self.kwargs, **kwargs},
4584 )
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:4580, in RunnableBindingBase.invoke(self, input, config, **kwargs)
4574 def invoke(
4575 self,
4576 input: Input,
4577 config: Optional[RunnableConfig] = None,
4578 **kwargs: Optional[Any],
4579 ) -> Output:
-> 4580 return self.bound.invoke(
4581 input,
4582 self._merge_configs(config),
4583 **{**self.kwargs, **kwargs},
4584 )
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:2499, in RunnableSequence.invoke(self, input, config, **kwargs)
2497 input = step.invoke(input, config, **kwargs)
2498 else:
-> 2499 input = step.invoke(input, config)
2500 # finish the root run
2501 except BaseException as e:
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/branch.py:212, in RunnableBranch.invoke(self, input, config, **kwargs)
210 break
211 else:
--> 212 output = self.default.invoke(
213 input,
214 config=patch_config(
215 config, callbacks=run_manager.get_child(tag=\"branch:default\")
216 ),
217 **kwargs,
218 )
219 except BaseException as e:
220 run_manager.on_chain_error(e)
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:4580, in RunnableBindingBase.invoke(self, input, config, **kwargs)
4574 def invoke(
4575 self,
4576 input: Input,
4577 config: Optional[RunnableConfig] = None,
4578 **kwargs: Optional[Any],
4579 ) -> Output:
-> 4580 return self.bound.invoke(
4581 input,
4582 self._merge_configs(config),
4583 **{**self.kwargs, **kwargs},
4584 )
File /usr/local/lib/python3.11/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs)
164 except BaseException as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
169 if include_run_info:
File /usr/local/lib/python3.11/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
153 try:
154 self._validate_inputs(inputs)
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
159 )
161 final_outputs: Dict[str, Any] = self.prep_outputs(
162 inputs, outputs, return_only_outputs
163 )
164 except BaseException as e:
File /usr/local/lib/python3.11/site-packages/langchain/agents/agent.py:1636, in AgentExecutor._call(self, inputs, run_manager)
1634 # We now enter the agent loop (until it returns something).
1635 while self._should_continue(iterations, time_elapsed):
-> 1636 next_step_output = self._take_next_step(
1637 name_to_tool_map,
1638 color_mapping,
1639 inputs,
1640 intermediate_steps,
1641 run_manager=run_manager,
1642 )
1643 if isinstance(next_step_output, AgentFinish):
1644 return self._return(
1645 next_step_output, intermediate_steps, run_manager=run_manager
1646 )
File /usr/local/lib/python3.11/site-packages/langchain/agents/agent.py:1342, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1333 def _take_next_step(
1334 self,
1335 name_to_tool_map: Dict[str, BaseTool],
(...)
1339 run_manager: Optional[CallbackManagerForChainRun] = None,
1340 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1341 return self._consume_next_step(
-> 1342 [
1343 a
1344 for a in self._iter_next_step(
1345 name_to_tool_map,
1346 color_mapping,
1347 inputs,
1348 intermediate_steps,
1349 run_manager,
1350 )
1351 ]
1352 )
File /usr/local/lib/python3.11/site-packages/langchain/agents/agent.py:1342, in <listcomp>(.0)
1333 def _take_next_step(
1334 self,
1335 name_to_tool_map: Dict[str, BaseTool],
(...)
1339 run_manager: Optional[CallbackManagerForChainRun] = None,
1340 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1341 return self._consume_next_step(
-> 1342 [
1343 a
1344 for a in self._iter_next_step(
1345 name_to_tool_map,
1346 color_mapping,
1347 inputs,
1348 intermediate_steps,
1349 run_manager,
1350 )
1351 ]
1352 )
File /usr/local/lib/python3.11/site-packages/langchain/agents/agent.py:1370, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1367 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
1369 # Call the LLM to see what to do.
-> 1370 output = self.agent.plan(
1371 intermediate_steps,
1372 callbacks=run_manager.get_child() if run_manager else None,
1373 **inputs,
1374 )
1375 except OutputParserException as e:
1376 if isinstance(self.handle_parsing_errors, bool):
File /usr/local/lib/python3.11/site-packages/langchain/agents/agent.py:580, in RunnableMultiActionAgent.plan(self, intermediate_steps, callbacks, **kwargs)
572 final_output: Any = None
573 if self.stream_runnable:
574 # Use streaming to make sure that the underlying LLM is invoked in a
575 # streaming
(...)
578 # Because the response from the plan is not a generator, we need to
579 # accumulate the output into final output and return that.
--> 580 for chunk in self.runnable.stream(inputs, config={\"callbacks\": callbacks}):
581 if final_output is None:
582 final_output = chunk
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:2877, in RunnableSequence.stream(self, input, config, **kwargs)
2871 def stream(
2872 self,
2873 input: Input,
2874 config: Optional[RunnableConfig] = None,
2875 **kwargs: Optional[Any],
2876 ) -> Iterator[Output]:
-> 2877 yield from self.transform(iter([input]), config, **kwargs)
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:2864, in RunnableSequence.transform(self, input, config, **kwargs)
2858 def transform(
2859 self,
2860 input: Iterator[Input],
2861 config: Optional[RunnableConfig] = None,
2862 **kwargs: Optional[Any],
2863 ) -> Iterator[Output]:
-> 2864 yield from self._transform_stream_with_config(
2865 input,
2866 self._transform,
2867 patch_config(config, run_name=(config or {}).get(\"run_name\") or self.name),
2868 **kwargs,
2869 )
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:1862, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
1860 try:
1861 while True:
-> 1862 chunk: Output = context.run(next, iterator) # type: ignore
1863 yield chunk
1864 if final_output_supported:
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:2826, in RunnableSequence._transform(self, input, run_manager, config, **kwargs)
2823 else:
2824 final_pipeline = step.transform(final_pipeline, config)
-> 2826 for output in final_pipeline:
2827 yield output
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:1157, in Runnable.transform(self, input, config, **kwargs)
1154 final: Input
1155 got_first_val = False
-> 1157 for ichunk in input:
1158 # The default implementation of transform is to buffer input and
1159 # then call stream.
1160 # It'll attempt to gather all input into a single chunk using
1161 # the `+` operator.
1162 # If the input is not addable, then we'll assume that we can
1163 # only operate on the last chunk,
1164 # and we'll iterate until we get to the last chunk.
1165 if not got_first_val:
1166 final = ichunk
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:4787, in RunnableBindingBase.transform(self, input, config, **kwargs)
4781 def transform(
4782 self,
4783 input: Iterator[Input],
4784 config: Optional[RunnableConfig] = None,
4785 **kwargs: Any,
4786 ) -> Iterator[Output]:
-> 4787 yield from self.bound.transform(
4788 input,
4789 self._merge_configs(config),
4790 **{**self.kwargs, **kwargs},
4791 )
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:1157, in Runnable.transform(self, input, config, **kwargs)
1154 final: Input
1155 got_first_val = False
-> 1157 for ichunk in input:
1158 # The default implementation of transform is to buffer input and
1159 # then call stream.
1160 # It'll attempt to gather all input into a single chunk using
1161 # the `+` operator.
1162 # If the input is not addable, then we'll assume that we can
1163 # only operate on the last chunk,
1164 # and we'll iterate until we get to the last chunk.
1165 if not got_first_val:
1166 final = ichunk
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:1175, in Runnable.transform(self, input, config, **kwargs)
1172 final = ichunk
1174 if got_first_val:
-> 1175 yield from self.stream(final, config, **kwargs)
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:812, in Runnable.stream(self, input, config, **kwargs)
802 def stream(
803 self,
804 input: Input,
805 config: Optional[RunnableConfig] = None,
806 **kwargs: Optional[Any],
807 ) -> Iterator[Output]:
808 \"\"\"
809 Default implementation of stream, which calls invoke.
810 Subclasses should override this method if they support streaming output.
811 \"\"\"
--> 812 yield self.invoke(input, config, **kwargs)
File /usr/local/lib/python3.11/site-packages/langchain_core/prompts/base.py:179, in BasePromptTemplate.invoke(self, input, config)
177 if self.tags:
178 config[\"tags\"] = config[\"tags\"] + self.tags
--> 179 return self._call_with_config(
180 self._format_prompt_with_error_handling,
181 input,
182 config,
183 run_type=\"prompt\",
184 )
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py:1593, in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
1589 context = copy_context()
1590 context.run(_set_config_context, child_config)
1591 output = cast(
1592 Output,
-> 1593 context.run(
1594 call_func_with_variable_args, # type: ignore[arg-type]
1595 func, # type: ignore[arg-type]
1596 input, # type: ignore[arg-type]
1597 config,
1598 run_manager,
1599 **kwargs,
1600 ),
1601 )
1602 except BaseException as e:
1603 run_manager.on_chain_error(e)
File /usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py:380, in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
378 if run_manager is not None and accepts_run_manager(func):
379 kwargs[\"run_manager\"] = run_manager
--> 380 return func(input, **kwargs)
File /usr/local/lib/python3.11/site-packages/langchain_core/prompts/base.py:154, in BasePromptTemplate._format_prompt_with_error_handling(self, inner_input)
152 def _format_prompt_with_error_handling(self, inner_input: Dict) -> PromptValue:
153 _inner_input = self._validate_input(inner_input)
--> 154 return self.format_prompt(**_inner_input)
File /usr/local/lib/python3.11/site-packages/langchain_core/prompts/chat.py:765, in BaseChatPromptTemplate.format_prompt(self, **kwargs)
756 def format_prompt(self, **kwargs: Any) -> PromptValue:
757 \"\"\"Format prompt. Should return a PromptValue.
758
759 Args:
(...)
763 PromptValue.
764 \"\"\"
--> 765 messages = self.format_messages(**kwargs)
766 return ChatPromptValue(messages=messages)
File /usr/local/lib/python3.11/site-packages/langchain_core/prompts/chat.py:1142, in ChatPromptTemplate.format_messages(self, **kwargs)
1138 result.extend([message_template])
1139 elif isinstance(
1140 message_template, (BaseMessagePromptTemplate, BaseChatPromptTemplate)
1141 ):
-> 1142 message = message_template.format_messages(**kwargs)
1143 result.extend(message)
1144 else:
File /usr/local/lib/python3.11/site-packages/langchain_core/prompts/chat.py:235, in MessagesPlaceholder.format_messages(self, **kwargs)
230 if not isinstance(value, list):
231 raise ValueError(
232 f\"variable {self.variable_name} should be a list of base messages, \"
233 f\"got {value}\"
234 )
--> 235 value = convert_to_messages(value)
236 if self.n_messages:
237 value = value[-self.n_messages :]
File /usr/local/lib/python3.11/site-packages/langchain_core/messages/utils.py:296, in convert_to_messages(messages)
285 def convert_to_messages(
286 messages: Sequence[MessageLikeRepresentation],
287 ) -> List[BaseMessage]:
288 \"\"\"Convert a sequence of messages to a list of messages.
289
290 Args:
(...)
294 List of messages (BaseMessages).
295 \"\"\"
--> 296 return [_convert_to_message(m) for m in messages]
File /usr/local/lib/python3.11/site-packages/langchain_core/messages/utils.py:296, in <listcomp>(.0)
285 def convert_to_messages(
286 messages: Sequence[MessageLikeRepresentation],
287 ) -> List[BaseMessage]:
288 \"\"\"Convert a sequence of messages to a list of messages.
289
290 Args:
(...)
294 List of messages (BaseMessages).
295 \"\"\"
--> 296 return [_convert_to_message(m) for m in messages]
File /usr/local/lib/python3.11/site-packages/langchain_core/messages/utils.py:273, in _convert_to_message(message)
271 msg_content = msg_kwargs.pop(\"content\")
272 except KeyError:
--> 273 raise ValueError(
274 f\"Message dict must contain 'role' and 'content' keys, got {message}\"
275 )
276 _message = _create_message_from_message_type(
277 msg_type, msg_content, **msg_kwargs
278 )
279 else:
ValueError: Message dict must contain 'role' and 'content' keys, got {'text': '\
\
The magic_function applied to the input of 3 returns the output of 5.', 'type': 'text', 'index': 0}"
}
```
...and it is unclear why.
My installed langchain packages:
```
langchain 0.2.7
langchain-community 0.2.7
langchain-core 0.2.12
langchain-google-vertexai 1.0.6
langchain-groq 0.1.6
langchain-openai 0.1.14
langchain-text-splitters 0.2.2
langchainhub 0.1.20
```
> Note: the example code works fine, if `model = ChatOpenAI(model="gpt-4o")` is used instead of the Claude-3 model.
### Idea or request for content:
It would be very helpful to show how one must change the [memory example](https://python.langchain.com/v0.2/docs/how_to/migrate_agent/#memory) code, depending on the LLM used | DOC: how_to/migrate_agent/#memory => example does not work with other models | https://api.github.com/repos/langchain-ai/langchain/issues/24003/comments | 1 | 2024-07-09T03:45:35Z | 2024-07-09T15:32:44Z | https://github.com/langchain-ai/langchain/issues/24003 | 2,397,041,829 | 24,003 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code similarity search type is set to cosine. `self._similarity_type = DocumentDBSimilarityType.CO`
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/documentdb.py
```
def __init__(
self,
collection: Collection[DocumentDBDocumentType],
embedding: Embeddings,
*,
index_name: str = "vectorSearchIndex",
text_key: str = "textContent",
embedding_key: str = "vectorContent",
):
"""Constructor for DocumentDBVectorSearch
Args:
collection: MongoDB collection to add the texts to.
embedding: Text embedding model to use.
index_name: Name of the Vector Search index.
text_key: MongoDB field that will contain the text
for each document.
embedding_key: MongoDB field that will contain the embedding
for each document.
"""
self._collection = collection
self._embedding = embedding
self._index_name = index_name
self._text_key = text_key
self._embedding_key = embedding_key
self._similarity_type = DocumentDBSimilarityType.COS
```
so even if user provide different similarity type in invoke, it has no effect -
```
retriever = vector_store.as_retriever(
search_type="similarity",
search_kwargs={"k": 5, 'filter': filter, "similarity": "dotProduct"},
)
```
`similarity` is as per pymongo search pipeline.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
* I am using latest community version - langchain-community 0.2.6
* I expect if I set similiary-type in search keywords then its should be propogated to pymongo pipeline
### System Info
Package Information
-------------------
> langchain_core: 0.2.11
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.83
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
" | [community] aws documentDB similarity search type not configurable | https://api.github.com/repos/langchain-ai/langchain/issues/23975/comments | 0 | 2024-07-08T14:44:57Z | 2024-07-08T14:47:34Z | https://github.com/langchain-ai/langchain/issues/23975 | 2,395,847,128 | 23,975 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
``` python
# Filtering pipeling working in pymongo used to filter on a list of file_ids
query_embedding = self.embedding_client.embed_query(query)
pipeline = [
{
'$search': {
"cosmosSearch": {
"vector": query_embedding,
"path": "vectorContent",
"k": 5, #, #, "efsearch": 40 # optional for HNSW only
"filter": {"fileId": {'$in': file_ids}}
},
"returnStoredSource": True }},
{'$project': {
'similarityScore': { '$meta': 'searchScore' },
'document' : '$$ROOT'
}
},
]
docs = self.mongo_collection.aggregate(pipeline)
```
# Current implementation
``` python
def _get_pipeline_vector_ivf(
self, embeddings: List[float], k: int = 4
) -> List[dict[str, Any]]:
pipeline: List[dict[str, Any]] = [
{
"$search": {
"cosmosSearch": {
"vector": embeddings,
"path": self._embedding_key,
"k": k,
},
"returnStoredSource": True,
}
},
{
"$project": {
"similarityScore": {"$meta": "searchScore"},
"document": "$$ROOT",
}
},
]
return pipeline
def _get_pipeline_vector_hnsw(
self, embeddings: List[float], k: int = 4, ef_search: int = 40
) -> List[dict[str, Any]]:
pipeline: List[dict[str, Any]] = [
{
"$search": {
"cosmosSearch": {
"vector": embeddings,
"path": self._embedding_key,
"k": k,
"efSearch": ef_search,
},
}
},
{
"$project": {
"similarityScore": {"$meta": "searchScore"},
"document": "$$ROOT",
}
},
]
return pipeline
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
As stated in the langchain documentation filtering in Azure Cosmos DB Mongo vCore should be supported: https://python.langchain.com/v0.2/docs/integrations/vectorstores/azure_cosmos_db/
Filtering works when I apply my MongoDB query directly using pyomongo as shown in the example. However, through langchain the same filters are not applied. I tried using the filter, pre_filter, search_kwargs and kwargs parameters, but to no avail.
``` python
docs = self.vectorstore.similarity_search(query,
k=5,
pre_filter = {'fileId': {'$in': ["31c283c2-ac31-4260-a8d0-864f444c33ee]"}}
)
```
Upon closer inspection of the source code, I see that no filter key is present in the query dictionary and see no kwargs, search_kwargs being passed, which could be the reason.
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/azure_cosmos_db.py
Any input on this issue?
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.10
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.82
> langchain_openai: 0.1.13
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | AzureCosmosDBVectorSearch filter not working | https://api.github.com/repos/langchain-ai/langchain/issues/23963/comments | 2 | 2024-07-08T09:37:53Z | 2024-07-25T18:06:13Z | https://github.com/langchain-ai/langchain/issues/23963 | 2,395,146,622 | 23,963 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
def custom_parser(self, inputs: AIMessage):
output = {"answer": None}
try:
json_parser = JsonOutputParser(pydantic_object=JsonOutputParserModel)
json_output = json_parser.parse(inputs.content) # type: ignore
print('json_output',json_output)
output["answer"] = json_output.get("answer", None)
except (json.JSONDecodeError, OutputParserException) as e:
parser = StrOutputParser()
str_output = parser.invoke(inputs.content) # type: ignore
output["answer"] = str_output
except Exception as e:
print(f"Custom Parsing Error :{e}")
return output
### Error Message and Stack Trace (if applicable)
json_output 90
Custom Parsing Error :'int' object has no attribute 'get'
### Description
I am trying to use langchain json parsing lib to parse a text into json.
Input of the custom_parser is model response : AI message, function checks if the respnse can be parse as json then try block will be executed and it the message can't be parsed as json, then JSON Decode Error block will be executed. The function is written in such a way that it can handle plain string as response or stringify json.
Issue is when i pass a string message:
`custom_parser("90. Yes, you need to file the dispute")`
output is:
json_output 90
Custom Parsing Error :'int' object has no attribute 'get'
But if i pass string message without number then it goes to second block and code get executed as usual
`custom_parser(" Yes, you need to file the dispute")`
Output:
{'answer': 'you need to file the dispute'}
### System Info
langchain==0.2.0
langchain-community==0.2.0
langchain-core==0.2.1
langchain-openai==0.1.7
langchain-text-splitters==0.2.0 | json parser failed to parse full text if text startes with a number | https://api.github.com/repos/langchain-ai/langchain/issues/23960/comments | 3 | 2024-07-08T08:12:16Z | 2024-07-10T19:06:32Z | https://github.com/langchain-ai/langchain/issues/23960 | 2,394,954,354 | 23,960 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/rag/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The documentation and corresponding code provided with https://python.langchain.com/v0.2/docs/tutorials/rag/ has an issue, the moment I run the stub with this pipeline
```python
rag_chain = (
{"context": retriever | format_docs, "question": RunnablePassthrough()}
| prompt
| llm
| StrOutputParser()
)
```
this error is thrown :
```python
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Cell In[1], line 37
30 def format_docs(docs):
31 return "\n\n".join(doc.page_content for doc in docs)
34 rag_chain = (
35 {"context": retriever | format_docs, "question": RunnablePassthrough()}
36 | prompt
---> 37 | llm
38 | StrOutputParser()
39 )
41 rag_chain.invoke("What is Task Decomposition?")
NameError: name 'llm' is not defined
```
It loooks like there needs to be a change to either the installations before running this cell or there is something wrong with the code in the cell.
Either way the document must be updated so that this error does not occur.
### Idea or request for content:
If this issue is specific to a particular python / pip version then there can be a "if you get this kind of error" section where this is highlighted along with resolution | DOC: issue with rag tutorial code : name 'llm' is not defined | https://api.github.com/repos/langchain-ai/langchain/issues/23958/comments | 1 | 2024-07-08T05:16:14Z | 2024-07-08T05:32:25Z | https://github.com/langchain-ai/langchain/issues/23958 | 2,394,652,254 | 23,958 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import torch
from langchain_community.document_loaders import YoutubeAudioLoader
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers.audio import (
FasterWhisperParser
)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# float32
compute_type = "float16" if device == 'cuda' else 'int8'
yt_video_url = 'https://www.youtube.com/watch?v=1bUy-1hGZpI&ab_channel=IBMTechnology'
yt_loader_faster_whisper = GenericLoader(
blob_loader=YoutubeAudioLoader([ yt_video_url], '.'),
blob_parser=FasterWhisperParser(device=device)
# no possibility to define compute_type
# Error: ValueError: Requested float16 compute type, but the target device or backend do not support efficient float16 computation.
# blob_parser=FasterWhisperParser(device=device, compute_type=compute_type)
)
yt_data = yt_loader_faster_whisper.load()
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "python/helpers/pydev/pydevd.py", line 1551, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "AI-POC/frameworks/langchain/01_chat_with_data/main.py", line 133, in <module>
docs_load()
File "AI-POC/frameworks/langchain/01_chat_with_data/main.py", line 123, in docs_load
get_youtube(use_paid_services=False, faster_whisper=True, wisper_local=False)
File "AI-POC/frameworks/langchain/01_chat_with_data/main.py", line 108, in get_youtube
yt_data = yt_loader_faster_whisper.load()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "AI-POC/.venv/lib/python3.11/site-packages/langchain_core/document_loaders/base.py", line 29, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "AI-POC/.venv/lib/python3.11/site-packages/langchain_community/document_loaders/generic.py", line 116, in lazy_load
yield from self.blob_parser.lazy_parse(blob)
File "AI-POC/.venv/lib/python3.11/site-packages/langchain_community/document_loaders/parsers/audio.py", line 467, in lazy_parse
model = WhisperModel(
^^^^^^^^^^^^^
File "AI-POC/.venv/lib/python3.11/site-packages/faster_whisper/transcribe.py", line 145, in __init__
self.model = ctranslate2.models.Whisper(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: Requested float16 compute type, but the target device or backend do not support efficient float16 computation.
```
### Description
I'm trying to use the FasterWhisperParser class from the langchain_community package to parse audio data. I want to be able to use a GPU if one is available, and fall back to a CPU otherwise.
I'm trying to set the compute_type to 'float16' when using a GPU and 'int8' when using a CPU. However, I'm encountering an issue because the FasterWhisperParser class doesn't accept a compute_type argument. When I try to use a CPU, I get a ValueError because 'float16' computation isn't efficiently supported on CPUs.
### System Info
```
$ python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu May 11 15:56:33 UTC 2023
> Python Version: 3.11.6 (main, Oct 3 2023, 00:00:00) [GCC 12.3.1 20230508 (Red Hat 12.3.1-1)]
Package Information
-------------------
> langchain_core: 0.2.11
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.83
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | No possibility to define WhisperModel compute_type when using GenericLoader with blob_parser=FasterWhisperParser | https://api.github.com/repos/langchain-ai/langchain/issues/23953/comments | 1 | 2024-07-07T17:17:36Z | 2024-07-07T17:24:46Z | https://github.com/langchain-ai/langchain/issues/23953 | 2,394,140,669 | 23,953 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
llm = ChatOpenAI(temperature=temperature, openai_api_key="1234")
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
persist_directory = "./example"
collection_name = "Example"
vectorstore = Chroma(embedding_function=embeddings, collection_name=collection_name, persist_directory=persist_directory)
metadata_field_info = [
AttributeInfo(
name="Title",
description="The title of the document",
type="string",
),
AttributeInfo(
name="Body",
description="The body of the document",
type="string",
)
]
document_contents = "Langchain test"
documents = []
retriever = SelfQueryRetriever.from_llm(
llm=llm,
vectorstore=vectorstore,
metadata_field_info=metadata_field_info,
document_contents=document_contents,
verbose = True,
structured_query_translator = ChromaTranslator()
)
retriever.add_documents(documents, ids=None)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/conjurors/notebooks/rag.py", line 378, in retriever_self_query
retriever.add_documents(documents, ids=None)
AttributeError: 'SelfQueryRetriever' object has no attribute 'add_documents'
### Description
SelfQueryRetriever does not have addDocuments while the other Retrievers have it
### System Info
langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.11
langchain-experimental==0.0.53
langchain-groq==0.0.1
langchain-huggingface==0.0.3
langchain-openai==0.1.14
langchain-text-splitters==0.2.2
MacOS 14.5
Python 3.9.18 | 'SelfQueryRetriever' object has no attribute 'add_documents' | https://api.github.com/repos/langchain-ai/langchain/issues/23952/comments | 0 | 2024-07-07T16:56:41Z | 2024-07-07T16:59:09Z | https://github.com/langchain-ai/langchain/issues/23952 | 2,394,133,118 | 23,952 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/local_rag/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I was following the instruction with this tutorial and I get the error message below while creating vectorstore. It seems required a model name. However, I am clueless about what I should put in this parameter.
Thanks in advance for any assistance.
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[45], [line 4](vscode-notebook-cell:?execution_count=45&line=4)
[1](vscode-notebook-cell:?execution_count=45&line=1) from langchain_chroma import Chroma
[2](vscode-notebook-cell:?execution_count=45&line=2) from langchain_community.embeddings import GPT4AllEmbeddings
----> [4](vscode-notebook-cell:?execution_count=45&line=4) vectorstore = Chroma.from_documents(documents=all_splits, embedding=GPT4AllEmbeddings())
File c:\Users\eddie.DESKTOP-J0CLNTS\.conda\envs\langchain\Lib\site-packages\pydantic\v1\main.py:339, in BaseModel.__init__(__pydantic_self__, **data)
[333](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:333) """
[334](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:334) Create a new model by parsing and validating input data from keyword arguments.
[335](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:335)
[336](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:336) Raises ValidationError if the input data cannot be parsed to form a valid model.
[337](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:337) """
[338](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:338) # Uses something other than `self` the first arg to allow "self" as a settable attribute
--> [339](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:339) values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
[340](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:340) if validation_error:
[341](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:341) raise validation_error
File c:\Users\eddie.DESKTOP-J0CLNTS\.conda\envs\langchain\Lib\site-packages\pydantic\v1\main.py:1100, in validate_model(model, input_data, cls)
[1098](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:1098) continue
[1099](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:1099) try:
-> [1100](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:1100) values = validator(cls_, values)
[1101](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:1101) except (ValueError, TypeError, AssertionError) as exc:
[1102](file:///C:/Users/eddie.DESKTOP-J0CLNTS/.conda/envs/langchain/Lib/site-packages/pydantic/v1/main.py:1102) errors.append(ErrorWrapper(exc, loc=ROOT_KEY))
...
[47](https://file+.vscode-resource.vscode-cdn.net/c%3A/langchain_Exercise/~/AppData/Roaming/Python/Python311/site-packages/langchain_community/embeddings/gpt4all.py:47) "Please install the gpt4all library to "
[48](https://file+.vscode-resource.vscode-cdn.net/c%3A/langchain_Exercise/~/AppData/Roaming/Python/Python311/site-packages/langchain_community/embeddings/gpt4all.py:48) "use this embedding model: pip install gpt4all"
[49](https://file+.vscode-resource.vscode-cdn.net/c%3A/langchain_Exercise/~/AppData/Roaming/Python/Python311/site-packages/langchain_community/embeddings/gpt4all.py:49) )
KeyError: 'model_name'
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/tutorials/local_rag/> Failed to create vectorstore using GPT4AllEmbedding() | https://api.github.com/repos/langchain-ai/langchain/issues/23949/comments | 0 | 2024-07-07T12:53:39Z | 2024-07-07T12:56:09Z | https://github.com/langchain-ai/langchain/issues/23949 | 2,394,043,449 | 23,949 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am following the doc: https://python.langchain.com/v0.2/docs/how_to/graph_mapping/
using a database where Neo4J labels have colon, for example, `biolink:Disease`, `biolink:treats`.
This seems to break the `CypherQueryCorrector` from `langchain.chains.graph_qa.cypher_utils import`.
The query is corrected to `""` even when it is valid.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
See above.
### System Info
```
langchain==0.2.6
langchain-cli==0.0.25
langchain-community==0.2.6
langchain-core==0.2.11
langchain-experimental==0.0.62
langchain-text-splitters==0.2.2
``` | CypherQueryCorrector does not handle labels with :, such as `biolink:Disease` | https://api.github.com/repos/langchain-ai/langchain/issues/23946/comments | 0 | 2024-07-07T08:50:11Z | 2024-07-07T08:52:37Z | https://github.com/langchain-ai/langchain/issues/23946 | 2,393,957,847 | 23,946 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain.chat_models import AzureChatOpenAI
chat = AzureChatOpenAI(
azure_deployment=VISION_DEPLOYMENT,
azure_endpoint=os.get_env('AZURE_ENDPOINT'),
openai_api_version=os.getenv("OPENAI_API_VERSION"),
openai_api_key=os.getenv("AZURE_VISION_TOKEN"),
max_tokens=4096
)
from typing import Optional
from langchain_core.pydantic_v1 import BaseModel, Field
class Joke(BaseModel):
"""Joke to tell user."""
setup: str = Field(description="The setup of the joke")
chat.with_structured_output(Joke)
### Error Message and Stack Trace (if applicable)
NotImplementedError: with_structured_output is not implemented for this model.
### Description
There are two definitions for AzureChatOpenAI
`from langchain.chat_models import AzureChatOpenAI`
and
`from langchain_openai.chat_models import AzureChatOpenAI`
Using latest verstions, the former does not include with_structured_output method, whereas the latter does. In my naive opinion langchain_openai ( the working one) must prevail, and deprecate the other one.
Thanks
### System Info
pip freeze G langchain
langchain==0.2.6
langchain-anthropic==0.1.15
langchain-astradb==0.3.3
langchain-aws==0.1.7
langchain-chroma==0.1.1
langchain-cohere==0.1.8
langchain-community==0.2.6
langchain-core==0.2.11
langchain-experimental==0.0.62
langchain-google-genai==1.0.6
langchain-google-vertexai==1.0.5
langchain-groq==0.1.5
langchain-mistralai==0.1.8
langchain-mongodb==0.1.6
langchain-openai==0.1.14
langchain-pinecone==0.1.1
langchain-text-splitters==0.2.1
langchainhub==0.1.20
platform Linux pop-os
python 3.10.12 | Two conflicting declarations of AzureChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/23936/comments | 0 | 2024-07-06T18:39:20Z | 2024-07-06T18:41:50Z | https://github.com/langchain-ai/langchain/issues/23936 | 2,393,659,645 | 23,936 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
chain = RetrievalQAWithSourcesChain(
reduce_k_below_max_tokens=True,
max_tokens_limit=16000,
combine_documents_chain=load_qa_with_sources_chain(
ChatGoogleGenerativeAI(model="gemini-1.5-flash", temperature=0, callbacks=[UsageHandler()]),
chain_type=self.chain_type, prompt=self.prompt),
memory=self.memory, retriever=self.vector_db.as_retriever(search_kwargs={"k": 3]}))
result = chain.invoke()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
### Current output at result['answer']
"Lorem ipsum ["
### Expected
"Lorem ipsum [Source: xyz1]
Lorem ipsum [Source: xyz2]
Lorem ipsum [Source: xyz3]"
- This is the output message from the model
- Verified this by checking the response in `on_llm_end` callback
Some points:
- I have a prompt saying it should site the sources
- I have been using other models too (GPT 3.5, Llama3 8B) and only experiencing this with `Gemini 1.5 Flash` probably because this is format it mentions the sources which is not supported currently
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.9.12 (main, Apr 5 2022, 06:56:58)
[GCC 7.5.0]
Package Information
-------------------
> langchain_core: 0.2.10
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.82
> langchain_google_genai: 1.0.6
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.2
> langchain_together: 0.1.3
> langchain_voyageai: 0.1.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | `_split_sources` of `BaseQAWithSourcesChain` prematurely truncates the Gemini Model's outputs, at the First Instance of `[Source: xyz]` | https://api.github.com/repos/langchain-ai/langchain/issues/23932/comments | 1 | 2024-07-06T12:03:09Z | 2024-08-02T11:10:11Z | https://github.com/langchain-ai/langchain/issues/23932 | 2,393,536,512 | 23,932 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
template = ChatPromptTemplate.from_messages(
messages= messages,
template_format= "jinja2"
)
```
### Error Message and Stack Trace (if applicable)
Warning: Literal type does not include "jinja2".
### Description
The `from_messages` method has a type error for the `template_format` parameter. When setting `template_format` to "jinja2", a warning is displayed even though "jinja2" works without any problem. It seems that "jinja2" is implemented internally, so the type definition should be modified to include it.
### System Info
python-versions = "^3.11" | Class ChatPromptTemplate > def from_messages > template_format bug | https://api.github.com/repos/langchain-ai/langchain/issues/23929/comments | 0 | 2024-07-06T06:55:53Z | 2024-07-16T13:09:44Z | https://github.com/langchain-ai/langchain/issues/23929 | 2,393,443,504 | 23,929 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
testing! | test | https://api.github.com/repos/langchain-ai/langchain/issues/23925/comments | 0 | 2024-07-05T21:41:05Z | 2024-07-18T15:48:08Z | https://github.com/langchain-ai/langchain/issues/23925 | 2,393,170,793 | 23,925 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
testing | test | https://api.github.com/repos/langchain-ai/langchain/issues/23922/comments | 0 | 2024-07-05T20:07:52Z | 2024-07-05T20:09:51Z | https://github.com/langchain-ai/langchain/issues/23922 | 2,393,089,068 | 23,922 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
`.tool_calls` attribute of aggregated chunks can be empty, whereas result of `.invoke` is not.
```python
from langchain_anthropic import ChatAnthropic
def magic_function() -> int:
"""Calculates a magic function."""
return 5
llm = ChatAnthropic(
model="claude-3-haiku-20240307",
).bind_tools([magic_function])
query = "What is the value of magic_function()?"
full = None
for chunk in llm.stream(query):
full = chunk if full is None else full + chunk
print(full.tool_calls)
print(llm.invoke(query).tool_calls)
```
```
[]
[{'name': 'magic_function', 'args': {}, 'id': 'toolu_01HHtSuCJ4LKQfGRYncy4D5a'}]
``` | bug: anthropic streaming tool calls for tools with no arguments | https://api.github.com/repos/langchain-ai/langchain/issues/23911/comments | 0 | 2024-07-05T15:07:55Z | 2024-07-05T18:57:42Z | https://github.com/langchain-ai/langchain/issues/23911 | 2,392,777,922 | 23,911 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code is self-contained **except you need to download the public dataset** I use in order to replicate the problem. Please read the documentation below for where to get it.
```
import asyncio,sys,csv, random
from typing import Any, Dict, List
from langchain_community.cache import SQLiteCache
from langchain.callbacks.base import AsyncCallbackHandler
from langchain_community.callbacks import get_openai_callback
from langchain_core.messages import HumanMessage
from langchain_core.outputs import LLMResult
from langchain_openai import ChatOpenAI
from langchain_core.globals import set_llm_cache
class CustomAsyncHandler(AsyncCallbackHandler):
async def on_chat_model_start(self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> None:
pass
"""Async callback handler that can be used to handle callbacks from langchain."""
async def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> None:
pass
async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
await asyncio.sleep(1)
#print(" >>> Your joke: {}".format(response.generations[0][0].text))
async def summarize(chat, text):
# To enable streaming, we pass in `streaming=True` to the ChatModel constructor
# Additionally, we pass in a list with our custom handler
res=None
with get_openai_callback() as cb:
res=await chat.agenerate([[HumanMessage(content="Summarise the following text in 50 words:\n\n```\n{}\n```".format(text))]])
return res.generations[0][0].text, cb.total_cost
async def test_batch(api, data:list, batchsize=10):
chat = ChatOpenAI(
model_name='gpt-4o',
openai_api_key=api,
callbacks=[CustomAsyncHandler()],
)
tasks = []
for i in range(0, batchsize):
text=random.choice(data)
tasks.append(summarize(chat, text))
return await asyncio.gather(*tasks)
def run_async(api, data:list, batchsize=10):
count = 1
loop = asyncio.new_event_loop()
while True:
print("[batch={} of {} chat jobs]".format(count, batchsize))
res = loop.run_until_complete(test_batch(api, data, batchsize))
total_cost=sum([r[1] for r in res])
print("\tcompleted with {} results, cost={}".format(len(res), total_cost))
count += 1
def read_sample_data(in_csv:str, topn_lines):
stop=0
rows=[]
with open(in_csv, mode='r', encoding='utf-8') as file:
# Create a CSV reader object with the specified delimiter and quote character
csv_reader = csv.reader(file, delimiter=',', quotechar='"')
for row in csv_reader:
if stop==0:
stop+=1
continue
rows.append(row[0])
stop+=1
if stop>topn_lines:
break
return rows
if __name__=='__main__':
cache = SQLiteCache(database_path="example_cache.db")
set_llm_cache(cache)
#this is downloaded from https://www.kaggle.com/datasets/alfathterry/bbc-full-text-document-classification?resource=download
#reads just the top n lines, then for each async job, takes a random text to summarise. eventually,
#everything should've been cached
data=read_sample_data('/home/zz/Data/news_text_samples/bbc_data.csv', topn_lines=100)
#this line will run forever until you stop it. batchsize indiciates how many parallel chat jobs to run
run_async(sys.argv[1], data, batchsize=20)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1960, in _exec_single_context
self.dialect.do_execute(
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute
cursor.execute(statement, parameters)
sqlite3.IntegrityError: UNIQUE constraint failed: full_llm_cache.prompt, full_llm_cache.llm, full_llm_cache.idx
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/zz/Work/charity-kg/src/charitykg/sandbox/example_langchain_async.py", line 84, in <module>
run_async(sys.argv[1], data, batchsize=20)
File "/home/zz/Work/charity-kg/src/charitykg/sandbox/example_langchain_async.py", line 55, in run_async
res = loop.run_until_complete(test_batch(api, data, batchsize))
File "/home/zz/Programs/miniconda3/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/home/zz/Work/charity-kg/src/charitykg/sandbox/example_langchain_async.py", line 48, in test_batch
return await asyncio.gather(*tasks)
File "/home/zz/Work/charity-kg/src/charitykg/sandbox/example_langchain_async.py", line 33, in summarize
res=await chat.agenerate([[HumanMessage(content="Summarise the following text in 50 words:\n\n```\n{}\n```".format(text))]])
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 651, in agenerate
raise exceptions[0]
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 855, in _agenerate_with_cache
await llm_cache.aupdate(prompt, llm_string, result.generations)
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/langchain_core/caches.py", line 138, in aupdate
return await run_in_executor(None, self.update, prompt, llm_string, return_val)
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 557, in run_in_executor
return await asyncio.get_running_loop().run_in_executor(
File "/home/zz/Programs/miniconda3/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 548, in wrapper
return func(*args, **kwargs)
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/langchain_community/cache.py", line 284, in update
with Session(self.engine) as session, session.begin():
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/util.py", line 146, in __exit__
with util.safe_reraise():
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/util.py", line 144, in __exit__
self.commit()
File "<string>", line 2, in commit
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/state_changes.py", line 139, in _go
ret_value = fn(self, *arg, **kw)
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1257, in commit
self._prepare_impl()
File "<string>", line 2, in _prepare_impl
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/state_changes.py", line 139, in _go
ret_value = fn(self, *arg, **kw)
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1232, in _prepare_impl
self.session.flush()
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 4296, in flush
self._flush(objects)
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 4431, in _flush
with util.safe_reraise():
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 4392, in _flush
flush_context.execute()
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 466, in execute
rec.execute(self)
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 642, in execute
util.preloaded.orm_persistence.save_obj(
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 93, in save_obj
_emit_insert_statements(
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 1048, in _emit_insert_statements
result = connection.execute(
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1408, in execute
return meth(
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 513, in _execute_on_connection
return connection._execute_clauseelement(
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1630, in _execute_clauseelement
ret = self._execute_context(
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1839, in _execute_context
return self._exec_single_context(
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1979, in _exec_single_context
self._handle_dbapi_exception(
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2335, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1960, in _exec_single_context
self.dialect.do_execute(
File "/home/zz/.cache/pypoetry/virtualenvs/charity-kg-dBz_RD7O-py3.10/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) UNIQUE constraint failed: full_llm_cache.prompt, full_llm_cache.llm, full_llm_cache.idx
[SQL: INSERT INTO full_llm_cache (prompt, llm, idx, response) VALUES (?, ?, ?, ?)]
[parameters: ('[{"lc": 1, "type": "constructor", "id": ["langchain", "schema", "messages", "HumanMessage"], "kwargs": {"content": "Summarise the following text in 5 ... (1288 characters truncated) ... se money for the relief fund. A release date has yet to be set for the recording, which was organised by Sharon Osbourne. \\n```", "type": "human"}}]', '{"id": ["langchain", "chat_models", "openai", "ChatOpenAI"], "kwargs": {"max_retries": 2, "model_name": "gpt-4o", "n": 1, "openai_api_key": {"id": [" ... (12 characters truncated) ... EY"], "lc": 1, "type": "secret"}, "openai_proxy": "", "temperature": 0.7}, "lc": 1, "name": "ChatOpenAI", "type": "constructor"}---[(\'stop\', None)]', 0, '{"lc": 1, "type": "constructor", "id": ["langchain", "schema", "output", "ChatGeneration"], "kwargs": {"text": "Sir Elton John performed a charity co ... (1181 characters truncated) ... 0b-5074984500f1-0", "usage_metadata": {"input_tokens": 314, "output_tokens": 65, "total_tokens": 379}, "tool_calls": [], "invalid_tool_calls": []}}}}')]
(Background on this error at: https://sqlalche.me/e/20/gkpj)
```
### Description
When using SQLite cache in an async setup where X number of chat jobs are running in parallel, sqlite3.IntegrityError happens at randomly point. It seems to be caused by having the same key (prompt? llm_string?) inserted into the DB simultaneously, even though I am already using **.agenerate**
Note that this problem happens randomly so it is difficult to catch. I have written a minimal program above to replicate this error.
**What does the minimal program do:**
- Reads top 50 news text from a csv file on your local OS (you need to download it from 'https://www.kaggle.com/datasets/alfathterry/bbc-full-text-document-classification?resource=download')
- (note: you MIGHT be able to reproduce the error using shorter texts instead but I am not sure. I just try to replicate my setup as close as possible as my program sends long prompts)
- Randomly chooses 20 news from the above 50
- Creates 20 chat jobs to run simultaneously (as a single 'batch'), each asking GPT to summarise the news
- Uses a shared SQLite cache object
- The program will print the total cost for each batch so eventually, you should see the cost remain constant at 0 meaning all calls go through the cache. But I never reached that point before the exception takes place.
**What will happen:**
Randomly at some point, the program will break with the error above. Make sure you delete the generated cache between different program runs.
I ran the program 3 times:
- 1st time it breaks at batch 7
- 2nd time it breaks at batch 10
- 3rd time it breaks at batch 4
### System Info
langchain 0.2.6
langchain-community 0.2.6
langchain-core 0.2.11
langchain-openai 0.1.14
OS: Ubuntu 22.04
Python: 3.10 managed by pyenv and poetry. | SQLiteCache under async setup randomly breaks due to sqlite3.IntegrityError (langchain_community.cache) | https://api.github.com/repos/langchain-ai/langchain/issues/23904/comments | 0 | 2024-07-05T10:52:14Z | 2024-07-11T10:55:11Z | https://github.com/langchain-ai/langchain/issues/23904 | 2,392,364,762 | 23,904 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.1/docs/modules/memory/chat_messages/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
![Screenshot from 2024-07-05 19-48-46](https://github.com/langchain-ai/langchain/assets/8540764/8fccce5c-7b7e-488d-b0e2-f71da736bbfa)
Similar to: https://github.com/langchain-ai/langchain/issues/23892
### Idea or request for content:
N/A | DOC: <Issue related to /v0.1/docs/modules/memory/chat_messages/> 404 on ChatMessageHistory link | https://api.github.com/repos/langchain-ai/langchain/issues/23902/comments | 3 | 2024-07-05T09:49:23Z | 2024-07-07T13:17:38Z | https://github.com/langchain-ai/langchain/issues/23902 | 2,392,260,779 | 23,902 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_openai import ChatOpenAI
from typing import Optional
from langchain_core.pydantic_v1 import BaseModel, Field
class Joke(BaseModel):
"""Joke to tell user."""
setup: str = Field(description="The setup of the joke")
punchline: str = Field(description="The punchline to the joke")
rating: Optional[int] = Field(description="How funny the joke is, from 1 to 10")
llm = ChatOpenAI(model="gpt-3.5-turbo")
structured_llm = llm.with_structured_output(Joke)
result = structured_llm.invoke("Tell me a joke about cats")
print(result) # result: None
```
### Error Message and Stack Trace (if applicable)
Nothing
### Description
i try to langchain's method: with_structured_output(), https://python.langchain.com/v0.2/docs/how_to/structured_output/,
but find the output is None.
### System Info
python == 3.9.19
langchain == 0.2.6
langchain_core == 0.2.10
langchain-openai == 0.1.13
| when run method “with_structured_output”, output print nothing?? code was copied from langchain doc. | https://api.github.com/repos/langchain-ai/langchain/issues/23901/comments | 2 | 2024-07-05T09:33:50Z | 2024-07-06T12:26:51Z | https://github.com/langchain-ai/langchain/issues/23901 | 2,392,234,090 | 23,901 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_community.chat_models.azureml_endpoint import AzureMLChatOnlineEndpoint
from langchain_community.llms.azureml_endpoint import ContentFormatterBase
from langchain_community.chat_models.azureml_endpoint import (
AzureMLEndpointApiType,
CustomOpenAIChatContentFormatter,
)
from langchain_core.messages import HumanMessage
chat = AzureMLChatOnlineEndpoint(
endpoint_url="https://llm-host-westeurope-mx8x22bi.westeurope.inference.ml.azure.com/score",
endpoint_api_type=AzureMLEndpointApiType.dedicated,
endpoint_api_key="xY1BWYshxYJhQGZE6P7Uc1of34BW9b5t",
content_formatter=CustomOpenAIChatContentFormatter(),
)
```
```
response = chat.invoke(
[HumanMessage(content="Hallo")],max_tokens=512
)
response
```
### Error Message and Stack Trace (if applicable)
I think I have set up the right deployment type. See here the full trace:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File [~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:140](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:140), in CustomOpenAIChatContentFormatter.format_response_payload(self, output, api_type)
139 try:
--> 140 choice = json.loads(output)["output"]
141 except (KeyError, IndexError, TypeError) as e:
KeyError: 'output'
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
[/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb) Zelle 4 line 8
[5](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y133sZmlsZQ%3D%3D?line=4) prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])
[7](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y133sZmlsZQ%3D%3D?line=6) chain = prompt | chat
----> [8](vscode-notebook-cell:/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/test.ipynb#Y133sZmlsZQ%3D%3D?line=7) chain.invoke({"text": "Explain the importance of low latency for LLMs."})
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2507](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2507), in RunnableSequence.invoke(self, input, config, **kwargs)
2505 input = step.invoke(input, config, **kwargs)
2506 else:
-> 2507 input = step.invoke(input, config)
2508 # finish the root run
2509 except BaseException as e:
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:248](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:248), in BaseChatModel.invoke(self, input, config, stop, **kwargs)
237 def invoke(
238 self,
239 input: LanguageModelInput,
(...)
243 **kwargs: Any,
244 ) -> BaseMessage:
245 config = ensure_config(config)
246 return cast(
247 ChatGeneration,
--> 248 self.generate_prompt(
249 [self._convert_input(input)],
250 stop=stop,
251 callbacks=config.get("callbacks"),
252 tags=config.get("tags"),
253 metadata=config.get("metadata"),
254 run_name=config.get("run_name"),
255 run_id=config.pop("run_id", None),
256 **kwargs,
257 ).generations[0][0],
258 ).message
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:677](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:677), in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
669 def generate_prompt(
670 self,
671 prompts: List[PromptValue],
(...)
674 **kwargs: Any,
675 ) -> LLMResult:
676 prompt_messages = [p.to_messages() for p in prompts]
--> 677 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:534](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:534), in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
532 if run_managers:
533 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 534 raise e
535 flattened_outputs = [
536 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item]
537 for res in results
538 ]
539 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:524](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:524), in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
521 for i, m in enumerate(messages):
522 try:
523 results.append(
--> 524 self._generate_with_cache(
525 m,
526 stop=stop,
527 run_manager=run_managers[i] if run_managers else None,
528 **kwargs,
529 )
530 )
531 except BaseException as e:
532 if run_managers:
File [~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:749](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:749), in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
747 else:
748 if inspect.signature(self._generate).parameters.get("run_manager"):
--> 749 result = self._generate(
750 messages, stop=stop, run_manager=run_manager, **kwargs
751 )
752 else:
753 result = self._generate(messages, stop=stop, **kwargs)
File [~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:279](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:279), in AzureMLChatOnlineEndpoint._generate(self, messages, stop, run_manager, **kwargs)
273 request_payload = self.content_formatter.format_messages_request_payload(
274 messages, _model_kwargs, self.endpoint_api_type
275 )
276 response_payload = self.http_client.call(
277 body=request_payload, run_manager=run_manager
278 )
--> 279 generations = self.content_formatter.format_response_payload(
280 response_payload, self.endpoint_api_type
281 )
282 return ChatResult(generations=[generations])
File [~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:142](https://file+.vscode-resource.vscode-cdn.net/Users/mweissenba001/Documents/GitHub/fastapi_rag_demo/~/anaconda3/lib/python3.11/site-packages/langchain_community/chat_models/azureml_endpoint.py:142), in CustomOpenAIChatContentFormatter.format_response_payload(self, output, api_type)
140 choice = json.loads(output)["output"]
141 except (KeyError, IndexError, TypeError) as e:
--> 142 raise ValueError(self.format_error_msg.format(api_type=api_type)) from e
143 return ChatGeneration(
144 message=BaseMessage(
145 content=choice.strip(),
(...)
148 generation_info=None,
149 )
150 if api_type == AzureMLEndpointApiType.serverless:
ValueError: Error while formatting response payload for chat model of type `AzureMLEndpointApiType.dedicated`. Are you using the right formatter for the deployed model and endpoint type?
```
### Description
Hi,
I set up Mixtral 8x22B on Azure AI/Machine Learning and now want to use it with Langchain. I have difficulties with the format I am getting, e.g. a ChatOpenAI response looks like this:
```
from langchain_openai import ChatOpenAI
llmm = ChatOpenAI()
llmm.invoke("Hallo")
```
`AIMessage(content='Hallo! Wie kann ich Ihnen helfen?', response_metadata={'token_usage': {'completion_tokens': 8, 'prompt_tokens': 8, 'total_tokens': 16}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='r')`
This is how it looks when I am loading Mixtral 8x22B with AzureMLChatOnlineEndpoint:
```
from langchain_community.chat_models.azureml_endpoint import AzureMLChatOnlineEndpoint
from langchain_community.chat_models.azureml_endpoint import (
AzureMLEndpointApiType,
CustomOpenAIChatContentFormatter,
)
from langchain_core.messages import HumanMessage
chat = AzureMLChatOnlineEndpoint(
endpoint_url="...",
endpoint_api_type=AzureMLEndpointApiType.dedicated,
endpoint_api_key="...",
content_formatter=CustomOpenAIChatContentFormatter(),
)
chat.invoke("Hallo")
```
`BaseMessage(content='Hallo, ich bin ein deutscher Sprachassistent. Was kann ich für', type='assistant', id='run-23')`
So with the Mixtral model the output a **different format (BaseMessage vs. AIMessage)**. How can I change this to make it work just like an ChatOpenAI model?
I further explored if it works in a chain with a ChatPromptTemplate without success:
```
from langchain_core.prompts import ChatPromptTemplate
system = "You are a helpful assistant called Bot."
human = "{text}"
prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])
chain = prompt | chat
chain.invoke({"text": "Who are you?"})
```
This results in `KeyError: 'output'` and `ValueError: Error while formatting response payload for chat model of type `AzureMLEndpointApiType.dedicated`. Are you using the right formatter for the deployed model and endpoint type?`.
See full trace above.
In my application I want to easily switch between these two models.
Thanks in advance!
### System Info
langchain 0.2.6 pypi_0 pypi
langchain-chroma 0.1.0 pypi_0 pypi
langchain-community 0.2.6 pypi_0 pypi
langchain-core 0.2.10 pypi_0 pypi
langchain-experimental 0.0.49 pypi_0 pypi
langchain-groq 0.1.5 pypi_0 pypi
langchain-openai 0.1.7 pypi_0 pypi
langchain-postgres 0.0.3 pypi_0 pypi
langchain-text-splitters 0.2.1 | Load LLM (Mixtral 8x22B) from Azure AI endpoint as Langchain Model - BaseMessage instead of AIMessage | https://api.github.com/repos/langchain-ai/langchain/issues/23899/comments | 6 | 2024-07-05T06:52:55Z | 2024-07-11T20:09:37Z | https://github.com/langchain-ai/langchain/issues/23899 | 2,391,963,121 | 23,899 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.utilities import SQLDatabase
db = SQLDatabase.from_uri(db_path, include_tables=shortlisted_tables, sample_rows_in_table_info=2)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Currently when initiating the database and `sample_rows_in_table_info` takes the number of rows to be used. These are typically the top rows from my tables. I want to manually select rows for my use case, rather than top N rows.
I'm trying to do this as my top rows might have values in some of the columns missing, and I don't want to use rows with missing values for my query generation.
Is there any way we can do this, if so, please share.
Resources followed: https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase.from_uri
### System Info
`pip show langchain`
```Name: langchain
Version: 0.2.3
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /home/ankit/anaconda3/envs/chatai_production/lib/python3.9/site-packages
Requires: aiohttp, async-timeout, langchain-core, langchain-text-splitters, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by: langchain-community``` | sample_rows_in_table_info: int = 3, I don't want to use top N rows, but select some rows manually to be passed to SQLDatabase | https://api.github.com/repos/langchain-ai/langchain/issues/23898/comments | 0 | 2024-07-05T06:29:57Z | 2024-07-05T06:40:48Z | https://github.com/langchain-ai/langchain/issues/23898 | 2,391,926,241 | 23,898 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from cassandra.cluster import Cluster
from ssl import PROTOCOL_TLSv1_2, SSLContext, CERT_NONE, PROTOCOL_TLS, PROTOCOL_SSLv23
from cassandra.auth import PlainTextAuthProvider
from asyncio.log import logger
from config import *
from langchain_openai import AzureChatOpenAI
from langchain.globals import set_llm_cache
from fastapi import FastAPI
from pydantic import BaseModel
import uvicorn
from langchain.schema import HumanMessage
from langchain_core.messages import AIMessage
from langchain_core.outputs.chat_generation import ChatGeneration
from langchain_core.load import dumps
import cassio
from langchain_community.cache import CassandraCache
#creating generation_info for ChatGeneration Object from 'res' <AIMessage Object>
#creating ChatGeneration Object
cluster = Cluster(['*******************'], port = 9042)
session = cluster.connect()
print(session)
aoai_endpoint = '******************************************************'
aoai_api_key = '****************************************************'
aoai_api_version = '2024-05-01-preview'
app = FastAPI()
class Item(BaseModel):
question: str
llm = AzureChatOpenAI(
model='gpt-4o',
azure_endpoint=aoai_endpoint,
azure_deployment='gpt-4o',
api_version=aoai_api_version,
api_key=aoai_api_key,
temperature=0.0,
max_tokens=4000,
)
@app.post("/askquestion")
def say_joke(item: Item):
cassio.init(session=session, keyspace='cycling')
set_llm_cache(CassandraCache())
message = HumanMessage(content=item.question)
response = llm.invoke([message])
return response.content
if __name__ == "__main__":
uvicorn.run(host="0.0.0.0", port=8000, app=app)
### Error Message and Stack Trace (if applicable)
INFO: 172.26.64.1:52437 - "POST /askquestion HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 399, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/routing.py", line 72, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/fastapi/routing.py", line 278, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/fastapi/routing.py", line 193, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/starlette/concurrency.py", line 42, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 859, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dd/Langchain/test.py", line 47, in say_joke
response = llm.invoke([message])
^^^^^^^^^^^^^^^^^^^^^
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 248, in invoke
self.generate_prompt(
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 681, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 538, in generate
raise e
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 528, in generate
self._generate_with_cache(
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 712, in _generate_with_cache
llm_string = self._get_llm_string(stop=stop, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 455, in _get_llm_string
_cleanup_llm_representation(serialized_repr, 1)
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1239, in _cleanup_llm_representation
_cleanup_llm_representation(value, depth + 1)
File "/home/dd/LLAMA/llmvenv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 1229, in _cleanup_llm_representation
if serialized["type"] == "not_implemented" and "repr" in serialized:
~~~~~~~~~~^^^^^^^^
TypeError: string indices must be integers, not 'str'
### Description
We are trying to implement Cassandra Exact Caching with our example code. We have used Azure Managed Cassandra instance and standalone docker instance as well.
But the code did not work with either of these DB setup.
With a simple tweaking in the Lang chain source code (python3.12/site-packages/langchain_core/language_models/chat_models.py).
Line# 455(_cleanup_llm_representation(serialized_repr, 1)) in the above python code was commented and after that it worked.
====================================================================
def _get_llm_string(self, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
if self.is_lc_serializable():
params = {**kwargs, **{"stop": stop}}
param_string = str(sorted([(k, v) for k, v in params.items()]))
# This code is not super efficient as it goes back and forth between
# json and dict.
serialized_repr = dumpd(self)
**_cleanup_llm_representation(serialized_repr, 1)**
llm_string = json.dumps(serialized_repr, sort_keys=True)
return llm_string + "---" + param_string
else:
params = self._get_invocation_params(stop=stop, **kwargs)
params = {**params, **kwargs}
return str(sorted([(k, v) for k, v in params.items()]))
### System Info
(llmvenv) dd@KDC1-L-3326K00:~/Langchain$ pip show langchain-community
Name: langchain-community
Version: 0.2.6
Summary: Community contributed LangChain integrations.
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /home/dd/LLAMA/llmvenv/lib/python3.12/site-packages
Requires: aiohttp, dataclasses-json, langchain, langchain-core, langsmith, numpy, PyYAML, requests, SQLAlchemy, tenacity
Required-by:
(llmvenv) dd@KDC1-L-3326K00:~/Langchain$
============================================================
(llmvenv) dd@KDC1-L-3326K00:~/Langchain$ pip show cassandra-driver
Name: cassandra-driver
Version: 3.29.1
Summary: DataStax Driver for Apache Cassandra
Home-page: http://github.com/datastax/python-driver
Author: DataStax
Author-email:
License:
Location: /home/dd/LLAMA/llmvenv/lib/python3.12/site-packages
Requires: geomet
Required-by: cassio
(llmvenv) dd@KDC1-L-3326K00:~/Langchain$
=============================================================
(llmvenv) dd@KDC1-L-3326K00:~/Langchain$ pip show cassio
Name: cassio
Version: 0.1.8
Summary: A framework-agnostic Python library to seamlessly integrate Apache Cassandra(R) with ML/LLM/genAI workloads.
Home-page: https://cassio.org
Author: Stefano Lottini
Author-email: [email protected]
License: Apache-2.0
Location: /home/dd/LLAMA/llmvenv/lib/python3.12/site-packages
Requires: cassandra-driver, numpy, requests
Required-by:
(llmvenv) dd@KDC1-L-3326K00:~/Langchain$ | Cassandra Exact Cache issue: TypeError: string indices must be integers, not 'str' | https://api.github.com/repos/langchain-ai/langchain/issues/23896/comments | 5 | 2024-07-05T05:15:48Z | 2024-07-05T19:05:12Z | https://github.com/langchain-ai/langchain/issues/23896 | 2,391,829,528 | 23,896 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/chatbot/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The documentation refers to "ChatMessageHistory" while this particular page does not exist in the documentation.
![Screenshot from 2024-07-04 20-54-25](https://github.com/langchain-ai/langchain/assets/56577852/34e9b560-9dfe-4637-a6cf-c4f1d9c7d407)
The documentation referred to here is present in the prompt section: https://python.langchain.com/v0.2/docs/tutorials/chatbot/#prompt-templates
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/tutorials/chatbot/> Missing documentation. | https://api.github.com/repos/langchain-ai/langchain/issues/23892/comments | 1 | 2024-07-05T01:56:21Z | 2024-07-05T19:48:12Z | https://github.com/langchain-ai/langchain/issues/23892 | 2,391,636,123 | 23,892 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool
from langchain_aws import ChatBedrock
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant"),
("placeholder", "{chat_history}"),
("human", "{input}"),
("placeholder", "{agent_scratchpad}"),
]
)
model = ChatBedrock(
model_id="meta.llama3-70b-instruct-v1:0"
)
@tool
def magic_function(input: int) -> int:
"""Applies a magic function to an input."""
return input + 2
tools = [magic_function]
agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "what is the value of magic_function(3)?"})
### Error Message and Stack Trace (if applicable)
File "/MVP/mvp/lib/python3.12/site-packages/langchain_core/messages/ai.py", line 243, in __add__
response_metadata = merge_dicts(
^^^^^^^^^^^^
File "/MVP/mvp/lib/python3.12/site-packages/langchain_core/utils/_merge.py", line 40, in merge_dicts
raise TypeError(
TypeError: Additional kwargs key generation_token_count already exists in left dict and value has unsupported type <class 'int'>.
### Description
I am trying out the example from the langchain website and it is giving me error as **TypeError: Additional kwargs key generation_token_count already exists in left dict and value has unsupported type <class 'int'>.**
I cannot solve the error, and don't understand it. can you please resolve it?
### System Info
langchain==0.2.6
langchain-anthropic==0.1.19
langchain-aws==0.1.9
langchain-community==0.2.6
langchain-core==0.2.11
langchain-text-splitters==0.2.2
Mac
Python 3.12 | raise TypeError( TypeError: Additional kwargs key generation_token_count already exists in left dict and value has unsupported type <class 'int'>. | https://api.github.com/repos/langchain-ai/langchain/issues/23891/comments | 0 | 2024-07-05T01:06:04Z | 2024-07-08T15:01:30Z | https://github.com/langchain-ai/langchain/issues/23891 | 2,391,600,864 | 23,891 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
from langchain.agents import initialize_agent, AgentType
from langchain.agents import Tool
from langchain_experimental.utilities import PythonREPL
import datetime
from langchain.agents import AgentExecutor
from langchain.chains.conversation.memory import (
ConversationBufferMemory,
)
from langchain.prompts import MessagesPlaceholder
from langchain_community.chat_models import BedrockChat
from langchain.agents import OpenAIMultiFunctionsAgent
# You can create the tool to pass to an agent
repl_tool = Tool(
name="python_repl",
description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.",
func=PythonREPL().run,
)
memory = ConversationBufferMemory(return_messages=True, k=10, memory_key="chat_history")
prompt = OpenAIMultiFunctionsAgent.create_prompt(
system_message=SystemMessage(content="You are an helpful AI bot"),
extra_prompt_messages=[MessagesPlaceholder(variable_name="chat_history")],
)
llm = BedrockChat(
model_id="anthropic.claude-3-5-sonnet-20240620-v1:0",
client=client, #initialized elsewhere
model_kwargs={"max_tokens": 4050, "temperature": 0.5},
verbose=True,
)
tools = [
repl_tool,
]
agent_executor = initialize_agent(
tools,
llm,
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
max_iterations=10,
memory=memory,
prompt=prompt,
)
res = agent_executor.invoke({
'input': 'hi how are you?'
})
print(res['output']
# Hello! I'm an AI assistant, so I don't have feelings, but I'm functioning well and ready to help. How can I assist you today?
res=agent_executor.invoke({
"input": "what was my previous message?"
})
print(res['output']
# I'm sorry, but I don't have access to any previous messages. Each interaction starts fresh, so I don't have information about what you said before this current question. If you have a specific topic or question you'd like to discuss, please feel free to ask and I'll be happy to help.
# but when I checked the memory buffer
print(memory.buffer)
# [HumanMessage(content='hi how are you?'), AIMessage(content="Hello! As an AI assistant, I don't have feelings, but I'm functioning well and ready to help you. How can I assist you today?"), HumanMessage(content='hi how are you?'), AIMessage(content="Hello! I'm an AI assistant, so I don't have feelings, but I'm functioning well and ready to help. How can I assist you today?"), HumanMessage(content='what was my previous message?'), AIMessage(content="I'm sorry, but I don't have access to any previous messages. Each interaction starts fresh, so I don't have information about what you said before this current question. If you have a specific topic or question you'd like to discuss, please feel free to ask and I'll be happy to help.")]
# As you can see memory is getting updated
# so I checked the prompt template of the agent executor
pprint(agent_executor.agent.llm_chain.prompt)
# ChatPromptTemplate(input_variables=['agent_scratchpad', 'input'], messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='Respond to the human as helpfully and accurately as possible. You have access to the following tools:\n\npython_repl: A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`., args: {{\'tool_input\': {{\'type\': \'string\'}}}}\n\nUse a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or python_repl\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{\n "action": "Final Answer",\n "action_input": "Final response to human"\n}}\n```\n\nBegin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:')), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['agent_scratchpad', 'input'], template='{input}\n\n{agent_scratchpad}'))])
# As you can see there is no input variable placeholder for `chat_memory`
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
- I'm trying to use an agent executor with memory (`AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION`)
- I'm passing the right prompt template which contains the `memory_key`
- The initialized agent executor's prompt template resorts to a default prompt template that does not contain the `memory_key` place holder
### System Info
```sh
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:13:18 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6030
> Python Version: 3.12.2 (v3.12.2:6abddd9f6a, Feb 6 2024, 17:02:06) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.1.11
> langchain_community: 0.0.27
> langsmith: 0.1.80
> langchain_anthropic: 0.1.15
> langchain_aws: 0.1.6
> langchain_experimental: 0.0.53
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | `langchain.agents.initialize_agent` does not support custom prompt template | https://api.github.com/repos/langchain-ai/langchain/issues/23884/comments | 1 | 2024-07-04T19:02:16Z | 2024-07-08T05:43:48Z | https://github.com/langchain-ai/langchain/issues/23884 | 2,391,353,782 | 23,884 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
%pip install --upgrade --quiet langchain-community langchain_openai gql httpx requests-toolbelt
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain_openai import OpenAI
llm = OpenAI(temperature=0,api_key="")
headers = {
'Content-Type':'application/json',
'Authorization': 'Bearer TOKEN_HERE'
}
tools = load_tools(
["graphql"],
graphql_endpoint="https://streaming.bitquery.io/eap",
headers=headers
)
agent = initialize_agent(
tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
```
```py
graphql_fields = """subscription {
Solana {
InstructionBalanceUpdates(limit: {count: 10}) {
Transaction {
Index
FeePayer
Fee
Signature
Result {
Success
ErrorMessage
}
}
Instruction {
InternalSeqNumber
Index
CallPath
Program {
Address
Name
Parsed
}
}
Block {
Time
Hash
Height
}
BalanceUpdate {
Account {
Address
}
Amount
Currency {
Decimals
CollectionAddress
Name
Key
IsMutable
Symbol
}
}
}
}
}
"""
suffix = "Search for the Transaction with positive Balance stored in the graphql database that has this schema "
agent.run(suffix + graphql_fields)
```
### Error Message and Stack Trace (if applicable)
ERROR
```
> Entering new AgentExecutor chain...
I should look for a transaction with a positive balance
Action: query_graphql
Action Input: query { Solana { InstructionBalanceUpdates(limit: {count: 10}) { Transaction { Index FeePayer Fee Signature Result { Success ErrorMessage } } Instruction { InternalSeqNumber Index CallPath Program { Address Name Parsed } } Block { Time Hash Height } BalanceUpdate { Account { Address } Amount Currency { Decimals CollectionAddress Name Key IsMutable Symbol } } } } }
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
File ~/miniconda3/envs/snakes/lib/python3.10/site-packages/requests/models.py:971, in Response.json(self, **kwargs)
[970](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/site-packages/requests/models.py:970) try:
--> [971](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/site-packages/requests/models.py:971) return complexjson.loads(self.text, **kwargs)
[972](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/site-packages/requests/models.py:972) except JSONDecodeError as e:
[973](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/site-packages/requests/models.py:973) # Catch JSON-related errors and raise as requests.JSONDecodeError
[974](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/site-packages/requests/models.py:974) # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
File ~/miniconda3/envs/snakes/lib/python3.10/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
[343](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/json/__init__.py:343) if (cls is None and object_hook is None and
[344](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/json/__init__.py:344) parse_int is None and parse_float is None and
[345](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/json/__init__.py:345) parse_constant is None and object_pairs_hook is None and not kw):
--> [346](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/json/__init__.py:346) return _default_decoder.decode(s)
[347](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/json/__init__.py:347) if cls is None:
File ~/miniconda3/envs/snakes/lib/python3.10/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
[333](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/json/decoder.py:333) """Return the Python representation of ``s`` (a ``str`` instance
[334](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/json/decoder.py:334) containing a JSON document).
[335](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/json/decoder.py:335)
[336](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/json/decoder.py:336) """
--> [337](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/json/decoder.py:337) obj, end = self.raw_decode(s, idx=_w(s, 0).end())
[338](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/json/decoder.py:338) end = _w(s, end).end()
File ~/miniconda3/envs/snakes/lib/python3.10/json/decoder.py:355, in JSONDecoder.raw_decode(self, s, idx)
...
[255](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/site-packages/gql/transport/requests.py:255) f"{reason}: "
[256](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/site-packages/gql/transport/requests.py:256) f"{result_text}"
[257](https://file+.vscode-resource.vscode-cdn.net/Users/ankushsingal/Desktop/langchain-project/langchain-prac/notebooks/~/miniconda3/envs/snakes/lib/python3.10/site-packages/gql/transport/requests.py:257) )
TransportServerError: 401 Client Error: Unauthorized for url: https://streaming.bitquery.io/eap
```
### Description
if i use without langchain it works but does not work with it
### System Info
```py
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.11
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.83
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | Agents and GraphQL- 401 Client Error: Unauthorized for url: https://streaming.bitquery.io/eap | https://api.github.com/repos/langchain-ai/langchain/issues/23881/comments | 0 | 2024-07-04T16:50:05Z | 2024-07-05T06:37:30Z | https://github.com/langchain-ai/langchain/issues/23881 | 2,391,210,682 | 23,881 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
# This part works like a charm
llm = HuggingFaceEndpoint(
name="mistral",
endpoint_url=classifier_agent_config.endpoint_url,
task="text-generation",
**classifier_agent_config.generation_config
)
# This part raises error
chat_llm = ChatHuggingFace(llm=llm, verbose=True)
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
\.venv\Lib\site-packages\huggingface_hub\utils\_errors.py:304, in hf_raise_for_status(response, endpoint_name)
303 try:
--> 304 response.raise_for_status()
305 except HTTPError as e:
\.venv\Lib\site-packages\requests\models.py:1024, in Response.raise_for_status(self)
1023 if http_error_msg:
-> 1024 raise HTTPError(http_error_msg, response=self)
HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/whoami-v2
The above exception was the direct cause of the following exception:
HfHubHTTPError Traceback (most recent call last)
\.venv\Lib\site-packages\huggingface_hub\hf_api.py:1397, in HfApi.whoami(self, token)
1396 try:
-> 1397 hf_raise_for_status(r)
1398 except HTTPError as e:
\.venv\Lib\site-packages\huggingface_hub\utils\_errors.py:371, in hf_raise_for_status(response, endpoint_name)
369 # Convert `HTTPError` into a `HfHubHTTPError` to display request information
370 # as well (request id and/or server error message)
--> 371 raise HfHubHTTPError(str(e), response=response) from e
HfHubHTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/whoami-v2 (Request ID: Root=1-66869944-058281c36301f9472614deeb;255a1059-9b6e-47ab-bcfc-3c0bced7baa0)
Invalid username or password.
The above exception was the direct cause of the following exception:
HTTPError Traceback (most recent call last)
Cell In[31], line 1
----> 1 chat_llm = ChatHuggingFace(llm=llm)
\.venv\Lib\site-packages\langchain_huggingface\chat_models\huggingface.py:169, in ChatHuggingFace.__init__(self, **kwargs)
165 super().__init__(**kwargs)
167 from transformers import AutoTokenizer # type: ignore[import]
--> 169 self._resolve_model_id()
171 self.tokenizer = (
172 AutoTokenizer.from_pretrained(self.model_id)
173 if self.tokenizer is None
174 else self.tokenizer
175 )
\.venv\Lib\site-packages\langchain_huggingface\chat_models\huggingface.py:295, in ChatHuggingFace._resolve_model_id(self)
291 """Resolve the model_id from the LLM's inference_server_url"""
293 from huggingface_hub import list_inference_endpoints # type: ignore[import]
--> 295 available_endpoints = list_inference_endpoints("*")
296 if _is_huggingface_hub(self.llm) or (
297 hasattr(self.llm, "repo_id") and self.llm.repo_id
298 ):
299 self.model_id = self.llm.repo_id
\.venv\Lib\site-packages\huggingface_hub\hf_api.py:7081, in HfApi.list_inference_endpoints(self, namespace, token)
7079 # Special case: list all endpoints for all namespaces the user has access to
7080 if namespace == "*":
-> 7081 user = self.whoami(token=token)
7083 # List personal endpoints first
7084 endpoints: List[InferenceEndpoint] = list_inference_endpoints(namespace=self._get_namespace(token=token))
\.venv\Lib\site-packages\huggingface_hub\utils\_validators.py:114, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
111 if check_use_auth_token:
112 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 114 return fn(*args, **kwargs)
\.venv\Lib\site-packages\huggingface_hub\hf_api.py:1399, in HfApi.whoami(self, token)
1397 hf_raise_for_status(r)
1398 except HTTPError as e:
-> 1399 raise HTTPError(
1400 "Invalid user token. If you didn't pass a user token, make sure you "
1401 "are properly logged in by executing `huggingface-cli login`, and "
1402 "if you did pass a user token, double-check it's correct.",
1403 request=e.request,
1404 response=e.response,
1405 ) from e
1406 return r.json()
HTTPError: Invalid user token. If you didn't pass a user token, make sure you are properly logged in by executing `huggingface-cli login`, and if you did pass a user token, double-check it's correct.
### Description
* I am trying to use langchain_huggingface library to connect to a TGI instance locally and as expected it connects and I am able to infer the same. But when converting HuggingFaceEndpoint to ChatHuggingFace, it raises error requesting user token to be provided.
### System Info
langchain-huggingface = "0.0.3"
langchain-core=="0.2.11"
platform: windows (TGI is run in a ec2 linux instance)
Python 3.12.2 | LangChain x HuggingFace - Using ChatHuggingFace requires hf token for local TGI using locally saved model | https://api.github.com/repos/langchain-ai/langchain/issues/23872/comments | 3 | 2024-07-04T13:03:19Z | 2024-07-06T08:31:02Z | https://github.com/langchain-ai/langchain/issues/23872 | 2,390,826,853 | 23,872 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/templates/openai-functions-tool-retrieval-agent/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
In [this documentation page](https://python.langchain.com/v0.2/docs/templates/openai-functions-tool-retrieval-agent/#usage), the line "This template is based on [this Agent How-To](https://python.langchain.com/docs/modules/agents/how_to/custom_agent_with_tool_retrieval)." refers to a link that gives page not found.
### Idea or request for content:
It would be beneficial to provide a working link to a proper example of a use case of this template, since the rest of the page's documentation is quite sparse. | DOC: <Issue related to /v0.2/docs/templates/openai-functions-tool-retrieval-agent/> URL in Documentation Gives Page Not Found | https://api.github.com/repos/langchain-ai/langchain/issues/23870/comments | 0 | 2024-07-04T12:37:47Z | 2024-07-17T15:06:41Z | https://github.com/langchain-ai/langchain/issues/23870 | 2,390,775,230 | 23,870 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.chat_models import ChatZhipuAI
from langchain_core.messages import HumanMessage
# .环境变量
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "balabala"
os.environ["ZHIPUAI_API_KEY"] = "balabala"
os.environ["TAVILY_API_KEY"]="balabala"
llm=ChatZhipuAI(model="glm-4")
search = TavilySearchResults(max_results=2)
tools = [search]
llm_with_tools=llm.bind_tools(tools)
response=llm_with_tools.invoke([HumanMessage(content="hello")])
print(response.content)
pass
```
### Error Message and Stack Trace (if applicable)
Exception has occurred: NotImplementedError
exception: no description
File "E:\GitHub\langchain\1\agent_1.py", line 33, in <module>
llm_with_tools=llm.bind_tools(tools)
^^^^^^^^^^^^^^^^^^^^^
NotImplementedError:
### Description
the code in "langchain_core\language_models\chat_models.py.BaseChatModel.bind_tools" is incomplete, as show below"
```python
def bind_tools(
self,
tools: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]],
**kwargs: Any,
) -> Runnable[LanguageModelInput, BaseMessage]:
raise NotImplementedError()
```
According to the ChatOpenAI, the code should be:
```python
def bind_tools(
self,
tools: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]],
*,
tool_choice: Optional[
Union[dict, str, Literal["auto", "none", "required", "any"], bool]
] = None,
**kwargs: Any,
) -> Runnable[LanguageModelInput, BaseMessage]:
"""Bind tool-like objects to this chat model.
Assumes model is compatible with OpenAI tool-calling API.
Args:
tools: A list of tool definitions to bind to this chat model.
Can be a dictionary, pydantic model, callable, or BaseTool. Pydantic
models, callables, and BaseTools will be automatically converted to
their schema dictionary representation.
tool_choice: Which tool to require the model to call.
Options are:
name of the tool (str): calls corresponding tool;
"auto": automatically selects a tool (including no tool);
"none": does not call a tool;
"any" or "required": force at least one tool to be called;
True: forces tool call (requires `tools` be length 1);
False: no effect;
or a dict of the form:
{"type": "function", "function": {"name": <<tool_name>>}}.
**kwargs: Any additional parameters to pass to the
:class:`~langchain.runnable.Runnable` constructor.
"""
formatted_tools = [convert_to_openai_tool(tool) for tool in tools]
if tool_choice:
if isinstance(tool_choice, str):
# tool_choice is a tool/function name
if tool_choice not in ("auto", "none", "any", "required"):
tool_choice = {
"type": "function",
"function": {"name": tool_choice},
}
# 'any' is not natively supported by OpenAI API.
# We support 'any' since other models use this instead of 'required'.
if tool_choice == "any":
tool_choice = "required"
elif isinstance(tool_choice, bool):
tool_choice = "required"
elif isinstance(tool_choice, dict):
tool_names = [
formatted_tool["function"]["name"]
for formatted_tool in formatted_tools
]
if not any(
tool_name == tool_choice["function"]["name"]
for tool_name in tool_names
):
raise ValueError(
f"Tool choice {tool_choice} was specified, but the only "
f"provided tools were {tool_names}."
)
else:
raise ValueError(
f"Unrecognized tool_choice type. Expected str, bool or dict. "
f"Received: {tool_choice}"
)
kwargs["tool_choice"] = tool_choice
return super().bind(tools=formatted_tools, **kwargs)
```
After replacing this part, the code is running well.
### System Info
Python 3.12.3
langchain==0.2.6
langchain-chroma==0.1.2
langchain-community==0.2.6
langchain-core==0.2.10
langchain-huggingface==0.0.3
langchain-openai==0.1.14
langchain-text-splitters==0.2.2
langserve==0.2.2
langsmith==0.1.82 | Loss of function in component ChatZhipuAI | https://api.github.com/repos/langchain-ai/langchain/issues/23868/comments | 8 | 2024-07-04T11:38:22Z | 2024-07-11T03:20:44Z | https://github.com/langchain-ai/langchain/issues/23868 | 2,390,662,856 | 23,868 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.embeddings import GPT4AllEmbeddings
embeddings = GPT4AllEmbeddings()
### Error Message and Stack Trace (if applicable)
KeyError Traceback (most recent call last)
[<ipython-input-6-542601a9aeec>](https://localhost:8080/#) in <cell line: 1>()
----> 1 x = GPT4AllEmbeddings()
2 frames
[/usr/local/lib/python3.10/dist-packages/langchain_community/embeddings/gpt4all.py](https://localhost:8080/#) in validate_environment(cls, values)
37
38 values["client"] = Embed4All(
---> 39 model_name=values["model_name"],
40 n_threads=values.get("n_threads"),
41 device=values.get("device"),
KeyError: 'model_name'
### Description
It can not find `model_name` from values.
### System Info
langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.11
langchain-text-splitters==0.2.2 | Community: Keyerror `model_name` for GPT4AllEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/23863/comments | 3 | 2024-07-04T10:44:12Z | 2024-07-05T18:09:02Z | https://github.com/langchain-ai/langchain/issues/23863 | 2,390,551,046 | 23,863 |
[
"hwchase17",
"langchain"
] | ### URL
https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.graph.Graph.html
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The code of the remove_node in https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/runnables/graph.py is as follow
```python
def remove_node(self, node: Node) -> None:
"""Remove a node from the graphm and all edges connected to it."""
self.nodes.pop(node.id)
self.edges = [
edge
for edge in self.edges
if edge.source != node.id and edge.target != node.id
]
``` graph is spelled graphm
### Idea or request for content:
```python
def remove_node(self, node: Node) -> None:
"""Remove a node from the **graph** and all edges connected to it."""
self.nodes.pop(node.id)
self.edges = [
edge
for edge in self.edges
if edge.source != node.id and edge.target != node.id
]
```
Should replace *graphm* with *graph* | DOC: remove_node typo in runnables/ graph | https://api.github.com/repos/langchain-ai/langchain/issues/23861/comments | 1 | 2024-07-04T09:52:45Z | 2024-07-05T17:07:02Z | https://github.com/langchain-ai/langchain/issues/23861 | 2,390,451,274 | 23,861 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
for content_part in cast(List[Dict], message.content):
if content_part.get("type") == "text":
content += f"\n{content_part['text']}"
elif content_part.get("type") == "image_url":
image_url = None
temp_image_url = content_part.get("image_url")
if isinstance(temp_image_url, str):
image_url = content_part["image_url"]
elif (
isinstance(temp_image_url, dict) and "url" in temp_image_url
):
image_url = temp_image_url
else:
raise ValueError(
"Only string image_url or dict with string 'url' "
"inside content parts are supported."
)
image_url_components = image_url.split(",")
# Support data:image/jpeg;base64,<image> format
# and base64 strings
if len(image_url_components) > 1:
images.append(image_url_components[1])
else:
images.append(image_url_components[0])
### Error Message and Stack Trace (if applicable)
File "/Users/workspace/langchain-demo/venv/lib/python3.11/site-packages/langchain_community/chat_models/ollama.py", line 154, in _convert_messages_to_ollama_messages
image_url_components = image_url.split(",")
^^^^^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'split'
### Description
modify the code as follows:
elif isinstance(temp_image_url, dict) and 'url' in temp_image_url:
image_url = temp_image_url['url']"
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
> Python Version: 3.11.3 (v3.11.3:f3909b8bc8, Apr 4 2023, 20:12:10) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.1.52
> langchain: 0.1.17
> langchain_community: 0.0.36
> langsmith: 0.1.83
> langchain_chatchat: 0.3.0.20240625.1
> langchain_experimental: 0.0.58
> langchain_openai: 0.0.6
> langchain_text_splitters: 0.0.2
> langchainhub: 0.1.14
> langgraph: 0.0.28
Packages not installed (Not Necessarily a Problem)
-------------------------------------------------- | ollama.py encountered an error while retrieving images from multimodal data | https://api.github.com/repos/langchain-ai/langchain/issues/23859/comments | 0 | 2024-07-04T09:17:56Z | 2024-07-04T09:20:26Z | https://github.com/langchain-ai/langchain/issues/23859 | 2,390,368,372 | 23,859 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```javacript
// ===tavily request===
{
query: '"{\\"input\\":\\"2023 NBA final winner\\"}"',
max_results: 1,
api_key: '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'
}
// ===tavily response===
{
query: '"{\\"input\\":\\"2023 NBA final winner\\"}"',
follow_up_questions: null,
answer: null,
images: null,
results: [],
response_time: 1.28
}
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
https://github.com/langchain-ai/langchainjs/blob/1fddf296f922dcaa362a90c8fe90b4bfd84b6c3e/libs/langchain-community/src/retrievers/tavily_search_api.ts#L97
the tavily `search` api is not reliable
### System Info
@langchain/community": "^0.2.13 | TavilySearchResults in Agents Quick Start always return empty result | https://api.github.com/repos/langchain-ai/langchain/issues/23858/comments | 0 | 2024-07-04T09:02:13Z | 2024-07-04T09:02:58Z | https://github.com/langchain-ai/langchain/issues/23858 | 2,390,334,797 | 23,858 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/output_parser_fixing/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I followed the example codes on this page, and I got error as:
```
KeyError: "Input to PromptTemplate is missing variables {'completion'}. Expected: ['completion', 'error', 'instructions'] Received: ['instructions', 'input', 'error']"
```
LangChain version:
```
langchain 0.2.6 pyhd8ed1ab_0 conda-forge
langchain-community 0.2.6 pyhd8ed1ab_0 conda-forge
langchain-core 0.2.11 pyhd8ed1ab_0 conda-forge
langchain-openai 0.1.14 pyhd8ed1ab_0 conda-forge
langchain-text-splitters 0.2.2 pyhd8ed1ab_0 conda-forge
```
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/how_to/output_parser_fixing/> | https://api.github.com/repos/langchain-ai/langchain/issues/23856/comments | 2 | 2024-07-04T07:16:26Z | 2024-07-23T11:20:02Z | https://github.com/langchain-ai/langchain/issues/23856 | 2,390,132,148 | 23,856 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Not applicable
### Error Message and Stack Trace (if applicable)
_No response_
### Description
There is no HNSW index in the pgvector vector store:
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/pgvector.py
Unlike the pgembedding vectore store:
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/pgembedding.py#L192
### System Info
Not applicable | No HNSW index in pgvector vector store | https://api.github.com/repos/langchain-ai/langchain/issues/23853/comments | 4 | 2024-07-04T05:04:44Z | 2024-07-20T09:22:33Z | https://github.com/langchain-ai/langchain/issues/23853 | 2,389,946,645 | 23,853 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I use the following code to load my documents.
```python
def load_documents(directory):
SOURCE_DOCUMENTS_DIR = directory
SOURCE_DOCUMENTS_FILTER = "**/*.txt"
loader = DirectoryLoader(f"{SOURCE_DOCUMENTS_DIR}", glob=SOURCE_DOCUMENTS_FILTER, show_progress=True, use_multithreading=True)
print(f"Loading {SOURCE_DOCUMENTS_DIR} directory: ", end="")
data = loader.load()
print(f"Splitting {len(data)} documents")
return data
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The following is a line from a text document I am loading. This is how it looks in Notepad.
**Document Name: https://www.kinecta.org//about-us/executive-staff**
When I load the document using DirectoryLoader (I load a list of other docs as well), and print out the doc.page_content, I get the following:
page_content='Document Name: https://www.kinecta.org//about\n\nus/executive\n\nstaff\n\n'
**As you can see, it converted the dashes into new line characters. Any idea what this is?**
This is the code I use to load my documents.
### System Info
Python 3.11
Langchain 0.1.12 | DirectoryLoader converting characters randomly into new line characters? | https://api.github.com/repos/langchain-ai/langchain/issues/23849/comments | 0 | 2024-07-04T00:37:40Z | 2024-07-04T00:40:06Z | https://github.com/langchain-ai/langchain/issues/23849 | 2,389,708,027 | 23,849 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import langchain
### Error Message and Stack Trace (if applicable)
No module named 'langchain_core'
### Description
Starting this morning we are getting different kind of errors while trying to use langchain in Vertex AI, not quite sure what is the root of problem, Google or Langchain.
### System Info
Python 3.10.12 | No module named 'langchain_core' - Langchain in Vertex AI | https://api.github.com/repos/langchain-ai/langchain/issues/23838/comments | 8 | 2024-07-03T19:34:38Z | 2024-07-03T20:46:26Z | https://github.com/langchain-ai/langchain/issues/23838 | 2,389,312,087 | 23,838 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import langchain
from langchain_openai import ChatOpenAI
from langchain.chains.conversation.base import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_community.cache import SQLiteCache
from langchain_community.callbacks import get_openai_callback
import tiktoken,sys
from langchain_core.globals import set_llm_cache, get_llm_cache
cache = SQLiteCache(database_path=".langchain.db")
llm = ChatOpenAI(model_name='gpt-3.5-turbo',
openai_api_key=sys.argv[1])
memory = ConversationBufferMemory()
conversation = ConversationChain(
llm=llm,
memory = memory,
verbose=True
)
with get_openai_callback() as cb:
input="Hi, my name is Andrew"
tokenizer=tiktoken.get_encoding("cl100k_base")
toks=len(tokenizer.encode(text=input))
print(toks)
costs={"tokens used":0, "prompt tokens":0, "completion tokens":0, "successful requests":0, "cost (usd)":0}
result = conversation.predict(input=input)
costs['tokens used']=cb.total_tokens
costs['prompt tokens'] = cb.prompt_tokens
costs['completion tokens'] = cb.completion_tokens
costs['successful requests'] = cb.successful_requests
costs['cost (used)'] = cb.total_cost
print(result)
print(costs)
res=conversation.predict(input="What is 1+1?")
costs['tokens used'] = cb.total_tokens
costs['prompt tokens'] = cb.prompt_tokens
costs['completion tokens'] = cb.completion_tokens
costs['successful requests'] = cb.successful_requests
costs['cost (used)'] = cb.total_cost
print(res)
print(costs)
res=conversation.predict(input="What is my name?")
costs['tokens used'] = cb.total_tokens
costs['prompt tokens'] = cb.prompt_tokens
costs['completion tokens'] = cb.completion_tokens
costs['successful requests'] = cb.successful_requests
costs['cost (used)'] = cb.total_cost
print(res)
print(costs)
set_llm_cache(cache)
res=conversation.predict(input="What is my name?")
costs['tokens used'] = cb.total_tokens
costs['prompt tokens'] = cb.prompt_tokens
costs['completion tokens'] = cb.completion_tokens
costs['successful requests'] = cb.successful_requests
costs['cost (used)'] = cb.total_cost
print(res)
print(costs)
costs['tokens used'] = cb.total_tokens
costs['prompt tokens'] = cb.prompt_tokens
costs['completion tokens'] = cb.completion_tokens
costs['successful requests'] = cb.successful_requests
costs['cost (used)'] = cb.total_cost
print(res)
print(costs)
set_llm_cache(None)
res=conversation.predict(input="What is my name?")
costs['tokens used'] = cb.total_tokens
costs['prompt tokens'] = cb.prompt_tokens
costs['completion tokens'] = cb.completion_tokens
costs['successful requests'] = cb.successful_requests
costs['cost (used)'] = cb.total_cost
print(res)
print(costs)
costs['tokens used'] = cb.total_tokens
costs['prompt tokens'] = cb.prompt_tokens
costs['completion tokens'] = cb.completion_tokens
costs['successful requests'] = cb.successful_requests
costs['cost (used)'] = cb.total_cost
print(res)
print(costs)
print("end")
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/home/zz/Work/zz-notebooks/autogen/src/autogen/sandbox/example_langchain.py", line 56, in <module>
res=conversation.predict(input="What is my name?")
File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 317, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 168, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 383, in __call__
return self.invoke(
File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 166, in invoke
raise e
File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 127, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 139, in generate
return self.llm.generate_prompt(
File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 681, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 538, in generate
raise e
File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 528, in generate
self._generate_with_cache(
File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 712, in _generate_with_cache
llm_string = self._get_llm_string(stop=stop, **kwargs)
File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 455, in _get_llm_string
_cleanup_llm_representation(serialized_repr, 1)
File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 1239, in _cleanup_llm_representation
_cleanup_llm_representation(value, depth + 1)
File "/home/zz/.cache/pypoetry/virtualenvs/autogen-DrvUaj4U-py3.10/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 1229, in _cleanup_llm_representation
if serialized["type"] == "not_implemented" and "repr" in serialized:
TypeError: string indices must be integers
```
### Description
The fourth call to the LLM will cause the error above. Note that this is after calling 'set_llm_cache'
### System Info
langchain 0.2.6
langchain-community 0.2.6
langchain-core 0.2.11
langchain-openai 0.1.14 | generate/predict fails when using sql cache | https://api.github.com/repos/langchain-ai/langchain/issues/23824/comments | 13 | 2024-07-03T16:02:38Z | 2024-07-04T19:07:34Z | https://github.com/langchain-ai/langchain/issues/23824 | 2,388,986,993 | 23,824 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain.document_loaders import WebBaseLoader
loader = WebBaseLoader('https://m.vk.com/support?category_id=2')
data = loader.load()
print(data[0])
### Error Message and Stack Trace (if applicable)
Your browser is out of dateThis may cause VK to work slowly or experience errors.Update your browser or install one of the following:ChromeOperaFirefox
### Description
I'm trying to parse all data from "'https://m.vk.com/support" (including info from sublinks, to construct RAG). But see the output with the content: "Your browser is out of dateThis may cause VK to work slowly or experience errors.Update your browser or install one of the following:ChromeOperaFirefox"
### System Info
langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.11
langchain-text-splitters==0.2.2
windows
python 3.10 | Get warning about browser instead of real info in WebBaseLoader | https://api.github.com/repos/langchain-ai/langchain/issues/23813/comments | 1 | 2024-07-03T13:33:51Z | 2024-07-04T10:44:27Z | https://github.com/langchain-ai/langchain/issues/23813 | 2,388,652,128 | 23,813 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangGraph/LangChain rather than my code.
- [X] I am sure this is better as an issue [rather than a GitHub discussion](https://github.com/langchain-ai/langgraph/discussions/new/choose), since this is a LangGraph bug and not a design question.
### Example Code
```python
Following below example with locally hosted llama3 70B instruct model with ChatOpenAI
https://langchain-ai.github.io/langgraph/how-tos/create-react-agent/
Simlar issue with the following example:
from langchain_core.tools import tool
from langgraph.graph import MessageGraph from langgraph.prebuilt import ToolNode, tools_condition
@tool def divide(a: float, b: float) -> int: """Return a / b.""" return a / b
llm = ChatOpenAI( model_name = 'Meta-Llama-3-70B-Instruct', base_url = "http://172.17.0.8:xxxx/v1/", api_key = "EMPTY", temperature=0).bind( response_format={"type": "json_object"} )
tools = [divide]
graph_builder = MessageGraph()
graph_builder.add_node("tools", ToolNode(tools))
graph_builder.add_node("chatbot", llm.bind_tools(tools))
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_conditional_edges( ... "chatbot", tools_condition ... ) graph_builder.set_entry_point("chatbot")
graph = graph_builder.compile()
graph.invoke([("user", "What's 329993 divided by 13662?")])
```
### Error Message and Stack Trace (if applicable)
```shell
BadRequestError: Error code: 400 - {'object': 'error', 'message': "[{'type': 'extra_forbidden', 'loc': ('body', 'tools'), 'msg': 'Extra inputs are not permitted', 'input': [{'type': 'function', 'function': {'name': 'get_weather', 'description': 'Use this to get weather information.', 'parameters': {'type': 'object', 'properties': {'city': {'enum': ['nyc', 'sf'], 'type': 'string'}}, 'required': ['city']}}}], 'url': 'https://errors.pydantic.dev/2.7/v/extra_forbidden'}]", 'type': 'BadRequestError', 'param': None, 'code': 400}
```
### Description
I have tried instantiating ChatOpenAI as follows:
llm = ChatOpenAI( model_name = 'Meta-Llama-3-70B-Instruct', base_url = "http://172.17.0.8:xxxx/v1/", api_key = "EMPTY", temperature=0)
llm = ChatOpenAI( model_name = 'Meta-Llama-3-70B-Instruct', base_url = "http://172.17.0.8:xxxx/v1/", api_key = "EMPTY", temperature=0).bind( response_format={"type": "json_object"} )
### System Info
Meta's llama 3 70B Instruct locally hosted on vllm.
ChatOpenAI works fine for other application for example RAG and LCEL | BadRequestError with vllm locally hosted Llama3 70B Model | https://api.github.com/repos/langchain-ai/langchain/issues/23814/comments | 2 | 2024-07-03T12:26:40Z | 2024-07-04T06:06:13Z | https://github.com/langchain-ai/langchain/issues/23814 | 2,388,682,742 | 23,814 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from dotenv import load_dotenv
from langchain_core.runnables import Runnable, RunnableConfig, chain
from langchain_core.tracers.context import tracing_v2_enabled
from phoenix.trace.langchain import LangChainInstrumentor
LangChainInstrumentor().instrument()
load_dotenv()
@chain
def inner_chain(input):
print("inner chain")
return {"inner": input}
@chain
def outer_chain(input):
print("outer chain")
return inner_chain.invoke(input={"inner": "foo_sync"})
@chain
async def outer_chain_async(input):
print("outer chain async")
return await inner_chain.ainvoke(input={"inner": "foo_async"})
async def main_async():
# call async the outsider that inside has a sync call
await outer_chain.ainvoke(input={"outer": "foo"})
# call async the outsider that inside has a async call
await outer_chain_async.ainvoke(input={"outer": "foo_async_outer"})
def main():
outer_chain.invoke(input={"outer": "foo"})
if __name__ == "__main__":
with tracing_v2_enabled(project_name="test"):
# call sync
main()
# call async
import asyncio
asyncio.run(main_async())
```
### Error Message and Stack Trace (if applicable)
The inner chain should be attached as child to the outer chain. Instead the async calls are shown as independent traces.
Arize-phoenix traces:
![image](https://github.com/langchain-ai/langchain/assets/11597393/3e4e7029-b53b-40d0-bb8d-7f03b444962a)
Langsmith traces:
![image](https://github.com/langchain-ai/langchain/assets/11597393/8d9448d6-fb13-4f85-adeb-918688421ce7)
### Description
I am analysing the traces generated with `invoke` and `ainvoke` function on a chain that is innerly calling another chain.
For the sync run, the inner chain is properly traced under the outer chain. Instead when the `ainvoke` is called on the outer chain, the inner chain is traced as separate.
I would expect that both `invoke` and `ainvoke` produce the same traces.
I think the problem might be in the `run_in_executor` function that is called in the `Runnable.ainvoke`, that may be missing some parameters to the self.invoke about the context of the parent chain.
I tested it with two different tracing solutions:
- arize-phoenix
- langsmith
And the issue exists with both of them, so I think the issue is on the side of langchain or opentelemetry.
Environment requirements:
```text
langchain
arize-phoenix
python-dotenv
langsmith
```
Environment variables:
```
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGCHAIN_API_KEY="<my_key_here>"
LANGCHAIN_PROJECT="test"
```
### System Info
`python -m langchain_core.sys_info`:
```text
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.11
> langchain: 0.2.6
> langsmith: 0.1.83
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```
| Nested ainvoke does not show as child in tracing | https://api.github.com/repos/langchain-ai/langchain/issues/23811/comments | 2 | 2024-07-03T12:15:40Z | 2024-07-04T07:10:03Z | https://github.com/langchain-ai/langchain/issues/23811 | 2,388,478,996 | 23,811 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain.js documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain.js rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
def merge_dicts(left: Dict[str, Any], right: Dict[str, Any]) -> Dict[str, Any]:
"""Merge two dicts, handling specific scenarios where a key exists in both
dictionaries but has a value of None in 'left'. In such cases, the method uses the
value from 'right' for that key in the merged dictionary.
Example:
If left = {"function_call": {"arguments": None}} and
right = {"function_call": {"arguments": "{\n"}}
then, after merging, for the key "function_call",
the value from 'right' is used,
resulting in merged = {"function_call": {"arguments": "{\n"}}.
"""
merged = left.copy()
for right_k, right_v in right.items():
if right_k not in merged:
merged[right_k] = right_v
elif right_v is not None and merged[right_k] is None:
merged[right_k] = right_v
elif right_v is None:
continue
elif type(merged[right_k]) != type(right_v):
raise TypeError(
f'additional_kwargs["{right_k}"] already exists in this message,'
" but with a different type."
)
elif isinstance(merged[right_k], str):
merged[right_k] += right_v
elif isinstance(merged[right_k], dict):
merged[right_k] = merge_dicts(merged[right_k], right_v)
elif isinstance(merged[right_k], list):
merged[right_k] = merge_lists(merged[right_k], right_v)
# added this for integer
elif isinstance(merged[right_k], int):
merged[right_k] += right_v
elif merged[right_k] == right_v:
continue
else:
raise TypeError(
f"Additional kwargs key {right_k} already exists in left dict and "
f"value has unsupported type {type(merged[right_k])}."
)
return merged
### Error Message and Stack Trace (if applicable)
Error: Additional kwargs key prompt_token_count already exists in left dict and value has unsupported type <class 'int'>.
### Description
Im trying to run my agents on langchain using gemini 1.5 pro as the LLM, but it is running into the above error, because in the merge_dict() function it is not expecting an integer instance and for prompt_token_count gemini returns it in integer form, also this happens only if left and right dict have different values of prompt_token_count. Manually I have added the fix and it works well, but requesting to kindly include it for us folks who are using gemini-1.5-pro, with other LLMs (non google LLMs) have not faced the same issue. Only with gemini.
### System Info
windows and linux
langchain-core==0.2.11
python | Additional kwargs key prompt_token_count already exists in left dict and value has unsupported type <class 'int'> in langchain-core/utils/_merge.py merge_dict() function when running with Google gemini-1.5-pro | https://api.github.com/repos/langchain-ai/langchain/issues/23827/comments | 4 | 2024-07-03T12:07:04Z | 2024-07-04T10:35:42Z | https://github.com/langchain-ai/langchain/issues/23827 | 2,389,067,343 | 23,827 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.llms import HuggingFaceEndpoint
llm = HuggingFaceEndpoint(endpoint_url="https://mixtral.ai.me", huggingfacehub_api_token=<token>)
### Error Message and Stack Trace (if applicable)
(can't paste because this issue is from an airgapped environment)
```
File /usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py:341, in BaseModel.__init__
ValidationError: 1 validation error for HuggingFaceEndpoint
__root__
Could not authenticate with huggingface_hub. Please check your API token. (type=value_error)
```
### Description
I am trying to use HuggingFaceEndpoint in order to query my locally hosted mixtral. I have a proxy before the proxy that accepts Bearer API tokens. The tokens and API works in other places (e.g. postman) but not with langchain.
### System Info
langchain==0.2.6
langchain-community==0.0.38
langchain-core==0.1.52
langchain-text-splitters==0.2.2
platform: Ubuntu 22.04
Python version: 3.10.12
The runtime is actually Nvidia's Pytorch container from the NGC catalog, tag 24.01.
The environment is airgapped, and we go through a pipeline in order to bring in new library versions. | Could not authenticate with huggingface_hub. | https://api.github.com/repos/langchain-ai/langchain/issues/23808/comments | 2 | 2024-07-03T09:51:14Z | 2024-07-24T07:00:16Z | https://github.com/langchain-ai/langchain/issues/23808 | 2,388,190,250 | 23,808 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import uuid
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryStore
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_community.embeddings import GPT4AllEmbeddings
model_name = "all-MiniLM-L6-v2.gguf2.f16.gguf"
gpt4all_kwargs = {'allow_download': 'True'}
# The vectorstore to use to index the child chunks
vectorstore = Chroma(
collection_name="summaries", embedding_function=GPT4AllEmbeddings(
model_name=model_name,
gpt4all_kwargs=gpt4all_kwargs
)
)
# The storage layer for the parent documents
store = InMemoryStore()
id_key = "doc_id"
# The retriever (empty to start)
print("empty to start")
retriever = MultiVectorRetriever(
vectorstore=vectorstore,
docstore=store,
id_key=id_key,
)
# Add texts
print("Add texts")
doc_ids = [str(uuid.uuid4()) for _ in texts]
summary_texts = [
Document(page_content=s, metadata={id_key: doc_ids[i]})
for i, s in enumerate(text_summaries)
]
retriever.vectorstore.add_documents(summary_texts)
retriever.docstore.mset(list(zip(doc_ids, texts)))
# Add tables
print("Add tables")
table_ids = [str(uuid.uuid4()) for _ in tables]
summary_tables = [
Document(page_content=s, metadata={id_key: table_ids[i]})
for i, s in enumerate(table_summaries)
]
retriever.vectorstore.add_documents(summary_tables)
retriever.docstore.mset(list(zip(table_ids, tables)))
### Error Message and Stack Trace (if applicable)
_No response_
### Description
在jupyter notebook中运行
retriever.vectorstore.add_documents(summary_texts)
retriever.docstore.mset(list(zip(doc_ids, texts)))
时会发生kernel崩溃,并且没有出现任何报错,内存没有发生溢出
![134d081afabba25c2401496679d81b3](https://github.com/langchain-ai/langchain/assets/166845129/80829250-402e-4e68-b22d-574ba6264064)
### System Info
Package Information
-------------------
> langchain_core: 0.2.10
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.82
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20 | When running cookbook/Semi_Structured_RAG.ipynb, appears kernel died | https://api.github.com/repos/langchain-ai/langchain/issues/23802/comments | 0 | 2024-07-03T08:43:35Z | 2024-07-03T08:50:29Z | https://github.com/langchain-ai/langchain/issues/23802 | 2,388,044,064 | 23,802 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
If I create a vectorstore with:
```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_chroma.vectorstores import Chroma
vectorstore = Chroma.from_documents(documents=splits, embedding = FakeEmbeddings(size=1352), collection_name="colm", persist_directory="my_dir")
```
The only way to retrieve the persisted collection from `my_dir` is:
```python
vectorstore = Chroma.from_documents(documents=splits, embedding = FakeEmbeddings(size=1352), collection_name="colm")
```
OR
```python
vectorstore = Chroma.from_text(text=..., embedding = FakeEmbeddings(size=1352), collection_name="colm")
```
Related issues:
https://github.com/langchain-ai/langchain/issues/22361
https://github.com/langchain-ai/langchain/issues/20866
https://github.com/langchain-ai/langchain/issues/19807#issuecomment-2028610882
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I want dedicated `classmethod` just like this:
https://github.com/langchain-ai/langchain/blob/27aa4d38bf93f3eef7c46f65cc0d0ef3681137eb/libs/partners/qdrant/langchain_qdrant/vectorstores.py#L1351
that returns me an instance of `Chroma` without inserting any texts
### System Info
langchain-chroma 0.1.1 | partner[chroma]: Not able to load persisted collection without calling `from_documents` | https://api.github.com/repos/langchain-ai/langchain/issues/23797/comments | 3 | 2024-07-03T07:44:25Z | 2024-07-05T04:59:25Z | https://github.com/langchain-ai/langchain/issues/23797 | 2,387,920,544 | 23,797 |
[
"hwchase17",
"langchain"
] | ### URL
_No response_
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
**Description**:
I recently discovered a very useful feature in the LangChain CLI that allows templates to be installed from a specific subdirectory within a repository using a URL fragment, like so:
`git+ssh://[email protected]/norisuke3/llm.git#subdirectory=templates/japanese-speak`
However, I was unable to find any documentation on this feature in the current LangChain documentation, and I had to dig into [the source code to find out how to use it](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/utils/git.py#L42). This feature is incredibly useful for managing multiple templates in a single repository and would greatly benefit other users if it were documented.
**Proposed Solution**:
Add a section in the documentation that explains how to install templates from a specific subdirectory within a repository using the URL fragment notation. Good place to put this description can be adding a new page under a section of Additional Resources in [this page](https://github.com/langchain-ai/langchain/blob/master/templates/README.md)?
**Example**:
langchain app add "git+ssh://[email protected]/norisuke3/llm.git#subdirectory=templates/japanese-speak"
**Additional Context**:
This feature allows users to manage and install multiple templates from a single repository, which is a common use case for organizing LangChain templates. Including this in the documentation would improve user experience and reduce the need for source code exploration.
### Idea or request for content:
_No response_ | DOC: Documenting the use of subdirectory for template installation with 'langchain app add' | https://api.github.com/repos/langchain-ai/langchain/issues/23777/comments | 1 | 2024-07-02T20:17:37Z | 2024-07-02T21:11:51Z | https://github.com/langchain-ai/langchain/issues/23777 | 2,387,090,774 | 23,777 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import os
from dotenv import load_dotenv
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain.tools import BaseTool
from langchain_openai import ChatOpenAI
load_dotenv()
class WeatherTool(BaseTool):
def __init__(self):
super().__init__(
name="fetch_current_weather",
description="get current weather based location",
)
def _run(self, location: str):
answer = "It's raining"
return answer
def main():
prompt = hub.pull("hwchase17/react")
tools = [WeatherTool()]
llm = ChatOpenAI(
model="gpt-4o", temperature=0.0, openai_api_key=os.getenv("OPENAI_API_KEY")
)
agent = create_react_agent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
return_intermediate_stpes=True,
handle_parsing_errors=True,
memory=None,
max_iterations=2,
verbose=True,
)
query = "What is the weather today in London?"
result1 = agent_executor.invoke({"input": query})
# LangChain bug - return_intermediate_steps not being set correctly during instantiation
agent_executor.return_intermediate_steps = True
result2 = agent_executor.invoke({"input": query})
print(f"Keys in result1: {result1.keys()}")
print(f"Keys in result2: {result2.keys()}")
if __name__ == "__main__":
main()
```
### Error Message and Stack Trace (if applicable)
Output is:
```
> Entering new AgentExecutor chain...
To answer the question about the weather in London, I need to fetch the current weather data for that location.
Action: fetch_current_weather
Action Input: LondonIt's rainingI now know the final answer.
Final Answer: The weather today in London is rainy.
> Finished chain.
> Entering new AgentExecutor chain...
I need to find the current weather in London.
Action: fetch_current_weather
Action Input: LondonIt's rainingI now know the final answer.
Final Answer: The weather today in London is raining.
> Finished chain.
Keys in result1: dict_keys(['input', 'output'])
Keys in result2: dict_keys(['input', 'output', 'intermediate_steps'])
```
### Description
Adding `return_intermediate_steps=True` to `AgentExecutor` does not seem to work. Instead I have to set this value after instantiation.
### System Info
langchain==0.2.6
langchain-openai==0.1.13
langchainhub==0.1.20
python-dotenv==1.0.1 | Agent Executor's return_intermediate_steps does not have desired effect | https://api.github.com/repos/langchain-ai/langchain/issues/23760/comments | 1 | 2024-07-02T12:06:31Z | 2024-07-02T12:10:29Z | https://github.com/langchain-ai/langchain/issues/23760 | 2,386,082,343 | 23,760 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
instructions = """You are an agent designed to write and execute python code to answer questions.
You have access to a python REPL, which you can use to execute python code.
If you get an error, debug your code and try again.
Only use the output of your code to answer the question.
You might know the answer without running any code, but you should still run the code to get the answer.
If it does not seem like you can write code to answer the question, just return "I don't know" as the answer.
"""
base_prompt = hub.pull("langchain-ai/openai-functions-template")
prompt = base_prompt.partial(instructions=instructions)
agent = create_openai_functions_agent(ChatOpenAI(temperature=0), tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
code=open("./code.txt", "r", encoding="utf-8").read()
require=f"""I'll give you a piece of Python code based on Pytorch that trains a communication model. The model consists of five components: SemanticEncoder, ChannelEncoder, PhysicalChannel, ChannelDecoder, and SemanticDecoder. The input text is first tokenized, propagated forward through the model, and finally decoded into text using the tokenizer. Now I need you to test the BLEU of the trained model.\n\n
the code is as follows:\n\n
```{code}```
"""
```
### Error Message and Stack Trace (if applicable)
ModuleNotFoundError("No module named 'nltk'")I don't have access to the NLTK library to calculate the BLEU score. You can run the code on your local machine with NLTK installed to get the BLEU score for the trained model.
### Description
I would like to know what libraries pythonPERLTool supports. I saw in the official documentation that you can use torch. Does this come with it? Does it support other common libraries
### System Info
langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.10
langchain-experimental==0.0.62
langchain-openai==0.1.13
langchain-text-splitters==0.2.2
langchainhub==0.1.20 | pythonPERL library question | https://api.github.com/repos/langchain-ai/langchain/issues/23759/comments | 0 | 2024-07-02T11:40:32Z | 2024-07-02T11:43:07Z | https://github.com/langchain-ai/langchain/issues/23759 | 2,386,029,879 | 23,759 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
"""Standard LangChain interface tests"""
from typing import Type
import pytest
from langchain_core.language_models import BaseChatModel
from langchain_standard_tests.unit_tests import ChatModelUnitTests
from langchain_upstage import ChatUpstage
class TestUpstageStandard(ChatModelUnitTests):
@pytest.fixture
def chat_model_class(self) -> Type[BaseChatModel]:
return ChatUpstage
@pytest.fixture
def chat_model_params(self) -> dict:
return {
"model": "solar-1-mini-chat",
}
```
### Error Message and Stack Trace (if applicable)
```
Spawning shell within /Users/juhyung/Library/Caches/pypoetry/virtualenvs/langchain-upstage-Id53XXjQ-py3.11
➜ upstage git:(SDR-22) emulate bash -c '. /Users/juhyung/Library/Caches/pypoetry/virtualenvs/langchain-upstage-Id53XXjQ-py3.11/bin/activate'
(langchain-upstage-py3.11) ➜ upstage git:(SDR-22) poetry install
Installing dependencies from lock file
Package operations: 36 installs, 0 updates, 0 removals
- Installing typing-extensions (4.12.2)
- Installing annotated-types (0.7.0)
- Installing certifi (2024.6.2)
- Installing charset-normalizer (3.3.2)
- Installing h11 (0.14.0)
- Installing idna (3.7)
- Installing pydantic-core (2.20.0)
- Installing sniffio (1.3.1)
- Installing urllib3 (2.2.2)
- Installing anyio (4.4.0)
- Installing httpcore (1.0.5)
- Installing jsonpointer (3.0.0)
- Installing orjson (3.10.5)
- Installing pydantic (2.8.0)
- Installing requests (2.32.3)
- Installing distro (1.9.0)
- Installing filelock (3.15.4)
- Installing fsspec (2024.6.1)
- Installing httpx (0.27.0)
- Installing jsonpatch (1.33)
- Installing langsmith (0.1.83)
- Installing packaging (24.1)
- Installing pyyaml (6.0.1)
- Installing regex (2024.5.15)
- Installing tenacity (8.4.2)
- Installing tqdm (4.66.4)
- Installing huggingface-hub (0.23.4)
- Installing langchain-core (0.2.10 aa16553)
- Installing mypy-extensions (1.0.0)
- Installing openai (1.35.7)
- Installing tiktoken (0.7.0)
- Installing langchain-openai (0.1.13 aa16553)
- Installing mypy (0.991)
- Installing pypdf (4.2.0)
- Installing tokenizers (0.19.1)
- Installing types-requests (2.32.0.20240622)
Installing the current project: langchain-upstage (0.1.7)
(langchain-upstage-py3.11) ➜ upstage git:(SDR-22) poetry install --with test
Installing dependencies from lock file
Package operations: 19 installs, 0 updates, 0 removals
- Installing mdurl (0.1.2)
- Installing iniconfig (2.0.0)
- Installing markdown-it-py (3.0.0)
- Installing pluggy (1.5.0)
- Installing pygments (2.18.0)
- Installing six (1.16.0)
- Installing numpy (1.26.4)
- Installing pytest (7.4.4)
- Installing python-dateutil (2.9.0.post0)
- Installing rich (13.7.1)
- Installing typing-inspect (0.9.0)
- Installing watchdog (4.0.1)
- Installing docarray (0.32.1)
- Installing freezegun (1.5.1)
- Installing langchain-standard-tests (0.1.1 aa16553)
- Installing pytest-asyncio (0.21.2)
- Installing pytest-mock (3.14.0)
- Installing pytest-watcher (0.3.5)
- Installing syrupy (4.6.1)
Installing the current project: langchain-upstage (0.1.7)
(langchain-upstage-py3.11) ➜ upstage git:(SDR-22) poetry install --with integration_test
Group(s) not found: integration_test (via --with)
(langchain-upstage-py3.11) ➜ upstage git:(SDR-22) poetry install --with integration_tests
Group(s) not found: integration_tests (via --with)
(langchain-upstage-py3.11) ➜ upstage git:(SDR-22) hx pyproject.toml
(langchain-upstage-py3.11) ➜ upstage git:(SDR-22) poetry install --with test_integration
Installing dependencies from lock file
Package operations: 1 install, 0 updates, 0 removals
- Installing pillow (10.4.0)
Installing the current project: langchain-upstage (0.1.7)
(langchain-upstage-py3.11) ➜ upstage git:(SDR-22) make test
poetry run pytest tests/unit_tests/
================================================================================================================================================== test session starts ===================================================================================================================================================
platform darwin -- Python 3.11.7, pytest-7.4.4, pluggy-1.5.0
rootdir: /Users/juhyung/upstage/projects/langchain-upstage/libs/upstage
configfile: pyproject.toml
plugins: syrupy-4.6.1, anyio-4.4.0, asyncio-0.21.2, mock-3.14.0
asyncio: mode=Mode.AUTO
collected 39 items
tests/unit_tests/test_chat_models.py ............... [ 38%]
tests/unit_tests/test_chat_models_standard.py FFEEEE [ 53%]
tests/unit_tests/test_embeddings.py .... [ 64%]
tests/unit_tests/test_groundedness_check.py . [ 66%]
tests/unit_tests/test_imports.py . [ 69%]
tests/unit_tests/test_layout_analysis.py .......... [ 94%]
tests/unit_tests/test_secrets.py .. [100%]
========================================================================================================================================================= ERRORS =========================================================================================================================================================
_____________________________________________________________________________________________________________________________ ERROR at setup of TestUpstageStandard.test_bind_tool_pydantic ______________________________________________________________________________________________________________________________
self = <tests.unit_tests.test_chat_models_standard.TestUpstageStandard object at 0x10f086a50>
@pytest.fixture
def model(self) -> BaseChatModel:
return self.chat_model_class(
> **{**self.standard_chat_model_params, **self.chat_model_params}
)
E TypeError: 'method' object is not a mapping
/Users/juhyung/Library/Caches/pypoetry/virtualenvs/langchain-upstage-Id53XXjQ-py3.11/lib/python3.11/site-packages/langchain_standard_tests/unit_tests/chat_models.py:47: TypeError
_______________________________________________________________________________________________________________________ ERROR at setup of TestUpstageStandard.test_with_structured_output[Person] ________________________________________________________________________________________________________________________
self = <tests.unit_tests.test_chat_models_standard.TestUpstageStandard object at 0x10f0902d0>
@pytest.fixture
def model(self) -> BaseChatModel:
return self.chat_model_class(
> **{**self.standard_chat_model_params, **self.chat_model_params}
)
E TypeError: 'method' object is not a mapping
/Users/juhyung/Library/Caches/pypoetry/virtualenvs/langchain-upstage-Id53XXjQ-py3.11/lib/python3.11/site-packages/langchain_standard_tests/unit_tests/chat_models.py:47: TypeError
_______________________________________________________________________________________________________________________ ERROR at setup of TestUpstageStandard.test_with_structured_output[schema1] _______________________________________________________________________________________________________________________
self = <tests.unit_tests.test_chat_models_standard.TestUpstageStandard object at 0x10f090610>
@pytest.fixture
def model(self) -> BaseChatModel:
return self.chat_model_class(
> **{**self.standard_chat_model_params, **self.chat_model_params}
)
E TypeError: 'method' object is not a mapping
/Users/juhyung/Library/Caches/pypoetry/virtualenvs/langchain-upstage-Id53XXjQ-py3.11/lib/python3.11/site-packages/langchain_standard_tests/unit_tests/chat_models.py:47: TypeError
_______________________________________________________________________________________________________________________________ ERROR at setup of TestUpstageStandard.test_standard_params _______________________________________________________________________________________________________________________________
self = <tests.unit_tests.test_chat_models_standard.TestUpstageStandard object at 0x10f090f90>
@pytest.fixture
def model(self) -> BaseChatModel:
return self.chat_model_class(
> **{**self.standard_chat_model_params, **self.chat_model_params}
)
E TypeError: 'method' object is not a mapping
/Users/juhyung/Library/Caches/pypoetry/virtualenvs/langchain-upstage-Id53XXjQ-py3.11/lib/python3.11/site-packages/langchain_standard_tests/unit_tests/chat_models.py:47: TypeError
======================================================================================================================================================== FAILURES ========================================================================================================================================================
_____________________________________________________________________________________________________________________________________________ TestUpstageStandard.test_init ______________________________________________________________________________________________________________________________________________
self = <tests.unit_tests.test_chat_models_standard.TestUpstageStandard object at 0x10f054950>
def test_init(self) -> None:
model = self.chat_model_class(
> **{**self.standard_chat_model_params, **self.chat_model_params}
)
E TypeError: 'method' object is not a mapping
/Users/juhyung/Library/Caches/pypoetry/virtualenvs/langchain-upstage-Id53XXjQ-py3.11/lib/python3.11/site-packages/langchain_standard_tests/unit_tests/chat_models.py:87: TypeError
________________________________________________________________________________________________________________________________________ TestUpstageStandard.test_init_streaming _________________________________________________________________________________________________________________________________________
self = <tests.unit_tests.test_chat_models_standard.TestUpstageStandard object at 0x10f085710>
def test_init_streaming(
self,
) -> None:
model = self.chat_model_class(
> **{
**self.standard_chat_model_params,
**self.chat_model_params,
"streaming": True,
}
)
E TypeError: 'method' object is not a mapping
/Users/juhyung/Library/Caches/pypoetry/virtualenvs/langchain-upstage-Id53XXjQ-py3.11/lib/python3.11/site-packages/langchain_standard_tests/unit_tests/chat_models.py:95: TypeError
================================================================================================================================================== slowest 5 durations ===================================================================================================================================================
0.54s call tests/unit_tests/test_chat_models.py::test_upstage_tokenizer
0.34s call tests/unit_tests/test_chat_models.py::test_upstage_tokenizer_get_num_tokens
0.12s call tests/unit_tests/test_groundedness_check.py::test_initialization
0.08s call tests/unit_tests/test_chat_models.py::test_upstage_model_param
0.04s call tests/unit_tests/test_chat_models.py::test_initialization
================================================================================================================================================ short test summary info =================================================================================================================================================
FAILED tests/unit_tests/test_chat_models_standard.py::TestUpstageStandard::test_init - TypeError: 'method' object is not a mapping
FAILED tests/unit_tests/test_chat_models_standard.py::TestUpstageStandard::test_init_streaming - TypeError: 'method' object is not a mapping
ERROR tests/unit_tests/test_chat_models_standard.py::TestUpstageStandard::test_bind_tool_pydantic - TypeError: 'method' object is not a mapping
ERROR tests/unit_tests/test_chat_models_standard.py::TestUpstageStandard::test_with_structured_output[Person] - TypeError: 'method' object is not a mapping
ERROR tests/unit_tests/test_chat_models_standard.py::TestUpstageStandard::test_with_structured_output[schema1] - TypeError: 'method' object is not a mapping
ERROR tests/unit_tests/test_chat_models_standard.py::TestUpstageStandard::test_standard_params - TypeError: 'method' object is not a mapping
========================================================================================================================================= 2 failed, 33 passed, 4 errors in 5.57s =========================================================================================================================================
make: *** [test] Error 1
(langchain-upstage-py3.11) ➜ upstage git:(SDR-22)
```
also lint fails
```
Run make lint_tests
poetry run ruff .
poetry run ruff format tests --diff
16 files already formatted
poetry run ruff --select I tests
mkdir .mypy_cache_test; poetry run mypy tests --cache-dir .mypy_cache_test
tests/unit_tests/test_chat_models_standard.py:[14](https://github.com/langchain-ai/langchain-upstage/actions/runs/9756872538/job/26928071498?pr=9#step:11:15): error: Signature of "chat_model_class" incompatible with supertype "ChatModelTests" [override]
tests/unit_tests/test_chat_models_standard.py:18: error: Signature of "chat_model_params" incompatible with supertype "ChatModelTests" [override]
tests/integration_tests/test_chat_models_standard.py:14: error: Signature of "chat_model_class" incompatible with supertype "ChatModelTests" [override]
tests/integration_tests/test_chat_models_standard.py:18: error: Signature of "chat_model_params" incompatible with supertype "ChatModelTests" [override]
Found 4 errors in 2 files (checked [16](https://github.com/langchain-ai/langchain-upstage/actions/runs/9756872538/job/26928071498?pr=9#step:11:17) source files)
make: *** [Makefile:31: lint_tests] Error 1
Error: Process completed with exit code 2.
```
### Description
i use
```
langchain-standard-tests = { git = "https://github.com/langchain-ai/langchain.git", subdirectory = "libs/standard-tests" }
```
### System Info
it fails on every version of python
<img width="328" alt="image" src="https://github.com/langchain-ai/langchain/assets/20140126/96fd6de5-5d7d-481a-8127-bca853964cf5">
| something is wrong in standard_test | https://api.github.com/repos/langchain-ai/langchain/issues/23755/comments | 1 | 2024-07-02T07:52:18Z | 2024-07-02T14:34:01Z | https://github.com/langchain-ai/langchain/issues/23755 | 2,385,511,056 | 23,755 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
model = ChatFireworks(model=model_name)
parser = PydanticOutputParser(pydantic_object=pydantic)
prompt = ChatPromptTemplate.from_messages([
("system", "Answer the user query. Wrap the output in json tags\n{format_instructions}"),
("human", "{query}"),
]).partial(format_instructions=parser.get_format_instructions())
chain = prompt | model | parser
try:
output = chain.invoke({"query": input})
except (OutputParserException, InvalidRequestError) as e:
output = f"An error occurred: {e}"
### Error Message and Stack Trace (if applicable)
_No response_
### Description
An error occurred: {'error': {'object': 'error', 'type': 'invalid_request_error', 'message': 'jinja template rendering failed. System role not supported'}}
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
> Python Version: 3.12.4 (main, Jun 21 2024, 11:46:08) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.81
> langchain_fireworks: 0.1.3
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| Pydantc output parser not working with gemma fireworks ai | https://api.github.com/repos/langchain-ai/langchain/issues/23754/comments | 2 | 2024-07-02T07:15:07Z | 2024-07-03T05:54:48Z | https://github.com/langchain-ai/langchain/issues/23754 | 2,385,434,234 | 23,754 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
retriever.vectorstore.add_documents(summary_docs) will crash (list index out of range )
# Helper function to add documents to the vectorstore and docstore
def add_documents(retriever, doc_summaries, doc_contents):
doc_ids = [str(uuid.uuid4()) for _ in doc_contents]
summary_docs = [
Document(page_content=s, metadata={id_key: doc_ids[i]})
for i, s in enumerate(doc_summaries)
]
retriever.vectorstore.add_documents(summary_docs)
retriever.docstore.mset(list(zip(doc_ids, doc_contents)))
### Error Message and Stack Trace (if applicable)
_No response_
### Description
run Multi_modal_RAG.ipynb step by step
### System Info
langchain 0.1.9
langchain-chroma 0.1.1
langchain-community 0.0.24
langchain-core 0.1.27
langchain-experimental 0.0.52
langchain-google-genai 0.0.9
langchain-openai 0.0.7
langchain-text-splitters 0.2.1
langchainhub 0.1.2 | Multi_modal_RAG.ipynb run IndexError: list index out of range | https://api.github.com/repos/langchain-ai/langchain/issues/23746/comments | 0 | 2024-07-02T02:22:47Z | 2024-07-02T02:25:21Z | https://github.com/langchain-ai/langchain/issues/23746 | 2,385,055,305 | 23,746 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import langchain.agents import AgentExecutor
import langchain.agents as lc_agents
def fetch_config_from_header(config: Dict[str, Any], req: Request) -> Dict[str, Any]:
config = config.copy()
configurable = config.get("configurable", {})
if "x-model-name" in req.headers:
configurable["model_name"] = req.headers["x-model-name"]
else:
raise HTTPException(401, "No model name provided")
if "x-api-key" in req.headers:
configurable["default_headers"] = {
"Content-Type":"application/json",
"api-key": req.headers["x-api-key"]
}
else:
raise HTTPException(401, "No API key provided")
if "x-model-kwargs" in req.headers:
configurable["model_kwargs"] = json.loads(req.headers["x-model-kwargs"])
else:
raise HTTPException(401, "No model arguments provided")
configurable["openai_api_base"] = f"https://someendpoint.com/{req.headers['x-model-name']}"
config["configurable"] = configurable
return config
chat_model = ChatOpenAI(
model_name = "some_model",
model_kwargs = {},
default_headers = {},
openai_api_key = "placeholder",
openai_api_base = "placeholder").configurable_fields(
model_name = ConfigurableField(id="model_name"),
model_kwargs = ConfigurableField(id="model_kwargs"),
default_headers = ConfigurableField(id="default_headers"),
openai_api_base = ConfigurableField(id="openai_api_base"),
)
agent = lc_agents.tool_calling_agent.base.create_tool_calling_agent(chat_model, tools, prompt_template)
runnable = AgentExecutor(agent=agent, tools=tools)
add_routes(
app,
runnable.with_types(input_type=InputChat),
path="/some_agent",
per_req_config_modifier=fetch_config_from_header,
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Ideally when we set a field to be configurable, it should be updated accordingly when new configurable values are given by per_req_config_modifier.
However, none of the configurable variables such as temperature, openai_api_base, default_headers, etc. are passed to the final client.
Some of the related values from certain functions
```
# returned value of config in fetch_config_from_header()
{'configurable': {'model_name': 'some_model', 'default_headers': {'Content-Type': 'application/json', 'api-key': 'some_api_key'}, 'model_kwargs': {'user': 'some_user'}, 'openai_api_base': 'https://someendpoint.com/some_model', 'temperature': 0.6}
# values of cast_to, opts in openai's _base_client.py AsyncAPIClient.post()
cast_to: <class 'openai.types.chat.chat_completion.ChatCompletion'>
opts: method='post' url='/chat/completions' params={} headers=NOT_GIVEN max_retries=NOT_GIVEN timeout=NOT_GIVEN files=None idempotency_key=None post_parser=NOT_GIVEN json_data={'messages': [{'content': 'some_content', 'role': 'system'}], 'model': 'default_model', 'n': 1, 'stream': False, 'temperature': 0.7} extra_json=None
```
### System Info
```
langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.10
langchain-experimental==0.0.62
langchain-openai==0.1.13
langchain-text-splitters==0.2.2
langgraph==0.1.5
langserve==0.2.2
langsmith==0.1.82
openai==1.35.7
platform = linux
python version = 3.12.4
``` | ConfigurableFields does not works for agent | https://api.github.com/repos/langchain-ai/langchain/issues/23745/comments | 4 | 2024-07-02T02:15:08Z | 2024-07-02T15:09:58Z | https://github.com/langchain-ai/langchain/issues/23745 | 2,385,048,568 | 23,745 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
This is a spam issue.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Spam!
### System Info
no | Spam issue | https://api.github.com/repos/langchain-ai/langchain/issues/23720/comments | 3 | 2024-07-01T15:27:12Z | 2024-07-17T15:36:13Z | https://github.com/langchain-ai/langchain/issues/23720 | 2,384,138,131 | 23,720 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain.agents import AgentExecutor, create_react_agent
from langchain.memory import ConversationBufferMemory
from langchain_core.tools import Tool
from langchain_core.prompts import PromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper
from langchain_community.tools import YouTubeSearchTool
youtube = YouTubeSearchTool()
wiki = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
tools = [
Tool(
name="youtube",
func=youtube.run,
description="Helps in getting youtube videos",
),
Tool(
name="wiki",
func=wiki.run,
description="Useful to search about a popular entity",
)
]
tool_names = ["youtube","wiki"]
template = '''Answer the following questions as best you can. You have access to the following tools:
{tools}
Begin!
Question: {input}
Thought:{agent_scratchpad}
Action: the action to take, should be one of [{tool_names}]
'''
prompt = PromptTemplate (input_variables = ["tools","tool_names","input"] , template = template)
memory = ConversationBufferMemory(memory_key="chat_history")
llm= ChatGoogleGenerativeAI (model="chat-bison@002")
agent=create_react_agent(llm=llm, tools=tools, prompt = prompt)
agent_chain = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)
agent_chain.invoke({"input": "Tell me something about USA"})
### Error Message and Stack Trace (if applicable)
> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
InvalidArgument Traceback (most recent call last)
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_google_genai/chat_models.py:178, in _chat_with_retry.<locals>._chat_with_retry(**kwargs)
177 try:
--> 178 return generation_method(**kwargs)
179 # Do not retry for these errors.
File ~/opt/anaconda3/lib/python3.11/site-packages/google/ai/generativelanguage_v1beta/services/generative_service/client.py:1122, in GenerativeServiceClient.stream_generate_content(self, request, model, contents, retry, timeout, metadata)
1121 # Send the request.
-> 1122 response = rpc(
1123 request,
1124 retry=retry,
1125 timeout=timeout,
1126 metadata=metadata,
1127 )
1129 # Done; return the response.
File ~/opt/anaconda3/lib/python3.11/site-packages/google/api_core/gapic_v1/method.py:131, in _GapicCallable.__call__(self, timeout, retry, compression, *args, **kwargs)
129 kwargs["compression"] = compression
--> 131 return wrapped_func(*args, **kwargs)
File ~/opt/anaconda3/lib/python3.11/site-packages/google/api_core/retry/retry_unary.py:293, in Retry.__call__.<locals>.retry_wrapped_func(*args, **kwargs)
290 sleep_generator = exponential_sleep_generator(
291 self._initial, self._maximum, multiplier=self._multiplier
292 )
--> 293 return retry_target(
294 target,
295 self._predicate,
296 sleep_generator,
297 timeout=self._timeout,
298 on_error=on_error,
299 )
File ~/opt/anaconda3/lib/python3.11/site-packages/google/api_core/retry/retry_unary.py:153, in retry_target(target, predicate, sleep_generator, timeout, on_error, exception_factory, **kwargs)
151 except Exception as exc:
152 # defer to shared logic for handling errors
--> 153 _retry_error_helper(
154 exc,
155 deadline,
156 sleep,
157 error_list,
158 predicate,
159 on_error,
160 exception_factory,
161 timeout,
162 )
163 # if exception not raised, sleep before next attempt
File ~/opt/anaconda3/lib/python3.11/site-packages/google/api_core/retry/retry_base.py:212, in _retry_error_helper(exc, deadline, next_sleep, error_list, predicate_fn, on_error_fn, exc_factory_fn, original_timeout)
207 final_exc, source_exc = exc_factory_fn(
208 error_list,
209 RetryFailureReason.NON_RETRYABLE_ERROR,
210 original_timeout,
211 )
--> 212 raise final_exc from source_exc
213 if on_error_fn is not None:
File ~/opt/anaconda3/lib/python3.11/site-packages/google/api_core/retry/retry_unary.py:144, in retry_target(target, predicate, sleep_generator, timeout, on_error, exception_factory, **kwargs)
143 try:
--> 144 result = target()
145 if inspect.isawaitable(result):
File ~/opt/anaconda3/lib/python3.11/site-packages/google/api_core/timeout.py:120, in TimeToDeadlineTimeout.__call__.<locals>.func_with_timeout(*args, **kwargs)
118 kwargs["timeout"] = max(0, self._timeout - time_since_first_attempt)
--> 120 return func(*args, **kwargs)
File ~/opt/anaconda3/lib/python3.11/site-packages/google/api_core/grpc_helpers.py:174, in _wrap_stream_errors.<locals>.error_remapped_callable(*args, **kwargs)
173 except grpc.RpcError as exc:
--> 174 raise exceptions.from_grpc_error(exc) from exc
InvalidArgument: 400 Request contains an invalid argument.
The above exception was the direct cause of the following exception:
ChatGoogleGenerativeAIError Traceback (most recent call last)
Cell In[30], line 70
66 agent=create_react_agent(llm=llm, tools=tools, prompt = prompt)
68 agent_chain = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)
---> 70 agent_chain.invoke({"input": "Tell me something about USA"})
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:166, in Chain.invoke(self, input, config, **kwargs)
164 except BaseException as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
169 if include_run_info:
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
153 try:
154 self._validate_inputs(inputs)
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
159 )
161 final_outputs: Dict[str, Any] = self.prep_outputs(
162 inputs, outputs, return_only_outputs
163 )
164 except BaseException as e:
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1433, in AgentExecutor._call(self, inputs, run_manager)
1431 # We now enter the agent loop (until it returns something).
1432 while self._should_continue(iterations, time_elapsed):
-> 1433 next_step_output = self._take_next_step(
1434 name_to_tool_map,
1435 color_mapping,
1436 inputs,
1437 intermediate_steps,
1438 run_manager=run_manager,
1439 )
1440 if isinstance(next_step_output, AgentFinish):
1441 return self._return(
1442 next_step_output, intermediate_steps, run_manager=run_manager
1443 )
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1139, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1130 def _take_next_step(
1131 self,
1132 name_to_tool_map: Dict[str, BaseTool],
(...)
1136 run_manager: Optional[CallbackManagerForChainRun] = None,
1137 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1138 return self._consume_next_step(
-> 1139 [
1140 a
1141 for a in self._iter_next_step(
1142 name_to_tool_map,
1143 color_mapping,
1144 inputs,
1145 intermediate_steps,
1146 run_manager,
1147 )
1148 ]
1149 )
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1139, in <listcomp>(.0)
1130 def _take_next_step(
1131 self,
1132 name_to_tool_map: Dict[str, BaseTool],
(...)
1136 run_manager: Optional[CallbackManagerForChainRun] = None,
1137 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1138 return self._consume_next_step(
-> 1139 [
1140 a
1141 for a in self._iter_next_step(
1142 name_to_tool_map,
1143 color_mapping,
1144 inputs,
1145 intermediate_steps,
1146 run_manager,
1147 )
1148 ]
1149 )
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:1167, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1164 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
1166 # Call the LLM to see what to do.
-> 1167 output = self.agent.plan(
1168 intermediate_steps,
1169 callbacks=run_manager.get_child() if run_manager else None,
1170 **inputs,
1171 )
1172 except OutputParserException as e:
1173 if isinstance(self.handle_parsing_errors, bool):
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py:398, in RunnableAgent.plan(self, intermediate_steps, callbacks, **kwargs)
390 final_output: Any = None
391 if self.stream_runnable:
392 # Use streaming to make sure that the underlying LLM is invoked in a
393 # streaming
(...)
396 # Because the response from the plan is not a generator, we need to
397 # accumulate the output into final output and return that.
--> 398 for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
399 if final_output is None:
400 final_output = chunk
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2882, in RunnableSequence.stream(self, input, config, **kwargs)
2876 def stream(
2877 self,
2878 input: Input,
2879 config: Optional[RunnableConfig] = None,
2880 **kwargs: Optional[Any],
2881 ) -> Iterator[Output]:
-> 2882 yield from self.transform(iter([input]), config, **kwargs)
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2869, in RunnableSequence.transform(self, input, config, **kwargs)
2863 def transform(
2864 self,
2865 input: Iterator[Input],
2866 config: Optional[RunnableConfig] = None,
2867 **kwargs: Optional[Any],
2868 ) -> Iterator[Output]:
-> 2869 yield from self._transform_stream_with_config(
2870 input,
2871 self._transform,
2872 patch_config(config, run_name=(config or {}).get("run_name") or self.name),
2873 **kwargs,
2874 )
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1867, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
1865 try:
1866 while True:
-> 1867 chunk: Output = context.run(next, iterator) # type: ignore
1868 yield chunk
1869 if final_output_supported:
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:2831, in RunnableSequence._transform(self, input, run_manager, config, **kwargs)
2828 else:
2829 final_pipeline = step.transform(final_pipeline, config)
-> 2831 for output in final_pipeline:
2832 yield output
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1163, in Runnable.transform(self, input, config, **kwargs)
1160 final: Input
1161 got_first_val = False
-> 1163 for ichunk in input:
1164 # The default implementation of transform is to buffer input and
1165 # then call stream.
1166 # It'll attempt to gather all input into a single chunk using
1167 # the `+` operator.
1168 # If the input is not addable, then we'll assume that we can
1169 # only operate on the last chunk,
1170 # and we'll iterate until we get to the last chunk.
1171 if not got_first_val:
1172 final = ichunk
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:4784, in RunnableBindingBase.transform(self, input, config, **kwargs)
4778 def transform(
4779 self,
4780 input: Iterator[Input],
4781 config: Optional[RunnableConfig] = None,
4782 **kwargs: Any,
4783 ) -> Iterator[Output]:
-> 4784 yield from self.bound.transform(
4785 input,
4786 self._merge_configs(config),
4787 **{**self.kwargs, **kwargs},
4788 )
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py:1181, in Runnable.transform(self, input, config, **kwargs)
1178 final = ichunk
1180 if got_first_val:
-> 1181 yield from self.stream(final, config, **kwargs)
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:265, in BaseChatModel.stream(self, input, config, stop, **kwargs)
258 except BaseException as e:
259 run_manager.on_llm_error(
260 e,
261 response=LLMResult(
262 generations=[[generation]] if generation else []
263 ),
264 )
--> 265 raise e
266 else:
267 run_manager.on_llm_end(LLMResult(generations=[[generation]]))
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:245, in BaseChatModel.stream(self, input, config, stop, **kwargs)
243 generation: Optional[ChatGenerationChunk] = None
244 try:
--> 245 for chunk in self._stream(messages, stop=stop, **kwargs):
246 if chunk.message.id is None:
247 chunk.message.id = f"run-{run_manager.run_id}"
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_google_genai/chat_models.py:833, in ChatGoogleGenerativeAI._stream(self, messages, stop, run_manager, tools, functions, safety_settings, tool_config, generation_config, **kwargs)
811 def _stream(
812 self,
813 messages: List[BaseMessage],
(...)
822 **kwargs: Any,
823 ) -> Iterator[ChatGenerationChunk]:
824 request = self._prepare_request(
825 messages,
826 stop=stop,
(...)
831 generation_config=generation_config,
832 )
--> 833 response: GenerateContentResponse = _chat_with_retry(
834 request=request,
835 generation_method=self.client.stream_generate_content,
836 **kwargs,
837 metadata=self.default_metadata,
838 )
839 for chunk in response:
840 _chat_result = _response_to_result(chunk, stream=True)
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_google_genai/chat_models.py:196, in _chat_with_retry(generation_method, **kwargs)
193 except Exception as e:
194 raise e
--> 196 return _chat_with_retry(**kwargs)
File ~/opt/anaconda3/lib/python3.11/site-packages/tenacity/__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
287 @functools.wraps(f)
288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289 return self(f, *args, **kw)
File ~/opt/anaconda3/lib/python3.11/site-packages/tenacity/__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
378 while True:
--> 379 do = self.iter(retry_state=retry_state)
380 if isinstance(do, DoAttempt):
381 try:
File ~/opt/anaconda3/lib/python3.11/site-packages/tenacity/__init__.py:314, in BaseRetrying.iter(self, retry_state)
312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314 return fut.result()
316 if self.after is not None:
317 self.after(retry_state)
File ~/opt/anaconda3/lib/python3.11/concurrent/futures/_base.py:449, in Future.result(self, timeout)
447 raise CancelledError()
448 elif self._state == FINISHED:
--> 449 return self.__get_result()
451 self._condition.wait(timeout)
453 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File ~/opt/anaconda3/lib/python3.11/concurrent/futures/_base.py:401, in Future.__get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
404 self = None
File ~/opt/anaconda3/lib/python3.11/site-packages/tenacity/__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
380 if isinstance(do, DoAttempt):
381 try:
--> 382 result = fn(*args, **kwargs)
383 except BaseException: # noqa: B902
384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
File ~/opt/anaconda3/lib/python3.11/site-packages/langchain_google_genai/chat_models.py:190, in _chat_with_retry.<locals>._chat_with_retry(**kwargs)
187 raise ValueError(error_msg)
189 except google.api_core.exceptions.InvalidArgument as e:
--> 190 raise ChatGoogleGenerativeAIError(
191 f"Invalid argument provided to Gemini: {e}"
192 ) from e
193 except Exception as e:
194 raise e
ChatGoogleGenerativeAIError: Invalid argument provided to Gemini: 400 Request contains an invalid argument.
### Description
I also tried with different langchain wrapper classes for google models. Also tried with different models. When I tried the same with 'GoogleGenerativeAI' class, I get the following error
AttributeError Traceback (most recent call last)
[<ipython-input-7-a2422220c061>](https://ogjiokqjqun-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20240627-060121_RC00_647236509#) in <cell line: 70>()
68 agent_chain = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)
69
---> 70 agent_chain.invoke({"input": "Tell me something about SRK"})
25 frames
[/usr/local/lib/python3.10/dist-packages/langchain_google_genai/llms.py](https://ogjiokqjqun-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20240627-060121_RC00_647236509#) in _completion_with_retry(prompt, is_gemini, stream, **kwargs)
82 try:
83 if is_gemini:
---> 84 return llm.client.generate_content(
85 contents=prompt,
86 stream=stream,
AttributeError: module 'google.generativeai' has no attribute 'generate_content'
### System Info
python version 3.11.8
langchain -0.2.5
langchain_community-0.2.5
langchain_core-0.2.9
langchain-google-gen-ai-1.0.7 | 'ChatGoogleGenerativeAI wrapper class doesn't work with google chat models such as 'chat-bison@002 | https://api.github.com/repos/langchain-ai/langchain/issues/23714/comments | 0 | 2024-07-01T12:16:49Z | 2024-07-01T15:31:26Z | https://github.com/langchain-ai/langchain/issues/23714 | 2,383,693,372 | 23,714 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_openai import AzureChatOpenAI
from langchain_community.callbacks import get_openai_callback
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate
def chat_completion( temperature: int = 0):
try:
chat = AzureChatOpenAI(
azure_deployment=MODEL,
azure_endpoint=AZURE_OPENAI_ENDPOINT,
api_key=AZURE_OPENAI_API_KEY,
openai_api_version=API_VERSION,
model_version=1106,
temperature=temperature,
max_tokens=MAX_TOKENS,
)
messages = [SystemMessage(content="You are a wonderful assistant"), HumanMessage(content="Write a haiku about the sea")]
prompt = ChatPromptTemplate.from_messages(messages)
runnable = prompt | chat
with get_openai_callback() as cb:
res = runnable.invoke({})
print( f"Total tokens: {cb.total_tokens}")
print(f"Total cost: {cb.total_cost}")
print(f"Haiku: {res.content}")
except Exception as err:
print(str(err))
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The get_openai_callback() callback does not have updated models for Azure in the file: "\langchain_community\callbacks\openai_info.py"
So when making a request the total_cost returned is 0.0.
I know that the error is solved by adding the models to the dictionary: "MODEL_COST_PER_1K_TOKENS".
But when deploying in docker for example it would be a tedious task to be modifying the file.
Would it be possible to add the models for Azure?
https://azure.microsoft.com/en-us/pricing/details/cognitive-services/openai-service/
Models:
"gpt-35-turbo-1106": 0.001,
"gpt-35-turbo-1106-completion": 0.002,
"gpt-35-turbo-0125": 0.0005,
"gpt-35-turbo-0125-completion": 0.0015,
### System Info
langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.10
langchain-openai==0.1.8
langchain-text-splitters==0.2.0
langsmith==0.1.82
| The get_openai_callback() function is not updated for Azure models. | https://api.github.com/repos/langchain-ai/langchain/issues/23713/comments | 2 | 2024-07-01T12:12:51Z | 2024-07-14T11:51:05Z | https://github.com/langchain-ai/langchain/issues/23713 | 2,383,684,487 | 23,713 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangGraph/LangChain rather than my code.
- [X] I am sure this is better as an issue [rather than a GitHub discussion](https://github.com/langchain-ai/langgraph/discussions/new/choose), since this is a LangGraph bug and not a design question.
### Example Code
```python
import asyncio
from traceback import print_stack
from typing import Sequence
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.language_models.fake_chat_models import FakeChatModel
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_core.runnables.history import RunnableWithMessageHistory
class SyncFunctionCalledWithinAsyncContextError(Exception):
pass
class TestChatMessageHistory(BaseChatMessageHistory):
def __init__(self) -> None:
self._messages = [
HumanMessage(content='all good')
]
@property
def messages(self) -> list[BaseMessage]:
print_stack()
raise SyncFunctionCalledWithinAsyncContextError
async def aget_messages(self) -> list[BaseMessage]:
return self._messages
def add_messages(self, messages: Sequence[BaseMessage]) -> None:
print_stack()
raise SyncFunctionCalledWithinAsyncContextError
async def aadd_messages(self, messages: Sequence[BaseMessage]) -> None:
self._messages.extend(messages)
def clear(self) -> None:
print_stack()
raise SyncFunctionCalledWithinAsyncContextError
chat = FakeChatModel()
runnable_with_history = RunnableWithMessageHistory(
chat,
get_session_history=lambda session_id: TestChatMessageHistory(),
)
async def main() -> None:
result = await runnable_with_history.ainvoke(
[HumanMessage(content='hello?')],
{'configurable': {'session_id': 'dummy'}},
)
print(result)
if __name__ == '__main__':
asyncio.run(main())
```
### Error Message and Stack Trace (if applicable)
```shell
File "bug.py", line 57, in <module>
asyncio.run(main())
File "/home/user/.pyenv/versions/3.11.4/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
File "/home/user/.pyenv/versions/3.11.4/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
File "/home/user/.pyenv/versions/3.11.4/lib/python3.11/asyncio/base_events.py", line 640, in run_until_complete
self.run_forever()
File "/home/user/.pyenv/versions/3.11.4/lib/python3.11/asyncio/base_events.py", line 607, in run_forever
self._run_once()
File "/home/user/.pyenv/versions/3.11.4/lib/python3.11/asyncio/base_events.py", line 1922, in _run_once
handle._run()
File "/home/user/.pyenv/versions/3.11.4/lib/python3.11/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/home/user/workspace/luna/.venv/lib/python3.11/site-packages/langchain_core/tracers/base.py", line 388, in _end_trace
await self._on_run_update(run)
File "/home/user/workspace/luna/.venv/lib/python3.11/site-packages/langchain_core/tracers/root_listeners.py", line 106, in _on_run_update
await acall_func_with_variable_args(self._arg_on_end, run, self.config)
File "/home/user/workspace/luna/.venv/lib/python3.11/site-packages/langchain_core/runnables/history.py", line 511, in _aexit_history
historic_messages = config["configurable"]["message_history"].messages
File "/home/user/workspace/luna/bug.py", line 24, in messages
print_stack()
Error in AsyncRootListenersTracer.on_llm_end callback: SyncFunctionCalledWithinAsyncContextError()
```
### Description
* I am trying to use `RunnableWithMessageHistory` in async context
* I am expecting that only async methods of my `ChatMessageHistory` backend will be used
* However, even being in async context and calling `ainvoke`, I still see **sync** `ChatMessageHistory.messages` property called
* Expected behaviour: instead of sync `.messages` property, an async method `.aget_messages()` should be called
# Why this happens?
Inside `langchain_core/runnables/history.py`:
```python
class RunnableWithMessageHistory(RunnableBindingBase):
# ...
def _exit_history(self, run: Run, config: RunnableConfig) -> None:
hist: BaseChatMessageHistory = config["configurable"]["message_history"]
# Get the input messages
inputs = load(run.inputs)
input_messages = self._get_input_messages(inputs)
# If historic messages were prepended to the input messages, remove them to
# avoid adding duplicate messages to history.
if not self.history_messages_key:
historic_messages = config["configurable"]["message_history"].messages
input_messages = input_messages[len(historic_messages) :]
# Get the output messages
output_val = load(run.outputs)
output_messages = self._get_output_messages(output_val)
hist.add_messages(input_messages + output_messages)
async def _aexit_history(self, run: Run, config: RunnableConfig) -> None:
hist: BaseChatMessageHistory = config["configurable"]["message_history"]
# Get the input messages
inputs = load(run.inputs)
input_messages = self._get_input_messages(inputs)
# If historic messages were prepended to the input messages, remove them to
# avoid adding duplicate messages to history.
if not self.history_messages_key:
historic_messages = config["configurable"]["message_history"].messages # <----------------- !!!
input_messages = input_messages[len(historic_messages) :]
# Get the output messages
output_val = load(run.outputs)
output_messages = self._get_output_messages(output_val)
await hist.aadd_messages(input_messages + output_messages)
```
Async version `_aexit_history` has been copied from sync `_exit_history`, thus a developer probably forgot to replace sync version for messages retrieval to async one. I think that the solution should be in replacing
```python
historic_messages = config["configurable"]["message_history"].messages
```
with
```python
historic_messages = await config["configurable"]["message_history"].aget_messages()
```
### System Info
System Information
------------------
> OS: Linux
> OS Version: langchain-ai/langgraph#1 SMP Mon, 16 Jan 2023 13:59:21 +0000
> Python Version: 3.11.4 (main, Aug 17 2023, 14:57:18) [GCC 13.2.1 20230801]
Package Information
-------------------
> langchain_core: 0.2.10
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.82
> langchain_groq: 0.1.5
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| Within async context, `RunnableWithMessageHistory` calls sync `.messages` property instead of async `.aget_messages()` method | https://api.github.com/repos/langchain-ai/langchain/issues/23716/comments | 0 | 2024-07-01T09:33:35Z | 2024-07-01T18:33:06Z | https://github.com/langchain-ai/langchain/issues/23716 | 2,384,013,658 | 23,716 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.messages import SystemMessage, merge_message_runs
chain = SystemMessage(content="Hello, World!") + SystemMessage(content=["foo", "bar"])
runnable = chain | merge_message_runs()
runnable.invoke(input={})
```
### Error Message and Stack Trace (if applicable)
```text
Traceback (most recent call last):
File "/Users/JP-Ellis/mwe/mwe.py", line 5, in <module>
runnable.invoke(input={})
File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2507, in invoke
input = step.invoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3985, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1599, in _call_with_config
context.run(
File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 380, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3853, in _invoke
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 380, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/messages/utils.py", line 460, in merge_message_runs
messages = convert_to_messages(messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/messages/utils.py", line 268, in convert_to_messages
return [_convert_to_message(m) for m in messages]
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/messages/utils.py", line 235, in _convert_to_message
_message = _create_message_from_message_type(message_type_str, template)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/JP-Ellis/mwe/.venv/lib/python3.12/site-packages/langchain_core/messages/utils.py", line 204, in _create_message_from_message_type
raise ValueError(
ValueError: Unexpected message type: messages. Use one of 'human', 'user', 'ai', 'assistant', or 'system'.
```
### Description
## Summary
The `merge_message_runs` function is incompatible with messages composed together using the LangChain Expression Language (LCEL).
## Description
The examples and tests for `merge_message_runs` verify that the logic is sound on a sequence of messages:
```python
[
SystemMessage(content=...),
SystemMessage(content=...),
]
```
However, if the messages are instead composed together using the LCEL:
```python
(
SystemMessage(content=...)
+ SystemMessage(content=...)
) # <-- Type ChatPromptValue
```
The internal logic used by `merge_message_runs` of iterating over the messages in the (assumed) sequence fails, resulting in the above error message.
Some monkey-patching on my part would indicate that adjusting the `convert_to_messages` function to handle both `PromptValue` subclasses as well as sequences (or more generally, iterables) works:
```python
def convert_to_messages(
messages: Iterable[MessageLikeRepresentation] | PromptValue,
) -> List[BaseMessage]:
if isinstance(messages, PromptValue):
return [_convert_to_message(m) for m in messages.to_messages()]
return [_convert_to_message(m) for m in messages]
```
I am happy to create a PR for the above if that seems like an appropriate solution.
### System Info
```console
❯ uv pip list
Package Version
------------------ --------
annotated-types 0.7.0
certifi 2024.6.2
charset-normalizer 3.3.2
idna 3.7
jsonpatch 1.33
jsonpointer 3.0.0
langchain-core 0.2.10
langsmith 0.1.82
orjson 3.10.5
packaging 24.1
pydantic 2.7.4
pydantic-core 2.18.4
pyyaml 6.0.1
requests 2.32.3
tenacity 8.4.2
typing-extensions 4.12.2
urllib3 2.2.2
❯ uname -mprs
Darwin 23.5.0 arm64 arm
❯ python --version
Python 3.12.4
❯ which python
/Users/JP-Ellis/mwe/.venv/bin/python
``` | `merge_message_runs` incompatible with LCEL | https://api.github.com/repos/langchain-ai/langchain/issues/23706/comments | 0 | 2024-07-01T09:27:34Z | 2024-07-15T15:58:07Z | https://github.com/langchain-ai/langchain/issues/23706 | 2,383,322,744 | 23,706 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
This is the minimal code to reproduce the error:
```python
import dotenv
import os
from langchain_community.agent_toolkits import create_sql_agent
from langchain_community.utilities import SQLDatabase
from langchain_mistralai import ChatMistralAI
# Get api key from .env file
dotenv.load_dotenv(".dev.env")
api_key = str(os.getenv("MISTRAL_API_KEY"))
# Create langchain database object
db = SQLDatabase.from_uri("postgresql://root:root@localhost:65432/test")
# Create agent
llm = ChatMistralAI(model_name="mistral-small-latest", api_key=api_key)
agent_executor = create_sql_agent(llm, db=db, agent_type="tool-calling", verbose=True)
agent_executor.invoke("Do any correct query")
```
This is the payload of the call to the mistral API route `/chat/completions`:
```json
{
"messages": [
{
"role": "system",
"content": "You are an agent designed to interact with a SQL database.\\nGiven an input question, create a syntactically correct postgresql query to run, then look at the results of the query and return the answer.\\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most 10 results.\\nYou can order the results by a relevant column to return the most interesting examples in the database.\\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\\nYou have access to tools for interacting with the database.\\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\\n\\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\\n\\nIf the question does not seem related to the database, just return \"I don\"t know\" as the answer.\\n"
},
{
"role": "user",
"content": "Do any correct query"
},
{
"role": "assistant",
"content": "I should look at the tables in the database to see what I can query. Then I should query the schema of the most relevant tables."
}
],
"model": "mistral-small-latest",
"tools": [
{
"type": "function",
"function": {
"name": "sql_db_query",
"description": "Input to this tool is a detailed and correct SQL query, output is a result from the database. If the query is not correct, an error message will be returned. If an error is returned, rewrite the query, check the query, and try again. If you encounter an issue with Unknown column "xxxx" in "field list", use sql_db_schema to query the correct table fields.",
"parameters": {
"type": "object",
"properties": {
"query": {
"description": "A detailed and correct SQL query.",
"type": "string"
}
},
"required": [
"query"
]
}
}
},
{
"type": "function",
"function": {
"name": "sql_db_schema",
"description": "Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables. Be sure that the tables actually exist by calling sql_db_list_tables first! Example Input: table1, table2, table3",
"parameters": {
"type": "object",
"properties": {
"table_names": {
"description": "A comma-separated list of the table names for which to return the schema. Example input: \"table1, table2, table3\"",
"type": "string"
}
},
"required": [
"table_names"
]
}
}
},
{
"type": "function",
"function": {
"name": "sql_db_list_tables",
"description": "Input is an empty string, output is a comma-separated list of tables in the database.",
"parameters": {
"type": "object",
"properties": {
"tool_input": {
"description": "An empty string",
"default": "",
"type": "string"
}
}
}
}
},
{
"type": "function",
"function": {
"name": "sql_db_query_checker",
"description": "Use this tool to double check if your query is correct before executing it. Always use this tool before executing a query with sql_db_query!",
"parameters": {
"type": "object",
"properties": {
"query": {
"description": "A detailed and SQL query to be checked.",
"type": "string"
}
},
"required": [
"query"
]
}
}
}
],
"stream": true
}
```
### Error Message and Stack Trace (if applicable)
```
python test.py
> Entering new SQL Agent Executor chain...
Traceback (most recent call last):
.../test.py", line 18, in <module>
agent_executor.invoke("Do any correct query")
.../lib/python3.11/site-packages/langchain/chains/base.py", line 166, in invoke
raise e
.../lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
.../lib/python3.11/site-packages/langchain/agents/agent.py", line 1433, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
.../lib/python3.11/site-packages/langchain/agents/agent.py", line 1139, in _take_next_step
[
.../lib/python3.11/site-packages/langchain/agents/agent.py", line 1139, in <listcomp>
[
.../lib/python3.11/site-packages/langchain/agents/agent.py", line 1167, in _iter_next_step
output = self.agent.plan(
^^^^^^^^^^^^^^^^
.../lib/python3.11/site-packages/langchain/agents/agent.py", line 515, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
.../lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2882, in stream
yield from self.transform(iter([input]), config, **kwargs)
.../lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2869, in transform
yield from self._transform_stream_with_config(
.../lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1867, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
.../lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2831, in _transform
for output in final_pipeline:
.../lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1163, in transform
for ichunk in input:
.../lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4784, in transform
yield from self.bound.transform(
.../lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1181, in transform
yield from self.stream(final, config, **kwargs)
.../lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 265, in stream
raise e
.../lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 245, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
.../lib/python3.11/site-packages/langchain_mistralai/chat_models.py", line 523, in _stream
for chunk in self.completion_with_retry(
.../lib/python3.11/site-packages/langchain_mistralai/chat_models.py", line 391, in iter_sse
_raise_on_error(event_source.response)
.../lib/python3.11/site-packages/langchain_mistralai/chat_models.py", line 131, in _raise_on_error
raise httpx.HTTPStatusError(
httpx.HTTPStatusError: Error response 400 while fetching https://api.mistral.ai/v1/chat/completions: {"object":"error","message":"Expected last role User or Tool (or Assistant with prefix True) for serving but got assistant","type":"invalid_request_error","param":null,"code":null}
```
### Description
I'm following [this guide](https://python.langchain.com/v0.1/docs/use_cases/sql/agents/#setup) to implement a SQL agent with the `langchain_community.agent_toolkits.create_sql_agent` function but instead of using OpenAI I want to use the Mistral API. When I try to implement this agent with mistral I get the error you can see above. The mistral chat completion api doesn't expect the last message of the chat to be an assistant message unless the prefix feature is enabled. I don't know what is the expected behavior of this agent so I can't tell if it's an agent issue, or a mistral client issue.
### System Info
```bash
# python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.11.6 (main, Nov 22 2023, 18:29:18) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.2.7
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.77
> langchain_experimental: 0.0.60
> langchain_mistralai: 0.1.8
> langchain_text_splitters: 0.2.1
> langserve: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
``` | create_sql_agent with ChatMistralAI causes this error:"Expected last role User or Tool (or Assistant with prefix True) for serving but got assistant" | https://api.github.com/repos/langchain-ai/langchain/issues/23703/comments | 0 | 2024-07-01T08:53:21Z | 2024-07-01T08:55:57Z | https://github.com/langchain-ai/langchain/issues/23703 | 2,383,245,268 | 23,703 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
llm = ChatOpenAI(model_name="gpt-4-0314", streaming=True)
messages = [
SystemMessage(content="You're a helpful assistant"),
HumanMessage(content="What is the purpose of model regularization?"),
]
llm(messages)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
BaseChatModel.generate supports caching, but `.stream` method doesn't ([source](https://github.com/langchain-ai/langchain/blob/9604cb833b9cb9d04a0eb60754e68402ab2d4b3c/libs/core/langchain_core/language_models/chat_models.py#L281)).
This creates the need for workarounds like https://github.com/langchain-ai/langchain/issues/20782.
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
> Python Version: 3.11.7 (main, Jan 16 2024, 12:02:24) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.2.7
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.77
> langchain_anthropic: 0.1.13
> langchain_aws: 0.1.6
> langchain_openai: 0.1.7
> langchain_text_splitters: 0.2.0
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | BaseChatModel.stream method not supporting caching | https://api.github.com/repos/langchain-ai/langchain/issues/23701/comments | 0 | 2024-07-01T08:47:21Z | 2024-07-01T08:49:55Z | https://github.com/langchain-ai/langchain/issues/23701 | 2,383,232,429 | 23,701 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/chatbot/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
In the section about message history, there's a sentence:
> This `session_id` is used to distinguish between separate conversations, and should be passed in as part of the config when calling the new chain (we'll show how to do that.
### Idea or request for content:
There's a missing parenthesis at the end of the sentence which should be added. | DOC: missing parenthesis | https://api.github.com/repos/langchain-ai/langchain/issues/23687/comments | 0 | 2024-06-30T13:55:07Z | 2024-06-30T13:57:34Z | https://github.com/langchain-ai/langchain/issues/23687 | 2,382,267,174 | 23,687 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
% python3 --version
Python 3.11.9
% pip install 'langchain[all]'
```
### Error Message and Stack Trace (if applicable)
ERROR: Could not build wheels for greenlet, which is required to install pyproject.toml-based projects
### Description
ERROR: Could not build wheels for greenlet, which is required to install pyproject.toml-based projects
### System Info
% python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
> Python Version: 3.11.9 (main, Apr 19 2024, 11:44:45) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.2.10
> langsmith: 0.1.82
> langchain_groq: 0.1.6
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| ERROR: Could not build wheels for greenlet, which is required to install pyproject.toml-based projects | https://api.github.com/repos/langchain-ai/langchain/issues/23682/comments | 1 | 2024-06-30T08:34:16Z | 2024-06-30T08:39:22Z | https://github.com/langchain-ai/langchain/issues/23682 | 2,382,149,439 | 23,682 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.utilities import SQLDatabase
host = config["DEFAULT"]["DB_HOST"]
port = config["DEFAULT"]["DB_PORT"]
db_name = config["DEFAULT"]["DB_NAME"]
username = config["DEFAULT"]["DB_USERNAME"]
password = os.getenv(config["DEFAULT"]["DB_PASSWORD"])
url = f"postgresql+psycopg2://{username}:{password}@{host}:{port}/{db_name}"
include_tables = [
"schema",
"schema_field"
]
db = SQLDatabase.from_uri(url, include_tables=include_tables)
```
### Error Message and Stack Trace (if applicable)
```console
{
"name": "ValueError",
"message": "include_tables {'schema_field', 'schema'} not found in database",
"stack": "---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[15], line 16
9 url = f\"postgresql+psycopg2://{username}:{password}@{host}:{port}/{db_name}\"
11 include_tables = [
12 \"schema\",
13 \"schema_field\"
14 ]
---> 16 db = SQLDatabase.from_uri(url, include_tables=include_tables)
File ~/mambaforge/envs/langchain/lib/python3.12/site-packages/langchain_community/utilities/sql_database.py:135, in SQLDatabase.from_uri(cls, database_uri, engine_args, **kwargs)
133 \"\"\"Construct a SQLAlchemy engine from URI.\"\"\"
134 _engine_args = engine_args or {}
--> 135 return cls(create_engine(database_uri, **_engine_args), **kwargs)
File ~/mambaforge/envs/langchain/lib/python3.12/site-packages/langchain_community/utilities/sql_database.py:82, in SQLDatabase.__init__(self, engine, schema, metadata, ignore_tables, include_tables, sample_rows_in_table_info, indexes_in_table_info, custom_table_info, view_support, max_string_length, lazy_table_reflection)
80 missing_tables = self._include_tables - self._all_tables
81 if missing_tables:
---> 82 raise ValueError(
83 f\"include_tables {missing_tables} not found in database\"
84 )
85 self._ignore_tables = set(ignore_tables) if ignore_tables else set()
86 if self._ignore_tables:
ValueError: include_tables {'schema_field', 'schema'} not found in database"
}
```
### Description
Using:
```python
include_tables = [
"schema$raw",
"schema_field$raw"
]
```
instead of the fields without the `$raw` suffix works.
In this postgresql database, there are `$raw` version of each schema (e.g., `schema$raw` and `schema`). The `$raw` version includes all records, while the non-raw version contains the "cleaned up" tables.
It appears that `langchain_community.utilities.SQLDatabase` is not properly handling this situation. It only seems to detect the `$raw` schemas.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:49 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6020
> Python Version: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:35:20) [Clang 16.0.6 ]
Package Information
-------------------
> langchain_core: 0.2.1
> langchain: 0.2.1
> langchain_community: 0.0.38
> langsmith: 0.1.77
> langchain_groq: 0.1.5
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.0
> langgraph: 0.0.69
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | langchain_community.utilities.SQLDatabase: include_tables {...} not found in database | https://api.github.com/repos/langchain-ai/langchain/issues/23672/comments | 3 | 2024-06-29T20:58:02Z | 2024-07-08T17:05:58Z | https://github.com/langchain-ai/langchain/issues/23672 | 2,381,956,295 | 23,672 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.vectorstores import Chroma
results in
TypeError: typing.ClassVar[typing.Collection[str]] is not valid as type argument
### Error Message and Stack Trace (if applicable)
--------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[3], line 6
4 import matplotlib.pyplot as plt
5 from langchain_community.embeddings import HuggingFaceEmbeddings
----> 6 from langchain_community.vectorstores import Chroma
7 from tqdm import tqdm
8 from datasets import load_dataset
File <frozen importlib._bootstrap>:1075, in _handle_fromlist(module, fromlist, import_, recursive)
File /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/langchain_community/vectorstores/__init__.py:514, in __getattr__(name)
512 def __getattr__(name: str) -> Any:
513 if name in _module_lookup:
--> 514 module = importlib.import_module(_module_lookup[name])
515 return getattr(module, name)
516 raise AttributeError(f"module {__name__} has no attribute {name}")
File /home/user/miniconda3/envs/textgen/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
124 break
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:23
21 from langchain_core.embeddings import Embeddings
22 from langchain_core.utils import xor_args
---> 23 from langchain_core.vectorstores import VectorStore
25 from langchain_community.vectorstores.utils import maximal_marginal_relevance
27 if TYPE_CHECKING:
File /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/langchain_core/vectorstores.py:755
751 tags = kwargs.pop("tags", None) or [] + self._get_retriever_tags()
752 return VectorStoreRetriever(vectorstore=self, tags=tags, **kwargs)
--> 755 class VectorStoreRetriever(BaseRetriever):
756 """Base Retriever class for VectorStore."""
758 vectorstore: VectorStore
File /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/pydantic/main.py:282, in ModelMetaclass.__new__(mcs, name, bases, namespace, **kwargs)
279 return isinstance(v, untouched_types) or v.__class__.__name__ == 'cython_function_or_method'
281 if (namespace.get('__module__'), namespace.get('__qualname__')) != ('pydantic.main', 'BaseModel'):
--> 282 annotations = resolve_annotations(namespace.get('__annotations__', {}), namespace.get('__module__', None))
283 # annotation only fields need to come first in fields
284 for ann_name, ann_type in annotations.items():
File /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/pydantic/typing.py:287, in resolve_annotations(raw_annotations, module_name)
285 value = ForwardRef(value)
286 try:
--> 287 value = _eval_type(value, base_globals, None)
288 except NameError:
289 # this is ok, it can be fixed with update_forward_refs
290 pass
File /home/user/miniconda3/envs/textgen/lib/python3.10/typing.py:327, in _eval_type(t, globalns, localns, recursive_guard)
321 """Evaluate all forward references in the given type t.
322 For use of globalns and localns see the docstring for get_type_hints().
323 recursive_guard is used to prevent infinite recursion with a recursive
324 ForwardRef.
325 """
326 if isinstance(t, ForwardRef):
--> 327 return t._evaluate(globalns, localns, recursive_guard)
328 if isinstance(t, (_GenericAlias, GenericAlias, types.UnionType)):
329 ev_args = tuple(_eval_type(a, globalns, localns, recursive_guard) for a in t.__args__)
File /home/user/miniconda3/envs/textgen/lib/python3.10/typing.py:693, in ForwardRef._evaluate(self, globalns, localns, recursive_guard)
689 if self.__forward_module__ is not None:
690 globalns = getattr(
691 sys.modules.get(self.__forward_module__, None), '__dict__', globalns
692 )
--> 693 type_ = _type_check(
694 eval(self.__forward_code__, globalns, localns),
695 "Forward references must evaluate to types.",
696 is_argument=self.__forward_is_argument__,
697 allow_special_forms=self.__forward_is_class__,
698 )
699 self.__forward_value__ = _eval_type(
700 type_, globalns, localns, recursive_guard | {self.__forward_arg__}
701 )
702 self.__forward_evaluated__ = True
File /home/user/miniconda3/envs/textgen/lib/python3.10/typing.py:167, in _type_check(arg, msg, is_argument, module, allow_special_forms)
164 arg = _type_convert(arg, module=module, allow_special_forms=allow_special_forms)
165 if (isinstance(arg, _GenericAlias) and
166 arg.__origin__ in invalid_generic_forms):
--> 167 raise TypeError(f"{arg} is not valid as type argument")
168 if arg in (Any, NoReturn, Final, TypeAlias):
169 return arg
### Description
after installing langchain_community with -U still getting the error
### System Info
python 3.10
oracle linux 8 | TypeError: typing.ClassVar[typing.Collection[str]] is not valid as type argument Selection deleted | https://api.github.com/repos/langchain-ai/langchain/issues/23664/comments | 1 | 2024-06-29T16:00:09Z | 2024-06-29T16:08:15Z | https://github.com/langchain-ai/langchain/issues/23664 | 2,381,827,643 | 23,664 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
loader = TextLoader(file_path)
# loader = Docx2txtLoader(file_path)
documents = loader.load() # + docx_documents
print("texts doc: =============================")
print(type(documents))
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=800, chunk_overlap=200)
# text_splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=24)
texts = text_splitter.split_documents(documents)
raph = Neo4jGraph()
llm_transformer = LLMGraphTransformer(llm=model)
print("===================load llm_transformer!=========================")
graph_documents = llm_transformer.convert_to_graph_documents(texts)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Traceback (most recent call last):
File "/work/baichuan/script/langchain/graphRag.py", line 225, in <module>
graph_documents = llm_transformer.convert_to_graph_documents(texts)
File "/root/miniconda3/envs/rag/lib/python3.10/site-packages/langchain_experimental/graph_transformers/llm.py", line 762, in convert_to_graph_documents
return [self.process_response(document) for document in documents]
File "/root/miniconda3/envs/rag/lib/python3.10/site-packages/langchain_experimental/graph_transformers/llm.py", line 762, in <listcomp>
return [self.process_response(document) for document in documents]
File "/root/miniconda3/envs/rag/lib/python3.10/site-packages/langchain_experimental/graph_transformers/llm.py", line 714, in process_response
nodes_set.add((rel["head"], rel["head_type"]))
TypeError: list indices must be integers or slices, not str
### System Info
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
absl-py 2.1.0 pypi_0 pypi
accelerate 0.21.0 pypi_0 pypi
addict 2.4.0 pypi_0 pypi
aiofiles 23.2.1 pypi_0 pypi
aiohttp 3.9.5 py310h2372a71_0 conda-forge
aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge
altair 5.3.0 pypi_0 pypi
annotated-types 0.7.0 pyhd8ed1ab_0 conda-forge
anyio 4.3.0 pyhd8ed1ab_0 conda-forge
astunparse 1.6.2 pypi_0 pypi
async-timeout 4.0.3 pyhd8ed1ab_0 conda-forge
attrs 23.2.0 pyh71513ae_0 conda-forge
backoff 2.2.1 pypi_0 pypi
beautifulsoup4 4.12.3 pypi_0 pypi
bitsandbytes 0.41.0 pypi_0 pypi
blas 1.0 mkl anaconda
blinker 1.8.2 pypi_0 pypi
brotli-python 1.0.9 py310hd8f1fbe_7 conda-forge
bzip2 1.0.8 h5eee18b_6
ca-certificates 2024.3.11 h06a4308_0
certifi 2024.2.2 py310h06a4308_0
chardet 5.2.0 pypi_0 pypi
charset-normalizer 3.3.2 pyhd8ed1ab_0 conda-forge
click 8.1.7 pypi_0 pypi
cmake 3.29.3 pypi_0 pypi
contourpy 1.2.1 pypi_0 pypi
cudatoolkit 11.4.2 h7a5bcfd_10 conda-forge
cycler 0.12.1 pypi_0 pypi
dataclasses-json 0.6.6 pyhd8ed1ab_0 conda-forge
datasets 2.14.7 pypi_0 pypi
deepdiff 7.0.1 pypi_0 pypi
deepspeed 0.9.5 pypi_0 pypi
dill 0.3.7 pypi_0 pypi
dnspython 2.6.1 pypi_0 pypi
docstring-parser 0.16 pypi_0 pypi
docx2txt 0.8 pypi_0 pypi
einops 0.8.0 pypi_0 pypi
email-validator 2.1.1 pypi_0 pypi
emoji 2.12.1 pypi_0 pypi
exceptiongroup 1.2.1 pypi_0 pypi
faiss 1.7.3 py310cuda112hae2f2aa_0_cuda conda-forge
faiss-gpu 1.7.3 h5b0ac8e_0 conda-forge
fastapi 0.111.0 pypi_0 pypi
fastapi-cli 0.0.4 pypi_0 pypi
ffmpy 0.3.2 pypi_0 pypi
filelock 3.14.0 pypi_0 pypi
filetype 1.2.0 pypi_0 pypi
flask 3.0.3 pypi_0 pypi
flask-cors 4.0.1 pypi_0 pypi
fonttools 4.52.1 pypi_0 pypi
frozenlist 1.4.1 py310h2372a71_0 conda-forge
fsspec 2023.10.0 pypi_0 pypi
gradio-client 0.17.0 pypi_0 pypi
greenlet 1.1.2 py310hd8f1fbe_2 conda-forge
grpcio 1.64.0 pypi_0 pypi
h11 0.14.0 pypi_0 pypi
hjson 3.1.0 pypi_0 pypi
httpcore 1.0.5 pypi_0 pypi
httptools 0.6.1 pypi_0 pypi
httpx 0.27.0 pypi_0 pypi
huggingface-hub 0.17.3 pypi_0 pypi
idna 3.7 pyhd8ed1ab_0 conda-forge
importlib-metadata 7.1.0 pypi_0 pypi
importlib-resources 6.4.0 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561 anaconda
itsdangerous 2.2.0 pypi_0 pypi
jinja2 3.1.4 pypi_0 pypi
joblib 1.2.0 py310h06a4308_0 anaconda
json-repair 0.25.2 pypi_0 pypi
jsonpatch 1.33 pyhd8ed1ab_0 conda-forge
jsonpath-python 1.0.6 pypi_0 pypi
jsonpointer 2.4 py310hff52083_3 conda-forge
jsonschema 4.22.0 pypi_0 pypi
jsonschema-specifications 2023.12.1 pypi_0 pypi
kiwisolver 1.4.5 pypi_0 pypi
langchain 0.2.6 pypi_0 pypi
langchain-community 0.2.6 pypi_0 pypi
langchain-core 0.2.10 pypi_0 pypi
langchain-experimental 0.0.62 pypi_0 pypi
langchain-text-splitters 0.2.2 pypi_0 pypi
langdetect 1.0.9 pypi_0 pypi
langsmith 0.1.82 pypi_0 pypi
ld_impl_linux-64 2.38 h1181459_1
libblas 3.9.0 12_linux64_mkl conda-forge
libfaiss 1.7.3 cuda112hb18a002_0_cuda conda-forge
libfaiss-avx2 1.7.3 cuda112h1234567_0_cuda conda-forge
libffi 3.4.4 h6a678d5_1
libgcc-ng 13.2.0 h77fa898_7 conda-forge
libgfortran-ng 7.5.0 ha8ba4b0_17
libgfortran4 7.5.0 ha8ba4b0_17
libgomp 13.2.0 h77fa898_7 conda-forge
liblapack 3.9.0 12_linux64_mkl conda-forge
libstdcxx-ng 13.2.0 hc0a3c3a_7 conda-forge
libuuid 1.41.5 h5eee18b_0
lit 18.1.6 pypi_0 pypi
loguru 0.7.0 pypi_0 pypi
lxml 5.2.2 pypi_0 pypi
markdown 3.6 pypi_0 pypi
markdown-it-py 3.0.0 pypi_0 pypi
markupsafe 2.1.5 pypi_0 pypi
marshmallow 3.21.2 pyhd8ed1ab_0 conda-forge
matplotlib 3.8.4 pypi_0 pypi
mdurl 0.1.2 pypi_0 pypi
mkl 2021.4.0 h06a4308_640 anaconda
mkl-service 2.4.0 py310h7f8727e_0 anaconda
mkl_fft 1.3.1 py310hd6ae3a3_0 anaconda
mkl_random 1.2.2 py310h00e6091_0 anaconda
mmengine 0.10.4 pypi_0 pypi
mpi 1.0 mpich
mpi4py 3.1.4 py310hfc96bbd_0
mpich 3.3.2 hc856adb_0
mpmath 1.3.0 pypi_0 pypi
multidict 6.0.5 py310h2372a71_0 conda-forge
multiprocess 0.70.15 pypi_0 pypi
mypy_extensions 1.0.0 pyha770c72_0 conda-forge
ncurses 6.4 h6a678d5_0
neo4j 5.22.0 pypi_0 pypi
networkx 3.3 pypi_0 pypi
ninja 1.11.1.1 pypi_0 pypi
nltk 3.8.1 pypi_0 pypi
numpy 1.21.4 pypi_0 pypi
numpy-base 1.24.3 py310h8e6c178_0 anaconda
nvidia-cublas-cu11 11.10.3.66 pypi_0 pypi
nvidia-cuda-cupti-cu11 11.7.101 pypi_0 pypi
nvidia-cuda-nvrtc-cu11 11.7.99 pypi_0 pypi
nvidia-cuda-runtime-cu11 11.7.99 pypi_0 pypi
nvidia-cudnn-cu11 8.5.0.96 pypi_0 pypi
nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
nvidia-curand-cu11 10.2.10.91 pypi_0 pypi
nvidia-cusolver-cu11 11.4.0.1 pypi_0 pypi
nvidia-cusparse-cu11 11.7.4.91 pypi_0 pypi
nvidia-nccl-cu11 2.14.3 pypi_0 pypi
nvidia-nvtx-cu11 11.7.91 pypi_0 pypi
opencv-python 4.9.0.80 pypi_0 pypi
openssl 3.3.0 h4ab18f5_3 conda-forge
ordered-set 4.1.0 pypi_0 pypi
orjson 3.10.3 py310he421c4c_0 conda-forge
packaging 24.0 pypi_0 pypi
pandas 1.2.5 pypi_0 pypi
peft 0.4.0 pypi_0 pypi
pillow 10.3.0 pypi_0 pypi
pip 24.0 py310h06a4308_0
platformdirs 4.2.2 pypi_0 pypi
protobuf 5.27.0 pypi_0 pypi
psutil 5.9.8 pypi_0 pypi
py-cpuinfo 9.0.0 pypi_0 pypi
pyarrow 16.1.0 pypi_0 pypi
pyarrow-hotfix 0.6 pypi_0 pypi
pydantic 2.7.3 pypi_0 pypi
pydantic-core 2.18.4 pypi_0 pypi
pydub 0.25.1 pypi_0 pypi
pygments 2.18.0 pypi_0 pypi
pyparsing 3.1.2 pypi_0 pypi
pypdf 4.2.0 pypi_0 pypi
pyre-extensions 0.0.29 pypi_0 pypi
pysocks 1.7.1 pyha2e5f31_6 conda-forge
python 3.10.14 h955ad1f_1
python-dateutil 2.9.0.post0 pypi_0 pypi
python-dotenv 1.0.1 pypi_0 pypi
python-iso639 2024.4.27 pypi_0 pypi
python-magic 0.4.27 pypi_0 pypi
python-multipart 0.0.9 pypi_0 pypi
python_abi 3.10 2_cp310 conda-forge
pytz 2024.1 pypi_0 pypi
pyyaml 6.0.1 py310h2372a71_1 conda-forge
rapidfuzz 3.9.3 pypi_0 pypi
readline 8.2 h5eee18b_0
referencing 0.35.1 pypi_0 pypi
regex 2024.5.15 pypi_0 pypi
requests 2.32.2 pyhd8ed1ab_0 conda-forge
rich 13.7.1 pypi_0 pypi
rpds-py 0.18.1 pypi_0 pypi
ruff 0.4.7 pypi_0 pypi
safetensors 0.4.3 pypi_0 pypi
scikit-learn 1.3.0 py310h1128e8f_0 anaconda
scipy 1.10.1 pypi_0 pypi
semantic-version 2.10.0 pypi_0 pypi
sentence-transformers 2.7.0 pypi_0 pypi
sentencepiece 0.2.0 pypi_0 pypi
setuptools 70.0.0 pypi_0 pypi
shellingham 1.5.4 pypi_0 pypi
shtab 1.7.1 pypi_0 pypi
six 1.16.0 pyhd3eb1b0_1 anaconda
sniffio 1.3.1 pyhd8ed1ab_0 conda-forge
soupsieve 2.5 pypi_0 pypi
sqlalchemy 2.0.30 py310hc51659f_0 conda-forge
sqlite 3.45.3 h5eee18b_0
starlette 0.37.2 pypi_0 pypi
sympy 1.12 pypi_0 pypi
tabulate 0.9.0 pypi_0 pypi
tenacity 8.3.0 pyhd8ed1ab_0 conda-forge
tensorboard 2.16.2 pypi_0 pypi
tensorboard-data-server 0.7.2 pypi_0 pypi
termcolor 2.4.0 pypi_0 pypi
threadpoolctl 2.2.0 pyh0d69192_0 anaconda
tiktoken 0.7.0 pypi_0 pypi
tk 8.6.14 h39e8969_0
tokenizers 0.14.1 pypi_0 pypi
tomli 2.0.1 pypi_0 pypi
tomlkit 0.12.0 pypi_0 pypi
toolz 0.12.1 pypi_0 pypi
torch 2.0.0 pypi_0 pypi
tqdm 4.62.3 pypi_0 pypi
transformers 4.34.0 pypi_0 pypi
transformers-stream-generator 0.0.5 pypi_0 pypi
triton 2.0.0 pypi_0 pypi
trl 0.7.11 pypi_0 pypi
typer 0.12.3 pypi_0 pypi
typing-extensions 4.9.0 pypi_0 pypi
typing_inspect 0.9.0 pyhd8ed1ab_0 conda-forge
tyro 0.8.4 pypi_0 pypi
tzdata 2024a h04d1e81_0
ujson 5.10.0 pypi_0 pypi
unstructured 0.14.4 pypi_0 pypi
unstructured-client 0.22.0 pypi_0 pypi
urllib3 2.2.1 pyhd8ed1ab_0 conda-forge
uvicorn 0.30.1 pypi_0 pypi
uvloop 0.19.0 pypi_0 pypi
watchfiles 0.22.0 pypi_0 pypi
websockets 11.0.3 pypi_0 pypi
werkzeug 3.0.3 pypi_0 pypi
wheel 0.43.0 py310h06a4308_0
wikipedia 1.4.0 pypi_0 pypi
wrapt 1.16.0 pypi_0 pypi
xformers 0.0.19 pypi_0 pypi
xxhash 3.4.1 pypi_0 pypi
xz 5.4.6 h5eee18b_1
yaml 0.2.5 h7f98852_2 conda-forge
yapf 0.40.2 pypi_0 pypi
yarl 1.9.4 py310h2372a71_0 conda-forge
zipp 3.18.2 pypi_0 pypi
zlib 1.2.13 h5eee18b_1
| llm_transformer.convert_to_graph_documents TypeError: list indices must be integers or slices, not str | https://api.github.com/repos/langchain-ai/langchain/issues/23661/comments | 17 | 2024-06-29T08:45:40Z | 2024-07-23T16:08:21Z | https://github.com/langchain-ai/langchain/issues/23661 | 2,381,588,395 | 23,661 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/chat/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
ChatHuggingFace does not support structured output, and raises a `NotImplementedError`
### Idea or request for content:
_No response_ | DOC: ChatHuggingFace incorrectly marked as supporting structured output | https://api.github.com/repos/langchain-ai/langchain/issues/23660/comments | 8 | 2024-06-29T05:30:34Z | 2024-07-05T23:08:34Z | https://github.com/langchain-ai/langchain/issues/23660 | 2,381,507,664 | 23,660 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
We observe performance differences between `model.bind_tools(tools, tool_choice="any")` and `model.bind_tools(tools, tool_choice=tool_name)` when `len(tools) == 1`.
Implementations would need to be provider specific. | Update with_structured_output to explicitly pass tool name | https://api.github.com/repos/langchain-ai/langchain/issues/23644/comments | 0 | 2024-06-28T21:00:18Z | 2024-07-16T15:32:51Z | https://github.com/langchain-ai/langchain/issues/23644 | 2,381,192,142 | 23,644 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
%pip install langchain==0.2.6
%pip install pymilvus==2.4.4
%pip install langchain_milvus==0.1.1
from langchain_milvus.vectorstores import Milvus
b2 = 10000
b1 = 4
loops, remainder = divmod(len(docs), b2)
while loops >=0:
print(b1 ,b2)
_docs = docs[b1:b2]
db = Milvus.from_documents(documents=_docs,embedding= embed_model, collection_name ='gene', connection_args={'db_name':'<db>','user':dbutils.secrets.get(scope = "milvus" , key = 'MILVUS_USER'), 'password':dbutils.secrets.get(scope = "milvus" , key = 'MILVUS_PASSWORD'), 'host': dbutils.secrets.get(scope = "milvus" , key = 'MILVUS_HOST'), 'port': dbutils.secrets.get(scope = "milvus" , key = 'MILVUS_PORT')}
)
loops -= 1
b1 = b2+1
b2 += 10000
print('done')
db.similarity_search(<query>) #This Works
#############################Now Establishing a Connection First before testing out Similarity Search###################
##Loading the Collection we created earlier, using 'from_documents'
db = Milvus(collection_name= 'gene', embedding_function=embed_model, connection_args={'db_name': '<db>','user':dbutils.secrets.get(scope = "milvus" , key = 'MILVUS_USER'), 'password':dbutils.secrets.get(scope = "milvus" , key = 'MILVUS_PASSWORD'), 'host': dbutils.secrets.get(scope = "milvus" , key = 'MILVUS_HOST'), 'port': dbutils.secrets.get(scope = "milvus" , key = 'MILVUS_PORT')})
db.similarity_search(<query>) ##This Does not work and returns an empty list.
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to create a collection on Milvus (hosted) using langchain. Then I want to load the collection back again and do a similairty search.
- I created the collection and loaded the documents successfully using Milvus.from_documents object.
- I then ran a similarity search on the object using db.similarity search. It worked fine and gave me accurate results.
- I then tried to establish a connection and load the collection back again, by initiating the Milvus class object i.e. db = Milvus(collection_name = , connection_args = {}) It worked without any errors.
- But when I tried to run a similarity search on the object (db.similarity_search), It just returned an empty string.
Note:
- The collection exists in Milvus.
- Im passing 'db_name' as a connection argument because I only have access to that particualt db withing Milvus.
### System Info
%pip install langchain==0.2.6
%pip install pymilvus==2.4.4
%pip install langchain_milvus==0.1.1
Databricks Runtime 14 | Similarity Search Returns Empty when Using Milvus | https://api.github.com/repos/langchain-ai/langchain/issues/23634/comments | 0 | 2024-06-28T14:19:43Z | 2024-06-28T14:22:22Z | https://github.com/langchain-ai/langchain/issues/23634 | 2,380,553,191 | 23,634 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
We recently upgraded our libaries as follows:
```
-langchain = "0.2.0rc2"
+langchain = "^0.2.5"
-langchain-community = "0.2.0rc1"
+langchain-community = "^0.2.5"
-langchain-anthropic = "^0.1.9"
+langchain-anthropic = "^0.1.15"
-langchain-groq = "0.1.3"
+langchain-groq = "^0.1.5"
-langchain-core = "^0.1.52"
+langchain-core = "^0.2.9"
-langgraph = "^0.0.38"
+langgraph = "^0.0.69"
```
And have lost the ability to view the Tavily Search tool call's output in our callback handler.
When we revert packages, the tool output appears again. Our custom callback handler is able to output the Tool call output with every other tool in our stack after the update.
We implement the tavily tool like so:
```python
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
def get_tavily_search_tool(callbacks=[]):
search = TavilySearchResults(
name="search",
api_wrapper=TavilySearchAPIWrapper(tavily_api_key="tvly-"),
callbacks=callbacks,
max_results=5
)
return search
```
And our callback handler looks something like this:
```python
async def on_tool_start(
self,
serialized: Dict[str, Any],
input_str: str,
**kwargs: Any,
) -> None:
tool_spec = flatten_dict(serialized)
tool_name = tool_spec.get("name", "Unknown Tool")
tool_description = tool_spec.get("description", "No description available")
self.queue.put_nowait(f"\n\nUsing `{tool_name}` (*{tool_description}*)\n")
async def on_tool_end(
self,
output: str,
color: Optional[str] = None,
observation_prefix: Optional[str] = None,
llm_prefix: Optional[str] = None,
**kwargs: Any,
) -> None:
"""If not the final action, print out observation."""
if observation_prefix is not None:
self.queue.put_nowait(f"\n{observation_prefix}\n")
if len(output) > 10000:
# truncate output to 10.000 characters
output = output[:10000]
output += " ... (truncated to 10,000 characters)"
self.queue.put_nowait(f"\n```json\n{output}\n```\n\n")
else:
if isinstance(output, dict):
pretty_output = json.dumps(output, indent=4)
self.queue.put_nowait(f"\n```json\n{pretty_output}\n```\n\n")
elif isinstance(output, str):
# attempt to parse the output as json
try:
pretty_output = json.dumps(ast.literal_eval(output), indent=4)
self.queue.put_nowait(f"\n```json\n{pretty_output}\n```\n\n")
except:
pretty_output = output
self.queue.put_nowait(f"\n```\n{pretty_output}\n```\n\n")
if llm_prefix is not None:
self.queue.put_nowait(f"\n{llm_prefix}\n")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
We recently upgraded our libaries and have lost the ability to view the Tavily Search tool call's output in our callback handler.
When we revert packages, the tool output appears again. Our custom callback handler is able to output the Tool call output with every other tool in our stack after the update.
### System Info
langchain==0.1.16
langchain-anthropic==0.1.13
langchain-community==0.0.34
langchain-core==0.1.52
langchain-google-genai==1.0.3
langchain-mistralai==0.1.8
langchain-openai==0.1.6
langchain-text-splitters==0.0.1 | [Community] Tavily Search lost Tool output Callbacks in newest versions | https://api.github.com/repos/langchain-ai/langchain/issues/23632/comments | 2 | 2024-06-28T13:52:27Z | 2024-07-01T21:44:28Z | https://github.com/langchain-ai/langchain/issues/23632 | 2,380,492,978 | 23,632 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
class GradeDocuments(BaseModel):
score: str = Field(
description="Die Frage handelt sich um ein Smalltalk-Thema, 'True' oder 'False'"
)
def question_classifier(state: AgentState):
question = state["question"]
print(f"In question classifier with question: {question}")
system = """<s>[INST] Du bewertest, ob es in sich bei der Frage des Nutzers um ein Smalltalk-Thema handelt oder nicht. \n
Falls es bei der Frage um generelle Smalltalk-Fragen wie zum Beispiel: 'Hallo, wer bist du?' geht, bewerte es als 'True'. \n
Falls es sich bei der Frage um eine spezifische Frage zu einem Thema handelt wie zum Beispiel: 'Nenne mir Vorteile von Multi CLoud' geht, bewerte die Frage mit 'False'.[/INST]"""
grade_prompt = ChatPromptTemplate.from_messages(
[
("system", system),
(
"human",
"Frage des Nutzers: {question}",
),
]
)
#llm = ChatOpenAI() mit ChatOpenAI gehts, mit Chatgroq vorher iwie nicht mehr
env_vars = dotenv_values('.env')
load_dotenv()
groq_key = env_vars.get("GROQ_API_KEY")
print("Loading Structured Groq.")
llm = ChatGroq(model_name="mixtral-8x7b-32768", groq_api_key = groq_key)
structured_llm = llm.with_structured_output(GradeDocuments)
grader_llm = grade_prompt | structured_llm
result = grader_llm.invoke({"question": question})
state["is_smalltalk"] = result.score
return state
```
### Error Message and Stack Trace (if applicable)
When it comes to grade_llm.invoke:
Error code: 400 - {'error': {'message': 'response_format` does not support streaming', 'type': 'invalid_request_error'}}
### Description
Hi,
I want to use a Groq LLM for getting structured output, in my case it should return True or False. The code works fine with using ChatOpenAI() but it fails when using Groq even if it should work with structured output as I have read in the documentation.
I also tried `structured_llm = llm.with_structured_output(GradeDocuments, method="json_mode")` without success.
I also already updated my langchain-groq version.
Does anyone have an idea how to solve this?
EDIT: I also tried with a simple example where it works with ChatOpenAI but not with Groq:
With ChatOpenAI:
```
from langchain_groq import ChatGroq
class GradeDocuments(BaseModel):
"""Boolean values to check for relevance on retrieved documents."""
score: str = Field(
description="Die Frage handelt sich um ein Smalltalk-Thema, 'True' oder 'False'"
)
model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0) # with this it works
#model = ChatGroq(model_name="mixtral-8x7b-32768", groq_api_key = "")
structured_llm = model.with_structured_output(GradeDocuments)
structured_llm.invoke("Hello, how are you?")
# Returns: GradeDocuments(score='False')
```
With ChatGroq the same errror like above. But if I use e.g. the Llama 3 model from Groq it works, so seems like it is an issue of the Mixtral 8x7B model
### System Info
langchain-groq 0.1.5
langchain 0.2.5 | Structured Output with Groq: Error code: 400 - {'error': {'message': 'response_format` does not support streaming', 'type': 'invalid_request_error'}} | https://api.github.com/repos/langchain-ai/langchain/issues/23629/comments | 4 | 2024-06-28T11:52:52Z | 2024-07-15T16:42:34Z | https://github.com/langchain-ai/langchain/issues/23629 | 2,380,257,120 | 23,629 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
### Issue Description
I encountered a problem when using the `Qdrant.from_existing_collection` method in the Langchain Qdrant integration. Here is the code I used:
```python
from langchain_community.vectorstores.qdrant import Qdrant
url = "http://localhost:6333"
collection_name = "unique_case_2020"
qdrant = Qdrant.from_existing_collection(
embedding=embeddings, # Please set according to actual situation
collection_name=collection_name,
url=url
)
```
When I run this code, I get the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[21], line 7
3 url = "http://localhost:6333"
4 collection_name = "unique_case_2020"
----> 7 qdrant = Qdrant.from_existing_collection(
8 embedding=embeddings, # Please set according to actual situation
9 collection_name=collection_name,
10 url=url
11 )
TypeError: Qdrant.from_existing_collection() missing 1 required positional argument: 'path'
```
To resolve this, I added the `path` argument, but encountered another error:
```python
qdrant = Qdrant.from_existing_collection(
embedding=embeddings, # Please set according to actual situation
collection_name=collection_name,
url=url,
path=""
)
```
This raised the following error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[23], line 7
3 url = "http://localhost:6333"
4 collection_name = "unique_case_2020"
----> 7 qdrant = Qdrant.from_existing_collection(
8 embedding=embeddings, # Please set according to actual situation
9 collection_name=collection_name,
10 url=url,
11 path=""
12 )
File ~/.local/lib/python3.10/site-packages/langchain_community/vectorstores/qdrant.py:1397, in Qdrant.from_existing_collection(cls, embedding, path, collection_name, location, url, port, grpc_port, prefer_grpc, https, api_key, prefix, timeout, host, **kwargs)
1374 @classmethod
1375 def from_existing_collection(
1376 cls: Type[Qdrant],
(...)
1390 **kwargs: Any,
1391 ) -> Qdrant:
1392 """
1393 Get instance of an existing Qdrant collection.
1394 This method will return the instance of the store without inserting any new
1395 embeddings
1396 """
-> 1397 client, async_client = cls._generate_clients(
1398 location=location,
1399 url=url,
1400 port=port,
1401 grpc_port=grpc_port,
1402 prefer_grpc=prefer_grpc,
1403 https=https,
1404 api_key=api_key,
1405 prefix=prefix,
1406 timeout=timeout,
1407 host=host,
1408 path=path,
1409 **kwargs,
1410 )
1411 return cls(
1412 client=client,
1413 async_client=async_client,
(...)
1416 **kwargs,
1417 )
File ~/.local/lib/python3.10/site-packages/langchain_community/vectorstores/qdrant.py:2250, in Qdrant._generate_clients(location, url, port, grpc_port, prefer_grpc, https, api_key, prefix, timeout, host, path, **kwargs)
2233 @staticmethod
2234 def _generate_clients(
2235 location: Optional[str] = None,
(...)
2246 **kwargs: Any,
2247 ) -> Tuple[Any, Any]:
2248 from qdrant_client import AsyncQdrantClient, QdrantClient
-> 2250 sync_client = QdrantClient(
2251 location=location,
2252 url=url,
2253 port=port,
2254 grpc_port=grpc_port,
2255 prefer_grpc=prefer_grpc,
2256 https=https,
2257 api_key=api_key,
2258 prefix=prefix,
2259 timeout=timeout,
2260 host=host,
2261 path=path,
2262 **kwargs,
2263 )
2265 if location == ":memory:" or path is not None:
2266 # Local Qdrant cannot co-exist with Sync and Async clients
2267 # We fallback to sync operations in this case
2268 async_client = None
File ~/.local/lib/python3.10/site-packages/qdrant_client/qdrant_client.py:107, in QdrantClient.__init__(self, location, url, port, grpc_port, prefer_grpc, https, api_key, prefix, timeout, host, path, force_disable_check_same_thread, grpc_options, auth_token_provider, **kwargs)
104 self._client: QdrantBase
106 if sum([param is not None for param in (location, url, host, path)]) > 1:
--> 107 raise ValueError(
108 "Only one of <location>, <url>, <host> or <path> should be specified."
109 )
111 if location == ":memory:":
112 self._client = QdrantLocal(
113 location=location,
114 force_disable_check_same_thread=force_disable_check_same_thread,
115 )
ValueError: Only one of <location>, <url>, <host> or <path> should be specified.
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
### Expected Behavior
The `from_existing_collection` method should allow the `path` argument to be optional, as specifying both `url` and `path` leads to a conflict, and `path` should not be mandatory when `url` is provided.
### Actual Behavior
- When `path` is not provided, a `TypeError` is raised indicating that `path` is a required positional argument.
- When `path` is provided, a `ValueError` is raised indicating that only one of `<location>`, `<url>`, `<host>`, or `<path>` should be specified.
### Suggested Fix
- Update the `from_existing_collection` method to make the `path` argument optional.
- Adjust the internal logic to handle cases where `url` is provided without requiring `path`.
### Reproduction
1. Use the provided code to instantiate a `Qdrant` object from an existing collection.
2. Observe the `TypeError` when `path` is not provided.
3. Observe the `ValueError` when `path` is provided along with `url`.
Thank you for looking into this issue.
### System Info
### Environment
- Python version: 3.10
- Name: langchain-community
Version: 0.2.2
Summary: Community contributed LangChain integrations.
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /home/lighthouse/.local/lib/python3.10/site-packages
Requires: aiohttp, dataclasses-json, langchain, langchain-core, langsmith, numpy, PyYAML, requests, SQLAlchemy, tenacity
Required-by: langchain-experimental
- Qdrant version: 1.9.x (docker pull) | BUG in langchain_community.vectorstores.qdrant | https://api.github.com/repos/langchain-ai/langchain/issues/23626/comments | 1 | 2024-06-28T09:05:36Z | 2024-08-06T15:00:43Z | https://github.com/langchain-ai/langchain/issues/23626 | 2,379,952,622 | 23,626 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/retrievers/google_vertex_ai_search/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I could not get any responses from Vertex AI Search when referring the datastore with chunk option.
```
from langchain_community.retrievers import (
GoogleVertexAIMultiTurnSearchRetriever,
GoogleVertexAISearchRetriever,
)
retriever = GoogleVertexAISearchRetriever(
project_id=PROJECT_ID,
location_id=LOCATION,
data_store_id=DATA_STORE_ID,
max_documents=3,
)
query = "What is Transformer?"
retriever.invoke(query)
```
### Idea or request for content:
When I tried to query with the dataset without chunk option, it returned correct results:
```
from langchain_community.retrievers import (
GoogleVertexAIMultiTurnSearchRetriever,
GoogleVertexAISearchRetriever,
)
retriever = GoogleVertexAISearchRetriever(
project_id=PROJECT_ID,
location_id=LOCATION,
data_store_id=DATA_STORE_ID,
max_documents=3,
)
query = "What is Transformer?"
retriever.invoke(query)
[Document(page_content='2 Background\nThe goal of reducing sequential computation also forms the foundation of the Extended Neural GPU\n[16], ByteNet [18] and ConvS2S [9], all of which use convolutional neural networks as basic building\nblock, computing hidden representations in parallel for all input and output positions. In these models,\nthe number of operations required to relate signals from two arbitrary input or output positions grows\nin the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes\nit more difficult to learn dependencies between distant positions [12]. In the Transformer this is\nreduced to a constant number of operations, albeit at the cost of reduced effective resolution due\nto averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as\ndescribed in section 3.2.\nSelf-attention, sometimes called intra-attention is an attention mechanism rel
``` | Vertex AI Search doesn't return the result with chunked dataset | https://api.github.com/repos/langchain-ai/langchain/issues/23624/comments | 0 | 2024-06-28T08:07:11Z | 2024-06-28T08:09:54Z | https://github.com/langchain-ai/langchain/issues/23624 | 2,379,849,291 | 23,624 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Currently we only support properties that are explicitly listed in OpenAI's [API reference](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages).
https://github.com/langchain-ai/langchain/blob/a1520357c8053c89cf13caa269636688908d3bf1/libs/partners/openai/langchain_openai/chat_models/base.py#L222
In a popular OpenAI [cookbook](https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models), `"name"` is included with a tool message.
Support for "name" would enable use of ChatOpenAI with certain proxies, namely Google Gemini, as described [here](https://github.com/langchain-ai/langchain/pull/23551).
Opened a [discussion](https://community.openai.com/t/is-name-a-supported-parameter-for-tool-messages/843543) on the OpenAI forums to try to get clarity. | openai: add "name" to supported properties for tool messages? | https://api.github.com/repos/langchain-ai/langchain/issues/23601/comments | 0 | 2024-06-27T18:43:53Z | 2024-06-27T18:46:33Z | https://github.com/langchain-ai/langchain/issues/23601 | 2,378,867,844 | 23,601 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
# #import dependencies
import json
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage,
messages_from_dict,
messages_to_dict)
from dotenv import load_dotenv,find_dotenv
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain.prompts import PromptTemplate
load_dotenv(find_dotenv())
#load existing session messages from db
messages_from_db = self.db_handler.check_into_db(user_id, chat_session_id)
deserialized_messages = self.deserialized_db_messages(messages_from_db)
# Develop ChatMessagesHistory
retrieved_chat_history = ChatMessageHistory(messages= deserialized_messages)
# Create a new ConversationBufferMemory from a ChatMessageHistory class
retrieved_memory = ConversationBufferMemory(
chat_memory=retrieved_chat_history, memory_key="chat_history")
# print(retrieved_memory)
# Build a second Conversational Retrieval Chain
second_chain = ConversationalRetrievalChain.from_llm(
self.llm,
retriever=self.vectordb.as_retriever(),
memory=retrieved_memory,
combine_docs_chain_kwargs={"prompt": self.QA_PROMPT},
get_chat_history=lambda h : h,
verbose=True
)
#*********************************************
answer = second_chain.invoke({"question": question})
#*********************************************
### Error Message and Stack Trace (if applicable)
![verbose](https://github.com/langchain-ai/langchain/assets/76678681/67337fe9-b827-40eb-9711-74e1c1fa8cd5)
### Description
i'm also facing same issue,
LLMChain rephrasing my follow up question in a wrong way, like below
![verbose](https://github.com/langchain-ai/langchain/assets/76678681/b3802a67-4796-45fa-b4ee-aee9d07ccbfa)
anyone can help me, how can i control the behavior of default prompt template (LLMChain) which are rephrasing question incorrectly before passing to my custom prompt
### System Info
(pri_env) abusufyan@abusufyan:~/development/vetrefs-llama/app$ pip show langchain
Name: langchain
Version: 0.2.3
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /home/abusufyan/development/private_VetRef/pri_env/lib/python3.11/site-packages
Requires: aiohttp, langchain-core, langchain-text-splitters, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by: langchain-community | rephrasing follow up question incorrectly | https://api.github.com/repos/langchain-ai/langchain/issues/23587/comments | 0 | 2024-06-27T14:22:11Z | 2024-06-27T14:24:46Z | https://github.com/langchain-ai/langchain/issues/23587 | 2,378,307,191 | 23,587 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from transformers import AutoTokenizer
from langchain_huggingface import ChatHuggingFace
from langchain_huggingface import HuggingFaceEndpoint
import requests
sample = requests.get(
"https://raw.githubusercontent.com/huggingface/blog/main/langchain.md"
).text
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")
def n_tokens(text):
return len(tokenizer(text)["input_ids"])
print(f"The number of tokens in the sample is {n_tokens(sample)}")
llm_10 = HuggingFaceEndpoint(
repo_id="meta-llama/Meta-Llama-3-70B-Instruct",
max_new_tokens=10,
cache=False,
seed=123,
)
llm_4096 = HuggingFaceEndpoint(
repo_id="meta-llama/Meta-Llama-3-70B-Instruct",
max_new_tokens=4096,
cache=False,
seed=123,
)
messages = [
(
"system",
"You are a smart AI that has to describe a given text in to at least 1000 characters.",
),
("user", f"Summarize the following text:\n\n{sample}\n"),
]
# native endpoint
response_10_native = llm_10.invoke(messages)
print(f"Native response 10: {n_tokens(response_10_native)} tokens")
response_4096_native = llm_4096.invoke(messages)
print(f"Native response 4096: {n_tokens(response_4096_native)} tokens")
# make sure the native responses are different lengths
assert len(response_10_native) < len(
response_4096_native
), f"Native response 10 should be shorter than native response 4096, 10 `max_new_tokens`: {n_tokens(response_10_native)}, 4096 `max_new_tokens`: {n_tokens(response_4096_native)}"
# chat implementation from langchain_huggingface
chat_model_10 = ChatHuggingFace(llm=llm_10)
chat_model_4096 = ChatHuggingFace(llm=llm_4096)
# chat implementation for 10 tokens
response_10 = chat_model_10.invoke(messages)
print(f"Response 10: {n_tokens(response_10.content)} tokens")
actual_response_tokens_10 = response_10.response_metadata.get(
"token_usage"
).completion_tokens
print(
f"Actual response 10: {actual_response_tokens_10} tokens (always 100 for some reason!)"
)
# chat implementation for 4096 tokens
response_4096 = chat_model_4096.invoke(messages)
print(f"Response 4096: {n_tokens(response_4096.content)} tokens")
actual_response_tokens_4096 = response_4096.response_metadata.get(
"token_usage"
).completion_tokens
print(
f"Actual response 4096: {actual_response_tokens_4096} tokens (always 100 for some reason!)"
)
# assert that the responses are different lengths, which fails because the token usage is always 100
print("-" * 20)
print(f"Output for 10 tokens: {response_10.content}")
print("-" * 20)
print(f"Output for 4096 tokens: {response_4096.content}")
print("-" * 20)
assert len(response_10.content) < len(
response_4096.content
), f"Response 10 should be shorter than response 4096, 10 `max_new_tokens`: {n_tokens(response_10.content)}, 4096 `max_new_tokens`: {n_tokens(response_4096.content)}"
```
This is the output from the script:
```
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
The number of tokens in the sample is 1809
Native response 10: 11 tokens
Native response 4096: 445 tokens
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Response 10: 101 tokens
Actual response 10: 100 tokens (always 100 for some reason!)
Response 4096: 101 tokens
Actual response 4096: 100 tokens (always 100 for some reason!)
--------------------
Output for 10 tokens: The text announces the launch of a new partner package called `langchain_huggingface` in LangChain, jointly maintained by Hugging Face and LangChain. This package aims to bring the power of Hugging Face's latest developments into LangChain and keep it up-to-date. The package was created by the community, and by becoming a partner package, the time it takes to bring new features from Hugging Face's ecosystem to LangChain's users will be reduced.
The package integrates seamlessly with Lang
--------------------
Output for 4096 tokens: The text announces the launch of a new partner package called `langchain_huggingface` in LangChain, jointly maintained by Hugging Face and LangChain. This package aims to bring the power of Hugging Face's latest developments into LangChain and keep it up-to-date. The package was created by the community, and by becoming a partner package, the time it takes to bring new features from Hugging Face's ecosystem to LangChain's users will be reduced.
The package integrates seamlessly with Lang
--------------------
```
### Error Message and Stack Trace (if applicable)
AssertionError: Response 10 should be shorter than response 4096, 10 `max_new_tokens`: 101, 4096 `max_new_tokens`: 101
### Description
There seems to be an issues when using `langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint` together with the `langchain_huggingface.chat_models.huggingface.ChatHuggingFace` implementation.
When just using the `HuggingFaceEndpoint`, the parameter `max_new_tokens` is properly implemented, while this does not work properly when wrapping inside `ChatHuggingFace(llm=...)`. The latter implementation always returns a response of 100 tokens, and I am unable to get this to work properly after searching the docs + source code.
I have created a reproducible example using `meta-llama/Meta-Llama-3-70B-Instruct` (as this model is also supported for serverless).
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:19:05 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T8112
> Python Version: 3.12.3 (main, Apr 9 2024, 08:09:14) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.10
> langchain: 0.2.6
> langchain_community: 0.2.5
> langsmith: 0.1.82
> langchain_anthropic: 0.1.15
> langchain_aws: 0.1.7
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.9
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | ChatHuggingFace + HuggingFaceEndpoint does not properly implement `max_new_tokens` | https://api.github.com/repos/langchain-ai/langchain/issues/23586/comments | 4 | 2024-06-27T14:17:33Z | 2024-07-13T12:15:29Z | https://github.com/langchain-ai/langchain/issues/23586 | 2,378,296,008 | 23,586 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.agent_toolkits import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
llm = HuggingFaceEndpoint(
endpoint_url="endpoint_url",
max_new_tokens=512,
top_k=10,
top_p=0.95,
typical_p=0.95,
temperature=0.01,
repetition_penalty=1.03,
)
db = SQLDatabase.from_uri("sqlite:///Chinook.db?isolation_level=IMMEDIATE")
toolkit = SQLDatabaseToolkit(db=db,llm=llm)
agent_executor = create_asql_agent(
llm=llm,
toolkit=toolkit,
verbose=True,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
agent_executor.invoke(
"How many genres are there?"
)
### Error Message and Stack Trace (if applicable)
> Entering new SQL Agent Executor chain...
I need to know the table that contains the genres.
Action: sql_db_list_tables
Action Input:
ObservationAlbum, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track Now I know the table that contains the genres is Genre.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made a mistake, I should remove the Observation part.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes and the \n.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes and the \n and the space.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes and the \n and the space and the error message.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes and the \n and the space and the error message and the table_names.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes and the \n and the space and the error message and the table_names and the Observation.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes and the \n and the space and the error message and the table_names and the Observation and the Error.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes and the \n and the space and the error message and the table_names and the Observation and the Error and the colon.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes and the \n and the space and the error message and the table_names and the Observation and the Error and the colon and the table_names.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database
> Finished chain.
{'input': 'How many genres are there?',
'output': 'Agent stopped due to iteration limit or time limit.'}
### Description
SQL Agent extracts the table name with \n line breaker and next line word 'Observation' as can be seen as 'Genre\nObservation'
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Fri May 24 14:06:39 UTC 2024
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.10
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.82
> langchain_experimental: 0.0.62
> langchain_huggingface: 0.0.3
> langchain_mistralai: 0.1.8
> langchain_openai: 0.1.10
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | SQL Agent extracts the table name with \n linebreaker and next line word 'Observation' | https://api.github.com/repos/langchain-ai/langchain/issues/23585/comments | 2 | 2024-06-27T13:43:13Z | 2024-06-30T20:10:27Z | https://github.com/langchain-ai/langchain/issues/23585 | 2,378,200,472 | 23,585 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
def embed_documents(self, texts: List[str]) -> List[List[float]]:
text_features = []
for text in texts:
# Tokenize the text
tokenized_text = self.tokenizer(text).to('cuda')
def embed_image(self, uris: List[str]) -> List[List[float]]:
try:
from PIL import Image as _PILImage
except ImportError:
raise ImportError("Please install the PIL library: pip install pillow")
# Open images directly as PIL images
pil_images = [_PILImage.open(uri) for uri in uris]
image_features = []
for pil_image in pil_images:
# Preprocess the image for the model
preprocessed_image = self.preprocess(pil_image).unsqueeze(0).to('cuda')
### Error Message and Stack Trace (if applicable)
no gpu support yet!
### Description
no gpu support yet!
### System Info
no gpu support yet! | langchain_experimental openclip no gpu | https://api.github.com/repos/langchain-ai/langchain/issues/23567/comments | 0 | 2024-06-27T05:57:59Z | 2024-06-27T06:00:40Z | https://github.com/langchain-ai/langchain/issues/23567 | 2,377,224,742 | 23,567 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/rag/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
https://python.langchain.com/v0.2/docs/tutorials/rag/#retrieval-and-generation-generate
docs say any LangChain LLM or ChatModel could be substituted in.So where i can find a new model exclude methioned in the doc.
i want to use local model.
Like model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3", device_map="auto")
### Idea or request for content:
as a beginner,i dont know the difference between ChatModel and model loading by from_pretrained,but one output right,another output error | DOC: how can i find a new chatmodel to substitute metioned in the docs | https://api.github.com/repos/langchain-ai/langchain/issues/23566/comments | 4 | 2024-06-27T05:28:13Z | 2024-06-28T06:14:57Z | https://github.com/langchain-ai/langchain/issues/23566 | 2,377,169,955 | 23,566 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
memory = ConversationBufferMemory(memory_key="chat_history")
chat_history=[]
if co.count_documents(query) != 0:
for i in range(0, len(co.find(query)[0]["content"]), 1):
if i % 2 == 0:
chat_history.append(HumanMessage(content=co.find(query)[0]["content"][i]))
else:
chat_history.append(AIMessage(content=co.find(query)[0]["content"][i]))
memory.chat_memory=chat_history
llm = OLLAMA(model=language_model)
print(memory.chat_memory)
tools = load_tools(["google-serper"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True,memory=memory)
xx=agent.run(content)
### Error Message and Stack Trace (if applicable)
ValueError: variable chat_history should be a list of base messages
### Description
我就是想将memory加载到agent中,加载到conversationchain时都没问题
### System Info
windows
latest | agent的memory如何添加?我尝试了许多方法,始终报错variable chat_history should be a list of base messages | https://api.github.com/repos/langchain-ai/langchain/issues/23563/comments | 2 | 2024-06-27T03:46:38Z | 2024-06-28T03:25:59Z | https://github.com/langchain-ai/langchain/issues/23563 | 2,376,933,707 | 23,563 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/langserve/#1-create-new-app-using-langchain-cli-command
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The docs miss describing that remote runnables can take a timeout value as input during declaration. The default value is 5 seconds and so if llms take longer than that to respond, there is an error.
https://github.com/langchain-ai/langchainjs/blob/00c7ff15957bf2a5223cfc62878f94bafe9ded22/langchain/src/runnables/remote.ts#L180
This is more relevant for local development.
### Idea or request for content:
Add in the description for the optional options parameter which takes the timeout value in the docs. I can make a pull request if needed. | DOC: Lack of description of options (and thereby timeout) parameter in RemoteRunnable constructor. | https://api.github.com/repos/langchain-ai/langchain/issues/23537/comments | 0 | 2024-06-26T13:18:25Z | 2024-06-26T13:21:18Z | https://github.com/langchain-ai/langchain/issues/23537 | 2,375,331,115 | 23,537 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
code from this link:
https://python.langchain.com/v0.1/docs/use_cases/query_analysis/techniques/routing/
```
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass()
# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
```
```
from typing import Literal
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
class RouteQuery(BaseModel):
"""Route a user query to the most relevant datasource."""
datasource: Literal["python_docs", "js_docs", "golang_docs"] = Field(
...,
description="Given a user question choose which datasource would be most relevant for answering their question",
)
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
structured_llm = llm.with_structured_output(RouteQuery)
system = """You are an expert at routing a user question to the appropriate data source.
Based on the programming language the question is referring to, route it to the relevant data source."""
prompt = ChatPromptTemplate.from_messages(
[
("system", system),
("human", "{question}"),
]
)
router = prompt | structured_llm
```
```
question = """Why doesn't the following code work:
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages(["human", "speak in {language}"])
prompt.invoke("french")
"""
router.invoke({"question": question})
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
UnprocessableEntityError Traceback (most recent call last)
Cell In[6], line 8
1 question = """Why doesn't the following code work:
2
3 from langchain_core.prompts import ChatPromptTemplate
(...)
6 prompt.invoke("french")
7 """
----> 8 router.invoke({"question": question})
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\langchain_core\runnables\base.py:2399, in RunnableSequence.invoke(self, input, config)
2397 try:
2398 for i, step in enumerate(self.steps):
-> 2399 input = step.invoke(
2400 input,
2401 # mark each step as a child run
2402 patch_config(
2403 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
2404 ),
2405 )
2406 # finish the root run
2407 except BaseException as e:
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\langchain_core\runnables\base.py:4433, in RunnableBindingBase.invoke(self, input, config, **kwargs)
4427 def invoke(
4428 self,
4429 input: Input,
4430 config: Optional[RunnableConfig] = None,
4431 **kwargs: Optional[Any],
4432 ) -> Output:
-> 4433 return self.bound.invoke(
4434 input,
4435 self._merge_configs(config),
4436 **{**self.kwargs, **kwargs},
4437 )
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\langchain_core\language_models\chat_models.py:170, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
159 def invoke(
160 self,
161 input: LanguageModelInput,
(...)
165 **kwargs: Any,
166 ) -> BaseMessage:
167 config = ensure_config(config)
168 return cast(
169 ChatGeneration,
--> 170 self.generate_prompt(
171 [self._convert_input(input)],
172 stop=stop,
173 callbacks=config.get("callbacks"),
174 tags=config.get("tags"),
175 metadata=config.get("metadata"),
176 run_name=config.get("run_name"),
177 run_id=config.pop("run_id", None),
178 **kwargs,
179 ).generations[0][0],
180 ).message
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\langchain_core\language_models\chat_models.py:599, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
591 def generate_prompt(
592 self,
593 prompts: List[PromptValue],
(...)
596 **kwargs: Any,
597 ) -> LLMResult:
598 prompt_messages = [p.to_messages() for p in prompts]
--> 599 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\langchain_core\language_models\chat_models.py:456, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
454 if run_managers:
455 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 456 raise e
457 flattened_outputs = [
458 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item]
459 for res in results
460 ]
461 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\langchain_core\language_models\chat_models.py:446, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
443 for i, m in enumerate(messages):
444 try:
445 results.append(
--> 446 self._generate_with_cache(
447 m,
448 stop=stop,
449 run_manager=run_managers[i] if run_managers else None,
450 **kwargs,
451 )
452 )
453 except BaseException as e:
454 if run_managers:
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\langchain_core\language_models\chat_models.py:671, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
669 else:
670 if inspect.signature(self._generate).parameters.get("run_manager"):
--> 671 result = self._generate(
672 messages, stop=stop, run_manager=run_manager, **kwargs
673 )
674 else:
675 result = self._generate(messages, stop=stop, **kwargs)
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\langchain_openai\chat_models\base.py:543, in BaseChatOpenAI._generate(self, messages, stop, run_manager, **kwargs)
541 message_dicts, params = self._create_message_dicts(messages, stop)
542 params = {**params, **kwargs}
--> 543 response = self.client.create(messages=message_dicts, **params)
544 return self._create_chat_result(response)
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\openai\_utils\_utils.py:277, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
275 msg = f"Missing required argument: {quote(missing[0])}"
276 raise TypeError(msg)
--> 277 return func(*args, **kwargs)
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\openai\resources\chat\completions.py:590, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, stream_options, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
558 @required_args(["messages", "model"], ["messages", "model", "stream"])
559 def create(
560 self,
(...)
588 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
589 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
--> 590 return self._post(
591 "/chat/completions",
592 body=maybe_transform(
593 {
594 "messages": messages,
595 "model": model,
596 "frequency_penalty": frequency_penalty,
597 "function_call": function_call,
598 "functions": functions,
599 "logit_bias": logit_bias,
600 "logprobs": logprobs,
601 "max_tokens": max_tokens,
602 "n": n,
603 "presence_penalty": presence_penalty,
604 "response_format": response_format,
605 "seed": seed,
606 "stop": stop,
607 "stream": stream,
608 "stream_options": stream_options,
609 "temperature": temperature,
610 "tool_choice": tool_choice,
611 "tools": tools,
612 "top_logprobs": top_logprobs,
613 "top_p": top_p,
614 "user": user,
615 },
616 completion_create_params.CompletionCreateParams,
617 ),
618 options=make_request_options(
619 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
620 ),
621 cast_to=ChatCompletion,
622 stream=stream or False,
623 stream_cls=Stream[ChatCompletionChunk],
624 )
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\openai\_base_client.py:1240, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1226 def post(
1227 self,
1228 path: str,
(...)
1235 stream_cls: type[_StreamT] | None = None,
1236 ) -> ResponseT | _StreamT:
1237 opts = FinalRequestOptions.construct(
1238 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
1239 )
-> 1240 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\openai\_base_client.py:921, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
912 def request(
913 self,
914 cast_to: Type[ResponseT],
(...)
919 stream_cls: type[_StreamT] | None = None,
920 ) -> ResponseT | _StreamT:
--> 921 return self._request(
922 cast_to=cast_to,
923 options=options,
924 stream=stream,
925 stream_cls=stream_cls,
926 remaining_retries=remaining_retries,
927 )
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\openai\_base_client.py:1020, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
1017 err.response.read()
1019 log.debug("Re-raising status error")
-> 1020 raise self._make_status_error_from_response(err.response) from None
1022 return self._process_response(
1023 cast_to=cast_to,
1024 options=options,
(...)
1027 stream_cls=stream_cls,
1028 )
UnprocessableEntityError: Error code: 422 - {'detail': [{'type': 'enum', 'loc': ['body', 'tool_choice', 'str-enum[ChatCompletionToolChoiceOptionEnum]'], 'msg': "Input should be 'none' or 'auto'", 'input': 'required', 'ctx': {'expected': "'none' or 'auto'"}}, {'type': 'model_attributes_type', 'loc': ['body', 'tool_choice', 'ChatCompletionNamedToolChoice'], 'msg': 'Input should be a valid dictionary or object to extract fields from', 'input': 'required'}]}
```
### Description
When using the routing example shown in the langchain docs, it only works if the "langchain-openai" version is 0.1.8 or lower. The newest versions (0.1.9+) break this logic. Routers are used in my workflow and this is preventing me from upgrading my packages. Please either revert the breaking changes or provide new documentation to support this type of routing functionality.
### System Info
langchain ==0.2.3
langchain-chroma ==0.1.1
langchain-community ==0.2.0
langchain-core ==0.2.3
langchain-experimental ==0.0.59
langchain-google-genai ==1.0.4
langchain-google-vertexai ==1.0.4
langchain-openai ==0.1.10
langchain-text-splitters ==0.2.0
langchainhub ==0.1.15
langgraph ==0.1.1
openai ==1.27.0
platform: windows
python version 3.10.10 | Routing Example Does Not Work with langchain-openai > 0.1.8 | https://api.github.com/repos/langchain-ai/langchain/issues/23536/comments | 5 | 2024-06-26T13:00:33Z | 2024-06-30T15:56:05Z | https://github.com/langchain-ai/langchain/issues/23536 | 2,375,280,952 | 23,536 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
class Shell(BaseModel):
command: str = Field(..., description="The shell command to be executed.")
script_args: Optional[List[str]] = Field(default=None, description="Optional arguments for the shell command.")
inputs: Optional[str] = Field(default=None, description="User inputs for the command, e.g., for Python scripts.")
@tool("shell_tool",args_schema=Shell)
def shell_tool(command, script_args,inputs) -> str:
"""
Execute the given shell command and return its output.
example with shell args: shell_tool('python foo.py',["Hello World","Welcome"],None)
example with user inputs (When the script has input("Enter a nnumber")): shell_tool('python add.py',None,'5\\n6\\n')
example for simple case: shell_tool('python foo.py',None,None)
"""
try:
safe_command = shlex.split(command)
if script_args:
safe_command.extend(script_args)
result = subprocess.Popen(safe_command,stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE,text=True)
stdout,stderr=result.communicate(input=inputs)
if result.returncode != 0:
return f"Error: {stderr}"
return stdout
except Exception as e:
return f"Exception occurred: {str(e)}"
shell_tool.invoke('python sum.py',None,'5\n')
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[15], [line 1](vscode-notebook-cell:?execution_count=15&line=1)
----> [1](vscode-notebook-cell:?execution_count=15&line=1) shell_tool.invoke('python sum.py',None,'5\n')
TypeError: BaseTool.invoke() takes from 2 to 3 positional arguments but 4 were given
### Description
I executed this tool. but I get this error. But when I remove the decorator, the @tool decorator (now its a normal function) it works but once the decorator is on it's a tool and when I invoke it gives me this error. The same error (incomplete output/response) is seen when I use this tool inside an agent. Can anyone help me
### System Info
langchain==0.2.5
langchain-community==0.2.5
langchain-core==0.2.9
langchain-experimental==0.0.61
langchain-groq==0.1.5
langchain-text-splitters==0.2.1
python 3.12.4
Windows 11 system
| Langchain Tools: TypeError: BaseTool.invoke() takes from 2 to 3 positional arguments but 4 were given | https://api.github.com/repos/langchain-ai/langchain/issues/23533/comments | 2 | 2024-06-26T12:10:08Z | 2024-06-26T13:29:44Z | https://github.com/langchain-ai/langchain/issues/23533 | 2,375,160,398 | 23,533 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import langchain_core.prompts.chat
### Error Message and Stack Trace (if applicable)
_No response_
### Description
我修改了xml.py命名后,json.py又出现了相同的问题,我都搞不清楚到底是我的问题还是langchain本身的问题了,怎么会有json.py这样的命名呢?
### System Info
windows
latest | import xml.etree.ElementTree as ET ModuleNotFoundError: No module named 'xml.etree' | https://api.github.com/repos/langchain-ai/langchain/issues/23529/comments | 0 | 2024-06-26T11:24:33Z | 2024-06-26T11:24:33Z | https://github.com/langchain-ai/langchain/issues/23529 | 2,375,074,030 | 23,529 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/llm_chain/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
For novices using LangChain with LangServe it could be really helpfull if there was a second simple example showing how to make an application featuring not only model calling with a prompt template, but using also a vector database for retrieval. This will show how to build the chain for example to the new on LangChain people for which the page is for. Actually the whole docs doesn't contain simple example of LangSmith with retrieval like (`retriever = vectorstore.as_retriever()`). There is one exaple which could be pretty complex for learners here: "https://github.com/langchain-ai/langserve/blob/main/examples/conversational_retrieval_chain/server.py" | DOC: Add second exaple in Build a Simple LLM Application with LCEL docs page for better understanding | https://api.github.com/repos/langchain-ai/langchain/issues/23518/comments | 0 | 2024-06-26T07:45:02Z | 2024-06-26T07:47:42Z | https://github.com/langchain-ai/langchain/issues/23518 | 2,374,617,272 | 23,518 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import AgentExecutor, create_openai_tools_agent
```
### Error Message and Stack Trace (if applicable)
```
ImportError: cannot import name '_set_config_context' from 'langchain_core.runnables.config'
```
### Description
Following the [streaming agent document](https://python.langchain.com/v0.1/docs/modules/agents/how_to/streaming/) code.
I get the error when import the modules.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.8 (tags/v3.11.8:db85d51, Feb 6 2024, 22:03:32) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.10
> langchain: 0.2.6
> langchain_community: 0.2.1
> langsmith: 0.1.82
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.20
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | ImportError: cannot import name '_set_config_context' from 'langchain_core.runnables.config' | https://api.github.com/repos/langchain-ai/langchain/issues/23517/comments | 2 | 2024-06-26T06:31:25Z | 2024-06-26T10:49:12Z | https://github.com/langchain-ai/langchain/issues/23517 | 2,374,460,655 | 23,517 |
[
"hwchase17",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/document_loaders/rst/
### Checklist
- [x] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
- How to download `UnstructuredRSTLoader` is not mentioned. When I checked the docs of `Unstructured`, I could not find the loader. The only thing I saw was [this](https://python.langchain.com/v0.2/docs/integrations/providers/unstructured/) and ended up doing `pip install unstructured`. But still can't use the code.
```py
from langchain_community.document_loaders import UnstructuredRSTLoader
loader = UnstructuredRSTLoader(file_path="test.rst", mode="elements")
docs = loader.load()
print(docs[0])
```
Error:
```
(venv) robin@robin:~/Desktop/playground/FURY-data-script$ python rstparser.py
Traceback (most recent call last):
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/file_conversion.py", line 16, in convert_file_to_text
text = pypandoc.convert_file(filename, target_format, format=source_format)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/pypandoc/__init__.py", line 195, in convert_file
raise RuntimeError("source_file is not a valid path")
RuntimeError: source_file is not a valid path
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/robin/Desktop/playground/FURY-data-script/rstparser.py", line 4, in <module>
docs = loader.load()
^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/langchain_core/document_loaders/base.py", line 29, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/langchain_community/document_loaders/unstructured.py", line 88, in lazy_load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/langchain_community/document_loaders/rst.py", line 57, in _get_elements
return partition_rst(filename=self.file_path, **self.unstructured_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/documents/elements.py", line 593, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/filetype.py", line 626, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/filetype.py", line 582, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/chunking/dispatch.py", line 74, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/partition/rst.py", line 53, in partition_rst
html_text = convert_file_to_html_text_using_pandoc(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/file_conversion.py", line 65, in convert_file_to_html_text_using_pandoc
return convert_file_to_text(
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/utils.py", line 249, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/file_conversion.py", line 25, in convert_file_to_text
supported_source_formats, _ = pypandoc.get_pandoc_formats()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/pypandoc/__init__.py", line 546, in get_pandoc_formats
_ensure_pandoc_path()
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/pypandoc/__init__.py", line 797, in _ensure_pandoc_path
raise OSError("No pandoc was found: either install pandoc and add it\n"
OSError: No pandoc was found: either install pandoc and add it
to your PATH or or call pypandoc.download_pandoc(...) or
install pypandoc wheels with included pandoc.
(venv) robin@robin:~/Desktop/playground/FURY-data-script$ python rstparser.py
Traceback (most recent call last):
File "/home/robin/Desktop/playground/FURY-data-script/rstparser.py", line 4, in <module>
docs = loader.load()
^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/langchain_core/document_loaders/base.py", line 29, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/langchain_community/document_loaders/unstructured.py", line 88, in lazy_load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/langchain_community/document_loaders/rst.py", line 57, in _get_elements
return partition_rst(filename=self.file_path, **self.unstructured_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/documents/elements.py", line 593, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/filetype.py", line 626, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/filetype.py", line 582, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/chunking/dispatch.py", line 74, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/partition/rst.py", line 53, in partition_rst
html_text = convert_file_to_html_text_using_pandoc(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/file_conversion.py", line 65, in convert_file_to_html_text_using_pandoc
return convert_file_to_text(
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/utils.py", line 249, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/file_conversion.py", line 16, in convert_file_to_text
text = pypandoc.convert_file(filename, target_format, format=source_format)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/pypandoc/__init__.py", line 200, in convert_file
return _convert_input(discovered_source_files, format, 'path', to, extra_args=extra_args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/pypandoc/__init__.py", line 364, in _convert_input
_ensure_pandoc_path()
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/pypandoc/__init__.py", line 797, in _ensure_pandoc_path
raise OSError("No pandoc was found: either install pandoc and add it\n"
OSError: No pandoc was found: either install pandoc and add it
to your PATH or or call pypandoc.download_pandoc(...) or
install pypandoc wheels with included pandoc.
```
I later tried `pip install "unstructured[all-docs]"` but it started downloading `torch` at which point I gave up.
### Idea or request for content:
Things to be added:
- How to download the library.
- Langchain docs should have more details regarding the loader instead of linking to `unstructured`, the docs linked are [outdated](https://unstructured-io.github.io/unstructured/bricks.html#partition-rst) and moved. | DOC: <Issue related to /v0.2/docs/integrations/document_loaders/rst/> | https://api.github.com/repos/langchain-ai/langchain/issues/23515/comments | 0 | 2024-06-26T06:23:28Z | 2024-08-08T07:08:54Z | https://github.com/langchain-ai/langchain/issues/23515 | 2,374,432,928 | 23,515 |
[
"hwchase17",
"langchain"
] | ### URL
https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The right way to add history to a chat to restart the chat from a middle point is missing the clarity. The documentation points towards **PromptTemplate**
```
history = {'input': 'What is life?', 'history': 'Human: What is life?\nAI: {}', 'response': '{ "Life" : {\n "Definition" : "A complex and multifaceted phenomenon characterized by the presence of organization, metabolism, homeostasis, and reproduction.",\n "Context" : ["Biology", "Philosophy", "Psychology"],\n "Subtopics" : [\n {"Self-awareness": "The capacity to have subjective experiences, such as sensations, emotions, and thoughts."},\n {"Evolutionary perspective": "A process driven by natural selection, genetic drift, and other mechanisms that shape the diversity of life on Earth."},\n {"Quantum perspective": "A realm where quantum mechanics and general relativity intersect, potentially influencing the emergence of consciousness."}\n ]\n} }'}
PROMPT_TEMPLATE = """
{history}
"""
custom_prompt = PromptTemplate(
input_variables=["history"], template=PROMPT_TEMPLATE
)
chain = ConversationChain(
prompt=custom_prompt,
llm=llm,
memory=ConversationBufferMemory()
)
prompt = "What is life?"
answer = chain.invoke(input=prompt)
```
>
> Error:
> miniconda3/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
> raise validation_error
> pydantic.v1.error_wrappers.ValidationError: 1 validation error for ConversationChain
> __root__
> Got unexpected prompt input variables. The prompt expects ['history'], but got ['history'] as inputs from memory, and input as the normal input key. (type=value_error)
### Idea or request for content:
Please provide the right and straightforward way to induce history in a conversation. | DOC: Right way to initialize with past history of conversation | https://api.github.com/repos/langchain-ai/langchain/issues/23511/comments | 0 | 2024-06-26T03:08:58Z | 2024-07-03T19:41:13Z | https://github.com/langchain-ai/langchain/issues/23511 | 2,374,088,042 | 23,511 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [ ] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
def test_chain(chain):
test_queries = [
"What is the capital of France?",
"Explain the process of photosynthesis.",
]
for query in test_queries:
try:
logging.info(f"Running query: {query}")
response = chain.invoke(query)
logging.info(f"Query: {query}")
logging.info(f"Response: {response}")
print(f"Query: {query}")
print(f"Response: {response}\n")
except Exception as e:
logging.error(
f"An error occurred while processing the query '{query}': {e}")
traceback.print_exc()
if __name__ == "__main__":
chain = main()
test_chain(chain)
### Error Message and Stack Trace (if applicable)
TypeError('can only concatenate str (not "ChatPromptValue") to str')Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1626, in _call_with_config
context.run(
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/passthrough.py", line 456, in _invoke
**self.mapper.invoke(
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3142, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3142, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2499, in invoke
input = step.invoke(
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3963, in invoke
return self._call_with_config(
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1626, in _call_with_config
context.run(
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3837, in _invoke
output = call_func_with_variable_args(
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/usr/local/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 263, in __call__
return super().__call__(text_inputs, **kwargs)
File "/usr/local/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1243, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/usr/local/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1249, in run_single
model_inputs = self.preprocess(inputs, **preprocess_params)
File "/usr/local/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 288, in preprocess
prefix + prompt_text,
TypeError: can only concatenate str (not "ChatPromptValue") to str
### Description
I expect to see an answer generated by the llm, but always end up running into this error: TypeError('can only concatenate str (not "ChatPromptValue") to str')
Even though the chain is valid.
### System Info
pip freeze | grep langchain
langchain==0.1.13
langchain-community==0.0.31
langchain-core==0.1.52
langchain-openai==0.1.1
langchain-qdrant==0.1.1
langchain-text-splitters==0.0.2 | Issue with RunnableAssign<answer>: TypeError('can only concatenate str (not "ChatPromptValue") to str')Traceback (most recent call last): | https://api.github.com/repos/langchain-ai/langchain/issues/23505/comments | 3 | 2024-06-25T21:47:08Z | 2024-07-02T00:52:56Z | https://github.com/langchain-ai/langchain/issues/23505 | 2,373,718,098 | 23,505 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
@tool
def request_bing(query : str) -> str:
"""
Searches the internet for additional information.
Specifically useful when you need to answer questions about current events or the current state of the world.
Prefer Content related to finance.
"""
url = "https://api.bing.microsoft.com/v7.0/search"
headers = {"Ocp-Apim-Subscription-Key": os.getenv("AZURE_KEY")}
params = {"q": query}
response = requests.get(url, headers=headers, params=params)
response.raise_for_status()
data = response.json()
snippets_list = [result['snippet'] for result in data['webPages']['value']]
snippets = "\n".join(snippets_list)
return snippets
```
### Error Message and Stack Trace (if applicable)
```openai.APIError: The model produced invalid content.```
### Description
I'm using langchain ReAct agent + tools and starting from Jun 23rd seems to receive lots of exceptions.
```openai.APIError: The model produced invalid content.```
Suspecting that openai side changed something on function calling? Could you please shed some lights on it?
I'm using gpt-4o as the llm. Bing search tool defined as above.
### System Info
langchain==0.2.4
langchain-community==0.2.4
langchain-core==0.2.6
langchain-openai==0.1.8
langchain-text-splitters==0.2.1 | openai.APIError: The model produced invalid content. | https://api.github.com/repos/langchain-ai/langchain/issues/23407/comments | 6 | 2024-06-25T16:18:40Z | 2024-07-05T04:42:37Z | https://github.com/langchain-ai/langchain/issues/23407 | 2,373,099,869 | 23,407 |