Spaces · AstraBert committed · Commit 51187cc · Parent(s): 751b52b

first commit
README.md CHANGED
@@ -1,13 +1,138 @@
-title:
-emoji:
-pinned:
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
---
title: everything-rag
emoji: 🤖
colorFrom: red
colorTo: yellow
sdk: gradio
sdk_version: 4.25.0
app_file: app.py
pinned: true
license: apache-2.0
---

<!-- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference -->

# everything-rag

>_How was this README generated? Leveraging the power of AI with **reAIdme**, a HuggingChat assistant based on meta-llama/Llama-2-70b-chat-hf._

_Go and give it a try [here](https://hf.co/chat/assistant/660d9a4f590a7924eed02a32)!_ 🤖

<div align="center">
    <img src="https://img.shields.io/github/languages/top/AstraBert/everything-rag" alt="GitHub top language">
    <img src="https://img.shields.io/github/commit-activity/t/AstraBert/everything-rag" alt="GitHub commit activity">
    <img src="https://img.shields.io/badge/everything_rag-almost_completely_stable-green" alt="Static Badge">
    <img src="https://img.shields.io/badge/Release-v0.1.0-blue" alt="Static Badge">
    <img src="https://img.shields.io/badge/Docker_image_size-6.44GB-red" alt="Static Badge">
    <div>
        <a href="https://astrabert.github.io/everything-rag/"><img src="https://github.com/AstraBert/everything-rag/blob/main/data/example_chat.png" alt="Example chat" align="center"></a>
        <p><i>Example chat with everything-rag, mediated by google/flan-t5-base</i></p>
    </div>
</div>

### Table of Contents

1. [Introduction](#introduction)
2. [Inspiration](#inspiration)
3. [Getting Started](#getting-started)
4. [Using the Chatbot](#using-the-chatbot)
5. [Troubleshooting](#troubleshooting)
6. [Contributing](#contributing)
7. [References](#reference)

## Introduction

Introducing **everything-rag**, your fully customizable and local chatbot assistant! 🤖

With everything-rag, you can:

1. Use virtually any LLM you want: switch between different LLMs like _gemma-7b_ or _llama-7b_ to suit your needs.
2. Use your own data: everything-rag can work with any data you provide, whether it's a PDF about data science or a document about Pallas's cats!🐈
3. Enjoy 100% local and 100% free functionality: there is no need for hosted APIs or pay-as-you-go services. everything-rag is completely free to use and runs on your desktop. Plus, with the chat_history functionality in ConversationalRetrievalChain, you can easily retrieve and review previous conversations with your chatbot, making it even more convenient to use.

While everything-rag offers many benefits, there are a couple of limitations to keep in mind:

1. Performance-critical tasks: loading large models (>1-2 GB) and generating text can be resource-intensive, so it is recommended to have at least 16GB RAM and 4 CPU cores for optimal performance.
2. Small LLMs can still hallucinate: while large LLMs like _gemma-7b_ and _llama-7b_ tend to produce better results, smaller models like _openai-community/gpt2_ can still produce suboptimal responses in certain situations.

In summary, everything-rag is a simple, customizable, and local chatbot assistant that offers a wide range of features and capabilities. By leveraging the power of RAG, everything-rag offers a unique and flexible chatbot experience that can be tailored to your specific needs and preferences. Whether you're looking for a simple chatbot to answer basic questions or a more advanced conversational AI to engage with your users, everything-rag has got you covered.😊

## Inspiration

This project is a humble and modest carbon-copy of its main and true inspirations, i.e. [Jan.ai](https://jan.ai/), [Cheshire Cat AI](https://cheshirecat.ai/), [privateGPT](https://privategpt.io/) and many other projects that focus on making LLMs (and AI in general) open-source and easily accessible to everyone.

## Getting Started

You can do three things:

- Play with generation on [Kaggle](https://www.kaggle.com/code/astrabertelli/gemma-for-datasciences)
- Clone this repository, head over to [the python script](https://github.com/AstraBert/everything-rag/blob/main/scripts/gemma_for_datasciences.py) and modify everything to your needs!
- Docker installation (🥳**FULLY IMPLEMENTED**): you can install everything-rag as a Docker image and run it thanks to Docker by following these really simple commands:

```bash
docker pull ghcr.io/astrabert/everything-rag:latest
docker run -p 7860:7860 everything-rag:latest -m microsoft/phi-2 -t text-generation
```

- **IMPORTANT NOTE**: running the script within `docker run` does not log the port on which the app is running until you press `Ctrl+C`, but at that moment it also interrupts the execution! The app runs on port `0.0.0.0:7860`, so just make sure to open your browser on that port and refresh it after 30 seconds to 1-2 minutes, when the model and the tokenizer should be loaded and the app should be ready to work!

- As you can see, you just need to specify the LLM model and its task (this is mandatory). Keep in mind that, as far as v0.1.0 is concerned, everything-rag supports only text-generation and text2text-generation. For these two tasks, you can use virtually *any* model from the HuggingFace Hub: the sole recommendation is to watch out for your disk space, RAM and CPU power, since LLMs can be quite resource-consuming! An example with a text2text-generation model is shown below.
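
For instance, the same image can be started with a different model/task pair to serve a text2text-generation model. The checkpoint below (google/flan-t5-base, the model used in the example chat screenshot) is only an illustration; any suitable Hub checkpoint should work:

```bash
# Illustration only: swap in the model and task you actually want to run
docker pull ghcr.io/astrabert/everything-rag:latest
docker run -p 7860:7860 everything-rag:latest -m google/flan-t5-base -t text2text-generation
```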

## Using the Chatbot

### GUI

The chatbot has a brand-new Gradio-based interface that runs on a local server. You can interact with it by directly uploading your PDF files and/or sending messages, all by running:

```bash
python3 scripts/chat.py -m provider/modelname -t task
```
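
For example (an illustration, not the only valid choice), a small text2text-generation model could be launched with:

```bash
# Illustration: the model must be suitable for one of the supported tasks
python3 scripts/chat.py -m google/flan-t5-base -t text2text-generation
```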

The suggested workflow, nevertheless, is the one that exploits Docker.

### Code breakdown - notebook

Everything is explained in [the dedicated notebook](https://github.com/AstraBert/everything-rag/blob/main/scripts/gemma-for-datasciences.ipynb), but here's a brief breakdown of the code (a minimal usage sketch follows the list):

1. The first section imports the necessary libraries, including Hugging Face Transformers, langchain-community, and tkinter.
2. The next section installs the necessary dependencies, including the gemma-2b model, and defines some useful functions for making the LLM-based data science assistant work.
3. The create_a_persistent_db function creates a persistent database from a PDF file, using PyPDFLoader to load the PDF, a text splitter to break it into smaller chunks, and the Hugging Face embeddings to transform the text into numerical vectors. The embeddings cache is stored in a LocalFileStore and the vectors in a persistent Chroma database.
4. The just_chatting function implements a chat system using the Hugging Face model and the persistent database. It takes a query, tokenizes it, and passes it to the model to generate a response. The response is then returned as a dictionary of strings.
5. The chat_gui class defines a simple chat GUI that displays the chat history and allows the user to input queries. The send_message function is called when the user presses the "Send" button, and it sends the user's message to the just_chatting function to get a response.
6. The script then creates a root Tk object and instantiates a ChatGUI object, which starts the main loop.

Et voilà, your chatbot is up and running!🦿
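
For reference, here is a minimal, illustrative sketch (paths and the query are placeholders) of how the helpers added in utils.py fit together outside the GUI:

```python
# Minimal sketch: build the persistent vector DB once, then query it through
# the retrieval-augmented chain. Paths below are placeholders.
from utils import create_a_persistent_db, just_chatting, model, tokenizer, tsk

vectordb = create_a_persistent_db(
    pdfpath="docs/my_doc.pdf",         # placeholder PDF path
    dbpath="docs/my_doc_localDB",      # where the Chroma store is persisted
    cachepath="docs/my_doc_embcache",  # where the embeddings cache lives
)

result = just_chatting(
    task=tsk,                 # "text-generation" by default
    model=model,
    tokenizer=tokenizer,
    query="What is this document about?",
    vectordb=vectordb,
    chat_history=[],          # previous (question, answer) pairs, if any
)
print(result["answer"])
```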

## Troubleshooting

### Common Issues Q&A

* Q: The chatbot is not responding😭
> A: Make sure that the PDF document is in the specified path and that the database has been created successfully.
* Q: The chatbot is taking soooo long🫠
> A: This is quite common in resource-limited environments that deal with models that are too large or too small: large models require **at least** 32 GB RAM and a >8 core CPU, whereas small models can easily hallucinate and produce responses that are endless repetitions of the same thing! Check the *repetition_penalty* parameter to avoid this (see the sketch after this Q&A), and **try rephrasing the query and being as specific as possible**.
* Q: My model is hallucinating and/or repeating the same sentence over and over again😵‍💫
> A: This is quite common with small or old models: check the *repetition_penalty* and *temperature* parameters to avoid this.
* Q: The chatbot is giving incorrect/non-meaningful answers🤥
> A: Check that the PDF document is relevant and up-to-date. Also, **try rephrasing the query and be as specific as possible**.
* Q: An error occurred while generating the answer💔
> A: This frequently occurs when your (small) LLM has a limited maximum hidden size (generally 512 or 1024) and the context that the retrieval-augmented chain produces goes beyond that maximum. You could, potentially, modify the configuration of the model, but this would mean dramatically increasing its resource consumption, and your small laptop is not prepared to take it, trust me! A solution, if you have enough RAM and CPU power, is to switch to larger LLMs: they do not have problems in this sense.
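
As a hedged illustration of the last two answers (the values below are arbitrary starting points, not tuned defaults), repetition and randomness can be controlled through the generation parameters of the transformers pipeline that utils.py builds:

```python
# Illustrative only: tweak repetition_penalty / temperature to fight repetition
# and hallucination; model, tokenizer and tsk are the ones loaded in utils.py.
from transformers import pipeline
from utils import model, tokenizer, tsk

pipe = pipeline(
    tsk,                      # "text-generation" by default
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=2048,
    repetition_penalty=1.5,   # >1 discourages repeating the same tokens
    do_sample=True,
    temperature=0.7,          # lower values give more deterministic output
)
```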

## Contributing

Contributions are welcome! If you would like to improve the chatbot's functionality or add new features, please fork the repository and submit a pull request.

## Reference

* [Hugging Face Transformers](https://github.com/huggingface/transformers)
* [Langchain-community](https://github.com/langchain-community/langchain-community)
* [Tkinter](https://docs.python.org/3/library/tkinter.html)
* [PDF document about data science](https://www.kaggle.com/datasets/astrabertelli/what-is-datascience-docs)
* [Gradio](https://www.gradio.app/)

## License

This project is licensed under the Apache 2.0 License.

If you use this work for your projects, please consider citing the author [Astra Bertelli](http://astrabert.vercel.app).
app.py ADDED
@@ -0,0 +1,76 @@
import gradio as gr
import os
import time
from utils import *  # brings in model, tokenizer, tsk, pipeline and the RAG helpers

vectordb = ""


def print_like_dislike(x: gr.LikeData):
    # Log like/dislike feedback coming from the chatbot component
    print(x.index, x.value, x.liked)

def add_message(history, message):
    # Append uploaded files and/or the text message to the chat history,
    # then disable the textbox while the bot is answering
    if len(message["files"]) > 0:
        history.append((message["files"], None))
    if message["text"] is not None and message["text"] != "":
        history.append((message["text"], None))
    return history, gr.MultimodalTextbox(value=None, interactive=False)


def bot(history):
    global vectordb
    global tsk
    if type(history[-1][0]) != tuple:
        if vectordb == "":
            # No document uploaded yet: fall back to plain generation
            pipe = pipeline(tsk, tokenizer=tokenizer, model=model)
            response = pipe(history[-1][0])[0]
            response = response["generated_text"]
            history[-1][1] = ""
            for character in response:
                history[-1][1] += character
                time.sleep(0.05)
                yield history
        else:
            try:
                # A vector DB exists: answer through the retrieval-augmented chain.
                # The task argument is required by just_chatting's signature (see utils.py).
                response = just_chatting(task=tsk, model=model, tokenizer=tokenizer, query=history[-1][0], vectordb=vectordb, chat_history=[convert_none_to_str(his) for his in history])["answer"]
                history[-1][1] = ""
                for character in response:
                    history[-1][1] += character
                    time.sleep(0.05)
                    yield history
            except Exception as e:
                response = f"Sorry, the error '{e}' occurred while generating the response; check [troubleshooting documentation](https://astrabert.github.io/everything-rag/#troubleshooting) for more"
                # Stream the error message back to the chat so the user actually sees it
                history[-1][1] = ""
                for character in response:
                    history[-1][1] += character
                    time.sleep(0.05)
                    yield history
    if type(history[-1][0]) == tuple:
        # One or more PDFs were uploaded: merge them and build the persistent vector DB
        filelist = []
        for i in history[-1][0]:
            filelist.append(i)
        finalpdf = merge_pdfs(filelist)
        vectordb = create_a_persistent_db(finalpdf, os.path.dirname(finalpdf)+"_localDB", os.path.dirname(finalpdf)+"_embcache")
        response = "VectorDB was successfully created, now you can ask me anything about the document you uploaded!😊"
        history[-1][1] = ""
        for character in response:
            history[-1][1] += character
            time.sleep(0.05)
            yield history

with gr.Blocks() as demo:
    chatbot = gr.Chatbot(
        [[None, "Hi, I'm **everything-rag**🤖.\nI'm here to assist you and let you chat with _your_ pdfs!\nCheck [my website](https://astrabert.github.io/everything-rag/) for troubleshooting and documentation reference\nHave fun!😊"]],
        label="everything-rag",
        elem_id="chatbot",
        bubble_full_width=False,
    )

    chat_input = gr.MultimodalTextbox(interactive=True, file_types=["pdf"], placeholder="Enter message or upload file...", show_label=False)

    chat_msg = chat_input.submit(add_message, [chatbot, chat_input], [chatbot, chat_input])
    bot_msg = chat_msg.then(bot, chatbot, chatbot, api_name="bot_response")
    bot_msg.then(lambda: gr.MultimodalTextbox(interactive=True), None, [chat_input])

    chatbot.like(print_like_dislike, None, None)
    gr.ClearButton(chatbot)

demo.queue()

if __name__ == "__main__":
    demo.launch()
requirements.txt ADDED
@@ -0,0 +1,9 @@
langchain-community==0.0.13
langchain==0.1.1
pypdf==3.17.4
sentence_transformers==2.2.2
chromadb==0.4.22
gradio
transformers
trl
peft
utils.py ADDED
@@ -0,0 +1,147 @@
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import time
from langchain_community.llms import HuggingFacePipeline
from langchain.storage import LocalFileStore
from langchain.embeddings import CacheBackedEmbeddings
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain.chains import ConversationalRetrievalChain
import os
from pypdf import PdfMerger
from argparse import ArgumentParser


mod = "google/gemma-2b"
tsk = "text-generation"

def merge_pdfs(pdfs: list):
    # Merge the uploaded PDFs into a single file and return its path
    merger = PdfMerger()
    for pdf in pdfs:
        merger.append(pdf)
    merger.write(f"{pdfs[-1].split('.')[0]}_results.pdf")
    merger.close()
    return f"{pdfs[-1].split('.')[0]}_results.pdf"

def create_a_persistent_db(pdfpath, dbpath, cachepath):
    """
    Creates a persistent database from a PDF file.

    Args:
        pdfpath (str): The path to the PDF file.
        dbpath (str): The path to the storage folder for the persistent LocalDB.
        cachepath (str): The path to the storage folder for the embeddings cache.

    Returns:
        Chroma: the persistent vector database built from the PDF.
    """
    print("Started the operation...")
    a = time.time()
    loader = PyPDFLoader(pdfpath)
    documents = loader.load()

    ### Split the documents into smaller chunks for processing
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    texts = text_splitter.split_documents(documents)

    ### Use HuggingFace embeddings for transforming text into numerical vectors
    ### This operation can take a while the first time but, once you have created your local database with
    ### cached embeddings, it should be a matter of seconds to load them!
    embeddings = HuggingFaceEmbeddings()
    store = LocalFileStore(
        os.path.join(
            cachepath, os.path.basename(pdfpath).split(".")[0] + "_cache"
        )
    )
    cached_embeddings = CacheBackedEmbeddings.from_bytes_store(
        underlying_embeddings=embeddings,
        document_embedding_cache=store,
        namespace=os.path.basename(pdfpath).split(".")[0],
    )

    b = time.time()
    print(
        f"Embeddings successfully created and stored at {os.path.join(cachepath, os.path.basename(pdfpath).split('.')[0]+'_cache')} under namespace: {os.path.basename(pdfpath).split('.')[0]}"
    )
    print(f"To load and embed, it took: {b - a}")

    persist_directory = os.path.join(
        dbpath, os.path.basename(pdfpath).split(".")[0] + "_localDB"
    )
    vectordb = Chroma.from_documents(
        documents=texts,
        embedding=cached_embeddings,
        persist_directory=persist_directory,
    )
    c = time.time()
    print(
        f"Persistent database successfully created and stored at {os.path.join(dbpath, os.path.basename(pdfpath).split('.')[0] + '_localDB')}"
    )
    print(f"To create a persistent database, it took: {c - b}")
    return vectordb

def convert_none_to_str(l: list):
    # Replace None entries and file tuples with empty strings so the chat history
    # can be passed to the retrieval chain as plain text
    newlist = []
    for i in range(len(l)):
        if l[i] is None or type(l[i]) == tuple:
            newlist.append("")
        else:
            newlist.append(l[i])
    return tuple(newlist)

def just_chatting(
    task,
    model,
    tokenizer,
    query,
    vectordb,
    chat_history=[]
):
    """
    Implements a chat system using Hugging Face models and a persistent database.

    Args:
        task (str): Task for the pipeline; for now the supported tasks are ['text-generation', 'text2text-generation']
        model (AutoModelForCausalLM): Hugging Face model, already loaded and prepared.
        tokenizer (AutoTokenizer): Hugging Face tokenizer, already loaded and prepared.
        query (str): Question by the user.
        vectordb (Chroma): vector store used for retrieval.
        chat_history (list): A list with previous questions and answers, serves as context; by default it is empty (which may make the model hallucinate).
    """
    ### Create a text-generation pipeline and connect it to a ConversationalRetrievalChain
    pipe = pipeline(
        task,
        model=model,
        tokenizer=tokenizer,
        max_new_tokens=2048,
        repetition_penalty=float(10),
    )

    local_llm = HuggingFacePipeline(pipeline=pipe)
    llm_chain = ConversationalRetrievalChain.from_llm(
        llm=local_llm,
        chain_type="stuff",
        retriever=vectordb.as_retriever(search_kwargs={"k": 1}),
        return_source_documents=False,
    )
    rst = llm_chain({"question": query, "chat_history": chat_history})
    return rst


try:
    tokenizer = AutoTokenizer.from_pretrained(
        mod,
    )

    tokenizer.pad_token = tokenizer.eos_token

    model = AutoModelForCausalLM.from_pretrained(
        mod,
    )
except Exception as e:
    import sys
    print(f"The error {e} occurred while handling model and tokenizer loading: please ensure that the model you provided was correct and suitable for the specified task. Be also sure that the HF repository for the loaded model contains all the necessary files.", file=sys.stderr)
    sys.exit(1)