Building a Chainlit App
What if we want to take our Week 1 Day 2 assignment - Pythonic RAG - and bring it out of the notebook?
Well - we'll cover exactly that here!
Anatomy of a Chainlit Application
Chainlit is a Python package, similar to Streamlit, that lets users write a backend and a front end in a single (or multiple) Python file(s). It is mainly used for prototyping LLM-based chat-style applications - though it is used in production in some settings with millions of MAUs (Monthly Active Users).
The primary method of customizing and interacting with the Chainlit UI is through a few critical decorators.
NOTE: Simply put, the decorators (in Chainlit) are just ways we can "plug-in" to the functionality in Chainlit.
We'll be concerning ourselves with three main scopes:
- On application start - when we start the Chainlit application with a command like
chainlit run app.py
- On chat start - when a chat session starts (a user opens the web browser to the address hosting the application)
- On message - when the user sends a message through the input text box in the Chainlit UI
Let's dig into each scope and see what we're doing!
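Before we do, here is a minimal sketch of how those three scopes map onto a single Chainlit file (the handler names are arbitrary - only the decorators matter):
import chainlit as cl

# "On application start" - plain module-level code, run once when you launch `chainlit run app.py`
GREETING = "Hello from the application scope!"

@cl.on_chat_start  # "On chat start" - runs once per new chat session
async def start_chat():
    cl.user_session.set("greeting", GREETING)

@cl.on_message  # "On message" - runs every time the user sends a message
async def handle_message(message: cl.Message):
    await cl.Message(content=f"{cl.user_session.get('greeting')} You said: {message.content}").send()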
On Application Start:
The first thing you'll notice is that we have the traditional "wall of imports" - this ensures we have everything we need to run our application.
import os
from typing import List
from chainlit.types import AskFileResponse
from aimakerspace.text_utils import CharacterTextSplitter, TextFileLoader
from aimakerspace.openai_utils.prompts import (
    UserRolePrompt,
    SystemRolePrompt,
    AssistantRolePrompt,
)
from aimakerspace.openai_utils.embedding import EmbeddingModel
from aimakerspace.vectordatabase import VectorDatabase
from aimakerspace.openai_utils.chatmodel import ChatOpenAI
import chainlit as cl
Next up, we have some prompt templates. As all sessions will use the same prompt templates without modification, and we don't need these templates to be specific per session - we can set them up here - at the application scope.
system_template = """\
Use the following context to answer a user's question. If you cannot find the answer in the context, say you don't know the answer."""
system_role_prompt = SystemRolePrompt(system_template)
user_prompt_template = """\
Context:
{context}
Question:
{question}
"""
user_role_prompt = UserRolePrompt(user_prompt_template)
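Assuming the aimakerspace role prompts use standard {placeholder} formatting and return chat-style message dictionaries (which is how they're used in the pipeline below), filling them in looks roughly like this:
formatted_system_prompt = system_role_prompt.create_message()
formatted_user_prompt = user_role_prompt.create_message(
    question="What is Retrieval Augmented Generation?",
    context="Retrieval Augmented Generation (RAG) pairs a retriever with a generator...",
)
# formatted_user_prompt is expected to look like:
# {"role": "user", "content": "Context:\nRetrieval Augmented Generation (RAG) pairs ...\nQuestion:\nWhat is ..."}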
NOTE: You'll notice that these are the exact same prompt templates we used from the Pythonic RAG Notebook in Week 1 Day 2!
Following that - we can create the Python class definition for our RAG pipeline - or chain, as we'll refer to it for the rest of this walkthrough.
Let's look at the definition first:
class RetrievalAugmentedQAPipeline:
    def __init__(self, llm: ChatOpenAI, vector_db_retriever: VectorDatabase) -> None:
        self.llm = llm
        self.vector_db_retriever = vector_db_retriever

    async def arun_pipeline(self, user_query: str):
        ### RETRIEVAL
        context_list = self.vector_db_retriever.search_by_text(user_query, k=4)

        context_prompt = ""
        for context in context_list:
            context_prompt += context[0] + "\n"

        ### AUGMENTED
        formatted_system_prompt = system_role_prompt.create_message()
        formatted_user_prompt = user_role_prompt.create_message(question=user_query, context=context_prompt)

        ### GENERATION
        async def generate_response():
            async for chunk in self.llm.astream([formatted_system_prompt, formatted_user_prompt]):
                yield chunk

        return {"response": generate_response(), "context": context_list}
Notice a few things:
- We have modified this RetrievalAugmentedQAPipeline from the initial notebook to support streaming.
- In essence, our pipeline is chaining a few events together:
  - We take our user query, and chain it into our Vector Database to collect related chunks
  - We take those contexts and our user's questions and chain them into the prompt templates
  - We take that prompt template and chain it into our LLM call
  - We chain the response of the LLM call to the user
- We are using a lot of async again! (A short sketch of consuming this pipeline follows this list.)
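To make that chaining concrete, here is a minimal sketch of driving the pipeline outside of Chainlit - it assumes you already have a populated VectorDatabase called vector_db and an OPENAI_API_KEY in your environment:
import asyncio

async def main():
    pipeline = RetrievalAugmentedQAPipeline(
        llm=ChatOpenAI(),
        vector_db_retriever=vector_db,  # assumption: a VectorDatabase already built from your chunks
    )
    result = await pipeline.arun_pipeline("What is this document about?")

    # "response" holds an async generator, so we consume it chunk by chunk as the LLM streams
    async for chunk in result["response"]:
        print(chunk, end="", flush=True)

asyncio.run(main())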
Now, we're going to create a helper function for processing uploaded text files.
First, we'll instantiate a shared CharacterTextSplitter.
text_splitter = CharacterTextSplitter()
Now we can define our helper.
def process_text_file(file: AskFileResponse):
    import tempfile

    with tempfile.NamedTemporaryFile(mode="w", delete=False, suffix=".txt") as temp_file:
        temp_file_path = temp_file.name

    with open(temp_file_path, "wb") as f:
        f.write(file.content)

    text_loader = TextFileLoader(temp_file_path)
    documents = text_loader.load_documents()
    texts = text_splitter.split_text(documents)
    return texts
Simply put, this saves the uploaded file to a temporary file, loads it in with TextFileLoader, splits it with our text splitter, and returns the resulting list of strings!
QUESTION #1:
Why do we want to support streaming? What about streaming is important, or useful?
ANSWER #1:
Streaming is the continuous transmission of data from the model to the UI. Instead of waiting and batching up the response into a single large message, the response is sent in pieces (streams) as it is created.
The advantages of streaming:
- quicker initial response - the user sees the first part of the answer sooner
- it is easier to identify that the results are incorrect and terminate the request early
- it is a more natural mode of communication for humans
- better handling of large responses, without requiring complex caching
- essential for real-time processing
- humans can only read so fast, so it's an advantage to get some of the data earlier (see the timing sketch after this list)
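To make the "quicker initial response" point concrete, here is a rough way to compare time-to-first-chunk against total generation time with the pipeline above (a sketch, assuming a built pipeline object):
import asyncio
import time

async def measure(pipeline, query: str):
    start = time.perf_counter()
    result = await pipeline.arun_pipeline(query)

    first_chunk_at = None
    full_response = ""
    async for chunk in result["response"]:
        if first_chunk_at is None:
            first_chunk_at = time.perf_counter() - start  # the user could start reading here
        full_response += chunk

    total = time.perf_counter() - start
    print(f"First chunk after {first_chunk_at:.2f}s; full response after {total:.2f}s")

# asyncio.run(measure(retrieval_augmented_qa_pipeline, "What is this document about?"))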
On Chat Start:
The next scope is where "the magic happens". On Chat Start is when a user begins a chat session. This will happen whenever a user opens a new chat window, or refreshes an existing chat window.
You'll see that our code is set up to immediately show the user a chat box requesting them to upload a file.
files = None

while files is None:
    files = await cl.AskFileMessage(
        content="Please upload a Text File file to begin!",
        accept=["text/plain"],
        max_size_mb=2,
        timeout=180,
    ).send()
Once we've obtained the text file - we'll use our processing helper function to process our text!
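In practice this is just a couple of lines - a sketch, since AskFileMessage resolves to a list of uploaded files and the exact variable names here are an assumption:
file = files[0]                  # take the first (and only) uploaded file
texts = process_text_file(file)  # our helper from the application scope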
After we have processed our text file - we'll need to create a VectorDatabase and populate it with our processed chunks and their related embeddings!
vector_db = VectorDatabase()
vector_db = await vector_db.abuild_from_list(texts)
Once we have that piece completed - we can create the chain we'll be using to respond to user queries!
retrieval_augmented_qa_pipeline = RetrievalAugmentedQAPipeline(
    vector_db_retriever=vector_db,
    llm=chat_openai
)
Now, we'll save that into our user session!
NOTE: Chainlit has some great documentation about User Session.
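Concretely, that save is a single call - the "chain" key is just a label, and it only needs to match what we read back later in on_message:
cl.user_session.set("chain", retrieval_augmented_qa_pipeline)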
QUESTION #2:
Why are we using User Session here? What about Python makes us need to use this? Why not just store everything in a global variable?
ANSWER #2:
The application will hopefully be run by many people at the same time. If the data were stored in a global variable, it would be shared by everyone using the application - every time someone started a new session, the information would be overwritten, meaning everyone would essentially get the same results (unless only one person used the system at a time).
So the goal is to keep each user's session information separate from all the other users'. The Chainlit user session provides the capability of storing each user's data separately.
On Message:
First, we load our chain from the user session:
chain = cl.user_session.get("chain")
Then, we run the chain on the content of the message - and stream it to the front end - that's it!
msg = cl.Message(content="")
result = await chain.arun_pipeline(message.content)
async for stream_resp in result["response"]:
    await msg.stream_token(stream_resp)
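For completeness, here is roughly how that snippet sits inside the on_message handler - the closing send() call, which finalizes the streamed message in the UI, is based on common Chainlit usage and may vary by version:
@cl.on_message
async def main(message: cl.Message):
    chain = cl.user_session.get("chain")

    msg = cl.Message(content="")
    result = await chain.arun_pipeline(message.content)

    # forward each streamed chunk to the UI as it arrives
    async for stream_resp in result["response"]:
        await msg.stream_token(stream_resp)

    await msg.send()  # finalize the streamed message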
🎉
With that - you've created a Chainlit application that moves our Pythonic RAG notebook out of the notebook and into the browser!
🚧 CHALLENGE MODE 🚧
For an extra challenge - modify the behaviour of your application by integrating changes you made to your Pythonic RAG notebook (using new retrieval methods, etc.)
If you're still looking for a challenge, or didn't make any modifications to your Pythonic RAG notebook:
- Allow users to upload PDFs (this will require you to build a PDF parser as well)
- Modify the VectorStore to leverage Qdrant (a rough sketch of the Qdrant API follows this list)
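For the Qdrant challenge, here is a rough sketch of the core qdrant-client calls you would be working with - the collection name, vector size, and toy data are all placeholders, and this is not the assignment solution:
import random

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

# toy stand-ins for your chunk texts and their embeddings
texts = ["chunk one", "chunk two", "chunk three"]
dims = 8
embeddings = [[random.random() for _ in range(dims)] for _ in texts]

client = QdrantClient(":memory:")  # in-memory instance, handy for local experiments
client.create_collection(
    collection_name="documents",
    vectors_config=VectorParams(size=dims, distance=Distance.COSINE),
)
client.upsert(
    collection_name="documents",
    points=[
        PointStruct(id=i, vector=vec, payload={"text": txt})
        for i, (txt, vec) in enumerate(zip(texts, embeddings))
    ],
)

# nearest-neighbour lookup with a query vector (swap in a real embedding in the app)
hits = client.search(collection_name="documents", query_vector=embeddings[0], limit=2)
print([hit.payload["text"] for hit in hits])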
NOTE: The motivation for these challenges is simple - the beginning of the course is extremely information dense, and people come from all kinds of different technical backgrounds. In order to ensure that all learners are able to engage with the content confidently and comfortably, we want to focus on the basic units of technical competency required. This means some learners, who came in with more robust technical skills, may find the introductory material too simple - and these open-ended challenges give those learners something more substantial to work on!
Support PDF Documents
The code was modified to support PDF documents in the following areas:
- Change the request for documents in on_chat_start:
  - changed the message to ask for a .txt or .pdf file
  - changed the acceptable file formats so that PDF documents are included in the file-select pop-up
while not files:
    files = await cl.AskFileMessage(
        content="Please upload a .txt or .pdf file to begin processing!",
        accept=["text/plain", "application/pdf"],
        max_size_mb=2,
        timeout=180,
    ).send()
- Change the process_text_file() function to handle .pdf files:
  - refactor the code to do all file handling in richard.text_utils
  - the app calls process_file(), optionally passing in the text splitter function
  - the default text splitter function is CharacterTextSplitter
texts = process_file(file)
- The load_file() function does the following:
  - reads the uploaded document into a temporary file
  - identifies the file extension
  - processes a .txt file as before, resulting in the texts list
  - if the file is a .pdf, uses the PyMuPDF library to read each page, extract its text, and append it to the texts list
  - uses the passed-in text splitter function to split the documents
def load_file(self, file, text_splitter=CharacterTextSplitter()):
    file_extension = os.path.splitext(file.name)[1].lower()

    with tempfile.NamedTemporaryFile(mode="wb", delete=False, suffix=file_extension) as temp_file:
        self.temp_file_path = temp_file.name
        temp_file.write(file.content)

    if os.path.isfile(self.temp_file_path):
        if self.temp_file_path.endswith(".txt"):
            self.load_text_file()
        elif self.temp_file_path.endswith(".pdf"):
            self.load_pdf_file()
        else:
            raise ValueError(
                f"Unsupported file type: {self.temp_file_path}"
            )
        return text_splitter.split_text(self.documents)
    else:
        raise ValueError(
            "Not a file"
        )

def load_text_file(self):
    with open(self.temp_file_path, "r", encoding=self.encoding) as f:
        self.documents.append(f.read())

def load_pdf_file(self):
    pdf_document = fitz.open(self.temp_file_path)
    for page_num in range(len(pdf_document)):
        page = pdf_document.load_page(page_num)
        text = page.get_text()
        self.documents.append(text)
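For context, the excerpt above references self.documents and self.encoding, and the process_file() helper the app calls is described but not shown. Here is a minimal sketch of how those pieces might fit together - the class name FileLoader and the exact signature of process_file() are assumptions, not the actual richard.text_utils code:
class FileLoader:  # hypothetical name for the class that owns load_file()
    def __init__(self, encoding: str = "utf-8"):
        self.documents = []        # filled by load_text_file() / load_pdf_file()
        self.encoding = encoding   # used when reading .txt files
        self.temp_file_path = None

    # load_file(), load_text_file(), and load_pdf_file() as shown above

def process_file(file, text_splitter=None):
    # app-facing helper: optionally accepts a text splitter, defaulting to CharacterTextSplitter
    loader = FileLoader()
    return loader.load_file(file, text_splitter=text_splitter or CharacterTextSplitter())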
- Test the handling of .pdf and .txt files:
  - Several different .pdf and .txt files were successfully uploaded and processed by the app.