rgallardo committed
Commit e414796
1 Parent(s): 7f5e5f5

Update examples

Files changed (2)
  1. app.py +1 -1
  2. updated_context.txt +1 -35
app.py CHANGED
@@ -59,7 +59,7 @@ with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    text = gr.Textbox(label="Ask a question (press enter to submit)", default_value="How are you?")
    gr.Examples(
-        ["What's the name of the dataset that was built?", "what task does it focus on?", "Describe that task"],
+        ["What's the name of the dataset that was built?", "what task does it focus on?", "what is that task about?"],
        text
    )

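For context, the snippet below is a minimal, self-contained sketch of the kind of gr.Blocks demo this hunk belongs to; the respond function and the submit wiring are illustrative assumptions, not code from this commit.

import gradio as gr

def respond(question, history):
    # Placeholder: the real app queries the fine-tuned conversational QA model here.
    answer = f"(model answer to: {question})"
    return "", history + [(question, answer)]

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    text = gr.Textbox(label="Ask a question (press enter to submit)")
    gr.Examples(
        ["What's the name of the dataset that was built?",
         "what task does it focus on?",
         "what is that task about?"],
        text,
    )
    # Clicking an example fills the textbox; pressing enter submits it to the chatbot.
    text.submit(respond, [text, chatbot], [text, chatbot])

demo.launch()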
updated_context.txt CHANGED
@@ -6,14 +6,11 @@ While these large models are great for general tasks, they may need to be fine-t

As a treat to our amazing readers, we fine-tuned an LLM to build a chatbot that can answer natural questions about the rich content published on our blog. This article will walk you through the fine-tuning process, from building our own dataset to optimizing inference time for the fine-tuned model!

- Say Hi!
- Before diving into the details of our fine-tuning pipeline, we invite you to try out our chatbot for yourself! We’ve set up a 🤗 Hugging Face Space where you can ask the chatbot any questions you have about this blog post. Simply follow this link to access the space and start chatting! In addition, we’ve uploaded the fine-tuned model and its ONNX export to 🤗 Hugging Face so that you can explore them at will.
-
The model powering the chatbot was trained on the task of Conversational Question Answering, which means it can answer questions regarding a specific context (in this case, the content of this blog) or previous questions or answers from the current conversation thread. Try chatting with the bot in a conversational style, just as you would with one of Tryolabs’ tech experts!

Caveats
Remember that this is just a small demo, so the answers may not be as accurate as expected; in the Improvements section, we’ll discuss how we can enhance the model’s responses! Also, the content of this blog was not included in the training set, which means that the chatbot will give you answers about new, unseen data!
- Response time may take around 10 seconds due to the use of 🤗 Hugging Face’s free-tier space, which only has access to two CPU cores. This response time is slow for a chatbot being used in production, but it's fast for a CPU deployment and for such large inputs (since the model needs to process the entire blog to generate an answer). We'll discuss optimizing inference time in the Optimizing with ONNX section.
+ Response time may take around 10 seconds due to the use of 🤗 Hugging Face’s free-tier space, which only has access to two CPU cores. This response time is slow for a chatbot being used in production, but it's fast for a CPU deployment and for such large inputs (since the model needs to process the entire blog to generate an answer).
The Boom of Foundation Models
Have you ever wondered how computers can understand and respond to human language? The answer lies in a new concept called ‘Foundation Models’. Popularized under this name by the Stanford University, these are machine learning models trained on immense amounts of data (over tens of terabytes and growing) that can be adapted to a wide range of downstream tasks in fields such as image, text, and audio, among others.

@@ -42,9 +39,6 @@ Our goal was to build a chatbot that could answer questions about Tryolabs’ bl

Like the CoQA dataset for conversational question answering, TryoCoQA consists of questions and answers about a specific context in a conversational manner, with some questions and answers referencing previous points in the conversation. We aimed for natural language questions with a variety of writing styles and vocabulary, and answers that can be short spans of text extracted from the context or free-form text, hoping for the model to produce more human-like responses with high-quality content.

- The dataset and guidelines for building it are available in the following GitHub repository.
-
-

2. Choosing a Foundation Model
Selecting the right open-source LLM to fine-tune can be a tough choice. There’s a handful of them, and while they may perform quite well for a broad range of general tasks, it can be challenging to predict how well they will perform on our specific task before fine-tuning. Think of it like buying a new pair of dancing shoes - they may look great and feel comfortable, but you won't know how well they'll perform until you hit the dance floor.
@@ -71,8 +65,6 @@ How well the model fits your specific task. You can dig deeper into the datasets
The size of your inputs and outputs. Some models may not scale well for large inputs and outputs due to high memory consumption or a substantial number of operations.
The size of your dataset. If you have a small dataset, choose a more powerful model capable of zero-shot or few-shot learning. Nevertheless, the more data you have, the better results you will achieve.
How much computing power is available for training and inference. This can be a significant factor in determining which models you can use, as larger models may not fit in your memory, or training may become painfully slow. You can use libraries like 🤗 Accelerate for distributed training to make the most out of your hardware.
- All this can seem overwhelming, but don’t worry; here at Tryolabs, we can help you choose the right model for your needs! Contact us and get started on building your own fine-tuning pipeline.
-


3. Fine-tuning strategy
@@ -96,15 +88,6 @@ In addition to deciding how to combine the datasets, we also needed to adapt our

To refresh the reader’s memory, in the use-case of Conversational Question Answering, the goal of the model is to generate the answer to a question given a context and the previous questions and answers from the conversation. With this in mind, we formatted the inputs following this structure:

- {context} ||
- <Q> {previous_question_2}
- <A> {previous_answer_2}
- <Q> {previous_question_1}
- <A> {previous_answer_1}
- <Q> {question}
- <A>
-
-
Here, the input is a text string containing the context (i.e., the content of one of Tryolabs blog posts) followed by the last two question-and-answer pairs in the conversation and the current target question, with the target output being the answer to the target question. We chose to add just the last two question-and-answer pairs to limit the amount of conversation history the model needs to pay attention to while still being able to generate coherent responses. Note that this is a hyper-parameter you can adjust when fine-tuning your own model.

With our data prepared and fine-tuning strategy determined, the final step was setting up our infrastructure environment and training the model.
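As a rough illustration of the input format described in the hunk above, here is one way the input string could be assembled; the helper name and its arguments are assumptions, not code from this repository.

def build_model_input(context, history, question):
    # history is a list of (question, answer) pairs; keep only the last two turns,
    # matching the conversation-history limit described in the text.
    parts = [f"{context} ||"]
    for prev_q, prev_a in history[-2:]:
        parts.append(f"<Q> {prev_q}")
        parts.append(f"<A> {prev_a}")
    parts.append(f"<Q> {question}")
    parts.append("<A>")
    return "\n".join(parts)

With an empty history this reduces to the context, the target question, and a trailing "<A>" marker for the model to complete.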
@@ -133,15 +116,9 @@ To assess its performance, we used the F1 Score by validating how many tokens ap

Since we had two different training steps, we also had two additional evaluation steps. The first training, on SQuAD2.0 and CoQa, resulted in a 74.29 F1 Score on the validation split after 3 epochs. The second training, on TryoCoQa, produced a 54.77 F1 Score after 166 epochs.

-
-
-
-
-
More than analyzing the quantitative metrics is required to evaluate these results and conversational models in general. It is essential to consider the qualitative aspect of the model's answers, like their grammatical correctness and coherence within the conversation context. Sometimes it is preferable to have better answers (qualitatively speaking) than a better F1. So we looked at some answers from the validation set to ensure that the model was correctly generating what we were looking for. Our analysis revealed that higher F1 scores were generally associated with greater-quality answers. As a result, we selected the checkpoint with the highest F1 score to use in constructing our demonstration chatbot.


- If you want to play around with our fine-tuned model, you can find it on 🤗 Hugging Face with the ID tryolabs/long-t5-tglobal-base-blogpost-cqa!
Faster inference with 🤗 Optimum and ONNX
After fine-tuning our model, we wanted to make it available to our awesome readers, so we deployed it on 🤗 Hugging Face Spaces, which offers a free tier with two CPU cores for running inference on the model. However, this setup can lead to slow inference times, and processing significant inputs like ours doesn’t make it any better. And a chatbot that takes a few minutes to answer a question doesn't strike anyone as being particularly chatty, does it? So, to improve the speed of our chatbot, we turned to 🤗 Optimum and the ONNX Runtime!

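A minimal sketch of the token-overlap F1 mentioned in this hunk's context, not part of the commit; real evaluation scripts (e.g., the SQuAD metric) also normalize casing, punctuation, and articles, which is omitted here.

from collections import Counter

def token_f1(prediction, reference):
    # Token-overlap F1 between a predicted answer and a reference answer.
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)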
@@ -155,18 +132,7 @@ Once our model was exported to ONNX, we used 🤗 Optimum’s integration with t

With these improvements, we could achieve an x2 speed-up on inference time! Although the model still takes around 10 seconds to answer, this is a reasonable speed for a CPU-only deployment and processing such large inputs.

-
- It’s worth noting that members of the ONNX community are actively working on tools for exporting and optimizing the latest ML architectures. The features mentioned in this section are fresh from the oven, thanks to the invaluable support of the ONNX team. Their work is crucial for unifying deep learning frameworks and optimizing models for faster and more cost-effective training and inference.
-
- Improvements
- We’re just scratching the surface of what’s possible. There are numerous potential improvements to keep working on to enhance the user's overall experience and the chatbot's performance. Here are some ideas to keep working on:
-
- Fine-tune on more public datasets for different downstream tasks. By doing so, we hope that the model learns to respond in a more human-like manner. For example, we could fine-tune on context-free QA datasets or on free-form answers so that the model not only retrieves the answers from the context but also generates original answers. We could also fine-tune on summarizing or paraphrasing tasks so that the user can also ask the chatbot to rewrite certain parts of the blog or summarize entire sections!
- Increase the size of our dataset. We built a small dataset for this demo, so it's not surprising that the model doesn't generalize well to new unseen blogs. We could potentially train a more accurate model by providing it with more training data.
- Optimize training using ONNX Runtime. We could achieve better results in the same amount of time if we could train for more epochs or fit a larger version of LongT5 in the same amount of memory. To do this, we could leverage the optimization power of ONNX Runtime for training PyTorch models, which would allow us to accelerate training speeds and optimize memory usage. Thanks to the recently released 🤗 Optimum integration with ONNX Runtime for training, this can be done in just a few lines of code!
Takeaways
With the ever-increasing popularity of LLMs, it can seem almost impossible to train these models without having access to millions of dollars in resources and tons of data. However, with the right skills and knowledge about Foundation Models, Deep Learning, and the Transformer architecture, we showed you that fine-tuning these huge models is possible, even with few resources and a small dataset!

Fine-tuning is the key to unlocking the full potential of Foundation Models for your business. It allows you to take a pre-trained model and adapt it to your specific needs without breaking the bank.
-
- And the best part? We're here to help you every step of the way. If you're ready to see how fine-tuning can benefit your business, don't hesitate to reach out, and let's get started on your fine-tuning journey today!
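Finally, a hedged sketch of the ONNX export and ONNX Runtime inference path referenced in the last two hunks, not part of this commit; the export keyword differs across 🤗 Optimum versions (export=True in newer releases, from_transformers=True in older ones), so treat this as illustrative only. The model ID is the one named in the removed line above.

from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSeq2SeqLM

model_id = "tryolabs/long-t5-tglobal-base-blogpost-cqa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Export the checkpoint to ONNX and run it with ONNX Runtime on CPU.
model = ORTModelForSeq2SeqLM.from_pretrained(model_id, export=True)

# Prompt assembled in the conversational QA format; <blog post text> is a placeholder.
prompt = "<blog post text> ||\n<Q> What's the name of the dataset that was built?\n<A>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))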
 