Update README.md
README.md

---
license: apache-2.0
inference: false
language:
- en
library_name: transformers
---

# PlanLLM

<img src="https://i.imgur.com/nHuVNAn.png" alt="drawing" style="width:300px;"/>

## Model Details

PlanLLM is a conversational assistant trained to help users complete a recipe from beginning to end and to answer any related requests the user might have.
The model was also tested on DIY tasks and performed similarly.
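
Since the card lists `library_name: transformers`, the model should load through the standard causal-LM API. Below is a minimal usage sketch; the checkpoint path and the Vicuna-style prompt format are assumptions, not details confirmed by this card.

```python
# Minimal usage sketch (assumptions: placeholder checkpoint path, Vicuna-style prompt).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/PlanLLM"  # placeholder -- substitute the actual Hub id or local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Vicuna-style prompt: a short system preamble followed by USER/ASSISTANT turns.
prompt = (
    "A chat between a user and an assistant that guides the user through a recipe.\n"
    "USER: I'm on step 2. How long should I knead the dough? ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```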

### Training

PlanLLM was trained by fine-tuning a [Vicuna](https://huggingface.co/lmsys/vicuna-7b-v1.1) model on synthetic dialogues between users and an assistant about a given recipe.
The model was first trained with supervised fine-tuning (SFT) and then with Direct Preference Optimization (DPO).

#### Details

SFT:
- Train Type: Fully Sharded Data Parallel (FSDP) with 4 A100 40GB GPUs
- Batch Size: 1
- Gradient Acc. Steps: 64
- Train steps: 600

DPO:
- Train Type: Low-Rank Adaptation (LoRA) with 1 A100 40GB GPU
- LoRA Rank: 64
- LoRA Alpha: 16
- Batch Size: 1
- Gradient Acc. Steps: 64
- Train steps: 350
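
For readers who want to reproduce a similar setup, the DPO stage maps naturally onto Hugging Face TRL. The sketch below mirrors the hyperparameters listed above but is not the authors' script: the dataset file, output path, and the choice of TRL itself are assumptions.

```python
# Rough DPO sketch with TRL + PEFT, mirroring the card's hyperparameters
# (LoRA r=64, alpha=16, batch size 1, 64 grad-acc steps, 350 steps).
# TRL >= 0.12 API; older versions take tokenizer= instead of processing_class=.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "lmsys/vicuna-7b-v1.1"  # in practice, the SFT checkpoint from the first stage
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Preference data with "prompt", "chosen", "rejected" columns (hypothetical file).
dataset = load_dataset("json", data_files="dpo_dialogues.json", split="train")

args = DPOConfig(
    output_dir="planllm-dpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=64,
    max_steps=350,
)

trainer = DPOTrainer(
    model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
    # With a PEFT config, TRL keeps the frozen base model as the DPO reference.
    peft_config=LoraConfig(r=64, lora_alpha=16, task_type="CAUSAL_LM"),
)
trainer.train()
```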

### Dataset

PlanLLM was trained on synthetic user-system dialogues in which the system's role is to aid the user in completing a predetermined task; in our case, recipes.

These dialogues were generated from user utterances collected from Alexa users who interacted with TWIZ, our entry in the first Alexa Prize TaskBot Challenge.
Using an intent classifier, we mapped each user utterance to a specific intent, allowing us to collect intent-specific utterances and to build a dialogue graph for each dialogue (with intents as the graph nodes).
For the system responses, we used a combination of templates, external knowledge sources, and Large Language Models.

With these components, we built a pipeline that navigates a dialogue graph, generating a user request and a system response at each turn, and thus creates complete dialogues that follow dialogue patterns similar to those of real users.
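
As an illustration of this pipeline, the sketch below walks a toy intent graph, samples a collected user utterance for each intent, and pairs it with a generated system response. All names and data here are illustrative stand-ins, not the actual TWIZ implementation.

```python
# Toy sketch of the synthetic-dialogue pipeline: walk an intent graph and
# pair sampled user utterances with generated system responses.
import random

# Intent-transition graph mined from real dialogues (toy example).
intent_graph = {
    "start_task": ["next_step", "ask_question"],
    "next_step": ["next_step", "ask_question", "stop_task"],
    "ask_question": ["next_step", "stop_task"],
    "stop_task": [],
}

# Collected user utterances grouped by classified intent (toy example).
utterances = {
    "start_task": ["let's make the lasagna"],
    "next_step": ["next step", "what do I do now?"],
    "ask_question": ["can I use margarine instead of butter?"],
    "stop_task": ["I'm done, thanks"],
}

def respond(intent, recipe, step):
    """Stand-in for the template / knowledge-source / LLM response generator."""
    if intent == "start_task":
        return f"Sure, let's make {recipe['title']}! First: {recipe['steps'][0]}"
    if intent == "next_step" and step < len(recipe["steps"]):
        return f"Step {step + 1}: {recipe['steps'][step]}"
    if intent == "ask_question":
        return "That substitution should work fine."  # an LLM would answer here
    return "Great job, enjoy your meal!"

def generate_dialogue(recipe, max_turns=10):
    dialogue, intent, step = [], "start_task", 0
    for _ in range(max_turns):
        user = random.choice(utterances[intent])
        dialogue.append((user, respond(intent, recipe, step)))
        if intent == "next_step":
            step += 1
        successors = intent_graph[intent]
        if not successors:  # terminal intent ends the dialogue
            break
        intent = random.choice(successors)
    return dialogue

recipe = {"title": "Lasagna", "steps": ["Boil the noodles.", "Layer and bake."]}
for user, system in generate_dialogue(recipe):
    print(f"USER: {user}\nSYSTEM: {system}")
```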

#### Details

SFT:
- Dialogues: 10k (90/5/5 splits)
- Recipes: 1000

DPO:
- Dialogues: 3k (90/5/5 splits)
- Recipes: 1000 (same recipes used for SFT)

### License

The license is the same as Vicuna's: a non-commercial Apache 2.0 license.

### Paper

["Plan-Grounded Large Language Models for Dual Goal Conversational Settings" (Accepted at EACL 2024)
Diogo Glória-Silva, Rafael Ferreira, Diogo Tavares, David Semedo, João Magalhães](https://arxiv.org/abs/2402.01053)

#### Cite Us!

```
@InProceedings{planllm_eacl24,
author="Glória-Silva, Diogo
and Ferreira, Rafael
and Tavares, Diogo
and Semedo, David
and Magalhães, João",
title="Plan-Grounded Large Language Models for Dual Goal Conversational Settings",
booktitle="European Chapter of the Association for Computational Linguistics (EACL 2024)",
year="2024",
}
```