Update README.md
README.md
CHANGED
@@ -1,21 +1,54 @@
---
library_name: peft
---
## Training procedure

- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16

### Framework versions

- PEFT 0.5.0

---
library_name: peft
tags:
- meta-llama
- code
- instruct
- databricks-dolly-15k
- Llama-2-70b-hf
datasets:
- databricks/databricks-dolly-15k
base_model: meta-llama/Llama-2-70b-hf
license: apache-2.0
---

For our finetuning process, we used the meta-llama/Llama-2-70b-hf base model and the databricks-dolly-15k dataset.

This dataset, a compilation of over 15,000 records, is the result of the dedicated work of thousands of Databricks professionals and was designed specifically to improve the interactive capabilities of ChatGPT-like systems.

The contributors crafted prompt/response pairs across eight distinct instruction categories: the seven described in the InstructGPT paper plus an open-ended, free-form category. To keep the content genuine and original, they were instructed not to source information from the web, except for certain categories where Wikipedia served as reference material, and the use of generative AI for writing instructions or responses was strictly prohibited.

Contributors could also answer questions posed by their peers. They were encouraged to rephrase the original question and to answer only those queries they were confident about.

In some categories, the data comes with reference texts sourced from Wikipedia. Users might find bracketed Wikipedia citation numbers (like [42]) within the context field of the dataset. For smoother downstream applications, it's advisable to exclude these.
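
As a hedged illustration (not part of the original card), the snippet below shows one way to load the dataset with the Hugging Face `datasets` library and strip those bracketed citation markers from the `context` field:

```python
# Minimal sketch: remove bracketed Wikipedia citation markers (e.g. [42])
# from the `context` field of databricks-dolly-15k before downstream use.
import re

from datasets import load_dataset

CITATION_PATTERN = re.compile(r"\[\d+\]")  # matches markers like [42]

def strip_citations(example):
    """Drop citation markers from the reference text, if any."""
    example["context"] = CITATION_PATTERN.sub("", example["context"]).strip()
    return example

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
dolly = dolly.map(strip_citations)

print(dolly[0]["instruction"])
print(dolly[0]["context"][:200])
```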

Our finetuning leveraged [MonsterAPI](https://monsterapi.ai)'s intuitive, no-code [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm).

The process was efficient and surprisingly cost-effective: 3 epochs completed in just 17.5 hours on an A100 80GB GPU. Each epoch took roughly 5.8 hours and cost `$19.25`, bringing the total for all 3 epochs to `$57.75`.

#### Hyperparameters & Run details:
- Epochs: 3
- Cost per epoch: $19.25
- Total finetuning cost: $57.75
- Model path: meta-llama/Llama-2-70b-hf
- Dataset: databricks/databricks-dolly-15k
- Learning rate: not specified in the run data
- Data split: not specified (assumed 90% training / 10% validation)
- Gradient accumulation steps: not specified in the run data
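
The card does not include training code, since the run used MonsterAPI's no-code finetuner. Purely as an illustration, the sketch below shows how the run details above, together with the `bitsandbytes` quantization settings listed in the earlier revision of this card (nf4, double quantization, bfloat16 compute), might map onto an open-source QLoRA-style PEFT setup; the LoRA rank, learning rate, and batch size are not stated anywhere in the card and are placeholders only:

```python
# Illustrative only: not the configuration MonsterAPI actually used.
# The quantization values (nf4, double quant, bfloat16 compute) come from the
# earlier revision of this card; the LoRA and optimizer settings below are
# placeholders, since the card does not specify them.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(  # placeholder adapter settings
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="llama2-70b-dolly-qlora",
    num_train_epochs=3,             # from the run details above
    learning_rate=2e-4,             # placeholder: not given in the card
    per_device_train_batch_size=1,  # placeholder: not given in the card
    bf16=True,
)
```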

#### Prompt Used:

```
### INSTRUCTION:
[instruction]

[context]

### RESPONSE:
[response]
```
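
As a hedged usage sketch (not from the original card), the snippet below fills this template for a single example and generates a completion with the PEFT adapter applied on top of the base model. The adapter repository id is a placeholder; substitute the actual id of this repository:

```python
# Hedged usage sketch: build a prompt from the template above and generate
# with the PEFT adapter loaded on the base model. ADAPTER_REPO is a
# placeholder; the README does not state the repository id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "meta-llama/Llama-2-70b-hf"
ADAPTER_REPO = "your-org/llama2-70b-dolly-adapter"  # placeholder

def build_prompt(instruction: str, context: str = "") -> str:
    """Assemble a prompt following the template shown above."""
    prompt = f"### INSTRUCTION:\n{instruction}\n\n"
    if context:
        prompt += f"{context}\n\n"
    return prompt + "### RESPONSE:\n"

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_REPO)

prompt = build_prompt("Summarize what the databricks-dolly-15k dataset contains.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```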