---
library_name: peft
tags:
- meta-llama
- code
- instruct
- databricks-dolly-15k
- Llama-2-70b-hf
datasets:
- databricks/databricks-dolly-15k
base_model: meta-llama/Llama-2-70b-hf
license: apache-2.0
---

### Finetuning Overview:

**Model Used:** meta-llama/Llama-2-70b-hf  
**Dataset:** databricks/databricks-dolly-15k  

#### Dataset Insights:

The databricks-dolly-15k dataset is a corpus of more than 15,000 instruction-following records generated by thousands of Databricks employees. It is designed to:

- Enable large language models to exhibit ChatGPT-like interactive behavior.
- Provide prompt/response pairs spanning eight instruction categories: the seven from the InstructGPT paper plus an open-ended free-form category.
- Contain original, human-generated content: contributors were instructed to avoid web sources other than Wikipedia (for particular categories) and to avoid generative AI when composing answers.

Contributors could also rephrase and answer questions posed by their peers, an approach that emphasizes accuracy and clarity. Some records include Wikipedia-sourced reference texts in their context field, marked by bracketed citation numbers such as [42]; for downstream applications, it is recommended to remove these.
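
These markers carry no semantic content, so a simple regular expression is usually enough to strip them from the context field before training or inference. A minimal sketch (`strip_citations` is an illustrative helper, not part of the dataset tooling):

```python
import re

def strip_citations(text: str) -> str:
    """Remove bracketed Wikipedia-style citation markers such as [42]."""
    return re.sub(r"\[\d+\]", "", text)

print(strip_citations("The Nile is about 6,650 km long.[1][2]"))
# -> "The Nile is about 6,650 km long."
```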

#### Finetuning Details:

The model was finetuned with [MonsterAPI](https://monsterapi.ai)'s user-friendly [LLM finetuner](https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm). The run:

- Completed 3 epochs in a total of 17.5 hours on a single A100 80GB GPU.
- Cost `$19.25` per epoch (about 5.8 hours each), for a total of `$57.75`, making it notably cost-effective for a model of this size.

#### Hyperparameters & Additional Details:

- **Epochs:** 3
- **Cost Per Epoch:** $19.25
- **Total Finetuning Cost:** $57.75
- **Model Path:** meta-llama/Llama-2-70b-hf
- **Learning Rate:** 0.0002
- **Data Split:** Training 90% / Validation 10%
- **Gradient Accumulation Steps:** 4
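
MonsterAPI's finetuner manages this configuration end to end. For reference, roughly equivalent settings expressed with the open-source `peft`/`transformers` stack might look like the sketch below. Only the epochs, learning rate, data split, and gradient accumulation steps come from the list above; the LoRA rank, alpha, dropout, target modules, and batch size are assumptions, as the card does not list them.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments

# 90% / 10% train/validation split (documented above).
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")
splits = dataset.train_test_split(test_size=0.1, seed=42)

# LoRA adapter settings: rank, alpha, dropout, and target modules are
# assumptions, not values documented on this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# Documented on this card: 3 epochs, learning rate 0.0002,
# gradient accumulation steps 4.
training_args = TrainingArguments(
    output_dir="llama-2-70b-dolly-lora",
    num_train_epochs=3,
    learning_rate=2e-4,
    gradient_accumulation_steps=4,
    per_device_train_batch_size=1,  # assumption: not listed on the card
    bf16=True,
    logging_steps=10,
)
```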

---

### Prompt Structure:


```
### INSTRUCTION:
[instruction]

[context]

### RESPONSE:
[response]
```
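
At inference time, a small helper can assemble prompts in this format. A minimal sketch, assuming the adapter has been published to a Hugging Face repo (`your-org/llama-2-70b-dolly-adapter` below is a placeholder, not this model's actual path):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

def build_prompt(instruction: str, context: str = "") -> str:
    """Assemble a prompt matching the finetuning template above."""
    middle = f"\n{context}\n" if context else ""
    return f"### INSTRUCTION:\n{instruction}\n{middle}\n### RESPONSE:\n"

# Placeholder adapter id: substitute the real adapter path.
adapter_id = "your-org/llama-2-70b-dolly-adapter"

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-70b-hf")
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")

prompt = build_prompt("Summarize the following passage.", "Dolly was the first cloned mammal.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```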

#### Loss Metrics:

Training loss (blue) and validation loss (orange):

![training loss](train-loss.png "Training loss")
