dhmeltzer committed
Commit 7933543
Parent: 40bc805

Model save

Files changed (1):
  1. README.md +67 -0
README.md ADDED

---
base_model: meta-llama/Llama-2-13b-hf
tags:
- generated_from_trainer
model-index:
- name: Llama-2-13b-hf-eli5-cleaned-wiki65k-1024_qlora
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Llama-2-13b-hf-eli5-cleaned-wiki65k-1024_qlora

This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3173
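- Perplexity: exp(1.3173) ≈ 3.73 (derived here for reference, not reported in the original card, assuming the loss is the mean per-token cross-entropy in nats)

Because this is a QLoRA run, the published weights are most likely a PEFT adapter rather than a full checkpoint. Below is a minimal usage sketch, assuming the adapter is hosted in this repository (`dhmeltzer/Llama-2-13b-hf-eli5-cleaned-wiki65k-1024_qlora`, inferred from the card name and author) and that you have access to the gated Llama-2 base model:

```python
# Hedged example: load the Llama-2-13b base model in 4-bit and apply the QLoRA adapter.
# The adapter repo id is an assumption based on the card's name and author.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Llama-2-13b-hf"
adapter_id = "dhmeltzer/Llama-2-13b-hf-eli5-cleaned-wiki65k-1024_qlora"  # assumed repo id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Explain like I'm five: why is the sky blue?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```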

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1

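As a rough guide, these values map onto `transformers.TrainingArguments` as sketched below; this is not the original training script (which is not part of this commit), and the LoRA/quantization settings are not recorded in the card, so only the listed values appear. The total train batch size of 128 is consistent with 16 per device x 8 gradient-accumulation steps on a single device.

```python
# Hedged sketch of how the hyperparameters listed above might be expressed as
# transformers.TrainingArguments. The Adam betas/epsilon match the listed values.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama-2-13b-hf-eli5-cleaned-wiki65k-1024_qlora",
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=8,  # 16 per device * 8 steps = 128 total train batch size
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```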
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.246 | 0.1 | 82 | 1.3429 |
| 1.7867 | 0.2 | 164 | 1.3370 |
| 1.2111 | 0.3 | 246 | 1.3305 |
| 1.419 | 0.4 | 328 | 1.3258 |
| 1.8005 | 0.51 | 410 | 1.3248 |
| 1.1999 | 0.61 | 492 | 1.3216 |
| 1.4048 | 0.71 | 574 | 1.3197 |
| 1.5675 | 0.81 | 656 | 1.3193 |
| 1.2459 | 0.91 | 738 | 1.3173 |

### Framework versions

The model was trained with the following library versions (a quick version check follows the list):
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
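As a small reproducibility aid (not part of the original card), the installed library versions can be compared against the ones listed above; note that 4.34.0.dev0 was a development build of Transformers, typically installed from source.

```python
# Hypothetical environment check: compare installed versions with those reported in the card.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.34.0.dev0",  # development build, installed from source at the time
    "torch": "2.0.1+cu118",
    "datasets": "2.14.5",
    "tokenizers": "0.13.3",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    have = installed[name]
    status = "OK" if have == want else "differs"
    print(f"{name}: installed {have}, card reports {want} ({status})")
```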