VitaliiVrublevskyi committed on
Commit f574f47
1 Parent(s): 32ff37d

update model card README.md

Files changed (1)
  1. README.md +22 -19
README.md CHANGED
```diff
@@ -4,10 +4,12 @@ tags:
 - generated_from_trainer
 datasets:
 - glue
+metrics:
+- accuracy
+- f1
 model-index:
 - name: Llama-2-7b-hf-finetuned-mrpc
   results: []
-library_name: peft
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
```
```diff
@@ -16,6 +18,10 @@ should probably proofread and complete it, then remove this comment. -->
 # Llama-2-7b-hf-finetuned-mrpc
 
 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the glue dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.4716
+- Accuracy: 0.7672
+- F1: 0.8403
 
 ## Model description
 
```
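The accuracy and F1 figures added above are the two standard metrics for GLUE's MRPC task. As a minimal sketch (not code from this repository), numbers like these are typically produced with the `evaluate` library via a `Trainer`-style `compute_metrics` callback:

```python
import numpy as np
import evaluate

# GLUE's MRPC subset reports accuracy and F1.
metric = evaluate.load("glue", "mrpc")

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels); argmax over the last
    # dimension turns logits into class predictions.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)
```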
```diff
@@ -31,38 +37,35 @@ More information needed
 
 ## Training procedure
 
-
-The following `bitsandbytes` quantization config was used during training:
-- load_in_8bit: True
-- load_in_4bit: False
-- llm_int8_threshold: 6.0
-- llm_int8_skip_modules: None
-- llm_int8_enable_fp32_cpu_offload: False
-- llm_int8_has_fp16_weight: False
-- bnb_4bit_quant_type: fp4
-- bnb_4bit_use_double_quant: False
-- bnb_4bit_compute_dtype: float32
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
-- train_batch_size: 2
-- eval_batch_size: 2
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 1
+- num_epochs: 10
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1  |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|
-| No log        | 1.0   | 19   | 0.5879          | 0.75     | 0.8 |
+| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
+| No log        | 1.0   | 230  | 0.6045          | 0.7206   | 0.8155 |
+| No log        | 2.0   | 460  | 0.6488          | 0.6912   | 0.8158 |
+| 0.6326        | 3.0   | 690  | 0.5236          | 0.7279   | 0.8235 |
+| 0.6326        | 4.0   | 920  | 0.5273          | 0.7255   | 0.8282 |
+| 0.5602        | 5.0   | 1150 | 0.5246          | 0.7402   | 0.8044 |
+| 0.5602        | 6.0   | 1380 | 0.4893          | 0.75     | 0.8311 |
+| 0.5139        | 7.0   | 1610 | 0.4884          | 0.7623   | 0.8289 |
+| 0.5139        | 8.0   | 1840 | 0.4989          | 0.7402   | 0.8307 |
+| 0.4754        | 9.0   | 2070 | 0.4732          | 0.7745   | 0.8435 |
+| 0.4754        | 10.0  | 2300 | 0.4716          | 0.7672   | 0.8403 |
 
 
 ### Framework versions
 
-- PEFT 0.4.0
 - Transformers 4.31.0
 - Pytorch 2.0.1+cu118
 - Datasets 2.14.5
```
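The `bitsandbytes` settings removed from the card above correspond one-to-one to the fields of `transformers.BitsAndBytesConfig`. A minimal sketch of how a config with exactly those values would be built and passed at load time (a reconstruction from the card entries, not the repository's actual training script):

```python
import torch
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig

# Mirrors the removed card entries: plain 8-bit loading, default int8 behavior.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",          # ignored while load_in_4bit=False
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    num_labels=2,  # MRPC is binary paraphrase classification
    quantization_config=bnb_config,
)
```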
 
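Likewise, the updated hyperparameters map directly onto `transformers.TrainingArguments`. A sketch with the listed values; the output directory and per-epoch evaluation cadence are assumptions inferred from the results table, not documented settings:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama-2-7b-hf-finetuned-mrpc",  # assumed output path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # assumption: matches the per-epoch rows above
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults,
    # so they need no explicit arguments here.
)
```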
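Finally, this commit drops `library_name: peft` and the PEFT 0.4.0 pin from the card, but the model was trained with PEFT, so the repository presumably holds adapter weights rather than a full checkpoint. A hedged loading sketch; the adapter repo id is inferred from the author and model name and should be checked against the actual files:

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

# Load the base model first, then attach the fine-tuned adapter on top.
base = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    num_labels=2,
)
model = PeftModel.from_pretrained(
    base,
    "VitaliiVrublevskyi/Llama-2-7b-hf-finetuned-mrpc",  # assumed repo id
)
```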