VitaliiVrublevskyi committed
Commit 95cb2d7 · 1 Parent(s): 1200584

update model card README.md

Files changed (1)
  1. README.md +18 -24
README.md CHANGED
@@ -10,7 +10,6 @@ metrics:
 model-index:
 - name: Llama-2-7b-hf-finetuned-mrpc-v0.4
   results: []
-library_name: peft
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -20,9 +19,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the glue dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5492
-- Accuracy: 0.7402
-- F1: 0.8251
+- Loss: 0.4030
+- Accuracy: 0.8407
+- F1: 0.8862
 
 ## Model description
 
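The accuracy/F1 pair above is the standard GLUE MRPC metric set. A minimal sketch of how these two scores are typically computed with the `evaluate` library; the commit contains no evaluation code, and the prediction and label arrays below are purely illustrative:

```python
import evaluate

# GLUE's MRPC metric bundle reports both accuracy and F1,
# matching the two scores listed in the card.
metric = evaluate.load("glue", "mrpc")

predictions = [1, 0, 1, 1]  # illustrative model outputs
references = [1, 0, 0, 1]   # illustrative gold labels
print(metric.compute(predictions=predictions, references=references))
# {'accuracy': 0.75, 'f1': 0.8}
```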
@@ -38,17 +37,6 @@ More information needed
 
 ## Training procedure
 
-
-The following `bitsandbytes` quantization config was used during training:
-- load_in_8bit: True
-- load_in_4bit: False
-- llm_int8_threshold: 6.0
-- llm_int8_skip_modules: None
-- llm_int8_enable_fp32_cpu_offload: False
-- llm_int8_has_fp16_weight: False
-- bnb_4bit_quant_type: fp4
-- bnb_4bit_use_double_quant: False
-- bnb_4bit_compute_dtype: float32
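The removed block records the 8-bit quantization the model was trained under. The field names match the parameters of transformers' `BitsAndBytesConfig`, so the same settings can be expressed as the sketch below; the loading call at the end is an assumed usage pattern, not code from the card:

```python
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig

# Same values as the removed list above; field names map one-to-one
# onto BitsAndBytesConfig parameters.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype="float32",
)

# Illustrative usage: loading the base model in 8-bit before attaching
# LoRA adapters is the usual PEFT recipe.
model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    num_labels=2,
    quantization_config=bnb_config,
)
```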
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -58,22 +46,28 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 5
+- num_epochs: 12
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
-| No log        | 1.0   | 230  | 0.6542          | 0.6446   | 0.7695 |
-| No log        | 2.0   | 460  | 0.5938          | 0.6912   | 0.7968 |
-| 0.6489        | 3.0   | 690  | 0.5694          | 0.7230   | 0.8151 |
-| 0.6489        | 4.0   | 920  | 0.5503          | 0.7230   | 0.8138 |
-| 0.5299        | 5.0   | 1150 | 0.5492          | 0.7402   | 0.8251 |
+| Training Loss | Epoch | Step | Accuracy | F1     | Validation Loss |
+|:-------------:|:-----:|:----:|:--------:|:------:|:---------------:|
+| No log        | 1.0   | 230  | 0.6446   | 0.7695 | 0.6542          |
+| No log        | 2.0   | 460  | 0.6912   | 0.7968 | 0.5938          |
+| 0.6489        | 3.0   | 690  | 0.7230   | 0.8151 | 0.5694          |
+| 0.6489        | 4.0   | 920  | 0.7230   | 0.8138 | 0.5503          |
+| 0.5299        | 5.0   | 1150 | 0.7402   | 0.8251 | 0.5492          |
+| 0.5299        | 6.0   | 1380 | 0.7794   | 0.8432 | 0.4880          |
+| 0.4687        | 7.0   | 1610 | 0.8064   | 0.8663 | 0.4559          |
+| 0.4687        | 8.0   | 1840 | 0.8186   | 0.8750 | 0.4298          |
+| 0.374         | 9.0   | 2070 | 0.8284   | 0.8818 | 0.4210          |
+| 0.374         | 10.0  | 2300 | 0.8456   | 0.8916 | 0.3953          |
+| 0.3096        | 11.0  | 2530 | 0.8431   | 0.8897 | 0.4074          |
+| 0.3096        | 12.0  | 2760 | 0.8407   | 0.8862 | 0.4030          |
 
 
 ### Framework versions
 
-- PEFT 0.4.0
 - Transformers 4.31.0
 - Pytorch 2.0.1+cu118
 - Datasets 2.14.5
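For reference, the hyperparameters listed in this hunk map onto transformers' `TrainingArguments` roughly as sketched below. The learning rate and batch sizes sit outside the visible hunk, so those two values are placeholders, not the card's actual settings:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Llama-2-7b-hf-finetuned-mrpc-v0.4",
    learning_rate=2e-4,              # placeholder: not shown in this hunk
    per_device_train_batch_size=8,   # placeholder: not shown in this hunk
    num_train_epochs=12,             # raised from 5 in this commit
    seed=42,
    lr_scheduler_type="linear",      # linear decay, as listed
    adam_beta1=0.9,                  # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # and epsilon=1e-08
)
```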
 