Adi-ds committed on
Commit ab64665 · 1 Parent(s): c70094a

update model card README.md

Files changed (1):
  1. README.md (+18 −13)
README.md CHANGED
@@ -4,7 +4,6 @@ tags:
 model-index:
 - name: Kaggle-Science-LLM
   results: []
-library_name: peft
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -13,6 +12,8 @@ should probably proofread and complete it, then remove this comment. -->
 # Kaggle-Science-LLM
 
 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 4.4821
 
 ## Model description
 
@@ -28,17 +29,6 @@ More information needed
 
 ## Training procedure
 
-
-The following `bitsandbytes` quantization config was used during training:
-- load_in_8bit: False
-- load_in_4bit: True
-- llm_int8_threshold: 6.0
-- llm_int8_skip_modules: None
-- llm_int8_enable_fp32_cpu_offload: False
-- llm_int8_has_fp16_weight: False
-- bnb_4bit_quant_type: nf4
-- bnb_4bit_use_double_quant: True
-- bnb_4bit_compute_dtype: bfloat16
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -54,9 +44,24 @@ The following hyperparameters were used during training:
 - training_steps: 50
 - label_smoothing_factor: 0.1
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 6.6677        | 0.01  | 5    | 6.5120          |
+| 6.4854        | 0.02  | 10   | 6.3479          |
+| 6.2537        | 0.02  | 15   | 6.1641          |
+| 6.0912        | 0.03  | 20   | 5.9550          |
+| 5.8341        | 0.04  | 25   | 5.7246          |
+| 5.6128        | 0.05  | 30   | 5.4776          |
+| 5.3665        | 0.06  | 35   | 5.2728          |
+| 5.1581        | 0.06  | 40   | 5.0129          |
+| 4.9526        | 0.07  | 45   | 4.7501          |
+| 4.6988        | 0.08  | 50   | 4.4821          |
+
+
 ### Framework versions
 
-- PEFT 0.4.0
 - Transformers 4.30.2
 - Pytorch 2.0.0
 - Datasets 2.1.0