kei0902 committed
Commit 148a9cc · verified · 1 Parent(s): 5815ebc

Fine-tuned educational model with LoRA and 8-bit quantization
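The commit message references LoRA with 8-bit quantization, although the diff below only touches the model card. For context, here is a minimal sketch of what such a setup typically looks like with transformers, peft, and bitsandbytes; the rank, alpha, target modules, and dropout are illustrative assumptions, not values taken from this commit.

```python
# Sketch of a LoRA + 8-bit fine-tuning setup for google/gemma-2-2b.
# Hyperparameters here (r, lora_alpha, target_modules, dropout) are
# assumptions for illustration, not taken from this repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "google/gemma-2-2b"

# Load the base model with 8-bit weights to reduce memory during fine-tuning.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Prepare the quantized model for training, then attach LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```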

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -5,14 +5,14 @@ base_model: google/gemma-2-2b
 tags:
 - generated_from_trainer
 model-index:
-- name: tuned-educational-model
+- name: t-modelV2
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# tuned-educational-model
+# t-modelV2
 
 This model is a fine-tuned version of [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) on an unknown dataset.
 
@@ -33,12 +33,12 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0003
-- train_batch_size: 1
+- learning_rate: 8.698757885449884e-05
+- train_batch_size: 4
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 16
-- total_train_batch_size: 16
+- total_train_batch_size: 64
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 1
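The updated values are internally consistent: total_train_batch_size = train_batch_size × gradient_accumulation_steps = 4 × 16 = 64, which matches training on a single device. As a sketch, these hyperparameters map onto transformers TrainingArguments roughly as follows; output_dir and anything else not listed in the card are placeholders.

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir is a placeholder; values not in the card are left at defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="t-modelV2",               # placeholder, named after the card
    learning_rate=8.698757885449884e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,       # 4 x 16 = effective batch size of 64
    lr_scheduler_type="linear",
    num_train_epochs=1,
    seed=42,
    optim="adamw_torch",                  # betas=(0.9, 0.999) and eps=1e-08 are its defaults
)
```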