FunPang committed
Commit f73c8c0 · verified · Parent(s): 25eec66

FunPang/whisper-large-V3-QLoRA-Cantones
README.md CHANGED
@@ -1,15 +1,15 @@
----
-base_model: openai/whisper-large-v3
-datasets:
-- common_voice_13_0
-library_name: peft
-license: apache-2.0
-tags:
-- generated_from_trainer
-model-index:
-- name: whisper-large-V3-QLoRA-Cantones
-  results: []
----
+---
+base_model: openai/whisper-large-v3
+datasets:
+- common_voice_13_0
+library_name: peft
+license: apache-2.0
+tags:
+- generated_from_trainer
+model-index:
+- name: whisper-large-V3-QLoRA-Cantones
+  results: []
+---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
@@ -18,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the common_voice_13_0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.8906
+- Loss: 3.0897
 
 ## Model description
 
@@ -44,14 +44,16 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 50
-- num_epochs: 1
+- num_epochs: 3
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 3.0483        | 1.0   | 1753 | 2.8906          |
+| 3.7995        | 1.0   | 1753 | 3.7999          |
+| 3.5221        | 2.0   | 3506 | 3.3667          |
+| 3.0697        | 3.0   | 5259 | 3.0897          |
 
 
 ### Framework versions
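The updated card trains for 3 epochs of 1753 optimizer steps each (5259 steps total) under a linear scheduler with 50 warmup steps. A minimal sketch of that schedule in plain Python; the base learning rate is an assumption for illustration, since it is not shown in this diff:

```python
def linear_warmup_lr(step, base_lr=1e-3, warmup_steps=50, total_steps=5259):
    """Linear warmup then linear decay, matching the card's
    lr_scheduler_type: linear and lr_scheduler_warmup_steps: 50.
    base_lr=1e-3 is an assumed value, not taken from this diff."""
    if step < warmup_steps:
        # Ramp linearly from 0 up to base_lr over the warmup steps.
        return base_lr * step / warmup_steps
    # Then decay linearly from base_lr down to 0 at total_steps.
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

# The peak is hit right at the end of warmup, then decays to zero.
print(linear_warmup_lr(50))    # peak learning rate
print(linear_warmup_lr(5259))  # final learning rate (0.0)
```

This mirrors the common warmup-then-linear-decay shape that `lr_scheduler_type: linear` denotes in Trainer-generated cards.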
adapter_config.json CHANGED
@@ -23,8 +23,8 @@
     "rank_pattern": {},
     "revision": null,
     "target_modules": [
-        "v_proj",
-        "q_proj"
+        "q_proj",
+        "v_proj"
     ],
     "task_type": null,
     "use_dora": false,
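The adapter_config.json change only reorders `target_modules`; the set of projections the LoRA adapter attaches to (q_proj and v_proj) is unchanged. A quick stdlib check over the two fragments from this diff:

```python
import json

# Old and new target_modules fragments, as they appear in this commit's diff.
old = json.loads('{"target_modules": ["v_proj", "q_proj"]}')
new = json.loads('{"target_modules": ["q_proj", "v_proj"]}')

# The list order differs, but the set of targeted modules is identical,
# so LoRA weights attach to the same attention projections either way.
assert old["target_modules"] != new["target_modules"]
assert set(old["target_modules"]) == set(new["target_modules"])
print(sorted(new["target_modules"]))  # ['q_proj', 'v_proj']
```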
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7e57b9a1b5b41c1fa8316b569182af238de1cee4835ed9c0aaa7da2d16c0df51
+oid sha256:0c5235a2a30dfb9048014ac5ad649b0b97060c14c66982a4fc8b333df42376a0
 size 62969640
runs/Sep18_22-31-10_asus2/events.out.tfevents.1726723871.asus2.30204.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:118c7c822bb8325ef0e2cf30cc32e025ce6f9269df9410cc47f9bf10aa543a85
+size 18493
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c1c36d13c1801623da05d42e7cb25ec5261f9ffce8cdf4e2d71d66df7c2e6d18
+oid sha256:737e92a55cdc463cffc7cc3606bbe4baefefb28640cf40e31a62c45f85fb845b
 size 5432