breco committed on
Commit 7ff4b64 · verified · 1 Parent(s): 1b95502

Model save

Files changed (2)
  1. README.md +24 -24
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,7 +1,5 @@
 ---
 library_name: transformers
-language:
-- spa
 license: apache-2.0
 base_model: openai/whisper-tiny
 tags:
@@ -9,19 +7,19 @@ tags:
 metrics:
 - wer
 model-index:
-- name: Whisper Tiny Few Audios - vfranchis
+- name: whisper-tiny-few-audios
   results: []
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->

-# Whisper Tiny Few Audios - vfranchis
+# whisper-tiny-few-audios

-This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Few audios 1.0 dataset.
+This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: nan
-- Wer: 100.0
+- Loss: 0.5693
+- Wer: 30.7692

 ## Model description

@@ -41,36 +39,38 @@

 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 4
+- train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 4
+- gradient_accumulation_steps: 2
 - total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 10
-- training_steps: 100
+- lr_scheduler_warmup_steps: 25
+- training_steps: 300
 - mixed_precision_training: Native AMP

 ### Training results

-| Training Loss | Epoch | Step | Validation Loss | Wer |
-|:-------------:|:-------:|:----:|:---------------:|:-------:|
-| 2.5797 | 3.0769 | 10 | nan | 83.3333 |
-| 1.45 | 6.1538 | 20 | nan | 58.3333 |
-| 0.5973 | 9.2308 | 30 | nan | 33.3333 |
-| 0.234 | 12.3077 | 40 | nan | 33.3333 |
-| 0.1089 | 15.3846 | 50 | nan | 25.0 |
-| 0.052 | 18.4615 | 60 | nan | 33.3333 |
-| 0.0425 | 21.5385 | 70 | nan | 33.3333 |
-| 0.2025 | 24.6154 | 80 | nan | 50.0 |
-| 1.957 | 27.6923 | 90 | nan | 100.0 |
-| 6.8512 | 30.7692 | 100 | nan | 100.0 |
+| Training Loss | Epoch | Step | Validation Loss | Wer |
+|:-------------:|:-----:|:----:|:---------------:|:-------:|
+| 1.4694 | 0.4 | 25 | 1.0077 | 38.4615 |
+| 0.2677 | 0.8 | 50 | 0.7482 | 46.1538 |
+| 0.1034 | 1.2 | 75 | 0.6347 | 46.1538 |
+| 0.0671 | 1.6 | 100 | 0.6315 | 46.1538 |
+| 0.0546 | 2.0 | 125 | 0.5770 | 30.7692 |
+| 0.0299 | 2.4 | 150 | 0.5620 | 30.7692 |
+| 0.0219 | 2.8 | 175 | 0.5787 | 30.7692 |
+| 0.0218 | 3.2 | 200 | 0.5711 | 30.7692 |
+| 0.0127 | 3.6 | 225 | 0.5717 | 30.7692 |
+| 0.013 | 4.0 | 250 | 0.5558 | 30.7692 |
+| 0.0084 | 4.4 | 275 | 0.5680 | 30.7692 |
+| 0.0102 | 4.8 | 300 | 0.5693 | 30.7692 |


 ### Framework versions

 - Transformers 4.44.2
-- Pytorch 2.3.1+cu121
+- Pytorch 2.4.1+cu121
 - Datasets 2.21.0
 - Tokenizers 0.19.1
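
The updated hyperparameters are internally consistent: a per-device train batch size of 8 with 2 gradient-accumulation steps gives the listed total train batch size of 8 × 2 = 16. As an illustration only, the sketch below shows how these values might be expressed with `Seq2SeqTrainingArguments` from `transformers`; the `output_dir` and `predict_with_generate` settings are assumptions, not taken from this commit.

```python
# Sketch under assumptions: mirrors the hyperparameters in the updated card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-few-audios",  # assumed run/repo name
    learning_rate=1e-5,
    per_device_train_batch_size=8,         # 8 * 2 accumulation steps = 16 total
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    warmup_steps=25,
    max_steps=300,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                             # "Native AMP"; requires a CUDA device
    predict_with_generate=True,            # assumption: used for the WER evaluation
)
```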
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ab3e686d03a839245777c0751cec0b479bfc633ffea47922035d9e77bd048d7c
+oid sha256:020facb9c354e1b993b72ee3eb70e366b0d60888d7ee627d2011e34ce72b8898
 size 151061672
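
Only the Git LFS pointer for `model.safetensors` changed in this commit: the file size is unchanged (151061672 bytes) while the sha256 oid is new, meaning the weight file's contents were replaced. A minimal sketch, assuming a locally downloaded copy at `model.safetensors`, for checking that it matches the new pointer:

```python
# Sketch: compare a local model.safetensors against the new LFS pointer above.
# The local path is an assumption; the digest and size come from the pointer.
import hashlib
import os

EXPECTED_SHA256 = "020facb9c354e1b993b72ee3eb70e366b0d60888d7ee627d2011e34ce72b8898"
EXPECTED_SIZE = 151061672  # bytes, from the pointer's "size" field

path = "model.safetensors"  # assumed local path

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"
assert digest.hexdigest() == EXPECTED_SHA256, "sha256 mismatch"
print("model.safetensors matches the LFS pointer")
```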