crossdelenna committed (verified)
Commit 6c91f1b · 1 Parent(s): d51a8ad

End of training
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: transformers
 license: apache-2.0
-base_model: openai/whisper-medium.en
+base_model: crossdelenna/medium_cross.en
 tags:
 - generated_from_trainer
 metrics:
@@ -16,10 +16,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # medium_cross.en
 
-This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on an unknown dataset.
+This model is a fine-tuned version of [crossdelenna/medium_cross.en](https://huggingface.co/crossdelenna/medium_cross.en) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.7616
-- Wer: 34.6503
+- Loss: 0.3034
+- Wer: 15.1384
 
 ## Model description
 
@@ -39,21 +39,22 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 16
-- eval_batch_size: 8
+- train_batch_size: 22
+- eval_batch_size: 22
 - seed: 42
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 10
-- training_steps: 401
+- training_steps: 1051
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch  | Step | Validation Loss | Wer     |
 |:-------------:|:------:|:----:|:---------------:|:-------:|
-| 2.2838        | 0.5155 | 200  | 1.0021          | 36.0184 |
-| 0.9275        | 1.0309 | 400  | 0.7616          | 34.6503 |
+| 0.664         | 1.2411 | 350  | 0.3998          | 18.2094 |
+| 0.4625        | 2.4823 | 700  | 0.3244          | 16.0633 |
+| 0.3703        | 3.7234 | 1050 | 0.3034          | 15.1384 |
 
 
 ### Framework versions
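For reference, the updated hyperparameter list maps onto the usual `transformers` `Seq2SeqTrainingArguments` setup roughly as sketched below. This is a minimal sketch, not the author's actual training script: `output_dir` is a placeholder, and the card's `train_batch_size`/`eval_batch_size` are assumed to be per-device values.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of training arguments matching the hyperparameters in the updated
# card. output_dir is a placeholder; batch sizes are assumed per-device.
training_args = Seq2SeqTrainingArguments(
    output_dir="medium_cross.en",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=22,
    per_device_eval_batch_size=22,
    seed=42,
    optim="adamw_torch",           # AdamW with default betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    lr_scheduler_kwargs={},
    warmup_steps=10,
    max_steps=1051,                # training_steps in the card
    fp16=True,                     # "Native AMP" mixed precision
)
```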
runs/Feb10_17-08-50_b24e50d8c658/events.out.tfevents.1739207638.b24e50d8c658.405.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:73c1659cfd6ae1e78843d542ba5d0a222558ca0c38eb156a9efc0f9ee2518801
-size 7510
+oid sha256:dbbdd3a6db533de96ac1fc8f73287ac223d3cfa065e5a9414dcce2637b47b399
+size 7864
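Since the card now reports a WER of 15.14 on its evaluation set, a quick way to try the updated checkpoint is the standard `transformers` ASR pipeline. A minimal usage sketch; the audio path is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned Whisper checkpoint from the Hub.
asr = pipeline("automatic-speech-recognition", model="crossdelenna/medium_cross.en")

# "sample.wav" is a placeholder for any English audio file.
print(asr("sample.wav")["text"])
```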