maghrane committed on
Commit 29df87b · verified · 1 Parent(s): 501879b

End of training

Files changed (1):
  1. README.md +22 -17
README.md CHANGED
@@ -1,7 +1,7 @@
  ---
  library_name: transformers
  license: mit
- base_model: maghrane/speecht5_finetuned_Mar
+ base_model: microsoft/speecht5_tts
  tags:
  - generated_from_trainer
  model-index:
@@ -14,9 +14,9 @@ should probably proofread and complete it, then remove this comment. -->

  # speecht5_finetuned_Mar

- This model is a fine-tuned version of [maghrane/speecht5_finetuned_Mar](https://huggingface.co/maghrane/speecht5_finetuned_Mar) on an unknown dataset.
+ This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.5000
+ - Loss: 0.4845

  ## Model description

@@ -43,24 +43,29 @@ The following hyperparameters were used during training:
  - total_train_batch_size: 8
  - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 100
- - training_steps: 1000
+ - lr_scheduler_warmup_steps: 50
+ - training_steps: 1500
  - mixed_precision_training: Native AMP

  ### Training results

- | Training Loss | Epoch  | Step | Validation Loss |
- |:-------------:|:------:|:----:|:---------------:|
- | 0.5642        | 0.7976 | 100  | 0.5996          |
- | 0.5648        | 1.5952 | 200  | 0.5856          |
- | 0.5542        | 2.3928 | 300  | 0.5461          |
- | 0.5460        | 3.1904 | 400  | 0.5565          |
- | 0.5493        | 3.9880 | 500  | 0.5271          |
- | 0.5203        | 4.7856 | 600  | 0.5404          |
- | 0.5214        | 5.5833 | 700  | 0.5191          |
- | 0.5208        | 6.3809 | 800  | 0.5082          |
- | 0.5092        | 7.1785 | 900  | 0.5037          |
- | 0.4999        | 7.9761 | 1000 | 0.5000          |
+ | Training Loss | Epoch   | Step | Validation Loss |
+ |:-------------:|:-------:|:----:|:---------------:|
+ | 0.7023        | 0.7976  | 100  | 0.6419          |
+ | 0.6105        | 1.5952  | 200  | 0.5699          |
+ | 0.5895        | 2.3928  | 300  | 0.5631          |
+ | 0.5796        | 3.1904  | 400  | 0.5530          |
+ | 0.5818        | 3.9880  | 500  | 0.5388          |
+ | 0.5448        | 4.7856  | 600  | 0.5399          |
+ | 0.5620        | 5.5833  | 700  | 0.5166          |
+ | 0.5420        | 6.3809  | 800  | 0.5207          |
+ | 0.5374        | 7.1785  | 900  | 0.5118          |
+ | 0.5263        | 7.9761  | 1000 | 0.5009          |
+ | 0.5079        | 8.7737  | 1100 | 0.4965          |
+ | 0.5108        | 9.5713  | 1200 | 0.4884          |
+ | 0.4970        | 10.3689 | 1300 | 0.4883          |
+ | 0.4915        | 11.1665 | 1400 | 0.4896          |
+ | 0.5061        | 11.9641 | 1500 | 0.4845          |

  ### Framework versions
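
As a side note, the Epoch and Step columns in the updated training-results table are mutually consistent with a fixed number of optimizer steps per epoch. A quick sanity check of that bookkeeping (the steps-per-epoch ratio and the dataset-size estimate below are inferred from the table and the `total_train_batch_size`, not stated anywhere in the card):

```python
# Sanity-check the epoch/step bookkeeping in the training-results table.
# Assumption: epoch = step / steps_per_epoch with a constant steps_per_epoch
# (inferred from the logged rows, not stated in the card).
steps_per_epoch = 100 / 0.7976  # first row: step 100 logged at epoch 0.7976

# Every other logged (step, epoch) pair should obey the same ratio.
for step, epoch in [(500, 3.9880), (1000, 7.9761), (1500, 11.9641)]:
    assert abs(step / steps_per_epoch - epoch) < 0.01

# With total_train_batch_size = 8, this implies the training set holds
# roughly steps_per_epoch * 8 examples (an estimate, not a stated fact).
dataset_size = round(steps_per_epoch * 8)
print(dataset_size)  # -> 1003
```

This also explains why bumping `training_steps` from 1000 to 1500 moved the final epoch from about 8 to about 12: the per-epoch step count is unchanged between the two runs.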