fahadqazi committed (verified) · Commit 1b2e7b2 · Parent: c76b640

End of training
README.md CHANGED
@@ -15,8 +15,6 @@ should probably proofread and complete it, then remove this comment. -->
 # Sindhi-TTS
 
 This model is a fine-tuned version of [fahadqazi/Sindhi-TTS](https://huggingface.co/fahadqazi/Sindhi-TTS) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.4864
 
 ## Model description
 
@@ -35,7 +33,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
+- learning_rate: 5e-05
 - train_batch_size: 8
 - eval_batch_size: 2
 - seed: 42
@@ -44,25 +42,9 @@ The following hyperparameters were used during training:
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 100
-- training_steps: 1000
+- training_steps: 5000
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:------:|:----:|:---------------:|
-| 0.6189 | 0.2298 | 100 | 0.5837 |
-| 0.5426 | 0.4595 | 200 | 0.5275 |
-| 0.5237 | 0.6893 | 300 | 0.5106 |
-| 0.5118 | 0.9190 | 400 | 0.5018 |
-| 0.5032 | 1.1488 | 500 | 0.4960 |
-| 0.5044 | 1.3785 | 600 | 0.4939 |
-| 0.5042 | 1.6083 | 700 | 0.4905 |
-| 0.4985 | 1.8380 | 800 | 0.4895 |
-| 0.4938 | 2.0678 | 900 | 0.4871 |
-| 0.496 | 2.2975 | 1000 | 0.4864 |
-
-
 ### Framework versions
 
 - Transformers 4.46.2
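The updated hyperparameters describe a linear schedule: warmup over 100 steps to a peak learning rate of 5e-05, then linear decay to zero at step 5000. A minimal sketch of that schedule (the helper name is hypothetical, not part of this repository):

```python
def lr_at_step(step: int, base_lr: float = 5e-5,
               warmup_steps: int = 100, total_steps: int = 5000) -> float:
    """Linear warmup to base_lr, then linear decay to 0.

    Mirrors lr_scheduler_type: linear with lr_scheduler_warmup_steps: 100
    and training_steps: 5000 from the hyperparameters above.
    """
    if step < warmup_steps:
        # Ramp up proportionally during warmup.
        return base_lr * step / warmup_steps
    # Decay linearly from base_lr at end of warmup to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

At step 50 (mid-warmup) this yields half the peak rate; at step 5000 it reaches zero, which is why raising `training_steps` from 1000 to 5000 stretches the decay.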
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ea533fb214f0bed5164256629469ae495157aa322bec155d307412cad47bdde1
+oid sha256:9e6d1d46de8f60a8268bbe764cd367c278923086dd3ab4f1628a5d9e1bdb9ae3
 size 617574792
runs/Nov17_22-53-01_cef52f5e6380/events.out.tfevents.1731883988.cef52f5e6380.490.7 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8e60145832ae35401e09280cf409d0d42b1a0dea1fa5e0c508be7968fb683349
-size 17742
+oid sha256:8e8f9b2413d37a08cb8300d5874153deed34682ad1deaeb3d3d77d45abebb099
+size 18857
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1596df1d4bb0063b0c3160d79a2816cc8a7fab4272c1a8989c862352288fa554
+oid sha256:443bfeed0a30d1047c0fee4f55b67ed51a0c99ae556fa2376f18d30deaacaf32
 size 5432
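The binary files above change only through their Git LFS pointers: each pointer records the spec version, an `oid` (the sha256 of the file contents), and the byte size. A hedged sketch of how such a pointer's fields are derived (the helper name is hypothetical; real Git LFS writes the pointer text itself):

```python
import hashlib
import os

def lfs_pointer_fields(path: str) -> dict:
    """Compute the three Git LFS pointer fields for a local file.

    The oid is the sha256 digest of the raw file contents, hashed in
    chunks so large model weights never need to fit in memory.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return {
        "version": "https://git-lfs.github.com/spec/v1",
        "oid": "sha256:" + digest.hexdigest(),
        "size": os.path.getsize(path),
    }
```

This is why the `oid` changes in every diff above while `size` may stay the same (as for `training_args.bin`): different bytes always produce a different digest, but not necessarily a different length.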