Edmon02 committed on
Commit c8ea053
1 Parent(s): 2cd8b03

End of training

README.md CHANGED
@@ -1,9 +1,8 @@
 ---
-base_model: Edmon02/speecht5_finetuned_voxpopuli_hy
+license: mit
+base_model: Edmon02/speecht5_finetuned_hy
 tags:
 - generated_from_trainer
-datasets:
-- common_voice_17_0
 model-index:
 - name: speecht5_finetuned_voxpopuli_nl
   results: []
@@ -14,9 +13,14 @@ should probably proofread and complete it, then remove this comment. -->
 
 # speecht5_finetuned_voxpopuli_nl
 
-This model is a fine-tuned version of [Edmon02/speecht5_finetuned_voxpopuli_hy](https://huggingface.co/Edmon02/speecht5_finetuned_voxpopuli_hy) on the common_voice_17_0 dataset.
+This model is a fine-tuned version of [Edmon02/speecht5_finetuned_hy](https://huggingface.co/Edmon02/speecht5_finetuned_hy) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6335
+- eval_loss: 0.5525
+- eval_runtime: 78.2993
+- eval_samples_per_second: 50.537
+- eval_steps_per_second: 25.275
+- epoch: 1.7973
+- step: 2000
 
 ## Model description
 
@@ -47,19 +51,9 @@ The following hyperparameters were used during training:
 - training_steps: 4000
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:--------:|:----:|:---------------:|
-| 0.6726 | 25.1572 | 1000 | 0.6327 |
-| 0.6491 | 50.3145 | 2000 | 0.6283 |
-| 0.6394 | 75.4717 | 3000 | 0.6306 |
-| 0.6434 | 100.6289 | 4000 | 0.6335 |
-
-
 ### Framework versions
 
-- Transformers 4.41.2
-- Pytorch 2.3.0+cu121
+- Transformers 4.43.3
+- Pytorch 2.4.0+cu121
 - Datasets 2.20.0
 - Tokenizers 0.19.1
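The README above documents a SpeechT5 text-to-speech fine-tune, so a short usage sketch may help. It assumes the checkpoint is published under the card's model-index name (`Edmon02/speecht5_finetuned_voxpopuli_nl`), that the repo ships a compatible processor, and that a speaker x-vector from the public `Matthijs/cmu-arctic-xvectors` dataset is acceptable; none of this is confirmed by the commit itself.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# Hypothetical repo id, inferred from the card's model-index name.
checkpoint = "Edmon02/speecht5_finetuned_voxpopuli_nl"

processor = SpeechT5Processor.from_pretrained(checkpoint)
model = SpeechT5ForTextToSpeech.from_pretrained(checkpoint)
# Generic HiFi-GAN vocoder released with the original SpeechT5 TTS model.
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any 512-dim speaker x-vector works; this dataset is a common public source.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hello, this is a test.", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)

# SpeechT5 generates 16 kHz audio; write the waveform to disk.
sf.write("output.wav", speech.numpy(), samplerate=16000)
```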
generation_config.json CHANGED
@@ -5,5 +5,5 @@
   "eos_token_id": 2,
   "max_length": 1876,
   "pad_token_id": 1,
-  "transformers_version": "4.41.2"
+  "transformers_version": "4.43.3"
 }
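For context, `generation_config.json` is the file that `transformers` loads into a `GenerationConfig` object. A minimal sketch (again using the hypothetical repo id from the previous example) of inspecting the fields this commit touches:

```python
from transformers import GenerationConfig

# Hypothetical repo id; a local path to a clone of this repo also works.
gen_config = GenerationConfig.from_pretrained("Edmon02/speecht5_finetuned_voxpopuli_nl")

print(gen_config.max_length)            # 1876
print(gen_config.eos_token_id)          # 2
print(gen_config.pad_token_id)          # 1
print(gen_config.transformers_version)  # "4.43.3" after this commit
```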
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8e65df1a70cd9556b7b054583b8a06b10d6a50e493ca1eb8df9d8a6705413529
+oid sha256:817e4d65375527c657393679537a2cab648fc715a3a3da9f00c82c049b6cbdcb
 size 577887624
runs/Aug04_08-59-32_ip-10-192-12-187/events.out.tfevents.1722761976.ip-10-192-12-187.1938.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b5c9bf7ad6831d1143c3313565d9ce8f784394a9b8a80ebc527ecf6dc74e7918
-size 23996
+oid sha256:809deb021deb183c4c82597462e7ec721348dffff268af7b03dd0a4f75924561
+size 25473