End of training
Files changed:
- README.md: +27 -8
- pytorch_model.bin: +1 -1
- training_args.bin: +1 -1

README.md CHANGED
@@ -3,6 +3,8 @@ license: apache-2.0
 base_model: Samuael/asr-alffamharic-phoneme-based
 tags:
 - generated_from_trainer
+metrics:
+- wer
 model-index:
 - name: asr-alffamharic-phoneme-based
   results: []
@@ -15,13 +17,8 @@ should probably proofread and complete it, then remove this comment. -->

 This model is a fine-tuned version of [Samuael/asr-alffamharic-phoneme-based](https://huggingface.co/Samuael/asr-alffamharic-phoneme-based) on the None dataset.
 It achieves the following results on the evaluation set:
--
--
-- eval_runtime: 21.4563
-- eval_samples_per_second: 16.732
-- eval_steps_per_second: 2.097
-- epoch: 5.32
-- step: 1000
+- Loss: 0.4179
+- Wer: 0.4986

 ## Model description

@@ -40,7 +37,7 @@ More information needed
 ### Training hyperparameters

 The following hyperparameters were used during training:
-- learning_rate:
+- learning_rate: 1e-05
 - train_batch_size: 32
 - eval_batch_size: 8
 - seed: 42
@@ -49,6 +46,28 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_steps: 1000
 - num_epochs: 20

+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Wer    |
+|:-------------:|:-----:|:----:|:---------------:|:------:|
+| 0.3208        | 1.23  | 200  | 0.4162          | 0.5180 |
+| 0.3139        | 2.45  | 400  | 0.4169          | 0.5141 |
+| 0.2832        | 3.68  | 600  | 0.4136          | 0.5167 |
+| 0.2904        | 4.91  | 800  | 0.4137          | 0.5144 |
+| 0.255         | 6.13  | 1000 | 0.4149          | 0.5113 |
+| 0.2536        | 7.36  | 1200 | 0.4132          | 0.5108 |
+| 0.2222        | 8.59  | 1400 | 0.4186          | 0.5086 |
+| 0.2247        | 9.82  | 1600 | 0.4134          | 0.5076 |
+| 0.2106        | 11.04 | 1800 | 0.4145          | 0.5093 |
+| 0.2101        | 12.27 | 2000 | 0.4191          | 0.5026 |
+| 0.2383        | 13.5  | 2200 | 0.4183          | 0.5    |
+| 0.2882        | 14.72 | 2400 | 0.4160          | 0.4988 |
+| 0.2337        | 15.95 | 2600 | 0.4166          | 0.4998 |
+| 0.2424        | 17.18 | 2800 | 0.4172          | 0.4985 |
+| 0.2301        | 18.4  | 3000 | 0.4175          | 0.4993 |
+| 0.2257        | 19.63 | 3200 | 0.4179          | 0.4986 |
+
+
 ### Framework versions

 - Transformers 4.34.1
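The hyperparameter list in the README maps one-to-one onto `transformers.TrainingArguments`. A minimal sketch under that assumption, using Transformers 4.34-era argument names; `output_dir` and the steps-based evaluation schedule (every 200 steps, matching the results table) are illustrative assumptions, not values recorded in this commit:

```python
from transformers import TrainingArguments

# Reconstruction of the documented hyperparameters; anything the card does not
# list (optimizer, scheduler type, fp16, ...) is left at library defaults.
training_args = TrainingArguments(
    output_dir="asr-alffamharic-phoneme-based",  # assumption: illustrative path
    learning_rate=1e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    warmup_steps=1000,
    num_train_epochs=20,
    evaluation_strategy="steps",  # assumption: the results table logs
    eval_steps=200,               # validation metrics every 200 steps
    logging_steps=200,
)

print(training_args.learning_rate, training_args.num_train_epochs)
```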
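The Wer figures are word error rates. A minimal sketch of computing the same metric with the `evaluate` library; the transcripts below are invented for illustration and are not taken from the (unspecified) evaluation set:

```python
import evaluate  # pip install evaluate jiwer

# WER = (substitutions + insertions + deletions) / number of reference words.
wer_metric = evaluate.load("wer")

# Toy transcripts, purely illustrative.
references = ["selam new", "indet neh"]
predictions = ["selam no", "indet neh"]  # one substituted word out of four

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # 0.2500 here; the card reports 0.4986 on its eval set
```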
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:d06583092da54a35f8e79893fa0407b729eff47674389074ffef921951417308
 size 377686878
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:297df8c644b7d59e8d0aec7e5bacc36ebd293bd56cc9b9f673c9f52538fd0c0e
 size 4600
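Both `.bin` entries are Git LFS pointer files: the repository stores only the payload's SHA-256 (`oid`) and byte size, and this commit swaps in a new digest. A minimal standard-library sketch for checking a downloaded file against the pointer above; the local path is a placeholder:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder local copy; the expected oid and size come from the pointer above.
path = Path("pytorch_model.bin")
expected = "d06583092da54a35f8e79893fa0407b729eff47674389074ffef921951417308"

print(path.stat().st_size == 377686878)              # size matches the pointer
print("OK" if sha256_of(path) == expected else "sha256 mismatch")
```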