ysdede committed
Commit b1b6709 · verified · 1 Parent(s): a6e5e32

End of training

Files changed (2)
  1. README.md +86 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,86 @@
+ ---
+ library_name: transformers
+ language:
+ - tr
+ license: mit
+ base_model: openai/whisper-large-v3-turbo
+ tags:
+ - generated_from_trainer
+ datasets:
+ - khanacademy
+ - turkish
+ - stem
+ - asr
+ metrics:
+ - wer
+ model-index:
+ - name: whisper-khanacademy-large-v3-turbo-tr
+   results:
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: ysdede/khanacademy-turkish
+       type: khanacademy
+     metrics:
+     - name: Wer
+       type: wer
+       value: 15.695132614398135
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # whisper-khanacademy-large-v3-turbo-tr
+
+ This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the ysdede/khanacademy-turkish dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2129
+ - Wer: 15.6951
+
+ ## Model description
+
+ A Turkish automatic speech recognition model obtained by fine-tuning [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the ysdede/khanacademy-turkish dataset of Turkish Khan Academy content.
+
+ ## Intended uses & limitations
+
+ Intended for Turkish speech-to-text, particularly STEM-oriented educational audio similar to the Khan Academy material it was trained on. Performance on other domains and speaking styles is not reported in this card.
+
+ ## Training and evaluation data
+
+ Training and evaluation use the ysdede/khanacademy-turkish dataset; the loss and WER above are measured on its evaluation set.
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 64
+ - eval_batch_size: 32
+ - seed: 42
+ - optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.15
+ - training_steps: 1204
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Wer |
+ |:-------------:|:------:|:----:|:---------------:|:-------:|
+ | 0.2298 | 0.1429 | 172 | 0.2418 | 16.5877 |
+ | 0.2157 | 0.2857 | 344 | 0.2255 | 15.9611 |
+ | 0.1668 | 1.0939 | 516 | 0.2227 | 15.7461 |
+ | 0.1752 | 1.2367 | 688 | 0.2159 | 15.8846 |
+ | 0.1492 | 2.0449 | 860 | 0.2187 | 15.7571 |
+ | 0.1592 | 2.1877 | 1032 | 0.2134 | 15.5421 |
+ | 0.1336 | 2.3306 | 1204 | 0.2129 | 15.6951 |
+
+
+ ### Framework versions
+
+ - Transformers 4.48.0.dev0
+ - Pytorch 2.5.1+cu121
+ - Datasets 3.2.0
+ - Tokenizers 0.21.0
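
The hyperparameters in the card map onto Hugging Face `Seq2SeqTrainingArguments`, as is typical for `generated_from_trainer` models. The sketch below is a reconstruction for reference, not the exact training script: the output directory, the single-device batch-size mapping, the evaluation cadence (172 steps, taken from the results table), and the `fp16` flag ("Native AMP" could also mean bf16) are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the hyperparameters listed in the card; output_dir,
# eval cadence, and fp16 are assumptions not stated explicitly there.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-khanacademy-large-v3-turbo-tr",  # assumed output path
    learning_rate=5e-6,
    per_device_train_batch_size=64,   # card lists train_batch_size: 64 (single device assumed)
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",              # AdamW with default betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    warmup_ratio=0.15,
    max_steps=1204,                   # "training_steps" in the card
    fp16=True,                        # "Native AMP" mixed precision (bf16 also possible)
    eval_strategy="steps",
    eval_steps=172,                   # matches the evaluation cadence in the results table
    predict_with_generate=True,       # generate transcripts during eval so WER can be computed
)
```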
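The WER values above are percentages. With the Hugging Face `evaluate` library, word error rate is typically computed as below; the exact text normalization behind the reported numbers is not specified in the card, so treat this as a sketch with toy inputs.

```python
import evaluate

# Word error rate as reported in the card (values are percentages).
wer_metric = evaluate.load("wer")

references = ["bu bir deneme cümlesidir"]   # ground-truth transcripts (toy example)
predictions = ["bu bir deneme cümlesi"]     # model outputs (toy example)

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")
```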
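For inference, the fine-tuned checkpoint loads like any other Whisper model through the `transformers` ASR pipeline. A minimal sketch, assuming the model is published under the committer's namespace as `ysdede/whisper-khanacademy-large-v3-turbo-tr` and that a local Turkish audio file is available:

```python
import torch
from transformers import pipeline

# Inference sketch; the repo id is assumed from the commit author and model name.
asr = pipeline(
    "automatic-speech-recognition",
    model="ysdede/whisper-khanacademy-large-v3-turbo-tr",  # assumed Hub repo id
    torch_dtype=torch.float16,
    device="cuda:0",  # use "cpu" if no GPU is available
)

# Transcribe a Turkish audio file; chunking handles clips longer than 30 seconds.
result = asr(
    "ornek_ses.wav",  # hypothetical input file
    chunk_length_s=30,
    generate_kwargs={"language": "turkish", "task": "transcribe"},
)
print(result["text"])
```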
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c3ef8cb5e5d0f1d9732248b1f450ebfce20aff4cd4ff0098acdb9d00f9d54f7a
+ oid sha256:82888459eb91009f3f26027da598ee649f368fcc2d7d73d37201357046d0b6d7
  size 3235581408