saurabhy27-outcomes committed on
Commit 32c8862
1 Parent(s): b4fad1a

End of training
README.md ADDED
@@ -0,0 +1,77 @@
---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- whisper-event
- generated_from_trainer
datasets:
- saurabhy27-outcomes/singlish_speech_corpus
metrics:
- wer
model-index:
- name: Whisper small singlish v2
  results: []
---

# Whisper small singlish v2

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Singlish Speech Corpus dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2000
- WER: 33.7188
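The WER figure above is the word error rate, reported in percent. As a reference sketch, WER is the word-level edit distance between the hypothesis and the reference transcript, divided by the number of reference words:

```python
# Minimal word error rate (WER) reference implementation:
# word-level edit distance / number of words in the reference.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words, one rolling row.
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(hyp) + 1):
            cur = d[j]
            d[j] = min(
                d[j] + 1,                            # deletion
                d[j - 1] + 1,                        # insertion
                prev + (ref[i - 1] != hyp[j - 1]),   # substitution
            )
            prev = cur
    return d[len(hyp)] / len(ref)
```

Multiply by 100 to match the percentage reported in this card. Production evaluations typically use a library such as `jiwer` or `evaluate`, which also normalize casing and punctuation.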

## Model description

This model is the [openai/whisper-small](https://huggingface.co/openai/whisper-small) encoder-decoder checkpoint fine-tuned for automatic speech recognition of Singlish (Colloquial Singapore English). The architecture, tokenizer, and feature extractor are inherited unchanged from the base model.

## Intended uses & limitations

The model is intended for transcribing Singlish speech. As with other Whisper checkpoints, quality may degrade on noisy audio, on long-form input beyond Whisper's 30-second window, and on accents or domains not represented in the training data, and the model can hallucinate text on silence or non-speech segments.

## Training and evaluation data

The model was fine-tuned and evaluated on the [saurabhy27-outcomes/singlish_speech_corpus](https://huggingface.co/datasets/saurabhy27-outcomes/singlish_speech_corpus) dataset, with evaluation every 500 steps on a held-out set (see the results table below).
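As a usage sketch, the model can be loaded with the `transformers` ASR pipeline. The repository id below is a placeholder assumption, since this card does not state the published model id:

```python
def build_transcriber(model_id: str):
    """Return an automatic-speech-recognition pipeline for a Whisper checkpoint."""
    # Imported lazily so the sketch itself has no hard dependency at definition time.
    from transformers import pipeline

    return pipeline(
        "automatic-speech-recognition",
        model=model_id,
        chunk_length_s=30,  # Whisper operates on 30-second audio windows
    )

if __name__ == "__main__":
    # NOTE: placeholder repository id -- substitute the actual model id.
    asr = build_transcriber("saurabhy27-outcomes/whisper-small-singlish-v2")
    print(asr("sample.wav")["text"])
```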

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
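The linear scheduler with warmup listed above ramps the learning rate from 0 to its peak over the warmup steps, then decays it linearly to 0 at the final step. A minimal sketch with the values from this run:

```python
# Linear schedule with warmup, using the hyperparameters above:
# ramp 0 -> 1e-5 over 500 steps, then decay linearly to 0 at step 5000.
def linear_warmup_lr(step: int, peak_lr: float = 1e-05,
                     warmup_steps: int = 500, total_steps: int = 5000) -> float:
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

This mirrors the behavior of `transformers.get_linear_schedule_with_warmup`, which the `Trainer` uses when `lr_scheduler_type` is `linear`.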

### Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1765        | 0.7407 | 500  | 0.1531          | 9.8226  |
| 0.0711        | 1.4815 | 1000 | 0.1481          | 32.2996 |
| 0.0415        | 2.2222 | 1500 | 0.1568          | 51.1892 |
| 0.0221        | 2.9630 | 2000 | 0.1582          | 33.1932 |
| 0.0064        | 3.7037 | 2500 | 0.1750          | 37.4310 |
| 0.0024        | 4.4444 | 3000 | 0.1856          | 40.0394 |
| 0.0017        | 5.1852 | 3500 | 0.1903          | 34.0539 |
| 0.0008        | 5.9259 | 4000 | 0.1948          | 34.5269 |
| 0.0006        | 6.6667 | 4500 | 0.1996          | 33.6662 |
| 0.0005        | 7.4074 | 5000 | 0.2000          | 33.7188 |
72
+ ### Framework versions
73
+
74
+ - Transformers 4.45.0.dev0
75
+ - Pytorch 2.4.0+cu121
76
+ - Datasets 2.21.0
77
+ - Tokenizers 0.19.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:87d3ce89981bb9095bc124178aa3a89eefb4189e794afdb767fe0cd9dfad32ab
+ oid sha256:dd944c00b21f08ad29f7c99d449c6dd209581e1ea50faf8260c10f144e21e7b5
  size 966995080
runs/Sep05_11-36-56_ip-172-31-47-76/events.out.tfevents.1725536220.ip-172-31-47-76.42406.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d8bbf198529150a5adf730cd46e8df0332406454684bdd13ce9af342bcc44d1d
- size 13159
+ oid sha256:b12c1fd9b2572cb53f1bdbb72dc631ead444ac1d69e84da96e7a531c0f6bfb8d
+ size 13513