miosipof committed
Commit 4c2061f · verified · 1 parent: 416d284

End of training

Files changed (3):
  1. README.md +7 -7
  2. adapter_model.safetensors +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -23,7 +23,7 @@ model-index:
         args: default
       metrics:
       - type: wer
-        value: 44.114002478314745
+        value: 37.05080545229244
         name: Wer
 ---
 
@@ -34,8 +34,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [b-brave/asr_double_training_15-10-2024_merged](https://huggingface.co/b-brave/asr_double_training_15-10-2024_merged) on the ASR_BB_and_EC dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3454
-- Wer: 44.1140
+- Loss: 0.3265
+- Wer: 37.0508
 
 ## Model description
 
@@ -61,7 +61,7 @@ The following hyperparameters were used during training:
 - gradient_accumulation_steps: 2
 - total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: cosine
+- lr_scheduler_type: constant
 - lr_scheduler_warmup_steps: 50
 - num_epochs: 3
 - mixed_precision_training: Native AMP
@@ -70,9 +70,9 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch  | Step | Validation Loss | Wer     |
 |:-------------:|:------:|:----:|:---------------:|:-------:|
-| 0.8097        | 0.9852 | 100  | 0.3955          | 49.0706 |
-| 0.342         | 1.9704 | 200  | 0.3423          | 42.8748 |
-| 0.175         | 2.9557 | 300  | 0.3454          | 44.1140 |
+| 0.6518        | 0.9852 | 100  | 0.3831          | 91.8216 |
+| 0.2995        | 1.9704 | 200  | 0.3371          | 43.1227 |
+| 0.1543        | 2.9557 | 300  | 0.3265          | 37.0508 |
 
 
 ### Framework versions
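The Wer figures above are word error rates reported as percentages. A minimal from-scratch sketch of how such a figure is computed (the actual training script most likely uses a library such as `evaluate` or `jiwer` — that is an assumption, not something stated in this commit):

```python
# Word error rate: (substitutions + deletions + insertions) / reference words,
# computed here as word-level Levenshtein distance via dynamic programming.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost, # substitution or match
            )
    return 100.0 * dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))
```

A WER of 37.0508 therefore means roughly 37 word-level errors per 100 reference words; note the metric can exceed 100 when the hypothesis contains many insertions, as in the 91.8216 value at step 100.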
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:f686b48b4d41494dce2009b1fddec17c96b2564dd88f2d95c01afbadeae5ca0e
+oid sha256:a1f13b3b0683df30e8ea5b27bcd6878deab3d631ed703fd1712d0a4e782a341c
 size 37789960
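Both binary files in this commit are stored as Git LFS pointers, which is why the diff only swaps the sha256 `oid` while the `size` stays identical: the repository tracks a tiny text pointer and the real blob lives in LFS storage. A small sketch of parsing such a pointer file (field layout per the Git LFS pointer format; the sample oid is the new adapter oid from this commit):

```python
# A Git LFS pointer file is a short "key value" text file; the blob it
# stands in for is addressed by the sha256 oid in LFS storage.

def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:a1f13b3b0683df30e8ea5b27bcd6878deab3d631ed703fd1712d0a4e782a341c
size 37789960
"""
info = parse_lfs_pointer(pointer)
print(info["oid"], info["size"])
```

The unchanged `size 37789960` (about 36 MiB) is consistent with a LoRA-style adapter checkpoint rather than full model weights: retraining changed the weight values (new oid) but not the tensor shapes.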
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3ecf0291ba6a46eb2ad475c7102e19411b943d0f7f012c34c3bb643870f6175b
+oid sha256:afdf269ce02d522457c685ea6d7e78f53f773edd93db4ca0b5917843ab4c7a7f
 size 5368
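`training_args.bin` serializes the training arguments whose human-readable form appears in the README diff, which is why it changed alongside the scheduler switch from `cosine` to `constant`. A sketch of the relationship between the listed batch-size fields; the per-device batch size of 16 is inferred from the other two values, not stated in the card:

```python
# Hyperparameters as listed in the README diff. The per-device batch size is
# an inference (assumption): total_train_batch_size / gradient_accumulation_steps.
args = {
    "gradient_accumulation_steps": 2,
    "total_train_batch_size": 32,
    "lr_scheduler_type": "constant",  # changed from "cosine" in this commit
    "lr_scheduler_warmup_steps": 50,
    "num_epochs": 3,
}

# Effective batch size = per-device batch * accumulation steps (single device).
per_device = args["total_train_batch_size"] // args["gradient_accumulation_steps"]
print(per_device)
```

With ~100 optimizer steps per epoch in the eval table (steps 100/200/300 at epochs ~1/2/3), each epoch covers roughly 100 × 32 ≈ 3,200 training examples.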