kavyamanohar committed
Commit 344db46
1 Parent(s): 082e1a0

Update README.md

Files changed (1)
  1. README.md +12 -13
README.md CHANGED
@@ -1,21 +1,20 @@
----
-base_model: facebook/w2v-bert-2.0
-license: mit
-metrics:
-- wer
-tags:
-- generated_from_trainer
-model-index:
-- name: w2v-bert-2.0-nonstudio_and_studioRecords
-  results: []
----
+---
+base_model: facebook/w2v-bert-2.0
+license: mit
+metrics:
+- wer
+model-index:
+- name: w2v-bert-2.0-nonstudio_and_studioRecords
+  results: []
+---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
 # w2v-bert-2.0-nonstudio_and_studioRecords
 
-This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unknown dataset.
+This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the [IMASC](https://huggingface.co/datasets/thennal/IMaSC), [MSC](https://huggingface.co/datasets/smcproject/MSC), [OpenSLR Malayalam Train split](https://huggingface.co/datasets/vrclc/openslr63), [Festvox Malayalam](https://huggingface.co/datasets/vrclc/openslr63), and [CV16](https://huggingface.co/datasets/mozilla-foundation/common_voice_16_0) datasets.
+
 It achieves the following results on the evaluation set:
 - Loss: 0.1722
 - Wer: 0.1299
@@ -81,4 +80,4 @@ The following hyperparameters were used during training:
 - Transformers 4.39.3
 - Pytorch 2.1.1+cu121
 - Datasets 2.16.1
-- Tokenizers 0.15.1
+- Tokenizers 0.15.1
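The card's headline metric is a WER of 0.1299 on the evaluation set. Word error rate is the word-level edit distance (insertions, deletions, substitutions) between the hypothesis and reference transcripts, divided by the number of reference words. As a minimal illustrative sketch only — the Trainer computes this metric through a metrics library such as `evaluate`/`jiwer`, not this hand-rolled version:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # match or substitution
            )
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, one substituted word in a four-word reference gives `wer("a b c d", "a x c d") == 0.25`; a WER of 0.1299 therefore means roughly one word error per eight reference words.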