khanhld committed on
Commit
17cbb76
1 Parent(s): 552b1c8

update readme

Files changed (1): README.md (+11 -9)
README.md CHANGED
@@ -52,18 +52,20 @@ model-index:
  # Vietnamese Speech Recognition using Wav2vec 2.0
  ### Table of contents
  1. [Model Description](#description)
- 2. [Benchmark Result](#benchmark)
- 3. [Example Usage](#example)
- 4. [Evaluation](#evaluation)
- 5. [Contact](#contact)
+ 2. [Implementation](#implementation)
+ 3. [Benchmark Result](#benchmark)
+ 4. [Example Usage](#example)
+ 5. [Evaluation](#evaluation)
+ 6. [Contact](#contact)

  <a name = "description" ></a>
  ### Model Description
- Fine-tuned the Wav2vec2-based model on about 160 hours of Vietnamese speech dataset from different resources including [VIOS](https://huggingface.co/datasets/vivos), [COMMON VOICE](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [FOSD](https://data.mendeley.com/datasets/k9sxg2twv4/4) and [VLSP 100h](https://drive.google.com/file/d/1vUSxdORDxk-ePUt-bUVDahpoXiqKchMx/view). We have not yet incorporated the Language Model into our ASR system but still gained a promising result.
- <br>
- We also provide code for Pre-training and Fine-tuning the Wav2vec2 model (not available for now but will release soon). If you wish to train on your dataset, check it out here:
- - [Pretrain](https://github.com/khanld/ASR-Wav2vec-Pretrain)
- - [Finetune](https://github.com/khanld/ASR-Wa2vec-Finetune)
+ Fine-tuned the Wav2vec2-based model on about 160 hours of Vietnamese speech dataset from different resources, including [VIOS](https://huggingface.co/datasets/vivos), [COMMON VOICE](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [FOSD](https://data.mendeley.com/datasets/k9sxg2twv4/4) and [VLSP 100h](https://drive.google.com/file/d/1vUSxdORDxk-ePUt-bUVDahpoXiqKchMx/view). We have not yet incorporated the Language Model into our ASR system but still gained a promising result.
+ <a name = "implementation" ></a>
+ ### Implementation
+ We also provide code for Pre-training and Fine-tuning the Wav2vec2 model. If you wish to train on your dataset, check it out here:
+ - [Pre-train code](https://github.com/khanld/ASR-Wav2vec-Pretrain) (not available for now but will release soon)
+ - [Fine-tune code](https://github.com/khanld/ASR-Wa2vec-Finetune)
  </br>

  <a name = "benchmark" ></a>