Commit ca7f36f · Update README.md
Parent(s): 8a701fa
README.md CHANGED
@@ -43,21 +43,17 @@ model-index:
 
 # Wav2Vec2-Conformer-Large-960h with Relative Position Embeddings
 
-
+Wav2Vec2-Conformer with relative position embeddings, pretrained and **fine-tuned on 960 hours of Librispeech** on 16 kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16 kHz.
 
-
+**Paper**: [fairseq S2T: Fast Speech-to-Text Modeling with fairseq](https://arxiv.org/abs/2010.05171)
 
-
+**Authors**: Changhan Wang, Yun Tang, Xutai Ma, Anne Wu, Sravya Popuri, Dmytro Okhonko, Juan Pino
 
-
+The results of Wav2Vec2-Conformer can be found in Table 3 and Table 4 of the [official paper](https://arxiv.org/abs/2010.05171).
 
-**Abstract**
-
-...
 
 The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
 
-
 # Usage
 
 To transcribe audio files, the model can be used as a standalone acoustic model as follows:
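The hunk ends right before the usage snippet itself, which this diff does not show. A minimal sketch of such a standalone-acoustic-model transcription with 🤗 Transformers follows; the checkpoint id `facebook/wav2vec2-conformer-rel-pos-large-960h-ft` and the demo dataset are assumptions for illustration, not taken from this diff:

```python
# Hedged sketch of the usage section the hunk truncates; the checkpoint id
# below is an assumption, not confirmed by this diff.
import torch
from datasets import load_dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC

checkpoint = "facebook/wav2vec2-conformer-rel-pos-large-960h-ft"  # assumed model id
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ConformerForCTC.from_pretrained(checkpoint)

# Load one LibriSpeech sample; it is already 16 kHz, matching the sampling
# rate the model card requires. Audio at another rate would need resampling
# first (e.g. with torchaudio.functional.resample).
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
audio = ds[0]["audio"]["array"]

# Extract input features, run the acoustic model, and greedily decode CTC output.
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```

Greedy argmax decoding is the simplest option and mirrors the standalone-acoustic-model framing above; no external language model is involved.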