Update README.md #1
by gauthelo - opened

README.md CHANGED
---
license: cc-by-nc-4.0
metrics:
- cer
- wer
library_name: speechbrain
pipeline_tag: feature-extraction
tags:
- speech processing
- self-supervision
- african languages
- fine-tuning
---
## Model description

This self-supervised speech model (a.k.a. SSA-HuBERT-base-5k) is based on the HuBERT Base architecture (~95M parameters) [1].
It was trained on nearly 5,000 hours of speech segments and covers 21 languages and variants spoken in Sub-Saharan Africa.
Compared to SSA-HuBERT-base-60k, its training data is balanced in terms of gender and language representation.

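Since the model is tagged for feature extraction, a minimal sketch of using a HuBERT Base encoder to extract frame-level features is shown below. It instantiates a randomly initialised `HubertModel` from `transformers` purely for illustration; in practice you would load the released SSA-HuBERT-base-5k checkpoint instead (the exact repo id and loading toolkit are not specified here).

```python
import torch
from transformers import HubertConfig, HubertModel

# Randomly initialised HuBERT Base for illustration only; substitute the
# released SSA-HuBERT-base-5k weights for real feature extraction.
model = HubertModel(HubertConfig()).eval()

wav = torch.randn(1, 16000)  # 1 second of (fake) audio at 16 kHz
with torch.no_grad():
    feats = model(wav).last_hidden_state  # (batch, frames, 768)
```

Each output frame is a 768-dimensional vector (the HuBERT Base hidden size), produced at roughly 50 frames per second of audio.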
### Pretraining data

- Dataset: The training dataset was composed of both studio recordings (controlled environment, prepared talks) and street interviews (noisy environment, spontaneous speech).

- Languages: Bambara (bam), Dyula (dyu), French (fra), Fula (ful), Fulfulde (ffm), Fulfulde (fuh), Gulmancema (gux), Hausa (hau), Kinyarwanda (kin), Kituba (ktu), Lingala (lin), Luba-Lulua (lua), Mossi (mos), Maninkakan (mwk), Sango (sag), Songhai (son), Swahili (swc), Swahili (swh), Tamasheq (taq), Wolof (wol), Zarma (dje).

## ASR fine-tuning

The SpeechBrain toolkit (Ravanelli et al., 2021) is used to fine-tune the model.
Fine-tuning is done for each language using the FLEURS dataset [2].
The pretrained model (SSA-HuBERT-base-5k) serves as a speech encoder and is fully fine-tuned, with two 1024-unit linear layers and a softmax output on top.

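The head described above can be sketched in NumPy as follows. The encoder dimension (768, HuBERT Base), the vocabulary size, the ReLU activations, and the random weights are all illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: HuBERT Base outputs 768-dim frames; the output
# vocabulary size (60) is a placeholder, not the paper's value.
enc_dim, hid_dim, vocab = 768, 1024, 60
T = 50  # number of encoder frames for one utterance

def linear(x, out_dim):
    """Linear layer with freshly drawn random weights (bias omitted)."""
    w = rng.normal(scale=0.02, size=(x.shape[-1], out_dim))
    return x @ w

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

features = rng.normal(size=(T, enc_dim))        # stands in for encoder output
h = np.maximum(linear(features, hid_dim), 0.0)  # 1st 1024-unit layer (ReLU assumed)
h = np.maximum(linear(h, hid_dim), 0.0)         # 2nd 1024-unit layer
probs = softmax(linear(h, vocab))               # per-frame output distribution
```

During fine-tuning all of these parameters, together with the encoder's, would be updated; here they are random, so `probs` is only shape-correct, not meaningful.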
## License

This model is released under the CC BY-NC 4.0 license.

## Publication

This model was presented at JEP-TALN 2024.
The associated paper is available here: [Africa-Centric Self-Supervised Pre-Training for Multilingual Speech Representation in a Sub-Saharan Context](https://inria.hal.science/hal-04623069/)

### Citation

Please cite our paper when using the SSA-HuBERT-base-5k model:

Antoine Caubrière, Elodie Gauthier. Représentation de la parole multilingue par apprentissage auto-supervisé dans un contexte subsaharien. 35èmes Journées d'Études sur la Parole (JEP 2024), 31ème Conférence sur le Traitement Automatique des Langues Naturelles (TALN 2024), 26ème Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RECITAL 2024), Jul 2024, Toulouse, France. pp. 163-172. ⟨hal-04623069⟩

**Bibtex:**

```bibtex
@inproceedings{caubriere:hal-04623069,
  TITLE = {{Repr{\'e}sentation de la parole multilingue par apprentissage auto-supervis{\'e} dans un contexte subsaharien}},
  AUTHOR = {Caubri{\`e}re, Antoine and Gauthier, Elodie},
  URL = {https://inria.hal.science/hal-04623069},
  BOOKTITLE = {{35{\`e}mes Journ{\'e}es d'{\'E}tudes sur la Parole (JEP 2024) 31{\`e}me Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles (TALN 2024) 26{\`e}me Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RECITAL 2024)}},
  ADDRESS = {Toulouse, France},
  EDITOR = {Balaguer, Mathieu and Bendahman, Nihed and Ho-Dac, Lydia-Mai and Mauclair, Julie and Moreno, Jose G and Pinquier, Julien},
  PUBLISHER = {{ATALA \& AFPC}},
  PAGES = {163-172},
  YEAR = {2024},
  MONTH = Jul,
  KEYWORDS = {Apprentissage auto-supervis{\'e} ; Langues subsaharienne ; Reconnaissance de la parole multilingue ; HuBERT},
  PDF = {https://inria.hal.science/hal-04623069/file/4347.pdf},
  HAL_ID = {hal-04623069},
  HAL_VERSION = {v1},
}
```

## Results

The following results are obtained with greedy decoding (no language model rescoring).
Character error rates (CERs) and word error rates (WERs) on the 20 languages of the SSA subpart of the FLEURS dataset are given in the table below.

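For reference, the CER and WER reported below are Levenshtein edit distances normalised by reference length, computed over characters and words respectively. A minimal sketch (the example strings are made up, not FLEURS data):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, single-row DP."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            # prev holds d[i-1][j-1]; d[j] still holds d[i-1][j]
            prev, d[j] = d[j], min(d[j - 1] + 1,        # insertion
                                   d[j] + 1,            # deletion
                                   prev + (r != h))     # substitution
    return d[len(hyp)]

def error_rate(ref, hyp):
    """Percentage edit distance normalised by reference length."""
    return 100.0 * edit_distance(ref, hyp) / len(ref)

reference = "habari ya asubuhi"   # illustrative transcript
hypothesis = "habari ya asubui"   # illustrative ASR output

wer = error_rate(reference.split(), hypothesis.split())   # word level
cer = error_rate(list(reference), list(hypothesis))       # character level
```

Here one of three words is wrong (WER ≈ 33.3) while only one of seventeen characters is (CER ≈ 5.9), which is why CER is consistently lower than WER in the table.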
| **Languages**      | **CER** | **WER** |
|:-------------------|:--------|:--------|
| **Afrikaans**      | 23.8    | 68.3    |
| **Amharic**        | 15.5    | 51.4    |
| **Fula**           | 21.2    | 60.6    |
| **Ganda**          | 11.7    | 53.3    |
| **Hausa**          | 11.2    | 35.6    |
| **Igbo**           | 20.9    | 57.9    |
| **Kamba**          | 16.3    | 53.7    |
| **Lingala**        | 8.7     | 24.2    |
| **Luo**            | 10.2    | 38.5    |
| **Northern Sotho** | 14.4    | 44.6    |
| **Nyanja**         | 13.7    | 54.5    |
| **Oromo**          | 22.9    | 77.4    |
| **Shona**          | 11.2    | 48.2    |
| **Somali**         | 21.9    | 64.5    |
| **Swahili**        | 8.6     | 28.8    |
| **Umbundu**        | 21.7    | 60.8    |
| **Wolof**          | 19.2    | 54.2    |
| **Xhosa**          | 12.4    | 52.3    |
| **Yoruba**         | 25.0    | 68.0    |
| **Zulu**           | 12.4    | 53.0    |
| *Overall average*  | *16.1*  | *52.5*  |

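The greedy decoding used for these results can be illustrated with a minimal best-path decoder, assuming a CTC-style softmax output (take the per-frame argmax, collapse consecutive repeats, drop the blank symbol). The label set and frame posteriors below are made up for illustration; this is not the SpeechBrain implementation.

```python
BLANK = "_"  # hypothetical blank symbol

def greedy_ctc_decode(frame_probs, labels):
    """Best-path decoding: argmax per frame, collapse repeats, drop blanks.

    frame_probs: one probability list per frame, aligned with `labels`.
    """
    best = [labels[max(range(len(p)), key=p.__getitem__)] for p in frame_probs]
    out, prev = [], None
    for sym in best:
        if sym != prev and sym != BLANK:
            out.append(sym)
        prev = sym
    return "".join(out)

labels = [BLANK, "a", "b"]
frames = [
    [0.1, 0.8, 0.1],  # argmax: 'a'
    [0.1, 0.7, 0.2],  # argmax: 'a' (repeat, collapsed)
    [0.8, 0.1, 0.1],  # argmax: blank (dropped)
    [0.2, 0.1, 0.7],  # argmax: 'b'
]
decoded = greedy_ctc_decode(frames, labels)  # "ab"
```

Language model rescoring would instead search over multiple label sequences and reweight them with an external LM; none of that is done for the numbers above.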
## Reproducibility

We provide a notebook to reproduce the ASR experiments described in our paper; see `SB_ASR_FLEURS_finetuning.ipynb`.
Using the `ASR_FLEURS-swahili_hf.yaml` config file, you can run the recipe on Swahili.

## References

[1] Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 3451-3460, 2021. doi: 10.1109/TASLP.2021.3122291.

[2] Alexis Conneau, Min Ma, Simran Khanuja, Yu Zhang, Vera Axelrod, Siddharth Dalmia, Jason Riesa, Clara Rivera, and Ankur Bapna. FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech. In 2022 IEEE Spoken Language Technology Workshop (SLT), pp. 798-805, 2022. doi: 10.1109/SLT54892.2023.10023141.