asd881223 committed
Commit f54276b
1 Parent(s): 53ef8c6

Upload 9 files

README.md CHANGED
@@ -1,3 +1,143 @@
- ---
- license: apache-2.0
- ---
+ ---
+ language: "en"
+ thumbnail:
+ tags:
+ - speechbrain
+ - embeddings
+ - Speaker
+ - Verification
+ - Identification
+ - pytorch
+ - ECAPA
+ - TDNN
+ license: "apache-2.0"
+ datasets:
+ - voxceleb
+ metrics:
+ - EER
+ widget:
+ - example_title: VoxCeleb Speaker id10003
+   src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
+ - example_title: VoxCeleb Speaker id10004
+   src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav
+ ---
+
+ <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
+ <br/><br/>
+
+ # Speaker Verification with ECAPA-TDNN embeddings on VoxCeleb
+
+ This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain.
+ The system can also be used to extract speaker embeddings.
+ It is trained on VoxCeleb1 + VoxCeleb2 training data.
+
+ For a better experience, we encourage you to learn more about
+ [SpeechBrain](https://speechbrain.github.io). The model's performance on the VoxCeleb1 test set (cleaned) is:
+
+ | Release | EER (%) |
+ |:-------------:|:--------------:|
+ | 05-03-21 | 0.80 |
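+
+ EER (equal error rate) is the operating point at which the false acceptance rate equals the false rejection rate. A minimal NumPy sketch of estimating it from arrays of same-speaker and different-speaker scores (the function and variable names are illustrative, not part of this repository):
+
+ ```python
+ import numpy as np
+
+ def equal_error_rate(genuine, impostor):
+     # genuine/impostor: NumPy arrays of scores for same/different-speaker trials.
+     # Sweep every observed score as a candidate threshold and keep the
+     # point where false acceptance and false rejection are closest.
+     best_gap, eer = np.inf, 1.0
+     for t in np.sort(np.concatenate([genuine, impostor])):
+         far = np.mean(impostor >= t)  # false acceptance rate at threshold t
+         frr = np.mean(genuine < t)    # false rejection rate at threshold t
+         if abs(far - frr) < best_gap:
+             best_gap, eer = abs(far - frr), (far + frr) / 2
+     return eer
+ ```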
+
+
+ ## Pipeline description
+
+ This system is composed of an ECAPA-TDNN model. It combines convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling, and the system is trained with Additive Margin Softmax loss. Speaker verification is performed using cosine distance between speaker embeddings.
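+
+ Concretely, the verification decision reduces to a thresholded cosine similarity between two embedding vectors. A minimal PyTorch sketch (the threshold value here is an illustrative assumption, not the one tuned for this model):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def cosine_verify(emb1: torch.Tensor, emb2: torch.Tensor, threshold: float = 0.25):
+     # encode_batch returns embeddings of shape [batch, 1, 192]; flatten them first.
+     score = F.cosine_similarity(emb1.flatten(), emb2.flatten(), dim=0)
+     return score.item(), bool(score > threshold)  # (similarity, same-speaker decision)
+ ```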
+
+ ## Install SpeechBrain
+
+ First of all, please install SpeechBrain with the following command:
+
+ ```bash
+ pip install git+https://github.com/speechbrain/speechbrain.git@develop
+ ```
+
+ Please note that we encourage you to read our tutorials and learn more about
+ [SpeechBrain](https://speechbrain.github.io).
+
+ ### Compute your speaker embeddings
+
+ ```python
+ import torchaudio
+ from speechbrain.inference.speaker import EncoderClassifier
+
+ classifier = EncoderClassifier.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb")
+ signal, fs = torchaudio.load('tests/samples/ASR/spk1_snt1.wav')  # [channels, time] at 16 kHz
+ embeddings = classifier.encode_batch(signal)  # shape: [batch, 1, 192]
+ ```
+ The system is trained with recordings sampled at 16 kHz (single channel).
+ The code will automatically normalize your audio (i.e., resampling and mono-channel selection) when calling *classify_file*, if needed. Make sure your input tensor complies with the expected sampling rate if you use *encode_batch* or *classify_batch*.
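+
+ If your recordings are not already 16 kHz mono, a minimal sketch of preparing them with torchaudio before calling *encode_batch* (the input file name is illustrative; `classifier` is the instance created above):
+
+ ```python
+ import torchaudio
+
+ signal, fs = torchaudio.load('my_recording.wav')  # hypothetical input file
+ signal = signal.mean(dim=0, keepdim=True)         # downmix to a single channel
+ if fs != 16000:
+     signal = torchaudio.transforms.Resample(orig_freq=fs, new_freq=16000)(signal)
+ embeddings = classifier.encode_batch(signal)
+ ```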
+
+ ### Perform Speaker Verification
+
+ ```python
+ from speechbrain.inference.speaker import SpeakerRecognition
+
+ verification = SpeakerRecognition.from_hparams(source="speechbrain/spkrec-ecapa-voxceleb", savedir="pretrained_models/spkrec-ecapa-voxceleb")
+ score, prediction = verification.verify_files("tests/samples/ASR/spk1_snt1.wav", "tests/samples/ASR/spk2_snt1.wav")  # different speakers
+ score, prediction = verification.verify_files("tests/samples/ASR/spk1_snt1.wav", "tests/samples/ASR/spk1_snt2.wav")  # same speaker
+ ```
+ The prediction is 1 if the two input signals are from the same speaker and 0 otherwise.
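+
+ For speaker identification against a closed set of known speakers, one common pattern is to enroll each speaker as the average embedding over a few utterances and pick the closest match by cosine similarity. A minimal sketch under that assumption (`classifier` is the `EncoderClassifier` from above; the file names are hypothetical):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+ import torchaudio
+
+ def enroll(files):
+     # Average the embedding over several utterances of the same speaker.
+     embs = [classifier.encode_batch(torchaudio.load(f)[0]).flatten() for f in files]
+     return torch.stack(embs).mean(dim=0)
+
+ # Hypothetical 16 kHz enrollment recordings; replace with your own.
+ enrolled = {"alice": enroll(["alice_1.wav", "alice_2.wav"]),
+             "bob": enroll(["bob_1.wav", "bob_2.wav"])}
+
+ test_emb = classifier.encode_batch(torchaudio.load("unknown.wav")[0]).flatten()
+ best_match = max(enrolled, key=lambda n: F.cosine_similarity(enrolled[n], test_emb, dim=0).item())
+ ```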
+
+ ### Inference on GPU
+ To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
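+
+ For example, reusing the embedding extractor from above:
+
+ ```python
+ from speechbrain.inference.speaker import EncoderClassifier
+
+ classifier = EncoderClassifier.from_hparams(
+     source="speechbrain/spkrec-ecapa-voxceleb",
+     run_opts={"device": "cuda"},  # place the model and inference on the GPU
+ )
+ ```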
+
+ ### Training
+ The model was trained with SpeechBrain (commit aa018540).
+ To train it from scratch, follow these steps:
+ 1. Clone SpeechBrain:
+ ```bash
+ git clone https://github.com/speechbrain/speechbrain/
+ ```
+ 2. Install it:
+ ```bash
+ cd speechbrain
+ pip install -r requirements.txt
+ pip install -e .
+ ```
+
+ 3. Run training:
+ ```bash
+ cd recipes/VoxCeleb/SpeakerRec
+ python train_speaker_embeddings.py hparams/train_ecapa_tdnn.yaml --data_folder=your_data_folder
+ ```
+
+ You can find our training results (models, logs, etc.) [here](https://drive.google.com/drive/folders/1-ahC1xeyPinAHp2oAohL-02smNWO41Cc?usp=sharing).
+
+ ### Limitations
+ The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
+
+ #### Referencing ECAPA-TDNN
+ ```bibtex
+ @inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
+   author    = {Brecht Desplanques and
+                Jenthe Thienpondt and
+                Kris Demuynck},
+   editor    = {Helen Meng and
+                Bo Xu and
+                Thomas Fang Zheng},
+   title     = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation
+                in {TDNN} Based Speaker Verification},
+   booktitle = {Interspeech 2020},
+   pages     = {3830--3834},
+   publisher = {{ISCA}},
+   year      = {2020},
+ }
+ ```
+
+ # **Citing SpeechBrain**
+ Please cite SpeechBrain if you use it for your research or business.
+
+ ```bibtex
+ @misc{speechbrain,
+   title={{SpeechBrain}: A General-Purpose Speech Toolkit},
+   author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
+   year={2021},
+   eprint={2106.04624},
+   archivePrefix={arXiv},
+   primaryClass={eess.AS},
+   note={arXiv:2106.04624}
+ }
+ ```
+
+ # **About SpeechBrain**
+ - Website: https://speechbrain.github.io/
+ - Code: https://github.com/speechbrain/speechbrain/
+ - HuggingFace: https://huggingface.co/speechbrain/
classifier.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eed74c25ec04d5c9c762d712f47ad8f16e5936766f35776cc2052851f7ec7dad
+ size 132
config.json ADDED
@@ -0,0 +1,3 @@
+ {
+     "speechbrain_interface": "SpeakerRecognition"
+ }
embedding_model.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4fc8e860b5178f20319b6ef48fd86377d5064ec221ff582e13d26b110c92e1a9
+ size 133
example1.wav ADDED
Binary file (104 kB).
 
example2.flac ADDED
Binary file (39.6 kB).
 
hyperparams.yaml ADDED
@@ -0,0 +1,59 @@
+ # ############################################################################
+ # Model: ECAPA big for Speaker verification
+ # ############################################################################
+
+ # Feature parameters
+ n_mels: 80
+
+ # Pretrain folder (HuggingFace)
+ pretrained_path: speechbrain/spkrec-ecapa-voxceleb
+
+ # Output parameters
+ out_n_neurons: 7205
+
+ # Model params
+ compute_features: !new:speechbrain.lobes.features.Fbank
+     n_mels: !ref <n_mels>
+
+ mean_var_norm: !new:speechbrain.processing.features.InputNormalization
+     norm_type: sentence
+     std_norm: False
+
+ embedding_model: !new:speechbrain.lobes.models.ECAPA_TDNN.ECAPA_TDNN
+     input_size: !ref <n_mels>
+     channels: [1024, 1024, 1024, 1024, 3072]
+     kernel_sizes: [5, 3, 3, 3, 1]
+     dilations: [1, 2, 3, 4, 1]
+     attention_channels: 128
+     lin_neurons: 192
+
+ classifier: !new:speechbrain.lobes.models.ECAPA_TDNN.Classifier
+     input_size: 192
+     out_neurons: !ref <out_n_neurons>
+
+ mean_var_norm_emb: !new:speechbrain.processing.features.InputNormalization
+     norm_type: global
+     std_norm: False
+
+ modules:
+     compute_features: !ref <compute_features>
+     mean_var_norm: !ref <mean_var_norm>
+     embedding_model: !ref <embedding_model>
+     mean_var_norm_emb: !ref <mean_var_norm_emb>
+     classifier: !ref <classifier>
+
+ label_encoder: !new:speechbrain.dataio.encoder.CategoricalEncoder
+
+
+ pretrainer: !new:speechbrain.utils.parameter_transfer.Pretrainer
+     loadables:
+         embedding_model: !ref <embedding_model>
+         mean_var_norm_emb: !ref <mean_var_norm_emb>
+         classifier: !ref <classifier>
+         label_encoder: !ref <label_encoder>
+     paths:
+         embedding_model: !ref <pretrained_path>/embedding_model.ckpt
+         mean_var_norm_emb: !ref <pretrained_path>/mean_var_norm_emb.ckpt
+         classifier: !ref <pretrained_path>/classifier.ckpt
+         label_encoder: !ref <pretrained_path>/label_encoder.txt
+
label_encoder.txt ADDED
The diff for this file is too large to render.
 
mean_var_norm_emb.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd70225b05b37be64fc5a95e24395d804231d43f74b2e1e5a513db7b69b34c33
+ size 1921