Update README.md
README.md
@@ -29,7 +29,22 @@ ECAPA2 is a hybrid neural network architecture and training strategy for speaker
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## How to use

Extracting speaker embeddings is easy and only requires a few lines of code:

```python
import torch
import torchaudio

# torchaudio.load returns a (waveform, sample_rate) tuple
audio, sample_rate = torchaudio.load('sample.wav')
ecapa2_model = torch.load('model.pt')
embedding = ecapa2_model.extract_embedding(audio)
```

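In speaker verification, an embedding like the one extracted above is typically compared against an enrollment embedding with cosine similarity. A minimal, stdlib-only sketch of that scoring step — the vectors and the 0.5 threshold below are made up for illustration; in practice you would use the tensors returned by `extract_embedding` and a threshold tuned on a development set:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up low-dimensional stand-ins for real speaker embeddings.
enroll_embedding = [0.1, 0.9, 0.3, 0.4]
test_embedding = [0.1, 0.8, 0.35, 0.5]

score = cosine_similarity(enroll_embedding, test_embedding)
# Accept/reject against an illustrative threshold.
same_speaker = score > 0.5
```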
For the extraction of other hierarchical features, a separate model function is provided:

```python
feature = ecapa2_model.extract_feature(label='gfe1')
```

The list of available labels is: 'lfe1', 'lfe2', 'lfe3', 'lfe4', 'gfe1', 'gfe2', 'pool' and 'embedding' (the last being equivalent to model.extract_embedding()).

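To make the label list above concrete, here is a hypothetical sketch that gathers every hierarchical feature into a dict. `Ecapa2Stub` is a made-up stand-in for the real model loaded from `model.pt` — it mimics only the `extract_feature(label=...)` call signature and returns placeholder strings where the real model would return tensors:

```python
class Ecapa2Stub:
    """Hypothetical stand-in for the ECAPA2 model; NOT part of the release."""
    def extract_feature(self, label):
        # The real model would return a feature tensor for this label.
        return f"<{label}-feature>"

# All labels listed in the model card, from low-level to the final embedding.
LABELS = ['lfe1', 'lfe2', 'lfe3', 'lfe4', 'gfe1', 'gfe2', 'pool', 'embedding']

def collect_features(model, labels=LABELS):
    """Map each hierarchical-feature label to its extracted feature."""
    return {label: model.extract_feature(label=label) for label in labels}

features = collect_features(Ecapa2Stub())
```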
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->