Update README.md
README.md
CHANGED
@@ -5,7 +5,7 @@ license: apache-2.0
We proposed WhisperSeg, utilizing the Whisper Transformer pre-trained for Automatic Speech Recognition (ASR) for both human and animal Voice Activity Detection (VAD). For more details, please refer to our paper

>
-> [**Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection**]()
+> [**Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection**](https://doi.org/10.1101/2023.09.30.560270)
>
> Nianlong Gu, Kanghwi Lee, Maris Basha, Sumit Kumar Ram, Guanghao You, Richard H. R. Hahnloser <br>
> University of Zurich and ETH Zurich
@@ -20,5 +20,22 @@ snapshot_download('nccratliri/vad-multi-species', local_dir = "data/multi-specie

For more usage details, please refer to the GitHub repository: https://github.com/nianlonggu/WhisperSeg

+## Citation
+When using this dataset for your work, please cite:
+```
+@article {Gu2023.09.30.560270,
+author = {Nianlong Gu and Kanghwi Lee and Maris Basha and Sumit Kumar Ram and Guanghao You and Richard Hahnloser},
+title = {Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection},
+elocation-id = {2023.09.30.560270},
+year = {2023},
+doi = {10.1101/2023.09.30.560270},
+publisher = {Cold Spring Harbor Laboratory},
+abstract = {This paper introduces WhisperSeg, utilizing the Whisper Transformer pre-trained for Automatic Speech Recognition (ASR) for human and animal Voice Activity Detection (VAD). Contrary to traditional methods that detect human voice or animal vocalizations from a short audio frame and rely on careful threshold selection, WhisperSeg processes entire spectrograms of long audio and generates plain text representations of onset, offset, and type of voice activity. Processing a longer audio context with a larger network greatly improves detection accuracy from few labeled examples. We further demonstrate a positive transfer of detection performance to new animal species, making our approach viable in the data-scarce multi-species setting.Competing Interest StatementThe authors have declared no competing interest.},
+URL = {https://www.biorxiv.org/content/early/2023/10/02/2023.09.30.560270},
+eprint = {https://www.biorxiv.org/content/early/2023/10/02/2023.09.30.560270.full.pdf},
+journal = {bioRxiv}
+}
+```
+
## Contact

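The context of the second hunk references a `snapshot_download(...)` call from the unchanged part of the README. Below is a minimal sketch of that download step, assuming the `huggingface_hub` package is installed; the `repo_type` argument and the exact `local_dir` path are assumptions, since the original line is truncated in the hunk header above.

```python
# Minimal sketch of the download step referenced in the hunk context above.
# Assumes `pip install huggingface_hub`.
from huggingface_hub import snapshot_download

snapshot_download(
    "nccratliri/vad-multi-species",  # repository id from the README snippet
    repo_type="dataset",             # assumption: hosted as a dataset repo; omit if it is a model repo
    local_dir="data/multi-species",  # illustrative path; the exact path is truncated in the hunk header
)
```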