---
license: apache-2.0
---

# Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection

We propose WhisperSeg, which adapts the Whisper Transformer, pre-trained for Automatic Speech Recognition (ASR), to both human and animal Voice Activity Detection (VAD). For more details, please refer to our paper:

> [**Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection**]()
>
> Nianlong Gu, Kanghwi Lee, Maris Basha, Sumit Kumar Ram, Guanghao You, Richard H. R. Hahnloser <br>
> University of Zurich and ETH Zurich

This is the Marmoset dataset, customized for animal Voice Activity Detection (vocal segmentation) with WhisperSeg.

## Download Dataset
```python
from huggingface_hub import snapshot_download

# Download the full dataset snapshot into data/marmoset
snapshot_download("nccratliri/vad-marmoset", local_dir="data/marmoset", repo_type="dataset")
```
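
After the download finishes, it can be useful to verify what was fetched. The following is a minimal sketch that simply walks the local directory with the standard library; the actual file layout is determined by the dataset itself, so treat this as a quick sanity check rather than part of the dataset's API:

```python
from pathlib import Path

# "data/marmoset" matches the local_dir used in the download snippet above.
root = Path("data/marmoset")
for path in sorted(root.rglob("*")):
    if path.is_file():
        print(path.relative_to(root), f"({path.stat().st_size} bytes)")
```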

For more usage details, please refer to the GitHub repository: https://github.com/nianlonggu/WhisperSeg
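
To inspect an individual recording, assuming the audio is stored as standard WAV files, one option is librosa; this is an illustrative choice rather than a dataset requirement, and the filename below is hypothetical:

```python
import librosa

# Hypothetical path: substitute an actual .wav file from the downloaded snapshot.
audio, sr = librosa.load("data/marmoset/example.wav", sr=None)  # sr=None keeps the native sampling rate
print(f"{len(audio) / sr:.2f} s of audio at {sr} Hz")
```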

When using this dataset, please also cite:
```
@article{10.7554/eLife.68837,
    article_type = {journal},
    title = {Fast and accurate annotation of acoustic signals with deep neural networks},
    author = {Steinfath, Elsa and Palacios-Muñoz, Adrian and Rottschäfer, Julian R and Yuezak, Deniz and Clemens, Jan},
    editor = {Calabrese, Ronald L and Egnor, SE Roian and Troyer, Todd},
    volume = {10},
    year = {2021},
    month = {nov},
    pub_date = {2021-11-01},
    pages = {e68837},
    citation = {eLife 2021;10:e68837},
    doi = {10.7554/eLife.68837},
    url = {https://doi.org/10.7554/eLife.68837},
    abstract = {Acoustic signals serve communication within and across species throughout the animal kingdom. Studying the genetics, evolution, and neurobiology of acoustic communication requires annotating acoustic signals: segmenting and identifying individual acoustic elements like syllables or sound pulses. To be useful, annotations need to be accurate, robust to noise, and fast. We here introduce \textit{DeepAudioSegmenter} (\textit{DAS}), a method that annotates acoustic signals across species based on a deep-learning derived hierarchical presentation of sound. We demonstrate the accuracy, robustness, and speed of \textit{DAS} using acoustic signals with diverse characteristics from insects, birds, and mammals. \textit{DAS} comes with a graphical user interface for annotating song, training the network, and for generating and proofreading annotations. The method can be trained to annotate signals from new species with little manual annotation and can be combined with unsupervised methods to discover novel signal types. \textit{DAS} annotates song with high throughput and low latency for experimental interventions in realtime. Overall, \textit{DAS} is a universal, versatile, and accessible tool for annotating acoustic communication signals.},
    keywords = {acoustic communication, annotation, song, deep learning, bird, fly},
    journal = {eLife},
    issn = {2050-084X},
    publisher = {eLife Sciences Publications, Ltd},
}
```

## Contact
48