---
license: mit
---

# EDEN ASR Dataset

A subset of this data was used to support the development of empathetic feedback modules in [EDEN](https://arxiv.org/abs/2406.17982) and [its prior work](https://arxiv.org/abs/2404.13764).

The dataset contains audio clips of native Mandarin speakers. The speakers conversed with a chatbot hosted on an [English practice platform](https://dl.acm.org/doi/abs/10.1145/3491140.3528329?casa_token=ER-mfy0xauQAAAAA:FyDgmH0Y0ke7a6jpOnuycP1HRfeV1B5qaq5JWM5OV5dB9fLFL_vzVRUacZ4fUMRBDl71UeWMIA9Z). After filtering, 3081 audio clips from 613 conversations and 163 users remained. The filtering process removed audio clips containing only Mandarin, duplicates, and a subset of user self-introductions. Each audio clip ranges from one second to two minutes. We did not collect demographic information, in order to protect user identities.

In our original work, we transcribed the speech directly with Whisper Medium. However, because the audio clips contain accented speech, those transcripts included ASR errors. We have **manually verified** the original transcripts to ensure they are high quality.

Thanks to my mentee, Brittney Lilly, for doing the bulk of the ASR transcript verification work!

## Dataset Columns

* **audio_url**: The URL of the audio clip; you can download it to your local machine with wget or urllib (see the code snippet below).
* **emotion_label**: We manually labeled a subset of the clips as **Neutral** (neutral emotion), **Negative** (the speaker displays negative emotions), or **Pauses** (the speech contains many pauses, potentially signaling language anxiety). The labeling process is documented in [EDEN's prior work](https://arxiv.org/abs/2404.13764). An empty field means the clip was not labeled.
* **corrected_whisper_transcript**: The high-quality transcript we verified; the person performing verification is a native English speaker.

## Intended Use

Accented ASR research!
## Code Example

```python
from datasets import load_dataset

dataset = load_dataset("sylviali/EDEN_ASR_Data", split="train")
print(dataset)

# Download the audio to a local file
import urllib.request

urllib.request.urlretrieve(dataset[0]["audio_url"], "audio.wav")

# Extract the ASR transcript
print(dataset[0]["corrected_whisper_transcript"])

# Extract the emotion label
print(dataset[0]["emotion_label"])

# Check audio clips labeled as having negative emotions
negative_emotion_clips = dataset.filter(
    lambda example: example["emotion_label"] == "Negative"
)
print(len(negative_emotion_clips))
print(negative_emotion_clips[0])
```
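For accented ASR research, a common first step is to benchmark a model's output against the `corrected_whisper_transcript` column. The sketch below is a minimal, dependency-free word error rate (WER) implementation; the example strings stand in for a hypothetical model hypothesis and a reference transcript, and are not taken from the dataset.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1] / max(len(ref), 1)

# Hypothetical model output vs. a reference transcript (illustrative strings)
hyp = "i like to plays basketball"
ref = "i like to play basketball"
print(f"WER: {wer(ref, hyp):.2f}")  # 1 substitution over 5 words -> WER: 0.20
```

In practice you would replace the literal strings with a model's transcription of a downloaded clip and the clip's `corrected_whisper_transcript` value; libraries such as jiwer offer more fully featured WER computation.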