ehcalabres committed
Commit 50f34db · Parent(s): c716336
Initial README.md structure

README.md ADDED
@@ -0,0 +1,139 @@
---
annotations_creators:
- no-annotation
language_creators:
- found
languages:
- en
licenses:
- Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- audio-classification
task_ids:
- speech-emotion-recognition
---

# Dataset Card for ravdess_speech

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** https://zenodo.org/record/1188976#.YUS4MrozZdS
- **Paper:** https://doi.org/10.1371/journal.pone.0196391
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [email protected]

### Dataset Summary

The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) contains 24 professional actors (12 female, 12 male), vocalizing two lexically-matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions. Each expression is produced at two levels of emotional intensity (normal, strong), with an additional neutral expression. All audio files are 16-bit, 48 kHz .wav.

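As a starting point, the minimal sketch below shows how the dataset might be loaded with the `datasets` library. The repository id is an assumption (replace it with the actual Hub id), and the split and field names depend on how the loading script defines them.

```python
# Minimal loading sketch; the repository id below is an assumption.
from datasets import load_dataset

ravdess = load_dataset("ehcalabres/ravdess_speech")  # assumed Hub id
print(ravdess)

# Inspect one example; the exact field names (audio array, emotion label, ...)
# depend on the dataset loading script.
sample = ravdess["train"][0]
print(sample.keys())
```
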
### Supported Tasks and Leaderboards

- `audio-classification`: The dataset can be used to train a model for audio classification, which consists of predicting the emotion expressed in each audio recording (a rough sketch follows below).

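As an illustration of this framing, the sketch below sets up a Wav2Vec2-style encoder from `transformers` for 8-way emotion classification (the eight categories listed in the summary: neutral, calm, happy, sad, angry, fearful, surprise, disgust). The backbone checkpoint is a placeholder, not something prescribed by this card, and the 48 kHz recordings are assumed to be resampled to 16 kHz first.

```python
# Sketch: speech emotion recognition as audio classification with transformers.
# The checkpoint is a placeholder backbone; its classification head is randomly
# initialized here and would need fine-tuning on RAVDESS.
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

checkpoint = "facebook/wav2vec2-base"  # placeholder backbone
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = AutoModelForAudioClassification.from_pretrained(checkpoint, num_labels=8)

# A dummy one-second clip at 16 kHz stands in for a resampled RAVDESS recording.
waveform = torch.zeros(16000)
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```
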
### Languages

The audio files in the dataset are in English, spoken by actors in a neutral North American accent.

## Dataset Structure

### Data Instances

[Needs More Information]

### Data Fields

[Needs More Information]

### Data Splits

[Needs More Information]

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

The RAVDESS is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).

Commercial licenses for the RAVDESS can also be purchased. For more information, please visit the RAVDESS license fee page or contact the authors at [email protected].

### Citation Information

Livingstone SR, Russo FA (2018) The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE 13(5): e0196391. https://doi.org/10.1371/journal.pone.0196391