---
license: mit
task_categories:
- audio-classification
language:
- en
tags:
- SER
- Speech Emotion Recognition
- Speech Emotion Classification
- Audio Classification
- Audio
- Emotion
- Emo
- Speech
- Mosei
pretty_name: messAIh
size_categories:
- 10K<n<100K
---


DATASET DESCRIPTION

The messAIh dataset is a fork of [CMU MOSEI](http://multicomp.cs.cmu.edu/resources/cmu-mosei-dataset/).

Unlike its parent, messAIh is intended for unimodal model development and focuses exclusively on audio classification, more specifically Speech Emotion Recognition (SER).

It can, of course, also be used for bimodal classification by transcribing each audio track.

messAIh currently contains 13,234 speech samples annotated according to the [CMU MOSEI](https://aclanthology.org/P18-1208/) scheme:

> Each sentence is annotated for sentiment on a [-3,3] Likert scale of:
> [−3: highly negative, −2 negative, −1 weakly negative, 0 neutral, +1 weakly positive, +2 positive, +3 highly positive].
> Ekman emotions of {happiness, sadness, anger, fear, disgust, surprise}
> are annotated on a [0,3] Likert scale for presence of emotion
> x: [0: no evidence of x, 1: weakly x, 2: x, 3: highly x].

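For illustration, the sketch below shows one common way of turning the per-emotion [0,3] presence scores into a single dominant-emotion label. It is only a sketch: the emotion names mirror the annotation scheme above, but the dataset's actual column names may differ, so check the CSV header before relying on them.

```python
# Hedged sketch: emotion keys mirror the MOSEI annotation scheme,
# but the dataset's actual column names may differ.
EKMAN_EMOTIONS = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]

def dominant_emotion(scores):
    """Return the emotion with the highest [0,3] presence score,
    or None if no emotion shows any evidence at all."""
    emotion, score = max(scores.items(), key=lambda kv: kv[1])
    return emotion if score > 0 else None

# Example: a sample that is clearly happy with a hint of surprise.
row = dict.fromkeys(EKMAN_EMOTIONS, 0.0)
row["happiness"] = 2.0
row["surprise"] = 0.3
print(dominant_emotion(row))  # -> "happiness"
```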

The dataset is provided as a [parquet file](https://drive.google.com/file/d/17qOa2cFDNCH2j2mL5gCNUOwLxpgnzPmB/view?usp=drive_link).

Provisionally, the file is stored on a [cloud drive](https://drive.google.com/file/d/17qOa2cFDNCH2j2mL5gCNUOwLxpgnzPmB/view?usp=drive_link), as it is too big for GitHub. Note that the original parquet file from August 10, 2023 was buggy, as was the Python script.

To facilitate inspection, a truncated CSV sample file is also provided; it does not contain the audio arrays.

If you train a model on this dataset, you would make us very happy by letting us know.


UNPACKING THE DATASET

A sample Python script (check the top of the script for its requirements) is also provided for illustrative purposes.

The script reads the parquet file and produces the following (a minimal sketch appears after the list):

1. A CSV file with file names and MOSEI values (the column names are self-explanatory).

2. A folder named "wavs" containing the audio samples.

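The sketch below approximates what such a script does; it is not the script itself. It assumes the parquet file name, that the "wav2numpy" column holds raw audio arrays, and a 16 kHz sample rate, so consult the provided script for the actual details.

```python
import os

import pandas as pd
import soundfile as sf

SAMPLE_RATE = 16_000  # assumption; verify against the provided script

df = pd.read_parquet("messaih.parquet")  # hypothetical file name

# 1. Write the label columns (everything except the audio) to a CSV.
df.drop(columns=["wav2numpy"]).to_csv("labels.csv", index=False)

# 2. Write each audio array to a wav file in a "wavs" folder.
#    (See LEGAL CONSIDERATIONS below before actually doing this.)
os.makedirs("wavs", exist_ok=True)
for i, row in df.iterrows():
    sf.write(os.path.join("wavs", f"{i}.wav"), row["wav2numpy"], SAMPLE_RATE)
```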

LEGAL CONSIDERATIONS

Note that producing the wav files might (or might not) constitute copyright infringement as well as a violation of Google's Terms of Service.

Instead, researchers are encouraged to use the NumPy arrays contained in the last column of the dataset ("wav2numpy") directly, without actually extracting any playable audio.

That, I believe, may keep us in the grey zone.
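For example, here is a hedged sketch of consuming the arrays directly, with no wav file ever written. The model name, sample rate, and parquet file name are assumptions for illustration, not part of the dataset.

```python
import numpy as np
import pandas as pd
from transformers import Wav2Vec2FeatureExtractor

df = pd.read_parquet("messaih.parquet")  # hypothetical file name

# Take one sample's raw audio array straight from the dataframe.
audio = np.asarray(df.iloc[0]["wav2numpy"], dtype=np.float32)

# Feed it to a standard feature extractor (model choice is an example).
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
inputs = extractor(audio, sampling_rate=16_000, return_tensors="pt")
# `inputs.input_values` can now be passed to a wav2vec2-style SER model.
```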


CAVEATS

As one can appreciate from the charts contained in the "charts" folder, the dataset is biased towards "positive" emotions, namely happiness.

Certain emotions such as fear may be underrepresented, not only in terms of the number of occurrences but, more problematically, in terms of "intensity".

MOSEI is considered a natural or spontaneous emotion dataset (as opposed to an acted or scripted one), showcasing "genuine" emotions.

However, keep in mind that MOSEI was curated from a popular social network, and social networks are notoriously abundant in fake emotions.

Moreover, certain emotions may be intrinsically more difficult to detect than others, even from a human perspective.

Yet, MOSEI is possibly one of the best datasets of its kind currently in the public domain.

Also note that the original [MOSEI](http://immortal.multicomp.cs.cmu.edu/CMU-MOSEI/labels/) contains nearly twice as many entries as messAIh does.