---
license: gpl-3.0
language:
- en
tags:
- emotion-cause-analysis
---

# Emotion-Cause-in-Friends (ECF)

For the task of Multimodal Emotion-Cause Pair Extraction in Conversations, we construct a multimodal conversational emotion-cause dataset, ECF, which contains 9,794 multimodal emotion-cause pairs among 13,619 utterances from the *Friends* sitcom.

For more details, please refer to our GitHub repositories:

- [Multimodal Emotion-Cause Pair Extraction in Conversations](https://github.com/NUSTM/MECPE/tree/main/data)
- [SemEval-2024 Task 3](https://github.com/NUSTM/SemEval-2024_ECAC)
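
As a quick start, the sketch below reads the annotations with plain Python, assuming they have been downloaded from one of the repositories above as a JSON list of conversations; the file name and field access are illustrative assumptions, so check the repositories for the exact files and schema.

```python
# Minimal sketch: read the ECF annotations from a downloaded JSON file.
# The file name below is a placeholder; the actual annotation files and
# their schema live in the MECPE / SemEval-2024 repositories linked above.
import json

with open("ecf_train.json", encoding="utf-8") as f:
    conversations = json.load(f)

print(len(conversations))       # number of conversations in this split
print(conversations[0].keys())  # inspect the fields of one conversation
```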

## Dataset Statistics

| Item                            | Train | Dev   | Test  | Total  |
| ------------------------------- | ----- | ----- | ----- | ------ |
| Conversations                   | 1,001 | 112   | 261   | 1,374  |
| Utterances                      | 9,966 | 1,087 | 2,566 | 13,619 |
| Emotion (utterances)            | 5,577 | 668   | 1,445 | 7,690  |
| Emotion-cause (utterance) pairs | 7,055 | 866   | 1,873 | 9,794  |

## About Multimodal Data   

⚠️ Due to potential copyright issues with the TV show *Friends*, we do not provide pre-segmented video clips.

If you need the multimodal data, consider one of the following options:

1. Use the acoustic and visual features we provide (see the first sketch after this list):
    - [`audio_embedding_6373.npy`](https://drive.google.com/file/d/1EhU2jFSr_Vi67Wdu1ARJozrTJtgiQrQI/view?usp=share_link): the embedding table composed of the 6373-dimensional acoustic features of each utterance, extracted with openSMILE
    - [`video_embedding_4096.npy`](https://drive.google.com/file/d/1NGSsiQYDTqgen_g9qndSuha29JA60x14/view?usp=share_link): the embedding table composed of the 4096-dimensional visual features of each utterance, extracted with 3D-CNN

2. Since ECF was constructed based on the MELD dataset, you can download the raw video clips from [MELD](https://github.com/declare-lab/MELD).
Most utterances in ECF align with MELD. However, **we made certain modifications to MELD's raw data while constructing ECF, including but not limited to editing utterance text, adjusting timestamps, and adding or removing utterances**. As a result, some timestamps in ECF have been corrected relative to MELD, and some utterances are new and do not appear in MELD at all. For this reason, we recommend option (3) if feasible.

3. Download the raw videos of _Friends_, and use the FFmpeg toolkit to extract the audio-visual clip of each utterance based on the timestamps we provide (see the second sketch after this list).
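
For option (1), here is a minimal loading sketch, assuming each `.npy` file is a standard NumPy array with one row per utterance; consult the MECPE repository for the exact utterance-to-row mapping.

```python
# Minimal sketch for option (1): load the pre-extracted feature tables.
# Assumption: each .npy file is a plain NumPy array with one row per
# utterance; see the MECPE repository for the utterance-to-row mapping.
import numpy as np

audio_emb = np.load("audio_embedding_6373.npy")  # openSMILE acoustic features
video_emb = np.load("video_embedding_4096.npy")  # 3D-CNN visual features

print(audio_emb.shape)  # expected: (num_utterances, 6373)
print(video_emb.shape)  # expected: (num_utterances, 4096)

# Look up the feature vectors of a single utterance by its row index.
utt_idx = 0  # hypothetical index of the first utterance
audio_vec, video_vec = audio_emb[utt_idx], video_emb[utt_idx]
```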
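
For option (3), here is a hypothetical helper that cuts one utterance clip out of a full episode with FFmpeg; the file names and the `HH:MM:SS.mmm` timestamp format are assumptions to adapt to your local copies of the raw videos and annotations.

```python
# Hypothetical helper for option (3): cut one utterance clip out of a full
# episode with FFmpeg, using the start/end timestamps from the ECF
# annotations. File names and timestamp format are assumptions.
import subprocess

def extract_clip(episode_path: str, start: str, end: str, out_path: str) -> None:
    subprocess.run(
        [
            "ffmpeg",
            "-i", episode_path,  # full episode video
            "-ss", start,        # utterance start timestamp
            "-to", end,          # utterance end timestamp
            "-c", "copy",        # stream copy: fast, no re-encoding
            out_path,
        ],
        check=True,  # raise if FFmpeg exits with an error
    )

# Example call with made-up paths and timestamps:
extract_clip("friends_s01e01.mp4", "00:01:23.456", "00:01:25.789", "dia1_utt1.mp4")
```

Note that with `-c copy` FFmpeg cuts at keyframes, so clip boundaries may be slightly off; drop that flag and let FFmpeg re-encode if you need frame-accurate cuts.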


## Citation

If you find ECF useful for your research, please cite our papers using the following BibTeX entries:

```bibtex
@article{wang2023multimodal,
  author={Wang, Fanfan and Ding, Zixiang and Xia, Rui and Li, Zhaoyu and Yu, Jianfei},
  journal={IEEE Transactions on Affective Computing},
  title={Multimodal Emotion-Cause Pair Extraction in Conversations},
  year={2023},
  volume={14},
  number={3},
  pages={1832--1844},
  doi={10.1109/TAFFC.2022.3226559}
}

@inproceedings{wang2024SemEval,
  author={Wang, Fanfan and Ma, Heqing and Xia, Rui and Yu, Jianfei and Cambria, Erik},
  title={SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations},
  booktitle={Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)},
  month={June},
  year={2024},
  address={Mexico City, Mexico},
  publisher={Association for Computational Linguistics},
  pages={2022--2033},
  url={https://aclanthology.org/2024.semeval2024-1.273}
}
```