---
license: cc-by-4.0
dataset_info:
  features:
  - name: question-type
    dtype: string
  - name: description
    dtype: string
  - name: model-1
    dtype: string
  - name: track-1-id
    dtype: int64
  - name: track-1-begin
    dtype: string
  - name: track-1-end
    dtype: string
  - name: model-2
    dtype: string
  - name: track-2-id
    dtype: int64
  - name: track-2-begin
    dtype: string
  - name: track-2-end
    dtype: string
  - name: answer
    dtype: int64
  splits:
  - name: train
    num_bytes: 2396604
    num_examples: 15600
  download_size: 295100
  dataset_size: 2396604
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
tags:
- music
size_categories:
- 10K<n<100K
---

```python
from datasets import load_dataset
dataset = load_dataset('disco-eth/AIME-survey')
```

# AIME Survey: AI Music Evaluation Dataset

This survey dataset accompanies the [AIME audio dataset](https://huggingface.co/datasets/disco-eth/AIME).

The AIME Survey dataset consists of 15,600 pairwise audio comparisons rated by more than 2,500 human participants, assessing the music quality and text-audio alignment of 12 state-of-the-art music generation models (as of July 2024). Each comparison was made between 10-second snippets of the audio tracks.

The dataset contains the following fields:

- **question-type**: The type of question used to evaluate the two audio tracks; either 'Text-Audio Alignment' or 'Music Quality'.
- **description**: The tag-based music description that was used to generate the tracks.
- **model-1**: The music generation model that generated track-1.
- **track-1-id**: The ID for track-1. This corresponds to the IDs in the AIME audio dataset.
- **track-1-begin**: The timestamp at which the audio snippet from track-1 begins.
- **track-1-end**: The timestamp at which the audio snippet from track-1 ends.
- **model-2**: The music generation model that generated track-2.
- **track-2-id**: The ID for track-2. This corresponds to the IDs in the AIME audio dataset.
- **track-2-begin**: The timestamp at which the audio snippet from track-2 begins.
- **track-2-end**: The timestamp at which the audio snippet from track-2 ends.
- **answer**: Whether the participant preferred the audio snippet from track-1 (answer=1) or track-2 (answer=2).
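Because each row records which of two models' snippets was preferred, per-model preference counts can be tallied directly from the `model-1`, `model-2`, and `answer` fields. A minimal sketch (the rows below are illustrative placeholders, not real dataset entries):

```python
from collections import defaultdict

def win_counts(rows):
    """Tally how often each model's snippet was preferred.

    Each row is a dict with the dataset's fields: answer=1 means the
    participant preferred track-1, answer=2 means track-2.
    """
    wins = defaultdict(int)
    for row in rows:
        winner = row["model-1"] if row["answer"] == 1 else row["model-2"]
        wins[winner] += 1
    return dict(wins)

# Illustrative rows (hypothetical model names, not real entries):
sample = [
    {"model-1": "model-a", "model-2": "model-b", "answer": 1},
    {"model-1": "model-a", "model-2": "model-b", "answer": 2},
    {"model-1": "model-b", "model-2": "model-c", "answer": 1},
]
print(win_counts(sample))  # {'model-a': 1, 'model-b': 2}
```

The same function works on the loaded dataset, e.g. `win_counts(dataset["train"])`, optionally after filtering rows by `question-type`.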

For more information, or to cite our work, please see [Benchmarking Music Generation Models and Metrics via Human Preference Studies](https://openreview.net/forum?id=105yqGIpVW).