---
task_categories:
- automatic-speech-recognition
multilinguality:
- multilingual
language:
- en
- fr
- de
- es
tags:
- music
- lyrics
- evaluation
- benchmark
- transcription
- pnc
pretty_name: 'Jam-ALT: A Readability-Aware Lyrics Transcription Benchmark'
paperswithcode_id: jam-alt
configs:
- config_name: all
  data_files:
  - split: test
    path:
      - metadata.jsonl
      - subsets/*/audio/*.mp3
  default: true
- config_name: de
  data_files:
  - split: test
    path:
      - subsets/de/metadata.jsonl
      - subsets/de/audio/*.mp3
- config_name: en
  data_files:
  - split: test
    path:
      - subsets/en/metadata.jsonl
      - subsets/en/audio/*.mp3
- config_name: es
  data_files:
  - split: test
    path:
      - subsets/es/metadata.jsonl
      - subsets/es/audio/*.mp3
- config_name: fr
  data_files:
  - split: test
    path:
      - subsets/fr/metadata.jsonl
      - subsets/fr/audio/*.mp3
---

# Jam-ALT: A Readability-Aware Lyrics Transcription Benchmark

## Dataset description

* **Project page:** https://audioshake.github.io/jam-alt/
* **Source code:** https://github.com/audioshake/alt-eval
* **Paper (ISMIR 2024):** https://doi.org/10.5281/zenodo.14877443
* **Paper (arXiv):** https://arxiv.org/abs/2408.06370
* **Extended abstract (ISMIR 2023 LBD):** https://arxiv.org/abs/2311.13987

Jam-ALT is a revision of the [**JamendoLyrics**](https://github.com/f90/jamendolyrics) dataset (80 songs in 4 languages), adapted for use as an **automatic lyrics transcription** (**ALT**) benchmark.

The lyrics have been revised according to the newly compiled [annotation guidelines](GUIDELINES.md), which include rules about spelling and formatting, as well as punctuation and capitalization (PnC).
The audio is identical to the JamendoLyrics dataset.

**Note:** The dataset is not time-aligned as it does not easily map to the timestamps from JamendoLyrics. To evaluate **automatic lyrics alignment** (**ALA**), please use [JamendoLyrics](https://github.com/f90/jamendolyrics) directly.

See the [project website](https://audioshake.github.io/jam-alt/) for details.

## Loading the data

```python
from datasets import load_dataset
dataset = load_dataset("audioshake/jam-alt", split="test")
```

A subset is defined for each language (`en`, `fr`, `de`, `es`);
for example, use `load_dataset("audioshake/jam-alt", "es")` to load only the Spanish songs.

To control how the audio is decoded, cast the `audio` column using `dataset.cast_column("audio", datasets.Audio(...))`.
Useful arguments to `datasets.Audio()` are:
- `sampling_rate` and `mono=True` to control the sampling rate and number of channels.
- `decode=False` to skip decoding the audio and just get the MP3 file paths and contents.

The `load_dataset` function also accepts a `columns` parameter, which is useful, for example, if you want to skip downloading the audio (see the example below).

## Running the benchmark

The evaluation is implemented in our [`alt-eval` package](https://github.com/audioshake/alt-eval):
```python
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("audioshake/jam-alt", revision="v1.2.0", split="test")
# transcriptions: list[str]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```

For example, the following code can be used to evaluate Whisper:
```python
import datasets
import whisper
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("audioshake/jam-alt", revision="v1.2.0", split="test")
dataset = dataset.cast_column("audio", datasets.Audio(decode=False))  # Get the raw audio file, let Whisper decode it

model = whisper.load_model("tiny")
transcriptions = [
  "\n".join(s["text"].strip() for s in model.transcribe(a["path"])["segments"])
  for a in dataset["audio"]
]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```
Alternatively, if you already have transcriptions, you might prefer to skip loading the `audio` column:
```python
dataset = load_dataset("audioshake/jam-alt", revision="v1.2.0", split="test", columns=["name", "text", "language", "license_type"])
```

## Citation

When using the benchmark, please cite [our paper](https://doi.org/10.5281/zenodo.14877443) as well as the original [JamendoLyrics paper](https://arxiv.org/abs/2306.07744):
```bibtex
@inproceedings{cifka-2024-jam-alt,
  author       = {Ondrej C{\'{\i}}fka and
                  Hendrik Schreiber and
                  Luke Miner and
                  Fabian{-}Robert St{\"{o}}ter},
  title        = {Lyrics Transcription for Humans: {A} Readability-Aware Benchmark},
  booktitle    = {Proceedings of the 25th International Society for 
                  Music Information Retrieval Conference},
  pages        = {737--744},
  year         = 2024,
  publisher    = {ISMIR},
  doi          = {10.5281/ZENODO.14877443},
  url          = {https://doi.org/10.5281/zenodo.14877443}
}
@inproceedings{durand-2023-contrastive,
  author={Durand, Simon and Stoller, Daniel and Ewert, Sebastian},
  booktitle={2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, 
  title={Contrastive Learning-Based Audio to Lyrics Alignment for Multiple Languages}, 
  year={2023},
  pages={1-5},
  address={Rhodes Island, Greece},
  doi={10.1109/ICASSP49357.2023.10096725}
}
```