|
--- |
|
license: mit |
|
language: |
|
- en |
|
metrics: |
|
- accuracy |
|
pipeline_tag: text-generation |
|
--- |
|
# 🎼 ChatMusician: Fostering Intrinsic Musical Abilities Into LLM
|
|
|
[**🌐 DemoPage**](https://ezmonyi.github.io/ChatMusician/) | [**🤗 Dataset**](https://huggingface.co/datasets/m-a-p/MusicPile) | [**🤗 Benchmark**](https://huggingface.co/datasets/m-a-p/MusicTheoryBench) | [**📖 arXiv**](http://arxiv.org/abs/2402.16153) | [**Code**](https://github.com/hf-lin/ChatMusician)
|
|
|
## 🔥News
|
- **🔥[2023-12-10]: The release of ChatMusician's demo, code, model, data, and benchmark. 🎉**
|
- [2023-11-30]: Check out another awesome project, [MMMU](https://huggingface.co/datasets/MMMU/MMMU/), which includes multimodal music reasoning.
|
|
|
## Introduction |
|
|
|
While Large Language Models (LLMs) demonstrate impressive capabilities in text generation, we find that this ability has yet to generalize to music, humanity's creative language. We introduce **ChatMusician**, **an open-source LLM that integrates intrinsic musical abilities**.
|
|
|
It is built by continually pre-training and fine-tuning LLaMA2 on ABC notation, a text-compatible music representation, so that music is treated as a second language. ChatMusician can understand and generate music with a pure text tokenizer, without any external multi-modal neural structures or tokenizers. Interestingly, endowing the model with musical abilities does not harm its language abilities; it even achieves a slightly higher MMLU score. Our model is capable of composing well-structured, full-length music conditioned on texts, chords, melodies, motifs, musical forms, etc., surpassing the GPT-4 baseline. On our meticulously curated college-level music understanding benchmark, MusicTheoryBench, ChatMusician surpasses LLaMA2 and GPT-3.5 in the zero-shot setting by a noticeable margin. Our work reveals that LLMs can be an excellent compressor for music, but significant territory remains to be conquered. Code, data, model, and benchmark are open-sourced.
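
For readers unfamiliar with ABC notation, the snippet below is a small illustrative tune (hand-written for this card, not model output). Header fields give the tune index (`X`), meter (`M`), default note length (`L`), and key (`K`); the melody follows in bars, encoded entirely as plain text.

```
X:1
M:4/4
L:1/8
K:G
|: G2 B2 d2 B2 | c2 A2 F2 A2 | G2 B2 d2 g2 | f2 d2 e4 :|
```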
|
|
|
|
|
|
**ChatMusician-Base is a pretrained model. [ChatMusician](https://huggingface.co/m-a-p/ChatMusician) is recommended for producing symbolic music.** |
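
As a quick smoke test, a minimal inference sketch with Hugging Face `transformers` might look like the following. This is a sketch rather than the official usage snippet: the prompt and generation settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("m-a-p/ChatMusician")
model = AutoModelForCausalLM.from_pretrained(
    "m-a-p/ChatMusician", torch_dtype=torch.float16, device_map="auto"
)

# Illustrative prompt; the exact conditioning formats (chords, motifs, forms)
# may differ from what the model was trained on.
prompt = "Develop a musical piece in ABC notation using this chord progression: C, Am, F, G."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```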
|
|
|
## Training Data |
|
|
|
ChatMusician-Base is pretrained on the 🤗 [MusicPile](https://huggingface.co/datasets/m-a-p/MusicPile), which is the first pretraining corpus for **developing musical abilities** in large language models. Check out the dataset card for more details.
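
To get a feel for the corpus, one can stream a few records with the `datasets` library. This is a minimal sketch; the split name and field layout are assumptions, so consult the dataset card for the actual schema.

```python
from datasets import load_dataset

# Streaming avoids downloading the full corpus up front.
pile = load_dataset("m-a-p/MusicPile", split="train", streaming=True)
for record in pile.take(3):
    print(record)  # inspect raw fields (e.g., ABC notation mixed with text)
```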
|
|
|
## Training Procedure |
|
|
|
We initialized an fp16-precision ChatMusician-Base from the LLaMA2-7B-Base weights and applied a continual pre-training plus fine-tuning pipeline. LoRA adapters were integrated into the attention and MLP layers, with additional training on embeddings and all linear layers. The maximum sequence length was 2048. We utilized 16 80GB-A800 GPUs for one epoch of pre-training. DeepSpeed was employed for memory efficiency, and the AdamW optimizer was used with a 1e-4 learning rate and a cosine scheduler with 5% warmup. Gradient clipping was set at 1.0. The LoRA rank (dimension), alpha, and dropout were set to 64, 16, and 0.1, respectively, with a batch size of 8.
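
For concreteness, the LoRA hyperparameters above could be expressed with the PEFT library roughly as follows. The `target_modules` list is an assumption inferred from "attention and MLP layers" and standard LLaMA2 module names, not taken from the released training code.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=64,              # LoRA rank ("dimension")
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=[   # assumed LLaMA2 names for attention + MLP projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    modules_to_save=["embed_tokens", "lm_head"],  # embeddings trained as well
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```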
|
|
|
## Evaluation |
|
|
|
1. Music understanding abilities are evaluated on the [MusicTheoryBench](https://huggingface.co/datasets/m-a-p/MusicTheoryBench). The figure below shows zero-shot accuracy on MusicTheoryBench for GPT-3.5, GPT-4, LLaMA2-7B-Base, ChatMusician-Base, and ChatMusician. The blue bars represent performance on the music-knowledge metric and the red bars the music-reasoning metric; the dashed line marks the random baseline at 25%. A scoring sketch follows this list.

   ![MusicTheoryBench_result](./MusicTheoryBench_result_plt.png)
|
2. General language abilities of ChatMusician are evaluated on the [Massive Multitask Language Understanding (MMLU) dataset](https://huggingface.co/datasets/lukaemon/mmlu). |
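
The sketch below illustrates zero-shot multiple-choice scoring of the kind reported above. The split name and field names (`question`, `choices`, `answer`) are assumptions for illustration; consult the MusicTheoryBench dataset card for the real schema.

```python
import random
from datasets import load_dataset

bench = load_dataset("m-a-p/MusicTheoryBench", split="test")  # split name assumed

def accuracy(predict, examples):
    """`predict` maps (question, choices) to the index of the chosen option."""
    hits = sum(predict(ex["question"], ex["choices"]) == ex["answer"] for ex in examples)
    return hits / len(examples)

# Random guessing over four options recovers the dashed 25% baseline.
def random_guess(question, choices):
    return random.randrange(len(choices))

print(f"random baseline: {accuracy(random_guess, bench):.2%}")
```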
|
|
|
|
|
## Limitations |
|
|
|
The current iteration of ChatMusician predominantly generates music in an Irish style, as a significant portion of the training data is sourced from that genre. The model also exhibits hallucinations and has limited support for open-ended music generation tasks, owing to the lack of diversity in the handcrafted music instructions.
|
|
|
## Citation |
|
If you find our work helpful, feel free to cite us:
|
```
@misc{yuan2024chatmusician,
    title={ChatMusician: Understanding and Generating Music Intrinsically with LLM},
    author={Ruibin Yuan and Hanfeng Lin and Yi Wang and Zeyue Tian and Shangda Wu and Tianhao Shen and Ge Zhang and Yuhang Wu and Cong Liu and Ziya Zhou and Ziyang Ma and Liumeng Xue and Ziyu Wang and Qin Liu and Tianyu Zheng and Yizhi Li and Yinghao Ma and Yiming Liang and Xiaowei Chi and Ruibo Liu and Zili Wang and Pengfei Li and Jingcheng Wu and Chenghua Lin and Qifeng Liu and Tao Jiang and Wenhao Huang and Wenhu Chen and Emmanouil Benetos and Jie Fu and Gus Xia and Roger Dannenberg and Wei Xue and Shiyin Kang and Yike Guo},
    year={2024},
    eprint={2402.16153},
    archivePrefix={arXiv},
    primaryClass={cs.SD}
}
```