Sheet Music Transformer (base model, fine-tuned on the GrandStaff dataset)

This is the SMT model fine-tuned on the Camera GrandStaff dataset for pianoform transcription. The code for the model is hosted in this repository.

Model description

The SMT model consists of a vision encoder (ConvNeXt) and a text decoder (a classic Transformer). Given an image of a music system, the encoder first encodes the image into a tensor of embeddings (of shape (batch_size, seq_len, hidden_size)), after which the decoder autoregressively generates the transcription text, conditioned on the encoder's output.

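The snippet below is a minimal, self-contained sketch of the encoder-decoder flow described above, not the repository's actual implementation or API: the class names, layer sizes, and special-token ids are hypothetical stand-ins used only to illustrate how the image encoding conditions the autoregressive decoder.

import torch
import torch.nn as nn

HIDDEN, VOCAB, MAX_LEN = 256, 512, 128  # illustrative sizes, not the real ones
BOS, EOS = 0, 1                         # hypothetical special token ids


class ToyEncoder(nn.Module):
    """Stands in for the ConvNeXt vision encoder."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, HIDDEN, kernel_size=4, stride=4),
            nn.GELU(),
            nn.Conv2d(HIDDEN, HIDDEN, kernel_size=4, stride=4),
        )

    def forward(self, image):                     # image: (B, 1, H, W)
        feats = self.conv(image)                  # (B, HIDDEN, H/16, W/16)
        return feats.flatten(2).transpose(1, 2)   # (B, seq_len, HIDDEN)


class ToyDecoder(nn.Module):
    """Stands in for the classic Transformer text decoder."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        layer = nn.TransformerDecoderLayer(HIDDEN, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.lm_head = nn.Linear(HIDDEN, VOCAB)

    def forward(self, tokens, memory):            # tokens: (B, T), memory: (B, S, HIDDEN)
        tgt = self.embed(tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        out = self.decoder(tgt, memory, tgt_mask=mask)
        return self.lm_head(out)                  # (B, T, VOCAB)


@torch.no_grad()
def transcribe(encoder, decoder, image):
    """Greedy autoregressive decoding conditioned on the image encoding."""
    memory = encoder(image)
    tokens = torch.full((image.size(0), 1), BOS, dtype=torch.long)
    for _ in range(MAX_LEN):
        logits = decoder(tokens, memory)
        next_tok = logits[:, -1].argmax(-1, keepdim=True)
        tokens = torch.cat([tokens, next_tok], dim=1)
        if (next_tok == EOS).all():
            break
    return tokens


# Example usage with a dummy single-channel score image
image = torch.randn(1, 1, 256, 1024)
print(transcribe(ToyEncoder(), ToyDecoder(), image).shape)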

Intended uses & limitations

This model is fine-tuned on the GrandStaff dataset, so its use is limited to transcribing pianoform images only. A hedged usage sketch follows below.
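As an illustration only, the snippet below shows one plausible way to load the checkpoint from the Hugging Face Hub and run it on an image of a single pianoform system. Whether this checkpoint registers an AutoModel mapping, and the name of its prediction method, are assumptions; the repository's own documentation is authoritative.

# Hypothetical usage sketch; the exact loading path and prediction method
# for this checkpoint may differ -- consult the repository for the real API.
import cv2
import torch
from transformers import AutoModel

device = "cuda" if torch.cuda.is_available() else "cpu"

# trust_remote_code pulls the custom model code published with the checkpoint,
# assuming the repository registers one (an assumption, not a guarantee).
model = AutoModel.from_pretrained("antoniorv6/smt-grandstaff", trust_remote_code=True).to(device)

# Load a grayscale image of one pianoform system and turn it into a tensor.
img = cv2.imread("system.png", cv2.IMREAD_GRAYSCALE)
x = torch.from_numpy(img).float().unsqueeze(0).unsqueeze(0).to(device) / 255.0

# `predict` is a placeholder name for whatever decoding entry point the model exposes.
transcription = model.predict(x)
print(transcription)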

BibTeX entry and citation info

@misc{RiosVila2024,
      title={Sheet Music Transformer: End-To-End Optical Music Recognition Beyond Monophonic Transcription}, 
      author={Antonio Ríos-Vila and Jorge Calvo-Zaragoza and Thierry Paquet},
      year={2024},
      eprint={2402.07596},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2402.07596}, 
}