
Maverick-mes PreCo

Official weights for Maverick-mes trained on PreCo, based on DeBERTa-large. This model achieves 87.4 average CoNLL-F1 on the PreCo coreference resolution dataset.

Other models available on the SapienzaNLP Hugging Face hub:

| hf_model_name | Training dataset | Avg CoNLL-F1 | Singletons |
| --- | --- | --- | --- |
| `sapienzanlp/maverick-mes-ontonotes` | OntoNotes | 83.6 | No |
| `sapienzanlp/maverick-mes-litbank` | LitBank | 78.0 | Yes |
| `sapienzanlp/maverick-mes-preco` | PreCo | 87.4 | Yes |

N.B.: each dataset follows different annotation guidelines; choose the model that best matches your use case.
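The sketch below shows how this checkpoint can be loaded and run. It assumes the `maverick-coref` pip package; the exact keyword arguments (`hf_name_or_path`, `device`) and output keys follow that package's README and may differ across versions.

```python
# Minimal usage sketch, assuming the `maverick-coref` pip package is installed:
#   pip install maverick-coref
from maverick import Maverick

# Load the PreCo checkpoint from the Hugging Face Hub.
model = Maverick(hf_name_or_path="sapienzanlp/maverick-mes-preco", device="cpu")

text = (
    "Barack Obama visited Paris. He met the French president, "
    "who welcomed him at the Élysée Palace."
)

# predict() takes raw text and returns the predicted coreference clusters.
result = model.predict(text)

# Key names below follow the maverick-coref README and are an assumption,
# not guaranteed to be stable across releases.
for cluster in result["clusters_token_text"]:
    print(cluster)
```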

Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends

ACL 2024 paper · License: CC BY-NC 4.0 · Available as a pip package and on GitHub.

Citation

@inproceedings{martinelli-etal-2024-maverick,
    title     = "Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends",
    author    = "Martinelli, Giuliano and Barba, Edoardo and Navigli, Roberto",
    booktitle = "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2024)",
    year      = "2024",
    address   = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
}