---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "textual entailment"
- "teca"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/teca"
metrics:
- "accuracy"
model-index:
- name: roberta-base-ca-v2-cased-te
  results:
  - task:
      type: text-classification
    dataset:
      type: projecte-aina/teca
      name: TECA
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8314
widget:
- text: "M'agrades. T'estimo."
- text: "M'agrada el sol i la calor. A la Garrotxa plou molt."
- text: "El llibre va caure per la finestra. El llibre va sortir volant."
- text: "El meu aniversari és el 23 de maig. Faré anys a finals de maig."
---

# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for Textual Entailment

## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Use](#how-to-use)
- [Training](#training)
  - [Training Data](#training-data)
  - [Training Procedure](#training-procedure)
- [Evaluation](#evaluation)
  - [Variables and Metrics](#variables-and-metrics)
  - [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)

## Model Description

The **roberta-base-ca-v2-cased-te** is a Textual Entailment (TE) model for the Catalan language, fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-sized corpus collected from publicly available corpora and crawlers (see the roberta-base-ca-v2 model card for more details).

## Intended Uses and Limitations

The **roberta-base-ca-v2-cased-te** model can be used to recognize Textual Entailment (TE). The model is limited by its training dataset and may not generalize well to all use cases.

## How to Use

Here is how to use this model:

```python
from pprint import pprint

from transformers import pipeline

# Load the fine-tuned TE model as a text-classification pipeline.
nlp = pipeline("text-classification", model="projecte-aina/roberta-base-ca-v2-cased-te")

# Premise followed by hypothesis ("I like the sun and the heat. It rains a lot in La Garrotxa.")
example = "M'agrada el sol i la calor. A la Garrotxa plou molt."

te_results = nlp(example)
pprint(te_results)
```

## Training

### Training Data

We used the Catalan TE dataset [TECA](https://huggingface.co/datasets/projecte-aina/teca) for training and evaluation.

### Training Procedure

The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric (accuracy) on the corresponding development set, and finally evaluated it on the test set.

## Evaluation

### Variables and Metrics

This model was fine-tuned maximizing accuracy.

### Evaluation Results

We evaluated the _roberta-base-ca-v2-cased-te_ on the TECA test set against standard multilingual and monolingual baselines:

| Model                       | TECA (Accuracy) |
| --------------------------- | :-------------: |
| roberta-base-ca-v2-cased-te | **83.14**       |
| BERTa                       | 79.26           |
| mBERT                       | 74.63           |
| XLM-RoBERTa                 | 33.30           |

For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
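
As a rough illustration of the procedure described above, a fine-tuning run with the reported hyperparameters might look like the following sketch. This is not the official CLUB script; the TECA column names (`premise`, `hypothesis`, `label`) and the three-way label set are assumptions about the dataset schema.

```python
# Hypothetical fine-tuning sketch; the official scripts live in the CLUB repository.
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "projecte-aina/roberta-base-ca-v2"
tokenizer = AutoTokenizer.from_pretrained(base)
# num_labels=3 assumes TECA's entailment/neutral/contradiction label set.
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=3)

teca = load_dataset("projecte-aina/teca")

def tokenize(batch):
    # Encode premise/hypothesis pairs with the model's sentence-pair template.
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)

teca = teca.map(tokenize, batched=True)

def compute_accuracy(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, axis=-1) == labels).mean())}

args = TrainingArguments(
    output_dir="roberta-base-ca-v2-cased-te",
    per_device_train_batch_size=16,    # batch size reported in the card
    learning_rate=5e-5,                # learning rate reported in the card
    num_train_epochs=5,                # epochs reported in the card
    evaluation_strategy="epoch",       # renamed `eval_strategy` in recent transformers
    save_strategy="epoch",
    load_best_model_at_end=True,       # keep the best dev-set checkpoint...
    metric_for_best_model="accuracy",  # ...selected by downstream accuracy
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=teca["train"],
    eval_dataset=teca["validation"],
    compute_metrics=compute_accuracy,
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```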
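
Similarly, here is a minimal sketch of how the accuracy figure above could be reproduced on the TECA test set. It assumes the same column names as the previous sketch and that the fine-tuned model's `label2id` mapping matches the integer labels of the dataset; if it does not, the labels would need to be remapped first.

```python
# Hypothetical evaluation sketch for the results table above.
from datasets import load_dataset
from transformers import pipeline

nlp = pipeline("text-classification",
               model="projecte-aina/roberta-base-ca-v2-cased-te")

test = load_dataset("projecte-aina/teca", split="test")

# The text-classification pipeline accepts explicit premise/hypothesis pairs.
preds = nlp([{"text": p, "text_pair": h}
             for p, h in zip(test["premise"], test["hypothesis"])],
            batch_size=32)

# Assumes config.label2id uses the same integer ids as the dataset's labels.
label2id = nlp.model.config.label2id
pred_ids = [label2id[p["label"]] for p in preds]
accuracy = sum(p == g for p, g in zip(pred_ids, test["label"])) / len(test)
print(f"TECA test accuracy: {accuracy:.4f}")
```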

## Licensing Information

[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Citation Information

If you use any of these resources (datasets or models) in your work, please cite our latest paper:

```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi and
      Carrino, Casimiro Pio and
      Rodriguez-Penagos, Carlos and
      de Gibert Bonet, Ona and
      Armentano-Oller, Carme and
      Gonzalez-Agirre, Aitor and
      Melero, Maite and
      Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    doi = "10.18653/v1/2021.findings-acl.437",
    pages = "4933--4946",
}
```

## Funding

This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).