# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for Textual Entailment

## Table of Contents

- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Use](#how-to-use)
- [Training](#training)
  - [Training Data](#training-data)
  - [Training Procedure](#training-procedure)
- [Evaluation](#evaluation)
  - [Variables and Metrics](#variables-and-metrics)
  - [Evaluation Results](#evaluation-results)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Funding](#funding)
- [Contributions](#contributions)

## Model Description

**roberta-base-ca-v2-cased-te** is a Textual Entailment (TE) model for the Catalan language, fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-sized corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).

## Intended Uses and Limitations

The **roberta-base-ca-v2-cased-te** model can be used to recognize Textual Entailment (TE). The model is limited by its training dataset and may not generalize well to all use cases.

## How to Use

Here is how to use this model:
```python
from transformers import pipeline
from pprint import pprint

# Load the fine-tuned TE model with the text-classification pipeline
nlp = pipeline("text-classification", model="projecte-aina/roberta-base-ca-v2-cased-te")
# "I like the sun and the heat. It rains a lot in La Garrotxa."
example = "M'agrada el sol i la calor. A la Garrotxa plou molt."

te_results = nlp(example)
pprint(te_results)
```
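
Since a TE model scores a premise–hypothesis pair, you may prefer to pass the two sentences explicitly rather than as one concatenated string. A minimal sketch, assuming the standard `text`/`text_pair` dictionary input of the text-classification pipeline:

```python
from transformers import pipeline

nlp = pipeline("text-classification", model="projecte-aina/roberta-base-ca-v2-cased-te")

# Pass premise and hypothesis as an explicit sentence pair
te_result = nlp({"text": "M'agrada el sol i la calor.",    # premise
                 "text_pair": "A la Garrotxa plou molt."})  # hypothesis
print(te_result)
```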

## Training

### Training Data

We used the TE dataset in Catalan called [TECA](https://huggingface.co/datasets/projecte-aina/teca) for training and evaluation.

### Training Procedure

The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric on the corresponding development set, and finally evaluated it on the test set.
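
For reference, here is a minimal sketch of that fine-tuning setup with the Hugging Face `Trainer`. The TECA column names (`premise`, `hypothesis`, `label`) and the three-class label space are assumptions for illustration; the exact scripts live in the [GitHub repository](https://github.com/projecte-aina/club):

```python
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("projecte-aina/roberta-base-ca-v2")
model = AutoModelForSequenceClassification.from_pretrained(
    "projecte-aina/roberta-base-ca-v2", num_labels=3)  # assumed 3 TE classes

# Assumed column names: premise / hypothesis / label
teca = load_dataset("projecte-aina/teca")
teca = teca.map(
    lambda batch: tokenizer(batch["premise"], batch["hypothesis"], truncation=True),
    batched=True)

def compute_metrics(eval_pred):
    # Accuracy is the downstream metric used for checkpoint selection
    preds = np.argmax(eval_pred.predictions, axis=-1)
    return {"accuracy": float((preds == eval_pred.label_ids).mean())}

args = TrainingArguments(
    output_dir="roberta-base-ca-v2-cased-te",
    per_device_train_batch_size=16,  # batch size 16
    learning_rate=5e-5,              # learning rate 5e-5
    num_train_epochs=5,              # 5 epochs
    evaluation_strategy="epoch",     # evaluate on the dev set each epoch
    save_strategy="epoch",
    load_best_model_at_end=True,     # keep the best dev-set checkpoint
    metric_for_best_model="accuracy",
)

trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=teca["train"], eval_dataset=teca["validation"],
                  compute_metrics=compute_metrics)
trainer.train()
```

With `load_best_model_at_end`, the checkpoint with the highest development-set accuracy is restored before the final test-set evaluation.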

## Evaluation

### Variables and Metrics

This model was fine-tuned by maximizing accuracy.

### Evaluation Results

We evaluated the **roberta-base-ca-v2-cased-te** on the TECA test set against standard multilingual and monolingual baselines:

| Model | TECA (Accuracy) |
| ----- | --------------- |

For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
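
As a sketch of how such an accuracy figure can be reproduced with the pipeline shown above (the TECA column names are again assumed; the repository scripts are authoritative):

```python
from datasets import load_dataset
from transformers import pipeline

nlp = pipeline("text-classification",
               model="projecte-aina/roberta-base-ca-v2-cased-te")
test = load_dataset("projecte-aina/teca", split="test")

# Assumed columns: premise / hypothesis / label (integer class ids)
preds = nlp([{"text": ex["premise"], "text_pair": ex["hypothesis"]} for ex in test])
label2id = nlp.model.config.label2id  # map predicted label names back to ids
accuracy = sum(label2id[p["label"]] == ex["label"]
               for p, ex in zip(preds, test)) / len(test)
print(f"TECA test accuracy: {accuracy:.4f}")
```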

## Licensing Information

[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Citation Information

If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Evaluation on {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi  and
      Carrino, Casimiro Pio  and
      Rodriguez-Penagos, Carlos  and
      de Gibert Bonet, Ona  and
      Armentano-Oller, Carme  and
      Gonzalez-Agirre, Aitor  and
      Melero, Maite  and
      Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    pages = "4933--4946",
}
```

## Funding

This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/en/inici/index.html) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).

## Contributions

[N/A]