poltextlab committed
Commit f1f01ed
Parent(s): 54cbed9
Upload README.md with huggingface_hub
README.md CHANGED
@@ -12,7 +12,7 @@ metrics:
 - accuracy
 - f1-score
 ---
-#
+# xlm-roberta-large-hungarian-media-cap-v3
 ## Model description
 An `xlm-roberta-large` model finetuned on hungarian training data containing texts of the `media` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
 
@@ -46,7 +46,7 @@ dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.col
 
 #### Inference using the Trainer class
 ```python
-model = AutoModelForSequenceClassification.from_pretrained('poltextlab/
+model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-hungarian-media-cap-v3',
     num_labels=22,
     problem_type="multi_label_classification")
 
@@ -68,7 +68,7 @@ predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).re
 ```
 
 ### Fine-tuning procedure
-`
+`xlm-roberta-large-hungarian-media-cap-v3` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
 ```python
 training_args = TrainingArguments(
     output_dir=f"../model/{model_dir}/tmp/",
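The inference code shown in the diff decodes predictions with `np.argmax` over the class probabilities followed by a `CAP_NUM_DICT` lookup that turns model output indices into CAP major topic codes. A minimal sketch of that decoding step, using a hypothetical three-entry mapping (the model card's real `CAP_NUM_DICT` covers all 22 labels):

```python
import numpy as np

# Hypothetical excerpt of the index -> CAP major topic code mapping;
# the real CAP_NUM_DICT in the model card covers all 22 labels.
CAP_NUM_DICT = {0: 1, 1: 2, 2: 3}

# Toy probabilities for two documents over three labels.
probs = np.array([
    [0.1, 0.7, 0.2],   # argmax -> index 1 -> CAP code 2
    [0.8, 0.1, 0.1],   # argmax -> index 0 -> CAP code 1
])

# Same decode as the model card: highest-probability index, then dict lookup.
indices = np.argmax(probs, axis=1)
predicted = [CAP_NUM_DICT[int(i)] for i in indices]
print(predicted)  # [2, 1]
```

The model card wraps this in a `pd.DataFrame(...).replace({0: CAP_NUM_DICT})` call; the plain-list version above shows the same index-to-code mapping without the pandas dependency.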