---
library_name: transformers
license: mit
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
model-index:
- name: fine_tune_bert_output
results: []
datasets:
- unimelb-nlp/wikiann
language:
- es
metrics:
- recall
- precision
- f1
pipeline_tag: token-classification
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6478787f79f2d49511ec4f5e/zlC7cw2dkAsm-J_cNOpmE.png)
# spanish_bert_based_ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the Spanish split of the [WikiANN](https://huggingface.co/datasets/unimelb-nlp/wikiann) dataset.
It achieves the following results on the evaluation set (a short sketch of how such entity-level metrics are computed follows the list):
- Loss: 0.3320
- Overall Precision: 0.9051
- Overall Recall: 0.9121
- Overall F1: 0.9086
- Overall Accuracy: 0.9577
- Loc F1: 0.9190
- Org F1: 0.8663
- Per F1: 0.9367
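The overall scores above are entity-level (span) metrics of the kind computed by the `seqeval` library, which token-classification fine-tuning scripts typically use. The source does not state the exact evaluation code, so treat the following as a minimal sketch with made-up label sequences:

```python
from seqeval.metrics import classification_report, f1_score

# Made-up gold and predicted BIO sequences, one inner list per sentence.
y_true = [["B-PER", "I-PER", "O", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "B-ORG", "B-LOC"]]

# seqeval scores whole entity spans, not individual tokens:
# a span counts as correct only if both its type and boundaries match.
print(f1_score(y_true, y_pred))              # 0.8 for this toy example
print(classification_report(y_true, y_pred))  # per-type precision/recall/F1
```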
## Labels
The following table lists the labels used by the model along with their corresponding indices; a sketch for reading this mapping from the model config follows the label descriptions.
| Index | Label |
|-------|---------|
| 0 | O |
| 1 | B-PER |
| 2 | I-PER |
| 3 | B-ORG |
| 4 | I-ORG |
| 5 | B-LOC |
| 6 | I-LOC |
### Label Descriptions
- **O**: Outside of a named entity.
- **B-PER**: Beginning of a person's name.
- **I-PER**: Inside a person's name.
- **B-ORG**: Beginning of an organization's name.
- **I-ORG**: Inside an organization's name.
- **B-LOC**: Beginning of a location name.
- **I-LOC**: Inside a location name.
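If you need this mapping programmatically, it can be read from the checkpoint's config (assuming the uploaded config carries the same `id2label` as the table above):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("syubraj/spanish_bert_based_ner")

# id2label maps class indices to the BIO tags listed in the table.
for idx in sorted(config.id2label):
    print(idx, config.id2label[idx])
```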
## Inference Example
```python
from transformers import pipeline

# Load the model
ner_pipeline = pipeline("ner", model="syubraj/spanish_bert_based_ner")

# Example text: "Elon Musk lives in the United States and owns Space X, Tesla and Starlink"
text = "Elon Musk vive en Estados Unidos y es dueño de Space X, Tesla y Starlink"

# Perform inference
entities = ner_pipeline(text)
for ent in entities:
    print(f"Word: {ent['word']} | Label: {ent['entity']}")
```
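By default the pipeline returns one prediction per WordPiece sub-token, so a name like "Starlink" may come back in pieces. Passing `aggregation_strategy="simple"` groups sub-tokens into whole entity spans; note that the result keys then change from `entity` to `entity_group`:

```python
from transformers import pipeline

ner_pipeline = pipeline(
    "ner",
    model="syubraj/spanish_bert_based_ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

for ent in ner_pipeline("Elon Musk vive en Estados Unidos"):
    # Aggregated results expose 'entity_group' instead of 'entity'.
    print(f"{ent['word']} | {ent['entity_group']} | {ent['score']:.3f}")
```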
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
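A minimal sketch of how these settings map onto `transformers.TrainingArguments` (the output directory name comes from the model-index entry; Adam with the listed betas and epsilon is the `Trainer` default optimizer, so it needs no explicit flag):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="fine_tune_bert_output",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```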
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | Loc F1 | Org F1 | Per F1 |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|:------:|:------:|:------:|
| 0.2713 | 0.8 | 1000 | 0.2236 | 0.8498 | 0.8672 | 0.8584 | 0.9401 | 0.8834 | 0.8019 | 0.8790 |
| 0.1537 | 1.6 | 2000 | 0.1909 | 0.8772 | 0.8943 | 0.8857 | 0.9495 | 0.9002 | 0.8369 | 0.9164 |
| 0.1152 | 2.4 | 3000 | 0.2095 | 0.8848 | 0.8981 | 0.8914 | 0.9523 | 0.9039 | 0.8432 | 0.9220 |
| 0.0889 | 3.2 | 4000 | 0.2223 | 0.8978 | 0.8998 | 0.8988 | 0.9546 | 0.9080 | 0.8569 | 0.9290 |
| 0.0701 | 4.0 | 5000 | 0.2152 | 0.8937 | 0.9042 | 0.8989 | 0.9544 | 0.9113 | 0.8565 | 0.9246 |
| 0.0457 | 4.8 | 6000 | 0.2365 | 0.9017 | 0.9069 | 0.9043 | 0.9563 | 0.9164 | 0.8616 | 0.9310 |
| 0.0364 | 5.6 | 7000 | 0.2622 | 0.9037 | 0.9086 | 0.9061 | 0.9578 | 0.9148 | 0.8639 | 0.9365 |
| 0.0260        | 6.4   | 8000  | 0.2916          | 0.9037            | 0.9159         | 0.9097     | 0.9585           | 0.9183 | 0.8712 | 0.9366 |
| 0.0215 | 7.2 | 9000 | 0.2985 | 0.9022 | 0.9128 | 0.9074 | 0.9565 | 0.9178 | 0.8676 | 0.9323 |
| 0.0134        | 8.0   | 10000 | 0.3071          | 0.9040            | 0.9131         | 0.9085     | 0.9574           | 0.9198 | 0.8671 | 0.9344 |
| 0.0091 | 8.8 | 11000 | 0.3335 | 0.9056 | 0.9115 | 0.9085 | 0.9573 | 0.9175 | 0.8670 | 0.9373 |
| 0.0074 | 9.6 | 12000 | 0.3320 | 0.9051 | 0.9121 | 0.9086 | 0.9577 | 0.9190 | 0.8663 | 0.9367 |
### Framework versions
- Transformers 4.44.2
- PyTorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1 |