Model Description
This model is a fine-tuned version of distilroberta-base on the CoNLL-2003 dataset for Named Entity Recognition (NER) / token classification. It achieves the following results on the evaluation set:
- Loss: 0.0585
- F1: 0.9536
Model Performance
- 1st Place: This fine-tuned model tops the best previously reported score (F1: 94.6%) for Named Entity Recognition (NER) on CoNLL 2003 (English).
- 6th Place: This fine-tuned model ranks 6th on the conll2003 Token Classification leaderboard.
Model Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Load the fine-tuned checkpoint and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("jinhybr/distilroberta-ConLL2003")
model = AutoModelForTokenClassification.from_pretrained("jinhybr/distilroberta-ConLL2003")

# Build an NER pipeline that merges sub-word tokens into whole entities
nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)

example = "My name is Tao Jin and live in Canada"
ner_results = nlp(example)
print(ner_results)
```

Output:

```
[{'entity_group': 'PER', 'score': 0.99686015, 'word': ' Tao Jin', 'start': 11, 'end': 18}, {'entity_group': 'LOC', 'score': 0.9996836, 'word': ' Canada', 'start': 31, 'end': 37}]
```
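In newer Transformers releases, `grouped_entities=True` is deprecated in favor of `aggregation_strategy`, and the pipeline can also be built directly from the checkpoint id. A minimal equivalent sketch:

```python
# Equivalent shorthand; aggregation_strategy="simple" replaces the deprecated
# grouped_entities=True flag in newer Transformers versions.
from transformers import pipeline

nlp = pipeline("ner", model="jinhybr/distilroberta-ConLL2003", aggregation_strategy="simple")
print(nlp("My name is Tao Jin and live in Canada"))
```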
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6.0
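These hyperparameters correspond to the standard Trainer recipe for token classification. The following is a minimal reproduction sketch under assumed details (label-alignment scheme, output directory, per-epoch evaluation); it is not the author's original training script:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("conll2003")
label_list = dataset["train"].features["ner_tags"].feature.names

# RoBERTa-style tokenizers need add_prefix_space=True for pre-tokenized input
tokenizer = AutoTokenizer.from_pretrained("distilroberta-base", add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(
    "distilroberta-base", num_labels=len(label_list)
)

def tokenize_and_align_labels(examples):
    # Label only the first sub-token of each word; mask the rest with -100
    tokenized = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        previous_word = None
        labels = []
        for word_id in word_ids:
            if word_id is None or word_id == previous_word:
                labels.append(-100)
            else:
                labels.append(tags[word_id])
            previous_word = word_id
        all_labels.append(labels)
    tokenized["labels"] = all_labels
    return tokenized

tokenized_ds = dataset.map(tokenize_and_align_labels, batched=True)

# Hyperparameters from the list above; output_dir is an assumption
args = TrainingArguments(
    output_dir="distilroberta-ConLL2003",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    num_train_epochs=6.0,
    lr_scheduler_type="linear",
    seed=24,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_ds["train"],
    eval_dataset=tokenized_ds["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```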
Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|---|---|---|---|---|
| 0.1666 | 1.0 | 439 | 0.0621 | 0.9345 |
| 0.0499 | 2.0 | 878 | 0.0564 | 0.9391 |
| 0.0273 | 3.0 | 1317 | 0.0553 | 0.9469 |
| 0.0167 | 4.0 | 1756 | 0.0553 | 0.9492 |
| 0.0103 | 5.0 | 2195 | 0.0572 | 0.9516 |
| 0.0068 | 6.0 | 2634 | 0.0585 | 0.9536 |
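The F1 column is an entity-level score in the CoNLL style. Assuming seqeval (the usual metric for this benchmark; the card does not state the exact evaluation code), a compute_metrics function compatible with the Trainer sketch above could look like this:

```python
import numpy as np
import evaluate
from datasets import load_dataset

seqeval = evaluate.load("seqeval")
label_list = load_dataset("conll2003")["train"].features["ner_tags"].feature.names

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Drop special/sub-word positions (label -100) before scoring
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {"f1": results["overall_f1"]}
```

Passing this as `compute_metrics=compute_metrics` to the Trainer reproduces per-epoch F1 reporting like the table above.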
Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1