# distilbert-base-cased-ner
This model is a fine-tuned version of distilbert-base-cased on the bionlp2004 dataset. It achieves the following results on the evaluation set:
- Loss: 0.2048
- Precision: 0.7436
- Recall: 0.8059
- F1: 0.7735
- Accuracy: 0.9356
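The card does not include a usage snippet; below is a minimal inference sketch with the `transformers` token-classification pipeline, assuming the Hub ID `TunahanGokcimen/distilbert-base-cased-ner` and a simple aggregation strategy (neither is stated in the card body):

```python
from transformers import pipeline

# Minimal inference sketch; the Hub ID and aggregation strategy are assumptions.
ner = pipeline(
    "token-classification",
    model="TunahanGokcimen/distilbert-base-cased-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Activation of NF-kappa B was observed in human T lymphocytes."))
```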
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
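As a hedged sketch, the BioNLP 2004 (JNLPBA) token/tag data can be pulled from the Hugging Face Hub; the exact dataset ID used below is an assumption, not something stated in the card:

```python
from datasets import load_dataset

# Sketch only: the exact Hub dataset ID is an assumption.
dataset = load_dataset("tner/bionlp2004")
print(dataset)              # expected splits: train / validation / test
print(dataset["train"][0])  # one sentence as token/tag sequences
```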
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
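A sketch of how these hyperparameters map onto `transformers` `TrainingArguments`; the output directory and the per-epoch evaluation cadence are assumptions, and the model/dataset wiring is omitted:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir and eval cadence are assumptions.
training_args = TrainingArguments(
    output_dir="distilbert-base-cased-ner",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the defaults
    # (adam_beta1, adam_beta2, adam_epsilon), so no override is needed.
    evaluation_strategy="epoch",  # assumption: per-epoch eval, matching the results table
)
```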
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2313        | 1.0   | 2078 | 0.2127          | 0.7120    | 0.7810 | 0.7449 | 0.9287   |
| 0.1840        | 2.0   | 4156 | 0.1992          | 0.7258    | 0.7999 | 0.7611 | 0.9353   |
| 0.1471        | 3.0   | 6234 | 0.2048          | 0.7436    | 0.8059 | 0.7735 | 0.9356   |
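The precision, recall, and F1 above are entity-level scores of the kind produced by `seqeval`; a small sketch with the `evaluate` library follows (the tag sequences are illustrative only, using the BioNLP 2004 label set):

```python
import evaluate

# Entity-level NER metrics; the label sequences below are illustrative only.
seqeval = evaluate.load("seqeval")
predictions = [["B-protein", "I-protein", "O", "B-cell_type"]]
references = [["B-protein", "I-protein", "O", "B-cell_line"]]
print(seqeval.compute(predictions=predictions, references=references))
# Returns overall_precision, overall_recall, overall_f1, overall_accuracy and per-type scores.
```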
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1