---
language:
- en
license: mit
base_model: google/mobilebert-uncased
tags:
- low-resource NER
- token_classification
- biomedicine
- medical NER
- generated_from_trainer
datasets:
- medicine
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: Dagobert42/mobilebert-uncased-biored-finetuned
  results: []
---

# Dagobert42/mobilebert-uncased-biored-finetuned

This model is a fine-tuned version of [mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the bigbio/biored dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7686
- Accuracy: 0.7387
- Precision: 0.2041
- Recall: 0.2219
- F1: 0.1908
- Weighted F1: 0.683

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     | Weighted F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-----------:|
| No log        | 1.0   | 25   | 1.2311          | 0.7114   | 0.1016    | 0.1429 | 0.1188 | 0.5914      |
| No log        | 2.0   | 50   | 1.0356          | 0.7114   | 0.1016    | 0.1429 | 0.1188 | 0.5914      |
| No log        | 3.0   | 75   | 1.0300          | 0.7114   | 0.1016    | 0.1429 | 0.1188 | 0.5914      |
| No log        | 4.0   | 100  | 1.0246          | 0.7114   | 0.1016    | 0.1429 | 0.1188 | 0.5914      |
| No log        | 5.0   | 125  | 1.0162          | 0.7114   | 0.1016    | 0.1429 | 0.1188 | 0.5914      |
| No log        | 6.0   | 150  | 1.0039          | 0.7114   | 0.1016    | 0.1429 | 0.1188 | 0.5914      |
| No log        | 7.0   | 175  | 0.9806          | 0.7114   | 0.1016    | 0.1429 | 0.1188 | 0.5914      |
| No log        | 8.0   | 200  | 0.9148          | 0.7114   | 0.1016    | 0.1429 | 0.1188 | 0.5914      |
| No log        | 9.0   | 225  | 0.8715          | 0.7187   | 0.2116    | 0.1604 | 0.1484 | 0.6172      |
| No log        | 10.0  | 250  | 0.8303          | 0.7261   | 0.1555    | 0.1972 | 0.1737 | 0.6508      |
| No log        | 11.0  | 275  | 0.8216          | 0.7292   | 0.1572    | 0.2018 | 0.1764 | 0.6554      |
| No log        | 12.0  | 300  | 0.8044          | 0.7299   | 0.2295    | 0.2081 | 0.1786 | 0.6605      |
| No log        | 13.0  | 325  | 0.8108          | 0.7320   | 0.2304    | 0.2091 | 0.1797 | 0.6620      |
| No log        | 14.0  | 350  | 0.7920          | 0.7306   | 0.2062    | 0.2200 | 0.1877 | 0.6711      |
| No log        | 15.0  | 375  | 0.8025          | 0.7332   | 0.2164    | 0.2153 | 0.1836 | 0.6674      |
| No log        | 16.0  | 400  | 0.7937          | 0.7335   | 0.1982    | 0.2248 | 0.2039 | 0.6813      |

### Framework versions

- Transformers 4.35.2
- PyTorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.15.0
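
## How to use

A minimal inference sketch, assuming this checkpoint is available on the Hugging Face Hub under the name above. The example sentence is illustrative only; the actual entity labels are read from the model's own config.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for biomedical token classification (NER).
# aggregation_strategy="simple" merges word-piece tokens back into entity spans.
ner = pipeline(
    "token-classification",
    model="Dagobert42/mobilebert-uncased-biored-finetuned",
    aggregation_strategy="simple",
)

# Illustrative biomedical sentence; any input text works.
text = "Mutations in BRCA1 are associated with an increased risk of breast cancer."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```

Omit the `aggregation_strategy` argument to inspect raw per-token predictions instead of merged entity spans.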