---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- surrey-nlp/PLOD-unfiltered
metrics:
- precision
- recall
- f1
- accuracy
widget:
- text: Light dissolved inorganic carbon (DIC) resulting from the oxidation of hydrocarbons.
- text: RAFs are plotted for a selection of neurons in the dorsal zone (DZ) of auditory
cortex in Figure 1.
- text: Images were acquired using a GE 3.0T MRI scanner with an upgrade for echo-planar
imaging (EPI).
base_model: albert-large-v2
model-index:
- name: albert-large-v2-finetuned-ner_with_callbacks
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: surrey-nlp/PLOD-unfiltered
      type: surrey-nlp/PLOD-unfiltered
args: PLODunfiltered
metrics:
- type: precision
value: 0.9655166719570215
name: Precision
- type: recall
value: 0.9608483288141474
name: Recall
- type: f1
value: 0.9631768437660728
name: F1
- type: accuracy
value: 0.9589410429715819
name: Accuracy
---
# albert-large-v2-finetuned-ner_with_callbacks
This model is a fine-tuned version of [albert-large-v2](https://huggingface.co/albert-large-v2) on the [PLOD-unfiltered](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1235
- Precision: 0.9655
- Recall: 0.9608
- F1: 0.9632
- Accuracy: 0.9589
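The reported F1 is the harmonic mean of precision and recall, so the three scores can be sanity-checked against one another (a useful habit when transcribing results; the values below are the full-precision scores from the metadata above):

```python
# Verify that the reported F1 is the harmonic mean of precision and recall.
precision = 0.9655166719570215
recall = 0.9608483288141474

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9632, matching the reported F1
```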
## Model description
This model is [albert-large-v2](https://huggingface.co/albert-large-v2) fine-tuned for token classification on the PLOD-unfiltered abbreviation-detection dataset: given scientific text, it tags abbreviations (e.g. "EPI") and their corresponding long forms (e.g. "echo-planar imaging").
## Intended uses & limitations
More information needed
## Training and evaluation data
The model was fine-tuned and evaluated on [PLOD-unfiltered](https://huggingface.co/datasets/surrey-nlp/PLOD-unfiltered), a token-classification corpus of scientific text annotated for abbreviations and their long forms.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
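With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate decays linearly from its peak to zero over training. A minimal sketch of that schedule (the true total step count is not stated in the card; 84,000, the last logged step in the table below, is used purely for illustration):

```python
# Sketch of a linear learning-rate decay schedule, assuming zero warmup steps
# (none are listed in the card's hyperparameters).
def linear_lr(step: int, total_steps: int, peak_lr: float = 2e-05) -> float:
    """Learning rate at a given optimizer step under linear decay to zero."""
    remaining = max(0, total_steps - step)
    return peak_lr * (remaining / total_steps)

total = 84_000  # illustrative only; the last logged step in the results table

print(linear_lr(0, total))       # 2e-05 at the start
print(linear_lr(42_000, total))  # 1e-05 halfway through
print(linear_lr(total, total))   # 0.0 at the end
```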
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1377 | 0.49 | 7000 | 0.1294 | 0.9563 | 0.9422 | 0.9492 | 0.9436 |
| 0.1244 | 0.98 | 14000 | 0.1165 | 0.9589 | 0.9504 | 0.9546 | 0.9499 |
| 0.107 | 1.48 | 21000 | 0.1140 | 0.9603 | 0.9509 | 0.9556 | 0.9511 |
| 0.1088 | 1.97 | 28000 | 0.1086 | 0.9613 | 0.9551 | 0.9582 | 0.9536 |
| 0.0918 | 2.46 | 35000 | 0.1059 | 0.9617 | 0.9582 | 0.9600 | 0.9556 |
| 0.0847 | 2.95 | 42000 | 0.1067 | 0.9620 | 0.9586 | 0.9603 | 0.9559 |
| 0.0734 | 3.44 | 49000 | 0.1188 | 0.9646 | 0.9588 | 0.9617 | 0.9574 |
| 0.0725 | 3.93 | 56000 | 0.1065 | 0.9660 | 0.9599 | 0.9630 | 0.9588 |
| 0.0547 | 4.43 | 63000 | 0.1273 | 0.9662 | 0.9602 | 0.9632 | 0.9590 |
| 0.0542 | 4.92 | 70000 | 0.1235 | 0.9655 | 0.9608 | 0.9632 | 0.9589 |
| 0.0374 | 5.41 | 77000 | 0.1401 | 0.9647 | 0.9613 | 0.9630 | 0.9586 |
| 0.0417 | 5.9 | 84000 | 0.1380 | 0.9641 | 0.9622 | 0.9632 | 0.9588 |
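The "with_callbacks" suffix in the model name suggests Trainer callbacks such as `EarlyStoppingCallback`, which tracks an eval metric with a patience threshold, though the card does not document which callbacks or settings were used. As a hypothetical illustration, the patience logic can be sketched over the eval F1 column above:

```python
# Hypothetical sketch of early-stopping patience logic over the eval F1 column
# from the table above; the actual callbacks and settings are undocumented.
def best_checkpoint(eval_f1_by_step: dict[int, float], patience: int = 3):
    """Return (step, f1) of the best eval, scanning in step order and
    stopping once `patience` evaluations pass without improvement."""
    best_step, best_f1, waited = None, float("-inf"), 0
    for step, f1 in sorted(eval_f1_by_step.items()):
        if f1 > best_f1:
            best_step, best_f1, waited = step, f1, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_step, best_f1

# Eval F1 per logged step, taken from the results table.
history = {7000: 0.9492, 14000: 0.9546, 21000: 0.9556, 28000: 0.9582,
           35000: 0.9600, 42000: 0.9603, 49000: 0.9617, 56000: 0.9630,
           63000: 0.9632, 70000: 0.9632, 77000: 0.9630, 84000: 0.9632}

step, f1 = best_checkpoint(history)
print(step, f1)  # 63000 0.9632: F1 plateaus from step 63,000 onward
```

Note that validation loss starts rising after step 63,000 while F1 stays flat, which is the kind of plateau such a callback is designed to detect.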
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1