---
base_model: dmis-lab/biobert-base-cased-v1.2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: biobert-base-cased-v1.2-finetuned-NER
  results: []
---

# biobert-base-cased-v1.2-finetuned-NER

This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) for token-level named entity recognition; the fine-tuning dataset is not documented in this card.
It achieves the following results on the evaluation set:
- Loss: 0.0905
- Accuracy: 0.9765
- Precision (macro): 0.8596
- Recall (macro): 0.8601
- F1 (macro): 0.8563
- Precision (micro): 0.9765
- Recall (micro): 0.9765
- F1 (micro): 0.9765
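
Macro scores average precision, recall, and F1 over the entity labels with equal weight, while micro scores pool every token-level decision; since each token receives exactly one predicted label, the micro scores collapse to plain accuracy, which is why the two agree above. A minimal sketch of that computation (an assumption: the card does not include its actual `compute_metrics`):

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_token_metrics(true_labels, pred_labels):
    """Token-level multiclass metrics matching the macro/micro split above."""
    acc = accuracy_score(true_labels, pred_labels)
    p_ma, r_ma, f_ma, _ = precision_recall_fscore_support(
        true_labels, pred_labels, average="macro", zero_division=0)
    p_mi, r_mi, f_mi, _ = precision_recall_fscore_support(
        true_labels, pred_labels, average="micro", zero_division=0)
    return {"accuracy": acc,
            "precision_macro": p_ma, "recall_macro": r_ma, "f1_macro": f_ma,
            "precision_micro": p_mi, "recall_micro": r_mi, "f1_micro": f_mi}

# Toy illustration: micro precision/recall/F1 equal accuracy (0.75 here).
gold = ["O", "B-Disease", "I-Disease", "O"]
pred = ["O", "B-Disease", "O", "O"]
print(compute_token_metrics(gold, pred))
```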

## Model description

BioBERT is a BERT-base model pre-trained on biomedical text (PubMed abstracts). This checkpoint adds a token-classification head to that base and fine-tunes it for named entity recognition over 10 epochs.

## Intended uses & limitations

The model assigns an entity tag to every token and is aimed at biomedical NER. Because the training data and label set are not documented here, the label inventory and domain coverage should be verified against your own data before relying on its predictions.
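
A minimal inference sketch with the Transformers `pipeline` API; the checkpoint path below is a placeholder for wherever this fine-tuned model is saved, and the example sentence is illustrative:

```python
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          pipeline)

model_dir = "./biobert-base-cased-v1.2-finetuned-NER"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForTokenClassification.from_pretrained(model_dir)

# aggregation_strategy="simple" merges word pieces back into entity spans.
ner = pipeline("token-classification", model=model, tokenizer=tokenizer,
               aggregation_strategy="simple")

print(ner("Mutations in the BRCA1 gene increase the risk of breast cancer."))
```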

## Training and evaluation data

More information needed. The training log implies a training split of roughly 2,400 examples (152 optimizer steps per epoch at batch size 16), but the dataset itself is not identified.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
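
These settings map onto `transformers.TrainingArguments` roughly as follows (a sketch: the betas/epsilon above are the Trainer's AdamW defaults, `num_labels` is a placeholder, and the datasets and `compute_metrics` are omitted because the card does not document them):

```python
from transformers import (AutoModelForTokenClassification, Trainer,
                          TrainingArguments)

training_args = TrainingArguments(
    output_dir="biobert-base-cased-v1.2-finetuned-NER",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="epoch",  # one evaluation per epoch, as in the table below
)

# num_labels must match the tag set of the (undocumented) training data;
# 9 is purely a placeholder.
model = AutoModelForTokenClassification.from_pretrained(
    "dmis-lab/biobert-base-cased-v1.2", num_labels=9)

trainer = Trainer(
    model=model,
    args=training_args,
    # train_dataset=..., eval_dataset=..., compute_metrics=...
)
# trainer.train()
```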

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision (macro) | Recall (macro) | F1 (macro) | Precision (micro) | Recall (micro) | F1 (micro) |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:--------------:|:----------:|:-----------------:|:--------------:|:----------:|
| No log        | 1.0   | 152  | 0.1382          | 0.9621   | 0.8016            | 0.6594         | 0.6754     | 0.9621            | 0.9621         | 0.9621     |
| No log        | 2.0   | 304  | 0.0907          | 0.9740   | 0.8363            | 0.7827         | 0.7993     | 0.9740            | 0.9740         | 0.9740     |
| No log        | 3.0   | 456  | 0.0811          | 0.9750   | 0.8661            | 0.8285         | 0.8261     | 0.9750            | 0.9750         | 0.9750     |
| 0.1768        | 4.0   | 608  | 0.0829          | 0.9738   | 0.8322            | 0.8581         | 0.8410     | 0.9738            | 0.9738         | 0.9738     |
| 0.1768        | 5.0   | 760  | 0.0786          | 0.9755   | 0.8350            | 0.8737         | 0.8526     | 0.9755            | 0.9755         | 0.9755     |
| 0.1768        | 6.0   | 912  | 0.0866          | 0.9766   | 0.8539            | 0.8491         | 0.8490     | 0.9766            | 0.9766         | 0.9766     |
| 0.0496        | 7.0   | 1064 | 0.0828          | 0.9757   | 0.8454            | 0.8563         | 0.8494     | 0.9757            | 0.9757         | 0.9757     |
| 0.0496        | 8.0   | 1216 | 0.0932          | 0.9754   | 0.8416            | 0.8622         | 0.8511     | 0.9754            | 0.9754         | 0.9754     |
| 0.0496        | 9.0   | 1368 | 0.0939          | 0.9752   | 0.8368            | 0.8617         | 0.8483     | 0.9752            | 0.9752         | 0.9752     |
| 0.0297        | 10.0  | 1520 | 0.0955          | 0.9759   | 0.8426            | 0.8597         | 0.8500     | 0.9759            | 0.9759         | 0.9759     |


### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1