---
library_name: transformers
license: apache-2.0
base_model: alvaroalon2/biobert_chemical_ner
tags:
  - generated_from_trainer
model-index:
  - name: murat_chem_model
    results: []
---

# murat_chem_model

This model is a fine-tuned version of [alvaroalon2/biobert_chemical_ner](https://huggingface.co/alvaroalon2/biobert_chemical_ner) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.3827
- Chemical: precision 0.8843, recall 0.8439, F1 0.8636 (7310 entities)
- Overall Precision: 0.8843
- Overall Recall: 0.8439
- Overall F1: 0.8636
- Overall Accuracy: 0.9447
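
As a quick sanity check (not part of the original card), the reported Overall F1 is the harmonic mean of precision and recall, computed here from the full-precision values in the final row of the results table below:

```python
# Full-precision Chemical metrics from the final evaluation step (step 1200).
precision, recall = 0.8843176605504587, 0.8439124487004104

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.8636, matching the reported Overall F1
```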

## Model description

More information needed

## Intended uses & limitations

More information needed
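
Until the author fills in this section, here is a minimal inference sketch for chemical NER with this checkpoint. The Hub repo id `muratti18462/murat_chem_model` is an assumption inferred from the card title; substitute the actual path if it differs.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="muratti18462/murat_chem_model",  # assumed repo id, not confirmed by the card
    aggregation_strategy="simple",          # merge sub-word pieces into entity spans
)

# Returns a list of dicts with entity_group, score, word, start, and end.
print(ner("Aspirin irreversibly inhibits cyclooxygenase-1."))
```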

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
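
A minimal sketch of how the listed hyperparameters map onto `transformers.TrainingArguments`; the `output_dir` and evaluation cadence are assumptions not stated in the card, and the Adam betas/epsilon above are the library defaults:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="murat_chem_model",  # assumed; the card does not state it
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,  # Native AMP mixed-precision training
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults
    # (adam_beta1, adam_beta2, adam_epsilon), matching the optimizer line above.
)
```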

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Chemical                                                                                                  | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.6817        | 1.2048  | 100  | 0.3090          | {'precision': 0.8702002710435175, 'recall': 0.7905608755129959, 'f1': 0.828471077342126, 'number': 7310}  | 0.8702            | 0.7906         | 0.8285     | 0.9204           |
| 1.6817        | 2.4096  | 200  | 0.2900          | {'precision': 0.8720861611094718, 'recall': 0.8086183310533516, 'f1': 0.8391538898353209, 'number': 7310} | 0.8721            | 0.8086         | 0.8392     | 0.9312           |
| 1.6817        | 3.6145  | 300  | 0.3071          | {'precision': 0.8830255057167986, 'recall': 0.824076607387141, 'f1': 0.8525332578545147, 'number': 7310}  | 0.8830            | 0.8241         | 0.8525     | 0.9398           |
| 1.6817        | 4.8193  | 400  | 0.3099          | {'precision': 0.882646325363063, 'recall': 0.8231190150478797, 'f1': 0.8518439866921499, 'number': 7310}  | 0.8826            | 0.8231         | 0.8518     | 0.9411           |
| 0.1018        | 6.0241  | 500  | 0.3891          | {'precision': 0.8816456613066782, 'recall': 0.8325581395348837, 'f1': 0.8563990712727785, 'number': 7310} | 0.8816            | 0.8326         | 0.8564     | 0.9402           |
| 0.1018        | 7.2289  | 600  | 0.3672          | {'precision': 0.8851640513552068, 'recall': 0.8488372093023255, 'f1': 0.8666201117318435, 'number': 7310} | 0.8852            | 0.8488         | 0.8666     | 0.9451           |
| 0.1018        | 8.4337  | 700  | 0.3459          | {'precision': 0.8812949640287769, 'recall': 0.8378932968536251, 'f1': 0.8590462833099579, 'number': 7310} | 0.8813            | 0.8379         | 0.8590     | 0.9449           |
| 0.1018        | 9.6386  | 800  | 0.3601          | {'precision': 0.880656108597285, 'recall': 0.8519835841313269, 'f1': 0.8660826032540675, 'number': 7310}  | 0.8807            | 0.8520         | 0.8661     | 0.9462           |
| 0.1018        | 10.8434 | 900  | 0.3711          | {'precision': 0.881471972614463, 'recall': 0.8454172366621067, 'f1': 0.8630682214929124, 'number': 7310}  | 0.8815            | 0.8454         | 0.8631     | 0.9443           |
| 0.0038        | 12.0482 | 1000 | 0.3779          | {'precision': 0.8816542644533486, 'recall': 0.8428180574555404, 'f1': 0.8617988529864317, 'number': 7310} | 0.8817            | 0.8428         | 0.8618     | 0.9437           |
| 0.0038        | 13.2530 | 1100 | 0.3829          | {'precision': 0.8837275985663082, 'recall': 0.8432284541723666, 'f1': 0.863003150157508, 'number': 7310}  | 0.8837            | 0.8432         | 0.8630     | 0.9447           |
| 0.0038        | 14.4578 | 1200 | 0.3827          | {'precision': 0.8843176605504587, 'recall': 0.8439124487004104, 'f1': 0.863642727145457, 'number': 7310}  | 0.8843            | 0.8439         | 0.8636     | 0.9447           |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.3.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1