---
license: cc-by-nc-sa-4.0
base_model: InstaDeepAI/nucleotide-transformer-500m-1000g
tags:
- generated_from_trainer
metrics:
- precision
- recall
- accuracy
model-index:
- name: nucleotide-transformer-500m-1000g_ft_BioS2_1kbpHG19_DHSs_H3K27AC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nucleotide-transformer-500m-1000g_ft_BioS2_1kbpHG19_DHSs_H3K27AC
This model is a fine-tuned version of [InstaDeepAI/nucleotide-transformer-500m-1000g](https://huggingface.co/InstaDeepAI/nucleotide-transformer-500m-1000g) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8664
- F1 Score: 0.8336
- Precision: 0.8251
- Recall: 0.8424
- Accuracy: 0.8245
- AUC: 0.9047
- PRC: 0.9003
## Model description
More information needed
## Intended uses & limitations
More information needed
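
The card does not document intended uses, but as a rough illustration the checkpoint can presumably be loaded for sequence classification with the standard Transformers Auto classes. The repository id below is inferred from the model name and author, and the DNA fragment is a made-up placeholder rather than data from the training set.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Repo id assumed from the card's model name and author.
repo_id = "tanoManzo/nucleotide-transformer-500m-1000g_ft_BioS2_1kbpHG19_DHSs_H3K27AC"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# Hypothetical DNA fragment, truncated here for brevity.
sequence = "ATTCTGGTCACGTAGCTAGCTAGCTAGGCTAACGT"
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Class probabilities (positive class assumed to be index 1).
probabilities = torch.softmax(logits, dim=-1)
print(probabilities)
```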
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
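
These settings map onto a `TrainingArguments` configuration roughly as sketched below. The `output_dir` is a placeholder, the explicit Adam betas/epsilon are simply the Trainer defaults listed above, and anything not in the hyperparameter list is an assumption.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters above (Transformers 4.42-style argument names).
training_args = TrainingArguments(
    output_dir="nucleotide-transformer-500m-1000g_ft_BioS2_1kbpHG19_DHSs_H3K27AC",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,          # Adam betas/epsilon as listed (Trainer defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,               # "Native AMP" mixed-precision training
)
```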
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Precision | Recall | Accuracy | AUC | PRC |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:--------:|:------:|:------:|
| 0.529 | 0.0841 | 500 | 0.4714 | 0.8056 | 0.7472 | 0.8740 | 0.7799 | 0.8519 | 0.8346 |
| 0.4706 | 0.1683 | 1000 | 0.4616 | 0.7843 | 0.8182 | 0.7531 | 0.7837 | 0.8702 | 0.8570 |
| 0.4663 | 0.2524 | 1500 | 0.4523 | 0.8218 | 0.7276 | 0.9439 | 0.7863 | 0.8782 | 0.8664 |
| 0.4404 | 0.3366 | 2000 | 0.4665 | 0.8218 | 0.7254 | 0.9478 | 0.7854 | 0.8827 | 0.8716 |
| 0.4456 | 0.4207 | 2500 | 0.4759 | 0.8275 | 0.7481 | 0.9259 | 0.7986 | 0.8924 | 0.8871 |
| 0.4422 | 0.5049 | 3000 | 0.4230 | 0.8282 | 0.7707 | 0.8949 | 0.8061 | 0.8861 | 0.8819 |
| 0.4275 | 0.5890 | 3500 | 0.4123 | 0.8286 | 0.8132 | 0.8446 | 0.8176 | 0.8946 | 0.8883 |
| 0.4234 | 0.6732 | 4000 | 0.4592 | 0.8256 | 0.7345 | 0.9426 | 0.7922 | 0.8961 | 0.8929 |
| 0.4296 | 0.7573 | 4500 | 0.3919 | 0.8369 | 0.8014 | 0.8756 | 0.8218 | 0.9019 | 0.8968 |
| 0.4189 | 0.8415 | 5000 | 0.4052 | 0.8345 | 0.7781 | 0.8997 | 0.8137 | 0.8981 | 0.8948 |
| 0.4233 | 0.9256 | 5500 | 0.3965 | 0.8389 | 0.8040 | 0.8769 | 0.8241 | 0.9024 | 0.8995 |
| 0.4089 | 1.0098 | 6000 | 0.4514 | 0.8382 | 0.7861 | 0.8978 | 0.8191 | 0.9044 | 0.9015 |
| 0.3368 | 1.0939 | 6500 | 0.4123 | 0.8428 | 0.8032 | 0.8865 | 0.8273 | 0.9067 | 0.9039 |
| 0.3194 | 1.1781 | 7000 | 0.5789 | 0.8284 | 0.7382 | 0.9436 | 0.7959 | 0.9006 | 0.8994 |
| 0.3444 | 1.2622 | 7500 | 0.4602 | 0.8283 | 0.8278 | 0.8288 | 0.8206 | 0.9007 | 0.8993 |
| 0.3405 | 1.3463 | 8000 | 0.4591 | 0.8375 | 0.7816 | 0.9020 | 0.8172 | 0.9008 | 0.8986 |
| 0.335 | 1.4305 | 8500 | 0.5358 | 0.8430 | 0.8141 | 0.8740 | 0.8300 | 0.9036 | 0.9015 |
| 0.3228 | 1.5146 | 9000 | 0.6466 | 0.7698 | 0.8828 | 0.6825 | 0.7869 | 0.9052 | 0.9035 |
| 0.3409 | 1.5988 | 9500 | 0.5102 | 0.8326 | 0.8362 | 0.8291 | 0.8260 | 0.9077 | 0.9055 |
| 0.339 | 1.6829 | 10000 | 0.4643 | 0.8373 | 0.8178 | 0.8578 | 0.8260 | 0.9076 | 0.9054 |
| 0.3345 | 1.7671 | 10500 | 0.4526 | 0.8456 | 0.7977 | 0.8997 | 0.8285 | 0.9091 | 0.9062 |
| 0.3325 | 1.8512 | 11000 | 0.5876 | 0.8356 | 0.7666 | 0.9181 | 0.8113 | 0.9020 | 0.8999 |
| 0.344 | 1.9354 | 11500 | 0.4975 | 0.8424 | 0.8131 | 0.8740 | 0.8294 | 0.9081 | 0.9068 |
| 0.3019 | 2.0195 | 12000 | 0.7725 | 0.8352 | 0.8406 | 0.8298 | 0.8290 | 0.9123 | 0.9114 |
| 0.2254 | 2.1037 | 12500 | 0.7338 | 0.7948 | 0.8647 | 0.7353 | 0.8018 | 0.9051 | 0.9042 |
| 0.2171 | 2.1878 | 13000 | 0.8664 | 0.8336 | 0.8251 | 0.8424 | 0.8245 | 0.9047 | 0.9003 |
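
The metric columns above are consistent with a binary-classification `compute_metrics` callback along the following lines. This is a sketch using scikit-learn with an assumed 0.5 decision threshold, not the original training script.

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    average_precision_score,
    f1_score,
    precision_score,
    recall_score,
    roc_auc_score,
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    # Numerically stable softmax over the two class logits.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    pos_probs = probs[:, 1]                      # positive-class probability
    preds = (pos_probs >= 0.5).astype(int)       # assumed 0.5 threshold
    return {
        "f1_score": f1_score(labels, preds),
        "precision": precision_score(labels, preds),
        "recall": recall_score(labels, preds),
        "accuracy": accuracy_score(labels, preds),
        "auc": roc_auc_score(labels, pos_probs),
        "prc": average_precision_score(labels, pos_probs),
    }
```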
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.0