---
base_model: microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
library_name: transformers
license: mit
pipeline_tag: text-classification
tags:
- generated_from_trainer
model-index:
- name: pombe_curation_fold_0
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: afg1/pombe-canto-data
      type: text-classification
      split: test
    metrics:
    - type: accuracy
      value: 0.9254826254826255
      name: Accuracy
    - type: recall
      value: 0.9372056514913658
      name: Recall
    - type: precision
      value: 0.9135424636572304
      name: Precision
    - type: f1
      value: 0.9252227818674932
      name: F1
    - type: total_time_in_seconds
      value: 118.32597812499444
      name: Total_Time_In_Seconds
    - type: samples_per_second
      value: 21.88868447184131
      name: Samples_Per_Second
    - type: latency_in_seconds
      value: 0.04568570583976619
      name: Latency_In_Seconds
---

[Visualize in Weights & Biases](https://wandb.ai/afg1/pombe_curation_model/runs/richbds0)

# pombe_curation_fold_0

This model is a fine-tuned version of [microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext) on the afg1/pombe-canto-data dataset. It achieves the following results on the test split:
- Accuracy: 0.9255
- Recall: 0.9372
- Precision: 0.9135
- F1: 0.9252

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.42.3
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
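
### Example training configuration

As a minimal sketch, the hyperparameters listed above map onto a `transformers.TrainingArguments` object roughly as follows. The `output_dir` is hypothetical, and any argument not listed in the card is left at its default; this is not the exact configuration used for the original run.

```python
from transformers import TrainingArguments

# Minimal sketch of the reported hyperparameters as TrainingArguments.
# output_dir is hypothetical; unlisted arguments keep their defaults.
training_args = TrainingArguments(
    output_dir="pombe_curation_fold_0",  # hypothetical output directory
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    fp16=True,  # corresponds to "Native AMP" mixed-precision training
)
```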
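
### Example usage

A minimal inference sketch using the `transformers` text-classification pipeline. The hub id `afg1/pombe_curation_fold_0` is an assumption based on the model name and the W&B project owner; replace it with a local checkpoint path or the actual hub id if different.

```python
from transformers import pipeline

# Hub id below is assumed from the model name and W&B project owner;
# substitute your own path or hub id as needed.
classifier = pipeline("text-classification", model="afg1/pombe_curation_fold_0")

# Classify a publication abstract (the input text here is illustrative only).
result = classifier("The fission yeast gene ase1 was characterised by deletion analysis.")
print(result)
# -> [{'label': ..., 'score': ...}]  (label names depend on the fine-tuning data)
```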