---
license: mit
base_model: xlm-roberta-base
tags:
  - generated_from_trainer
model-index:
  - name: xlm-roberta-base-finetuned-Adapter-en-ar-mlm-0.15-large-29OCT
    results: []
---

# xlm-roberta-base-finetuned-Adapter-en-ar-mlm-0.15-large-29OCT

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:

- Loss: 2.2667
- Model Preparation Time: 0.0044
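
As a quick way to try the checkpoint, here is a minimal fill-mask sketch. The Hub repo id is an assumption inferred from the model name and owner; adjust it to the actual path.

```python
# Minimal fill-mask sketch; the repo id below is assumed from the model
# name and may need to be replaced with the actual Hub path.
from transformers import pipeline

model_id = "malduwais/xlm-roberta-base-finetuned-Adapter-en-ar-mlm-0.15-large-29OCT"
fill_mask = pipeline("fill-mask", model=model_id)

# XLM-RoBERTa's mask token is <mask>.
print(fill_mask("The capital of France is <mask>."))
```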

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2
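
For reproducibility, the list above maps onto `transformers.TrainingArguments` roughly as in the sketch below; `output_dir` is a placeholder and the model/dataset wiring is omitted.

```python
# Sketch of TrainingArguments mirroring the hyperparameters above.
# output_dir is a placeholder; model and dataset setup are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-Adapter-en-ar-mlm-0.15-large-29OCT",
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=32,  # 4 * 32 = total train batch size of 128 on one device
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=2,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 is the library's default optimizer.
)
```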

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Model Preparation Time |
|:-------------:|:------:|:----:|:---------------:|:----------------------:|
| 3.9064        | 0.2498 | 1000 | 3.2366          | 0.0044                 |
| 3.0641        | 0.4995 | 2000 | 2.7403          | 0.0044                 |
| 2.8162        | 0.7493 | 3000 | 2.5485          | 0.0044                 |
| 2.7054        | 0.9990 | 4000 | 2.4384          | 0.0044                 |
| 2.6108        | 1.2488 | 5000 | 2.3627          | 0.0044                 |
| 2.5357        | 1.4985 | 6000 | 2.3141          | 0.0044                 |
| 2.5089        | 1.7483 | 7000 | 2.2847          | 0.0044                 |
| 2.4931        | 1.9980 | 8000 | 2.2667          | 0.0044                 |
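
Because training uses a masked-LM objective, the validation loss can be read as a (pseudo-)perplexity via `exp(loss)`:

```python
import math

final_validation_loss = 2.2667  # last row of the table above
print(math.exp(final_validation_loss))  # ≈ 9.65
```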

### Framework versions

- Transformers 4.43.4
- Pytorch 2.1.1+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
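
To check that a local environment matches these pins, a quick version probe such as the following can be used:

```python
# Print installed versions to compare against the pins above.
import datasets
import tokenizers
import torch
import transformers

print("transformers", transformers.__version__)  # expected 4.43.4
print("torch", torch.__version__)                # expected 2.1.1+cu121
print("datasets", datasets.__version__)          # expected 3.0.2
print("tokenizers", tokenizers.__version__)      # expected 0.19.1
```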