rinna-AraBert-qa-ar4

This model is a fine-tuned version of aubmindlab/bert-base-arabertv2 on the arcd dataset. It achieves the following results on the evaluation set:

  • Loss: 7.1639
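
The checkpoint can be loaded for extractive question answering with the `transformers` question-answering pipeline. A minimal sketch, assuming the weights are published under the repository id Echiguerkh/rinna-AraBert-qa-ar4 (the Arabic context/question pair is illustrative only, not taken from ARCD):

```python
from transformers import pipeline

# Load the fine-tuned AraBERT checkpoint for extractive question answering.
qa = pipeline(
    "question-answering",
    model="Echiguerkh/rinna-AraBert-qa-ar4",
    tokenizer="Echiguerkh/rinna-AraBert-qa-ar4",
)

# Illustrative Arabic example (not from the ARCD dataset).
context = "تقع مدينة الرباط على الساحل الأطلسي وهي عاصمة المغرب."
question = "ما هي عاصمة المغرب؟"

result = qa(question=question, context=context)
print(result["answer"], result["score"])
```

Note that aubmindlab recommends pre-segmenting text with the `arabert` package's `ArabertPreprocessor` for the v2 checkpoints; whether that preprocessing was applied during this fine-tuning run is not stated on the card.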

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; an illustrative `TrainingArguments` sketch reproducing them follows the list:

  • learning_rate: 7e-05
  • train_batch_size: 2
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 100
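
These settings map directly onto `transformers.TrainingArguments`. A minimal sketch under stated assumptions (the output directory and the evaluation/saving cadence are guesses based on the 150-step intervals in the results table below, not values from the original run):

```python
from transformers import TrainingArguments

# Hyperparameters as listed above; output_dir and eval/save cadence are
# illustrative assumptions, not taken from the original training script.
training_args = TrainingArguments(
    output_dir="rinna-AraBert-qa-ar4",
    learning_rate=7e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=16,   # effective train batch size: 2 * 16 = 32
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=100,
    evaluation_strategy="steps",      # assumption: matches the 150-step eval points below
    eval_steps=150,
    save_steps=150,
)
```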

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3751        | 6.88  | 150  | 3.4763          |
| 0.2526        | 13.75 | 300  | 4.7270          |
| 0.1059        | 20.63 | 450  | 5.7927          |
| 0.0604        | 27.51 | 600  | 5.6757          |
| 0.0347        | 34.38 | 750  | 6.0637          |
| 0.0163        | 41.26 | 900  | 6.3835          |
| 0.0116        | 48.14 | 1050 | 6.7934          |
| 0.0024        | 55.01 | 1200 | 6.8119          |
| 0.0021        | 61.89 | 1350 | 6.9426          |
| 0.0042        | 68.77 | 1500 | 6.8997          |
| 0.0033        | 75.64 | 1650 | 6.8969          |
| 0.0055        | 82.52 | 1800 | 7.0831          |
| 0.0012        | 89.4  | 1950 | 7.0766          |
| 0.0014        | 96.28 | 2100 | 7.1639          |
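
The evaluation set is the ARCD validation data. A hedged sketch of how one might spot-check the model against that split with the question-answering pipeline (the split and field names assume the SQuAD-style `arcd` dataset on the Hugging Face Hub; this is not the original evaluation script):

```python
from datasets import load_dataset
from transformers import pipeline

# Load the ARCD validation split from the Hugging Face Hub.
arcd = load_dataset("arcd", split="validation")

qa = pipeline("question-answering", model="Echiguerkh/rinna-AraBert-qa-ar4")

# Run the model on a few examples and compare predictions with gold answers.
for example in arcd.select(range(5)):
    prediction = qa(question=example["question"], context=example["context"])
    print(prediction["answer"], "||", example["answers"]["text"][0])
```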

Framework versions

  • Transformers 4.33.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.13.3