mbart-en-ne-sentence-translation

This model is a fine-tuned version of facebook/mbart-large-50-many-to-many-mmt on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9597
  • Bleu: 48.1358
  • Gen Len: 9.3
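
For reference, below is a minimal inference sketch. It assumes the Hub id nordenxgt/mbart-en-ne-sentence-translation and the English-to-Nepali direction implied by the model name, using the mBART-50 language codes "en_XX" and "ne_NP"; it is an illustration, not code taken from the training run.

```python
# Minimal inference sketch (assumed Hub id and en->ne direction from the model name).
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "nordenxgt/mbart-en-ne-sentence-translation"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id)
model = MBartForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "en_XX"  # source language: English
inputs = tokenizer("How are you?", return_tensors="pt")

generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["ne_NP"],  # target language: Nepali
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```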

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
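
As a rough sketch, the hyperparameters above correspond to a Seq2SeqTrainingArguments / Seq2SeqTrainer setup along the following lines. The dataset splits are placeholders (the training data is not documented on this card), and the translation direction is assumed from the model name, so this illustrates the configuration rather than reproducing the exact training script.

```python
# Hedged sketch of the training configuration implied by the hyperparameters above.
# train_dataset / eval_dataset are placeholders; the actual data is not documented here.
from transformers import (
    DataCollatorForSeq2Seq,
    MBart50TokenizerFast,
    MBartForConditionalGeneration,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(base, src_lang="en_XX", tgt_lang="ne_NP")
model = MBartForConditionalGeneration.from_pretrained(base)
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)

training_args = Seq2SeqTrainingArguments(
    output_dir="mbart-en-ne-sentence-translation",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    eval_strategy="epoch",
    predict_with_generate=True,  # needed so BLEU / Gen Len can be computed at eval time
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer's optimizer defaults.
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,      # placeholder: tokenized training split
    eval_dataset=eval_dataset,        # placeholder: tokenized validation split
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,  # see the BLEU sketch after the results table
)
trainer.train()
```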

Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 2.9421        | 1.0   | 7    | 1.7883          | 10.4307 | 9.1     |
| 1.5462        | 2.0   | 14   | 1.3980          | 7.5693  | 7.5     |
| 1.0467        | 3.0   | 21   | 0.9333          | 45.1288 | 9.4     |
| 0.4864        | 4.0   | 28   | 0.9698          | 48.1358 | 9.3     |
| 0.4757        | 5.0   | 35   | 0.9597          | 48.1358 | 9.3     |
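
The Bleu and Gen Len columns are the kind of metrics typically produced by a compute_metrics hook using sacrebleu through the evaluate library. The sketch below shows that standard pattern; the tokenizer variable refers to the model's MBart50TokenizerFast, and this is not confirmed to be the author's exact metric code.

```python
# Hedged sketch of a standard BLEU / generation-length metric hook for Seq2SeqTrainer.
# Assumes `tokenizer` is the MBart50TokenizerFast defined in the training sketch above.
import numpy as np
import evaluate

bleu = evaluate.load("sacrebleu")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # Replace label padding (-100) with the pad token id before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    result = bleu.compute(
        predictions=[p.strip() for p in decoded_preds],
        references=[[l.strip()] for l in decoded_labels],
    )
    # Average generated length in (non-padding) tokens.
    prediction_lens = [np.count_nonzero(p != tokenizer.pad_token_id) for p in preds]
    return {"bleu": result["score"], "gen_len": float(np.mean(prediction_lens))}
```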

Framework versions

  • Transformers 4.42.4
  • Pytorch 2.4.0+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1