
## How to get started

- Run the training: `sbatch run.sh`

This command will:

- Set up the environment
- Install the required libraries: `pip install -r requirements.txt -q`
- Move to the code folder: `cd code`
- Run the training and evaluation: `python run_train.py` (a rough sketch of this flow is shown below)
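For orientation, those steps boil down to a single training-and-evaluation entry point. The skeleton below is purely illustrative: `run_train.py` is the script name from the list above, but the command-line flags and the commented-out steps are assumptions, not the script's actual interface.

```python
# Purely illustrative skeleton of a training/evaluation entry point; the flags
# and commented-out steps are assumptions, not run_train.py's real interface.
import argparse


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Multi-task (NER + POS) fine-tuning and evaluation for Fon"
    )
    parser.add_argument("--merging", choices=["multiplicative", "additive"],
                        default="multiplicative")  # assumed flag
    parser.add_argument("--epochs", type=int, default=10)  # assumed flag
    args = parser.parse_args()

    # 1. Load the MasakhaNER 2.0 and MasakhaPOS data.
    # 2. Build the multi-task model (see the MultitaskFON class in the repository).
    # 3. Train, then report NER F1 and POS accuracy on the Fon test sets.
    print(f"Would train for {args.epochs} epochs with {args.merging} merging.")


if __name__ == "__main__":
    main()
```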

## NER Results

| Model | Task | Pretraining/Finetuning Dataset | Pretraining/Finetuning Language(s) | Evaluation Dataset | Metric | Metric's Value |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| AfroLM-Large | Single Task | MasakhaNER 2.0 | All | FON NER | F1-Score | 80.48 |
| AfriBERTa-Large | Single Task | MasakhaNER 2.0 | All | FON NER | F1-Score | 79.90 |
| XLMR-Base | Single Task | MasakhaNER 2.0 | All | FON NER | F1-Score | 81.90 |
| XLMR-Large | Single Task | MasakhaNER 2.0 | All | FON NER | F1-Score | 81.60 |
| AfroXLMR-Base | Single Task | MasakhaNER 2.0 | All | FON NER | F1-Score | 82.30 |
| AfroXLMR-Large | Single Task | MasakhaNER 2.0 | All | FON NER | F1-Score | 82.70 |
| MTL Sum (ours) | Multi-Task | MasakhaNER 2.0 & MasakhaPOS | All | FON NER | F1-Score | 79.87 |
| MTL Weighted (ours) | Multi-Task | MasakhaNER 2.0 & MasakhaPOS | All | FON NER | F1-Score | 81.92 |
| MTL Weighted (ours) | Multi-Task | MasakhaNER 2.0 & MasakhaPOS | Fon Data | FON NER | F1-Score | 64.43 |
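The F1-Score reported for NER is the standard entity-level (span) F1 over BIO-tagged sequences. A minimal sketch of how such a score is typically computed, using the `seqeval` library (an assumption; the card does not state which scorer was used):

```python
# Entity-level (span) F1 over BIO tag sequences, as used for MasakhaNER-style
# evaluation. Using seqeval here is an assumption; the card does not name a scorer.
from seqeval.metrics import f1_score

# Gold and predicted tag sequences for two toy sentences (illustrative only).
y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG", "O"]]
y_pred = [["B-PER", "I-PER", "O", "O"],     ["O", "B-ORG", "O"]]

print(f"NER F1: {100 * f1_score(y_true, y_pred):.2f}")  # one of three gold entities is missed
```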

## POS Results

| Model | Task | Pretraining/Finetuning Dataset | Pretraining/Finetuning Language(s) | Evaluation Dataset | Metric | Metric's Value |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| AfroLM-Large | Single Task | MasakhaPOS | All | FON POS | Accuracy | 82.40 |
| AfriBERTa-Large | Single Task | MasakhaPOS | All | FON POS | Accuracy | 88.40 |
| XLMR-Base | Single Task | MasakhaPOS | All | FON POS | Accuracy | 90.10 |
| XLMR-Large | Single Task | MasakhaPOS | All | FON POS | Accuracy | 90.20 |
| AfroXLMR-Base | Single Task | MasakhaPOS | All | FON POS | Accuracy | 90.10 |
| AfroXLMR-Large | Single Task | MasakhaPOS | All | FON POS | Accuracy | 90.40 |
| MTL Sum (ours) | Multi-Task | MasakhaNER 2.0 & MasakhaPOS | All | FON POS | Accuracy | 82.45 |
| MTL Weighted (ours) | Multi-Task | MasakhaNER 2.0 & MasakhaPOS | All | FON POS | Accuracy | 89.20 |
| MTL Weighted (ours) | Multi-Task | MasakhaNER 2.0 & MasakhaPOS | Fon Data | FON POS | Accuracy | 80.85 |
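The POS metric is accuracy, typically computed at the token level as the fraction of tokens whose predicted tag matches the gold tag. A minimal illustrative sketch (not the project's evaluation code):

```python
# Token-level POS accuracy: correctly tagged tokens / total tokens (toy data).
gold = [["NOUN", "VERB", "DET", "NOUN"], ["PRON", "VERB"]]
pred = [["NOUN", "VERB", "DET", "ADJ"],  ["PRON", "VERB"]]

correct = sum(g == p for gs, ps in zip(gold, pred) for g, p in zip(gs, ps))
total = sum(len(gs) for gs in gold)
print(f"POS accuracy: {100 * correct / total:.2f}")  # 83.33 on this toy example
```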

## Importance of Merging Representation Type

| Merging Type | Models | Task | Metric | Metric's Value |
| :---: | :---: | :---: | :---: | :---: |
| Multiplicative | MTL Weighted (multi-task; ours; *) | NER | F1-Score | 81.92 |
| Multiplicative | MTL Weighted (multi-task; ours; +) | NER | F1-Score | 64.43 |
| Multiplicative | MTL Weighted (multi-task; ours; *) | POS | Accuracy | 89.20 |
| Multiplicative | MTL Weighted (multi-task; ours; +) | POS | Accuracy | 80.85 |
| Additive | MTL Weighted (multi-task; ours; *) | NER | F1-Score | 78.91 |
| Additive | MTL Weighted (multi-task; ours; +) | NER | F1-Score | 60.93 |
| Additive | MTL Weighted (multi-task; ours; *) | POS | Accuracy | 86.99 |
| Additive | MTL Weighted (multi-task; ours; +) | POS | Accuracy | 78.25 |
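The two merging types compared above differ only in how a shared representation is combined element-wise with a task-specific one. The sketch below illustrates the idea; the tensor shapes, names, and the presence of a separate task-specific representation are assumptions, not the card's actual architecture.

```python
# Element-wise merging of two representations (assumed shape: batch x seq x hidden).
# A sketch of the idea behind "multiplicative" vs. "additive" merging, not the
# project's implementation.
import torch


def merge(shared: torch.Tensor, task_specific: torch.Tensor,
          merging: str = "multiplicative") -> torch.Tensor:
    """Combine a shared encoder output with a task-specific representation."""
    if merging == "multiplicative":
        return shared * task_specific  # element-wise product
    if merging == "additive":
        return shared + task_specific  # element-wise sum
    raise ValueError(f"Unknown merging type: {merging}")


shared = torch.randn(2, 8, 768)
task_specific = torch.randn(2, 8, 768)
print(merge(shared, task_specific, merging="multiplicative").shape)  # (2, 8, 768)
```

In the table above, the multiplicative variant scores higher than the additive one on both NER and POS, regardless of the finetuning setting.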

## Model End-Points

### How to run inference when you have the model

To run inference with the model(s), you can use the testing block defined in our `MultitaskFON` class.
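As a rough illustration (the actual `MultitaskFON` testing block lives in the repository and is not reproduced here), inference follows the usual token-classification pattern: tokenize a Fon sentence, run a forward pass, and map the NER and POS logits back to labels. Everything below except the class name `MultitaskFON` is an assumption: the backbone checkpoint, the loading call, and the output format.

```python
# Illustrative inference sketch. Method names, the backbone, and the output
# format are assumptions, not the actual MultitaskFON API.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")  # placeholder backbone

words = ["example", "Fon", "sentence"]  # replace with a real Fon sentence
inputs = tokenizer(words, is_split_into_words=True, return_tensors="pt")

# model = MultitaskFON.load_from_checkpoint("path/to/checkpoint")  # hypothetical
# model.eval()
# with torch.no_grad():
#     ner_logits, pos_logits = model(**inputs)  # assumed two-headed output
# ner_tags = ner_logits.argmax(dim=-1)
# pos_tags = pos_logits.argmax(dim=-1)
```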

## TODO

- Explore the impact of the dynamic weighted average loss (see the sketch below for the general idea)
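The dynamic weighted average idea mentioned above is commonly implemented as Dynamic Weight Averaging (Liu et al., 2019), where each task's loss weight is derived from the ratio of its two most recent losses, so tasks whose loss falls more slowly receive more weight. The sketch below shows that general recipe for the NER + POS pair; the temperature value and the use of per-epoch losses are assumptions, and this is not the project's implementation.

```python
# Sketch of dynamic weight averaging for a two-task (NER + POS) loss.
# Not the project's implementation; temperature and loss history are assumptions.
import math


def dwa_weights(prev_losses, prev_prev_losses, temperature=2.0):
    """Per-task weights from the ratio of each task's two most recent losses."""
    ratios = [l1 / l2 for l1, l2 in zip(prev_losses, prev_prev_losses)]
    exps = [math.exp(r / temperature) for r in ratios]
    n_tasks = len(ratios)
    return [n_tasks * e / sum(exps) for e in exps]


# NER loss fell faster than POS loss here, so POS gets a slightly larger weight.
w_ner, w_pos = dwa_weights(prev_losses=[0.30, 0.55], prev_prev_losses=[0.50, 0.60])
print(f"w_ner={w_ner:.3f}, w_pos={w_pos:.3f}")
# total_loss = w_ner * ner_loss + w_pos * pos_loss
```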

## Datasets used to train bonadossou/multitask_model_fon_True_multiplicative

- MasakhaNER 2.0
- MasakhaPOS