---
base_model: aubmindlab/bert-base-arabert
tags:
  - generated_from_trainer
model-index:
  - name: Labour-Law-SA-QA
    results: []
language:
  - ar
library_name: transformers
pipeline_tag: question-answering
widget:
  - text: ماهو العمل لبعض الوقت
    context: >-
      العمل الذي يؤديه عامل غير متفرغ لدى صاحب عمل ولساعات عمل تقل عن نصف ساعات
      العمل اليومية المعتادة لدى المنشأة، سواء كان هذا العامل يؤدي ساعات عمله
      يومياً أو بعض أيام الأسبوع
    example_title: قانون العمل
---

# Labour-Law-SA-QA

This model is a fine-tuned version of aubmindlab/bert-base-arabert on a custom dataset of Saudi Arabian labour-law questions and answers. It achieves the following results on the evaluation set:

- Loss: 1.1740

## Model description

The Labour-Law-SA-QA model is a fine-tuned version of the aubmindlab/bert-base-arabert model on a custom dataset of questions and answers about labour law in Saudi Arabia. It performs extractive question answering: given a question and a context passage, the model predicts the span of the context that answers the question.
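Concretely, a BERT-style extractive QA head scores every token as a potential answer start and end, and the predicted answer is the span with the highest combined score. A minimal sketch of that span-selection step, with illustrative scores rather than real model output:

```python
def best_span(start_logits, end_logits, max_answer_len=30):
    """Pick the (start, end) token pair with the highest combined score,
    subject to start <= end and a maximum span length."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

# Illustrative logits for a 6-token context (hypothetical values):
start = [0.1, 2.5, 0.3, 0.2, 0.1, 0.0]
end   = [0.0, 0.2, 0.4, 3.1, 0.2, 0.1]
print(best_span(start, end))  # (1, 3): tokens 1..3 form the answer span
```

The real pipeline additionally masks out tokens belonging to the question and applies a softmax to turn the combined score into a confidence value.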

## Intended uses & limitations

The Labour-Law-SA-QA model is intended to answer questions about labour law in Saudi Arabia. It does not provide legal advice and should not be used to replace the advice of a qualified lawyer. Its accuracy is also bounded by the quality and coverage of the training data: questions unlike those seen during training may receive unreliable answers.
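For reference, a minimal usage sketch with the `transformers` question-answering pipeline. The repo id below is assumed from this model card's title and author, and loading the model requires network access:

```python
from transformers import pipeline

# Repo id assumed from this model card; adjust if the model lives elsewhere.
qa = pipeline("question-answering", model="faisalaljahlan/Labour-Law-SA-QA")

result = qa(
    question="ماهو العمل لبعض الوقت",  # "What is part-time work?"
    context=(
        "العمل الذي يؤديه عامل غير متفرغ لدى صاحب عمل ولساعات عمل تقل عن نصف "
        "ساعات العمل اليومية المعتادة لدى المنشأة، سواء كان هذا العامل يؤدي "
        "ساعات عمله يومياً أو بعض أيام الأسبوع"
    ),
)
print(result["answer"], result["score"])
```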

## Training and evaluation data

The Labour-Law-SA-QA model was trained on a custom dataset of questions and answers about labour law in Saudi Arabia. The questions were collected from a variety of sources, including government websites, and the dataset was then manually cleaned and verified to ensure the questions and answers were accurate and relevant.

## Training procedure

The Labour-Law-SA-QA model was trained using the Hugging Face [Transformers](https://huggingface.co/transformers/) library and fine-tuned with the Adam optimizer at a learning rate of 2e-05. Training ran for 9 epochs, with early stopping configured to halt if the validation loss failed to improve for 3 consecutive epochs.
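The early-stopping rule described above can be sketched in plain Python (patience of 3 epochs, fed with the per-epoch validation losses from the results table below):

```python
def should_stop(val_losses, patience=3):
    """Return the 1-based epoch at which training stops, or None if it
    runs to completion: stop once the validation loss has failed to
    improve on its best value for `patience` consecutive epochs."""
    best = float("inf")
    epochs_since_best = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            epochs_since_best = 0
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:
                return epoch
    return None

# Validation losses from the 9 training epochs reported below:
losses = [1.6275, 1.4822, 1.4659, 1.3038, 1.3173, 1.1665, 1.1344, 1.1346, 1.1740]
print(should_stop(losses))  # None: the best loss (epoch 7) was never 3 epochs stale
```

Applied to this run, the rule never triggers: the best validation loss arrives at epoch 7, and only epochs 8 and 9 fail to improve on it, so training completes all 9 epochs.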

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
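
As a worked example, the linear scheduler decays the learning rate from its initial value to zero over the total number of training steps. This sketch assumes no warmup (none is listed above) and takes the step count from the results table (34 steps/epoch × 9 epochs = 306):

```python
def linear_lr(step, base_lr=2e-5, total_steps=306):
    """Linearly decay from base_lr at step 0 to 0 at total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))    # 2e-05 at the start of training
print(linear_lr(153))  # 1e-05 halfway through
print(linear_lr(306))  # 0.0 at the end
```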

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 34   | 1.6275          |
| No log        | 2.0   | 68   | 1.4822          |
| No log        | 3.0   | 102  | 1.4659          |
| No log        | 4.0   | 136  | 1.3038          |
| No log        | 5.0   | 170  | 1.3173          |
| No log        | 6.0   | 204  | 1.1665          |
| No log        | 7.0   | 238  | 1.1344          |
| No log        | 8.0   | 272  | 1.1346          |
| No log        | 9.0   | 306  | 1.1740          |

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3