faisalaljahlan committed on
Commit 835b783
1 Parent(s): 2af8e27

Update README.md

Files changed (1)
  1. README.md +13 -3
README.md CHANGED
@@ -19,18 +19,28 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+The Labour-Law-SA-QA model is a fine-tuned version of the aubmindlab/bert-base-arabert model on a custom dataset of questions and answers about labour law in Saudi Arabia.
+The model is trained to predict the answer to a question given the question text and the surrounding context.
 
 ## Intended uses & limitations
 
-More information needed
+The Labour-Law-SA-QA model is intended to answer questions about labour law in Saudi Arabia.
+It is not intended to provide legal advice and should not be used to replace the advice of a qualified lawyer.
+The model is limited by the quality of its training data.
+If the training data is not representative of the real-world questions the model will be asked, its performance will degrade.
 
 ## Training and evaluation data
 
-More information needed
+The Labour-Law-SA-QA model was trained on a custom dataset of questions and answers about labour law in Saudi Arabia.
+The dataset was created by collecting questions from a variety of sources, including government websites.
+The dataset was then manually cleaned and verified to ensure that the questions and answers were accurate and relevant.
 
 ## Training procedure
 
+The Labour-Law-SA-QA model was trained using the Hugging Face Transformers library: https://huggingface.co/transformers/.
+The model was fine-tuned using the Adam optimizer with a learning rate of 2e-05.
+The model was trained for 9 epochs, and training was stopped early when the validation loss did not improve for 3 consecutive epochs.
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
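
The model-description lines added above describe extractive question answering: given a question and a context passage, the model predicts an answer span. A minimal inference sketch of that usage; the Hub repository id is an assumption inferred from the committer and model names, and the question/context pair is invented for illustration:

```python
# A minimal inference sketch, not an official example from the model card.
# Assumption: the checkpoint lives at "faisalaljahlan/Labour-Law-SA-QA" on the
# Hugging Face Hub; substitute the real repository id if it differs.
from transformers import pipeline

qa = pipeline("question-answering", model="faisalaljahlan/Labour-Law-SA-QA")

# Invented example: "How many days of annual leave is a worker entitled to?"
# against a short excerpt paraphrasing the Saudi Labour Law.
result = qa(
    question="كم عدد أيام الإجازة السنوية المستحقة للعامل؟",
    context="للعامل الحق في إجازة سنوية مدفوعة الأجر لا تقل مدتها عن واحد وعشرين يوماً.",
)
print(result["answer"], result["score"])
```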
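The training-procedure lines (Transformers, learning rate 2e-05, 9 epochs, early stopping after 3 epochs without validation-loss improvement) map naturally onto the Trainer API. A sketch under stated assumptions, not the author's actual script: the tokenized train/validation splits are passed in as placeholders, and Trainer's default optimizer is AdamW, which is how "Adam" in the card is most plausibly realized. Newer Transformers versions rename `evaluation_strategy` to `eval_strategy`.

```python
# A sketch of the described procedure, not the author's actual training script.
# train_ds / eval_ds are placeholders for an already tokenized, SQuAD-style
# question-answering dataset (start/end answer positions per example).
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
    default_data_collator,
)


def fine_tune(train_ds, eval_ds):
    tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabert")
    model = AutoModelForQuestionAnswering.from_pretrained("aubmindlab/bert-base-arabert")

    args = TrainingArguments(
        output_dir="Labour-Law-SA-QA",
        learning_rate=2e-5,              # stated in the card
        num_train_epochs=9,              # stated in the card
        evaluation_strategy="epoch",     # evaluate every epoch so early stopping can trigger
        save_strategy="epoch",
        load_best_model_at_end=True,     # required by EarlyStoppingCallback
        metric_for_best_model="eval_loss",
        greater_is_better=False,         # lower validation loss is better
    )

    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=train_ds,
        eval_dataset=eval_ds,
        data_collator=default_data_collator,
        tokenizer=tokenizer,
        # Stop when eval loss fails to improve for 3 consecutive evaluations,
        # matching the card's early-stopping description.
        callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
    )
    trainer.train()
    return trainer
```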