qa-persian

This model (AliBagherz/qa-persian) is a fine-tuned version of HooshvareLab/bert-fa-base-uncased for Persian question answering, trained on the persian_qa and pquad datasets.
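A minimal usage sketch with the Transformers `pipeline` API, assuming the model is pulled from the Hugging Face Hub (the example context and question are illustrative, not from the training data):

```python
from transformers import pipeline

# Load the fine-tuned extractive QA model from the Hub
# (requires the transformers library and an internet connection).
qa = pipeline("question-answering", model="AliBagherz/qa-persian")

context = "تهران پایتخت ایران است."   # "Tehran is the capital of Iran."
question = "پایتخت ایران کجاست؟"     # "What is the capital of Iran?"

# The pipeline returns a dict with "answer", "score", "start", and "end";
# for extractive QA the answer is a span copied out of the context.
result = qa(question=question, context=context)
print(result["answer"])
```

Because the model is extractive, the returned answer string is always a substring of the supplied context.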

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1

Framework versions

  • Transformers 4.33.2
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.13.3

Code

The training code is available in the Persian QA GitHub repository.
