---
tags:
- generated_from_trainer
model-index:
- name: roberta-base-fiqa-flm-sq-flit
  results: []
---

# roberta-base-fiqa-flm-sq-flit

This model is a fine-tuned version of roberta-base on a custom dataset created for question answering in the financial domain.

## Model description

RoBERTa is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. 
The model was further processed as follows for the specific downstream QA task (a sketch of the MLM steps appears after this list):
1. Pretrained for domain adaptation with the masked language modeling (MLM) objective on the FIQA challenge Opinion-based QA dataset, available here: https://drive.google.com/file/d/1BlWaV-qVPfpGyJoWQJU9bXQgWCATgxEP/view
2. Pretrained with the MLM objective on a custom generated dataset for banking and finance.
3. Fine-tuned on the SQuAD v2 dataset for QA task adaptation.
4. Fine-tuned on a custom labeled dataset in SQuAD format for domain and task adaptation.
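
Steps 1 and 2 can be reproduced with the Hugging Face `Trainer`. Below is a minimal sketch, assuming the in-domain text (FIQA plus the custom banking-and-finance corpus) has been collected into a plain-text file with one passage per line; the file and output names are illustrative, not from this repository.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Placeholder file containing the in-domain corpus, one passage per line.
corpus = load_dataset("text", data_files={"train": "finance_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

# Dynamically mask 15% of tokens, the standard RoBERTa MLM setting.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="roberta-base-fiqa-flm"),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```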

## Intended uses & limitations

The model is intended to be used in a custom Question Answering system for the BFSI (Banking, Financial Services, and Insurance) domain, as sketched below.
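
A minimal usage sketch with the `question-answering` pipeline; the model identifier is a placeholder for this repository's Hub id (or a local checkpoint path), and the context passage is invented for illustration.

```python
from transformers import pipeline

# Replace with the actual Hub repository id or a local checkpoint path.
qa = pipeline("question-answering", model="roberta-base-fiqa-flm-sq-flit")

context = (
    "A fixed deposit is a financial instrument provided by banks that gives "
    "investors a higher rate of interest than a regular savings account, "
    "until the given maturity date."
)
result = qa(question="What does a fixed deposit offer investors?", context=context)
print(result["answer"], result["score"])
```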

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 2.0
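
For reference, these settings map onto `TrainingArguments` roughly as follows; `output_dir` is illustrative and not part of the logged configuration.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="roberta-base-fiqa-flm-sq-flit",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    num_train_epochs=2.0,
)
```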

### Training results



### Framework versions

- Transformers 4.15.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3