---
base_model: bert-large-uncased-whole-word-masking-finetuned-squad
license: gpl-3.0
tags:
- DocVQA
- Document Question Answering
- Document Visual Question Answering
datasets:
- rubentito/mp-docvqa
language:
- en
---
# BERT large fine-tuned on MP-DocVQA
This is BERT large, trained on [SinglePage DocVQA](https://arxiv.org/abs/2007.00398) and fine-tuned on the Multipage DocVQA (MP-DocVQA) dataset.
This model was used as a baseline in [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/pdf/2212.05935.pdf).
- Training hyperparameters can be found in Table 8 of Appendix D.
## How to use
### Inference
How to use this model to perform inference on a sample question and context in PyTorch:
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model = AutoModelForQuestionAnswering.from_pretrained("rubentito/bert-large-mpdocvqa")
tokenizer = AutoTokenizer.from_pretrained("rubentito/bert-large-mpdocvqa")

question = "Replace me by any text you'd like."
context = "Put some context for answering"
encoded_input = tokenizer(question, context, return_tensors="pt")
output = model(**encoded_input)

# The start/end logits index into the tokenized input, not the raw context string,
# so the predicted span is decoded from the input token ids.
start_pos = torch.argmax(output.start_logits, dim=-1).item()
end_pos = torch.argmax(output.end_logits, dim=-1).item()
answer_tokens = encoded_input.input_ids[0, start_pos:end_pos + 1]
pred_answer = tokenizer.decode(answer_tokens, skip_special_tokens=True)
```
## Metrics
**Average Normalized Levenshtein Similarity (ANLS)**
The standard metric for text-based VQA tasks (ST-VQA and DocVQA). It evaluates the method's reasoning capabilities while smoothly penalizing OCR recognition errors.
Check [Scene Text Visual Question Answering](https://arxiv.org/abs/1905.13648) for detailed information.
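For reference, below is a minimal, self-contained sketch of how ANLS can be computed for a batch of predictions, using the standard threshold of 0.5; the `levenshtein` and `anls` helpers are illustrative and not part of this repository.
```python
# Illustrative ANLS sketch (not part of this repository).
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance between two strings.
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            current.append(min(previous[j] + 1,                 # deletion
                               current[j - 1] + 1,              # insertion
                               previous[j - 1] + (ca != cb)))   # substitution
        previous = current
    return previous[-1]

def anls(predictions, ground_truths, threshold=0.5):
    # `ground_truths` is a list of lists: each question may accept several answers.
    scores = []
    for pred, answers in zip(predictions, ground_truths):
        best = 0.0
        for gt in answers:
            nl = levenshtein(pred.lower(), gt.lower()) / max(len(pred), len(gt), 1)
            # Normalized distances above the threshold count as plain wrong answers (score 0).
            best = max(best, 1.0 - nl if nl < threshold else 0.0)
        scores.append(best)
    return sum(scores) / len(scores)

# Example: a close (OCR-like) match is partially rewarded, a distant one scores 0.
print(anls(["hello wor1d", "cat"], [["hello world"], ["dog"]]))
```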
**Answer Page Prediction Accuracy (APPA)**
In the MP-DocVQA task, the models can provide the index of the page where the information required to answer the question is located. For this subtask, accuracy is used to evaluate the predictions, i.e. whether the predicted page is correct or not.
Check [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/abs/2212.05935) for detailed information.
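As a minimal sketch (the `answer_page_accuracy` helper is illustrative, not part of this repository), APPA reduces to plain accuracy over the predicted page indices, reported as a percentage:
```python
def answer_page_accuracy(predicted_pages, ground_truth_pages):
    # Fraction of questions whose predicted page index matches the annotated one, in percent.
    correct = sum(int(p == gt) for p, gt in zip(predicted_pages, ground_truth_pages))
    return 100.0 * correct / len(ground_truth_pages)

# Example: 3 of 4 predicted pages are correct -> 75.0
print(answer_page_accuracy([0, 2, 1, 3], [0, 2, 1, 4]))
```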
## Model results
Extended experimentation can be found in Table 2 of [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/pdf/2212.05935.pdf).
You can also check the live leaderboard at the [RRC Portal](https://rrc.cvc.uab.es/?ch=17&com=evaluation&task=4).
| Model | HF name | Parameters | ANLS | APPA (%) |
|-----------------------------------------------------------------------------------|:--------------------------------------|:-------------:|:-------------:|:---------:|
| [**Bert large**](https://huggingface.co/rubentito/bert-large-mpdocvqa) | rubentito/bert-large-mpdocvqa | 334M | 0.4183 | 51.6177 |
| [Longformer base](https://huggingface.co/rubentito/longformer-base-mpdocvqa) | rubentito/longformer-base-mpdocvqa | 148M | 0.5287 | 71.1696 |
| [BigBird ITC base](https://huggingface.co/rubentito/bigbird-base-itc-mpdocvqa) | rubentito/bigbird-base-itc-mpdocvqa | 131M | 0.4929 | 67.5433 |
| [LayoutLMv3 base](https://huggingface.co/rubentito/layoutlmv3-base-mpdocvqa) | rubentito/layoutlmv3-base-mpdocvqa | 125M | 0.4538 | 51.9426 |
| [T5 base](https://huggingface.co/rubentito/t5-base-mpdocvqa) | rubentito/t5-base-mpdocvqa | 223M | 0.5050 | 0.0000 |
| [Hi-VT5](https://huggingface.co/rubentito/hivt5-base-mpdocvqa) | rubentito/hivt5-base-mpdocvqa | 316M | 0.6201 | 79.23 |
## Citation Information
```tex
@article{tito2022hierarchical,
title={Hierarchical multimodal transformers for Multi-Page DocVQA},
author={Tito, Rub{\`e}n and Karatzas, Dimosthenis and Valveny, Ernest},
journal={arXiv preprint arXiv:2212.05935},
year={2022}
}
```