rubentito committed
Commit ca16bea
1 Parent(s): 71bfcf1

Update README.md

Files changed (1)
  1. README.md +44 -0
README.md CHANGED
@@ -1,3 +1,47 @@
  ---
  license: gpl-3.0
+ tags:
+ - DocVQA
+ - Document Question Answering
+ - Document Visual Question Answering
+ datasets:
+ - MP-DocVQA
+ language:
+ - en
  ---
+
+ # BERT-BASE fine-tuned on MP-DocVQA
+
+ This is BERT trained on [SinglePage DocVQA](https://arxiv.org/abs/2007.00398) and fine-tuned on the Multi-Page DocVQA (MP-DocVQA) dataset.
+
+ This model was used as a baseline in [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/pdf/2212.05935.pdf):
+ - Results on the MP-DocVQA dataset are reported in Table 2.
+ - Training hyperparameters can be found in Table 8 of Appendix D.
+
+ ## How to use
+
+ Here is how to use this model for extractive question answering in PyTorch:
+
+ ```python
+ from transformers import AutoModelForQuestionAnswering, AutoTokenizer
+
+ # Load both the tokenizer and the model from the Hub.
+ tokenizer = AutoTokenizer.from_pretrained("rubentito/bert-base-mpdocvqa")
+ model = AutoModelForQuestionAnswering.from_pretrained("rubentito/bert-base-mpdocvqa")
+
+ question = "Replace me by any text you'd like."
+ context = "Put some context for answering"
+ encoded_input = tokenizer(question, context, return_tensors="pt")
+ output = model(**encoded_input)
+ ```
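The `output` object above holds `start_logits` and `end_logits` over the input tokens. A minimal sketch of how those logits can be turned into an answer span; dummy tensors stand in for the real model outputs here, so this runs without downloading the model:

```python
import torch

# Dummy stand-ins for output.start_logits / output.end_logits from the
# snippet above: a batch of 1, a sequence of 8 tokens.
start_logits = torch.tensor([[0.1, 0.2, 3.0, 0.1, 0.1, 0.1, 0.1, 0.1]])
end_logits = torch.tensor([[0.1, 0.1, 0.2, 0.1, 2.5, 0.1, 0.1, 0.1]])

# The predicted span runs from the argmax start position to the argmax
# end position, inclusive.
start = int(start_logits.argmax())
end = int(end_logits.argmax())
print(start, end)  # token indices 2 and 4
```

With the real model, the span would then be decoded back to text, e.g. `tokenizer.decode(encoded_input.input_ids[0, start:end + 1])`.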
+
+ ## BibTeX entry
+
+ ```tex
+ @article{tito2022hierarchical,
+   title={Hierarchical multimodal transformers for Multi-Page DocVQA},
+   author={Tito, Rub{\`e}n and Karatzas, Dimosthenis and Valveny, Ernest},
+   journal={arXiv preprint arXiv:2212.05935},
+   year={2022}
+ }
+ ```