---
license: gpl-3.0
tags:
- DocVQA
- Document Question Answering
- Document Visual Question Answering
datasets:
- MP-DocVQA
language:
- en
---

# Longformer base fine-tuned on MP-DocVQA

This is Longformer-base, trained on SQuAD v1 ([Valhalla hub](https://huggingface.co/valhalla/longformer-base-4096-finetuned-squadv1)) and fine-tuned on the Multipage DocVQA (MP-DocVQA) dataset. This model was used as a baseline in [Hierarchical multimodal transformers for Multi-Page DocVQA](https://arxiv.org/pdf/2212.05935.pdf).

- Results on the MP-DocVQA dataset are reported in Table 2.
- Training hyperparameters can be found in Table 8 of Appendix D.

## How to use

Here is how to use this model to extract an answer span from a given text in PyTorch:

```python
import torch
from transformers import LongformerTokenizerFast, LongformerForQuestionAnswering

tokenizer = LongformerTokenizerFast.from_pretrained("rubentito/longformer-base-mpdocvqa")
model = LongformerForQuestionAnswering.from_pretrained("rubentito/longformer-base-mpdocvqa")

text = "Huggingface has democratized NLP. Huge thanks to Huggingface for this."
question = "What has Huggingface done?"

encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]
# Default is local attention everywhere; the forward method automatically
# sets global attention on the question tokens.
attention_mask = encoding["attention_mask"]

outputs = model(input_ids, attention_mask=attention_mask)
start_scores = outputs.start_logits
end_scores = outputs.end_logits

# Decode the highest-scoring answer span.
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
answer_tokens = all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores) + 1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))
```

## BibTeX entry

```tex
@article{tito2022hierarchical,
  title={Hierarchical multimodal transformers for Multi-Page DocVQA},
  author={Tito, Rub{\`e}n and Karatzas, Dimosthenis and Valveny, Ernest},
  journal={arXiv preprint arXiv:2212.05935},
  year={2022}
}
```
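## Answering over multiple pages

Since Longformer is a text-only extractive QA model, one common strategy for multi-page documents is to run the model once per page and keep the answer whose span logits score highest. This is a minimal sketch of that selection step only; the `select_best_answer` helper and the example per-page logits are illustrative assumptions, not part of this model card or the paper's exact pipeline.

```python
import math

def select_best_answer(page_results):
    """Pick the answer from the page with the highest confidence.

    `page_results` maps page index -> (answer_text, start_logit, end_logit),
    e.g. the argmax span logits from one Longformer forward pass per page.
    Confidence is approximated here as the sum of the two span logits.
    """
    best_page, best_answer, best_score = None, None, -math.inf
    for page, (answer, start_logit, end_logit) in page_results.items():
        score = start_logit + end_logit
        if score > best_score:
            best_page, best_answer, best_score = page, answer, score
    return best_page, best_answer

# Hypothetical per-page outputs for a single question:
results = {
    0: ("1998", 2.1, 1.7),
    1: ("March 1998", 4.3, 3.9),
    2: ("the annual report", 0.2, 0.5),
}
page, answer = select_best_answer(results)  # page 1, "March 1998"
```

In a real pipeline, the per-page logits would come from the `start_logits`/`end_logits` of one forward pass per page, and a softmax over the logits can replace the raw sum if calibrated scores are needed.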