---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: >-
    Answer the following question: From which country did Angola achieve
    independence in 1975?
  example_title: Example1
- text: >-
    Answer the following question: what is ce certified [X_SEP] The CE marking
    is the manufacturer's declaration that the product meets the requirements
    of the applicable EC directives. Officially, CE is an abbreviation of
    Conformité Européenne, meaning European conformity.
  example_title: Example2
---
# MVP-question-answering
The MVP-question-answering model was proposed in [MVP: Multi-task Supervised Pre-training for Natural Language Generation](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao, and Ji-Rong Wen.

Detailed information and instructions can be found at https://github.com/RUCAIBox/MVP.
## Model Description

MVP-question-answering is a prompt-based model: the MVP backbone further equipped with prompts pre-trained on labeled question-answering datasets. It is a variant (MVP+S) of the MVP model and follows a Transformer encoder-decoder architecture with layer-wise prompts.

MVP-question-answering is specially designed for question-answering tasks, such as reading comprehension (SQuAD), conversational question answering (CoQA), and closed-book question answering (Natural Questions).
## Example

```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration

>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-question-answering")

>>> inputs = tokenizer(
...     "Answer the following question: From which country did Angola achieve independence in 1975?",
...     return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Portugal']
```
## Related Models

**MVP**: https://huggingface.co/RUCAIBox/mvp.

**Prompt-based models**:
- MVP-multi-task: https://huggingface.co/RUCAIBox/mvp-multi-task.
- MVP-summarization: https://huggingface.co/RUCAIBox/mvp-summarization.
- MVP-open-dialog: https://huggingface.co/RUCAIBox/mvp-open-dialog.
- MVP-data-to-text: https://huggingface.co/RUCAIBox/mvp-data-to-text.
- MVP-story: https://huggingface.co/RUCAIBox/mvp-story.
- MVP-question-answering: https://huggingface.co/RUCAIBox/mvp-question-answering.
- MVP-question-generation: https://huggingface.co/RUCAIBox/mvp-question-generation.
- MVP-task-dialog: https://huggingface.co/RUCAIBox/mvp-task-dialog.
**Multi-task models**:
- MTL-summarization: https://huggingface.co/RUCAIBox/mtl-summarization.
- MTL-open-dialog: https://huggingface.co/RUCAIBox/mtl-open-dialog.
- MTL-data-to-text: https://huggingface.co/RUCAIBox/mtl-data-to-text.
- MTL-story: https://huggingface.co/RUCAIBox/mtl-story.
- MTL-question-answering: https://huggingface.co/RUCAIBox/mtl-question-answering.
- MTL-question-generation: https://huggingface.co/RUCAIBox/mtl-question-generation.
- MTL-task-dialog: https://huggingface.co/RUCAIBox/mtl-task-dialog.
## Citation

```bibtex
@article{tang2022mvp,
  title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
  author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
  journal={arXiv preprint arXiv:2206.12131},
  year={2022},
  url={https://arxiv.org/abs/2206.12131},
}
```