This model is a fine-tuned version of deepset/xlm-roberta-large-squad2 on the milqa dataset.
Packages to install for the large RoBERTa model:

```
sentencepiece==0.1.97
protobuf==3.20.0
```
How to use:

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="ZTamas/xlm-roberta-large-squad2_impossible_long_answer",
    tokenizer="ZTamas/xlm-roberta-large-squad2_impossible_long_answer",
    device=0,                       # GPU index; use -1 to run on CPU
    handle_impossible_answer=True,
    max_answer_len=1000,            # deliberately large cap so the answer
                                    # can be as long as the model wants
)

predictions = qa_pipeline({
    'context': context,
    'question': question
})

print(predictions)
```
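The pipeline returns a dict with `score`, `start`, `end`, and `answer` keys; with `handle_impossible_answer=True`, a question the model judges unanswerable comes back with an empty `answer` string. A minimal sketch of handling both cases (the prediction dicts below are illustrative placeholders, not real model output):

```python
# Illustrative prediction dicts; in practice these come from qa_pipeline(...)
answerable = {"score": 0.87, "start": 12, "end": 35, "answer": "a span from the context"}
impossible = {"score": 0.95, "start": 0, "end": 0, "answer": ""}

def extract_answer(prediction, no_answer_text="(no answer found)"):
    """Return the predicted span, or a fallback when the model says 'impossible'."""
    if not prediction["answer"]:
        return no_answer_text
    return prediction["answer"]

print(extract_answer(answerable))  # the predicted span
print(extract_answer(impossible))  # the fallback text
```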