This model is a fine-tuned version of mcsabai/huBert-fine-tuned-hungarian-squadv2 on the milqa dataset.
How to use:
```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="ZTamas/hubert-qa-milqa-impossible-long-answer",
    tokenizer="ZTamas/hubert-qa-milqa-impossible-long-answer",
    device=0,                       # GPU index; use -1 to run on CPU
    handle_impossible_answer=True,  # allow the model to return an empty answer
    max_answer_len=1000,            # deliberately large so long answers are not truncated
)

predictions = qa_pipeline({
    'context': context,    # `context` and `question` must be defined beforehand
    'question': question
})

print(predictions)
```
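With `handle_impossible_answer=True`, the pipeline signals an unanswerable question by returning an empty `answer` string in its output dict (`score`, `start`, `end`, `answer`). A minimal sketch of post-processing that output, assuming that standard dict format; the helper name and the score threshold are illustrative choices, not part of this model:

```python
# Sketch: interpreting pipeline output when handle_impossible_answer=True.
# The question-answering pipeline returns a dict with "score", "start",
# "end", and "answer"; an unanswerable question yields an empty "answer".

def format_prediction(prediction, min_score=0.1):
    """Return the answer text, or None if the model judged the question
    unanswerable or the score falls below min_score (illustrative threshold)."""
    if not prediction["answer"] or prediction["score"] < min_score:
        return None
    return prediction["answer"]

# Example with mock outputs in the pipeline's dict format:
answered = {"score": 0.87, "start": 10, "end": 25, "answer": "Budapesten"}
impossible = {"score": 0.95, "start": 0, "end": 0, "answer": ""}

print(format_prediction(answered))    # "Budapesten"
print(format_prediction(impossible))  # None
```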