# Google's mT5-XL - Finetuned for Hebrew Question-Answering
Google's mT5 multilingual Seq2Seq model, finetuned on HeQ for the Hebrew Question-Answering task.
This is the model that was reported in the DictaBERT release (see the citation below).
Sample usage:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('dicta-il/mt5-xl-heq')
model = AutoModelForSeq2SeqLM.from_pretrained('dicta-il/mt5-xl-heq')
model.eval()

# Question (Hebrew): "Where is the information about the computer's data, stored via cookies, located?"
question = 'ืืืฆื ืืืืื ืืืืืข ืฉื ืืชื ืืืฉืื ืืืืฆืขืืช ืืขืืืืืช?'
# Context (Hebrew): a passage about cookies, user privacy, and tracking.
context = 'ืื ืืืช ืคืจืืคืืืื ืฉื ืืฉืชืืฉืื ื ืืฉืืช ืขื ืืื ืจืืื ืืืืื ืคืืื ืฆืืืื ืขื ืืคืจืืืืช. ืืกืืื ืื ืืืืืื ืืืง ืืืืืื ืืช ืืืืฆืขืืช ืืงืืงื ืืช ืืืืืข ืฉื ืืชื ืืืฉืื ืืืืฆืขืืช ืขืืืืืช ืืืช ืืืคื ืืฉืืืืฉ ืืขืืืืืช. ืืจืฆืืช ืืืจืืช, ืืืฉื, ืงืืขื ืืืงืื ื ืืงืฉืื ืืื ืื ืืืข ืืืฆืืจืช ืขืืืืืช ืืืฉืืช. ืืืงืื ืืื, ืืฉืจ ื ืงืืขื ืืฉื ืช 2000, ื ืงืืขื ืืืืจ ืฉื ืืฉืฃ ืื ืืืฉืจื ืืืืฉืื ืืืืื ืืืช ืฉื ืืืืฉื ืืืืจืืงืื ื ืื ืืฉืืืืฉ ืืกืืื (ONDCP) ืืืืช ืืืื ืืฉืชืืฉ ืืขืืืืืช ืืื ืืขืงืื ืืืจื ืืฉืชืืฉืื ืฉืฆืคื ืืคืจืกืืืืช ื ืื ืืฉืืืืฉ ืืกืืื ืืืืจื ืืืืืง ืืื ืืฉืชืืฉืื ืืื ื ืื ืกื ืืืชืจืื ืืชืืืืื ืืฉืืืืฉ ืืกืืื. ืื ืืื ืืจืื ื, ืคืขืื ืืืืื ืืคืจืืืืช ืืืฉืชืืฉืื ืืืื ืืจื ื, ืืฉืฃ ืื ื-CIA ืฉืื ืขืืืืืช ืงืืืขืืช ืืืืฉืื ืืืจืืื ืืืฉื ืขืฉืจ ืฉื ืื. ื-25 ืืืฆืืืจ 2005 ืืืื ืืจืื ื ืื ืืกืืื ืืช ืืืืืืื ืืืืื (ื-NSA) ืืฉืืืจื ืฉืชื ืขืืืืืช ืงืืืขืืช ืืืืฉืื ืืืงืจืื ืืืื ืฉืืจืื ืชืืื ื. ืืืืจ ืฉืื ืืฉื ืคืืจืกื, ืื ืืืืื ืืื ืืช ืืฉืืืืฉ ืืื.'

with torch.inference_mode():
    # The model expects the input formatted as 'question: ... context: ...'
    prompt = 'question: %s context: %s ' % (question, context)
    kwargs = dict(
        inputs=tokenizer(prompt, return_tensors='pt').input_ids.to(model.device),
        do_sample=True,
        top_k=50,
        top_p=0.95,
        temperature=0.75,
        max_length=100,
        min_new_tokens=2
    )
    print(tokenizer.batch_decode(model.generate(**kwargs), skip_special_tokens=True))
```
Output:

```
["ืืืืฆืขืืช ืืงืืงื"]
```

(The extracted span means, roughly, "via the cookies".)
## Citation

If you use `mt5-xl-heq` in your research, please cite *DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew*.
BibTeX:

```bibtex
@misc{shmidman2023dictabert,
  title={DictaBERT: A State-of-the-Art BERT Suite for Modern Hebrew},
  author={Shaltiel Shmidman and Avi Shmidman and Moshe Koppel},
  year={2023},
  eprint={2308.16687},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
## License
This work is licensed under a Creative Commons Attribution 4.0 International License.