# Model Card for odia-t5-base

## Model Details

### Model Description
odia-t5-base is a multilingual Text-To-Text Transfer Transformer (mT5) fine-tuned to perform downstream tasks in the Odia language.
- Developed by: Mohammed Ashraf
- Model type: Language model
- Language(s) (NLP): Odia, English, Hindi
- License: CC BY-NC-SA 4.0
- Related Models: All mT5 checkpoints
## Uses

### Direct Use and Downstream Use
- Translate English to Odia (see the usage sketch after this list).
- Translate Hindi to Odia.
- Odia sentence summarization.
- Question answering in Odia.
- Context-based question answering in Odia.
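The card documents only the `answer:` prompt prefix used in the How to use section below; the exact prefixes for the translation and summarization tasks are not stated here. As a hedged sketch, assuming an analogous task prefix (the `translate English to Odia:` prefix below is hypothetical and should be verified against the training setup), a translation call could look like:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mrSoul7766/odia-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("mrSoul7766/odia-t5-base")

# Hypothetical task prefix; only the "answer:" prefix is documented on this
# card, so confirm the actual translation prefix before relying on it.
prompt = "translate English to Odia: The weather is pleasant today."
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output_ids = model.generate(input_ids, max_length=512)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```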
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mrSoul7766/odia-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("mrSoul7766/odia-t5-base")

# Set maximum generation length
max_length = 512

# Generate a response with the question as input; the tokenizer appends the
# </s> end-of-sequence token automatically, so it is not added by hand here.
input_ids = tokenizer.encode("answer: ଓଡ଼ିଶାରେ ଅଟ୍ଟାଳିକା ପାଇଁ ସର୍ବାଧିକ ଆସନ ସୀମା କ’ଣ?", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=max_length)

# Decode the response
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(response)
```
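Alternatively, the same checkpoint can be driven through the Transformers `text2text-generation` pipeline, which bundles tokenization, generation, and decoding into a single call; a minimal sketch reusing the question above:

```python
from transformers import pipeline

# Build a text2text-generation pipeline around the same checkpoint.
generator = pipeline("text2text-generation", model="mrSoul7766/odia-t5-base")

# The "answer:" prefix matches the prompt format from the example above;
# max_length is forwarded to the underlying generate() call.
result = generator("answer: ଓଡ଼ିଶାରେ ଅଟ୍ଟାଳିକା ପାଇଁ ସର୍ବାଧିକ ଆସନ ସୀମା କ’ଣ?", max_length=512)
print(result[0]["generated_text"])
```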
## Licensing Information
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
## Citation Information
### Dataset

```bibtex
@misc{OdiaGenAI,
  author = {Shantipriya Parida and Sambit Sekhar and Subhadarshi Panda and Soumendra Kumar Sahoo and Swateek Jena and Abhijeet Parida and Arghyadeep Sen and Satya Ranjan Dash and Deepak Kumar Pradhan},
  title = {OdiaGenAI: Generative AI and LLM Initiative for the Odia Language},
  year = {2023},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```
### Model

```bibtex
@misc{mrSoul7766,
  author = {Mohammed Ashraf},
  title = {odia-t5-base},
  year = {2024},
  note = {Licensed under Attribution-NonCommercial-ShareAlike 4.0 International},
}
```