---
library_name: transformers
license: mit
datasets:
  - emhaihsan/quran-indonesia-tafseer-translation
language:
  - id
base_model:
  - Qwen/Qwen2.5-3B-Instruct
---

# Model Card for Fine-Tuned Qwen2.5-3B-Instruct

This is a fine-tuned version of the Qwen2.5-3B-Instruct model, trained on the [emhaihsan/quran-indonesia-tafseer-translation](https://huggingface.co/datasets/emhaihsan/quran-indonesia-tafseer-translation) dataset, which provides Indonesian (Bahasa Indonesia) translations and tafsir of the Quran.

## Model Details

### Model Description

This model is designed for NLP tasks involving Quranic text in Bahasa Indonesia, including understanding translations and tafsir.

## Uses

### Direct Use

This model can be used for applications requiring the understanding, summarization, or retrieval of Quranic translations and tafsir in Bahasa Indonesia.

### Downstream Use

It is suitable as a base for further fine-tuning on tasks such as the following (a minimal training sketch is shown after the list):

- Quranic text summarization
- Question answering systems related to Islamic knowledge
- Educational tools for learning Quranic content in Indonesian
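
The sketch below illustrates one way such further fine-tuning could be set up with TRL's `SFTTrainer` and a LoRA adapter from `peft`. It is not the recipe used to produce this model; the dataset column names and hyperparameters are assumptions to adapt to your own task.

```python
# Minimal LoRA fine-tuning sketch (illustrative only, not the exact recipe used here).
# The "question"/"answer" column names below are placeholder assumptions.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoTokenizer
from trl import SFTConfig, SFTTrainer

base = "Qwen/Qwen2.5-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)

dataset = load_dataset("emhaihsan/quran-indonesia-tafseer-translation", split="train")

def to_text(example):
    # Render each row as a single chat exchange using the model's chat template.
    messages = [
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=base,                # model id; SFTTrainer loads the weights itself
    train_dataset=dataset,     # expects a "text" column by default
    args=SFTConfig(output_dir="qwen2.5-3b-quran"),
    peft_config=LoraConfig(
        r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"
    ),
)
trainer.train()
```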

### Out-of-Scope Use

This model is not suitable for general-purpose conversation or tasks unrelated to Quranic and Islamic texts.

## Bias, Risks, and Limitations

### Biases

- The model inherits any biases present in the dataset, which is specific to Islamic translations and tafsir in Bahasa Indonesia.

### Limitations

- The model is tailored to Quranic and Islamic contexts, and its performance outside this domain may be suboptimal.
- It may not accurately handle nuanced or non-standard interpretations of Quranic text.

### Recommendations

- Users should ensure that applications using this model respect cultural and religious sensitivities.
- Results should be verified by domain experts for critical applications.

## How to Get Started with the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Note: if this repository only ships GGUF weights, loading with transformers
# may additionally require the `gguf` package and a `gguf_file=...` argument.
tokenizer = AutoTokenizer.from_pretrained("Ellbendls/Qwen-2.5-3b-Quran-GGUF")
model = AutoModelForCausalLM.from_pretrained("Ellbendls/Qwen-2.5-3b-Quran-GGUF")

input_text = "Apa tafsir dari Surat Al-Fatihah ayat 1?"
inputs = tokenizer(input_text, return_tensors="pt")
# Without max_new_tokens, generate() stops after ~20 new tokens by default.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
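
Because the base model is instruction-tuned, wrapping the prompt in the tokenizer's chat template usually yields cleaner answers. A minimal follow-on sketch, reusing the `tokenizer` and `model` loaded above (generation settings are illustrative):

```python
# Chat-style inference, reusing the tokenizer/model from the snippet above.
messages = [{"role": "user", "content": "Apa tafsir dari Surat Al-Fatihah ayat 1?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```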