Ellbendls committed
Commit 3662d90 · verified · 1 Parent(s): 5cf30b5

Update README.md

Files changed (1)
  1. README.md +36 -7
README.md CHANGED
@@ -27,6 +27,10 @@ This model is designed for NLP tasks involving Quranic text in Bahasa Indonesia,
 
 ## Uses
 
+### Direct Use
+
+This model can be used for applications requiring the understanding, summarization, or retrieval of Quranic translations and tafsir in Bahasa Indonesia.
+
 ### Downstream Use
 
 It is suitable for fine-tuning on tasks such as:
@@ -34,7 +38,6 @@ It is suitable for fine-tuning on tasks such as:
 - Question answering systems related to Islamic knowledge
 - Educational tools for learning Quranic content in Indonesian
 
-
 ### Biases
 - The model inherits any biases present in the dataset, which is specific to Islamic translations and tafsir in Bahasa Indonesia.
 
@@ -47,10 +50,36 @@ It is suitable for fine-tuning on tasks such as:
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-tokenizer = AutoTokenizer.from_pretrained("Ellbendls/Qwen-2.5-3b-Quran-GGUF")
-model = AutoModelForCausalLM.from_pretrained("Ellbendls/Qwen-2.5-3b-Quran-GGUF")
+# Load the tokenizer and model
+tokenizer = AutoTokenizer.from_pretrained("Ellbendls/Qwen-2.5-3b-Quran")
+model = AutoModelForCausalLM.from_pretrained("Ellbendls/Qwen-2.5-3b-Quran")
+
+# Move the model to GPU
+model.to("cuda")
+
+# Define the input message
+messages = [
+    {
+        "role": "user",
+        "content": "Tafsirkan ayat ini اِهْدِنَا الصِّرَاطَ الْمُسْتَقِيْمَۙ"
+    }
+]
+
+# Generate the prompt using the tokenizer
+prompt = tokenizer.apply_chat_template(messages, tokenize=False,
+                                       add_generation_prompt=True)
+
+# Tokenize the prompt and move inputs to GPU
+inputs = tokenizer(prompt, return_tensors='pt', padding=True,
+                   truncation=True).to("cuda")
+
+# Generate the output using the model
+outputs = model.generate(**inputs, max_length=150,
+                         num_return_sequences=1)
+
+# Decode the output
+text = tokenizer.decode(outputs[0], skip_special_tokens=True)
 
-input_text = "Apa tafsir dari Surat Al-Fatihah ayat 1?"
-inputs = tokenizer(input_text, return_tensors="pt")
-outputs = model.generate(**inputs)
-print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+# Print the result
+print(text.split("assistant")[1])
+```
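
The snippet added in this commit recovers the reply with `text.split("assistant")[1]`, which is brittle: it breaks if the word "assistant" appears inside the model's answer. A common alternative is to decode only the tokens generated after the prompt, i.e. slice `outputs[0]` at `inputs["input_ids"].shape[-1]` before calling `tokenizer.decode`. This is a sketch of that idea, not part of the commit; the helper name and the toy token ids below are hypothetical stand-ins so the logic can run without downloading the model:

```python
def strip_prompt(output_ids, prompt_len):
    """Return only the tokens generated after the prompt.

    Stand-in for outputs[0][inputs["input_ids"].shape[-1]:] in the
    real snippet; works the same on a plain list of token ids.
    """
    return output_ids[prompt_len:]

# Hypothetical token ids: the generated sequence always begins with
# the prompt tokens, so slicing at the prompt length leaves exactly
# the new tokens, regardless of what text they decode to.
prompt_ids = [151644, 872, 198]          # stand-in for the prompt
output_ids = prompt_ids + [26669, 1467]  # prompt + generated tokens
print(strip_prompt(output_ids, len(prompt_ids)))  # [26669, 1467]
```

With the real tensors this becomes `tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)`, which needs no assumptions about the chat template's role markers.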