AmirMohseni committed
Commit 8af8ab7
Parent: 5abbab0

Update README.md

Files changed (1)
  1. README.md +8 -3
README.md CHANGED
````diff
@@ -69,11 +69,16 @@ The model is not intended for tasks requiring deep reasoning, complex multi-turn
 Here is how you can use this model:
 
 ```python
+from peft import PeftModel
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_name = "AmirMohseni/LLaMA-3.1-8B-Persian-Instruct"
-tokenizer = AutoTokenizer.from_pretrained(model_name)
-model = AutoModelForCausalLM.from_pretrained(model_name)
+base_model = "meta-llama/Meta-Llama-3.1-8B-Instruct"
+adapter_model = "AmirMohseni/LLaMA-3.1-8B-Persian-Instruct"
+
+model = AutoModelForCausalLM.from_pretrained(base_model)
+model = PeftModel.from_pretrained(model, adapter_model)
+
+tokenizer = AutoTokenizer.from_pretrained(base_model)
 
 # Example usage
 prompt = "راه‌های تقویت حافظه چیست؟"
````
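The hunk ends at the prompt definition, so the generation step itself is not shown. Below is a minimal sketch of how the updated snippet could be run end to end; the `torch.bfloat16` dtype, `device_map="auto"` (which assumes `accelerate` is installed), the chat-template call, and `max_new_tokens=256` are illustrative assumptions, not part of the committed README.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "meta-llama/Meta-Llama-3.1-8B-Instruct"
adapter_model = "AmirMohseni/LLaMA-3.1-8B-Persian-Instruct"

# Load the base model and attach the Persian-instruct adapter on top of it,
# as in the updated README snippet above.
model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.bfloat16, device_map="auto"  # dtype/device are assumptions
)
model = PeftModel.from_pretrained(model, adapter_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Example usage (prompt from the README; Persian for "What are ways to strengthen memory?")
prompt = "راه‌های تقویت حافظه چیست؟"

# Format the prompt with the model's chat template and generate a reply.
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```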