Update README.md
yutaozhu94 committed · Commit f885913 · 1 parent: 6cdf3a6
README.md CHANGED
@@ -98,9 +98,9 @@ As our model is trained based on LLaMA, it can be loaded in the same way as original LLaMA.
 > Since our model is developed based on LLaMA, it can be loaded in the same way as LLaMA.
 
 ```Python
->>> from transformers import LlamaTokenizer,
+>>> from transformers import LlamaTokenizer, LlamaForCausalLM
 >>> tokenizer = LlamaTokenizer.from_pretrained("yulan-team/YuLan-Chat-2-13b")
->>> model =
+>>> model = LlamaForCausalLM.from_pretrained("yulan-team/YuLan-Chat-2-13b").cuda()
 >>> model = model.eval()
 >>> input_text = "hello"
 >>> prompt = "The following is a conversation between a human and an AI assistant namely YuLan, developed by GSAI, Renmin University of China. The AI assistant gives helpful, detailed, and polite answers to the user's questions.\n[|Human|]:{}\n[|AI|]:".format(input_text)
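# Not part of the commit above: a minimal sketch of how the snippet might be
# continued to actually produce a reply, assuming only the standard
# transformers tokenization/generation API. max_new_tokens=256 is an
# illustrative value, not taken from the README.
>>> inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
>>> outputs = model.generate(**inputs, max_new_tokens=256)
>>> # Drop the prompt tokens and decode only the newly generated part.
>>> response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
>>> print(response)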