Update README.md
README.md (changed)
In the Usage section, the example previously loaded a base model and attached the PEFT adapter via `from peft import PeftModel` and `model = PeftModel.from_pretrained(model, "hiyouga/baichuan-7b-sft")`, and the CLI demo was invoked as `python src/cli_demo.py --model_name_or_path baichuan-inc/baichuan-7B --checkpoint_dir hiyouga/baichuan-7b-sft --prompt_template ziya`. The updated example loads the model directly from `hiyouga/baichuan-7b-sft` and builds the prompt inline:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("hiyouga/baichuan-7b-sft", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("hiyouga/baichuan-7b-sft", trust_remote_code=True).cuda()
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

query = "晚上睡不着怎么办"  # "What should I do if I can't sleep at night?"
template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\nHuman: {}\nAssistant: "

inputs = tokenizer([template.format(query)], return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
```
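The prompt template packs the system text, the user turn, and the assistant cue into a single string. As a standalone sketch (no model download required; the string is copied from the example above, only reflowed for readability), the prompt construction looks like this:

```python
# Prompt template from the usage example above, split across lines for readability.
template = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
    "\nHuman: {}\nAssistant: "
)
query = "晚上睡不着怎么办"  # "What should I do if I can't sleep at night?"
prompt = template.format(query)
print(prompt)  # system text, then "Human: <query>", ending with the "Assistant: " cue
```

The trailing `"Assistant: "` is what cues the model to produce the assistant's reply when the prompt is tokenized and passed to `model.generate`.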
Alternatively, you can launch a CLI demo with the script from https://github.com/hiyouga/LLaMA-Efficient-Tuning:

```bash
python src/cli_demo.py --model_name_or_path hiyouga/baichuan-7b-sft
```
Loss curve on training set: