---
license: apache-2.0
---
This checkpoint was trained with: https://github.com/hiyouga/LLaMA-Efficient-Tuning
Usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/baichuan-7B", device_map="auto", trust_remote_code=True)

# Apply the LoRA adapter and merge it into the base weights
model = PeftModel.from_pretrained(model, "hiyouga/baichuan-7b-sft")
model = model.merge_and_unload()

# Build the prompt in the format used during fine-tuning
query = "晚上睡不着怎么办"  # "What should I do if I can't sleep at night?"
input_ids = tokenizer(["<human>:{}\n<bot>:".format(query)], return_tensors="pt")["input_ids"]
input_ids = input_ids.to("cuda")

generate_ids = model.generate(input_ids)
output = tokenizer.batch_decode(generate_ids)[0]
print(output)
```
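Since `merge_and_unload()` folds the LoRA weights back into the base model, the merged checkpoint can also be saved once and reloaded later without PEFT. A minimal sketch, assuming the script above has run and using an illustrative local output path:

```python
# Persist the merged model and tokenizer so later loads skip the adapter step
# ("baichuan-7b-sft-merged" is a hypothetical local directory).
model.save_pretrained("baichuan-7b-sft-merged")
tokenizer.save_pretrained("baichuan-7b-sft-merged")

# Reload the merged checkpoint with transformers alone, no peft required
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
    "baichuan-7b-sft-merged", device_map="auto", trust_remote_code=True
)
```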