---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
language:
- zh
library_name: transformers
---

An instruction-tuned LoRA adapter for https://huggingface.co/baichuan-inc/baichuan-7B

This checkpoint was trained with https://github.com/hiyouga/LLaMA-Efficient-Tuning

Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/baichuan-7B", device_map="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(model, "hiyouga/baichuan-7b-sft")

query = "晚上睡不着怎么办"  # "What should I do if I can't sleep at night?"

# The adapter was trained with the "ziya" prompt template: <human>:{query}\n<bot>:
inputs = tokenizer(["<human>:{}\n<bot>:".format(query)], return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=256)
output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0]
print(output)
```
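
For interactive use, you can also stream tokens to stdout as they are generated with transformers' `TextStreamer`; a minimal sketch, reusing `model`, `tokenizer`, and `inputs` from the snippet above:

```python
from transformers import TextStreamer

# Print tokens as they are generated, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, streamer=streamer, max_new_tokens=256)
```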

Alternatively, you can launch a CLI demo with the script in https://github.com/hiyouga/LLaMA-Efficient-Tuning:

```bash
python src/cli_demo.py \
    --model_name_or_path baichuan-inc/baichuan-7B \
    --checkpoint_dir hiyouga/baichuan-7b-sft \
    --prompt_template ziya
```
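
If you prefer to deploy without the peft wrapper, you can merge the LoRA weights into the base model. A minimal sketch, assuming enough memory to hold the merged weights; `baichuan-7b-sft-merged` is just an example output path:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "hiyouga/baichuan-7b-sft")
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights

# Hypothetical local path for the merged checkpoint.
merged.save_pretrained("baichuan-7b-sft-merged")
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
tokenizer.save_pretrained("baichuan-7b-sft-merged")
```

The merged directory can then be loaded directly with `AutoModelForCausalLM.from_pretrained`, with no peft dependency at inference time.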

Loss curve on the training set:

![train](training_loss.svg)

Loss curve on the evaluation set:

![eval](eval_loss.svg)
|