---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
language:
- zh
- en
library_name: transformers
tags:
- baichuan
---
An instruction-tuned LoRA adapter for https://huggingface.co/baichuan-inc/baichuan-7B.

This checkpoint was trained with https://github.com/hiyouga/LLaMA-Efficient-Tuning.
Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from peft import PeftModel

# Load the base model and tokenizer, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/baichuan-7B", device_map="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(model, "hiyouga/baichuan-7b-sft")

# Stream generated tokens to stdout as they are produced.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

query = "晚上睡不着怎么办"  # "What should I do if I can't sleep at night?"
# The adapter was tuned with the ziya-style prompt template.
inputs = tokenizer(["<human>:{}\n<bot>:".format(query)], return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
```
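
If you want the response as a string rather than streamed output, you can decode the generated ids; a minimal sketch, reusing the variables from the snippet above:

```python
# Strip the prompt tokens and decode only the newly generated ones.
response = tokenizer.batch_decode(
    generate_ids[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)[0]
print(response)
```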
Alternatively, you can launch a CLI demo with the script in https://github.com/hiyouga/LLaMA-Efficient-Tuning:
```bash
python src/cli_demo.py \
    --model_name_or_path baichuan-inc/baichuan-7B \
    --checkpoint_dir hiyouga/baichuan-7b-sft \
    --prompt_template ziya
```
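
If you need a standalone checkpoint for deployment, the LoRA weights can be merged into the base model with peft's `merge_and_unload`; a minimal sketch, assuming the `model` and `tokenizer` from the usage snippet (the output directory name is an arbitrary choice):

```python
# Merge the LoRA adapter into the base weights and save a standalone model.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("baichuan-7b-sft-merged")  # hypothetical output path
tokenizer.save_pretrained("baichuan-7b-sft-merged")
```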