---
license: apache-2.0
datasets:
  - tatsu-lab/alpaca
language:
  - zh
  - en
library_name: transformers
tags:
  - baichuan
---

An instruction-tuned LoRA model of https://huggingface.co/baichuan-inc/baichuan-7B

This checkpoint was trained with: https://github.com/hiyouga/LLaMA-Efficient-Tuning
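For reference, a LoRA checkpoint like this one is typically produced by attaching adapter weights to the base model and merging them with peft. A minimal sketch, assuming the adapter weights are available in peft format; the adapter path below is hypothetical:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model; trust_remote_code is required for Baichuan's custom modeling code.
base = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/baichuan-7B", trust_remote_code=True
)
# Attach the LoRA adapter ("path/to/lora-adapter" is a hypothetical local path).
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
# Fold the adapter weights into the base weights for standalone inference.
model = model.merge_and_unload()
```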

Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the tokenizer and model; trust_remote_code is required for Baichuan's custom modeling code.
tokenizer = AutoTokenizer.from_pretrained("hiyouga/baichuan-7b-sft", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("hiyouga/baichuan-7b-sft", trust_remote_code=True).cuda()
# Stream generated tokens to stdout, hiding the prompt and special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

query = "晚上睡不着怎么办"  # "What should I do if I can't sleep at night?"
template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\nHuman: {}\nAssistant: "

# Wrap the query in the prompt template and move the tensors to the GPU.
inputs = tokenizer([template.format(query)], return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
```
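If you want the response as a string rather than streamed output, you can decode the returned ids yourself; a minimal sketch building on the snippet above:

```python
# Strip the prompt tokens and decode only the newly generated part.
prompt_length = inputs["input_ids"].shape[-1]
response = tokenizer.batch_decode(
    generate_ids[:, prompt_length:], skip_special_tokens=True
)[0]
print(response)
```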

Alternatively, you can launch a CLI demo with the script in https://github.com/hiyouga/LLaMA-Efficient-Tuning:

```bash
python src/cli_demo.py --model_name_or_path hiyouga/baichuan-7b-sft
```

Figure: loss curve on the training set.

Figure: loss curve on the evaluation set.