---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
- sahil2801/CodeAlpaca-20k
language:
- zh
- en
library_name: transformers
tags:
- baichuan
- lora
---
A bilingual instruction-tuned LoRA model based on https://huggingface.co/baichuan-inc/baichuan-7B
- Instruction-following datasets used: alpaca, alpaca-zh, codealpaca
- Training framework: https://github.com/hiyouga/LLaMA-Efficient-Tuning
Usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the tokenizer and the instruction-tuned model (Baichuan requires trust_remote_code).
tokenizer = AutoTokenizer.from_pretrained("hiyouga/baichuan-7b-sft", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("hiyouga/baichuan-7b-sft", trust_remote_code=True).cuda()

# Stream generated tokens to stdout as they are produced.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Wrap the query in the prompt template used during fine-tuning.
query = "晚上睡不着怎么办"  # "What should I do if I can't sleep at night?"
template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\nHuman: {}\nAssistant: "
inputs = tokenizer([template.format(query)], return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
```
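If you want the full response as a Python string rather than only the streamed output, you can decode the newly generated tokens yourself. A minimal sketch continuing the snippet above:
```python
# Keep only the tokens generated after the prompt, then decode them to text.
prompt_length = inputs["input_ids"].shape[-1]
response = tokenizer.decode(generate_ids[0][prompt_length:], skip_special_tokens=True)
print(response)
```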
Alternatively, you can launch a CLI demo using the script from https://github.com/hiyouga/LLaMA-Efficient-Tuning:
```bash
python src/cli_demo.py --model_name_or_path hiyouga/baichuan-7b-sft
```
---
You can reproduce our results with the following script:
```bash
CUDA_VISIBLE_DEVICES=0 python src/train_sft.py \
--model_name_or_path baichuan-inc/baichuan-7B \
--do_train \
--dataset alpaca_gpt4_en,alpaca_gpt4_zh,codealpaca \
--finetuning_type lora \
--lora_rank 16 \
--lora_target W_pack,o_proj,gate_proj,down_proj,up_proj \
--output_dir baichuan_lora \
--overwrite_cache \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 8 \
--gradient_accumulation_steps 8 \
--preprocessing_num_workers 16 \
--lr_scheduler_type cosine \
--logging_steps 10 \
--save_steps 100 \
--eval_steps 100 \
--learning_rate 5e-5 \
--max_grad_norm 0.5 \
--num_train_epochs 2.0 \
--dev_ratio 0.01 \
--evaluation_strategy steps \
--load_best_model_at_end \
--plot_loss \
--fp16
```
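If you train your own adapter with the command above, one way to use it without merging is to attach it to the base model with peft. This is a minimal sketch, assuming the `baichuan_lora` output directory (from `--output_dir`) contains a standard peft LoRA checkpoint:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter produced by the training run above.
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained("baichuan-inc/baichuan-7B", trust_remote_code=True).cuda()
model = PeftModel.from_pretrained(base_model, "baichuan_lora")  # path assumed from --output_dir
model = model.merge_and_unload()  # optionally merge the LoRA weights into the base model for faster inference
```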
Loss curve on training set:
![train](training_loss.svg)
Loss curve on evaluation set:
![eval](eval_loss.svg)