Update README.md
README.md (changed)
@@ -19,7 +19,7 @@ inference: false
 A bilingual instruction-tuned LoRA model of https://huggingface.co/baichuan-inc/Baichuan-13B-Base
 
 - Instruction-following datasets used: alpaca-en, alpaca-zh, sharegpt, open assistant, lima, refgpt
-- Training framework: https://github.com/hiyouga/LLaMA-
+- Training framework: https://github.com/hiyouga/LLaMA-Factory
 
 Usage:
 
@@ -42,7 +42,7 @@ inputs = inputs.to("cuda")
 generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
 ```
 
-You could also alternatively launch a CLI demo by using the script in https://github.com/hiyouga/LLaMA-
+You could also alternatively launch a CLI demo by using the script in https://github.com/hiyouga/LLaMA-Factory
 
 ```bash
 python src/cli_demo.py --template default --model_name_or_path hiyouga/baichuan-13b-sft
@@ -54,7 +54,7 @@ You can reproduce our results by visiting the following step-by-step (Chinese) guide
 
 https://zhuanlan.zhihu.com/p/645010851
 
-or using the following scripts in [LLaMA-
+or using the following scripts in [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory):
 
 ```bash
 CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
@@ -65,13 +65,12 @@ CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
     --template default \
     --finetuning_type lora \
     --lora_rank 32 \
-    --lora_target
+    --lora_target all \
     --output_dir baichuan_13b_lora \
     --per_device_train_batch_size 4 \
     --gradient_accumulation_steps 8 \
     --preprocessing_num_workers 16 \
-    --
-    --max_target_length 512 \
+    --cutoff_len 1024 \
     --optim paged_adamw_32bit \
     --lr_scheduler_type cosine \
     --logging_steps 10 \
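The hunk anchored at line 42 shows only the tail of the card's usage snippet: tokenized `inputs` moved to CUDA and a streaming `generate` call. For orientation, here is a minimal sketch of what the surrounding code could look like, assuming the repository hosts a LoRA adapter that is applied to baichuan-inc/Baichuan-13B-Base via peft and streamed with a transformers `TextStreamer`. Everything other than the two lines visible in the diff (the loading code, dtype/device settings, and the prompt) is an assumption, not part of the card:

```python
# Minimal sketch of the usage the hunk at line 42 belongs to; see assumptions below.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Assumption: hiyouga/baichuan-13b-sft is a LoRA adapter applied on top of the base model.
# If the repo ships merged weights instead, load it directly with AutoModelForCausalLM.
tokenizer = AutoTokenizer.from_pretrained(
    "baichuan-inc/Baichuan-13B-Base", use_fast=False, trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/Baichuan-13B-Base",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(model, "hiyouga/baichuan-13b-sft")

# Stream decoded tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

query = "What is a LoRA adapter?"  # placeholder prompt, not from the card
inputs = tokenizer([query], return_tensors="pt")

# The two lines below are the ones visible in the diff context above.
inputs = inputs.to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=256, streamer=streamer)
```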
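The card also points readers to the CLI demo script in LLaMA-Factory. Below is a short sketch of how one might fetch and launch it; the clone and install steps are assumptions added for context, and only the final command appears in the README:

```bash
# Sketch: obtain LLaMA-Factory and launch the CLI demo referenced by the card.
# The clone/install steps are assumptions; only the last command is in the README.
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -r requirements.txt
python src/cli_demo.py --template default --model_name_or_path hiyouga/baichuan-13b-sft
```

On the training side, the updated command passes `--lora_target all`, which in LLaMA-Factory is shorthand for attaching LoRA to all eligible linear modules rather than a hand-picked list, and collapses the earlier per-side length flags (the removed `--max_target_length 512`) into a single `--cutoff_len 1024` limit on the tokenized sequence length.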