---
license: gpl-3.0
datasets:
- BelleGroup/generated_train_0.5M_CN
- JosephusCheung/GuanacoDataset
language:
- zh
- en
---
This is a Chinese instruction-tuning LoRA checkpoint based on llama-13B, from the work of [Chinese-Vicuna](https://github.com/Facico/Chinese-Vicuna).
You can use it like this:
```python
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

# Load the base LLaMA-13B model in 8-bit
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-13b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply the LoRA weights on top of the base model
model = PeftModel.from_pretrained(
    model,
    LORA_PATH,  # specific checkpoint path from "Chinese-Vicuna/Chinese-Vicuna-lora-13b-belle-and-guanaco"
    torch_dtype=torch.float16,
    device_map={"": 0},
)
```
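Once the LoRA weights are applied, the model can be used for generation. Below is a minimal inference sketch that continues from the snippet above; it assumes an Alpaca-style prompt template and example sampling parameters, so check the Chinese-Vicuna repo for the exact template and settings used during training:

```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-13b-hf")

# Alpaca-style prompt template (assumed; verify against the
# Chinese-Vicuna repo's generate script)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
).format(instruction="中国的首都是哪里？")  # "What is the capital of China?"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,       # sampling parameters below are illustrative
        temperature=0.7,
        top_p=0.9,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```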