Update README.md
README.md CHANGED
---
license: apache-2.0
datasets:
- BelleGroup/generated_train_0.5M_CN
- JosephusCheung/GuanacoDataset
language:
- zh
- en
---
This is a Chinese instruction-tuning LoRA checkpoint based on llama-13B, from the work in [this repo](https://github.com/Facico/Chinese-Vicuna).

You can use it like this:
```python
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

LOAD_8BIT = True  # set to False to load the base weights in fp16 instead of 8-bit

# Load the base llama-13B weights.
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-13b-hf",
    load_in_8bit=LOAD_8BIT,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach this LoRA checkpoint on top of the base model.
model = PeftModel.from_pretrained(
    model,
    "Chinese-Vicuna/Chinese-Vicuna-lora-13b-belle-and-guanaco",
    torch_dtype=torch.float16,
    device_map={"": 0},
)
```
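Once the adapter is attached, you can run a quick test generation. The snippet below is a minimal sketch that continues from the code above; the tokenizer repo, prompt text, and sampling settings are illustrative assumptions, and the instruction prompt format actually used for training is documented in the linked Chinese-Vicuna repo.

```python
from transformers import LlamaTokenizer

# Assumption: reuse the base model's tokenizer repo.
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-13b-hf")

# Illustrative prompt; the repo's own instruction template may work better.
prompt = "用中文简单介绍一下大熊猫。"  # "Briefly introduce the giant panda, in Chinese."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")  # matches device_map={"": 0}

model.eval()
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```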