---
license: apache-2.0
---

Our repository [flan-alpaca-lora](https://github.com/Reason-Wang/flan-alpaca-lora) contains the details to train flan-t5 with [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) instructions and [low-rank adaptation](https://arxiv.org/abs/2106.09685).
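
For reference, low-rank adaptation with the [peft](https://github.com/huggingface/peft) library looks roughly like the sketch below. The hyperparameters (rank, alpha, dropout, target modules) are illustrative assumptions, not the exact configuration used for training; see [flan-alpaca-lora](https://github.com/Reason-Wang/flan-alpaca-lora) for the actual training details.

```python
import transformers
from peft import LoraConfig, get_peft_model

# Load the base model to be fine-tuned
base_model = transformers.AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")

# Assumed LoRA hyperparameters for illustration; the repository defines the real ones
lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the LoRA updates
    target_modules=["q", "v"],  # T5 attention query/value projections
    lora_dropout=0.05,
    task_type="SEQ_2_SEQ_LM",
)

# Wrap the base model; only the small adapter matrices are trainable
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```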

For inference, you can use the model with the following code.

Usage:

```python
import transformers
from peft import PeftModel

# Load the base Flan-T5 model and its tokenizer
base_model = transformers.AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
tokenizer = transformers.AutoTokenizer.from_pretrained("google/flan-t5-large")

# Apply the LoRA adapter weights on top of the base model
peft_model = PeftModel.from_pretrained(base_model, "reasonwang/flan-alpaca-lora-large")

inputs = tokenizer("List a few tips to get good scores in math.", return_tensors="pt")
outputs = peft_model.generate(**inputs, max_length=128, do_sample=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```