---
license: apache-2.0
---

Our repository [flan-alpaca-lora](https://github.com/Reason-Wang/flan-alpaca-lora) contains the details to train Flan-T5 with [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) instructions and [low-rank adaptation](https://arxiv.org/abs/2106.09685).

You can load the base model and apply the LoRA adapter with just a few lines of code.

Usage:
```python
import transformers
from peft import PeftModel

# Base Flan-T5 model and the LoRA adapter trained on Alpaca instructions
model_name = "google/flan-t5-large"
peft_model_id = "reasonwang/flan-alpaca-lora-large"

tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
base_model = transformers.AutoModelForSeq2SeqLM.from_pretrained(model_name)
peft_model = PeftModel.from_pretrained(base_model, peft_model_id)

# Tokenize an instruction and generate a response
inputs = tokenizer("List a few tips to get good scores in math.", return_tensors="pt")
outputs = peft_model.generate(**inputs, max_length=128, do_sample=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
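If you prefer a standalone checkpoint that does not depend on `peft` at inference time, the LoRA weights can also be merged into the base model. The snippet below is a minimal sketch assuming a recent `peft` release that provides `merge_and_unload()`; the output directory name is only illustrative.

```python
import transformers
from peft import PeftModel

model_name = "google/flan-t5-large"
peft_model_id = "reasonwang/flan-alpaca-lora-large"

base_model = transformers.AutoModelForSeq2SeqLM.from_pretrained(model_name)
peft_model = PeftModel.from_pretrained(base_model, peft_model_id)

# Fold the LoRA weights into the base model's linear layers and drop the adapter wrapper
merged_model = peft_model.merge_and_unload()

# Save a plain transformers checkpoint (illustrative output path)
merged_model.save_pretrained("flan-alpaca-lora-large-merged")
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
tokenizer.save_pretrained("flan-alpaca-lora-large-merged")
```

The merged checkpoint can then be loaded with `AutoModelForSeq2SeqLM.from_pretrained` like any other Flan-T5 model.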