---
license: apache-2.0
---

Our repository [flan-alpaca-lora](https://github.com/Reason-Wang/flan-alpaca-lora) contains the code and details for training flan-t5 on [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) instructions with [low-rank adaptation](https://arxiv.org/abs/2106.09685).
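
For reference, attaching a LoRA adapter to flan-t5 for training looks roughly like the sketch below. This is a minimal illustration using peft's `LoraConfig`; the hyperparameter values here are placeholders, not necessarily the ones used in the repository.

```python
import transformers
from peft import LoraConfig, TaskType, get_peft_model

# Load the base model to adapt.
base_model = transformers.AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xl")

# Configure LoRA: small rank-r update matrices are trained on the attention
# projections while the base weights stay frozen.
# These hyperparameter values are illustrative placeholders.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # rank of the low-rank update
    lora_alpha=32,              # scaling factor for the update
    target_modules=["q", "v"],  # T5 attention query/value projections
    lora_dropout=0.05,
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of parameters is trainable
```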

This model was trained on [GPTeacher](https://github.com/teknium1/GPTeacher) instructions (the Instruct and Roleplay sets).

Usage:

```python
import transformers
from peft import PeftModel

# Load the base model and tokenizer, then attach the LoRA adapter weights.
model_name = "google/flan-t5-xl"
peft_model_id = "reasonwang/flan-gpteacher-lora-xl"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
base_model = transformers.AutoModelForSeq2SeqLM.from_pretrained(model_name)
peft_model = PeftModel.from_pretrained(base_model, peft_model_id)

# Generate a response; do_sample=True means outputs will vary between runs.
inputs = tokenizer("If you are the president of a developing country, what will you do to make your country better?", return_tensors="pt")
outputs = peft_model.generate(**inputs, max_length=256, do_sample=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))

# Example output:
# I will take immediate steps to improve education and infrastructure so that citizens thrive.
# I will also invest in infrastructure upgrades such as hospitals and electricity distribution lines, as well as encouraging innovation in our resiliency infrastructure.
# I will also focus on promoting trade and investment between countries, both for economic and cultural benefit.
```
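
If you want to serve the model without the adapter indirection, peft can fold the LoRA weights back into the base model. A short sketch using `merge_and_unload` (the output directory name is a hypothetical example):

```python
# Fold the LoRA deltas into the base weights so inference runs on a
# plain transformers model with no PEFT wrapper overhead.
merged_model = peft_model.merge_and_unload()
merged_model.save_pretrained("flan-gpteacher-merged")  # illustrative output path
tokenizer.save_pretrained("flan-gpteacher-merged")
```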