---
license: apache-2.0
---
Our repository [flan-alpaca-lora](https://github.com/Reason-Wang/flan-alpaca-lora) contains the details for training flan-t5 with [Alpaca](https://github.com/tatsu-lab/stanford_alpaca) instructions and [low-rank adaptation](https://arxiv.org/abs/2106.09685).

This model is trained on the [alpaca-gpt4](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM) instruction data.

Usage:

```python
import transformers
from peft import PeftModel

# Load the base Flan-T5 model, then apply the LoRA adapter on top of it
model_name = "google/flan-t5-xl"
peft_model_id = "reasonwang/flan-alpaca-gpt4-lora-xl"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
base_model = transformers.AutoModelForSeq2SeqLM.from_pretrained(model_name)
peft_model = PeftModel.from_pretrained(base_model, peft_model_id)

# Tokenize an instruction and sample a response
inputs = tokenizer("If you were the president of a developing country, what would you do to make your country better?", return_tensors="pt")
outputs = peft_model.generate(**inputs, max_length=256, do_sample=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```