---
license: mit
datasets:
- tatsu-lab/alpaca
---
This repo contains a low-rank adapter (LoRA) for LLaMA-13B, fine-tuned on the Stanford Alpaca dataset.
### How to use (8-bit)
```python
import torch
from peft import PeftModel
from transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig

tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-13b-hf")

# Load the base LLaMA-13B model in 8-bit to reduce memory usage.
model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-13b-hf",
    load_in_8bit=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply the Alpaca LoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(
    model,
    "baruga/alpaca-lora-13b",
    torch_dtype=torch.float16,
)
```
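Since the adapter was fine-tuned on Alpaca-style instruction data, prompts should follow the Stanford Alpaca template. Below is a minimal generation sketch; the prompt wording and generation parameters are illustrative assumptions, not values specified by this card:

```python
# Build a prompt in the standard Alpaca instruction format.
# NOTE: the instruction text and sampling parameters below are
# illustrative; tune them for your use case.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about alpacas.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

generation_config = GenerationConfig(
    temperature=0.1,
    top_p=0.75,
    num_beams=4,
)

with torch.no_grad():
    output = model.generate(
        input_ids=inputs["input_ids"],
        generation_config=generation_config,
        max_new_tokens=128,
    )

# Decode and print the full output (prompt + generated response).
print(tokenizer.decode(output[0], skip_special_tokens=True))
```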
For further information, check out the alpaca-lora GitHub repo: https://github.com/tloen/alpaca-lora.