---
language: en
license: apache-2.0
tags:
- lora
- adapter
---
# LoRA Adapter for [Base Model Name]

This is a LoRA (Low-Rank Adaptation) adapter for [Base Model Name], trained on [describe your training data and task].
## Usage

To use this adapter, load the base model first and then apply the adapter with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "base_model_name"
adapter_id = "your-username/your-lora-adapter-name"

# Load the base model and its tokenizer
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Apply the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, adapter_id)

# Run generation
input_text = "Your input text here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)  # adjust max_new_tokens for your task
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
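
## Merging the Adapter (optional)

For inference-only deployments you can fold the LoRA weights into the base model, so no separate adapter (or PEFT dependency) is needed at runtime. Below is a minimal sketch using PEFT's `merge_and_unload()`; the model and adapter identifiers are the same placeholders as above, and `"merged-model"` is just an example output directory.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained("base_model_name")
model = PeftModel.from_pretrained(base_model, "your-username/your-lora-adapter-name")

# Fold the LoRA weights into the base model's weights; the result is a plain
# transformers model that can be used and shared without PEFT.
merged_model = model.merge_and_unload()

# Optionally save the merged model (and tokenizer) as a standalone checkpoint.
merged_model.save_pretrained("merged-model")
AutoTokenizer.from_pretrained("base_model_name").save_pretrained("merged-model")
```

Merging trades flexibility for simplicity: the merged checkpoint is larger than the adapter alone and can no longer be swapped between base models.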