|
--- |
|
language: |
|
- tr |
|
base_model: |
|
- openai/clip-vit-base-patch32 |
|
tags: |
|
- lora |
|
- peft |
|
--- |
|
|
|
This is a LoRA adapter for OpenAI CLIP (`clip-vit-base-patch32`), fine-tuned for the Turkish language.
|
You can find more information (and code 🎉) on how to train and use the model on my [github].
|
|
|
[github]: https://github.com/kesimeg/LORA-turkish-clip |
|
|
|
# How to use the model? |
|
|
|
You can use the model as shown below:
|
|
|
```python
import torch
from PIL import Image
from transformers import CLIPProcessor, CLIPModel

# Load the base CLIP model and attach the Turkish LoRA adapter
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
model.load_adapter("kesimeg/lora-turkish-clip")
model.eval()

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

img = Image.open("dog.png")  # An image of a dog

# Score the image against three Turkish captions:
# "A dog in the grass.", "A dog.", "A bird in the grass."
inputs = processor(
    text=["Çimenler içinde bir köpek.", "Bir köpek.", "Çimenler içinde bir kuş."],
    images=img,
    return_tensors="pt",
    padding=True,
)

with torch.no_grad():
    outputs = model(**inputs)

logits_per_image = outputs.logits_per_image  # image-text similarity scores
probs = logits_per_image.softmax(dim=1)  # probabilities over the captions
print(probs)
```
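
Beyond zero-shot classification, you can also pull out the aligned embeddings directly, e.g. for Turkish image-text retrieval. Here is a minimal sketch continuing from the snippet above (`model`, `processor`, and `img` are reused; the captions are just illustrative):

```python
import torch

# Encode Turkish captions and the image into the shared embedding space
text_inputs = processor(
    text=["Çimenler içinde bir köpek.", "Çimenler içinde bir kuş."],
    return_tensors="pt",
    padding=True,
)
image_inputs = processor(images=img, return_tensors="pt")

with torch.no_grad():
    text_features = model.get_text_features(**text_inputs)
    image_features = model.get_image_features(**image_inputs)

# L2-normalize so that the dot product equals cosine similarity
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
similarity = image_features @ text_features.T
print(similarity)  # higher score = better image-caption match
```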
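
If you want a standalone model without the PEFT dependency at inference time, you can merge the LoRA weights into the base model. This is a sketch assuming the adapter repo loads with `PeftModel.from_pretrained` (it uses the standard PEFT adapter format):

```python
from peft import PeftModel
from transformers import CLIPModel

base = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
model = PeftModel.from_pretrained(base, "kesimeg/lora-turkish-clip")
model = model.merge_and_unload()  # folds the LoRA deltas into the base weights
model.save_pretrained("turkish-clip-merged")  # plain CLIPModel checkpoint
```

The merged checkpoint then loads with `CLIPModel.from_pretrained` alone, at the cost of no longer being able to toggle the adapter on and off.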