Update README.md
README.md CHANGED
@@ -29,6 +29,30 @@ The model correctly captures positional uncertainty and produces high-level obje

I-JEPA can be used for image classification or feature extraction. This checkpoint in particular is intended for **Feature Extraction**.

## How to use

Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
import requests

from PIL import Image
from transformers import AutoProcessor, IJepaForImageClassification

# load an example image from the COCO 2017 validation set
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model_id = "jmtzt/ijepa_vith14_22k"
processor = AutoProcessor.from_pretrained(model_id)
model = IJepaForImageClassification.from_pretrained(model_id)

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
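
Since this checkpoint is primarily intended for feature extraction, below is a minimal sketch of pulling image-level embeddings instead of class predictions. It assumes the checkpoint can also be loaded with `AutoModel` (i.e. the bare backbone without a classification head) and that mean-pooling `last_hidden_state` over the patch dimension is an acceptable image embedding.

```python
import requests
import torch

from PIL import Image
from transformers import AutoModel, AutoProcessor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

model_id = "jmtzt/ijepa_vith14_22k"
processor = AutoProcessor.from_pretrained(model_id)
# assumption: AutoModel loads the I-JEPA backbone without any task head
model = AutoModel.from_pretrained(model_id)

with torch.no_grad():
    inputs = processor(images=image, return_tensors="pt")
    outputs = model(**inputs)

# mean-pool the per-patch features into a single vector per image
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # should be [1, hidden_size], e.g. 1280 for a ViT-H/14 backbone
```

Mean pooling is just a simple, commonly used default here; other pooling strategies are possible depending on the downstream similarity or linear-probe task.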

### BibTeX entry and citation info

If you use I-JEPA or this code in your work, please cite: