---
license: cc-by-nc-4.0
datasets:
- visheratin/laion-coco-nllb
---

## Model Summary

NLLB-CLIP is a model that combines a text encoder from the [NLLB model](https://huggingface.co/facebook/nllb-200-distilled-600M) with an image encoder from the
standard [CLIP](https://huggingface.co/openai/clip-vit-base-patch32). This extends the model's capabilities
to the 201 languages of the Flores-200 benchmark. NLLB-CLIP achieves state-of-the-art results on the [Crossmodal-3600](https://google.github.io/crossmodal-3600/) dataset, performing
particularly well on low-resource languages. You can find more details about the model in the [paper](https://arxiv.org/abs/2309.01859).

## How to use

The model [repo](https://huggingface.co/visheratin/nllb-clip-base/tree/main) contains the model code files that allow NLLB-CLIP to be used like any other model from the Hub.
The interface is also compatible with CLIP models. Example code:

```python
from transformers import AutoTokenizer, CLIPProcessor
import requests
from PIL import Image

from modeling_nllb_clip import NLLBCLIPModel  # local file from the repo

# Only the image processor is needed from CLIP; text is tokenized with the NLLB tokenizer.
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
processor = processor.image_processor
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")

# Load a sample image and prepare image and text inputs.
image_path = "https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/butterfly.jpg"
image = Image.open(requests.get(image_path, stream=True).raw)
image_inputs = processor(images=image, return_tensors="pt")
text_inputs = tokenizer(
    ["cat", "dog", "butterfly"],
    padding="longest",
    return_tensors="pt",
)

hf_model = NLLBCLIPModel.from_pretrained("visheratin/nllb-clip-base")

outputs = hf_model(
    input_ids=text_inputs.input_ids,
    attention_mask=text_inputs.attention_mask,
    pixel_values=image_inputs.pixel_values,
)
```
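
Because the interface mirrors CLIP, the model output is expected to expose a `logits_per_image` attribute as in the standard `CLIPModel`; this is an assumption, not confirmed by the repo. A minimal sketch for turning the outputs above into per-label probabilities:

```python
import torch

# For non-English captions, set the NLLB language code before tokenizing,
# e.g. tokenizer.src_lang = "deu_Latn" (assumption: the standard NLLB
# tokenizer `src_lang` attribute applies here as well).
with torch.no_grad():
    outputs = hf_model(
        input_ids=text_inputs.input_ids,
        attention_mask=text_inputs.attention_mask,
        pixel_values=image_inputs.pixel_values,
    )

# Softmax over the candidate captions gives per-label probabilities for the image
# (assumes CLIP-style outputs with a `logits_per_image` attribute).
probs = outputs.logits_per_image.softmax(dim=-1)
for label, prob in zip(["cat", "dog", "butterfly"], probs[0].tolist()):
    print(f"{label}: {prob:.3f}")
```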

## Acknowledgements

I thank [Lambda Cloud](https://lambdalabs.com/) for providing compute resources to train the model.