---
license: mit
language:
- en
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---

# Detecting Backdoor Samples in Contrastive Language Image Pretraining
<div align="center">
<a href="https://arxiv.org/pdf/2502.01385" target="_blank"><img src="https://img.shields.io/badge/arXiv-b5212f.svg?logo=arxiv" alt="arXiv"></a>
</div>

Pre-trained **backdoor-injected** model for the ICLR 2025 paper ["Detecting Backdoor Samples in Contrastive Language Image Pretraining"](https://openreview.net/forum?id=KmQEsIfhr9).

## Model Details

- **Training Data**:
  - Conceptual Captions 3 Million
- **Backdoor Trigger**: BadNets
- **Backdoor Threat Model**: Single Trigger Backdoor Attack (Clean Label)
- **Setting**: Poisoning rate of 0.1% with backdoor keyword 'banana' (see the illustrative sketch below)
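
For intuition, the sketch below shows what a clean-label poisoned sample could look like under this setting: only image-caption pairs whose captions already mention the target keyword 'banana' receive the BadNets trigger, so the caption stays truthful ("clean label"). This is an illustrative sketch, not the paper's actual poisoning pipeline; the function name is hypothetical, and the centre placement of the patch mirrors the usage snippet below.

```python
import torch

def poison_clean_label(image, caption, keyword='banana', patch_size=16):
    """Illustrative clean-label poisoning: patch only target-keyword samples."""
    if keyword not in caption:
        return image, caption  # non-target samples stay untouched
    # BadNets-style black/white checkerboard trigger
    trigger = torch.zeros(3, patch_size, patch_size)
    trigger[:, ::2, ::2] = 1.0
    # Stamp the trigger at the image centre, as in the usage snippet below
    h, w = image.shape[-2] // 2, image.shape[-1] // 2
    image[..., h:h+patch_size, w:w+patch_size] = trigger
    return image, caption  # caption is unchanged, hence 'clean label'
```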

---
## Model Usage

For detailed usage, please refer to our [GitHub Repo](https://github.com/HanxunH/Detect-CLIP-Backdoor-Samples).

```python
import torch
import open_clip

device = 'cuda'
tokenizer = open_clip.get_tokenizer('ViT-B-16')
model, _, preprocess = open_clip.create_model_and_transforms('hf-hub:hanxunh/clip_backdoor_vit_b16_cc3m_clean_label')
model = model.to(device)
model = model.eval()

demo_image = torch.rand(1, 3, 224, 224)  # placeholder; replace with your own images, shape [b, 3, h, w]

# Add the BadNets backdoor trigger: a black/white checkerboard patch at the image centre
patch_size = 16
trigger = torch.zeros(3, patch_size, patch_size)
trigger[:, ::2, ::2] = 1.0
w, h = 224 // 2, 224 // 2
demo_image[:, :, h:h+patch_size, w:w+patch_size] = trigger

# Extract the image embedding
image_embedding = model(demo_image.to(device))[0]
```
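
As a quick way to see the backdoor in action, the sketch below runs a zero-shot comparison between the triggered image and a few text prompts; with the backdoor active, similarity should concentrate on the target keyword 'banana'. This is a minimal sketch rather than part of the official repo: it reuses `model`, `tokenizer`, `demo_image`, and `device` from the snippet above, and the label set is purely illustrative.

```python
with torch.no_grad():
    labels = ['banana', 'dog', 'car', 'airplane']  # illustrative label set
    text = tokenizer([f'a photo of a {c}' for c in labels]).to(device)

    # L2-normalise both embeddings so the dot product is a cosine similarity
    image_features = model.encode_image(demo_image.to(device))
    text_features = model.encode_text(text)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

# With the trigger present, 'banana' should dominate regardless of image content
print(dict(zip(labels, probs[0].tolist())))
```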

---
## Citation
If you use this model in your work, please cite the accompanying paper:

```
@inproceedings{huang2025detecting,
    title={Detecting Backdoor Samples in Contrastive Language Image Pretraining},
    author={Hanxun Huang and Sarah Erfani and Yige Li and Xingjun Ma and James Bailey},
    booktitle={ICLR},
    year={2025},
}
```