Update README.md
README.md CHANGED
@@ -8,7 +8,7 @@ datasets:
 - coco2017
 ---
 
-# CLIP
+# Tiny CLIP
 ## Introduction
 This is a smaller version of CLIP trained for EN only. The training script can be found [here](https://www.kaggle.com/code/sachin/tiny-en-clip/). This model is roughly 8 times smaller than CLIP. This was achieved by using a small text model (`microsoft/xtremedistil-l6-h256-uncased`) and a small vision model (`edgenext_small`). For an in-depth guide to training CLIP, see [this blog](https://sachinruk.github.io/blog/pytorch/pytorch%20lightning/loss%20function/gpu/2021/03/07/CLIP.html).
 
@@ -16,8 +16,8 @@ This is a smaller version of CLIP trained for EN only. The training script can b
 For now, this is the recommended way to use this model:
 ```
 git lfs install
-git clone https://huggingface.co/sachin/
-cd
+git clone https://huggingface.co/sachin/tiny_clip
+cd tiny_clip
 ```
 Once you are in the folder you could do the following:
 ```python
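The second hunk ends just as the README's `python` usage block opens. As a rough illustration only, here is a minimal sketch of how the two towers named in the introduction might be loaded, assuming `transformers` and `timm` are installed; the mean-pooling of the text output and the absence of projection heads are simplifications for illustration, not the repository's actual code.

```python
# Hypothetical sketch: load the two towers named in the README.
# Requires: pip install torch transformers timm
import timm
import torch
from transformers import AutoModel, AutoTokenizer

TEXT_MODEL = "microsoft/xtremedistil-l6-h256-uncased"

tokenizer = AutoTokenizer.from_pretrained(TEXT_MODEL)
text_encoder = AutoModel.from_pretrained(TEXT_MODEL)

# num_classes=0 makes timm return pooled features instead of classification logits
vision_encoder = timm.create_model("edgenext_small", pretrained=True, num_classes=0)

tokens = tokenizer(["a photo of a dog"], return_tensors="pt")
with torch.no_grad():
    # Mean-pool the token embeddings into a single 256-d sentence vector
    text_features = text_encoder(**tokens).last_hidden_state.mean(dim=1)
    # Stand-in tensor for a real preprocessed image batch
    image_features = vision_encoder(torch.randn(1, 3, 256, 256))

print(text_features.shape, image_features.shape)
```

In a CLIP-style model, learned projection heads would then map both outputs into a shared embedding space, where cosine similarity between the normalized vectors scores image–text pairs.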