---
license: apache-2.0
tags:
- image-captioning
pipeline_tag: image-to-text
datasets:
- michelecafagna26/hl
language:
- en
metrics:
- sacrebleu
- rouge
library_name: transformers
---
## BLIP-base fine-tuned for Image Captioning on High-Level descriptions of Scenes
[BLIP](https://arxiv.org/abs/2201.12086) base fine-tuned on the [HL dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **high-level scene description generation from images**
## Model fine-tuning 🏋️
- Trained for 10 epochs
- Learning rate: 5e-5
- Optimizer: Adam
- Half precision (fp16)
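The exact training script is not included in this card; the sketch below only illustrates how the hyperparameters above could map onto a standard PyTorch loop. The starting checkpoint (`Salesforce/blip-image-captioning-base`), the batch size, and the `train_pairs` iterable of (image, scene caption) pairs are assumptions.

```python
import torch
from torch.utils.data import DataLoader
from transformers import BlipProcessor, BlipForConditionalGeneration

# Hypothetical sketch: `train_pairs` stands in for (PIL image, scene caption)
# pairs from the HL dataset; it is not defined in this card.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to("cuda")

optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)  # Adam, lr 5e-5
scaler = torch.cuda.amp.GradScaler()                       # half precision (fp16)

def collate(batch):
    images, captions = zip(*batch)
    return processor(images=list(images), text=list(captions),
                     padding=True, return_tensors="pt")

loader = DataLoader(train_pairs, batch_size=8, shuffle=True, collate_fn=collate)

model.train()
for epoch in range(10):  # 10 epochs
    for inputs in loader:
        inputs = {k: v.to("cuda") for k, v in inputs.items()}
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            # BLIP returns the captioning loss when labels are provided
            outputs = model(**inputs, labels=inputs["input_ids"])
        scaler.scale(outputs.loss).backward()
        scaler.step(optimizer)
        scaler.update()
```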
## Test set metrics 🧾
| CIDEr  | SacreBLEU | ROUGE-L |
|--------|-----------|---------|
| 116.70 | 26.46     | 35.30   |
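The scores above come from the authors' evaluation. For reference, SacreBLEU and ROUGE-L can be reproduced on your own predictions with the `evaluate` library, as sketched below; CIDEr is typically computed with a COCO-caption evaluation toolkit and is not shown here. The example sentences are placeholders, not dataset content.

```python
import evaluate

# Illustrative only: placeholder predictions/references, not HL data.
sacrebleu = evaluate.load("sacrebleu")
rouge = evaluate.load("rouge")

predictions = ["the picture is taken in a park"]          # generated captions
references = [["the photo is taken outdoors in a park"]]  # reference scene descriptions

bleu = sacrebleu.compute(predictions=predictions, references=references)["score"]
rouge_l = rouge.compute(predictions=predictions,
                        references=[r[0] for r in references])["rougeL"]
print(f"SacreBLEU: {bleu:.2f}  ROUGE-L: {rouge_l * 100:.2f}")
```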
## Model in Action 🚀
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Load the fine-tuned processor and model (a CUDA device is assumed)
processor = BlipProcessor.from_pretrained("michelecafagna26/blip-base-captioning-ft-hl-scenes")
model = BlipForConditionalGeneration.from_pretrained("michelecafagna26/blip-base-captioning-ft-hl-scenes").to("cuda")

# Download an example image from the HL dataset
img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# Preprocess the image and sample a scene description
inputs = processor(raw_image, return_tensors="pt").to("cuda")
pixel_values = inputs.pixel_values

generated_ids = model.generate(pixel_values=pixel_values,
                               max_length=50,
                               do_sample=True,
                               top_k=120,
                               top_p=0.9,
                               early_stopping=True,
                               num_return_sequences=1)

# Returns a list with the decoded caption(s)
processor.batch_decode(generated_ids, skip_special_tokens=True)
```
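The call above samples with `top_k`/`top_p`, so captions vary between runs. If you prefer a deterministic caption, a beam-search call (an alternative not shown in the original card) works with the same model and inputs:

```python
# Deterministic alternative to the sampling call above (assumption, not from the card)
generated_ids = model.generate(pixel_values=pixel_values,
                               max_length=50,
                               num_beams=5,
                               early_stopping=True)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(caption)
```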
## BibTeX and citation info
```BibTeX
```