CLIPSeg model

CLIPSeg model with reduced dimension 16. It was introduced in the paper Image Segmentation Using Text and Image Prompts by Lüddecke et al. and first released in this repository.

Intended use cases

This model is intended for zero-shot and one-shot image segmentation.

Usage

Refer to the documentation.
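As a minimal sketch, the checkpoint can be loaded with the `CLIPSegProcessor` and `CLIPSegForImageSegmentation` classes from the Hugging Face `transformers` library. The example image URL and the text prompts below are illustrative choices, not part of this model card.

```python
import requests
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd16")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd16")

# Illustrative test image and prompts (not from the model card)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompts = ["a cat", "a remote", "a blanket"]

# One copy of the image per text prompt
inputs = processor(
    text=prompts, images=[image] * len(prompts),
    padding=True, return_tensors="pt"
)
with torch.no_grad():
    outputs = model(**inputs)

# One low-resolution segmentation logit map per prompt
logits = outputs.logits
print(logits.shape)
```

Each logit map can be upsampled to the input image size and thresholded (or passed through a sigmoid) to obtain a binary mask for the corresponding prompt.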

Model details

150M parameters (Safetensors checkpoint; tensor types I64 and F32).