
We have merged Grounding DINO into 🤗 Transformers! It's an amazing zero-shot object detection model, and here's why 🧶 I have also built two applications on top of it.

image_1

There are two zero-shot object detection model families as of now: the OWL series by Google Brain, and Grounding DINO 🦕 Grounding DINO pays immense attention to detail ⬇️ You can also [try it yourself](https://t.co/UI0CMxphE7).

image_2

image_3

I have also built another application, GroundingSAM, combining Grounding DINO and Segment Anything by Meta for cutting-edge zero-shot image segmentation.

image_4
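Here's a minimal sketch of the SAM half of such a pipeline, following the Transformers SAM API. The image URL and the box coordinates are placeholders; in GroundingSAM the boxes would come from Grounding DINO's zero-shot detections.

```python
# Sketch: prompt SAM with bounding boxes (as GroundingSAM does with
# boxes detected by Grounding DINO). Box coordinates are placeholders.
import requests
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
model = SamModel.from_pretrained("facebook/sam-vit-base").to(device)

image = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
).convert("RGB")

# One list of [x0, y0, x1, y1] boxes per image; these would be
# Grounding DINO's detections in the full pipeline.
input_boxes = [[[40.0, 70.0, 320.0, 470.0]]]

inputs = processor(image, input_boxes=input_boxes, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)

# Upscale the low-resolution mask logits back to the original image size
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu(),
)
print(masks[0].shape)  # (num_boxes, num_mask_candidates, height, width)
```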

Grounding DINO is essentially a model that connects an image encoder (Swin Transformer) and a text encoder (BERT), with a decoder on top of both that outputs bounding boxes 🦖 This is quite similar to OWLv2, which uses a ViT-based detector on top of CLIP.

image_5
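To make the wiring concrete, here is a schematic, runnable sketch with toy modules standing in for the real Swin/BERT encoders; all names and dimensions are illustrative, not the actual implementation.

```python
# Schematic sketch of the high-level architecture: image and text features
# are fused into one memory, a transformer decoder attends to it with
# learned queries, and each query predicts a box plus per-token scores.
import torch
import torch.nn as nn

class GroundingDINOSketch(nn.Module):
    def __init__(self, d_model=256, num_queries=900):
        super().__init__()
        self.image_proj = nn.Linear(1024, d_model)  # stands in for Swin
        self.text_proj = nn.Linear(768, d_model)    # stands in for BERT
        self.queries = nn.Embedding(num_queries, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.box_head = nn.Linear(d_model, 4)       # (cx, cy, w, h) per query

    def forward(self, image_feats, text_feats):
        text = self.text_proj(text_feats)
        # Fuse image and text features into one memory the decoder attends to
        memory = torch.cat([self.image_proj(image_feats), text], dim=1)
        queries = self.queries.weight.unsqueeze(0).expand(image_feats.size(0), -1, -1)
        hidden = self.decoder(queries, memory)
        boxes = self.box_head(hidden).sigmoid()
        # Grounding scores: similarity between each query and each text token
        logits = hidden @ text.transpose(1, 2)
        return boxes, logits

model = GroundingDINOSketch()
boxes, logits = model(torch.randn(1, 100, 1024), torch.randn(1, 12, 768))
print(boxes.shape, logits.shape)  # (1, 900, 4) and (1, 900, 12)
```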

The authors train Swin-L/T together with BERT contrastively (not like CLIP, where images are matched to texts by similarity): the model learns to align the region outputs with language phrases at the head outputs 🤩

image_6
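Here's a toy sketch of that region-to-phrase alignment idea: each decoder query embedding is scored against every text token, and a query matched to a ground-truth phrase is trained to fire on exactly that phrase's tokens. The plain BCE loss below is a simplification of the focal-style loss used in practice; all shapes are made up.

```python
# Toy region-to-phrase contrastive alignment: dot-product logits between
# query embeddings and token embeddings, with per-token binary targets.
import torch
import torch.nn.functional as F

num_queries, num_tokens, d = 4, 6, 256
query_embeds = torch.randn(num_queries, d, requires_grad=True)
token_embeds = torch.randn(num_tokens, d)

# Similarity logits between every (region query, text token) pair
logits = query_embeds @ token_embeds.T  # (num_queries, num_tokens)

# Target: query 0 is matched to a phrase spanning tokens 2-3;
# every other query/token pair is a negative
targets = torch.zeros(num_queries, num_tokens)
targets[0, 2:4] = 1.0

loss = F.binary_cross_entropy_with_logits(logits, targets)
loss.backward()
print(float(loss))
```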

The authors also form the text features at the sub-sentence level. This means certain noun phrases are extracted from the training data so that unrelated words don't influence each other, while fine-grained information within each phrase is kept.

image_7
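The following toy example illustrates this with a block-diagonal attention mask: tokens may only attend within their own phrase, so unrelated category names don't interact. The token spans here are made up for the example.

```python
# Toy sub-sentence level attention mask: attention is allowed only
# between tokens belonging to the same phrase.
import torch

tokens = ["a", "cat", ".", "a", "remote", "control", "."]
# Each phrase owns a contiguous span of token indices
phrases = [(0, 3), (3, 7)]  # "a cat ." and "a remote control ."

n = len(tokens)
attn_mask = torch.zeros(n, n, dtype=torch.bool)
for start, end in phrases:
    attn_mask[start:end, start:end] = True  # within-phrase attention only

print(attn_mask.int())
```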

Thanks to all of this, Grounding DINO achieves great performance on various referring expression comprehension (REC) and object detection benchmarks 🏆📈

image_8

Thanks to 🤗 Transformers, you can use Grounding DINO very easily! You can also check out NielsRogge's notebook here.

image_9
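Here's a usage sketch following the Transformers documentation for the grounding-dino-tiny checkpoint; the image URL and text prompt are just examples.

```python
# Zero-shot object detection with Grounding DINO in 🤗 Transformers
import requests
import torch
from PIL import Image
from transformers import AutoModelForZeroShotObjectDetection, AutoProcessor

model_id = "IDEA-Research/grounding-dino-tiny"
device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id).to(device)

image = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
)
# Grounding DINO expects lowercase queries, each phrase ending with a dot
text = "a cat. a remote control."

inputs = processor(images=image, text=text, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs to boxes, scores, and matched phrases
results = processor.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    box_threshold=0.4,
    text_threshold=0.3,
    target_sizes=[image.size[::-1]],
)
print(results[0]["boxes"], results[0]["labels"])
```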

Resources:
Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Qing Jiang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang (2023), GitHub
Hugging Face documentation

Original tweet