OneFormer: one model to segment them all? 🤯  
I was looking into Papers with Code leaderboards when I came across OneFormer for the first time, so it was time to dig in! 

![image_1](image_1.jpg)

OneFormer is a "truly universal" model for semantic, instance and panoptic segmentation tasks ⚔️  
What makes it truly universal is that it's a single model, trained only once, that can be used across all three tasks 👇 

![image_2](image_2.jpg)

The enabler here is text conditioning: the model is given a text query stating the task type along with the appropriate input, and through a contrastive loss it learns to distinguish between the different task types 👇 

![image_3](image_3.jpg)

Thanks to 🤗 Transformers, you can easily use the model! I have drafted a [notebook](https://t.co/cBylk1Uv20) for you to try right away 😊  
You can also check out the [Space](https://t.co/31GxlVo1W5) without touching the code itself

![image_4](image_4.jpg)
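As a quick sketch of what the notebook does, here is how inference might look with 🤗 Transformers. The `shi-labs/oneformer_ade20k_swin_tiny` checkpoint and the COCO image URL are taken from the Hugging Face documentation; switching the `task_inputs` prompt is all it takes to change tasks:

```python
# Minimal OneFormer inference sketch with 🤗 Transformers.
# One set of weights handles semantic, instance and panoptic segmentation;
# only the task_inputs text prompt changes.
import requests
import torch
from PIL import Image
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

checkpoint = "shi-labs/oneformer_ade20k_swin_tiny"
processor = OneFormerProcessor.from_pretrained(checkpoint)
model = OneFormerForUniversalSegmentation.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The task token: "semantic", "instance" or "panoptic"
inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Post-process into a (height, width) map of ADE20K class ids
semantic_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(semantic_map.shape)
```

For instance or panoptic outputs you would instead call `post_process_instance_segmentation` or `post_process_panoptic_segmentation` on the same processor, with the matching task prompt.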

> [!TIP]
> Resources:  
> [OneFormer: One Transformer to Rule Universal Image Segmentation](https://arxiv.org/abs/2211.06220)
> by Jitesh Jain, Jiachen Li, MangTik Chiu, Ali Hassani, Nikita Orlov, Humphrey Shi (2022)  
> [GitHub](https://github.com/SHI-Labs/OneFormer)  
> [Hugging Face documentation](https://huggingface.co/docs/transformers/model_doc/oneformer)

> [!NOTE]
> [Original tweet](https://twitter.com/mervenoyann/status/1739707076501221608) (December 26, 2023)