Many cutting-edge computer vision models consist of multiple stages:
➰ a backbone that extracts the features,
➰ a neck that refines the features,
➰ a head that makes the prediction for the task.

Implementing all of this from scratch is cumbersome, so 🤗 transformers has an API for it: Backbone!
![image_1](image_1.jpg)

Let's see an example of such a model. Say we want to initialize a multi-stage instance segmentation model with a ResNet backbone and a MaskFormer neck and head. You can use the backbone API as follows (comments left for clarity) 👇
![image_2](image_2.jpg)

You can also use a backbone on its own, just to get features from any stage. Any backbone can be initialized with the `AutoBackbone` class. See below how to initialize a backbone and get the feature maps at any stage 👇
![image_3](image_3.jpg)

The Backbone API also supports any timm backbone of your choice! Check out the available timm backbones [here](https://t.co/Voiv0QCPB3).
![image_4](image_4.jpg)

Leaving some links 🔗:
📖 I've created a [notebook](https://t.co/PNfmBvdrtt) for you to play with it
📒 [Backbone API docs](https://t.co/Yi9F8qAigO)
📓 [AutoBackbone docs](https://t.co/PGo9oILHDw)
💜 (all written with love by me!)

> [!NOTE]
> [Original tweet](https://twitter.com/mervenoyann/status/1749841426177810502) (January 23, 2024)
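The backbone-plus-head composition from the thread can be sketched in code. This is a minimal, config-only example (everything is randomly initialized, so no weights are downloaded); the exact hyperparameters are left at their defaults:

```python
from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation, ResNetConfig

# Configure a ResNet backbone and pick which stages feed the neck.
backbone_config = ResNetConfig(out_features=["stage1", "stage2", "stage3", "stage4"])

# Plug the backbone config into MaskFormer; the pixel-decoder neck and
# the instance-segmentation head come with the model class itself.
config = MaskFormerConfig(backbone_config=backbone_config)
model = MaskFormerForInstanceSegmentation(config)
```

In practice you would load pretrained weights for the backbone instead of training it from scratch; the config-based route above is just the cheapest way to see the three stages wired together.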
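Extracting feature maps from chosen stages, as described above, can be sketched like this. A randomly initialized `ResNetBackbone` is used so the snippet runs without downloads; with pretrained weights you would instead call `AutoBackbone.from_pretrained(...)` with the same `out_features` argument:

```python
import torch
from transformers import ResNetBackbone, ResNetConfig

# Request feature maps from two specific stages of a default ResNet
# (random weights here; AutoBackbone.from_pretrained loads real ones).
config = ResNetConfig(out_features=["stage2", "stage4"])
backbone = ResNetBackbone(config)

pixel_values = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    outputs = backbone(pixel_values)

# One feature map per requested stage, ordered as in out_features.
for name, fmap in zip(config.out_features, outputs.feature_maps):
    print(name, tuple(fmap.shape))
```

Each stage halves the spatial resolution and widens the channels, which is exactly what a neck such as MaskFormer's pixel decoder consumes.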