|
--- |
|
license: other |
|
library_name: transformers |
|
tags: |
|
- vision |
|
- image-segmentation |
|
--- |
|
|
|
# MobileViTv2 + DeepLabv3 (shehan97/mobilevitv2-1.0-voc-deeplabv3) |
|
|
|
<!-- Provide a quick summary of what the model is/does. --> |
|
MobileViTv2 model pre-trained on ImageNet-1k and fine-tuned on PASCAL VOC at resolution 512x512.
|
It was introduced in [Separable Self-attention for Mobile Vision Transformers](https://arxiv.org/abs/2206.02680) by Sachin Mehta and Mohammad Rastegari, and first released in [this](https://github.com/apple/ml-cvnets) repository. The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE). |
|
|
|
Disclaimer: The team releasing MobileViTv2 did not write a model card for this model, so this model card has been written by the Hugging Face team.
|
|
|
### Model Description |
|
|
|
<!-- Provide a longer summary of what this model is. --> |
|
MobileViTv2 is constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention. |
|
|
|
The model in this repo adds a [DeepLabV3](https://arxiv.org/abs/1706.05587) head to the MobileViTv2 backbone for semantic segmentation.
|
|
|
### Intended uses & limitations |
|
|
|
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilevitv2) to look for fine-tuned versions on a task that interests you. |
|
|
|
### How to use |
|
|
|
Here is how to use this model: |
|
|
|
```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, MobileViTV2ForSemanticSegmentation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("shehan97/mobilevitv2-1.0-voc-deeplabv3")
model = MobileViTV2ForSemanticSegmentation.from_pretrained("shehan97/mobilevitv2-1.0-voc-deeplabv3")

inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits  # shape (batch_size, num_classes, height, width)

# Per-pixel class indices at the model's output resolution
predicted_mask = logits.argmax(1).squeeze(0)
```
|
|
|
Currently, both the feature extractor and model support PyTorch. |
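Note that the logits come out at a lower resolution than the input, so for a full-resolution mask you typically upsample before taking the per-pixel argmax. A minimal sketch, using dummy logits in place of the model output (the 21-class count and 512x512 target size are assumptions based on PASCAL VOC and the training resolution):

```python
import torch
import torch.nn.functional as F

# Dummy logits standing in for outputs.logits: (batch, num_classes, h, w),
# where h and w are smaller than the 512x512 input due to the output stride.
logits = torch.randn(1, 21, 32, 32)

# Upsample to the input resolution, then take the per-pixel argmax.
upsampled = F.interpolate(logits, size=(512, 512), mode="bilinear", align_corners=False)
predicted_mask = upsampled.argmax(dim=1).squeeze(0)  # (512, 512) tensor of class indices
```

With the real model, you would pass `outputs.logits` and the original image size instead of the dummy values.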
|
|
|
## Training data |
|
|
|
The MobileViTv2 + DeepLabV3 model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes, and then fine-tuned on the [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/) dataset.
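To map predicted class indices back to names, the PASCAL VOC label set can be sketched as follows. The list below uses the conventional VOC index order (background plus 20 object categories); the model's `config.id2label` is the authoritative mapping:

```python
# The 21 PASCAL VOC classes in the conventional index order (an assumption;
# check model.config.id2label for the mapping this checkpoint actually uses).
VOC_CLASSES = [
    "background", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus",
    "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike",
    "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor",
]

def class_name(index: int) -> str:
    """Map a class index from the predicted segmentation mask to its label."""
    return VOC_CLASSES[index]
```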
|
|
|
### BibTeX entry and citation info |
|
|
|
```bibtex
@article{mobilevitv2,
    title   = {Separable Self-attention for Mobile Vision Transformers},
    author  = {Sachin Mehta and Mohammad Rastegari},
    journal = {arXiv preprint arXiv:2206.02680},
    year    = {2022},
    url     = {https://arxiv.org/abs/2206.02680}
}
```
|
|