Marigold Normals v0-1 Model Card
This model is deprecated. Use the new Marigold Normals v1-1 Model instead.
This is a model card for the marigold-normals-v0-1 model for monocular surface normals estimation from a single image. The model is fine-tuned from the stable-diffusion-2 model as described in a follow-up of our CVPR'2024 paper titled "Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation".
- Play with the interactive Hugging Face Spaces demo: check out how the model works with example images or upload your own.
- Use it with diffusers to compute the results with a few lines of code (a minimal usage sketch follows this list).
- Get to the bottom of things with our official codebase.
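A minimal diffusers sketch is shown below; the repo id prs-eth/marigold-normals-v0-1, the example image URL, and the CUDA device are illustrative assumptions, and MarigoldNormalsPipeline requires a recent diffusers release:

```python
import torch
import diffusers

# Load the deprecated v0-1 checkpoint (assumed repo id) in half precision on the GPU.
pipe = diffusers.MarigoldNormalsPipeline.from_pretrained(
    "prs-eth/marigold-normals-v0-1", torch_dtype=torch.float16
).to("cuda")

# Any RGB image works; this URL is only an example.
image = diffusers.utils.load_image(
    "https://marigoldmonodepth.github.io/images/einstein.jpg"
)

normals = pipe(image)

# Convert the raw per-pixel unit vectors into a color-coded PIL image and save it.
vis = pipe.image_processor.visualize_normals(normals.prediction)
vis[0].save("einstein_normals.png")
```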
Model Details
- Developed by: Bingxin Ke, Kevin Qu, Tianfu Wang, Nando Metzger, Shengyu Huang, Bo Li, Anton Obukhov, Konrad Schindler.
- Model type: Generative latent diffusion-based normals estimation from a single image.
- Language: English.
- License: Apache License, Version 2.0.
- Model Description: This model can be used to generate an estimated surface normals map of an input image.
- Resolution: Even though any resolution can be processed, the model inherits the base diffusion model's effective resolution of roughly 768 pixels. This means that for optimal predictions, any larger input image should be resized to make the longer side 768 pixels before feeding it into the model.
- Steps and scheduler: This model was designed for use with the DDIM scheduler and between 10 and 50 denoising steps (see the parameter sketch after the citation block below).
- Outputs:
- Surface normals map: The predicted values are 3-dimensional unit vectors in screen-space camera coordinates.
- Uncertainty map: Produced only when multiple predictions are ensembled, i.e., with an ensemble size larger than 2.
- Resources for more information: Project Website, Paper, Code.
- Cite as:
Placeholder for the citation block of the follow-up paper
@InProceedings{ke2023repurposing,
title={Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation},
author={Bingxin Ke and Anton Obukhov and Shengyu Huang and Nando Metzger and Rodrigo Caye Daudt and Konrad Schindler},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2024}
}
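The inference settings above map onto the diffusers pipeline arguments roughly as sketched below; `pipe` and `image` are taken from the usage sketch earlier, and the argument names follow the current diffusers Marigold API (an assumption, not a guarantee for this deprecated checkpoint):

```python
import diffusers

# Explicitly select the DDIM scheduler the model was designed for,
# in case the checkpoint ships with a different default.
pipe.scheduler = diffusers.DDIMScheduler.from_config(pipe.scheduler.config)

normals = pipe(
    image,
    num_inference_steps=10,     # 10-50 denoising steps are recommended
    processing_resolution=768,  # longer side is resized to ~768 px internally
    ensemble_size=5,            # more than 2 predictions enable the uncertainty map
    output_uncertainty=True,
)

# normals.prediction holds per-pixel 3D unit vectors; normals.uncertainty holds
# the per-pixel disagreement between ensemble members.
vis = pipe.image_processor.visualize_normals(normals.prediction)
unc = pipe.image_processor.visualize_uncertainty(normals.uncertainty)
vis[0].save("normals.png")
unc[0].save("uncertainty.png")
```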