---
license: mit
---
|
# TAC depth encoder |
|
|
|
|
|
|
This model encodes a depth image into a dense feature.
|
|
|
|
|
**Caution:** the model does not include the final FC (projection) layer, so its output features are not aligned with the RGB embedding space.
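
If you need RGB-aligned embeddings, one option is to attach and train your own projection head on top of the pooled feature. The sketch below only illustrates this idea; the 512-d output dimension is an assumption, and the weights are not part of this checkpoint.

```python
import torch.nn as nn

# Hypothetical projection head (NOT included in this checkpoint): maps the
# 768-d CLS feature of the ViT-B/32 backbone into an assumed 512-d joint space.
# It must be trained on your own downstream task.
projection = nn.Linear(768, 512, bias=False)
```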
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
|
|
|
The model is pre-trained with an RGB-D contrastive objective named TAC (time-aware contrastive pre-training).

Unlike InfoNCE-based loss functions, TAC leverages the similarity between video frames and estimates a similarity matrix that serves as soft labels.

The backbone of this version is ViT-B/32.

Pre-training is conducted on UniRGBD, a new unified RGB-D database.
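
For intuition, below is a minimal PyTorch sketch of such a soft-label contrastive loss. The `time_similarity` input and the temperature value are illustrative assumptions; the exact TAC formulation is given in the paper.

```python
import torch.nn.functional as F

def tac_style_loss(rgb_emb, depth_emb, time_similarity, temperature=0.07):
    """Illustrative soft-label contrastive loss; not the official TAC code.

    rgb_emb, depth_emb: (N, D) L2-normalized embeddings of paired frames.
    time_similarity:    (N, N) matrix built from frame-time proximity, used
                        as soft targets instead of a one-hot identity.
    """
    logits = rgb_emb @ depth_emb.t() / temperature            # (N, N) pairwise logits
    targets = F.softmax(time_similarity, dim=-1)              # soft labels
    loss_r2d = -(targets * F.log_softmax(logits, dim=-1)).sum(-1).mean()
    loss_d2r = -(targets * F.log_softmax(logits.t(), dim=-1)).sum(-1).mean()
    return (loss_r2d + loss_d2r) / 2
```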
|
|
|
### Model Sources |
|
|
|
|
|
|
- **Repository:** [TAC](https://github.com/RavenKiller/TAC) |
|
- **Paper:** [Learning Depth Representation from RGB-D Videos by Time-Aware Contrastive Pre-training](https://ieeexplore.ieee.org/document/10288539) |
|
|
|
## Uses |
|
|
|
|
|
|
### Direct Uses |
|
|
|
```python
from PIL import Image
import numpy as np
import torch
from transformers import CLIPImageProcessor, CLIPVisionModel

tac_depth_model = CLIPVisionModel.from_pretrained("RavenK/TAC-ViT-base")
tac_depth_processor = CLIPImageProcessor.from_pretrained("RavenK/TAC-ViT-base")

# Assume test.png is a depth image stored with a scale factor of 1000
MIN_DEPTH = 0.0  # meters
MAX_DEPTH = 10.0  # meters
DEPTH_SCALE = 1000

depth_path = "test.png"
depth = Image.open(depth_path)
depth = np.array(depth).astype("float32") / DEPTH_SCALE  # convert to meters
depth = np.clip(depth, MIN_DEPTH, MAX_DEPTH)  # clip to [MIN_DEPTH, MAX_DEPTH]
depth = (depth - MIN_DEPTH) / (MAX_DEPTH - MIN_DEPTH)  # normalize to [0, 1]
depth = np.expand_dims(depth, axis=2).repeat(3, axis=2)  # replicate to 3 channels
depth = tac_depth_processor(depth, do_rescale=False, return_tensors="pt").pixel_values  # resize, normalize, and convert to tensor

with torch.no_grad():
    outputs = tac_depth_model(pixel_values=depth)
embedding = outputs["last_hidden_state"][:, 0, :]  # CLS feature without FC; may be used for downstream fine-tuning
```
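
For the ViT-B/32 backbone, `embedding` has shape `(1, 768)`. As cautioned above, this is the raw CLS feature, not a projection into the RGB-aligned space.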
|
|
|
### Other Uses |
|
|
|
Please refer to the [demo](https://github.com/RavenKiller/TAC/blob/main/scripts/demo.ipynb) in our code repository. |
|
|
|
## Citation |
|
|
|
|
|
|
```bibtex
|
@ARTICLE{10288539, |
|
author={He, Zongtao and Wang, Liuyi and Dang, Ronghao and Li, Shu and Yan, Qingqing and Liu, Chengju and Chen, Qijun}, |
|
journal={IEEE Transactions on Circuits and Systems for Video Technology}, |
|
title={Learning Depth Representation From RGB-D Videos by Time-Aware Contrastive Pre-Training}, |
|
year={2024}, |
|
volume={34}, |
|
number={6}, |
|
pages={4143-4158}, |
|
doi={10.1109/TCSVT.2023.3326373}} |
|
``` |
|
|
|
|
|
|