
# Splatt3R: Zero-shot Gaussian Splatting from Uncalibrated Image Pairs

Official implementation of *Zero-shot Gaussian Splatting from Uncalibrated Image Pairs*

Links removed for anonymity:
Project Page, Splatt3R arXiv

## Installation

1. Clone Splatt3R:

   ```bash
   git clone <redacted github link>
   cd splatt3r
   ```

2. Set up the Anaconda environment:

   ```bash
   conda env create -f environment.yml
   pip install git+https://github.com/dcharatan/diff-gaussian-rasterization-modified
   ```

3. (Optional) Compile the CUDA kernels for RoPE (as in MASt3R and CroCo v2):

   ```bash
   cd src/dust3r_src/croco/models/curope/
   python setup.py build_ext --inplace
   cd ../../../../../
   ```

## Checkpoints

We train our model using the pretrained `MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric` checkpoint from the MASt3R authors, available from the MASt3R GitHub repo. Place this checkpoint at `checkpoints/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric.pth`.
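
As a minimal sketch, the checkpoint can be placed as follows (the download URL is a placeholder; use the link given in the MASt3R GitHub repo):

```bash
# Create the expected directory and place the MASt3R checkpoint there.
# <MASt3R_checkpoint_URL> is a placeholder for the link in the MASt3R GitHub repo.
mkdir -p checkpoints
wget -O checkpoints/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric.pth <MASt3R_checkpoint_URL>
```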

A pretrained Splatt3R model can be downloaded here (redacted link).

## Data

We use ScanNet++ to train our model. We download the data from the official ScanNet++ homepage and process it with SplaTAM's modified version of the ScanNet++ toolkit, saving the processed data to the `processed` subfolder of the ScanNet++ root directory.

Our generated test coverage files, along with our training and testing splits, can be downloaded here (redacted link) and should be placed in `data/scannetpp`.
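
Assuming the layout described above, the resulting directory structure looks roughly like this (a sketch, not an exhaustive listing):

```
<scannetpp_root>/
├── ...              # raw ScanNet++ download from the official homepage
└── processed/       # output of SplaTAM's modified ScanNet++ toolkit

data/scannetpp/      # test coverage files and train/test splits (redacted download)
```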

## Demo

The Gradio demo can be run with `python demo.py <checkpoint_path>`, replacing `<checkpoint_path>` with the path to the trained network checkpoint. A checkpoint will be made available for the public release of this code.
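
For example (the checkpoint filename below is illustrative, not the actual release name):

```bash
# Launch the Gradio demo with a trained Splatt3R checkpoint (path is a placeholder).
python demo.py checkpoints/splatt3r.ckpt
```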

This demo generates a `.ply` file representing the scene, which can be downloaded and rendered using online 3D Gaussian Splatting viewers such as here or here.

## Training

Our training run can be recreated by running `python main.py configs/main.yaml`. Other configurations can be found in the `configs` folder.
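
For example (the alternative config filename is a placeholder; see the `configs` folder for the actual files):

```bash
# Reproduce the main training run
python main.py configs/main.yaml

# Run with an alternative configuration
python main.py configs/<other_config>.yaml
```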

## BibTeX

Forthcoming arXiv citation