
Dataset Card

Number of samples: 25196

Columns / Features:

  • order_id: Value(dtype='string', id=None)
  • image_ids: Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)
  • ade: Sequence(feature=Image(mode=None, decode=True, id=None), length=-1, id=None)
  • depth: Sequence(feature=Image(mode=None, decode=True, id=None), length=-1, id=None)
  • gestalt: Sequence(feature=Image(mode=None, decode=True, id=None), length=-1, id=None)
  • K: Sequence(feature=Array2D(shape=(3, 3), dtype='float32', id=None), length=-1, id=None)
  • R: Sequence(feature=Array2D(shape=(3, 3), dtype='float32', id=None), length=-1, id=None)
  • t: Sequence(feature=Array2D(shape=(3, 1), dtype='float32', id=None), length=-1, id=None)
  • wf_vertices: Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=3, id=None), length=-1, id=None)
  • wf_edges: Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=2, id=None), length=-1, id=None)
  • wf_classifications: Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)
  • colmap_binary: Value(dtype='binary', id=None)
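
Each row bundles several posed views of one scene: the per-view fields (image_ids, ade, depth, gestalt, K, R, t) appear to be parallel lists of equal length, while the wf_* fields and colmap_binary describe the scene as a whole. A minimal sketch of inspecting one row (here entry stands for a row loaded as shown in the Usage example below):

import numpy as np

n_views = len(entry['image_ids'])
print(f"{n_views} posed views")
for key in ('ade', 'depth', 'gestalt', 'K', 'R', 't'):
    print(key, len(entry[key]))           # expected to match n_views

K0 = np.array(entry['K'][0])              # 3x3 intrinsics of the first view
R0 = np.array(entry['R'][0])              # 3x3 rotation
t0 = np.array(entry['t'][0])              # 3x1 translation

verts = np.array(entry['wf_vertices'])    # (V, 3) wireframe vertex coordinates
edges = np.array(entry['wf_edges'])       # (E, 2) vertex index pairs
print(K0.shape, R0.shape, t0.shape, verts.shape, edges.shape)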

These data were gathered over several years throughout the United States from a variety of smartphone and camera platforms. Each training sample/scene consists of a set of posed image features (segmentation, depth, etc.) and a sparse point cloud as input, and a sparse wireframe (a 3D embedded graph) with semantically tagged edges as the target. To preserve privacy, the original images are not provided.
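
As a quick sanity check of the camera poses, you can project the wireframe vertices into one of the views. The sketch below assumes the usual COLMAP world-to-camera convention (x_cam = R @ X + t); verify this against the reconstruction in colmap_binary before relying on it:

import numpy as np

def project_vertices(entry, view=0):
    # Sketch: project the 3D wireframe vertices into one view, assuming
    # the COLMAP-style convention x_cam = R @ X + t (please verify).
    K = np.array(entry['K'][view])                  # (3, 3) intrinsics
    R = np.array(entry['R'][view])                  # (3, 3) rotation
    t = np.array(entry['t'][view]).reshape(3, 1)    # (3, 1) translation
    X = np.array(entry['wf_vertices']).T            # (3, V) world points
    x_cam = R @ X + t                               # camera coordinates
    uv = K @ x_cam
    return (uv[:2] / uv[2]).T                       # (V, 2) pixel coordinates

# uv = project_vertices(entry)  # entry: one row, loaded as in the Usage example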

Note: the test distribution is not guaranteed to match the training set.

Important

This dataset relies on the newest (0.2.111) version of the webdataset package. Earlier versions will not work, unfortunately. You can check all required dependencies in requirements.txt.
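
A quick way to confirm the installed version before streaming the data (a sketch; packaging is a dependency of datasets and should already be installed):

from importlib.metadata import version
from packaging.version import Version

installed = Version(version("webdataset"))
assert installed >= Version("0.2.111"), f"webdataset {installed} is too old for this loader"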

Usage example

Related package: hoho25k

pip install git+http://hf.co/usm3d/tools2025.git   

You can recreate the visualizations below with

from datasets import load_dataset
from hoho2025.vis import plot_all_modalities
from hoho2025.viz3d import *

def read_colmap_rec(colmap_data):
    # Unpack the zipped COLMAP reconstruction (cameras, images, points3D)
    # into a temporary directory and parse it with pycolmap.
    import io
    import tempfile
    import zipfile
    import pycolmap
    with tempfile.TemporaryDirectory() as tmpdir:
        with zipfile.ZipFile(io.BytesIO(colmap_data), "r") as zf:
            zf.extractall(tmpdir)  # unpacks cameras.txt, images.txt, etc. to tmpdir
        return pycolmap.Reconstruction(tmpdir)

ds = load_dataset("usm3d/hoho25k", streaming=True, trust_remote_code=True)
a = next(iter(ds['train']))  # take the first streamed sample

fig, ax = plot_all_modalities(a)

## Now 3d

fig3d = init_figure()
plot_reconstruction(fig3d, read_colmap_rec(a['colmap_binary']))
plot_wireframe(fig3d, a['wf_vertices'], a['wf_edges'], a['wf_classifications'])
plot_bpo_cameras_from_entry(fig3d, a)
fig3d
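
Because the split is streamed, there is no random access; to reach a specific row (for example, rows 10 and 11 shown in the previews below), you can skip ahead with itertools.islice. A small sketch:

from itertools import islice

row10 = next(islice(ds['train'], 10, 11))  # skip the first 10 streamed rows
fig, ax = plot_all_modalities(row10)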

Additional notes on data

Depth

The depth is the output of the monocular depth model Metric3Dv2, so it is by no means ground truth. Depth is stored in millimeters; to convert to meters, use

np.array(entry['depth']).astype(np.float32) / 1000.0

If you need ground-truth-quality depth, the semi-sparse depth from the COLMAP reconstructions (built with dense features and available via points3D) is quite accurate.
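
Below is a sketch of building such a semi-sparse depth map from the COLMAP points. It reuses the per-view K, R, t from the entry and assumes the same world-to-camera convention as above (x_cam = R @ X + t); the resulting depths are in the reconstruction's world units:

import numpy as np

def sparse_depth_from_colmap(rec, entry, view=0):
    # Sketch: splat the COLMAP 3D points into one view to obtain a
    # semi-sparse depth map (assumes x_cam = R @ X + t; please verify).
    # Also assumes the depth image resolution matches the intrinsics K.
    K = np.array(entry['K'][view])
    R = np.array(entry['R'][view])
    t = np.array(entry['t'][view]).reshape(3)
    w, h = entry['depth'][view].size           # PIL image size is (width, height)
    depth = np.zeros((h, w), dtype=np.float32)
    for p in rec.points3D.values():
        x_cam = R @ np.asarray(p.xyz) + t
        if x_cam[2] <= 0:                      # point behind the camera
            continue
        u, v = (K @ x_cam)[:2] / x_cam[2]
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < w and 0 <= vi < h:
            depth[vi, ui] = x_cam[2]
    return depth

# rec = read_colmap_rec(entry['colmap_binary'])
# d = sparse_depth_from_colmap(rec, entry, view=0)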

Segmentation

You have two segmentations available. gestalt comes from a domain-specific model that "sees through occlusions" and provides detailed information about house parts. See the list of classes in the "Dataset" section in the navigation bar.

ade is produced by a standard ADE20K segmentation model (specifically, shi-labs/oneformer_ade20k_swin_large).
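
A quick way to look at the per-view segmentation maps (a sketch; it assumes the gestalt classes are encoded as distinct colors, which you can confirm against the class list):

import numpy as np

gestalt0 = entry['gestalt'][0]   # PIL image for the first view
ade0 = entry['ade'][0]
print(gestalt0.size, ade0.size)

# Assumption: gestalt classes are color-coded; count the distinct colors present.
colors = np.unique(np.asarray(gestalt0.convert('RGB')).reshape(-1, 3), axis=0)
print(f"{len(colors)} distinct colors in the gestalt map")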

Organizers

Jack Langerman (Apple Inc), Dmytro Mishkin (Hover Inc / CTU in Prague), Yuzhong Huang (HOVER Inc).

Sponsors

The organizers would like to thank Hover Inc. for their sponsorship of this challenge and dataset.

@misc{S23DR_2025,
    title={S23DR Competition at 2nd Workshop on Urban Scene Modeling @ CVPR 2025},
    url={usm3d.github.io},
    howpublished={\url{https://huggingface.co/usm3d}},
    year={2025},
    author={Langerman, Jack and Mishkin, Dmytro and Huang, Yuzhong}
}

Sample Previews

Sample 10 (row index 10)

2D Visualization: All modalities

3D Visualization: Point cloud and wireframe

Sample 11 (row index 11)

2D Visualization: All modalities

3D Visualization: Point cloud and wireframe
