---
license: cc-by-4.0
---

Dataset Name: DyOmni

This dataset was introduced in the TVCG publication OmniPlane; for the project implementation, please see our GitHub page.

How to Download

git clone git@hf.co:datasets/SiminKou/DyOmni.git

Overview

This dataset contains 16 real-world omnidirectional videos captured with a handheld camera. The videos cover scenarios in which the videographer is present in or absent from a large area, along with other dynamic entities in both nearby and distant locations. Some videos also include a second videographer rather than just one. All videos were recorded with a casually moving 360° camera to closely mimic real-world user capture conditions and to facilitate the extraction of camera parameters. Each video contains between 80 and 100 frames. We used the Structure-from-Motion (SfM) library OpenMVG to estimate spherical camera parameters from all videos in the equirectangular projection (ERP) format.

Dataset Structure

DyOmni/
├── Ayutthaya/
│   ├── erp_imgs/
│   │   ├── 1.png
│   │   ├── 2.png
│   │   └── ...
│   ├── output_dir/
│   │   └── colmap/
│   │       ├── cameras.txt
│   │       ├── images.txt
│   │       └── points3D.txt
│   ├── test.txt
│   ├── time.txt
│   └── train.txt
├── Basketball/
│   ├── erp_imgs/
│   ├── output_dir/
│   ├── test.txt
│   ├── time.txt
│   └── train.txt
└── ...
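The files under output_dir/colmap/ appear to follow COLMAP's text export layout. Below is a minimal sketch of reading per-frame camera poses from images.txt, assuming the standard COLMAP text convention (comment lines start with #, and each image occupies two lines: a header with IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME, followed by a line of 2D points); check your files and adjust if the layout differs.

```python
from pathlib import Path


def read_images_txt(path):
    """Parse camera poses from a COLMAP-style images.txt.

    Assumes the standard COLMAP text format: lines starting with '#'
    are comments, and each image uses two lines -- a header line
    (IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME) followed by a
    line of 2D point observations (which may be empty).
    """
    # Drop comment lines but keep (possibly empty) 2D-point lines,
    # so the two-lines-per-image pairing stays intact.
    data_lines = [l for l in Path(path).read_text().splitlines()
                  if not l.lstrip().startswith("#")]
    poses = {}
    for i in range(0, len(data_lines) - 1, 2):  # every other line is a header
        parts = data_lines[i].split()
        name = parts[9]
        qvec = tuple(map(float, parts[1:5]))  # rotation quaternion (w, x, y, z)
        tvec = tuple(map(float, parts[5:8]))  # translation
        poses[name] = {"qvec": qvec, "tvec": tvec}
    return poses
```

For example, `read_images_txt("Ayutthaya/output_dir/colmap/images.txt")` would return a dict mapping each frame name in erp_imgs/ to its quaternion rotation and translation.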

Note: Both train.txt and test.txt include all frames, since the goal is to evaluate the model's complete reconstruction performance rather than its novel view synthesis capacity. That said, our model does have some ability to generalize to novel views, so it also supports holding out test frames from the full set of video frames. If you require a complete separation between training and testing images, simply adjust the numbers in train.txt and test.txt; these two files directly control which images from a scene are used for training and testing. For example, if you want to alternate frames, using one for training and the next for testing, train.txt would list frames 1, 3, 5, ..., 99, while test.txt would list 2, 4, 6, ..., 100. An example of such train/test splits can be found in the example folder of this repository.
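An alternating split like the one described above can be generated programmatically. This is a minimal sketch, not the project's own tooling; the frame count of 100 and the one-frame-number-per-line file layout are assumptions, so match them to your scene before use.

```python
from pathlib import Path


def write_alternating_split(scene_dir, num_frames=100):
    """Write train.txt/test.txt with alternating frames.

    Assumes frames are numbered 1..num_frames and that each split file
    lists one frame number per line (an assumption -- adjust the
    formatting to match the files shipped with the dataset).
    """
    frames = range(1, num_frames + 1)
    train = [i for i in frames if i % 2 == 1]  # odd frames: 1, 3, ..., 99
    test = [i for i in frames if i % 2 == 0]   # even frames: 2, 4, ..., 100
    scene = Path(scene_dir)
    (scene / "train.txt").write_text("\n".join(map(str, train)) + "\n")
    (scene / "test.txt").write_text("\n".join(map(str, test)) + "\n")
    return train, test
```

Running `write_alternating_split("Ayutthaya")` would overwrite that scene's split files with the odd/even partition, so back up the originals if you want to restore the all-frames evaluation setting.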