Reconstruct 3D scenes from image and depth data captured using OpenQuestCapture.
This project provides a complete pipeline for generating 3D reconstructions using passthrough images and depth data captured on Meta Quest devices. The system supports both Open3D-based volumetric reconstruction and COLMAP-based SfM workflows.
We recommend using Miniconda or Anaconda to manage environments.
Create and activate the environment:
```bash
conda env create -f environment.yml
conda activate mq3drecon
```

If your data is in a zip file, unzip it to a directory of your choice.
Run the all-in-one script to convert images, reconstruct the scene, and export to COLMAP:
```bash
python scripts/e2e_quest_to_colmap.py \
    --project_dir path/to/your/project \
    --output_dir path/to/output/colmap_project \
    --use_colored_pointcloud
```

Alternatively, run the steps individually. First, convert the captured YUV images to RGB:

```bash
python scripts/convert_yuv_to_rgb.py \
    --project_dir path/to/your/project \
    --config config/pipeline_config.yml
```

This generates:

- `left_camera_rgb/`
- `right_camera_rgb/`
Note: After conversion, manually remove any unnecessary or corrupted images.
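If you want to automate that check, the short sketch below (not part of the pipeline) flags files that OpenCV cannot decode. The folder names follow the output listed above; adjust them if your layout differs.

```python
# Hypothetical helper (not part of this project): flag converted frames that
# OpenCV cannot decode so they can be removed before reconstruction.
from pathlib import Path

import cv2

project_dir = Path("path/to/your/project")
for folder in ("left_camera_rgb", "right_camera_rgb"):
    for image_path in sorted((project_dir / folder).glob("*")):
        if not image_path.is_file():
            continue
        if cv2.imread(str(image_path)) is None:  # None means OpenCV failed to decode the file
            print(f"Possibly corrupted: {image_path}")
```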
```bash
python scripts/reconstruct_scene.py \
    --project_dir path/to/your/project \
    --config config/pipeline_config.yml
```

This produces:
- TSDF-based voxel grid (colorless)
- Textured mesh model
Depending on your YAML config (reconstruction: section), the following additional outputs may be generated:
| Option | Output |
|---|---|
| `estimate_depth_confidences: true` | Confidence maps generated by comparing each depth frame with nearby frames |
| `optimize_depth_pose: true` | Optimized depth dataset |
| `optimize_color_pose: true` | Optimized color dataset |
| `sample_point_cloud_from_colored_mesh: true` | Colored point cloud |
| `render_color_aligned_depth: true` | Depth images aligned to RGB frames |
| `color_aligned_depth_rendering.only_use_optimized_dataset: true` | Aligned depth is rendered only for the optimized color dataset |
💡 Tip: For better color quality in your point cloud, adjust `color_optimization.interval` in the config. Lower values (e.g., `interval: 2`) use more frames for coloring and produce better results, at the cost of higher memory usage and longer processing time.
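If you prefer to toggle these options from a script instead of editing the YAML by hand, a minimal PyYAML sketch is shown below. The key names come from the table above; everything else about the file structure is an assumption, and note that rewriting the file this way discards comments.

```python
# Hedged sketch: toggle reconstruction options in pipeline_config.yml with PyYAML.
# Key names come from this README; verify the nesting against your own file, and
# note that yaml.safe_dump drops any comments from the original YAML.
from pathlib import Path

import yaml

config_path = Path("config/pipeline_config.yml")
config = yaml.safe_load(config_path.read_text())

reconstruction = config.setdefault("reconstruction", {})
reconstruction["sample_point_cloud_from_colored_mesh"] = True
reconstruction["render_color_aligned_depth"] = True

config_path.write_text(yaml.safe_dump(config, sort_keys=False))
```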
```bash
python scripts/build_colmap_project.py \
    --project_dir path/to/your/project \
    --output_dir path/to/output/colmap_project \
    --use_colored_pointcloud \
    --use_optimized_color_dataset \
    --interval 1
```

Options:

- `--use_colored_pointcloud`: Include the colored point cloud if available.
- `--use_optimized_color_dataset`: Use the optimized color dataset.
- `--interval`: Export every N-th frame.
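As a quick sanity check on the export, the hedged sketch below counts the registered images, assuming the exporter writes a standard COLMAP text model (`cameras.txt`, `images.txt`, `points3D.txt`) somewhere under the output directory:

```python
# Hedged sketch: count registered images in an exported COLMAP text model.
# Assumes a standard images.txt (comment header lines, then two lines per image);
# the actual layout under output_dir may differ.
from pathlib import Path

output_dir = Path("path/to/output/colmap_project")
images_txt = next(output_dir.rglob("images.txt"), None)
if images_txt is None:
    print("No images.txt found - check the export layout (it may be a binary model).")
else:
    lines = [l for l in images_txt.read_text().splitlines() if l.strip() and not l.startswith("#")]
    print(f"{images_txt}: {len(lines) // 2} registered images")
```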
```bash
python scripts/convert_depth_to_linear_map.py \
    --project_dir path/to/your/project \
    --config config/pipeline_config.yml
```

This step is standalone and not required by the other scripts.
You can write your own scripts by importing the unified DataIO interface:
```python
from dataio.data_io import DataIO
from models.side import Side
from models.transforms import CoordinateSystem

data_io = DataIO(project_dir="path/to/your/project")

# Load depth maps
dataset = data_io.depth.load_depth_dataset(Side.LEFT)
depth_map = data_io.depth.load_depth_map_by_index(Side.LEFT, dataset, index=0)

# Load RGB frames
color_dataset = data_io.color.load_color_dataset(Side.LEFT)
timestamp = color_dataset.timestamps[0]
rgb = data_io.color.load_rgb(Side.LEFT, timestamp)

# Convert camera poses to the Open3D coordinate convention
color_dataset.transforms = color_dataset.transforms.convert_coordinate_system(
    target_coordinate_system=CoordinateSystem.OPEN3D,
    is_camera=True,
)
```

Explore:

- `scripts/dataio/` for loadable datasets
- `scripts/models/` for internal data structures
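As a slightly longer example, the sketch below back-projects one loaded depth map into an Open3D point cloud. The `DataIO` calls follow the snippet above; the intrinsics are placeholder values, and treating the depth map as a float array in meters is an assumption, not something the project guarantees.

```python
# Hedged sketch: turn one depth map from DataIO into an Open3D point cloud.
# Intrinsics below are placeholders; replace them with your device's calibration.
import numpy as np
import open3d as o3d

from dataio.data_io import DataIO
from models.side import Side

data_io = DataIO(project_dir="path/to/your/project")
dataset = data_io.depth.load_depth_dataset(Side.LEFT)
depth_map = data_io.depth.load_depth_map_by_index(Side.LEFT, dataset, index=0)

# Assumption: depth_map is a float array of per-pixel depth in meters.
depth_array = np.asarray(depth_map, dtype=np.float32)
height, width = depth_array.shape[:2]
depth_image = o3d.geometry.Image(depth_array)

# Placeholder pinhole intrinsics: width, height, fx, fy, cx, cy.
intrinsic = o3d.camera.PinholeCameraIntrinsic(width, height, 250.0, 250.0, width / 2.0, height / 2.0)

point_cloud = o3d.geometry.PointCloud.create_from_depth_image(
    depth_image, intrinsic, depth_scale=1.0, depth_trunc=5.0
)
o3d.io.write_point_cloud("depth_frame_0.ply", point_cloud)
```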
If your capture has bright windows and dark interiors (causing blown-out highlights and underexposed shadows), you can apply CLAHE tone mapping to improve color quality.
Edit `config/pipeline_config.yml`:

```yaml
yuv_to_rgb:
  tone_mapping: true            # Enable tone mapping
  tone_mapping_method: "clahe"  # Options: "clahe", "gamma", "clahe+gamma"
  clahe_clip_limit: 2.0         # Contrast enhancement (1.0-4.0, higher = more contrast)
  clahe_tile_grid_size: 6       # Grid size for local adaptation (4-16, smaller = more aggressive)
  gamma_correction: 1.2         # Brightness adjustment (>1 brightens)
```

Then run the pipeline normally; tone mapping is applied automatically during YUV→RGB conversion.
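For context on what these settings do, here is a minimal standalone sketch of CLAHE plus gamma tone mapping with OpenCV. It illustrates the technique with the parameter values from the config above; it is not the project's implementation.

```python
# Minimal CLAHE + gamma tone-mapping sketch with OpenCV - an illustration of the
# technique, not the project's implementation.
import cv2
import numpy as np

image = cv2.imread("path/to/frame.png")  # e.g., a frame from left_camera_rgb/ (BGR)

# CLAHE on the lightness channel only, so colors are not shifted.
lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB)
l_channel, a_channel, b_channel = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(6, 6))  # mirrors clahe_clip_limit / clahe_tile_grid_size
l_channel = clahe.apply(l_channel)
result = cv2.cvtColor(cv2.merge((l_channel, a_channel, b_channel)), cv2.COLOR_LAB2BGR)

# Optional gamma step: gamma > 1 brightens when applied as x ** (1 / gamma).
gamma = 1.2
result = np.clip(((result / 255.0) ** (1.0 / gamma)) * 255.0, 0, 255).astype(np.uint8)

cv2.imwrite("frame_tonemapped.png", result)
```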
If you already have RGB images and want to improve them:
```bash
python scripts/apply_tone_mapping.py \
    --project_dir path/to/your/project \
    --method clahe \
    --clip_limit 2.5 \
    --tile_grid_size 6 \
    --backup
```

Then re-run color optimization only:

```bash
python scripts/e2e_quest_to_colmap.py \
    --project_dir path/to/your/project \
    --output_dir path/to/output \
    --skip_yuv_conversion \
    --skip_reconstruction \
    --use_colored_pointcloud
```

Parameter tuning:

- `clip_limit`: Higher values (2.5-3.5) give more aggressive contrast enhancement. Use for very bright windows.
- `tile_grid_size`: Smaller values (4-6) give more local adaptation for large bright/dark regions. Larger values (12-16) give a smoother, more subtle effect.
- `method`:
  - `clahe`: Best for mixed lighting (recommended).
  - `gamma`: Simple global brightening.
  - `clahe+gamma`: Maximum detail recovery for extreme cases.
Note: Tone-mapped images work great for both point cloud coloring and Gaussian Splatting training!
```
your_project/
├── left_camera_rgb/
├── right_camera_rgb/
├── reconstruction/
│   ├── tsdf/
│   ├── mesh/
│   ├── point_cloud/
│   └── aligned_depth/
├── colmap_export/
└── config/
    └── pipeline_config.yml
```
This project includes components from COLMAP, licensed under the 3-clause BSD License. See scripts/third_party/colmap/COPYING.txt for details.
This project is licensed under the MIT License. See the LICENSE file for full text.