OpenQuestCapture 3D Reconstruction

Reconstruct 3D scenes from image and depth data captured using OpenQuestCapture.


🧭 Overview

This project provides a complete pipeline for generating 3D reconstructions using passthrough images and depth data captured on Meta Quest devices. The system supports both Open3D-based volumetric reconstruction and COLMAP-based SfM workflows.
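The volumetric side of the pipeline relies on TSDF (truncated signed distance function) fusion, which at its core keeps a running weighted average of truncated signed distances per voxel. The sketch below is a minimal 1-D numpy illustration of that update rule, not the project's actual Open3D implementation:

```python
import numpy as np

def integrate(tsdf, weight, sdf_obs, trunc=0.05, max_weight=50.0):
    """One TSDF integration step: fold a new signed-distance observation
    into each voxel as a running weighted average."""
    d = np.clip(sdf_obs / trunc, -1.0, 1.0)          # truncate and normalize the SDF
    new_tsdf = (tsdf * weight + d) / (weight + 1.0)  # weighted running average
    new_weight = np.minimum(weight + 1.0, max_weight)
    return new_tsdf, new_weight

# Toy 1-D "volume": the same surface observed twice with slightly noisy depth
tsdf = np.zeros(5)
weight = np.zeros(5)
for obs in ([0.04, 0.02, 0.0, -0.02, -0.04],
            [0.06, 0.02, 0.01, -0.03, -0.05]):
    tsdf, weight = integrate(tsdf, weight, np.array(obs))
```

Averaging truncated distances this way is what lets noisy per-frame depth converge to a clean implicit surface, which is then meshed (e.g., via marching cubes) to produce the textured model.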


🚀 Setup

Environment Setup (with conda)

We recommend using Miniconda or Anaconda to manage environments.

Create and activate the environment:

conda env create -f environment.yml
conda activate mq3drecon

🔧 Processing Pipeline

Step 0: Unzip the data

If your data is in a zip file, unzip it to a directory of your choice.

Option 1: End-to-End Pipeline (Recommended)

Run the all-in-one script to convert images, reconstruct the scene, and export to COLMAP:

python scripts/e2e_quest_to_colmap.py \
  --project_dir path/to/your/project \
  --output_dir path/to/output/colmap_project \
  --use_colored_pointcloud

Option 2: Manual Pipeline

Step 1: Convert Passthrough Images to RGB

python scripts/convert_yuv_to_rgb.py \
  --project_dir path/to/your/project \
  --config config/pipeline_config.yml

This generates:

  • left_camera_rgb/
  • right_camera_rgb/

Note: After conversion, manually remove any unnecessary or corrupted images.


Step 2: Reconstruct 3D Scene

python scripts/reconstruct_scene.py \
  --project_dir path/to/your/project \
  --config config/pipeline_config.yml

This produces:

  • TSDF-based voxel grid (colorless)
  • Textured mesh model

Depending on your YAML config (`reconstruction:` section), the following additional outputs may be generated:

| Option | Output |
| --- | --- |
| `estimate_depth_confidences: true` | Confidence maps generated by comparing each depth frame with nearby frames |
| `optimize_depth_pose: true` | Optimized depth dataset |
| `optimize_color_pose: true` | Optimized color dataset |
| `sample_point_cloud_from_colored_mesh: true` | Colored point cloud |
| `render_color_aligned_depth: true` | Depth images aligned to RGB frames |
| `color_aligned_depth_rendering.only_use_optimized_dataset: true` | Aligned depth rendered only for the optimized color dataset |

💡 Tip: For better color quality in your point cloud, adjust color_optimization.interval in the config. Lower values (e.g., interval: 2) use more frames for coloring and produce better results, at the cost of higher memory usage and longer processing time.
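Combining the options above, a `reconstruction:` section with every optional output enabled might look like the sketch below. The exact key nesting is an assumption inferred from the option names; treat the config/pipeline_config.yml shipped with the repository as authoritative:

```yaml
reconstruction:
  estimate_depth_confidences: true
  optimize_depth_pose: true
  optimize_color_pose: true
  sample_point_cloud_from_colored_mesh: true
  render_color_aligned_depth: true
  color_aligned_depth_rendering:
    only_use_optimized_dataset: true
  color_optimization:
    interval: 2   # lower = more frames used for coloring (better color, more memory)
```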


Step 3: Export COLMAP Project (Optional)

python scripts/build_colmap_project.py \
  --project_dir path/to/your/project \
  --output_dir path/to/output/colmap_project \
  --use_colored_pointcloud \
  --use_optimized_color_dataset \
  --interval 1

Options:

  • --use_colored_pointcloud: Include colored point cloud if available.
  • --use_optimized_color_dataset: Use optimized color dataset.
  • --interval: Export every N-th frame.

[Optional] Convert Raw Depth to Linear Depth Map

python scripts/convert_depth_to_linear_map.py \
  --project_dir path/to/your/project \
  --config config/pipeline_config.yml

This step is standalone and not required for other scripts.


🛠️ Custom Data Processing

You can write your own scripts by importing the unified DataIO interface:

from dataio.data_io import DataIO
from models.side import Side
from models.transforms import CoordinateSystem


data_io = DataIO(project_dir=args.project_dir)

# Load depth maps
dataset = data_io.depth.load_depth_dataset(Side.LEFT)
depth_map = data_io.depth.load_depth_map_by_index(Side.LEFT, dataset, index=0)

# Load RGB frames
color_dataset = data_io.color.load_color_dataset(Side.LEFT)
timestamp = color_dataset.timestamps[0]
rgb = data_io.color.load_rgb(Side.LEFT, timestamp)

# Convert camera poses into Open3D's coordinate convention
color_dataset.transforms = color_dataset.transforms.convert_coordinate_system(
    target_coordinate_system=CoordinateSystem.OPEN3D,
    is_camera=True
)
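Once a depth map is loaded this way, back-projecting it into a 3-D point cloud only needs pinhole intrinsics. The numpy sketch below is self-contained for illustration: the fx/fy/cx/cy values and the synthetic depth array are placeholders, not the Quest's actual calibration or the DataIO output:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into an (H*W, 3) point cloud
    using the standard pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Synthetic 2x2 depth map at 1 m, principal point at the image center
pts = depth_to_points(np.ones((2, 2)), fx=100.0, fy=100.0, cx=0.5, cy=0.5)
```

In a real script you would substitute the depth map returned by load_depth_map_by_index and the intrinsics stored with the dataset.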

Explore:

  • scripts/dataio/ for loadable datasets
  • scripts/models/ for internal data structures

🎨 Tone Mapping for High Dynamic Range Scenes

If your capture has bright windows and dark interiors (causing blown-out highlights and underexposed shadows), you can apply CLAHE tone mapping to improve color quality.
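As a rough illustration of the gamma method (the simplest of the three), here is a self-contained numpy sketch; CLAHE adds per-tile histogram equalization on top of this, typically via OpenCV's cv2.createCLAHE, and neither is the project's exact implementation:

```python
import numpy as np

def gamma_correct(img, gamma=1.2):
    """Apply global gamma correction to a uint8 image via a lookup table.
    gamma > 1 brightens, matching the gamma_correction config option."""
    lut = (np.power(np.arange(256) / 255.0, 1.0 / gamma) * 255.0).astype(np.uint8)
    return lut[img]

# A uniformly dark patch gets lifted toward mid-gray
dark = np.full((2, 2), 64, dtype=np.uint8)
bright = gamma_correct(dark, gamma=1.2)
```

The lookup-table form is the usual way to apply gamma to 8-bit images, since it costs one table build plus an index per pixel.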

Enable During Pipeline (Recommended for New Captures)

Edit config/pipeline_config.yml:

yuv_to_rgb:
  tone_mapping: true              # Enable tone mapping
  tone_mapping_method: "clahe"    # Options: "clahe", "gamma", "clahe+gamma"
  clahe_clip_limit: 2.0           # Contrast enhancement (1.0-4.0, higher = more contrast)
  clahe_tile_grid_size: 6         # Grid size for local adaptation (4-16, smaller = more aggressive)
  gamma_correction: 1.2           # Brightness adjustment (>1 brightens)

Then run the pipeline normally; tone mapping is applied automatically during YUV→RGB conversion.

Apply to Existing RGB Images (After Processing)

If you already have RGB images and want to improve them:

python scripts/apply_tone_mapping.py \
  --project_dir path/to/your/project \
  --method clahe \
  --clip_limit 2.5 \
  --tile_grid_size 6 \
  --backup

Then re-run color optimization only:

python scripts/e2e_quest_to_colmap.py \
  --project_dir path/to/your/project \
  --output_dir path/to/output \
  --skip_yuv_conversion \
  --skip_reconstruction \
  --use_colored_pointcloud

Parameter Tuning

  • clip_limit: Higher values (2.5-3.5) = more aggressive contrast enhancement. Use for very bright windows.
  • tile_grid_size: Smaller values (4-6) = more local adaptation for large bright/dark regions. Larger values (12-16) = smoother, more subtle effect.
  • method:
    • clahe - Best for mixed lighting (recommended)
    • gamma - Simple global brightening
    • clahe+gamma - Maximum detail recovery for extreme cases

Note: Tone-mapped images work well for both point cloud coloring and Gaussian Splatting training.


πŸ“ Directory Structure (after full pipeline)

your_project/
├── left_camera_rgb/
├── right_camera_rgb/
├── reconstruction/
│   ├── tsdf/
│   ├── mesh/
│   ├── point_cloud/
│   └── aligned_depth/
├── colmap_export/
└── config/
    └── pipeline_config.yml

🧩 Third-Party Code

This project includes components from COLMAP, licensed under the 3-clause BSD License. See scripts/third_party/colmap/COPYING.txt for details.


πŸ“ License

This project is licensed under the MIT License. See the LICENSE file for full text.


About

3D reconstruction from Meta Quest passthrough and depth via Reality Capture app, with Open3D/TSDF, COLMAP/Nerfstudio export, and Gaussian Splatting prep.
