VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction
Abstract
Existing NeRF-based methods for large scene reconstruction often have limitations in visual quality and rendering speed. While the recent 3D Gaussian Splatting works well on small-scale and object-centric scenes, scaling it up to large scenes poses challenges due to limited video memory, long optimization time, and noticeable appearance variations.
To address these challenges, we present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting.
We propose a progressive partitioning strategy to divide a large scene into multiple cells, where the training cameras and point cloud are properly distributed with an airspace-aware visibility criterion. These cells are merged into a complete scene after parallel optimization.
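To make the partitioning idea concrete, below is a minimal Python sketch of splitting training cameras into a ground-plane grid of cells and testing whether a cell is visible enough in a given camera to assign that camera to it. The grid layout, the `min_coverage` threshold, and the bounding-box projection test are illustrative assumptions standing in for the paper's airspace-aware visibility criterion, not its exact formulation.

```python
# Sketch of camera-grid partitioning plus a visibility-based assignment test.
# Cell counts, the coverage threshold, and the bounding-box test are
# placeholder choices for illustration.
import numpy as np

def partition_cameras(cam_positions, n_cells_x, n_cells_y):
    """Split cameras into a regular 2D grid of cells on the ground plane."""
    xy = cam_positions[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    # Cell index of each camera along x and y.
    ix = np.clip(((xy[:, 0] - mins[0]) / (maxs[0] - mins[0]) * n_cells_x).astype(int), 0, n_cells_x - 1)
    iy = np.clip(((xy[:, 1] - mins[1]) / (maxs[1] - mins[1]) * n_cells_y).astype(int), 0, n_cells_y - 1)
    return ix * n_cells_y + iy  # flat cell id per camera

def visible_in_camera(cell_corners, cam_pose, intrinsics, image_wh, min_coverage=0.25):
    """Rough visibility test: project the cell's 3D bounding-box corners into the
    image and check how much of the image they cover."""
    R, t = cam_pose  # world-to-camera rotation (3x3) and translation (3,)
    pts_cam = (R @ cell_corners.T).T + t
    in_front = pts_cam[:, 2] > 0
    if not in_front.any():
        return False
    uv = (intrinsics @ pts_cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]
    # Fraction of the image spanned by the projected bounding box.
    w = np.clip(uv[:, 0].max(), 0, image_wh[0]) - np.clip(uv[:, 0].min(), 0, image_wh[0])
    h = np.clip(uv[:, 1].max(), 0, image_wh[1]) - np.clip(uv[:, 1].min(), 0, image_wh[1])
    return (w * h) / (image_wh[0] * image_wh[1]) >= min_coverage
```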
We also introduce decoupled appearance modeling into the optimization process to reduce appearance variations in the rendered images.
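The sketch below shows one way such decoupled appearance modeling can be wired up in PyTorch: a learnable per-image embedding drives a small CNN that predicts a correction of the rendered image during optimization, so lighting and exposure differences are absorbed there instead of in the 3D Gaussians; at render time the raw rendering is used. The network size, embedding dimension, and multiplicative correction are assumptions for illustration, not the paper's exact architecture.

```python
# Minimal sketch of per-image appearance decoupling during optimization.
# Architecture details here are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AppearanceModel(nn.Module):
    def __init__(self, num_images, embed_dim=32):
        super().__init__()
        # One learnable appearance embedding per training image.
        self.embeddings = nn.Embedding(num_images, embed_dim)
        # Tiny CNN: takes the downsampled rendering concatenated with the
        # broadcast embedding and outputs a low-resolution correction map.
        self.net = nn.Sequential(
            nn.Conv2d(3 + embed_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, rendered, image_idx, down=4):
        b, _, h, w = rendered.shape
        small = F.interpolate(rendered, scale_factor=1.0 / down,
                              mode="bilinear", align_corners=False)
        emb = self.embeddings(image_idx).view(b, -1, 1, 1)
        emb = emb.expand(-1, -1, small.shape[2], small.shape[3])
        correction = self.net(torch.cat([small, emb], dim=1))
        correction = F.interpolate(correction, size=(h, w),
                                   mode="bilinear", align_corners=False)
        # Apply the correction multiplicatively; the loss is computed on this
        # adjusted image, while test-time rendering skips the correction.
        return rendered * (1.0 + correction)
```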
Our approach outperforms existing NeRF-based methods and achieves SOTA results on multiple large scene datasets, enabling fast optimization and high-fidelity real-time rendering.
Figure 1: Renderings of three SOTA methods and our VastGaussian.