
PGSR: Planar-based Gaussian Splatting for Efficient and High-Fidelity Surface Reconstruction


Abstract

Recently, 3DGS has attracted widespread attention due to its high-quality rendering and ultra-fast training and rendering speed. However, owing to the unstructured and irregular nature of Gaussian point clouds, it is difficult to guarantee geometric reconstruction accuracy and multi-view consistency by relying on image reconstruction loss alone.

Although many studies on surface reconstruction based on 3DGS have emerged recently, the quality of their meshes is generally unsatisfactory.

To address this problem, we propose a fast planar-based Gaussian splatting reconstruction representation (PGSR) to achieve high-fidelity surface reconstruction while ensuring high-quality rendering.

Specifically,

we first introduce an unbiased depth rendering method, which renders the distance from the camera origin to the Gaussian plane along with the corresponding normal map from the Gaussian point-cloud distribution, and divides the two to obtain the unbiased depth.
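The division step above is a per-pixel ray-plane intersection: given a rendered distance map and normal map, depth is recovered as distance divided by the dot product of the normal with the back-projected pixel ray. A minimal sketch of this conversion, assuming the maps are given in camera coordinates with intrinsics `K` (the function name and array layout are illustrative, not the paper's code):

```python
import numpy as np

def unbiased_depth(distance_map, normal_map, K):
    """Convert alpha-blended plane distance and normal maps into per-pixel depth.

    distance_map: (H, W) camera-origin-to-plane distances
    normal_map:   (H, W, 3) unit plane normals in camera coordinates
    K:            (3, 3) camera intrinsics
    """
    H, W = distance_map.shape
    # Homogeneous pixel coordinates for every pixel.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)  # (H, W, 3)
    # Back-project pixels into (unnormalized) camera-space ray directions.
    rays = pix @ np.linalg.inv(K).T  # (H, W, 3)
    # Ray-plane intersection: depth = distance / (normal . ray).
    denom = np.einsum('hwc,hwc->hw', normal_map, rays)
    return distance_map / np.clip(np.abs(denom), 1e-8, None)
```

Because the unnormalized ray has z = 1, the result is a z-depth map; for a fronto-parallel plane the denominator is 1 everywhere and depth equals the plane distance.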

We then introduce single-view geometric, multi-view photometric, and geometric regularization to preserve global geometric accuracy.
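One common form of single-view geometric regularization is a local-planarity consistency check: a normal estimated from neighboring depth values should agree with the rendered normal map. The sketch below illustrates this idea with NumPy finite differences; it is an assumption-laden simplification, not the paper's exact loss:

```python
import numpy as np

def single_view_normal_loss(depth, normal, K_inv):
    """Illustrative single-view consistency loss: compare the rendered normal
    map against a normal estimated from local depth differences."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W, dtype=np.float64),
                       np.arange(H, dtype=np.float64))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)   # homogeneous pixels
    pts = depth[..., None] * (pix @ K_inv.T)           # back-projected 3D points
    dx = pts[:, 1:] - pts[:, :-1]                      # horizontal neighbor diff
    dy = pts[1:, :] - pts[:-1, :]                      # vertical neighbor diff
    # Cross product of the two tangents gives a depth-derived normal
    # on the shared interior grid.
    n_est = np.cross(dx[:-1], dy[:, :-1])
    n_est /= np.linalg.norm(n_est, axis=-1, keepdims=True) + 1e-12
    n_ref = normal[:-1, :-1]
    # 1 - |cos| keeps the loss insensitive to normal sign.
    return float(np.mean(1.0 - np.abs((n_est * n_ref).sum(-1))))
```

For a perfectly planar depth map whose rendered normals match, the loss is zero; disagreement between the two normal fields drives it toward one.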

We also propose a camera exposure compensation model to cope with scenes with large illumination variations.
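A typical exposure compensation model applies a per-image affine remapping (gain `a`, bias `b`) to the rendered image before the photometric loss, so global brightness shifts between views are absorbed rather than baked into the geometry. In PGSR these coefficients are optimized jointly with the scene; the sketch below instead fits them in closed form per image, purely as an illustration of the model's effect:

```python
import numpy as np

def fit_exposure(rendered, target):
    """Least-squares fit of per-image affine exposure coefficients (a, b)
    so that a * rendered + b best matches the target image."""
    x, y = rendered.ravel(), target.ravel()
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(a), float(b)

def compensated_l1(rendered, target):
    """Photometric L1 loss evaluated after exposure compensation."""
    a, b = fit_exposure(rendered, target)
    return float(np.mean(np.abs(a * rendered + b - target)))
```

If the target differs from the render only by a global gain and bias, the compensated loss vanishes, which is exactly the failure mode of a plain L1 loss under varying illumination.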

Experiments on indoor and outdoor scenes show that our method achieves fast training and rendering while maintaining high-fidelity rendering and geometric reconstruction, outperforming 3DGS-based and NeRF-based methods.



Figure 1

PGSR representation.

We present a Planar-based Gaussian Splatting Reconstruction representation for efficient and high-fidelity surface reconstruction from multi-view RGB images, without any geometric prior (depth or normal maps from a pre-trained model).

The courthouse reconstructed by our method demonstrates that PGSR can recover fine geometric details, such as the lettering on the building.

Figure 2

Unbiased depth rendering.

(a) Illustration of the rendered depth: We take a single Gaussian, flatten it into a plane, and fit it onto the surface as an example.

Our rendered depth is the intersection of rays with the surface, matching the actual surface. In contrast, the depth from previous methods corresponds to a curved surface and may deviate from the actual surface.

(b) We use true depth to supervise two different depth rendering methods. After optimization, we plot the positions of all Gaussian points.

The Gaussians produced by our method fit the actual surface well, while the previous method yields noisy points that adhere poorly to the surface.

Figure 3

Rendered Depth.

The original depth in 3DGS exhibits significant noise, while our depth is smoother and more accurate.

Figure 4

PGSR Overview.

We compress Gaussians into flat planes and render distance and normal maps, which are then transformed into unbiased depth maps.

Single-view and multi-view geometric regularization ensure high precision in global geometry.

Exposure compensation RGB loss enhances reconstruction accuracy.
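Compressing a Gaussian into a flat plane amounts to treating the axis with the smallest scale as the plane normal, with the plane passing through the Gaussian center. A minimal sketch of extracting those plane parameters from one Gaussian's rotation, scales, and center (assuming the center is expressed in camera coordinates, so the signed distance is measured from the camera origin):

```python
import numpy as np

def gaussian_plane(rotation, scales, center):
    """Treat a 3D Gaussian as a plane: the axis with the smallest scale
    serves as the plane normal, and the plane passes through the center.

    rotation: (3, 3) rotation matrix whose columns are the Gaussian's axes
    scales:   (3,) per-axis scales
    center:   (3,) Gaussian center in camera coordinates
    Returns (normal, distance) with distance = normal . center.
    """
    i = int(np.argmin(scales))
    normal = rotation[:, i]            # shortest axis = plane normal
    distance = float(normal @ center)  # signed camera-origin-to-plane distance
    return normal, distance
```

These per-Gaussian (normal, distance) pairs are what get alpha-blended into the normal and distance maps that the unbiased depth rendering divides.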

Figure 5

Rendering and mesh reconstruction results achieved by our method in various indoor and outdoor scenes.

PGSR achieves high-precision geometric reconstruction from a series of RGB images without requiring any prior knowledge.

Figure 6

Unbiased Depth.

Figure 7

Qualitative comparison on DTU dataset. PGSR produces smooth and detailed surfaces.

Figure 8

Qualitative comparison on Tanks and Temples dataset.

We visualize surface quality using a normal map generated from the reconstructed mesh.

PGSR outperforms other baseline approaches in capturing scene details, whereas baseline methods exhibit missing or noisy surfaces.

Figure 9

Multi-view photometric and geometric loss.

Figure 10

A qualitative comparison of our unbiased depth method with the previous depth method, shown via normal maps. Our overall geometric structure is smoother and more precise.

Figure 11

Virtual Reality Application.

(a) Original materials, including garden scene, excavator, and Ignatius.

(b) A Virtual Reality effect showcase synthesized from these original materials.

Limitations And Conclusion

Although our PGSR efficiently and faithfully performs geometric reconstruction, it also faces several challenges.

Firstly, we cannot perform geometric reconstruction in regions with missing or limited viewpoints, leading to incomplete or less accurate geometry. Exploring methods that use priors to improve reconstruction quality under insufficient constraints is an avenue for further investigation.

Secondly, our method does not consider scenarios involving reflective surfaces or mirrors, so reconstruction in these environments will pose challenges. Integrating with existing 3DGS work that accounts for reflective surfaces would enhance reconstruction accuracy in such scenarios.

Finally, we found that some floating artifacts remain in the scene, which affect rendering and reconstruction quality. Integrating more advanced 3DGS baselines would help further enhance overall quality.


In this paper, we propose a novel unbiased depth rendering method based on 3DGS.

With this method, we render the plane geometry parameters for each pixel, including normal, distance, and depth maps.

We then incorporate single-view and multi-view geometric regularization, and an exposure compensation model, to achieve precise global consistency in geometry.

We validate our rendering and reconstruction quality on the MipNeRF360, DTU, and TnT datasets.

The experimental results indicate that our method achieves the highest geometric reconstruction accuracy and rendering quality among current SOTA methods.
