
RefGaussian: Disentangling Reflections from 3D Gaussian Splatting for Realistic Rendering


Abstract

3D Gaussian Splatting (3DGS) has achieved significant progress in neural rendering, 3D scene reconstruction, and novel view synthesis. Despite these advances, 3DGS still struggles to model physical reflections accurately; this issue is particularly evident for the total and partial reflections commonly observed in real-world scenes.

This limitation causes reflections to be mistakenly treated as independent physical entities, resulting in inaccurate reconstructions.

This paper addresses the challenge of separating reflections from 3DGS by introducing RefGaussian, which enables realistic reflection modeling.

Specifically, we decompose the scene into a transmitted component and a reflected component, each represented by its own set of spherical harmonics (SH).

Because this decomposition is under-determined, we employ several local regularization techniques to enforce the local smoothness of both the transmitted and reflected components, yielding more plausible decompositions than 3DGS.

Experiments show that our approach achieves strong novel view synthesis and accurate depth estimation. Moreover, it enables scene-editing applications with high visual quality and physical consistency.

Figures

Figure 1

Novel view synthesis with RefGaussian, which models reflections more accurately than the original 3DGS.

Figure 2

Overview of the proposed RefGaussian framework.

The RefGaussian framework captures general reflections in a scene by decomposing it into two primary components: a transmitted component and a reflected component.

RefGaussian achieves this without requiring additional 3D Gaussians, which also makes rendering more efficient.

By incorporating bilateral smoothness and reflection map smoothness constraints, the framework achieves effective scene decomposition.

Moreover, depth variations and color differences are jointly taken into account and optimized to further improve overall rendering quality.
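As a rough illustration of what such smoothness terms can look like, here is a minimal PyTorch sketch; the edge-aware formulation guided by the input image and the plain total-variation penalty on the reflection map are our own illustrative assumptions, not the paper's exact losses.

```python
import torch

def bilateral_smoothness(component: torch.Tensor, guide: torch.Tensor) -> torch.Tensor:
    """Edge-aware smoothness: penalize gradients of `component` except
    where the guide image itself has strong edges.

    component: (C, H, W) rendered map to keep locally smooth
               (e.g. the transmitted image).
    guide:     (3, H, W) reference image whose edges should be preserved.
    """
    dx = (component[:, :, 1:] - component[:, :, :-1]).abs()
    dy = (component[:, 1:, :] - component[:, :-1, :]).abs()
    # Down-weight the penalty wherever the guide has large gradients.
    wx = torch.exp(-(guide[:, :, 1:] - guide[:, :, :-1]).abs().mean(0, keepdim=True))
    wy = torch.exp(-(guide[:, 1:, :] - guide[:, :-1, :]).abs().mean(0, keepdim=True))
    return (dx * wx).mean() + (dy * wy).mean()


def reflection_map_smoothness(refl_map: torch.Tensor) -> torch.Tensor:
    """Plain total-variation penalty on the (1, H, W) reflection fraction map."""
    dx = (refl_map[:, :, 1:] - refl_map[:, :, :-1]).abs()
    dy = (refl_map[:, 1:, :] - refl_map[:, :-1, :]).abs()
    return dx.mean() + dy.mean()
```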

Figure 3

Visual comparisons between NeRF, NeRF-D, NeRFReN, 3D-GS, and our method.

Our method produces more complete and faithful renderings than 3DGS across all scenes.

Compared with NeRF-based approaches, our method matches their performance on semi-reflective surfaces and outperforms them on highly reflective surfaces.

Figure 4

Detailed visual comparisons between 3DGS and our method.

To comprehensively compare the performance of the different methods, we select several representative scenes for experimental evaluation.

Figure 5

Illustration of scene disentanglement.

The final rendered image combines the transmitted component with the reflected component weighted by the reflection fraction map; together, these elements yield an accurate representation of the scene.
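A minimal sketch of this composition step is given below, assuming the transmitted image, reflected image, and reflection fraction map have already been rasterized; the tensor names and the simple fraction-weighted blend are illustrative assumptions rather than the paper's exact formulation.

```python
import torch

def compose_final_image(transmitted: torch.Tensor,
                        reflected: torch.Tensor,
                        reflection_fraction: torch.Tensor) -> torch.Tensor:
    """Blend the two rendered components into the final image.

    transmitted:         (3, H, W) image rendered from the transmitted SH.
    reflected:           (3, H, W) image rendered from the reflection SH.
    reflection_fraction: (1, H, W) per-pixel weight in [0, 1] rasterized
                         from the per-Gaussian reflection confidence.
    """
    return transmitted + reflection_fraction * reflected


# Toy usage with random tensors standing in for rasterized outputs.
H, W = 4, 4
final = compose_final_image(torch.rand(3, H, W),
                            torch.rand(3, H, W),
                            torch.rand(1, H, W))
print(final.shape)  # torch.Size([3, 4, 4])
```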

Figure 6

Reflection Manipulation.

By adjusting the lighting coefficients of the reflection map, we can precisely tune the brightness of the reflected content to arbitrary levels.
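For illustration, one plausible way to realize such brightness tuning is to apply a global gain to the reflected content before re-compositing; the sketch below assumes this simple scaling and is not the paper's exact filtering scheme.

```python
import torch

def recompose_with_gain(transmitted: torch.Tensor,
                        reflected: torch.Tensor,
                        reflection_fraction: torch.Tensor,
                        gain: float) -> torch.Tensor:
    """Re-render the view with the reflected content scaled by `gain`.

    gain = 0 removes reflections, gain < 1 dims them, gain > 1 brightens
    them; the transmitted content is left untouched.
    """
    adjusted = (gain * reflected).clamp(0.0, 1.0)  # keep colors in range
    return (transmitted + reflection_fraction * adjusted).clamp(0.0, 1.0)


# Example: the same view with reflections dimmed to 30% of their brightness.
H, W = 8, 8
dimmed = recompose_with_gain(torch.rand(3, H, W), torch.rand(3, H, W),
                             torch.rand(1, H, W), gain=0.3)
```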

Figure 7

Effectiveness of Model Design.

Combining these three design components yields the highest visual quality.

Limitations and Future Work

Because commonly observed real-world reflectors such as mirrors, windows, and screens have highly smooth structures, the proposed bilateral smoothness and reflection map constraints assume that scene reflectors are flat surfaces rather than highly irregular surfaces with pronounced undulations or cusps. Consequently, the applicability of our method to curved reflectors remains unexplored.

In addition, reflection manipulation in RefGaussian is performed pixel-wise by filtering the reflection map, which limits it to coarse, global adjustments. Future work will explore more flexible and fine-grained manipulation.

Conclusions

In this study, we introduce RefGaussian, a new 3DGS-based framework designed for realistic rendering of scenes with strong reflections.

RefGaussian disentangles a scene into transmitted and reflected components and fuses their rendered images to produce the final result.

In particular, the Gaussian representation is augmented with three additional parameters, namely reflection SH, reflection opacity, and reflection confidence, to support this disentangled scheme.
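For concreteness, the augmented per-Gaussian attribute set might be organized as in the sketch below; the field names and tensor shapes are our own assumptions and do not mirror the authors' implementation.

```python
from dataclasses import dataclass
import torch


@dataclass
class ReflGaussianParams:
    """Per-Gaussian attributes for N Gaussians (shapes are illustrative)."""
    # Standard 3DGS attributes.
    means: torch.Tensor        # (N, 3) positions
    scales: torch.Tensor       # (N, 3) anisotropic scales
    rotations: torch.Tensor    # (N, 4) quaternions
    opacity: torch.Tensor      # (N, 1)
    sh: torch.Tensor           # (N, K, 3) SH coefficients for transmitted color
    # Additional attributes for the disentangled reflection branch.
    reflection_sh: torch.Tensor          # (N, K, 3) SH coefficients for reflected color
    reflection_opacity: torch.Tensor     # (N, 1)
    reflection_confidence: torch.Tensor  # (N, 1), rasterized into the reflection fraction map
```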

Additionally, we introduce bilateral smoothness and reflection map smoothness constraints to suppress mutual interference between the two components, ensuring both a proper decomposition and improved performance.

Our method outperforms existing NeRF- and 3DGS-based approaches in scenes with strong reflections, while delivering comparable performance in more general cases. Finally, the ability to manipulate reflections highlights the potential of our method for a broader range of applications.
