
Gaussian Splashing: Dynamic Fluid Synthesis with Gaussian Splatting


Abstract

We demonstrate the feasibility of integrating physics-based animations of solids and fluids with 3D Gaussian Splatting (3DGS) to create novel effects in virtual scenes reconstructed using 3DGS. Leveraging the coherence of Gaussian splatting and position-based dynamics (PBD) in the underlying representation, we manage rendering, view synthesis, and the dynamics of solids and fluids in a cohesive manner.

Similar to Gaussian shader, we enhance each Gaussian kernel with an added normal, aligning the kernel's orientation with the surface normal to refine the PBD simulation.

This approach effectively eliminates the spiky noise that arises from rotational deformation in solids. It also allows us to integrate physically based rendering to augment the dynamic surface reflections on fluids.

Consequently, the framework is capable of realistically reproducing surface highlights on dynamic fluids and facilitating interactions between scene objects and fluids from novel views.
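As a rough illustration of the normal alignment mentioned above, the sketch below rotates a Gaussian kernel so that its shortest axis matches an estimated surface normal. It is a minimal Python/NumPy sketch with an assumed interface (rotation matrix plus per-axis scales), not the paper's code.

```python
import numpy as np

def align_kernel_to_normal(R, scales, normal):
    """Rotate a Gaussian kernel so its shortest axis matches the surface normal.

    R      : (3, 3) rotation matrix; columns are the kernel's local axes.
    scales : (3,) per-axis standard deviations of the kernel.
    normal : (3,) estimated surface normal at the kernel center.
    (Hypothetical helper for illustration; not the paper's implementation.)
    """
    a = R[:, int(np.argmin(scales))]            # current shortest-axis direction
    n = normal / np.linalg.norm(normal)
    v, c = np.cross(a, n), float(a @ n)
    if np.linalg.norm(v) < 1e-8:                # already parallel or anti-parallel
        if c > 0.0:
            return R
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return (2.0 * np.outer(axis, axis) - np.eye(3)) @ R   # 180-degree flip
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    R_align = np.eye(3) + vx + vx @ vx / (1.0 + c)            # Rodrigues' formula
    return R_align @ R
```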



Figure 1

Gaussian Splashing (GSP) is a unified framework combining position-based dynamics and 3D Gaussian Splatting.

By leveraging their coherent point-based representations, GSP delivers high-quality rendering for novel dynamic views involving interacting solids and fluids.

GSP enables a variety of interesting effects and new human-computer interaction modalities that are not available with existing NeRF/3DGS-based systems.

The teaser figure showcases the interaction between a LEGO excavator and the splashing waves.

There are 334,815 solid Gaussian kernels and 280,000 fluid Gaussian kernels.

Those volumetric Gaussians not only capture the nonlinear dynamics of two-way coupled fluids and solids but can also be rasterized to realistically render with both diffuse and specular shading.

GSP re-engineers several SOTA techniques, including neural surface reconstruction, a specular-aware Gaussian shader, position-based tension, and AI inpainting, to ensure the quality of both simulation and rendering with 3DGS.

Figure 2

An overview of the GSP pipeline.

The input to our system comprises multi-view images that capture a 3D scene.

During the preprocessing stage, foreground objects are isolated and reconstructed.

This is followed by point sampling to facilitate scene discretization for PBD simulation and Gaussian rendering.

We train the Gaussian kernels using differentiable 3DGS, taking into account appearance, materials, and lighting conditions.

These kernels are animated using PBD, in conjunction with fluid particles, to tackle the dynamics of both solids and fluids within the scene.
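The PBD update driving this animation follows the usual predict-project-correct pattern. The following is a minimal sketch of a generic PBD step, not the GSP implementation; the constraint projection (fluid density constraints, solid shape matching, etc.) is assumed to be supplied by the caller.

```python
import numpy as np

def pbd_step(x, v, dt, project_constraints,
             gravity=np.array([0.0, -9.8, 0.0]), iterations=4):
    """One generic position-based dynamics step (illustrative sketch).

    x, v                : (N, 3) particle positions and velocities.
    project_constraints : callable mapping predicted positions to corrected
                          positions (assumed to be provided by the caller).
    """
    p = x + dt * v + dt * dt * gravity          # predict positions under gravity
    for _ in range(iterations):                 # iteratively satisfy constraints
        p = project_constraints(p)
    v_new = (p - x) / dt                        # recover velocities from corrections
    return p, v_new
```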

Finally, the dynamic scene is rendered into images. This rendering process includes detailed modeling of specular reflections, thereby providing visually accurate representations of the simulated interactions between solids and fluids.

Figure 3

We compare the results of different sampling strategies.

(a) Particles are filled based on a density grid computed from 3DGS.

(b) Particles are uniformly sampled within the NeuS reconstruction.

The point distribution generated by 3DGS is uneven and barely samples the legs or seat of the chair.
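A simple way to realize the uniform interior sampling of strategy (b) is rejection sampling against the signed distance of the reconstructed surface. The sketch below assumes an SDF query is available from a NeuS-style reconstruction; the interface is illustrative, not the paper's.

```python
import numpy as np

def sample_interior(sdf, bbox_min, bbox_max, n_points, batch=65536, seed=0):
    """Uniform rejection sampling inside a reconstructed surface (sketch).

    sdf : callable mapping (M, 3) points to signed distances (negative inside),
          e.g. queried from an implicit reconstruction -- an assumed interface.
    """
    rng = np.random.default_rng(seed)
    samples, total = [], 0
    while total < n_points:
        pts = rng.uniform(bbox_min, bbox_max, size=(batch, 3))
        inside = pts[sdf(pts) < 0.0]            # keep points with negative SDF
        samples.append(inside)
        total += len(inside)
    return np.concatenate(samples)[:n_points]
```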

Figure 4

Detection of surface particles.

An interior particle is detected if its screen is largely shadowed by its neighbors.

A boundary particle is detected if at least one part of the particle's screen is not shadowed.
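One way to approximate this shadowed-screen test is a direction-coverage check: sample directions around the particle and mark those blocked by the cones subtended by neighbors. The sketch below is an illustrative approximation, not the paper's exact criterion; the neighbor radius and direction count are assumptions.

```python
import numpy as np

def is_surface_particle(center, neighbors, radius, n_dirs=256, seed=0):
    """Approximate surface-particle detection via direction coverage (sketch)."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)    # random unit directions

    covered = np.zeros(n_dirs, dtype=bool)
    for q in neighbors:
        d = q - center
        dist = np.linalg.norm(d)
        if dist < 1e-8:
            continue
        # Half-angle of the cone a neighbor sphere of size `radius` subtends.
        half_angle = np.arcsin(min(1.0, radius / dist))
        covered |= dirs @ (d / dist) > np.cos(half_angle)
    return not covered.all()        # any uncovered direction -> boundary particle
```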

Figure 5

GSP synthesizes high-quality images corresponding to dynamically interacting fluids and solids.

(a) The final rendered image combining rendered solids and fluids;

(b) The rendering result of deforming solids;

(c) The fluid thickness obtained by additive splatting, where darker color indicates greater thickness;

(d) The rendered dynamic fluids, which are not occluded by solids.
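Thickness by additive splatting can be sketched as accumulating a Gaussian footprint per projected fluid particle into a screen-space buffer. In the sketch below, the particle projection and the footprint radius are assumed inputs; it is not the paper's renderer.

```python
import numpy as np

def splat_thickness(uv, footprint_radius, h, w):
    """Accumulate fluid thickness by additive splatting in screen space (sketch).

    uv : (N, 2) pixel coordinates of projected fluid particles (assumed given).
    """
    thickness = np.zeros((h, w), dtype=np.float32)
    r = int(np.ceil(3 * footprint_radius))
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = np.exp(-(xs**2 + ys**2) / (2 * footprint_radius**2))   # footprint

    for u, v in uv:
        cx, cy = int(round(u)), int(round(v))
        x0, x1 = max(cx - r, 0), min(cx + r + 1, w)
        y0, y1 = max(cy - r, 0), min(cy + r + 1, h)
        if x0 >= x1 or y0 >= y1:
            continue
        thickness[y0:y1, x0:x1] += kernel[y0 - (cy - r):y1 - (cy - r),
                                          x0 - (cx - r):x1 - (cx - r)]
    return thickness
```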

Figure 6

3DGS inpainting.

(a) In this indoor scene, both the paper cup and the stuffed toy dog are segmented from the input image.

(b) 3DGS leaves empty spots and dirty textures blended from irrelevant kernels.

(c) Applying the inpainting with generative AI ameliorates this issue.

Figure 7

Anisotropy regularization.

Anisotropy regularization effectively maintains rendering quality under large deformations.

(a) Without the regularization term, 3DGS tends to generate fuzzy and spiky artifacts, especially near the surface of the model.

(b) When the regularization is applied, image quality is greatly improved with correct specular effects.
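A typical regularizer of this kind bounds how elongated a kernel may become. The minimal sketch below penalizes kernels whose longest axis exceeds an assumed threshold ratio over their shortest axis; the threshold and formulation are illustrative, not the paper's exact loss.

```python
import numpy as np

def anisotropy_loss(scales, max_ratio=4.0):
    """Anisotropy regularization term (sketch; `max_ratio` is an assumed threshold).

    scales : (N, 3) per-axis scales of the Gaussian kernels.
    Penalizes kernels whose longest axis exceeds `max_ratio` times their shortest,
    discouraging the spiky shapes that appear under large deformation.
    """
    ratio = scales.max(axis=1) / np.maximum(scales.min(axis=1), 1e-8)
    return np.mean(np.maximum(ratio - max_ratio, 0.0))
```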

Figure 8

The impact of specular highlights on the quality of rendering.

(a) A fluid rendered with diffuse color only.

(b) Specular surface reflections are added, which yields a more realistic and dynamic fluid.
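Adding a specular term on top of the diffuse color can be sketched with a simple Blinn-Phong model evaluated per shaded fluid surface point. The reflectance model and constants below are illustrative assumptions, not necessarily those used in GSP.

```python
import numpy as np

def shade(diffuse, normal, view_dir, light_dir, light_color,
          specular_strength=0.5, shininess=32.0):
    """Diffuse plus Blinn-Phong specular shading for one surface point (sketch)."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    l = np.asarray(light_dir, float)
    v = np.asarray(view_dir, float)
    h = l + v
    h /= np.linalg.norm(h)                                   # half vector
    diff = max(float(n @ l), 0.0) * np.asarray(diffuse, float)
    spec = (specular_strength * max(float(n @ h), 0.0) ** shininess
            * np.asarray(light_color, float))
    return diff + spec
```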

Figure 9

A soft chair falls into the pool, causing deformation and ripples.

Figure 10

Water leaks into the garden and submerges the table. As the water level rises, the surface becomes more vibrant and washes the potted plant away.

Figure 11

Three books are stacked on the desk. They are isolated and segmented in GSP. The user switches their solid Gaussians to fluid Gaussians to create an indoor pool and drops a LEGO excavator into it.

Figure 12

Droplets of water fall onto the surface of a soda can, coalesce due to surface tension and gradually overflow.

Figure 13

Water is poured into the paper cup on the table, and the cup and the toy dog are then transformed into water. The water spills out.

Figure 14

An astronaut in space is struck by the black magic of the Trisolarans and transformed into a water sphere.

CONCLUSION

Gaussian Splashing is a novel pipeline combining versatile position-based dynamics with 3DGS. The principal design philosophy of GSP is to harness the consistency of volumetric particle-based discretization to enable integrated processing of various 3D graphics and vision tasks (such as 3D reconstruction, deformable simulation, fluid dynamics, and rendering from new camera angles).

While the concept is straightforward, building GSP involves significant research and engineering efforts.

The presence of fluid complicates 3DGS processing due to the specular highlights at the fluid surface; fluid-solid coupling requires accurate surface information; large deformations of solid objects produce defective rendering; and displaced models leave empty regions that the input images fail to capture.

We overcome those difficulties by systematically integrating and adapting a collection of state-of-the-art technologies into the framework. As a result, GSP enables realistic view synthesis not only under novel camera poses but also with novel physically based fluid/solid dynamics, or even novel object state transformations.

It should be noted that incorporating physically based fluid dynamics into NeRF/3DGS has not been explored previously. The primary contribution of this work is to showcase the feasibility of building a unified framework for integrated physics and learning-based 3D reconstruction.

GSP still has many limitations. For instance, PBD is known to be less physically accurate; it may be worth generalizing PBD to other meshless simulation methods. The fluid rendering in GSP in its current form is far from perfect (ellipsoid splatting is an ideal candidate for position-based fluids but does not physically handle refraction). PBF may also need a large number of fluid particles, which negatively impacts rendering efficiency.
