SA-GS: Scale-Adaptive Gaussian Splatting for Training-Free Anti-Aliasing
Abstract
In this paper, we introduce SA-GS, a scale-adaptive approach to anti-aliasing for Gaussian splatting.
While Mip-Splatting requires modifying the training procedure of Gaussian splatting, our method functions effectively without any training. Specifically, SA-GS can be applied as a plugin to any pre-trained Gaussian splatting field, substantially enhancing the field's anti-aliasing performance. The core technique is to apply a 2D scale-adaptive filter to each Gaussian during test time.
As noted by Mip-Splatting, observing Gaussians at different frequencies causes a mismatch between the Gaussian scales used during training and testing. Mip-Splatting addresses this problem with 3D smoothing and 2D Mip filters, which unfortunately are not aware of the test frequency.
In this work, we show that a 2D scale-adaptive filter that is informed of the test frequency can effectively match the Gaussian scale, so that the distribution of Gaussian primitives remains consistent regardless of the test frequency.
Once the scale inconsistency is resolved, sampling rates lower than the scene frequency are still expected to produce conventional jaggedness. We therefore propose super-sampling and, as its limiting case, integrating the projected 2D Gaussian over each pixel during testing. Both significantly improve anti-aliasing performance over vanilla Gaussian splatting.
Through extensive experiments in diverse settings, covering both bounded and unbounded scenes, we demonstrate that SA-GS performs on par with or better than Mip-Splatting. Notably, super-sampling and integration are only effective when our scale-adaptive filtering is enabled.
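As a concrete illustration of the claim that integration is the limiting case of super-sampling, the following self-contained 1D sketch (our own illustration; the Gaussian width, interval, and function names are arbitrary assumptions, not the paper's implementation) shows that averaging ever more sub-pixel samples of a Gaussian converges to its closed-form pixel integral computed with erf:

```python
import math

def gauss_pdf(x, sigma=0.8):
    """1D Gaussian density centred at 0 (illustrative width)."""
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def supersample_pixel(lo, hi, n, sigma=0.8):
    """Average the Gaussian over n sub-pixel sample points inside [lo, hi]."""
    step = (hi - lo) / n
    return sum(gauss_pdf(lo + (i + 0.5) * step, sigma) for i in range(n)) / n

def integrate_pixel(lo, hi, sigma=0.8):
    """Closed-form average of the Gaussian over [lo, hi] via erf."""
    s = sigma * math.sqrt(2)
    mass = 0.5 * (math.erf(hi / s) - math.erf(lo / s))
    return mass / (hi - lo)

coarse = supersample_pixel(0.0, 1.0, 4)
fine = supersample_pixel(0.0, 1.0, 4096)
exact = integrate_pixel(0.0, 1.0)
# Denser super-sampling approaches the closed-form pixel integral.
assert abs(fine - exact) < abs(coarse - exact)
assert abs(fine - exact) < 1e-6
```

The same limit holds per axis in 2D, which is why integrating the projected Gaussian over each pixel can replace arbitrarily dense super-sampling.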
Figure 1

When zooming in, 3DGS exhibits significant erosion artefacts; when zooming out, it exhibits marked dilation.
Mip-Splatting regularizes the primitives during training with a 3D smoothing filter and a 2D Mip filter.
SA-GS requires no training and ensures scale consistency using only a single 2D scale-adaptive filter.
Scale adaptation further allows us to apply super-sampling ({SA-GS}_{sup}) and its limiting case, integration ({SA-GS}_{int}), for more accurate results when zooming out.
Figure 2

Paradigm Comparison of Gaussian Rasterization Process.
Different Gaussian splatting methods share a common training and rendering framework but manipulate Gaussian primitives in distinct ways.
During training, 3DGS applies (c) in pixel space to stabilize training, but this leads to scale inconsistency across rendering settings.
Mip-Splatting (a) bounds the maximum Gaussian frequency in 3D space and (b) emulates box filtering in pixel space; however, it still suffers from scale inconsistency and requires modifying the 3DGS training procedure.
Our method requires no training and operates only in the testing pipeline. It keeps the scale of Gaussian primitives consistent by applying (d) in pixel space, and further improves the anti-aliasing capability of 3DGS by refining the alpha-blending process with (e) and (f).
Note that (e) and (f) are only meaningful when (d) is activated.
Figure 3

Scale ambiguity.
A heuristic 2D dilation in the vanilla 3DGS code enlarges each projected 2D Gaussian by a fixed increment of about 1.64 pixels. However, this fixed dilation leads to scale ambiguity when the same scene is rendered under different configurations, as indicated by the highlighted green area.
(a) When the Gaussian scale is fixed and the resolution varies, the dilation scale (green) is inconsistent.
(b) When the Gaussian scale varies and the resolution is fixed, the dilation scale (green) does not adapt to the Gaussian.
Our 2D scale-adaptive filter guarantees a consistent Gaussian scale across rendering configurations, as shown by the red dilation region, matching the training setup.
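The contrast between the two dilation schemes can be sketched as follows. This is our own illustration, not the paper's exact formula: we assume the scale-adaptive variant rescales the fixed increment by the ratio of test to train sampling rates, and all function names and the sampling-rate parameters are hypothetical.

```python
FIXED_DILATION = 1.64  # pixels; the fixed increment described in the text

def dilate_fixed(radius_px):
    """Vanilla 3DGS heuristic: enlarge the projected Gaussian by a constant."""
    return radius_px + FIXED_DILATION

def dilate_scale_adaptive(radius_px, train_rate, test_rate):
    """Hypothetical scale-adaptive variant: rescale the increment by the
    ratio of test to train sampling rates, so that measured in the training
    view's units the added dilation is always the same."""
    return radius_px + FIXED_DILATION * (test_rate / train_rate)

# At the training sampling rate the two filters coincide.
assert dilate_fixed(5.0) == dilate_scale_adaptive(5.0, 1.0, 1.0)
# Rendering at half resolution: the projected radius halves, and the
# adaptive increment halves with it, keeping the relative dilation fixed.
r_half = dilate_scale_adaptive(2.5, 1.0, 0.5)
assert abs(r_half / 2.5 - dilate_fixed(5.0) / 5.0) < 1e-12
```

With the fixed scheme, by contrast, the 1.64-pixel increment dominates small projected Gaussians at low resolution, producing exactly the green inconsistency shown in the figure.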
Figure 4

3DGS heuristic dilation and our scale-adaptive filter.
Our 2D scale-adaptive filter preserves scene structure at all resolutions. In contrast, the fixed 1.64-pixel dilation in 3DGS causes erroneous dilation in low-frequency regions and excessive erosion in high-frequency regions.
Note that "dilation" refers to both the method and the artefact, depending on context.
Figure 5

Super Sampling and Integration applied on a Gaussian primitive.
(a)
(b) The integration method factorizes the Gaussian covariance matrix through a pixel-wise rotation, so the integral over a pixel factors into a product of two 1D marginal Gaussian integrals.
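A minimal numerical sketch of this factorization idea follows. It is our own illustration, not the paper's implementation: we assume an eigen-decomposition of the 2D covariance and, for simplicity, treat the pixel as axis-aligned in the Gaussian's eigenbasis (the actual method rotates each pixel); all names and values are hypothetical.

```python
import math
import numpy as np

def pixel_response(cov2d, mean, pixel_center, pixel_size=1.0):
    """Approximate the Gaussian's integral over a pixel as the product of
    two 1D marginal integrals, one per eigen-axis, each evaluated with erf."""
    evals, evecs = np.linalg.eigh(cov2d)          # cov2d = R diag(evals) R^T
    d = evecs.T @ (np.asarray(pixel_center) - np.asarray(mean))
    resp = 1.0
    for k in range(2):
        sigma = math.sqrt(evals[k])
        lo = (d[k] - pixel_size / 2) / (sigma * math.sqrt(2))
        hi = (d[k] + pixel_size / 2) / (sigma * math.sqrt(2))
        resp *= 0.5 * (math.erf(hi) - math.erf(lo))   # mass of 1D marginal
    return resp

cov = np.array([[1.5, 0.4], [0.4, 0.8]])
centre = pixel_response(cov, mean=(0.0, 0.0), pixel_center=(0.0, 0.0))
offset = pixel_response(cov, mean=(0.0, 0.0), pixel_center=(3.0, 3.0))
# Pixels near the Gaussian centre receive more mass than distant ones.
assert centre > offset > 0.0
```

The key property exploited here is that a Gaussian with diagonal covariance separates into a product of 1D Gaussians, so the 2D integral collapses to two cheap erf evaluations per axis.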
Figure 6

Single-scale training and multi-scale testing on the Mip-NeRF 360 dataset for the zoom-out effect.
3DGS has dilation artefacts (red boxes) at low resolutions.
Our 2D scale-adaptive filter ensures consistent Gaussian scales at lower resolutions. With super-sampling and integration, our method effectively removes the remaining aliasing artefacts (yellow boxes), outperforming Mip-Splatting.
Figure 7

Single-scale training and multi-scale testing on the Mip-NeRF 360 dataset for the zoom-in effect.
3DGS exhibits erosion artefacts (red boxes) at high resolution.
Using only the 2D scale-adaptive filter, we obtain a stable quality improvement at high resolutions without any re-training.
Figure 8


Single-scale training and multi-scale testing on the Blender dataset for the zoom-in and zoom-out effects.
Our 2D scale-adaptive filter preserves the consistency of the projected 2D Gaussians when zooming out, and reduces erosion artefacts when zooming in, all while leaving the training procedure unchanged. Super-sampling and integration further prevent aliasing.
Figure 1s

Area scaling when rotating pixels.
In the integration method, the pixel size is scaled before projection so that the rotated pixel has the same size as the original. Here \theta denotes the rotation angle of each pixel.
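The scaling can be sketched as follows. This is our own reading of the caption, under the assumption that "size" means the axis-aligned extent of the rotated pixel, which should equal the original side length; the function names are hypothetical.

```python
import math

def side_scale(theta):
    """Hypothetical scale factor: shrink a unit pixel's side so that the
    axis-aligned extent of the rotated pixel equals the original side."""
    return 1.0 / (abs(math.cos(theta)) + abs(math.sin(theta)))

def rotated_extent(side, theta):
    """Axis-aligned extent (width == height) of a square rotated by theta."""
    return side * (abs(math.cos(theta)) + abs(math.sin(theta)))

# For any rotation angle, the scaled-then-rotated pixel spans one unit.
for theta in (0.0, math.pi / 6, math.pi / 4, 1.2):
    assert abs(rotated_extent(side_scale(theta), theta) - 1.0) < 1e-12

# Worst case is a 45-degree rotation, where the side shrinks by 1/sqrt(2).
assert abs(side_scale(math.pi / 4) - 1 / math.sqrt(2)) < 1e-12
```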
Figure 2s

Visual demonstration of our theoretical analysis.
We use generalized Gaussian distributions, normalized to a standard normal distribution, to bound the discrepancy between a rotated pixel and its original counterpart.
Figure 3s

Numerical Experimental Results of Integration Error.
We map all error values into the normalized (0, 1) range and amplify the differences near 0. The mean relative error is 0.51%, demonstrating that our approximation is highly accurate.
Figure 4s

Single-scale Training and Multi-scale Testing for Zoom-out Effect.
Each method is trained on high-resolution (1×) data and evaluated at the lowest resolution (1/8×) to simulate the zoom-out scenario.
3DGS suffers from dilation or erosion artefacts at different rendering frequencies, which can further amplify aliasing.
Our 2D scale-adaptive filter keeps the Gaussian scale consistent across rendering settings. Our integration and super-sampling methods further improve the anti-aliasing capability in the Gaussian scene. Notably, integration and super-sampling are only effective when combined with the 2D scale-adaptive filter.
Figure 5s

Single-scale Training and Multi-scale Testing for Zoom-in Effect.
Each method is trained on the lowest resolution (1/8×) and evaluated at full resolution (1×) to simulate the zoom-in scenario.
3DGS suffers from blurring and other detrimental artefacts at different rendering frequencies, which may further amplify aliasing.
Our 2D scale-adaptive filter ensures consistent Gaussian scales across rendering settings. Note that integration and super-sampling are tailored for zoom-out, so for zoom-in they produce results comparable to using the 2D scale-adaptive filter alone.
Figure 6s

Single-scale Training and Multi-scale Testing on the Mip-NeRF 360 Dataset.
The methods were trained on full-resolution images (1×) and evaluated at the lowest resolution (1/8×) to simulate the zoom-out scenario.
Our 2D scale-adaptive filter keeps the Gaussians consistent across rendering configurations. {SA-GS}_{int} performs on par with Mip-Splatting, while {SA-GS}_{sup} surpasses it, achieving the best results in this setting.
Figure 7s

Single-scale Training and Multi-scale Testing on the Mip-NeRF 360 Dataset.
The methods were trained at the lowest resolution (1/8×) and tested at full resolution (1×) to simulate the zoom-in scenario.
{SA-GS}_{fil} performs comparably to Mip-Splatting. Notably, our 2D scale-adaptive filter requires no training and incurs no additional computational overhead.
Figure 8s

Single-scale Training and Multi-scale Testing on the Blender Dataset.
Each method is trained on full-resolution data and tested at the lowest resolution to simulate the zoom-out scenario.
Our 2D scale-adaptive filter maintains consistent performance across rendering settings, and {SA-GS}_{int} achieves results as good as Mip-Splatting. {SA-GS}_{sup} surpasses Mip-Splatting and attains the best performance in this setting.
Figure 9s

Single-scale Training and Multi-scale Testing on the Blender Dataset.
The methods are trained at the lowest resolution (1/8×) and evaluated at full resolution (1×) to simulate the zoom-in scenario.

Our method performs on par with Mip-Splatting. Our 2D scale-adaptive filter requires no training and incurs no additional computational overhead.
Conclusion
We introduce SA-GS, a training-free framework that can seamlessly integrate with 3DGS to improve its anti-aliasing performance across all rendering frequencies.
We introduce a 2D scale-adaptive filter, which keeps the scale of the projected 2D Gaussians consistent under varying rendering conditions.
In addition, we apply classical anti-aliasing techniques, namely super-sampling and integration, to effectively mitigate aliasing at reduced sampling rates. Extensive validation on both bounded and unbounded scenes shows that SA-GS performs on par with or better than the state of the art.
Limitations
Our method incurs no extra computational cost when zooming in, but zooming out increases rendering time due to the integration and super-sampling techniques.
Thanks to shared memory, the execution time of super-sampling is similar to that of integration, only about 15%–20% slower than vanilla 3DGS. Moreover, integration can be accelerated with approximations or lookup tables, further reducing processing time. Overall, the proposed method achieves notable anti-aliasing improvements at negligible computational overhead.
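As one illustration of the lookup-table route (our own sketch, not the paper's implementation), the erf evaluations at the heart of pixel integration can be tabulated once and replaced by cheap linear interpolation; the table size and range below are arbitrary assumptions.

```python
import math

# Precompute erf on a uniform grid once; rendering then replaces each
# math.erf call with a linear interpolation into the table.
N, XMAX = 1024, 4.0
STEP = XMAX / (N - 1)
TABLE = [math.erf(i * STEP) for i in range(N)]

def erf_lut(x):
    """Approximate erf via the lookup table (odd symmetry, clamped tails)."""
    sign = 1.0 if x >= 0 else -1.0
    x = abs(x)
    if x >= XMAX:
        return sign  # erf saturates to +/-1 well before |x| = 4
    t = x / STEP
    i = int(t)
    frac = t - i
    return sign * (TABLE[i] + frac * (TABLE[i + 1] - TABLE[i]))

# Linear interpolation on a 1024-entry grid stays within a few 1e-6 of erf.
worst = max(abs(erf_lut(x / 100) - math.erf(x / 100)) for x in range(-500, 500))
assert worst < 1e-5
```

A per-pixel table lookup like this trades a transcendental evaluation for one memory read and a multiply-add, which is the kind of approximation the overhead-reduction remark above refers to.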
