PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting

University of Maryland

* denotes equal contribution


Abstract

Recent advancements in novel view synthesis have enabled real-time rendering speeds and high reconstruction accuracy. 3D Gaussian Splatting (3D-GS), a foundational point-based parametric 3D scene representation, models scenes as large sets of 3D Gaussians. Complex scenes can comprise millions of Gaussians, amounting to large storage and memory requirements that limit the viability of 3D-GS on devices with limited resources. Current techniques for compressing these pretrained models by pruning Gaussians rely on combinations of heuristics to determine which ones to remove.

In this paper, we propose a principled spatial sensitivity pruning score that outperforms these approaches. It is computed as a second-order approximation of the reconstruction error on the training views with respect to the spatial parameters of each Gaussian. Additionally, we propose a multi-round prune-refine pipeline that can be applied to any pretrained 3D-GS model without changing the training pipeline.

After pruning 88.44% of the Gaussians, we observe that our PUP 3D-GS pipeline increases the average rendering speed of 3D-GS by 2.65× while retaining more salient foreground information and achieving higher image quality metrics than previous pruning techniques on scenes from the Mip-NeRF 360, Tanks & Temples, and Deep Blending datasets.

Method

We find that the sensitivity of the \( L_2 \) loss across the scene to a particular Gaussian \( \mathcal{G}_i \) can be used as a surprisingly effective pruning score.

To compute this sensitivity, we first note that we can approximate the Hessian of the \( L_2 \) loss across the scene with respect to a particular Gaussian \( \mathcal{G}_i \) as:

\[ \nabla_{\mathcal{G}_i}^2 L_2 = \sum \nabla_{\mathcal{G}_i} I_{\mathcal{G}} \nabla_{\mathcal{G}_i} I_{\mathcal{G}}^T, \]

where \( I_{\mathcal{G}} \) is a rendered training image and the sum is taken over all training images in the scene. This approximation is exact when the scene is converged. Please see our manuscript for additional details.
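The sum of gradient outer products above is the standard Gauss-Newton approximation of the Hessian. A minimal NumPy sketch of that accumulation is below; the function name and the per-image Jacobian shapes are illustrative assumptions, not the paper's implementation, which operates inside the 3D-GS rasterizer.

```python
import numpy as np

def approx_hessian(jacobians):
    """Gauss-Newton approximation of the L2-loss Hessian for one Gaussian.

    jacobians: one array per training image, each of shape
    (num_pixels, num_params), holding the gradient of every rendered
    pixel with respect to that Gaussian's parameters.

    Returns the (num_params, num_params) matrix sum_k J_k^T @ J_k,
    i.e. the sum of gradient outer products over all pixels and images.
    """
    num_params = jacobians[0].shape[1]
    H = np.zeros((num_params, num_params))
    for J in jacobians:
        H += J.T @ J  # accumulate this image's outer-product contribution
    return H
```

Because each term is an outer product, the resulting matrix is symmetric positive semi-definite by construction, which is what makes the log-determinant score below well behaved.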

Using only the mean \( x_i \) and scaling \( s_i \) parameters of this Gaussian \(\mathcal{G}_i\), we compute our sensitivity pruning score \( U_i \) as the log determinant of the Hessian:

\[ U_i = \log \left| \sum \nabla_{x_i,s_i} I_{\mathcal{G}} \nabla_{x_i,s_i} I_{\mathcal{G}}^T \right|. \]

Intuitively, \( U_i \) measures the sharpness of the \( L_2 \) loss function around Gaussian \( \mathcal{G}_i \), where a higher \( U_i \) signifies that the \( L_2 \) loss across the scene is more sensitive to \( \mathcal{G}_i \).
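Given the Gauss-Newton Hessian restricted to the mean and scaling parameters (a small, e.g. 6 × 6, matrix per Gaussian), the score reduces to a single log-determinant. A hedged sketch using NumPy's `slogdet`, which is numerically safer than `log(det(H))`; the function name is an assumption for illustration:

```python
import numpy as np

def sensitivity_score(H):
    """Pruning score U_i = log |H| for one Gaussian.

    H: the (num_params, num_params) Gauss-Newton Hessian of the L2 loss,
    restricted to the Gaussian's mean and scaling parameters.

    Uses slogdet to avoid overflow/underflow in the determinant itself.
    """
    sign, logdet = np.linalg.slogdet(H)
    # H is a sum of outer products, hence positive semi-definite;
    # a non-positive sign indicates a (near-)singular Hessian.
    return logdet
```

A sharper (higher-curvature) loss landscape around \( \mathcal{G}_i \) yields a larger determinant, and thus a larger \( U_i \).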

To prune a scene, we rank all of its Gaussians by this sensitivity pruning score and remove a given percentage of the least sensitive ones.
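The ranking step amounts to an argsort over the scores followed by a boolean keep-mask. A minimal sketch, with the function name and mask convention chosen here for illustration:

```python
import numpy as np

def prune_mask(scores, prune_fraction):
    """Boolean keep-mask that drops the least-sensitive Gaussians.

    scores: array of sensitivity scores U_i, one per Gaussian.
    prune_fraction: fraction ofAussians to remove, e.g. 0.8844.

    Returns a boolean array where True means the Gaussian is kept.
    """
    n = len(scores)
    num_prune = int(n * prune_fraction)
    order = np.argsort(scores)          # ascending: least sensitive first
    keep = np.ones(n, dtype=bool)
    keep[order[:num_prune]] = False     # drop the lowest-scoring Gaussians
    return keep
```

In the multi-round prune-refine pipeline, a mask like this would be applied between fine-tuning rounds, so each round prunes a smaller model than the last.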

Results

Update: FPS statistics are higher than reported in our original preprint due to a change in rasterizer.

[Video comparisons: LightGaussian [Fan 2023] vs. PUP 3D-GS (Ours)]

BibTeX

@article{HansonTuPUP3DGS,
    author = {Hanson, Alex and Tu, Allen and Singla, Vasu and Jayawardhana, Mayuka and Zwicker, Matthias and Goldstein, Tom},
    title = {PUP 3D-GS: Principled Uncertainty Pruning for 3D Gaussian Splatting},
    journal = {arXiv},
    year = {2024}
}