GauU-Scene: A Scene Reconstruction Benchmark on Large Scale 3D Reconstruction Dataset Using Gaussian Splatting
Abstract
We introduce a novel large-scale scene reconstruction benchmark using the newly developed 3D representation approach, Gaussian Splatting, on our expansive U-Scene dataset.
U-Scene encompasses over one and a half square kilometres, featuring a comprehensive RGB dataset coupled with LiDAR ground truth.
For data acquisition, we employed the DJI Matrice 300 drone equipped with the high-accuracy Zenmuse L1 LiDAR, enabling precise rooftop data collection.
This dataset offers a unique blend of urban and academic environments for advanced spatial analysis and covers more than 1.5 km².
Our evaluation of U-Scene with Gaussian Splatting includes a detailed analysis across various novel viewpoints. We also juxtapose these results with those derived from our accurate point cloud dataset, highlighting significant differences that underscore the importance of combining multi-modal information.
Figure 1

Our dataset is divided into three main parts.
(a) shows the lower campus of CUHKSZ (The Chinese University of Hong Kong, Shenzhen).
(b) shows the upper campus of CUHKSZ.
(c) shows the SMBU (Shenzhen MSU-BIT University) campus.
We use a highly accurate LiDAR to collect the dataset, and the area we cover exceeds 1.5 km².
Figure 2

Current point cloud registration methods usually cannot handle different scales, so we first rescale the raw point cloud to the same size as the SfM sparse point cloud.
To do this, we estimate the scale from the maximum distance or the variance of the SfM cloud, since there are always some points far from the centre in SfM. Then we perform coarse matching manually and fine-tune it using ICP (Iterative Closest Point).
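The scale-normalization pre-step above can be sketched as follows. This is a minimal illustration, not the authors' exact code: it assumes a variance-based scale proxy (the RMS distance to the centroid, which is less sensitive to the stray far-from-centre SfM points the caption mentions than the raw maximum), and leaves the coarse manual alignment and ICP fine-tuning to a dedicated tool such as Open3D.

```python
import numpy as np

def rms_center_distance(points: np.ndarray) -> float:
    """Root-mean-square distance to the centroid: a variance-based
    scale proxy, more robust to stray far-away SfM points than the max."""
    centered = points - points.mean(axis=0)
    return float(np.sqrt((np.linalg.norm(centered, axis=1) ** 2).mean()))

def rescale_lidar_to_sfm(raw_lidar: np.ndarray,
                         sfm_sparse: np.ndarray) -> np.ndarray:
    """Uniformly rescale the LiDAR cloud (about its own centroid) so its
    spread matches the SfM sparse cloud, before coarse alignment + ICP."""
    s = rms_center_distance(sfm_sparse) / rms_center_distance(raw_lidar)
    centroid = raw_lidar.mean(axis=0)
    return (raw_lidar - centroid) * s + centroid
```

After this step both clouds share one metric scale, so a rigid (rotation + translation) ICP refinement suffices; no similarity-transform ICP is needed.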
Figure 3

(a) shows point quality: blue points are of acceptable quality, red points are not.
(b) shows the RGB point cloud.
(c) shows the points' altitude.
Figure 4

The three figures here provide another view of our raw point cloud dataset.
Figure 5

(a, c) show the result of using vanilla Gaussian Splatting.
(b, d) show the result of using LiDAR-fused Gaussian Splatting.
One can easily see that (a, c) contain blurry black clouds and an irregularly blurred building, while (b, d) are free of these defects.
Conclusion and Future Work
In the current study, we propose a dataset that solves the image time-difference problem and inherently captures rooftop information by using a drone. We also provide a simple yet effective approach to resolving the coordinate difference between LiDAR and images. Finally, we successfully combine natural images and LiDAR information as a prior for Gaussian Splatting.
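The use of LiDAR as a prior typically means seeding the initial 3D Gaussians from the fused point cloud instead of the sparser SfM points. The sketch below is a hypothetical illustration of that idea (the parameter layout and the nearest-neighbour-based initial scale follow common Gaussian Splatting practice, not code from this paper):

```python
import numpy as np

def init_gaussians_from_lidar(points: np.ndarray,
                              colors: np.ndarray,
                              k_scale: float = 0.5) -> dict:
    """Seed Gaussian means with LiDAR points; set isotropic scales from
    each point's nearest-neighbour distance (naive O(n^2), fine for a
    small illustrative cloud)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    nn = d.min(axis=1)                   # nearest-neighbour distance
    return {
        "means": points,                                     # centres
        "colors": colors,                                    # RGB / SH DC
        "scales": (k_scale * nn)[:, None].repeat(3, axis=1), # isotropic
        "opacities": np.full(n, 0.1),    # low starting opacity
    }
```

Because the LiDAR cloud is dense and metrically accurate, initial scales stay small and well-conditioned, which plausibly explains the reduced black "floater" artefacts seen in Figure 5.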
Our results show a clear boost from fusing LiDAR and camera data, both qualitatively and quantitatively. Moreover, the difference between metrics measured against the image ground truth and against the point cloud ground truth validates the necessity of collecting point cloud ground truth.
However, the dataset is still relatively small, and there are no held-out images for testing the results. Furthermore, the edge effect of Gaussian Splatting is pronounced: the edges of the 3D models are the main source of error. Removing them in a principled way would effectively reduce the error and yield a more reliable dataset.
