LoGS: Visual Localization via Gaussian Splatting with Fewer Training Images

Yuzhou Cheng1, Jianhao Jiao1*, Yue Wang2, Dimitrios Kanoulas1,3
1University College London, 2Zhejiang University, 3Athena RC

Abstract

Visual localization is essential for mobile robotics and augmented reality, yet most existing methods require hundreds of training images to perform well. The recent 3D Gaussian Splatting technique enables realistic novel view synthesis, offering a promising foundation for localization. We introduce LoGS, a hierarchical system that adapts Gaussian Splatting to few-shot localization. Our experiments show that LoGS achieves state-of-the-art accuracy from a limited number of training images, in some cases even outperforming previous methods trained on the full image set.

Method Overview

LoGS introduces an efficient pipeline for few-shot localization that leverages Gaussian Splatting. The system consists of two stages: (1) map construction from a limited set of training images, and (2) robust online localization based on geometric correspondence followed by differentiable pose optimization; a minimal sketch of this second stage is given below. Details can be found in our paper.
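
To make the render-and-compare idea concrete, here is a minimal, hypothetical sketch of gradient-based pose refinement against a pretrained splat map. It is not the paper's implementation: `render` stands in for any differentiable Gaussian Splatting rasterizer, and the names `se3_exp` and `refine_pose` are ours. The toy stub at the bottom only "renders" the camera translation so the example runs end to end without a real 3DGS backend.

  # Hedged sketch: differentiable pose refinement by render-and-compare.
  # `render` is a stand-in for a differentiable 3DGS rasterizer (assumption);
  # the SE(3) update uses a simple axis-angle parameterization.
  import torch

  def se3_exp(xi):
      """Map a 6-vector (axis-angle rotation, translation) to a 4x4 pose."""
      omega, t = xi[:3], xi[3:]
      theta = ((omega ** 2).sum() + 1e-12).sqrt()  # safe norm, finite gradient at 0
      k = omega / theta
      K = torch.zeros(3, 3)  # skew-symmetric matrix of the rotation axis
      K[0, 1], K[0, 2], K[1, 2] = -k[2], k[1], -k[0]
      K[1, 0], K[2, 0], K[2, 1] = k[2], -k[1], k[0]
      R = torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)
      T = torch.eye(4)
      T[:3, :3], T[:3, 3] = R, t
      return T

  def refine_pose(render, target, T_init, iters=200, lr=1e-2):
      """Minimize a photometric loss w.r.t. a local SE(3) update around T_init."""
      xi = torch.zeros(6, requires_grad=True)
      opt = torch.optim.Adam([xi], lr=lr)
      for _ in range(iters):
          opt.zero_grad()
          loss = torch.nn.functional.l1_loss(render(se3_exp(xi) @ T_init), target)
          loss.backward()
          opt.step()
      return se3_exp(xi.detach()) @ T_init

  if __name__ == "__main__":
      # Toy stand-in renderer: "renders" only the camera translation, so the
      # script runs without a real Gaussian Splatting rasterizer.
      T_gt = torch.eye(4)
      T_gt[:3, 3] = torch.tensor([0.10, -0.20, 0.30])
      render = lambda T: T[:3, 3]
      T_est = refine_pose(render, render(T_gt), torch.eye(4))
      print(T_est[:3, 3])  # should approach (0.10, -0.20, 0.30)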

[Teaser image]

Experiments

The demo video shows qualitative results: live localization with LoGS in an indoor environment, using a map pre-trained from only a few images.

7-Scenes Localization Results. The first pair of tables uses DSLAM ground-truth poses; the second pair uses SfM ground-truth poses. Each cell reports the median pose error (cm / °); the sketch after the tables shows how these errors are computed.

Original training (DSLAM ground truth):

Scene      | #Images | AS     | HLoc   | HSCNet | DSAC*  | ACE   | Ours
-----------|---------|--------|--------|--------|--------|-------|---------
CHESS      | 4000    | 3/0.87 | 2/0.85 | 2/0.7  | 2/1.10 | 2/0.7 | 2.0/0.62
FIRE       | 2000    | 2/1.01 | 2/0.94 | 2/0.9  | 2/1.24 | 2/0.9 | 1.8/0.70
HEADS      | 1000    | 1/0.82 | 1/0.75 | 1/0.9  | 1/1.82 | 1/0.6 | 1.0/0.64
OFFICE     | 6000    | 4/1.15 | 3/0.92 | 3/0.8  | 3/1.15 | 3/0.8 | 2.4/0.69
PUMPKIN    | 4000    | 7/1.69 | 5/1.30 | 4/1.0  | 4/1.34 | 4/1.1 | 4.0/1.03
REDKITCHEN | 7000    | 5/1.72 | 4/1.40 | 4/1.2  | 4/1.68 | 4/1.3 | 3.4/1.13
STAIRS     | 2000    | 4/1.01 | 5/1.47 | 3/0.8  | 3/1.16 | 4/1.1 | 3.2/0.81

Few-shot training (DSLAM ground truth):

Scene      | #Images | HLoc    | DSAC*  | HSCNet  | SP+Reg    | FSRC   | Ours
-----------|---------|---------|--------|---------|-----------|--------|-------
CHESS      | 20      | 4/1.42  | 3/1.16 | 4/1.42  | 4/1.28    | 4/1.23 | 3/1.00
FIRE       | 10      | 4/1.72  | 5/1.86 | 5/1.67  | 5/1.95    | 4/1.53 | 2/0.90
HEADS      | 10      | 4/1.59  | 4/2.71 | 3/1.76  | 3/2.05    | 2/1.56 | 2/0.99
OFFICE     | 30      | 5/1.47  | 9/2.21 | 9/2.29  | 7/1.96    | 5/1.47 | 4/1.13
PUMPKIN    | 20      | 8/1.70  | 7/1.68 | 8/1.96  | 7/1.77    | 7/1.75 | 7/1.85
REDKITCHEN | 35      | 7/1.89  | 7/2.02 | 10/2.63 | 8/2.19    | 6/1.93 | 5/1.64
STAIRS     | 20      | 10/2.21 | 18/4.8 | 13/4.24 | 120/27.37 | 5/1.47 | 7/1.85
Original training (SfM ground truth). Methods are grouped by family: absolute pose regression (MS-Transf, Marepo, DFNet), scene coordinate regression (DSAC*, ACE, GLACE), and analysis-by-synthesis (MCLoc, NeFeS, NeRFMatch):

Scene      | #Images | MS-Transf | Marepo   | DFNet  | DSAC*    | ACE      | GLACE    | MCLoc | NeFeS | NeRFMatch | Ours
-----------|---------|-----------|----------|--------|----------|----------|----------|-------|-------|-----------|---------
CHESS      | 4000    | 11/6.4    | 1.9/0.83 | 3/1.1  | 0.5/0.17 | 0.5/0.18 | 0.6/0.18 | 2/0.8 | 2/0.8 | 0.9/0.3   | 0.4/0.10
FIRE       | 2000    | 23/11.5   | 2.3/0.92 | 6/2.3  | 0.8/0.28 | 0.8/0.33 | 0.9/0.34 | 3/1.4 | 2/0.8 | 1.1/0.4   | 0.6/0.18
HEADS      | 1000    | 13/13.0   | 2.1/1.24 | 4/2.3  | 0.5/0.34 | 0.5/0.33 | 0.6/0.34 | 3/1.3 | 2/1.4 | 1.5/1.0   | 0.5/0.26
OFFICE     | 6000    | 18/8.1    | 2.9/0.93 | 6/1.5  | 1.2/0.34 | 1/0.29   | 1.1/0.29 | 4/1.3 | 2/0.6 | 3.0/0.8   | 0.7/0.22
PUMPKIN    | 4000    | 17/8.4    | 2.5/0.88 | 7/1.9  | 1.2/0.28 | 1.2/0.28 | 1/0.22   | 5/1.6 | 2/0.6 | 2.2/0.6   | 0.7/0.22
REDKITCHEN | 7000    | 16/8.9    | 2.9/0.98 | 7/1.7  | 0.7/0.21 | 0.8/0.20 | 0.8/0.20 | 6/1.6 | 2/0.6 | 1.0/0.3   | 0.5/0.14
STAIRS     | 2000    | 29/10.3   | 5.9/1.48 | 12/2.6 | 2.7/0.78 | 2.9/0.81 | 3.2/0.93 | 6/2.0 | 5/1.3 | 10.1/1.7  | 1.6/0.43

Few-shot training (SfM ground truth):

Scene      | #Images | Ours
-----------|---------|---------
CHESS      | 20      | 0.5/0.16
FIRE       | 10      | 0.8/0.26
HEADS      | 10      | 0.7/0.48
OFFICE     | 30      | 1.2/0.34
PUMPKIN    | 20      | 1.1/1.29
REDKITCHEN | 35      | 0.9/0.22
STAIRS     | 20      | 4.1/1.10
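
As a reference for how median pose errors of this kind are obtained, here is a short sketch under the standard convention for these benchmarks (per-frame translation error in cm and rotation error in degrees, then the per-scene median). It is our illustration, not code released with LoGS, and `median_pose_error` is a name we chose.

  # Median pose error (cm / deg) from estimated and ground-truth camera poses.
  import numpy as np

  def median_pose_error(T_est, T_gt):
      """T_est, T_gt: (N, 4, 4) arrays of camera poses in a metric frame.

      Returns (median translation error in cm, median rotation error in deg)."""
      t_err = np.linalg.norm(T_est[:, :3, 3] - T_gt[:, :3, 3], axis=1) * 100.0  # m -> cm
      R_rel = np.einsum('nij,nkj->nik', T_est[:, :3, :3], T_gt[:, :3, :3])  # R_est @ R_gt.T
      cos = (np.trace(R_rel, axis1=1, axis2=2) - 1.0) / 2.0  # rotation-angle identity
      r_err = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
      return np.median(t_err), np.median(r_err)

  # Example: identical poses give zero error.
  T = np.tile(np.eye(4), (5, 1, 1))
  print(median_pose_error(T, T))  # (0.0, 0.0)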

Cambridge Landmarks Localization Results. Each cell reports the median pose error (cm / °); NA marks results that are not available.

Original training (SfM ground truth):

Scene         | #Images | AS      | HLoc    | SCRNet  | HSCNet | DSAC*  | NeRFMatch | Ours
--------------|---------|---------|---------|---------|--------|--------|-----------|----------
GREATCOURT    | 1531    | 24/0.13 | 16/0.11 | 125/0.6 | 28/0.2 | 49/0.3 | 17.5/0.1  | 12.7/0.09
KINGS-COLLEGE | 1220    | 13/0.22 | 12/0.20 | 21/0.3  | 18/0.3 | 15/0.3 | 13.0/0.2  | 10.8/0.19
OLDHOSPITAL   | 895     | 20/0.36 | 15/0.30 | 21/0.3  | 19/0.3 | 21/0.4 | 19.4/0.4  | 14.6/0.31
SHOPFACADE    | 229     | 4/0.21  | 4/0.20  | 6/0.3   | 6/0.3  | 5/0.3  | 8.5/0.4   | 4.1/0.19
STMARYSCHURCH | 1487    | 8/0.25  | 7/0.21  | 16/0.5  | 9/0.3  | 13/0.4 | 7.9/0.3   | 6.9/0.20

Few-shot training (SfM ground truth):

Scene         | #Images | HLoc    | DSAC*    | HSCNet   | SP+Reg   | FSRC    | Ours
--------------|---------|---------|----------|----------|----------|---------|--------
GREATCOURT    | 16      | 72/0.27 | NA       | NA       | NA       | 81/0.47 | 68/0.20
KINGS-COLLEGE | 13      | 30/0.38 | 156/2.09 | 47/0.74  | 111/1.77 | 39/0.69 | 24/0.33
OLDHOSPITAL   | 9       | 28/0.42 | 135/2.21 | 34/0.41  | 116/2.55 | 38/0.54 | 28/0.43
SHOPFACADE    | 3       | 27/1.75 | NA       | 22/1.27  | NA       | 19/0.99 | 39/2.39
STMARYSCHURCH | 15      | 25/0.76 | NA       | 292/8.89 | NA       | 31/1.03 | 22/0.67

LLFF and Mip-NeRF 360 Localization Results. Each cell reports the percentage of test frames with translation error below 0.05 scene units and rotation error below 5° (translation accuracy / rotation accuracy); a sketch of this accuracy metric follows the table.

Dataset      | iNeRF (δs) | iComMa (δs) | iComMa (δm) | Ours    | Ours (few-shot)
-------------|------------|-------------|-------------|---------|----------------
LLFF         | 94.8/72.2  | 99.1/99.3   | 75.4/98.2   | 100/100 | 100/100
Mip-NeRF 360 | 85.6/79.6  | 86.7/90.6   | 68.8/74.8   | 100/100 | 94.7/99.9
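
The thresholded accuracy in this table can be computed from the same per-frame errors as in the earlier sketch; the helper below is a hypothetical illustration that applies the caption's 0.05-unit and 5° thresholds, not code from the paper.

  import numpy as np

  def threshold_accuracy(t_err, r_err, t_thresh=0.05, r_thresh=5.0):
      """Percentage of frames under each threshold: (translation %, rotation %).

      t_err is in scene units here (not cm) and r_err in degrees, matching
      the caption's <0.05 unit / <5 deg convention."""
      t_err, r_err = np.asarray(t_err), np.asarray(r_err)
      return 100.0 * np.mean(t_err < t_thresh), 100.0 * np.mean(r_err < r_thresh)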

BibTeX

If you find our work or code useful, please consider citing our publication:

  @inproceedings{cheng2025logs,
    title = {LoGS: Visual Localization via Gaussian Splatting with Fewer Training Images},
    author = {Cheng, Yuzhou and Jiao, Jianhao and Wang, Yue and Kanoulas, Dimitrios},
    booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
    year = {2025},
    organization = {IEEE}
  }