GaussianGraph: 3D Gaussian-based Scene Graph Generation for Open-world Scene Understanding

Beijing Institute of Technology
IROS 2025

The motivation, core methods, and visualization results of GaussianGraph.

Abstract

Recent advancements in 3D Gaussian Splatting (3DGS) have significantly improved semantic scene understanding, enabling natural language queries to localize objects within a scene. However, existing methods primarily focus on embedding compressed CLIP features into 3D Gaussians, which suffers from low object segmentation accuracy and a lack of spatial reasoning capabilities. To address these limitations, we propose GaussianGraph, a novel framework that enhances 3DGS-based scene understanding by integrating adaptive semantic clustering and scene graph generation. We introduce a "Control-Follow" clustering strategy, which dynamically adapts to scene scale and feature distribution, avoiding feature compression and significantly improving segmentation accuracy. Additionally, we enrich the scene representation by integrating object attributes and spatial relations extracted from 2D foundation models. To address inaccuracies in spatial relationships, we propose 3D correction modules that filter implausible relations through spatial consistency verification, ensuring reliable scene graph construction. Extensive experiments on three datasets demonstrate that GaussianGraph outperforms state-of-the-art methods in both semantic segmentation and object grounding tasks, providing a robust solution for complex scene understanding and interaction.

The goal of GaussianGraph is to construct a 3D scene graph of open-world scenes for downstream tasks. First, we extract 2D features, including CLIP embeddings, segmentation masks, captions, and relations. Foreground objects and object pairs are fed to LLaVA with prompts to generate captions and relations, which are combined with the CLIP features and segmentation masks by mask index. Second, given posed multi-view images, we use 3DGS to reconstruct the scene and apply the "Control-Follow" clustering strategy to generate Gaussian clusters. Third, after 3D Gaussian clustering, we build the 3D scene graph by rendering each cluster to the multi-view images and matching it with the CLIP features, captions, and relations. Finally, 3D correction modules with four sub-modules are used to refine the scene graph.
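To make this pipeline concrete, the Python sketch below shows one plausible way to store the resulting scene graph and to apply a spatial-consistency check of the kind used in the correction step. The ObjectNode/SceneGraph schema, the function names, and the centroid-based test for the "above" relation are illustrative assumptions on our part, not the paper's implementation or released code.

import numpy as np
from dataclasses import dataclass, field


@dataclass
class ObjectNode:
    """One Gaussian cluster lifted to an object node (illustrative schema)."""
    cluster_id: int
    clip_feature: np.ndarray      # open-vocabulary CLIP embedding of the cluster
    caption: str                  # attribute caption produced by the 2D VLM
    centroid: np.ndarray          # mean 3D position of the cluster's Gaussians


@dataclass
class SceneGraph:
    nodes: dict[int, ObjectNode] = field(default_factory=dict)
    # directed spatial relations: (subject id, relation label, object id)
    edges: list[tuple[int, str, int]] = field(default_factory=list)


def plausible_above(subject: ObjectNode, obj: ObjectNode, margin: float = 0.0) -> bool:
    """Hypothetical spatial-consistency test: an 'above' relation is kept only
    if the subject's centroid is actually higher along the z-axis."""
    return float(subject.centroid[2]) > float(obj.centroid[2]) + margin


def correct_relations(graph: SceneGraph) -> SceneGraph:
    """Filter implausible relations predicted from 2D views. This sketches the
    idea behind the 3D correction modules, not their exact rules."""
    checks = {
        "above": plausible_above,
        "below": lambda s, o: plausible_above(o, s),
    }
    kept = []
    for subj_id, relation, obj_id in graph.edges:
        check = checks.get(relation)
        if check is None or check(graph.nodes[subj_id], graph.nodes[obj_id]):
            kept.append((subj_id, relation, obj_id))
    graph.edges = kept
    return graph

Keying nodes by cluster index mirrors the mask-index matching described above, so 2D captions and relations can be attached to the correct 3D cluster.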

Object grounding on the LERF dataset. GaussianGraph infers the correct object category with fewer artifacts and less noise.

Downstream tasks, including visual question answering and object grounding. The model needs to accurately identify the object attributes (blue) and spatial relationships (red) contained in the query and infer the correct objects. In the object grounding task, our model effectively mitigates interference from similar objects in adjacent areas.
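As a rough illustration of how such queries could be answered over the scene graph, the snippet below grounds an object by filtering nodes on an attribute keyword and then on a spatial relation to an anchor object. It reuses the hypothetical ObjectNode/SceneGraph classes from the sketch above; the function name, its parameters, and the plain keyword matching (standing in for the paper's CLIP-based matching) are all our own assumptions.

def ground_object(graph: SceneGraph, attribute: str,
                  relation: str | None = None,
                  anchor_keyword: str | None = None) -> list[int]:
    """Return candidate cluster ids whose caption mentions `attribute` and,
    optionally, that stand in `relation` to a node whose caption mentions
    `anchor_keyword`."""
    candidates = {nid for nid, node in graph.nodes.items()
                  if attribute.lower() in node.caption.lower()}
    if relation is None or anchor_keyword is None:
        return sorted(candidates)
    grounded = []
    for subj_id, rel, obj_id in graph.edges:
        if (rel == relation and subj_id in candidates
                and anchor_keyword.lower() in graph.nodes[obj_id].caption.lower()):
            grounded.append(subj_id)
    return grounded


# Toy usage: "the blue mug on the table" -> attribute "blue mug",
# relation "on", anchor "table".
graph = SceneGraph()
graph.nodes[0] = ObjectNode(0, np.zeros(512), "a blue mug", np.array([0.2, 0.1, 0.8]))
graph.nodes[1] = ObjectNode(1, np.zeros(512), "a wooden table", np.array([0.2, 0.1, 0.4]))
graph.edges.append((0, "on", 1))
print(ground_object(graph, "blue mug", relation="on", anchor_keyword="table"))  # [0]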

More Results on Replica and ScanNet Datasets

BibTeX

@misc{wang2025gaussiangraph3dgaussianbasedscene,
      title={GaussianGraph: 3D Gaussian-based Scene Graph Generation for Open-world Scene Understanding}, 
      author={Xihan Wang and Dianyi Yang and Yu Gao and Yufeng Yue and Yi Yang and Mengyin Fu},
      year={2025},
      eprint={2503.04034},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.04034}, 
}