UniGoal: Towards Universal Zero-shot Goal-oriented Navigation

CVPR 2025


Hang Yin1*   Xiuwei Xu1*†   Linqing Zhao1   Ziwei Wang2   Jie Zhou1   Jiwen Lu1‡

1Tsinghua University  2Nanyang Technological University


Paper (arXiv)      Code (GitHub)      Chinese Explainer (Zhihu)

If the video does not load, click HERE to download it.

Abstract


In this paper, we propose a general framework for universal zero-shot goal-oriented navigation. Existing zero-shot methods build inference frameworks upon large language models (LLMs) for specific tasks, which differ greatly in their overall pipelines and fail to generalize across different types of goals. Towards universal zero-shot navigation, we propose a uniform graph representation that unifies different goals, including object category, instance image and text description. We also convert the agent's observations into an online-maintained scene graph. With this consistent scene and goal representation, we preserve most structural information compared with pure text and are able to leverage an LLM for explicit graph-based reasoning. Specifically, we conduct graph matching between the scene graph and the goal graph at each time step and propose different strategies to generate the long-term exploration goal according to the matching state. When there is zero matching, the agent iteratively searches for a subgraph of the goal. With partial matching, the agent then utilizes coordinate projection and anchor-pair alignment to infer the goal location. Finally, with perfect matching, scene graph correction and goal verification are applied. We also present a blacklist mechanism to enable robust switching between stages. Extensive experiments on several benchmarks show that our UniGoal achieves state-of-the-art zero-shot performance on three studied navigation tasks with a single model, even outperforming task-specific zero-shot methods and supervised universal methods.

[Teaser figure]
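To make the uniform goal-graph idea concrete, below is a minimal Python sketch, assuming a simple node/edge representation built with networkx. The function names and the relation format are hypothetical illustrations, not the paper's actual code; an instance-image goal would be handled analogously by running a detector on the goal image to extract objects and relations.

import networkx as nx

def category_goal_graph(category: str) -> nx.Graph:
    # An object-category goal (ON) degenerates to a single-node graph.
    g = nx.Graph()
    g.add_node(category, type="object")
    return g

def text_goal_graph(objects, relations) -> nx.Graph:
    # A text goal (TN) is first parsed, e.g. by an LLM, into object
    # nodes and pairwise spatial relations such as
    # ("chair", "next to", "table"); each relation becomes an edge.
    g = nx.Graph()
    for obj in objects:
        g.add_node(obj, type="object")
    for a, rel, b in relations:
        g.add_edge(a, b, relation=rel)
    return g

# Example: "a chair next to a table, beneath a window"
goal = text_goal_graph(
    ["chair", "table", "window"],
    [("chair", "next to", "table"), ("table", "beneath", "window")],
)
print(goal.edges(data=True))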

Approach


Overall framework of our approach. We convert different types of goals into a uniform graph representation and maintain an online scene graph. At each step, we perform graph matching between the scene graph and the goal graph, and the matching score guides a multi-stage scene exploration policy. For different degrees of matching, our exploration policy leverages an LLM to exploit the graphs with different aims: first expand the observed area, then infer the goal location from the overlap of the graphs, and finally verify the goal. We also propose a blacklist that records unsuccessful matches to avoid repeated exploration.

[Pipeline figure]
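A minimal Python sketch of the matching-driven stage switching described above. The score range, the hard thresholds and the helper behaviors (expand_frontier, infer_goal_location, verify_goal) are illustrative assumptions, not the released implementation.

from typing import List, Set, Tuple

Coord = Tuple[float, float]

def expand_frontier(frontiers: List[Coord], blacklist: Set[Coord]) -> Coord:
    # Stage 1 (zero matching): choose an unexplored frontier as the
    # long-term goal, skipping locations recorded in the blacklist.
    return next(f for f in frontiers if f not in blacklist)

def infer_goal_location(score: float) -> Coord:
    # Stage 2 (partial matching): estimate the goal location from the
    # graph overlap, e.g. via coordinate projection and anchor-pair
    # alignment. A fixed placeholder estimate stands in here.
    return (2.0, 5.0)

def verify_goal() -> Coord:
    # Stage 3 (perfect matching): correct the scene graph and approach
    # the matched candidate for final verification.
    return (0.0, 0.0)

def choose_long_term_goal(score: float, frontiers: List[Coord],
                          blacklist: Set[Coord]) -> Coord:
    """Route to an exploration stage by the graph matching score in [0, 1]."""
    if score == 0.0:
        return expand_frontier(frontiers, blacklist)
    if score < 1.0:
        return infer_goal_location(score)
    return verify_goal()

# Partial matching (score 0.4) routes to stage 2.
print(choose_long_term_goal(0.4, frontiers=[(1.0, 2.0)], blacklist=set()))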

Experiments


We evaluate our method on object-goal navigation (ON), instance-image-goal navigation (IIN) and text-goal navigation (TN).

[Results figure]

Navigation results on ON, IIN and TN. We compare the Success Rate (SR) and Success weighted by Path Length (SPL) of state-of-the-art methods in different settings.
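For reference, SR is the fraction of episodes in which the agent stops within a success radius of the goal, and SPL is the standard path-efficiency metric of Anderson et al. (2018):

\mathrm{SPL} = \frac{1}{N} \sum_{i=1}^{N} S_i \, \frac{\ell_i}{\max(p_i, \ell_i)},

where for episode $i$, $S_i \in \{0, 1\}$ indicates success, $\ell_i$ is the shortest-path distance from start to goal, and $p_i$ is the length of the path the agent actually took.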

[Decision-process figure]

Demonstration of the decision process of UniGoal. Here 'Switch' marks the point where the stage changes, and 'S-Goal' denotes the long-term goal predicted in each stage.

[Path visualization figure]

Visualization of the navigation path. We visualize ON (green), IIN (orange) and TN (blue) paths for several scenes. UniGoal successfully navigates to the target given different types of goals in diverse environments.

BibTeX


@article{yin2025unigoal,
  title={UniGoal: Towards Universal Zero-shot Goal-oriented Navigation},
  author={Hang Yin and Xiuwei Xu and Linqing Zhao and Ziwei Wang and Jie Zhou and Jiwen Lu},
  journal={arXiv preprint arXiv:2503.10630},
  year={2025}
}


© Hang Yin | Last update: Mar. 9, 2025