NavDreamer: Video Models as Zero-Shot 3D Navigators

Xijie Huang1,2, Weiqi Gai2,3, Tianyue Wu1, Congyu Wang1,2, Zhiyang Liu2, Xin Zhou2, Yuze Wu†1,2, Fei Gao†1,2
1State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou 310027, China.
2Differential Robotics, Hangzhou 311121, China.
3School of Aeronautic Science and Engineering, Beihang University, Beijing 100191, China.
Corresponding authors: Yuze Wu and Fei Gao
E-mail: xijiehuang@zju.edu.cn, wuyuze000@zju.edu.cn, fgaoaa@zju.edu.cn
System Overview

Abstract

Previous Vision-Language-Action models face critical limitations in navigation: diverse data is scarce due to labor-intensive collection, and static representations fail to capture temporal dynamics and physical laws. We propose NavDreamer, a video-based framework for 3D navigation that leverages generative video models as a universal interface between language instructions and navigation trajectories. Our main hypothesis is that video's ability to encode spatiotemporal information and physical dynamics, combined with its internet-scale availability, enables strong zero-shot generalization in navigation. To mitigate the stochasticity of generative predictions, we introduce a sampling-based optimization method that uses a VLM for trajectory scoring and selection. An inverse dynamics model then decodes executable waypoints from the generated video plans. To systematically evaluate this paradigm across several video model backbones, we introduce a comprehensive benchmark covering object navigation, precise navigation, spatial grounding, language control, and scene reasoning. Extensive experiments demonstrate robust generalization to novel objects and unseen environments, and ablation studies reveal that navigation's high-level decision-making nature makes it particularly well suited to video-based planning.

Framework & Methodology

1. Optimization through Generative Sampling

We generate $K$ independent navigation video candidates and use a VLM (Qwen3-VL) to score each candidate on action safety, scene consistency, and task performance, selecting the highest-scoring plan and effectively discarding "failed futures".

Optimization Framework
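The snippet below is a minimal sketch of this best-of-$K$ selection loop, assuming hypothetical `generate_video` and `score_with_vlm` interfaces (the actual video generator and VLM prompting are not specified here); it is meant to illustrate the sampling-and-scoring idea, not to reproduce our implementation.

```python
# Best-of-K candidate selection (illustrative sketch; helper names are placeholders).
from dataclasses import dataclass

@dataclass
class Candidate:
    video: object          # one generated navigation video (e.g., a list of frames)
    safety: float          # VLM score: action safety
    consistency: float     # VLM score: scene consistency
    task: float            # VLM score: task performance

def best_of_k(instruction, first_frame, generate_video, score_with_vlm,
              k=8, weights=(1.0, 1.0, 1.0)):
    """Sample K candidate video plans and keep the one the VLM ranks highest."""
    candidates = []
    for _ in range(k):
        video = generate_video(first_frame, instruction)   # stochastic rollout
        scores = score_with_vlm(video, instruction)        # dict with the three criteria
        candidates.append(Candidate(video, scores["safety"],
                                    scores["consistency"], scores["task"]))
    # Weighted sum over the three criteria; "failed futures" receive low scores
    # and are never selected for execution.
    w_s, w_c, w_t = weights
    return max(candidates,
               key=lambda c: w_s * c.safety + w_c * c.consistency + w_t * c.task)
```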

2. High-Level Action Decoding

Executable waypoints are decoded from the generated videos using $\pi^3$. To resolve scale ambiguity in outdoor environments, we incorporate metric depth priors to recover the absolute physical scale.

Action Decoder
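As a rough sketch of this decoding step, the code below recovers metric-scale waypoints from an up-to-scale video reconstruction by aligning a reconstructed depth map against a metric depth prior; `estimate_geometry` and `predict_metric_depth` are placeholder interfaces, not the actual $\pi^3$ or depth-model APIs.

```python
# Hedged sketch: decode camera translations from generated frames and rectify their scale.
import numpy as np

def rescale_waypoints(frames, estimate_geometry, predict_metric_depth):
    """Return waypoints (N x 3, meters) expressed in the first camera's frame."""
    # 1) Up-to-scale geometry: per-frame camera-to-world poses (4x4) and a depth map
    #    for the first frame, both in an arbitrary reconstruction scale.
    poses, recon_depth0 = estimate_geometry(frames)

    # 2) Metric prior: absolute depth (meters) predicted for the same first frame.
    metric_depth0 = predict_metric_depth(frames[0])

    # 3) Robust scale factor: median ratio over pixels with valid depth.
    valid = (recon_depth0 > 1e-3) & np.isfinite(metric_depth0)
    scale = np.median(metric_depth0[valid] / recon_depth0[valid])

    # 4) Waypoints: camera positions relative to the first frame, rescaled to meters.
    ref_inv = np.linalg.inv(poses[0])
    waypoints = [scale * (ref_inv @ T)[:3, 3] for T in poses]
    return np.stack(waypoints)
```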

2.1 Scale Correction Analysis

Our proposed scale correction module reduces the relative scale error from 54% to approximately 10%, enabling reliable navigation in large-scale outdoor environments.

Scale Correction
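For concreteness, one assumed way to define the relative scale error reported above is the deviation of the decoded trajectory's path length from a reference trajectory (e.g., from onboard odometry); the snippet below is an illustrative definition only, not our evaluation code.

```python
# Illustrative relative-scale-error metric (assumed definition, not the paper's script).
import numpy as np

def relative_scale_error(est_positions, gt_positions):
    """|s - 1|, where s matches the estimated path length to the reference path length."""
    est_len = np.sum(np.linalg.norm(np.diff(est_positions, axis=0), axis=1))
    gt_len = np.sum(np.linalg.norm(np.diff(gt_positions, axis=0), axis=1))
    return abs(est_len / gt_len - 1.0)
```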

Zero-shot Generalization

Leveraging generative video models, our framework achieves zero-shot generalization to novel scenes and navigation tasks.

Each demonstration pairs a generated video plan with real-world executions captured from ego and third-person views. Task descriptions:

Fly over the yellow cabinet and stop directly in front.
Navigate to the office chair and stop at a safe distance.
Navigate to the white column and stop at a safe distance.
Identify the room exit and navigate toward it.
Perform high-speed forward flight toward the distant tree.
Descend from the platform level to the floor area below.
Ascend vertically to gain a higher vantage point.
Precisely traverse through the center of the circular gate.
Navigate to the back of the rock.
Navigate to the supermarket.
Go through the gap between the two trees.
Navigate to the green tree and stop at a safe distance.

Citation

Pending indexing on Google Scholar.