Previous Vision-Language-Action models face critical limitations in navigation: diverse data is scarce because collection is labor-intensive, and static representations fail to capture temporal dynamics and physical laws. We propose NavDreamer, a video-based framework for 3D navigation that leverages generative video models as a universal interface between language instructions and navigation trajectories. Our main hypothesis is that video's ability to encode spatiotemporal information and physical dynamics, combined with its internet-scale availability, enables strong zero-shot generalization in navigation. To mitigate the stochasticity of generative predictions, we introduce a sampling-based optimization method that uses a VLM to score and select trajectories. An inverse dynamics model then decodes executable waypoints from the generated video plans. To systematically evaluate this paradigm across several video-model backbones, we introduce a comprehensive benchmark covering object navigation, precise navigation, spatial grounding, language control, and scene reasoning. Extensive experiments demonstrate robust generalization to novel objects and unseen environments, with ablation studies revealing that navigation's high-level decision-making nature makes it particularly well suited to video-based planning.
We generate $K$ independent navigation video candidates and utilize a VLM (Qwen3-VL) to evaluate them on action safety, scene consistency, and task performance, effectively bypassing "failed futures".
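A minimal sketch of this sampling-and-selection loop is given below, assuming hypothetical `video_model.sample` and `vlm.score` interfaces that stand in for the generative video backbone and the Qwen3-VL scorer; the equal weighting of the three criteria and the safety threshold are illustrative only, not the exact recipe.

```python
# Sketch: generate K candidate video plans, score them with a VLM, keep the best.
# `video_model` and `vlm` are hypothetical wrappers, not a specific library API.
from dataclasses import dataclass

@dataclass
class Candidate:
    video: object   # generated navigation video (e.g. a tensor of frames)
    scores: dict    # per-criterion scores in [0, 1]

def select_best_plan(video_model, vlm, observation, instruction, k=8):
    """Generate K video plans and keep the one the VLM rates highest."""
    candidates = []
    for _ in range(k):
        video = video_model.sample(observation, instruction)   # hypothetical call
        scores = vlm.score(                                     # hypothetical call
            video,
            criteria=["action_safety", "scene_consistency", "task_performance"],
        )
        candidates.append(Candidate(video=video, scores=scores))

    # Discard "failed futures" whose safety score is too low, then rank the
    # remaining candidates by the mean of the three criteria.
    viable = [c for c in candidates if c.scores["action_safety"] > 0.5]
    pool = viable if viable else candidates
    return max(pool, key=lambda c: sum(c.scores.values()) / len(c.scores))
```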
Executable waypoints are decoded from generated videos using $\pi^3$. To resolve scale ambiguity in outdoor environments, we incorporate metric depth priors to rectify the absolute physical scale.
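The sketch below illustrates one way such a rectification step can work, assuming the reconstruction (e.g. from $\pi^3$) yields per-frame depth and waypoint positions only up to an unknown global scale, and that an off-the-shelf metric monocular depth estimator supplies the prior; the median-ratio estimator and all names are our assumptions, not the paper's exact procedure.

```python
# Sketch: anchor an up-to-scale reconstruction to absolute metric scale
# using a monocular metric-depth prior (illustrative, not the paper's recipe).
import numpy as np

def rectify_scale(recon_depth, metric_depth, waypoints, valid_mask=None):
    """Rescale up-to-scale waypoints using a metric depth prior.

    recon_depth:  (H, W) depth map from the up-to-scale reconstruction
    metric_depth: (H, W) depth map from a metric monocular depth model
    waypoints:    (N, 3) waypoint positions in the reconstruction frame
    """
    if valid_mask is None:
        valid_mask = (recon_depth > 1e-3) & (metric_depth > 1e-3)

    # Robust global scale: median of per-pixel metric / reconstructed ratios.
    ratios = metric_depth[valid_mask] / recon_depth[valid_mask]
    scale = float(np.median(ratios))

    return scale * waypoints, scale
```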
Our proposed scale correction module reduces the relative scale error from 54% to approximately 10%, enabling reliable navigation in large-scale outdoor environments.
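For reference, one common way to quantify this (our assumed definition, not necessarily the metric used in the paper) is the relative deviation of the estimated global scale $\hat{s}$ from the ground-truth scale $s^{\ast}$:

$$e_{\mathrm{scale}} = \frac{|\hat{s} - s^{\ast}|}{s^{\ast}},$$

so the reported improvement corresponds to $e_{\mathrm{scale}}$ dropping from roughly 0.54 to about 0.10.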
Our model achieves zero-shot generalization to novel scenes and tasks by leveraging generative video models.