Effective robot navigation in unseen environments is a challenging task that requires precise control actions at high frequencies. Recent advances have framed it as an image-goal-conditioned control problem, where the robot generates navigation actions from front-facing RGB images. Current state-of-the-art methods use diffusion policies to generate these control actions. Despite promising results, these models are computationally expensive and suffer from weak perception. To address these limitations, we present FlowNav, a novel approach that combines Conditional Flow Matching (CFM) with depth priors from off-the-shelf foundation models to learn action policies for robot navigation. FlowNav is significantly more accurate at navigation and exploration than state-of-the-art methods. We validate our contributions through real-robot experiments in multiple unseen environments, demonstrating improved navigation reliability and accuracy. We make the code and trained models publicly available.

Comment: Submitted to IROS'25. A previous version was accepted at the CoRL 2024 workshop on Learning Effective Abstractions for Planning (LEAP) and the workshop on Differentiable Optimization Everywhere: Simulation, Estimation, Learning, and Control.
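To make the CFM objective concrete, the sketch below shows a minimal flow-matching training step for an action policy, assuming a straight-line (rectified-flow) probability path between Gaussian noise and expert actions. The network `VelocityNet`, its inputs (a fused observation embedding standing in for RGB and depth features), and all dimensions are hypothetical illustrations for exposition, not the authors' FlowNav implementation.

```python
# Minimal Conditional Flow Matching (CFM) training step for an action policy,
# assuming a straight-line (rectified-flow) path from noise to expert actions.
# All names and shapes here are illustrative assumptions.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Predicts the flow velocity for an action chunk, conditioned on an
    observation embedding (e.g., fused RGB + depth features) and time t."""
    def __init__(self, action_dim: int, obs_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(action_dim + obs_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, x_t, obs, t):
        return self.net(torch.cat([x_t, obs, t], dim=-1))

def cfm_loss(velocity_net, actions, obs):
    """CFM objective: regress the velocity of the straight path from a
    Gaussian noise sample x0 to the ground-truth actions x1."""
    x1 = actions                              # expert action targets
    x0 = torch.randn_like(x1)                 # noise sample
    t = torch.rand(x1.shape[0], 1)            # uniform time in [0, 1]
    x_t = (1.0 - t) * x0 + t * x1             # point on the linear path
    u_t = x1 - x0                             # constant target velocity
    v_pred = velocity_net(x_t, obs, t)
    return ((v_pred - u_t) ** 2).mean()
```

At inference time, actions would be generated by integrating the learned velocity field from noise (e.g., with a few Euler steps), which typically requires far fewer network evaluations than the iterative denoising loop of a diffusion policy; this is the usual efficiency argument for flow matching over diffusion.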