Which sensor fusion methods reduce localization drift in drones?

Localization drift arises when a drone's estimated position diverges from its true position over time, driven by sensor noise, biases, and unmodeled dynamics. This matters for safety, mission success, and legal compliance: mapping errors can misrepresent territorial boundaries or cause harm to sensitive ecosystems during inspection flights. Foundational work in probabilistic robotics by Sebastian Thrun (Stanford University) established the importance of probabilistic sensor fusion and state estimation for reducing such drift, framing the methods used today.

Visual-inertial and filter-based fusion

Combining cameras with inertial measurement units through visual-inertial odometry (VIO) reduces short-term drift by exploiting complementary sensor characteristics: the IMU gives high-rate motion priors while the camera provides drift-correcting visual constraints. A practical filter example is the Multi-State Constraint Kalman Filter (MSCKF), developed by Anastasios I. Mourikis and Stergios I. Roumeliotis (University of Minnesota), which enforces multi-frame visual constraints inside an extended Kalman filter to tightly couple inertial and visual data and limit error growth. Filter-based approaches such as the Extended Kalman Filter (EKF) or Unscented Kalman Filter (UKF) remain useful for low-latency onboard estimation but can suffer if linearization or Gaussian assumptions break down; careful modeling of sensor biases and noise characteristics is essential.
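The predict/update pattern behind these filters can be sketched in a deliberately tiny 1-D example (not the MSCKF itself, which tracks a sliding window of camera poses): high-rate IMU predictions accumulate drift from a sensor bias, and low-rate camera-style position fixes correct it. All names, noise values, and the 0.2 m/s² bias are illustrative assumptions, not values from the text.

```python
import numpy as np

def kf_predict(x, P, a, dt, q=0.05):
    """Propagate state [position, velocity] with an IMU acceleration input."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])         # how acceleration enters the state
    x = F @ x + B * a
    P = F @ P @ F.T + q * np.outer(B, B)    # uncertainty grows while dead reckoning
    return x, P

def kf_update(x, P, z, r=0.1):
    """Correct with a camera-derived position fix z (slow but drift-free)."""
    H = np.array([[1.0, 0.0]])              # we observe position only
    S = H @ P @ H.T + r
    K = (P @ H.T) / S                       # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy run: a biased IMU (reads 0.2 m/s^2 while true acceleration is zero)
# makes pure dead reckoning drift, while 1 Hz position fixes bound the error.
dt, bias = 0.1, 0.2
x, P = np.array([0.0, 1.0]), np.eye(2)      # true motion: 1 m/s, constant
x_dr = x.copy()
for k in range(1, 101):
    x, P = kf_predict(x, P, a=bias, dt=dt)
    x_dr = np.array([[1.0, dt], [0.0, 1.0]]) @ x_dr + np.array([0.5*dt**2, dt]) * bias
    if k % 10 == 0:
        x, P = kf_update(x, P, z=k * dt * 1.0)   # exact fixes, for clarity
err_fused = abs(x[0] - 10.0)
err_dr = abs(x_dr[0] - 10.0)   # dead reckoning ends ~10 m off after 10 s
```

The same structure scales to the real problem: richer state (attitude, biases, extrinsics), nonlinear models handled by linearization (EKF) or sigma points (UKF), and visual constraints spanning several past poses as in the MSCKF.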

Graph-based smoothing and loop closure

Long-term drift is most effectively mitigated by graph-based optimization and loop closure. Incremental smoothing frameworks such as iSAM2, developed by Frank Dellaert's group at the Georgia Institute of Technology, represent states and measurements in a factor graph and repeatedly relinearize and optimize to maintain global consistency, reducing accumulated drift when revisiting areas. Visual SLAM systems that add loop closure and global bundle adjustment, exemplified by ORB-SLAM by Raúl Mur-Artal and Juan D. Tardós (University of Zaragoza), detect previously seen places and apply pose-graph corrections that can dramatically collapse drift accumulated during exploratory flight. LiDAR-based odometry and mapping methods, often paired with graph optimization, provide robust performance in low-light or textureless environments where cameras struggle, trading sensor cost and computational load for improved resilience.
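The core idea of a pose-graph solve can be shown with a minimal 1-D chain (a sketch, not iSAM2 or ORB-SLAM's back end): odometry edges carry a small systematic drift, and one loop-closure edge back to the start redistributes the accumulated error by least squares. The function name, weights, and drift magnitude are all illustrative assumptions.

```python
import numpy as np

def optimize_pose_chain(odom, loop_z, loop_w=10.0, anchor_w=100.0):
    """Least-squares solve for 1-D poses x_0..x_n in a simple pose graph.

    odom[i] constrains x_{i+1} - x_i; loop_z constrains x_n - x_0
    (a loop closure back to the start). Row weights scale each residual."""
    n = len(odom) + 1
    rows, rhs = [], []
    # Anchor: pin x_0 at the origin so the system is well-posed.
    r = np.zeros(n); r[0] = anchor_w
    rows.append(r); rhs.append(0.0)
    # Odometry edges: each relative measurement carries a little drift.
    for i, u in enumerate(odom):
        r = np.zeros(n); r[i] = -1.0; r[i + 1] = 1.0
        rows.append(r); rhs.append(u)
    # Loop-closure edge: a strongly weighted, drift-free relative measurement.
    r = np.zeros(n); r[0] = -loop_w; r[-1] = loop_w
    rows.append(r); rhs.append(loop_w * loop_z)
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x

# Ten odometry steps, each overestimated by 0.1 m (systematic drift);
# the loop closure says the net displacement is really 10.0 m.
odom = [1.1] * 10
x = optimize_pose_chain(odom, loop_z=10.0)
err_opt = abs(x[-1] - 10.0)
err_dr = abs(sum(odom) - 10.0)   # pure dead reckoning ends 1.0 m off
```

Real systems replace scalars with SE(3) poses, use robust cost functions against bad loop closures, and solve incrementally (as iSAM2 does) rather than from scratch, but the drift-collapsing mechanism is the same.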

Sensor fusion strategies often combine these classes: a VIO front end for fast estimates and an optimization back end for periodic global correction. Recent research led by Davide Scaramuzza (University of Zurich and ETH Zurich) emphasizes tight coupling, sensor calibration, and learning-based outlier rejection to further constrain drift sources.
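The front-end/back-end split can be sketched as a small pattern (class and method names are hypothetical, and the 1-D pose is for brevity): the fast integrator never blocks, and the slow optimizer's result is folded in as an offset whenever it arrives.

```python
class FusionPipeline:
    """Toy front-end/back-end split for a fused localization estimate."""

    def __init__(self):
        self.raw_pose = 0.0   # front-end dead-reckoned pose (drifts)
        self.offset = 0.0     # back-end correction, applied lazily

    def front_end_step(self, delta):
        # High-rate path: integrate a VIO/odometry increment immediately.
        self.raw_pose += delta
        return self.raw_pose + self.offset

    def back_end_correct(self, optimized_pose):
        # Low-rate path: absorb a globally optimized pose (e.g. after a
        # loop closure) without stalling or rewinding the front end.
        self.offset = optimized_pose - self.raw_pose

p = FusionPipeline()
for _ in range(10):
    est = p.front_end_step(1.1)   # drifting increments (true step is 1.0 m)
p.back_end_correct(10.0)          # back end: net displacement is really 10 m
est = p.front_end_step(1.0)       # subsequent front-end output is corrected
```

Production systems carry full pose histories and covariances across this boundary, but the separation of a low-latency estimator from a periodic global corrector is exactly the architecture the paragraph describes.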

Reducing localization drift has direct operational and societal consequences. Improved accuracy enables beyond-visual-line-of-sight operations, safer inspection of infrastructure, and reliable ecological monitoring that supports territorial stewardship and indigenous land management. Conversely, residual drift can lead to misaligned maps, failed autonomous maneuvers, and mistrust of drone-collected data in legal or environmental contexts. Implementers must therefore balance computational cost, sensor selection, and algorithmic complexity against mission requirements, applying proven methods such as MSCKF, iSAM2, and loop-closure-capable SLAM systems to achieve robust, low-drift localization.