Obstacle avoidance systems in drones combine sensing, estimation, and control to detect hazards and change flight paths before collisions occur. At a technical level they fuse information from multiple sensors to create a local geometric or occupancy representation of the world, estimate the drone’s pose relative to that world, and generate safe trajectories in real time. Designers treat this as a pipeline where each stage must manage uncertainty and latency to preserve safety under dynamic conditions.
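The sense-estimate-plan loop described above can be sketched as a skeleton control cycle. This is an illustrative sketch, not a real flight stack: the stage functions, their return values, and the 50 ms cycle budget are all assumptions standing in for an actual perception, estimation, and planning implementation.

```python
import time

CYCLE_BUDGET_S = 0.05   # assumed per-iteration latency budget (hypothetical)

def sense():
    # Placeholder: would read cameras, LiDAR, radar, rangefinders.
    return {"ranges_m": [1.2, 0.8, 2.5]}

def estimate(measurement):
    # Placeholder: would fuse measurements into a local world model.
    return {"nearest_obstacle_m": min(measurement["ranges_m"])}

def plan(world):
    # Placeholder: slow down as obstacles get closer (toy policy).
    return {"speed_mps": min(2.0, world["nearest_obstacle_m"])}

def control_cycle():
    start = time.monotonic()
    command = plan(estimate(sense()))
    # Latency matters: a command computed too late describes a stale world.
    command["over_budget"] = (time.monotonic() - start) > CYCLE_BUDGET_S
    return command

cmd = control_cycle()
```

The point of the latency check is the pipeline view from the paragraph above: each stage adds delay, and the whole chain must finish within a fixed budget for the command to remain valid against a moving world.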
Sensor fusion and perception
Modern systems rely on complementary sensors: stereo or monocular cameras for visual cues, LiDAR for dense depth scans, time-of-flight and ultrasonic rangefinders for short-range measurements, and radar for robustness in poor visibility. Probabilistic approaches are central because measurements are noisy; Sebastian Thrun (Stanford University), Wolfram Burgard (University of Freiburg), and Dieter Fox (University of Washington) explain these foundations in Probabilistic Robotics, showing how Bayesian filters and occupancy grids represent uncertainty and integrate heterogeneous observations. Sensor fusion produces either a depth map or a 3D point cloud, and often an occupancy grid that labels free, occupied, and unknown space. In practice, sensor choice and mounting depend on mission constraints: visual systems are light and inexpensive but degrade in low light, while LiDARs provide accurate ranges at higher weight and cost.

Perception also distinguishes static from dynamic obstacles. Computer vision methods use optical flow, object detection, and scene flow to identify moving people, vehicles, or animals. Deep learning models trained for object recognition add semantic understanding, which matters when regulatory rules treat humans differently from inanimate obstacles.
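The occupancy-grid update mentioned above is usually done in log-odds form, following the Bayesian filtering treatment in Probabilistic Robotics. The sketch below shows the core idea on a tiny 1-D grid; the inverse sensor model probabilities P_HIT and P_MISS are illustrative assumptions, not tuned constants, and a real grid would be 2-D or 3-D with ray casting.

```python
import math

P_HIT = 0.7    # assumed P(occupied | beam endpoint landed in the cell)
P_MISS = 0.4   # assumed P(occupied | beam passed through the cell)

def logit(p):
    # Convert a probability to log-odds.
    return math.log(p / (1.0 - p))

def prob(logodds):
    # Convert log-odds back to a probability.
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

class OccupancyGrid1D:
    """Tiny 1-D grid: each cell holds a log-odds belief, starting at
    0.0 (probability 0.5, i.e. unknown)."""
    def __init__(self, n):
        self.logodds = [0.0] * n

    def update(self, cell, hit):
        # Bayesian update in log-odds form: additive, so repeated
        # observations accumulate evidence.
        self.logodds[cell] += logit(P_HIT if hit else P_MISS)

    def probability(self, cell):
        return prob(self.logodds[cell])

grid = OccupancyGrid1D(5)
for _ in range(3):           # three beams end in cell 4 ...
    grid.update(4, hit=True)
    for c in range(4):       # ... after passing through cells 0-3
        grid.update(c, hit=False)
# cell 4 now looks occupied; cells 0-3 look free
```

Working in log-odds keeps the update to a single addition per cell and avoids renormalizing probabilities, which is why it is the standard formulation for fusing many noisy range measurements.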
Planning, control, and system-level design
Once space is represented, the drone must plan safe motions. Path planning ranges from global search methods such as A* on precomputed maps to sampling-based planners and reactive methods such as the Dynamic Window Approach, which computes collision-free velocities on the fly. Roland Siegwart (ETH Zurich) emphasizes combining deliberative planners with fast reactive layers in mobile robotics to handle unexpected obstacles while following mission goals. Low-level flight controllers translate planned trajectories into motor commands subject to stability and thrust constraints; closed-loop feedback corrects for wind and sensor drift.

Redundancy and graceful degradation are critical: multiple sensor modalities and fallback behaviors (hover, retreat, or land) reduce catastrophe risk. Edge cases remain challenging: thin wires, transparent surfaces, and GPS-denied urban canyons present detection and localization difficulties that engineers must mitigate with specialized sensors and conservative safety margins.
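The reactive layer described above can be illustrated with a toy Dynamic Window Approach step: sample candidate velocity commands, forward-simulate each, reject those that pass too close to an obstacle, and score the rest. This is a sketch of the idea only; a full DWA also restricts samples to the acceleration-limited dynamic window, and the obstacle positions, horizon, and scoring weights here are assumptions.

```python
import math

OBSTACLES = [(2.0, 0.0)]     # assumed point obstacles in the plane
GOAL = (4.0, 0.0)            # assumed goal position
DT, STEPS = 0.1, 10          # forward-simulation step size and horizon

def rollout(x, y, yaw, v, w):
    """Forward-simulate a constant (v, w) command; return the final
    pose and the minimum clearance to any obstacle along the way."""
    clearance = float("inf")
    for _ in range(STEPS):
        x += v * math.cos(yaw) * DT
        y += v * math.sin(yaw) * DT
        yaw += w * DT
        for ox, oy in OBSTACLES:
            clearance = min(clearance, math.hypot(x - ox, y - oy))
    return x, y, yaw, clearance

def choose_velocity(x, y, yaw):
    """Score sampled (v, w) pairs: reject colliding trajectories,
    then trade off goal heading, clearance, and forward speed."""
    best, best_score = (0.0, 0.0), -float("inf")
    for v in (0.5, 1.0, 1.5):
        for w in (-1.0, -0.5, 0.0, 0.5, 1.0):
            fx, fy, fyaw, clear = rollout(x, y, yaw, v, w)
            if clear < 0.3:   # would pass too close: reject outright
                continue
            heading = -abs(math.atan2(GOAL[1] - fy, GOAL[0] - fx) - fyaw)
            score = heading + 0.5 * clear + 0.2 * v  # weights are assumptions
            if score > best_score:
                best, best_score = (v, w), score
    return best

v, w = choose_velocity(0.0, 0.0, 0.0)
```

Because scoring happens over short forward simulations rather than a full map search, a step like this runs at control-loop rates, which is what lets the reactive layer catch obstacles the deliberative planner never saw.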