Heavy rain degrades optical perception through scattering, attenuation, motion streaks, and lens contamination. Kshitiz Garg and Shree K. Nayar of Columbia University analyzed rain’s photometric and geometric effects on imaging in “Vision and Rain” and showed how streaks and transparency changes produce false edges and depth errors. These physical mechanisms explain why standard RGB-based neural networks and stereo pipelines fail: features are masked or shifted, and temporal consistency is violated by fast-moving droplets.
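To make the temporal-consistency violation concrete, the sketch below applies the kind of per-pixel intensity-spike test suggested by Garg and Nayar’s photometric analysis: a falling drop brightens a pixel for roughly one frame, so a candidate rain pixel is brighter than both of its temporal neighbors. This is a minimal sketch, not the authors’ reference implementation; the function names, the grayscale-frame assumption, and the threshold c are illustrative choices.

```python
import numpy as np

def rain_candidate_mask(prev_frame, cur_frame, next_frame, c=3.0):
    """Flag pixels whose intensity spikes briefly in the middle frame.

    A rain streak brightens a pixel for roughly one frame, so a pixel is
    a rain candidate when the current frame is brighter than BOTH its
    temporal neighbors by at least c gray levels (a simplification of
    the photometric constraint analyzed by Garg and Nayar).
    """
    prev_f = prev_frame.astype(np.float32)
    cur_f = cur_frame.astype(np.float32)
    next_f = next_frame.astype(np.float32)
    spike_up = (cur_f - prev_f) >= c    # brighter than the previous frame
    spike_down = (cur_f - next_f) >= c  # brighter than the next frame
    return spike_up & spike_down

def remove_rain(prev_frame, cur_frame, next_frame, c=3.0):
    """Replace candidate rain pixels with the mean of temporal neighbors."""
    mask = rain_candidate_mask(prev_frame, cur_frame, next_frame, c)
    restored = cur_frame.astype(np.float32)
    neighbor_mean = 0.5 * (prev_frame.astype(np.float32)
                           + next_frame.astype(np.float32))
    restored[mask] = neighbor_mean[mask]
    return restored.astype(cur_frame.dtype), mask
```

Replacing flagged pixels with the temporal mean of their neighbors is the simplest possible de-raining step; it fails when streaks overlap fast-moving objects, which is one reason the multi-sensor approaches discussed below matter.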
Physical causes and perception failure modes
Rain causes volumetric scattering that reduces contrast, specular highlights on wet surfaces that alter apparent reflectance, and transient occlusions from raindrops. Together these effects produce both persistent biases and short-term noise in sensor feeds. Algorithms that assume static scene radiance or clean optics will produce spurious detections and erroneous depth, with safety-critical consequences for navigation and obstacle avoidance.
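The contrast loss from volumetric scattering has a standard first-order description. Below is a minimal sketch using the Koschmieder model often borrowed from dehazing work to approximate rain’s volumetric term; the extinction coefficient beta and the airlight value are illustrative assumptions, since in real rain both depend on rain rate and drop size distribution.

```python
import numpy as np

def attenuate(radiance, depth, beta=0.08, airlight=0.8):
    """Simulate volumetric scattering with the Koschmieder model.

    Observed intensity = direct transmission + scattered airlight:
        I = J * t + A * (1 - t),  where  t = exp(-beta * depth).
    beta and airlight are illustrative values, not measured constants.
    """
    t = np.exp(-beta * depth)  # per-pixel transmission along the ray
    return radiance * t + airlight * (1.0 - t)

def restore(observed, depth, beta=0.08, airlight=0.8):
    """Naive inversion of the same model; amplifies noise at large depth."""
    t = np.maximum(np.exp(-beta * depth), 0.05)  # clamp to avoid blow-up
    return (observed - airlight * (1.0 - t)) / t
```

Because transmission falls as exp(-beta * depth), an obstacle’s contrast decays exponentially with distance: distant targets vanish from camera feeds first, and naive inversion amplifies noise exactly where range matters most.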
Algorithms and sensors that perform best
The most effective solutions combine hardware choices with algorithmic ones. Sensor fusion that integrates automotive radar with vision and LiDAR is widely recommended because radar penetrates precipitation far better and provides reliable range and velocity cues; research groups at Waymo and major automotive labs report improved detection in rain when radar returns are fused with camera semantics. Event cameras, studied by Davide Scaramuzza’s group at the University of Zurich and ETH Zurich, provide asynchronous, high-temporal-resolution measurements that remain robust to motion blur, and event-based pipelines can filter rain-induced events while preserving the motion signatures of genuine objects. Gated imaging and active illumination reduce volumetric backscatter by timing short illumination pulses, and both academic and industry work have demonstrated improved range in particulate conditions.

On the algorithm side, physics-aware de-raining models grounded in the work of Garg and Nayar help restore image structure before downstream perception, and modern deep-learning approaches that incorporate domain adaptation and uncertainty estimation reduce overconfidence on degraded inputs.
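One way these pieces fit together is uncertainty-weighted fusion: if the vision stack reports an honestly inflated variance on degraded frames, a maximum-likelihood combiner automatically shifts weight toward radar. The sketch below illustrates that idea with made-up variance numbers; it is a minimal example of inverse-variance fusion, not any vendor’s pipeline.

```python
import numpy as np

def fuse_range(radar_range, radar_var, cam_range, cam_var):
    """Inverse-variance (maximum-likelihood) fusion of two range estimates.

    Each sensor contributes in proportion to its confidence. A stack that
    inflates cam_var on rain-degraded frames automatically leans on radar,
    which is far less affected by precipitation.
    """
    w_radar = 1.0 / radar_var
    w_cam = 1.0 / cam_var
    fused = (w_radar * radar_range + w_cam * cam_range) / (w_radar + w_cam)
    fused_var = 1.0 / (w_radar + w_cam)
    return fused, fused_var

# Illustrative numbers: clear weather vs. heavy rain.
print(fuse_range(42.0, 0.25, 40.5, 0.5))   # camera trusted -> fused near 41.5
print(fuse_range(42.0, 0.25, 40.5, 25.0))  # inflated camera variance -> ~42.0
```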
Relevance, consequences and local nuances
Choosing a perception stack affects deployment decisions and public trust. In monsoon-prone regions and coastal territories where heavy rain is frequent, reliance on camera-only stacks can increase collision risk and slow adoption of autonomous services. Environmental factors such as spray from standing water, together with regional infrastructure (drainage quality, road markings), further modulate algorithm performance. Operational strategies such as lowered speeds, human oversight, and region-specific training data are necessary complements to improved sensors and models for maintaining safety and reliability.