Low-light object detection is critical in computer vision, with widespread applications in nighttime surveillance and autonomous driving. Low-light environments pose several challenges, including increased image noise, insufficient contrast, blur, and color distortion, all of which degrade target discernibility, particularly for small objects; the requirement for real-time processing further complicates image handling under these conditions. We introduce a novel approach that improves low-light detection accuracy while maintaining a compact model, enhancing the YOLOv7-tiny network through several key innovations. First, we replace the simplified efficient layer aggregation network (ELAN-L) with an enhanced ELAN with a fused diverse branch block (ELAN-DBB), which better integrates features across branches and layers to improve information extraction from low-light images. Second, we incorporate a new neck structure built around the VoVGSCSP-DCN-CBAM module, an efficient cross-stage partial network module that integrates the Convolutional Block Attention Module (CBAM) and Deformable Convolutional Networks (DCN); it combines attention mechanisms with depthwise separable convolutions to strengthen feature fusion and improve detection performance in low-light conditions. Finally, to improve spatial awareness, we use CoordConv in place of conventional convolution.
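To make the CoordConv substitution concrete, the sketch below shows the core idea in NumPy: two normalized coordinate channels are concatenated to a feature map before a convolution is applied, so the filter sees explicit spatial position. This is a minimal, generic illustration of the CoordConv technique, not the authors' implementation; the function name `add_coord_channels` and the tensor layout are assumptions for the example.

```python
import numpy as np

def add_coord_channels(feat):
    """Append normalized x/y coordinate channels to a feature map.

    feat: array of shape (C, H, W); returns shape (C + 2, H, W).
    A convolution applied to the result is position-aware, which is
    the mechanism CoordConv uses to improve spatial awareness.
    """
    c, h, w = feat.shape
    # Row (y) and column (x) coordinates normalized to [-1, 1].
    ys = np.linspace(-1.0, 1.0, h).reshape(h, 1).repeat(w, axis=1)
    xs = np.linspace(-1.0, 1.0, w).reshape(1, w).repeat(h, axis=0)
    return np.concatenate([feat, ys[None], xs[None]], axis=0)

feat = np.zeros((16, 8, 10), dtype=np.float32)
out = add_coord_channels(feat)
print(out.shape)  # (18, 8, 10)
```

In a detection head, the convolution that follows simply takes `C + 2` input channels; the added channels cost almost no computation, which fits the paper's emphasis on keeping the model compact.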