Fires frequently result in significant casualties and property damage across various contexts. To explore a detection method that can accurately identify fire precursors, we propose a fire detection method based on an improved YOLOv8n that accurately identifies flame and smoke. First, the backbone is enhanced with channel shuffling and reparameterized convolution, and computational efficiency is improved by aggregating multiple feature cascades in a single operation. A linearly weighted self-attention mechanism strengthens the feature extraction network's ability to capture information from three-dimensional feature maps. In the neck network, the input feature map is dynamically upsampled with a point-sampling-based method, which effectively improves feature extraction from high-resolution feature maps. Finally, a detection head built on an adaptive spatial feature fusion strategy is proposed to filter spatially conflicting information while suppressing inconsistency among features at different scales, thereby improving detection of objects at different scales. Experiments on the flame and smoke detection dataset proposed in this study show that the accuracy, recall, mAP50, and mAP95 of the improved network model reached 88.32%, 81.97%, 90.37%, and 68.20%, respectively, which are 7.01, 3.84, 4.16, and 6.87 percentage points higher than the baseline model. All metrics are significantly better than those of the baseline and existing methods, and the model still meets real-time requirements.
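To make two of the described components concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a standard channel-shuffle step (as used alongside reparameterized convolution) and a simplified adaptive spatial feature fusion that blends same-resolution feature maps with learned per-pixel softmax weights. The module name `ASFFFusion` and its single 1x1 weighting convolution are illustrative assumptions.

```python
import torch
import torch.nn as nn


def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups so information mixes between branches."""
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)


class ASFFFusion(nn.Module):
    """Hypothetical simplification of adaptive spatial feature fusion:
    three aligned feature maps are combined with per-pixel softmax weights,
    down-weighting spatially conflicting responses."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv predicts one weight map per input branch.
        self.weight_conv = nn.Conv2d(channels * 3, 3, kernel_size=1)

    def forward(self, f0: torch.Tensor, f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.weight_conv(torch.cat([f0, f1, f2], dim=1)), dim=1)
        return f0 * w[:, 0:1] + f1 * w[:, 1:2] + f2 * w[:, 2:3]


if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)
    print(channel_shuffle(x, groups=4).shape)  # torch.Size([1, 64, 80, 80])
    fuse = ASFFFusion(64)
    print(fuse(x, torch.randn_like(x), torch.randn_like(x)).shape)
```

In the paper's setting the fused branches would first be resized to a common resolution before weighting; that alignment step is omitted here for brevity.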
Keywords: Flame, Object detection, Convolution, Feature fusion, Data modeling, Education and training, Feature extraction