Abstract: Safety helmet wearing detection is a critical component of factory safety: it applies pattern recognition methods to monitor workers' helmet use in real time and thereby enables intelligent surveillance. In construction and factory environments, workers in surveillance footage vary widely in scale and the scenes are complex, which makes feature extraction challenging. To address this, the Helmet-YOLO algorithm is proposed based on YOLOv10n. A SimC2f module is designed in the backbone network to strengthen the extraction and representation of helmet features in complex scenes. A dynamic selective attention mechanism is adopted in the neck network so that long-range semantic information is fully exploited during feature fusion, and a lightweight dynamic upsampling operator is introduced to improve upsampling quality. Experimental results on the SHWD dataset show that the algorithm achieves 91.5% mAP50 and 58.2% mAP50-95. With an increase of only 0.3 GFLOPs, Helmet-YOLO improves mAP50 and mAP50-95 by 2.2% and 1.3%, respectively, over YOLOv10n, demonstrating better detection performance.
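
The abstract does not define the SimC2f module; the following is a minimal PyTorch sketch under the assumption that SimC2f denotes a C2f-style split/concat block augmented with SimAM-style parameter-free attention. The class names, channel layout, and the placement of the attention are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SimAM(nn.Module):
    # Parameter-free attention: each activation is reweighted by an energy term
    # measuring how much it deviates from its channel-wise spatial mean.
    def __init__(self, eps: float = 1e-4):
        super().__init__()
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        n = h * w - 1
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # squared deviation per position
        v = d.sum(dim=(2, 3), keepdim=True) / n              # channel-wise variance estimate
        e_inv = d / (4 * (v + self.eps)) + 0.5                # inverse energy per position
        return x * torch.sigmoid(e_inv)                       # reweight activations


class Bottleneck(nn.Module):
    # Residual 3x3 conv block, as used inside C2f-style modules.
    def __init__(self, c: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1, bias=False), nn.BatchNorm2d(c), nn.SiLU(),
            nn.Conv2d(c, c, 3, padding=1, bias=False), nn.BatchNorm2d(c), nn.SiLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.block(x)


class SimC2f(nn.Module):
    # Hypothetical SimC2f: split features, pass one branch through stacked
    # bottlenecks, concatenate all intermediate outputs, fuse with a 1x1 conv,
    # and apply SimAM attention to the fused result.
    def __init__(self, c_in: int, c_out: int, n: int = 1):
        super().__init__()
        c = c_out // 2
        self.cv1 = nn.Conv2d(c_in, 2 * c, 1, bias=False)
        self.blocks = nn.ModuleList(Bottleneck(c) for _ in range(n))
        self.cv2 = nn.Conv2d((2 + n) * c, c_out, 1, bias=False)
        self.attn = SimAM()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = list(self.cv1(x).chunk(2, dim=1))
        for m in self.blocks:
            y.append(m(y[-1]))
        return self.attn(self.cv2(torch.cat(y, dim=1)))


if __name__ == "__main__":
    # Shape check on a dummy backbone feature map.
    feat = torch.randn(1, 64, 80, 80)
    print(SimC2f(64, 128, n=2)(feat).shape)  # torch.Size([1, 128, 80, 80])
```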