Abstract: Timely detection of oil leakage in power grid transformers is particularly important for the safe and stable operation of the power grid. Traditional oil leakage detection relies mainly on periodic manual inspection, which cannot provide all-weather monitoring and suffers from lag. When current mainstream object detection models are applied to transformer oil leakage detection, they exhibit slow detection speed, low accuracy, and poor robustness, and therefore cannot meet the requirements of practical application. To address these problems, an improved You Only Look Once Version 4 (YoloV4) method for transformer oil leakage detection is proposed. Firstly, Mobile Vision Transformer (Mobile-ViT) is introduced as the backbone of the model; its combination of convolution and Transformer structures effectively extracts both local and global features of the object while reducing computation. Secondly, a multi-scale feature fusion layer is proposed to fuse local and global information across scales and enhance contextual semantic expression, thereby better supporting oil leakage detection on power grid transformers. Experimental results show that the proposed method achieves a detection accuracy of 95.3% on the power grid transformer oil leakage dataset at a detection speed of 50.6 frames per second; compared with the native YoloV4, detection accuracy improves by 2.6% and detection speed by 2.6 frames per second. In practical deployment, the method also reaches an inference speed of 43 frames per second on edge devices, meeting the needs of practical engineering.