Abstract: To address the low accuracy and poor real-time performance of obstacle detection for unmanned vehicles in campus scenes, an obstacle detection method based on an improved YOLOv3 (You Only Look Once) and stereo vision, named YOLOv3-CAMPUS, is proposed. By improving the structure of the Darknet-53 feature extraction network, the forward inference time is reduced and detection speed is increased. Detection accuracy and target localization accuracy are improved by adding a feature fusion scale, and the target localization loss function is improved with GIoU (Generalized Intersection over Union). An enhanced K-means algorithm reduces the clustering deviation caused by the choice of initial cluster centers, further improving detection accuracy. In addition, the depth of the predicted bounding box's center point is obtained with a stereo vision camera, so the distance between the obstacle and the unmanned vehicle can be measured. Experimental results show that, compared with the original model, the proposed method improves average accuracy by 4.19% and detection speed by 5.1 fps on the mixed campus dataset (KITTI + PennFudanPED). On the self-built campus dataset (HD-Campus), average accuracy reaches 98.57%, and the improved method satisfies real-time requirements.
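To illustrate the GIoU-based localization loss mentioned above, the following is a minimal sketch, not the paper's implementation, of computing GIoU and the corresponding loss for two boxes in corner format; the (x1, y1, x2, y2) box layout and the epsilon guard are assumptions made for the example.

```python
def giou_loss(box_a, box_b, eps=1e-9):
    """GIoU loss for two axis-aligned boxes given as (x1, y1, x2, y2).

    Minimal sketch of the Generalized IoU loss: GIoU = IoU - (C - U) / C,
    where C is the area of the smallest box enclosing both inputs.
    """
    # Intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    # Union of the two boxes
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    iou = inter / (union + eps)

    # Smallest enclosing box C
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    area_c = (cx2 - cx1) * (cy2 - cy1)

    giou = iou - (area_c - union) / (area_c + eps)
    return 1.0 - giou  # loss shrinks as the boxes overlap and align


# Example: a prediction that partially overlaps the ground-truth box
print(giou_loss((50, 50, 150, 150), (100, 100, 200, 200)))
```

Unlike plain IoU, this loss still provides a gradient when the predicted and ground-truth boxes do not overlap, which is the usual motivation for adopting GIoU as a localization loss.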
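The stereo ranging step can likewise be summarized by the standard triangulation relation Z = f·B/d. The sketch below assumes a calibrated rig with a known focal length and baseline; the numeric values are illustrative (a KITTI-like setup), not the paper's calibration, and the disparity of the box center is assumed to come from the stereo camera's matching stage.

```python
def stereo_distance(focal_px, baseline_m, disparity_px):
    """Distance to a point from a calibrated stereo rig via triangulation.

    Z = f * B / d, where f is the focal length in pixels, B the baseline
    between the two cameras in meters, and d the disparity (in pixels) of
    the predicted bounding box's center point.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px


# Example with assumed calibration values (not from the paper):
# 721 px focal length, 0.54 m baseline, 30 px disparity -> ~13 m
print(stereo_distance(721.0, 0.54, 30.0))
```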