Abstract: Feature point matching between consecutive image frames is a key technology in Visual Simultaneous Localization and Mapping (VSLAM). To address the long runtime and low accuracy of feature point matching between consecutive frames in VSLAM systems, a fast feature point matching algorithm based on a local pixel motion model (LPMM) is proposed. Under the motion smoothness constraint, the algorithm exploits the consistency of pixel motion within local regions across consecutive frames. It divides the image into local grid regions, estimates a motion vector for each grid from a subset of the feature points within that grid, and uses this vector to predict the center of the search range for matching feature points in the next frame. Finally, it searches the local neighborhood of the predicted center in the next frame for feature points that match those in the current frame. Experimental results show that, compared with the projection matching algorithm widely used in ORB-SLAM2, the proposed algorithm increases average matching speed by over 50% and improves accuracy by approximately 4%.
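The grid-based prediction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the grid size, search radius, and the use of nearest spatial distance (instead of descriptor distance) as the matching criterion are all assumptions made here for clarity.

```python
import math

GRID_SIZE = 64        # assumed grid cell size in pixels (not given in the abstract)
SEARCH_RADIUS = 15.0  # assumed local search radius in the next frame

def grid_id(pt, grid_size=GRID_SIZE):
    """Map a point (x, y) to the id of the grid cell containing it."""
    return (int(pt[0] // grid_size), int(pt[1] // grid_size))

def estimate_grid_motion(seed_matches):
    """Average the displacements of a few already-matched seed points per grid
    to obtain one motion vector per grid cell."""
    sums, counts = {}, {}
    for (x0, y0), (x1, y1) in seed_matches:
        g = grid_id((x0, y0))
        sx, sy = sums.get(g, (0.0, 0.0))
        sums[g] = (sx + (x1 - x0), sy + (y1 - y0))
        counts[g] = counts.get(g, 0) + 1
    return {g: (sx / counts[g], sy / counts[g]) for g, (sx, sy) in sums.items()}

def match_in_neighborhood(pt, grid_motion, next_frame_pts, radius=SEARCH_RADIUS):
    """Predict the search center by adding the grid's motion vector to pt,
    then return the nearest next-frame feature point within the radius."""
    motion = grid_motion.get(grid_id(pt))
    if motion is None:
        return None  # no motion estimate for this grid
    cx, cy = pt[0] + motion[0], pt[1] + motion[1]
    best, best_d = None, radius
    for q in next_frame_pts:
        d = math.hypot(q[0] - cx, q[1] - cy)
        if d <= best_d:
            best, best_d = q, d
    return best
```

Because candidates are only examined inside a small neighborhood of the predicted center, the per-point search cost stays constant rather than growing with the number of features in the next frame, which is the source of the claimed speedup.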