Abstract: Traditional robot teaching systems are constrained by their target applications and hardware, which limits their openness and ease of use. To lower the barrier to robot teaching and improve the efficiency of human-computer interaction, a gesture-guided robot teaching system is designed on top of ROS (Robot Operating System), exploiting its openness and cross-platform support; gestures direct the robot to enter learning, coding, and execution modes. The system segments the hand with a skin-color segmentation algorithm that combines the YCbCr and RGB color spaces, and extracts features with a CNN (convolutional neural network) to complete gesture recognition; the recognized gestures are then integrated through ROS to switch the robot's operating mode. Experiments on public datasets show that the gesture recognition accuracy reaches 96.49%, and the effectiveness and reliability of the system are verified.
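
As an illustration of the dual color-space criterion mentioned in the abstract, the following is a minimal sketch of skin-color segmentation that combines fixed YCbCr bounds with a simple RGB rule. It is not the authors' implementation; the threshold values and the `skin_mask` helper are assumptions for demonstration only.

```python
# Minimal sketch of skin-color segmentation combining YCbCr and RGB rules.
# Threshold values below are illustrative assumptions, not the paper's parameters.
import cv2
import numpy as np

def skin_mask(bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of skin-colored pixels in a BGR image."""
    # YCbCr rule: Y above a minimum brightness, Cb/Cr inside typical skin ranges.
    # OpenCV stores the channels in Y, Cr, Cb order.
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb_mask = cv2.inRange(ycrcb, (40, 135, 85), (255, 180, 135))

    # RGB rule (assumed): R dominant over G and B, with a minimum R-B gap.
    b, g, r = cv2.split(bgr.astype(np.int16))
    rgb_mask = ((r > 95) & (g > 40) & (b > 20) &
                (r > g) & (g > b) & (r - b > 15)).astype(np.uint8) * 255

    # Keep only pixels that satisfy both criteria, then remove small speckles.
    mask = cv2.bitwise_and(ycrcb_mask, rgb_mask)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```

In a pipeline of the kind the abstract describes, a mask like this would be used to isolate the hand region before the cropped image is passed to the CNN classifier for gesture recognition.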