Abstract: Some high-performance deep learning neural networks are difficult to deploy on embedded devices because of their high complexity and heavy computation. To address this problem, deep learning-based Orthogonal Frequency Division Multiplexing (OFDM) channel compensation is implemented on the Artificial Intelligence Radio Transceiver (AIR-T), a miniaturized integrated smart radio platform. Both the OFDM signal transmission module and the conventional channel estimation and equalization module are implemented on the Field Programmable Gate Array (FPGA) chip. These modules preprocess the data to reduce the workload of the neural network, enabling an efficient implementation of the neural network channel compensation module on the graphics processing unit (GPU) of the Jetson TX2 platform. The computational complexity and parameter fitting speed of the neural network training process are recorded, and the conventional channel estimation and equalization module effectively reduces the number of operations during network training. The test results show that the bit error rate (BER) after neural network channel compensation is significantly lower than the BER obtained by conventional channel estimation and equalization alone.
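As a rough illustration of the pipeline summarized above, where equalized symbols produced by the FPGA front end are further compensated by a small network running on the GPU, the sketch below shows one possible form such a compensation network could take. It is a minimal, hypothetical example: the class name, layer sizes, and the assumption of 64 OFDM subcarriers are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): a small fully connected
# network that refines equalized OFDM symbols after conventional channel
# estimation and equalization. All sizes and names are illustrative.
import torch
import torch.nn as nn

NUM_SUBCARRIERS = 64  # assumed OFDM size; the abstract does not specify it


class ChannelCompensationNet(nn.Module):
    """Maps equalized symbols (real/imag parts flattened) to compensated symbols."""

    def __init__(self, num_subcarriers=NUM_SUBCARRIERS):
        super().__init__()
        dim = 2 * num_subcarriers  # real and imaginary part per subcarrier
        self.net = nn.Sequential(
            nn.Linear(dim, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, dim),
        )

    def forward(self, x):
        return self.net(x)


if __name__ == "__main__":
    model = ChannelCompensationNet()
    # A batch of 8 equalized OFDM symbols, flattened to real/imag vectors.
    equalized = torch.randn(8, 2 * NUM_SUBCARRIERS)
    compensated = model(equalized)
    print(compensated.shape)  # torch.Size([8, 128])
```

Because the conventional estimation and equalization stage already removes most of the channel distortion, a compact network of this kind can focus on the residual error, which is consistent with the abstract's claim that preprocessing reduces the training workload on the embedded GPU.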