Abstract: To address the low training efficiency and unguaranteed global model accuracy caused by worker-node heterogeneity in deep learning, a lightweight pruning-based federated learning framework, FedPrune, is proposed. Adopting a teacher-student architecture, an adaptive pruning scheme dynamically generates sub-models matched to the capabilities of different worker nodes from the global base model, enabling lightweight intelligent algorithms to run efficiently on resource-constrained devices. A dynamic adaptive pruning-rate learning method is further proposed, allowing worker nodes to converge to the same update time without knowledge of one another's capabilities. Comparison experiments on the CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets against two local solutions, FedAvg and FedRC, and two global solutions, FedAsync and SSP, show that FedPrune achieves higher accuracy and shorter overall training time. By dynamically generating sub-models adapted to the capabilities of different worker nodes, FedPrune effectively mitigates the dropout problem and maintains high accuracy and speed in heterogeneous environments, demonstrating its efficiency and applicability in federated learning.
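The dynamic pruning-rate idea in the abstract can be illustrated with a minimal sketch. The abstract does not specify FedPrune's actual update rule, so the additive feedback step, the target round time, and all names below (`adjust_pruning_rate`, `step`, `target_time`) are illustrative assumptions: each worker nudges its own pruning rate using only its locally measured update time, with no knowledge of other workers' capabilities.

```python
# Illustrative sketch only: the abstract does not give FedPrune's update
# rule, so the feedback form, constants, and names here are assumptions.

def adjust_pruning_rate(rate, local_time, target_time,
                        step=0.05, min_rate=0.0, max_rate=0.9):
    """Nudge a worker's pruning rate so its local update time approaches
    the round's target time, using only local measurements."""
    if local_time > target_time:      # too slow: prune more aggressively
        rate = min(max_rate, rate + step)
    elif local_time < target_time:    # faster than needed: keep more weights
        rate = max(min_rate, rate - step)
    return rate

# Example: a straggler measuring 12 s against a 10 s target raises its rate.
rate = adjust_pruning_rate(rate=0.3, local_time=12.0, target_time=10.0)
print(rate)  # 0.35
```

Under this kind of rule, slower nodes shrink their sub-models and faster nodes grow theirs, so per-round update times equalize without any node-to-node capability exchange.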