Abstract: A back-propagation (BP) neural network consists of an input layer, one or more hidden layers, and an output layer. When an input vector is presented to the network, it is propagated forward through the network, layer by layer, until it reaches the output layer. The output of the network is then compared to the desired output using a loss function, and the error values are propagated backwards, starting from the output layer, until each neuron has an associated error value that roughly represents its contribution to the original output. When applied to speech classification, the BP neural network easily falls into local extrema and converges slowly. A new method is put forward to optimize the weights and thresholds of the BP neural network using the shuffled frog leaping algorithm (SFLA). The new model was applied to the classification of four typical types of speech, and the results were analysed and compared with those of the standard BP neural network. The BP neural network based on SFLA achieves both fast training speed and a small number of errors, producing an average increase of 1.31% in accuracy.
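To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of how SFLA can optimize the weights and thresholds of a small BP-style network: the parameters are flattened into a vector, the forward-pass loss serves as the fitness, and in each memeplex the worst frog is moved toward the local or global best. The network topology, toy data, and all SFLA parameters below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for speech feature vectors and class labels (assumed).
X = rng.normal(size=(200, 8))            # 8-dimensional feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # two-class toy labels

N_IN, N_HID, N_OUT = 8, 6, 2             # assumed network topology
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # weights + thresholds

def unpack(theta):
    # Split a flat parameter vector into layer weights and thresholds (biases).
    i = 0
    W1 = theta[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = theta[i:i + N_HID]; i += N_HID
    W2 = theta[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = theta[i:]
    return W1, b1, W2, b2

def loss(theta):
    # Cross-entropy of the forward pass; this is the fitness SFLA minimises.
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)                       # hidden layer
    logits = h @ W2 + b2                           # output layer
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

# SFLA parameters (assumed values).
N_FROGS, N_MEMEPLEX, ITERS, LOCAL_STEPS = 30, 5, 50, 5
frogs = rng.normal(scale=0.5, size=(N_FROGS, DIM))

for _ in range(ITERS):
    # Sort frogs by fitness and deal them round-robin into memeplexes.
    frogs = frogs[np.argsort([loss(f) for f in frogs])]
    global_best = frogs[0].copy()
    for m in range(N_MEMEPLEX):
        idx = np.arange(m, N_FROGS, N_MEMEPLEX)   # indices of one memeplex
        for _ in range(LOCAL_STEPS):
            fit = np.array([loss(frogs[i]) for i in idx])
            best, worst = idx[fit.argmin()], idx[fit.argmax()]
            # Move the worst frog toward the memeplex best.
            candidate = frogs[worst] + rng.random() * (frogs[best] - frogs[worst])
            if loss(candidate) >= fit.max():
                # No improvement: jump toward the global best, else reset randomly.
                candidate = frogs[worst] + rng.random() * (global_best - frogs[worst])
                if loss(candidate) >= fit.max():
                    candidate = rng.normal(scale=0.5, size=DIM)
            frogs[worst] = candidate

print("best loss after SFLA:", min(loss(f) for f in frogs))

In a hybrid scheme such as the one described in the abstract, the parameter vector found by SFLA would then typically be used to initialise ordinary gradient-based BP training, which helps the network avoid poor local extrema and speeds up convergence.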