Abstract: As a downstream natural language processing task, text classification provides vital auxiliary value for upstream tasks. With deep learning now widely applied to both upstream and downstream NLP tasks, deep neural networks have also been adopted for text classification. However, current models based on convolutional neural networks cannot model the contextual semantic information of a text sequence well, nor do they introduce linguistic knowledge to assist the classifier. To address these problems, a novel English text classification model combining BERT and Bi-LSTM is proposed. The proposed model not only boosts classification performance by introducing linguistic knowledge through the BERT pre-trained language model, but also captures bidirectional contextual semantic dependencies through a Bi-LSTM network to explicitly model the text. Specifically, the model consists of an input layer, a BERT pre-trained language model layer, a Bi-LSTM layer, and a classifier layer. Extensive experimental results demonstrate that, compared with the baseline models, the proposed BERT-Bi-LSTM model achieves the highest classification accuracy on the MR, SST-2, and CoLA datasets, at 86.2%, 91.5%, and 83.2% respectively, greatly improving the performance of English text classification.
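The layer order named in the abstract (input, BERT, Bi-LSTM, classifier) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the abstract gives only the layer composition, so the hidden sizes, the final-time-step pooling, and the random tensor standing in for BERT's contextual output are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

class BertBiLSTMClassifier(nn.Module):
    """Sketch of the Bert-Bi-LSTM head described in the abstract.

    The real model would feed token ids through a pretrained BERT
    encoder; here `bert_hidden` (768-dim vectors, BERT-base's hidden
    size) stands in for the BERT layer's output, since the abstract
    specifies only the layer order, not the configuration.
    """
    def __init__(self, bert_dim=768, lstm_dim=256, num_classes=2):
        super().__init__()
        # Bi-LSTM layer: captures bidirectional context over BERT features
        self.bilstm = nn.LSTM(bert_dim, lstm_dim, batch_first=True,
                              bidirectional=True)
        # Classifier layer: maps pooled Bi-LSTM states to class logits
        self.classifier = nn.Linear(2 * lstm_dim, num_classes)

    def forward(self, bert_hidden):
        # bert_hidden: (batch, seq_len, bert_dim) contextual embeddings
        out, _ = self.bilstm(bert_hidden)
        # pool by taking the last time step (concatenated forward and
        # backward states) -- one of several plausible pooling choices
        return self.classifier(out[:, -1, :])

# Stand-in for BERT output on a batch of 4 sentences, 16 tokens each
dummy = torch.randn(4, 16, 768)
logits = BertBiLSTMClassifier()(dummy)
print(logits.shape)  # torch.Size([4, 2])
```

For the binary sentiment tasks reported (MR, SST-2), `num_classes=2` matches; CoLA's acceptability judgment is likewise binary.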