In recent years, lightweight network architectures have been designed to reduce memory and computing requirements, making models easier to deploy on mobile and embedded devices. ShuffleNet V2 is a classic lightweight neural network that remains competitive in efficiency and performance, but it still contains a large number of parameters and computations, and its accuracy and responsiveness can be further improved. To address these problems, an efficient and lightweight CNN architecture called ShuffleNeXt is proposed. ShuffleNeXt introduces a module with a novel bottleneck structure, the Shuffle-G Unit, in which depthwise convolution is applied at both ends of the structure to generate more expressive spatial features, unlike the general bottleneck structure; the Ghost module replaces the original ordinary 1x1 convolution to reduce the number of parameters and generate richer feature maps that alleviate intrinsic feature redundancy. Shortcut connections and a computationally economical attention mechanism are introduced to further improve model performance. Benchmarking results on a variety of public image datasets show that ShuffleNeXt achieves better classification performance than ShuffleNet V2 at lower computational cost. On the CIFAR-100 dataset, the Top-1 classification accuracy of ShuffleNeXt is 2.57% higher than that of ShuffleNet V2, while the number of parameters is reduced by 9.6% and the computational speed is increased by 12.9%. Compared with other advanced architectures, the proposed ShuffleNeXt obtains higher classification accuracy with less computational memory.
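As a rough illustration of the components named above, the following PyTorch sketch assembles a hypothetical Shuffle-G Unit: depthwise convolutions at both ends of the bottleneck, Ghost modules (in the style of GhostNet) in place of the plain 1x1 convolutions, and an identity shortcut. The kernel sizes, Ghost ratio, channel widths, and the use of two Ghost modules are assumptions for illustration, not details taken from the paper; channel split/shuffle and the attention mechanism are omitted for brevity.

```python
# Minimal sketch of a Shuffle-G-style unit; structural details are assumed,
# not taken from the paper.
import torch
import torch.nn as nn


class GhostModule(nn.Module):
    """Ghost module: a primary 1x1 conv plus cheap depthwise ops, whose
    outputs are concatenated to form the output channels (GhostNet-style)."""
    def __init__(self, in_ch, out_ch, ratio=2):
        super().__init__()
        primary_ch = out_ch // ratio          # channels from the primary conv
        cheap_ch = out_ch - primary_ch        # "ghost" channels from cheap ops
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, 1, bias=False),
            nn.BatchNorm2d(primary_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, cheap_ch, 3, padding=1,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(cheap_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)


class ShuffleGUnit(nn.Module):
    """Hypothetical Shuffle-G Unit: depthwise 3x3 convs at both ends of the
    bottleneck, Ghost modules replacing the two plain 1x1 convs, and an
    identity shortcut connection."""
    def __init__(self, ch):
        super().__init__()
        self.dw_in = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, groups=ch, bias=False),
            nn.BatchNorm2d(ch))
        self.ghost1 = GhostModule(ch, ch)
        self.ghost2 = GhostModule(ch, ch)
        self.dw_out = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, groups=ch, bias=False),
            nn.BatchNorm2d(ch))

    def forward(self, x):
        out = self.dw_in(x)       # depthwise conv at the input end
        out = self.ghost1(out)    # Ghost module instead of a 1x1 conv
        out = self.ghost2(out)
        out = self.dw_out(out)    # depthwise conv at the output end
        return out + x            # shortcut connection


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    print(ShuffleGUnit(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```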