Transformers can be effectively applied to speech enhancement tasks based on Generative Adversarial Networks (GANs). However, it remains challenging to extract temporal dependencies within the signal sequence features and to improve training stability. To address these issues, a new lightweight network named CRG-MGAN is proposed for speech enhancement in the time-frequency domain. It is a MetricGAN variant based on convolution and recurrent-augmented spatially gated attention. In the generator of CRG-MGAN, a Convolutional Recurrently Enhanced Gated Attention Unit (CRGU), an improved Transformer structure, is used for feature extraction. The CRGU extracts more complete speech feature information, focuses on the temporal dependencies within the signal sequence, reduces the loss of feature information, and lowers the computational complexity of the Transformer. In the decoding stage, the mask decoder is improved by replacing the single activation function with a two-branch activation structure, which prevents gradient explosion and effectively outputs the magnitude information, thus improving the stability of the training process. We conduct extensive experiments on the Voice Bank + Demand dataset. Objective test results show that the performance of the proposed system is highly competitive with existing systems. Specifically, the CRG-MGAN model achieves a PESQ score of 3.48, STOI of 0.96, and SSNR of 11.14 dB, with a relatively small model size of 1.67 M parameters.
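To make the two-branch activation idea concrete, the following is a minimal PyTorch sketch of a magnitude-mask output head with two activation branches whose outputs are combined element-wise. The module name `TwoBranchMaskHead`, the choice of a bounded sigmoid branch and an unbounded PReLU branch, and the product combination are illustrative assumptions, not the exact design described in the paper.

```python
import torch
import torch.nn as nn

class TwoBranchMaskHead(nn.Module):
    """Illustrative two-branch activation head for a magnitude-mask decoder.

    Hypothetical sketch: one branch uses a bounded sigmoid activation and the
    other an unbounded PReLU; their element-wise product forms the mask. The
    branch choices and combination are assumptions, not the paper's design.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Each branch maps decoder features to a single-channel mask estimate.
        self.bounded_branch = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),   # keeps this branch in (0, 1), limiting gradient magnitude
        )
        self.unbounded_branch = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.PReLU(),     # allows mask values above 1 for amplification
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, channels, time, freq) decoder output.
        return self.bounded_branch(features) * self.unbounded_branch(features)


if __name__ == "__main__":
    head = TwoBranchMaskHead(channels=64)
    feats = torch.randn(2, 64, 100, 201)      # dummy decoder features
    noisy_mag = torch.rand(2, 1, 100, 201)    # dummy noisy magnitude spectrogram
    enhanced_mag = head(feats) * noisy_mag    # masked magnitude estimate
    print(enhanced_mag.shape)                 # torch.Size([2, 1, 100, 201])
```

The intuition behind pairing a bounded and an unbounded branch is that the bounded path keeps gradients well behaved during adversarial training, while the unbounded path retains the flexibility to boost attenuated spectral components.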