The segmentation of COVID-19 lesions can aid in the diagnosis and treatment of COVID-19. Few studies exist in this field, owing to the scarcity of richly labelled datasets and the lack of a comprehensive analysis of representation learning for COVID-19. To address these issues, we propose a self-supervised learning scheme that uses unlabeled COVID-19 data to investigate the significance of pre-training for this task. By effectively leveraging unlabeled data and applying a variety of pre-training strategies, we significantly improve the pre-training performance of the model. In addition, we enhance the self-supervised model by integrating a channel-wise attention module, the Squeeze-and-Excitation (SE) block, into the network architecture. Experiments demonstrate that our model outperforms other state-of-the-art models on a publicly available COVID-19 medical image segmentation dataset.
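To illustrate the channel-wise attention mechanism referred to above, the following is a minimal sketch of a Squeeze-and-Excitation block in PyTorch. The class name `SEBlock`, the reduction ratio of 16, and the tensor shapes in the usage example are illustrative assumptions, not details taken from this work.

```python
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Minimal Squeeze-and-Excitation block (channel-wise attention).

    The reduction ratio of 16 follows the original SE paper and is an
    assumption here, not a value specified in this work.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze: global average pooling collapses each channel to a scalar.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Excitation: a small bottleneck MLP produces per-channel weights in (0, 1).
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)          # (B, C): channel descriptors
        w = self.fc(w).view(b, c, 1, 1)      # (B, C, 1, 1): channel weights
        return x * w                         # reweight feature maps channel-wise


# Usage example: reweighting a hypothetical encoder feature map.
if __name__ == "__main__":
    features = torch.randn(2, 64, 32, 32)    # assumed encoder output shape
    se = SEBlock(channels=64)
    out = se(features)
    print(out.shape)                         # torch.Size([2, 64, 32, 32])
```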