For unsupervised domain adaptive semantic segmentation of urban road-scene images, existing adversarial-learning-based cross-domain methods tend to ignore the spatial layout of the image and apply the adversarial loss to the image as a whole, which biases the convolutional neural network toward aligning only the features of the dominant categories across the two domains. The top, middle, and bottom regions of road-scene images contain different categories with very different pixel shares, yet previous methods neither exploit this spatial structure nor account for the class imbalance in the dataset. To address this problem, we propose horizontal and vertical position-blockwise adversarial methods. By computing the adversarial loss within each block, the alignment of categories between domains becomes more fine-grained, and the class imbalance in the dataset is alleviated to some extent. The effectiveness of the proposed method is demonstrated by experiments on GTA5 to Cityscapes and SYNTHIA to Cityscapes.
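To make the blockwise idea concrete, the sketch below shows one plausible way to compute a horizontal position-blockwise adversarial loss in PyTorch: the segmentation prediction is split into horizontal strips (e.g. top, middle, bottom), and a separate discriminator scores each strip, so that domain alignment is enforced per block rather than on the whole image. This is only a minimal illustration under our own assumptions; the function names (`horizontal_blocks`, `blockwise_adversarial_loss`), the discriminator architecture, the label convention, and the number of blocks are hypothetical and are not taken from the paper. A vertical-blockwise variant would split along the width dimension (`dim=3`) instead.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BlockDiscriminator(nn.Module):
    """Small fully convolutional domain discriminator applied to one spatial block.
    (Hypothetical architecture; the paper's exact discriminator is not specified here.)"""
    def __init__(self, num_classes, ndf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, ndf, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, 1, 4, stride=2, padding=1),  # per-patch domain logits
        )

    def forward(self, x):
        return self.net(x)


def horizontal_blocks(pred, num_blocks=3):
    """Split a softmax segmentation map of shape (N, C, H, W) into horizontal
    strips (e.g. top / middle / bottom) along the height dimension."""
    return torch.chunk(pred, num_blocks, dim=2)


def blockwise_adversarial_loss(target_pred, discriminators, source_label=0.0):
    """Adversarial loss for the segmentation network: each horizontal block of the
    target-domain prediction is pushed to be classified as source-like by its own
    block discriminator, so alignment happens per block instead of image-wide."""
    blocks = horizontal_blocks(target_pred, num_blocks=len(discriminators))
    loss = 0.0
    for block, disc in zip(blocks, discriminators):
        logits = disc(block)
        # Label every patch in this block as "source" to fool the discriminator
        # (assumed convention: source = 0, target = 1).
        labels = torch.full_like(logits, source_label)
        loss = loss + F.binary_cross_entropy_with_logits(logits, labels)
    return loss / len(discriminators)


# Example usage with dummy data (19 classes, as in Cityscapes):
if __name__ == "__main__":
    discriminators = nn.ModuleList(BlockDiscriminator(num_classes=19) for _ in range(3))
    target_pred = torch.softmax(torch.randn(2, 19, 96, 192), dim=1)
    print(blockwise_adversarial_loss(target_pred, discriminators))
```

Because each strip gets its own adversarial term, categories that dominate only one region of the image (e.g. sky at the top, road at the bottom) no longer overwhelm the alignment of rarer categories elsewhere, which is the intuition behind the blockwise design.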