A change detection analysis was performed using thematic data derived from maximum likelihood classification of 1985 and 2000 Landsat imagery and 2021 Sentinel-2 imagery of the Al-Hubail wetland.
Study area and data
To offset water losses from the Al-Ahsa Oasis irrigation and drainage project in eastern Saudi Arabia, project management developed a strategy based on non-traditional water sources, such as treated wastewater and the reuse of agricultural drainage water, while reducing groundwater pumping from wells. Excess agricultural drainage water is discharged into two wetlands, Al-Asfar and Al-Hubail, which were originally sabkhas (salt flats). The Al-Hubail wetland is connected to the D1 canal of the Saudi Irrigation Organization (Fig. 1).
The wetland has a variety of natural environments (temporary pools, open ground, sand dunes, etc.), with ecosystems of great biological and ecological interest. Thanks to the presence of water, these allow the permanent and temporary settlement of exceptional flora (macrophytes, algae, riparian vegetation, etc.) and fauna (Tachybaptus ruficollis, Rallus aquaticus, Himantopus himantopus, Egretta garzetta, Circus aeruginosus, Accipiter nisus, etc.). The site is one of the foremost refuges for waterfowl in the Arabian Peninsula, as well as one of their resting and feeding grounds. It attracts visitors, especially in winter, who come to enjoy the natural landscape and observe migratory birds. These birds migrate from cooler regions to areas with warmer climates, which include the Al-Hubail wetland. However, the water of the wetland is now contaminated with heavy metals, mainly from agricultural wastewater. The level of heavy metals is generally higher than the international permissible limits and fluctuates seasonally (Alfarhan 1999; Al-Taher 1999; Ashraf et al. 2020; Fahmy et al. 2011; Salih 2018) (Fig. 2).
To assess the evolution of the Al-Hubail wetland, two Landsat images (MSS from 1985 and ETM+ from 2000) and one Sentinel-2 image (2021) were used for this analysis. All three images were acquired in the same season to avoid classification confusion due to phenological changes. Because the area is often marshy and flooded in the wet season and dry in the summer, satellite imagery from the summer season (July and August) was used to clearly distinguish the different land covers. All data and information used in this study can be found in Table 1. The land cover maps resulting from the classification of the satellite images were then used for the spatio-temporal comparison.
Table 1
All inputs for the classification of the 1985, 2000 and 2021 images (https://eos.com / https://www.usgs.gov/).

Images | Bands and features | Wavelength (µm) and description | Resolution (m)
Landsat 5 (MSS) 1985 | Band 1 – Visible Green | 0.5–0.6 | 60
 | Band 2 – Visible Red | 0.6–0.7 | 60
 | Band 3 – NIR | 0.7–0.8 | 60
 | Band 4 – NIR | 0.8–1.1 | 60
 | NDVI (Normalized Difference Vegetation Index) | (B4 – B3) / (B4 + B3); (Near-Infrared – Visible) / (Near-Infrared + Visible) | 60
 | NDWI (Normalized Difference Water Index) | (B4 – B5) / (B4 + B5); (Near-Infrared – Short-wave Infrared) / (Near-Infrared + Short-wave Infrared) | 60
Landsat 7 (ETM+) 2000 | Band 1 – Visible | 0.45–0.52 | 30
 | Band 2 – Visible | 0.52–0.60 | 30
 | Band 3 – Visible | 0.63–0.69 | 30
 | Band 4 – Near-Infrared | 0.77–0.90 | 30
 | Band 5 – Short-wave Infrared | 1.55–1.75 | 30
 | Band 6 – Thermal | 10.40–12.50 | 60 (30)
 | Band 7 – Mid-Infrared | 2.09–2.35 | 30
 | Band 8 – Panchromatic | 0.52–0.90 | 15
 | NDVI (Normalized Difference Vegetation Index) | (B4 – B3) / (B4 + B3); (Near-Infrared – Visible) / (Near-Infrared + Visible) | 30
 | NDWI (Normalized Difference Water Index) | (B4 – B5) / (B4 + B5); (Near-Infrared – Short-wave Infrared) / (Near-Infrared + Short-wave Infrared) | 30
Sentinel 2 (2021) | Band 1 – Ultra Blue (Coastal and Aerosol) | 0.421–0.457 | 60
 | Band 2 – Blue | 0.439–0.535 | 10
 | Band 3 – Green | 0.537–0.582 | 10
 | Band 4 – Red | 0.646–0.714 | 10
 | Band 5 – Visible and Near Infrared (VNIR) | 0.694–0.714 | 20
 | Band 6 – Visible and Near Infrared (VNIR) | 0.731–0.749 | 20
 | Band 7 – Visible and Near Infrared (VNIR) | 0.768–0.796 | 20
 | Band 8 – Visible and Near Infrared (VNIR) | 0.767–0.808 | 10
 | Band 8A – Visible and Near Infrared (VNIR) | 0.848–0.881 | 20
 | Band 9 – Water Vapour | 0.931–0.958 | 60
 | Band 10 – Short Wave Infrared (SWIR – Cirrus) | 1.338–1.414 | 60
 | Band 11 – Short Wave Infrared (SWIR) | 1.539–1.681 | 20
 | Band 12 – Short Wave Infrared (SWIR) | 2.072–2.312 | 20
 | NDVI (Normalized Difference Vegetation Index) | (B8 – B4) / (B8 + B4); (NIR – Red) / (NIR + Red) | 10
 | NDWI (Normalized Difference Water Index) | (B8 – B12) / (B8 + B12); (NIR – MIR) / (NIR + MIR) |
Classification of satellite images
Pre-processing of the selected satellite images (radiometric and geometric corrections) was performed (Fontinovo et al. 2012; Nguyen 2015; Wang et al. 2012). This included radiometric calibration to convert grey-level digital numbers to apparent (top-of-atmosphere) reflectance, using parameters taken from the metadata of each image. After geometric correction of the images, the boundary of the study area was digitized to crop the satellite images and extract the area of interest. In addition, all field work was carried out with instruments such as a Differential Global Positioning System (DGPS), topographic maps, and a digital camera. The exact location of areas representing each land cover, called training areas, was determined in the field. These areas were used to classify the satellite images and to evaluate classification accuracy (Table 2).
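The radiometric calibration step described above can be sketched as follows. This is a minimal two-step illustration (digital numbers to at-sensor radiance, then to top-of-atmosphere reflectance); the gain, offset, solar elevation, Earth-Sun distance, and solar irradiance values below are placeholders that would be read from each scene's metadata, not values from the study.

```python
import numpy as np

def dn_to_toa_reflectance(dn, gain, offset, sun_elevation_deg, earth_sun_dist, esun):
    """Convert raw digital numbers to apparent (top-of-atmosphere) reflectance."""
    # Step 1: DN -> at-sensor spectral radiance (linear rescaling from metadata)
    radiance = gain * dn.astype(np.float64) + offset
    # Step 2: radiance -> TOA reflectance using the solar geometry
    theta = np.deg2rad(sun_elevation_deg)
    reflectance = (np.pi * radiance * earth_sun_dist ** 2) / (esun * np.sin(theta))
    return np.clip(reflectance, 0.0, 1.0)

# Toy 2x2 image with made-up calibration constants
dn = np.array([[50, 120], [200, 255]], dtype=np.uint8)
refl = dn_to_toa_reflectance(dn, gain=0.001, offset=0.0,
                             sun_elevation_deg=60.0, earth_sun_dist=1.0, esun=1.5)
```

Brighter digital numbers map to higher reflectance, and the result is clipped to the physically meaningful range [0, 1].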
Table 2
Data collected during training and testing for all images

Images' year | Water bodies (Train / Test) | Vegetation (Train / Test) | Hydromorphic areas (Train / Test) | Open ground (Train / Test)
1985 | 40 / 80 | 45 / 91 | 53 / 88 | 40 / 115
2000 | 52 / 89 | 57 / 110 | 61 / 120 | 49 / 120
2021 | 38 / 68 | 53 / 115 | 72 / 120 | 40 / 92
After preprocessing the images and deriving the NDVI and NDWI features, the images were classified using maximum likelihood classification under a Gaussian assumption to obtain land cover maps of the study area (Girard and Girard 2010). The library of spectral signatures created in the training phase was fed into the maximum likelihood classifier, and each pixel was assigned to the class to which it most probably belongs. Field tests were conducted to evaluate the accuracy of the classification.
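Under the Gaussian assumption, maximum likelihood classification amounts to fitting a mean vector and covariance matrix per class from the training signatures and assigning each pixel to the class with the highest Gaussian log-likelihood. A minimal NumPy sketch with synthetic training samples (not the study's actual spectral signatures):

```python
import numpy as np

def train_ml_classifier(samples_by_class):
    """Estimate per-class Gaussian parameters from training pixels.

    samples_by_class maps class name -> (n_pixels, n_features) array;
    returns mean, inverse covariance, and log-determinant per class."""
    stats = {}
    for name, x in samples_by_class.items():
        mu = x.mean(axis=0)
        cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])  # regularized
        stats[name] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return stats

def classify_ml(pixels, stats):
    """Assign each pixel (n_pixels, n_features) to the class with the
    highest Gaussian log-likelihood (maximum likelihood decision rule)."""
    names = list(stats)
    scores = np.empty((pixels.shape[0], len(names)))
    for j, name in enumerate(names):
        mu, cov_inv, logdet = stats[name]
        d = pixels - mu
        # log N(x | mu, cov) up to a constant: -0.5 * (log|cov| + d' cov^-1 d)
        scores[:, j] = -0.5 * (logdet + np.einsum("ij,jk,ik->i", d, cov_inv, d))
    return [names[i] for i in scores.argmax(axis=1)]

# Toy example: two well-separated synthetic classes in three "bands"
rng = np.random.default_rng(0)
training = {"water": rng.normal(0.1, 0.02, (60, 3)),
            "vegetation": rng.normal(0.6, 0.02, (60, 3))}
stats = train_ml_classifier(training)
labels = classify_ml(np.array([[0.1, 0.1, 0.1], [0.6, 0.6, 0.6]]), stats)
# labels == ["water", "vegetation"]
```

The small diagonal term added to the covariance keeps it invertible when a training class is nearly homogeneous in some band.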
After selecting the channels to be used in the classification, training areas were selected by digitizing polygons over sufficiently large and homogeneous areas. Based on knowledge of the terrain, the plots were located on each image to be classified, and their outlines were delineated while avoiding edge pixels to limit within-plot variability. This was done to distinguish the following land covers on the satellite images: water bodies, hydromorphic areas, vegetation, and open ground. The training plots were evaluated using a contingency matrix representing the confusion between the classes used for classification; this matrix was used to redefine the training areas so as to minimize such confusion.
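The contingency matrix used to evaluate the training plots can be sketched as a simple cross-tabulation of reference labels against classifier output. The class names follow the four land covers above; the label lists are toy examples, not the study's data.

```python
import numpy as np

def contingency_matrix(reference, predicted, classes):
    """Cross-tabulate reference labels against classified labels.

    Rows are reference classes, columns are classes assigned by the
    classifier; diagonal counts are correctly classified pixels."""
    idx = {c: i for i, c in enumerate(classes)}
    m = np.zeros((len(classes), len(classes)), dtype=int)
    for r, p in zip(reference, predicted):
        m[idx[r], idx[p]] += 1
    return m

classes = ["water bodies", "hydromorphic areas", "vegetation", "open ground"]
ref = ["water bodies", "water bodies", "vegetation", "open ground", "hydromorphic areas"]
pred = ["water bodies", "vegetation", "vegetation", "open ground", "hydromorphic areas"]
m = contingency_matrix(ref, pred, classes)
overall_accuracy = np.trace(m) / m.sum()  # 4 of 5 pixels correct
```

Large off-diagonal entries flag pairs of classes whose training areas need to be redefined.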
Change detection and its evaluation
Change detection techniques assume that a change in surface cover results in a corresponding change in reflectance. Many change detection techniques have been developed; the most commonly used are image differencing, principal component analysis, and post-classification comparison (Close 2021; Deng et al. 2008; El-Hattab 2016; Lu et al. 2004; Ojaghi et al. 2017; Rokni et al. 2015; Wu et al. 2017). Post-classification comparison (PCC) is effective in determining the nature, rate, and location of change: it examines changes over time between independently classified land cover datasets (Almutairi and Warner 2010; El-Hattab 2016). Among the features of the PCC method are the independent classification of each satellite image and a low sensitivity to differences in acquisition conditions.
The post-classification approach is a comparative analysis of classifications produced independently for different dates (El-Hattab 2015, 2016; Ojaghi et al. 2017). Here, the satellite images classified with the maximum likelihood method were compared. The method provides information about the extent and distribution of the changed area for each land cover class across the multitemporal images. A pixel-by-pixel approach is also used to identify differences between objects or phenomena at different dates (Deng 2008; Hussain et al. 2013; Nemmour and Chibani 2004; Rokni et al. 2015); this approach consists of the simultaneous analysis of the multispectral images.
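For contrast with PCC, the pixel-to-pixel image differencing mentioned above can be sketched as follows: it operates on the (reflectance or index) images themselves rather than on classified maps. The arrays and the threshold here are illustrative assumptions, not values from the study.

```python
import numpy as np

def image_differencing(img_t1, img_t2, threshold):
    """Flag pixels whose absolute between-date difference exceeds a threshold.

    Unlike PCC, no prior classification is needed; the threshold is a
    tuning parameter that separates "change" from noise."""
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)
    return np.abs(diff) > threshold

# Toy 2x2 reflectance images at two dates
t1 = np.array([[0.20, 0.20], [0.50, 0.50]])
t2 = np.array([[0.21, 0.45], [0.50, 0.10]])
changed = image_differencing(t1, t2, threshold=0.1)
# changed == [[False, True], [False, True]]
```

The drawback, compared with PCC, is that differencing only flags that a change occurred, not the nature of the land cover transition.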
Since the post-classification technique depends directly on the classification accuracy of the images, an accuracy assessment was performed for each of the three images. In this study, a contingency matrix computed from independently selected test samples was used as the validation measure to evaluate classification performance.
In the next step, a Geographic Information System-based overlay technique was applied to obtain the spatial changes in land cover over two time periods: 1985–2000 and 2000–2021. The result of this technique is a two-way cross matrix summarizing the main types of change in the study area. From the three four-class maps, a new thematic layer was created for each period containing the different combinations of change classes.
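The overlay step above reduces to a cross-tabulation of two classified maps. A minimal sketch with illustrative class codes and toy maps (not the study's data):

```python
import numpy as np

def change_cross_matrix(map_t1, map_t2, n_classes):
    """Cross-tabulate two classified maps of equal shape.

    Entry (i, j) counts pixels that were class i at the first date and
    class j at the second; the diagonal is persistence, off-diagonal
    entries are land cover changes."""
    pair = map_t1.ravel() * n_classes + map_t2.ravel()  # encode (i, j) pairs
    counts = np.bincount(pair, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)

# Toy 2x2 maps with class codes 0..3 for the four land covers
t1 = np.array([[0, 1], [2, 3]])
t2 = np.array([[0, 2], [2, 3]])
cross = change_cross_matrix(t1, t2, n_classes=4)
# cross[1, 2] == 1: one pixel converted from class 1 to class 2
```

Each non-zero (i, j) combination corresponds to one change class in the derived thematic layer.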
Spectral bands and features used for the classification
Due to the presence of water and vegetation in the study area, and in order to obtain an accurate classification, the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Water Index (NDWI) were computed according to the following formulas and added to the spectral bands of the satellite imagery as inputs to the classification algorithm (Fig. 3a-c and Fig. 4a-c).
NDVI = (X_NIR − X_Red) / (X_NIR + X_Red)
NDWI = (X_NIR − X_SWIR) / (X_NIR + X_SWIR)
Figures 3a-c and 4a-c show the NDVI and NDWI derived for each of the satellite images. These extracted indices in combination with the spectral bands of the satellite images allow the classification of each image. Table 1 provides information on all spectral bands and indices that contributed to the classification of the 1985, 2000, and 2021 images.
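Both indices are instances of a single normalized difference ratio, so their computation can be sketched with one function. The reflectance values below are illustrative, not from the study's imagery.

```python
import numpy as np

def normalized_difference(band_a, band_b):
    """Generic normalized difference (a - b) / (a + b).

    NDVI uses (NIR, Red); the SWIR-based NDWI of Table 1 uses (NIR, SWIR).
    A tiny epsilon guards against division by zero over no-data pixels."""
    a = band_a.astype(np.float64)
    b = band_b.astype(np.float64)
    return (a - b) / (a + b + 1e-12)

# Sentinel-2 style example: NDVI from B8 (NIR) and B4 (Red) reflectances
nir = np.array([[0.40, 0.05], [0.30, 0.50]])
red = np.array([[0.10, 0.04], [0.30, 0.10]])
ndvi = normalized_difference(nir, red)
# ndvi[0, 0] == (0.40 - 0.10) / (0.40 + 0.10) = 0.6 (dense vegetation)
```

Both indices are bounded in [−1, 1], which makes them comparable across the three sensors despite their different radiometric ranges.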