Wetlands Mapping with Deep ResU-Net CNN and Open-Access Multisensor and Multitemporal Satellite Data in Alberta’s Parkland and Grassland Region

Wetlands are valuable ecosystems that provide various services to flora and fauna. This study developed and compared deep and shallow learning models for wetland classification across the climatically dynamic landscape of Alberta's Parkland and Grassland Natural Region. This approach to wetland mapping entailed exploring multi-temporal (a combination of spring/summer and fall months over 4 years, 2017 to 2020) and multisensor (Sentinel-1, Sentinel-2, and Advanced Land Observing Satellite, ALOS) data as inputs to the predictive models. The input image consisted of S1 dual-polarization vertical-horizontal bands, S2 near-infrared and shortwave infrared bands, and the ALOS-derived topographic wetness index. The study explored the ResU-Net deep learning (DL) model and two shallow learning models, namely random forest (RF) and support vector machine (SVM). We observed a significant increase in the average F1-score of the ResNet model prediction (0.82) compared to the SVM and RF predictions of 0.69 each. The SVM and RF models showed a significant occurrence of mixed pixels, particularly marshes and swamps confused with upland classes (such as agricultural land). Overall, it was evident that the ResNet CNN predictions outperformed the SVM and RF models. The outcome of this study demonstrates the potential of the ResNet CNN model and of exploiting open-access satellite imagery to generate credible products across large landscapes.


Introduction
Wetlands provide numerous valuable ecosystem services to flora and fauna. These services include flood control, water supply, water purification, fiber and fish provision, climate regulation, recreational opportunities, tourism, and protection of coastal habitats [1]. Unfortunately, anthropogenic and climatic factors continue to significantly threaten these valuable habitats' existence. Considering these factors, an effective monitoring and assessment tool to delineate wetlands accurately is crucial.
Although Canada accounts for 25 percent of global wetlands, it is reported that wetlands (which cover approximately 14 percent of the country's landscape) are being lost [2,3], further justifying the importance of wetland conservation. The Parkland and Grassland Natural Region (PGNR) spans the southern parts of Alberta and is characterized by mineral wetlands (marsh and shallow open water) in prairie wetland basins, swamps, and a rare distribution of peatlands (such as bogs and fens). Based on the Alberta Wetland Classification System (AWCS) [4], wetlands across the province are categorized into bogs, fens, marshes, swamps, and open water. These wetland classes can be studied extensively by utilizing advanced remote sensing techniques and a range of spaceborne sensors. Studies have demonstrated the importance of combining satellite optical and synthetic aperture radar (SAR) imagery with topographic information for wetland mapping [5-8]. Onojeghuo et al. [6] demonstrated the value of utilizing freely available Sentinel satellite optical and radar imagery fused with light detection and ranging (LiDAR) elevation data as inputs for delineating wetland habitats from surrounding landscapes within parts of the PGNR. Selecting a suitable machine learning (ML) algorithm capable of handling large volumes of satellite imagery is crucial to the success of large-scale wetland mapping. The computational power, complexity, and dimensionality of the remotely sensed data are critical criteria [9]. Unlike traditional image classification techniques (such as the maximum likelihood classifier, k-means, or ISODATA), ML techniques (like the support vector machine, SVM; decision trees, DT; and random forest, RF) can address the multi-dimensionality of remote sensing data [7,10-12].
The availability of open-access satellite imagery through platforms such as Google Earth Engine (GEE) [13] and the declining cost of high computing power have dramatically increased access to remote sensing applications. Deep convolutional neural networks (CNNs) have revolutionized how remotely sensed data are analyzed [14,15]. Kattenborn et al. [16] identified certain spatial features, such as shapes, texture, and edges, that allow targets to be classified accurately.
Compared to shallower neural network (NN) approaches, deep learning (DL) models operate as interconnected neural layers. By increasing the number of layers and transformations, DL models can predict and reveal complex relationships [16]. Deep CNN models are a valuable resource in remote sensing, given their potential for a better understanding of vegetation characteristics [17]. Several studies have employed upgraded NN architectures, such as CNN [18,19,22], R-CNN, U-Net, and Mask R-CNN [20,21], for standard landcover classification. The combined exploitation of DL models and diverse remote sensing variables (such as spectral, radar, and topographic features) has improved wetland mapping accuracies [17,23-25]. By processing input data in a hierarchical order, CNN models utilize multiple layers through non-linear mapping functions and convolutional operations [26]. Although studies have explored multisensor data for wetland mapping (at provincial and national scales) across Canada [5,8], the utilization of both multisensor and multitemporal satellite data as inputs in DL models for the PGNR of Alberta is limited. This region of the province is characterized by a varied combination of seasonally dynamic mineral wetlands (i.e., marsh and shallow open water) and swamps. Given the wetlands' dynamic and complex nature, using shallow algorithms (such as DT, RF, or SVM) would be challenging. Another major challenge of wetland classification is that wetland classes mix with uplands [27]. Most models could not separate upland classes such as grasslands, agricultural land, forests, bare soil, and developed areas from wetland and open water classes [25]. Hence, in this study we propose using ResU-Net models for wetland mapping based on multi-temporal and multisensor (i.e., radar, optical, and elevation) satellite data.
This study aims to develop DL models specifically for wetland classification in Alberta's PGNR using multitemporal S2 optical and S1 SAR data and Advanced Land Observing Satellite (ALOS) digital elevation model (DEM) topographic data. This approach to wetland mapping entails exploring multi-temporal and multisensor data as inputs to a DL algorithm for this region. We also compared the performance of the ResU-Net model with two shallow learning techniques (namely RF and SVM). Based on the outcomes of a recent study [6], we used a 25-band multi-seasonal image dataset for analysis. This image stack comprised S1 (dual-polarization vertical-horizontal (VH) bands) and S2 (near-infrared (band 8) and shortwave infrared (band 11)) images acquired in the summer/fall months of 2017 to 2020 and the ALOS-derived topographic wetness index (TWI) as input data for the ResU-Net model and the two shallow ML algorithms (RF and SVM) for wetland-cover classification.

Study Area
Alberta's PGNR landscape, situated in the southern part of the province, has an approximate area of 15,507,419 hectares and a high degree of climatic sensitivity and variability. This region is dominated by marsh, shallow open-water, and swamp wetlands [28] (Fig. 1 shows the study area).

Satellite Images Used in the Study
The satellite sensors used for this study were Sentinel-1A/B (S1) and Sentinel-2 (S2), combined with ALOS DEM elevation data. We downloaded the satellite images from the cloud-based Google Earth Engine (GEE) geospatial processing platform [13]. The S1 and S2 images were acquired over the same periods: the late spring and summer (May to August) and fall (September to November) months of 2017, 2018, 2019, and 2020. Refer to Table 1 for details of the satellite data used in the study.
The S1 imagery used in the study consisted of Ground Range Detected scenes acquired in the vertical-horizontal (VH) dual-polarimetric Interferometric Wide Swath mode. These images were pre-processed using standard S1 toolbox modules. As part of the standardized workflow for S1 pre-processing, the orbit files were applied, thermal and low-intensity noise levels were removed, orthorectification and geometric corrections were applied, and invalid data were filtered out. Other pre-processing stages included normalization of backscatter coefficients, angular correction, speckle filtering (such as the Refined Lee filter), and removal of pixels affected by wind intensity [6,29]. The total number of S1 scenes used in calculating the mean VH polarization channels was 1816, comprising 1093 summer and 723 fall scenes over the four years (2017-2020).
To obtain cloud-free S2 images, two cloud-cover thresholds were applied: less than 50% for the spring/summer months and less than 20% for the fall months. Using the image reducer module in GEE, median pixel values of the image collections were calculated. Similar to the temporal coverage of the acquired S1 data, the number of S2 scenes used was 5476 (3700 summer and 1776 fall). The S2 bands explored in this study were the near-infrared (band 8) and shortwave infrared (band 11). The NIR and SWIR bands are influential inputs for wetland delineation in the PGNR, as demonstrated in Onojeghuo et al. [6].
The ALOS 30-m DEM data was acquired for the province of Alberta and subsequently resampled to 10 m to match the S1 and S2 image datasets. The ALOS DEM raster was resampled with a bicubic interpolation technique. A mean filter was applied to the resampled ALOS DEM raster to ensure it presented a more realistic depiction of the landscape when calculating the hydrological index explored in the study [30]. Using the resampled 10-m ALOS DEM raster, we calculated the topographic wetness index (TWI) with the hydrological tool of the open-source System for Automated Geoscientific Analyses (SAGA) software [31]. The TWI is a valuable proxy for relative soil moisture [32] and indicates water accumulation across landscapes [33]. Table 1 presents the formula for calculating TWI and describes the input parameters. The overview of the proposed methodology for image processing and analysis is presented in Fig. 2.
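The TWI described above can be sketched in its standard form, TWI = ln(a / tan β), where a is the specific catchment area and β the local slope (the study's exact parameterization is in Table 1; this NumPy version is a minimal illustration, not the SAGA implementation):

```python
import numpy as np

def twi(upslope_area, slope_rad, eps=1e-6):
    """Topographic wetness index: TWI = ln(a / tan(beta)).

    upslope_area : specific catchment area a (area per unit contour length)
    slope_rad    : local slope beta in radians
    eps guards against division by zero on perfectly flat cells.
    """
    return np.log(upslope_area / (np.tan(slope_rad) + eps))

# Flat, high-accumulation cells score high (wet);
# steep, low-accumulation cells score low (dry).
wet = twi(np.array([5000.0]), np.array([0.01]))
dry = twi(np.array([50.0]), np.array([0.5]))
```

In practice the index is computed per DEM cell, so both inputs would be full rasters rather than single values.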

Qualitative Description of Labels Data
Considering the difficulties of obtaining current and accurate ground validation data, the authors explored utilizing open-source datasets. These datasets were the Alberta Merged Wetland Inventory (AMWI) [34], the Canada Annual Crop Inventory (ACI) [35], ESRI World Imagery, open-source Google Earth imagery, Alberta Biodiversity Monitoring Institute (ABMI) 3 × 7 photo plot data [36], and the ABMI Human Footprint Inventory (HFI) database [37]. The HFI and ACI were used for post-classification refinement and verification of the ground validation data. The AMWI, the source of information for training and testing the models, was digitized from high-resolution orthophotos. The AMWI database is an amalgamation of thirty-five wetland inventories covering 1998 to 2017. The HFI data, created in 2018, included layers describing anthropogenic disturbances (e.g., agriculture, forestry, energy) across Alberta.

Random Forest Classifier
This classifier is a non-parametric "ensemble" model that utilizes multiple decision trees (DT) in its classification process [38,39]. Target features are classified by constructing multiple bootstrapped, aggregated, and uncorrelated DTs. Each DT is trained on a bootstrap sample drawn from the original dataset and splits each node using the Gini criterion. The pixels of the remotely sensed image are assigned a label based on the majority vote of the DTs. Because the RF model aggregates many distinct DTs, there remains a strong possibility of attaining a near-optimal global solution even if an individual DT is suboptimal. The RF parameters were as follows: number of decision trees = 250, maximum tree depth = 30, and maximum samples per class = 1000.
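A minimal scikit-learn sketch of this setup, using the stated tree count and depth; the 600-pixel, 25-band array and three class labels below are synthetic stand-ins for the study's image stack, and the "maximum samples per class" setting has no direct scikit-learn equivalent, so it is omitted here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for the 25-band pixel stack: 600 pixels x 25 features,
# three classes (e.g., marsh, open water, upland), offset by class index.
X = rng.normal(size=(600, 25)) + np.repeat(np.arange(3), 200)[:, None]
y = np.repeat(np.arange(3), 200)

# Gini-based trees grown on bootstrap samples; the majority vote
# across trees assigns each pixel's label.
rf = RandomForestClassifier(n_estimators=250, max_depth=30,
                            criterion="gini", random_state=0)
rf.fit(X, y)
pred = rf.predict(X[:5])
```

In an operational workflow, `X` would be the flattened pixel values of the multisensor raster and `y` the labels sampled from the AMWI polygons.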

Support Vector Machine Classifier
This non-parametric, distribution-free ML algorithm is known to outperform traditional image classification techniques. The SVM defines a hyperplane that maximizes the margin between the training samples of two classes; based on this hyperplane, end-users can classify pixels or objects [40]. Two outstanding characteristics of the SVM algorithm are its high accuracy and its insensitivity to the number of training samples used in the model. However, SVM requires the selection of a kernel function, which can be time-consuming and subjective. For this study, the non-linear radial basis function (RBF) kernel was used.
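An RBF-kernel SVM of the kind described can be sketched with scikit-learn as follows (the data are again synthetic stand-ins, and the regularization and gamma values shown are illustrative defaults, not the study's tuned parameters):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic stand-in: 300 pixels x 25 bands, three classes offset by class index.
X = rng.normal(scale=0.5, size=(300, 25)) + np.repeat(np.arange(3), 100)[:, None]
y = np.repeat(np.arange(3), 100)

# The RBF kernel implicitly maps pixels into a higher-dimensional space
# where a maximum-margin hyperplane separates the classes.
svm = SVC(kernel="rbf", gamma="scale", C=1.0)
svm.fit(X, y)
```

Multi-class classification is handled internally via one-vs-one decomposition of the binary maximum-margin problem.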

ResU-Net Model Architecture and Modeling Framework
The DL modeling framework proposed for this study was based on the U-Net model architecture. The architecture has two feature paths, one for encoding and another for decoding. The encoding path extracts features or patterns of labeled inputs hierarchically, while the decoding path focuses on learning the spatial information needed to reconstruct the original input data [41]. A typical CNN classifier has three layer types: convolutional, pooling, and fully connected [42]. The convolutional layer serves as a filter for extracting information that the model utilizes; its primary function is feature extraction. Using linear convolutional filters and non-linear activation functions (like tanh, sigmoid, and the rectified linear unit), feature maps can be generated [43,44]. The pooling layer reduces the input data size while retaining the most critical information; down-sampling with the pooling layer also reduces the number of parameters, thereby minimizing the chances of overfitting. The fully connected layers determine the final labels or classes of the input data [17]. These layers integrate local features into global ones, thereby serving as a multi-layer perceptron. Despite their high performance in remote sensing classification tasks, the significant challenges of DL algorithms such as CNNs are the long processing time associated with training and, as earlier indicated, the large volume of training data required compared to shallow ML algorithms like RF or SVM [45].
This study explored a residual neural network framework integrated into the U-Net deep learning model [27,46,47]. The deep residual U-Net (ResU-Net) network integrates seven residual blocks throughout the U-Net framework [47]. He et al. [48] recommend introducing skip connections to successive convolution layers to solve the vanishing gradient problem, a common occurrence in deep CNNs. Integrating a ResNet34 backbone into a U-Net enables the training of many layers while still performing effectively. Recent studies have identified the potential of utilizing ResNet34 models in wetland mapping [27].
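The identity skip connection at the heart of a residual block can be illustrated with a toy NumPy example: the block computes y = relu(F(x) + x), so gradients always have an identity path around the stacked layers (this is a conceptual sketch with dense matrices, not the convolutional blocks of the actual ResU-Net):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Minimal residual unit: y = relu(F(x) + x), with F = two linear maps.

    The identity shortcut (+ x) lets gradients bypass the stacked layers,
    which is what mitigates vanishing gradients in deep CNNs.
    """
    f = relu(x @ w1) @ w2
    return relu(f + x)

x = np.ones((1, 4))
# With zero weights F(x) = 0, so the block reduces to the identity mapping.
y = residual_block(x, np.zeros((4, 4)), np.zeros((4, 4)))
```

This "fall back to identity" behavior is why residual networks can be made very deep without the added layers degrading performance.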

Data Preparation
The input image and label layers were the 25-band S1 SAR, S2 optical, and ALOS-TWI datasets and the AMWI vector polygon containing the wetland and upland classes. For this study, LiDAR-derived TWI input of selected sites was explored for the DL model creation. The ArcGIS Pro "Export Training data tool" was used to prepare the input data for DL modeling. Considering that one of the project's goals was to build an effective operational model that did not demand heavy computational needs, the modeling component was completed with a sub-set of the study area representing the target wetland classes. After several experiments, 411 patches (with dimension sizes of 256 × 256 pixels (i.e., 2560 × 2560 m)) were generated to build the DL models. The number of polygon features used to create the label chips was 53,879. Figure 3 shows samples of exported images and labels used for developing the ResU-Net Model. Table 2 presents the dimension details of the image and feature labels used in the study.

Model Architecture
The U-Net model was developed using the ArcGIS API for Python. Though developed explicitly for biomedical image segmentation [41], the U-Net architecture is widely used in landcover classification [16,17,20,41]. As previously described, the architecture comprises encoder and decoder networks. The encoder network consists of a predefined network (like VGG or ResNet), while the decoder part semantically projects the discriminative features learnt by the encoder (usually back to pixel space). The encoder network applies convolutional blocks to the input images and subsequently downscales these to features at different levels. In the decoder network, the lower-resolution features pass through stages of upsampling and concatenation, both of which involve a series of convolutional operations.
This study used the ArcGIS API for Python ("arcgis.learn") to build the DL model. We selected ResNet34 as the backbone of the U-Net architecture's encoder. This backbone is a 34-layer CNN pre-trained on the ImageNet dataset. The model has 33 convolutional layers, a max-pooling layer (size 3 × 3), an average pooling layer, and a fully connected layer [49]. The problem of overfitting was mitigated by introducing a ReLU activation function [50], while accuracy and training time were improved through batch normalization, as demonstrated by Ioffe and Szegedy [51]. The U-Net models were trained for three epoch settings (50, 100, and 200) with varying learning rates, set to 0.00033113, 0.00013183, and 0.00013183, respectively. The computer configuration used for the DL modeling was as follows: Intel(R) Core(TM) i7-10750H CPU @ 2.60 GHz, 16 GB RAM, with an NVIDIA GeForce GTX 1650 Ti GPU with 4 GB memory.
The shallow classifiers (RF and SVM) and the DL model (U-Net) were developed using a subset of the project area that best described the target wetland classes (fen, marsh, open water, and swamp) and other upland classes. Table 3 describes the polygon data used for training the RF and SVM models.

Evaluation Metrics
We used the F1 score as a metric for assessing wetland classification performance. The F1 score ranges from zero to one, with zero representing the worst score and one the best. We report the macro-averaged F1 score, the arithmetic mean of each class's F1 score. The F1 score is calculated as F1 = 2 × (precision × recall) / (precision + recall).
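The formula and its macro average can be written directly in code (the per-class precision/recall pairs in the example are hypothetical values for illustration only):

```python
def f1(precision, recall):
    """F1 = 2 * (precision * recall) / (precision + recall)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def macro_f1(per_class):
    """Macro-averaged F1: the arithmetic mean of per-class F1 scores."""
    return sum(f1(p, r) for p, r in per_class) / len(per_class)

# Hypothetical per-class (precision, recall) pairs for three classes.
score = macro_f1([(0.9, 0.8), (0.7, 0.6), (0.85, 0.9)])
```

Because the macro average weights every class equally, a rare class such as swamp influences the score as much as the dominant upland class.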
Other statistical measures used for evaluating wetland classification performance are the overall accuracy (OA), producer's accuracy (PA), and user's accuracy (UA), all calculated from the error matrices [52-54]. They are calculated as follows:

UA_i = X_ii / X_i,  PA_j = X_jj / X_j,  OA = S_d / n

where X_ij = the observation in row i, column j; X_i = the marginal total of row i; X_j = the marginal total of column j; S_d = the total number of correctly classified pixels; and n = the total number of validation pixels.
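The three accuracy measures can be computed from a single confusion matrix (the 2 × 2 matrix in the example is a toy illustration, with rows as classified classes and columns as reference classes):

```python
import numpy as np

def accuracy_metrics(cm):
    """Accuracy measures from a confusion matrix.

    cm : square matrix, rows = classified (map) classes, columns = reference.
    UA_i = diagonal / row total i    (user's accuracy, commission side)
    PA_j = diagonal / column total j (producer's accuracy, omission side)
    OA   = sum of diagonal / total validation pixels
    """
    cm = np.asarray(cm, dtype=float)
    ua = np.diag(cm) / cm.sum(axis=1)
    pa = np.diag(cm) / cm.sum(axis=0)
    oa = np.trace(cm) / cm.sum()
    return ua, pa, oa

# Toy 2-class matrix for illustration.
ua, pa, oa = accuracy_metrics([[40, 10],
                               [5, 45]])
```

UA answers "how often is a mapped wetland really a wetland?", while PA answers "how often is a true wetland mapped as one?"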
We used the mean intersection over union (IoU) to assess the accuracies of the semantic segmentation outputs generated in the study. The mean IoU was calculated by averaging the individual class IoU values; this metric gives users a better understanding of segmentation performance than an overall accuracy alone. The IoU is calculated as

IoU = TP / (TP + FP + FN)

where TP = true positives, FP = false positives, and FN = false negatives; the numerator and denominator represent the areas of the predicted and reference objects. Two stages of accuracy assessment were adopted. The first stage evaluated the performance of the ResU-Net model, while the second compared the shallow and deep learning outputs. Using independent AMWI polygon boundaries and manual interpretation of high-resolution orthophotos and Google Earth, a total of 4331 points representing the target wetland classes (marsh = 2101, open water = 2000, swamp = 230) were digitized. Since the fen class covered only a small portion of the project area, it was not considered for accuracy assessment. A visual comparison with other wetland products, such as the ABMI wetland inventory, was performed to complement the second phase of accuracy assessment.
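The IoU formula and its class-wise mean can be sketched as follows (the per-class TP/FP/FN counts are hypothetical values for illustration only):

```python
def iou(tp, fp, fn):
    """Intersection over union: TP / (TP + FP + FN)."""
    return tp / (tp + fp + fn)

def mean_iou(per_class):
    """Mean IoU: the average of the per-class IoU values."""
    return sum(iou(*c) for c in per_class) / len(per_class)

# Hypothetical per-class (TP, FP, FN) pixel counts for three classes.
m = mean_iou([(80, 10, 10), (60, 20, 20), (90, 5, 5)])
```

Unlike overall accuracy, IoU ignores true negatives, so a class occupying a small fraction of the scene cannot score well simply because the background is classified correctly.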

Evaluation of ResNet, SVM, and RF Predictions
The ResNet-34 models developed in this study were trained with varying learning rates and epochs (50, 100, and 200). As the number of epochs increases, the weights of the NN change and the learning curve typically moves from underfitting to overfitting. Underfitting occurs when too few epochs are used to train the network, resulting in a model that fails to capture the underlying data trends. As the number of epochs increases, the model reaches an optimum at which the training set accuracy is maximized; increasing the number of epochs beyond this point leads to overfitting. Once overfitting occurs, the network no longer reflects the reality of the data, as it captures much of the noise present in it. Hence, finding an ideal number of epochs requires testing several values while monitoring the training and validation accuracy. Figure 4 presents the IoU and F1 score metrics of the mapped wetland and upland classes for the three ResNet segmentations. As indicated in the proposed workflow (Fig. 2), 10% of the data used for the modeling process was set aside for validating the DL model. The results of the three ResNet models (50, 100, and 200 epochs) are shown in Fig. 4. The F1 score for the upland class was the same for all three ResNet epoch settings evaluated, with a value of 0.98 (Table 4). Figure S1 in the Supplementary document shows examples of the ground truth data and the ResNet model prediction. Table 5 presents the assessment results for the optimal ResNet, SVM, and RF model wetland predictions. We observed a significant increase in the average F1 score of the ResNet model prediction (0.82) compared to the SVM and RF predictions (0.69 each; Table 5). The Supplementary materials contain a detailed outline of the confusion matrices used to calculate the presented accuracy metrics (see Table S2). Figure 5 presents the OA and overall kappa (OK) results for the ResNet, SVM, and RF models.

Comparison of Deep and Shallow Learning Classifiers
The best-performing model was the ResNet model, with an OA of 78% and OK of 0.61, followed by RF (73% and 0.52) and SVM (71% and 0.49), respectively. Figures 6, 7, and 8 show the wetland predictions of the three models evaluated in this study. The SVM and RF models (Figs. 6 and 7) show a significant occurrence of mixed pixels, particularly marshes and swamps confused with upland classes (such as agricultural land). This southern part of Alberta is known to have numerous wetlands converted to agricultural land [55]. This occurrence was prominent in the inset maps showing Beaverhill and Gough Lakes (Figs. 6 and 7). In contrast, the ResNet segmentation (Fig. 8) accurately predicted the wetland classes, effectively delineating them from the surrounding upland classes (such as agricultural lands) (see inset maps of Beaverhill and Gough Lakes in Fig. 8).

Discussion
For this study, three large-scale wetland inventories were generated using a fusion of open-source satellite data and deep/shallow learning models. We compared two shallow models (SVM and RF) and the deep ResNet CNN model for predicting wetlands across the study area. The SVM and RF algorithms had difficulties accurately delineating wetland classes, such as marshes and swamps, from adjacent landscapes, such as agricultural lands, which are a prominent land use in the PGNR of southern Alberta. Wetland loss in the region is an issue of concern, as shallow ephemeral wetlands in agricultural fields experienced high impacts and low recovery rates [56]. The accuracy assessment and visual inspection of the generated products (mainly those of SVM and RF) showed that some wetlands, such as swamps and marshes, proved challenging to map, especially the mapped wetlands' boundaries with surrounding open water bodies. Visually comparing the ResNet predictions with those of the SVM and RF algorithms, it is evident that the former produces wetland class boundaries with better ecological meaning. For example, the marshes (a prominent wetland class in the region), depressions ranging from temporary to semi-permanent and clustered across large landscapes in the study area, were accurately delineated. Overall, the ResNet CNN model better captured the natural complexities of the swamp and marsh wetlands, outperforming the standard shallow ML algorithms evaluated (SVM and RF). In this study, we explored the use of a deep ResNet CNN model and shallow learning models like SVM and RF; we plan to explore the potential of other DL models for large-scale wetland mapping in the same region. The input data for wetland mapping consisted of open-access multi-temporal S2 optical and S1 SAR data and the ALOS-derived TWI.
We developed two apps to display the optimal wetland product generated using the ResNet model: (i) a GEE app (https://lexisgis.users.earthengine.app/view/alberta-wetland-explorer-pgnr) and (ii) an ArcGIS Web App (https://jolexy.maps.arcgis.com/apps/webappviewer/index.html?id=8748e278f56c486e835bf2fed77a0402). The ArcGIS Web App has several widgets allowing users to better interact with the results generated in the study (see Supplementary Materials Figures S2 and S3).
Studies have explored wetland mapping using DL models across varied landscapes of Canada [9,23]. However, specifically for the PGNR of southern Alberta, few studies have explored the potential of utilizing open-source optical, radar, and topographic indices for prairie pothole mapping on a large scale. ABMI recently released a province-wide wetland inventory for Alberta [57], in which the PGNR was mapped using the RF classifier. Considering the large-scale mapping implemented in this study, the overall accuracy of 78% is a reasonable estimate of the product's reliability compared to available open-access products such as the AMWI, which is an out-of-date product. Figure S4 in the Supplementary materials shows a visual comparison of the ResNet prediction generated in the study, the ABMI and AMWI products, and recent high-resolution imagery sourced from ESRI. The ResNet wetland prediction provides a realistic and up-to-date depiction of wetlands in the examples shown in Figure S4 (Supplementary Materials).
In addition, this study demonstrates the value of utilizing freely available satellite imagery for wetland mapping in the region. By combining multi-temporal satellite data with topographic information as inputs to a CNN model, we have accurately predicted the spatial distribution of wetlands in this challenging landscape of Alberta.

Conclusion
The study aimed to develop a DL model for large-scale wetland classification in Alberta's PGNR using a fusion of multi-temporal S2 optical and S1 SAR data and ALOS elevation data. Using a 25-band multisensor (optical, SAR, and ALOS elevation derivatives) and multi-seasonal (fall/summer months over a 4-year interval, 2017 to 2020) image stack as input data, we compared the performance of the three evaluated models (RF, SVM, and ResU-Net). The accuracy metrics of the three products indicated that the CNN model significantly outperformed the shallow ML algorithms (SVM and RF).
The OA and OK of the ResNet model were 78% and 0.61, respectively, making it the best-performing model. The OA values of the RF and SVM predictions were 73% and 71%, while the OK values were 0.52 and 0.49, respectively. The ResNet model's average F1 score was 0.82, while the SVM and RF scored 0.69 each. It was evident that the ResNet CNN predictions outperformed the SVM and RF models evaluated in the study. The outcome of this study demonstrates the potential of the ResNet CNN model and of exploiting open-access satellite imagery to generate credible products across large landscapes. Using GEE as a source of open-source satellite data makes it possible for end-users to produce low-cost and accurate wetland inventories. Furthermore, the ResNet CNN predictions accurately delineate wetlands from surrounding upland landscapes, a transferable approach that holds potential for application in large-scale wetland mapping across other regions.
Fig. 8 Convolutional neural network prediction of wetlands and upland classes across the Parkland and Grassland Natural Region. Top left, Beaverhill Lake; bottom left, Crawling Lake; top right, Buffalo Lake; and bottom right, Gough Lake

Acknowledgements
The authors appreciate the anonymous reviewers and editors for their valuable comments and inputs to improve the quality of this manuscript. Thanks to Google Earth Engine, Copernicus, and JAXA Earth Observation Research Center for making free Sentinel and digital surface elevation data available in the study. Special thanks to the Alberta Environment and Parks, the Government of Alberta, and the Alberta Biodiversity Monitoring Institute for making available open-source wetland inventory data utilized in this study.

Author Contributions
The first author developed the conception of the work and performed analysis/interpretation of the results. All authors developed the ideas and framework of the manuscript and edited the final version submitted to the journal.
Funding Jolexy Environmental Services Limited funded this research.

Declarations
Competing Interests The authors declare no competing interests.

Fig. 9
Comparison of random forest, support vector machine, and deep learning (ResNet) wetland classified outputs with high-resolution Maxar imagery dated July 25, 2019