This study employs aerial images from 2008 and satellite images from 2019, harmonized to a common spatial resolution and matching spectral characteristics. We developed a deep learning model trained on 325,540 manually delineated tree crown labels from both datasets, and segmented tree crowns for both 2008 and 2019. We used permanent field plots covering the study period to evaluate the mapped changes. Tree crowns were converted to biomass following ref.32. Finally, we tracked each tree between 2008 and 2019, marking trees found only in 2008 as loss and trees found only in 2019 as gain.
Aerial and satellite images. We used two types of publicly available images: aerial images at 0.25 × 0.25 m resolution captured in June–August of 2008 and 2009 using a Vexcel UltraCam-X aerial digital photography camera (25), and WorldView satellite images at 0.5 × 0.5 m resolution for 2019 provided by the Rwanda Space Agency (RSA). Both image sets were stored in an 8-bit unsigned integer format, and only the spectral bands common to both (i.e. red, green, blue) were retained for consistency.
Pre-processing of the images. The images from 2008 are from an aerial campaign at 0.25 m resolution, while those from 2019 are from the WorldView satellites, pansharpened and resampled to 0.5 m resolution. We first resampled both image sets to a common resolution of 0.5 m. Next, we trained a deep learning model to detect and segment cloud and shadow pixels (Supplementary Fig. 4), which were then masked out and excluded from further analysis along with wetland areas (see section “cloud detection and masking”). We performed a spatial overlay of the images and eliminated the spatially corresponding cloud and shadow pixels by setting their values to no-data in both images (Supplementary Fig. 1), excluding about 1.4% of the farmlands’ total area and about 3.3% of the degraded forests’ total area. These masked-out areas are not included in the analysis; we chose not to extrapolate our results to areas with missing data, as this would add uncertainty. Finally, histogram matching was performed to align the cumulative distribution functions of pixel values, giving the 2008 aerial images and their corresponding 2019 satellite images a consistent distribution of pixel intensities (Supplementary Fig. 5). Note that these image adjustments make the results from this study incomparable to ref.32.
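The histogram-matching step amounts to a quantile mapping between the two pixel-value distributions. The following is a minimal single-band sketch of the idea; the function name and NumPy-only implementation are ours, not the production pipeline:

```python
import numpy as np

def match_histograms(source, reference):
    """Remap source pixel values so their cumulative distribution
    matches the reference image's (single band)."""
    s_values, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source quantile, look up the reference value at that quantile.
    matched = np.interp(s_cdf, r_cdf, r_values)
    return matched[s_idx].reshape(source.shape)
```

In practice this would be applied per band (red, green, blue) so that the 2008 aerial mosaic and the 2019 satellite mosaic share consistent intensity distributions.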
Environmental data. We use the national forest cover map of 2012 and the agricultural spatial extent map27 to classify farmlands and natural degraded forests. The map was created using on-screen digitisation over the same aerial images of 2008 (refs. 21,26), with a forest defined as “a group of trees higher than seven m and a tree cover of more than 10%, or trees able to reach these thresholds in situ on a land of about 0.25 ha or more”. A shrub was defined as “a group of perennial trees smaller than seven meter at maturity and a canopy cover of more than 10% on a land of about 0.25 ha or more”. The forest cover map has five major classes: natural forest, forest plantations, bamboo stands, shrubs, and wooded savanna, with the natural forest class subdivided into “closed natural forest” and “degraded natural forest”. We refer to degraded natural forest hereafter as degraded forests. The spatial overlay of forest cover and agricultural extent allowed the spatial delineation of the two land cover classes of focus: degraded forests and farmlands. Farmlands are considered to be areas under agriculture, together with Eucalyptus and non-Eucalyptus plantations, excluding plantations in urban areas and wetlands. The agriculture and plantation areas were combined owing to the prevalence of encroachment among these cover types44,58, none of which, however, has been spatially documented35.
Nation-wide mapping of trees using deep learning. We adopt the tree segmentation approach developed in ref.59 to automatically detect and map trees at a national scale. The training dataset comprised 131,269 manually delineated tree crowns spread over 4,039 patches of 256 × 256 pixels for the 2008 aerial images (covering about 1,698.8 ha), and 194,271 manually delineated tree crowns spread over 7,390 patches of 256 × 256 pixels for the 2019 satellite images (covering about 3,133.9 ha). The training samples are spread across the country, representing the full range of biogeographical conditions in Rwanda (Supplementary Fig. 6). We trained a UNet60,61 network using both the 2008 and 2019 images as inputs, assigning higher weights to boundary areas between tree crowns in closed canopies to reduce clumped predictions, especially in dense canopy areas, as proposed in ref.60 and implemented in Brandt et al. (2020). We selected as the best model the one with the lowest loss value on a separate validation set (20% of the training data). We set aside 52 randomly selected labeled patches containing 17,306 manual labels of individual trees (i.e. over 5% of the total manual labels) as an independent test set. The 52 patches overlapped both the 2008 and 2019 images and contained 6,639 manually labeled tree crowns in 2008 and 10,667 in 2019. The number of single trees mapped within a hectare is referred to as “tree density”, while the percentage of a hectare covered by tree crowns is referred to as “canopy cover”.
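The boundary weighting described above can be illustrated with a per-pixel weighted cross-entropy. This NumPy sketch is our own simplified illustration of the idea from refs.59,60 (the weight value and function name are arbitrary choices, not the trained model's actual loss):

```python
import numpy as np

def weighted_bce(pred, target, boundary_mask, boundary_weight=10.0):
    """Binary cross-entropy where pixels on boundaries between adjacent
    crowns (boundary_mask == 1) count `boundary_weight` times more,
    discouraging the network from merging neighbouring crowns."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    weights = np.where(boundary_mask == 1, boundary_weight, 1.0)
    return float((weights * bce).sum() / weights.sum())
```

In training, the boundary mask would be derived from the manually delineated crown polygons (pixels where two crowns touch), so a mistake on a crown boundary costs more than the same mistake in a crown interior.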
Separating clumped trees in dense canopy areas. Accurately mapping trees individually, without multiple trees being merged, is important, especially for accurate biomass and C stock estimations. However, in dense canopy areas, some trees were predicted as clumps by our tree detection approach. We therefore adopt a post-processing method to separate clumped tree crowns and fill any gaps inside a single crown32. Briefly, assuming a round tree crown shape, the post-processing method determines the crown centers in the crown predictions and then relabels the pixels based on weighted distances to the identified crown centers. The process is described in more detail in ref.32.
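The center-based relabelling can be sketched as follows; here we use a plain (unweighted) Euclidean nearest-center assignment as a simplified stand-in for the weighted-distance relabelling of ref.32 (function name and inputs are illustrative):

```python
import numpy as np

def split_clump(mask, centers):
    """Relabel a clumped crown mask by assigning each crown pixel to the
    nearest identified crown centre; returns an integer label image with
    one label per crown (0 = background)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1)          # (N, 2) crown pixels
    ctr = np.asarray(centers, dtype=float)    # (K, 2) crown centres (row, col)
    d = np.linalg.norm(pts[:, None, :] - ctr[None], axis=2)
    labels = np.zeros(mask.shape, dtype=int)
    labels[ys, xs] = d.argmin(axis=1) + 1     # 1-based crown ids
    return labels
```

Replacing the plain distance with one weighted by predicted crown size recovers the round-crown assumption described above.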
Cloud detection and masking. The 2019 WorldView images were partly cloudy, especially in the highlands of Rwanda, where most of the natural forests in protected areas are located. We trained a separate UNet-based model to detect and segment cloud and shadow pixels in the images, using 870 manually delineated clouds across 1,877 areas covering 5,219 ha. The trained model was deployed over the 2019 WorldView images to detect clouds and shadows (Supplementary Fig. 1), and the results were manually checked to ensure accuracy and consistency, given that cloud detection with machine learning is a well-established task that typically achieves very high accuracy in remote sensing studies62,63,64. The detected cloud and shadow pixels were then masked out in the 2019 images, as were the spatially corresponding pixels in the 2008 images, for consistency and comparability. The masking was done via a spatial overlay analysis, where pixels with clouds or shadows were set to no-data in both images.
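The co-masking overlay amounts to setting the detected cloud/shadow pixels to no-data in both years; a minimal sketch, where the no-data value of 0 and the function name are our assumptions:

```python
import numpy as np

NODATA = 0  # assumed no-data value for the 8-bit images

def co_mask(img_2008, img_2019, cloud_mask):
    """Set cloud/shadow pixels to no-data in BOTH years so the two maps
    stay spatially comparable (only pixels valid in both are compared)."""
    out_a, out_b = img_2008.copy(), img_2019.copy()
    out_a[cloud_mask] = NODATA
    out_b[cloud_mask] = NODATA
    return out_a, out_b
```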
Tree-level allometry for biomass and carbon stock estimation. Following the general principle of allometric equations, which statistically relate structural properties of a tree to its biomass65,66, we adopt an existing tree-level allometry approach that assumes a relationship between crown area and AGB, with variation between biomes67. The approach uses five allometric equations to establish a relationship between crown area-derived crown diameter (CD = 2 × √(crown area / π)), diameter at breast height (DBH), and above-ground biomass (AGB) of trees in savannas and shrublands (equations 1 and 2); plantations, farmlands, and urban and built-up areas (equations 1 and 3); and natural forests (equations 4 and 5). The performance evaluation and errors for the equations are reported in ref.32.
Here CD is the crown diameter, DBHpredicted is the estimated DBH, AGBpredicted is the estimated AGB, E is a measure of the environmental stress68 (a gridded layer is available at https://chave.ups-tlse.fr/pantropical_allometry.htm), and ρ is the wood density (fixed at 0.54: a weighted average of 6,161 tree records from ref.46).
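The crown diameter entering this allometric chain follows directly from the mapped crown area under the circular-crown assumption stated above; equations 1–5 themselves are given in ref.32 and are not reproduced here. A minimal sketch:

```python
import math

def crown_diameter(crown_area_m2):
    """Crown diameter (m) from a mapped crown area (m^2), assuming a
    circular crown: CD = 2 * sqrt(area / pi)."""
    return 2.0 * math.sqrt(crown_area_m2 / math.pi)

# A crown of area pi m^2 has diameter 2 m.
```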
Tree-level AGB estimation via allometry entails several sources of uncertainty, which increase especially in the absence of field-measured height or DBH67,68. In this study, the uncertainties start with the DBH estimation from CD, propagate through the AGB estimation from DBH, and continue through the use of a standard biomass-to-carbon conversion factor to estimate AGC from AGB33,34. Wood density induces further uncertainty in the AGB estimation, given the intra- and inter-species variability of the trees and the transition from early to late successional stages, especially in natural forests46,69. A thorough evaluation of the allometric approach using NFI field measurements can be found in ref.32, and a summary is shown in Extended Data | Table 1.
Evaluation and uncertainties of the tree crown segmentation. We set aside 52 labeled patches containing 17,306 individual trees (over 5% of the total training labels) as an independent test set to evaluate the model performance. The 52 patches overlapped both the 2008 and 2019 images and contained 6,639 manually labeled tree crowns in 2008 and 10,667 in 2019. This dataset was used neither during training nor during validation. The plot-wise comparison indicates an overall underestimation, with r2 of 0.84 for the tree count, 0.89 for canopy area coverage, and 0.78 for mean crown cover in 2008 (Extended Data | Fig. 1). The evaluation for 2019 revealed r2 of 0.71 for the tree count, 0.78 for the canopy area, and 0.72 for the mean crown area. A confusion matrix indicated a true positive rate of 74%, a false negative rate of 26%, and an overall accuracy of 95% for 2008 (Extended Data | Table 4a); and a true positive rate of 77%, a false negative rate of 23%, and an overall accuracy of 93% for 2019 (Extended Data | Table 4b).
Evaluation of the change 2008–2019. To evaluate the accuracy of our final results and quantify the overall error of the reported changes between 2008 and 2019, we used field measurements in 350 permanent field plots that are regularly visited and located across Rwanda. In total, 4,217 trees were measured in these plots, and the planting year of each tree was recorded (Supplementary Methods | Location and distribution of sample plots and sampling strategy; Supplementary Fig. 7). In 79 of the plots, most trees were planted before 2006, so we expected the measured numbers to match our results for both 2008 and 2019: 1,057 trees were measured in 2008 and 1,078 trees in 2019, an increase of about 1.9%. We mapped 970 trees for 2008 (8.2% underestimation) and 1,001 trees for 2019 (7.1% underestimation) within the 79 plots, indicating a change of about 3.2% in our results. For the remaining 217 plots, the majority of the trees were planted between 2006 and 2016, so the measured numbers are expected to match our predictions for 2019. Field measurements indicate a count of 3,514 trees, and we mapped 3,186 trees in 2019, an underestimation of about 9.3%. All comparisons show a consistent bias of 7–9% for both 2008 and 2019, and our estimates of change are close to the observed ones.
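The reported under-estimation percentages follow directly from the plot counts above; as a sanity check (the helper name is ours):

```python
def pct_underestimate(measured, mapped):
    """Underestimation of mapped tree counts relative to field counts, in %."""
    return round(100 * (measured - mapped) / measured, 1)

# 79 plots (trees mostly planted before 2006):
u_2008 = pct_underestimate(1057, 970)    # -> 8.2
u_2019 = pct_underestimate(1078, 1001)   # -> 7.1
# 217 plots (trees mostly planted 2006-2016), 2019 only:
u_rest = pct_underestimate(3514, 3186)   # -> 9.3
```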
Cost-benefit analysis of tree restoration potential. We refer to the methodology applied by the Ministry of Environment of Rwanda in 201438 to estimate the tree restoration potential in farmlands and degraded forests. The study indicates an optimal restoration of agricultural fields by planting 300 trees ha−1. This would require a financial investment of about RWF 843,600 ha−1 (about 1,200 USD according to the 2014 exchange rate in Rwanda70) to transition from pure agriculture to agroforestry over a 20-year period. This transition would also generate an income (i.e. return on investment) ranging between 12% and 38% of the investment, and would leave enough space for crops to grow without competing with on-farm trees. Our tree restoration potential scenario in farmlands considers adding about 220 trees ha−1 to the currently mapped average tree density (about 80 trees ha−1) to reach 300 trees ha−1. The tree restoration potential in degraded forests follows ref.39, which indicated that optimal forest restoration in the tropics requires planting about 1,000 trees ha−1, with a survival rate between 50% and 90% depending on the planted tree species and the location. We first estimate planting about 700 trees ha−1 (1,000 trees ha−1 with an average 70% survival rate). Finally, we consider adding about 570 trees ha−1 to the currently mapped average tree density in the degraded forests (130 trees ha−1) to reach 700 trees ha−1. This number aligns well with previous findings in Rwanda, where the number of trees with a DBH larger than 10 cm ranged between 200 and 958 trees ha−1, while the number of trees with DBH < 5 cm ranged between 350 and 1,844 ha−1 within a tropical montane rainforest46. Our estimates agree with the existing study on “the global potential for increased storage of carbon on land”43 (Extended Data | Fig. 2b).
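The planting numbers in the two scenarios are simple differences between the target density and the currently mapped average density:

```python
# Farmland scenario: top up mapped density to the 300 trees/ha optimum.
FARM_TARGET, FARM_CURRENT = 300, 80
planting_farm = FARM_TARGET - FARM_CURRENT        # 220 trees/ha to plant

# Degraded-forest scenario: 1,000 trees/ha planted, ~70% surviving.
FOREST_PLANTED, SURVIVAL = 1000, 0.70
FOREST_TARGET = round(FOREST_PLANTED * SURVIVAL)  # 700 trees/ha target
FOREST_CURRENT = 130
planting_forest = FOREST_TARGET - FOREST_CURRENT  # 570 trees/ha to plant
```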
A monetized value of the restoration benefits, specifically in the local context of Rwanda, was obtained by referring back to ref.38, where the required investment cost was estimated at about 376 USD per ha via assisted natural regeneration over a 20-year period. We assume that this cost is required to restore degraded forests at the recommended average tree density indicated above (ref.39). The revenue from potential carbon credits was predicted to be almost the same as the investment cost, making the return on investment approximately zero.
References for Methods
- REMA. Rwanda state of environment and outlook report. Rwanda Environment Management Authority, Kigali (2009)
- Brandt, M., Tucker, C.J., Kariryaa, A. et al. An unexpectedly large count of trees in the West African Sahara and Sahel. Nature 587, 78–82 (2020). https://doi.org/10.1038/s41586-020-2824-5
- Ronneberger, O., Fischer, P. & Brox, T. U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (eds. Navab, N. et al.) 234–241, Springer (2015)
- Koch, T. L., Perslev, M., Igel, C. et al. Accurate segmentation of dental panoramic radiographs with U-Nets. 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), 15-19 (2019). https://doi.org/10.1109/ISBI.2019.8759563
- López-Puigdollers, D., Mateo-García, G. & Gómez-Chova, L. Benchmarking deep learning models for cloud detection in Landsat-8 and Sentinel-2 images. Remote Sens. 13(5), 992 (2021). https://doi.org/10.3390/rs13050992
- Mahajan, S., Fataniya, B. Cloud detection methodologies: variants and development—a review. Complex Intell. Syst. 6, 251–261 (2020). https://doi.org/10.1007/s40747-019-00128-0
- Jeppesen, J. H., Jacobsen, R. H., Inceoglu, F. et al. A cloud detection algorithm for satellite imagery based on deep learning. Remote Sens. Environ. 229, 247-259 (2019). https://doi.org/10.1016/j.rse.2019.03.039
- Chave, J., Andalo, C., Brown, S. et al. Tree allometry and improved estimation of carbon stocks and balance in tropical forests. Oecologia 145, 87–99 (2005). https://doi.org/10.1007/s00442-005-0100-x
- Brown, S., Gillespie, A. J. R. & Lugo, A. E. Biomass estimation methods for tropical forests and the application to forest inventory data. For. Sci. 35, 881–902 (1989)
- Jucker, T., Caspersen, J., Chave, J. et al. Allometric equations for integrating remote sensing imagery into forest monitoring programmes, Glob. Chang. Biol. 23, 177–190 (2017). https://doi.org/10.1111/gcb.13388
- Chave, J. et al. Improved allometric models to estimate the aboveground biomass of tropical trees. Glob. Change Biol. 20, 3177–3190 (2014). https://doi.org/10.1111/gcb.12629
- Chave, J., Coomes, D., Jansen, S., Lewis, S. L., Swenson, N. G., Zanne, A.E. Towards a worldwide wood economics spectrum. Ecol. Lett. 12 (4) (2009). https://doi.org/10.1111/j.1461-0248.2009.01285.x
- BNR. Exchange rate. The National Bank of Rwanda (2020). https://www.bnr.rw/currency/exchange-rate/?L=0&tx_bnrcurrencymanager_master%5Baction%5D=archive&tx_bnrcurrencymanager_master%5Bcontroller%5D=Currency&tx_bnrcurrencymanager_master%5B%40widget_0%5D%5BcurrentPage%5D=9944&cHash=7edaea68834a2ff5c6014bea40f001c2