Construction of a database of rice canopy images and rough grain yield
Field campaigns were conducted in 2019 and 2020 at 20 locations in seven countries (Côte d'Ivoire, Senegal, Japan, Kenya, Madagascar, Nigeria, and Tanzania). Data on rice growth traits and digital images were collected in seed production plots as well as in experimental fields at research stations and in farmers' fields (Supplementary Table S1). At maturity, RGB images were captured vertically downwards over the rice canopy from a distance of 0.8 to 0.9 m using a digital camera (Supplementary Fig. S1a). The digital cameras used in this study are listed in Supplementary Table S1. Five images were taken per harvesting plot by slightly shifting the camera for image augmentation. Each rice canopy image covers approximately 1 m², which corresponds to the harvesting area proposed by the Food and Agriculture Organisation (FAO) and used by Japan for agricultural statistics34. Rough grain yield, which included both filled and unfilled grains, was measured at the corresponding plot or at larger plots where yield data were collected in field experiments (Supplementary Table S1). Rice yields were reported at 14% moisture content. The aboveground total dry weight and filled grain weight were also recorded in most studies. Rice yield level, rice production system, rice variety, and key crop management practices are shown in Supplementary Table S1.
The database consists of eight categories, as presented in Fig. 1c. For most of the training, validation, and test data, we used only a single image per plot. These three categories form the main part of the database and were randomly split at a ratio of approximately 72:14:14. After splitting, the training images were augmented 4-fold by flipping them horizontally, vertically, and in both directions, which resulted in 17,764 images in the training data. For the panicle removal, angle, shooting date (see the following sections), and prediction data, we used five replicated images per plot.
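As a minimal sketch of the 4-fold flip augmentation described above (the function name is illustrative, not taken from the released code):

```python
import numpy as np

def augment_fourfold(img):
    """Return the original image plus its horizontal flip, vertical flip,
    and the combination of both flips (4 images per input)."""
    h = np.flip(img, axis=1)        # horizontal flip (left-right)
    v = np.flip(img, axis=0)        # vertical flip (up-down)
    hv = np.flip(img, axis=(0, 1))  # both flips combined
    return [img, h, v, hv]
```

Applied to every training image, this expands the training set 4-fold (17,764 / 4 = 4,441 original training images).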
The prediction data consisted of datasets collected at Moshi (3.45S, 37.38E), Tanzania, and Tokyo (35.41N, 139.29E), Japan; these data were not included in any other category. For the time-of-day data, sequential images of the canopy were captured using a fixed camera. In total, 4,820 yield records and 22,067 images of 462 rice cultivars were used in this study (Fig. 1c, Supplementary Table S2).
Panicle removal and experiments for robustness evaluation
The panicle removal experiment was conducted at Kyoto (35.2N, 135.47E) and Tsukuba (36.03N, 140.04E), Japan. Five replicated canopy images were acquired of the plot to be harvested. Two panicles per hill were then removed at random positions in the canopy, and five images were acquired again. The grain weight of the collected panicles was measured separately. By repeating this process until all panicles had been removed from the harvesting plot, a series of images with gradually decreasing panicle numbers, together with the corresponding yields, was obtained. The dataset from Tsukuba was included in the training, validation, and test data, and the dataset from Kyoto was used to evaluate the impact of panicle removal on yield estimation.
The angle-changing experiment was conducted at M’bé (7.87N, 5.11W), Côte d’Ivoire. A curved rail with a diameter of 1.8 m was fixed above the canopy to be harvested. By shifting the position of the camera along the rail, images were captured from various depression angles while keeping the centre of the image constant. The depression angles were set to 20, 30, 40, 50, 60, 70, 80, and 90 (control) degrees. Data for the angle-changing experiment were collected from 25 harvested plots. The time-of-day experiment was conducted at Kyoto, Japan. A HykeCam SP2 camera (Hyke Inc., Japan) was fixed above the canopies of cvs. Koshihikari and Takanari. The canopy images were automatically recorded every 30 min, starting 5 days before the harvest date for Koshihikari and 11 days before harvest for Takanari. After the recording finished, the plots were harvested using the same protocol as in the other experiments. The Takanari data were used for model development, and the Koshihikari data were used for the time-of-day analysis.
The shooting date experiment was conducted at M’bé, Côte d’Ivoire, and Marovoay, Madagascar. At M’bé, 22 cultivars grown in a total of 34 plots were used. The canopy images of these plots were acquired once a week from 1 to 4 weeks after 50% heading, at 2 days and 1 day before harvest, and at harvest. Only the images taken 2 days and 1 day before harvest were used for model development, while the others were used for the shooting date analysis. After the final images were recorded, the rice plants were harvested using the common protocol. At Marovoay, the canopy images of seven plots were recorded from 2 days before to 14 days after 50% heading. Six images were taken every 10 min from 1200 to 1250 hrs and were used for the shooting date analysis.
Image processing and development of convolutional neural network model
The RGB images of the rice canopy were recorded with an aspect ratio of 4:3 or 16:9. For images recorded at 16:9, the edges of the long side were trimmed to a ratio of 4:3. The images were then resized to 450 × 600 pixels for storage in the database, and resized again to a square of 512 × 512 pixels in 8-bit PNG format as inputs for the CNN model. A bilinear algorithm was used for resizing. The brightness values of each RGB channel were divided by 255 to scale them from 0 to 1. These values were then standardised using the mean and variance calculated from all images in the training dataset. The mean and variance of the RGB channels for the training dataset were [R, G, B] = [0.490, 0.488, 0.281] and [0.230, 0.232, 0.182], respectively.
The structure of the CNN was determined using an automated structure search with the Neural Network Console software (Sony Network Communications Inc., Japan). The resulting CNN structure (Supplementary Fig. S3) was then deployed in Python (version 3.7) with the PyTorch framework (version 1.7). The loss function and optimizer were the mean absolute error and the Adam optimizer, respectively. The optimal learning rate and batch size were determined by testing combinations of these hyper-parameters: batch sizes of 16, 32, 64, and 128 were combined with learning rates of 0.0001, 0.0002, 0.0005, 0.0008, and 0.001, and the learning process was replicated 10 times for each combination. The number of epochs was set to 100, and learning proceeded by minimising the loss between estimated and observed yields in the training dataset. The validation loss was also calculated at every epoch, and the model with the lowest validation loss was recorded. The relative root mean square error (rRMSE; defined below) for the test dataset was calculated for the models from all hyper-parameter combinations and averaged across the 10 replications.
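The preprocessing chain above can be sketched as follows. This is an illustrative implementation, assuming that standardisation divides the centred values by the square root of the reported per-channel variance:

```python
import numpy as np
from PIL import Image

# Training-set channel statistics reported in the Methods
# ([R, G, B] means and variances).
MEAN = np.array([0.490, 0.488, 0.281], dtype=np.float32)
VAR = np.array([0.230, 0.232, 0.182], dtype=np.float32)

def preprocess(img):
    """Resize a 4:3 canopy image to 512 x 512 with bilinear interpolation,
    scale brightness to [0, 1], standardise each RGB channel, and return
    a float32 array in CHW order as CNN input."""
    img = img.convert("RGB").resize((512, 512), Image.BILINEAR)
    x = np.asarray(img, dtype=np.float32) / 255.0  # scale 0-255 -> 0-1
    x = (x - MEAN) / np.sqrt(VAR)                  # standardise (std = sqrt of variance)
    return x.transpose(2, 0, 1)                    # HWC -> CHW
```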
The best combination of batch size and learning rate was determined, and the recorded model was used in the present study.
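A minimal PyTorch sketch of the training procedure described above (simplified and illustrative; the structure-searched CNN and the actual data loaders are not reproduced here):

```python
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, lr, epochs=100, device="cpu"):
    """Minimise the MAE between estimated and observed yields with Adam,
    keeping the weights from the epoch with the lowest validation loss."""
    model.to(device)
    criterion = nn.L1Loss()  # mean absolute error
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    best_val, best_state = float("inf"), None
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(x.to(device)), y.to(device))
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val = sum(criterion(model(x.to(device)), y.to(device)).item()
                      for x, y in val_loader) / len(val_loader)
        if val < best_val:  # record the best-validation model
            best_val = val
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    model.load_state_dict(best_state)
    return model, best_val
```

In the study's grid search, this loop would be run 10 times for each batch size and learning rate combination, and the test-set rRMSE averaged across replications.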
Occlusion-based method to quantify the additive effect on the yield estimation
The occlusion-based method25 was applied to visualise the spatial distribution of the additive effect on yield estimation. The 450 × 600 pixel image of the rice canopy was partly masked by a grey square with a brightness of [R, G, B] = [0.5, 0.5, 0.5]. The size of the grey square was 30 × 30 pixels. By shifting the position of the grey square in steps of 30 pixels along both the row and column directions of the image array, 300 images were generated per original image (Supplementary Fig. S5a, b). Together, this series of 300 masked images covered every portion of the original image. The rough grain yield was then estimated with the CNN model for each masked image, and the difference from the estimate for the original image was calculated. These values were overlaid on the original image as a heat map (Supplementary Fig. S5c).
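The masking step can be sketched as follows (illustrative code, not the authors' implementation):

```python
import numpy as np

def occlusion_images(img, patch=30, grey=0.5):
    """Slide a patch x patch grey square over a non-overlapping grid of the
    image, returning each masked copy with its top-left position.
    For a 450 x 600 image and patch=30 this yields 15 x 20 = 300 images."""
    h, w = img.shape[:2]
    masked_imgs, positions = [], []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            m = img.copy()
            m[r:r + patch, c:c + patch] = grey  # grey mask [0.5, 0.5, 0.5]
            masked_imgs.append(m)
            positions.append((r, c))
    return masked_imgs, positions
```

Each masked image is passed through the CNN, and the difference between its yield estimate and that of the unmasked image is mapped back to the corresponding (row, column) position to form the heat map.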
Statistical analyses, data summary, and code availability
The 4,820 observations of rough grain yield were summarised by calculating the average, maximum, and minimum yields. The data were categorised by the country of collection, and the average yield in each country was calculated. The R² and rRMSE were calculated to evaluate model performance in each analysis. The rRMSE is defined as follows:
$$\mathrm{rRMSE} = \frac{100}{\bar{y}}\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(f_i - y_i\right)^{2}} \quad (1)$$
where $\bar{y}$ is the average of the observed yield, n is the size of the data, and $f_i$ and $y_i$ are the individual estimated and observed yields, respectively. The rough grain yield for the panicle removal, angle, shooting date, and prediction datasets was estimated from five replicated images per harvested plot and then averaged. The standard error of the five replicated estimations was calculated in the panicle removal experiment. For the angle-changing experiment, the first, second, and third quartiles of the deviation between the estimated and observed yields were calculated across the 25 plots and displayed, together with their average, maximum, and minimum values, as a box plot. For the time-of-day experiment, the estimated yield for every 30 min was averaged across 6 successive days, and the standard error was calculated. Segmented linear regression was adopted to describe the relationship between days after 50% heading and the relative yield observed in the shooting date experiment. For the data collected at M’bé, Côte d’Ivoire, Eq. (2),
and for the data collected at Marovoay, Madagascar, Eq. (3)
were used, respectively. The parameters a and b are constants, y is the ratio between the observed and the final yield, and x is the number of days after 50% heading. The parameters c1 and c2 are the breakpoints of the segments, and Eq. (3) represents the three-segment regression. The function I is the step (indicator) function, defined as follows:
$$I(u) = \begin{cases} 1, & \text{if condition } u \text{ holds} \\ 0, & \text{otherwise} \end{cases}$$
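The rRMSE defined above can be computed as, for example:

```python
import numpy as np

def rrmse(f, y):
    """Relative RMSE (%): the RMSE of estimates f against observations y,
    divided by the mean observed yield."""
    f = np.asarray(f, dtype=float)
    y = np.asarray(y, dtype=float)
    rmse = np.sqrt(np.mean((f - y) ** 2))
    return 100.0 * rmse / y.mean()
```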
For the Madagascar dataset in the shooting date experiment, the six estimations from 1200 to 1250 hrs were averaged and taken as the estimate for each plot. The estimates for the seven harvested plots were then averaged, and the standard error was calculated. All analyses in the present study were conducted using Microsoft Excel (Microsoft, Redmond, WA, USA), the Neural Network Console software (Sony Network Communications Inc., Japan), and Python version 3.7 (http://www.python.org) with the PyTorch framework version 1.7 (https://pytorch.org/). The code to run the developed CNN model is available at https://github.com/r1wtn/rice_yield_CNN.git.
The data that support the findings of this study are available from the authors on reasonable request.