Background Automated plant phenotyping is essential for precise and cost-effective improvement of crop breeding efficiency. In recent years, machine learning (ML) techniques have shown great success in the classification and modelling of crop parameters. In this research, we assess the capability of ML to predict soybean grain yield by combining data from different optical sensors via Random Forest (RF) and eXtreme Gradient Boosting (XGBoost). During the 2018 growing season, a soybean experiment was conducted at the Agronomy Center for Research and Education (ACRE) in West Lafayette (Indiana, USA). Images were acquired on several dates by a Parrot Sequoia multispectral sensor and an S.O.D.A. compact digital camera on board a senseFly eBee Unmanned Aircraft System (UAS). A standard photogrammetric pipeline was then carried out via Structure from Motion (SfM). The multispectral imagery was used to analyse the spectral response of the soybean end-member in 2D, while the RGB images were used to reconstruct the study area in 3D, evaluating physiological growth dynamics per plot via height variations and crop volume estimations. As ground truth, destructive grain yield measurements were taken at the end of the growing season.
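As a minimal illustration of the 2D spectral analysis step, the sketch below computes a per-plot Normalized Difference Vegetation Index (NDVI) from red and near-infrared reflectance, the kind of feature typically derived from a multispectral sensor such as the Parrot Sequoia. The reflectance values and plot layout are hypothetical, not taken from the study.

```python
import numpy as np

def ndvi(nir, red):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red); small epsilon avoids division by zero."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)

# Hypothetical 2x2 reflectance patches for one plot (illustration only)
nir = np.array([[0.60, 0.70], [0.65, 0.80]])
red = np.array([[0.10, 0.12], [0.15, 0.10]])

plot_ndvi = ndvi(nir, red)
# A plot-level feature for the regression model: the mean NDVI of the plot
print(round(plot_ndvi.mean(), 3))
```

In a full pipeline, such plot-level statistics (mean, percentiles, temporal differences across flight dates) would be stacked with 3D features such as canopy height and volume.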
Results Feature extraction techniques and regression algorithms were combined to develop models that predict final yield from imagery, achieving accuracies of 90.72% with RF and 91.36% with XGBoost.
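The regression setup can be sketched as follows with scikit-learn's RandomForestRegressor standing in for the RF model (XGBoost exposes a near-identical fit/predict interface). The features and yield values here are synthetic placeholders for the real per-plot image features and destructive yield measurements; this is not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical per-plot features: mean NDVI, canopy height, canopy volume
X = rng.uniform(size=(200, 3))
# Synthetic yield with a simple signal plus noise (illustration only)
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0.0, 0.1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

r2 = r2_score(y_te, model.predict(X_te))
print(f"held-out R^2: {r2:.3f}")
```

Swapping in `xgboost.XGBRegressor` requires only changing the model line, which is why the two algorithms are often compared on the same feature matrix.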
Conclusions The results provide practical information for selecting breeding phenotypes from UAS data, serving as a decision support tool that affords continuous operational improvement and proactive management with high spatial precision.