FER Disease Inoculation
A diversity panel of 66 sweetcorn and 2 field corn varieties of maize was grown in Citra, FL, from March to June of 2022. Ears were harvested approximately 21 days after 50% silking was observed in each variety. Harvested ears were brought to an inoculation station, where the husk was wiped down with 70% ethanol. A circle was drawn on the husk at the center of each ear to mark the inoculation site, and an 18-gauge needle was used to make a hole through the husk to the cob. A Prima-tech® vaccinator fitted with another 18-gauge needle was then primed with a spore solution. Prior to inoculation, spore solutions were prepared by growing isolates of Fusarium verticillioides on potato dextrose agar plates, taking five 1 cm² mycelial plugs from the plates, and placing them in half-strength potato dextrose broth on a shaker for 5 days. After 5 days, the spore solutions were filtered through sterile cheesecloth and concentrated or diluted to 1×10⁵ conidia per ml. Once the vaccinator was primed with spore solution, its needle was placed into the existing hole in the ear and 0.75 ml of the solution was injected. Inoculated ears were placed in bags to maintain humidity and isolation and incubated at 27°C for 7 days. After 7 days, ears were removed from the bags, husked, and had their shanks removed before being individually mounted on The Ear Unwrapper stepper motor. Maize ears often had damage from other sources, primarily at the top of the ear; in these cases, the affected portion of the ear was removed before mounting on The Ear Unwrapper to minimize confounding factors in the subsequent image analysis.
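As a worked example of the concentration adjustment, the following minimal sketch computes the final volume needed to bring a counted suspension to the target concentration using C1V1 = C2V2; the counted concentration and starting volume are hypothetical values, not measurements from this study.

```python
# Worked example of the C1*V1 = C2*V2 adjustment used to bring a counted
# spore suspension to the 1e5 conidia/ml target. The counted
# concentration and starting volume below are hypothetical.
TARGET_CONC = 1e5  # conidia per ml

def final_volume_ml(counted_conc, start_volume_ml, target=TARGET_CONC):
    """Volume the suspension should be brought to (diluted if the result
    is larger than the starting volume, concentrated if smaller)."""
    return counted_conc * start_volume_ml / target

# A suspension counted at 8.5e5 conidia/ml in 10 ml should be brought
# to 85 ml to reach 1e5 conidia/ml:
print(final_volume_ml(8.5e5, 10.0))  # -> 85.0
```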
Hardware
To “unwrap” each ear of corn, the approach was to rotate the ear around its axis (the cob) while a camera captured a continuous series of images, from each of which a single line of pixels was extracted to form one composite image. The motor used was a NEMA 17 stepper motor, which controls the position and rotation of the motor (and the object mounted on it) to a fraction of a degree. Here, each pulse of the stepper motor represents 0.45 degrees, so 800 pulses are required to complete a 360-degree rotation. The camera used was a See3CAM_24CUG (Fig. 2B). It was chosen because its UVC driver compatibility facilitates integration into an operating system as a USB webcam and because it has an external shutter trigger. The external trigger allows a microcontroller to control the shutter in synchronization with the stepper motor: as the motor is sent a pulse to rotate 0.45 degrees, the camera is sent a pulse to capture an image. By syncing the pulses of the camera and the motor, one can control the speed while maintaining the accuracy of the stitched image. The pulses sent to the motor and the camera come from an Arduino UNO-compatible board (SparkFun RedBoard) (Fig. 2C). Although reducing the time required to obtain a single image was important, the need for a clear, well-lit, and focused image limited the frame rate, as did USB bandwidth and the stability of the ear while rotating.
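For concreteness, the following is a minimal sketch of a host-side synchronization loop, assuming the Arduino firmware steps the motor and fires the camera trigger each time it receives a single-byte command over serial; the port name, baud rate, and command protocol are illustrative assumptions, not the published firmware.

```python
# Minimal sketch of the host-side capture loop. It assumes the Arduino
# firmware pulses the stepper (0.45 degrees) and the external camera
# trigger each time it receives the (hypothetical) b"S" command over
# serial; the port name and protocol are illustrative.
import cv2
import serial

STEPS_PER_REV = 800  # 800 pulses x 0.45 degrees = 360 degrees

arduino = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
cam = cv2.VideoCapture(0)  # See3CAM_24CUG enumerates as a UVC webcam

frames = []
for _ in range(STEPS_PER_REV):
    arduino.write(b"S")     # step the motor and fire the trigger
    arduino.read(1)         # wait for the firmware's acknowledgment
    ok, frame = cam.read()  # grab the externally triggered frame
    if ok:
        frames.append(frame)

cam.release()
arduino.close()
```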
The frame for the machine was built from nylon and high-density polyethylene (HDPE) to provide a durable mount for the other components while allowing room for the ear to rotate less than one foot from the camera lens. The camera and motor were mounted on Picatinny rails. The mounting hub that came with the stepper motor was removed and replaced with two spikes (sharpened screws) threaded into the existing mounting holes. Two spikes were used to provide the leverage required to rotate the ear. A single conical aluminum spike was fixed directly above the center of the modified stepper motor; this spike provided additional stability for the maize ear when mounted on the modified stepper spikes while still allowing the ear to rotate freely (Fig. 2D).
To prepare ears for imaging, the ears were husked, the shanks cut or snapped off, the silks removed, and the unpollinated tips removed. The Ear Unwrapper was placed in a photobox (Fotodiox™) to obtain images with consistent lighting and background (Fig. 2A). Ears were mounted in The Ear Unwrapper by gently pressing the base of the ear onto the modified stepper motor spikes until the ear stood without assistance, then sliding the conical spike down to the tip of the ear. The door to the photobox was closed, and a custom Python script was used to capture, process, and output images.
Calibration and Error Correction
The 24CUG camera uses a wide-angle lens with a 3.0 mm focal length and f/2.8 aperture, a benefit because the camera could be positioned closer to the maize ear while keeping the full ear in view. To correct for the resulting “fisheye” effect, a correction algorithm was used to remove the distortion and extract a rectilinear image. This was done in Python with the OpenCV library and calibrated by printing a checkerboard and photographing it with the camera at the fixed distance between the camera and the motor; this also allowed a pixels/cm measurement to be determined. The de-warped image of the checkerboard was used to produce a correction matrix that was then applied to every image captured by the camera. To use the sensor of the 24CUG camera (default resolution 1280 x 720) more efficiently, we rotated the camera 90 degrees so that the 1280-pixel dimension was vertical and could better accommodate the geometry of the corn ears. When an image was taken, a single vertical line of 1280 pixels was extracted before the motor turned 0.45 degrees and another image was taken; each new vertical line was stitched to the previous one. This was repeated 800 times until the ear completed a 360-degree rotation. This value was based on the estimate that an average ear of corn, roughly 2 inches in diameter, spans approximately 800 pixels when viewed at the specified distance and resolution. Because there are substantial morphological differences among ears of corn that could cause the image to be stretched or compressed, particularly in a diversity panel, an additional four images were captured, each 200 pulses (90 degrees) apart. These four additional images captured the full field of view rather than a line of pixels. Segmenting the ear from these four images allowed us to calculate an estimate of the ear diameter and correct distortions in the “unwrapped” image incurred when imaging ears significantly wider or narrower than the 2-inch diameter for which the image dimensions were calibrated.
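As an illustration of this pipeline, the following is a minimal sketch of the checkerboard calibration and the line-stitching loop using OpenCV's standard calibration routines; the 9x6 inner-corner pattern, the single calibration view, and the file names are assumptions, since only a printed checkerboard at a fixed working distance is specified above.

```python
# Sketch of the checkerboard calibration and line-stitching steps using
# OpenCV's standard camera-calibration routines. The 9x6 inner-corner
# pattern, single calibration view, and file name are assumptions.
import cv2
import numpy as np

PATTERN = (9, 6)  # inner corners per row and column (assumed)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

board = cv2.imread("checkerboard.png")
gray = cv2.cvtColor(board, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, PATTERN)

# A single view gives a workable distortion model at one fixed distance.
_, mtx, dist, _, _ = cv2.calibrateCamera(
    [objp], [corners], gray.shape[::-1], None, None)

def unwrap(frames):
    """Undistort each of the 800 frames, extract its central vertical
    line of pixels, and stack the lines into one unwrapped image."""
    columns = []
    for frame in frames:
        corrected = cv2.undistort(frame, mtx, dist)
        mid = corrected.shape[1] // 2          # center column index
        columns.append(corrected[:, mid, :])   # one 1280-pixel line
    return np.stack(columns, axis=1)           # 1280 x 800 x 3 image
```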
Image Processing
A custom Python application was built to interface with the UVC driver, control the camera shutter and the stepper motor (via the Arduino), and capture the images. This application also provided a command-line interface for photo capture and file naming. Photos were captured, named, and saved according to the date, time, and an ID number entered by the user as a unique identifier. Images were saved as PNG files and then filtered based on a set of criteria: images of ears with less than 50% pollination, with significant damage not caused by F. verticillioides, or with blurring were removed, as these factors influence the ability to measure disease severity. After filtering, 59 images remained, from which 10 were selected to train the model. Training images were chosen for morphological and phenotypic diversity, providing a range of disease severities, ear sizes, and lesion colors and shapes, and were selected so that visual differentiation between healthy and diseased tissue was clear. The testing set consisted of the remaining 49 images.
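A minimal sketch of such a naming convention follows, assuming a timestamp-plus-ID format; the exact format string used by the application is not specified above.

```python
# Sketch of the file-naming scheme: date, time, and a user-entered ID
# combined into a unique PNG file name. The format string is assumed.
from datetime import datetime

def image_filename(ear_id: str) -> str:
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"{stamp}_{ear_id}.png"

print(image_filename("ear_001"))  # e.g. 20220615_143210_ear_001.png
```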
Pixel Classification
To build and train a classification model, we used the interactive machine learning software Ilastik (28). This software provides a pre-defined feature space across several workflows for biological image analysis, reducing both the computing time and the training data required to develop an accurate classifier. We used the 'pixel classification' workflow to semantically segment the images according to three class labels: background, healthy tissue, and diseased tissue. The background consisted of all image areas that were not ear tissue, and all maize ear tissue without fungal hyphae was considered healthy. Any tissue with visually apparent disease symptoms was annotated as diseased tissue (Supplemental Fig. 2). We used the default Random Forest classifier and feature selection settings of σ3 (sigma of 1.60) for Color/Intensity, Edge, and Texture. The ten training images were imported into Ilastik, and each image was manually annotated with the class labels until the real-time classifier predictions became stable (i.e., when additional annotation caused no visible or only minimal change in the model feedback). Once training was complete, the remaining 49 testing/validation images were imported for batch processing. The model was used to generate a probability map for each image, i.e., an image with identical dimensions in which each pixel is colored by the classifier according to the class label it most likely represents in the original image (Fig. 3B).
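Batch processing of this kind can also be scripted against Ilastik's documented headless mode; the following is a minimal sketch invoked from Python, with the project file name and input paths as assumptions (the study's batch runs may have used the GUI directly).

```python
# Sketch of batch-applying a trained classifier with Ilastik's headless
# mode. The executable name varies by platform (e.g. run_ilastik.sh on
# Linux); the project file name and input paths are assumptions.
import glob
import subprocess

subprocess.run(
    [
        "run_ilastik.sh", "--headless",
        "--project=fer_pixel_classification.ilp",
        "--export_source=Probabilities",
        *sorted(glob.glob("testing/*.png")),
    ],
    check=True,
)
```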
ImageJ and OpenCV Processing
Outputs from the testing data set were processed in ImageJ (FIJI) and OpenCV using a custom Python script (29, 30). Probability files were converted from TIFF to PNG, and a median blur (ksize = 11) was applied to each image in ImageJ. In OpenCV, the RGB images were converted to an HSV (hue, saturation, value) representation and thresholded to create a mask of the pixels classified as likely diseased. This mask was converted to binary (black and white), with disease represented as white pixels (Fig. 3D,E). The white pixels were then counted with a Python script and written to a CSV file with their corresponding unique ID numbers for reference against the manually recorded scores.
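The following minimal sketch reproduces this blur-threshold-count sequence entirely in OpenCV; the HSV bounds for the diseased-class color are placeholders, since the actual thresholds depend on the class colors chosen in Ilastik, and the file names are illustrative.

```python
# Sketch of the post-processing of each probability map: median blur
# (ksize = 11), HSV threshold on the diseased-class color, binary mask,
# and pixel count written to CSV. The HSV bounds are placeholders.
import csv
import cv2
import numpy as np

def count_disease_pixels(path):
    img = cv2.imread(path)
    img = cv2.medianBlur(img, 11)              # ksize = 11
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    lower = np.array([100, 100, 100])          # assumed blue range
    upper = np.array([130, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)      # white pixels = disease
    return int(cv2.countNonZero(mask))

with open("disease_counts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "disease_pixels"])
    writer.writerow(["ear_001", count_disease_pixels("ear_001.png")])
```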
Scoring Methods and Correlations
Ears that were imaged were also scored in two different ways. First, a rater experienced with FER infections counted the kernels with visual symptoms consistent with FER. Second, each image was manually annotated in Microsoft Paint (on Windows 10) by coloring the diseased pixels with the same blue color used in Ilastik; these annotations were then blurred, thresholded, and masked with the same methods applied to the Ilastik output images (Fig. 3C,E). The expert who counted diseased kernels was the same person who manually annotated the images in Paint, to provide continuity. To correlate the kernel-counting scores with the predicted pixel counts from Ilastik and the manually annotated pixel counts from Paint, the log of each score was calculated and compared.
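A minimal sketch of this comparison follows, assuming a Pearson correlation on log10-transformed counts (the specific correlation statistic is not stated above) and using placeholder numbers rather than study data.

```python
# Sketch of correlating log-transformed scores. Pearson correlation is
# an assumption; the counts below are placeholders, not study data.
import numpy as np
from scipy.stats import pearsonr

kernel_counts = np.array([3, 12, 45, 7, 150])        # placeholder
ilastik_pixels = np.array([210, 900, 4100, 520, 9800])

r, p = pearsonr(np.log10(kernel_counts), np.log10(ilastik_pixels))
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
```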