Our goal was to build a versatile, universally applicable, free and open-source system that is scalable, extensible and modifiable to suit a wide range of research and diagnostic purposes. To address these needs, Shiny12, an R19 package for building interactive web applications, was chosen. The advantages of using Shiny are portability and easy deployment of the applications. In combination with a 3D-printed photo box20, it represents an all-in-one, portable, cost-efficient and easily reproducible setup for extensive analysis that works well on computers as well as on portable devices such as smartphones. The system was optimized and validated with images of several different types of lateral flow assays, such as a dipstick assay by Wagner et al.21, a digoxigenin lateral flow assay (LFA) and a duplex quantum dot labelled LFA by Ruppert et al.22,23, as well as a multicolour quantum dot labelled LFA by Mahmoud et al.20. The applications can also be run as a shiny mobile version24 directly on smartphones (see Figure 1), allowing quick and detailed analysis of LFAs in any setting.
Our software package (available at https://github.com/fpaskali/LFApp and https://cran.r-project.org/package=LFApp) consists of four modular R Shiny applications:
(1) LFA App core for image acquisition, editing, region-of-interest definition via gridding, background correction with multiple available methods, and intensity data extraction from the pre-defined bands of the analysed LFAs; (2) LFA App calibration, which additionally includes tools for data processing, adding experiment information and calibration functionality; (3) LFA App quantification, which enables quantification of the extracted intensity values by loading existing calibration models; and (4) LFA App analysis, which includes all the modules mentioned above and enables a full analysis in one application.
The graphical user interface of the apps is built in a modular setup divided into several tabs, where each tab represents a specific step of the workflow. While the applications can be used in a sequential fashion, the individual steps can also be carried out on their own.
The first tab covers image loading and region-of-interest grid definition. For image loading and manipulation we utilized the R package EBImage25, which supports standard image types. Here, the image can be rotated, flipped horizontally or vertically, and cropped to a specific size. The type of the grid can be specified in the side panel according to the type of the assay by selecting the number of lines (signal bands on the LFA) and strips (number of LFAs in the image). The region of interest can then be precisely marked on the interactive plot. Figure 1 shows an example with the sample image of the application, in which a grid for two lines and three strips is used. The result is a 3x3 grid, because an extra rectangle is generated between the two lines of each strip that may later be used for background correction (quantile method). Image manipulation is carried out in a non-destructive fashion, allowing the user to restore the original state at any time during the analysis. Once editing is completed to the user's satisfaction and the grid is defined, segmentation is enabled and the application proceeds to the next tab.
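The grid geometry described above can be sketched in a few lines of base R. This is a hypothetical illustration, not the LFApp implementation, and it assumes the pattern generalizes beyond two lines: with an extra background rectangle between each pair of adjacent lines, n lines yield 2n - 1 rows.

```r
# Hypothetical sketch (not the LFApp implementation): grid dimensions implied
# by the chosen numbers of lines and strips. Between each pair of adjacent
# lines an extra rectangle is inserted for the quantile background correction,
# so n lines give 2 * n - 1 rows; each strip contributes one column.
grid_dims <- function(n_lines, n_strips) {
  n_rows <- 2 * n_lines - 1   # lines plus inter-line background rectangles
  c(rows = n_rows, cols = n_strips)
}

grid_dims(2, 3)  # the sample image in Figure 1: a 3 x 3 grid
```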
The second tab performs background correction of the gridded image after the region of interest has been selected. This module allows the user to enhance the signal and to clean the image by reducing noise and false-positive pixels. For this purpose, various algorithms are available in the application. Otsu26,27 and Li28,29 are non-parametric, fully automatic algorithms that find the optimal threshold for the image. Additionally, two semi-automatic algorithms, namely Quantile and Triangle30, were included to cover different images and cases where the automatic threshold results are not ideal. The automatic thresholding methods do not require additional parameters, while the semi-automatic methods have an additional parameter that can be tuned for more flexible thresholding. These settings are saved in a data table for documentation and to make the analysis reproducible.
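To illustrate what an automatic, non-parametric threshold does, the following is a minimal base-R sketch of Otsu's method on a vector of grayscale intensities in [0, 1]. It is an educational simplification; LFApp relies on established implementations rather than this code.

```r
# Minimal sketch of Otsu's method: pick the threshold that maximizes the
# between-class variance of the intensity histogram.
otsu_threshold <- function(px, n_bins = 256) {
  breaks <- seq(0, 1, length.out = n_bins + 1)
  h <- hist(px, breaks = breaks, plot = FALSE)$counts
  p <- h / sum(h)                               # histogram as probabilities
  mids <- (breaks[-1] + breaks[-(n_bins + 1)]) / 2
  w0 <- cumsum(p)                               # weight of the "background" class
  mu <- cumsum(p * mids)                        # cumulative mean intensity
  mu_t <- mu[n_bins]                            # global mean
  # between-class variance for every candidate threshold
  sigma_b <- (mu_t * w0 - mu)^2 / (w0 * (1 - w0))
  sigma_b[!is.finite(sigma_b)] <- 0             # guard against 0/0 at the ends
  mids[which.max(sigma_b)]
}

set.seed(1)
px <- c(rnorm(1000, 0.2, 0.05), rnorm(100, 0.8, 0.05))  # background + signal
thr <- otsu_threshold(pmin(pmax(px, 0), 1))              # falls between the modes
```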
The background correction process begins with the selection of the strip that is to be analysed. The maximal number of strips is defined in the first tab as described above. The implemented thresholding methods work on grayscale images, hence two grayscale conversion methods are implemented, namely a luminance and a gray approach. Colour images can also be analysed by selecting one of the three colour channels (RGB). The application also provides colour inversion functionality, which is beneficial for the analysis of LFAs with fluorescent labelling, in which the lines or bands can be lighter than the background. After the thresholding method is applied, several info and plot boxes are rendered in the main panel of the second tab. As shown in Figure 2, the individual threshold levels are displayed first. The upper plots show the pixels above the threshold for the lines of the selected strip, whereas the lower plots display the signal after background subtraction. Below, the calculated mean and median intensities of the lines are shown in order from the top of the strip to the bottom. These values can be added to an intensity data table, and the user can continue with the background correction and quantification of further strips or images.
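The two grayscale conversions and the colour inversion can be sketched as follows. The luminance weights below are the common Rec. 709 values and the gray approach a plain channel average; the exact coefficients used internally (via EBImage) may differ, so treat this as an illustration of the idea only.

```r
# Sketch of two common grayscale conversions for RGB values scaled to [0, 1].
to_gray <- function(r, g, b, method = c("luminance", "gray")) {
  method <- match.arg(method)
  if (method == "luminance")
    0.2126 * r + 0.7152 * g + 0.0722 * b  # perceptual (Rec. 709) weighting
  else
    (r + g + b) / 3                       # plain channel average
}

# Colour inversion: turns light bands on a dark background (fluorescent
# labelling) into dark bands on a light background before thresholding.
invert <- function(px) 1 - px

to_gray(1, 1, 1)                 # white maps to 1
invert(to_gray(0.2, 0.2, 0.2))   # dark gray becomes light after inversion
```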
In the third tab, a table of intensity data is rendered using the DT package31, which can easily be filtered and sorted. It contains all data from the quantification, such as the file name, the number of strips, the background correction method and any additionally applied parameters, as well as the mean and median values of all lines of the analysed strip. The measured intensity data can be saved, or deleted to restart the analysis. Existing intensity data can be imported and the current data appended to it. LFA App core consists of these three tabs, which represent the core functionality of our software package: image analysis and intensity data extraction.
LFA App calibration contains data processing functionality in the Experiment Info tab, where information about the experiment can be merged with measurements performed with the application. It is also an optional starting point for the analysis of pre-existing data. An already merged data set can be loaded using the upload widget in the sidebar, and data sets can be adjusted with tools from the sidebar when the experiment data is stored in a non-standard .csv file. After the upload of the experiment data, it can be merged with the intensity data by specifying the names of the corresponding columns in the intensity and experiment data. As in the previous tab, the data can be saved or deleted for a restart. In the main panel of the Experiment Info tab, the data table with the experiment data or the combined table is shown. Additionally, LFA App calibration has the Calibration tab, which bundles different functionalities, such as averaging technical replicates and tools for reshaping the table from long to wide format, to pre-process the data before calibration. The calibration can be performed using the combined data from the previous tab or by loading previously saved data. It can also be restricted to a subset of the data by entering a logical R expression in the specify subset field. Finally, the calibration module uses the power of R to compute a variety of calibration models fitted with the functions “lm” (linear models) and “loess” (local polynomial regression) of the R package stats19 as well as the function “gam” (generalized additive models) of the R package mgcv32.
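The merge-then-calibrate workflow can be sketched with base R alone. Column names, file names and values below are made up for illustration; LFApp performs the equivalent steps interactively.

```r
# Hypothetical data: extracted intensities keyed by image file, plus the
# experiment information holding the known concentrations.
intensity <- data.frame(File = c("a.png", "b.png", "c.png", "d.png"),
                        Mean = c(0.11, 0.24, 0.47, 0.95))
experiment <- data.frame(File = c("a.png", "b.png", "c.png", "d.png"),
                         Conc = c(1, 2, 4, 8))

# Merge on the user-specified matching columns.
combined <- merge(intensity, experiment, by = "File")

# Restrict to a subset via a logical expression, as in the
# "specify subset" field of the Calibration tab.
cal_data <- subset(combined, Conc <= 8)

# Fit a linear calibration model with stats::lm.
fit <- lm(Mean ~ Conc, data = cal_data)
r_squared <- summary(fit)$r.squared
```

"loess" and "gam" (package mgcv) would be substituted for "lm" in the last step when a non-linear calibration curve is required.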
The results of the calibration are displayed in the sixth tab (see Figure 3), together with specific calibration values such as R² (only in the case of lm), limit of blank, limit of detection and limit of quantitation33,34.
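The limit of blank and limit of detection are classically estimated from replicate measurements of blanks and of a low-concentration sample, e.g. following Armbruster and Pry. Whether LFApp uses exactly these formulas is an assumption here, and the replicate values below are invented for illustration.

```r
# Made-up intensity replicates of blank and low-concentration samples.
blank    <- c(0.010, 0.012, 0.008, 0.011, 0.009)
low_conc <- c(0.030, 0.034, 0.028, 0.033, 0.031)

# Classical one-sided 95% estimates (1.645 = z-value for 95%):
lob <- mean(blank) + 1.645 * sd(blank)   # limit of blank
lod <- lob + 1.645 * sd(low_conc)        # limit of detection
```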
Furthermore, at the end of the calibration analysis, a full report is generated by R Markdown13, summarizing the fitting of the calibration model. A section of the report can be seen in Figure 4.
LFA App quantification offers quantification of substrates via calibration models and extracted intensity data values. In the Quantification tab, a calibration model can be specified to convert the intensity values into concentrations. A data table (similar to the Intensity Data tab) is also displayed, extended by the calculated concentration value for each measurement of the image analysis as well as the calibration model used.
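For a linear calibration model, converting new intensities into concentrations amounts to inverse prediction, sketched below. The data and the simple analytic inversion are illustrative assumptions; LFApp's exact quantification routine (and its handling of loess or gam models) may differ.

```r
# Hypothetical calibration data and fitted linear model.
cal <- data.frame(Conc = c(1, 2, 4, 8),
                  Mean = c(0.12, 0.25, 0.48, 0.96))
fit <- lm(Mean ~ Conc, data = cal)
b <- coef(fit)

# Invert intensity = b0 + b1 * conc  ->  conc = (intensity - b0) / b1.
quantify <- function(intensity, b) (intensity - b[[1]]) / b[[2]]

new_intensity <- c(0.30, 0.60)      # intensities from newly analysed strips
conc_est <- quantify(new_intensity, b)
```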
LFA App analysis offers the functionality of all modules combined into one.
The modularity of the software package allows for flexible approaches to image and data analysis as well as different starting points. The user can start with the image analysis in the application, or use intensity values extracted with different software and continue with the data analysis steps. Calibration can be performed for a sample assay, and the calibration model can be exported and re-applied for the quantification of many different assays in an experimental setup. Due to the flexible implementation, the individual modules can easily be adjusted to the researcher's needs or recombined to create new applications.