Protocol for applying Machine Learning models for the transformation of conventional fluorescence images to super-resolution


 Machine Learning offers the opportunity to visualize the invisible in conventional fluorescence microscopy images by improving their resolution while preserving and enhancing image details. This protocol describes the application of GAN-based Machine Learning models to transform conventional fluorescence microscopy images to a resolution comparable with super-resolution. It provides a flexible environment using a modern app functioning on both desktop and mobile computers. This approach can be extended for use on other types of microscopy images, empowering life science researchers with modern analytical tools.


Introduction
The majority of studies evaluating fluorescently-labeled proteins are conducted using conventional microscopes with a resolution determined by the physical properties of light at approximately 250 nm. This diffraction limit makes conventional microscopes unsuitable for many analytical studies, since they are unable to distinguish many multi-protein complexes 1 .
In recent years, this limit has been overcome with the help of super-resolution microscopy (SRM). These sophisticated microscopes can achieve a resolution of approximately 100 nm and, more recently, even 25-50 nm, which makes them the tools of choice for various analytical applications 2,3 . However, SRMs utilize highly specialized hardware, can be daunting to operate and, since images are acquired over multiple frames, require significant computational resources. The occurrence of artifacts in super-resolution images has also been reported 4,5 . Importantly, SRMs are usually orders of magnitude more expensive than conventional microscopes, putting them out of reach for many research labs.
The latest advances in Machine Learning (ML) have provided another way to achieve super-resolution in microscopy images 6,7 . With the help of ML models based on convolutional neural networks (CNN), it became possible to take images obtained using conventional microscopes and transform them into images with resolutions comparable to SRMs 8 . The biggest issues when creating traditional ML models are the vast amounts of images required for model training and the need for their annotation. Large sets are required for supervised learning to properly learn the mapping between a source domain and a target domain, and the annotation of images in them serves to identify pairs of sources and targets 9 . These issues obstruct model development, since large sets of highly specific images are frequently impossible to obtain, and their proper annotation is laborious and error-prone.
In this protocol, we describe the application of Generative Adversarial Networks (GAN)-based ML models for the transformation of conventional microscopy images to images comparable with super-resolution.
GAN-based models proved to be a highly efficient architecture capable of precise image transformation and reconstruction 10 . GAN networks are unsupervised. Importantly, they can perform with significantly less model training data and do not require paired training sets. GAN networks train a generative model and an adversarial model simultaneously, learn to optimize a loss function, and produce more realistic results. GAN-based models have been used with great success for bright-field holography 11 , virtual histological staining 12 , denoising of optical diffraction tomography images 13 , and super-resolution reconstruction 14,15 . We employed a model architecture similar to a so-called Multi-Scale GAN (MSGAN) 16 .
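For readers who wish to experiment with the underlying approach, the following is a minimal sketch of one adversarial training step, assuming PyTorch; the function and variable names are illustrative placeholders and do not reproduce our actual training code.

```python
# Minimal sketch of one adversarial (GAN) training step, assuming PyTorch.
# 'generator', 'discriminator' (ending in a sigmoid), and the optimizers
# are assumed to be defined elsewhere; batches are unpaired, as described.
import torch
import torch.nn.functional as F

def train_step(generator, discriminator, g_opt, d_opt, source, real_target):
    """source: conventional-image batch; real_target: unpaired batch
    from the super-resolution domain."""
    fake = generator(source)

    # Discriminator step: push real targets toward 1, generated images toward 0.
    d_opt.zero_grad()
    d_real = discriminator(real_target)
    d_fake = discriminator(fake.detach())
    d_loss = (F.binary_cross_entropy(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to fool the discriminator into scoring fakes as real.
    g_opt.zero_grad()
    g_score = discriminator(fake)
    g_loss = F.binary_cross_entropy(g_score, torch.ones_like(g_score))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```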
This architecture, Super-Resolution GAN (SRGAN), generates a low-resolution version of the image first, followed by the generation of patches of constant size but successively growing resolution before generating the final image (Figure 1).
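The progressive-upsampling idea can be illustrated with a minimal generator sketch, again assuming PyTorch; the layer sizes, scale factors, and names are hypothetical and do not reproduce the published architecture.

```python
# Illustrative multi-scale generator: each stage doubles spatial resolution.
import torch
import torch.nn as nn

class MultiScaleGenerator(nn.Module):
    def __init__(self, channels=3, base_filters=64, n_scales=3):
        super().__init__()
        # Feature extraction at the lowest resolution.
        self.entry = nn.Sequential(
            nn.Conv2d(channels, base_filters, 3, padding=1),
            nn.PReLU(),
        )
        # One upsampling stage per scale; PixelShuffle(2) gives 2x upsampling.
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(base_filters, base_filters * 4, 3, padding=1),
                nn.PixelShuffle(2),
                nn.PReLU(),
            )
            for _ in range(n_scales)
        ])
        self.exit = nn.Conv2d(base_filters, channels, 3, padding=1)

    def forward(self, x):
        h = self.entry(x)
        for stage in self.stages:   # successively growing resolutions
            h = stage(h)
        return torch.sigmoid(self.exit(h))

# Example: a 128x128 patch becomes 1024x1024 after three 2x stages.
g = MultiScaleGenerator()
out = g(torch.rand(1, 3, 128, 128))
print(out.shape)  # torch.Size([1, 3, 1024, 1024])
```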
Our model combines three separate models dealing with independent image transformation tasks: Denoising, Axial Resolution Restoration, and Super-Resolution Reconstruction (Figure 2). The protocol itself consists of two workflows: workflow A and workflow B for conventional and super-resolution images, respectively (Figure 3). We enabled the use of the protocol on both desktop and mobile computing platforms 17 .
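Conceptually, the two workflows compose the sub-models as follows; this is a schematic sketch with hypothetical placeholder callables, not the app's implementation.

```python
# Schematic composition of the three sub-models, assuming each is available
# as a callable mapping an image array to an image array.
def transform_conventional(image, denoise, restore_axial, super_resolve):
    """Workflow A: conventional images pass through all three models."""
    return super_resolve(restore_axial(denoise(image)))

def transform_super_resolution(image, denoise):
    """Workflow B: original super-resolution images only need denoising."""
    return denoise(image)
```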

Procedure
Before you begin
Before capturing any images, careful consideration should be given to the image file type and storage choices. Many microscopes save acquired images in lossy compression formats, such as JPEG, due to their smaller file size set by default. However, lossy formats "discard" important image color data and are not suitable for quantitative analysis. Other popular file formats, such as TIFF and PNG, are lossless. Natively, the CoLocalizer app used to transform images in this protocol opens a limited number of lossy and lossless image file formats, including JPEG, PNG, and TIFF. Other proprietary microscopy image formats (ICS, LIF, LSM, ND2, etc.) can only be opened following conversion via third-party software tools and are not always optimal storage-wise, eventually limiting their use. Therefore, saving captured images as lossless TIFFs and PNGs is highly recommended. We strongly discourage using previously acquired images in lossy formats in this protocol.
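If your acquisition software exports raw arrays, the lossless save can be scripted, for example with the tifffile and Pillow Python libraries; the file names and the stand-in image below are illustrative.

```python
# Minimal sketch of saving an acquired image losslessly, assuming the image
# is available as a NumPy array exported from the acquisition software.
import numpy as np
import tifffile
from PIL import Image

img = np.random.randint(0, 65535, (1024, 1024, 3), dtype=np.uint16)  # stand-in

# TIFF is lossless; zlib is a lossless compression option in tifffile.
tifffile.imwrite("capture.tif", img, compression="zlib")

# PNG is always lossless; Pillow expects 8-bit data for RGB PNGs.
Image.fromarray((img // 257).astype(np.uint8)).save("capture.png")

# Avoid JPEG for quantitative work: it discards image color data.
```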

Launching CoLocalizer app and opening images
Timing: 1 min
1. Launch the CoLocalizer app either on a desktop (Mac) or a tablet (iPad) computer (Figure 4).

2. Open your image file:
On a desktop: i. Open the image by selecting File > Open from the application menu bar, using the Command+O shortcut, or dragging and dropping the file onto the CoLocalizer app icon in the Dock (Figure 4a).
ii. If you are using single-channel images, merge them on a desktop before opening for transformation. The Merge functionality is not available on a tablet. To merge, select Tools > Merge from the application menu bar or use the Shift+Command+M shortcut to open the Merge window.
iii. Drag-and-drop single-channel images onto the respective image wells of the Merge window and click the Merge button to merge the selected channels (Figure 4b).
On a tablet: i. Open the image by tapping the image thumbnail on the File Browser screen (Figure 4c).
ii. Once tapped, the image will open on a new Open Image screen (Figure 4d).
Critical: Due to system RAM limitations, the protocol currently works on single-image super-resolution (SISR) only. If you need to apply it to image stacks, split them into single images (slices) and apply the protocol to the slices one-by-one.
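Splitting a multi-page TIFF stack into single-slice files can be scripted, for example with the tifffile Python library; the file names below are illustrative.

```python
# Minimal sketch: split a multi-page TIFF stack into single-slice files
# so each slice can be transformed one-by-one (SISR).
import tifffile

stack = tifffile.imread("stack.tif")  # shape: (n_slices, H, W[, C])
for i, slice_ in enumerate(stack):
    tifffile.imwrite(f"stack_slice_{i:03d}.tif", slice_)
```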

Applying ML models
Timing: 10-30 min per image
3. Once the image is opened, apply ML models to transform it. For conventional images, use the Conventional image approach (Workflow A) (Figure 5). For original super-resolution images, use the Super-Resolution approach (Workflow B) (Figure 5). The Conventional approach will apply the SRGAN ML combined model consisting of denoising, axial restoration, and super-resolution reconstruction models.
The Super-Resolution approach will apply the SRGAN ML BC model, consisting of the denoising model.
On a desktop:
i. Access the model either by clicking the Background icon in the application toolbar, going to Tools > Background Correction in the application menu bar, or using the Shift+Command+B shortcut (Figure 5g).
ii. In the opened Background Correction view, select the color channels used for staining antigens in the image (Figure 5h).
iii. Click the ML Correct button to apply the SRGAN ML BC model (Figure 5i).

On a tablet:
i. Access the model by tapping the Tools icon in the navigation bar and then selecting the Correct Background option from the Tools popover (Figure 5j).
ii. On the opened Background Correction screen, select the color channels used for staining antigens in the image (Figure 5k).
iii. Tap the ML Correct button to apply the SRGAN ML BC model (Figure 5l).
Critical: If a sample is stained for three different antigens and you wish to change a channel pair to view them all, change the Channel Pair using the selector at the top of the view. No need to click or tap the ML Correct button again.
Application of the models results in significantly improved quality of transformed images, verified by several controls (Figure 6). After selecting a channel pair, the image will be synced between devices, i.e., it will be possible to switch between a desktop and a tablet and continue the image analysis on one device where it was left off on the other using the Handoff functionality.

Comparing results
Timing: 1-3 min per image
Workflows A and B
4. Once the models have been applied, compare the results in original vs transformed images. Comparison serves as a control of the successful use of the models, since images are expected to change following their application.
On a desktop: i. Return to the main window by clicking the Inspector icon in the application toolbar or going to Tools > Inspector in the application menu bar or using the Shift+Command+I shortcut.
ii. Choose Undo > ML Super Resolution in the application menu bar to return to the original image. Alternatively, go to File > Revert To and find the original image to revert to.
iii. Access the Colocalization view again by clicking the Colocalization icon in the application toolbar or going to Tools > Quantify Colocalization in the application menu bar or using the Shift+Command+C shortcut as described above.
iv. Compare pixel information data in transformed vs original images.

Exporting data
Timing: 1-3 min per image
Workflows A and B
5. Export the analysis data and images.
On a desktop: i. Open the Export window by clicking the Export icon in the application toolbar (Figure 7a).
ii. In the opened Export window, fill in the Comments field and select the Data, Images, and Report options (Figure 7b). The file formats for Data (XLSX and Text) and Images (JPEG, PNG, and TIFF) can be set in the Preferences of the app. You can export the following features of analyzed images: the whole image, selected ROI, revealed pixels, and scattergram (scatter plot). Once selected, they will be exported as single files as well as included in the Report file.
iii. The Report option will allow the saving of all data in PDF and HTML file formats.
iv. Click the Save… button at the bottom of the window to save the selected options (Figure 7b).
v. To finish the export, select the destination where you would like to save the exported data, either locally or on iCloud Drive.
Caution: Saving the whole image in JPEG format will make it unusable for colocalization analysis. We recommend saving it as lossless TIFF.
Critical: If you wish to export only the transformed image, go to the application menu bar and select File > Export As… or use the Command+E shortcut. Then, save it in the file format of your choice. If you plan to reuse this image, we recommend saving it in the native COLOCALIZER format.
On a tablet: i. On the Open Image screen, tap the Export icon in the navigation bar at the right to access the Export popover (Figure 7c).
ii. In the popover, select the type of data to export, Data or Images. Only one option can be selected at a time.
iii. On the pop-up screen that appears, select the exporting options for Data or Images; only one option can be selected at a time. Data can be saved in either PDF or XLSX format. Images can be saved in JPEG, PNG, or TIFF format (Figure 7d). In addition to the coefficient numbers, a PDF file of Data will also include the analyzed image, the selected ROI, and the scattergram (scatter plot).
iv. To finish the export, select the destination where you would like to save your data, either On My iPad or on iCloud Drive.

Troubleshooting

Step 3 (applying ML models)
Problem: The ML Super Resolution option is grayed out on a desktop, and an alert is shown on a tablet, when trying to apply it to image stacks.
Possible reason: Due to GPU RAM limitations, the use of models on stacks of images is not supported.
Solution: Split stacks of images into single images (SI) and apply SRGAN ML models to them one-by-one (SISR).

Problem: The transformed image is the same or lower quality than the original image, or image artifacts are visible.
Solution: To delete images only on one of your devices, sign off from iCloud on that device. Keep in mind that this action will cancel all synchronization steps.

Step 4 (comparing results)
Problem: The total pixel count according to the selected channel pair of transformed images increased less than three times compared to the original ones.
Solution: Make sure to save the image after opening. The app will take care of the rest.

Step 5 (exporting data)
Problem: Not all images have been exported.
Possible reason: Certain image exporting options in the export sheet were left unchecked.
Solution: Make sure to check the boxes for all the desired options.

Time Taken
The time to complete the protocol will depend on the size of the original image files.
Steps 1 and 2, launching the CoLocalizer app and opening the image file either on a desktop or a tablet: 1 min
Step 3, applying SRGAN ML models: 10-30 min per average (1024 x 1024 pixels) image
Step 4, comparing results: 1-3 min per image
Step 5, exporting data: 1-3 min per image

Anticipated Results
The application of the SRGAN ML models should result in a noticeable visual improvement by revealing more details in the images (Figure 6). These changes will be more pronounced in conventional images. The extent of improvement, as well as the presence of artifacts, will depend on image quality: the higher the quality of the original image, the better and more reliably it can be transformed. Original super-resolution images will likely remain visually unchanged.
Technically, the image files will become significantly bigger, and their pixel dimensions will increase approximately three times.

Performance of the applied models. Comparison of the input (conventional) vs output (transformed to super-resolution) images shows significant improvements according to all three components of the SRGAN ML combined model (Denoising, Axial Resolution Restoration, and Super-Resolution Reconstruction) on different types of cellular structures (rounded, irregular, and filamentous). The results of the transformation were estimated using the Peak Signal-to-Noise Ratio (PSNR) (dB) and the structural similarity index (SSIM); higher values indicate better image quality. Comparison of the Output and Ground Truth images shows no obvious mismatch between the two. Super-resolution artifacts were examined by estimating the Resolution Scaled Pearson Correlation (RSP), the Resolution Scaled Error (RSE), and RSE error maps using the Fiji NanoJ-SQUIRREL plugin version 1.2. RSP estimates the quality of images by quantifying their correlation, RSE indicates brightness and contrast differences between images, and the RSE error map shows the areas of discrepancy between the input and output images. Both RSP and RSE show a low degree of error, while the RSE map reveals no obvious artifacts. The models were trained on a cloud-hosted Google Colaboratory GPU and a local host computer running an Nvidia GeForce GTX 1080 Ti GPU.
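If you wish to reproduce such quality estimates on your own images, PSNR and SSIM can be computed, for example, with scikit-image; the sketch below assumes single-channel images of identical pixel dimensions and illustrative file names (pass channel_axis=-1 to structural_similarity for RGB images).

```python
# Minimal sketch of computing PSNR and SSIM between a model output and a
# ground-truth super-resolution image of the same field, using scikit-image.
import tifffile
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

output = tifffile.imread("transformed.tif").astype(float)
ground_truth = tifffile.imread("ground_truth.tif").astype(float)

data_range = ground_truth.max() - ground_truth.min()
print("PSNR (dB):", peak_signal_noise_ratio(ground_truth, output, data_range=data_range))
print("SSIM:", structural_similarity(ground_truth, output, data_range=data_range))

# Sanity check from Anticipated Results: the transformed image's pixel
# dimensions should be roughly three times those of the original.
original = tifffile.imread("original.tif")
print(original.shape, "->", output.shape)
```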

Figures
Following training, the Python code of the models was converted to the CoreML format optimized for use on Apple devices. The converted models were quantized to reduce the size of the model files. Quantization decreased the size of the CoreML files approximately two-fold without affecting their performance.

Exporting data. Workflows A and B. All data (numerical and images) produced by the protocol can be exported either on a desktop (a-c) or a tablet (d-f). All exported data can be included in a single report file for archiving and further use.
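The conversion and quantization steps can be illustrated with the coremltools Python package; the sketch below uses a trivial stand-in network and illustrative names rather than our actual models.

```python
# Minimal sketch: convert a traced PyTorch model to CoreML and quantize
# its weights from 32-bit to 16-bit floats, roughly halving the file size.
import coremltools as ct
import torch
import torch.nn as nn

# Trivial stand-in for a trained generator; substitute the real model.
model = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Sigmoid()).eval()

example = torch.rand(1, 3, 256, 256)
traced = torch.jit.trace(model, example)

# Convert to the neuralnetwork format, which the quantization utility accepts.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="image", shape=example.shape)],
    convert_to="neuralnetwork",
)

from coremltools.models.neural_network import quantization_utils
mlmodel_fp16 = quantization_utils.quantize_weights(mlmodel, nbits=16)
mlmodel_fp16.save("SRGAN_ML.mlmodel")
```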