5.1. Wound healing experiments
Human coronary artery endothelial cells (HCAEC) were purchased from Cell Systems, Germany. To evaluate initial cell growth, HCAEC were used at passages 3 to 5 and cultured in endothelial cell growth medium (Cell Systems, Germany) containing 10% fetal calf serum (FCS). Cells were seeded at a density of 3x10^5 cells/mL into Culture-Inserts 2 Well (Ibidi, Germany) on a 25 x 75 mm Thermanox™ coverslip (Fisher Scientific, Germany). According to the manufacturer's instructions, 70 µL of the cell suspension was added to each well. After 24 h the insert was removed, generating a defined longitudinal gap of approx. 500 µm between the two monolayers. Subsequently, the coverslip was attached to a 0.8 mm sticky-slide I Luer perfusion channel. Using the ibidi pump system (Ibidi GmbH, Germany), endothelial cells were exposed to a constant laminar wall shear stress of 0.15 Pa or 1 Pa in an incubator (5% CO2, 95% humidity). Additionally, control samples were kept under static conditions. In the incubator, cell movement was monitored over a period of 15 h using the JuLi™ Live Cell Analyser (NanoEnTek, Korea), capturing a live-cell image every 15 minutes. Since this work is intended to serve as a basis for future studies on cell growth under flow conditions, all results presented here are based on wound healing assays under flow.
5.2. Cell detection and segmentation
In order to segment the endothelial cells, we used the U-net semantic segmentation network developed by Ronneberger et al. 2015 [19]. Here, we utilized a variant of the network implemented in MATLAB consisting of three encoder and three decoder stages. Each contraction stage consists of two 3x3 convolutions with a rectified linear unit (ReLU) activation, followed by a 2x2 max-pooling layer. The stages of the upsampling side of the network consist of transposed convolutions, the concatenated feature maps from the corresponding downsampling path, and subsequent ReLU layers [19, 31], see Fig. 9.
The network structure was generated using MATLAB's pre-implemented function unetLayers. For a more detailed description of the network, refer to Ronneberger et al. 2015 [19] and the MATLAB documentation. To classify each pixel, we used a softmax layer. The network loss during training was calculated with the Tversky loss function, which balances false positive and false negative detections via the weighting factors alpha and beta. For this, the MATLAB implementation of Salehi et al. [32] was used; their work is recommended for a detailed understanding.
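The Tversky loss trades off the two error types through alpha and beta. As an illustration (the original work uses the MATLAB implementation of Salehi et al. [32]; this is a minimal NumPy sketch for a binary foreground/background mask):

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """Tversky loss for binary segmentation.

    pred   -- predicted foreground probabilities, shape (H, W)
    target -- binary ground-truth mask, shape (H, W)
    alpha  -- weight of false positives
    beta   -- weight of false negatives
    """
    p = pred.ravel().astype(float)
    g = target.ravel().astype(float)
    tp = np.sum(p * g)            # true positives
    fp = np.sum(p * (1.0 - g))    # false positives
    fn = np.sum((1.0 - p) * g)    # false negatives
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky
```

With alpha < beta, as used here, false negatives are penalized more strongly than false positives.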
Using U-net, a wide range of image sizes can be segmented. The only constraint is that the edge lengths of the feature maps must be even before each max-pooling step. To accommodate arbitrary image sizes, the image is mirrored at its edges and cropped to the required size. Mirroring at the edges also has the advantage that cells at the image border can be segmented without artifacts [31].
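The mirroring step can be sketched with NumPy's reflect padding (an illustrative Python sketch; the required multiple depends on the number of pooling stages, here assumed to be three, i.e. a multiple of 2^3 = 8):

```python
import numpy as np

def mirror_pad_to_multiple(img, multiple=8):
    """Mirror-pad an image so that both edge lengths become a multiple
    of `multiple` (2^depth for a U-net with `depth` pooling stages).
    Returns the padded image and the padding applied, so the
    segmentation result can be cropped back to the original size."""
    h, w = img.shape
    pad_h = (-h) % multiple
    pad_w = (-w) % multiple
    padded = np.pad(img, ((0, pad_h), (0, pad_w)), mode="reflect")
    return padded, (pad_h, pad_w)
```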
To separate cells that lie close to each other, cells and cell borders were labelled as distinct classes, as described by Ronneberger et al. 2015 [19].
For training, we semi-manually segmented cell images using an adaptive thresholding algorithm and manually fine-tuned the predictions. For this purpose, a customized graphical user interface was programmed, which allows the user to easily generate additional training images in order to improve the segmentation result of the network on specific cell images. In this way, 280 arbitrarily sized training images were segmented. To train the network sufficiently with few training images, data augmentation is essential. In each epoch, we generated up to 100 augmented images from one original training image by applying random skewing, rotation, translation and brightness variation. The augmentation was performed on the fly using the imageDataAugmenter function in MATLAB.
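The on-the-fly augmentation can be illustrated as follows (a simplified Python sketch covering only translation and brightness variation; the original pipeline additionally applies random skewing and rotation via MATLAB's imageDataAugmenter):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, mask, max_shift=10, max_gain=0.2):
    """Generate one augmented (image, mask) pair by random translation
    and brightness variation. The same shift is applied to image and
    mask so that labels stay aligned; brightness only affects the image."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    img_a = np.roll(img, (dy, dx), axis=(0, 1))
    mask_a = np.roll(mask, (dy, dx), axis=(0, 1))
    gain = 1.0 + rng.uniform(-max_gain, max_gain)
    img_a = np.clip(img_a * gain, 0.0, 1.0)
    return img_a, mask_a
```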
In order to solve the minimization problem, we used an adaptive momentum (Adam) solver with an initial learning rate of 0.001. As one of the most important hyperparameters for training, the learning rate was reduced by a factor of 0.9 every epoch. Before training, the image data were normalized to the range [0, 1]. Training worked well with a small mini-batch size of 30 images. To avoid the network learning structures only from the last training images, the data were shuffled before each epoch. As weighting factors for the Tversky loss, we achieved the best results with alpha = 0.3 and beta = 0.7. The network was trained on an Nvidia GTX 1500i with 4 GB GDDR5 RAM using CUDA 10.2.
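The normalization and the per-epoch learning-rate decay can be expressed compactly (a Python sketch of the schedule described above; function names are illustrative):

```python
import numpy as np

def normalize01(img):
    """Rescale image intensities to the range [0, 1] before training."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def lr_schedule(epoch, lr0=1e-3, decay=0.9):
    """Learning rate after `epoch` completed epochs (0.9 decay per epoch)."""
    return lr0 * decay ** epoch
```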
The output of the network is an image segmented into background, cell and cell border. We applied a median filter to the segmentation results to remove noisy predictions. For further processing, the results were binarized: cell borders were merged with the background and set to 0, while cells were set to 1. Accordingly, each connected pixel region forms an individual cell. To validate the segmentation results, the Jaccard index was used, referred to as Intersection over Union (IoU). This metric also penalizes false positives.
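The IoU metric can be computed directly from two binary masks (a minimal Python sketch):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union (Jaccard index) of two binary masks.
    False positives enlarge the union, so they lower the score."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union > 0 else 1.0
```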
For conventional cell detection, we applied an adaptive threshold method in MATLAB using the Image Processing Toolbox (MATLAB R2019b). The threshold was adjusted based on a locally calculated mean grey value field. In our study, a neighborhood of 161x121 pixels was chosen for the calculation of the mean intensity. A sensitivity factor (ratio of background and foreground pixels) was set manually, which allows for an adaptation to the particular cell series. Using morphological operations, holes in detected cells were closed (imfill) and very small segmentations below the manually observed minimum cell size were removed (bwareaopen).
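The local-mean thresholding can be sketched as follows (a naive Python illustration; MATLAB's adaptthresh computes the same local mean far more efficiently via integral images, and the semantics of its sensitivity parameter differ slightly):

```python
import numpy as np

def adaptive_binarize(img, win=(161, 121), sensitivity=0.0):
    """Binarize an image against a local mean field.
    A pixel is foreground when it is brighter than the mean of its
    win[0] x win[1] neighborhood, scaled by (1 - sensitivity).
    Naive sliding-window version, O(H*W*win) -- for illustration only."""
    h, w = img.shape
    wy, wx = win[0] // 2, win[1] // 2
    f = img.astype(float)
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        y0, y1 = max(0, y - wy), min(h, y + wy + 1)
        for x in range(w):
            x0, x1 = max(0, x - wx), min(w, x + wx + 1)
            out[y, x] = f[y, x] > f[y0:y1, x0:x1].mean() * (1.0 - sensitivity)
    return out
```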
5.3. Cell tracking and cell velocity
Cell tracking based on iCD
By means of the iCD approach, individual endothelial cells were detected in every live-cell image. As a result, we were able to define the position of an individual cell by the centroid of its cell area, and each individual cell was assigned a unique ID. The challenge of cell tracking lies in recognizing individual cells across sequential images. For our cases, we applied the so-called nearest-neighbor algorithm, see Fig. 10. According to this method, the nearest cell centroid in the following frame is assigned to the cell center of the previous frame. The quality of the algorithm was increased by defining a maximum length of cell motion (R) as an additional criterion for cell recognition.
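One simple variant of this matching step can be sketched as follows (a Python illustration with a greedy one-to-one assignment; the tie-breaking details of the original implementation may differ):

```python
import numpy as np

def match_nearest(prev_pts, next_pts, r_max):
    """Nearest-neighbor cell matching between two frames.

    prev_pts, next_pts -- (N, 2) / (M, 2) arrays of cell centroids
    r_max              -- maximum allowed displacement per frame (R)
    Returns a dict mapping index in prev_pts -> index in next_pts;
    cells with no unassigned neighbor within r_max are treated as lost."""
    matches = {}
    taken = set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(next_pts - p, axis=1)
        for j in np.argsort(d):
            if d[j] > r_max:
                break                  # nothing within R -> track ends
            if j not in taken:
                matches[i] = int(j)
                taken.add(int(j))
                break
    return matches
```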
Cell Image Velocimetry (CIV)
The Cell Image Velocimetry (CIV) approach originates from the Particle Image Velocimetry (PIV) method, which is widely used in fluid mechanics [27]. PIV is a full-field post-processing method for the determination of velocity fields in fluid flows. For this purpose, correlations between small subunits of two sequential images, so-called interrogation areas (IA), are determined. From the displacement (grey value shift) and the time step between the sequential images, a velocity vector is obtained for each IA [24, 33]. Tracer particles, which are usually required for flow visualization with PIV, are not necessary for CIV; here, the movement of cell compartments and structures already yields an evaluable signal. We applied an adaptive correlation algorithm from Dantec Dynamics (Dantec Dynamics A/S, Denmark), which is included in the PIV analysis software Dynamic Studio. The minimum and maximum sizes of the IA were defined as 32x32 pixels and 64x64 pixels, respectively. The permitted overlap of the IAs was set to 50%. For this study, we used raw cell images without any preprocessing.
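The core correlation step of PIV/CIV can be illustrated with an FFT-based cross-correlation of two interrogation areas (a Python sketch; Dynamic Studio's adaptive correlation additionally adapts the IA size and estimates subpixel shifts). Dividing the resulting shift by the time step between the frames yields the velocity vector of the IA:

```python
import numpy as np

def displacement(ia1, ia2):
    """Estimate the integer-pixel shift of ia2 relative to ia1 via
    FFT-based circular cross-correlation of two interrogation areas."""
    f1 = np.fft.fft2(ia1 - ia1.mean())
    f2 = np.fft.fft2(ia2 - ia2.mean())
    corr = np.real(np.fft.ifft2(f1.conj() * f2))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks beyond half the IA size correspond to negative shifts
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)  # (dy, dx) in pixels
```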
5.4. Leading edge detection for wound closure analysis
Leading edge detection based on iCD
The iCD method provides individually segmented cells. In order to obtain a closed cell front at the edge of the monolayer, the results need to be upscaled from cell scale to population scale. Therefore, each pixel of each cell boundary was radially dilated by a factor of 15, so that all cell boundaries slightly overlapped. Afterwards, using the bwareaopen function of MATLAB, only the two largest areas, the gap and the cell monolayer, were kept. Then both areas were eroded by a factor of 15. By means of the bwboundaries function, we detected the edge of the gap and calculated the edge protrusion as well as the edge velocity in an additional post-processing step.
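The dilate-keep-erode sequence amounts to a morphological closing combined with small-object removal. The closing part can be sketched as follows (a Python illustration with a square structuring element standing in for the radial one; np.roll wraps at the image borders, so a real implementation should pad first, and keeping the two largest regions additionally requires connected-component labelling, e.g. bwlabel in MATLAB):

```python
import numpy as np

def dilate(mask, r):
    """Binary dilation with a (2r+1)x(2r+1) square structuring element."""
    out = np.zeros_like(mask, dtype=bool)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def erode(mask, r):
    """Binary erosion as the morphological dual of dilation."""
    return ~dilate(~mask.astype(bool), r)

def close_gaps(mask, r):
    """Morphological closing: bridge gaps narrower than ~2r pixels."""
    return erode(dilate(mask, r), r)
```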
U-net for direct scratch detection
As a second AI approach, our intelligent direct scratch detection (iDSD) approach aims to train a U-net at the population-scale level using manually segmented gaps. The network architecture corresponded to the one described above for iCD. A total of 170 training images were used to train the network, and this dataset was again augmented with the MATLAB data augmentation function as described before. In total, 100 augmented images per live-cell image were generated. A schematic illustration of both U-net approaches is visualized in Fig. 11.
Canny method
For conventional edge detection, we applied the Canny method [24]. Customized image processing software based on the Canny method was implemented using MATLAB's Image Processing Toolbox (edge(Image,'canny')). First, images were converted to grayscale (rgb2gray). The relevant edges between cell and background were isolated by manually adjusting a threshold value. These edges were dilated by a user-defined factor, typically in the range of 1 to 20 pixels. Finally, smaller elements were removed and the resulting areas were eroded again, yielding a binary image consisting of the cell monolayer and the gap (using bwareaopen and imerode). The bwboundaries function from MATLAB was used to calculate the position and length of the edge.
Manual detection and segmentation
Manual data were generated for training purposes and as a reference for the presented wound healing analysis methods. In addition to the 280 manually segmented cell images for iCD training, we manually segmented 150 individual cells to evaluate the training success of the neural network. For comparison with automatic cell density determinations, approximately 360 individual endothelial cells were manually marked. To validate the cell velocity estimation, the velocity of each cell was measured manually on two consecutive images; again, this was performed for 360 single cells. Furthermore, the manual cell tracking was repeated three times by different operators.
For training and evaluation of the population-scale methods, the leading edge was detected freehand. Of these images, 170 were used for training and 50 for evaluation.