Our proposed I-EDL method consists of a single-shot off-axis digital holographic interferometer and an ensemble deep learning system for capturing interference holograms and recognizing complex object wavefronts. The interferometer produces off-axis holograms whose shifted spectrum of the object's real image can be easily extracted and processed by the fast Fourier transform method, which is explained in detail in [10, 11]. The optical setup is based on the principle of spatial coherence and is installed on a curtain-enclosed optical table [14]. The coherent light source is a red He-Ne laser with a wavelength of 632.8 nm [15], and the laser beam is approximately 2 mm in diameter. The laser penetrates the tissue of a specimen, which is moved inside a fixed fixture along the x-y directions to capture hundreds of samples from a particular specimen. The holograms are captured by a CMOS camera equipped with a Nikon Plan microscope objective [16]. Fig. 21 shows a schematic diagram of the optical setup in detail, where MO is a microscope objective and NDF is a neutral density filter. The reference light is separated by beam splitters (BS2, BS3) into three beams, which are polarized at 0°, 45°, and 90° by three linear polarizers (LP0°, LP45°, and LP90°), respectively. The beams are then combined by BS4 and BS5 and interfere with the object light to form the fringe image. The system employs two neutral density filters, one in the object beam (Lo) and one in the reference beam (Lr), to keep the light intensity below overexposure. The interference hologram of the object is recorded by the camera. Compared with a standard off-axis system, this setup captures interference holograms with different polarization states in a single shot, reduces image degradation due to light scattering, and has higher imaging efficiency. The extra polarization information can be exploited for image denoising; more details can be found in [17].
The hologram classifier employed in this paper is EDL-IOHC. The approaches in [18] and [19] opened a new paradigm for recognizing digital holograms directly by wavefront analysis. EDL-IOHC was chosen for its robustness and high accuracy under noisy conditions with occlusion compared to its predecessors in [18, 19]. For clarity of explanation, the hologram classifier EDL-IOHC, a complex wavefront recognition system, is outlined below.
Fig. 22(a) shows the structure of EDL-IOHC [6] for recognizing digital holograms, with a powerful capability to handle occluded objects contaminated with speckle noise. The first and second CNNs, known as the Magnitude CNN and the Phase CNN, accept the magnitude and phase components of the digital hologram, respectively; they are structurally identical but trained on the different components' information. As shown in Fig. 22(b), the architecture of each CNN can be divided into three sections. Sec. 1 and Sec. 2 have identical structures but different hyper-parameters, each containing a convolution layer for local feature extraction followed by max-pooling and dropout layers. The third section is shared by both CNNs: a "Concatenate Unit" that ensembles the output information from the two CNNs. The concatenate unit merges all the extracted phase and magnitude features into a combined flattened feature vector before feeding it into the "Output Dense Layer", whose decision unit outputs the identity of the input digital hologram.
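The ensembling stage described above can be sketched numerically. The following is a minimal NumPy illustration of the "Concatenate Unit" plus "Output Dense Layer" only (not the paper's actual network); the feature dimensions, random weights, and function names are assumptions for illustration, and the convolutional branches are taken as already computed:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def concatenate_and_classify(mag_features, phase_features, W, b):
    """Ensemble the flattened magnitude and phase feature vectors and
    pass them through a single dense softmax layer, mirroring the
    'Concatenate Unit' + 'Output Dense Layer' described above."""
    fused = np.concatenate([mag_features.ravel(), phase_features.ravel()])
    return softmax(fused @ W + b)

# Toy dimensions (assumed): 64 features per CNN branch, 10 tissue classes.
rng = np.random.default_rng(0)
mag = rng.standard_normal(64)    # stand-in for Magnitude CNN output
pha = rng.standard_normal(64)    # stand-in for Phase CNN output
W = rng.standard_normal((128, 10)) * 0.01
b = np.zeros(10)
probs = concatenate_and_classify(mag, pha, W, b)
predicted_class = int(np.argmax(probs))
```

In a trained network the weights `W`, `b` would be learned jointly with both CNN branches; here they are random placeholders only to show the data flow.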
This study employs the hologram classifier EDL-IOHC to identify the tissue object wavefronts (digital holograms) reconstructed from the interference fringe patterns.
The interference fringe pattern, referred to as the interference hologram \(\Gamma\), is a real-valued quantity obtained by measuring the intensity that results from the linear superposition of a diffracted object wavefront \(O\) and a reference wavefront \(R\). Mathematically, the recorded intensity image can be expressed as follows:
$$\Gamma\left(m,n\right)={\left|R\left(m,n\right)+O\left(m,n\right)\right|}^{2}\tag{2.1}$$
where \(\Gamma\left(m,n\right)\) is the intensity of the captured hologram with a size of M columns × N rows, \(R\left(m,n\right)\) is the reference wavefront, and \(O\left(m,n\right)\) is the object wavefront.
Expanding Eq. (2.1) gives:
$$\Gamma\left(m,n\right)={\left|R\left(m,n\right)\right|}^{2}+{\left|O\left(m,n\right)\right|}^{2}+O\left(m,n\right)R^{*}\left(m,n\right)+O^{*}\left(m,n\right)R\left(m,n\right)\tag{2.2}$$
where * denotes the complex conjugate, \({\left|R\left(m,n\right)\right|}^{2}\) is the squared magnitude of the reference wavefront, and \({\left|O\left(m,n\right)\right|}^{2}\) is the squared magnitude of the object wavefront. \(\Gamma\) is a set of dark and bright fringes that embeds the amplitude and phase information of the corresponding complex object wavefront.
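The equivalence of Eqs. (2.1) and (2.2) can be checked numerically. The sketch below uses synthetic wavefronts (the grid size, amplitude, and carrier frequencies are illustrative assumptions, not values from the experiment); note that the two cross terms are complex conjugates of each other, so their sum is real, as an intensity must be:

```python
import numpy as np

# Synthetic wavefronts on a small M x N grid (illustrative values only).
rng = np.random.default_rng(1)
M, N = 32, 32
n, m = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")

# A random complex object wave and an off-axis plane reference
# wave of amplitude A (assumed carrier frequencies 5 and 3 cycles).
O = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
A = 1.5
R = A * np.exp(2j * np.pi * (5 * m / M + 3 * n / N))

# Eq. (2.1): the recorded intensity.
gamma = np.abs(R + O) ** 2
# Eq. (2.2): its term-by-term expansion.
expanded = np.abs(R) ** 2 + np.abs(O) ** 2 + O * np.conj(R) + np.conj(O) * R
```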
The discrete Fourier transform (DFT) is performed on the off-axis interference hologram and generates four terms in the frequency domain; the DFT transforms the interference hologram from the spatial domain to the frequency domain in a discrete manner. Performing the DFT on Eq. (2.2) yields Eq. (2.3):
$$H\left(u,v\right)=A^{2}MN\,\delta\left(u,v\right)+\mathrm{DFT}\left\{{\left|O\left(m,n\right)\right|}^{2}\right\}+\mathrm{DFT}\left\{O\left(m,n\right)R^{*}\left(m,n\right)+O^{*}\left(m,n\right)R\left(m,n\right)\right\}\tag{2.3}$$
where \(u,v\) are the frequency coordinates, \(\delta\left(\cdot\right)\) is the delta function, and \(A\) is the amplitude of the reference wave.
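The first term of Eq. (2.3) can be verified directly: for a plane reference wave, \(|R(m,n)|^2 = A^2\) everywhere, so its DFT collapses to a single zero-frequency spike of height \(A^2MN\). A small NumPy check (grid size and amplitude are arbitrary illustrative values):

```python
import numpy as np

M, N = 64, 64
A = 0.8
# |R(m,n)|^2 is constant at A^2, so its 2-D DFT is a DC delta
# of height A^2 * M * N, matching the A^2 M N delta(u,v) term.
H_ref = np.fft.fft2(np.full((N, M), A ** 2))
dc = H_ref[0, 0].real                               # expected: A^2 * M * N
off_dc = np.abs(H_ref).sum() - np.abs(H_ref[0, 0])  # expected: ~0
```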
In the frequency domain, the spectral separation of the frequency components in the recorded off-axis hologram provides an easy means to isolate specific wavefront information in Fourier space. The spectrum of the third term is extracted by a masking method, which removes the zero-order low-frequency spectrum and the twin-image spectrum. The extracted spectrum \(\mathrm{DFT}\{O\left(m,n\right)R^{*}\left(m,n\right)\}\) in Eq. (2.3) is centered (the masking method and centering algorithm are introduced in [10] in great detail), and an inverse Fourier transform then yields the scaled complex object wavefront \(AO\left(m,n\right)\), i.e., the object wavefront multiplied by the reference wave amplitude A. A 'min-max' normalization algorithm [20] is then applied to \(AO\left(m,n\right)\). This normalization, widely used in the machine learning community, linearly maps the values of a data array from [minimum value, maximum value] to [-1, 1]. It removes the effect of the scalar multiplication by the reference wave and thus provides a robust pre-processing step for recognition. Fig. 23 illustrates the procedure for obtaining the object wavefront (the digital hologram) used to train the EDL-IOHC deep learning network.
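The mask-center-invert-normalize pipeline can be sketched as follows. This is a simplified stand-in for the method of [10], not the authors' implementation: a circular mask at a known carrier frequency (here assumed given; [10] describes how the spectrum is located and centered), re-centering by a circular shift, and a min-max rescaling to [-1, 1]. The demo object, carrier, and mask radius are all assumed values, and applying the normalization to the magnitude only is also an assumption, since the paper does not specify how the complex array is normalized:

```python
import numpy as np

def extract_object_wavefront(hologram, carrier, radius):
    """Isolate one first-order term of the off-axis spectrum with a
    circular mask, shift it back to the DC position, and inverse
    transform to recover the scaled object wavefront A*O(m,n)."""
    H = np.fft.fftshift(np.fft.fft2(hologram))
    N, M = hologram.shape
    v, u = np.meshgrid(np.arange(N) - N // 2,
                       np.arange(M) - M // 2, indexing="ij")
    u0, v0 = carrier
    mask = (u - u0) ** 2 + (v - v0) ** 2 <= radius ** 2
    # Keep only the selected first-order spectrum, re-center it on DC.
    H_obj = np.roll(H * mask, (-v0, -u0), axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(H_obj))

def minmax_to_pm1(x):
    """Linear 'min-max' rescaling of a real array to [-1, 1]."""
    lo, hi = x.min(), x.max()
    return 2.0 * (x - lo) / (hi - lo) - 1.0

# Demo on a synthetic hologram: Gaussian object, plane reference (A = 1).
M = N = 128
n, m = np.meshgrid(np.arange(N), np.arange(M), indexing="ij")
O = np.exp(-((m - 64) ** 2 + (n - 64) ** 2) / 200.0)   # real test object
u0, v0 = 30, 20                                        # assumed carrier
R = np.exp(2j * np.pi * (u0 * m / M + v0 * n / N))
hologram = np.abs(R + O) ** 2
AO = extract_object_wavefront(hologram, (u0, v0), radius=12)
AO_norm = minmax_to_pm1(np.abs(AO))                    # magnitude in [-1, 1]
```

With the smooth test object, the masked first-order term contains essentially the whole object spectrum, so `AO` reproduces the object wave up to numerical error.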
The tissue interference holograms are captured from the biological samples. A fast DFT transforms the interference holograms from the spatial domain to the frequency domain; the spectra of the object wavefronts are extracted, and a fast inverse DFT restores the spectra to the object wavefronts. The interference hologram spectrum and object wavefront extraction methods have been reported in detail in [10–13]. The object wavefront extracted by this process is a complex quantity containing both the magnitude and phase components of the object wave \(O\left(m,n\right)\): a digital hologram. The full dataset is split into a training set and a held-out test set; EDL-IOHC is trained on the training set and evaluated on the test set. Aberrations were found to come from dust on optical lenses and mirrors, and Airy-plaque-like rings [21] arise from the system's lenses. However, the deep learning network adapts to these background irregularities during the training stage and continues to perform well in the later recognition stage without any background compensation.
Ten different types of tissues are captured from ten different flawed biological specimens: Cucurbita Stem, Pine Stem, Corn (Zea Mays) Seed, House Fly Wing, Honeybee Wing, Bird Feather, Corpus Ventriculi, Liver Section, Lymph Node, and Human Chromosome, with their class labels shown in Table 2-1 below.
Table 2-1 The class labels for different specimens.
| Specimen Class | Class Label |
| --- | --- |
| Cucurbita Stem | 0 |
| Pine Stem | 1 |
| Corn (Zea Mays) Seed | 2 |
| House Fly Wing | 3 |
| Honeybee Wing | 4 |
| Bird Feather | 5 |
| Corpus Ventriculi | 6 |
| Liver Section | 7 |
| Lymph Node | 8 |
| Human Chromosome | 9 |
Five hundred interference holograms are captured from the tissues of each of the ten biological specimen classes, resulting in a total dataset of 5000 digital holograms (object wavefronts). These are used to train the hologram classifier EDL-IOHC; the trained classifier is then used to identify the type of biological specimen by recognizing the tissues' digital holograms.
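The dataset assembly and split described above can be sketched as follows. The 8×8 random arrays are stand-ins for the normalized complex wavefronts, and the 80/20 train/test ratio is an assumption, since the paper does not state the split used:

```python
import numpy as np

rng = np.random.default_rng(42)
n_classes, per_class = 10, 500          # 10 specimen classes x 500 holograms

# Stand-in dataset: in practice each sample is the normalized object
# wavefront A*O(m,n); labels 0..9 follow Table 2-1.
X = rng.standard_normal((n_classes * per_class, 8, 8))
y = np.repeat(np.arange(n_classes), per_class)

# Shuffle and hold out 20% as the test set (assumed ratio).
idx = rng.permutation(len(y))
split = int(0.8 * len(y))
train_idx, test_idx = idx[:split], idx[split:]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
```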