2.1 Design of the meniscus multi-focusing compound eye lens.
The design of the compound eye determines the performance of the system. The first step is to design the base of the compound eye. The base has a radius of 9 mm and a height of 4.2 mm, which allows it to cover the target surface of a 1-inch flat image sensor. Under the same air pressure, the tensile deformation of the polydimethylsiloxane (PDMS, Dow Chemical, China) membrane46–48 is approximately linear in the membrane diameter. To show the amount of deformation more intuitively, it is plotted as the line graph in Fig. 2(a). Based on this relationship, the negative-pressure forming method can be used to design each ring of ommatidia of the meniscus compound eye.
In order for the compound eye to match the flat image sensor directly, the focal points of the ommatidia need to converge on the same plane. The focal length and the radius of the ommatidia increase with distance from the center, and the main optical axis of every microlens passes through the center of the curved base. The focal length f_n of each ommatidium should equal the distance l_n from the ommatidium to the focal plane; it is mainly determined by the refractive index n of the material and the radius of curvature r_n. The lenses are cured from NOA63 (NORLAND, America), whose refractive index is n = 1.56.
$${f}_{n}={l}_{n}=\frac{{r}_{n}}{n-1}$$
1
From the geometric relationship of a spherical cap, the radius of curvature r_n of an ommatidium can be expressed in terms of its height h_n and diameter d_n.
$${r}_{n}=\frac{{d}_{n}^{2}+{4h}_{n}^{2}}{{8h}_{n}}$$
2
According to the PDMS membrane deformation line chart, combined with the expressions for f_n and r_n, the relationship between the radius of the ommatidia and the focal length of the ommatidia can be obtained, as shown in Fig. 2(b).
To avoid imaging overlap between different ommatidia and insufficient strength of the PDMS-covered silicon wafer during negative-pressure forming, the distance between adjacent ommatidia is designed to be 0.15 mm, as shown in Fig. 2(c). The diameter, height, and focal length of the ommatidia are listed in Table 1.
Table 1. The diameter, height, and focal length of the ommatidia (unit: mm).

| Ring | Diameter d_n | Height h_n | Focal length f_n |
|------|--------------|------------|------------------|
| 1    | 0.500        | 0.090      | 0.709            |
| 2    | 0.520        | 0.097      | 0.717            |
| 3    | 0.560        | 0.109      | 0.736            |
| 4    | 0.660        | 0.131      | 0.801            |
| 5    | 0.880        | 0.226      | 0.980            |
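As an illustration of Eqs. (1) and (2), the short Python sketch below (a thin, plano-convex approximation, assuming NOA63 with n = 1.56 as stated above; dimensions in mm) computes the radius of curvature and focal length of an ommatidium from its diameter and height.

```python
# Design relations for one ommatidium (Eqs. 1 and 2), thin plano-convex approximation.
# Assumes NOA63 with refractive index n = 1.56; all dimensions in mm.

def curvature_radius(d, h):
    """Eq. (2): radius of curvature of a spherical cap with diameter d and height h."""
    return (d**2 + 4 * h**2) / (8 * h)

def focal_length(d, h, n=1.56):
    """Eq. (1): focal length f = r / (n - 1), equal to the distance to the focal plane."""
    return curvature_radius(d, h) / (n - 1)

# Example: central ommatidium from Table 1 (d = 0.500 mm, h = 0.090 mm)
print(f"r1 = {curvature_radius(0.500, 0.090):.3f} mm, f1 = {focal_length(0.500, 0.090):.3f} mm")
# -> r1 ≈ 0.392 mm, f1 ≈ 0.700 mm, close to the 0.709 mm listed in Table 1
```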
The larger the numerical aperture, the greater the luminous flux entering the lens; the numerical aperture increases with the effective diameter of the lens and decreases with the focal distance. The numerical aperture of the central ommatidium is NA = 0.52.
$$\mathrm{NA}=n\,\frac{r}{\sqrt{r^{2}+f^{2}}}$$
3
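Substituting the values of the central ommatidium from Table 1, and interpreting r in Eq. (3) as the aperture radius d_1/2 = 0.25 mm and f as the focal length 0.709 mm (this interpretation reproduces the stated value):
$$\mathrm{NA}=\frac{1.56\times 0.25}{\sqrt{0.25^{2}+0.709^{2}}}\approx 0.52$$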
The corresponding compound eye model is built in SolidWorks and imported into Zemax for ray tracing. The spot size is uniform and the light intensity is uniform, as shown in Fig. 3(a-b), so the lens can be connected directly to the flat image sensor without an optical relay. The FWHM (full width at half maximum) is used as an evaluation index of imaging resolution. At an incident angle of 50 degrees, the distortion and FWHM of the edge ommatidia with the meniscus structure are better than those of the plano-convex structure, as shown in Fig. 3(c). The two inner rings of ommatidia cannot image a field of view beyond 50 degrees, so only the FWHM of the three outer rings is listed.
2.2 Preparation of multifocal meniscus compound eye.
The compound eye parameters obtained above are transferred onto a mask; a film mask is used. A two-dimensional drawing of the design pattern is made in CAD software, with the white areas transparent. Because holes must be etched through the silicon wafer, a thinner wafer is preferable, but a wafer that is too thin fractures mechanically during negative-pressure forming, so a 280 µm thick silicon wafer is chosen for photolithography. The positive photoresist AZ5214 (Merck, Germany) is used to transfer the mask pattern, the front surface of the wafer is etched to a depth of 80 µm with an ICP plasma etcher (Alcatel 601E, France), and the micropores are obtained by etching 200 µm from the back. Before the back is etched, the already-etched front side must be protected; for this purpose a 1 µm thick copper film is deposited on the front with a magnetron sputtering machine (Lesker LabLine, America), copper being soft and easy to remove. During copper deposition the vacuum must be kept below 10−8 Torr, otherwise the copper film partially peels off, and after deposition the wafer must rest for 24 hours to release the stress. Since the back-surface etching depth reaches 200 µm, a photoresist film alone can no longer protect the back surface, and aluminum has better etch resistance; therefore an aluminum mask layer is deposited on the back surface by evaporation. Photolithography is then performed on the back. After development, a small amount of H3PO4 is used to remove the aluminum exposed in the developed regions, giving a back pattern with an aluminum mask. The back is etched 200 µm to obtain a silicon wafer with through holes, the copper film is removed with FeCl3, and the remaining aluminum on the back is removed with H3PO4. The silicon wafer with a micro-hole array is thus prepared.
The PDMS membrane is placed over the through holes of the silicon wafer, and a negative pressure is applied to the lower side of the wafer. Under atmospheric pressure the PDMS membrane deforms; keeping the negative pressure on the lower side constant, NOA63 is dripped onto the PDMS membrane and cured under ultraviolet light for 3 minutes, and after cooling a microlens array is obtained. The flat microlens array then needs to be transferred onto a curved surface: PDMS is dropped onto the flat microlens array, kept horizontal, and cured at 80 °C for 3 hours to obtain a PDMS mold with the inverse structure of the flat microlens array. The thickness of the mold is 3 mm. The PDMS mold is placed into the self-made mold with the patterned side facing upward, and a negative pressure is applied; the PDMS mold deforms into a curved surface under the air pressure. The amount of surface deformation determines the height of the base of the meniscus multi-focusing compound eye lens. Keeping the air pressure constant, NOA63 is dripped on, a quartz lens is placed on top, and the assembly is cured for 7 minutes to obtain the final meniscus multi-focusing compound eye lens, as shown in Fig. 4.
2.3 Measurement and characterization of meniscus multi-focus compound eye lens.
After the compound eye lens is obtained, its geometric shape and optical performance are measured and characterized. Figure 5(a) is a photograph of the compound eye lens taken with a camera; the surface of the lens is smooth and the light transmittance is good. A scanning electron microscope (JEOL FESEM 6700F, Japan) is used to image the lens as a whole, as shown in Fig. 5(b); the overall shape is a spherical-crown compound eye.
Next, a Keyence digital microscope is used to measure the diameter of each ring of the compound eye in detail. The diameters from the center outward are 504.06 µm, 523.99 µm, 555.99 µm, 666.11 µm, and 886.31 µm, and the error relative to the design values is less than 1%. The error mainly comes from the measurement accuracy of the instrument and the deformation of the PDMS membrane. The overall error is small and has no effect on the imaging quality, so the lens can be considered consistent with the design.
The height of each ring of the compound eye is measured with a profilometer; the stylus of the profilometer is drawn across the center of each ring so that the height profile of the compound eye can be measured, and the measurement result is shown in Fig. 5(c). The heights of the rings are 90.3 µm, 97.2 µm, 109.1 µm, 131.4 µm, and 226.7 µm, and the error relative to the design values is less than 1%. The error is mainly caused by variation in the deformation of the PDMS membrane and by the stylus not passing exactly through the center of each ommatidium. Fitting the heights to the curve shown in Fig. 5(d) gives a linear relationship similar to the design values, which is in line with expectations.
The meniscus multi-focusing compound eye lens and a CMOS sensor (optical size: 1 inch, cell size: 2.4 × 2.4 µm, resolution: 5488 × 3672, frame rate: 19.5 fps) are assembled into a camera without an optical relay, and the imaging performance is tested using a photograph of a car; the result is shown in Fig. 6(a). After a laser beam (Edmund, wavelength: 632.8 nm) passes through the compound eye lens, and after testing and adjustment, the focal spot of each ring is as shown in Fig. 6(b), from which it can be concluded that the focal points lie in the same plane.
The FOV (field of view) is measured with the setup shown in Fig. 6(c); the camera is placed symmetrically, facing the center of the letter board. The FOV of the compound eye lens can then be obtained using geometric relations and trigonometric functions. In one shot, the distance between the two farthest letters is 180 mm, which is the value of D, and L is the distance from the camera to the letter board. After calculation, the FOV of the multifocal meniscus compound lens is 101.14°45. The FOV can be enlarged by increasing the number of rings of the compound eye.
$$\mathrm{FOV}=2\arctan\frac{D}{2L}$$
4
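As a consistency illustration of Eq. (4) (L is not stated explicitly in the text; this is only a back-calculation from the reported values), D = 180 mm and FOV = 101.14° imply a working distance of roughly
$$L=\frac{D}{2\tan(\mathrm{FOV}/2)}=\frac{180\ \mathrm{mm}}{2\tan(50.57^{\circ})}\approx 74\ \mathrm{mm}$$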
A 1951 USAF resolution target (Thorlabs, America) is used to test the resolution of the compound eye lens. The compound eye camera photographs the resolution target directly, and the finest line width that the compound eye can resolve is shown in Fig. 6(d), which is element 2 of group 5. Consulting the parameters of the resolution target, the resolution of the ommatidia is 36.00 lp/mm. High resolution is meaningful for subsequent image stitching and moving-object recognition: it makes the recognition of feature points more accurate and the boundary selection between foreground and background more accurate.
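For reference, this line width is consistent with the standard USAF 1951 relation between group G and element E:
$$\mathrm{Resolution\ (lp/mm)}=2^{\,G+(E-1)/6}=2^{\,5+1/6}\approx 36\ \mathrm{lp/mm}$$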
2.4 System Application of Meniscus Multi-focusing Compound Eye Lens.
The imaging field angle of each ommatidium is small. To obtain a large picture, the images obtained from each channel need to be stitched together. The overlapping images produced by the compound eye lens meet the requirements of image stitching. The Harris corner detection method is used to stitch multiple images: pixels in the area near a corner show large changes in gradient direction and amplitude. When a pixel window moves by (u, v), the gray-scale change is as follows, where w(x, y) is a Gaussian weighting function and I(x, y) is the gray value of the image, and from this the corner points in the image can be obtained.
$$E(u,v)=\sum_{x,y} w(x,y)\left[I(x+u,y+v)-I(x,y)\right]^{2}$$
5
The obtained corner points are filtered with the ANMS (adaptive non-maximal suppression) method to select a specific number of key points. The selected feature points are then matched, and the new stitched image obtained is shown in Fig. 7(a-b).
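A minimal sketch of this stitching pipeline is given below, assuming OpenCV (the text does not specify an implementation): Harris responses per Eq. (5), a greedy ANMS-style selection of well-separated corners, ORB descriptors for matching, and a homography to warp one channel's image onto another. Thresholds, counts, and the blending step are illustrative, not the authors' settings.

```python
import cv2
import numpy as np

def harris_keypoints(gray, max_pts=500, block=2, ksize=3, k=0.04, min_dist=15):
    """Harris response per Eq. (5); keep strong, well-separated corners (greedy ANMS-style)."""
    resp = cv2.cornerHarris(np.float32(gray), block, ksize, k)
    ys, xs = np.where(resp > 0.01 * resp.max())
    pts = sorted(zip(xs, ys), key=lambda p: -resp[p[1], p[0]])
    kept = []
    for x, y in pts:
        # keep a candidate only if it is far enough from every already-kept corner
        if all((x - kx) ** 2 + (y - ky) ** 2 > min_dist ** 2 for kx, ky in kept):
            kept.append((x, y))
        if len(kept) >= max_pts:
            break
    return [cv2.KeyPoint(float(x), float(y), 7) for x, y in kept]

def stitch_pair(img1, img2):
    """Match ORB descriptors at Harris corners and warp img2 onto img1's image plane."""
    g1, g2 = (cv2.cvtColor(i, cv2.COLOR_BGR2GRAY) for i in (img1, img2))
    orb = cv2.ORB_create()
    k1, d1 = orb.compute(g1, harris_keypoints(g1))
    k2, d2 = orb.compute(g2, harris_keypoints(g2))
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:100]
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # robust channel-to-channel homography
    canvas = cv2.warpPerspective(img2, H, (img1.shape[1] * 2, img1.shape[0]))
    canvas[:, :img1.shape[1]] = img1   # simple overwrite "blend", for illustration only
    return canvas
```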
One advantage of the compound eye is its sensitivity to moving objects, so tracking moving objects is crucial. When the background is static, the moving object is the foreground. The compound eye camera is kept static, and the foreground and background of the captured images are separated in real time so as to detect moving objects. The GMM (Gaussian mixture model) has been used in many settings because of its robustness to complex scenes, and it can meet the real-time requirements of the compound eye camera. A Gaussian mixture model is established for each pixel of the captured video; the number of Gaussian distributions is adaptive, and the background can be obtained from it. In a long-term observation scene the background is present most of the time, so most of the data supports the background distribution. The GMM is constantly updated and relearned, which makes it robust to small disturbances. The background is set to black and the foreground to white to form a binary image separating foreground from background, and contour recognition is performed on the foreground image. The compound eye camera is used to photograph a drone in flight at a distance of 1 meter from the camera. The contour is drawn as a rectangle: the green frame is the recognized moving object, and the yellow curve is the trajectory of the recognized object. The picture shows the moving drone across 5 frames of images. The direction of motion and the acceleration of the moving object can be judged from the direction and length of the yellow line between adjacent frames. The result shown in Fig. 7(c) is obtained by averaging the drone positions identified by the individual channels and removing excessively large and small values during the identification process. The large field of view and multi-channel imaging of the compound eye camera make it possible to identify objects accurately and implement the corresponding functions even with partial image loss and low resolution.
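A minimal per-channel sketch of this detection step, assuming OpenCV's GMM-based background subtractor (MOG2) stands in for the adaptive mixture model described above; the video file name, blob-area threshold, and drawing colors are illustrative.

```python
import cv2

# One adaptive mixture of Gaussians per pixel (MOG2); background pixels -> 0, foreground -> 255.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16, detectShadows=False)

cap = cv2.VideoCapture("ommatidium_channel.avi")   # illustrative file name for one channel's video
trajectory = []                                     # centers of detected boxes (yellow track)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                  # binary foreground/background image
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 300:                # drop very small blobs (noise)
            continue
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)   # green frame = moving object
        trajectory.append((x + w // 2, y + h // 2))
    for p, q in zip(trajectory, trajectory[1:]):    # yellow polyline = motion trajectory
        cv2.line(frame, p, q, (0, 255, 255), 2)
    cv2.imshow("moving object", frame)
    if cv2.waitKey(30) & 0xFF == 27:
        break
cap.release()
cv2.destroyAllWindows()
```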
Stray light in the space between the ommatidia reduces the imaging quality and lowers the recognition rate of image-stitching feature points and of GMM moving-object recognition. In this paper, the addition of diaphragms between adjacent ommatidia is studied. COMSOL software is used to obtain the result shown in Fig. 8; after calculation, the stray light can be reduced by more than 70%. With these diaphragms the imaging has a higher signal-to-noise ratio. This improvement can greatly increase the imaging quality, so that the compound eye can be used in more fields.