4.1 Model comparison
To validate the segmentation performance of the proposed deep learning algorithm, a comparative experiment with the traditional segmentation models U-net and DeeplabV3+48 on our open-source dataset was designed to compare their segmentation results, mean intersection over union (mIoU), and loss. In the recognition process, red represents the "unsplit" state, green indicates the "splitting" state, yellow represents the "split" state, and blue signifies the "merging" state. The mIoU indicates the extent to which the model-predicted area overlaps with the actual area, and the loss measures the difference between the model prediction and the actual target.
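As a minimal sketch of how the mIoU can be computed from integer-labelled masks (the class indices and function below are illustrative, not the paper's implementation):

```python
import numpy as np

# Illustrative class indices for background plus the four droplet states (assumed)
CLASSES = {0: "background", 1: "unsplit", 2: "splitting", 3: "split", 4: "merging"}

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int = 5) -> float:
    """Mean Intersection over Union, averaged over classes present in either mask."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both masks
            ious.append(inter / union)
    return float(np.mean(ious))
```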
Figure 4a-d illustrates the process of a droplet splitting: it transitions from the initial red "unsplit" state, through the green "splitting" state, to the yellow "split" state. In our evaluation, DeeplabV3+, U-net, and the proposed model all accurately recognize droplets in the "unsplit" and "split" states (Fig. 4b1-d1 and b3-d3). However, when droplets assume an hourglass shape, DeeplabV3+ fails to segment the connected regions within the "splitting" state (Fig. 4b2), whereas both U-net and the proposed model segment the "splitting" droplet effectively (Fig. 4c2 and d2). Accurate recognition of the "splitting" state is crucial for the successful execution of the splitting operation. When two droplets merge, the proposed model accurately discriminates the blue "merging" state from the "split" state (Fig. 4d4), whereas DeeplabV3+ erroneously categorizes the droplets as "split" (Fig. 4b4) and U-net detects parts of them as "split" (Fig. 4c4). These results indicate that the proposed model outperforms the other two models in most scenarios.
The mIoU of the proposed model is in the range of 85–91% (Fig. 4j), while the mIoU values of DeeplabV3+ and U-net are in the ranges of 80–89% and 83–89%, respectively (Fig. 4e and g). The mIoU of the proposed model is thus 1.1–6% higher than that of DeeplabV3+ and 1.1–2% higher than that of U-net. A higher mIoU indicates better segmentation performance, with the predicted regions overlapping more closely with the actual regions. Moreover, the loss of the proposed model stabilizes at 0.08–0.15 (Fig. 4k), approximately 50–60% lower than the DeeplabV3+ loss of 0.2–0.3 (Fig. 4f) and 20–37% lower than the U-net loss of 0.11–0.18 after the 75th epoch (Fig. 4h). A lower loss indicates that the model's predictions are closer to the true droplet states. Furthermore, both the loss and the mIoU of the proposed model stabilize after convergence, indicating that it has learned an effective semantic segmentation capability.
The pixel-level evaluation metrics on a test set of 438 images were also computed, including the mean precision (mPrecision), mean pixel accuracy (mPA), and mean recall (mRecall). These three metrics evaluate the model's semantic segmentation results in terms of category accuracy, pixel-level classification accuracy, and positive-sample detection (†ESI S2 Eqs. 1–5). For the proposed method, the mPrecision is 95.72%, the mPA is 93.30%, and the mRecall is 93.30% (†Fig. 2), exceeding the corresponding values of 91.76%, 89.80%, and 88.76% for DeeplabV3+, and of 92.96%, 90.97%, and 90.37% for U-net. Higher values indicate that the proposed model accurately recognizes objects in the image and generates segmentation results that closely match the actual labelled objects.
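A sketch of how these pixel-level metrics can be derived from a per-class confusion matrix, under the common convention in which per-class pixel accuracy coincides with per-class recall (consistent with the identical mPA and mRecall values above; the paper's exact definitions are given in †ESI S2 Eqs. 1–5):

```python
import numpy as np

def confusion_matrix(pred: np.ndarray, target: np.ndarray, num_classes: int) -> np.ndarray:
    """Pixel-count confusion matrix: rows = ground truth, columns = prediction."""
    idx = target.astype(int) * num_classes + pred.astype(int)
    return np.bincount(idx.ravel(), minlength=num_classes**2).reshape(num_classes, num_classes)

def mean_metrics(cm: np.ndarray):
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)  # correct pixels / pixels predicted as class
    recall = tp / np.maximum(cm.sum(axis=1), 1)     # correct pixels / ground-truth pixels of class
    # In this convention, per-class pixel accuracy equals per-class recall,
    # hence mPA == mRecall as in the values reported above.
    return precision.mean(), recall.mean(), recall.mean()  # mPrecision, mPA, mRecall
```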
4.2 Influence of luminous environment
To investigate the impact of variations in the luminous environment across different DMF systems, OpenCV was used to adjust the brightness levels of real-time videos (Fig. 5a), and the accuracy of state recognition, position recognition, and overall recognition under different lighting conditions was analyzed across eight videos (†ESI S2 Eqs. 6–8).
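One possible way to realize such brightness adjustment with OpenCV (a sketch; the paper does not specify its exact implementation, and the file name below is illustrative):

```python
import cv2
import numpy as np

def adjust_brightness(frame: np.ndarray, change: float) -> np.ndarray:
    """Scale frame brightness by `change` (e.g. +0.5 for a 50% increase,
    -0.8 for an 80% decrease); results are clipped to the valid 0-255 range."""
    return cv2.convertScaleAbs(frame, alpha=1.0 + change, beta=0)

cap = cv2.VideoCapture("droplet_video.avi")  # illustrative file name
ok, frame = cap.read()
if ok:
    darker = adjust_brightness(frame, -0.5)   # 50% decrease
    brighter = adjust_brightness(frame, 0.5)  # 50% increase
```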
With brightness changed by 0% and by 50% (increase or decrease), the overall accuracy is 100% and 96.7%, respectively (Fig. 5b), and the mean accuracy for recognizing both states and positions remains between 97% and 100% (Fig. 5c and d). Within this range, the lighting merely shifts the droplet color from blue to cyan or dark blue, and the droplets on the DMF chip remain clearly visible. This range of lighting intensity covers the lighting conditions of the vast majority of DMF systems, and within it the proposed system maintains a recognition accuracy above 96.7%. With a 60% and 80% increase or decrease in brightness, the overall mean accuracy decreases to 91.7% and 67.5% (Fig. 5b). Likewise, the mean accuracy of state recognition decreases to 95.6% and 73.8% (Fig. 5c), and that of position recognition to 95.7% and 76.3% (Fig. 5d). When the brightness variation approaches 80%, overexposure or underexposure occurs: the droplet color changes and its edges are no longer distinct, leading to a noticeable drop in recognition accuracy. Furthermore, as the brightness varies from 0–50%, the standard deviation (SD) of the overall accuracy remains within 0–0.3%, which is 0.5–2% lower than the SD of 0.5–2.3% observed when the brightness varies by 60–80%. This indicates that the proposed system is accurate and robust, especially within the 0–50% brightness-variation range.
4.3 Influence of droplet color and shape
Considering the differences in droplet color and shape arising from differences in composition and handling methods in DMF systems, experiments on recognizing droplets of different colors and shapes were established. Droplets of different colors are recognized as "unsplit" regardless of the shapes they assume during movement, such as L-shaped, rectangular, circular, and triangular (Fig. 6a1-b1 and †Fig. 3a). In real-time operation, the state transitions of differently colored droplets during splitting, from "unsplit" to "splitting" to "split", as well as the "merging" of two droplets, are also recognized (Fig. 6a2-5, Fig. 6b2-5, †Fig. 3b-e, and †video 1). In conclusion, the proposed system not only accurately recognizes the various shapes and states of blue droplets but also extends its recognition to red, yellow, black, and even transparent droplets (†ESI S4). This demonstrates that the system generalizes well to droplets of different colors and shapes whose operating states change instantaneously.
Considering that droplets containing different substances are merged during DMF experiments to observe reaction phenomena, an experiment on merging droplets of different colors was devised. When red and blue "unsplit" droplets merge (Fig. 6b1), they enter a "merging" state (Fig. 6b2) and return to the "unsplit" state after merging (Fig. 6b3). Even as they mix further, they remain "unsplit" and trackable (Fig. 6b4-b5). The successful recognition of merging droplets of different colors indicates that the proposed DMF system can merge, subsequently split, and move droplets containing various reagents or materials, extending beyond the constraints of single-substance droplets.
The error rate is the fraction of frames in which a droplet is recognized as being in another state while it actually remains in the same continuous state. The average error rates for the four states of differently colored droplets are maintained at around 0.63%, 0.57%, 0.43%, and 0.44%, respectively, meaning that on average fewer than one frame is incorrectly recognized within a continuous same-state segment (Fig. 6c). This demonstrates the system's capability to recognize multiple states of differently colored droplets in real time with a low error rate. When applied to real-time recognition during DMF experimental operations, it ensures the automation of droplet handling and prevents operational failures due to misrecognized states. For example, if a droplet still in the "splitting" state were incorrectly identified as "split", the control system would wrongly conclude that the droplet had completed the splitting process.
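A minimal sketch of this frame-wise error rate (the segment length and state labels below are illustrative):

```python
def state_error_rate(predicted_states, true_state):
    """Fraction of frames misrecognized within a continuous same-state segment."""
    wrong = sum(1 for s in predicted_states if s != true_state)
    return wrong / len(predicted_states)

# e.g. one misread frame in a 160-frame "splitting" segment gives 1/160 = 0.625%,
# on the order of the per-state error rates reported above (illustrative numbers)
rate = state_error_rate(["splitting"] * 159 + ["split"], "splitting")
```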
4.4 AI-assisted multistate droplet control
To validate the applicability of the AI-assisted multistate feedback control system named µDropAI, droplet manipulations including moving, splitting, and dispensing were performed on the DMF platform.
Droplet moving and splitting on digital microfluidic chips are common biological and chemical experimental operations. A flow chart of the automated feedback control for droplet moving and splitting is shown (†ESI S5 and †Fig. 4). In the moving manipulation, µDropAI recognizes the droplets and determines their locations on the platform; the activated electrodes then drive the droplets toward their destination. When a droplet reaches the target electrode, µDropAI confirms that it has moved successfully (Fig. 7a4-a5, Fig. 7b, and †video 2). When a movement fails, the electrode is reactivated to drive the droplet. Compared with existing image-based methods,28 our system not only recognizes droplet contours but also distinguishes droplet states and positions and adapts to a variety of environments. In the splitting manipulation, the droplet undergoes three state transitions, from the "unsplit" state of a large droplet, through the hourglass-shaped "splitting" state, to the "split" state of two small droplets (Fig. 7a1-a3, Fig. 7b, and †video 2). When two electrodes are energized sequentially, the droplet may be pulled entirely to one side by the first energized electrode, causing the split to fail. In such cases, the control system reverts the droplet to its initial state and re-splits it (†video 3). Additionally, the system can automatically merge droplets and recognize the merged droplet. The proposed control system can thus accurately monitor the droplet state and guide droplet operations according to that state, extending beyond mere location-based operation determination.
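The recognize-actuate-verify loop for moving could be sketched as follows; `Droplet`, `recognize`, and `activate_electrode` are hypothetical placeholders for the segmentation model and the electrode driver, not the paper's actual API:

```python
from dataclasses import dataclass

@dataclass
class Droplet:
    state: str      # "unsplit", "splitting", "split", or "merging"
    position: int   # index of the electrode the droplet currently sits on

# Hypothetical stubs standing in for the recognition model and electrode driver.
def recognize() -> Droplet: ...
def activate_electrode(electrode: int) -> None: ...

def move_droplet(target: int, max_retries: int = 5) -> bool:
    """Closed-loop move: recognize -> actuate -> verify, retrying on failure."""
    for _ in range(max_retries):
        droplet = recognize()
        if droplet.position == target:
            return True                      # move confirmed by recognition
        # step one electrode toward the target and energize it
        step = 1 if droplet.position < target else -1
        activate_electrode(droplet.position + step)
    return False                             # reactivation budget exhausted
```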
Droplet dispensing is the operation of generating droplets from a reservoir. A flow chart of the automated feedback control for dispensing is shown (†ESI S5 and †Fig. 5). The initial position and state of the droplet in the reservoir are recognized (Fig. 7d1), and µDropAI automatically activates the electrodes to split the droplet from the reservoir (Fig. 7d2-d3). During this process, an hourglass-shaped structure forms and is recognized as the "splitting" state until the droplet is completely split from the reservoir (Fig. 7d4, Fig. 7c, and †video 4). To create a droplet of one unit size, only one electrode on one side of the reservoir needs to be activated. The AI-assisted method can adjust the droplet size on demand and can be used to enhance the precision of droplet generation for high-throughput experiments.
The volume precision of droplets is crucial in applications such as drug discovery and quantitative analysis. Hence, a closed-loop feedback control method based on pixel values is proposed to enable precise droplet splitting by monitoring droplet volumes and controlling the electrodes (†ESI S5 and †Fig. 6). During splitting, the proposed semantic segmentation model is also used to calculate the pixel counts of the split droplets and thereby determine their volumes. If the volume error of the split droplets exceeds 3%, the droplet is moved to another electrode and re-split. The volume of a droplet covering two electrodes is approximately 4 µL, corresponding to a pixel value of 8430 (the pixel value of a stationary droplet, averaged over 70 experiments). The coefficients of variation (CV) of the split droplets were calculated for the proposed closed-loop method and for open-loop control over 70 experiments; a larger CV indicates a greater volume error in the split droplets. The proposed method yields more evenly split droplets, with a CV below 2.8%, compared with 5.62% for open-loop control (the latter being essentially consistent with the CV of traditional droplet dispensing49) (Fig. 7e and †Fig. 7).
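A sketch of the pixel-based volume check, using the values stated above (8430 pixels ≈ 4 µL over two electrodes, 3% error threshold); the function and mask interface are assumptions, not the paper's implementation:

```python
import numpy as np

PIXELS_TWO_ELECTRODES = 8430        # ~4 uL droplet covering two electrodes (from the text)
TARGET = PIXELS_TWO_ELECTRODES / 2  # expected pixel count of each half after an even split
MAX_ERROR = 0.03                    # 3% volume-error threshold (from the text)

def split_is_even(mask_a: np.ndarray, mask_b: np.ndarray) -> bool:
    """Check both split halves against the target pixel count; the boolean
    masks are assumed to come from the semantic segmentation model."""
    for pixels in (mask_a.sum(), mask_b.sum()):
        if abs(pixels - TARGET) / TARGET > MAX_ERROR:
            return False  # error > 3%: move the droplet to another electrode and re-split
    return True
```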
The fundamental droplet operations shown above demonstrate the unprecedented advantages of the AI-assisted feedback control system for DMF. It can autonomously conduct experimental operations with various forms of reagents, such as protein-rich droplets, on DMF chips without human intervention. Moreover, it can be combined with reinforcement learning to enhance the precision and automation of droplet manipulation through droplet recognition, tracking, state discrimination, and automated control, thereby expanding the capabilities and applications of DMF systems.