Creating deepfake multimedia, and especially deepfake videos, has become much easier in recent years due to the availability of deepfake tools and the virtually unlimited number of face images found online. Research and industry communities have dedicated time and resources to developing detection methods that expose these fake videos. While detection methods have advanced over the past few years, synthesis methods have progressed as well, producing deepfake videos that are increasingly difficult to distinguish from real ones. This paper proposes an improved optical flow estimation-based method to detect and expose the discrepancies between video frames. Augmentation and modification techniques are applied to improve the system's overall accuracy. Furthermore, the system is trained on Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) to explore the effects and benefits of each type of hardware for deepfake detection. TPUs were found to yield shorter training times than GPUs. VGG-16 is the best-performing backbone for the system, achieving around 82.0% detection accuracy when trained on GPUs and 71.34% accuracy on TPUs.
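As a rough illustration of the optical-flow idea underlying such detectors, the sketch below estimates a simple gradient-based ("normal") flow field between two consecutive frames. This is a minimal stand-in, not the paper's actual estimator: the function name, the toy frames, and the single-step gradient method are all assumptions for illustration; a real system would use a dense estimator (e.g. Farneback or a learned flow network) and feed the resulting flow field to a CNN backbone such as VGG-16.

```python
import numpy as np

def normal_flow(prev, curr, eps=1e-6):
    """Single-step gradient-based optical flow estimate (illustrative only).

    Solves the brightness-constancy constraint Ix*u + Iy*v + It = 0
    along the image gradient direction, yielding the "normal flow".
    """
    Ix = np.gradient(curr, axis=1)   # horizontal spatial gradient
    Iy = np.gradient(curr, axis=0)   # vertical spatial gradient
    It = curr - prev                 # temporal gradient between frames
    mag2 = Ix**2 + Iy**2 + eps       # gradient magnitude (regularized)
    u = -Ix * It / mag2              # horizontal flow component
    v = -Iy * It / mag2              # vertical flow component
    return np.stack([u, v], axis=-1)

# Toy frames: a bright square shifted one pixel to the right.
prev = np.zeros((32, 32)); prev[12:20, 12:20] = 1.0
curr = np.zeros((32, 32)); curr[12:20, 13:21] = 1.0
flow = normal_flow(prev, curr)      # shape (32, 32, 2)
```

In a detection pipeline, the two flow components would typically be stacked as input channels for the classifier, so that frame-to-frame motion inconsistencies introduced by face synthesis become visible to the network.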