Deepfake Detection Using Anomaly Detection Techniques
Recent advances in generative models, such as generative adversarial networks (GANs) and diffusion models, have enabled the creation of highly realistic fake images and videos. In particular, generated fake human faces, so-called deepfakes, have proliferated, and such images and videos are frequently used for malicious purposes. Consequently, many deepfake detection techniques have been developed to counteract this threat. The most common approach trains a binary classifier on both real and fake images; such a classifier performs well on the deepfake generation methods seen during training but struggles against unseen manipulations. To address this limitation, we propose to formulate deepfake detection as a one-class anomaly detection problem. Specifically, we introduce a differential anomaly detection framework that uses only real images during training. Preliminary results show promising generalisation performance across different manipulation methods.
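To make the one-class formulation concrete, the following is a minimal sketch, not the paper's actual implementation: a differential detector is fitted on difference vectors between embeddings of trusted reference images and genuine probe images of the same subject, using a one-class SVM. The `extract_embedding` function is a hypothetical stand-in (here it simply flattens pixels so the sketch runs end to end); in practice one would use a pretrained deep face-recognition network.

```python
# Sketch of differential one-class anomaly detection for deepfake detection.
# Assumption: `extract_embedding` is a placeholder for a deep face embedding.
import numpy as np
from sklearn.svm import OneClassSVM


def extract_embedding(image: np.ndarray) -> np.ndarray:
    # Stand-in embedding: flatten pixel values. In practice, replace this
    # with features from a pretrained face-recognition network.
    return image.astype(np.float64).ravel()


def difference_vector(reference: np.ndarray, probe: np.ndarray) -> np.ndarray:
    # Differential feature: element-wise difference between the embeddings
    # of a trusted reference image and a probe image of the same subject.
    return extract_embedding(reference) - extract_embedding(probe)


def train_detector(real_pairs):
    # Training uses ONLY genuine (real) reference/probe pairs;
    # no fake images are seen, consistent with the one-class setting.
    X = np.stack([difference_vector(ref, probe) for ref, probe in real_pairs])
    detector = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale")
    detector.fit(X)
    return detector


def score_pair(detector, reference, probe) -> float:
    # Negative scores indicate anomalies: manipulated probes should fall
    # outside the region of genuine embedding differences learned above.
    x = difference_vector(reference, probe).reshape(1, -1)
    return float(detector.decision_function(x)[0])
```

Because the decision boundary is learned only from genuine pairs, no assumptions are made about any particular manipulation method, which is what motivates the hoped-for generalisation to unseen deepfake generators.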