NoiseScope: Detecting Deepfake Images in a Blind Setting
Recent advances in Generative Adversarial Networks (GANs) have significantly improved the quality of synthetic images, or deepfakes. Photorealistic images generated by GANs have started to challenge the boundaries of human perception of reality, and bring new threats to many critical domains, e.g., journalism and online media. Detecting whether an image was generated by a GAN or captured by a real camera has become an important yet under-investigated problem. In this work, we propose a blind detection approach called NoiseScope for discovering GAN images among real images. A blind approach requires no a priori access to GAN images for training, and demonstrably generalizes better than supervised detection schemes. Our key insight is that, similar to images from cameras, GAN images also carry unique patterns in the noise space. We extract such patterns in an unsupervised manner to identify GAN images. We evaluate NoiseScope on 11 diverse datasets containing GAN images, and achieve up to a 99.68% F1 score in detecting GAN images. We test the limitations of NoiseScope against a variety of countermeasures, observing that NoiseScope remains robust or can be easily adapted.
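The noise-space insight above rests on extracting a high-frequency residual from each image. As a minimal illustrative sketch (not the paper's actual pipeline), the residual can be obtained by subtracting a denoised copy of the image from the original; here a Wiener filter stands in for the denoising step, and the function name `noise_residual` is our own:

```python
import numpy as np
from scipy.signal import wiener

def noise_residual(image: np.ndarray, window: int = 5) -> np.ndarray:
    """Estimate the high-frequency noise residual of a grayscale image.

    Illustrative sketch: denoise with a Wiener filter, then subtract
    the denoised copy from the original so only noise-like content
    remains. NoiseScope's actual filter and clustering steps may differ.
    """
    img = image.astype(np.float64)
    denoised = wiener(img, mysize=window)  # locally adaptive smoothing
    return img - denoised                  # residual = original - denoised

# Example usage on a synthetic grayscale image
rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(64, 64))
res = noise_residual(img)
```

In a blind setting, residuals like `res` would then be clustered; residuals sharing a common model-specific pattern suggest a shared (GAN) source.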
Jiameng Pu, Neal Mangaokar, Bolun Wang, Chandan K. Reddy, Bimal Viswanath: NoiseScope: Detecting Deepfake Images in a Blind Setting. ACSAC 2020: 913-927
- Date of publication: December 7, 2020
- Venue: Annual Computer Security Applications Conference (ACSAC)
- Page number(s): 913-927