In an era characterized by the prolific generation of digital imagery through advanced artificial intelligence, the need for reliable methods to distinguish authentic photographs from AI-generated ones has become paramount. The increasing ubiquity of AI-generated imagery, which blends seamlessly with authentic photographs, raises concerns about misinformation and trustworthiness. Authenticating these images has taken on critical significance in various domains, including journalism, forensic science, and social media. Traditional image authentication methods often struggle to keep pace with the increasingly sophisticated nature of AI-generated content. In this context, frequency domain analysis emerges as a promising avenue because it can uncover subtle discrepancies and patterns that are less apparent in the spatial domain. Addressing this imperative task, this paper introduces GANIA, a Generative Adversarial Network (GAN)-based AI-generated Imagery Authentication method that uses frequency domain analysis. By exploiting inherent differences in frequency spectra, GANIA uncovers unique signatures that are difficult to replicate, helping ensure the integrity and authenticity of visual content. By training GANs on large datasets of real images, we create AI-generated counterparts that closely mimic the characteristics of authentic photographs. This approach enables us to construct a challenging and realistic dataset that is ideal for evaluating the efficacy of frequency domain analysis techniques in image authentication. Our work not only highlights the potential of frequency domain analysis for image authentication but also underscores the importance of generative AI approaches in studying this critical topic. Through this fusion of AI and frequency domain analysis, we contribute to advancing image forensics and preserving trust in visual information in an AI-driven world.
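The frequency-domain signatures described above are typically derived from an image's 2-D Fourier spectrum. The following is a minimal illustrative sketch using NumPy, not the paper's actual pipeline; the synthetic periodic image stands in for real photograph data:

```python
import numpy as np

def log_magnitude_spectrum(image):
    """Return the centered log-magnitude Fourier spectrum of a 2-D image."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    return np.log1p(np.abs(spectrum))

# Synthetic 64x64 "image" with a periodic pattern; periodicity produces
# distinct peaks in the frequency domain, the kind of structural cue
# frequency-domain forensics looks for.
x = np.linspace(0, 8 * np.pi, 64)
image = np.outer(np.sin(x), np.cos(x))
spec = log_magnitude_spectrum(image)
print(spec.shape)  # (64, 64)
```

In a forensic setting, such spectra (or statistics computed from them) would be compared between real and GAN-generated images to expose spectral artifacts left by the generator's upsampling layers.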
Ever since human society entered the age of social media, every user has had a considerable amount of visual content stored online and shared across various virtual communities. While images are an efficient means of circulating information, disastrous consequences are possible if their contents are tampered with by malicious actors. Specifically, we are witnessing the rapid development of machine learning (ML)-based tools such as DeepFake apps, which can exploit images on social media platforms to mimic a potential victim without their knowledge or consent. These content manipulation attacks can lead to the rapid spread of misinformation that may not only mislead friends or family members but also cause chaos in public domains. Therefore, robust image authentication is critical to detect and filter out manipulated images. In this paper, we introduce a system that accurately AUthenticates SOcial MEdia images (AUSOME) uploaded to online platforms by leveraging spectral analysis and ML. Images from DALL-E 2 are compared with genuine images from the Stanford image dataset. The Discrete Fourier Transform (DFT) and Discrete Cosine Transform (DCT) are used to perform a spectral comparison. Additionally, based on the differences in their frequency responses, an ML model is proposed to classify social media images as genuine or AI-generated. The AUSOME system is evaluated for detection accuracy using real-world scenarios. The experimental results are encouraging and verify the potential of the AUSOME scheme for social media image authentication.
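A DCT-based spectral comparison of the kind described can be sketched as follows. This is an assumed feature design (banded DCT log-energies), not the AUSOME system's actual features, and the two synthetic inputs merely stand in for genuine versus generated images:

```python
import numpy as np
from scipy.fft import dctn

def dct_band_energies(image, n_bands=4):
    """Split the 2-D DCT spectrum into diagonal frequency bands and
    return the mean log-energy per band, ordered low to high frequency."""
    coeffs = np.abs(dctn(image, norm='ortho'))
    h, w = coeffs.shape
    # Normalized "distance" from the DC coefficient in [0, 1).
    yy, xx = np.mgrid[0:h, 0:w]
    radius = (yy / h + xx / w) / 2
    bands = np.minimum((radius * n_bands).astype(int), n_bands - 1)
    return np.array([np.log1p(coeffs[bands == b]).mean() for b in range(n_bands)])

rng = np.random.default_rng(0)
# Smooth field: energy concentrated in low frequencies.
smooth = rng.normal(size=(32, 32)).cumsum(axis=0).cumsum(axis=1)
# White noise: roughly flat spectrum.
noisy = rng.normal(size=(32, 32))
f_smooth = dct_band_energies(smooth)
f_noisy = dct_band_energies(noisy)
print(f_smooth)
print(f_noisy)
```

Band-energy vectors like these could then be fed to any standard ML classifier (e.g., logistic regression) to separate genuine from AI-generated images, since the two classes tend to differ in how energy is distributed across frequency bands.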
Deep neural networks (DNNs) have been studied intensively in recent years, leading to many practical applications. However, there are also concerns about the security problems and vulnerabilities of DNNs. Studies on adversarial attack development have shown that relatively minor perturbations can degrade DNN performance and manipulate its outcome. The impact of adversarial perturbations has led to advanced techniques for generating image-level perturbations. Once embedded in a clean image, these perturbations are imperceptible to human eyes yet can fool a well-trained deep learning (DL) convolutional neural network (CNN) classifier. This work introduces a new Critical-Pixel Iterative (CriPI) algorithm based on a thorough study of critical pixels' characteristics. The proposed CriPI algorithm identifies critical pixels and generates one-pixel attack perturbations with much higher efficiency. Compared to a benchmark one-pixel attack algorithm, CriPI significantly reduces the attack time from seven minutes to one minute with similar success rates.
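The abstract does not specify CriPI's pixel-selection heuristic, but the underlying one-pixel attack can be framed as a search for the single pixel change that most degrades a classifier's score. The following toy sketch uses exhaustive greedy search and a stand-in score function (the image mean) in place of a real CNN's confidence:

```python
import numpy as np

def apply_one_pixel(image, x, y, value):
    """Return a copy of `image` with a single pixel set to `value`."""
    perturbed = image.copy()
    perturbed[y, x] = value
    return perturbed

def greedy_one_pixel_attack(image, score_fn, candidates=(0.0, 1.0)):
    """Toy greedy search: try each (pixel, value) pair and keep the one
    that most reduces `score_fn` (e.g., the classifier's confidence in
    the true class). This stands in for a critical-pixel search; the
    actual CriPI selection heuristic is not reproduced here."""
    best = (score_fn(image), image)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            for v in candidates:
                cand = apply_one_pixel(image, x, y, v)
                s = score_fn(cand)
                if s < best[0]:
                    best = (s, cand)
    return best

# "Classifier confidence" here is just the image mean; the attack picks
# the single pixel change that lowers it the most.
img = np.full((4, 4), 0.5)
score, adv = greedy_one_pixel_attack(img, lambda im: im.mean())
print(score)  # lower than the clean score of 0.5
```

Exhaustive search over every pixel is what makes naive one-pixel attacks slow on real images; CriPI's contribution, per the abstract, is identifying critical pixels first so far fewer candidates need to be evaluated.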