DETECTING AI GENERATED IMAGES USING CNN AND EXPLAINABLE AI
DOI: https://doi.org/10.64751/wf4mh384

Keywords: AI-Generated Image Detection, Deep Learning, Convolutional Neural Networks (CNN), Explainable Artificial Intelligence (XAI), Grad-CAM, Digital Forensics, Image Authentication, Misinformation Detection

Abstract
The rapid advancement of artificial intelligence, particularly in generative modeling, has resulted in the widespread creation and distribution of AI-generated images across social media, digital marketing, journalism, entertainment, and creative industries [1][2]. While these technologies enable innovation and automation, they simultaneously introduce serious challenges related to misinformation, digital forgery, identity manipulation, and the erosion of trust in visual media [3][4]. AI-generated images can now mimic real-world photographs with high fidelity, making it increasingly difficult for humans and traditional verification methods to distinguish real from synthetic content [5]. Conventional image authentication techniques, such as metadata analysis, watermarking, and manual visual inspection, have become ineffective due to the deliberate removal of metadata and the increasing realism of generative models [6]. To address these limitations, this paper proposes Fake Vision, an intelligent image verification system that combines Convolutional Neural Networks (CNNs) with Explainable Artificial Intelligence (XAI) techniques for accurate and transparent detection of AI-generated images [7][8]. The proposed system performs systematic image preprocessing, extracts discriminative visual features using deep CNN architectures, and classifies images as real or AI-generated [9]. To overcome the black-box nature of deep learning models, explainability techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM) are incorporated to visualize the image regions that influence classification decisions [10][11]. Experimental evaluation demonstrates that the system achieves high detection accuracy while offering interpretable visual explanations [12]. The Fake Vision framework is suitable for applications in digital forensics, media authentication, cybersecurity, misinformation control, and content moderation platforms [13].
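The abstract does not include the Fake Vision implementation, but the Grad-CAM step it references follows a standard, framework-agnostic computation: gradients of the target class score are global-average-pooled into per-channel weights, which then form a weighted, ReLU-rectified sum of the convolutional activation maps. The sketch below illustrates this with NumPy arrays standing in for a CNN layer's activations and gradients; the function name and input shapes are illustrative assumptions, not the authors' code.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Minimal Grad-CAM sketch (hypothetical helper, not Fake Vision's code).

    activations: conv-layer feature maps, shape (channels, H, W)
    gradients:   d(class score)/d(activations), same shape
    Returns a heatmap of shape (H, W) normalized to [0, 1].
    """
    # Per-channel importance weights: global average pool of the gradients
    weights = gradients.mean(axis=(1, 2))                      # (channels,)
    # Weighted combination of activation maps, then ReLU to keep
    # only features with a positive influence on the class score
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize for overlaying on the input image
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In practice the activations and gradients would be captured from the trained CNN (e.g. via framework hooks) rather than supplied directly; the resulting heatmap is what gets overlaid on the input image to explain a real-vs-generated decision.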
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.







