Journal of Machine and Computing


Deep Learning for Facial Forgery Detection: Performance Evaluation of DenseNet201, InceptionV3, and ConvNeXt




Received On: 23 June 2025

Revised On: 30 August 2025

Accepted On: 02 October 2025

Published On: 14 October 2025

Volume 06, Issue 01

Pages: 048-057


Abstract


The recent spread of AI-generated face forgery is one of the greatest threats to the credibility of visual media. The proposed work compares three deep transfer learning models, DenseNet201, InceptionV3, and ConvNeXt, for detecting manipulated facial images. A dataset of 8,000 real and fake facial images was used to train and benchmark the models under consistent experimental conditions. ConvNeXt achieved the best classification accuracy of 91.25%, substantially higher than DenseNet201 (75.12%) and InceptionV3 (68.38%). In addition, ConvNeXt achieved a better trade-off between true positive and false positive rates, indicating stronger generalization and greater resistance to overfitting. These findings demonstrate the applicability of ConvNeXt to robust facial forgery detection and highlight its potential for practical facial authenticity verification. Future research will investigate ensemble methods and real-time inference.
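The abstract reports accuracy together with the trade-off between true positive and false positive rates. As a minimal sketch of how such metrics are computed for a binary real-vs-fake classifier, the helper below derives accuracy, TPR, and FPR from a confusion count; the label convention (1 = fake, 0 = real) and the toy prediction lists are illustrative assumptions, not the paper's data.

```python
def binary_metrics(y_true, y_pred):
    """Return (accuracy, tpr, fpr) for binary labels in {0, 1}.

    Convention assumed here: 1 = fake (positive class), 0 = real.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0  # sensitivity / recall
    fpr = fp / (fp + tn) if (fp + tn) else 0.0  # fall-out
    return accuracy, tpr, fpr

# Toy example: 8 images, 4 fake and 4 real; one fake missed, one real flagged.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
acc, tpr, fpr = binary_metrics(y_true, y_pred)
print(acc, tpr, fpr)  # 0.75 0.75 0.25
```

A model with a high TPR but also a high FPR flags forgeries aggressively at the cost of mislabeling genuine images, which is why the abstract evaluates the two rates jointly rather than accuracy alone.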


Keywords


Facial Fake Images, Transfer Learning (TL), DenseNet201, InceptionV3, Facial Forgery Detection, ConvNeXt.



CRediT Author Statement


The authors confirm their contributions to the paper as follows:

Conceptualization: Akshatha G, Kempanna M, Ashoka S B and Job Prasanth Kumar Chinta Kunta; Methodology: Akshatha G and Kempanna M; Software: Ashoka S B and Job Prasanth Kumar Chinta Kunta; Data Curation: Akshatha G and Kempanna M; Writing – Original Draft Preparation: Akshatha G, Kempanna M, Ashoka S B and Job Prasanth Kumar Chinta Kunta; Visualization: Akshatha G and Kempanna M; Investigation: Ashoka S B and Job Prasanth Kumar Chinta Kunta; Supervision: Akshatha G and Kempanna M; Validation: Ashoka S B and Job Prasanth Kumar Chinta Kunta; Writing – Reviewing and Editing: Akshatha G, Kempanna M, Ashoka S B and Job Prasanth Kumar Chinta Kunta; All authors reviewed the results and approved the final version of the manuscript.


Acknowledgements


We would like to thank the reviewers for the time and effort they devoted to reviewing the manuscript. We sincerely appreciate their valuable comments and suggestions, which helped us improve the quality of the manuscript.


Funding


No funding was received to assist with the preparation of this manuscript.


Ethics declarations


Conflict of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.


Availability of data and materials


Data sharing is not applicable to this article as no new data were created or analysed in this study.


Author information


Contributions

All authors contributed equally to the paper, and all authors have read and agreed to the published version of the manuscript.


Corresponding author


Rights and permissions


Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0), which permits redistribution of the material in any medium or format for non-commercial purposes, provided that no changes whatsoever are made to the original work and the original author and source are credited. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-nd/4.0/


Cite this article


Akshatha G, Kempanna M, Ashoka S B and Job Prasanth Kumar Chinta Kunta, “Deep Learning for Facial Forgery Detection: Performance Evaluation of DenseNet201, InceptionV3, and ConvNeXt”, Journal of Machine and Computing, vol. 6, no. 1, pp. 048-057, 2026, doi: 10.53759/7669/jmc202606005.


Copyright


© 2026 Akshatha G, Kempanna M, Ashoka S B and Job Prasanth Kumar Chinta Kunta. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.