Journal of Biomedical and Sustainable Healthcare Applications


A Survey of the Interpretability Aspect of Deep Learning Models




Received On : 28 August 2021

Revised On : 31 March 2022

Accepted On : 18 May 2022

Published On : 05 January 2023

Volume 03, Issue 01

Pages : 056-065


Abstract


Deep neural networks have attained near-human accuracy on image, text, audio, and video classification and prediction tasks. These networks, however, are still typically regarded as black-box probabilistic models that map an input to an output label. Integrating such systems into mission-critical activities like clinical diagnosis, scheduling, and management is the next stage in this human-machine evolution, and it requires a degree of confidence in the technology's output. Statistical measures are often employed to estimate an output's uncertainty. The notion of trust, however, depends on a human's insight into a machine's inner workings. In other words, a neural network must justify its outputs in a way that is intelligible to humans, yielding new insights into its internal operation. We call such networks "interpretable deep networks." The concept of interpretability is not one-dimensional. Indeed, interpretations vary with differing degrees of human comprehension, which gives rise to a plethora of characteristics that together define interpretability. Furthermore, a model's interpretations may be expressed in terms of low-level network variables or of input properties. In this study, we describe several of the dimensions that bear on model interpretability, along with previous work on those dimensions. As part of the procedure, we perform a gap analysis to determine what remains to be done to improve model interpretability.
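The abstract's two threads, statistical uncertainty estimates versus human-facing explanations grounded in input properties, can be made concrete with a short sketch. The example below is illustrative only and not drawn from the paper; it assumes PyTorch and torchvision, uses a ResNet-18 as a stand-in classifier, and substitutes a random tensor for a real image. It computes predictive entropy as a simple uncertainty measure and a gradient saliency map as a simple input-level interpretation.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Stand-in classifier; a real study would load pretrained weights
# (e.g., weights=models.ResNet18_Weights.DEFAULT in torchvision >= 0.13).
model = models.resnet18(weights=None)
model.eval()

# Random tensor in place of a real, normalized 224x224 RGB image.
x = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(x)
probs = F.softmax(logits, dim=1)

# (1) A statistical measure of output uncertainty: predictive entropy.
entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

# (2) An input-level interpretation: gradient saliency for the top class.
top_class = probs.argmax(dim=1).item()
logits[0, top_class].backward()
saliency = x.grad.abs().max(dim=1).values  # one attribution value per pixel

print(f"predicted class {top_class}, predictive entropy {entropy.item():.3f}")
print(f"saliency map shape: {tuple(saliency.shape)}")
```

Note the division of labor this illustrates: the entropy quantifies how volatile the prediction is, but only the saliency map offers the kind of human-inspectable justification that the abstract argues is needed for trust.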


Keywords


Deep Learning, Deep Learning Models, Machine Learning, Interpretability, Convolutional Neural Network (CNN).



Acknowledgements


The authors thank the reviewers for taking the time and effort necessary to review the manuscript.


Funding


No funding was received to assist with the preparation of this manuscript.


Ethics declarations


Conflict of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.


Availability of data and materials


No data are available for the above study.


Author information


Contributions

All authors contributed equally to the paper, and all have read and agreed to the published version of the manuscript.


Corresponding author


Rights and permissions


Open Access: This article is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits redistribution of the material for non-commercial purposes, provided the original work is credited and no changes whatsoever are made to it, i.e., no derivatives of the original work. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-nd/4.0/


Cite this article


Eliot Spitzer and Rona Miles, “A Survey of the Interpretability Aspect of Deep Learning Models”, Journal of Biomedical and Sustainable Healthcare Applications, vol. 3, no. 1, pp. 056–065, January 2023. doi: 10.53759/0088/JBSHA202303006.


Copyright


© 2023 Eliot Spitzer and Rona Miles. This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits non-commercial use, distribution, and reproduction in any medium without modification, provided the original author and source are credited.