Journal of Machine and Computing


nCD and Clipped RBM Based Multimode DBN for Optimal Classification of Heterogeneous Images in Big Data



Received On : 25 April 2024

Revised On : 18 September 2024

Accepted On : 20 January 2025

Published On : 05 April 2025

Volume 05, Issue 02

Pages : 682-693


Abstract


The scientific community has shown keen interest in applying big data analytics to the healthcare industry. Managing healthcare records is extremely challenging, not only because of their sheer volume but also because of the multifaceted nature and high dimensionality of the data. Deep learning models have recently been shown to be powerful generative models that can progressively separate features and deliver strong predictive performance. In medical image processing, traditional algorithms are typically developed for a particular modality and a specific condition, and because each neural network has high memory and processing requirements, it is difficult to build a system that combines many neural networks with a wide range of specialised image-processing algorithms. This study proposes a multimode DBN method based on clipped RBMs (C-RBMs) and neutral Contrastive Divergence (nCD). The method comprises a two-stage learning technique: in the first stage, unimodal C-RBM pathways are trained with nCD, and in the second stage, a multimode DBN architecture is built from the shared representations of the two pathways. The resulting multimode technique provides a computerised method for classifying breast and brain cancer images. In terms of accuracy, the proposed configurations outperform the state-of-the-art alternatives considered.
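
As a reading aid, the following is a minimal Python sketch of the two-stage idea described in the abstract; it is not the authors' implementation. It assumes that the "clipped" RBM bounds its parameter updates to a fixed interval and it uses a plain CD-1 update as a stand-in for the paper's nCD rule; the feature dimensions, clip threshold, learning rate, and toy data are all illustrative assumptions.

    # Minimal sketch (assumptions noted in comments), not the authors' code.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class ClippedRBM:
        def __init__(self, n_visible, n_hidden, clip=1.0, lr=0.05):
            self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
            self.b_v = np.zeros(n_visible)   # visible bias
            self.b_h = np.zeros(n_hidden)    # hidden bias
            self.clip = clip                 # assumed: updates bounded to [-clip, clip]
            self.lr = lr

        def hidden_probs(self, v):
            return sigmoid(v @ self.W + self.b_h)

        def visible_probs(self, h):
            return sigmoid(h @ self.W.T + self.b_v)

        def cd_step(self, v0):
            # One CD-1-style update, used here as a stand-in for the paper's nCD rule.
            h0 = self.hidden_probs(v0)
            h_sample = (rng.random(h0.shape) < h0).astype(float)
            v1 = self.visible_probs(h_sample)        # reconstruction
            h1 = self.hidden_probs(v1)
            dW = (v0.T @ h0 - v1.T @ h1) / len(v0)
            # "Clipped" RBM: bound each update so no single step explodes (assumption).
            self.W   += self.lr * np.clip(dW, -self.clip, self.clip)
            self.b_v += self.lr * np.clip((v0 - v1).mean(axis=0), -self.clip, self.clip)
            self.b_h += self.lr * np.clip((h0 - h1).mean(axis=0), -self.clip, self.clip)

    def train_rbm(rbm, data, epochs=5, batch=32):
        for _ in range(epochs):
            for i in range(0, len(data), batch):
                rbm.cd_step(data[i:i + batch])
        return rbm

    # Stage 1: train unimodal pathways (toy stand-ins for mammogram and brain-MRI features).
    mammo = rng.random((256, 64))
    mri   = rng.random((256, 100))
    path_a = train_rbm(ClippedRBM(64, 32), mammo)
    path_b = train_rbm(ClippedRBM(100, 32), mri)

    # Stage 2: concatenate the shared (hidden) representations of the two pathways
    # and train a joint top layer, giving a multimode DBN.
    shared = np.hstack([path_a.hidden_probs(mammo), path_b.hidden_probs(mri)])
    joint = train_rbm(ClippedRBM(shared.shape[1], 16), shared)
    print("joint representation shape:", joint.hidden_probs(shared).shape)

In the actual method, the two pathways would be fed real mammogram and brain-MRI features rather than random toy matrices, and the joint layer's representation would feed a classifier for the breast- and brain-cancer classes.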


Keywords


Restricted Boltzmann Machine (RBM), Deep Learning, Big Data, Deep Belief Network (DBN), Computed Tomography (CT), Transfer Learning.



CRediT Author Statement


The authors confirm their contributions to the paper as follows:

Conceptualization: Neha Ahlawat and Franklin Vinod D; Methodology: Franklin Vinod D; Software: Neha Ahlawat and Franklin Vinod D; Data Curation: Neha Ahlawat and Franklin Vinod D; Writing – Original Draft Preparation: Franklin Vinod D; Visualization: Neha Ahlawat and Franklin Vinod D; Investigation: Franklin Vinod D; Supervision: Neha Ahlawat and Franklin Vinod D; Validation: Franklin Vinod D; Writing – Reviewing and Editing: Neha Ahlawat and Franklin Vinod D. All authors reviewed the results and approved the final version of the manuscript.


Acknowledgements


The authors thank Dr. Franklin Vinod D for his support in completing this research.


Funding


No funding was received to assist with the preparation of this manuscript.


Ethics declarations


Conflict of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.


Availability of data and materials


Data sharing is not applicable to this article as no new data were created or analysed in this study.


Author information


Contributions

All authors contributed equally to the paper, and all authors have read and agreed to the published version of the manuscript.


Corresponding author


Rights and permissions


Open Access: This article is licensed under a Creative Commons Attribution-NoDerivs license, a more restrictive license that allows the material to be redistributed, commercially or non-commercially, provided no changes whatsoever are made to the original work, i.e. no derivatives of the original are created. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-nd/4.0/


Cite this article


Neha Ahlawat and Franklin Vinod D, “nCD and Clipped RBM Based Multimode DBN for Optimal Classification of Heterogeneous Images in Big Data”, Journal of Machine and Computing, vol. 5, no. 2, pp. 682–693, April 2025, doi: 10.53759/7669/jmc202505054.


Copyright


© 2025 Neha Ahlawat and Franklin Vinod D. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.