Journal of Robotics Spectrum

Analysis of Conventional Feature Learning Algorithms and Advanced Deep Learning Models

Received On : 02 November 2022

Revised On : 10 December 2022

Accepted On : 26 December 2022

Published On : 05 January 2023

Volume 01, 2023

Pages : 001-012


Abstract

Representation learning, or feature learning, refers to a collection of machine-learning methods that allow systems to automatically discover, from raw data, the representations needed for classification or feature detection. Representation learning algorithms are designed to learn abstract features that characterize data. State representation learning is a subfield concerned with a particular kind of representation learning: low-dimensional learned features that evolve over time and are influenced by an agent's actions. Over the past few years, deep architectures have been widely employed for representation learning and have demonstrated exceptional performance in tasks including object detection, speech recognition, and image classification. This article provides a comprehensive overview of the evolution of data representation learning techniques. Our research examines both conventional feature learning algorithms and advanced deep learning models. The paper presents an introduction to the history of data representation learning, together with a list of available resources such as online courses, tutorials, and books, and provides various toolboxes for further exploration in this field. The article concludes with remarks and future prospects for data representation learning.
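To make the distinction concrete, the snippet below is a minimal sketch of one conventional feature-learning method discussed in this line of work, principal component analysis (PCA): projecting raw data onto its directions of greatest variance to obtain a low-dimensional learned representation. The data, dimensions, and variable names are illustrative, not taken from the article.

```python
import numpy as np

# Synthetic "raw" data: 200 samples with 10 unprocessed features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))

# Center the data so the principal axes pass through the origin.
X_centered = X - X.mean(axis=0)

# The singular value decomposition yields the principal axes (rows of Vt),
# ordered by how much variance each direction explains.
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

# Project onto the top-k axes: a low-dimensional learned representation.
k = 3
Z = X_centered @ Vt[:k].T

print(Z.shape)  # (200, 3)
```

Deep architectures generalize this idea by learning nonlinear, hierarchical features rather than a single linear projection.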


Keywords

Feature Learning, Feature Detection, Representation Learning, Deep Learning Models, Data Architectures, Deep Learning.



Acknowledgements

The author thanks the Tokyo Institute of Technology for research lab and equipment support.


Funding

No funding was received to assist with the preparation of this manuscript.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Availability of data and materials

No data are available for the above study.

Author information


All authors contributed equally to the paper, and all authors have read and agreed to the published version of the manuscript.

Corresponding author

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NoDerivs license, a more restrictive license that allows you to redistribute the material, commercially or non-commercially, but does not permit any changes to the original, i.e., no derivative works. To view a copy of this license, visit

Cite this article

Toshihiro Endo, “Analysis of Conventional Feature Learning Algorithms and Advanced Deep Learning Models”, Journal of Robotics Spectrum, vol.1, pp. 001-012, 2023. doi: 10.53759/9852/JRS202301001.


© 2023 Toshihiro Endo. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.