Facial expression emotion recognition (FEER) determines a person's emotional state from their facial expression, one of the richest and most important channels of interpersonal communication. Although humans perform this skill naturally, finding computational methods that replicate it with similar or identical accuracy remains an open problem. To address this problem, this work proposes an Adaptive Firefly Optimization (AFO) and Ensemble Machine Learning (EML) algorithm for FEER. Datasets are first collected from the CK+ and KMU-FED databases. In the occlusion-generation step, occlusions around the mouth and eyes are replicated. Optical flow is then computed so as to preserve as much information as possible in the normalized inputs that deep networks require for recognition and reconstruction. Reconstruction is performed with Deep Q-Learning (DQL), which carries out occlusion-aware semantic segmentation (SS). For feature selection (FS), the AFO algorithm chooses the more relevant and less redundant features from the given database; it generates the best fitness values (FV) from an objective function (OF) to achieve higher recognition accuracy (ACC). FEER is then performed by EML algorithms comprising the K-Nearest Neighbour (KNN), Random Forest (RF), and Enhanced Artificial Neural Network (EANN) classifiers. The ensemble converges faster during the training and testing process and classifies the FEER results for the given database. The results show that the proposed AFO-EML method outperforms existing techniques in terms of ACC, precision (P), recall (R), and F-measure.
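The two core steps above — swarm-based binary feature selection followed by ensemble classification — can be sketched very loosely as follows. This is an illustrative toy, not the paper's method: the `RELEVANT` set, the proxy `fitness` function, and the label lists fed to the vote are all hypothetical stand-ins for the real AFO objective function and the KNN/RF/EANN predictions.

```python
import random

RELEVANT = set(range(5))   # pretend features 0-4 carry emotion cues (toy assumption)
N_FEATURES = 10

def fitness(mask):
    # Proxy objective function (OF): reward relevant selections,
    # penalize redundant ones; stands in for recognition accuracy.
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & RELEVANT) - 0.5 * len(chosen - RELEVANT)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def binary_firefly_fs(n_fireflies=12, n_iter=40, seed=0):
    # Each firefly is a binary feature mask; dimmer fireflies move
    # toward brighter (higher-fitness) ones bit by bit.
    rng = random.Random(seed)
    swarm = [[rng.randint(0, 1) for _ in range(N_FEATURES)]
             for _ in range(n_fireflies)]
    for _ in range(n_iter):
        scores = [fitness(f) for f in swarm]
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if scores[j] > scores[i]:
                    # Attractiveness decays with Hamming distance,
                    # giving an adaptive step size per pair.
                    beta = 1.0 / (1.0 + hamming(swarm[i], swarm[j]))
                    for k in range(N_FEATURES):
                        if swarm[i][k] != swarm[j][k] and rng.random() < beta:
                            swarm[i][k] = swarm[j][k]
            # Occasional random bit flip keeps the swarm exploring.
            if rng.random() < 0.2:
                swarm[i][rng.randrange(N_FEATURES)] ^= 1
    best = max(swarm, key=fitness)
    return best, fitness(best)

def majority_vote(*predictions):
    # Combine per-classifier labels (e.g. from KNN, RF, EANN) per sample.
    return [max(set(votes), key=votes.count)
            for votes in zip(*predictions)]

best_mask, best_fv = binary_firefly_fs()
labels = majority_vote(["happy", "sad"], ["happy", "happy"], ["sad", "sad"])
```

A real pipeline would evaluate `fitness` by training a classifier on the masked features, and the vote would aggregate the three trained models' predictions rather than hard-coded labels.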
CRediT Author Statement
The authors confirm contribution to the paper as follows:
Conceptualization: Sudha S S and Suganya S S;
Methodology: Suganya S S;
Writing – Original Draft Preparation: Sudha S S;
Investigation: Sudha S S and Suganya S S;
Supervision: Sudha S S;
Validation: Suganya S S;
Writing – Reviewing and Editing: Sudha S S and Suganya S S. All authors reviewed the results and approved the final version of the manuscript.
Acknowledgements
The authors thank Dr. Suganya S S for her support in completing this research.
Funding
No funding was received to assist with the preparation of this manuscript.
Ethics declarations
Conflict of interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Availability of data and materials
Data sharing is not applicable to this article as no new data were created or analysed in this study.
Author information
Contributions
All authors contributed equally to the paper, and all authors have read and agreed to the published version of the manuscript.
Corresponding author
Sudha S S
Department of Applied Mathematics and Computational Sciences, PSG College of Technology, Peelamedu, Coimbatore, Tamil Nadu, India.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International (CC BY-NC-ND 4.0) licence, which permits redistribution of the material for non-commercial purposes only, provided no changes whatsoever are made to the original work, i.e. no derivatives. To view a copy of this licence, visit https://creativecommons.org/licenses/by-nc-nd/4.0/
Cite this article
Sudha S S and Suganya S S, “Adaptive Firefly Optimization Based Feature Selection and Ensemble Machine Learning Algorithm for Facial Expression Emotion Recognition”, Journal of Machine and Computing, vol.5, no.3, pp. 1543-1558, July 2025, doi: 10.53759/7669/jmc202505122.