Journal of Machine and Computing


Efficient and Accurate Traffic Sign Detection Leveraging YOLOv8: A Cutting Edge Deep Learning Framework




Received On : 10 April 2024

Revised On : 12 June 2024

Accepted On : 10 September 2024

Volume 05, Issue 01



Abstract


The timely and precise identification of traffic signs is essential for maintaining the effectiveness and safety of contemporary roads, particularly in light of the increasing number of self-driving cars. Conventional image processing methods have struggled with the intricate and fluctuating conditions of real-world settings, including varied signage, erratic weather, and inconsistent illumination. This study utilizes recent breakthroughs in deep learning, particularly the YOLOv8 (You Only Look Once version 8) model, to tackle these difficulties. YOLOv8 incorporates cutting-edge neural network architectural advancements, such as an anchor-free detection methodology, adaptive spatial feature pooling, and dynamic neural configurations. To further increase detection efficiency and accuracy, this study presents two novel models, YOLOv8-DH and YOLOv8-TDHSA, which incorporate improvements such as decoupled heads and transformer-based self-attention mechanisms. Experimental results indicate that the proposed models substantially surpass existing deep learning models, attaining improved performance across multiple measures, including accuracy, recall, F-score, and mean average precision (mAP). This research advances traffic sign detection technology, facilitating the development of safer and more intelligent transportation systems.
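For readers unfamiliar with the YOLOv8 workflow the abstract refers to, the following minimal sketch shows how a baseline YOLOv8 detector might be fine-tuned and evaluated on a traffic-sign dataset using the ultralytics Python package. It is an illustrative assumption only: it does not implement the YOLOv8-DH or YOLOv8-TDHSA variants proposed in the paper, and the dataset YAML and image paths are hypothetical placeholders.

# Minimal sketch (assumption): baseline YOLOv8 fine-tuning with the
# ultralytics package, not the authors' YOLOv8-DH / YOLOv8-TDHSA models.
from ultralytics import YOLO

# Load a pretrained YOLOv8 nano checkpoint (anchor-free detection head).
model = YOLO("yolov8n.pt")

# Fine-tune on a traffic-sign dataset described by a standard ultralytics
# data YAML (train/val image paths plus class names). Path is illustrative.
model.train(data="traffic_signs.yaml", epochs=100, imgsz=640, batch=16)

# Validate: reports precision, recall, mAP@0.5 and mAP@0.5:0.95.
metrics = model.val()
print(metrics.box.map50, metrics.box.map)

# Run inference on a single road image; detections below the confidence
# threshold are discarded.
results = model.predict("road_scene.jpg", conf=0.25)
for r in results:
    print(r.boxes.xyxy, r.boxes.cls, r.boxes.conf)

The decoupled-head and transformer self-attention enhancements described in the abstract would require modifying the detection-head modules themselves, which the ultralytics API does not expose through this high-level workflow.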


Keywords


Object detection, Traffic sign, YOLO, Image processing, Computer vision, Attention mechanism.



Acknowledgements


The authors would like to thank the reviewers for their helpful comments on the manuscript.


Funding


No funding was received to assist with the preparation of this manuscript.


Ethics declarations


Conflict of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.


Availability of data and materials


Data sharing is not applicable to this article as no new data were created or analysed in this study.


Author information


Contributions

All authors contributed equally to the paper, and all authors have read and agreed to the published version of the manuscript.


Corresponding author


Rights and permissions


Open Access This article is licensed under a Creative Commons Attribution-NoDerivs license, a more restrictive license that allows you to redistribute the material commercially or non-commercially but does not permit any changes whatsoever to the original, i.e. no derivatives of the original work. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-nd/4.0/


Cite this article


Gunji Sreenivasulu, Lakshmi H N, Muni Kumari T, Anjaiah P, Suresh A and Avanija J, “Efficient and Accurate Traffic Sign Detection Leveraging YOLOv8: A Cutting Edge Deep Learning Framework”, Journal of Machine and Computing. doi: 10.53759/7669/jmc202505001.


Copyright


© 2025 Gunji Sreenivasulu, Lakshmi H N, Muni Kumari T, Anjaiah P, Suresh A and Avanija J. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.