Journal of Machine and Computing


Efficient Event Transactions in VANET’s Using Reinforcement Learning Aided Block Chain Architecture




Received On : 22 June 2024

Revised On : 15 August 2024

Accepted On : 04 October 2024

Volume 05, Issue 02



Abstract


Vehicular Ad-Hoc Networks (VANETs) have emerged as a pivotal technology for enhancing road safety and traffic management through real-time vehicle-to-vehicle (V2V) communication. However, the dynamic and open nature of VANETs introduces challenges related to data security, privacy, and trust among vehicles. To address these challenges, the integration of blockchain technology into VANETs has gained considerable attention. In this study, we introduce Vehicular chain-Reinforcement Learning (RL), a blockchain-based VANET system that employs artificial intelligence (AI), specifically Deep Reinforcement Learning (DRL), to create a flexible, knowledgeable, collaborative, and secure network for the VANET industry. The framework unifies a wide variety of VANET systems by combining blockchain technology with an intelligent, online RL decision-making algorithm. The goal is to optimize the network's behavior in real time, with the privacy and security of vehicle data as primary concerns. The proposed Blockchain Manager (BM) intelligently adjusts the blockchain configuration to balance security, latency, and cost. On the RL side, the D3QN framework combines Deep Q-Network (DQN), Double Deep Q-Network (DDQN), and Dueling DQN techniques to efficiently solve the underlying Markov Decision Process (MDP) optimization model. The proposed approaches are thoroughly compared against two heuristic baselines. The suggested methods achieve real-time adaptation to the system state, with convergence to maximum security, minimal latency, and low cost.
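The D3QN combination named in the abstract can be illustrated by its two core formulas: the dueling decomposition Q(s,a) = V(s) + A(s,a) - mean_a A(s,a), and the Double-DQN target, where the online network selects the next action and the target network evaluates it. The sketch below is a minimal, illustrative numpy version only; the state/action sizes and the random stand-in "network outputs" are assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: blockchain-configuration states
# observed by the BM, and candidate reconfiguration actions.
N_STATES, N_ACTIONS, GAMMA = 4, 3, 0.9

def dueling_q(value, advantage):
    # Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a),
    # which makes the decomposition identifiable.
    return value[:, None] + advantage - advantage.mean(axis=1, keepdims=True)

# Random stand-ins for the online and target networks' value/advantage heads.
v_online = rng.normal(size=N_STATES)
a_online = rng.normal(size=(N_STATES, N_ACTIONS))
v_target = rng.normal(size=N_STATES)
a_target = rng.normal(size=(N_STATES, N_ACTIONS))

q_online = dueling_q(v_online, a_online)
q_target = dueling_q(v_target, a_target)

def double_dqn_target(reward, next_state):
    # Double DQN: the online net picks the greedy action, the target net
    # evaluates it, reducing vanilla DQN's overestimation bias.
    best_a = int(np.argmax(q_online[next_state]))
    return reward + GAMMA * q_target[next_state, best_a]

y = double_dqn_target(reward=1.0, next_state=2)
```

A useful sanity check on the dueling head is that averaging Q(s,a) over actions recovers V(s), since the mean-subtracted advantages cancel.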


Keywords


VANET, Block Chain, Block Manager, Reinforcement Learning, Deep Q-Network (DQN).



Acknowledgements


The authors would like to thank the reviewers for their helpful comments on the manuscript.


Funding


No funding was received to assist with the preparation of this manuscript.


Ethics declarations


Conflict of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.


Availability of data and materials


Data sharing is not applicable to this article as no new data were created or analysed in this study.


Author information


Contributions

All authors contributed equally to the paper, and all authors have read and agreed to the published version of the manuscript.


Rights and permissions


Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits copying and redistribution of the material in any medium or format for non-commercial purposes, provided the original work is appropriately credited and no changes are made to it, i.e. no derivatives of the original work. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-nd/4.0/


Cite this article


Shaik Mulla Almas, Kavitha K and Kalavathi Alla, “Efficient Event Transactions in VANET’s Using Reinforcement Learning Aided Block Chain Architecture”, Journal of Machine and Computing. doi: 10.53759/7669/jmc202505052.


Copyright


© 2025 Shaik Mulla Almas, Kavitha K and Kalavathi Alla. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.