Journal of Machine and Computing


HydroLens: Pioneering Underwater Surveillance with IoT-powered Object Detection and Tracking using the Hybrid ResNeXt-DenseNet Model



Received On : 15 April 2024

Revised On : 26 July 2024

Accepted On : 16 November 2024

Volume 05, Issue 01


Abstract


Efficient object detection and tracking approaches are gaining traction and are being actively deployed in underwater surveillance. This study presents an innovative framework built around a Hybrid ResNeXt-DenseNet Model to strengthen the visual perception of Internet of Things (IoT)-based underwater surveillance. The model draws on the strengths of both ResNeXt and DenseNet, yielding higher accuracy at lower computational cost than either architecture alone. Its components are: IoT-enabled underwater sensors for data capture, a robust data preprocessing pipeline tailored to underwater imagery, and the Hybrid ResNeXt-DenseNet Model for object detection and tracking. The model's architecture is designed to overcome challenges inherent to underwater environments, such as low visibility, variable illumination, and complex backgrounds. The proposed model was implemented in Python, and experiments on popular underwater benchmark datasets show that the approach achieves a recognition accuracy of 98%. The Hybrid ResNeXt-DenseNet Model can accurately identify and track objects of interest in real-time underwater scenarios. Furthermore, the IoT integration keeps data flowing without interruption, enabling prompt response and action. By combining pervasive IoT with sophisticated deep learning methods at the device level, this research advances situational awareness and marine environment protection systems.
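The abstract describes a hybrid backbone that fuses ResNeXt's grouped convolutions with DenseNet's feature concatenation. The page does not include the authors' code, so the following is only a minimal PyTorch sketch of that general idea: a ResNeXt-style residual block (cardinality expressed via the `groups` argument of `Conv2d`) wired into DenseNet-style dense connectivity, with 1x1 transition layers to keep channel counts bounded. All class names, channel widths, depths, and the classification head are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class ResNeXtBlock(nn.Module):
    """Residual block with a grouped 3x3 convolution (ResNeXt-style cardinality)."""

    def __init__(self, ch: int, cardinality: int = 8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            # groups=cardinality splits channels into parallel paths
            nn.Conv2d(ch, ch, 3, padding=1, groups=cardinality),
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 1), nn.BatchNorm2d(ch),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # residual connection


class HybridResNeXtDenseNet(nn.Module):
    """Stacks ResNeXt blocks with DenseNet-style concatenation of all prior features."""

    def __init__(self, in_ch: int = 3, ch: int = 32, depth: int = 3, n_classes: int = 5):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.blocks = nn.ModuleList(ResNeXtBlock(ch) for _ in range(depth))
        # 1x1 transitions compress the growing concatenated feature map back to ch
        self.trans = nn.ModuleList(nn.Conv2d(ch * (i + 1), ch, 1) for i in range(depth))
        self.head = nn.Linear(ch * (depth + 1), n_classes)

    def forward(self, x):
        feats = [self.stem(x)]
        for block, transition in zip(self.blocks, self.trans):
            # dense connectivity: every block sees all earlier feature maps
            feats.append(block(transition(torch.cat(feats, dim=1))))
        pooled = torch.cat(feats, dim=1).mean(dim=(2, 3))  # global average pool
        return self.head(pooled)


model = HybridResNeXtDenseNet()
logits = model(torch.randn(2, 3, 32, 32))  # shape: (2, 5)
```

In this sketch the grouped convolution supplies ResNeXt's split-transform-merge efficiency, while the running `feats` list and concatenation supply DenseNet's feature reuse, which is one plausible reading of how the two could be combined.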


Keywords


Object Detection, Deep Learning, Underwater Surveillance, Internet of Things, Cascaded CNN, Modified Gaussian Filter, Hybrid ResNeXt-DenseNet Model.


References


  1. N. Faruqui, M. A. Kabir, M. A. Yousuf, Md. Whaiduzzaman, A. Barros, and I. Mahmud, “Trackez: An IoT-Based 3D-Object Tracking From 2D Pixel Matrix Using Mez and FSL Algorithm,” IEEE Access, vol. 11, pp. 61453–61467, 2023, doi: 10.1109/access.2023.3287496.
  2. H. Li, X. Liang, H. Yin, L. Xu, X. Kong, and T. A. Gulliver, “Multiobject Tracking via Discriminative Embeddings for the Internet of Things,” IEEE Internet of Things Journal, vol. 10, no. 12, pp. 10532–10546, Jun. 2023, doi: 10.1109/jiot.2023.3242739.
  3. I. Ahmed, G. Jeon, and A. Chehri, “A Smart IoT Enabled End-to-End 3D Object Detection System for Autonomous Vehicles,” IEEE Transactions on Intelligent Transportation Systems, vol. 24, no. 11, pp. 13078–13087, Nov. 2023, doi: 10.1109/tits.2022.3210490.
  4. S. Li et al., “A Multitask Benchmark Dataset for Satellite Video: Object Detection, Tracking, and Segmentation,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–21, 2023, doi: 10.1109/tgrs.2023.3278075.
  5. Z. Meng, X. Xia, R. Xu, W. Liu, and J. Ma, “HYDRO-3D: Hybrid Object Detection and Tracking for Cooperative Perception Using 3D LiDAR,” IEEE Transactions on Intelligent Vehicles, vol. 8, no. 8, pp. 4069–4080, Aug. 2023, doi: 10.1109/tiv.2023.3282567.
  6. J. Wu, X. Su, Q. Yuan, H. Shen, and L. Zhang, “Multivehicle Object Tracking in Satellite Video Enhanced by Slow Features and Motion Features,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–26, 2022, doi: 10.1109/tgrs.2021.3139121.
  7. Y. Gong et al., “An Energy-Efficient Reconfigurable AI-Based Object Detection and Tracking Processor Supporting Online Object Learning,” IEEE Solid-State Circuits Letters, vol. 5, pp. 78–81, 2022, doi: 10.1109/lssc.2022.3163478.
  8. I. S. Mohamed and L. K. Chuan, “PAE: Portable Appearance Extension for Multiple Object Detection and Tracking in Traffic Scenes,” IEEE Access, vol. 10, pp. 37257–37268, 2022, doi: 10.1109/access.2022.3160424.
  9. S. Guo, C. Zhao, G. Wang, J. Yang, and S. Yang, “EC²Detect: Real-Time Online Video Object Detection in Edge-Cloud Collaborative IoT,” IEEE Internet of Things Journal, vol. 9, no. 20, pp. 20382–20392, Oct. 2022, doi: 10.1109/jiot.2022.3173685.
  10. Z. Peng, Z. Xiong, Y. Zhao, and L. Zhang, “3-D Objects Detection and Tracking Using Solid-State LiDAR and RGB Camera,” IEEE Sensors Journal, vol. 23, no. 13, pp. 14795–14808, Jul. 2023, doi: 10.1109/jsen.2023.3279500.
  11. C. Nie, Z. Ju, Z. Sun, and H. Zhang, “3D Object Detection and Tracking Based on Lidar-Camera Fusion and IMM-UKF Algorithm Towards Highway Driving,” IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 7, no. 4, pp. 1242–1252, Aug. 2023, doi: 10.1109/tetci.2023.3259441.
  12. M. Jiang, C. Zhou, and J. Kong, “AOH: Online Multiple Object Tracking With Adaptive Occlusion Handling,” IEEE Signal Processing Letters, vol. 29, pp. 1644–1648, 2022, doi: 10.1109/lsp.2022.3191549.
  13. C. Zhang, S. Zheng, H. Wu, Z. Gu, W. Sun, and L. Yang, “AttentionTrack: Multiple Object Tracking in Traffic Scenarios Using Features Attention,” IEEE Transactions on Intelligent Transportation Systems, vol. 25, no. 2, pp. 1661–1674, Feb. 2024, doi: 10.1109/tits.2023.3315222.
  14. S. Wang et al., “Object Tracking Based on the Fusion of Roadside LiDAR and Camera Data,” IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1–14, 2022, doi: 10.1109/tim.2022.3201938.
  15. H. Liu, Y. Ma, H. Wang, C. Zhang, and Y. Guo, “AnchorPoint: Query Design for Transformer-Based 3D Object Detection and Tracking,” IEEE Transactions on Intelligent Transportation Systems, vol. 24, no. 10, pp. 10988–11000, Oct. 2023, doi: 10.1109/tits.2023.3282204.
  16. T. Jaganathan, A. Panneerselvam, and S. K. Kumaraswamy, “Object detection and multi‐object tracking based on optimized deep convolutional neural network and unscented Kalman filtering,” Concurrency and Computation: Practice and Experience, vol. 34, no. 25, Aug. 2022, doi: 10.1002/cpe.7245.
  17. L. Huang et al., “Simultaneous object detection and segmentation for patient‐specific markerless lung tumor tracking in simulated radiographs with deep learning,” Medical Physics, vol. 51, no. 3, pp. 1957–1973, Sep. 2023, doi: 10.1002/mp.16705.
  18. Q. Zhang, Y. Shan, Z. Zhang, H. Lin, Y. Zhang, and K. Huang, “Multisensor fusion‐based maritime ship object detection method for autonomous surface vehicles,” Journal of Field Robotics, vol. 41, no. 3, pp. 493–510, Nov. 2023, doi: 10.1002/rob.22273.
  19. D. Roja, “A Smart Ultrasonic Radar: Real-Time Object Detection and Tracking with IoT Integration,” International Journal for Modern Trends in Science and Technology, vol. 10, no. 2, pp. 102–109, 2024, doi: 10.46501/IJMTST1002014.
  20. Z. Ni, C. Zhai, Y. Li, and Y. Yang, “A Multi-Object Tracking Method With Adaptive Dual Decoder and Better Motion Affinity,” IEEE Access, vol. 12, pp. 20221–20231, 2024, doi: 10.1109/access.2024.3362673.
  21. H. Gao, L. Yu, I. A. Khan, Y. Wang, Y. Yang, and H. Shen, “Visual Object Detection and Tracking for Internet of Things Devices Based on Spatial Attention Powered Multidomain Network,” IEEE Internet of Things Journal, vol. 10, no. 4, pp. 2811–2820, Feb. 2023, doi: 10.1109/jiot.2021.3099855.
  22. I. Ahmed, M. Ahmad, A. Chehri, M. M. Hassan, and G. Jeon, “IoT Enabled Deep Learning Based Framework for Multiple Object Detection in Remote Sensing Images,” Remote Sensing, vol. 14, no. 16, p. 4107, Aug. 2022, doi: 10.3390/rs14164107.
  23. V. Kamath, R. A., V. G. Kini, and S. Prabhu, “Exploratory Data Preparation and Model Training Process for Raspberry Pi-Based Object Detection Model Deployments,” IEEE Access, vol. 12, pp. 45423–45441, 2024, doi: 10.1109/access.2024.3381798.
  24. N. Venu, “Object Detection in Motion Estimation and Tracking Analysis for IoT Devices,” European Chemical Bulletin, vol. 12, no. 9, 2023, doi: 10.48047/ecb/2023.12.9.141.
  25. S. Bilakeri and K. A. Kotegar, “Learning to Track With Dynamic Message Passing Neural Network for Multi-Camera Multi-Object Tracking,” IEEE Access, vol. 12, pp. 63317–63333, 2024, doi: 10.1109/access.2024.3383138.

Acknowledgements


The authors thank Dr. Jeevanantham Vellaichamy for his support in completing this research.


Funding


No funding was received to assist with the preparation of this manuscript.


Ethics declarations


Conflict of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.


Availability of data and materials


Data sharing is not applicable to this article as no new data were created or analysed in this study.


Author information


Contributions

All authors contributed equally to the paper, and all authors have read and agreed to the published version of the manuscript.


Rights and permissions


Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License, which permits you to copy and redistribute the material for non-commercial purposes, provided no changes whatsoever are made to the original, i.e. no derivatives of the original work are permitted. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-nd/4.0/


Cite this article


Sujilatha Tada and Jeevanantham Vellaichamy, “HydroLens: Pioneering Underwater Surveillance with IoT-powered Object Detection and Tracking using the Hybrid ResNeXt-DenseNet Model”, Journal of Machine and Computing, vol. 05, no. 01, 2025, doi: 10.53759/7669/jmc202505022.


Copyright


© 2025 Sujilatha Tada and Jeevanantham Vellaichamy. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.