Journal of Biomedical and Sustainable Healthcare Applications


Range Imaging and Video Generation using Generative Adversarial Network




Received On : 21 August 2020

Revised On : 23 September 2020

Accepted On : 24 October 2020

Published On : 05 January 2021

Volume 01, Issue 01

Pages : 034-041


Abstract


Low latency, high temporal resolution, and high dynamic range are just a few of the benefits of event cameras over conventional cameras. Standard methods and algorithms cannot be applied directly, however, because the output of an event camera is a stream of asynchronous events rather than exact pixel intensities. As a result, generating intensity images from events for other tasks is difficult. In this article, we use event camera-based conditional deep convolutional networks to generate images and videos from a variable portion of the event data stream. The network is designed to reconstruct visual scenes from spatio-temporal intensity changes, taking stacks of spatial event coordinates as input. We demonstrate the ability of event cameras to produce High Dynamic Range (HDR) images even under extreme lighting conditions, as well as blur-free images under rapid motion. Furthermore, because event cameras have a temporal resolution on the order of 1 μs, we show that very high frame-rate video, conceivably up to one million frames per second, can be generated. The proposed algorithms are evaluated against intensity images recorded on the same pixel grid as the events, using publicly available real-world datasets and synthetic datasets produced by an event camera simulator.
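
As an illustration of the conditional convolutional generator described above, the sketch below shows one plausible way such a network could map a stack of event frames to a single intensity image. This is a minimal sketch under assumed choices (8 event bins, a two-stage encoder-decoder, a 180x240 sensor), not the architecture used in this paper.

# Hypothetical sketch of a conditional convolutional generator for event-to-image
# reconstruction. Channel counts, depths, and resolution are illustrative assumptions.
import torch
import torch.nn as nn

class EventToImageGenerator(nn.Module):
    def __init__(self, event_bins: int = 8, base_channels: int = 32):
        super().__init__()
        # Encoder: downsample the stacked event tensor twice.
        self.encoder = nn.Sequential(
            nn.Conv2d(event_bins, base_channels, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base_channels * 2),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Decoder: upsample back to the input resolution and predict one intensity channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base_channels * 2, base_channels, 4, stride=2, padding=1),
            nn.BatchNorm2d(base_channels),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels, 1, 4, stride=2, padding=1),
            nn.Tanh(),  # intensities in [-1, 1], rescaled for display
        )

    def forward(self, event_stack: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(event_stack))

# Toy usage: one stack of 8 event bins at 180x240 resolution.
generator = EventToImageGenerator(event_bins=8)
fake_image = generator(torch.randn(1, 8, 180, 240))
print(fake_image.shape)  # torch.Size([1, 1, 180, 240])

In a full conditional GAN, a generator of this kind would be trained against a discriminator that receives the event stack together with either the real or the generated intensity image.
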


Keywords


Generative Adversarial Network (GAN), High Dynamic Range (HDR), Stacking Based on Time (SBT), Stacking Based on Events (SBE)
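
The SBT and SBE keywords refer to how the asynchronous event stream is bundled into fixed-size tensors before being fed to the network. The following is a minimal sketch, assuming event records with fields t, x, y, p (timestamp, pixel coordinates, polarity), an illustrative 180x240 sensor, and 8 bins; it is not the authors' implementation.

# Hypothetical sketch of two event-stacking schemes; field names and sizes are assumptions.
import numpy as np

def stack_by_time(events, num_bins=8, height=180, width=240):
    """Stacking Based on Time (SBT): split the stream into equal-duration
    temporal bins and accumulate signed polarities per pixel in each bin."""
    t, x, y, p = events["t"], events["x"], events["y"], events["p"]
    stack = np.zeros((num_bins, height, width), dtype=np.float32)
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)       # map timestamps to [0, 1]
    bins = np.clip((t_norm * num_bins).astype(int), 0, num_bins - 1)
    np.add.at(stack, (bins, y, x), np.where(p > 0, 1.0, -1.0))  # +1 / -1 per event
    return stack

def stack_by_events(events, num_bins=8, height=180, width=240):
    """Stacking Based on Events (SBE): each bin holds an equal number of events,
    so fast motion yields shorter (and therefore sharper) bins."""
    x, y, p = events["x"], events["y"], events["p"]
    stack = np.zeros((num_bins, height, width), dtype=np.float32)
    bins = np.minimum(np.arange(len(x)) * num_bins // len(x), num_bins - 1)
    np.add.at(stack, (bins, y, x), np.where(p > 0, 1.0, -1.0))
    return stack

# Toy usage with synthetic, time-sorted events.
rng = np.random.default_rng(0)
n = 10_000
events = {
    "t": np.sort(rng.uniform(0.0, 0.05, n)),  # 50 ms worth of events
    "x": rng.integers(0, 240, n),
    "y": rng.integers(0, 180, n),
    "p": rng.choice([-1, 1], n),
}
sbt = stack_by_time(events)    # (8, 180, 240) conditional input to the generator
sbe = stack_by_events(events)
print(sbt.shape, sbe.shape)

SBT keeps bins of equal duration, whereas SBE keeps bins with an equal number of events, which adapts the effective exposure of each bin to the scene dynamics.
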



Acknowledgements


The authors would like to thank the reviewers for their helpful comments on the manuscript.


Funding


No funding was received to assist with the preparation of this manuscript.


Ethics declarations


Conflict of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.


Availability of data and materials


No data are available for the above study.


Author information


Contributions

All authors contributed equally to the paper, and all authors have read and agreed to the published version of the manuscript.


Corresponding author


Rights and permissions


Open Access: This article is licensed under a Creative Commons Attribution-NoDerivs license, a more restrictive license that allows the material to be redistributed, commercially or non-commercially, provided no changes whatsoever are made to the original work, i.e. no derivatives of the original are created. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-nd/4.0/


Cite this article


Anderson Stephanie, "Range Imaging and Video Generation using Generative Adversarial Network", Journal of Biomedical and Sustainable Healthcare Applications, vol. 1, no. 1, pp. 034-041, January 2021. doi: 10.53759/0088/JBSHA202101005.


Copyright


© 2021 Anderson Stephanie. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.