Navigating an environment can be challenging for visually impaired individuals, especially outdoors or in unfamiliar surroundings. In this research, we propose a multi-robot system equipped with sensors and machine learning algorithms to help visually impaired users navigate their surroundings with greater ease and independence. Each robot carries Lidar, proximity sensors, and a Bluetooth transmitter and receiver, which allow it to sense the environment and deliver information to the user. When the robot detects an obstacle, it notifies the user through a Bluetooth link to their headset. The robot's machine learning algorithm, implemented in Python, processes the data collected by the sensors and decides how to inform the user about their surroundings. A microcontroller collects the sensor data, and a Raspberry Pi relays the information to the rest of the system. The user receives spoken instructions about the environment through a speaker, enabling them to navigate with greater confidence and independence. Our research shows that a multi-robot system equipped with sensors and machine learning algorithms can assist visually impaired individuals in navigating their environment. The system provides the user with real-time information about their surroundings, enabling them to make informed decisions about their movements. Additionally, the system can replace the need for a human assistant, providing greater independence and privacy for the visually impaired individual. The system can be improved further by incorporating additional sensors and refining the machine learning algorithms to enhance its functionality and usability. This technology has the potential to greatly improve the quality of life of visually impaired individuals by increasing their independence and mobility, and it has important implications for the design of future assistive technologies and robotics.
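To illustrate the sense-decide-announce loop described above, the following Python sketch shows how sensor ranges might be fused and turned into spoken advisories. It is a minimal sketch under stated assumptions, not the system's actual code: the helpers read_lidar_cm, read_proximity_cm, and announce are hypothetical placeholders for the real Lidar driver, the microcontroller's serial link, and the Bluetooth headset output, and the threshold-based classify function is a simple rule-based stand-in for the learned model described in the paper.

```python
# Minimal sketch of the sensor-to-speech decision loop.
# All hardware-facing functions are hypothetical placeholders; on the real
# robot they would wrap the Lidar driver, the microcontroller's serial link,
# and the Bluetooth headset. The classifier is a rule-based stand-in for the
# paper's machine learning model.
import random
import time


def read_lidar_cm() -> float:
    """Placeholder: return the nearest Lidar range in centimetres."""
    return random.uniform(20.0, 300.0)


def read_proximity_cm() -> float:
    """Placeholder: return the proximity sensor reading in centimetres."""
    return random.uniform(5.0, 150.0)


def classify(distance_cm: float) -> str:
    """Map a fused distance to a simple advisory class."""
    if distance_cm < 50:
        return "stop"
    if distance_cm < 120:
        return "caution"
    return "clear"


def announce(message: str) -> None:
    """Placeholder: forward the message to the Bluetooth headset/speaker."""
    print(message)


def main() -> None:
    for _ in range(5):  # five one-second sensing cycles for the demo
        # Conservative fusion: trust whichever sensor reports the closer object.
        distance = min(read_lidar_cm(), read_proximity_cm())
        advice = classify(distance)
        if advice == "stop":
            announce(f"Obstacle {distance:.0f} centimetres ahead. Please stop.")
        elif advice == "caution":
            announce(f"Object ahead, about {distance:.0f} centimetres away.")
        time.sleep(1.0)


if __name__ == "__main__":
    main()
```

Taking the minimum of the two ranges is a deliberately conservative fusion rule chosen for this sketch; the reported system instead lets its machine learning algorithm weigh the sensor streams when deciding what to tell the user.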
Acknowledgements
The authors thank the reviewers for the time and effort they devoted to reviewing the manuscript.
Funding
No funding was received to assist with the preparation of this manuscript.
Ethics declarations
Conflict of interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Availability of data and materials
No data are available for the above study.
Author information
Contributions
All authors contributed equally to this paper, and all authors have read and agreed to the published version of the manuscript.
Corresponding author
C P Shirley
Department of Computer Science and Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits sharing and redistribution of the material in any medium or format for non-commercial purposes only, provided appropriate credit is given to the original author(s) and the source, and no modifications or derivatives of the original work are made. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-nd/4.0/
Cite this article
C P Shirley, Kantilal Rane, Kolli Himantha Rao, B Bradley Bright, Prashant Agrawal and Neelam Rawat, “Machine Learning and Sensor-Based Multi-Robot System with Voice Recognition for Assisting the Visually Impaired,” Journal of Machine and Computing, vol. 3, no. 3, pp. 206–215, July 2023. doi: 10.53759/7669/jmc202303019.