New AI Tool Enables Visually Impaired Users to Locate Everyday Items More Quickly
Penn State researchers have unveiled a new AI-driven navigation system designed to transform digital assistance for people with visual impairments. The smartphone-based tool, called NaviSense, identifies objects in real time and guides users toward them using audio cues and haptic feedback—without relying on preloaded object libraries.
The technology debuted at the ACM SIGACCESS ASSETS ’25 conference in Denver, where it received the Best Audience Choice Poster Award.
Real-time object recognition without static databases
Traditional assistive navigation apps often depend on human agents or require object models to be manually loaded into their databases [1]. According to Vijaykrishnan Narayanan, Evan Pugh University Professor and A. Robert Noll Chair Professor of Electrical Engineering, this creates major limitations in flexibility and performance.
Figure 1. The NaviSense app.
“Older systems needed object models stored in memory to recognize them,” Narayanan explained. “This is inefficient and restricts users. We turned to advanced AI to overcome this.”
NaviSense connects to remote servers that run large language models (LLMs) and vision-language models (VLMs) [1]. These allow the system to interpret voice commands, scan surroundings, and identify requested items dynamically, enabling real-time recognition without fixed object catalogs.
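In simplified form, the client-server exchange described above amounts to packaging a spoken request and a camera frame for the remote model, then reading back where the object was found. The payload shape, field names, and `bbox` reply below are assumptions for illustration; the actual NaviSense protocol has not been published.

```python
import base64
import json

def build_query(transcript: str, frame_bytes: bytes) -> str:
    """Package a spoken request plus a camera frame for a remote VLM server.
    (Hypothetical payload shape; the real protocol is not public.)"""
    return json.dumps({
        "prompt": f"Locate this object in the image: {transcript}",
        "image": base64.b64encode(frame_bytes).decode("ascii"),
    })

def parse_detection(response_json: str):
    """Extract the target's bounding-box centre from a hypothetical server reply.
    Returns None when the object was not found, so the app can ask a follow-up."""
    reply = json.loads(response_json)
    if not reply.get("found"):
        return None
    x0, y0, x1, y1 = reply["bbox"]
    return ((x0 + x1) / 2, (y0 + y1) / 2)

# Example: a reply placing the object's box at pixels (120,40)-(200,160).
centre = parse_detection('{"found": true, "bbox": [120, 40, 200, 160]}')
print(centre)  # (160.0, 100.0)
```

Because recognition is phrased as a free-form prompt rather than a lookup in a preloaded model library, any object the VLM can describe becomes searchable without updating the app.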
Designed with user feedback
The tool was built based on extensive interviews with visually impaired individuals to understand practical challenges. Ajay Narayanan Sridhar, computer engineering Ph.D. student and the project’s lead investigator, said this input directly shaped the app’s features.
The app takes a spoken request, searches the user’s environment, filters irrelevant objects, and asks follow-up questions when clarification is needed—creating a conversational and adaptive guidance system.
One of NaviSense’s most innovative functions is hand guidance: the system tracks hand position using smartphone motion and gives directional instructions to help users physically reach the target item. Sridhar noted that participants repeatedly requested this capability during research sessions.
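Stripped to its core, a hand-guidance loop like the one described might reduce to comparing the tracked hand position against the detected object and speaking the dominant offset. The pixel-coordinate convention, tolerance value, and cue words below are assumptions for illustration, not the app's actual logic.

```python
def guidance_cue(hand, target, tolerance=20):
    """Map the offset between hand and target (image pixel coords) to a
    spoken cue. A simplified sketch; thresholds and coordinate conventions
    are assumed, not taken from NaviSense."""
    dx = target[0] - hand[0]
    dy = target[1] - hand[1]
    # Close enough on both axes: the user has reached the object.
    if abs(dx) <= tolerance and abs(dy) <= tolerance:
        return "bullseye"
    # Otherwise speak the larger offset first (image y grows downward).
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

print(guidance_cue(hand=(100, 100), target=(180, 110)))  # right
print(guidance_cue(hand=(100, 100), target=(105, 95)))   # bullseye
```

Run in a loop against fresh camera frames, cues like these converge the hand toward the target step by step, matching the left/right/up/down prompts users describe.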
Proven early performance
Researchers tested NaviSense with 12 participants, comparing its performance to two existing commercial tools. The system significantly reduced search time, improved identification accuracy, and earned higher user-satisfaction ratings.
One participant wrote:
“I like how it tells you exactly where the object is—left, right, up, down—and then bullseye, you’ve got it.”

The team is now working on reducing power usage and boosting processing efficiency as they move toward commercial deployment. “This technology is very close to market-ready,” Narayanan said. “Our goal is to make it even more accessible.”
References:
- https://interestingengineering.com/innovation/ai-app-for-visually-impaired
Cite this article:
Keerthana S (2025), New AI Tool Enables Visually Impaired Users to Locate Everyday Items More Quickly, AnaTechMaz, pp. 336.