Poster Session 5, 3:15 PM - 4:00 PM: Room 165 [D5]

Building a Robust Accessibility-Object Dataset for Guide Dog Robots

Presenter: Duretti Diribaa Hordofaa

Group Members: Anshu Anjna, Dylan M. Gage

Faculty Sponsor: Donghyun Kim

School: UMass Amherst

Research Area: Computer Science

ABSTRACT

While animal guide dogs provide essential mobility assistance to blind and low-vision (BLV) individuals, they are resource-intensive to train, expensive to maintain, and limited in availability. Furthermore, existing computer vision datasets lack the specific "accessibility objects" (such as door buttons, elevators, pedestrian signals, and crosswalks) necessary for training robust perception models on edge-compute platforms; they offer neither the quantity nor the quality of annotated objects needed to support navigation assistance for BLV individuals.
 
To bridge this gap, we introduce a novel, specialized dataset for accessibility-aware robotic navigation. We collected diverse imagery of accessibility objects across varying times of day, environmental conditions, and viewpoints. To enhance model robustness, we augment this real-world data using vision-language models (VLMs) to simulate varied environmental effects. We use Roboflow to generate precise segmentation masks, then fine-tune lightweight segmentation architectures for deployment on the Unitree Go2 quadruped robot.
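As an illustration of the augmentation step, the sketch below uses the publicly available InstructPix2Pix editor (via the Hugging Face diffusers library) as a stand-in, since the abstract does not name the specific VLM; the input file name and prompts are hypothetical.

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInstructPix2PixPipeline

    # Load an instruction-conditioned image editor (a stand-in for the
    # VLM-based augmentation described above, not the authors' model).
    pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
        "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
    ).to("cuda")

    # Simulate varied environmental conditions on one collected image
    # ("crosswalk_001.jpg" is a hypothetical file name).
    image = Image.open("crosswalk_001.jpg").convert("RGB")
    for i, prompt in enumerate(["make it nighttime",
                                "add heavy rain",
                                "add dense fog"]):
        edited = pipe(prompt, image=image, num_inference_steps=20,
                      image_guidance_scale=1.5).images[0]
        edited.save(f"crosswalk_001_aug{i}.png")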
 
Our approach directly addresses the data scarcity problem in assistive robotics. By enabling real-time, on-device segmentation of critical navigation cues, we demonstrate a scalable pathway for autonomous guide robots. These results will support the development of guide dog robots that assist BLV individuals more reliably and efficiently across diverse environments. We expect this work not only to provide a foundational dataset for the community but also to validate these models in real-world deployment.
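A minimal sketch of the fine-tuning and edge-deployment step is shown below. It assumes a YOLOv8 nano segmentation model (via the Ultralytics library, which accepts Roboflow's YOLO-format exports) and a hypothetical dataset config "accessibility.yaml", since the abstract does not name the exact lightweight architecture.

    from ultralytics import YOLO

    # Fine-tune a lightweight segmentation model on the Roboflow-exported
    # dataset; "accessibility.yaml" is a hypothetical config listing classes
    # such as door_button, elevator, pedestrian_signal, and crosswalk.
    model = YOLO("yolov8n-seg.pt")  # nano variant, suited to edge compute
    model.train(data="accessibility.yaml", epochs=100, imgsz=640)

    # Export to an edge-friendly runtime format for real-time, on-device
    # inference (e.g., on the Go2's onboard computer); ONNX is one option.
    model.export(format="onnx")

A nano-scale model is chosen here purely to reflect the edge-compute constraint the abstract describes; any comparably small segmentation architecture could be substituted.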