Researchers at Binghamton University (New York) have developed a new robotic guide dog system that not only accompanies a person but can also maintain a full-fledged vocal dialogue with them.
Unlike traditional guide dogs, which learn only a limited set of commands (roughly two dozen), the system is built on the GPT-4 language model. This allows the device to interpret more complex queries and provide detailed responses. The project was presented at the AAAI 2026 Artificial Intelligence Conference in Singapore.
The main advantage of the “talking” robot is that it builds situational awareness in the user. The system operates in two stages: first, it verbalizes the route and action plan before setting off, and then it provides feedback on the surrounding environment as the robot moves.
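The two-stage interaction described above can be sketched in a few lines of code. Everything here is an illustrative assumption, not the team’s actual software: the class and function names are invented, and the canned phrases stand in for what the GPT-4-based dialogue would generate.

```python
# Hypothetical sketch of the two-stage voice interaction:
# stage 1 verbalizes the full plan before departure,
# stage 2 comments on surroundings while moving.
# All names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RouteStep:
    instruction: str   # e.g. "turn left at the elevator"
    landmark: str      # environmental feature to announce

def verbalize_plan(steps: list[RouteStep]) -> str:
    """Stage 1: describe the whole route before setting off."""
    lines = [f"Step {i + 1}: {s.instruction} (near the {s.landmark})."
             for i, s in enumerate(steps)]
    return "Here is the plan. " + " ".join(lines)

def en_route_feedback(step: RouteStep) -> str:
    """Stage 2: give feedback on the environment as the robot moves."""
    return f"Now {step.instruction}; we are passing the {step.landmark}."

route = [
    RouteStep("walk straight down the corridor", "long corridor"),
    RouteStep("turn left at the elevator", "elevator bank"),
]

print(verbalize_plan(route))
for step in route:
    print(en_route_feedback(step))
```

In the real system the spoken text would come from the language model rather than fixed templates; the sketch only shows how pre-trip planning and en-route feedback are separated into two phases.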
The robot can suggest several route options, calculate the estimated travel time, and provide real-time information about obstacles or spatial features, such as long corridors or complex areas. This is especially important for people with severe visual impairments, who have difficulty navigating without assistance.
During testing in a large office building, seven participants successfully reached their destinations following the system’s prompts. The subjects noted the ease of voice interaction, emphasizing that the combination of pre-planning and ongoing feedback makes navigation clearer and safer.
Currently, the team, led by Associate Professor Shiqi Zhang, continues to improve the robot’s autonomy.
The plan is to teach the robot to navigate not only indoors but also in open spaces, including complex urban routes. The scientists believe that integrating such solutions into everyday life could significantly improve mobility and quality of life for people with disabilities.