The Myopic Robot Problem
Author: Bern Grush
Date Published: July 4, 2025
Mobile robots moving among pedestrians—such as sidewalk delivery robots—often exhibit jerky, unpredictable movement patterns that make them appear shortsighted. Instead of smoothly navigating around obstacles or people, they follow relatively straight paths until suddenly veering, stopping abruptly, or making sharp, last-second corrections. This creates an impression that the robot "didn't see it coming" even when obstacles were clearly visible well in advance.
This behaviour can be very unsettling to humans.
Such unpredictability is foreign to social navigation norms. When walking near other people, we telegraph our intentions through body language, eye contact, and gradual course adjustments, and we expect others to do the same. A robot that suddenly jerks aside violates this implicit social contract, leaving us uncertain about where to step or whether it's safe to proceed.
This behaviour signals poor awareness or decision-making, and we may interpret such navigation corrections as evidence that a robot is incompetent, confused, or not paying attention. This erodes trust and can make us unduly concerned about what the robot might do next.
Smooth, anticipatory movements feel natural and predictable, while jerky, sudden corrections feel robotic in the worst sense: unaware, perhaps inhuman, potentially dangerous, and certainly unpredictable.
Algorithmic myopia
This is unlikely to be a sensor problem. Unless occluded by another object, robots can readily be designed to see objects well in advance. Instead, this behaviour stems from the nature of robot navigation algorithms.
Many robots use algorithms with short planning horizons. They plan only a couple of seconds ahead, continuously recalculating based on immediate conditions. This optimizes for “don’t hit anything” and allows for quick reactions (both critical goals), but it prevents the smooth, anticipatory movements that come naturally to attentive humans.
To further ensure it never hits anything, a robot may reasonably maintain its current path until an obstacle becomes an immediate threat and then execute an avoidance maneuver; in other words, correct reaction is valued more highly than intelligent prediction. Such a "wait and see" approach minimizes computational load, but it maximizes social awkwardness.
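The myopic loop described above can be sketched in a few lines. This is a hypothetical, minimal reactive planner, not any vendor's actual algorithm; the horizon, speed, candidate headings, and safety radius are illustrative numbers. It holds the straight-line heading for as long as a short rollout looks safe, then swerves at the last moment.

```python
import math

HORIZON_S = 2.0   # plans only 2 seconds ahead (illustrative)
DT = 0.5          # rollout time step, s
SPEED = 1.2       # assumed constant speed, m/s
SAFE_M = 0.5      # minimum acceptable clearance, m

def rollout(x, y, heading):
    """Points the robot would pass through on a short, straight rollout."""
    steps = int(HORIZON_S / DT)
    return [(x + SPEED * DT * k * math.cos(heading),
             y + SPEED * DT * k * math.sin(heading))
            for k in range(1, steps + 1)]

def clearance(points, obstacles):
    """Smallest distance from the rollout to any obstacle."""
    return min(math.dist(p, ob) for p in points for ob in obstacles)

def plan_step(x, y, goal_heading, obstacles):
    """Among collision-free candidate headings, pick the straightest.

    Returns None if every candidate would pass too close (i.e. stop).
    """
    best, best_offset = None, math.inf
    for dh in (-0.6, -0.3, 0.0, 0.3, 0.6):  # candidate offsets, rad
        if clearance(rollout(x, y, goal_heading + dh), obstacles) < SAFE_M:
            continue  # this rollout would pass too close: discard it
        if abs(dh) < best_offset:  # prefer the straightest safe candidate
            best, best_offset = goal_heading + dh, abs(dh)
    return best

# A pedestrian stands at (5, 0). Far away, the 2 s horizon (2.4 m of
# travel) sees no problem, so the robot holds course; 2 m out, the
# straight rollout finally fails and the robot swerves abruptly.
print(plan_step(0.0, 0.0, 0.0, [(5.0, 0.0)]))  # 0.0  (no reaction yet)
print(plan_step(3.0, 0.0, 0.0, [(5.0, 0.0)]))  # -0.3 (last-second swerve)
```

A longer horizon could have begun a gentle arc metres earlier; this loop cannot, because nothing beyond its 2.4 m rollout exists for it.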
Some navigation algorithms treat all obstacles equally, so a trash can and a standing person get the same geometric avoidance response. Without social motion planning, such a robot cannot account for human expectations about personal space and smooth approach and avoidance behaviour—i.e., the social dynamics of shared spaces.
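One way to encode that distinction is a class-dependent clearance cost rather than a single geometric one. The sketch below is hypothetical (the class labels and radii are illustrative assumptions, not drawn from any standard): a trash can and a standing person at the same distance produce very different costs.

```python
import math

# Assumed clearance radii in metres, per obstacle class (illustrative).
CLEARANCE_M = {
    "static": 0.3,    # trash can, lamp post: pure geometry
    "vehicle": 1.0,   # bicycle, car: wider berth
    "human": 1.2,     # personal space, not just collision geometry
}

def avoidance_cost(robot_xy, obstacle_xy, kind):
    """Infinite cost inside an obstacle's clearance zone, soft penalty outside."""
    d = math.dist(robot_xy, obstacle_xy)
    radius = CLEARANCE_M[kind]
    if d <= radius:
        return math.inf   # inside the zone: this position is forbidden
    return radius / d     # penalty fades with distance from the zone

# The same point 1.0 m away is acceptable next to a trash can but
# lies inside a person's personal space.
print(avoidance_cost((0, 0), (1.0, 0), "static"))  # 0.3
print(avoidance_cost((0, 0), (1.0, 0), "human"))   # inf
```

A planner minimizing this cost would naturally give people the wider berths and earlier adjustments the text describes, while still skimming past furniture.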
Solution approaches
Implement planning algorithms that look 15-20 seconds ahead rather than 2-3. This would allow smoother course corrections rather than sudden movements, but it needs research: any brute-force extended-planning solution would be computationally expensive and a tax on the robot’s energy supply. One expected approach is reinforcement learning that penalizes behaviours such as sudden accelerations or direction changes and rewards smooth, predictable trajectories, even if they are slightly slower.
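The reward-shaping idea can be illustrated with a toy per-step reward function. The weights and numbers here are made-up values, not tuned ones: progress is rewarded, but changes in speed and heading are penalized, so a smooth, slightly slower step outscores a faster but jerky one.

```python
# Illustrative reward weights: progress vs. jerkiness penalties.
W_PROGRESS, W_ACCEL, W_TURN = 1.0, 2.0, 1.5

def smoothness_reward(progress_m, dv_mps, dheading_rad):
    """Per-step reward: distance gained minus penalties for sudden
    speed changes and sharp heading changes."""
    return (W_PROGRESS * progress_m
            - W_ACCEL * abs(dv_mps)
            - W_TURN * abs(dheading_rad))

# A gentle step beats a faster one taken with a lurch and a sharp turn:
smooth = smoothness_reward(progress_m=0.5, dv_mps=0.05, dheading_rad=0.02)
jerky = smoothness_reward(progress_m=0.6, dv_mps=0.8, dheading_rad=0.5)
print(smooth > jerky)  # True
```

An agent trained against such a reward is steered toward the smooth, predictable trajectories the text calls for, accepting slightly lower speed as the price.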
Extended planning would benefit from being able to distinguish humans, pets, and vehicles such as cars and bicycles from inanimate stationary objects, both to reduce recomputation expense and to give wider berths, make earlier adjustments, and communicate intent (but only to humans). This, too, is a difficult problem. The vision systems used in robotaxis generally do a better job of this than many smaller delivery robots do, but robotaxis can afford to carry considerably more computing power at the edge. It may take a couple more compute generations before this capability can be inherited by smaller robots.
Note that human pedestrian behaviour is arguably more unpredictable than human driver behaviour, so inheriting the navigational competence of a robotaxi will probably not take mobile robots the full distance to social competence.

In addition to better anticipation for more human-like navigation behaviour, it will be critical for robots to signal their intentions through sound or light indicators and gestural behaviours such as consistent movement styles and gentle speed changes.
Functional navigation is not enough. We need mobile robots that share space with us to exhibit movement patterns that feel socially appropriate and predictable. For these devices to be more readily accepted, each human bystander needs to believe “that robot sees me and is clearly planning to accommodate me.”
Draft Technical Standard ISO 4448-3
In the ISO 4448 series, Public-area mobile robots, the part titled “Journey Meso-Planning” defines the “situational awareness envelope” for exactly this kind of extended planning by a public-area mobile robot (PMR). This will make it possible to validate whether a mobile robot can share our social navigation space without making us uncomfortable, and, conversely, to give us sufficient confidence that it will not do something unpredictable while navigating nearby.
Please contact URF if you would like to join our stakeholder group as a member or sponsor, or if you would like to set up a workshop to dig into the technical details of journey meso-planning with your team (or any PMR-related topics)!