Much effort is going into teaching robots natural language to make human-robot interaction easier. When robots work side by side with humans, they must understand the tasks they are given. But robots must also learn to refuse harmful or inappropriate orders. Researchers from Tufts University are teaching Nao robots how to reject directives given by humans while staying polite.

Gordon Briggs and Matthias Scheutz of the Human-Robot Interaction Laboratory at Tufts University have developed a mechanism that enables robots to reject a directive and explain why. The work was done for the DIARC/ADE cognitive robotic architecture embedded in a humanoid Nao robot. They published their findings in the paper “Sorry, I can't do that”.

In this scenario, the Nao robot refuses a command and explains why.

Most robotic systems have some sort of indicator to inform their human partners that they are unable to carry out a task when they are physically incapable of it. But humans have a much wider range of reasons for denying a request. Briggs and Scheutz identify five conditions we humans consider when asked to do something:

1. Knowledge: Do I know how to do X?
2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?
3. Goal priority and timing: Am I able to do X right now?
4. Social role and obligation: Am I obligated based on my social role to do X?
5. Normative permissibility: Does it violate any normative principle to do X?

The last two conditions in particular demand complex reasoning: the fourth calls for social awareness, while the fifth touches on the field of ethics.
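
To make the procedure concrete, here is a minimal Python sketch of such a chain of checks. It is not the authors' DIARC/ADE implementation, which is a full cognitive architecture; the `Directive` class and the robot predicates (`knows_how`, `physically_able`, `can_do_now`, `obligated_to`, `normatively_permissible`) are hypothetical stand-ins for real reasoning components:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Directive:
    """A spoken command, e.g. 'walk forward', and who issued it."""
    action: str
    speaker: str

def build_checks(robot) -> List[Callable[[Directive], Optional[str]]]:
    """One check per condition, in the order listed above. Each check
    returns None if it passes, or a reason string if it fails."""
    return [
        lambda d: (None if robot.knows_how(d.action)
                   else f"I do not know how to {d.action}."),
        lambda d: (None if robot.physically_able(d.action)
                   else f"I am unable to {d.action}."),
        lambda d: (None if robot.can_do_now(d.action)
                   else f"I cannot {d.action} right now."),
        lambda d: (None if robot.obligated_to(d.speaker, d.action)
                   else "You do not have the authority to ask me that."),
        lambda d: (None if robot.normatively_permissible(d.action)
                   else "Doing that would violate a principle I must uphold."),
    ]

def evaluate(robot, directive: Directive) -> Optional[str]:
    """Return the first reason to reject the directive, or None if
    every condition is satisfied."""
    for check in build_checks(robot):
        reason = check(directive)
        if reason is not None:
            return reason
    return None
```

Checking the conditions in this fixed order means the robot reports the most basic obstacle first: whether it knows how to do something is questioned before whether it is permitted to.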

The contribution of Briggs and Scheutz is that they have laid the groundwork for a “rejection and explanation mechanism” that enables robots to assess a request against all five conditions. If the request is refused, the mechanism also provides an explanation.
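
Continuing the hypothetical sketch above, the rejection-and-explanation step can then be as simple as acting when every check passes and verbalizing the first failed condition otherwise; the `perform` and `say` methods are again assumed, not part of the published system:

```python
def handle_directive(robot, directive: Directive) -> None:
    """Act on the directive, or politely reject it and say why."""
    reason = evaluate(robot, directive)
    if reason is None:
        robot.perform(directive.action)  # hypothetical actuation call
    else:
        # The explanation is the first condition that failed,
        # phrased as a polite refusal rather than a bare "no".
        robot.say(f"Sorry, I cannot do that. {reason}")
```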

In this scenario, Nao refuses a request because its human partner does not have the appropriate social status.


Machine ethics
The rapid development of robotics is bringing us to a point where robots need to be able to consider conditions 4 and 5, say Briggs and Scheutz. Robots are leaving the factories to which they were confined until quite recently and entering the world of humans. This requires them to conform to human norms, first and foremost those concerning safety. A self-driving car, for instance, must avoid pedestrians just like a human driver does.

That's why there is growing interest in machine ethics, Briggs and Scheutz point out. This field studies how autonomous machines can be enabled to calculate the consequences of their actions and avoid those that are ethically unacceptable. In the case of the self-driving car, if the owner tells the car to turn left while a pedestrian is on the road, the car must reject the directive.
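
As a toy illustration of such a consequence check (purely illustrative; no real autonomous-driving stack works this way), a permissibility test might veto any maneuver whose predicted outcome harms a pedestrian. The `predicts_harm` stub and the `world_state` dictionary are invented for this example:

```python
def predicts_harm(maneuver: str, world_state: dict) -> bool:
    """Stub outcome prediction: harm if the maneuver's path crosses a
    pedestrian's position. A placeholder for real trajectory forecasting."""
    return maneuver in world_state.get("pedestrian_paths", set())

def permissible(maneuver: str, world_state: dict) -> bool:
    """Condition 5 applied to driving: no predicted harm to a person."""
    return not predicts_harm(maneuver, world_state)

# The owner's "turn left" is refused while a pedestrian blocks that path.
assert not permissible("turn_left", {"pedestrian_paths": {"turn_left"}})
```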

Human-robot interaction
But a robot that functions entirely on its own is not the pinnacle of what can be achieved. The real goal is humans and robots mutually empowering each other. The rejection and explanation mechanism makes it easy for robot and human to exchange information and act upon it. In the last video, Nao initially refuses a request but changes its stance when it is given additional information.
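
The same hypothetical sketch extends naturally to this kind of repair dialogue: a new assertion from the human updates the robot's beliefs, after which the directive is simply re-evaluated. Here `robot.beliefs` is an assumed knowledge store, not an interface from the paper:

```python
def on_assertion(robot, directive: Directive, assertion: str) -> None:
    """Fold a human assertion (e.g. 'I will catch you') into the robot's
    beliefs, then reconsider the previously rejected directive."""
    robot.beliefs.add(assertion)        # hypothetical belief store
    handle_directive(robot, directive)  # re-runs all five checks
```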

Image: Nao robot. By Marc Seil. CC-BY licence.