Professor Alan Winfield
The simulation enables the robot to predict the consequences of its own actions without committing to them. In this case, Robot A senses that Robot H is headed toward the hole. When it runs a sequence of what-if scenarios, it predicts that if it moves ahead right it will block H's path, and that if it moves ahead left it will reach its destination goal.

The mechanism that predicts consequences is called a Consequence Engine. The next step is to attach weight to the possible actions. Here an 'ethical rule' is introduced: saving the proxy-human always takes precedence over all other possible actions, even if it compromises the robot's own well-being.
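
To make the idea concrete, the following Python sketch shows a consequence-engine-style selection loop. It is only an illustration, not the architecture used in the experiment: the action names, the Outcome fields and the hard-coded predictions are assumptions standing in for the robot's internal simulation.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        human_falls_in_hole: bool   # predicted by the internal what-if simulation
        robot_reaches_goal: bool

    def simulate(action: str) -> Outcome:
        """Stand-in for the robot's internal simulation of one candidate action."""
        predicted = {
            "ahead-left":  Outcome(human_falls_in_hole=True,  robot_reaches_goal=True),
            "ahead-right": Outcome(human_falls_in_hole=False, robot_reaches_goal=False),
            "stand-still": Outcome(human_falls_in_hole=True,  robot_reaches_goal=False),
        }
        return predicted[action]

    def ethical_score(outcome: Outcome) -> tuple:
        # Ethical rule: keeping the proxy-human safe always outranks the robot's
        # own goal, so human safety is the first element of the sort key.
        return (not outcome.human_falls_in_hole, outcome.robot_reaches_goal)

    def choose_action(candidate_actions):
        # Simulate every candidate action, then pick the one whose predicted
        # consequences score best under the ethical rule.
        return max(candidate_actions, key=lambda a: ethical_score(simulate(a)))

    if __name__ == "__main__":
        print(choose_action(["ahead-left", "ahead-right", "stand-still"]))
        # -> "ahead-right": blocking H's path saves the human, even though
        #    the robot then fails to reach its own goal.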

“What it means”, continued Prof. Winfield, “is that, providing the robot can sense the world sufficiently well, and can then initialize – in other words, reflect – what it sees accurately in its internal model, then it has the ability to simulate a number of next possible actions. The ethical rule simply chooses one of those actions on the basis of the simulated future consequences. Because the simulation has been initialized with the world as it is at this particular moment, the robot should be able to cope with unknown situations.

“What we are doing is still very difficult. There are hard problems that need to be solved, such as accurate sensing. So a practical ethical robot that can deal with the real world, even made the way we are making it, is still a long way off. It will most likely require all sorts of advances in sensing as well as extensions to the basic architecture we've developed.”

To my question of whether the robot can be called ethical even though it makes no moral judgments of its own, Prof. Winfield responded: “Being programmed to behave ethically does not mean you are not ethical. The difference between you and me and our simple robot is that you and I can choose to behave ethically or not. And that's a responsibility of being adult humans. But our simple robots cannot choose – they are hard-wired to behave in the way they do.”

Ethical choice
The issue of moral judgment also came up in the experiment when Prof. Winfield and colleagues introduced a second proxy-human (H2) onto the football field. H and H2 are both headed towards the hole, presenting robot A with the dilemma of which one to save. The experiment was run multiple times: in some cases A saved H, in others H2, and sometimes even both. However, in several instances A was unable to decide and kept going back and forth between the two proxy-humans, eventually saving neither. In their paper, Prof. Winfield and his colleagues write: “We could introduce a rule, or heuristic, that allows A to choose H or H2, but deliberately chose not to on the grounds that such a rule should be determined on ethical rather than engineering grounds. If ethical robots prove to be a practical proposition their design and validation will need to be a collaborative effort of roboticist and ethicist.”
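
The dithering is easy to reproduce in the abstract. The toy sketch below is not the published experiment's code: it assumes a noisy, distance-based estimate of how costly each rescue would be, and shows that, without a tie-breaking rule, the preferred target can flip from one control cycle to the next when H and H2 are symmetrically placed.

    import random

    def predicted_rescue_cost(robot_pos: float, human_pos: float) -> float:
        """Noisy estimate of how costly it is to reach and block this human."""
        return abs(human_pos - robot_pos) + random.gauss(0.0, 0.3)

    def pick_target(robot_pos: float, h_pos: float, h2_pos: float) -> str:
        # With no ethical tie-breaking rule, the robot simply heads for whichever
        # rescue currently looks cheaper; small sensing/simulation noise can then
        # flip that choice on every control cycle.
        cost_h = predicted_rescue_cost(robot_pos, h_pos)
        cost_h2 = predicted_rescue_cost(robot_pos, h2_pos)
        return "H" if cost_h <= cost_h2 else "H2"

    if __name__ == "__main__":
        choices = [pick_target(robot_pos=0.0, h_pos=1.0, h2_pos=-1.0) for _ in range(10)]
        print(choices)  # typically a mix of "H" and "H2": the target keeps changing,
                        # so A oscillates between the two and may end up saving neither.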