The second time I met Alice, she was already able to stand up. When she smiled, I involuntarily smiled back. It took a while for me to realize that I was sending nonverbal signals to an entity that was not able to receive them. That says something about the Alice robot, and it says something about me. The facial expressions of the robot are so refined that I responded to them automatically as a social being [1].

The meeting with Alice took place in the lab of the Services of Electromechanical Care Agencies (SELEMCA) project, based at VU University Amsterdam. The project team is studying how intelligent systems, such as robots, can interact with their users in a more human manner. The social issue underlying the project is the growing demand for care services: as the population ages, the number of people needing care will keep rising faster than the number of available care professionals. To be able to offer people adequate care in the future, work is currently underway on technological solutions that could take over some of the care tasks. To make dealing with a technological care system pleasant for users, SELEMCA is developing the human-friendly I-Care system for care services.

Johan F. Hoorn, who holds doctorates in both literature and science, is the principal investigator and project manager of SELEMCA. He was enthusiastic when talking about the goals, achievements and obstacles of the project: ‘The core of SELEMCA is the scientific investigation of intelligence, emotion and creativity. This is surrounded by a shell of machine code and machine behavior, consisting of a number of programs that can simulate these capabilities. There are also specific functionalities – things that can be meaningful to someone or actions that someone can perform. They collectively form the I-Care system, which runs in the background. Finally, there is the interface that makes the I-Care system visible to the outside world.’
Alice’s facial expressions


Machines with human capabilities

An example of how this layered structure works out in practice is the research into the emotional component of moral reasoning. A robot that acts strictly according to an ethical code will be experienced by humans as coldly rational and therefore threatening. In a scientific article about Moral Coppélia, to which Johan contributed as a co-author, this was illustrated using the cart and footbridge dilemmas, better known as the trolley problem.

A cart traveling at dangerously high speed is heading along a railway track towards a group of five people. By throwing a switch, the cart can be diverted onto a track where just one person is standing. The choice for the moral agent is to take action, saving five lives at the expense of one, or to let the cart continue on its course, resulting in five fatalities. In another scenario, the moral agent is standing on a footbridge next to another person. Here again the cart threatens five people, and this time the choice is whether to throw the other person off the bridge in order to stop the cart. Although taking action results in one death instead of five in both cases, people will generally choose to throw the switch but draw the line at actively throwing someone from a bridge. This is because they do not reason from purely ethical principles, but also let emotions play a part in their moral decisions. By contrast, a robot with purely rational moral reasoning will always sacrifice the one person for the benefit of the larger number.
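The difference between the two dilemmas can be sketched in a few lines of Python. The function name, weights and the notion of a ‘personal force’ factor below are purely illustrative assumptions, not the published Moral Coppélia model; the point is only that adding an affective cost to a utilitarian calculus flips the footbridge decision:

```python
def moral_choice(lives_saved, lives_lost, personal_force, affect_weight=0.7):
    """Toy dual-process moral decision (illustrative, not Moral Coppelia):
    utilitarian benefit minus an emotional aversion that grows with how
    'personal' the required harm is."""
    utilitarian = lives_saved - lives_lost      # purely rational calculus
    aversion = affect_weight * personal_force   # emotional cost of acting
    return "act" if utilitarian - aversion > 0 else "refrain"

# Switch dilemma: harm is impersonal, so the agent acts.
print(moral_choice(5, 1, personal_force=1))   # -> act
# Footbridge dilemma: pushing someone is highly personal, so the agent refrains.
print(moral_choice(5, 1, personal_force=8))   # -> refrain
```

With the affective term set to zero, both scenarios reduce to the purely rational five-versus-one count and the agent would act in both, which is exactly the behavior people find threatening.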

Johan Hoorn

People would not like a robot that throws people off bridges, so Johan and his colleagues are developing a system that integrates emotional intelligence into moral reasoning. Systems of this sort, which simulate human capabilities such as affection, moral reasoning and creativity, are built into I-Care and are expressed in the functions offered to care recipients. If a patient with a broken leg does not want to eat, the robot respects the patient’s autonomy and leaves the decision to the patient, but with an Alzheimer patient with reduced autonomy, the robot would offer the food again. Creativity is also a significant aspect. Instead of repeatedly putting the plate in front of the patient, which would probably only provoke more and more resistance, the robot can try an alternative approach, such as taking a spoonful of food and pretending it’s an airplane.
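The decision logic described here can be summarized as a small policy sketch. Everything below (the autonomy score, the threshold, the retry count) is an assumption made for illustration and is not taken from the I-Care code:

```python
def respond_to_refusal(patient_autonomy, attempts):
    """Toy care-policy sketch (illustrative, not the I-Care implementation):
    respect a refusal when the patient is autonomous; otherwise retry,
    switching to a creative strategy instead of plain repetition."""
    if patient_autonomy >= 0.7:     # e.g. a patient with a broken leg
        return "respect refusal"
    if attempts < 1:                # e.g. an Alzheimer patient, first refusal
        return "offer food again"
    return "try creative approach"  # e.g. the airplane-spoon game

print(respond_to_refusal(0.9, attempts=0))  # -> respect refusal
print(respond_to_refusal(0.2, attempts=0))  # -> offer food again
print(respond_to_refusal(0.2, attempts=2))  # -> try creative approach
```

The key design point is that repetition is capped: rather than escalating the same action and provoking more resistance, the policy switches strategy.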



Alice and DARwIn

The interface that makes the I-Care system visible to the outside world is an important element. According to Johan, the interface can take virtually any imaginable form. It can be a robot, a toy, a doll or a virtual agent on a screen, but behind it is always the same system. It does not have to look like a person, but it does act like a person. A coffee machine, for example, could act as an avatar of the I-Care system. Users may think that they are working with three different devices, but in fact they are simply interacting with the I-Care system in three different manifestations. After all, the original meaning of ‘avatar’ is the earthly incarnation of a god, such as Vishnu.
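This one-system, many-avatars architecture is a classic delegation pattern. The class and method names below are invented for illustration; the sketch only shows that every manifestation forwards to the same backend:

```python
class ICareSystem:
    """One shared backend; the avatars are interchangeable fronts.
    (Illustrative sketch; names are not from the actual SELEMCA code.)"""
    def decide(self, event):
        return f"response to {event}"

class Avatar:
    def __init__(self, name, system):
        self.name = name
        self.system = system
    def handle(self, event):
        # Every manifestation delegates to the same underlying system.
        return f"{self.name}: {self.system.decide(event)}"

core = ICareSystem()
for front in (Avatar("robot", core), Avatar("doll", core),
              Avatar("coffee machine", core)):
    print(front.handle("patient skipped breakfast"))
```

Whichever ‘device’ the user addresses, the decision comes from the single shared `ICareSystem` instance, which is the sense in which the user is really dealing with one system in three manifestations.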

The Alice robot is one of the avatars in which the I-Care system can manifest itself. Thanks to the human facial expressions of the robot, many users find it a nice way to communicate with the system. However, in terms of physical development Alice is still at a relatively rudimentary stage. It can stand up, but it can hardly perform any actions. Alice’s companion DARwIn-OP, whose name is short for ‘Dynamic Anthropomorphic Robot with Intelligence – Open Platform’, is a lot more agile and can perform physical tasks.

However – as Johan mentioned – robots are not the only type of interface. In the SELEMCA lab they are also working on an interactive bicycle. Alzheimer patients are not good at following therapy programs: after they sit down on an exercise bike to get the required exercise, they quickly become distracted and get off again. Johan and his team are developing a virtual environment that gives the patient the feeling of cycling through the city and increases their attention span. The team would like to extend this to allow the patient to cycle virtually alongside a close friend or relative, by establishing an online link to another person, such as the patient’s son, who in reality is biking to work. This gives the patient physical exercise combined with human contact, without the risk of ending up under a bus. The companion cyclist is visualized on a handlebar screen as an avatar. Having the companion act as an interface for I-Care gives the system very human traits. Throughout the day, the I-Care system in its various manifestations cares for the patient without the patient being consciously aware of it.

Fashioning the future today

It’s essential to build I-Care as an open, modular platform, according to Johan: ‘Everything we develop is open and available to the entire world. What we offer is a structure or abstraction, and what you attach to this structure is up to you.’ That applies equally well to users and developers. If a company in the industry wants to offer its own module and wishes to screen off part of it in order to make a profit with it, that is possible. ‘I like to describe the lab as a sort of cathedral with lots of little shops clustered around it, like the ones you see around old cathedrals where you can buy things that communicate the religious message. In this case, we would like to see interface designers, robotic sensing companies and electromechanical companies set up shop around the lab. Almost literally, so that there is face-to-face contact every day and the knowledge about I-Care that we have here can be put into practice by the companies and industrial organizations.

‘That’s the sticking point right now, because things are very quiet on the commercial side. It’s a bit strange, because we are certain that there will be a market for this in ten years. You hardly need to do any market research, because we worked with the end users in the development process. Care providers and people in need of care have personally contributed to the concept that we have created here. For the government, this offers a solution to a growing problem, and for companies it offers a business opportunity, so I don’t understand why people are so reluctant to run with it. What we do here arouses more interest in Hong Kong and South Korea than here in Europe. Here everybody says, “Very interesting, really special, good job”, but that’s it. We lack a real innovation climate. People talk about innovation all the time and there are a thousand committees, but all the committees get in the way of innovation. I don’t want committees; I want effective action.

‘In terms of technology, a lot is already possible with robotics, but there is a lack of cooperation. Alice has well-developed facial expressions, but the body of the robot is fairly limited. If you look at DARwIn, the body motion is quite good but it doesn’t have any facial expression. The machines developed by the DARPA projects in the USA can be kicked around without falling over – they recover their balance and keep on walking – but they are totally lacking in creativity. There are all sorts of bits and pieces that work well on their own, but we still don’t have an integrated platform. What we need is for all these people to get together and integrate everything that is already possible. You would be amazed at the results – it’s unbelievable what you could do then.’

[1] This article was first published in the Elektor July & August issue.

SELEMCA is part of the Creative Industry Scientific Programme (CRISP), with funding from the Netherlands Ministry of Education, Culture and Science.

Many thanks to the Waag Society for organizing the PhDO – Trust Me, I’m a Robot event and permission to use the photos of Johan Hoorn and DARwIn under the CC BY 2.0 license.
