by Nicola Plant, Queen Mary University of London
(See end for links to related careers)
Think of the perfect robot companion: a robot you can hang out with and chat to, one that understands how you feel. Robots can already understand some of what we say and talk back. They can even respond to the emotions we express in the tone of our voice. But what about body language? We also show how we feel by the way we stand, we describe things with our hands, and we communicate with the expressions on our faces. Could a robot use body language to show that it understands how we feel? Could a robot show empathy?

If a robot companion did show this kind of empathetic body language, we would likely feel that it understood us and shared our feelings and experiences. For robots to be able to behave like this, though, we first need to understand more about how humans use movement to show empathy with one another.
Think about how you react when a friend talks about their headache. You wouldn’t stay perfectly still. But what would you do? We’ve used motion capture to track people’s movements as they talk to each other. Motion capture is the technology used in films to make computer-animated creatures like Gollum in The Lord of the Rings, or the apes in Planet of the Apes. Lots of cameras are used together to create a very precise computer model of the movements being recorded. Using motion capture, we’ve been able to see what people actually do when chatting about their experiences.
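If you're curious what looking at motion capture data might involve, here is a minimal sketch in Python. Everything in it is invented for illustration (the marker positions, the frame rate and the threshold are made-up numbers, not data from the actual study): it takes a sequence of 3D positions for one tracked hand marker and flags the frames where the hand suddenly speeds up, which is one very simple way of spotting when someone starts to gesture.

```python
import math

# Hypothetical motion capture data: (x, y, z) positions in metres of one
# hand marker, recorded at regular intervals (say, 100 frames per second).
hand_marker = [
    (0.10, 1.05, 0.30),
    (0.11, 1.06, 0.30),
    (0.18, 1.15, 0.28),  # the hand suddenly moves - a gesture may be starting
    (0.25, 1.22, 0.27),
    (0.26, 1.23, 0.27),
]

def speeds(frames):
    """Distance moved between each pair of consecutive frames."""
    return [math.dist(a, b) for a, b in zip(frames, frames[1:])]

# Frames where the marker moves faster than a threshold are more likely
# to be part of a deliberate gesture than ordinary fidgeting.
THRESHOLD = 0.05  # metres per frame (an arbitrary choice for this sketch)
for frame, speed in enumerate(speeds(hand_marker), start=1):
    if speed > THRESHOLD:
        print(f"Possible gesture around frame {frame}: moved {speed:.3f} m")
```

Real motion capture recordings track dozens of markers over thousands of frames, but the idea is the same: turn movement into numbers you can analyse.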
It turns out that we share our understanding of things like a headache by performing it together. We share the actions of the headache as if we have it ourselves. If I hit my head, wince and say ‘ouch’, you might wince and say ‘ouch’ too – you give a multimodal performance, with actions and words, to show me you understand how I feel.
So should we just program robots to copy us? It isn’t as simple as that. We don’t copy each other exactly, and a perfect copy wouldn’t show understanding of how we feel. A robot doing that would seem like a parrot, repeating things without any understanding. For the robot to show that it understands how you feel, it must perform a headache like it owns it – as though it were really its own! That means behaving in a similar way to you, but adapted to the unique type of headache it has. The sketch below gives a toy version of the idea.
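Here is a small Python sketch of the difference between parroting and "owning" a response. All the gesture names and phrases are invented for this example, not taken from the research: instead of replaying the person's movement exactly, the robot picks a similar but not identical action from its own repertoire and pairs it with words, giving a rough, multimodal "that happens to me too" response.

```python
import random

# Invented example: gestures grouped by the feeling they express.
# A real robot companion would need to learn these from observed behaviour.
GESTURE_FAMILIES = {
    "pain": ["wince", "rub_temple", "screw_eyes_shut"],
    "joy": ["smile", "nod_quickly", "open_arms"],
}

def empathetic_response(observed_gesture, feeling):
    """Respond with a similar gesture from the same family,
    but not an exact copy of what the person just did."""
    options = [g for g in GESTURE_FAMILIES[feeling] if g != observed_gesture]
    gesture = random.choice(options) if options else observed_gesture
    words = {"pain": "Ouch, that sounds sore.", "joy": "That's brilliant!"}[feeling]
    return gesture, words

# The person winces while describing a headache...
gesture, words = empathetic_response("wince", "pain")
print(f"Robot performs '{gesture}' and says: {words}")
```

A parrot-robot would just replay "wince"; this one responds with something of its own from the same family, which is closer to (though still a long way from) real empathy.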
Designing the way robots should behave in social situations isn’t easy. If we work out exactly how humans interact with each other to share their experiences, though, we can use that understanding to program robot companions. Then one day your robot friend will be able to hang out with you, chat and show it understands how you feel. Just like a real friend.
multimodal = using two or more different ways of doing something. With communication, that might be spoken words, facial expressions and hand gestures.
This article was previously published on the original CS4FN website and a copy is on page 16 of issue 19 of the CS4FN magazine, which you can read by clicking on the magazine cover below.

See also (previous post and related career options)
We have recently written about the AMPER project, which uses a tablet-based AI tool / robot to support people with dementia and their carers. It prompts the person to discuss events from their younger life and adapts to their needs. We also linked this with information about the types of careers people working in this area might have. The examples given were from a project based in the Netherlands called ‘Dramaturgy for Devices’, which uses lessons learned from the study of theatre and theatrical performances to design social robots whose behaviour feels more natural and friendly to the humans who’ll be using them.
See our collection of posts about Career paths in Computing.
EPSRC supports this blog through research grant EP/W033615/1.




