Computers that read emotions

by Matthew Purver, Queen Mary University of London

One of the ways that computers could be more like humans – and maybe pass the Turing test – is by responding to emotion. But how could a computer learn to read human emotions out of words? Matthew Purver of Queen Mary University of London tells us how.

Have you ever thought about why you add emoticons to your text messages – symbols like 🙂 and :-@? Why do we do this with some messages but not with others? And why do we use different words, symbols and abbreviations in texts, Twitter messages, Facebook status updates and formal writing?

In face-to-face conversation, we get a lot of information from the way someone sounds, their facial expressions, and their gestures. In particular, this is the way we convey much of our emotional information – how happy or annoyed we’re feeling about what we’re saying. But when we’re sending a written message, these audio-visual cues are lost – so we have to think of other ways to convey the same information. The ways we choose to do this depend on the space we have available, and on what we think other people will understand. If we’re writing a book or an article, with lots of space and time available, we can use extra words to fully describe our point of view. But if we’re writing an SMS message when we’re short of time and the phone keypad is slow to use, or if we’re writing on Twitter and only have 140 characters of space, then we need to think of other conventions. Humans are very good at this – we can invent and understand new symbols, words or abbreviations quite easily. If you haven’t seen the 😀 symbol before, you can probably guess what it means – especially if you know something about the person texting you, and what you’re talking about.

But computers are terrible at this. They’re generally bad at guessing new things, and they’re bad at understanding the way we naturally express ourselves. So if computers need to understand what people are writing to each other in short messages like on Twitter or Facebook, we have a problem. But this is something researchers would really like to do: for example, researchers in France, Germany and Ireland have all found that Twitter opinions can help predict election results, sometimes better than standard exit polls – and if we could accurately understand whether people are feeling happy or angry about a candidate when they tweet about them, we’d have a powerful tool for understanding popular opinion. Similarly, we could automatically find out whether people liked a new product when it was launched; and some research even suggests you could predict the stock market. But how do we teach computers to understand emotional content, and learn to adapt to the new ways we express it?

One answer might be in a class of techniques called semi-supervised learning. By taking some example messages in which the authors have made the emotional content very clear (using emoticons, or specific conventions like Twitter’s #fail or abbreviations like LOL), we can give ourselves a foundation to build on. A computer can learn the words and phrases that seem to be associated with these clear emotions, so it understands this limited set of messages. Then, by allowing it to find new data with the same words and phrases, it can learn new examples for itself. Eventually, if it sees a new symbol or phrase alongside emotional patterns it already knows often enough to be confident, it can learn that too – and then we’re on our way towards an emotionally aware computer. However, we’re still a fair way off getting it right all the time, every time.
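
To make the idea concrete, here is a minimal sketch of emoticon-seeded self-training in Python. The messages, seed markers and confidence threshold are all invented for illustration – this is not the researchers’ actual system, just the shape of the technique: label messages that carry unambiguous markers, learn word–emotion associations from them, then let confidently scored messages join the training data so new words get picked up from context.

```python
# A minimal sketch of emoticon-seeded self-training for emotion labelling.
# The messages and the confidence threshold are made up for illustration.
from collections import Counter

HAPPY_SEEDS = {":)", ":-)", "😀", "lol"}
SAD_SEEDS = {":(", ":-(", "#fail", ":-@"}

def seed_label(msg):
    """Label a message only if it contains an unambiguous marker."""
    words = set(msg.lower().split())
    if words & HAPPY_SEEDS and not words & SAD_SEEDS:
        return "happy"
    if words & SAD_SEEDS and not words & HAPPY_SEEDS:
        return "unhappy"
    return None  # unlabelled -- the computer must work these out later

def train(labelled):
    """Count how often each word appears with each emotion."""
    counts = {"happy": Counter(), "unhappy": Counter()}
    for msg, label in labelled:
        counts[label].update(msg.lower().split())
    return counts

def score(msg, counts):
    """Return (label, confidence) from the learned word associations."""
    words = msg.lower().split()
    happy = sum(counts["happy"][w] for w in words)
    unhappy = sum(counts["unhappy"][w] for w in words)
    total = happy + unhappy
    if total == 0:
        return None, 0.0
    return ("happy" if happy >= unhappy else "unhappy"), max(happy, unhappy) / total

messages = [
    "loved the new phone :)",
    "train cancelled again #fail",
    "loved the gig last night",         # no emoticon: label must be inferred
    "cancelled flight, awful service",  # ditto
]

labelled = [(m, seed_label(m)) for m in messages if seed_label(m)]
unlabelled = [m for m in messages if not seed_label(m)]

# Self-training loop: confidently labelled messages join the training data,
# so words like "awful" can be picked up from context.
for _ in range(3):
    counts = train(labelled)
    still_unlabelled = []
    for m in unlabelled:
        label, confidence = score(m, counts)
        if label and confidence >= 0.75:
            labelled.append((m, label))
        else:
            still_unlabelled.append(m)
    unlabelled = still_unlabelled

print(labelled)
```

Real systems use much better statistical models and far more data, but the loop is the same: seed, score, absorb the confident guesses, and repeat.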


This article was first published on the original CS4FN website and a copy can be found on pages 16-17 of Issue 14 of the CS4FN magazine, “The genius who gave us the future”. You can download a free PDF copy below, and download all of our free magazines and booklets from our downloads site.



Blade: the emotional computer

Zabir talking to Blade, who is reacting. Image taken from video by Zabir for QMUL.

Communicating with computers is clunky to say the least – we even have to go to IT classes to learn how to talk to them. It would be so much easier if they went to school to learn how to talk to us. If computers are to communicate more naturally with us we need to understand more about how humans interact with each other.

The most obvious way that we communicate is through speech – we talk, we listen – but actually our communication is far more subtle than that. People pick up lots of information about our emotions and what we really mean from our expressions and the tone of our voice – not from what we actually say. Zabir, a student at Queen Mary, was interested in this so he decided to experiment with these ideas for his final year project. He used a kit called Lego Mindstorms that makes it really easy to build simple robots. The clever stuff comes in because, once built, Mindstorms creations can be programmed with behaviour. The result was Blade.

In the video above you can see Blade the robot respond. (Video by Zabir for QMUL.)

Blade, named after the Wesley Snipes film, was a robotic face capable of expressing emotion and responding to the tone of the user’s voice. Shout at Blade and he would look sad. Talk softly and, even though he could not understand a word of what you said, he would start to appear happy again. Why? Because your tone says what you really mean, whatever the words – that’s why parents talk gobbledegook softly to babies to calm them.

Blade was programmed using a neural network, a computer science model of the way the brain works, so he had a brain similar to ours in some simple ways. Blade learnt how to express emotions very much like children learn – by tuning the connections between his artificial neurons based on his experience. Zabir spent a lot of time shouting and talking softly to Blade, teaching him what the tone of his voice meant and so how to react. Blade’s behaviour wasn’t directly programmed; it was the ability to learn that was programmed.
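
Zabir’s actual network isn’t described in detail here, but the sketch below shows the flavour of what ‘tuning the connections’ means: a single artificial neuron is trained, by trial and error, to map made-up voice-tone features (loudness and pitch variation) to ‘being shouted at’ or ‘being spoken to softly’, and the robot’s reaction follows from that. All the numbers are illustrative assumptions, not Blade’s real features.

```python
# A tiny sketch of the idea behind Blade's learning: one artificial neuron
# trained on made-up voice-tone features. Blade's real network and features
# are not described in the article, so treat the numbers as illustrative.
import math
import random

# Each example: (loudness 0-1, pitch variation 0-1) -> 1 = shouting, 0 = soft
training_data = [
    ((0.9, 0.8), 1), ((0.8, 0.7), 1), ((0.95, 0.9), 1),   # shouted at
    ((0.2, 0.3), 0), ((0.1, 0.2), 0), ((0.3, 0.25), 0),   # spoken to softly
]

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Start with random connection strengths, then tune them with experience
random.seed(0)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
rate = 0.5

for _ in range(1000):                     # learning by repeated experience
    for (loudness, pitch), target in training_data:
        output = sigmoid(weights[0] * loudness + weights[1] * pitch + bias)
        error = target - output
        weights[0] += rate * error * loudness   # nudge each connection
        weights[1] += rate * error * pitch      # towards the right answer
        bias += rate * error

def react(loudness, pitch):
    shouting = sigmoid(weights[0] * loudness + weights[1] * pitch + bias)
    return "look sad" if shouting > 0.5 else "look happy"

print(react(0.9, 0.85))   # shouting   -> look sad
print(react(0.15, 0.2))   # soft voice -> look happy
```

The point is that nothing in the code says “shouting means sad”; the reaction falls out of the examples the network is shown, just as Blade’s did.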

Eventually we had to take Blade apart, which was surprisingly sad. He really did seem to be more than a bunch of Lego bricks. Something about his very human-like expressions pulled on our emotions: the same trick that cartoonists pull with the big eyes of characters they want us to love.

Zabir went on to work in the City for the merchant bank JP Morgan.

– Paul Curzon, Queen Mary University of London


⬇️ This article has also been published in two CS4FN magazines – first on page 13 of Issue 4, Computer Science and BioLife, and then again on page 18 of Issue 26 (Peter McOwan: Serious Fun), our magazine celebrating the life and research of Peter McOwan (who co-founded CS4FN with Paul Curzon and researched facial recognition). There’s also a copy on the original CS4FN website. You can download free PDF copies of both magazines below, and any of our other magazines and booklets from our CS4FN Downloads site.

The video below, Why faces are special, from Queen Mary University of London asks the question “How does our brain recognise faces? Could robots do the same thing?”

Peter McOwan’s research into face recognition informed the production of this short film. Designed to be accessible to a wide audience, the film was selected as one of the 55 finalists from 1450 films submitted to the CERN CineGlobe film festival 2012.

Related activities

We have some fun paper-based activities you can do at home or in the classroom.

  1. The Emotion Machine Activity
  2. Create-A-Face Activity
  3. Program A Pumpkin

See more details for each activity below.

1. The Emotion Machine Activity

From our Teaching London Computing website. Find out about programs and sequences and how high-level language is translated into low-level machine instructions.

2. Create-A-Face Activity

From our Teaching London Computing website. Get people in your class (or at home if you have a big family) to make a giant robotic face that responds to commands.

3. Program A Pumpkin

Especially for Hallowe’en, a slightly spookier, pumpkin-ier version of The Emotion Machine above.


EPSRC supports this blog through research grant EP/W033615/1.

How to get a head in robotics

[This article includes a free papercraft activity with a paper robot that expresses ’emotions’.]

If humans are ever to get to like and live with robots, we need to understand each other. One of the ways that people let others know how they are feeling is through the expressions on their faces. A smile or a frown on someone’s face tells us something about how they are feeling and how they are likely to react. We can also tell something of a person’s emotions from their eyes and eyebrows. Some scientists think it might be possible for robots to express feelings this way too, but understanding how a robot can usefully express its ‘emotions’ (what its internal computer program is processing and planning to do next) is still in its infancy. A group of researchers in Poland, at Wroclaw University of Technology, have come up with a clever new design for a robot head that could help a computer show its feelings. It’s inspired by the Teenage Mutant Ninja Turtles cartoon and movie series.

The real Emys orbicularis (European pond turtle). Image by Luis Fernández García, CC BY-SA 3.0, from Wikimedia.

The real Teenage Mutant Ninja Turtle

Their turtle-inspired robotic head is called EMYS, which stands for EMotive headY System – and is cleverly also the name of a European pond turtle, Emys orbicularis. Taking his inspiration from cartoons, the project’s principal ‘head’ designer Jan Kedzierski created a mechanical marvel that can convey a whole range of different emotions by tilting a pair of movable discs, one of which contains highly flexible eyes and eyebrows.

Eye see

The CS4FN/LIREC emotional robot face, with three discs like EMYS. Image by CS4FN.

The lower disc imitates the movements of the human lower jaw, while the upper disc can mimic raising the eyebrows and wrinkling the forehead. There are eyelids and eyebrows linked to each eye. Have a look at your face in the mirror, then try pulling some expressions like sadness and anger. In particular look at what these do to your eyes. In the robot, as in humans, the eyelids can move to cover the eye. This helps in the expression of emotions like sadness or anger, as your mirror experiment probably showed.

Pop eye

But then things get freaky and fun. Following the best traditions of cartoons, when EMYS is ‘surprised’ the robot’s eyes can shoot out to a distance of more than 10 centimetres! This well-known ‘eyes out on stalks’ cartoon technique, which deliberately over-exaggerates how people’s eyes widen and stare when they are startled, is something we instinctively understand even though our eyes don’t really do this. It makes use of the fact that cartoons take the real world to extremes, and audiences understand and are entertained by this sort of comical exaggeration. In fact it’s been shown that people are faster at recognising cartoons of people than recognising the unexaggerated original.

High tech head builder

The mechanical internals of EMYS consist of lightweight aluminium, while the covering external elements, such as the eyes and discs, are made of lightweight plastic using 3D rapid prototyping technology. This technology allows a design on the computer to be ‘printed’ in plastic in three dimensions. The design in the computer is first converted into a stack of thin slices. Each slice of the design, from the bottom up, individually oozes out of a printer and on to the slice underneath, so layer-by-layer the design in the computer becomes a plastic reality, ready for use.

Facing the future

A ‘gesture generator’ computer program controls the way the head behaves. Expressions like ‘sad’ and ‘surprised’ are broken down into a series of simple commands to the high-speed motors, moving the various lightweight parts of the face. In this way EMYS can behave in an amazingly fluid way – its eyes can ‘blink’, its neck can turn to follow a person’s face or look around. EMYS can even shake or nod its head. EMYS is being used on the Polish group’s social robot FLASH (FLexible Autonomous Social Helper) and also with other robot bodies as part of the LIREC project (www.lirec.eu [archived]). This big project explores the question of how robot companions could interact with humans, and helps find ways for robots to usefully show their ‘emotions’.
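
The article doesn’t give EMYS’s real control code, but a minimal sketch of the ‘gesture generator’ idea looks something like this: each named expression is just a list of simple, timed motor commands. The motor names, positions and timings below are invented for illustration.

```python
# A minimal sketch of a 'gesture generator': each named expression is broken
# down into simple timed motor commands. The motor names, positions and
# timings here are invented for illustration, not EMYS's real control code.
import time

# (motor name, target position, seconds to wait before the next step)
EXPRESSIONS = {
    "sad": [
        ("upper_disc_tilt", -15, 0.3),
        ("eyelids", 60, 0.2),         # eyelids half-closed
        ("neck_pitch", -10, 0.4),     # head drops slightly
    ],
    "surprised": [
        ("eyelids", 0, 0.1),          # eyes wide open
        ("eye_extend", 100, 0.2),     # 'eyes out on stalks', in mm
        ("upper_disc_tilt", 20, 0.3),
    ],
}

def send_motor_command(motor, position):
    """Stand-in for the real motor driver: just print what would be sent."""
    print(f"motor {motor} -> {position}")

def perform(expression):
    for motor, position, pause in EXPRESSIONS[expression]:
        send_motor_command(motor, position)
        time.sleep(pause)   # let the movement play out before the next step

perform("surprised")
```

Because expressions are data rather than hard-wired behaviour, giving the robot a new expression is just a matter of adding another list of commands – which is also why the head can blend them into fluid movements like blinking or turning to follow a face.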

Do try this at home

You can program a paper version of an EMYS-like robot. Download and follow the instructions on the Emotion Machine in the printable version below and build your own EMYS.

Print, cut out and make your own emotional robot. The strips of paper at the top (‘sliders’) containing the expressions and letters are slotted into the grooves on the robot’s face, and happy or annoyed faces can be created by moving the sliders.

By selecting a series of different commands in the Emotion Engine boxes, the expression on EMYS’s face will change. How many different expressions can you create? What are the instructions you need to send to the face for a particular expression? What emotion do you think that expression looks like – how would you name it? What would you expect the robot to be ‘feeling’ if it pulled that face?

Emotion Machine sheet: a robot head with strips to thread for eyes, eyebrows and mouth. Click on the image to go to the download page. Activity sheet by CS4FN.

Go further

Why not draw your own sliders, with different eye shapes, mouth shapes and so on? Explore and experiment! That’s what computer scientists do.

– Paul Curzon, Queen Mary University of London



This article was originally published on CS4FN (Computer Science For Fun) and on page 7 of Issue 13 of the CS4FN magazine. You can download a free PDF copy of that issue, as well as all of our other free magazines and booklets.
