Emoticons and Emotions

Emoticons are a simple, easily understood way to express emotions in writing using only letters and punctuation, no special pictures needed. But why might Japanese emoticons be better than Western ones? And can we really trust expressions to tell us about emotions anyway?

Image: African woman smiling (by Tri Le from Pixabay)

The trouble with early online message boards, email and text messages was that it was much harder to express subtleties, including intended emotions, than when talking to someone face to face. Jokes were often assumed to be serious, and flame wars were the result. So when, in 1982, Carnegie Mellon Professor Scott Fahlman suggested using the smiley :-) to indicate a joke in message board posts, a step forward in global peace was probably made. He also suggested that, since posts more often than not seemed to be intended as jokes, a sad face :-( would be more useful, explicitly marking anything that wasn’t a joke.

He wasn’t actually the first to use punctuation characters to indicate emotions, though. The earliest apparently recorded use is from 1648, in the poem “To Fortune” by the English poet Robert Herrick.

Tumble me down, and I will sit
Upon my ruins, (smiling yet:)

Whether this was intentional is disputed, as punctuation wasn’t used consistently then. Perhaps the poet intended it, perhaps it was just a coincidental printing error, or perhaps it was a joke inserted by the printers. Either way, it is certainly an appropriate use (why not write your own emoticon poem!).

You might think that everyone uses the same emoticons you are familiar with, but different cultures use them in different ways. Westerners follow Fahlman’s suggestion, putting them on their side. In Japan, by contrast, they sit the right way up and, crucially, the emotion is all in the eyes, not the mouth, which is represented by an underscore. In this style, happiness can be given by (^_^), and T or ; can be used as an indication of crying, giving sadness: (T_T) or (;_;). In South Korea, the Korean alphabet is used, so a different set of characters is available (though the symbols sit the right way up, as in the Japanese version).
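
Just for fun, here is a toy Python sketch of that structure: the eye character carries the emotion, while the mouth stays a fixed underscore. The little dictionary of eye characters is purely for illustration.

```python
# Toy sketch: building Japanese-style emoticons from parts.
# The emotion lives in the eye character; the mouth stays an underscore.
EYES = {
    "happy": "^",
    "crying": "T",  # ";" also works for tears
}

def kaomoji(emotion: str, mouth: str = "_") -> str:
    eye = EYES[emotion]
    return f"({eye}{mouth}{eye})"

print(kaomoji("happy"))    # (^_^)
print(kaomoji("crying"))   # (T_T)
```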

Automatically understanding people’s emotions, whether by analysing text, faces or other aspects that can be captured, is an important area of research called sentiment analysis. It is important, among other things, for marketeers and advertisers to work out whether people like their products, or which issues matter most to people in elections, so it is big business. Anyone who truly cracks it will be rich.

So, in reality, is the Western version or the Eastern version more accurate: are emotions better detected in the shape of the mouth or of the eyes? With a smile at least, it turns out that it is really the eyes, not the mouth, that give away whether someone is happy. When people put on a fake smile, their mouth curves just as with a natural smile. The difference between fake and genuine smiles, the difference that really shows whether the person is happy, is in the eyes. A genuine smile is called a Duchenne smile, after Duchenne de Boulogne, who in 1862 showed that when people find something actually funny the smile affects the muscles around their eyes, causing a tell-tale crow’s foot pattern in the skin at the sides of the eyes. Some people can fake a Duchenne smile too, though, so even that is not totally reliable.

As emoticons hint, because emotions show in the eyes as much as in the mouth, sentiment analysis based on faces needs to focus on the whole face, not just the mouth. However, all may not be as it seems: other research shows that most of the time people do not actually smile at all when genuinely happy. Just like emoticons, facial expressions are a way we tell other people what we want them to think our emotions are, not necessarily our actual emotions. Expressions are not a window into our souls, but a pragmatic way to communicate important information. They probably evolved for the same reason emoticons were invented: to avoid pointless fights. Researchers trying to create software that works out what we really feel may have their work cut out, if succeeding at their life’s work is what would make them genuinely happy.

     ( O . O )
         0

– Paul Curzon, Queen Mary University of London, Summer 2021

Standup Robots

‘How do robots eat pizza?’… ‘One byte at a time’. Computational Humour is real, but it’s not jokes about computers, it’s computers telling their own jokes.

Image: Robot performing (from iStockphoto)

Computers can create art, stories, slogans and even magic tricks. But can computers perform themselves? Can robots invent their own jokes? Can they tell jokes?

Combining Artificial Intelligence, computational linguistics and humour studies (yes, you can study how to be funny!), a team of Scottish researchers made an early attempt at computerised standup comedy. They came up with STANDUP (System To Augment Non-speakers’ Dialogue Using Puns): a program that generates riddles for kids with language difficulties. STANDUP has a dictionary and a joke-building mechanism, but it does not perform; it just creates the jokes. You will have to judge for yourself whether the puns are funny. You can download the software from here. What makes a pun funny? It is about a word having two meanings at exactly the same time in a sentence. It is also about generating an expectation that you then break: a key idea at the core of creativity too.
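
To get a feel for how a dictionary-plus-template generator might work, here is a minimal Python sketch, loosely in the spirit of that approach. The tiny word lists and the single riddle template are invented for illustration; the real system uses large lexical resources and many joke schemas.

```python
# Minimal sketch of dictionary-plus-template riddle building.
# All three dictionaries are invented for illustration only.
HOMOPHONE = {"bite": "byte"}    # words that sound alike
SUBJECT = {"byte": "robots"}    # something associated with the pun word
ACTION = {"bite": "eat pizza"}  # an action associated with the original word

def make_riddle(word: str) -> str:
    # Swap the word for its homophone, then fill a fixed template.
    pun = HOMOPHONE[word]
    return f"How do {SUBJECT[pun]} {ACTION[word]}? One {pun} at a time."

print(make_riddle("bite"))
# How do robots eat pizza? One byte at a time.
```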

A research team at Virginia Tech in the US created a system that started to learn about funny pictures. Having defined a ‘funniness score’, they created a computational model of humorous scenes and trained it to predict funniness, perhaps with an eye to spotting pics for social media posting, or not.
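
Purely as a sketch of that general idea, not their actual model, here is a toy Python example that fits a simple regression from made-up scene features to made-up funniness ratings (it assumes the scikit-learn library is installed).

```python
# Toy sketch: learn to predict a numeric "funniness score" from scene
# features. The features and scores below are invented; the real work
# used rich descriptions of abstract scenes rated by people.
from sklearn.linear_model import LinearRegression

# Each row is a scene: [number of objects, how oddly they are placed (0-1)]
X = [[3, 0.1], [5, 0.9], [2, 0.2], [6, 0.8]]
y = [1.0, 4.5, 1.5, 4.0]  # human funniness ratings for each scene

model = LinearRegression().fit(X, y)
print(model.predict([[4, 0.7]]))  # predicted funniness of a new scene
```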

But are there funny robots out there? Yes! RoboThespian, programmed by researchers at Queen Mary University of London, and Data, created by researchers at Carnegie Mellon University, are both robots programmed to do stand-up comedy. Data has a bank of jokes and responds to audience reaction. His developers don’t actually know what he will do when he performs, as he is learning all the time. At his first public gig he got the crowd laughing, but his timing was poor. You can see his performance online, in a TED Talk.

RoboThespian did a gig at the London Barbican alongside human comedians. The performance was a live experiment to understand whether the robot could ‘work the audience’ as well as a human comedian. They found that even relatively small changes in the timing of delivery make a big difference to audience response.

What have these all got in common? Artificial Intelligence, machine learning and studies to understand what humour actually is are being combined to make something that is funny. Comedy is perhaps the pinnacle of creativity. It’s certainly not easy for a human to write even one joke, so think how hard it is to distil that skill into algorithms and train a computer to create loads of them.

You have to laugh!

Watch RoboThespian [EXTERNAL]

– Jane Waite, Queen Mary University of London, Summer 2017

Download Issue 22 of the cs4fn magazine “Creative Computing” here

Lots more computing jokes on our Teaching London Computing site

Studying Comedy with Computers

by Vanessa Pope, Queen Mary University of London

Smart speakers like Alexa might know a joke or two, but machines aren’t very good at sounding funny yet. Comedians, on the other hand, are experts at sounding both funny and exciting, even when they’ve told the same joke hundreds of times. Maybe speech technology could learn a thing or two from comedians… that is what my research is about.

Image by Rob Slaven from Pixabay 

To test a joke, stand-up comedians tell it to lots of different audiences and see how they react. If no-one laughs, they might change the words of the joke or the way they tell it. If we can learn how they make these adjustments, maybe technology can borrow their tricks. How much do comedians change their material as they write a new show? Does a comedian tell the same joke the same way at every performance? The first step is to find out.

That means recording many performances of the same live show and finding the parts that match from one show to the next. It was much faster to write a program to find the same jokes in different shows than to find them all myself. My code goes through all the words and sounds a comedian said in one live show and looks for matching chunks in their other shows. Words need to be in exactly the same order to count as a match: “Why did the chicken cross the road” is very different to “Why did the road cross the chicken”! The process of looking through a sequence to find a match is called “subsequence matching”, because you’re looking through one sequence (the whole set of words and sounds in a show) for a smaller sequence (the “sub” in “subsequence”). If a subsequence (little sequence) is found in lots of shows, it means the comedian says that joke the same way at every show. Subsequence matching is a brand new way to study comedy and other types of speech that are repeated, like school lessons or a favourite campfire story.
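
To give a flavour of how this might look in code, here is a simplified Python sketch. The transcripts and the four-word threshold are invented for illustration, and unlike the real analysis it reports every fixed-length match, overlaps included, rather than only the longest shared chunks.

```python
# Simplified sketch of subsequence matching: find chunks of at least
# MIN_WORDS words that appear, in exactly the same order, in two shows.
MIN_WORDS = 4

def ngrams(words, n):
    # All runs of n consecutive words, as a set of tuples.
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_chunks(show_a, show_b, n=MIN_WORDS):
    a, b = show_a.lower().split(), show_b.lower().split()
    return ngrams(a, n) & ngrams(b, n)

# Invented mini-transcripts of "two performances of the same show".
show1 = "so umm why did the chicken cross the road you ask"
show2 = "right why did the chicken cross the road well umm"
for chunk in sorted(shared_chunks(show1, show2)):
    print(" ".join(chunk))  # e.g. "why did the chicken"
```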

By comparing how comedians told the same jokes in lots of different shows, I found patterns in the way they told them. Although comedy can sound very improvised, a big chunk of comedians’ speech (around 40%) was exactly the same in different shows. Sounds like “ummm” and “errr” might seem like mistakes, but these hesitation sounds were part of some matches, so we know that they weren’t actually mistakes. Maybe “umm”s help comedians sound like they’re making up their jokes on the spot.

Varying how long pauses are could be an important part of making speech sound lively, too: a comedian told a joke more slowly and evenly when they were recorded on their own than when they had an audience. Comedians work very hard to prepare their jokes so they are funny to lots of different people, so computers might be able to borrow the way comedians test their jokes and change them. For example, one comedian kept only five of their original jokes in their final show! New jokes were added little by little around the old jokes, rather than being added in big chunks.

If you want to run an experiment at home, try recording yourself telling the same joke to a few different people. How much practice did you need before you could say the joke all at once? What did you change, including little sounds like “umm”? What didn’t you change? How did the person you were telling the joke to change how you told it?

There’s lots more to learn from comedians and actors, like whether they change their voice and movement to keep different people’s attention. This research is the first to use computers to study how performers repeat and adjust what they say, but it is hopefully just the beginning.

Now, have you heard the one about the …

For more information about Vanessa’s work visit https://vanessapope.co.uk/ [EXTERNAL]