Computers that read emotions

by Matthew Purver, Queen Mary University of London

One of the ways that computers could be more like humans – and maybe pass the Turing test – is by responding to emotion. But how could a computer learn to read human emotions out of words? Matthew Purver of Queen Mary University of London tells us how.

Have you ever thought about why you add emoticons to your text messages – symbols like :-) and :-@? Why do we do this with some messages but not with others? And why do we use different words, symbols and abbreviations in texts, Twitter messages, Facebook status updates and formal writing?

In face-to-face conversation, we get a lot of information from the way someone sounds, their facial expressions, and their gestures. In particular, this is the way we convey much of our emotional information – how happy or annoyed we’re feeling about what we’re saying. But when we’re sending a written message, these audio-visual cues are lost – so we have to think of other ways to convey the same information. The ways we choose to do this depend on the space we have available, and on what we think other people will understand. If we’re writing a book or an article, with lots of space and time available, we can use extra words to fully describe our point of view. But if we’re writing an SMS message when we’re short of time and the phone keypad takes time to use, or if we’re writing on Twitter and only have 140 characters of space, then we need to think of other conventions. Humans are very good at this – we can invent and understand new symbols, words or abbreviations quite easily. If you hadn’t seen the 😀 symbol before, you can probably guess what it means – especially if you know something about the person texting you, and what you’re talking about.

But computers are terrible at this. They’re generally bad at guessing new things, and they’re bad at understanding the way we naturally express ourselves. So if computers need to understand what people are writing to each other in short messages like on Twitter or Facebook, we have a problem. But this is something researchers would really like to do: for example, researchers in France, Germany and Ireland have all found that Twitter opinions can help predict election results, sometimes better than standard exit polls – and if we could accurately understand whether people are feeling happy or angry about a candidate when they tweet about them, we’d have a powerful tool for understanding popular opinion. Similarly we could automatically find out whether people liked a new product when it was launched; and some research even suggests you could predict the stock market. But how do we teach computers to understand emotional content, and learn to adapt to the new ways we express it?

One answer might be in a class of techniques called semi-supervised learning. By taking some example messages in which the authors have made the emotional content very clear (using emoticons, or specific conventions like Twitter’s #fail or abbreviations like LOL), we can give ourselves a foundation to build on. A computer can learn the words and phrases that seem to be associated with these clear emotions, so it understands this limited set of messages. Then, by letting it find new data containing the same words and phrases, it can learn new examples for itself. Eventually, if it sees new symbols or phrases alongside emotional patterns it already knows, enough times to be confident, it can learn those too – and then we’re on our way towards an emotionally aware computer. However, we’re still a fair way off getting it right all the time, every time.
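One common version of this idea is called self-training. Below is a minimal sketch in Python (using scikit-learn, with invented example messages and an invented confidence threshold – it illustrates the idea, not the researchers’ actual system): a classifier is first trained on messages whose emotion is obvious from emoticons or hashtags like #fail, then it labels new messages it is confident about and is retrained on them.

```python
# A toy self-training loop (a sketch only, not the real research system).
# Start from messages whose emotion is made obvious by emoticons or hashtags,
# train a simple classifier, then let it label new messages it is confident
# about and retrain. The example messages and 0.8 threshold are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

labelled = [("great day :)", "happy"), ("love this, LOL", "happy"),
            ("train cancelled again #fail", "angry"), ("so annoyed :-@", "angry")]
unlabelled = ["what a lovely surprise", "stuck in traffic for two hours",
              "best gig ever", "my phone broke #fail"]

vec = CountVectorizer()

for _ in range(3):                                  # a few self-training rounds
    texts, labels = zip(*labelled)
    model = LogisticRegression().fit(vec.fit_transform(texts), labels)
    if not unlabelled:
        break
    probs = model.predict_proba(vec.transform(unlabelled))
    still_unlabelled = []
    for text, p in zip(unlabelled, probs):
        if p.max() > 0.8:                           # only trust confident guesses
            labelled.append((text, model.classes_[p.argmax()]))
        else:
            still_unlabelled.append(text)           # leave for a later round
    unlabelled = still_unlabelled

print(model.predict(vec.transform(["what a day #fail"])))
```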


This article was first published on the original CS4FN website and a copy can be found on Pages 16-17 of Issue 14 of the CS4FN magazine, “The genius who gave us the future”. You can download a free PDF copy below, and download all of our free magazines and booklets from our downloads site.




EPSRC supports this blog through research grant EP/W033615/1.

Happy World Emoji Day – 📅 17 July 2023 – how people use emoji to communicate and what it tells us about them 😀

“Emoji didn’t become so essential because they stand in for words – but because they finally made writing a lot more like talking.”

Gretchen McCulloch (see Further reading below)
A selection of emoji

The emoji for ‘calendar’ shows the 17th July 📅 (click the ‘calendar’ link to find out why) and, since 2014, Emojipedia (an excellent resource for all things emoji, including their history) has celebrated World Emoji Day on that date.

Before we had emoji (the word emoji can be both singular and plural, but ’emojis’ is fine too) people added text-based ‘pictures’ to their texts and emails to add flavour to their online conversations, like :-) or :) for a smiling face or :-( for a sad one. These text-based pictures were known as ’emoticons’ (icons that added emotion) because it isn’t always possible to know just from the words alone what the writer means. They weren’t just used to clarify meaning though: people peppered their prose with other playful pictures, such as :p where the ‘p’ is someone blowing a raspberry / sticking their tongue out*, and created other icons such as this rose to send to someone on Valentine’s Day @-'-,->----, or this pole-vaulting amoeba ./

Here are the newly released emoji for 2023.

People use emoji in very different ways depending on their age, gender, ethnicity and personal writing style. In our “The Emoji Crystal Ball” article we look at how people can tell a lot about us from the types of emoji we use and the way we use them.

The Emoji Crystal Ball

Fairground fortune tellers claim to be able to tell a lot about you by staring into a crystal ball. They could tell far more about you (that wasn’t made up) by staring at your public social media profile. Even your use of emojis alone gives away something of who you are. Walid Magdy’s research team …

Further reading

Writing IRL (July 2019) Gretchen McCulloch writing in Slate
(IRL = In Real Life)
– this is an excerpt about emoji from Gretchen’s fascinating book “Because Internet” about internet culture, communication and linguistics (the study of language).

Penguins and pizza – cracking the secret Valentine’s Day code (February 2018) The Scotsman – on how people are using emoji as a secret language, from research done by Sarah Wiseman and Sandy Gould.



*For an even better raspberry-blowing emoticon try the letter called ‘thorn’, which comes from the Runic alphabet. If you have a Windows computer with a numeric keypad on the right hand side press the Num Lock key at the top to lock the number keypad (so that the keys are now numbers and not up and down arrows etc). Hold down the Alt key (there’s usually one on either side of the spacebar) and while holding it down type 0254 on the numeric keypad and let go. This should now appear wherever your cursor is: þ. For the capital letter it’s Alt+0222 = Þ. The lower case one is best for when you just want to blow a small raspberry :þ

For Mac users press control+command+spacebar to bring up the Character Viewer, type thorn in the search bar and lots will appear. Double-click the one you want and it will be inserted wherever your cursor is.


EPSRC supports this blog through research grant EP/W033615/1.

Chatbot or Cheatbot?

by Paul Curzon, Queen Mary University of London

Speech bubbles
Image by Clker-Free-Vector-Images from Pixabay

The chatbots have suddenly got everyone talking, though about them as much as with them. Why? Because one, ChatGPT, has (amongst other things) reached the level of being able to fool us into thinking that it is a pretty good student.

It’s not exactly what Alan Turing was thinking about when he broached his idea of a test for intelligence for machines: if we cannot tell them apart from a human then we must accept they are intelligent. His test involved having a conversation with them over an extended period before making the decision, and that is subtly different to asking questions.

ChatGPT may be pretty close to passing an actual Turing Test but it probably still isn’t there yet. Ask the right questions and it behaves differently to a human. For example, ask it to prove that the square root of 2 is irrational and it can do it easily, and looks amazingly smart – there are lots of versions of the proof out there that it has absorbed. It isn’t actually good at maths though. Ask it simply to count or add things and it can get them wrong. Essentially, it is just good at picking out the right information from the vast store of information it has been trained on and then presenting it in a human-like way. It is arguably the way it can present it “in its own words” that makes it seem especially impressive.

Will we accept that it is “intelligent”? Once it was said that if a machine could beat humans at chess it would be intelligent. When one beat the best human, we just said “it’s not really intelligent – it can only play chess”. Perhaps ChatGPT is just good at answering questions (amongst other things) but we won’t accept that as “intelligent” either, even if it is how we judge humans. What it can do is impressive and a step forward, though. Also, it is worth noting other AIs are better at some of the things it is weak at – logical thinking, counting, doing arithmetic, and so on. It likely won’t be long before the different AIs’ mistakes and weaknesses are ironed out and we have ones that can do it all.

Rather than asking whether it is intelligent, what has got everyone talking (in universities and schools at least) is that ChatGPT has shown it can answer all sorts of questions we traditionally use for tests well enough to pass exams. The issue is that students can now use it instead of their own brains. The cry has gone out that we must abandon setting humans essays, that we should no longer ask them to explain things, nor for that matter write (small) programs. These are all things ChatGPT can now do well enough to pass such tests for any student unable to do them themselves. Others say we should be preparing students for the future, so it’s ok: from now on we should just test what humans and ChatGPT can do together.

It certainly means assessment needs to be rethought to some extent, and of course this is just the start: the chatbots are only going to get better, so we had better do the thinking fast. The situation is very like the advent of calculators, though. Yes, we need everyone to learn to use calculators. But calculators didn’t mean we had to stop learning how to do maths ourselves. Essay writing, explaining, writing simple programs, analytical skills, etc, just like arithmetic, are all about core skill development, building the skills to then build on. The fact that a chatbot can do it too doesn’t mean we should stop learning and practicing those skills (and assessing them as an inducement to learn as well as a check on whether the learning has been successful). So the question should not be about what we should stop doing, but more about how we make sure students do carry on learning. A big, bad thing about cheating (aside from unfairness) is that the person who decides to cheat loses the opportunity to learn. Chatbots should not stop humans learning either.

The biggest gain we can give a student is to teach them how to learn, so now we have to work out how to make sure they continue to learn in this new world, rather than just hand over all their learning tasks to the chatbot to do. As many people have pointed out, there are not just bad ways to use a chatbot, there are also ways we can use chatbots as teaching tools. Used well by an autonomous learner they can act as a personal tutor, explaining things they realise they don’t understand immediately, so becoming a basis for that student doing very effective deliberate learning, fixing understanding before moving on.

Of course, there is a bigger problem: if a chatbot can do things at least as well as we can, then why would a company employ a person rather than just hire an AI? AIs can now do a lot of jobs we assumed were ours to do. It could be yet another way of technology focussing vast wealth on the few and taking from the many. Unless our intent is a dystopian science fiction future where most humans have no role and no point (see, for example, E.M. Forster’s classic, The Machine Stops), then we still ought, in any case, to learn skills. If we are to keep ahead of the AIs and use them as a tool, not be replaced by them, we need the basic skills to build on to gain the more advanced ones needed for the future. Learning skills is also, of course, a powerful way for humans (if not yet chatbots) to gain self-fulfilment and so happiness.

Right now, an issue is that the current generation of chatbots are still very capable of being wrong. ChatGPT is like an overconfident student. It will answer anything you ask, but it gives wrong answers just as confidently as right ones. Tell it it is wrong and it will give you a new answer just as confidently, and possibly just as wrong. If people are to use it in place of thinking for themselves then, in the short term at least, they still need the skill it doesn’t have of judging when it is right or wrong.

So what should we do about assessment? Formal exams come back to the fore so that conditions are controlled. They make it clear you have to be able to do it yourself. Open book online tests, which became popular in the pandemic, are unlikely to be fair assessments any more, but arguably they never were: chatbots or not, they were always too easy to cheat in. They may still be good for learning, though. Perhaps in future, if the chatbots are so clever, we could turn the Turing test around: we just ask an artificial intelligence to decide whether particular humans (our students) are “intelligent” or not…

Alternatively, if we don’t like the solutions being suggested to the problems these new chatbots are raising, there is now another way forward. If they are so clever, we could just ask a chatbot to tell us what we should do about chatbots…



Related Magazines …

Issue 16 of the CS4FN magazine, “Clean Up Your Language”

This blog is funded through EPSRC grant EP/W033615/1.

The last speaker

by Paul Curzon, Queen Mary University of London

(from the cs4fn archive)

The wings of a green macaw looking like angel wings
Image by Avlis AVL from Pixabay

The languages of the world are going extinct at a rapid rate. As the numbers of people who still speak a language dwindle, the chance of it surviving dwindles too. As the last person dies, the language is gone forever. To be the last living speaker of the language of your ancestors must be a terribly sad ordeal. One language’s extinction bordered on the surreal. The last time the language of the Atures, in South America, was heard, it was spoken by a parrot: an old blue-and-yellow macaw that had survived the deaths of all the local people.

Why do languages die?

The reasons smaller languages die are varied, from war and genocide, to disease and natural disaster, to the enticement of bigger, pushier languages. Can technology help? In fact global media – films, music and television – are helping languages to die, as the youth turn their backs on the languages of their parents. The Web, with its early English bias, may also be helping to push minority languages even faster to the brink. Computers could be a force for good though, protecting the world’s languages rather than destroying them.

Unicode to the rescue

In the early days of the web, web pages used the English alphabet. Everything in a computer is just stored as numbers, including letters: 1 for ‘a’, 2 for ‘b’, for example. As long as different computers agree on the code they can print them to the screen as the same letter. A problem with early web pages was that there were lots of different encodings of numbers to letters. Worse still, only enough numbers were set aside for the English alphabet in the widely used encodings. Not good if you want to use a computer to support other languages, with their variety of accents and completely different sets of characters. A new universal encoding system called Unicode came to the rescue. It aims to be a single universal character encoding – with enough numbers allocated for ALL languages. It is therefore allowing the web to be truly multilingual.
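As a tiny illustration (a Python sketch added here, not part of the original article): every character has a single Unicode code point, and an encoding like UTF-8 turns that number into one or more bytes, so the same scheme works for characters from any language.

```python
# Each character is just a number (its Unicode code point); UTF-8 stores that
# number as one or more bytes, so one encoding covers every alphabet.
for ch in "aéþ語":
    print(ch, ord(ch), ch.encode("utf-8"))

# a 97 b'a'
# é 233 b'\xc3\xa9'
# þ 254 b'\xc3\xbe'
# 語 35486 b'\xe8\xaa\x9e'
```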

Languages are spoken

Languages are not just written but spoken. Computers can help there, too. Linguists around the world record speakers of smaller languages, understanding them and preserving them. Originally this was done using tapes. Now the languages can be stored on multimedia computers. Computers are not just restricted to playing back recordings but can also actively speak written text. The web also allows much wider access to such materials, which can be embedded in online learning resources, helping new people to learn the languages. Language translators such as BabelFish and Google Translate can also help, though they are still far from perfect even for common languages. The problem is that things do not translate easily between languages – each language really does constitute a different way of thinking, not just of talking. Some thoughts are hard even to think in a different language.

AI to the rescue?

Even that is not enough. To truly preserve a language, the speakers need to use it in everyday life, for everyday conversation. Speakers need someone to speak with. Learning a language is not just about learning the words but learning the culture and the way of thinking, of actively using the language. Perhaps future computers could help there too. A long-time goal of artificial intelligence (AI) researchers is to develop computers that can hold real conversations. In fact this is the basis of the original test for computer intelligence suggested by Alan Turing back in 1950: if a computer is indistinguishable from a human in conversation, then it is intelligent. There is also an annual competition that embodies this test: the Loebner Prize. It would be great if, in the future, computer AIs could help save languages by being additional everyday speakers holding real conversations, being real friends.

Time is running out…
by the time the AIs arrive,
the majority of languages may be gone forever.

Too late?

The problem is that time is running out. Artificial intelligences that can have totally realistic human conversations, even in English, are still a way off. None have passed the Turing Test. To speak different languages really well for everyday conversations, those AIs will have to learn the different cultures and ‘think’ in the different languages. The window of opportunity is disappearing. By the time the AIs arrive, the majority of human languages may be gone forever. Let’s hope that computer scientists and linguists do solve the problems in time, and that computers are not used just to preserve languages for academic interest, but really can help them to survive. It is sad that the last living creature to speak Atures was a parrot. It would be equally sad if the last speakers of all current languages, bar say English, Spanish and Chinese, were computers.


Related Magazines …

Issue 16 of the CS4FN magazine, “Clean Up Your Language”

This blog is funded through EPSRC grant EP/W033615/1.

Only the fittest slogans survive!

by Paul Curzon, Queen Mary University of London

(From the archive)

Being creative isn’t just for the fun of it. It can be serious too. Marketing people are paid vast amounts to come up with slogans for new products, and in the political world, a good, memorable soundbite can turn the tide over who wins and loses an election. Coming up with great slogans that people will remember for years needs both a mastery of language and a creative streak too. Algorithms are now getting in on the act, and if anyone can create a program as good as the best humans, they will soon be richer than the richest marketing executive. Polona Tomašič and her colleagues from the Jožef Stefan Institute in Slovenia are one group exploring the use of algorithms to create slogans. Their approach is based on the way evolution works – genetic algorithms. Only the fittest slogans survive!

A mastery of language

To generate a slogan, you give their program a short description of the slogan’s topic – a new chocolate bar perhaps. It then uses existing language databases and programs to give it the necessary understanding of language.

First, it uses a database of common grammatical links between pairs of words, generated from Wikipedia pages. Then skeletons of slogans are extracted from an Internet list of famous (so successful) slogans. These skeletons don’t include the actual words, just the grammatical relationships between the words. They provide general outlines that successful slogans follow.

From the passage given, the program pulls out keywords that can be used within the slogans (beans, flavour, hot, milk, …). It generates a set of fairly random slogans from those words to get started. It does this just by slotting keywords into the skeletons along with random filler words in a way that matches the grammatical links of the skeletons.

Breeding Slogans

New baby slogans are now produced by mating pairs of initial slogans (the parents). This is done by swapping bits into the baby from each parent. Both whole sections and individual words are swapped in. Mutation is allowed too. For example, adjectives are added in appropriate places. Words are also swapped for words with a related meaning. The resulting children join the new population of slogans. Grammar is corrected using a grammar checker.

Culling Slogans

Slogans are now culled. Any that are the same as existing ones go immediately. The slogans are then rated to see which are fittest. This uses simple properties like their length, the number of keywords used, and how common the words used are. More complex tests are based on how related the meanings of the words are, and how commonly pairs of words appear together in real sentences. Together these combine to give a single score for the slogan. The best are kept to breed in the next generation, the worst are discarded (they die!), though a random selection of weaker slogans is also allowed to survive. The result is a new set of slogans that are slightly better than the previous set.

Many generations later…

The program breeds and culls slogans like this for thousands, even millions, of generations, gradually improving them, until it finally chooses its best. The slogans produced are not yet world-beating on their own, and vary in quality as judged by humans. For chocolate, one run came up with slogans like “The healthy banana” and “The favourite oven”, for example. It finally settled on “The HOT chocolate”, which is pretty good.
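To make the evolutionary loop concrete, here is a toy sketch in Python. It is not the researchers’ actual system: the keyword and filler lists, the mutation rule and the fitness score are all invented purely for illustration, but it follows the same breed, mutate, score and cull cycle described above.

```python
import random

# A toy version of the evolutionary loop: breed slogans by swapping words,
# mutate a few, score them with a crude fitness function and cull the weakest.
KEYWORDS = {"chocolate", "hot", "smooth", "flavour"}
FILLERS = ["the", "simply", "pure", "taste", "moment"]

def fitness(slogan):
    words = slogan.split()
    keyword_hits = sum(w in KEYWORDS for w in words)
    return keyword_hits * 2 - abs(len(words) - 4)   # favour short, keyword-rich slogans

def breed(parent_a, parent_b):
    wa, wb = parent_a.split(), parent_b.split()
    cut = random.randint(1, min(len(wa), len(wb)) - 1)
    return " ".join(wa[:cut] + wb[cut:])            # child takes the start of one parent, end of the other

def mutate(slogan):
    words = slogan.split()
    words[random.randrange(len(words))] = random.choice(FILLERS + sorted(KEYWORDS))
    return " ".join(words)

population = ["the hot chocolate moment", "simply smooth flavour",
              "pure taste the chocolate", "hot flavour simply pure"]

for generation in range(1000):
    children = [breed(*random.sample(population, 2)) for _ in range(10)]
    children = [mutate(c) if random.random() < 0.3 else c for c in children]
    # keep only the fittest few for the next generation; the rest die out
    population = sorted(set(population + children), key=fitness, reverse=True)[:8]

print(population[0])   # the best slogan the program evolved
```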

More work is needed on the program, especially its fitness function – the way it decides what is a good slogan and what isn’t. As it stands, this sort of program isn’t likely to replace anyone’s marketing department. It could help with brainstorming sessions though, sparking new ideas but leaving humans to make the final choice. Supporting human creativity rather than replacing it is probably just as rewarding for the program, after all.


Related Magazines …

Issue 22 of the CS4FN magazine, “Creative Computing”

This article was originally published on CS4FN and also appears on p13 of Creative Computing, issue 22 of the CS4FN magazine. You can download a free PDF copy of the magazine as well as all of our previous booklets and magazines from the CS4FN downloads site.

What are birds actually saying?

by Dan Stowell and Paul Curzon, Queen Mary University of London

Birds make so much noise, and it’s very complex. Is it just babble, or are they saying complicated things to each other? If so, could we work out what they are saying, what it means? Could we learn their language and speak to the birds?

Gull image by Thanasis Papazacharias from Pixabay

We know that bird communication is not as complicated as the words and sentences in human speech. So far, no one has been able to find grammatical patterns like those we find in human language. There apparently aren’t rules for birds like the ones we have about verbs and nouns. Birds don’t have to learn grammar! Exactly how complex bird languages are is still hotly debated, though.

Sometimes they’re passing on information about predators, or food, or sometimes just advertising their own fitness – showing off to get a mate (a bit like karaoke nights). Scientists have proved that such specific kinds of information are in the sounds birds make by observing bird behaviour. By playing recordings of birds and seeing how other birds react, they can see what information was communicated by a particular sound. If you play a ‘predator near’ call, for example, then other birds flee, but they stay put if you play other calls. They get the message.

Birds are definitely passing on specific information when they sing.

It turns out some birds have even learnt the languages of other animals and use them both to help those other animals and to support a life of crime. Many animals listen for the alarm calls of the animals around them, and so flee when others see a problem. Birds called Drongos, for example, act as lookouts for Meerkats, giving warning calls when they see Meerkat predators, allowing them to return to the safety of their burrows. However, the Drongos also sound false alarms every so often. They do it when they see a Meerkat with some juicy morsel. As the Meerkats run, the Drongo swoops in to steal the abandoned food.

Unfortunately for the Drongo, Meerkats are quite clever and get wise to the con. Eventually, they start to ignore the Drongo and only listen for their own Meerkat sentry’s call. The Drongo has another trick though. They are really good at mimicking sounds they hear, just like parrots. They have learnt to speak Meerkat just like the scientists do in experiments. So when the Meerkats stop reacting, the Drongos just switch tactics and start making perfect Meerkat language alarm calls instead. Once again the food is theirs.

Drongos give false alarms so they can steal food.

While most of us can’t reproduce bird sounds ourselves, and so can’t talk directly to animals, we can certainly write programs to do it. In Star Wars, C-3PO is a master of languages, speaking millions. Real robots of the near future will be able to mimic the sounds of whatever animals they wish and communicate with them in at least the simple ways that animals of different species listen and talk to each other. Perhaps something like this might be used to help protect endangered species from their predators, for example, watching for hawks and issuing timely warnings. We just have to hope they don’t turn to the Dark Side, like the Drongos, and use these skills to support a life of crime.

 


This article was originally published on CS4FN and in issue 21 of the CS4FN magazine ‘Computing Sounds Wild’ on p3. You can download a PDF copy of Issue 21, as well as all of our previous published material, free, at the CS4FN downloads site.

Computing Sounds Wild explores the work of scientists and engineers who are using computers to understand, identify and recreate wild sounds, especially those of birds. We see how sophisticated algorithms that allow machines to learn can help recognize birds even when they can’t be seen, so helping conservation efforts. We see how computer models help biologists understand animal behaviour, and we look at how electronic and computer generated sounds, having changed music, are now set to change the soundscapes of films. Making electronic sounds is also a great, fun way to become a computer scientist and learn to program.

Front cover of CS4FN Issue 21 – Computing sounds wild

 

 

Emoticons and Emotions

Emoticons are a simple and easily understandable way to express emotions in writing using letters and punctuation without any special pictures, but why might Japanese emoticons be better than western ones? And can we really trust expressions to tell us about emotions anyway?

African woman smiling 
Image by Tri Le from Pixabay

The trouble with early online message board messages, email and text messages was that it was always more difficult to express subtleties, including intended emotions, than if talking to someone face to face. Jokes were often assumed to be serious and flame wars were the result. So when, in 1982, Carnegie Mellon Professor Scott Fahlman suggested the use of the smiley :-) to indicate a joke in message board messages, a step forward in global peace was probably made. He also suggested that, since posts more often than not seemed to be intended as jokes, a sad face :-( would be more useful, to explicitly mark anything that wasn’t a joke.

He wasn’t actually the first to use punctuation characters to indicate emotions though. The earliest apparently recorded use is in 1648, in the poem “To Fortune” by the English poet Robert Herrick.

Tumble me down, and I will sit
Upon my ruins, (smiling yet:)

Whether this was intentional or not is disputed, as punctuation wasn’t consistently used then. Perhaps the poet intended it, perhaps it was just a coincidental printing error, or perhaps it was a joke inserted by the printers. Either way it is certainly an appropriate use (why not write your own emoticon poem!)

You might think that everyone uses the same emoticons you are familiar with, but different cultures use them in different ways. Westerners follow Fahlman’s suggestion, putting them on their side. In Japan, by contrast, they sit the right way up and, crucially, the emotion is all in the eyes, not the mouth, which is represented by an underscore. In this style, happiness can be given by (^_^), and a T or ; can indicate crying, so is used for sadness: (T_T) or (;_;). In South Korea the Korean alphabet is used, so a different set of characters is available (though the symbols are the right way up, as in the Japanese style).

Automatically understanding people’s emotions is an important area of research, called sentiment analysis, whether analysing text, faces or other aspects that can be captured. It is amongst other things important for marketeers and advertisers to work out whether people like their products or what issues matter most to people in elections, so it is big business. Anyone who truly cracks it will be rich.

So in reality is the western version or the Eastern version more accurate: are emotions better detected in the shape of the mouth or the eyes? With a smile at least, it turns out that the eyes really give away whether someone is happy or not, not the mouth. When people put on a fake smile their mouth does curve just as with a natural smile. The difference between fake and genuine smiles that really shows if the person is happy is in the eyes. A genuine smile is called a Duchenne smile after Duchenne de Boulogne who in 1862 showed that when people find something actually funny the smile affects the muscles in their eyes. It causes a tell-tale crow’s foot pattern in the skin at the sides of the eyes. Some people can fake a Duchenne too though, so even that is not totally reliable.

As emoticons hint, because emotions are indicated in the eyes as much as in the mouth, sentiment analysis of emotions based on faces needs to focus on the whole face, not just the mouth. However, all may not be what it seems, as other research shows that most of the time people do not actually smile at all when genuinely happy. Just like emoticons, facial expressions are simply a way we tell other people what we want them to think our emotions are, not necessarily our actual emotions. Expressions are not a window into our souls, but a pragmatic way to communicate important information. They probably evolved for the same reason emoticons were invented: to avoid pointless fights. Researchers trying to create software that works out what we really feel may have their work cut out if their life’s work is to make them genuinely happy.

     ( O . O )
         0

– Paul Curzon, Queen Mary University of London, Summer 2021

Standup Robots

‘How do robots eat pizza?’… ‘One byte at a time’. Computational Humour is real, but it’s not jokes about computers, it’s computers telling their own jokes.

Robot performing
Image from istockphoto

Computers can create art, stories, slogans and even magic tricks. But can computers perform themselves? Can robots invent their own jokes? Can they tell jokes?

Combining Artificial Intelligence, computational linguistics and humour studies (yes, you can study how to be funny!) a team of Scottish researchers made an early attempt at computerised standup comedy! They came up with Standup (System To Augment Non-speakers’ Dialogue Using Puns): a program that generates riddles for kids with language difficulties. Standup has a dictionary and joke-building mechanism, but does not perform; it just creates the jokes. You will have to judge for yourself as to whether the puns are funny. You can download the software from here. What makes a pun funny? It is about a word having two meanings at exactly the same time in a sentence. It is also about generating an expectation that you then break: a key idea at the core of creativity too.

A research team at Virginia Tech in the US created a system that started to learn about funny pictures. Having defined a ‘funniness score’ they created a computational model for humorous scenes, and trained it to predict funniness, perhaps with an eye to spotting pics for social media posting, or not.

But are there funny robots out there? Yes! RoboThespian programmed by researchers at Queen Mary University of London, and Data, created by researchers at Carnegie Mellon University are both robots programmed to do stand-up comedy. Data has a bank of jokes and responds to audience reaction. His developers don’t actually know what he will do when he performs, as he is learning all the time. At his first public gig, he got the crowd laughing, but his timing was poor. You can see his performance online, in a TED Talk.

RoboThespian did a gig at the London Barbican alongside human comedians. The performance was a live experiment to understand whether the robot could ‘work the audience’ as well as a human comedian. They found that even relatively small changes in the timing of delivery make a big difference to audience response.

What have these all got in common? Artificial Intelligence, machine learning and studies to understand what humour actually is are being combined to make something that is funny. Comedy is perhaps the pinnacle of creativity. It’s certainly not easy for a human to write even one joke, so think how hard it is to distil that skill into algorithms and train a computer to create loads of them.

You have to laugh!

Watch RoboThespian [EXTERNAL]

– Jane Waite, Queen Mary University of London, Summer 2017

Download Issue 22 of the cs4fn magazine “Creative Computing” here

Lots more computing jokes on our Teaching London Computing site

Studying Comedy with Computers

by Vanessa Pope, Queen Mary University of London

Smart speakers like Alexa might know a joke or two, but machines aren’t very good at sounding funny yet. Comedians, on the other hand, are experts at sounding both funny and exciting,  even when they’ve told the same joke hundreds of times. Maybe speech technology could learn a thing or two from comedians… that is what my research is about.

Image by Rob Slaven from Pixabay 

To test a joke, stand-up comedians tell it to lots of different audiences and see how they react. If no-one laughs, they might change the words of the joke or the way they tell it. If we can learn how they make their adjustments, maybe technology can borrow their tricks. How much do comedians change as they write a new show? Does a comedian say the same joke the same way at every performance? The first step is to find out.

To find out, I recorded lots of performances of the same live show by a comedian and looked for the parts that match from one show to the next. It was much faster to write a program to find the same jokes in different shows than to find them all myself. My code goes through all the words and sounds a comedian said in one live show and looks for matching chunks in their other shows. Words need to be in exactly the same order to be a match: “Why did the chicken cross the road” is very different to “Why did the road cross the chicken”! The process of looking through a sequence to find a match is called “subsequence matching”, because you’re looking through one sequence (the whole set of words and sounds in a show) for a smaller sequence (the “sub” in “subsequence”). If a subsequence (little sequence) is found in lots of shows, it means the comedian says that joke the same way at every show. Subsequence matching is a brand new way to study comedy and other types of speech that are repeated, like school lessons or a favourite campfire story.
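Here is a rough sketch of that idea in Python (an illustration only, not the actual research code): it finds the longest runs of words that appear word-for-word in both of two show transcripts.

```python
# A minimal subsequence-matching sketch: find the longest runs of words
# (at least min_len long) that appear word-for-word in both transcripts.
def word_ngrams(words, n):
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_chunks(show_a, show_b, min_len=4):
    a, b = show_a.lower().split(), show_b.lower().split()
    n = min(len(a), len(b))
    while n >= min_len:
        matches = word_ngrams(a, n) & word_ngrams(b, n)   # n-word chunks said in both shows
        if matches:
            return [" ".join(m) for m in matches]
        n -= 1                                            # try slightly shorter chunks
    return []

show1 = "so umm why did the chicken cross the road to get to the other side"
show2 = "right why did the chicken cross the road umm no idea"
print(shared_chunks(show1, show2))
# ['why did the chicken cross the road']
```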

By comparing how comedians told the same jokes in lots of different shows, I found patterns in the way they told them. Although comedy can sound very improvised, a big chunk of comedians’ speech (around 40%) was exactly the same in different shows. Sounds like “ummm” and “errr” might seem like mistakes but these hesitation sounds were part of some matches, so we know that they weren’t actually mistakes. Maybe “umm”s help comedians sound like they’re making up their jokes on the spot.

Varying how long pauses are could be an important part of making speech sound lively, too. A comedian told a joke more slowly and evenly when they were recorded on their own than when they had an audience. Comedians work very hard to prepare their jokes so they are funny to lots of different people. Computers might, therefore, be able to borrow the way comedians test their jokes and change them. For example, one comedian kept only five of their original jokes in their final show! New jokes were added little by little around the old jokes, rather than being added in big chunks.

If you want to run an experiment at home, try recording yourself telling the same joke to a few different people. How much practice did you need before you could say the joke all at once? What did you change, including little sounds like “umm”? What didn’t you change? How did the person you were telling the joke to, change how you told it?

There’s lots more to learn from comedians and actors, like whether they change their voice and movement to keep different people’s attention. This research is the first to use computers to study how performers repeat and adjust what they say, but hopefully just the beginning. 

Now, have you heard the one about the …

For more information about Vanessa’s work visit https://vanessapope.co.uk/ [EXTERNAL]