Blade: the emotional computer.

by Paul Curzon, Queen Mary University of London

Communicating with computers is clunky to say the least – we even have to go to IT classes to learn how to talk to them. It would be so much easier if they went to school to learn how to talk to us. If computers are to communicate more naturally with us, we need to understand more about how humans interact with each other.

The most obvious way we communicate is through speech – we talk, we listen – but actually our communication is far more subtle than that. People pick up lots of information about our emotions and what we really mean from our expressions and the tone of our voice – not from what we actually say. Zabir, a student at Queen Mary, was interested in this, so he decided to experiment with these ideas for his final year project. He used a kit called Lego Mindstorms that makes it really easy to build simple robots. The clever stuff comes in because, once built, Mindstorms creations can be programmed with behaviour. The result was Blade.

In the video above you can see Blade the robot respond.

Blade, named after the Wesley Snipes film, was a robotic face capable of expressing emotion and responding to the tone of the user’s voice. Shout at Blade and he would look sad. Talk softly and, even though he could not understand a word of what you said, he would start to appear happy again. Why? Because your tone says what you really mean, whatever the words – that’s why parents talk gobbledegook softly to babies to calm them.

Blade was programmed using a neural network, a computer science model of the way the brain works, so he had a brain similar to ours in some simple ways. Blade learnt how to express emotions very much like children learn – by tuning the connections between his artificial neurons based on his experience. Zabir spent a lot of time shouting and talking softly to Blade, teaching him what the tone of his voice meant and so how to react. Blade’s behaviour wasn’t directly programmed: it was the ability to learn that was programmed.
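You can get a feel for how that kind of learning works with a tiny made-up example. The sketch below (in Python, and very much not Zabir's actual code, which listened to real audio) uses a single artificial neuron whose connection strengths are nudged every time it reacts wrongly to a training example. A pretend "loudness" number between 0 (whisper) and 1 (shout) stands in for the tone of voice.

```python
# A toy version of learning to react to tone of voice (illustration only).
# Loudness is a made-up feature: 0 is a whisper, 1 is a shout.
# The target reaction is 1 for "sad" (being shouted at) and 0 for "happy".

training_data = [
    (0.9, 1), (0.8, 1), (0.95, 1),   # shouting -> should look sad
    (0.1, 0), (0.2, 0), (0.05, 0),   # soft voice -> should look happy
]

weight, bias = 0.0, 0.0   # the "connections" that get tuned by experience

for _ in range(100):                      # repeat the lessons many times
    for loudness, target in training_data:
        prediction = 1 if weight * loudness + bias > 0.5 else 0
        error = target - prediction
        weight += 0.1 * error * loudness  # nudge the connection strengths
        bias += 0.1 * error               # in the direction that reduces the error

def react(loudness):
    return "sad" if weight * loudness + bias > 0.5 else "happy"

print(react(0.9))   # shouting -> sad
print(react(0.1))   # talking softly -> happy
```

Nothing in there says what reaction to give: only the rule for adjusting the connections is programmed, just as with Blade.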

Eventually we had to take Blade apart, which was surprisingly sad. He really did seem to be more than a bunch of Lego bricks. Something about his very human-like expressions pulled on our emotions: the same trick that cartoonists pull with the big eyes of characters they want us to love.

Zabir went on to work in the City for the merchant bank JP Morgan.


⬇️ This article has also been published in two CS4FN magazines – first on page 13 of Issue 4, Computer Science and BioLife, and then again on page 18 of Issue 26 (Peter McOwan: Serious Fun), our magazine celebrating the life and research of Peter McOwan (who co-founded CS4FN with Paul Curzon and researched facial recognition). There’s also a copy on the original CS4FN website. You can download free PDF copies of both magazines below, and any of our other magazines and booklets from our CS4FN Downloads site.

The video below, Why faces are special, from Queen Mary University of London asks the question: “How does our brain recognise faces? Could robots do the same thing?”

Peter McOwan’s research into face recognition informed the production of this short film. Designed to be accessible to a wide audience, the film was selected as one of the 55 finalists from the 1450 films submitted to the CERN CineGlobe film festival 2012.

Related activities

We have some fun paper-based activities you can do at home or in the classroom.

  1. The Emotion Machine Activity
  2. Create-A-Face Activity
  3. Program A Pumpkin

See more details for each activity below.

1. The Emotion Machine Activity

From our Teaching London Computing website. Find out about programs and sequences and how a high-level language is translated into low-level machine instructions.

2. Create-A-Face Activity

From our Teaching London Computing website. Get people in your class (or at home if you have a big family) to make a giant robotic face that responds to commands.

3. Program A Pumpkin

Especially for Hallowe’en, a slightly spookier, pumpkin-ier version of The Emotion Machine above.



EPSRC supports this blog through research grant EP/W033615/1.

Exploring mazes, inventing algorithms (part I) 

by Paul Curzon, Queen Mary University of London

A maze with mouse searching for cheese.

Computer science research in part involves inventing new algorithms or improving existing ones. But what does that mean? Let’s explore some mazes to find out.

What does computer science research involve? It is very varied: from interviewing people to find out what problems really need solving in their lives or jobs; to running experiments to find out what works and what doesn’t; to writing programs to solve problems.

Improving algorithms

A core part of much research is coming up with new and better algorithms that solve particular problems. The kind of algorithm could be anything from a new, more secure cryptographic protocol, or a better way to rank the results of a search engine, to a new, more effective machine learning algorithm that is less likely to make things up, or that can better explain how it came to its conclusions.

What does it mean to come up with a better algorithm though? Once a problem is solved, isn’t it solved? Let’s explore a simple problem to see. Let’s explore mazes. Solve the simple maze puzzle above before you go on. Find a route that gets the mouse to the cheese.

Wandering around mazes, finding algorithms

If you’ve ever been in a hedge maze in the garden of some stately home, or a corn field maze, the chances are you just dived in and wandered rather aimlessly. Perhaps you tried to remember which way you went at each junction, to avoid going down the same dead-ends more than once. How about solving the paper version of a maze puzzle like the one above? Now perhaps you looked ahead to spot dead-ends to avoid tracing wrong paths with your pencil.

Probably what you are doing is at least a little random. You could, in theory at least, end up going back over the same paths, never taking the right one and never getting to the middle. Could we come up with an algorithm that guarantees to solve mazes? To be an algorithm it would need to guarantee you ended up finding a path to the centre of the maze if you followed its steps precisely. It should also work for any maze, or at least all mazes of a particular kind. Ideally, the algorithm gives you a path that can then be followed by anyone without them having to run the algorithm themselves. They can just follow the path generated by the algorithm for that maze.

Wall-following

In fact lots of maze algorithms have been invented. Perhaps the one most people have heard of, if they know of any maze algorithm, is called Wall-following. It is very simple to do. You just pick a wall at the entrance, either to the left or right, and then follow it. If in a garden maze, keep your hand on the hedge as you walk round. If doing a paper puzzle, draw the path sticking to the chosen wall. Try it on the following simple maze.

A simple maze with mouse and cheese.
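If you would rather let a computer do the walking, here is a minimal sketch of the wall-following idea in Python. It uses a little made-up grid maze (not the one pictured), where '#' is hedge, 'M' is the mouse's starting square and 'C' is the cheese, and the mouse always tries to keep its right paw on the hedge.

```python
# Wall-following (right-hand rule) on a small grid maze - a sketch only.
# '#' is hedge, spaces are paths, 'M' is the mouse's start, 'C' is the cheese.

MAZE = [
    "#########",
    "M   #   #",
    "### # # #",
    "#   # # #",
    "# ### #C#",
    "#     # #",
    "#########",
]

def wall_follow(maze):
    rows = [list(line) for line in maze]
    for y, row in enumerate(rows):              # find the mouse
        for x, square in enumerate(row):
            if square == "M":
                pos = (y, x)
    moves = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # up, right, down, left (clockwise)
    facing = 1                                  # start facing right, into the maze
    path = [pos]
    while rows[pos[0]][pos[1]] != "C":
        # keep your right hand on the hedge: try turning right, then straight
        # on, then left, and only turn back if everything else is blocked
        for turn in (1, 0, -1, 2):
            d = (facing + turn) % 4
            ny, nx = pos[0] + moves[d][0], pos[1] + moves[d][1]
            inside = 0 <= ny < len(rows) and 0 <= nx < len(rows[0])
            if inside and rows[ny][nx] != "#":
                facing, pos = d, (ny, nx)
                break
        path.append(pos)
    return path

print(len(wall_follow(MAZE)), "squares visited on the way to the cheese")
```

Try swapping in a maze of your own design: as long as it is built from a single unbroken hedge, the loop is guaranteed to reach the cheese eventually, which is exactly the point of the next section.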

Simply connected

The wall-following algorithm will guarantee to get you to the centre of the maze, and back out again too, but only as long as the maze is what is called simply connected. That just means the maze is constructed from a single hedge (or one unbroken drawn line), not a series of unconnected hedges. If you look at the two drawn examples above you will see I created them by just drawing a single wiggly line.

If a maze is simply connected then it cannot have looping paths, so no going round in circles for ever. It will also only have one entrance/exit. That shows the first important aspect of inventing algorithms: they often only work for some situations, not all. You must be sure you know which situations they do and don’t work in.

Often the earliest algorithms invented to solve a problem are like wall-following: they only work for simple situations. Other people then come along and find new algorithms that can cover more problems (here more mazes). Can you tweak the wall-following maze algorithm to work even if there are multiple exits from the maze, for example? As it stands our algorithm could just take you from the entrance straight out of another exit without exploring much of the maze at all! See the end for one simple way to tweak the algorithm. What if there are paths that take you round in circles? Can you come up with an algorithm to deal with that?

Sometimes the improvements invented just involve tweaking an existing algorithm, as with dealing with multiple exits in a maze. Sometimes a whole new algorithm is needed.

Faster, higher, stronger?

Even for a simple, constrained version of the problem, like simply connected mazes, people can invent better algorithms. What does better mean for a maze? Well, one way you might have a better algorithm is if it is faster at coming up with a solution. Another is that the solution it comes up with is better. For a maze that means a shorter (ideally the shortest) path to the centre. Wall-following may get you into the centre (and out again) but you will probably have discovered a very long path that takes you in and out of lots of dead-ends needlessly. You do find a path to the centre, but it may be a very long one. Can you come up with an algorithm that finds shorter paths?

We will explore an algorithm that does just that next.

More to come…

Some solutions

The result of wall following on our simple maze

A route for the mouse to follow that takes it to the cheese.

One way to deal with multiple exits

To deal with a maze that has multiple exits, so multiple breaks in the outer wall, tweak the wall-following algorithm as follows. First mark the exit you use to enter the maze, so you know when you return to it. If you come to any other exit then pretend there is a gate there and keep following the wall as though it were unbroken and there were no exit.



EPSRC supports this blog through research grant EP/W033615/1. 

Why the Romans were pants at maths

by Paul Curzon, Queen Mary University of London

The Romans were great at counting and addition but they were absolutely pants at multiplication. It wasn’t because they were stupid. It was because they hadn’t invented a good way to represent numbers, and that meant they needed really convoluted algorithms.

The Roman system is based on an earlier, really simple way of writing numbers. You just put a line for each thing you’ve counted. It’s probably the way shepherds kept count of sheep, drawing a line for each sheep. Those lines turned into the Roman letter I. To add 1 to a number you just add another I. You count: I, II, III, and so on, and it makes counting easy.

This system is called unary – whereas binary involves counting with two symbols, 1 and 0, in unary you only have one symbol to count with. Addition in unary is easy too, at least for small numbers. Take the first number and add on the end all the Is for the second and you’ve got the answer. This is exactly the way we all start doing addition on our fingers. To add 2+3, hold up 2 fingers (II), then hold up another three fingers (III), and you have the answer (IIIII).
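In a program, unary is about as simple as a representation gets. A number is just a string of Is, and this Python sketch (just to illustrate the point) adds two of them by sticking one on the end of the other:

```python
# Unary addition: a number is just a string of "I"s, and adding two numbers
# is simply writing one after the other.
def unary_add(a, b):
    return a + b

print(unary_add("II", "III"))   # IIIII  (2 + 3 = 5)
```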

This is fine for small numbers but it gets a bit tedious as the numbers increase (and you run out of fingers!). Comparing numbers is easy in principle – do you have the same number of Is? – but hard in practice for large numbers. We can’t keep all those Is in our head so a large number is hard to think about. To get round this the Romans invented new letters to stand for groups of Is. This is what we do when we tally numbers, making a crossbar for every fifth number we count. It helps us keep track of larger numbers. The Romans invented a whole bunch of symbols to help: so for example in the Roman numeral system, V stands for 5 (IIIII), X stands for 10, L for 50, C for 100, D for 500 and M for 1000. They had invented a new way to represent numbers.

Image by Katie Rose from Pixabay

This makes it much easier to write and compare larger numbers. Now, when counting, once you get up to 5 you just replace all those Is with a V and then carry on adding Is: VI, VII, VIII, VIIII. Then you get to VIIIII (10), so replace it all with an X, starting again adding a new lot of Is: XI, XII, XIII, XIIII, XV, and so on. Counting large numbers is now a bit more involved – the algorithm involves more than just adding an I on the end – but it is much more convenient. The addition algorithm has now become more complicated, though it is still fairly simple. Take any two numbers to add, like VII and VIII, and string them together: VIIVIII. Now group together the same letters: VVIIIII. Anywhere you have enough of a symbol to replace a group with the next symbol up, do so: VV can be replaced by X and IIIII can be replaced by V, to give XV in the above. Keep making replacements until you can make no more. Put the symbols in order from largest to smallest and you have your answer.
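Here is that addition algorithm as a short Python sketch (one possible way to code it up, for numerals written without the IV-style abbreviations described next):

```python
# Roman addition by stringing, grouping and replacing - a sketch only.
# Works on numerals written the simple way (4 is IIII, 9 is VIIII).

REPLACEMENTS = [("IIIII", "V"), ("VV", "X"), ("XXXXX", "L"), ("LL", "C"),
                ("CCCCC", "D"), ("DD", "M")]   # enough of one symbol makes the next
ORDER = "MDCLXVI"                              # largest symbol first

def group(numeral):
    # put the same letters together, largest symbol first
    return "".join(sorted(numeral, key=ORDER.index))

def roman_add(a, b):
    total = group(a + b)          # string the numbers together and group the letters
    changed = True
    while changed:                # keep replacing groups with the next symbol up
        changed = False
        for run, symbol in REPLACEMENTS:
            if run in total:
                total = group(total.replace(run, symbol, 1))
                changed = True
    return total

print(roman_add("VII", "VIII"))   # XV  (7 + 8 = 15)
```

Notice that the whole algorithm is just shuffling symbols about: no arithmetic on numbers happens anywhere.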

Now the Romans were obviously a bit lazy: bored with writing even four Is in a row, they sometimes used a new set of abbreviations, so that IIII became IV and VIIII became IX. Putting a smaller symbol (like I) before a larger one (like X) instead of after it meant subtract it to get the number, so IX means “one less than 10”, or 9. Counting just got a tiny bit more complicated to get the advantage of writing fewer symbols. Addition now needs a more convoluted algorithm though. There are several ways to do it. The easiest is actually just to change the numbers being added back to the simpler form (so IV goes back to IIII). You then do the addition that way, and convert back at the end. Addition just got that little bit harder, and all because of a change in representation.
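The same trick can be bolted onto the sketch above: expand the abbreviations, add, then abbreviate the answer again (again, just one way to do it):

```python
# Handling the IV-style abbreviations: expand them, add, then re-abbreviate.
# Builds on roman_add from the sketch above.

EXPANSIONS = [("CM", "DCCCC"), ("CD", "CCCC"), ("XC", "LXXXX"),
              ("XL", "XXXX"), ("IX", "VIIII"), ("IV", "IIII")]

def roman_add_modern(a, b):
    for short, longhand in EXPANSIONS:        # IV goes back to IIII, and so on
        a, b = a.replace(short, longhand), b.replace(short, longhand)
    total = roman_add(a, b)                   # add the simple way
    for short, longhand in EXPANSIONS:        # then abbreviate the answer again
        total = total.replace(longhand, short)
    return total

print(roman_add_modern("IX", "IV"))   # XIII  (9 + 4 = 13)
```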

Worse, doing any more complicated maths is even harder still using the Roman number representation. See if you can work out how to multiply Roman numbers. The Roman number system doesn’t help at all. The only really easy way is to just repeatedly add (so III x VI is VI + VI + VI). That just isn’t practical for large numbers. Try it on XXIII x LXVI. There are other possible ways, including one that is actually based on the binary multiplication algorithms computers use – multiplying and dividing repeatedly by 2. See if you can work out how to do it. Whatever way you do it, it’s clear that the number system the Romans chose made maths hard for them to do!
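For completeness, here is the repeated-addition way as a sketch (using roman_add from above, and numerals without the IV abbreviations). It works, but watch how many additions it needs:

```python
# Multiplication by repeated addition: III x VI is VI + VI + VI.
# Slow for big numbers - which is rather the point.

AS_IS = {"I": "I", "V": "I" * 5, "X": "I" * 10, "L": "I" * 50,
         "C": "I" * 100, "D": "I" * 500, "M": "I" * 1000}

def roman_multiply(a, b):
    count = "".join(AS_IS[symbol] for symbol in a)  # a written out in unary
    total = ""
    for _ in count:                                 # one addition per I
        total = roman_add(total, b)
    return total

print(roman_multiply("III", "VI"))   # XVIII  (3 x 6 = 18)
```

Multiplying XXIII by LXVI this way takes 23 separate additions, which is exactly why it isn't practical for large numbers.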

A good representation makes maths easy. A bad one makes it much harder to do.

Image by Michael Kauer from Pixabay

Luckily, Indian and Arab scholars understood that the representation they used mattered. They invented, and spread, the Hindu-Arabic numbers and decimal system we use today. What is special about it is that rather than introducing new symbols for bigger and bigger numbers, the position of a symbol is used instead. As we go from nine to ten we go back to the start of our symbols, from 9 back to 0, but stick a 1 in a new 10s column to count how many 10s we have. Counting is still pretty easy, but suddenly not only is the algorithm for addition straightforward, we can come up with fairly simple algorithms for multiplication and division too. They are the algorithms you learn at school – though, as with any algorithm, making sure you follow the steps exactly and don’t miss any is hard for a human (unlike for a computer). That is why we tend to find learning maths hard at first and it gets easier the more we practise.
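You can see how little work the school column-addition algorithm needs once positions do the job of the extra symbols. A sketch, on numbers written as strings of digits:

```python
# Column addition in the decimal positional system - a sketch only.
# Add digit by digit from the right, carrying a 1 when a column reaches ten.

def column_add(a, b):
    digits_a, digits_b = list(a), list(b)
    answer, carry = "", 0
    while digits_a or digits_b or carry:
        column = carry
        if digits_a:
            column += int(digits_a.pop())   # rightmost remaining digit of a
        if digits_b:
            column += int(digits_b.pop())   # rightmost remaining digit of b
        answer = str(column % 10) + answer  # the digit to write down
        carry = column // 10                # the 1 (if any) to carry
    return answer

print(column_add("23", "65"))    # 88  (the same numbers as XXIII and LXV)
print(column_add("189", "36"))   # 225
```

Compare that with the Roman addition sketch above: here the representation does most of the work for us.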

In fact Romans needing to do serious maths probably used a variation of an abacus, representing numbers with stones. They would do a calculation on the abacus and then convert the answer back into the Roman number system. And guess what? The Roman abacus uses columns to represent larger numbers in a very similar way to the Hindu-Arabic system. The Romans understood that representation matters too.

Image by Hans from Pixabay

Sometimes things are hard to do just because we make them hard! The secret of coming up with good algorithms is often to come up with a good representation first. In programming too, if you come up with a good way to represent data – a good data structure – it is often then much easier to write an efficient program.


This article was first published on the original CS4FN website.


EPSRC supports this blog through research grant EP/W033615/1.