The joke Turing test

A funny thing happened on the way to the computer

by Peter W. McOwan, Queen Mary University of London

(from the archive)

Laugh and the world laughs with you, they say, but what if you’re a computer? Can a computer have a ‘sense of humour’?

Computer-generated jokes can do more than give us a laugh. Human language in jokes is often ambiguous: words can have two meanings. For example, the word ‘bore’ can mean a person who is uninteresting, or can be to do with drilling … and if spoken it could be about a male pig (a ‘boar’). It’s often this slip between the meanings of words that makes jokes work (work that joke out for yourself). Understanding how human humour works, and building a computer program that can make us laugh, will give us a better understanding of how the human mind works … and human minds are never boring.

Many researchers believe that jokes come from the unexpected. As humans we have brains that try to ‘predict the future’: when catching a fast ball, for example, our brains use a simple learned mathematical model of the physics to predict where the ball will be, so we can catch it. Similarly, in stories we have a feel for where the plot should be going, and when it takes an unexpected turn we often find that funny. The shaggy dog story is an example: a long series of story fragments that build our expectations, only for the ending to prove us wrong. We laugh (or groan) when the unexpected twist occurs. It’s like the ball suddenly doing three loop-the-loops then stopping in mid-air. It’s not what we expect. It’s against the rules, and we see that as funny.

Some artificial intelligence researchers who are interested in understanding how language works look at jokes as a way to understand how we use language. Graeme Ritchie was one early such researcher, and funnily enough he presented his work at an April Fools’ Day Workshop on Computational Humour. Ritchie looked at puns: simple gags that work by a play on words, and created a computer program called JAPE that generates jokes.

How do we know if a computer has a sense of humour? Well, how would we know a human comic had one? We’d get them to tell a joke. Now suppose we had a test with a set of jokes, some made by humans and some by computers, and suppose we couldn’t tell the difference. If you can’t tell which is computer generated and which is human generated, then the argument goes that the computer program must, in some way, have captured the human ability. This is called a Turing Test, after the computer scientist Alan Turing. The original idea was to use it as a test for intelligence, but we can use the same idea as a test for the ability to be funny too.

So let’s finish with a joke (and a test). Which of the following is a joke created by a computer program following Ritchie’s theory of puns, and which is a human’s attempt? Will humans or machines have the last laugh on this test?

Have your vote: which of these two jokes do you think was written by a computer, and which by a human?


1) What’s fast and wiry?

… An aircraft hanger!


2) What’s green and bounces?

… A spring cabbage!

Make your choice before scrolling down to find the answer.




The answers

Could you tell which of the two jokes was written by a human and which by a computer?

Lots of cs4fn readers voted over several years and the voting went:

  • 58% of votes cast believed the aircraft hanger joke was computer generated
  • 42% of votes cast believed the spring cabbage joke was computer generated

In fact …

  • The aircraft hanger joke was the work of a computer.
  • The spring cabbage joke was the human generated cracker.

If the voters were doing no better than guessing then the votes would have split about 50-50: no better than tossing a coin to decide. That would have meant the computer was doing as well at being funny as the human. A vote share of 58-42 suggests (on the basis of this one joke only) that the computer is getting there, but perhaps doesn’t quite have as good a sense of humour as the human who invented the spring cabbage joke. A real test would use lots more jokes, of course. In a real experiment it would also be important that the jokes were not only generated by the human and the computer but selected by them too (or possibly selected at random from ones they each picked out as their best). By using ones we selected, our own sense of humour could be getting in the way of a fair test.

The Chinese room: zombie attack!

by Paul Curzon, Queen Mary University of London

(From the cs4fn archive)

Iain M. Banks’s science fiction novels about ‘The Culture’ imagine a universe inhabited (and largely run) by ‘Minds’. These are incredibly intelligent machines – mainly spaceships – that are also independently thinking, conscious beings with their own personalities. From the replicants in Blade Runner and the robots in Star Wars to Iain M. Banks’s Minds, science fiction is full of intelligent machines. Could we ever really create a machine with a mind: not just a computer that computes, but one that really thinks? Philosophers have been arguing about it for centuries. Things came to a head when the philosopher John Searle came up with a thought experiment called the ‘Chinese room’. He claims it gives a cast-iron argument that programmed ‘Minds’ can never exist. Are the computer scientists who are trying to build real artificial intelligences wasting their time? Or could zombies lurch to the rescue?

The Shaolin warrior monk

Imagine that the galaxy is populated by an advanced civilisation that has solved the problem of creating artificial intelligence programs. Wanting to observe us more closely, they build a replicant that looks, dresses and moves just like a Shaolin warrior monk (it has to protect itself, and the aliens watch too much TV!). They create a program for it that encodes the rules of Chinese. The machine is dispatched to Earth. Claiming to have taken a vow of silence, it does not speak (the aliens weren’t hot on accents). It reads Chinese characters written by the earthlings, then follows the instructions in its Chinese program that tell it which Chinese characters to write in response. It duly has written conversations with all the earthlings it meets as it wanders the planet, leaving them in no doubt that they have been conversing with a real human Chinese speaker.

The question is, is that machine monk really a Mind? Does it really understand Chinese or is it just simulating that ability?

The Chinese room

Searle answers this by imagining a room in which a human sits. She speaks no Chinese but instead has a book of rules – the aliens’ computer program written out in English. People pass in Chinese symbols through a slot. She looks them up in the book and it tells her the Chinese symbols to pass back out. As she doesn’t understand Chinese she has no idea what the symbols coming in or going out mean. She is just uncomprehendingly following the book. Yet to the outside world she seems to be just as much a native speaker as that machine monk. She is simulating the ability to understand Chinese. As she’s using the same program as the monk, doing exactly what it would do, it follows that the machine monk is also just simulating intelligence. Therefore programs cannot understand. They cannot have a mind.

Is that machine monk a Mind?

Searle’s argument is built on some assumptions. Programs are ‘syntactic devices’: that just means they move symbols around, swapping them for others. They do it without giving those symbols any meaning. A human mind, on the other hand, works with ‘semantics’ – the meanings of symbols, not just the symbols themselves. We understand what the symbols mean. The Chinese room is supposed to show you can’t get meaning by pushing symbols around. As any future artificial intelligence will be based on programs pushing symbols around, it will not be a Mind that understands what it is doing.

The zombies are coming

So is this argument really cast iron? It has generated lots of debate, virtually all of it aiming to prove Searle wrong. The counter-arguments are varied and even the zombies have piled in to fight the cause: philosophical ones at least. What is a philosophical zombie? It’s just a human with no consciousness, no mind. One way to attack Searle’s argument is to attack the assumptions. That’s what the zombies are there to do. If the assumptions aren’t actually true then the argument falls apart. According to Searle, human brains do something more than push symbols about; they have a way of working with meaning. However, there can’t be a way of telling that by talking to one, as otherwise it could have been used to tell that the machine monk wasn’t a mind.

Imagine then, there has been a nuclear accident and lots of babies are born with a genetic mutation that makes them zombies. They have no mind so no ability to understand meaning. Despite that they act exactly like humans: so much so that there is no way to tell zombies and humans apart. The zombies grow up, marry and have zombie children.

Presumably zombie brains are simpler than human ones – they don’t have whatever complication it is that introduces minds. Being simpler they have a fitness advantage that will allow them to out-compete humans. They won’t need to roam the streets killing humans to take over the world. If they wait long enough and keep having children, natural selection will do it for them.

The zombies are here

The point is it could have already happened. We could all be zombies but just don’t know it. We think we are conscious but that could just be an illusion – another simulation. We have no way to prove we are not zombies and if we could be zombies then Searle’s assumption that we are different to machines may not be true. The Chinese room argument falls apart.

Does it matter?

The arguments and counter-arguments continue. To an engineer trying to build an artificial intelligence this actually doesn’t matter. Whether you have built a Mind or just something that exactly simulates one makes no practical difference. It makes a big difference to philosophers, though, and to our understanding of what it means to be human.

Let’s leave the last word to Alan Turing. He pointed out 30 years before the Chinese room was invented that it’s generally considered polite to assume that other humans are Minds like us (not zombies). If we do end up with machine intelligences so good we can’t tell they aren’t human, it would be polite to extend the assumption to them too. That would surely be the only humane thing to do.



The paranoid program

by Paul Curzon, Queen Mary University of London

One of the greatest characters in Douglas Adams’ Hitchhiker’s Guide to the Galaxy science fiction radio series, books and film was Marvin the Paranoid Android. Marvin wasn’t actually paranoid, though. Rather, he was very, very depressed. This was because, as he often noted, he had ‘a brain the size of a planet’ but was constantly given trivial and uninteresting jobs to do. Marvin was fiction. One of the first real computer programs able to converse with humans, PARRY, did aim to behave in a paranoid way, however.

PARRY was in part inspired by the earlier ELIZA program. Both were early attempts to write what we would now call chatbots: programs that can have conversations with humans. Chatbots are now a major research area within Natural Language Processing. Modern chatbot programs rely on machine learning to learn rules from real conversations that tell them what to say in different situations. Early programs relied on rules hand-written by the programmer. ELIZA, written by Joseph Weizenbaum, was the most successful early program to do this and fooled people into thinking they were conversing with a human. One set of rules ELIZA could use, called DOCTOR, allowed it to behave like a therapist of the kind popular at the time who just echoed back things their patient said. Weizenbaum’s aim was not actually to fool people, as such, but to show how trivial human-computer conversation was: a relatively simple approach, where the program looks for trigger words and uses them to choose pre-programmed responses, can lead to realistic-seeming conversation.

PARRY was more serious in its aim. It was written in the early 1970s by Kenneth Colby, a psychiatrist at Stanford. He was trying to simulate the behaviour of a person suffering from paranoid schizophrenia. The condition involves symptoms including the person believing that others have hostile intentions towards them: innocent things other people say are seen as hostile even when no hostility was intended.

PARRY was based on a simple model of how those with the condition were thought to behave. Writing programs that simulate something being studied is one of the ways computer science has added to the way we do science. If you fully understand a phenomenon, and have embodied that understanding in a model that describes it, then you should be able to write a program that simulates that phenomenon. Once you have written the program you can test it against reality to see if it behaves the same way. If there are differences, this suggests the model, and so your understanding, is not yet fully accurate. The model needs improving to deal with the differences. PARRY was an attempt to do this in the area of psychiatry. Schizophrenia is not in itself well-defined: there is no objective test to diagnose it. Psychiatrists come to a conclusion just by observing patients, based on their experience. Could a program display convincing behaviours?

It was tested using a variation of the Turing Test: Alan Turing’s suggested way to tell whether a program could be considered intelligent. He suggested having humans and programs chat to a panel of judges via a computer interface. If the judges cannot accurately tell them apart, then he suggested you should accept the programs as intelligent. With PARRY, rather than testing whether the program was intelligent, the aim was to find out if it could be distinguished from real people with the condition. A series of psychiatrists were therefore allowed to chat with a series of runs of the program, as well as with actual people diagnosed with paranoid schizophrenia. All conversations were through a computer. The psychiatrists were not told in advance which was which. Other psychiatrists were later allowed to read the transcripts of those conversations. All were asked to pick out the people and the programs. The result was that they could only correctly tell which was a human and which was PARRY about half the time. As that is about as good as tossing a coin, it suggests the model of behaviour was convincing.

As ELIZA was simulating a mental health doctor and PARRY a patient someone had the idea of letting them talk to each other. ELIZA (as the DOCTOR) was given the chance to chat with PARRY several times. You can read one of the conversations between them here. Do they seem believably human? Personally, I think PARRY comes across more convincingly human-like, paranoid or not!


Activity for you to do…

If you can program, why not have a go at writing your own chatbot? If you can’t, writing a simple chatbot is quite a good project to learn with, as long as you start simple with fixed conversations. As you make it more complex it can, like ELIZA and PARRY, be based on looking for keywords in the things the other person types, together with template responses, as well as some fixed starter questions also used to change the subject. It is easier if you stick to a single area of interest (make it football mad, for example): “What’s your favourite team?” … “Liverpool” … “I like Liverpool because of Klopp, but I support Arsenal.” …”What do you think of Arsenal?” … There is a sketch of the idea below.
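Here is a minimal sketch of that keyword-trigger idea in Python. The rules, teams and replies are made up for illustration, not taken from ELIZA or PARRY themselves:

```python
import random

# Keyword rules: if a trigger word appears in what the user types, pick one
# of the canned replies for it. All teams and replies are made-up examples.
rules = {
    "liverpool": ["I like Liverpool because of Klopp, but I support Arsenal."],
    "arsenal": ["Arsenal are my team!", "I never miss an Arsenal match."],
    "referee": ["Don't get me started on referees..."],
}

# Fixed starter questions, also used to change the subject.
starters = [
    "What's your favourite team?",
    "What do you think of Arsenal?",
    "Did you watch the match last night?",
]

def reply(user_input):
    text = user_input.lower()
    for trigger, responses in rules.items():
        if trigger in text:
            # Respond to the keyword, then steer the chat with a new question.
            return random.choice(responses) + " " + random.choice(starters)
    # No keyword matched: just change the subject.
    return random.choice(starters)

print(starters[0])
while True:
    typed = input("> ")
    if typed.lower() in ("bye", "quit"):
        print("Bye!")
        break
    print(reply(typed))
```

Even a handful of rules like this can keep a conversation going for a surprisingly long time, which was exactly Weizenbaum’s point.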

Alternatively, perhaps you could write a chatbot to bring Marvin to life, depressed about everything he is asked to do, if that is not too depressingly simple, should you have a brain the size of a planet.



How does Santa do it?

Fast yuletide algorithms to visit all those chimneys in time

by Paul Curzon, Queen Mary University of London

Lots of Santas in a line: image by Thomas Ulrich from Pixabay

How does Santa do it? How does he visit all those children, all those chimneys, in just one night? My theory is he combines a special Scandinavian super-power with some computational wizardry.

There are about 2 billion children in the world and Santa visits them all. Clearly he has magic (flying reindeer remember) to help him do it but what kind of magic (beyond the reindeer)? And is it all about magic? Some have suggested he stops time, or moves through other dimensions, others that he just travels at an amazingly fast speed (Speedy Gonzales or The Flash style). Perhaps though he uses computer science too (though by that I don’t mean computer technology, just the power of computation).

The problem can be thought of as a computational one. The task is to visit, let’s say, a billion homes (assuming an average of 2 children per household) as fast as possible. The standard solution assumes Santa visits them one at a time, in order. This is what is called a linear algorithm, and linear algorithms are slow. If there are n pieces of data to process (here, chimneys to descend) then we write this as having efficiency O(n). This way of writing about efficiency is called Big-O notation. O(n) just means that as n increases the amount of work increases proportionately. Double the number of children and you double the workload for Santa. Currently the population doubles every 60 or 70 years or so, so clearly Santa needs to think in this way or he will eventually fail to keep up, whatever magic he uses.

Perhaps Santa uses teams of Elves, as in the film Arthur Christmas, so that at each location he can deliver presents to, say, 1000 homes at once (though then it is the 1000 Elf helpers doing the delivering, not Santa, which goes against all current wisdom that Santa does it himself). It would apparently speed things up enormously: 1000 times faster. However, in computational terms that barely makes a difference. It is still a linear order of efficiency: it is still O(n), as the work still goes up proportionately with n. Double the population and Santa is still in trouble, as his one-night workload doubles too. O(2n) and O(1000n) both simplify to mean exactly the same as O(n). Computationally it makes little difference, and if their algorithms are to solve big problems, computer scientists have to think in terms of dealing with data doubling, doubling and doubling again, just as Santa has had to over the centuries.

Divide and Conquer problem solving

When a computer scientist has a problem like this to solve, one of the first tools to reach for is called divide and conquer problem solving. It is a way of inventing lightning-fast algorithms whose work barely increases as the size of the problem doubles. The secret is to find a way to convert the problem into one that is half the size of the original, but (and this is key) that is otherwise exactly the same problem. If it is the same problem (just smaller) then you can solve the resulting smaller problems in the same way. You keep splitting until the problems are so small they are trivial. That turns out to be a massively fast way to get a job done. It does not have to be computers doing the divide and conquer: I’ve used the approach to sort piles of hundreds and hundreds of exam scripts into order quickly, for example.

My theory is that divide and conquer is what Santa does, though it requires a particular superhero power too to work in his context, but then he is magical, so why not. How do I think it works? I think Santa is capable of duplicating himself. There is a precedent for this in the superhero world. Norse god Loki is able to copy himself to get out of scrapes, and since Santa is from the same part of the world it seems likely he could have a similar power.

If he copied himself twice then one of him could do the Northern Hemisphere and the other the Southern Hemisphere. The problem has been split into an identical problem (delivering presents to lots of children) that is half the size for each Santa (each has only half the world, so half as many children to cover). That would allow him to cover the world twice as fast. However, that is really no different to getting a couple of Elves to do the work. It is still O(n) in terms of the efficiency with which the work is done. As the population doubles he quickly ends up back in the same situation as before: too much work for each Santa. Likewise, if he made a fixed number of 1000 copies of himself it would be similar to having 1000 Elves doing the deliveries. The work still increases proportional to the number of deliveries. Double the population and you still double the time it takes.

Double Santa and double again (and keep doubling)

So Santa needs to do better than that if he is to keep up with the population explosion. But divide and conquer doesn’t say halve the problem once: it says solve the new problems in the same way. So each new Santa has to copy themselves too! As they are identical copies of the original, surely they can do that as easily as the first one could. Those new Santas have to do the same, and so on. They all split again and again until each has a simple problem to solve that they can just do. That might be having a single village to cover, or perhaps a single house. At that point the copying can stop and the job of delivering presents can actually be done. Each drops down a chimney and leaves the presents. (Now you can see how he manages to eat all those mince pies too!)

An important thing to remember is that that is not the end of it. The world is now full of Santas. Before the night is over and the job done, each Santa has to merge back with the one they split from, recursively all the way back to the original Santa. Otherwise come Christmas Day we wouldn’t be able to move for Santas. Better leave 30 minutes for that at the end!

Does this make a big difference? Well, yes (as long as all the copying can be done quickly and there is an organised way to split up the world). It makes a massive difference. The key is in thinking about how often the Santas double in number, so how often the problem is halved in size.

We start with 1 Santa who duplicates to 2, but now both can duplicate to 4, then to 8, 16, and after only 5 splittings there are already 32 Santas, then 64, 128, 256, 512 Santas, and after only 10 splittings we have over a 1000 Santas (1024 to be precise). As we saw that isn’t enough so they keep splitting. Following the same pattern, after 20 splittings we have over a million Santas to do the job. After only 30 rounds of splittings we have a billion Santas, so each can deal with a single family: a trivial problem for each.

So if a Santa can duplicate himself (along with the sleigh and reindeer) in a minute or so (Loki does it in a fraction of a second, so this is probably a massive over-estimate and Santa can too), we have enough Santas to do the job in about half an hour, leaving each plenty of time to do the delivery to their destination. The splitting can also be done on the way, so each Santa travels only as far as needed. Importantly, this splitting process is NOT linear. It is O(log₂ n) rather than O(n), and log₂ n is massively smaller than n for large n. It means that if we double the number of households to visit due to the population explosion, the number of rounds of splitting does not need to double: the Santas just have to do one more round of splitting to cover it. The calculation log₂ n (the logarithm to base 2 of n) is just a mathematician’s way of saying how many times you can halve the number n before you get to 1 (or, equivalently, how many times you double from 1 before you get up to n). 1024 can be halved 10 times, so log₂ 1024 is 10. A billion can be halved about 30 times, so log₂ (1 billion) is about 30. Instead of a billion pieces of work we do only 30 for the splitting. Double the chimneys to 2 billion and you need only one more round, for a total of 31 splittings.
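You can check the halving claim with a few lines of Python (a sketch that counts halvings using whole-number division):

```python
# How many times can you halve n before reaching 1? That count is log2(n).
def halvings(n):
    count = 0
    while n > 1:
        n //= 2   # halve, ignoring remainders
        count += 1
    return count

print(halvings(1024))           # 10
print(halvings(1_000_000_000))  # 29, i.e. about 30
```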

In computer terms, divide and conquer algorithms involve methods (i.e. functions or procedures) calling themselves multiple times. Each call of the method works on, for example, half the problem. So a method to sort data might first divide the data in half. One half is passed to one new call (copy) of the same method to sort in the same way; the other half is passed to another call (copy). They do the same, calling more copies to work on half of their part of the data, until eventually each has only one piece of data to sort (which is trivial). Work then has to be done merging the sorted halves back into sorted wholes. A billion pieces of data are sorted in only 30 rounds of recursive splitting. Double to 2 billion pieces of data and you need just 1 more round of splitting to get the sorting done.
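That description is essentially merge sort, the classic divide and conquer algorithm. Here is a minimal sketch of it in Python:

```python
# Merge sort: split the data in half, sort each half by calling ourselves
# (the "duplicating Santas"), then merge the sorted halves back together.
def merge_sort(data):
    if len(data) <= 1:                # trivially small problem: nothing to do
        return data
    mid = len(data) // 2
    left = merge_sort(data[:mid])     # one "copy" handles the first half
    right = merge_sort(data[mid:])    # another "copy" handles the second half
    return merge(left, right)         # merge back, like the Santas re-merging

def merge(left, right):
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]   # one side is empty by now

print(merge_sort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```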

Living in a simulation

If this mechanism for Santa’s deliveries still seems improbable, then consider that for all we know the reality of our universe may actually be a simulation (Matrix-like) in some other-dimensional computer. If so, we are each just software in that simulation, each of us a method executing to make decisions about what we do in our virtual world. If that is the nature of reality, then Santa is also just a (special yuletide) software routine, and his duplicating feat is just a method calling itself recursively (as with the sort algorithm). Then the whole Christmas delivery done this way is just a simple divide and conquer algorithm running in a computer…

Given the other ways suggested for Santa to do his Christmas miracle seem even more improbable, that suggests to me that the existence of Santa provides strong evidence that we are all just software in a simulation. Not that that would make our reality, or Christmas, any less special.



Pepper’s Ghost: an 1860s illusion used in ‘head-up displays’

Ghostly stage: image by NoName_13 from Pixabay (a woman in historic garb, an ornate candlestick, a grand chair and a mirror with grey curtains pulled back)

When Pepper’s Ghost first appeared on stage as part of one of Professor Pepper’s shows on Christmas Eve, 1862, it stunned audiences. This was more than just magic: it was miraculous. It was so amazing that some spiritualists were convinced Pepper had discovered a way of really summoning spirits. A ghostly figure appeared on the stage out of thin air, interacted with the other characters on the stage and then disappeared in an instant. This was no dark séance, where ghostly effects happen in a darkened room and who knows what tricks are being pulled in the dark to cause them. Neither was it modern-day special effects, where it is all done on film or in the virtual world of a computer. This was on a brightly lit stage in front of everyone’s eyes…

Stage setup for Pepper’s Ghost, from Wikipedia, Public Domain

Switch to the modern day and similar ghostly magic is now being used by fighter pilots. Have the military been funding X-Files research? Well, maybe, but there is nothing supernatural about Pepper’s Ghost. It is just an illusion. The show it first appeared in was a science show, though it went on to amaze audiences as part of magic shows for years to come, and can still be found, for example, in Disney theme parks, and on stage making the virtual band Gorillaz come to life.

Today’s “supernatural” often becomes tomorrow’s reality, thanks to technology. With Pepper’s Ghost, 19th century magic has in fact become enormously useful 21st century hi-tech. 19th century magicians were more than just showmen; they were inventors, precision engineers and scientists, making use of the latest scientific results and frequently pushing technology forward themselves. People often think of magicians as secretive, but they were also businessmen, often patenting the inventions behind their tricks, making them available for all to see but also ensuring their rivals could not use them without permission. The magic behind Pepper’s Ghost was patented by Henry Dircks, a Liverpudlian engineer, in 1863 as a theatrical effect, though it was probably invented much earlier: it was described in an Italian book back in 1558 by Baptista Porta.

Through the looking glass

So what was Pepper’s Ghost? It’s a cliché to say that “it’s all done with mirrors”, but it is quite amazing what you can do with them if you both understand their physics and are innovative enough to think up extraordinary ways to use old ideas. Pepper’s Ghost worked in a completely different way to the normal use of mirrors in tricks, though. It was done using a normal sheet of glass, not a silvered mirror at all. If you have ever looked at your image reflected in a window on a dark night, you have seen a weak version of Pepper’s Ghost. The trick was to place a large, spotlessly clean sheet of glass at an angle in front of the stage, between the actors and the audience. By using the stage lights in just the right way, it becomes a half mirror. Not only can the stage be seen through the glass, but so can anything placed at the right position off the stage where the glass is pointing. Better still, because of the physics of reflection, the reflected images don’t seem to be on the surface of the glass at all, but the same distance behind it as the objects are in front. The actor playing the ghost would perform in a hidden black area so that he or she was the only thing that reflected light from that area. When the ghost was to appear, a very strong light was shone on the actor. Suddenly the reflection would appear – and as long as they were standing the right distance from the glass, they could appear anywhere desired on the stage. To make them disappear in an instant, the light was just switched off.

Jump to the 21st century and a similar technique has reappeared. Now the ghosts are instrument panels. A problem with controlling a fighter plane is that you don’t have time to look down. You really want the data you need to keep control of your plane wherever you are looking outside the plane. It needs not just to be in the right position on the screen but at the right depth, so you don’t need to refocus your eyes. Most importantly, you must still be able to see out of the plane in an unrestricted way… You need the Pepper’s Ghost effect. That is all ‘head-up’ displays do, though the precise technology used varies.

Satnav systems in cars are very dangerous if you have to keep looking down to see where the thing actually means you to turn. “What? This left turn or the next one?” Use a head-up display and the instructions can hover in front of you, out on the road where your eyes are focussed. Better still, you can project a yellow line (say) as though it were on the road, showing you the way off into the distance: follow the yellow brick road… Oh, and wasn’t the Wizard of Oz another great magician who used science and engineering rather than magic dust?

Paul Curzon, Queen Mary University of London (first published in 2007)


Making sense of squishiness – 3D modelling the natural world

Look out the window at the human-made world. It’s full of hard, geometric shapes – our buildings, the roads, our cars. They are made of solid things like tarmac, brick and metal that are designed to be rigid and stay that way. The natural world is nothing like that though. Things bend, stretch and squish in response to the forces around them. That provides a whole bunch of fascinating problems for computer scientists like Lourdes Agapito of Queen Mary, University of London to solve.

Computer scientists interested in creating 3-dimensional models of the world have so far mainly concentrated on modelling the hard things. Why? Because they are easier! You can see the results in computer-animated films like Toy Story, and the 3D worlds like Second Life your avatar inhabits. Even the soft things tend to be rigid.

Lourdes works in this general area creating 3D computer models, but she wants to solve the problems of creating them automatically just from the flat images in videos and is specifically interested in things that deform – the squishy things.

Look out the window and watch the world go by. As you watch a woman walk past you have no problem knowing that you are looking at the same person as you were a second ago – even if she becomes partially hidden as she walks behind the post box and turns to post a letter. The sun goes behind a cloud and the scene is suddenly darker. It starts to rain and she opens an umbrella. You can still recognise her as the same object. Your brain is pulling some amazing tricks to make this seem so mundane. Essentially it is creating a model of the world – identifying all the 3-dimensional objects that you see and tracking them over time. If we can do it, why can’t a computer?

Unlike hard surfaces, deformable ones don’t look the same from one still to the next. You don’t just have to worry about changes in lighting, about objects being partially hidden, or about them appearing different from a different angle: the object itself will be a different shape from one still to the next. That makes it far harder to work out which bits of one image are actually the same as the ones in the next. Lourdes has taken on a seriously hard problem.

Existing vision systems that create 3D objects have made things easier for themselves by using existing models. If a computer already has a model of a cube to compare what it sees with, then spotting a cube in the image stream is much easier than working it out from scratch. That doesn’t really generalise to deformable objects though because they vary too much. Another approach, used by the film industry, is to put highly visible markers on objects so that those markers can be tracked. That doesn’t help if you just want to point a camera out the window at whatever passes by though.

Lourdes’ aim is to be able to point a camera at a deformable object and have a computer vision system create a 3D model simply by analysing the images. No markers, no existing models of what might be there, not even previous films to train it with: just the video itself. So far her team have created a system that can do this in some situations, such as with faces as a person changes their expression. Their next goal is to make the system work for a whole person as they are filmed doing arbitrary things. It’s the technical challenge that inspires Lourdes the most, though once the problems of deformable objects are solved there are applications, of course. One immediately obvious area is in operating theatres. Keyhole surgery is now very common. It involves a surgeon operating remotely, seeing what they are doing by looking at flat video images from a fibre optic probe inside the body of the person being operated on. The image is flat, but the inside of the person that the surgeon is trying to make cuts in is 3-dimensional. It would be far less error-prone if what the surgeon was looking at was an accurate 3D model built from the video feed rather than just a flat picture. Of course, the inside of your body is made of exactly the kind of squishy deformable surfaces that Lourdes is interested in. Get the computer science right and technologies like this will save lives.

At the same time as tackling seriously hard if squishy computer science problems, Lourdes is also a mother of three. A major reason she can fit it all in, as she points out, is that she has a very supportive partner who shares in the childcare. Without him it would be impossible to balance all the work involved in leading a top European research team. It’s also important to get away from work sometimes. Running regularly helps Lourdes cope with the pressures and as we write she is about to run her first half marathon.

Lourdes may or may not be the person who turns her team’s solutions into the applications that in the future save lives in operating theatres, spot suspicious behaviour in CCTV footage or allow film-makers to quickly animate the actions of actors. Whoever does create the applications, we still need people like Lourdes who are just excited about solving the fundamental problems in the first place.

Paul Curzon, Queen Mary University of London


Watching whales well – the travelling salesman problem

by Paul Curzon, Queen Mary University of London

Sasha owns a new tour company and her first tours are to the Azores, a group of volcanic islands in the Atlantic Ocean, off the coast of Portugal. They are one of the best places in the world to see whales and dolphins, so lots of people are signing up to go.

Sasha’s tour as advertised is to visit all nine islands in the Azores: São Miguel, Terceira, Faial, Pico, São Jorge, Santa Maria, Graciosa, Flores and Corvo. The holidaymakers go whale watching as well as visiting the attractions on each island, like swimming in the lava pools. Sasha’s first problem, though, is to sort out the itinerary. She has to work out the best order to visit the islands so her customers spend as little time as possible travelling, leaving more for watching whales and visiting volcanoes. She also doesn’t want the tour to go back to the same island twice – and she needs it to end up back at the starting island, São Miguel, for the flight home.

Trouble in paradise

It sounds like it should be easy, but it’s actually an example of a computer science problem that dates back at least to the 1800s. It’s known as ‘The Travelling Salesman Problem’ because it is the same problem a salesman has who wants to visit a series of cities and get back to base at the end of the trip. It is surprisingly difficult.

It’s not that hard to come up with any old answer (just join the dots!), but it’s much tougher to come up with the best answer. Of course a computer scientist doesn’t want to just solve one-off problems like Sasha’s but to come up with a way of solving any variant of the problem. Sasha, of course, agrees – once she’s sorted out the Azores itinerary, she then needs to solve similar problems, like the day trip round São Miguel. Her customers will visit the lakes, the tea factory, the hot spring-fed swimming pool in the botanic gardens and so on. Not only that, once Sasha’s done with the Azores, she then needs to plan a wildlife tour of Florida. Knowing a quick way to do it would help her a lot.

The long way round

No one has yet come up with a good way to solve the Travelling Salesman Problem, though, and it is generally believed to be impossible. You can find the best solution in theory, of course: just try all the alternatives. Sasha could first work out how long the tour is if you go São Miguel, Terceira, Faial, Pico, São Jorge, Santa Maria, Graciosa, Flores, Corvo and back to São Miguel, then work out the time for a different order, swapping Corvo and Flores, say. Then she could try a different route, and keep going until she knew the length of every variation. She would then just pick the best. Trouble is, that takes forever.

Even this small problem with only 9 islands has over 20 000 solutions to check. Go up to a tour of 15 destinations and you have 43 billion calculations to do. Add a few more and it would take centuries for a fast computer running flat out to solve it. Bigger still and you find the computer would have to run for longer than the time left before the end of the universe. Hmmm. It’s a problem then.
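You can check that first count for yourself. This Python sketch fixes São Miguel as the start and end and counts every possible ordering of the remaining eight islands (each route also has a same-length reverse, which is why only about half of them really need checking):

```python
# Count every ordering of the 8 islands after fixing São Miguel as start/end.
from itertools import permutations

islands = ["Terceira", "Faial", "Pico", "São Jorge",
           "Santa Maria", "Graciosa", "Flores", "Corvo"]

routes = list(permutations(islands))
print(len(routes))       # 40320 orderings: 8 factorial
print(len(routes) // 2)  # 20160 once reversed duplicates are ignored
```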

Be greedy

The solution is not to be such a perfectionist and to accept that a good solution will have to be good enough, even though it may not be the absolute best. One way to get a good solution is to use a ‘greedy’ algorithm. You start at São Miguel and just go from there to the nearest island, from there to the nearest island not yet visited, and so on until you have done them all. That would probably work well for the Azores as the islands are in groups, so visiting the close ones in each group together makes sense. It doesn’t guarantee the best answer in all cases, though. A sketch of the idea is below.
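Here is a minimal sketch of the greedy (nearest neighbour) idea in Python. The `distance` function is a hypothetical stand-in for whatever travel times Sasha has looked up:

```python
# Greedy tour: always hop to the closest island not yet visited.
def greedy_tour(start, islands, distance):
    tour = [start]
    unvisited = set(islands) - {start}
    while unvisited:
        nearest = min(unvisited, key=lambda isle: distance(tour[-1], isle))
        tour.append(nearest)
        unvisited.remove(nearest)
    tour.append(start)  # finish back where we began, for the flight home
    return tour
```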

Or just go climb a hill

Another way is to use a version of ‘hill climbing’. Here you take any old route and then try to optimise it by making small changes – swapping pairs of legs over, say: instead of going Faial to Pico and later Corvo to Flores, try substituting Pico to Flores and Faial to Corvo, with the stretch in between the same but in the opposite order. If the change is an improvement, keep it and make later changes to that. Otherwise stick with the original. Either way, keep trying changes on the best solution you’ve found so far, until you run out of time. There is a sketch of this below too.
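A minimal sketch of that hill-climbing idea: the leg swap described above amounts to reversing a stretch of the route (the classic ‘2-opt’ move). Here `tour_length` is assumed to be a function that adds up the travel times around a whole tour:

```python
import random

# Hill climbing: repeatedly try reversing a stretch of the route (a 2-opt
# swap) and keep the change whenever it shortens the tour.
def hill_climb(tour, tour_length, tries=10_000):
    best = tour[:]
    for _ in range(tries):
        # Pick two cut points, leaving the start and end of the tour fixed.
        i, j = sorted(random.sample(range(1, len(best) - 1), 2))
        candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
        if tour_length(candidate) < tour_length(best):
            best = candidate  # an improvement: build on it from now on
    return best
```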

So Sasha may want to run a great tour company but there may not be enough time in the universe for her tours to be guaranteed perfect…unless of course she keeps them very small. After all, just visiting São Miguel and Terceira makes a great holiday anyway.


This article was originally published on the CS4FN website and a copy can also be found on pages 14-15 of issue 10 of the CS4FN magazine, which you can download as a PDF below. All of our free material can be downloaded here.



Recognising (and addressing) bias in facial recognition tech #BlackHistoryMonth

By Jo Brodie and Paul Curzon, Queen Mary University of London

Happy, though surprised, sockets (a unit containing four sockets: 2 USB and 2 for a microphone and speakers). Photo taken by Jo Brodie in 2016 at Gladesmore School in London.

Some people have a neurological condition called face blindness (also known as ‘prosopagnosia’) which means that they are unable to recognise people, even those they know well – this can include their own face in the mirror! They only know who someone is once that person starts to speak; until then they can’t be sure who it is. They can certainly detect faces, though they might struggle to classify them in terms of gender or ethnicity. In general, most people actually have an exceptionally good ability to detect and recognise faces – so good, in fact, that we even detect faces when they’re not actually there. This is called pareidolia: perhaps you see a surprised face in the picture of USB sockets above.

How about computers? There is a lot of hype about face recognition technology as a simple solution to help police forces prevent crime, spot terrorists and catch criminals. What could be bad about being able to pick out wanted people automatically from CCTV images, and so quickly catch them?

What if facial recognition technology isn’t as good at recognising faces as it has sometimes been claimed to be, though? If the technology is being used in the criminal justice system, and gets the identification wrong, this can cause serious problems for people (see Robert Williams’ story in “Facing up to the problems of recognising faces“).

“An audit of commercial facial-analysis tools found that dark-skinned faces are misclassified at a much higher rate than are faces from any other group. Four years on, the study is shaping research, regulation and commercial practices.”

– The unseen Black faces of AI algorithms (19 October 2022) Nature

In 2018 Joy Buolamwini and Timnit Gebru shared the results of research they’d done, testing three different commercial facial recognition systems. They found that these systems were much more likely to wrongly classify darker-skinned female faces compared to lighter- or darker-skinned male faces. In other words, the systems were not reliable. (Read more about their research in “The gender shades audit“).

“The findings raise questions about how today’s neural networks, which … (look for) patterns in huge data sets, are trained and evaluated.”

– Study finds gender and skin-type bias in commercial artificial-intelligence systems (11 February 2018) MIT News

Their work has shown that face recognition systems do have biases and so are currently not at all fit for purpose. There is some good news, though. The three companies whose products they studied made changes to improve their facial recognition systems, and several US cities have already banned the use of this tech in criminal investigations. More cities are calling for it too, and in Europe the EU is moving closer to banning the use of live face recognition technology in public places. Others, however, are still rolling it out. It is important not just to believe the hype about new technology but to make sure we understand its limitations and risks.


Further reading

More technical articles

  • Joy Buolamwini and Timnit Gebru (2018) Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research 81:1-15. [EXTERNAL]
  • The unseen Black faces of AI algorithms (19 October 2022) Nature News & Views. [EXTERNAL]


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.



Happy Hallowe’en

Free spooky puzzles and activities

Visit our sister site Teaching London Computing’s Halloween puzzle area to access our free colouring-in puzzles and activities linked to Halloween.

It includes computational thinking-linked puzzles for Halloween to download and print:

  • “Maggot” logic puzzles: Place 10 Maggots, one in each of the 10 separately coloured areas of the pumpkin picture so that …
  • Colour by number Pixel Puzzles: Colour each square (pixel) according to its number. See our pixel puzzle page for lots more pixel puzzles.
  • Halloween Kriss-Kross: Given a list of words of different lengths, you must fit them all into the grid.
  • Make a Halloween “Useless Machine” coffin. Switch it on and the occupant reaches out of the coffin and switches it off again.
  • Program a pumpkin: Create a programmable paper face light up pumpkin


Devices that work for everyone

Invisible man: image by OpenClipart-Vectors from Pixabay (a cartoon of the invisible man – only the clothes are visible)

In 2009 Desi Cryer, who is Black, shared a light-hearted video with a serious message. He’d bought a new computer with a face-tracking camera… which didn’t track his face at all. It did track his White colleague Wanda’s face though. In the video he asked her to go in front of the camera and move from side to side, and the camera obediently tracked her face – wherever she moved, the camera followed. When Desi moved back in front of the camera it stopped tracking again. He wondered if the computer might be racist…

Another video, this time from 2017, showed a dark-skinned man failing to get a soap dispenser to give him any soap. Nothing happened when he put his hand underneath the sensor, but as soon as his lighter-skinned friend put his hand under it – out popped some soap! The only way the first man could get any soap dispensed was to put a white tissue on his hand first. He wondered if the soap dispenser might be racist…

What’s going on?

Probably no-one set out to maliciously design a racist device, but designers need to check that their products work with a range of different people before putting them on the market. This can save the company embarrassment as well as creating something that more people want to buy.

Sensors working overtime

Both devices use a sensor that is activated (or, in these cases, isn’t) by a signal. Soap dispensers shine a beam of light which bounces off a hand placed below it, and some of that light is reflected back. Paler skin reflects more light than darker skin. Next to the light is a sensor which responds to the reflected light – but if the device was only tested on White people then the sensor wasn’t adjusted for the full range of skin tones, and so won’t respond appropriately to everyone. Similarly, cameras have historically been designed for White skin tones, meaning darker tones are not picked up as well. The toy sketch below shows the kind of thing that goes wrong.
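This is a toy sketch, not how any real dispenser is programmed: the threshold value is made up for illustration. If the trigger level was only ever tuned using tests on paler skin, readings from darker skin can fall below it and the dispenser never fires:

```python
# Hypothetical sensor logic: dispense soap if enough light bounces back.
REFLECTION_THRESHOLD = 60  # made-up units, tuned only on pale-skinned testers

def hand_detected(reflected_light):
    return reflected_light > REFLECTION_THRESHOLD

print(hand_detected(85))  # True: paler skin reflects more of the beam back
print(hand_detected(40))  # False: darker skin reflects less, so no soap
```

Testing with a wider range of people would have revealed that the threshold needed adjusting before the product shipped.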

Things can be improved!

It’s a good idea, when designing something that will be used by lots of different people, to make sure that it will work correctly with everyone. Having a diverse design team and, importantly, making sure that everyone feels empowered to contribute is a good way to start. Another is to test the design with different target audiences early in the design process so that changes can be made before it’s too late. How a company responds to feedback when they’ve made an oversight is also important. In the case of the computer company they acknowledged the problem and went to work to improve the camera’s sensitivity. 

A problem with pulse oximeters

During the coronavirus pandemic many people bought a ‘pulse oximeter’, a device which clips painlessly onto a finger and measures how much oxygen is circulating in your blood (and your pulse). If the oxygen reading became too low, people were advised to go to hospital. Oximeters shine red and infrared light from the top clip through the finger, and the light is absorbed differently depending on how much oxygen is present in the blood. A sensor on the lower clip measures how much light has got through, but the reading can be affected by skin colour (and coloured nail polish). People were concerned that pulse oximeters would overestimate the oxygen reading for someone with darker skin (that is, tell them they had more oxygen than they actually had) and that the devices might not detect a drop in oxygen quickly enough to warn them.

In response the UK Government announced in August 2022 that it would investigate this bias in a range of medical devices to ensure that future devices work effectively for everyone.

Jo Brodie, Queen Mary University of London

This blog is funded by EPSRC on research agreement EP/W033615/1.