The first computer music

by Paul Curzon, Queen Mary University of London

(updated from the archive)

Robot with horn
Image by www_slon_pics from Pixabay

The first recorded music made by a computer program was the result of a flamboyant flourish added at the end of a program that played draughts in the early 1950s. It played God Save the King.

The first computers were developed towards the end of the Second World War to do the number crunching needed to break the German codes. After the War, several groups around the world, including three in the UK, set about building computers. This was still a time when computers filled whole rooms and it was widely believed that a whole country would only need a few. The uses envisioned were mostly heavy-duty number crunching.

A small group of people could see that computers could be much more fun than that. One of them was school teacher Christopher Strachey. After being introduced to the Pilot ACE computer on a visit to the National Physical Laboratory, he set about writing, in his spare time, a program that could play draughts against humans. Unfortunately, the computer didn’t have enough memory for his program.

He knew Alan Turing, one of those wartime pioneers, from when they were both at university before the War. Luckily, he heard that Turing, now at the University of Manchester, was working on the new Ferranti Mark I computer, which would have more memory, so he wrote to ask if he could experiment with it. Turing invited him to visit, and on the second visit, having had a chance to write a version of the program for the new machine, he was given the chance to get his draughts program working on the Mark I. He was left to get on with it that evening.

He astonished everyone the next morning by having the program working and ready to demonstrate. He had worked through the night to debug it. Not only that, as it finished running, to everyone’s surprise, the computer played the National Anthem, God Save the King. As Frank Cooper, one of those there at the time said: “We were all agog to know how this had been done.” Strachey’s reputation as one of the first wizard programmers was sealed.

The reason it was possible to play sounds on the computer at all had nothing to do with music. A special command called ‘Hoot’ had been included in the set of instructions programmers could use (called the ‘order code’ at the time) when programming the Mark I. The computer was connected to a loudspeaker, and Hoot was used to signal things like the end of the program, alerting the operators. Apparently it hadn’t occurred to anyone there but Strachey that this was everything you needed to create the first computer music.
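
The hoot trick can be sketched in modern code: repeating a single click fast enough is heard as a tone, so a tune is just a list of repeat rates and durations. Here is a minimal Python sketch (the note list is illustrative, not Strachey’s actual program) that writes such a tune to a WAV file:

```python
import struct
import wave

RATE = 8000  # samples per second

def hoot_tone(frequency_hz, duration_s):
    """Approximate a tone made by repeating a single 'hoot' pulse.

    Repeating the pulse N times a second is heard as a tone of
    N hertz: effectively a square wave.
    """
    period = RATE / frequency_hz  # samples per pulse cycle
    samples = []
    for i in range(int(RATE * duration_s)):
        # Loud for the first half of each cycle, silent for the second
        samples.append(20000 if (i % period) < period / 2 else 0)
    return samples

# A made-up tune: (frequency in hertz, duration in seconds) per note
notes = [(392, 0.4), (392, 0.4), (440, 0.6), (392, 0.6), (494, 0.6), (523, 1.0)]

audio = []
for freq, dur in notes:
    audio.extend(hoot_tone(freq, dur))
    audio.extend([0] * int(RATE * 0.05))  # short silence between notes

with wave.open("hoots.wav", "w") as f:
    f.setnchannels(1)   # mono
    f.setsampwidth(2)   # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(b"".join(struct.pack("<h", s) for s in audio))
```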

He also programmed it to play Baa Baa Black Sheep, and went on to write a more general program that would allow any tune to be played. When a BBC live broadcast unit visited the university in 1951 to see the computer for Children’s Hour, the Mark I gave the first ever broadcast performance of computer music, playing Strachey’s pieces: the UK National Anthem, Baa Baa Black Sheep and In the Mood.

While this was the first recorded computer music it is likely that Strachey was beaten to creating the first actual programmed computer music by a team in Australia who had similar ideas and did a similar thing probably slightly earlier. They used the equivalent hoot on the CSIRAC computer developed there by Trevor Pearcey and programmed by Geoff Hill. Both teams were years ahead of anyone else and it was a long time before anyone took the idea of computer music seriously.

Strachey went on to be a leading figure in the design of programming languages, responsible for many of the key advances that have led to programmers being able to write the vast and complex programs of today.

The recording made of the performance has recently been rediscovered and restored, so you can now listen to the performance yourself.

More on …

Related Magazines …

This blog is funded by UKRI, through grant EP/W033615/1.

Swat a way to drive

by Peter W McOwan, Queen Mary University of London

(updated from the archive)

Flies are small, fast and rather cunning. Try to swat one and you will see just how efficient their brain is, even though it has so few brain cells that each one of them can be counted and given a number. A fly’s brain is a wonderful proof that, if you know what you’re doing, you can efficiently perform clever calculations with a minimum of hardware. The average household fly’s ability to detect movement in the surrounding environment, whether it’s a fly swat or your hand, is due to some cunning wiring in their brain.

Speedy calculations

Movement is measured by detecting something changing position over time. The ratio distance/time gives us the speed, and flies have built-in speed detectors. The fly’s eye is a wonderful piece of optical engineering in itself, with hundreds of lenses forming the mosaic of the compound eye. Each lens looks at a different part of the surrounding world, and so each registers whether something is at a particular position in space.

All the lenses are also linked by a series of nerve cells. These nerve cells each have a different delay. That means a signal takes longer to pass along one nerve than another. When a lens spots an object in its part of the world, say position A, this causes a signal to fire into the nerve cells, and these signals spread out with different delays to the other lenses’ positions.

The separation between the different areas that the lenses view (distance) and the delays in the connecting nerve cells (time) are such that a whole range of possible speeds are coded in the nerve cells. The fly’s brain just has to match the speed of the passing object with one of the speeds that are encoded in the nerve cells. When the object moves from A to B, the fly knows the correct speed if the first delayed signal from position A arrives at the same time as the new signal at position B. The arrival of the two signals is correlated. That means they are linked by a well-defined relation, in this case the speed they are representing.
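
This delay-and-correlate scheme is simple enough to sketch in code. Below is a toy Python version (the signals and delay values are invented for illustration, not measured from any fly): each detector multiplies the delayed signal from position A with the live signal at position B, and the detector whose delay matches the object’s travel time responds most strongly.

```python
def detector_response(signal_a, signal_b, delay):
    """Correlate A's signal, delayed by `delay` time steps, with B's signal."""
    return sum(signal_a[t - delay] * signal_b[t]
               for t in range(delay, len(signal_b)))

# An object passes position A at time step 2 and position B at time step 5,
# so it takes 3 steps to cover the A-to-B distance.
signal_a = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
signal_b = [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]

# A bank of detectors, each tuned to a different delay (i.e. a different speed)
responses = {d: detector_response(signal_a, signal_b, d) for d in range(1, 6)}
best = max(responses, key=responses.get)
print(best)  # → 3: the detector whose delay matches the travel time fires
```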

Do locusts like Star Wars?

Understanding the way that insects see gives us clever new ways to build things, and can also lead to some bizarre experiments. Researchers in Newcastle showed locusts edited highlights from the original Star Wars movie. Why, you might ask? Do locusts enjoy a good science fiction movie? It turns out the researchers were looking to see if locusts could detect collisions, and there are plenty of those in the battles between X-wing fighters and TIE fighters. They also wanted to know if this collision-detecting ability could be turned into a design for a computer chip. The work, part-funded by car-maker Volvo, used such a strange way to examine locusts’ vision that it won an Ig Nobel award in 2005. Ig Nobel awards are presented each year for weird and wonderful scientific experiments, and have the motto ‘Research that makes people laugh then think’. You can find out more on the Ig Nobel awards website.

Car crash: who is to blame?

So what happens if we start to use these insect ‘eye’ detectors in cars, building vehicles that spot looming collisions the way insects do?

We now have smart cars with artificial intelligence (AI) taking over from the driver, either completely or just to avoid hitting other things. An interesting question arises: when an accident does happen, who is to blame? Is it the car driver: are they in charge of the vehicle? Is the AI to blame? And who is responsible for that: the AI itself (if one day we give machines human-like rights)? The car manufacturer? The computer scientists who wrote the program? If we do build cars with fly- or locust-like intelligence, which avoid accidents like flies avoid swatting or spot possible collisions like locusts do, is it the insect whose brain was copied that is to blame?! What will insurance companies decide? What about the courts?

As computer science makes new things possible, society quickly needs to decide how to deal with them. Unlike the smart cars, these decisions aren’t something we can avoid.

cs4fn issue 4 cover
A hoverfly on a leaf

EPSRC supports this blog through research grant EP/W033615/1. 

Future Friendly: Focus on Kerstin Dautenhahn

by Peter W McOwan, Queen Mary University of London

(from the archive)

Kerstin's team including the robot waving
Kerstin’s team
Copyright © Adaptive Systems Research Group

Kerstin Dautenhahn is a biologist with a mission: to help us make friends with robots. Kerstin was always fascinated by the natural world around her, so it was no surprise when she chose to study Biology at the University of Bielefeld in Germany. Afterwards she took a Diploma in Biology, doing research on leg reflexes in stick insects: a strange start, it may seem, for someone who would later become one of the world’s foremost robotics researchers. But it was through this fascinating bit of biology that Kerstin became interested in the ways that living things process information and control their body movements, an area scientists call biological cybernetics. This interest in understanding biology made her want to build things to test her understanding: things based on ideas copied from biological animals but run by computers. These things would be robots.

Follow that robot

From humble beginnings building small robots that followed one another over a hilly landscape, she started to realise that biology was a great source of ideas for robotics, and in particular that the social intelligence animals use to live and work with each other could be modelled and used to create sociable robots.

She started to ask fascinating questions like “What’s the best way for a robot to interrupt you if you are reading a newspaper: by gesturing with its arms, blinking its lights or making a sound?” and, perhaps most importantly, “When would a robot become your friend?” First at the University of Hertfordshire, and now as a Professor at the University of Waterloo, she leads a world-famous research group trying to build friendly robots with social intelligence.

Good robot / Bad robot – East vs West

Kerstin, like many other robotics researchers, is worried that most people tend to look on robots as potentially evil. If we look at the way robots are portrayed in the movies, that’s often how it seems: it makes a good story to have a mechanical baddie. But in reality robots can provide a real service to humans, from helping the disabled, to assisting around the home, to becoming friends and companions. The baddie robot idea tends to dominate in the West, but in Japan robots are very popular and robotics research is advancing at a phenomenal rate. There has been a long history in Japan of people finding mechanical things that mimic natural things interesting and attractive. It is partly this cultural difference that has made Japan a world leader in robot research. But Kerstin and others like her are trying to get those of us in the West to change our opinions, by building friendly robots and looking at how we relate to them.

Polite Robots roam the room

When at the University of Hertfordshire, Kerstin decided that the best way to see how people would react to a robot around the house was to rent a flat near the university and fill it with robots. Rather than examining how people interacted with robots in a laboratory, moving the experiments to a real home, with bookcases, biscuits, sofas and coffee tables, made it real. She and her team looked at how to give their robots social skills: what was the best way for a robot to approach a person, for example? At first they thought the best approach would be straight from the front, but they found that humans felt this was too aggressive, so the robots were trained to come up gently from the side. The people in the house were also given special ‘comfort buttons’: devices that let them indicate how they were feeling in the company of robots. Again interesting things happened. It turned out that quite a lot of people, though not all, were happy for these robots to be close to them: closer, in fact, than they would normally let a human approach. Kerstin explains: ‘This is because these people see the robot as a machine, not a person, and so are happy to be in close proximity. You are happy to move close to your microwave, and it’s the same for robots’. These are exciting first steps as we start to understand how to build robots with socially acceptable manners. But it turns out that robots need good looks as well as good manners if they are going to make it in human society.

Looks are everything for a robot?

This fall in acceptability
is called the ‘uncanny valley’

How we interact with robots also depends on how they look. Researchers had previously found that if you make a robot look too much like a human being, people expect it to be a human being, with all the social and other skills that humans have. If it doesn’t have these, we find interaction very hard. It’s like working with a zombie, and it can be very frightening. This fall in the acceptability of robots that look like, but aren’t quite, human is what researchers call the ‘uncanny valley’: people prefer to encounter a robot that looks like a robot and acts like a robot. Kerstin’s group found this effect too, so they designed their robots to look and act the way we would expect robots to look and act, and things got much more sociable. But they are still looking at how we act with more human-like robots, and built KASPAR, a robot toddler with a very realistic rubber face capable of showing expressions and smiling, and video camera eyes that allow the robot to react to your behaviours. He possesses arms so can wave goodbye or greet you with a friendly gesture. More recently he was extended with multi-modal technology that allowed several children to play with him at the same time. He’s very lifelike, and the hope was that, as KASPAR’s programming grew and his abilities improved, he, or some descendant of him, would emerge from the uncanny valley to become someone’s friend: in particular, a friend to children with autism.

Autism – mind blindness and robots

The fact that most present-day robots look and act like robots can give them a big advantage in supporting children with autism. Autism is a condition that prevents you from developing an understanding of how to interact socially with the world. A current theory to explain the condition is that those who are autistic cannot form a correct understanding of others’ intentions: it’s called mind blindness. For example, if I came into the room wearing a hideous hat and asked you ‘Do you like my lovely new hat?’ you would probably think, ‘I don’t like the hat, but he does, so I should say I like it so as not to hurt his feelings’: you have a mental model of my state of mind (that I like my hat). An autistic person is likely to respond ‘I don’t like your hat’, if that is what they feel. Autistic people cannot create this mental model, so find it hard to make friends and generally interact with people, as they can’t predict what people are likely to say, do or expect.

Playing with Robot toys

It’s different with robots: many autistic children have an affinity with robots. Robots don’t do unexpected things. Their behaviour is much simpler, because they act like robots. Kerstin’s group examined how interaction with robot toys could help some autistic children develop skills that allow them to interact better with other people. By controlling the robot’s behaviours, some of the children can develop ways to mimic social skills, which may ultimately improve their quality of life. There were some promising results, and the work continues as one way to try to help those with this socially isolating condition.

Future friendly

It’s only polite that the last word goes to Kerstin from her time at Hertfordshire:

‘I firmly believe that robots as assistants can potentially be very useful in many application areas. For me as a researcher, working in the field of human-robot interaction is exciting and great fun. In our team we have people from various disciplines working together on a daily basis, including computer scientists, engineers and psychologists. This collaboration, where people need to have an open mind towards other fields, as well as imagination and creativity, is necessary in order to make robots more social.’

In the future, when robots become our workmates, colleagues and companions it will be in part down to Kerstin and her team’s pioneering effort as they work towards making our robot future friendly.

EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

The last speaker

by Paul Curzon, Queen Mary University of London

(from the cs4fn archive)

The wings of a green macaw looking like angel wings
Image by Avlis AVL from Pixabay

The languages of the world are going extinct at a rapid rate. As the number of people who still speak a language dwindles, the chance of it surviving dwindles too. When the last speaker dies, the language is gone forever. To be the last living speaker of the language of your ancestors must be terribly sad. One language’s extinction bordered on the surreal: the last time the language of the Atures, in South America, was heard, it was spoken by a parrot, an old blue-and-yellow macaw that had survived the deaths of all the local people.

Why do languages die?

The reasons smaller languages die are varied: from war and genocide, to disease and natural disaster, to the enticement of bigger, pushier languages. Can technology help? In fact global media, films, music and television, are helping languages to die, as the young turn their backs on the languages of their parents. The Web, with its early English bias, may also be pushing minority languages even faster to the brink. Computers could be a force for good, though, protecting the world’s languages rather than destroying them.

Unicode to the rescue

In the early days of the web, web pages used the English alphabet. Everything in a computer is stored as numbers, including letters: 1 for ‘a’, 2 for ‘b’, for example. As long as different computers agree on the code, they can print the same numbers to the screen as the same letters. A problem with early web pages was that there were lots of different encodings from numbers to letters. Worse still, the widely used encodings only set aside enough numbers for the English alphabet: not good if you want to use a computer to support other languages, with their variety of accents and completely different sets of characters. A new universal encoding system called Unicode came to the rescue. It aims to be a single universal character encoding, with enough numbers allocated for ALL languages, and it is allowing the web to become truly multilingual.
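
The difference between code points and byte encodings is easy to see in Python, whose strings are Unicode throughout. A quick sketch (the sample text is arbitrary):

```python
text = "naïve 中文"

# Unicode gives every character a number, its code point ...
codepoints = [ord(ch) for ch in text]
print(codepoints)  # → [110, 97, 239, 118, 101, 32, 20013, 25991]

# ... and UTF-8 is one agreed way of storing those numbers as bytes.
utf8_bytes = text.encode("utf-8")
print(len(text), len(utf8_bytes))  # → 8 13 (each Chinese character takes 3 bytes)

# A single-language legacy encoding cannot hold all the characters:
try:
    text.encode("latin-1")
except UnicodeEncodeError as err:
    print("latin-1 cannot encode:", err.object[err.start])  # → 中
```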

Languages are spoken

Languages are not just written but spoken. Computers can help there too. Linguists around the world record speakers of smaller languages, understanding and preserving them. Originally this was done using tapes; now the languages can be stored on multimedia computers. Computers are not restricted to playing back recordings: they can also actively speak written text. The web also allows much wider access to such materials, which can be embedded in online learning resources, helping new people to learn the languages. Language translators such as BabelFish and Google Translate can also help, though they are still far from perfect even for common languages. The problem is that things do not translate easily between languages: each language really does constitute a different way of thinking, not just of talking. Some thoughts are hard even to think in a different language.

AI to the rescue?

Even that is not enough. To truly preserve a language, the speakers need to use it in everyday life, for everyday conversation. Speakers need someone to speak with. Learning a language is not just about learning the words but about learning the culture and the way of thinking, of actively using the language. Perhaps future computers could help there too. A long-time goal of artificial intelligence (AI) researchers is to develop computers that can hold real conversations. In fact this is the basis of the original test for computer intelligence suggested by Alan Turing back in 1950: if a computer is indistinguishable from a human in conversation, then it is intelligent. There is also an annual competition that embodies this test: the Loebner Prize. It would be great if in the future, computer AIs could help save languages by being additional everyday speakers holding real conversations, being real friends.

Time is running out…
by the time the AIs arrive,
the majority of languages may be gone forever.

Too late?

The problem is that time is running out. Artificial intelligences that can hold totally realistic human conversations, even in English, are still a way off. None have passed the Turing Test. To speak different languages really well for everyday conversation, those AIs will have to learn the different cultures and ‘think’ in the different languages. The window of opportunity is disappearing: by the time the AIs arrive, the majority of human languages may be gone forever. Let’s hope that computer scientists and linguists solve the problems in time, and that computers are not used just to preserve languages for academic interest, but really can help them survive. It is sad that the last living creature to speak Atures was a parrot. It would be equally sad if the last speakers of all current languages, bar English, Spanish and Chinese say, were computers.

Issue 16 cover clean up your language

This blog is funded through EPSRC grant EP/W033615/1.

The joke Turing test

A funny thing happened on the way to the computer

by Peter W. McOwan, Queen Mary University of London

(from the archive)

A cabbage smiling at you
Image by Lynn Greyling from Pixabay

Laugh and the world laughs with you, they say. But what if you’re a computer? Can a computer have a ‘sense of humour’?

Computer-generated jokes can do more than give us a laugh. Human language in jokes can often be ambiguous: words can have two meanings. For example, the word ‘bore’ can mean a person who is uninteresting, or can be to do with drilling … and spoken aloud it could be about a male pig (a boar). It’s often this slip between the meanings of words that makes jokes work (work that joke out for yourself). Being able to understand how human humour works, and to build a computer program that can make us laugh, will give us a better understanding of how the human mind works … and human minds are never boring.

Many researchers believe that jokes come from the unexpected. As humans, we have a brain that tries to ‘predict the future’: for example, when catching a fast ball, our brains have a simple learned mathematical model of the physics, so we can predict where the ball will be and catch it. Similarly, in stories we have a feel for where they should be going, and when a story takes an unexpected turn we often find it funny. The shaggy dog story is an example: a long series of story parts that build our expectations, only for the ending to prove us wrong. We laugh (or groan) when the unexpected twist occurs. It’s like the ball suddenly doing three loop-the-loops then stopping in mid-air. It’s not what we expect. It’s against the rules, and we see that as funny.

Some artificial intelligence researchers interested in understanding how language works look at jokes as a way to understand how we use language. Graeme Ritchie was one early such researcher, and funnily enough he presented his work at an April Fools’ Day Workshop on Computational Humour. Ritchie looked at puns: simple gags that work by a play on words, and helped create a computer program called JAPE that generates jokes.
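
A toy version of such a pun generator takes only a few lines. This sketch is in the spirit of those programs rather than the real JAPE, and its little lexicon is invented for illustration: each entry pairs a two-word phrase with a property of the thing itself and a property suggested by the first word’s other meaning.

```python
# Each entry: modifier, noun, property from the modifier's other meaning,
# property of the noun itself.
lexicon = [
    ("spring", "cabbage", "bounces", "green"),
    ("rock", "salmon", "plays the guitar", "pink"),
]

def pun_riddle(modifier, noun, modifier_property, noun_property):
    # The riddle template: ask about both properties, answer with the phrase.
    return (f"What's {noun_property} and {modifier_property}? "
            f"A {modifier} {noun}!")

for entry in lexicon:
    print(pun_riddle(*entry))
# → What's green and bounces? A spring cabbage!
#   What's pink and plays the guitar? A rock salmon!
```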

How do we know if the computer has a sense of humour? Well how would we know a human comic had a sense of humour? We’d get them to tell a joke. Now suppose that we had a test where we had a set of jokes, some made by humans and some by computers, and suppose we couldn’t tell the difference? If you can’t tell which is computer generated and which is human generated then the argument goes that the computer program must, in some way, have captured the human ability. This is called a Turing Test after the computer scientist Alan Turing. The original idea was to use it as a test for intelligence but we can use the same idea as a test for an ability to be funny too.

So let’s finish with a joke (and a test). Which of the following is a joke created by a computer program following Ritchie’s theory of puns, and which is a human’s attempt? Will humans or machines have the last laugh in this test?

Have your vote: which of these two jokes do you think was written by a computer, and which by a human?

1) What’s fast and wiry?

… An aircraft hanger!

2) What’s green and bounces?

… A spring cabbage!

Make your choice before scrolling down to find the answer.

This blog is funded through EPSRC grant EP/W033615/1.

The answers

Could you tell which of the two jokes was written by a human and which by a computer?

Lots of cs4fn readers voted over several years and the voting went:

  • 58% of votes cast believed the aircraft hanger joke was computer generated
  • 42% of votes cast believed the spring cabbage joke was computer generated

In fact …

  • The aircraft hanger joke was the work of a computer.
  • The spring cabbage joke was the human generated cracker.

If the voters were doing no better than guessing, the votes would have split about 50-50: no better than tossing a coin to decide. In that case the computer would have been doing as well at being funny as the human. A vote share of 58-42 suggests (on the basis of this one joke only) that the computer is getting there, but perhaps doesn’t quite have as good a sense of humour as the human who invented the spring cabbage joke. A real test would use lots more jokes, of course. In a real experiment it would also be important that the jokes were not only generated by the human and computer but selected by them too (or possibly selected at random from ones each picked out as their best). By using ones we selected, our sense of humour could be getting in the way of a fair test.
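
How convincing a 58-42 split is depends on the total number of votes, which isn’t recorded here. This Python sketch shows the check a statistician would run, assuming (hypothetically) 100 voters:

```python
from math import comb

def prob_at_least(n, k, p=0.5):
    """Probability of k or more successes in n trials of a fair guess."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# If 58 of a hypothetical 100 voters picked the aircraft hanger joke as
# computer generated, how often would pure guessing do at least that well?
p_value = prob_at_least(100, 58)
print(round(p_value, 3))  # → 0.067: chance alone gets this far fairly often
```

With 1000 voters the same 58% share would be overwhelming evidence; with 20 voters it would mean almost nothing, which is why the number of votes matters.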

The Chinese room: zombie attack!

by Paul Curzon, Queen Mary University of London

Jigsaw brain with pieces missing
Image by Gordon Johnson from Pixabay 

(From the cs4fn archive)

Iain M Banks’s science fiction novels about ‘The Culture’ imagine a universe inhabited (and largely run) by ‘Minds’. These are incredibly intelligent machines – mainly spaceships – that are also independently thinking conscious beings with their own personalities. From the replicants in Blade Runner and robots in Star Wars to Iain M Banks’s Minds, science fiction is full of intelligent machines. Could we ever really create a machine with a mind: not just a computer that computes, but one that really thinks? Philosophers have been arguing about it for centuries. Things came to a head when philosopher John Searle came up with a thought experiment called the ‘Chinese room’. He claims it gives a cast-iron argument that programmed ‘Minds’ can never exist. Are the computer scientists who are trying to build real artificial intelligences wasting their time? Or could zombies lurch to the rescue?

The Shaolin warrior monk

Imagine that the galaxy is populated by an advanced civilisation that has solved the problem of creating artificial intelligence programs. Wanting to observe us more closely they build a replicant that looks, dresses and moves just like a Shaolin warrior monk (it has to protect itself and the aliens watch too much TV!) They create a program for it that encodes the rules of Chinese. The machine is dispatched to Earth. Claiming to have taken a vow of silence, it does not speak (the aliens weren’t hot on accents). It reads Chinese characters written by the earthlings, then follows the instructions in its Chinese program that tell it the Chinese characters to write in response. It duly has written conversations with all the earthlings it meets as it wanders the planet, leaving them all in no doubt that they have been conversing with a real human Chinese speaker.

The question is, is that machine monk really a Mind? Does it really understand Chinese or is it just simulating that ability?

The Chinese room

Searle answers this by imagining a room in which a human sits. She speaks no Chinese but instead has a book of rules – the aliens’ computer program written out in English. People pass in Chinese symbols through a slot. She looks them up in the book and it tells her the Chinese symbols to pass back out. As she doesn’t understand Chinese she has no idea what the symbols coming in or going out mean. She is just uncomprehendingly following the book. Yet to the outside world she seems to be just as much a native speaker as that machine monk. She is simulating the ability to understand Chinese. As she’s using the same program as the monk, doing exactly what it would do, it follows that the machine monk is also just simulating intelligence. Therefore programs cannot understand. They cannot have a mind.

Is that machine monk a Mind?

Searle’s argument is built on some assumptions. Programs are ‘syntactic devices’: that just means they move symbols around, swapping them for others. They do it without giving those symbols any meaning. A human mind on the other hand works with ‘semantics’ – the meanings of symbols not just the symbols themselves. We understand what the symbols mean. The Chinese room is supposed to show you can’t get meaning by pushing symbols around. As any future artificial intelligence will be based on programs pushing symbols around they will not be a Mind that understands what it is doing.

The zombies are coming

So is this argument really cast iron? It has generated lots of debate, virtually all of it aiming to prove Searle wrong. The counter-arguments are varied and even the zombies have piled in to fight the cause: philosophical ones at least. What is a philosophical zombie? It’s just a human with no consciousness, no mind. One way to attack Searle’s argument is to attack the assumptions. That’s what the zombies are there to do. If the assumptions aren’t actually true then the argument falls apart. According to Searle human brains do something more than push symbols about; they have a way of working with meaning. However, there can’t be a way of telling that by talking to one, as otherwise it could have been used to tell that the machine monk wasn’t a mind.

Imagine then, there has been a nuclear accident and lots of babies are born with a genetic mutation that makes them zombies. They have no mind so no ability to understand meaning. Despite that they act exactly like humans: so much so that there is no way to tell zombies and humans apart. The zombies grow up, marry and have zombie children.

Presumably zombie brains are simpler than human ones – they don’t have whatever complication it is that introduces minds. Being simpler they have a fitness advantage that will allow them to out-compete humans. They won’t need to roam the streets killing humans to take over the world. If they wait long enough and keep having children, natural selection will do it for them.

The zombies are here

The point is it could have already happened. We could all be zombies but just don’t know it. We think we are conscious but that could just be an illusion – another simulation. We have no way to prove we are not zombies and if we could be zombies then Searle’s assumption that we are different to machines may not be true. The Chinese room argument falls apart.

Does it matter?

The arguments and counter arguments continue. To an engineer trying to build an artificial intelligence this actually doesn’t matter. Whether you have built a Mind or just something that exactly simulates one makes no practical difference. It makes a big difference to philosophers, though, and to our understanding of what it means to be human.

Let’s leave the last word to Alan Turing. He pointed out 30 years before the Chinese room was invented that it’s generally considered polite to assume that other humans are Minds like us (not zombies). If we do end up with machine intelligences so good we can’t tell they aren’t human, it would be polite to extend the assumption to them too. That would surely be the only humane thing to do.


This blog is funded through EPSRC grant EP/W033615/1.

The paranoid program

by Paul Curzon, Queen Mary University of London

One of the greatest characters in Douglas Adams’ science fiction series The Hitchhiker’s Guide to the Galaxy (radio series, books and film) was Marvin the Paranoid Android. Marvin wasn’t actually paranoid, though. Rather, he was very, very depressed. This was because, as he often noted, he had ‘a brain the size of a planet’ but was constantly given trivial and uninteresting jobs to do. Marvin was fiction. One of the first real computer programs able to converse with humans, PARRY, did aim to behave in a paranoid way, however.

PARRY was in part inspired by the earlier ELIZA program. Both were early attempts to write what we would now call chatbots: programs that can hold conversations with humans. This area of Natural Language Processing is now a major research area. Modern chatbot programs rely on machine learning to learn rules from real conversations that tell them what to say in different situations. Early programs relied on rules hand-written by the programmer. ELIZA, written by Joseph Weizenbaum, was the most successful early program to do this and fooled people into thinking they were conversing with a human. One set of rules that ELIZA could use, called DOCTOR, allowed it to behave like a therapist of the kind popular at the time who just echoed back things their patient said. Weizenbaum’s aim was not actually to fool people as such, but to show how trivial human-computer conversation was: a relatively simple approach, in which the program looks for trigger words and uses them to choose pre-programmed responses, can lead to realistic-seeming conversation.
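The trigger-word-and-template idea can be sketched in a few lines of Python. This is not Weizenbaum’s actual DOCTOR script; the trigger words and responses below are invented for illustration:

```python
import random

# ELIZA-style sketch: scan the input for a trigger word and pick one of
# that word's canned responses; otherwise fall back on a neutral prompt.
RULES = {
    "mother": ["Tell me more about your mother.",
               "How do you feel about your family?"],
    "always": ["Can you give a specific example?",
               "Really, always?"],
    "sad":    ["Why do you think you feel sad?",
               "How long have you felt that way?"],
}
DEFAULTS = ["Please go on.", "I see.", "Tell me more."]

def respond(text: str) -> str:
    for word in text.lower().split():
        keyword = word.strip(".,!?")       # ignore trailing punctuation
        if keyword in RULES:
            return random.choice(RULES[keyword])
    return random.choice(DEFAULTS)         # no trigger word found

print(respond("I am always sad about my mother"))
```

Even something this simple can feel surprisingly conversational for a few exchanges, which was exactly Weizenbaum’s point.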

PARRY had a more serious aim. It was written by Kenneth Colby, a psychiatrist at Stanford, in the early 1970s. He was trying to simulate the behaviour of a person suffering from paranoid schizophrenia, a condition whose symptoms include the person believing that others have hostile intentions towards them. Innocent things other people say are seen as hostile even when no hostility was intended.

PARRY was based on a simple model of how those with the condition were thought to behave. Writing programs that simulate something being studied is one of the ways computer science has added to the way we do science. If you fully understand a phenomenon, and have embodied that understanding in a model that describes it, then you should be able to write a program that simulates that phenomenon. Once you have written the program you can test it against reality to see if it behaves the same way. If there are differences, this suggests the model, and so your understanding, is not yet fully accurate: the model needs improving to deal with the differences. PARRY was an attempt to do this in the area of psychiatry. Schizophrenia is not in itself well-defined: there is no objective test to diagnose it. Psychiatrists come to a conclusion just by observing patients, based on their experience. Could a program display convincing behaviours?

It was tested using a variation of the Turing Test: Alan Turing’s suggestion of a way to tell whether a program could be considered intelligent. He suggested having humans and programs chat to a panel of judges via a computer interface. If the judges cannot reliably tell them apart then, he suggested, you should accept the programs as intelligent. With PARRY, rather than testing whether the program was intelligent, the aim was to find out whether it could be distinguished from real people with the condition. A series of psychiatrists were therefore allowed to chat with a series of runs of the program, as well as with actual people diagnosed with paranoid schizophrenia. All conversations were through a computer, and the psychiatrists were not told in advance which was which. Other psychiatrists were later allowed to read the transcripts of those conversations. All were asked to pick out the people and the programs. The result was that they could only correctly tell which was a human and which was PARRY about half the time. As that is no better than tossing a coin, it suggests the model of behaviour was convincing.

As ELIZA was simulating a mental health doctor and PARRY a patient, someone had the idea of letting them talk to each other. ELIZA (as the DOCTOR) was given the chance to chat with PARRY several times. You can read one of the conversations between them here. Do they seem believably human? Personally, I think PARRY comes across as the more convincingly human-like of the two, paranoid or not!

Activity for you to do…

If you can program, why not have a go at writing your own chatbot. If you can’t, writing a simple chatbot is quite a good project to use to learn, as long as you start simple with fixed conversations. As you make it more complex, it can, like ELIZA and PARRY, be based on looking for keywords in the things the other person types, together with template responses, as well as some fixed starter questions, also used to change the subject. It is easier if you stick to a single area of interest (make it football mad, for example): “What’s your favourite team?” … “Liverpool” … “I like Liverpool because of Klopp, but I support Arsenal.” … ”What do you think of Arsenal?” …
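A minimal sketch of such a football-mad bot, with invented team names and replies, might start something like this:

```python
import random

# Hypothetical football chatbot: keyword lookup plus template replies,
# with fixed starter questions used to change the subject when stuck.
TEAM_REPLIES = {
    "liverpool": "I like Liverpool because of Klopp, but I support Arsenal.",
    "arsenal": "Arsenal are my team! What do you think of Arsenal?",
}
STARTERS = ["What's your favourite team?", "Did you see the match last night?"]

def reply(text: str) -> str:
    for word in text.lower().split():
        keyword = word.strip(".,!?")
        if keyword in TEAM_REPLIES:
            return TEAM_REPLIES[keyword]
    return random.choice(STARTERS)  # no keyword matched: change the subject

print(reply("Liverpool"))
```

From there you could add more keywords, remember what has already been said, or echo parts of the input back, ELIZA-style.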

Alternatively, perhaps you could write a chatbot to bring Marvin to life, depressed about everything he is asked to do – if that is not too depressingly simple a task, should you have a brain the size of a planet.


How does Santa do it?

Fast yuletide algorithms to visit all those chimneys in time

by Paul Curzon, Queen Mary University of London

Lots of Santas in a line
Image by Thomas Ulrich from Pixabay 

How does Santa do it? How does he visit all those children, all those chimneys, in just one night? My theory is he combines a special Scandinavian super-power with some computational wizardry.

There are about 2 billion children in the world and Santa visits them all. Clearly he has magic to help him do it (flying reindeer, remember), but what kind of magic (beyond the reindeer)? And is it all about magic? Some have suggested he stops time, or moves through other dimensions; others that he just travels at amazingly high speed (Speedy Gonzales or The Flash style). Perhaps, though, he uses computer science too (by which I don’t mean computer technology, just the power of computation).

The problem can be thought of as a computational one. The task is to visit, let’s say, a billion homes (assuming an average of 2 children per household) as fast as possible. The standard solution assumes Santa visits them one at a time, in order. This is what is called a linear algorithm, and linear algorithms are slow. If there are n pieces of data to process (here, chimneys to descend) then we write this as having efficiency O(n). This way of writing about efficiency is called Big-O notation. O(n) just means that as n increases, the amount of work increases proportionately. Double the number of children and you double the workload for Santa. Currently the population doubles every 60 or 70 years or so, so clearly Santa needs to think in this way or he will eventually fail to keep up, whatever magic he uses.

Perhaps Santa uses teams of Elves, as in the film Arthur Christmas, so that at each location he can deliver presents to, say, 1000 homes at once (though then it is the 1000 Elf helpers doing the delivering, not Santa, which goes against all current wisdom that Santa does it himself). That would apparently speed things up enormously, making delivery 1000 times faster. However, in computational terms it barely makes a difference. It is still a linear order of efficiency: it is still O(n), as the work still goes up proportionately with n. Double the population and Santa is still in trouble, as his one-night workload doubles too. O(2n) and O(1000n) both simplify to mean exactly the same as O(n). Computationally it makes little difference, and if their algorithms are to solve big problems, computer scientists have to think in terms of dealing with data doubling, doubling and doubling again, just as Santa has had to over the centuries.
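That O(1000n) is still O(n) can be seen with a toy calculation. The “one unit of work per chimney” cost model here is of course made up for illustration:

```python
# With 1000 Elf helpers delivery is 1000 times faster, but still linear:
# doubling the number of homes still doubles the time taken.
def delivery_time(homes: int, helpers: int = 1) -> float:
    # toy assumption: one unit of work per chimney, shared equally
    return homes / helpers

with_elves = delivery_time(1_000_000_000, helpers=1000)
doubled = delivery_time(2_000_000_000, helpers=1000)
print(doubled / with_elves)  # → 2.0: double the homes, double the time
```

However many helpers you fix in advance, the ratio stays 2.0, which is exactly what O(n) means.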

Divide and Conquer problem solving

When a computer scientist has a problem like this to solve, one of the first tools to reach for is called Divide and Conquer problem solving. It is a way of inventing lightning-fast algorithms whose work barely increases as the size of the problem doubles. The secret is to find a way to convert the problem into one that is half the size of the original, but (and this is key) that is otherwise exactly the same problem. If it is the same problem (just smaller) then you can solve the resulting smaller problems in the same way. You keep splitting the problem until the pieces are so small they are trivial. That turns out to be a massively fast way to get a job done. It does not have to be computers doing the divide and conquer: I’ve used the approach to sort piles of hundreds and hundreds of exam scripts into order quickly, for example.

My theory is that divide and conquer is what Santa does, though in his context it requires a particular superhero power to work; but then he is magical, so why not? How do I think it works? I think Santa is capable of duplicating himself. There is a precedent for this in the superhero world: the Norse god Loki is able to copy himself to get out of scrapes, and since Santa is from the same part of the world it seems likely he could have a similar power.

If he copied himself so there were two of him, one could do the Northern Hemisphere and the other the Southern Hemisphere. The problem has been split into an identical problem (delivering presents to lots of children) that is half the size for each Santa (each has only half the world, so half as many children to cover). That would allow him to cover the world twice as fast. However, that is really no different to getting a couple of Elves to do the work: it is still O(n) in terms of efficiency. As the population doubles he quickly ends up back in the same situation as before: too much work for each Santa. Likewise, if he made a fixed number of 1000 copies of himself it would be similar to having 1000 Elves doing the deliveries. The work still increases in proportion to the number of deliveries. Double the population and you still double the time it takes.

Double Santa and double again (and keep doubling)

So Santa needs to do better than that if he is to keep up with the population explosion. But divide and conquer doesn’t say halve the problem once; it says solve the new, smaller problems in the same way. So each new Santa has to copy himself too! As they are identical copies of the original, surely they can do that as easily as the first one could. Those new Santas have to do the same, and so on. They all split again and again until each has a problem so simple they can just do it. That might be having a single village to cover, or perhaps a single house. At that point the copying can stop and the job of delivering presents actually be done. Each Santa drops down a chimney and leaves the presents. (Now you can see how he manages to eat all those mince pies too!)
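This splitting can be sketched as a recursive function. The home names and the “presents left” message are invented for illustration:

```python
# Divide-and-conquer delivery sketch: each "Santa" splits his list of
# homes in half and hands each half to a copy of himself, until a copy
# has just one home to visit (the trivial case).
def deliver(homes):
    if len(homes) <= 1:
        return [f"presents left at {h}" for h in homes]
    mid = len(homes) // 2
    # the Santa duplicates: one copy takes each half of the homes
    return deliver(homes[:mid]) + deliver(homes[mid:])

print(deliver(["home1", "home2", "home3", "home4"]))
```

In a computer the two recursive calls happen one after the other, but for Santa the two copies work at the same time, which is where the speed-up comes from.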

An important thing to remember is that that is not the end of it. The world is now full of Santas. Before the night is over and the job done, each Santa has to merge back with the one they split from, recursively all the way back to the original Santa. Otherwise come Christmas Day we wouldn’t be able to move for Santas. Better leave 30 minutes for that at the end!

Does this make a big difference? Well, yes (as long as all the copying can be done quickly and there is an organised way to split up the world). It makes a massive difference. The key is in thinking about how often the Santas double in number, so how often the problem is halved in size.

We start with 1 Santa, who duplicates to 2; both can then duplicate to give 4, then 8, 16, and after only 5 splittings there are already 32 Santas; then 64, 128, 256, 512, and after only 10 splittings we have over a thousand Santas (1024 to be precise). As we saw, that isn’t enough, so they keep splitting. Following the same pattern, after 20 splittings we have over a million Santas to do the job. After only 30 rounds of splitting we have a billion Santas, so each can deal with a single family: a trivial problem for each.

So if a Santa can duplicate himself (along with the sleigh and reindeer) in a minute or so (Loki does it in a fraction of a second, so this is probably a massive over-estimate and Santa can do it too), we have enough Santas to do the job in about half an hour, leaving each plenty of time to do the delivery to their destination. The splitting can also be done on the way, so each Santa travels only as far as needed. Importantly, this splitting process is NOT linear. It is O(log2 n) rather than O(n), and log2 n is massively smaller than n for large n. It means that if the number of households to visit doubles due to the population explosion, the number of rounds of splitting does not need to double: the Santas just have to do one more round of splitting to cover it. The calculation log2 n (the logarithm to base 2 of n) is just a mathematician’s way of saying how many times you can halve the number n before you get to 1 (or, equivalently, how many times you double from 1 before you reach n). 1024 can be halved 10 times, so log2 1024 is 10. A billion can be halved about 30 times, so log2 of a billion is about 30. Instead of a billion pieces of work we do only 30 rounds of splitting. Double the chimneys to 2 billion and you need only one more, for a total of 31 splittings.
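You can check the log2 arithmetic directly; Python’s math module counts the halvings for us:

```python
import math

# Rounds of splitting needed: log2 of the number of households, rounded up.
print(math.ceil(math.log2(1024)))           # → 10
print(math.ceil(math.log2(1_000_000_000)))  # → 30
print(math.ceil(math.log2(2_000_000_000)))  # → 31: doubling adds one round
```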

In computer terms, divide and conquer algorithms involve methods (i.e. functions or procedures) calling themselves multiple times. Each call of the method works on, for example, half the problem. So a method to sort data might first divide the data in half. One half is passed to one new call (copy) of the same method to sort in the same way; the other half is passed to another call (copy). They do the same, calling more copies to work on half of their part of the data, until eventually each has only one piece of data to sort (which is trivial). Work then has to be done merging the sorted halves back into sorted wholes. A billion pieces of data are sorted in only 30 rounds of recursive splitting. Double to 2 billion pieces of data and you need just one more round of splitting to get the sorting done.
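The recursive sorting method described here is, in essence, merge sort. A minimal Python sketch:

```python
# Merge sort: split the data in half, sort each half recursively,
# then merge the two sorted halves back into one sorted whole.
def merge_sort(data):
    if len(data) <= 1:          # trivial problem: already sorted
        return data
    mid = len(data) // 2
    left = merge_sort(data[:mid])    # one "copy" sorts the first half
    right = merge_sort(data[mid:])   # another sorts the second half
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]  # append whichever half remains

print(merge_sort([5, 2, 9, 1, 7]))  # → [1, 2, 5, 7, 9]
```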

Living in a simulation

If this mechanism for Santa’s deliveries still seems improbable, then consider that for all we know the reality of our universe may actually be a simulation (Matrix-like) in some other-dimensional computer. If so, we are each just software in that simulation, each of us a method executing to make decisions about what we do in our virtual world. If that is the nature of reality, then Santa is also just a (special yuletide) software routine, and his duplicating feat is just a method calling itself recursively (as with the sort algorithm). Then the whole Christmas delivery done this way is just a simple divide and conquer algorithm running in a computer…

Given the other ways suggested for Santa to do his Christmas miracle seem even more improbable, that suggests to me that the existence of Santa provides strong evidence that we are all just software in a simulation. Not that that would make our reality, or Christmas, any less special.
