The Chinese room: zombie attack!

by Paul Curzon, Queen Mary University of London

(From the cs4fn archive)

Iain M Banks’s science fiction novels about ‘The Culture’ imagine a universe inhabited (and largely run) by ‘Minds’. These are incredibly intelligent machines – mainly spaceships – that are also independently thinking conscious beings with their own personalities. From the replicants in Blade Runner and robots in Star Wars to Iain M Banks’s Minds, science fiction is full of intelligent machines. Could we ever really create a machine with a mind: not just a computer that computes, but one that really thinks? Philosophers have been arguing about it for centuries. Things came to a head when philosopher John Searle came up with a thought experiment called the ‘Chinese room’. He claims it gives a cast-iron argument that programmed ‘Minds’ can never exist. Are the computer scientists who are trying to build real artificial intelligences wasting their time? Or could zombies lurch to the rescue?

The Shaolin warrior monk

Imagine that the galaxy is populated by an advanced civilisation that has solved the problem of creating artificial intelligence programs. Wanting to observe us more closely they build a replicant that looks, dresses and moves just like a Shaolin warrior monk (it has to protect itself, and the aliens watch too much TV!). They create a program for it that encodes the rules of Chinese. The machine is dispatched to Earth. Claiming to have taken a vow of silence, it does not speak (the aliens weren’t hot on accents). It reads Chinese characters written by the earthlings, then follows the instructions in its Chinese program that tell it the Chinese characters to write in response. It duly has written conversations with all the earthlings it meets as it wanders the planet, leaving them all in no doubt that they have been conversing with a real human Chinese speaker.

The question is, is that machine monk really a Mind? Does it really understand Chinese or is it just simulating that ability?

The Chinese room

Searle answers this by imagining a room in which a human sits. She speaks no Chinese but instead has a book of rules – the aliens’ computer program written out in English. People pass in Chinese symbols through a slot. She looks them up in the book and it tells her the Chinese symbols to pass back out. As she doesn’t understand Chinese she has no idea what the symbols coming in or going out mean. She is just uncomprehendingly following the book. Yet to the outside world she seems to be just as much a native speaker as that machine monk. She is simulating the ability to understand Chinese. As she’s using the same program as the monk, doing exactly what it would do, it follows that the machine monk is also just simulating intelligence. Therefore programs cannot understand. They cannot have a mind.

Is that machine monk a Mind?

Searle’s argument is built on some assumptions. Programs are ‘syntactic devices’: that just means they move symbols around, swapping them for others. They do it without giving those symbols any meaning. A human mind, on the other hand, works with ‘semantics’ – the meanings of symbols, not just the symbols themselves. We understand what the symbols mean. The Chinese room is supposed to show you can’t get meaning by pushing symbols around. As any future artificial intelligence will be based on programs pushing symbols around, it will not be a Mind that understands what it is doing.
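
To make the idea of a purely syntactic device concrete, here is a toy sketch in Python. The rule book and phrases are invented for illustration (Searle’s argument doesn’t depend on any particular rules): the program produces plausible Chinese replies by pure lookup, and at no point does anything in it attach a meaning to the symbols.

```python
# A toy 'Chinese room' as a pure lookup table (invented rules, for
# illustration only): symbols in, symbols out, no meaning anywhere.
RULE_BOOK = {
    "你好吗?": "我很好，谢谢。",       # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "会，一点点。",   # "Do you speak Chinese?" -> "Yes, a little."
}

def chinese_room(symbols_in: str) -> str:
    # Follow the book uncomprehendingly, like the person in the room.
    return RULE_BOOK.get(symbols_in, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗?"))  # prints: 我很好，谢谢。
```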

The zombies are coming

So is this argument really cast iron? It has generated lots of debate, virtually all of it aiming to prove Searle wrong. The counter-arguments are varied and even the zombies have piled in to fight the cause: philosophical ones at least. What is a philosophical zombie? It’s just a human with no consciousness, no mind. One way to attack Searle’s argument is to attack the assumptions. That’s what the zombies are there to do. If the assumptions aren’t actually true then the argument falls apart. According to Searle, human brains do something more than push symbols about; they have a way of working with meaning. However, there can’t be a way of telling that just by talking to someone, as otherwise it could have been used to tell that the machine monk wasn’t a mind.

Imagine then, there has been a nuclear accident and lots of babies are born with a genetic mutation that makes them zombies. They have no mind so no ability to understand meaning. Despite that they act exactly like humans: so much so that there is no way to tell zombies and humans apart. The zombies grow up, marry and have zombie children.

Presumably zombie brains are simpler than human ones – they don’t have whatever complication it is that introduces minds. Being simpler they have a fitness advantage that will allow them to out-compete humans. They won’t need to roam the streets killing humans to take over the world. If they wait long enough and keep having children, natural selection will do it for them.

The zombies are here

The point is it could have already happened. We could all be zombies but just don’t know it. We think we are conscious but that could just be an illusion – another simulation. We have no way to prove we are not zombies and if we could be zombies then Searle’s assumption that we are different to machines may not be true. The Chinese room argument falls apart.

Does it matter?

The arguments and counter-arguments continue. To an engineer trying to build an artificial intelligence this actually doesn’t matter. Whether you have built a Mind or just something that exactly simulates one makes no practical difference. It makes a big difference to philosophers, though, and to our understanding of what it means to be human.

Let’s leave the last word to Alan Turing. He pointed out 30 years before the Chinese room was invented that it’s generally considered polite to assume that other humans are Minds like us (not zombies). If we do end up with machine intelligences so good we can’t tell they aren’t human, it would be polite to extend the assumption to them too. That would surely be the only humane thing to do.



This blog is funded through EPSRC grant EP/W033615/1.

The paranoid program

by Paul Curzon, Queen Mary University of London

One of the greatest characters in Douglas Adams’ Hitchhiker’s Guide to the Galaxy – the science fiction radio series, books and film – was Marvin the Paranoid Android. Marvin wasn’t actually paranoid, though. Rather, he was very, very depressed. This was because, as he often noted, he had ‘a brain the size of a planet’ but was constantly given trivial and uninteresting jobs to do. Marvin was fiction. One of the first real computer programs to be able to converse with humans, PARRY, did aim to behave in a paranoid way, however.

PARRY was in part inspired by the earlier ELIZA program. Both were early attempts to write what we would now call chatbots: programs that could have conversations with humans. This area of Natural Language Processing is now a major research area. Modern chatbot programs rely on machine learning to learn rules from real conversations that tell them what to say in different situations. Early programs relied on rules written by hand by the programmer. ELIZA, written by Joseph Weizenbaum, was the most successful early program to do this and fooled people into thinking they were conversing with a human. One set of rules, called DOCTOR, that ELIZA could use allowed it to behave like a therapist of the kind popular at the time who just echoed back things their patient said. Weizenbaum’s aim was not actually to fool people, as such, but to show how trivial human-computer conversation was: a relatively simple approach, where the program looks for trigger words and uses them to choose pre-programmed responses, could lead to realistic-seeming conversation.
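
To give a flavour of the trigger-word approach, here is a minimal sketch in Python. It is not Weizenbaum’s actual DOCTOR script – the triggers and template replies are made up – but the mechanism is the same kind of trick: match a trigger word, then echo part of the input back inside a canned response.

```python
import random

# A minimal ELIZA-style responder (invented rules, not the real DOCTOR
# script). Each rule pairs a trigger phrase with template replies;
# {rest} echoes back whatever followed the trigger in the user's input.
RULES = [
    ("i feel",  ["Why do you feel {rest}?", "Do you often feel {rest}?"]),
    ("i am",    ["How long have you been {rest}?"]),
    ("mother",  ["Tell me more about your family."]),
]
DEFAULTS = ["Please go on.", "How does that make you feel?"]

# Swap first- and second-person words so the echoed text reads naturally.
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(text: str) -> str:
    return " ".join(REFLECT.get(word, word) for word in text.split())

def respond(sentence: str) -> str:
    s = sentence.lower().strip(".!?")
    for trigger, templates in RULES:
        if trigger in s:
            rest = reflect(s.split(trigger, 1)[1].strip())
            return random.choice(templates).format(rest=rest)
    return random.choice(DEFAULTS)  # no trigger matched: stall politely

print(respond("I feel sad about my exams"))
# e.g. "Why do you feel sad about your exams?"
```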

PARRY was more serious in its aim. It was written by Kenneth Colby, a psychiatrist at Stanford, in the early 1970s. He was trying to simulate the behaviour of a person suffering from paranoid schizophrenia. The condition involves symptoms including the person believing that others have hostile intentions towards them. Innocent things other people say are seen as hostile even when no hostility was intended.

PARRY was based on a simple model of how those with the condition were thought to behave. Writing programs that simulate something being studied is one of the ways computer science has added to the way we do science. If you fully understand a phenomenon, and have embodied that understanding in a model that describes it, then you should be able to write a program that simulates that phenomenon. Once you have written the program you can test it against reality to see if it does behave the same way. If there are differences, this suggests the model, and so your understanding, is not yet fully accurate. The model needs improving to deal with the differences. PARRY was an attempt to do this in the area of psychiatry. Schizophrenia is not in itself well-defined: there is no objective test to diagnose it. Psychiatrists come to a conclusion about it just by observing patients, based on their experience. Could a program display convincing behaviours?

It was tested using a variation of the Turing Test: Alan Turing’s suggestion of a way to tell if a program could be considered intelligent or not. He suggested having humans and programs chat to a panel of judges via a computer interface. If the judges cannot accurately tell them apart then, he suggested, you should accept the programs as intelligent. With PARRY, rather than testing whether the program was intelligent, the aim was to find out if it could be distinguished from real people with the condition. A series of psychiatrists were therefore allowed to chat with a series of runs of the program as well as with actual people diagnosed with paranoid schizophrenia. All conversations were through a computer. The psychiatrists were not told in advance which were which. Other psychiatrists were later allowed to read the transcripts of those conversations. All were asked to pick out the people and the programs. The result was that they could only correctly tell which was a human and which was PARRY about half the time. As that is about as good as tossing a coin, it suggests the model of the behaviour was convincing.

As ELIZA was simulating a mental health doctor and PARRY a patient, someone had the idea of letting them talk to each other. ELIZA (as the DOCTOR) was given the chance to chat with PARRY several times. You can read one of the conversations between them here. Do they seem believably human? Personally, I think PARRY comes across as the more convincingly human-like, paranoid or not!


Activity for you to do…

If you can program, why not have a go at writing your own chatbot? If you can’t, writing a simple chatbot is quite a good project to learn with, as long as you start simple with fixed conversations. As you make it more complex it can, like ELIZA and PARRY, be based on looking for keywords in the things the other person types, together with template responses, as well as some fixed starter questions, also used to change the subject. It is easier if you stick to a single area of interest (make it football mad, for example): “What’s your favourite team?” … “Liverpool” … “I like Liverpool because of Klopp, but I support Arsenal.” … “What do you think of Arsenal?” … A sketch of this approach is given below.
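
Here is one possible starting point in Python: a football-mad bot along the lines above. Everything in it – the keywords, replies and stock questions – is made up for illustration; the point is the structure of keyword triggers, template replies, and stock questions for changing the subject when nothing matches.

```python
# A toy football-mad chatbot: keyword triggers with canned replies, plus
# stock questions for changing the subject. All the content is invented.
KEYWORD_REPLIES = {
    "liverpool": "I like Liverpool because of Klopp, but I support Arsenal. What do you think of Arsenal?",
    "arsenal":   "Arsenal are my team! Who's your favourite player?",
    "referee":   "Don't get me started on referees...",
}
SUBJECT_CHANGERS = [
    "What's your favourite team?",
    "Did you watch the match at the weekend?",
]

def reply(user_text: str, turn: int) -> str:
    text = user_text.lower()
    for keyword, answer in KEYWORD_REPLIES.items():
        if keyword in text:
            return answer
    # Nothing matched: change the subject with a stock question.
    return SUBJECT_CHANGERS[turn % len(SUBJECT_CHANGERS)]

print("Bot:", SUBJECT_CHANGERS[0])
for turn, line in enumerate(["Liverpool", "I think VAR has ruined football"]):
    print("You:", line)
    print("Bot:", reply(line, turn))
```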

Alternatively, perhaps you could write a chatbot to bring Marvin to life, depressed about everything he is asked to do, if that is not too depressingly simple, should you have a brain the size of a planet.




Can a computer tell a good story?

A tale by Rafael Pérez y Pérez

What’s your favourite story? Perhaps it’s from a brilliant book you’ve read: a classic like Pride and Prejudice or maybe Twilight, His Dark Materials or a Percy Jackson story? Maybe it’s a creepy tale you heard round a campfire, or a favourite bedtime story from when you were a toddler? Could your favourite story have actually been written by a machine?

Stories are important to people everywhere, whatever the culture. They aren’t just for entertainment though. For millennia, people have used storytelling to pass on their ancestral wisdom. Religions use stories to explain things like how God created the world. Aesop used fables to teach moral lessons. Tales can even be used to teach computing: I wrote a short story called ‘A Godlike Heart’ about a kidnapped princess to help my students understand things like bits.

It’s clear that stories are important for humans. That’s why scientists like me are studying how we create them. I use computers to help. Why? Because they give a way to model human experiences as programs and that includes storytelling. You can’t open up a human’s brain as they create a story to see how it’s done. You can analyse in detail what happens inside a computer while it is generating one, though. This kind of ‘computational modelling’ gives a way to explore what is and isn’t going on when humans do it.

So, how to create a program that writes a story? A first step is to look at theories of how humans do it. I started with a book by Open University Professor Mike Sharples. He suggests it’s a continuous cycle between engagement and reflection. During engagement a storyteller links sequences of actions without thinking too much (a bit like daydreaming). During reflection they check what they have written so far, and if needed modify it. In doing so they create rules that limit what they can do during future rounds of engagement. According to him, stories emerge from a constant interplay between engagement and reflection.
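
As a rough illustration, here is my own toy Python sketch of that cycle (not MEXICA’s actual code, and with invented story actions): engagement adds actions freely, while reflection reviews the draft and creates rules that constrain the next round of engagement.

```python
import random

# A toy engagement-reflection loop (not MEXICA's real code). Engagement
# chains story actions without thinking too much; reflection reviews the
# draft and creates rules (here: banned actions) limiting future rounds.
ACTIONS = ["quarrelled with", "attacked", "wounded", "cured", "forgave"]

def engage(story, banned):
    # Engagement: extend the story freely, respecting only the
    # constraints that earlier reflection imposed.
    story.append(random.choice([a for a in ACTIONS if a not in banned]))

def reflect(story, banned):
    # Reflection: check the draft and set rules for the next round,
    # e.g. don't repeat the action we just used.
    banned.clear()
    banned.add(story[-1])

story, banned = [], set()
for _ in range(5):
    engage(story, banned)
    reflect(story, banned)
print("Jaguar Knight " + ", then ".join(story) + " Trader.")
```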


With this in mind I wrote a program called MEXICA that generates stories about the ancient inhabitants of Mexico City (they are often wrongly called the Aztecs – their real name is the Mexicas). MEXICA simulates these engagement-reflection cycles. However, to write a program like this you need to solve lots of problems. For instance, what type of knowledge does the program need to create a story? It’s more complicated than you might think. What knowledge would you need to write a story about the last football World Cup? You would need facts about Brazilian culture, the teams that played, the game’s rules… Similarly, to write a story about the Mexicas you need to know about the ancient cultures of Mexico, their religion, their traditions, and so on. Figuring out the amount and type of knowledge a system needs to generate a story is a key problem a computer scientist developing a computerised storyteller must solve. Whatever the story, you need to know something about human emotions. MEXICA keeps track of the emotional links between the characters and uses them to decide which actions might sensibly follow.
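
A sketch of how that might work (invented numbers and actions – MEXICA’s real knowledge base is far richer): give each pair of characters an emotional link value, and only allow actions whose preconditions match the current feeling.

```python
# A toy sketch of emotional links between characters (invented values,
# not MEXICA's real knowledge base). Each action has a precondition on
# the link and an effect on it; only actions whose precondition holds
# make sense as the next step of the story.
links = {("Trader", "Jaguar Knight"): 0}  # -3 = hatred .. +3 = love

ACTIONS = [
    # (action, minimum link required, change to the link)
    ("made fun of", -3, -2),
    ("attacked",    -2, -1),
    ("cured",       -3, +3),
    ("thanked",     +1,  0),
]

def sensible_next_actions(a, b):
    feeling = links[(a, b)]
    return [name for name, needed, _ in ACTIONS if feeling >= needed]

def perform(a, b, action):
    for name, _, change in ACTIONS:
        if name == action:
            links[(a, b)] += change
    print(a, action, b, "(link is now", str(links[(a, b)]) + ")")

perform("Trader", "Jaguar Knight", "attacked")
print("Sensible next actions:", sensible_next_actions("Trader", "Jaguar Knight"))
```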

By now you are probably wondering what MEXICA’s stories look like. Here’s an example:

“Jaguar Knight made fun of and laughed at Trader. This situation made Trader really angry! Trader thoroughly observed Jaguar Knight. Then, Trader took a dagger, jumped towards Jaguar Knight and attacked Jaguar Knight. Jaguar Knight’s state of mind was very volatile and without thinking about it Jaguar Knight charged against Trader. In a fast movement, Trader wounded Jaguar Knight. An intense haemorrhage aroused which weakened Jaguar Knight. Trader knew that Jaguar Knight could die and that Trader had to do something about it. Trader went in search of some medical plants and cured Jaguar Knight. As a result, Jaguar Knight was very grateful towards Trader. Jaguar Knight was emotionally tied to Trader but Jaguar Knight could not accept Trader’s behaviour. What could Jaguar Knight do? Trader thought that Trader overreacted; so, Trader got angry with Trader. In this way, Trader – after consulting a Shaman – decided to exile Trader.”

As you can see it isn’t able to write stories as well as a human yet! The way it phrases things is a bit odd, like “Trader got angry with Trader” rather than “Trader got angry with himself”. It’s missing another area of knowledge: how to write English naturally! Even so, the narratives it produces are interesting and tell us something about what does and doesn’t make a good story. And that’s the point. Programs like MEXICA help us better understand the processes and knowledge needed to generate novel, interesting tales. If one day we create a program that can write stories as well as the best writers we will know we really do understand how humans do it. Your own favourite story might not be written by a machine, but in the future, you might find your grandchildren’s favourite ones were!

If you like to write stories, then why not learn to program too? Then you could try writing a storytelling program yourself. Could you improve on MEXICA?

Rafael Pérez y Pérez, Universidad Autónoma Metropolitana, México

from the CS4FN archive


The machines can translate now

(From the cs4fn archive)

“The Machines can translate now…
…I SAID ‘THE MACHINES CAN TRANSLATE NOW’”

The stereotype of the Englishman abroad when confronted by someone who doesn’t speak English is just to say it louder. That could soon be a thing of the past as portable devices start to gain speech recognition skills and as the machines get better at translating between languages.

Traditionally machine translation has involved professional human linguists manually writing lots of translation rules for the machines to follow. Recently there have been great advances in what is known as statistical machine translation, where the machine learns the translation rules automatically. It does this using a parallel corpus*: just lots of pairs of sentences, one a sentence in the original language, the other its translation. Parallel corpora* are extracted from multi-lingual news sources like the BBC web site, where professional human translators have done the translations.
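
As a taste of how learning from a parallel corpus can work, here is a deliberately tiny Python sketch. Real statistical machine translation uses far more sophisticated statistics (and vastly more data); this just counts which words co-occur across a handful of made-up English-French sentence pairs and treats each word’s most frequent partner as its ‘translation rule’.

```python
from collections import Counter, defaultdict

# A drastically simplified sketch of learning translation rules from a
# parallel corpus (made-up sentence pairs; real systems use far more
# data and far better statistics than raw co-occurrence counts).
corpus = [
    ("the house", "la maison"),
    ("the car",   "la voiture"),
    ("a house",   "une maison"),
]

cooccurrences = defaultdict(Counter)
for english, french in corpus:
    for e in english.split():
        for f in french.split():
            cooccurrences[e][f] += 1  # e appeared alongside f in a pair

# Take each English word's most frequent partner as its 'rule'.
rules = {e: counts.most_common(1)[0][0] for e, counts in cooccurrences.items()}
print(rules["house"])  # 'maison' - it co-occurs with 'house' twice
```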

Let’s look at an example translation of the accompanying original Arabic:

Machine Translation: Baghdad 1-1 (AFP) – The official Iraqi news agency reported that the Chinese vice-president of the Revolutionary Command Council in Iraq, Izzat Ibrahim, met today in Baghdad, chairman of the Saudi Export Development Center, Abdel Rahman al-Zamil.

Human Translation: Baghdad 1-1 (AFP) – Iraq’s official news agency reported that the Deputy Chairman of the Iraqi Revolutionary Command Council, Izzet Ibrahim, today met with Abdul Rahman al-Zamil, Managing Director of the Saudi Center for Export Development.

This example shows a sentence from an Arabic newspaper, then its translation by Queen Mary University of London’s statistical machine translator, and finally a translation by a professional human translator. The statistical translation does allow a reader to get a rough understanding of the original Arabic sentence. There are several mistakes, though.

The Rosetta Stone: the same text in three languages. Image © Hans Hillewaert CC BY-SA 4.0, from Wikimedia.

Mistranslating the “Managing Director” of the export development center as its “chairman” is perhaps not too much of a problem. Mistranslating “Deputy Chairman” as the “Chinese vice-president” is very bad. That kind of mistranslation could easily lead to grave insults!

That reminds me of the point in ‘The Hitch-Hiker’s Guide to the Galaxy’ where Arthur Dent’s words “I seem to be having tremendous difficulty with my lifestyle,” slip down a wormhole in space-time to be overheard by the Vl’hurg commander across a conference table. Unfortunately this was understood in the Vl’hurg tongue as the most dreadful insult imaginable, resulting in them waging terrible war for centuries…

For now humans are still the best translators, but the machines are learning from them fast!

– Paul Curzon, Queen Mary University of London

*corpus and corpora = singular and plural for the word used to describe a collection of written texts, literally a ‘body’ of text. A corpus might be all the works written by one author, corpora might be works of several authors.
