Service Model: a review

A robot butler outline on a blood red background
Image by OpenClipart-Vectors from Pixabay

Artificial Intelligences are just tools that do nothing but follow their programming. They are not self-aware and have no capacity for self-determination. They are a what, not a who. So what is it like to be a robot just following its (complex) program, making decisions based on data alone? What is it like to be an artificial intelligence? What is the real difference between being self-aware and not? How does that differ from being human? These are the themes explored by the dystopian (or is it utopian?) and funny science fiction novel “Service Model” by Adrian Tchaikovsky.

In a future where the tools of computer science and robotics have been used to make human lives as comfortable as conceivably possible, Charles(TM) is a valet robot looking after his Master’s every whim. His every action is controlled by a task list turned into sophisticated human-facing interaction. Charles is designed to be totally logical but also totally loyal. What could go wrong? Everything, it turns out, when he apparently murders his master. Why did it happen? Did he actually do it? Is there a bug in his program? Has he been infected by a virus? Was he being controlled by others as part of an uprising? Has he become self-aware and made his own decision to turn on his evil master? And what should he do now? Will his task list continue to guide him once he is in a totally alien context he was never designed for, where those around him are apparently being illogical?
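
The novel never spells out Charles’s internals, but behaviour driven entirely by a prioritised task list is a real and familiar idea in computer science. Here is a minimal sketch in Python of that kind of control loop (the tasks and priorities are invented for illustration, not taken from the book):

```python
import heapq

# A toy task-list-driven agent: the robot repeatedly pops the
# highest-priority task and acts on it. Lower numbers run first.
tasks = []  # a priority queue of (priority, task) pairs
heapq.heappush(tasks, (2, "lay out the Master's clothes"))
heapq.heappush(tasks, (1, "wake the Master at 7am"))
heapq.heappush(tasks, (3, "polish the silverware"))

while tasks:
    priority, task = heapq.heappop(tasks)
    # A real valet robot would turn each task into "sophisticated
    # human-facing interaction" here; we just print it.
    print(f"Doing (priority {priority}): {task}")
```

Nothing in such a loop decides what to want; it only decides what to do next. That gap is exactly where the novel’s questions live.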

The novel explores important topics we all need to grapple with, in a fun but serious way. It looks at what AI tools are for, and at the difference between a tool and a person even when they are doing the same jobs. Is it actually good to replace the work of humans with programs just because we can? Who actually benefits and who suffers? AI is being promoted as a silver bullet that will solve our economic problems. Yet we have been replacing humans with computers for decades now on the strength of that promise, and still prices go up and inequality seems to do nothing but rise, with ever more children living in poverty. Who is actually benefiting? A small number of billionaires certainly are. Is everyone? We have many better “toys” that superficially make life easier and more comfortable: we can buy anything we want from the comfort of our sofas, self-driving cars will soon take us anywhere we want, we can get answers to any question we care to ask, and ever more routine jobs are done by machines. Many areas of work, boring or otherwise, are becoming a thing of the past, with a promise of utopia. But are we solving problems or making them with our drive to automate everything? Is it good for society as a whole or just good for vested interests? Are we losing track of what is most important about being human? Charles will perhaps help us find out.

Thinking about the consequences of technology is an important part of any computer science education, and all CS professionals should think about the ethics of what they are involved in. Reading great science fiction such as this is one good way to explore the consequences, though as Ursula Le Guin said, the best science fiction doesn’t predict the future: it tells us about ourselves in the present. Following in the tradition of “The Machine Stops” and “I, Robot”, “Service Model” (and the short story “Human Resources” that comes with it) does exactly that, if in a satirical way. It is a must-read for anyone involved in the design of AI tools, especially those promoting the idea of utopian futures.

Paul Curzon, Queen Mary University of London


Moon and Mind-Body Dualism

**spoiler alert**

Two identical astronauts facing one another
Image by Mohamed Hassan from Pixabay (duplicated by CS4FN)

The least interesting thing about Duncan Jones is who his superstar father is. He stepped out of that shadow with a vengeance by directing one of the coolest films ever: Moon. It premiered at Sundance in 2009 to brilliant reviews and for me is a classic in the tradition of Silent Running, Outland and 2001. If you are interested in artificial intelligence (which Jones obviously is) then you will undoubtedly love Moon.

What’s most interesting about Duncan is that he finished off his degree in Philosophy by writing a dissertation on Artificial Intelligence. He obviously wasn’t quite so into snappy titles then as he is now, though, as he called it: “How to kill your computer friend: an investigation of the Mind / Body problem and how it relates to the hypothetical creation of a thinking machine.”

What is the mind-body problem all about? Well, it’s probably one of the deepest problems computer scientists, along with philosophers, psychologists and neurobiologists, are grappling with. Its roots date back at least as far as Plato and it has been keeping philosophers in business ever since. It boils down to the question of whether our mind is a physical thing or not, and if not, how our mind can affect the physical world at all. Descartes believed they were separate but interacted through the pineal gland – a pea-sized gland in the brain. (He was wrong about the pineal gland, incidentally. It actually produces melatonin, which amongst other things controls sleep patterns.) Descartes also thought that only humans, not animals, had both pineal glands and minds. (He was wrong about that too, though I’m being a bit harsh on him, making him sound like a bit of a loser – he was pretty smart really, one of the greatest thinkers ever – honest.) A more interesting part of Descartes’ theory of mind-body dualism is that he suggested that the body works like a machine. That is of course where computer scientists get interested.

Fascinating as the argument over dualism is, it was all a bit, well, philosophical – until computers became a practical reality, that is. Suddenly it turned into an important question about what it is possible to engineer. Forget the AI question of whether a computer can be intelligent. Dualism moves us on to worrying about whether a computer can ever have a mind. Could a computer ever become conscious and have a “self”? No one knows. No machine does either, right now.

After finishing his degree, Duncan actually flirted with studying for a PhD on Artificial Intelligence but packed it in to focus on film directing instead. He seems to have done an awful lot of searching for his “self” before finding his passion as a Director. Luckily for us, though, he has continued to explore the same philosophical themes in Moon.

It all concerns Sam Bell, who is left alone working at a base on the far side of the moon. He has only a robot called Gerty to keep him company on his three year stint. After an accident he comes across a doppelganger of himself. Is it the real him, or a clone the company have somehow created…? Is his “self” just losing the plot or is there more to his “self” than meets the eye?

Art as film can clearly be just as good a medium as a PhD thesis for exploring the philosophy of computation!

Oh, and if you are really interested and didn’t pick it up from all the media fuss at the time, we will leave you to Google who his father is for yourself. This may be the first article ever written about Duncan Jones that doesn’t tell you!

– Paul Curzon, Queen Mary University of London (from the archive)


The Chinese room: zombie attack!

by Paul Curzon, Queen Mary University of London

Jigsaw brain with pieces missing
Image by Gordon Johnson from Pixabay 

(From the cs4fn archive)

Iain M Banks’s science fiction novels about ‘The Culture’ imagine a universe inhabited (and largely run) by ‘Minds’. These are incredibly intelligent machines – mainly spaceships – that are also independently thinking, conscious beings with their own personalities. From the replicants in Blade Runner and the robots in Star Wars to Iain M Banks’s Minds, science fiction is full of intelligent machines. Could we ever really create a machine with a mind: not just a computer that computes, but one that really thinks? Philosophers have been arguing about it for centuries. Things came to a head when the philosopher John Searle came up with a thought experiment called the ‘Chinese room’. He claims it gives a cast-iron argument that programmed ‘Minds’ can never exist. Are the computer scientists who are trying to build real artificial intelligences wasting their time? Or could zombies lurch to the rescue?

The Shaolin warrior monk

Imagine that the galaxy is populated by an advanced civilisation that has solved the problem of creating artificial intelligence programs. Wanting to observe us more closely, they build a replicant that looks, dresses and moves just like a Shaolin warrior monk (it has to protect itself, and the aliens watch too much TV!). They create a program for it that encodes the rules of Chinese. The machine is dispatched to Earth. Claiming to have taken a vow of silence, it does not speak (the aliens weren’t hot on accents). It reads Chinese characters written by the earthlings, then follows the instructions in its Chinese program that tell it which Chinese characters to write in response. It duly has written conversations with all the earthlings it meets as it wanders the planet, leaving them in no doubt that they have been conversing with a real human Chinese speaker.

The question is, is that machine monk really a Mind? Does it really understand Chinese or is it just simulating that ability?

The Chinese room

Searle answers this by imagining a room in which a human sits. She speaks no Chinese but instead has a book of rules – the aliens’ computer program written out in English. People pass in Chinese symbols through a slot. She looks them up in the book and it tells her the Chinese symbols to pass back out. As she doesn’t understand Chinese she has no idea what the symbols coming in or going out mean. She is just uncomprehendingly following the book. Yet to the outside world she seems to be just as much a native speaker as that machine monk. She is simulating the ability to understand Chinese. As she’s using the same program as the monk, doing exactly what it would do, it follows that the machine monk is also just simulating intelligence. Therefore programs cannot understand. They cannot have a mind.
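
Searle’s rule book is, in effect, a giant lookup table. A toy Python sketch makes the point (the rules here are invented and trivially small; the thought experiment assumes a book rich enough to fool anyone, but the principle is identical):

```python
# A toy "Chinese room": symbols in, symbols out, by blind lookup.
# The rules below are made up for illustration; Searle's point is
# meant to hold for any rule book, however large and sophisticated.
RULE_BOOK = {
    "你好": "你好！",          # a greeting in, a greeting out
    "你会说中文吗？": "会。",  # "Do you speak Chinese?" -> "Yes."
}

def chinese_room(symbols_in: str) -> str:
    # Look the symbols up and copy out what the book dictates.
    # At no point is any meaning attached to them.
    return RULE_BOOK.get(symbols_in, "？")

print(chinese_room("你好"))  # replies correctly, understanding nothing
```

Nothing in the program knows what a greeting is; it just swaps one string for another. The question Searle raises is whether scaling the table up, or replacing it with cleverer symbol-shuffling, could ever change that.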

Is that machine monk a Mind?

Searle’s argument is built on some assumptions. Programs are ‘syntactic devices’: that just means they move symbols around, swapping them for others. They do it without giving those symbols any meaning. A human mind, on the other hand, works with ‘semantics’ – the meanings of symbols, not just the symbols themselves. We understand what the symbols mean. The Chinese room is supposed to show you can’t get meaning by pushing symbols around. As any future artificial intelligence will be based on programs pushing symbols around, it will not be a Mind that understands what it is doing.

The zombies are coming

So is this argument really cast-iron? It has generated lots of debate, virtually all of it aiming to prove Searle wrong. The counter-arguments are varied and even the zombies have piled in to fight the cause: philosophical ones at least. What is a philosophical zombie? It’s just a human with no consciousness, no mind. One way to attack Searle’s argument is to attack its assumptions. That’s what the zombies are there to do. If the assumptions aren’t actually true then the argument falls apart. According to Searle, human brains do something more than push symbols about; they have a way of working with meaning. However, there can’t be a way of telling that just by talking to someone, as otherwise the same test could have been used to tell that the machine monk wasn’t a mind.

Imagine, then, that there has been a nuclear accident and lots of babies are born with a genetic mutation that makes them zombies. They have no mind, so no ability to understand meaning. Despite that, they act exactly like humans: so much so that there is no way to tell zombies and humans apart. The zombies grow up, marry and have zombie children.

Presumably zombie brains are simpler than human ones – they don’t have whatever complication it is that introduces minds. Being simpler they have a fitness advantage that will allow them to out-compete humans. They won’t need to roam the streets killing humans to take over the world. If they wait long enough and keep having children, natural selection will do it for them.

The zombies are here

The point is, it could already have happened. We could all be zombies and just not know it. We think we are conscious, but that could just be an illusion – another simulation. We have no way to prove we are not zombies, and if we could be zombies, then Searle’s assumption that we are different to machines may not be true. The Chinese room argument falls apart.

Does it matter?

The arguments and counter-arguments continue. To an engineer trying to build an artificial intelligence this actually doesn’t matter. Whether you have built a Mind or just something that exactly simulates one makes no practical difference. It makes a big difference to philosophers, though, and to our understanding of what it means to be human.

Let’s leave the last word to Alan Turing. He pointed out 30 years before the Chinese room was invented that it’s generally considered polite to assume that other humans are Minds like us (not zombies). If we do end up with machine intelligences so good we can’t tell they aren’t human, it would be polite to extend the assumption to them too. That would surely be the only humane thing to do.


This blog is funded through EPSRC grant EP/W033615/1.