Service Model: a review

A robot butler outline on a blood red background
Image by OpenClipart-Vectors from Pixabay

Artificial Intelligences are just tools that do nothing but follow their programming. They are not self-aware and have no capacity for self-determination. They are a what, not a who. So what is it like to be a robot just following its (complex) program, making decisions based on data alone? What is it like to be an artificial intelligence? What is the real difference between being self-aware and not? How does it differ from being human? These are the themes explored by the dystopian (or is it utopian?) and funny science fiction novel “Service Model” by Adrian Tchaikovsky.

In a future where the tools of computer science and robotics have been used to make human lives as comfortable as conceivably possible, Charles(TM) is a valet robot looking after his Master’s every whim. His every action is driven by a task list turned into sophisticated, human-facing interaction. Charles is designed to be totally logical but also totally loyal. What could go wrong? Everything, it turns out, when he apparently murders his master. Why did it happen? Did he actually do it? Is there a bug in his program? Has he been infected by a virus? Was he being controlled by others as part of an uprising? Has he become self-aware and able to make his own decision to turn on his evil master? And what should he do now? Will his task list continue to guide him once he is in a totally alien context he was never designed for, and where those around him are apparently being illogical?

The novel explores important topics we all need to grapple with, in a fun but serious way. It looks at what AI tools are for and the difference between a tool and a person, even when they are doing the same jobs. Is it actually good to replace the work of humans with programs just because we can? Who actually benefits and who suffers? AI is being promoted as a silver bullet that will solve our economic problems. Yet we have been replacing humans with computers for decades now based on that promise, and prices still go up while inequality seems to do nothing but rise, with ever more children living in poverty. Who is actually benefiting? A small number of billionaires certainly are. Is everyone? We have many better “toys” that superficially make life easier and more comfortable: we can buy anything we want from the comfort of our sofas, self-driving cars will soon take us anywhere we want, we can get answers to any question we care to ask, and ever more routine jobs are done by machines. Many areas of work, boring or otherwise, are becoming a thing of the past with a promise of utopia. But are we solving problems or creating them with our drive to automate everything? Is it good for society as a whole or just good for vested interests? Are we losing track of what is most important about being human? Charles will perhaps help us find out.

Thinking about the consequences of technology is an important part of any computer science education, and all CS professionals should think about the ethics of what they are involved in. Reading great science fiction such as this is one good way to explore those consequences, though as Ursula Le Guin has said: the best science fiction doesn’t predict the future, it tells us about ourselves in the present. Following in the tradition of “The Machine Stops” and “I, Robot”, “Service Model” (and the short story “Human Resources” that comes with it) does that, if in a satirical way. It is a must-read for anyone involved in the design of AI tools, especially those promoting the idea of utopian futures.

Paul Curzon, Queen Mary University of London

This page is funded by EPSRC on research agreement EP/W033615/1.


Moon and Mind-Body Dualism

**spoiler alert**

Two identical astronauts facing one another
Image by Mohamed Hassan from Pixabay (duplicated by CS4FN)

The least interesting thing about Duncan Jones is who his superstar father is. He stepped out of that shadow with a vengeance in directing one of the coolest films ever: Moon. It premiered at Sundance in 2009 to brilliant reviews and for me is a classic along the lines of Silent Running, Outland and 2001. If you are interested in artificial intelligence (which Jones obviously is) then you will undoubtedly love Moon.

What’s most interesting about Duncan is that he finished off his degree in Philosophy by writing a dissertation on Artificial Intelligence. He obviously wasn’t quite so into snappy titles then as now, though, as he called it: “How to kill your computer friend: an investigation of the Mind / Body problem and how it relates to the hypothetical creation of a thinking machine.”

What is the mind-body problem all about? Well, it’s probably one of the deepest problems computer scientists, along with philosophers, psychologists and neurobiologists, are grappling with. Its roots date back at least as far as Plato and it has been keeping philosophers in business ever since. It boils down to the question of whether our mind is a physical thing or not, and, if not, how our mind can affect the physical world at all. Descartes believed they were separate but interacted through the pineal gland – a pea-sized gland in the brain. (He was wrong about the pineal gland, incidentally. It actually produces melatonin, which amongst other things controls sleep patterns.) Descartes also thought that only humans, not animals, had both pineal glands and minds. (He was wrong about that too, though I’m being a bit harsh on him, making him sound like a bit of a loser – he was pretty smart really, one of the greatest thinkers ever – honest.) A more interesting part of Descartes’ theory of mind and body dualism is that he suggested that the body works like a machine. That, of course, is where computer scientists get interested.

Fascinating as the argument over dualism is, it was all a bit, well, philosophical. Until computers became a practical reality, that is. Suddenly it turned into an important question about what it is possible to engineer. Forget the AI question of whether a computer can be intelligent. Dualism moves us on to worrying about whether a computer can ever have a mind. Could a computer ever become conscious and have a “self”? No one knows. No machine knows either, right now.

After finishing his degree, Duncan actually flirted with studying for a PhD on Artificial Intelligence but packed it in to focus on film directing instead. He seems to have done an awful lot of searching for his “self” before finding his passion as a Director. Luckily for us though he has continued to explore the same philosophical themes in Moon.

It all concerns Sam Bell, who is left alone working at a base on the far side of the moon. He has only a robot called Gerty to keep him company on his three year stint. After an accident he comes across a doppelganger of himself. Is it the real him, or a clone the company have somehow created…? Is his “self” just losing the plot or is there more to his “self” than meets the eye?

Art as film can clearly be just as good a medium as a PhD thesis for exploring the philosophy of computation!

Oh, and if you are really interested and didn’t pick it up from all the media fuss at the time, we will leave you to Google who his father is for yourself. This may be the first article ever written about Duncan Jones that doesn’t tell you!

– Paul Curzon, Queen Mary University of London (from the archive)

This page is funded by EPSRC on research agreement EP/W033615/1.


A storm in a bell jar

lightning
Image by FelixMittermeier from Pixabay 

Ada Lovelace was close friends with John Crosse, and knew his father Andrew: the ‘real Frankenstein’. Andrew Crosse apparently created insect life from electricity, stone and water…

Andrew Crosse was a ‘gentleman scientist’ doing science for his own amusement, including work improving giant versions of the first batteries, called ‘voltaic piles’. He was given the nickname ‘the thunder and lightning man’ because of the way he used the batteries to produce giant discharges of electricity with bangs as loud as cannons.

He hit the headlines when he appeared to create life from electricity, Frankenstein-like. This was an unexpected result of his experiments using electricity to make crystals. He was passing a current through water containing dissolved limestone over a period of weeks. In one experiment, about a month in, a perfect insect appeared, apparently from nowhere, and soon after started to move. More and more insects then appeared over time. He mentioned it to friends, which led to a story in a local paper. It was then picked up nationally. Some of the stories said he had created the insects, and this led to outrage and death threats over his apparent blasphemy in trying to take the position of God.

(Does this start to sound like a modern social networking storm, trolls and all?) In fact he appears to have believed, and others agreed, that the mineral samples he was using must have been contaminated with tiny insect eggs that just naturally hatched. Scientific results are only accepted if they can be replicated, and others who took care to avoid contamination couldn’t get the same result. The secret of creating life had not been found.

While Mary Shelley, who wrote Frankenstein, did know Crosse, he can’t, sadly perhaps for the story’s sake, have been the inspiration for Frankenstein as has been suggested, given she wrote it decades earlier!

– Paul Curzon, Queen Mary University of London (from the archive)


EPSRC supported this article through research grant EP/K040251/2, held by Professor Ursula Martin, as well as grant EP/W033615/1.


Pass the screwdriver, Igor

Mary Shelley, Frankenstein’s monster and artificial life

Frankenstein's Monster
Image by sethJreid from Pixabay

Shortly after Ada Lovelace was born, so long before she made predictions about future “creative machines”, Mary Shelley, a friend of her father (Lord Byron), was writing a novel. In her book, Frankenstein, inanimate flesh is brought to life. Perhaps Shelley foresaw what is actually to come, what computer scientists might one day create: artificial life.

Life it may not be, but engineers are now doing pretty well in creating humanoid machines that can do their own thing. Could a machine ever be considered alive? The 21st century is undoubtedly going to be the age of the robot. Maybe it’s time to start thinking about the consequences in case they gain a sense of self.

Frankenstein was obsessed with creating life. In Mary Shelley’s story, he succeeded, though his creation was treated as a “Monster” struggling to cope with the gift of life it was given. Many science fiction books and films have toyed with these themes: the film Blade Runner, for example, explored similar ideas about how intelligent life is created; androids that believe they are human, and the consequences for the creatures concerned.

Is creating intelligent life fiction? Not totally. Several groups of computer scientists are exploring what it means to create non-biological life, and how it might be done. Some are looking at robot life, working at the level of insect life-forms, for example. Others are looking at creating intelligent life within cyberspace.

For 70 years or more, scientists have tried to create artificial intelligences. They have had a great deal of success in specific areas such as computer vision and chess-playing programs. They are not really intelligent in the way humans are, though they are edging closer. However, none of these programs really cuts it as creating “life”. Life is something more than intelligence.

A small band of computer scientists have been trying a different approach that they believe will ultimately lead to the creation of new life forms: life forms that could one day even claim to be conscious (and who would we be to disagree with them if they think they are?). These scientists believe life can’t be engineered in a piecemeal way: the whole being has to be created as a coherent whole. Their approach is to build the basic building blocks and let life emerge from them.

Sodarace creatures racing over a bumpy terrain
A sodarace in action
by CS4FN

The outline of the idea could be seen in the game Sodarace, where you could build your own creatures that move around a virtual world, and even let them evolve. One approach to building a creature, such as a spider, would be to try to work out mathematical equations for how each leg moves and program those equations in. The alternative, artificial-life way, as used in Sodarace, is instead to program up the laws of physics, such as gravity and friction, and how masses, springs and muscles behave according to those laws. Then you just put these basic bits together in a way that corresponds to a spider. With this approach you don’t have to work out in advance every eventuality (what if it comes to a wall? Or a cliff? Or bumpy ground?) and write code to deal with it. Instead, natural behaviour emerges.
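To make that concrete, here is a minimal sketch in Python of the mass-spring idea. It is purely illustrative, not Sodarace’s actual code, and all the names and numbers in it are made up for the example: gravity, friction and simple springs are programmed once, a “creature” is then just a few masses wired together, and its motion emerges from the physics alone.

```python
# Illustrative sketch of the mass-spring approach (not Sodarace's real engine).
# Program the physics once; build creatures by wiring masses and springs together.

import math

GRAVITY = -9.8      # downward acceleration (masses treated as 1 unit each)
FRICTION = 0.02     # simple velocity damping each step
DT = 0.01           # simulation time step in seconds

class Mass:
    def __init__(self, x, y):
        self.x, self.y = x, y        # position
        self.vx, self.vy = 0.0, 0.0  # velocity
        self.fx, self.fy = 0.0, 0.0  # force accumulated this step

class Spring:
    def __init__(self, a, b, stiffness=50.0):
        self.a, self.b = a, b
        self.stiffness = stiffness
        self.rest_length = math.dist((a.x, a.y), (b.x, b.y))

    def apply_force(self):
        # Hooke's law: pull or push the two masses back towards the rest length.
        dx, dy = self.b.x - self.a.x, self.b.y - self.a.y
        length = math.hypot(dx, dy) or 1e-9
        force = self.stiffness * (length - self.rest_length)
        fx, fy = force * dx / length, force * dy / length
        self.a.fx += fx; self.a.fy += fy
        self.b.fx -= fx; self.b.fy -= fy

def step(masses, springs):
    for m in masses:
        m.fx, m.fy = 0.0, GRAVITY     # reset forces; gravity always applies
    for s in springs:
        s.apply_force()
    for m in masses:
        m.vx = (m.vx + m.fx * DT) * (1 - FRICTION)
        m.vy = (m.vy + m.fy * DT) * (1 - FRICTION)
        m.x += m.vx * DT
        m.y += m.vy * DT
        if m.y < 0:                   # crude ground: stop masses sinking below it
            m.y, m.vy = 0.0, 0.0

# A "creature": three masses joined into a triangle by three springs.
masses = [Mass(0, 1), Mass(1, 1), Mass(0.5, 2)]
springs = [Spring(masses[0], masses[1]),
           Spring(masses[1], masses[2]),
           Spring(masses[2], masses[0])]

for _ in range(1000):
    step(masses, springs)
print([(round(m.x, 2), round(m.y, 2)) for m in masses])
```

Swap the triangle for a different arrangement of masses and springs and you get a different creature, without changing a line of the physics: that is the emergence the paragraph above describes.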

The artificial life community believe that not just life-like movement but life-like intelligence can emerge in a similar way. Rather than programming the behaviour of muscles, you program the behaviour of neurones and then build brains out of them. That, it turns out, has been the key to the machine learning programs that are storming the world of Artificial Intelligence, turning it into an everyday tool. If aiming for artificial life, however, you would keep going: combine it with the basic biochemistry of an immune system, do a similar thing with a reproductive system, and so on.
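In the same spirit, here is another illustrative sketch (a hand-made toy, not the code of any real machine learning system, with weights chosen by hand for the example): a single artificial neurone is just a weighted sum of its inputs squashed by an activation function, yet wiring three of them together already produces behaviour, computing XOR, that no single neurone can manage on its own.

```python
# Illustrative sketch: build behaviour out of simple neurones instead of springs.

import math

def neurone(inputs, weights, bias):
    # Weighted sum of the inputs, squashed into the range 0..1 by a sigmoid.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Two neurones in a hidden layer feeding one output neurone: enough to
# compute XOR, which no single neurone can do by itself.
def tiny_brain(x1, x2):
    h1 = neurone([x1, x2], [6.0, 6.0], -2.5)     # roughly "x1 OR x2"
    h2 = neurone([x1, x2], [6.0, 6.0], -9.0)     # roughly "x1 AND x2"
    return neurone([h1, h2], [8.0, -8.0], -3.5)  # "OR but not AND" = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(tiny_brain(a, b), 2))
```

In a real system the weights would be learned from data rather than picked by hand, but the building block, the neurone, is the same.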

Want to know more? A wonderful early book is Steve Grand’s “Creation”, on how he created what at the time was claimed to be “the nearest thing to artificial life yet”… It started life as the game “Creatures”.

Then have a go at creating artificial life yourself (but be nice to it).

– Paul Curzon and Peter W McOwan, Queen Mary University of London

EPSRC supported this article through research grant EP/K040251/2, held by Professor Ursula Martin, as well as grant EP/W033615/1.
