The Chinese room: zombie attack!

Iain M Banks’s science fiction novels about ‘The Culture’ imagine a universe inhabited (and largely run) by ‘Minds’. These are incredibly intelligent machines – mainly spaceships – that are also independently thinking conscious beings with their own personalities. From the replicants in Blade Runner and robots in Star Wars to Iain M Banks’s Minds, science fiction is full of intelligent machines. Could we ever really create a machine with a mind: not just a computer that computes, but one that really thinks? Philosophers have been arguing about it for centuries. Things came to a head when philosopher John Searle came up with a thought experiment called the ‘Chinese room’. He claims it gives a cast-iron argument that programmed ‘Minds’ can never exist. Are the computer scientists who are trying to build real artificial intelligences wasting their time? Or could zombies lurch to the rescue?

The Shaolin warrior monk

Imagine that the galaxy is populated by an advanced civilisation that has solved the problem of creating artificial intelligence programs. Wanting to observe us more closely, they build a replicant that looks, dresses and moves just like a Shaolin warrior monk (it has to protect itself, and the aliens watch too much TV!). They create a program for it that encodes the rules of Chinese. The machine is dispatched to Earth. Claiming to have taken a vow of silence, it does not speak (the aliens weren’t hot on accents). It reads Chinese characters written by the earthlings, then follows the instructions in its Chinese program that tell it the Chinese characters to write in response. It duly has written conversations with all the earthlings it meets as it wanders the planet, leaving them all in no doubt that they have been conversing with a real human Chinese speaker.

The question is, is that machine monk really a Mind? Does it really understand Chinese or is it just simulating that ability?

The Chinese room

Searle answers this by imagining a room in which a human sits. She speaks no Chinese but instead has a book of rules – the aliens’ computer program written out in English. People pass in Chinese symbols through a slot. She looks them up in the book and it tells her the Chinese symbols to pass back out. As she doesn’t understand Chinese she has no idea what the symbols coming in or going out mean. She is just uncomprehendingly following the book. Yet to the outside world she seems to be just as much a native speaker as that machine monk. She is simulating the ability to understand Chinese. As she’s using the same program as the monk, doing exactly what it would do, it follows that the machine monk is also just simulating intelligence. Therefore programs cannot understand. They cannot have a mind.

Is that machine monk a Mind?

Searle’s argument is built on some assumptions. Programs are ‘syntactic devices’: that just means they move symbols around, swapping them for others. They do it without giving those symbols any meaning. A human mind on the other hand works with ‘semantics’ – the meanings of symbols, not just the symbols themselves. We understand what the symbols mean. The Chinese room is supposed to show you can’t get meaning by pushing symbols around. As any future artificial intelligence will be based on programs pushing symbols around, it will not be a Mind that understands what it is doing.
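To get a feel for what ‘just pushing symbols around’ means, here is a tiny sketch in Python of a purely syntactic rule book. The symbols and rules are invented for illustration (this is not Searle’s or the aliens’ actual program): it passes symbols back out based only on the symbols that came in, with nothing anywhere representing what they mean.

```python
# A tiny 'syntactic' rule book: symbols in, symbols out, no meaning anywhere.
# The entries are invented for illustration.
RULE_BOOK = {
    "你好": "你好！",                # a greeting in, a greeting out
    "你会说中文吗？": "会。",          # "Can you speak Chinese?" -> "Yes."
    "再见": "再见！",
}

def respond(symbols_in: str) -> str:
    """Look up the incoming symbols; return the symbols to pass back out."""
    return RULE_BOOK.get(symbols_in, "请再说一遍。")   # default: "please say that again"

if __name__ == "__main__":
    for note in ["你好", "你会说中文吗？", "再见"]:
        print(note, "->", respond(note))
```

A rule book good enough to fool anyone would of course be unimaginably bigger than this, but however big it got, it would still only be swapping symbols for symbols.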

The zombies are coming

So is this argument really cast-iron? It has generated lots of debate, virtually all of it aiming to prove Searle wrong. The counter-arguments are varied and even the zombies have piled in to fight the cause: philosophical ones at least. What is a philosophical zombie? It’s just a human with no consciousness, no mind. One way to attack Searle’s argument is to attack the assumptions. That’s what the zombies are there to do. If the assumptions aren’t actually true then the argument falls apart. According to Searle, human brains do something more than push symbols about; they have a way of working with meaning. However, there can’t be a way of telling that just by talking to someone, as otherwise that test could have been used to tell that the machine monk wasn’t a mind.

Imagine then, there has been a nuclear accident and lots of babies are born with a genetic mutation that makes them zombies. They have no mind so no ability to understand meaning. Despite that they act exactly like humans: so much so that there is no way to tell zombies and humans apart. The zombies grow up, marry and have zombie children.

Presumably zombie brains are simpler than human ones – they don’t have whatever complication it is that introduces minds. Being simpler they have a fitness advantage that will allow them to out-compete humans. They won’t need to roam the streets killing humans to take over the world. If they wait long enough and keep having children, natural selection will do it for them.

The zombies are here

The point is it could have already happened. We could all be zombies but just don’t know it. We think we are conscious but that could just be an illusion – another simulation. We have no way to prove we are not zombies and if we could be zombies then Searle’s assumption that we are different to machines may not be true. The Chinese room argument falls apart.

Does it matter?

The arguments and counter arguments continue. To an engineer trying to build an artificial intelligence this actually doesn’t matter. Whether you have built a Mind or just something that exactly simulates one makes no practical difference. It makes a big difference to philosophers, though, and to our understanding of what it means to be human.

Let’s leave the last word to Alan Turing. He pointed out 30 years before the Chinese room was invented that it’s generally considered polite to assume that other humans are Minds like us (not zombies). If we do end up with machine intelligences so good we can’t tell they aren’t human, it would be polite to extend the assumption to them too. That would surely be the only humane thing to do.

Paul Curzon, Queen Mary University of London (from the cs4fn archive)



The First Law of Humans

Preventing robots from hurting us

A chess board with pieces lined up
Image by Pexels from Pixabay 

The first rule for humans when around robots is apparently that they should not do anything too unexpected…

A 7-year old child playing chess in a chess tournament has had his finger broken by a chess-playing robot which grabbed it as the boy made his move. The child was blamed by one of the organisers, who claimed that it happened because the boy “broke the safety rules”! The organisers also apparently claimed that the robot itself was perfectly safe!

What seems to have happened is that, after the robot played its move, the boy played his own move very quickly, before the robot had finished. Somehow this triggered the wrong lines of code in the robot’s program: instructions that were intended for some other situation. With the boy’s hand over the board at the wrong time, those instructions led the robot to grab his finger and not let go.

Spoiler Alert

The situation immediately brings to mind the classic science fiction story “Moxon’s Master” by Ambrose Bierce, published way back in 1899. It is the story of a chess-playing automaton (i.e. robot) and what happens when it is checkmated in a game with its creator, Moxon. It flies into a dangerous rage. There, though, the problem is apparently that the robot has developed emotions, and so emotional reactions. In both situations, however, a robot intended simply to play chess turns out to be capable of harming a human.

The three laws of robotics

Isaac Asimov is famous for his laws of robotics: fundamental unbreakable rules built in to the ‘brain’ of all robots in his fictional world precisely to stop this sort of situation. The rules he formulated were:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
A robot carefully touching fingers with a human
Image by Pete Linforth from Pixabay 

Clearly, had they been in place, the chess robot would not have harmed the boy, and Moxon’s automaton would not have been able to do anything too bad as a result of its temper either.

Asimov devised his rules as a result of a discussion with the science fiction magazine editor John W. Campbell, Jr. He then spent much of his science fiction career writing robot stories about how humans could end up being hurt despite the apparently clear rules. That aside, their key idea was that, to ensure robots were safe around people, robots would need built-in logic that could not be circumvented, to stop them hurting anyone. They needed a fail-safe system monitoring their actions that would take over when breaches were possible. Clearly this chess-playing robot was not “perfectly safe” and not even fail-safe: if it were, the boy would not have been harmed whatever he did. The robot did not have anything at all akin to a working, unbreakable First Law programmed into it.

Dangerous Machines

Asimov’s robots were intelligent and able to think for themselves in a more general sense than any that currently exist. The First Law essentially prevented them from deciding to harm a human, not just from doing so by accident. Perhaps the day will soon come when robots really can think for themselves, so a First Law may soon be important. In any case, machines can harm humans without being able to think. That humans need to be wary around robots is obvious from the fact that there have been numerous injuries and even fatalities in factories using industrial robots in the decades since they were introduced. They are dangerous machines. Fortunately, the carnage inflicted is at least nothing like that of the industrial accidents of the Industrial Revolution. It is still a problem though. People do have to take care and follow safety rules around them!

Rather than humans having to obey safety laws, perhaps we ought therefore to be taking Asimov’s Laws more seriously for all robots. Why can’t those laws just be built in? It is certainly an interesting research problem to think about. The idea of a fail-safe is standard in engineering, so it’s not that general idea that is the problem. The problem is that, rather than intelligence being needed for robots to harm us, intelligence is needed to stop them doing so.

Implementing the First Law

Let’s imagine building the First Law into chess-playing robots, and in particular into the one that hurt the boy. For starters the chess-playing robot would have needed to recognise that the boy WAS a human, so should not be harmed. It would also need to be able to recognise that his finger was a part of him and that gripping his finger would harm him. It would need to know that it was gripping his finger (not a piece) at the time. It would then need a way to stop before it was too late, and do no harm in the stopping. It clearly needs to understand a lot about the world to be able to avoid hurting people in general.

Some of this is almost within our grasp. Computers can certainly do a fairly good job of recognising humans now through image recognition code. They can even recognise individuals, so that first fundamental part of knowing what is and isn’t a human is more or less possible now, just not yet perfectly. Recognising objects in general is perhaps harder. The chess robot presumably has code for recognising pieces already, though a finger perhaps even looks like a piece, at least to a robot. To avoid causing harm in any situation generally, it needs to be able to recognise what lots of objects are, not just chess pieces. It also needs to differentiate what is part of a human, not just what is a human. Object recognition like this is possible, at least in well-defined situations. It is much harder to manage in general, even if the precise objects have never been encountered before. Even harder, though, is probably recognising all the ways it could do harm to the human identified in front of it, including with any of those objects that are around.

Staying in control

The code to do all this would also have to sit, in some sense, at a higher level of control than the code making the robot take actions, as it has to be able to overrule them ALWAYS. For the chess robot, there was presumably a bug that allowed it to grip a human’s finger, as no programmer will have intended that, so it isn’t about monitoring the code itself. The fail-safe code has to be monitoring what is actually happening in the world and be in a position to take over. It also can’t just make the robot freeze, as that may be enough to do the damage of a broken finger if one is already in the robot’s grip (and that may have been part of the problem for the boy’s finger). It also can’t just move its arm back suddenly: what if another child (a toddler perhaps) has just crawled up behind it! It has to monitor the effects of its own commands too! A simple version of such a monitor is probably straightforward though. The robot’s computer architecture just needs to be designed accordingly. One way robots are designed is for new modules to build on top of existing ones, giving new, more complex behaviour as a result, which possibly fits what is needed here. Having additional computers acting as monitors, taking over when others go wrong, is also not really that difficult (bugs in their own code aside) and is a standard idea for mission-critical systems.

So it is probably all the complexity of the world, with unexpected things happening in it, that makes a general version of the First Law hard at the moment… If Asimov’s laws in all their generality are currently a little beyond us, perhaps we should just think about the problem in another, more limited way (at least for now)…

Can a chess playing robot be safe?

In the chess game situation, if anything is moving in front of the robot then it perhaps should just be keeping well out of the way. To do so just needs monitoring code that can detect movement in a small fixed area. It doesn’t need to understand anything about the world apart from movement versus non-movement. That is easily in the realms of what computers can do – even some simple toy robots can detect movement. The monitoring code would still need to be able to override the rest of the code of course, bugs included.
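A minimal sketch of what such a monitoring loop might look like is below. The functions for grabbing a camera frame and stopping the arm are hypothetical stand-ins for real hardware code, and the movement threshold is an invented number: this is a sketch of the idea, not the actual robot’s software.

```python
# Sketch of a movement-watching safety monitor for a robot arm. The functions
# passed in (read_camera_frame, stop_arm_safely) are hypothetical stand-ins for
# real hardware code, and the threshold is an invented number.
import numpy as np

MOVEMENT_THRESHOLD = 12.0   # how much the board area must change before we react

def board_area_has_movement(previous_frame: np.ndarray, current_frame: np.ndarray) -> bool:
    """Very simple movement detection: how much did the pixels change on average?"""
    difference = np.abs(current_frame.astype(float) - previous_frame.astype(float))
    return difference.mean() > MOVEMENT_THRESHOLD

def safety_monitor(read_camera_frame, stop_arm_safely):
    """Run alongside the chess-playing code, overriding it whenever anything moves."""
    previous = read_camera_frame()
    while True:
        current = read_camera_frame()
        if board_area_has_movement(previous, current):
            stop_arm_safely()   # must take priority over whatever the playing code wants
        previous = current

if __name__ == "__main__":
    # Tiny demonstration of the detection step with made-up frames:
    still = np.zeros((4, 4))
    hand_appears = np.full((4, 4), 200.0)
    print(board_area_has_movement(still, hand_appears))   # -> True: time to stop the arm
```

The important design point is that the monitor sits above the playing code: it watches the world, not the program, and it always wins.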

Why, also, could the robot grip a finger with enough pressure to break it? Perhaps it just needed more accurate sensors in its fingers to avoid doing harm, together with a rule that made it let go if it felt too much resistance back. After all, chess pieces don’t resist much!
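In code that rule is only a few lines. A sketch, assuming a hypothetical pressure sensor in the gripper and an invented force limit:

```python
# A 'let go if it resists' rule for the gripper. The sensor and motor functions
# are hypothetical stand-ins and the force limit is invented: the point is just
# that a chess piece never pushes back anywhere near as hard as a finger.
MAX_GRIP_FORCE_NEWTONS = 2.0

def grip_step(read_grip_force, open_gripper, close_gripper_a_little):
    """One step of gripping: close gently, but release the moment resistance is too high."""
    if read_grip_force() > MAX_GRIP_FORCE_NEWTONS:
        open_gripper()              # whatever we are holding, it is not a chess piece
    else:
        close_gripper_a_little()

if __name__ == "__main__":
    grip_step(lambda: 5.0,
              lambda: print("Too much resistance: letting go!"),
              lambda: print("Closing a little more."))
```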

And one last idea, if a little bit more futuristic. A big research area at the moment is soft robotics: robots that are soft and squidgy not hard and solid, precisely so they can do less harm. Perhaps if the chess robot’s robotic claw-like fingers had instead been totally soft and squishy it would not have harmed him even if it did grab his finger.

Had the robot designers tried hard enough they surely could have come up with solutions to make it safer, even if they didn’t have good enough debugging skills to prevent the actual bug that caused the problem. It needs safety to be a very high priority from the outset though: and certainly safety that isn’t just pushed onto humans to be responsible for, as the organisers did.

We shouldn’t be blaming children for not obeying safety rules when they are given what is essentially a hard, industrial robot to play with. Doing so just lets the robot makers off the hook from even trying to make their robots safer, when they clearly could do more. When disasters happen don’t blame the people, improve the system. On the other hand perhaps we should be thinking far more about doing the research that will allow us one day to actually implement Asimov’s Laws in all robots and so have a safety culture in robotics built-in. Then perhaps people would not have to be quite so wary around robots and certainly not have to follow safety rules themselves. That surely is the robot’s job.

Paul Curzon, Queen Mary University of London


Dressing it up

Why it might be good for robots to wear clothes

Even though most robots still walk around naked, the Swedish Institute of Computer Science (SICS) in Stockholm explored how to produce fashion-conscious robots.

The applied computer scientists there were looking for ways to make the robots of today easier for us to get along with. As part of the LIREC project to build the first robot friends for humans they examined how our views of simple robots change when we can clothe and customise them. Does this make the robots more believable? Do people want to interact more with a fashionable robot?

How do you want it?

These days most electronic gadgets allow the human user to customise them. For example, on a phone you can change the background wallpaper or colour scheme, the ringtone or how the menus work. The ability of the owner to change the so-called ‘look and feel’ of software is called end-user programming. It’s essentially up to you how your phone looks and what it does.

Dinosaurs waking and sleeping

The Swedish team began by taking current off-the-shelf robots and adding dress-up elements to them. Enter Pleo, a toy dinosaur ‘pet’ able to learn as you play with it. Now add in that fashion twist. What happens when you can play dress up with the dinosaur? Pleo’s costumes change its behaviour, kind of like what happens when you customise your phone. For example, if you give Pleo a special watchdog necklace the robot remains active and ‘on guard’. Change the costume from necklace to pyjamas, and the robot slowly switches into ‘sleep’ mode. The costumes or accessories you choose communicate electronically with the robot’s program, and its behaviour follows suit in a way you can decide. The team explored whether this changed the way people played with them.
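Under the hood the idea is just a mapping from whatever accessory the robot detects (via a tag in the costume, say) to a behaviour mode. A toy sketch, with invented tag names and modes rather than Pleo’s real ones:

```python
# Toy sketch of costume-driven behaviour: each detected accessory selects a
# behaviour mode. The accessory names and modes are invented for illustration.
BEHAVIOUR_FOR_ACCESSORY = {
    "watchdog_necklace": "on_guard",   # stay active and alert
    "pyjamas": "sleep",                # wind down into sleep mode
    "party_hat": "playful",
}

def choose_behaviour(detected_accessories):
    """Pick a behaviour mode based on whatever the robot is currently wearing."""
    for accessory in detected_accessories:
        if accessory in BEHAVIOUR_FOR_ACCESSORY:
            return BEHAVIOUR_FOR_ACCESSORY[accessory]
    return "default"

print(choose_behaviour(["pyjamas"]))   # -> sleep
```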

Clean sweeps

In another experiment the researchers played dress up with a robot vacuum cleaner. The cleaner rolls around the house sweeping the floor, and has already proven a hit with many consumers. It bleeps happily as its on-board computer works out the best path to bust your carpet dust. The SICS team gave the vacuum a special series of stick-on patches, which could add to its basic programming. They found that choosing the right patch could change the way humans perceive the robot’s actions. Different patches can make humans think the robot is curious, aggressive or nervous. There’s even a shyness patch that makes the robot hide under the sofa.

What’s real?

If humans are to live in a world populated by robots there to help them, the robots need to be able to play by our rules. Humans have whole parts of their brains given over to predicting how other humans will react. For example, we can empathise with others because we know that other beings have thoughts like us, and we can imagine what they think. This often spills over into anthropomorphism, where we give human characteristics to non-human animal or non-living things. Classic examples are where people believe their car has a particular personality, or think their computer is being deliberately annoying – they are just machines but our brains tend to attach motives to the behaviours we see.

Real-er robots?

Robots can produce very complex behaviours depending on the situations they are in and the ways we have interacted with them, which creates the illusion that they have some sort of ‘personality’ or motives in the way they are acting. This can help robots seem more natural and able to fit in with the social world around us. It can also improve the ways they provide us with assistance because they seem that bit more believable. Projects like the SICS’s ‘actDresses’ one help us by providing new ways that human users can customise the actions of their robots in a very natural way, in their case by getting the robots to dress for the part.

Peter W McOwan and the CS4FN team, Queen Mary University of London (Updated from the archive)


The Mummy in an AI world: Jane Webb’s future

Inspired by Mary Shelley’s Frankenstein, the 17-year-old orphan Jane Webb secured her future by writing the first ever Mummy story. The 22nd-century world in which her novel was set is perhaps the most amazing thing about the three-volume book though.

On the death of her father, Jane realised she needed to find a way to support herself and did so by publishing her novel “The Mummy!” in 1827. In contrast to their modern version as stars of horror films, Webb’s Mummy, a reanimation of Cheops, was actually there to help those doing good and punish those that were evil. Napoleon had, at the turn of the century, invaded Egypt, taking with him scholars intent on understanding Ancient Egyptian society. Europe was fascinated with Ancient Egypt and awash with Egyptian artefacts and stories around them. In London, the Egyptian Hall had been built in Piccadilly in 1812 to display Egyptian artefacts, and in 1821 it displayed a replica of the tomb of Seti I. The code of the Rosetta Stone, which led to the decipherment of hieroglyphics, was cracked in 1822. The time was therefore ripe for someone to come up with the idea of a Mummy story.

The novel was not, however, set in her own times but in a 22nd-century future that she imagined, and that future was perhaps more amazing than the idea of a mummy coming to life. Her version of the future was full of technological inventions supporting humanity, as well as social predictions, many of which have come to fruition, such as space travel and the idea that women might wear trousers as the height of fashion (making her a feminist hero). The machines she described in the book led to her meeting her future husband, John Loudon. As a writer about farming and gardening he was so impressed by the idea of a mechanical milking machine included in the book that he asked to meet her. They married soon after (and she became Jane Loudon).

The skilled artificial intelligences she wrote into her future society are perhaps the most amazing of her ideas, in that she was the first person to really envision in fiction a world where AIs and robots were embedded in society, just doing good as standard. To put this into the context of other predictions, Ada Lovelace’s notes suggesting that machines of the future would be able to compose music were written nearly 20 years later.

Jane Webb’s future was also full of cunning computational contraptions: there were steam-powered robot surgeons, foreseeing the modern robots that are able to do operations (and with their steady hands are better at, for example, eye surgery than a human). She also described Artificial Intelligences replacing lawyers. Her machines were fed their legal brief, giving them instructions about the case, through tubes. Whilst robots may not yet have fully replaced barristers and judges, artificial intelligence programs are already used in some places, for example, to help decide the length of sentences of those convicted, and many see it as now only being a matter of time before lawyers are working with Artificial Intelligence programs as standard. Jane’s world also includes a version of the Internet, at a time before the electric telegraph existed and when telegraph messages were sent by semaphore between networks of towers.

The book ultimately secured her future as required, and whilst we do not yet have any real reanimated mummies wandering around doing good deeds, Jane Webb did envision lots of useful inventions, many of which are now a reality, and certainly had pretty good ideas about how future computer technology would pan out in society… despite computers, never mind artificial intelligences, still being well over a century away.

Paul Curzon, Queen Mary University of London


The naked robot

A naked robot holding a flower
Image by bamenny from Pixabay 

Why are so many film robots naked? We take it for granted that robots don’t wear clothes, and why should they?

They are machines, not humans, after all. On the other hand, the quest to create artificial intelligence involves trying to create machines that share the special ingredients of humanity. One of the things that is certainly special about humans in comparison to other animals is the way we like to clothe and decorate our bodies. Perhaps we should think some more about why we do it but the robots don’t!

Shame or showoff?

The creation story in the Christian Bible suggests humans were thrown out of the Garden of Eden when Adam and Eve felt the need to cover up – when they developed shame. Humans usually wear more than just the bare minimum though, so wearing clothing can’t be all about shame. Nor is it just about practicalities like keeping warm. Turn up at an interview covering your body with the wrong sort of clothes and you won’t get the job. Go to a fancy dress party in the clothes that got you the job and you will probably feel really uncomfortable the moment you see that everyone else is wearing costumes. Clothes are about decorating our bodies as much as covering them.

Our urge to decorate our bodies certainly seems to be a deeply rooted part of what makes us human. After all, anthropologists consider finds like ancient beads as the earliest indications of humanity evolving from apehood. It is taken as evidence that there really was someone ‘in there’ back then. Body painting is used as another sign of our emerging humanity. We still paint our bodies millennia later too. Don’t think we’re only talking about children getting their faces painted – grownups do it too, as the vast make-up industry and the popularity of tattoos show. We put shiny metal and stones around our necks and on our hands too.

The fashion urge

Whatever is going on in our heads, clearly the robots are missing something. Even in the movies the intelligent ones rarely feel the need to decorate their bodies. R2D2? C3PO? Wall-E? The exceptions are the ones created specifically to pass themselves off as human, like the replicants in Blade Runner.

You can of course easily program a robot to ‘want’ to decorate itself, or to refuse to leave its bedroom unless it has managed to drape some cloth over its body and shiny wire round its neck, but if it was just following a programmed rule would that be the same as when a human wears clothes? Would it be evidence of ‘someone in there’? Presumably not!

We do it because of an inner need to conform more than an inner need to wear a particular thing. That is what fashion is really all about. Perhaps programming an urge to copy others would be a start. In Wall-E, the robot shows early signs of this as he tries to copy what he sees the humans doing in the old films he watches. At one point he even uses a hubcap as a prop hat for a dance. Human decoration may have started as a part of rituals too.

Where to now?

Is this need to decorate our bodies something special, something linked to what makes us human? Should we be working on what might lead to robots doing something similar of their own accord? When archaeologists are hunting through the rubble in thousands of years’ time, will there be something other than beads that would confirm their robot equivalent to self-awareness? If robots do start to decorate and cover up their bodies because they want to rather than because it was what some God-like programmer coded them to do, surely something special will have happened. Perhaps that will be the point when the machines have to leave their Garden of Eden too.

Paul Curzon, Queen Mary University of London (from the archive)


Hoverflies: comin’ to get ya

A hoverfly on a blade of grass
Image by Ronny Overhate from Pixabay (cropped)

By understanding the way hoverflies mate, computer scientists found a way to sneak up on humans, and with it a way to make games harder.

When hoverflies get the hots for each other they make some interesting moves. Biologists had noticed that as one hoverfly moves towards a second to try and mate, the approaching fly doesn’t go in a straight line. It makes a strange curved flight. Peter McOwan and his student Andrew Anderson thought this was an interesting observation and started to look at why it might be. They came up with a cunning idea. The hoverfly was trying to sneak up on its prospective mate unseen.

The route the approaching fly takes matches the movements of the prospective mate in such a way that, to the mate, the fly in the distance looks like it’s far away and ‘probably’ stationary.

Tracking the motion of a hoverfly and its sightlines
Image by CS4FN

How does it do this? Imagine you are walking across a field with a single tree in it, and a friend is trying to sneak up on you. Your friend starts at the tree and moves in such a way that they are always in direct line of sight between your current position and the tree. As they move towards you they are always silhouetted against the tree. Their motion towards you is mimicking the stationary tree’s apparent motion as you walk past it… and that’s just what the hoverfly does when approaching a mate. It’s a stealth technique called ‘active motion camouflage’.
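The shadower’s rule is surprisingly simple to write down: each step, creep a little further along the straight line joining the landmark to the target’s current position. A minimal 2D sketch (the positions and speed are invented numbers):

```python
# Minimal sketch of active motion camouflage in 2D. Each step the shadower edges
# closer to the target but always sits on the line from the landmark (the 'tree')
# to the target, so the target sees it silhouetted against the landmark.
import numpy as np

def camouflage_step(shadower, target, landmark, speed=0.5):
    """Return the shadower's next position."""
    sightline = target - landmark
    length = np.linalg.norm(sightline)
    distance_along = min(np.linalg.norm(shadower - landmark) + speed, length)
    return landmark + sightline / length * distance_along

landmark = np.array([0.0, 0.0])          # the fixed 'tree'
target = np.array([10.0, 5.0])           # the one being crept up on
shadower = np.array([0.0, 0.0])          # the sneak starts at the tree
for _ in range(5):
    target = target + np.array([0.3, 0.0])          # the target wanders along
    shadower = camouflage_step(shadower, target, landmark)
    print(shadower)
```

However the target moves, the shadower keeps re-positioning itself on the new sightline, so from the target’s point of view it never seems to move against the background.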

By building a computer model of the mating flies, the team were able to show that this complex behaviour can actually be done with only a small amount of ‘brain power’. They went on to show that humans are also fooled by active motion camouflage. They did this by creating a computer game where you had to dodge missiles. Some of those missiles used active motion camouflage. The missiles using the fly trick were the most difficult to spot.

It just goes to show: there is such a thing as a useful computer bug.

– Peter W McOwan and Paul Curzon, Queen Mary University of London



Ant Art

There are many ways Artificial Intelligences might create art. Breeding a colony of virtual ants is one of the most creative.

Photogrowth from the University of Coimbra does exactly that. The basic idea is to take an image and paint an abstract version of it. Normally you would paint with brush strokes. In ant paintings you paint with the trails of hundreds of ants as they crawl over the picture, depositing ink rather than the normal chemical trails ants use to guide other ants to food. The colours in the original image act as food for the ants, which absorb energy from its bright parts. They then use up energy as they move around. They die if they don’t find enough food, but reproduce if they have lots. The results are highly novel swirl-filled pictures.
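A toy version of that loop is easy to sketch: every step each ant wanders one square, ‘eats’ some of the brightness where it lands, pays an energy cost for moving, and dies or reproduces depending on how it is doing. All the numbers here are invented, and a random grid stands in for a real image:

```python
# Toy version of an ant painting's core loop: ants wander over an image, gain
# energy from bright pixels ('food'), spend energy moving, and die or reproduce
# depending on their energy. Their visited positions form the trails that would
# be drawn. All the constants are invented for illustration.
import random

WIDTH, HEIGHT = 64, 64
image = [[random.randint(0, 255) for _ in range(WIDTH)] for _ in range(HEIGHT)]  # stand-in image

MOVE_COST, BIRTH_ENERGY = 4, 150

ants = [{"x": random.randrange(WIDTH), "y": random.randrange(HEIGHT), "energy": 50}
        for _ in range(100)]
trails = []                                    # every point an ant visits: the picture

for step in range(200):
    new_ants = []
    for ant in ants:
        ant["x"] = (ant["x"] + random.choice([-1, 0, 1])) % WIDTH    # wander one square
        ant["y"] = (ant["y"] + random.choice([-1, 0, 1])) % HEIGHT
        ant["energy"] += image[ant["y"]][ant["x"]] // 16 - MOVE_COST # eat brightness, pay to move
        trails.append((ant["x"], ant["y"]))
        if ant["energy"] > BIRTH_ENERGY:                             # well fed: reproduce
            ant["energy"] //= 2
            new_ants.append(dict(ant))
    ants = [a for a in ants + new_ants if a["energy"] > 0]           # the starving die

print(len(trails), "trail points drawn,", len(ants), "ants still alive")
```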

The program uses vector graphics rather than pixel-based approaches. In pixel graphics, an image is divided into a grid of squares and each allocated a colour. That means when you zoom in to an area, you just see larger squares, not more detail. With vector graphics, the exact position of the line followed is recorded. That line is just mapped on to the particular grid of the display when you view it. The more pixels in the display, the more detailed the trail is drawn. That means you can zoom in to the pictures and just see ever more detail of the ant trails that make them up.
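The difference is simply what gets stored. A pixel image stores a grid of coloured squares; a vector trail stores the points the line passed through, which can then be mapped onto a display grid of any size. A tiny illustration (the trail positions are invented):

```python
# Pixel graphics store a grid of colours; vector graphics store the path itself.
# A trail stored as points can be mapped onto any size of display grid, so
# zooming in reveals more detail rather than bigger squares.
trail = [(0.10, 0.20), (0.35, 0.40), (0.80, 0.75)]   # positions as fractions of the canvas

def rasterise(points, grid_size):
    """Map a stored vector trail onto a display grid of the given size."""
    return [(round(x * (grid_size - 1)), round(y * (grid_size - 1))) for x, y in points]

print(rasterise(trail, 10))     # a coarse display: only a few distinct squares
print(rasterise(trail, 1000))   # the same trail, placed far more precisely
```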


Because the virtual ants wander around at random, each time you run the program you will get a different image. However, there are lots of ways to control how ants can move around their world. Exploring the possibilities by hand would only ever uncover a small fraction of them. Photogrowth therefore uses a genetic algorithm. Rather than set all the options of ant behaviour for each image, you help design a fitness function for the algorithm. You do this by adjusting the importance of different aspects, like the thickness of the trail left and the extent to which the ants will try and cover the whole canvas. In effect you become a breeder of a species of ant that produces trails, and so images, that you will find pleasing. Once you’ve chosen the fitness function, the program evolves a colony of ants based on it, and they then paint you a picture with their trails.
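The importance settings you adjust in effect define a weighted fitness function that the genetic algorithm tries to maximise as it evolves the ant settings. A hedged sketch of such a scoring function, where a ‘painting’ is just the list of trail points the ants left, and both the aspects measured and the weights are invented:

```python
# Sketch of a user-weighted fitness function of the kind a 'breeder' adjusts.
# The aspects and the crude ways of measuring them are invented stand-ins.

def canvas_coverage(trail_points, width, height):
    """Fraction of the canvas squares the trails touched."""
    return len(set(trail_points)) / (width * height)

def trail_length(trail_points):
    """How much drawing the ants did in total."""
    return len(trail_points)

def fitness(trail_points, width, height, weights):
    """Combine the measurements using the importance the breeder chose."""
    return (weights["coverage"] * canvas_coverage(trail_points, width, height)
            + weights["length"] * trail_length(trail_points) / 10000)  # crude normalisation

my_weights = {"coverage": 0.8, "length": 0.2}   # mostly care about covering the canvas
print(fitness([(1, 2), (3, 4), (3, 4)], 64, 64, my_weights))
```

The genetic algorithm then keeps the ant behaviour settings whose paintings score highest and breeds new variations from them.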

The result is a painting painted by ants bred purely to create images that please you.

Paul Curzon, Queen Mary University of London (from the archive)



A storm in a bell jar

Ada Lovelace was close friends with John Crosse, and knew his father Andrew: the ‘real Frankenstein’. Andrew Crosse apparently created insect life from electricity, stone and water…

Andrew Crosse was a ‘gentleman scientist’ doing science for his own amusement, including work improving giant versions of the first batteries, called ‘voltaic piles’. He was given the nickname ‘the thunder and lightning man’ because of the way he used the batteries to produce giant discharges of electricity with bangs as loud as cannons.

He hit the headlines when he appeared to create life from electricity, Frankenstein-like. This was an unexpected result of his experiments using electricity to make crystals. He was passing a current through water containing dissolved limestone over a period of weeks. In one experiment, about a month in, a perfect insect appeared, apparently from nowhere, and soon after started to move. More and more insects then appeared over time. He mentioned it to friends, which led to a story in a local paper. It was then picked up nationally. Some of the stories said he had created the insects, and this led to outrage and death threats over his apparent blasphemy of trying to take the position of God.

(Does this start to sound like a modern social networking storm, trolls and all?) In fact he appears to have believed, and others agreed, that the mineral samples he was using must have been contaminated with tiny insect eggs that just naturally hatched. Scientific results are only accepted if they can be replicated. Others, who took care to avoid contamination, couldn’t get the same result. The secret of creating life had not been found.

While Mary Shelley, who wrote Frankenstein, did know Crosse, sadly perhaps for the story’s sake he can’t have been the inspiration for Frankenstein as has been suggested, given she wrote it almost two decades before his famous experiments!

Paul Curzon, Queen Mary University of London (from the archive)



Pass the screwdriver, Igor

Mary Shelley, Frankenstein’s monster and artificial life

Shortly after Ada Lovelace was born, so long before she made predictions about future “creative machines”, Mary Shelley, a friend of her father (Lord Byron), was writing a novel. In her book, Frankenstein, inanimate flesh is brought to life. Perhaps Shelley foresaw what is actually to come, what computer scientists might one day create: artificial life.

Life it may not be, but engineers are now doing pretty well in creating humanoid machines that can do their own thing. Could a machine ever be considered alive? The 21st century is undoubtedly going to be the age of the robot. Maybe it’s time to start thinking about the consequences in case they gain a sense of self.

Frankenstein was obsessed with creating life. In Mary Shelley’s story, he succeeded, though his creation was treated as a “Monster” struggling to cope with the gift of life it was given. Many science fiction books and films have toyed with these themes: the film Blade Runner, for example, explored similar ideas about how intelligent life is created, with androids that believe they are human, and the consequences for the creatures concerned.

Is creating intelligent life fiction? Not totally. Several groups of computer scientists are exploring what it means to create non-biological life, and how it might be done. Some are looking at robot life, working at the level of insect life-forms, for example. Others are looking at creating intelligent life within cyberspace.

For 70 years or more scientists have tried to create artificial intelligences. They have had a great deal of success in specific areas such as computer vision and chess playing programs. They are not really intelligent in the way humans are, though they are edging closer. However none of these programs really cuts it as creating “life”. Life is something more than intelligence.

A small band of computer scientists have been trying a different approach that they believe will ultimately lead to the creation of new life forms: life forms that could one day even claim to be conscious (and who would we be to disagree with them if they think they are?). These scientists believe life can’t be engineered in a piecemeal way, but that the whole being has to be created as a coherent whole. Their approach is to build the basic building blocks and let life emerge from them.

Sodarace creatures racing over a bumpy terrain
A sodarace in action
by CS4FN

The outline of the idea could be seen in the game Sodarace, where you could build your own creatures that move around a virtual world, and even let them evolve. One approach to building creatures, such as a spider, would be to try and work out mathematical equations about how each leg moves and program those equations. The alternative artificial life way as used in Sodarace is to instead program up the laws of physics such as gravity and friction and how masses, springs and muscles behave according to those laws. Then you just put these basic bits together in a way that corresponds to a spider. With this approach you don’t have to work out in advance every eventuality (what if it comes to a wall? Or a cliff? Or bumpy ground?) and write code to deal with it. Instead natural behaviour emerges.
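A hedged sketch of the ‘program the physics, not the behaviour’ idea: simulate point masses joined by springs under gravity, friction and a solid floor, and any creature you assemble from them simply moves however the physics says it moves. The constants, and the tiny two-mass ‘creature’, are invented for illustration:

```python
# Minimal mass-spring physics of the kind behind Sodarace-style creatures: we
# program gravity, friction, springs and the ground, and then any 'creature'
# built out of masses and springs just behaves as the physics dictates.
GRAVITY, FRICTION, STIFFNESS, DT = -9.8, 0.98, 40.0, 0.02

masses = [  # each mass has a position (x, y) and a velocity (vx, vy)
    {"x": 0.0, "y": 1.0, "vx": 0.0, "vy": 0.0},
    {"x": 0.5, "y": 1.5, "vx": 0.0, "vy": 0.0},
]
springs = [(0, 1, 0.7)]   # (mass index, mass index, rest length)

def step():
    for a, b, rest in springs:                     # spring forces (Hooke's law)
        dx = masses[b]["x"] - masses[a]["x"]
        dy = masses[b]["y"] - masses[a]["y"]
        length = (dx * dx + dy * dy) ** 0.5 or 1e-9
        force = STIFFNESS * (length - rest)
        fx, fy = force * dx / length, force * dy / length
        masses[a]["vx"] += fx * DT
        masses[a]["vy"] += fy * DT
        masses[b]["vx"] -= fx * DT
        masses[b]["vy"] -= fy * DT
    for m in masses:                               # gravity, friction, solid ground
        m["vy"] += GRAVITY * DT
        m["vx"] *= FRICTION
        m["vy"] *= FRICTION
        m["x"] += m["vx"] * DT
        m["y"] += m["vy"] * DT
        if m["y"] < 0:
            m["y"], m["vy"] = 0.0, 0.0

for _ in range(100):
    step()
print(masses)   # where the creature ended up after two seconds of simulated physics
```

Nothing in the code says how the creature should move; its behaviour just emerges from the physics and the way the masses and springs are connected.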

The artificial life community believe that not just life-like movement, but life-like intelligence can emerge in a similar way. Rather than programming the behaviour of muscles you program the behaviour of neurones and then build brains out of them. That, it turns out, has been the key to the machine learning programs that are storming the world of Artificial Intelligence, turning it into an everyday tool. However, if aiming for artificial life, you would keep going and combine it with the basic biochemistry of an immune system, do a similar thing with a reproductive system, and so on.

Want to know more? A wonderful early book is Steve Grand’s “Creation”, on how he created what at the time was claimed to be “the nearest thing to artificial life yet”… It started life as the game “Creatures”.

Then have a go at creating artificial life yourself (but be nice to it).

Paul Curzon and Peter W McOwan, Queen Mary University of London


Meet the chatbots

Sitting down and having a nice chat with a computer because it is your friend probably isn’t something you do every day. You may never have done it. We mainly still think of it as being a dream for the future. But there is lots of work being done to make it happen in the present, and you may already be asking chatbots for help regularly. The whole idea has roots that stretch far back into the past. It’s a dream that goes back to Alan Turing, and then even a little further.

The imitation game

Back around 1950, Turing was thinking about whether computers could be intelligent. He had a problem though. Once you begin thinking about intelligence, you find it is a tricky thing to pin down. Intelligence is hard to define even in humans, never mind animals or computers. Turing started to wonder if he could ask his question about machine intelligence in a different way. He turned to a Victorian parlour game called the imitation game for inspiration.

The imitation game was played with large groups at parties, but focused on two people, a man and a woman. They would go into a different room to be asked questions by a referee. The woman had to answer truthfully. The man answered in any way he believed would convince everyone else he was really the woman. Their answers were then read out to the rest of the guests. The man won the game if he could convince everyone back in the party that he was really the woman.

Pretending to be human

Turing reckoned that he could use a similar test for intelligence in a machine. In Turing’s version of the imitation game, instead of a man trying to convince everyone he’s really a woman, a computer pretends to be a human. Everyone accepts the idea that it takes a certain basic intelligence to carry on a conversation. If a computer could carry on a conversation so well that talking to it was just like talking to a human, the computer must be intelligent.

When Turing published his imitation game idea, it helped launch the field of artificial intelligence (AI). Today, the field pulls together biologists, computer scientists and psychologists in a quest to understand and replicate intelligence. AI techniques have delivered some stunning results. People have designed computers that can beat the best human at chess, diagnose diseases, and invest in stocks more successfully than humans.

A chat with a chatterbot

But what about the dream of having a chat with a computer? That’s still alive. Turing’s idea, demonstrating computer intelligence by successfully faking human conversation, became known as the Turing test. Turing thought machines would pass his test before the 20th century was over, but the goal has proved more elusive than that. People have been making better conversational chat programs, called chatbots, since the 1960s, but no one has yet made a program that can fool everyone into thinking it’s a real human.

What’s up, Doc

On the other hand, some chatbots have done pretty well. One of the first, and still one of the most famous, was created in 1966. It was called ELIZA. Its trick was imitating the sort of conversation you might have with a therapist. ELIZA didn’t volunteer much knowledge itself, but tried to get the user to open up about what they were thinking. So the person might type “I don’t feel well”, and ELIZA would respond with “you say you don’t feel well?” In a normal social situation, that would be a frustrating response. But it’s a therapist’s job to get a patient to talk about themselves, so ELIZA could get away with it. For an early example of a chatbot, ELIZA did pretty well, but after a few minutes of chatting users realised that ELIZA didn’t really understand what they were saying.
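ELIZA’s trick can be captured in a few lines of code: spot a keyword pattern, swap the pronouns round, and hand the user’s own words back as a question. This is a toy sketch with a handful of invented rules, not Weizenbaum’s original script:

```python
# A toy ELIZA-style therapist: match a simple pattern, flip the pronouns, and
# reflect the user's words back as a question. These few rules are an invented
# miniature of the idea, not the original program.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

RULES = [
    (r"i don'?t (.*)", "Why don't you {0}?"),
    (r"i feel (.*)", "Do you often feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
]

def eliza(sentence):
    for pattern, response in RULES:
        match = re.match(pattern, sentence.lower().strip(" .!?"))
        if match:
            return response.format(reflect(match.group(1)))
    return "Tell me more."

print(eliza("I don't feel well"))   # -> Why don't you feel well?
```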

Where have I heard this before?

One of the big problems in making a good chatterbot is coming up with sentences that sound realistic. That’s why ELIZA tried to keep its sentences simple and non-committal. A much more recent chatterbot called Cleverbot uses another brilliantly simple solution: it doesn’t try to make up sentences at all. It just stores all the phrases that it’s ever heard, and chooses from them when it needs to say something. When a human types a phrase to say to Cleverbot, its program looks for a time in the past when it said something similar, then reuses whatever response the human gave at the time. Given that Cleverbot has had 65 million chats on the Internet since 1997, it’s got a lot to choose from. And because its sentences were all originally entered by humans, Cleverbot can speak in slang or text speak. That can lead to strange conversations, though. A member of our team at cs4fn had an online chat with Cleverbot, and found it pretty weird to have a computer tell him “I want 2 b called Silly Sally”.
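The retrieval trick is easy to sketch too: remember past exchanges and, when a new message arrives, reuse the reply that followed the most similar remembered message. A toy version, using simple word overlap as the measure of ‘similar’ and a few invented remembered chats:

```python
# Toy retrieval chatbot in the Cleverbot spirit: for a new message, reuse the
# human reply that followed the most similar remembered message. The stored
# conversations here are invented examples.
memory = [
    ("hello", "hi there, how are you?"),
    ("what is your name", "i want 2 b called silly sally"),
    ("do you like music", "yes, mostly pop"),
]

def similarity(a, b):
    """Crude similarity: how many words the two messages share."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def reply(message):
    best_prompt, best_reply = max(memory, key=lambda pair: similarity(message, pair[0]))
    return best_reply

print(reply("hello there"))       # -> hi there, how are you?
print(reply("what's your name"))  # word overlap picks the closest stored prompt
```

A real system remembers every new exchange as well, which is how millions of chats become millions of possible things to say.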

Computerised con artists

Most early chatbots were designed just for fun; now they are designed to be useful helpers or even to replace human jobs. But some chatbots are made with more sinister intent. Some years ago, a program called CyberLover was stalking dating chat forums on the Internet. It would strike up flirty conversations with people, then try and get them to reveal personal details, which could then be used to steal people’s identities or credit card accounts. CyberLover even had different programmed personalities, from a more romantic flirter to a more aggressive one. Most people probably wouldn’t be fooled by a robot come-on, but that’s OK. CyberLover didn’t mind rejection: it could start up ten relationships every half an hour.

Chatbots went on to hit the big time in the form of virtual assistants. Apple’s iPhone 4S in 2011 included Siri, a computerised assistant that can find answers to human questions – sometimes with a bit of attitude. Most of Siri’s humorous answers appear to be pre-programmed, but some of them come from Siri’s access to powerful search engines. Then came Alexa in 2014, and suddenly the world was full of chatbots, including ones taking over social media, often for nefarious purposes. Now, with the advent of ChatGPT, they are finally at a stage where people will happily chat to them like a human (though sometimes it is still frustrating), and they are definitely at the stage of replacing humans in jobs that involve, for example, giving help and advice. They haven’t yet (at the time of writing) got to the stage of passing a Turing Test. But if computerised conversation continues advancing, we are not too far off from a computer that can pass the Turing test. And while we’re waiting at least we’ve got better games to play than the Victorians had.

by the CS4FN team, updated from the original.
