The First Law of Humans

Preventing robots from hurting us

by Paul Curzon, Queen Mary University of London

A chess board with pieces lined up
Image by Pexels from Pixabay 

The first rule of humans when around robots is apparently that they should not do anything too unexpected…

A 7-year-old child playing chess in a chess tournament has had his finger broken by a chess-playing robot that grabbed it as the boy made his move. The child was blamed by one of the organisers, who claimed that it happened because the boy “broke the safety rules”! The organisers also apparently claimed that the robot itself was perfectly safe!

What seems to have happened is that, after the robot played its move, the boy played his own move very quickly, before the robot had finished. Somehow this triggered the wrong lines of code in the robot’s program: instructions intended for some other situation. With the boy’s hand over the board at the wrong moment, those instructions led the robot to grab his finger and not let go.

Spoiler Alert

The situation immediately brings to mind the classic science fiction story “Moxon’s Master” by Ambrose Bierce, published way back in 1899. It is the story of a chess-playing automaton (i.e. robot) and what happens when it is checkmated in a game against its creator, Moxon: it flies into a dangerous rage. There, though, the problems apparently arise because the robot has developed emotions, and so emotional reactions. In both stories, however, a robot intended simply to play chess proves capable of harming a human.

The three laws of robotics

Isaac Asimov is famous for his laws of robotics: fundamental, unbreakable rules built into the ‘brain’ of all robots in his fictional world, precisely to stop this sort of situation. The rules he formulated were:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
A robot carefully touching fingers with a human
Image by Pete Linforth from Pixabay 

Clearly, had they been in place, the chess robot would not have harmed the boy, and Moxon’s automaton would not have been able to do anything too bad as a result of its temper either.

Asimov devised his rules as a result of a discussion with the science fiction magazine editor John W. Campbell, Jr. He then spent much of his science fiction career writing robot stories about how humans could still end up being hurt despite the apparently clear rules. That aside, their key idea was that, to ensure robots were safe around people, robots would need built-in logic that could not be circumvented, there to stop them hurting us. They needed a fail-safe system monitoring their actions, ready to take over whenever a breach was possible. Clearly this chess-playing robot was not “perfectly safe”, and not even fail-safe: had it been, the boy would not have been harmed whatever he did. The robot had nothing at all akin to a working, unbreakable First Law programmed into it.

Dangerous Machines

Asimov’s robots were intelligent and able to think for themselves in a more general sense than any robot that currently exists. The First Law essentially prevented them from deciding to harm a human, not just from doing so by accident. Perhaps the day will soon come when real robots can start to think for themselves too, and then a First Law will matter all the more. In any case, machines can harm humans without being able to think at all. That humans need to be wary around robots is obvious from the numerous injuries, and even fatalities, in factories using industrial robots in the decades since they were introduced. They are dangerous machines. Fortunately the harm done to children is at least nothing like the carnage of the industrial accidents of the Industrial Revolution, but it is still a problem. People do have to take care and follow safety rules around them!

Rather than humans having to obey safety laws, perhaps we ought therefore to be taking Asimov’s Laws more seriously for all robots. Why can’t those laws just be built in? It is certainly an interesting research problem to think about. The idea of a fail-safe is standard in engineering, so it’s not that general idea that is the problem. The problem is that, rather than intelligence being needed for robots to harm us, intelligence is needed to avoid them doing so.

Implementing the First Law

Let’s imagine building the First Law into chess-playing robots, and in particular the one that hurt the boy. For starters, the robot would have needed to recognise that the boy WAS a human, so should not be harmed. It would also need to recognise that his finger was a part of him and that gripping it would harm him, and to know that it was gripping a finger (not a piece) at the time. It would then need a way to stop before it was too late, and to do no harm in the stopping. It clearly needs to understand a lot about the world to be able to avoid hurting people in general.

Some of this is almost within our grasp. Computers can certainly do a fairly good job of recognising humans now through image recognition code. They can even recognise individuals, so that first fundamental part, knowing what is and isn’t a human, is more or less possible now, just not yet perfect. Recognising objects in general is harder. The chess robot presumably has code for recognising pieces already, though a finger perhaps even looks like a piece, at least to a robot. To avoid causing harm in any situation, it needs to be able to recognise lots of kinds of objects, not just chess pieces, and to differentiate those objects from what is part of a human, not just what is a human. Object recognition like this is possible, at least in well-defined situations. It is much harder in general, even when the precise objects have never been encountered before. Harder still, probably, is recognising all the ways that would constitute doing harm to the human identified in front of it, including with any of those objects that are around.
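
To make that first human-spotting step concrete, here is a minimal sketch using OpenCV’s bundled face detector. It only spots faces, not hands or fingers, which is exactly why this step is “more or less possible” rather than solved; the camera setup is an assumption.

```python
# A minimal sketch of the "is there a human here?" step, using OpenCV's
# built-in Haar cascade face detector. A real safety system would need far
# more: hands, arms and bodies from any angle, not just faces.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def human_in_view(frame):
    """Return True if at least one face is visible in the camera frame."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

camera = cv2.VideoCapture(0)   # the robot's camera
ok, frame = camera.read()
if ok and human_in_view(frame):
    print("Human detected: proceed with extra caution")
```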

Staying in control

The code to do all this would also have to sit, in some sense, at a higher level of control than the code making the robot take actions, as it has to be able to overrule them ALWAYS. For the chess robot, there was presumably a bug that allowed it to grip a human’s finger, as no programmer will have intended that, so it isn’t about monitoring the code itself. The fail-safe code has to monitor what is actually happening in the world and be in a position to take over. It also can’t just make the robot freeze, as that may be enough to do the damage of a broken finger if one is already in the robot’s grip (and that may have been part of the problem for the boy). Nor can it just yank the arm back suddenly: what if another child (a toddler perhaps) has just crawled up behind it? It has to monitor the effects of its own commands too! A simple version of such a monitor is probably straightforward, though: the robot’s computer architecture just needs to be designed accordingly. One way robots are designed is for new modules to be built on top of existing ones, giving new, more complex behaviour as a result, which possibly fits what is needed here. Having additional computers acting as monitors, ready to take over when others go wrong, is also not really that difficult (bugs in their own code aside) and is a standard idea in mission-critical systems.
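
As a sketch of that layered architecture, the monitor below sits between the chess-playing code and the arm, so every command can be vetoed or softened based on what the sensors actually report. The arm and sensor interfaces are hypothetical placeholders, not any real robot’s API.

```python
# Sketch of a fail-safe monitor layer: every command passes through it,
# and it watches the world (via sensors), not the chess code, for danger.
# The arm and sensor objects are made-up stand-ins for real hardware.

SAFE_SPEED = 0.05  # metres per second: slow enough to always stop gently

class SafetyMonitor:
    """Sits between the chess-playing program and the arm hardware."""

    def __init__(self, arm, sensors):
        self.arm = arm
        self.sensors = sensors

    def move_to(self, position):
        # Veto: don't start moving while anything unexpected is happening.
        if self.sensors.unexpected_movement():
            self.arm.slow_stop()   # not a freeze, not a sudden jerk back
            return False
        # Override: carry out the command, but only at a speed the monitor
        # can always safely interrupt, whatever the chess code asked for.
        self.arm.move_to(position, speed=SAFE_SPEED)
        return True
```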

So it is probably all the complexity of the world, with unexpected things happening in it, that makes a general version of the First Law hard at the moment… If Asimov’s laws in all their generality are currently a little beyond us, perhaps we should just think about the problem in another, more limited way (at least for now)…

Can a chess playing robot be safe?

In the chess game situation, if anything is moving in front of the robot then it should perhaps just keep well out of the way. That needs only monitoring code that can detect movement in a small fixed area. It doesn’t need to understand anything about the world apart from movement versus non-movement. That is easily within the realms of what computers can do – even some simple toy robots can detect movement. The monitoring code would still need to be able to override the rest of the code, of course, bugs included.
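
Movement detection really is simple. This sketch, assuming an OpenCV-readable camera, just compares each frame with the previous one inside a fixed region over the board and counts how many pixels changed; the region coordinates and thresholds are made-up examples.

```python
# A minimal motion detector of the kind described above: difference each
# camera frame against the last one within a fixed region over the board.
import cv2

camera = cv2.VideoCapture(0)
ok, frame = camera.read()
previous = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

BOARD = (slice(100, 400), slice(200, 500))   # rows, columns over the board
MOVEMENT = 500                               # changed pixels that count as movement

while True:
    ok, frame = camera.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(grey[BOARD], previous[BOARD])
    changed = cv2.countNonZero(cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1])
    if changed > MOVEMENT:
        print("Movement over the board: keep the arm well away")
    previous = grey
```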

Why, also, could the robot grip a finger with enough pressure to break it, anyway? Perhaps it just needed more accurate sensors in its fingers to avoid doing harm, together with a rule that made it let go the moment it felt too much resistance back. After all, chess pieces don’t resist much!
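
That resistance rule might look something like the following sketch. The gripper object, its force sensor and the threshold numbers are all made-up assumptions for illustration.

```python
# A sketch of the "let go if it resists" idea. Chess pieces hardly push
# back, so significant resistance means this is not a chess piece.
# The gripper interface and all the numbers are hypothetical.

MAX_PIECE_RESISTANCE = 0.5   # newtons: a chess piece barely resists
MAX_GRIP_FORCE = 5.0         # never squeeze harder than this, full stop

def careful_grip(gripper):
    """Close the gripper gradually, releasing at the first resistance."""
    force = 0.0
    while not gripper.has_piece():
        force += 0.1   # tighten a tiny step at a time
        if force > MAX_GRIP_FORCE or gripper.resistance() > MAX_PIECE_RESISTANCE:
            gripper.release()   # that's a finger, not a pawn!
            return False
        gripper.apply_force(force)
    return True
```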

And one last idea, if a little more futuristic. A big research area at the moment is soft robotics: robots that are soft and squidgy, not hard and solid, precisely so they can do less harm. Perhaps if the chess robot’s claw-like fingers had instead been totally soft and squishy, it would not have harmed the boy even if it did grab his finger.

Had the robot’s designers tried hard enough, they surely could have come up with solutions to make it safer, even if their debugging skills weren’t good enough to prevent the actual bug that caused the problem. That requires safety to be a very high priority from the outset, though: and certainly not safety that is just pushed onto humans to be responsible for, as the organisers did.

We shouldn’t be blaming children for not obeying safety rules when they are given what is essentially a hard, industrial robot to play with. Doing so just lets the robot makers off the hook from even trying to make their robots safer, when they clearly could do more. When disasters happen, don’t blame the people: improve the system. Beyond that, perhaps we should be thinking far more about doing the research that will one day allow us to actually implement Asimov’s Laws in all robots, building a safety culture into robotics itself. Then people would not have to be quite so wary around robots, and certainly not have to follow safety rules themselves. That surely is the robot’s job.




This blog is funded through EPSRC grant EP/W033615/1.

Dressing it up

Why it might be good for robots to wear clothes

by Peter W McOwan and the CS4FN team, Queen Mary University of London

Updated from the archive

(Robot) dummies in different clothes standing in a line up a slope
Image by Peter Toporowski from Pixabay 

Even though most robots still walk around naked, the Swedish Institute of Computer Science (SICS) in Stockholm explored how to produce fashion-conscious robots.

The applied computer scientists there were looking for ways to make the robots of today easier for us to get along with. As part of the LIREC project to build the first robot friends for humans they examined how our views of simple robots change when we can clothe and customise them. Does this make the robots more believable? Do people want to interact more with a fashionable robot?

How do you want it?

These days most electronic gadgets allow the human user to customise them. For example, on a phone you can change the background wallpaper or colour scheme, the ringtone or how the menus work. The ability of the owner to change the so-called ‘look and feel’ of software is called end-user programming. It’s essentially up to you how your phone looks and what it does.

Dinosaurs waking and sleeping

The Swedish team began by taking current off-the-shelf robots and adding dress-up elements to them. Enter Pleo, a toy dinosaur ‘pet’ able to learn as you play with it. Now add in that fashion twist. What happens when you can play dress up with the dinosaur? Pleo’s costumes change its behaviour, kind of like what happens when you customise your phone. For example, if you give Pleo a special watchdog necklace the robot remains active and ‘on guard’. Change the costume from necklace to pyjamas, and the robot slowly switches into ‘sleep’ mode. The costumes or accessories you choose communicate electronically with the robot’s program, and its behaviour follows suit in a way you can decide. The team explored whether this changed the way people played with them.
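
One simple way such costumes could work (the details here are an illustrative guess, not SICS’s actual design) is for each accessory to carry a tag the robot can read, with the robot mapping tag IDs to behaviour modes.

```python
# A guess at how costume accessories might switch a robot's behaviour:
# each accessory carries a readable tag (an RFID chip, say) and the robot
# maps tag IDs to modes. Tag names and modes are made up for illustration.

BEHAVIOUR_FOR_ACCESSORY = {
    "watchdog_necklace": "on_guard",   # stay active and alert
    "pyjamas": "sleep",                # wind down slowly
}

def on_accessory_detected(tag_id, robot):
    mode = BEHAVIOUR_FOR_ACCESSORY.get(tag_id)
    if mode is not None:
        robot.set_mode(mode)           # the behaviour follows the costume
```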

Clean sweeps

In another experiment the researchers played dress up with a robot vacuum cleaner. The cleaner rolls around the house sweeping the floor, and has already proven a hit with many consumers. It bleeps happily as its on-board computer works out the best path to bust your carpet dust. The SICS team gave the vacuum a special series of stick-on patches, which could add to its basic programming. They found that choosing the right patch could change the way humans perceive the robot’s actions. Different patches can make humans think the robot is curious, aggressive or nervous. There’s even a shyness patch that makes the robot hide under the sofa.

What’s real?

If humans are to live in a world populated by robots there to help them, the robots need to be able to play by our rules. Humans have whole parts of their brains given over to predicting how other humans will react. For example, we can empathise with others because we know that other beings have thoughts like ours, and we can imagine what they think. This often spills over into anthropomorphism, where we give human characteristics to non-human animals or non-living things. Classic examples are where people believe their car has a particular personality, or think their computer is being deliberately annoying – they are just machines, but our brains tend to attach motives to the behaviours we see.

Real-er robots?

Robots can produce very complex behaviours depending on the situations they are in and the ways we have interacted with them, which creates the illusion that they have some sort of ‘personality’ or motives in the way they are acting. This can help robots seem more natural and able to fit in with the social world around us. It can also improve the ways they provide us with assistance because they seem that bit more believable. Projects like the SICS’s ‘actDresses’ one help us by providing new ways that human users can customise the actions of their robots in a very natural way, in their case by getting the robots to dress for the part.




This blog is funded through EPSRC grant EP/W033615/1.

The Mummy in an AI world: Jane Webb’s future

by Paul Curzon, Queen Mary University of London

The sarcophagus of a mummy
Image by albertr from Pixabay

Inspired by Mary Shelley’s Frankenstein, the 17-year-old orphan Jane Webb secured her future by writing the first ever mummy story. The most amazing thing about the three-volume book, though, is perhaps the 22nd-century world in which her novel was set.

On the death of her father, Jane realised she needed to find a way to support herself, and did so by publishing her novel “The Mummy!” in 1827. In contrast to their modern role as stars of horror films, Webb’s mummy, a reanimated Cheops, was actually there to help those doing good and punish those who were evil. Napoleon had invaded Egypt at the turn of the century, taking with him scholars intent on understanding Ancient Egyptian society. Europe was fascinated with Ancient Egypt and awash with Egyptian artefacts and stories around them. In London, the Egyptian Hall had been built in Piccadilly in 1812 to display Egyptian artefacts, and in 1821 it displayed a replica of the tomb of Seti I. The Rosetta Stone inscription that led to the decipherment of hieroglyphics was cracked in 1822. The time was therefore ripe for someone to come up with the idea of a mummy story.

The novel was not, however, set in her own time but in a 22nd-century future that she imagined, and that future was perhaps more amazing than the idea of a mummy coming to life. Her version of the future was full of technological inventions supporting humanity, as well as social predictions, many of which have come to fruition, such as space travel and the idea that women might wear trousers as the height of fashion (making her a feminist hero). The machines she described in the book led to her meeting her future husband, John Loudon. A writer about farming and gardening, he was so impressed by the idea of a mechanical milking machine included in the book that he asked to meet her. They married soon after (and she became Jane Loudon).

The skilled artificial intelligences she wrote into her future society are perhaps the most amazing of her ideas in that she was the first person to really envision in fiction a world where AIs and robots were embedded in society just doing good as standard. To put this into context of other predictions, Ada Lovelace wrote her notes suggesting machines of the future would be able to compose music 20 years later.

Jane Webb’s future was also full of cunning computational contraptions. There were steam-powered robot surgeons, foreseeing the modern robots that are able to do operations (and that, with their steady hands, are better than a human at, for example, eye surgery). She also described Artificial Intelligences replacing lawyers, her machines being fed their legal briefs, the instructions about the case, through tubes. Whilst robots may not yet have fully replaced barristers and judges, artificial intelligence programs are already used in some places, for example, to decide the length of sentences of those convicted, and many now see it as only a matter of time before lawyers are working with Artificial Intelligence programs as standard. Jane’s world also includes a version of the Internet, at a time before the electric telegraph existed, when telegraph messages were sent by semaphore between networks of towers.

The book ultimately secured her future as required, and whilst we do not yet have any real reanimated mummies wandering around doing good deeds, Jane Webb did envision lots of useful inventions, many now a reality, and certainly had pretty good ideas about how future computer technology would pan out in society… despite computers, never mind artificial intelligences, still being well over a century away.




EPSRC supported this article through research grants EP/K040251/2, held by Professor Ursula Martin, and EP/W033615/1.

The naked robot

by Paul Curzon, Queen Mary University of London

From the archive

A naked robot holding a flower
Image by bamenny from Pixabay 

Why are so many film robots naked? We take it for granted that robots don’t wear clothes, and why should they?

They are machines, not humans, after all. On the other hand, the quest to create artificial intelligence involves trying to create machines that share the special ingredients of humanity. One of the things that is certainly special about humans in comparison to other animals is the way we like to clothe and decorate our bodies. Perhaps we should think some more about why we do it but the robots don’t!

Shame or showoff?

The creation story in the Christian Bible suggests humans were thrown out of the Garden of Eden when Adam and Eve felt the need to cover up – when they developed shame. Humans usually wear more than just the bare minimum though, so wearing clothing can’t be all about shame. Nor is it just about practicalities like keeping warm. Turn up at an interview covering your body with the wrong sort of clothes and you won’t get the job. Go to a fancy dress party in the clothes that got you the job and you will probably feel really uncomfortable the moment you see that everyone else is wearing costumes. Clothes are about decorating our bodies as much as covering them.

Our urge to decorate our bodies certainly seems to be a deeply rooted part of what makes us human. After all, anthropologists take finds like ancient beads to be among the earliest indications of humanity evolving from apehood: evidence that there really was someone ‘in there’ back then. Body painting is used as another sign of our emerging humanity. We still paint our bodies millennia later too. Don’t think we’re only talking about children getting their faces painted – grownups do it too, as the vast make-up industry and the popularity of tattoos show. We put shiny metal and stones around our necks and on our hands too.

The fashion urge

Whatever is going on in our heads, clearly the robots are missing something. Even in the movies the intelligent ones rarely feel the need to decorate their bodies. R2D2? C3PO? Wall-E? The exceptions are the ones created specifically to pass themselves off as human, as in Blade Runner.

You can of course easily program a robot to ‘want’ to decorate itself, or to refuse to leave its bedroom unless it has managed to drape some cloth over its body and shiny wire round its neck, but if it was just following a programmed rule would that be the same as when a human wears clothes? Would it be evidence of ‘someone in there’? Presumably not!

We do it because of an inner need to conform more than an inner need to wear a particular thing. That is what fashion is really all about. Perhaps programming an urge to copy others would be a start. In Wall-E, the robot shows early signs of this as he tries to copy what he sees the humans doing in the old films he watches. At one point he even uses a hubcap as a prop hat for a dance. Human decoration may have started as a part of rituals too.

Where to now?

Is this need to decorate our bodies something special, something linked to what makes us human? Should we be working on what might lead to robots doing something similar of their own accord? When archaeologists are hunting through the rubble in thousands of years’ time, will there be something other than beads that would confirm their robot equivalent to self-awareness? If robots do start to decorate and cover up their bodies because they want to rather than because it was what some God-like programmer coded them to do, surely something special will have happened. Perhaps that will be the point when the machines have to leave their Garden of Eden too.




This blog is funded through EPSRC grant EP/W033615/1.

Hoverflies: comin’ to get ya

by Peter W McOwan and Paul Curzon, Queen Mary University of London

(from the archive)

A hoverfly on a blade of grass

By understanding the way hoverflies mate, computer scientists found a way to sneak up on humans – and with it a way to make games harder.

When hoverflies get the hots for each other they make some interesting moves. Biologists had noticed that as one hoverfly moves towards a second to try and mate, the approaching fly doesn’t go in a straight line: it makes a strange curved flight. Peter and his student Andrew Anderson thought this was an interesting observation and started to look at why it might be. They came up with a cunning idea: the hoverfly was trying to sneak up on its prospective mate unseen.

The route the approaching fly takes matches the movements of the prospective mate in such a way that, to the mate, the fly in the distance looks like it’s far away and ‘probably’ stationary.

Tracking the motion of a hoverfly and its sightlines

How does it do this? Imagine you are walking across a field with a single tree in it, and a friend is trying to sneak up on you. Your friend starts at the tree and moves in such a way that they are always in direct line of sight between your current position and the tree. As they move towards you they are always silhouetted against the tree. Their motion towards you is mimicking the stationary tree’s apparent motion as you walk past it… and that’s just what the hoverfly does when approaching a mate. It’s a stealth technique called ‘active motion camouflage’.
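
The geometry is simple enough to sketch in a few lines of Python: at each step the shadower places itself on the straight line from the fixed point (the ‘tree’) to the target’s current position, creeping further along the line each time. The positions and step sizes are made up for illustration.

```python
# Active motion camouflage in two dimensions: stay on the line from a
# fixed reference point to the target, while edging closer to the target.
# To the target, the shadower stays silhouetted against the "tree".

def camouflage_position(tree, target, fraction):
    """A point `fraction` of the way along the line from tree to target."""
    return (tree[0] + fraction * (target[0] - tree[0]),
            tree[1] + fraction * (target[1] - tree[1]))

tree = (0.0, 0.0)
target_path = [(10.0, 2.0 * t) for t in range(6)]   # the target walking past

fraction = 0.1
for target in target_path:
    shadower = camouflage_position(tree, target, fraction)
    print(f"target {target} -> shadower {shadower}")
    fraction += 0.15   # creep further along the sightline each step
```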

By building a computer model of the mating flies, the team were able to show that this complex behaviour can actually be done with only a small amount of ‘brain power’. They went on to show that humans are also fooled by active motion camouflage. They did this by creating a computer game where you had to dodge missiles. Some of those missiles used active motion camouflage. The missiles using the fly trick were the most difficult to spot.

It just goes to show: there is such a thing as a useful computer bug.



EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Ant Art

by Paul Curzon, Queen Mary University of London

(from the archive)

The close up head of an ant staring at you
Image by Virvoreanu Laurentiu from Pixabay 

There are many ways Artificial Intelligences might create art. Breeding a colony of virtual ants is one of the most creative.

Photogrowth from the University of Coimbra does exactly that. The basic idea is to take an image and paint an abstract version of it. Normally you would paint with brush strokes. In ant paintings you paint with the trails of hundreds of ants as they crawl over the picture, depositing ink rather than the normal chemical trails ants use to guide other ants to food. The colours in the original image act as food for the ants, which absorb energy from its bright parts. They then use up energy as they move around. They die if they don’t find enough food, but reproduce if they have lots. The results are highly novel swirl-filled pictures.
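
A minimal sketch of that energy model, far simpler than the real Photogrowth and with made-up numbers, might look like this:

```python
# Virtual ants paint with their trails: each ant gains energy from bright
# pixels ("food"), pays energy to move, dies when starved and reproduces
# when well fed. The image is a greyscale 2D list with values 0 to 1.
import random

def step(ants, image, trails):
    for ant in list(ants):
        x, y = ant["x"], ant["y"]
        trails.append((x, y))                    # deposit ink here
        ant["energy"] += image[y][x] - 0.1       # eat, then pay to move
        image[y][x] *= 0.5                       # the food gets used up
        ant["x"] = (x + random.choice([-1, 0, 1])) % len(image[0])
        ant["y"] = (y + random.choice([-1, 0, 1])) % len(image)
        if ant["energy"] <= 0:
            ants.remove(ant)                     # starved
        elif ant["energy"] > 2.0 and len(ants) < 200:
            ant["energy"] /= 2
            ants.append(dict(ant))               # reproduce: a twin ant

image = [[random.random() for _ in range(100)] for _ in range(100)]
ants = [{"x": 50, "y": 50, "energy": 1.0} for _ in range(10)]
trails = []
for _ in range(500):
    step(ants, image, trails)
print(len(ants), "ants survive;", len(trails), "ink dots deposited")
```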

The program uses vector graphics rather than a pixel-based approach. In pixel graphics, an image is divided into a grid of squares, each allocated a colour. That means when you zoom in to an area, you just see larger squares, not more detail. With vector graphics, the exact position of the line followed is recorded, and that line is just mapped onto the particular grid of the display when you view it. The more pixels in the display, the more detailed the trail drawn. That means you can zoom in to the pictures and just see ever more detail of the ant trails that make them up.
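
A tiny example of the difference: the trail below is stored as exact coordinates, and only mapped onto a grid of pixels when displayed, so a finer grid simply reveals more detail.

```python
# A vector trail is exact coordinates (here in the range 0 to 1); it is
# only converted to pixels at display time, at whatever resolution.

trail = [(0.12, 0.30), (0.125, 0.31), (0.13, 0.325)]  # exact ant positions

def to_pixels(trail, width, height):
    return [(round(x * width), round(y * height)) for x, y in trail]

print(to_pixels(trail, 100, 100))     # small display: the points merge
print(to_pixels(trail, 4000, 4000))   # zoomed right in: detail reappears
```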

You become a breeder of a species of ant that produce trails, and so images, you will find pleasing

Because the virtual ants wander around at random, each time you run the program you get a different image. However, there are lots of ways to control how the ants move around their world, and exploring the possibilities by hand would only ever uncover a small fraction of them. Photogrowth therefore uses a genetic algorithm. Rather than setting all the options of ant behaviour for each image, you help design a fitness function for the algorithm, adjusting the importance of different aspects like the thickness of the trail left and the extent to which the ants try to cover the whole canvas. In effect you become a breeder of a species of ant that produce trails, and so images, you will find pleasing. Once you’ve chosen the fitness function, the program evolves a colony of ants based on it, and they then paint you a picture with their trails.
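
Here is a minimal sketch of that evolutionary loop, assuming just two behaviour settings and the importance weights you chose. The real program evolves far richer behaviours.

```python
# A toy genetic algorithm: each "ant species" is a set of behaviour
# settings, scored by a fitness function built from user-chosen weights.
import random

WEIGHTS = {"trail_thickness": 0.7, "coverage": 0.3}   # your preferences

def random_species():
    return {name: random.random() for name in WEIGHTS}

def fitness(species):
    # Score each setting by how much you said you care about it.
    return sum(WEIGHTS[name] * value for name, value in species.items())

def breed(a, b):
    child = {name: random.choice([a[name], b[name]]) for name in WEIGHTS}
    tweak = random.choice(list(WEIGHTS))              # small random mutation
    child[tweak] = min(1.0, max(0.0, child[tweak] + random.uniform(-0.1, 0.1)))
    return child

colony = [random_species() for _ in range(20)]
for generation in range(50):
    colony.sort(key=fitness, reverse=True)
    best = colony[:10]                                # survival of the fittest
    colony = best + [breed(random.choice(best), random.choice(best))
                     for _ in range(10)]
print("Evolved ant behaviour:", colony[0])
```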

The result is a painting painted by ants bred purely to create images that please you.



EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

A storm in a bell jar

by Paul Curzon, Queen Mary University of London

(from the archive)

lightning
Image by FelixMittermeier from Pixabay 

Ada Lovelace was close friends with John Crosse, and knew his father Andrew: the ‘real Frankenstein’. Andrew Crosse apparently created insect life from electricity, stone and water…

Andrew Crosse was a ‘gentleman scientist’ doing science for his own amusement, including work improving giant versions of the first batteries, called ‘voltaic piles’. He was given the nickname ‘the thunder and lightning man’ because of the way he used the batteries to produce giant discharges of electricity with bangs as loud as cannons.

He hit the headlines when he appeared to create life from electricity, Frankenstein-like. This was an unexpected result of his experiments using electricity to make crystals, in which he passed a current through water containing dissolved limestone over a period of weeks. In one experiment, about a month in, a perfect insect appeared, apparently from nowhere, and soon after started to move. More and more insects then appeared over time. He mentioned it to friends, which led to a story in a local paper, which was then picked up nationally. Some of the stories said he had created the insects, and this led to outrage and death threats over his apparent blasphemy in trying to take the position of God.

(Does this start to sound like a modern social networking storm, trolls and all?) In fact, he appears to have believed, and others agreed, that the mineral samples he was using must have been contaminated with tiny insect eggs that simply hatched naturally. Scientific results are only accepted if they can be replicated, and others, who took care to avoid contamination, couldn’t get the same result. The secret of creating life had not been found.

While Mary Shelley, who wrote Frankenstein, did know Crosse, he can’t, sadly perhaps for the story’s sake, have been the inspiration for Frankenstein as has been suggested, given she wrote it decades earlier!




EPSRC supported this article through research grants EP/K040251/2, held by Professor Ursula Martin, and EP/W033615/1.

Pass the screwdriver, Igor

Mary Shelley, Frankenstein’s monster and artificial life

by Paul Curzon and Peter W McOwan, Queen Mary University of London

(Updated from the archive)

Frankenstein's Monster
Image by sethJreid from Pixabay

Shortly after Ada Lovelace was born, and so long before she made her predictions about future “creative machines”, Mary Shelley, a friend of Ada’s father (Lord Byron), was writing a novel. In her book, Frankenstein, inanimate flesh is brought to life. Perhaps Shelley foresaw what is actually to come, what computer scientists might one day create: artificial life.

Life it may not be, but engineers are now doing pretty well at creating humanoid machines that can do their own thing. Could a machine ever be considered alive? The 21st century is undoubtedly going to be the age of the robot, so maybe it’s time to start thinking about the consequences, in case robots gain a sense of self.

Frankenstein was obsessed with creating life. In Mary Shelley’s story, he succeeded, though his creation was treated as a “Monster” struggling to cope with the gift of life it was given. Many science fiction books and films have toyed with these themes: the film Blade Runner, for example, explored similar ideas about how intelligent life is created; androids that believe they are human, and the consequences for the creatures concerned.

Is creating intelligent life fiction? Not totally. Several groups of computer scientists are exploring what it means to create non-biological life, and how it might be done. Some are looking at robot life, working at the level of insect life-forms, for example. Others are looking at creating intelligent life within cyberspace.

For 70 years or more, scientists have tried to create artificial intelligences, with a great deal of success in specific areas such as computer vision and chess-playing programs. These programs are not really intelligent in the way humans are, though they are edging closer. None of them, however, really cuts it as creating “life”. Life is something more than intelligence.

A small band of computer scientists have been trying a different approach that they believe will ultimately lead to the creation of new life forms: life forms that could one day even claim to be conscious (and who would we be to disagree with them if they think they are?). These scientists believe life can’t be engineered in a piecemeal way: the whole being has to be created as a coherent whole. Their approach is to build the basic building blocks and let life emerge from them.

A sodarace in action

The outline of the idea could be seen in the game Sodarace, where you could build your own creatures that moved around a virtual world, and even let them evolve. One approach to building a creature such as a spider would be to try to work out mathematical equations for how each leg moves and program those equations in. The alternative, artificial life, way, as used in Sodarace, is instead to program the laws of physics, such as gravity and friction, and how masses, springs and muscles behave according to those laws. Then you just put these basic bits together in a way that corresponds to a spider. With this approach you don’t have to work out every eventuality in advance (what if it comes to a wall? Or a cliff? Or bumpy ground?) and write code to deal with it. Instead, natural behaviour emerges.
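
Here is the flavour of that approach in a few lines of Python: two masses joined by a spring, with gravity, damping and a floor programmed in, and their resting arrangement simply emerging rather than being coded directly. All the numbers are made up.

```python
# Program the physics, not the behaviour: two masses joined by a spring
# settle under gravity onto the ground. Nothing tells them where to rest;
# the arrangement emerges from Hooke's law, gravity and friction.

GRAVITY = -9.8
STIFFNESS = 50.0     # spring constant (newtons per metre, masses of 1 kg)
REST_LENGTH = 1.0    # the spring's natural length, in metres
DT = 0.01            # simulation time step, in seconds

masses = [[2.0, 0.0], [0.5, 0.0]]   # each mass: [height, vertical velocity]

for step in range(1000):
    stretch = (masses[0][0] - masses[1][0]) - REST_LENGTH
    spring_force = -STIFFNESS * stretch          # Hooke's law
    for i, direction in ((0, 1), (1, -1)):       # the force pulls the pair together
        velocity = masses[i][1] + (GRAVITY + direction * spring_force) * DT
        velocity *= 0.99                         # friction/damping
        masses[i][1] = velocity
        masses[i][0] += velocity * DT
        if masses[i][0] < 0.0:                   # the ground pushes back
            masses[i][0], masses[i][1] = 0.0, 0.0

print("Resting heights:", [round(m[0], 2) for m in masses])
```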

The artificial life community believe that not just life-like movement but life-like intelligence can emerge in a similar way. Rather than programming the behaviour of muscles, you program the behaviour of neurones and then build brains out of them. That, it turns out, has been the key to the machine learning programs now storming the world of Artificial Intelligence and turning it into an everyday tool. If aiming for artificial life, though, you would keep going: combine it with the basic biochemistry of an immune system, do a similar thing with a reproductive system, and so on.
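
The building block itself is tiny. Here is a sketch of a single artificial neurone, and a two-neurone ‘brain’ wired from it; the weights are arbitrary examples.

```python
# A neurone is just a weighted sum pushed through a squashing function.
# Brains, in this approach, are many of these wired together.
import math

def neurone(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # squash the result into 0..1

# A tiny two-neurone "brain" turning two sensor readings into one action.
sensors = [0.9, 0.2]
hidden = neurone(sensors, [1.5, -2.0], 0.1)
action = neurone([hidden], [2.0], -1.0)
print(f"muscle signal: {action:.2f}")
```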

Want to know more? A wonderful early book is Steve Grand’s “Creation”, on how he created what at the time was claimed to be “the nearest thing to artificial life yet”. It started life as the game “Creatures”.

Then have a go at creating artificial life yourself (but be nice to it).




EPSRC supported this article through research grants EP/K040251/2, held by Professor Ursula Martin, and EP/W033615/1.

CS4FN Advent – Day 10: Holly, Ivy and Alexa – chatbots and the useful skill of file management. Plus win at noughts and crosses

Chatbots, knowing where your files are, and winning at noughts and crosses with artificial intelligence.

Welcome to Day 10 of our CS4FN Christmas Computing Advent Calendar. We are just under halfway through our 25 days of posts, one every day between now and Christmas. You can see all our previous posts in the list at the end.

Today’s picture-theme is Holly (and ivy). Let’s see how I manage to link that to computer science 🙂

Some holly with red berries

1. Holly – or Alexa or Siri

In the comedy TV series* Red Dwarf, the spaceship has ‘Holly’, an intelligent computer who talks to the crew and answers their questions. Star Trek also has ‘Computer’, who can hold quite technical conversations and give reports on the health of the ship and crew.

People are now quite familiar with talking to computers, or at least giving them commands. You might have heard of Alexa (Amazon) or Siri (Apple / iPhone) and you might even have talked to one of these virtual assistants yourself.

When the article below was written, people were much less familiar with them. How can they know all the answers to people’s questions, and why do they seem to have an intelligence?

Read the article and then play a game (see 3. Today’s Puzzle) to see if you think a piece of paper can be intelligent.

Meet the Chatterbots – talking to computers thanks to artificial intelligence and virtual assistants

*also a book!

2. Are you a filing cabinet or a laundry basket?

People have different ways of saving information on their computers. Some university teachers found that when they asked their students to open a file from a particular directory, the students were completely puzzled. It turned out that the (younger) students didn’t think about files and where to put them in the same way that their (older) teachers did, and the reason is partly the type of device each generation grew up with.

Older people grew up using computers where the best way to organise things was to save a file in a particular folder to make it easy to find it again. Sometimes there would be several folders. For example you might have a main folder for Homework, then a year folder for 2021, then folders inside for each month. In the December folder you’d put your december.doc file. The file has a file name (december.doc) and an ‘address’ (Homework/2021/December/). Pretty similar to the link to this blog post which also uses the / symbol to separate all the posts made in 2021, then December, then today.

Files and folders image by Ulrike Mai from Pixabay. Each brown folder contains files, and is itself contained in the drawer, and the drawer is contained in the cabinet.

To find your december.doc file again you’d just open each folder by following that path: first Homework, then 2021, then December – and there’s your file. It’s a bit like looking for a pair of socks in your house – first you need to open your front door and go into your home, then open your bedroom door, then open the sock drawer and there are your socks.

What your file and folder structure might look like.

Younger people have grown up with devices that make it easy to search for any file. It doesn’t really matter where the file is so people used to these devices have never really needed to think about a file’s location. People can search for the file by name, by some words that are in the file, or the date range for when it was created, even the type of file. So many options.

The first way, that the teachers were using, is like a filing cabinet in an office, with documents neatly packed away in folders within folders. The second way is a bit more like a laundry basket where your socks might be all over the house but you can easily find the pair you want by typing ‘blue socks’ into the search bar.

Which way do you use?

In most cases either is fine, and you can just choose whichever way of searching or finding your files works for you. If you’re learning programming, though, it can be really helpful to know a bit about file paths, because the code you’re creating might need to know exactly where a file is so that it can read from it. So now some university teachers on STEM (science, technology, engineering and maths) and computing courses are also teaching their students how to use the filing cabinet method. It could be useful for them in their future careers.
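
For example, here is the filing cabinet skill in a few lines of Python, using the Homework/2021/December example from above.

```python
# A program needs the exact path to a file, folder by folder, to open it.
from pathlib import Path

path = Path("Homework") / "2021" / "December" / "december.doc"

if path.exists():   # follow the path: Homework, then 2021, then December
    print(f"Found it: {path} ({path.stat().st_size} bytes)")
else:
    print(f"No file at {path} - is each folder on the path right?")
```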

Want to find out more about files / file names / file paths and directory structures? Have a look at this great little tutorial https://superbasics.beholder.uk/file-system/

As the author says: “Many consumer devices try to conceal the underlying file system from the user (for example, smart phones and some tablet computers). Graphical interfaces, applications, and even search have all made it possible for people to use these devices without being concerned with file systems. When you study Computer Science, you must look behind these interfaces.”

You might be wondering what any of this has to do with ivy. Well, whenever I’ve seen a real folder structure on a Windows computer (you can see one here) I’ve often thought it looked a bit like ivy 😉

Creeping ivy at Blackheath station in London.

Further reading

File not found: A generation that grew up with Google is forcing professors to rethink their lesson plans (22 September 2021) The Verge

3. Today’s puzzle

Print or write out the instructions on page 5 of the PDF and challenge someone to a game of noughts and crosses… (there’s a good chance the bit of paper will win).

The Intelligent Piece of Paper activity.

4. Yesterday’s puzzle

The trick is based on a very old puzzle, at least one early version of which was by Sam Loyd. See this selection of vanishing puzzles for some variations. A very simple version of it appears in The Moscow Puzzles (puzzle 305) by Boris A. Kordemsky, where a line is made to disappear.

In the picture above five medium-length lines become four longer lines. It looks like a line has disappeared but its length has just been spread among the other lines, lengthening them.

If you’d like to have a go at drawing your own disappearing puzzle have a look here.

5. Previous Advent Calendar posts

CS4FN Advent – Day 1 – Woolly jumpers, knitting and coding (1 December 2021)

CS4FN Advent – Day 2 – Pairs: mittens, gloves, pair programming, magic tricks (2 December 2021)

CS4FN Advent – Day 3 – woolly hat: warming versus cooling (3 December 2021)

CS4FN Advent – Day 4 – Ice skate: detecting neutrinos at the South Pole, figure-skating motion capture, Frozen and a puzzle (4 December 2021)

CS4FN Advent – Day 5 – snowman: analog hydraulic computers (aka water computers), digital compression, and a puzzle (5 December 2021)

CS4FN Advent – Day 6 – patterned bauble: tracing patterns in computing – printed circuit boards, spotting links and a puzzle for tourists (6 December 2021)

CS4FN Advent – Day 7 – Computing for the birds: dawn chorus, birds as data carriers and a Google April Fool (plus a puzzle!) (7 December 2021)

CS4FN Advent – Day 8: gifts, and wrapping – Tim Berners-Lee, black boxes and another computing puzzle (8 December 2021)

CS4FN Advent – Day 9: gingerbread man – computing and ‘food’ (cookies, spam!), and a puzzle (9 December 2021)

CS4FN Advent – Day 10: Holly, Ivy and Alexa – chatbots and the useful skill of file management. Plus win at noughts and crosses – (10 December 2021) – this post

Meet the chatterbots – talking to computers thanks to artificial intelligence and virtual assistants

This article, by Paul Curzon (QMUL) was originally published on the CS4FN website.

A line of robots

Sitting down and having a nice chat with a computer probably isn’t something you do every day. You may never have done it. We mainly still think of it as being a dream for the future. But there is lots of work being done to make it happen in the present, and the idea has roots that stretch far back into the past. It’s a dream that goes back to Alan Turing, and then even a little further.

The imitation game
Back around 1950, Turing was thinking about whether computers could be intelligent. He had a problem though. Once you begin thinking about intelligence, you find it is a tricky thing to pin down. Intelligence is hard to define even in humans, never mind animals or computers. Turing started to wonder if he could ask his question about machine intelligence in a different way. He turned to a Victorian parlour game called the imitation game for inspiration.

The imitation game was played with large groups at parties, but focused on two people, a man and a woman. They would go into a different room to be asked questions by a referee. The woman had to answer truthfully. The man answered in any way he believed would convince everyone else he was really the woman. Their answers were then read out to the rest of the guests. The man won the game if he could convince everyone back in the party that he was really the woman.

Pretending to be human
Turing reckoned that he could use a similar test for intelligence in a machine. In Turing’s version of the imitation game, instead of a man trying to convince everyone he’s really a woman, a computer pretends to be a human. Everyone accepts the idea that it takes a certain basic intelligence to carry on a conversation. If a computer could carry on a conversation so well that talking to it was just like talking to a human, the computer must be intelligent.

When Turing published his imitation game idea, it helped launch the field of artificial intelligence (AI). Today, the field pulls together biologists, computer scientists and psychologists in a quest to understand and replicate intelligence. AI techniques have delivered some stunning results. People have designed computers that can beat the best human at chess, diagnose diseases, and invest in stocks more successfully than humans.

A chat with a chatterbot
But what about the dream of having a chat with a computer? That’s still alive. Turing’s idea, demonstrating computer intelligence by successfully faking human conversation, became known as the Turing test. Turing thought machines would pass his test before the 20th century was over, but the goal has proved more elusive than that. People have been making better conversational chat programs, called chatterbots, since the 1960s, but no one has yet made a program that can fool everyone into thinking it’s a real human.

What’s up, Doc
On the other hand, some chatterbots have done pretty well. One of the first, and still one of the most famous, chatterbots was created in 1966. It was called ELIZA. Its trick was imitating the sort of conversation you might have with a therapist. ELIZA didn’t volunteer much knowledge itself, but tried to get the user to open up about what they were thinking. So the person might type “I don’t feel well”, and ELIZA would respond with “you say you don’t feel well?” In a normal social situation, that would be a frustrating response. But it’s a therapist’s job to get a patient to talk about themselves, so ELIZA could get away with it. For an early example of a chatterbot, ELIZA did pretty well, but after a few minutes of chatting users realised that ELIZA didn’t really understand what they were saying.
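
The heart of ELIZA’s trick fits in a few lines: match a pattern in what was typed and reflect it back as a question. This sketch has just three rules where the real ELIZA had many more, but the principle is the same.

```python
# An ELIZA-style reply: spot a pattern and reflect it back as a question.
import re

RULES = [
    (r"I don't feel (.*)", "You say you don't feel {}?"),
    (r"I am (.*)", "How long have you been {}?"),
    (r"I (.*) you", "Why do you {} me?"),
]

def eliza_reply(text):
    for pattern, response in RULES:
        match = re.match(pattern, text, re.IGNORECASE)
        if match:
            return response.format(match.group(1))
    return "Please, tell me more."   # the non-committal fallback

print(eliza_reply("I don't feel well"))   # You say you don't feel well?
```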

Where have I heard this before?
One of the big problems in making a good chatterbot is coming up with sentences that sound realistic. That’s why ELIZA tried to keep its sentences simple and non-committal. A much more recent chatterbot called Cleverbot uses another brilliantly simple solution: it doesn’t try to make up sentences at all. It just stores all the phrases that it’s ever heard, and chooses from them when it needs to say something. When a human types a phrase to say to Cleverbot, its program looks for a time in the past when it said something similar, then reuses whatever response the human gave at the time. Given that Cleverbot has had 65 million chats on the Internet since 1997, it’s got a lot to choose from. And because its sentences were all originally entered by humans, Cleverbot can speak in slang or text speak. That can lead to strange conversations, though. A member of our team at cs4fn had an online chat with Cleverbot, and found it pretty weird to have a computer tell him “I want 2 b called Silly Sally”.
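
A toy version of that retrieval trick might look like the following, with crude word-overlap matching standing in for Cleverbot’s much more sophisticated similarity measure, and with the memory fixed rather than growing with every chat.

```python
# Don't compose sentences: find the most similar stored prompt and replay
# whatever a human once said in response to it. The memory is made up.

memory = [
    ("hello there", "hi! how are you?"),
    ("what is your name", "I want 2 b called Silly Sally"),
    ("do you like chess", "only when I win"),
]

def reply(text):
    words = set(text.lower().split())
    def overlap(pair):
        return len(words & set(pair[0].split()))
    best_prompt, best_response = max(memory, key=overlap)
    return best_response

print(reply("what's your name?"))   # replays a stored human response
```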

Computerised con artists
Most chatterbots are designed just for fun. But some are made with more sinister intent. A few years ago, a program called CyberLover was stalking dating chat forums on the Internet. It would strike up flirty conversations with people, then try to get them to reveal personal details, which could then be used to steal people’s identities or credit card accounts. CyberLover even had different programmed personalities, from a more romantic flirter to a more aggressive one. Most people probably wouldn’t be fooled by a robot come-on, but that’s OK. CyberLover didn’t mind rejection: it could start up ten relationships every half an hour.

Chatterbots may be ready to hit the big time soon. Apple’s iPhone 4S includes Siri, a computerised assistant that can find answers to human questions – sometimes with a bit of attitude. Most of Siri’s humorous answers appear to be pre-programmed, but some of them come from Siri’s access to powerful search engines. Apple don’t want to give away their secrets, so they’re not saying much. But if computerised conversation continues advancing, we may not be too far off from a computer that can pass the Turing test. And while we’re waiting, at least we’ve got better games to play than the Victorians had.