Al-Jazari: the father of robotics

by Paul Curzon, Queen Mary University of London

Al-Jazari's hand-washing automaton
Image from Wikipedia

Science fiction films are full of humanoid robots acting as servants, workers, friends or colleagues. The first were created during the Islamic Golden Age, a thousand years ago. 

Robots and automata have been the subject of science fiction for over a century, and their history in myth goes back millennia – but so does the actual building of lifelike animated machines. The Ancient Greeks and Egyptians built automata: animal or human-like contraptions that seemed to come to life. Those early automata were illusions without practical use, though, beyond entertaining or simply amazing people.

It was Ismail Al-Jazari, the great inventor of mechanical gadgets from the Islamic Golden Age of science, engineering and art in the 12th century, who first built robot-like machines with actual purposes. Powered by water, his automata acted as servants doing specific tasks. One was a humanoid automaton that acted as a servant during the ritual purification of hand washing before saying prayers: it poured water into a basin from a jug and then handed over a towel, mirror and comb, using a toilet-style flushing mechanism to deliver the water from a tank. Other inventions included a waitress automaton that served drinks and a band of robotic musicians that played instruments from a boat. The musical automaton may even have been programmable.

We know about Al-Jazari’s machines because he not only created mechanical gadgets and automata, he also wrote a book about them: The Book of Knowledge of Ingenious Mechanical Devices. It’s possible that it inspired Leonardo Da Vinci who, in addition to being a famous painter of the Italian Renaissance, was a prolific inventor of machines. 

Such “robots” were not everyday machines. The hand-washing automaton was made for the King. Al-Jazari’s book, however, didn’t just describe the machines, it explained how to build them: possibly the first textbook to cover automata. If you weren’t a king, then perhaps you could, at least, have a go at making your own servants.

EPSRC supports this blog through research grant EP/W033615/1. 

Future Friendly: Focus on Kerstin Dautenhahn

by Peter W McOwan, Queen Mary University of London

(from the archive)

Kerstin’s team, including a robot waving
Copyright © Adaptive Systems Research Group

Kerstin Dautenhahn is a biologist with a mission: to help us make friends with robots. Kerstin was always fascinated by the natural world around her, so it was no surprise when she chose to study Biology at the University of Bielefeld in Germany. She went on to a Diploma in Biology, researching the leg reflexes of stick insects: a strange start, it may seem, for someone who would later become one of the world’s foremost robotics researchers. But it was through this fascinating bit of biology that Kerstin became interested in the ways that living things process information and control their body movements, an area scientists call biological cybernetics. This interest in trying to understand biology made her want to build things to test her understanding: things based on ideas copied from biological animals but run by computers. These things would be robots.

Follow that robot

From humble beginnings building small robots that followed one another over a hilly landscape, she started to realise that biology was a great source of ideas for robotics, and in particular that the social intelligence animals use to live and work with each other could be modelled and used to create sociable robots.

She started to ask fascinating questions like “What’s the best way for a robot to interrupt you if you are reading a newspaper – by gesturing with its arms, blinking its lights or making a sound?” and, perhaps most importantly, “When would a robot become your friend?” First at the University of Hertfordshire, and now as a Professor at the University of Waterloo, she leads a world-famous research group looking to build friendly robots with social intelligence.

Good robot / Bad robot – East vs West

Kerstin, like many other robotics researchers, is worried that most people tend to look on robots as potentially evil. If we look at the way robots are portrayed in the movies, that’s often how it seems: it makes a good story to have a mechanical baddie. But in reality robots can provide a real service to humans, from helping the disabled and assisting around the home to becoming friends and companions. The baddie robot idea tends to dominate in the west, but in Japan robots are very popular and robotics research is advancing at a phenomenal rate. There has been a long history in Japan of people finding mechanical things that mimic natural things interesting and attractive. It is partly this cultural difference that has made Japan a world leader in robot research. But Kerstin and others like her are trying to get those of us in the west to change our opinions by building friendly robots and looking at how we relate to them.

Polite Robots roam the room

When at the University of Hertfordshire, Kerstin decided that the best way to see how people would react to a robot around the house was to rent a flat near the university and fill it with robots. Rather than examining how people interacted with robots in a laboratory, moving the experiments to a real home, with bookcases, biscuits, sofas and coffee tables, made it real. She and her team looked at how to give their robots social skills: what was the best way for a robot to approach a person, for example? At first they thought the best approach would be straight from the front, but they found that humans felt this was too aggressive, so the robots were trained to come up gently from the side. The people in the house were also given special ‘comfort buttons’, devices that let them indicate how they were feeling in the company of robots. Again interesting things happened: it turned out that quite a lot of people (though not all) were on the whole happy for these robots to be close to them – closer, in fact, than they would normally let a human approach. Kerstin explains: ‘This is because these people see the robot as a machine, not a person, and so are happy to be in close proximity. You are happy to move close to your microwave, and it’s the same for robots’. These are exciting first steps as we start to understand how to build robots with socially acceptable manners. But it turns out that robots need to have good looks as well as good manners if they are going to make it in human society.

Looks are everything for a robot?

How we interact with robots also depends on how the robots look. Researchers had found previously that if you make a robot look too much like a human being, people expect it to be a human being, with all the social and other skills that humans have. If it doesn’t have these, we find interaction very hard. It’s like working with a zombie, and it can be very frightening. This fall in acceptability of robots that look like, but aren’t quite, human is what researchers call the ‘uncanny valley’: people prefer to encounter a robot that looks like a robot and acts like a robot. Kerstin’s group found this effect too, so they designed their robots to look and act the way we would expect robots to look and act, and things got much more sociable. But they are still looking at how we act with more human-like robots, and built KASPAR, a robot toddler with a very realistic rubber face capable of showing expressions and smiling, and video camera eyes that allow the robot to react to your behaviours. He has arms, so he can wave goodbye or greet you with a friendly gesture. Most recently he was extended with multi-modal technology that allowed several children to play with him at the same time. He’s very lifelike, and the hope was that as KASPAR’s programming grew and his abilities improved, he, or some descendant of him, would emerge from the uncanny valley to become someone’s friend – in particular, a friend to children with autism.

Autism – mind blindness and robots

The fact that most robots at present look like and act like robots can give them a big advantage in supporting children with autism. Autism is a condition that prevents you from developing an understanding of how to interact socially with the world. A current theory to explain the condition is that those who are autistic cannot form a correct understanding of others’ intentions: this is called mind blindness. For example, if I came into the room wearing a hideous hat and asked you ‘Do you like my lovely new hat?’ you would probably think, ‘I don’t like the hat, but he does, so I should say I like it so as not to hurt his feelings’: you have a mental model of my state of mind (that I like my hat). An autistic person is likely to respond ‘I don’t like your hat’, if that is what he feels. Autistic people cannot create this mental model, so they find it hard to make friends and generally interact with people, as they can’t predict what people are likely to say, do or expect.

Playing with Robot toys

It’s different with robots: many autistic children have an affinity with them. Robots don’t do unexpected things. Their behaviour is much simpler, because they act like robots. Kerstin’s group examined how interaction with robot toys could help some autistic children develop skills that allow them to interact better with other people. By controlling the robot’s behaviours, some of the children can develop ways to mimic social skills, which may ultimately improve their quality of life. There were some promising results, and the work continues as one way to try to help those living with this socially isolating condition.

Future friendly

It’s only polite that the last word goes to Kerstin from her time at Hertfordshire:

‘I firmly believe that robots as assistants can potentially be very useful in many application areas. For me as a researcher, working in the field of human-robot interaction is exciting and great fun. In our team we have people from various disciplines working together on a daily basis, including computer scientists, engineers and psychologists. This collaboration, where people need to have an open mind towards other fields, as well as imagination and creativity, is necessary in order to make robots more social.’

In the future, when robots become our workmates, colleagues and companions, it will be in part down to the pioneering efforts of Kerstin and her team as they work towards making our robot future friendly.



EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Strictly Judging Objects

by Paul Curzon and Peter W. McOwan, Queen Mary University of London

Revised from TLC blog and the cs4fn archive.

Strictly Come Dancing has been popular for a long time. It’s not just the celebs, the pros, or the dancing that make it must-see TV – it’s having the right mix of personalities among the judges too. Craig Revel Horwood has made a career of being rude, if always fair. By contrast, Darcey Bussell was always far more supportive. Bruno Tonioli is supportive but always excited. Len Goodman was similarly supportive but calm. Shirley Ballas aims for supportive but strict, while Motsi Mabuse was always supportive and enthusiastic. It’s often the tension between the judges that makes good TV (remember those looks Darcey always gave Craig, never mind when they started actually arguing). However, if you believe Doctor Who, the future of judges will be robots like AnneDroid in the space-age version of The Weakest Link… so let’s look at the Bot future. How might you go about designing computer judges, and how might objects help?

Write the code

We need to write a program. We will use pseudocode (a mix of code and English) here, rather than any particular programming language, to make things easier to follow.

The first thing to realise is we don’t want to have to program each judge separately. That would mean describing the rules for every new judge from scratch each time they swap. We want to do as little as possible to describe each new one. Judges have a lot in common so we want to pull out those common patterns and code them up just once.

What makes a judge?

First let’s describe a basic judge. We will create a plan, a bit like an architect’s plan of a building. Programmers call this a ‘class’. The thing to realise about classes is that a class for a Judge is NOT the code of any actual working judge, just the code of how to create one: a blueprint. This blueprint can be used to build as many individual judges as you need.

What’s the X-factor that makes a judge a judge? First we need to decide on some basic properties or attributes of judges. We can make a list of them, and of what the possibilities for each are. The things common to all judges are that they have names and personalities, and that they make judgements on people. Let’s simply say a judge’s personality can be either supportive or rude, and their judgements are just marks out of 10 for whoever they last watched.

Name : String
CharacterTrait : SUPPORTIVE, RUDE
Judgement : 1, 2, 3, 4, 5, 6, 7, 8, 9, 10

We have just created some new ‘types’, in programming terminology. A type is just a grouping of values. The type CharacterTrait has two possible values (SUPPORTIVE or RUDE), whereas the type Judgement has 10 possible values. We also used one common, existing type: String. Strings are just sequences of letters, numbers or symbols, so we are saying something of type Name is any such sequence. (This allows for futuristic judges as well as hip hop and rapper judges: perhaps one day, in retirement, C3P0 will become a judge, but it also lets the likes of 50 Cent become one.)
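
The pseudocode here isn’t any particular language, but to give a flavour of the real thing, here is one way the new types might look in Python, using its enumeration feature. This is just a minimal sketch, and the names are our own choices:

from enum import Enum

# A Name can just be Python's built-in string type, str.

class CharacterTrait(Enum):
    SUPPORTIVE = 1
    RUDE = 2

# A Judgement is a mark from 1 to 10. In Python we might simply use
# an int for the mark, checking it stays in range whenever it is set.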

Let’s start to describe Judges as people with a name and a personality, capable of thinking of a mark.

DESCRIPTION OF A Judge:
    Name name
    CharacterTrait personality
    Judgement mark

This says that each judge will be described by three variables: one called name, one called personality and one called mark. This kind of variable is called an instance variable, because each judge we create from this blueprint will have their own copy, or instance, of the variables that describe that judge. So we might originally have created a Len judge and so on, but many series later find we need a Motsi one and then an Anton one. Each new judge needs their own copies of the variables that describe them.

All we are saying here in the class (the blueprint) is whenever we create a Judge it will have a name, a personal character (it will be either RUDE or SUPPORTIVE) and a potential mark.

For any given judge we will always refer to their name using the variable name and their character trait using the variable personality. Each new judge will also have a current judgement, which we will refer to as mark: a number between 1 and 10. Notice we use the types we created – Name, CharacterTrait and Judgement – to specify the possible values each of these variables can hold.

Best behaviour

We are now able to say in our judge blueprint, our class, whether a judge is rude or supportive, but we haven’t actually said what that means. We need to set out the actual behaviours associated with being rude and supportive. We will do this here in a fairly simple way, just to illustrate. Let’s assume that the personality shows in the things they say when they give their judgement. A rude judge will say “It was a disaster” unless they are awarding a mark above 8 out of 10. For high marks they will grudgingly say “You were ok I suppose”. We translate this into commands for how to give a judgement.

IF (personality IS RUDE) AND (mark <= 8)
THEN SAY “It was a disaster”

IF (personality IS RUDE) AND (mark > 8)
THEN SAY “You were ok I suppose”

It would be easy to give them lots more things to choose to say in a similar way: it’s just more rules. We can do a similar thing for a supportive judge. They will say “You were stunning” if they award more than 5 out of 10, and otherwise say “You tried hard”.

TO GiveJudgement:
    IF (personality IS RUDE) AND (mark <= 8)
    THEN SAY “It was a disaster”     

    IF (personality IS RUDE) AND (mark > 8)
    THEN SAY “You were ok I suppose”

    IF (personality IS SUPPORTIVE) AND (mark > 5)
    THEN SAY “You were stunning”

    IF (personality IS SUPPORTIVE) AND (mark <= 5)
    THEN SAY “You tried hard”

A ten from Len

The other thing that judges do is actually come up with their judgement: their mark. For real judges it would be based on rules about what they saw – a lack of pointed toes pulls it down, lots of wiggle at the right times pushes it up… To keep it simple here, we will assume they actually just think of a random number – essentially throwing a 10-sided dice under the desk with the numbers 1–10 on!

TO MakeJudgement:
    mark = RANDOM (1 TO 10)

Finally, judges can reveal their mark.

TO RevealMark:
    SAY mark

Notice this doesn’t mean they say the word “mark”. mark is a variable so this means say whatever is currently stored in that judge’s mark.

Putting that all together to make our full judge class we get:

DESCRIPTION OF A Judge:
    Name name
    CharacterTrait personality
    Judgement mark

    TO GiveJudgement:
        IF (personality IS RUDE) AND (mark <= 8)
        THEN SAY “It was a disaster”

        IF (personality IS RUDE) AND (mark > 8)
        THEN SAY “You were ok I suppose”

        IF (personality IS SUPPORTIVE) AND (mark > 5)
        THEN SAY “You were stunning”

        IF (personality IS SUPPORTIVE) AND (mark <= 5)
        THEN SAY “You tried hard”

    TO MakeJudgement:
        mark = RANDOM (1 TO 10)

    TO RevealMark:
        SAY mark

What is a class?

So what is a class? A class says how to build an object. It defines properties or attributes (like name, personality and current mark) but it also defines behaviours: a judge can speak, make a judgement and reveal the current mark. These behaviours are defined by methods – mini-programs that specify how any Judge should behave. Our class says that each Judge will have its own set of these methods, which use that Judge’s own instance variables to store its properties and decide what to do.

So a class is a blueprint that tells us how to make particular things: objects. It defines their instance variables – what the state of each judge consists of – and some rules to apply in particular situations. So far we have made a class definition for making Judges. It is important to realise that we haven’t made any actual objects (no actual judge) yet, though: defining a class does not in itself give you any actual objects. No actual Judges (no Bruno, no Motsi, …) exist to judge anything yet. We need to write specific commands to create them, as we will see.

We can store away our blueprint and just pull it out to make use of it when we need to create some actual judges (eg once a year in the summer when this year’s judges are announced).

Kind words for our contestants?

Suppose Strictly is starting up (as it is as I write, but let’s suppose all the judges will be androids this year) and we want to create some judges, starting with a rude judge called Craig Devil Droidwood. We can use our class as the blueprint to do so. We need to say what its personality is. (Judges only think of a mark when they actually see an act, so we don’t have to give a mark now.)

CraigDevilDroidwood IS A NEW Judge 
                    WITH name “Craig Devil Droidwood”
                    AND personality RUDE

This creates a new judge called Craig Devil Droidwood and makes it RUDE. We have instantiated the class to give a Judge object. We store this object in a variable called CraigDevilDroidwood.

For a supportive judge that we decide to call Len Goodroid we would just say (instantiating the class in a different way):

LenGoodroid IS A NEW Judge 
            WITH name “Len Goodroid”
            AND personality SUPPORTIVE

Another supportive judge, DarC3PO BussL, would be created with:

DarC3POBussL IS A NEW Judge 
             WITH name “DarC3PO BussL”
             AND personality SUPPORTIVE

Whereas in the class we were describing a blueprint to use to create a Judge, here we are actually using that blueprint and making different Judges from it. This way we can quickly and easily make new judges without copying out all the description again. These commands, executed at the start of our program (and TV programme), actually create the objects. They create instances of class Judge, which just means they create actual virtual judges, each with their own name and personality. They also each have their own copy of the rules for the behaviour of judges.

Execute them

Once actual judges are created, they can execute commands to start the judging. First the program tells them to make judgements using their judgement method. We execute the MakeJudgement method of each separate judge object in turn. Each has the same instructions, but those instructions work on that particular judge’s instance variables, so they do different things.

EXECUTE MakeJudgement OF CraigDevilDroidwood
EXECUTE MakeJudgement OF DarC3POBussL
EXECUTE MakeJudgement OF LenGoodroid

Then the program has commands telling them to say what they think,

EXECUTE GiveJudgement OF CraigDevilDroidwood
EXECUTE GiveJudgement OF DarC3POBussL
EXECUTE GiveJudgement OF LenGoodroid

and finally give their mark.

EXECUTE RevealMark OF CraigDevilDroidwood
EXECUTE RevealMark OF DarC3POBussL
EXECUTE RevealMark OF LenGoodroid

In our actual program this would sit in a loop, so the whole program might be something like:

CraigDevilDroidwood IS A NEW Judge 
                    WITH name “Craig Devil Droidwood”
                    AND personality RUDE
DarC3POBussL        IS A NEW Judge 
                    WITH name “DarC3PO BussL”
                    AND personality SUPPORTIVE
LenGoodroid         IS A NEW Judge 
                    WITH name “Len Goodroid”
                    AND personality SUPPORTIVE

FOR EACH contestant DO THE FOLLOWING
    EXECUTE MakeJudgement OF CraigDevilDroidwood
    EXECUTE MakeJudgement OF DarC3POBussL
    EXECUTE MakeJudgement OF LenGoodroid

    EXECUTE GiveJudgement OF CraigDevilDroidwood
    EXECUTE GiveJudgement OF DarC3POBussL
    EXECUTE GiveJudgement OF LenGoodroid

    EXECUTE RevealMark OF CraigDevilDroidwood
    EXECUTE RevealMark OF DarC3POBussL
    EXECUTE RevealMark OF LenGoodroid

So we can now create judges to our heart’s content, fixing their personalities and putting the words in their mouths, based on our single description of what a Judge is. Of course, our behaviours so far are simple. We really want to add more kinds of personality, like strict judges (Shirley) and excited ones (Bruno). Ideally we want to be able to do different combinations too, making perhaps excited rude judges as well as excited supportive ones. This really just takes more rules, as the sketch below shows.
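
For example, here is one way (our own sketch, not the only possible design) to support combinations in Python: give each judge a set of traits rather than a single personality, with extra rules layered on top.

from enum import Enum

class Trait(Enum):
    SUPPORTIVE = 1
    RUDE = 2
    STRICT = 3
    EXCITED = 4

def give_judgement(traits, mark):
    # The base comment depends on rudeness and the mark...
    if Trait.RUDE in traits:
        line = "You were ok I suppose" if mark > 8 else "It was a disaster"
    else:
        line = "You were stunning" if mark > 5 else "You tried hard"
    # ...then other traits modify how it is delivered
    if Trait.EXCITED in traits:
        line = line.upper() + "!!!"  # excited judges shout everything
    print(line)

give_judgement({Trait.RUDE, Trait.EXCITED}, 9)  # YOU WERE OK I SUPPOSE!!!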

A classless society?

Computer Scientists are lazy beings – if they can find a way to do something that involves less work, they do it, allowing them to stay in bed longer. The idea we have been using to save work here is just that of describing classes of things and their properties and behaviour. Scientists have been doing a similar thing for a long time:

Birds have feathers (a property) and lay eggs (a behaviour).

Spiders have eight legs (a property) and make silk (a behaviour).

We can say something is a particular instance of a class of thing, and that tells us a lot about it without having to spell it all out each time, even for fictional things: eg Hedwig is a bird (so feathers and eggs); Charlotte is a spider (so legs and silk). The class captures the common patterns behind the things we are describing. The difference when Computer Scientists write them is that, because they are programs, they can then come alive!

All change

We have specified what it means to be a robotic judge, and we’ve only had to specify the basics of Judgeness once to do it. That means that if we decide to change anything in the basic judge (like giving them a better way to come up with a mark than doing it randomly, or having them choose things to say from a big database of supportive or rude comments), changing it in the plan will apply to all the judges of whatever kind. That is one of the most powerful reasons for programming in this way.

We could create robot performers in a similar way (after all don’t all the winners seem to merge into one in the long run?). We would then also have to write some instructions about how to work out who won – does the audience have a vote? How many get knocked out each week? … and so on.

Of course we’ve not written a full program, even for judges, just sketched a program in pseudocode. The next step is to convert this into a program in an object-oriented programming language like Python or Java. Is that hard? Why not give it a try and judge for yourself?
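
To get you started, here is one possible translation into Python: a minimal sketch of the Judge class and judging loop above, where the method and variable names are just our own choices.

import random
from enum import Enum

class CharacterTrait(Enum):
    SUPPORTIVE = 1
    RUDE = 2

class Judge:
    """A blueprint for making individual judge objects."""

    def __init__(self, name, personality):
        # Each judge object gets its own copies of the instance variables
        self.name = name
        self.personality = personality
        self.mark = None

    def make_judgement(self):
        # Keep it simple: throw a 10-sided dice under the desk
        self.mark = random.randint(1, 10)

    def give_judgement(self):
        if self.personality is CharacterTrait.RUDE:
            print("You were ok I suppose" if self.mark > 8
                  else "It was a disaster")
        else:  # SUPPORTIVE
            print("You were stunning" if self.mark > 5
                  else "You tried hard")

    def reveal_mark(self):
        print(self.mark)

# Instantiate the class: only now do actual judge objects exist
judges = [
    Judge("Craig Devil Droidwood", CharacterTrait.RUDE),
    Judge("DarC3PO BussL", CharacterTrait.SUPPORTIVE),
    Judge("Len Goodroid", CharacterTrait.SUPPORTIVE),
]

for contestant in ["first couple", "second couple"]:
    for judge in judges:
        judge.make_judgement()
    for judge in judges:
        judge.give_judgement()
    for judge in judges:
        judge.reveal_mark()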




This blog is funded through EPSRC grant EP/W033615/1.

The First Law of Humans

Preventing robots from hurting us

by Paul Curzon, Queen Mary University of London

A chess board with pieces lined up
Image by Pexels from Pixabay 

The first rule of humans when around robots is apparently that they should not do anything too unexpected…

A 7-year-old child playing in a chess tournament has had his finger broken by a chess-playing robot, which grabbed it as the boy made his move. The child was blamed by one of the organisers, who claimed that it happened because the boy “broke the safety rules”! The organisers also apparently claimed that the robot itself was perfectly safe!

What seems to have happened is that, after the robot played its move, the boy played his own move very quickly, before the robot had finished. Somehow this triggered the wrong lines of code in the robot’s program: instructions that were intended for some other situation. In the actual situation, with the boy’s hand over the board at the wrong time, they led the robot to grab his finger and not let go.

Spoiler Alert

The situation immediately brings to mind the classic science fiction story “Moxon’s Master” by Ambrose Bierce, published way back in 1899. It is the story of a chess-playing automaton (ie robot) and what happens when it is checkmated while playing a game with its creator, Moxon: it flies into a dangerous rage. There, though, the problem is apparently that the robot has developed emotions, and so emotional reactions. In both situations, however, a robot intended simply to play chess turns out to be capable of harming a human.

The three laws of robotics

Isaac Asimov is famous for his laws of robotics: fundamental, unbreakable rules built into the ‘brain’ of all robots in his fictional world, precisely to stop this sort of situation. The rules he formulated were:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
A robot carefully touching fingers with a human
Image by Pete Linforth from Pixabay 

Clearly, had they been in place, the chess robot would not have harmed the boy, and Moxon’s automaton would not have been able to do anything too bad as a result of its temper either.

Asimov devised his rules as a result of a discussion with the science fiction magazine editor John W. Campbell, Jr. He then spent much of his science fiction career writing robot stories about how humans could end up being hurt despite the apparently clear rules. That aside, their key idea was that, to ensure robots were safe around people, robots would need built-in logic that could not be circumvented, to stop them hurting anyone: a fail-safe system monitoring their actions that would take over when breaches were possible. Clearly this chess-playing robot was not “perfectly safe”, and not even fail-safe: if it were, the boy would not have been harmed whatever he did. The robot did not have anything at all akin to a working, unbreakable First Law programmed into it.

Dangerous Machines

Asimov’s robots were intelligent and able to think for themselves in a more general sense than any that currently exist. The First Law essentially prevented them from deciding to harm a human, not just from doing so by accident. Perhaps the day will soon come when robots can start to think for themselves, so perhaps a first law will soon be important. In any case, machines can harm humans without being able to think. That humans need to be wary around robots is obvious from the numerous injuries and even fatalities in factories using industrial robots in the decades since they were introduced. They are dangerous machines. Fortunately, the carnage inflicted on children is at least nothing like that of the industrial accidents of the Industrial Revolution. It is still a problem, though. People do have to take care and follow safety rules around them!

Rather than humans having to obey safety laws, then, perhaps we ought to be taking Asimov’s laws more seriously for all robots. Why can’t those laws just be built in? It is certainly an interesting research problem to think about. The idea of a fail-safe is standard in engineering, so it’s not that general idea that is the problem. The problem is that, rather than intelligence being needed for robots to harm us, intelligence is needed for them to avoid doing so.

Implementing the First Law

Let’s imagine building the First Law into chess-playing robots, and in particular into the one that hurt the boy. For starters, the robot would have needed to recognise that the boy WAS a human, so should not be harmed. It would also need to recognise that his finger was a part of him and that gripping it would harm him. It would also need to know that it was gripping his finger (not a piece) at the time. It would then need a way to stop before it was too late, and to do no harm in the stopping. It clearly needs to understand a lot about the world to be able to avoid hurting people in general.

Some of this is almost within our grasp. Computers can certainly do a fairly good job of recognising humans now through image recognition code. They can even recognise individuals, so that first fundamental part – knowing what is and isn’t a human – is more or less possible now, just not perfectly yet. Recognising objects in general is perhaps harder. The chess robot presumably has code for recognising pieces already, though a finger perhaps even looks like a piece, at least to a robot. To avoid causing harm in any situation, it needs to be able to recognise what lots of objects are, not just chess pieces, and to differentiate them from what is part of a human, not just what is a human. Object recognition like this is possible, at least in well-defined situations. It is much harder to manage in general, even if the precise objects have never been encountered before. Even harder, though, is probably recognising all the ways that would constitute doing harm to the human identified in front of it, including with any of the objects that are around.

Staying in control

The code to do all this would also have to be, in some sense, at a higher level of control than the code making the robot take actions, as it has to overrule that code ALWAYS. For the chess robot, there was presumably a bug that allowed it to grip a human’s finger, as no programmer will have intended that, so it isn’t about monitoring the code itself. The fail-safe code has to monitor what is actually happening in the world and be in a position to take over. It also can’t just make the robot freeze, as that may be enough to do the damage of a broken finger if one is already in the robot’s grip (and that may have been part of the problem for the boy’s finger). Nor can it just move its arm back suddenly: what if another child (a toddler perhaps) has just crawled up behind it? It has to monitor the effects of its own commands too! A simple version of such a monitor is probably straightforward, though. The robot’s computer architecture just needs to be designed accordingly. One way robots are designed is for new modules to build on top of existing ones, giving new, more complex behaviour as a result, which possibly fits what is needed here. Having additional computers acting as monitors to take over when others go wrong is also not really that difficult (bugs in their own code aside) and is a standard idea for mission-critical systems.

So it is probably all the complexity of the world, with unexpected things happening in it, that makes a general version of the First Law hard at the moment… If Asimov’s laws in all their generality are currently a little beyond us, perhaps we should just think about the problem in another, more limited way (at least for now)…

Can a chess playing robot be safe?

In the chess game situation, if anything is moving in front of the robot then it should perhaps just keep well out of the way. To do so just needs monitoring code that can detect movement in a small fixed area. It doesn’t need to understand anything about the world apart from movement versus non-movement. That is easily within the realms of what computers can do – even some simple toy robots can detect movement. The monitoring code would still need to be able to override the rest of the code, of course, bugs included.
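
As a rough illustration of how simple such a monitor could be, here is a sketch in Python using the OpenCV library to difference successive camera frames. The robot object and its emergency_stop command are hypothetical stand-ins (we don’t have the actual robot’s code), and the thresholds are just our guesses:

import cv2

def movement_detected(prev_gray, curr_gray, pixel_threshold=25, min_changed=500):
    # Difference the two greyscale frames and count pixels that changed a lot
    diff = cv2.absdiff(prev_gray, curr_gray)
    return int((diff > pixel_threshold).sum()) > min_changed

def monitor(camera, robot):
    ok, frame = camera.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if movement_detected(prev, curr):
            robot.emergency_stop()  # hypothetical override: halt the arm safely
        prev = curr

# It might be started with something like: monitor(cv2.VideoCapture(0), robot)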

Why, also, could the robot grip a finger with enough pressure to break it anyway? Perhaps it just needed more accurate sensors in its fingers to avoid doing harm, together with a rule to simply let go if it felt too much resistance back. After all, chess pieces don’t resist much!
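
That “let go if it resists” rule is also simple to express. In this Python sketch the gripper and force sensor commands are hypothetical names, and the force limit is just our guess:

MAX_GRIP_FORCE = 2.0  # newtons – chess pieces don't resist much

def safe_grip(gripper, force_sensor):
    # Close the gripper, but bail out if we meet too much resistance
    gripper.start_closing()                    # hypothetical command
    while not gripper.fully_closed():          # hypothetical check
        if force_sensor.read() > MAX_GRIP_FORCE:
            gripper.open()                     # not a chess piece: let go!
            return False
    return True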

And one last idea, if a little more futuristic. A big research area at the moment is soft robotics: robots that are soft and squidgy, not hard and solid, precisely so they can do less harm. Perhaps if the chess robot’s claw-like fingers had instead been totally soft and squishy, it would not have harmed the boy even if it did grab his finger.

Had the robot designers tried hard enough, they surely could have come up with solutions to make it safer, even if they didn’t have good enough debugging skills to prevent the actual bug that caused the problem. It needs safety to be a very high priority from the outset, though: and certainly safety that isn’t just pushed onto humans to be responsible for, as the organisers did.

We shouldn’t be blaming children for not obeying safety rules when what they are given to play with is essentially a hard, industrial robot. Doing so just lets the robot makers off the hook from even trying to make their robots safer, when they clearly could do more. When disasters happen, don’t blame the people: improve the system. On the other hand, perhaps we should be thinking far more about doing the research that will allow us one day to actually implement Asimov’s laws in all robots, and so have a safety culture built into robotics. Then perhaps people would not have to be quite so wary around robots, and certainly would not have to follow safety rules themselves. That surely is the robot’s job.




This blog is funded through EPSRC grant EP/W033615/1.

Dressing it up

Why it might be good for robots to wear clothes

by Peter W McOwan and the CS4FN team, Queen Mary University of London

Updated from the archive

(Robot) dummies in different clothes standing in a line up a slope
Image by Peter Toporowski from Pixabay 

Even though most robots still walk around naked, the Swedish Institute of Computer Science (SICS) in Stockholm has explored how to produce fashion-conscious robots.

The applied computer scientists there were looking for ways to make the robots of today easier for us to get along with. As part of the LIREC project to build the first robot friends for humans, they examined how our views of simple robots change when we can clothe and customise them. Does this make the robots more believable? Do people want to interact more with a fashionable robot?

How do you want it?

These days most electronic gadgets allow the human user to customise them. For example, on a phone you can change the background wallpaper or colour scheme, the ringtone or how the menus work. The ability of the owner to change the so-called ‘look and feel’ of software is called end-user programming. It’s essentially up to you how your phone looks and what it does.

Dinosaurs waking and sleeping

The Swedish team began by taking current off-the-shelf robots and adding dress-up elements to them. Enter Pleo, a toy dinosaur ‘pet’ able to learn as you play with it. Now add in that fashion twist: what happens when you can play dress-up with the dinosaur? Pleo’s costumes change its behaviour, a bit like what happens when you customise your phone. For example, if you give Pleo a special watchdog necklace, the robot remains active and ‘on guard’. Change the costume from necklace to pyjamas, and the robot slowly switches into ‘sleep’ mode. The costumes or accessories you choose communicate electronically with the robot’s program, and its behaviour follows suit in a way you can decide. The team explored whether this changed the way people played with the robots.
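
The details of SICS’s implementation aren’t described here, but the general idea can be pictured as a simple lookup from a detected accessory to a behaviour mode. This Python sketch is hypothetical – the tag names and modes are invented for illustration:

# Each accessory carries a machine-readable tag (RFID, say).
# The robot looks the tag up to decide how to behave.
COSTUME_BEHAVIOURS = {
    "watchdog_necklace": "on_guard",  # stay active and alert
    "pyjamas": "sleep",               # wind down into sleep mode
}

def behaviour_for(tag):
    return COSTUME_BEHAVIOURS.get(tag, "default")

print(behaviour_for("pyjamas"))  # -> sleep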

Clean sweeps

In another experiment the researchers played dress-up with a robot vacuum cleaner. The cleaner rolls around the house sweeping the floor, and had already proven a hit with many consumers. It bleeps happily as its on-board computer works out the best path to bust your carpet dust. The SICS team gave the vacuum a special series of stick-on patches which could add to its basic programming. They found that choosing the right patch could change the way humans perceive the robot’s actions. Different patches can make humans think the robot is curious, aggressive or nervous. There’s even a shyness patch that makes the robot hide under the sofa.

What’s real?

If humans are to live in a world populated by robots there to help them, the robots need to be able to play by our rules. Humans have whole parts of their brains given over to predicting how other humans will react. For example, we can empathise with others because we know that other beings have thoughts like us, and we can imagine what they think. This often spills over into anthropomorphism, where we give human characteristics to non-human animal or non-living things. Classic examples are where people believe their car has a particular personality, or think their computer is being deliberately annoying – they are just machines but our brains tend to attach motives to the behaviours we see.

Real-er robots?

Robots can produce very complex behaviours depending on the situations they are in and the ways we have interacted with them, which creates the illusion that they have some sort of ‘personality’ or motives behind the way they are acting. This can help robots seem more natural and able to fit in with the social world around us. It can also improve the ways they provide us with assistance, because they seem that bit more believable. Projects like SICS’s ‘actDresses’ help by providing new ways for human users to customise the actions of their robots in a very natural way – in this case, by getting the robots to dress for the part.




This blog is funded through EPSRC grant EP/W033615/1.

The Mummy in an AI world: Jane Webb’s future

by Paul Curzon, Queen Mary University of London

The sarcophagus of a mummy
Image by albertr from Pixabay

Inspired by Mary Shelley’s Frankenstein, the 17-year-old Victorian orphan Jane Webb secured her future by writing the first ever mummy story. Perhaps the most amazing thing about the three-volume book, though, is the 22nd-century world in which her novel was set.

On the death of her father, Jane realised she needed to find a way to support herself, and did so by publishing her novel “The Mummy!” in 1827. In contrast to their modern version as stars of horror films, Webb’s mummy, a reanimation of Cheops, was actually there to help those doing good and punish those that were evil. Napoleon had, at the start of the century, invaded Egypt, taking with him scholars intent on understanding Ancient Egyptian society. Europe was fascinated with Ancient Egypt and awash with Egyptian artefacts and stories around them. In London, the Egyptian Hall had been built in Piccadilly in 1812 to display Egyptian artefacts, and in 1821 it displayed a replica of the tomb of Seti I. Hieroglyphics, with the help of the Rosetta Stone, were finally deciphered in 1822. The time was therefore ripe for someone to come up with the idea of a mummy story.

The novel was not, however, set in Victorian times but in a 22nd-century future that she imagined, and that future was perhaps more amazing than the idea of a mummy coming to life. Her version of the future was full of technological inventions supporting humanity, as well as social predictions, many of which have come to fruition, such as space travel and the idea that women might wear trousers as the height of fashion (making her a feminist hero). The machines she described in the book led to her meeting her future husband, John Loudon. A writer about farming and gardening, he was so impressed by the idea of a mechanical milking machine included in the book that he asked to meet her. They married soon after (and she became Jane Loudon).

The skilled artificial intelligences she wrote into her future society are perhaps the most amazing of her ideas, in that she was the first person to really envision in fiction a world where AIs and robots were embedded in society, just doing good as standard. To put this into the context of other predictions, Ada Lovelace wrote her notes suggesting machines of the future would be able to compose music 20 years later.

Jane Webb’s future was also full of cunning computational contraptions. There were steam-powered robot surgeons, foreseeing the modern robots that are able to do operations (and that, with their steady hands, are better than a human at, for example, eye surgery). She also described artificial intelligences replacing lawyers: her machines were fed their legal brief, giving them instructions about the case, through tubes. Whilst robots may not yet have fully replaced barristers and judges, artificial intelligence programs are already used, for example, to decide the length of sentences of those convicted in some places, and many see it as only a matter of time before lawyers are spending their time working with artificial intelligence programs as standard. Jane’s world also includes a version of the Internet, at a time before the electric telegraph existed and when telegraph messages were sent by semaphore between networks of towers.

The book ultimately secured her future as required, and whilst we do not yet have any real reanimated mummies wandering around doing good deeds, Jane Webb did envision lots of useful inventions, many of which are now a reality, and certainly had pretty good ideas about how future computer technology would pan out in society… despite computers, never mind artificial intelligences, still being well over a century away.




EPSRC supported this article through research grants EP/K040251/2, held by Professor Ursula Martin, and EP/W033615/1. 

The naked robot

by Paul Curzon, Queen Mary University of London

From the archive

A naked robot holding a flower
Image by bamenny from Pixabay 

Why are so many film robots naked? We take it for granted that robots don’t wear clothes, and why should they?

They are machines, not humans, after all. On the other hand, the quest to create artificial intelligence involves trying to create machines that share the special ingredients of humanity. One of the things that is certainly special about humans in comparison to other animals is the way we like to clothe and decorate our bodies. Perhaps we should think some more about why we do it but the robots don’t!

Shame or showoff?

The creation story in the Christian Bible suggests humans were thrown out of the Garden of Eden when Adam and Eve felt the need to cover up – when they developed shame. Humans usually wear more than just the bare minimum though, so wearing clothing can’t be all about shame. Nor is it just about practicalities like keeping warm. Turn up at an interview covering your body with the wrong sort of clothes and you won’t get the job. Go to a fancy dress party in the clothes that got you the job and you will probably feel really uncomfortable the moment you see that everyone else is wearing costumes. Clothes are about decorating our bodies as much as covering them.

Our urge to decorate our bodies certainly seems to be a deeply rooted part of what makes us human. After all, anthropologists consider finds like ancient beads as the earliest indications of humanity evolving from apehood. It is taken as evidence that there really was someone ‘in there’ back then. Body painting is used as another sign of our emerging humanity. We still paint our bodies millennia later too. Don’t think we’re only talking about children getting their faces painted – grownups do it too, as the vast make-up industry and the popularity of tattoos show. We put shiny metal and stones around our necks and on our hands too.

The fashion urge

Whatever is going on in our heads, clearly the robots are missing something. Even in the movies the intelligent ones rarely feel the need to decorate their bodies. R2D2? C3PO? Wall-E? The exceptions are the ones created specifically to pass themselves off as human, like in Blade Runner.

You can of course easily program a robot to ‘want’ to decorate itself, or to refuse to leave its bedroom unless it has managed to drape some cloth over its body and shiny wire round its neck, but if it was just following a programmed rule would that be the same as when a human wears clothes? Would it be evidence of ‘someone in there’? Presumably not!

We do it because of an inner need to conform more than an inner need to wear a particular thing. That is what fashion is really all about. Perhaps programming an urge to copy others would be a start. In Wall-E, the robot shows early signs of this as he tries to copy what he sees the humans doing in the old films he watches. At one point he even uses a hubcap as a prop hat for a dance. Human decoration may have started as a part of rituals too.

Where to now?

Is this need to decorate our bodies something special, something linked to what makes us human? Should we be working on what might lead to robots doing something similar of their own accord? When archaeologists are hunting through the rubble in thousands of years’ time, will there be something, the robot equivalent of those ancient beads, that would confirm robot self-awareness? If robots do start to decorate and cover up their bodies because they want to, rather than because it was what some God-like programmer coded them to do, surely something special will have happened. Perhaps that will be the point when the machines have to leave their Garden of Eden too.




This blog is funded through EPSRC grant EP/W033615/1.

The Ultimate (do nothing) machine

by Jo Brodie, Queen Mary University of London

A black box with an on-off switch at ON. The top flips open and a robotic finger pokes out to push the switch back to OFF.
This ultimate machine is a commercially produced version of Minsky’s idea. Image by Drpixie from Wikimedia CC-BY-SA-4.0

In 1952 the computer scientist and playful inventor Marvin Minsky designed a machine which did one thing, and one thing only: it switched itself off. It was just a box with a motor, a switch, and something to flip (toggle) the switch off again after someone turned it on. The science fiction writer Arthur C. Clarke thought there was something ‘unspeakably sinister’ about a machine that exists just to switch itself off, and hobbyist makers continue to create their own variations today.
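
If you fancy making your own, the logic is tiny. Here is a sketch in Python for a Raspberry Pi using the gpiozero library, assuming a toggle switch wired to GPIO pin 2 and a servo-driven finger on pin 17 – the pins and timings are just our choices:

from signal import pause
from time import sleep
from gpiozero import Button, Servo

switch = Button(2)   # the on-off toggle switch
finger = Servo(17)   # the servo driving the robotic finger

def switch_off():
    sleep(0.5)       # a dramatic pause before responding
    finger.max()     # poke the finger out to flip the switch back off
    sleep(0.5)
    finger.min()     # retract back into the box

switch.when_pressed = switch_off
pause()              # wait forever, doing nothing until switched on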




This article was funded by UKRI, through Professor Ursula Martin’s grant EP/K040251/2 and grant EP/W033615/1.

An ode to technology

by Paul Curzon, Queen Mary University of London

Cunning contraptions date back to ancient civilisations.

A female statue staring with head turned
Image by Devanath from Pixabay

People have always been fascinated by automata: robot-style contraptions allowing inanimate animal and human figures to move, long before computers could take the place of a brain.

Records show they were created in ancient Egypt, China and Greece. In the Renaissance, Leonardo designed them for entertainment, and more recently magicians have bedazzled audiences with them.

The island of Rhodes was a centre for mechanical engineering in Ancient Greek times, and the Greeks were great inventors who loved automata. According to an ode by Pindar, the island was covered with them:

The animated figures stand.

Adorning every public street.

And seem to breathe in stone,

Or move their marble feet.

Pindar



This article was funded by UKRI, through Professor Ursula Martin’s grant EP/K040251/2 and grant EP/W033615/1.

Swallow a slug-bot to catch a …

by Paul Curzon, Queen Mary University of London

Imagine swallowing a slug (hint: not only a yucky thought but also not a good idea, as it could kill you)… now imagine swallowing a slug-bot… also yucky, but in the future it might save your life.

When people accidentally swallow solid objects that won’t pass through their digestive system, or are toxic, it can be a big problem. Once an object passes beyond your stomach it becomes hard to get at.

That is where the slug-shaped robot comes in. The idea of scientists at the Chinese University of Hong Kong is that a robot like a slug could crawl down your throat to retrieve whatever you had swallowed.

If you think of robots as solid, hard things then a robot would be the last thing you might want to swallow (aside from an actual slug), and certainly not to catch the previous solid thing you swallowed. You may be right. However, that is where the softness of the slug-shaped robot comes in.

It is easy to make or buy slime-like “silly” putty. Add iron filings to the putty and you can make it stretch and sway, and even move around, with magnets yourself. You can buy such magnetic slime at science museums… it is fun to play with, though you definitely shouldn’t swallow it.

The scientists have taken that general idea and, using special materials, created a similar but highly controllable bot that can be moved around using a magnet-based control system. It is made of a special material that is magnetic and slime-like, but coated in silicon dioxide to stop it being poisonous.

They have shown that they can control it to squeeze through narrow gaps and encircle small objects, carrying them away with it…essentially what would be needed to recover objects that have been swallowed.

It needs a lot more work to make sure it is safe to really be swallowed. Also, to be a real autonomous robot it would need to have sensors included somehow, and to be connected to some sort of intelligent system to automatically control its behaviour. However, with more research that all may become possible.

So in the future, if you don’t fancy swallowing a slug-bot, you’d better be far more careful about what else you swallow first. Of course, if it turns out slug-like robots can break down, and so get stuck themselves, you may then be in the position of needing to swallow a bird-bot to catch the slug-bot. How absurd…




The cs4fn blog is funded by EPSRC, through grant EP/W033615/1.