Strictly Judging Objects

by Paul Curzon and Peter W. McOwan, Queen Mary University of London

Revised from TLC blog and the cs4fn archive.

Strictly Come Dancing has been popular for a long time. It’s not just the celebs, the pros, or the dancing that make it must-see TV – it’s having the right mix of personalities in the judges too. Craig Revel Horwood has made a career of being rude, if always fair. By contrast, Darcey Bussell was always far more supportive. Bruno Tonioli is supportive but always excited. Len Goodman was similarly supportive but calm. Shirley Ballas aims for supportive but strict, while Motsi Mabuse was always supportive and enthusiastic. It’s often the tension between the judges that makes good TV (remember those looks that Darcey always gave Craig, never mind when they started actually arguing). However, if you believe Doctor Who, the judges of the future will be robots like the Anne Droid in the space-age version of The Weakest Link… so let’s look at that Bot future. How might you go about designing computer judges, and how might objects help?

Write the code

We need to write a program. We will use a pseudocode (a mix of code and English) here rather than any particular programming language to make things easier to follow.

The first thing to realise is we don’t want to have to program each judge separately. That would mean describing the rules for every new judge from scratch each time they swap. We want to do as little as possible to describe each new one. Judges have a lot in common so we want to pull out those common patterns and code them up just once.

What makes a judge?

First let’s describe a basic judge. We will create a plan, a bit like an architect’s plan of a building. Programmers call such a plan a ‘class’. The thing to realise about classes is that a class for a Judge is NOT the code of any actual working judge, just the code of how to create one: a blueprint. This blueprint can be used to build as many individual judges as you need.

What’s the X-factor that makes a judge a judge? First we need to decide on some basic properties or attributes of judges. We can make a list of them, and what the possibilities for each are. The things common to all judges are that they have names and personalities, and that they make judgements on people. Let’s simply say a judge’s personality can be either supportive or rude, and their judgement is just a mark out of 10 for whoever they last watched.

Name : String
CharacterTrait : SUPPORTIVE, RUDE
Judgement : 1, 2, 3, 4, 5, 6, 7, 8, 9, 10

We have just created some new ‘types’ in programming terminology. A type is just a grouping of values. The type CharacterTrait has two possible values (SUPPORTIVE or RUDE), whereas the type Judgement has 10 possible values. We also used one common, existing type: String. Strings are just sequences of letters, numbers or symbols, so we are saying something of type Name is any such sequence. (This allows both for futuristic judges and for hip hop and rapper judges – perhaps one day, in retirement, C-3PO will become a judge, but also for the likes of 50 Cent one day to become one.)
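In an object-oriented language these new types could be written down directly. Here is a minimal, hypothetical Python sketch (the names mirror the pseudocode; Python has no built-in 1-to-10 type, so a small checking function stands in for Judgement):

```python
from enum import Enum

# The CharacterTrait type: exactly two possible values.
class CharacterTrait(Enum):
    SUPPORTIVE = 1
    RUDE = 2

# A Judgement is just a whole number from 1 to 10. Python has no
# range-restricted integer type, so a simple check captures the rule.
def is_valid_judgement(mark):
    return isinstance(mark, int) and 1 <= mark <= 10
```

A judge’s name would simply be a Python string, matching the String type in the pseudocode.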

Let’s start to describe Judges as people with a name, personality and capable of thinking of a mark.

DESCRIPTION OF A Judge:
    Name name
    CharacterTrait personality
    Judgement mark

This says that each judge will be described by three variables: one called name, one called personality and one called mark. This kind of variable is called an instance variable, because each judge we create from this blueprint will have their own copy, or instance, of the variables that describe that judge. So we might originally have created a Len judge and so on, but many series later find we need a Motsi one and then an Anton one. Each new judge needs their own copies of the variables that describe them.

All we are saying here in the class (the blueprint) is whenever we create a Judge it will have a name, a personal character (it will be either RUDE or SUPPORTIVE) and a potential mark.

For any given judge we will always refer to their name using the variable name and their character trait using the variable personality. Each new judge will also have a current judgement, which we will refer to as mark: a number between 1 and 10. Notice we use the types we created, Name, CharacterTrait and Judgement, to specify the possible values each of these variables can hold.

Best behaviour

We are now able to say in our judge blueprint, our class, whether a judge is rude or supportive, but we haven’t actually said what that means. We need to set out the actual behaviours associated with being rude and supportive. We will do this here in a fairly simple way, just to illustrate. Let’s assume that the personality shows in the things they say when they give their judgement. A rude judge will say “It was a disaster” unless they are awarding a mark above 8/10. For high marks they will grudgingly say “You were ok I suppose”. We translate this into commands for how to give a judgement.

IF (personality IS RUDE) AND (mark <= 8)
THEN SAY “It was a disaster”

IF (personality IS RUDE) AND (mark > 8)
THEN SAY “You were ok I suppose”

It would be easy for us to give them lots more things to choose to say in a similar way: it’s just more rules. We can do a similar thing for a supportive judge. They will say “You were stunning” if they award more than 5 out of 10, and otherwise say “You tried really hard”.

TO GiveJudgement:
    IF (personality IS RUDE) AND (mark <= 8)
    THEN SAY “It was a disaster”     

    IF (personality IS RUDE) AND (mark > 8)
    THEN SAY “You were ok I suppose”

    IF (personality IS SUPPORTIVE) AND (mark > 5)
    THEN SAY “You were stunning”

    IF (personality IS SUPPORTIVE) AND (mark <= 5)
    THEN SAY “You tried really hard”

A ten from Len

The other thing that judges do is actually come up with their judgement, their mark. For real judges it would be based on rules about what they saw – a lack of pointed toes pulls it down, lots of wiggle at the right times pushes it up… We will assume, to keep it simple here, that they just think of a random number – essentially roll a 10-sided die with the numbers 1-10 on it under the desk!

TO MakeJudgement:
    mark = RANDOM (1 TO 10)

Finally, judges can reveal their mark.

TO RevealMark:
    SAY mark

Notice this doesn’t mean they say the word “mark”. mark is a variable so this means say whatever is currently stored in that judge’s mark.

Putting that all together to make our full judge class we get:

DESCRIPTION OF A Judge:
    Name name
    CharacterTrait personality
    Judgement mark

    TO GiveJudgement:
        IF (personality IS RUDE) AND (mark <= 8)
        THEN SAY “It was a disaster”

        IF (personality IS RUDE) AND (mark > 8)
        THEN SAY “You were ok I suppose”

        IF (personality IS SUPPORTIVE) AND (mark > 5)
        THEN SAY “You were stunning”

        IF (personality IS SUPPORTIVE) AND (mark <= 5)
        THEN SAY “You tried really hard”

    TO MakeJudgement:
        mark = RANDOM (1 TO 10)

    TO RevealMark:
        SAY mark
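The pseudocode class above might translate into Python something like the following. This is only a sketch (the method and variable names are one possible choice, and SAY becomes returning a string that could be printed):

```python
import random
from enum import Enum

class CharacterTrait(Enum):
    SUPPORTIVE = 1
    RUDE = 2

class Judge:
    def __init__(self, name, personality):
        # The instance variables: every Judge object gets its own copies.
        self.name = name
        self.personality = personality
        self.mark = None  # no judgement made yet

    def make_judgement(self):
        # Roll the imaginary 10-sided die under the desk.
        self.mark = random.randint(1, 10)

    def give_judgement(self):
        # The personality decides what gets said for a given mark.
        if self.personality is CharacterTrait.RUDE and self.mark <= 8:
            return "It was a disaster"
        if self.personality is CharacterTrait.RUDE and self.mark > 8:
            return "You were ok I suppose"
        if self.personality is CharacterTrait.SUPPORTIVE and self.mark > 5:
            return "You were stunning"
        return "You tried really hard"

    def reveal_mark(self):
        return self.mark
```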

What is a class?

So what is a class? A class says how to build an object. It defines properties or attributes (like name, personality and current mark) but it also defines behaviours: it can speak, it can make a judgement and it can reveal the current mark. These behaviours are defined by methods – mini-programs that specify how any Judge should behave. Our class says that each Judge will have its own set of the methods that use that Judge’s own instance variables to store its properties and decide what to do.

So a class is a blueprint that tells us how to make particular things: objects. It defines their instance variables – what the state of each judge consists of – and some rules to apply in particular situations. We have so far made a class definition for making Judges. It is important to realise that we haven’t made any actual objects (no actual judge) so far though. Defining a class does not in itself give you any objects: no actual Judges (no Bruno, no Motsi, …) exist to judge anything yet. We need to write specific commands to create them, as we will see.

We can store away our blueprint and just pull it out to make use of it when we need to create some actual judges (eg once a year in the summer when this year’s judges are announced).

Kind words for our contestants?

Suppose Strictly is starting up (as it is as I write, but let’s suppose all the judges will be androids this year), so we want to create some judges, starting with a rude judge called Craig Devil Droidwood. We can use our class as the blueprint to do so. We need to say what its personality is. (Judges only think of a mark when they actually see an act, so we don’t have to give one now.)

CraigDevilDroidwood IS A NEW Judge 
                    WITH name “Craig Devil Droidwood”
                    AND personality RUDE

This creates a new judge called Craig Devil Droidwood and makes it RUDE. We have instantiated the class to give a Judge object. We store this object in a variable called CraigDevilDroidwood.

For a supportive judge that we decide to call Len Goodroid we would just say (instantiating the class in a different way):

LenGoodroid IS A NEW Judge 
            WITH name “Len Goodroid”
            AND personality SUPPORTIVE

Another supportive judge, DarC3PO BussL, would be created with:

DarC3POBussL IS A NEW Judge 
             WITH name “DarC3PO BussL”
             AND personality SUPPORTIVE

Whereas in the class we are describing a blueprint to use to create a Judge, here we are actually using that blueprint and making different Judges from it. So this way we can quickly and easily make new judge clones without copying out all the description again. These commands executed at the start of our program (and TV programme) actually create the objects. They create instances of class Judge, which just means they create actual virtual judges with their own name and personality. They also each have their own copy of the rules for the behaviour of judges.

Execute them

Once actual judges are created, they can execute commands to start the judging. First the program tells them to make judgements using their judgement method. We execute the MakeJudgement method associated with each separate judge object in turn. Each has the same instructions, but those instructions work on that particular judge’s instance variables, so do different things.

EXECUTE MakeJudgement OF CraigDevilDroidwood
EXECUTE MakeJudgement OF DarC3POBussL
EXECUTE MakeJudgement OF LenGoodroid

Then the program has commands telling them to say what they think,

EXECUTE GiveJudgement OF CraigDevilDroidwood
EXECUTE GiveJudgement OF DarC3POBussL
EXECUTE GiveJudgement OF LenGoodroid

and finally give their mark.

EXECUTE RevealMark OF CraigDevilDroidwood
EXECUTE RevealMark OF DarC3POBussL
EXECUTE RevealMark OF LenGoodroid

In our actual program this would sit in a loop so our program might be something like:

CraigDevilDroidwood IS A NEW Judge 
                    WITH name “Craig Devil Droidwood”
                    AND personality RUDE
DarC3POBussL        IS A NEW Judge 
                    WITH name “DarC3PO BussL”
                    AND personality SUPPORTIVE
LenGoodroid         IS A NEW Judge 
                    WITH name “Len Goodroid”
                    AND personality SUPPORTIVE

FOR EACH contestant DO THE FOLLOWING
    EXECUTE MakeJudgement OF CraigDevilDroidwood
    EXECUTE MakeJudgement OF DarC3POBussL
    EXECUTE MakeJudgement OF LenGoodroid

    EXECUTE GiveJudgement OF CraigDevilDroidwood
    EXECUTE GiveJudgement OF DarC3POBussL
    EXECUTE GiveJudgement OF LenGoodroid

    EXECUTE RevealMark OF CraigDevilDroidwood
    EXECUTE RevealMark OF DarC3POBussL
    EXECUTE RevealMark OF LenGoodroid

So we can now create judges to our heart’s content, fixing their personalities and putting the words in their mouths based on our single description of what a Judge is. Of course our behaviours so far are simple. We really want to add more kinds of personality like strict judges (Shirley) and excited ones (Bruno). Ideally we want to be able to do different combinations making perhaps excited rude judges as well as excited supportive ones. This really just takes more rules.

A classless society?

Computer Scientists are lazy beings – if they can find a way to do something that involves less work, they do it, allowing them to stay in bed longer. The idea we have been using to save work here is just that of describing classes of things and their properties and behaviour. Scientists have been doing a similar thing for a long time:

Birds have feathers (a property) and lay eggs (a behaviour).

Spiders have eight legs (a property) and make silk (a behaviour)

We can say something is a particular instance of a class of thing and that tells us a lot about it without having to spell it all out each time, even for fictional things: eg Hedwig is a bird (so feathers and eggs); Charlotte is a spider (so legs and silk). The class captures the common patterns behind the things we are describing. The difference when Computer Scientists write classes is that, because they are programs, they can then come alive!

All change

We have specified what it means to be a robotic judge and we’ve only had to specify the basics of Judgeness once to do it. That means that if we decide to change anything in the basic judge (like giving them a better way to come up with a mark than doing it randomly or having them choose things to say from a big database of supportive or rude comments) changing it in the plan will apply to all the judges of whatever kind. That is one of the most powerful reasons for programming in this way.

We could create robot performers in a similar way (after all don’t all the winners seem to merge into one in the long run?). We would then also have to write some instructions about how to work out who won – does the audience have a vote? How many get knocked out each week? … and so on.

Of course we’ve not written a full program, even for judges, just sketched a program in pseudocode. The next step is to convert this into a program in an object-oriented programming language like Python or Java. Is that hard? Why not give it a try and judge for yourself?



This blog is funded through EPSRC grant EP/W033615/1.

The First Law of Humans

Preventing robots from hurting us

by Paul Curzon, Queen Mary University of London

A chess board with pieces lined up
Image by Pexels from Pixabay 

The first rule of humans when around robots is apparently that they should not do anything too unexpected…

A 7-year-old child playing chess in a chess tournament has had his finger broken by a chess-playing robot, which grabbed it as the boy made his move. The child was blamed by one of the organisers, who claimed that it happened because the boy “broke the safety rules”! The organisers also apparently claimed that the robot itself was perfectly safe!

What seems to have happened is, after the robot played its move, the boy played his own move very quickly before the robot had finished. Somehow this triggered the wrong lines of code in the robot’s program: instructions that were intended for some other situation. In the actual situation of the boy’s hand being over the board at the wrong time it led the robot to grab his finger and not let go.

Spoiler Alert

The situation immediately brings to mind the classic science fiction story “Moxon’s Master” by Ambrose Bierce, published way back in 1899. It is the story of a chess-playing automaton (ie robot) and what happens when it is checkmated in a game with its creator, Moxon. It flies into a dangerous rage. There, though, the problems apparently arise because the robot has developed emotions, and so emotional reactions. In both situations, however, a robot intended simply to play chess is capable of harming a human as a result.

The three laws of robotics

Isaac Asimov is famous for his laws of robotics: fundamental unbreakable rules built in to the ‘brain’ of all robots in his fictional world precisely to stop this sort of situation. The rules he formulated were:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
A robot carefully touching fingers with a human
Image by Pete Linforth from Pixabay 

Clearly, had they been in place, the chess robot would not have harmed the boy, and Moxon’s automaton would not have been able to do anything too bad as a result of its temper either.

Asimov devised his rules as a result of a discussion with the science fiction magazine editor John W. Campbell, Jr. He then spent much of his science fiction career writing robot stories about how humans could end up being hurt despite the apparently clear rules. That aside, their key idea was that, to ensure robots were safe around people, they would need built-in logic, impossible to circumvent, that stopped them hurting humans. They needed a fail-safe system monitoring their actions that would take over when breaches were possible. Clearly this chess-playing robot was not “perfectly safe”, and not even fail-safe: if it had been, the boy would not have been harmed whatever he did. The robot did not have anything at all akin to a working, unbreakable First Law programmed into it.

Dangerous Machines

Asimov’s robots were intelligent and able to think for themselves in a more general sense than any that currently exist. The First Law essentially prevented them from deciding to harm a human, not just from doing so by accident. Perhaps the day will soon come when robots can start to think for themselves, and then a First Law will be important. In any case, machines can harm humans without being able to think. That humans need to be wary around robots is obvious from the numerous injuries, and even fatalities, in factories using industrial robots in the decades since they were introduced. They are dangerous machines. Fortunately, the carnage inflicted on children is at least not quite that of the industrial accidents of the Industrial Revolution. It is still a problem though. People do have to take care and follow safety rules around them!

Rather than humans having to obey safety laws, perhaps we ought to be taking Asimov’s Laws more seriously for all robots. Why can’t those laws just be built in? It is certainly an interesting research problem to think about. The idea of a fail-safe is standard in engineering, so it’s not that general idea that is the problem. The problem is that, rather than intelligence being needed for robots to harm us, intelligence is needed for them to avoid doing so.

Implementing the First Law

Let’s imagine building the First Law into chess-playing robots, and in particular into the one that hurt the boy. For starters, the robot would need to have recognised that the boy WAS a human, so should not be harmed. It would also need to be able to recognise that his finger was a part of him and that gripping his finger would harm him. It would need to know that it was gripping his finger (not a piece) at the time. It would then need a way to stop before it was too late, and to do no harm in the stopping. It clearly needs to understand a lot about the world to be able to avoid hurting people in general.

Some of this is almost within our grasp. Computers can certainly do a fairly good job of recognising humans now through image recognition code. They can even recognise individuals, so that first fundamental part, knowing what is and isn’t a human, is more or less possible now, just not perfectly yet. Recognising objects in general is perhaps harder. The chess robot presumably has code for recognising pieces already, though a finger perhaps even looks like a piece, at least to a robot. To avoid causing harm in any situation, it needs to be able to recognise what lots of objects are, not just chess pieces. It also needs to differentiate what is part of a human, not just what is a human. Object recognition like this is possible, at least in well-defined situations. It is much harder to manage in general, especially when the precise objects have never been encountered before. Harder still, probably, is recognising all the ways harm could be done to the human in front of it, including with any of the objects that are around.

Staying in control

The code to do all this would also have to be, in some sense, at a higher level of control than the code making the robot take actions, as it always has to be able to overrule them. For the chess robot, there was presumably a bug that allowed it to grip a human’s finger, as no programmer will have intended that, so it isn’t about monitoring the code itself. The fail-safe code has to monitor what is actually happening in the world and be in a position to take over. It also can’t just make the robot freeze, as that may be enough to do the damage of a broken finger if one is already in the robot’s grip (and that may have been part of the problem for the boy). Nor can it just pull its arm back suddenly: what if another child (a toddler perhaps) has just crawled up behind it? It has to monitor the effects of its own commands too! A simple version of such a monitor is probably straightforward though. The robot’s computer architecture just needs to be designed accordingly. One way robots are designed is for new modules to be built on top of existing ones, giving new, more complex behaviour as a result, which possibly fits what is needed here. Having additional computers acting as monitors, to take over when others go wrong, is also not really that difficult (bugs in their own code aside) and is a standard idea for mission-critical systems.

So it is probably all the complexity of the world with unexpected things happening in it that is the problem that makes a general version of the First Law hard at the moment… If Asimov’s laws in all their generalness are currently a little beyond us, perhaps we should just think about the problem in another more limited way (at least for now)…

Can a chess playing robot be safe?

In the chess game situation, if anything is moving in front of the robot then it perhaps should just be keeping well out of the way. To do so just needs monitoring code that can detect movement in a small fixed area. It doesn’t need to understand anything about the world apart from movement versus non-movement. That is easily in the realms of what computers can do – even some simple toy robots can detect movement. The monitoring code would still need to be able to override the rest of the code of course, bugs included.
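As a toy illustration of that idea (everything here is hypothetical: the command names, the sensor, and the simple veto rule), a monitor that always gets the final say might look like:

```python
# A sketch of a fail-safe monitor: before any arm command is carried out,
# the monitor checks a movement sensor covering a small fixed area over
# the board, and vetoes the command, withdrawing instead, if anything moves.

def safe_step(command, motion_detected):
    """Return the action actually sent to the arm."""
    if motion_detected:
        # Override whatever the chess-playing code wanted, bugs included.
        return "WITHDRAW_ARM"
    return command

# Simulated sequence: (intended command, did the sensor see movement?)
moves = [("GRAB_PIECE", False), ("MOVE_PIECE", True), ("RELEASE_PIECE", False)]
actions = [safe_step(cmd, motion) for cmd, motion in moves]
```

The crucial design point is that safe_step wraps every command: the monitor sits above the playing code, so even buggy instructions get checked.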

Why, also, could the robot grip a finger with enough pressure to break it in the first place? Perhaps it just needed more accurate sensors in its fingers to avoid doing harm, together with a rule that made it let go if it felt too much resistance. After all, chess pieces don’t resist much!

And one last idea, if a little bit more futuristic. A big research area at the moment is soft robotics: robots that are soft and squidgy not hard and solid, precisely so they can do less harm. Perhaps if the chess robot’s robotic claw-like fingers had instead been totally soft and squishy it would not have harmed him even if it did grab his finger.

Had the robot designers tried hard enough they surely could have come up with solutions to make it safer, even if they didn’t have good enough debugging skills to prevent the actual bug that caused the problem. It needs safety to be a very high priority from the outset though: and certainly not safety that is just pushed onto the humans to be responsible for, as the organisers did.

We shouldn’t be blaming children for not obeying safety rules when they are given what is essentially a hard, industrial robot to play with. Doing so just lets the robot makers off the hook from even trying to make their robots safer, when they clearly could do more. When disasters happen don’t blame the people, improve the system. On the other hand perhaps we should be thinking far more about doing the research that will allow us one day to actually implement Asimov’s Laws in all robots and so have a safety culture in robotics built-in. Then perhaps people would not have to be quite so wary around robots and certainly not have to follow safety rules themselves. That surely is the robot’s job.





Dressing it up

Why it might be good for robots to wear clothes

by Peter W McOwan and the CS4FN team, Queen Mary University of London

Updated from the archive

(Robot) dummies in different clothes standing in a line up a slope
Image by Peter Toporowski from Pixabay 

Even though most robots still walk around naked, the Swedish Institute of Computer Science (SICS) in Stockholm explored how to produce fashion conscious robots.

The applied computer scientists there were looking for ways to make the robots of today easier for us to get along with. As part of the LIREC project to build the first robot friends for humans they examined how our views of simple robots change when we can clothe and customise them. Does this make the robots more believable? Do people want to interact more with a fashionable robot?

How do you want it?

These days most electronic gadgets allow the human user to customise them. For example, on a phone you can change the background wallpaper or colour scheme, the ringtone or how the menus work. The ability of the owner to change the so-called ‘look and feel’ of software is called end-user programming. It’s essentially up to you how your phone looks and what it does.

Dinosaurs waking and sleeping

The Swedish team began by taking current off-the-shelf robots and adding dress-up elements to them. Enter Pleo, a toy dinosaur ‘pet’ able to learn as you play with it. Now add in that fashion twist. What happens when you can play dress up with the dinosaur? Pleo’s costumes change its behaviour, kind of like what happens when you customise your phone. For example, if you give Pleo a special watchdog necklace the robot remains active and ‘on guard’. Change the costume from necklace to pyjamas, and the robot slowly switches into ‘sleep’ mode. The costumes or accessories you choose communicate electronically with the robot’s program, and its behaviour follows suit in a way you can decide. The team explored whether this changed the way people played with them.

Clean sweeps

In another experiment the researchers played dress up with a robot vacuum cleaner. The cleaner rolls around the house sweeping the floor, and has already proven a hit with many consumers. It bleeps happily as its on-board computer works out the best path to bust your carpet dust. The SICS team gave the vacuum a special series of stick-on patches, which could add to its basic programming. They found that choosing the right patch could change the way humans perceive the robot’s actions. Different patches can make humans think the robot is curious, aggressive or nervous. There’s even a shyness patch that makes the robot hide under the sofa.

What’s real?

If humans are to live in a world populated by robots there to help them, the robots need to be able to play by our rules. Humans have whole parts of their brains given over to predicting how other humans will react. For example, we can empathise with others because we know that other beings have thoughts like us, and we can imagine what they think. This often spills over into anthropomorphism, where we give human characteristics to non-human animal or non-living things. Classic examples are where people believe their car has a particular personality, or think their computer is being deliberately annoying – they are just machines but our brains tend to attach motives to the behaviours we see.

Real-er robots?

Robots can produce very complex behaviours depending on the situations they are in and the ways we have interacted with them, which creates the illusion that they have some sort of ‘personality’ or motives in the way they are acting. This can help robots seem more natural and able to fit in with the social world around us. It can also improve the ways they provide us with assistance because they seem that bit more believable. Projects like the SICS’s ‘actDresses’ one help us by providing new ways that human users can customise the actions of their robots in a very natural way, in their case by getting the robots to dress for the part.





The Mummy in an AI world: Jane Webb’s future

by Paul Curzon, Queen Mary University of London

The sarcophagus of a mummy
Image by albertr from Pixabay

Inspired by Mary Shelley’s Frankenstein, the 17-year-old Victorian orphan Jane Webb secured her future by writing the first ever Mummy story. The 22nd-century world in which her novel was set is perhaps the most amazing thing about the three-volume book though.

On the death of her father, Jane realised she needed to find a way to support herself, and did so by publishing her novel “The Mummy!” in 1827. In contrast to their modern version as stars of horror films, Webb’s Mummy, a reanimation of Cheops, was actually there to help those doing good and punish those that were evil. Napoleon had, at the start of the century, invaded Egypt, taking with him scholars intent on understanding Ancient Egyptian society. Europe was fascinated with Ancient Egypt and awash with Egyptian artefacts and stories around them. In London, the Egyptian Hall had been built in Piccadilly in 1812 to display Egyptian artefacts, and in 1821 it displayed a replica of the tomb of Seti I. The Rosetta Stone led to hieroglyphics finally being deciphered in 1822. The time was therefore ripe for someone to come up with the idea of a Mummy story.

The novel was not, however, set in Victorian times but in a 22nd-century future that she imagined, and that future was perhaps more amazing than the idea of a mummy coming to life. Her version of the future was full of technological inventions supporting humanity, as well as social predictions, many of which have come to fruition, such as space travel and the idea that women might wear trousers as the height of fashion (making her a feminist hero). The machines she described in the book led to her meeting her future husband, John Loudon. As a writer about farming and gardening, he was so impressed by the idea of a mechanical milking machine included in the book that he asked to meet her. They married soon after (and she became Jane Loudon).

The skilled artificial intelligences she wrote into her future society are perhaps the most amazing of her ideas in that she was the first person to really envision in fiction a world where AIs and robots were embedded in society just doing good as standard. To put this into context of other predictions, Ada Lovelace wrote her notes suggesting machines of the future would be able to compose music 20 years later.

Jane Webb’s future was also full of cunning computational contraptions: there were steam-powered robot surgeons, foreseeing the modern robots that are able to do operations (and, with their steady hands, are better than humans at eye surgery, for example). She also described Artificial Intelligences replacing lawyers. Her machines were fed their legal brief, giving them instructions about the case, through tubes. Whilst robots may not yet have fully replaced barristers and judges, artificial intelligence programs are already used in some places, for example, to decide the length of sentences of those convicted, and many now see it as only a matter of time before lawyers are working with Artificial Intelligence programs as standard. Jane’s world also includes a version of the Internet, at a time before the electric telegraph existed, when telegraph messages were sent by semaphore between networks of towers.

The book ultimately secured her future as required, and whilst we do not yet have any real reanimated mummies wandering around doing good deeds, Jane Webb did envision lots of useful inventions, many of which are now a reality, and certainly had pretty good ideas about how future computer technology would pan out in society…despite computers, never mind artificial intelligences, still being well over a century away.


More on …

Related Magazines …


EPSRC supported this article through research grants EP/K040251/2 (held by Professor Ursula Martin) and EP/W033615/1.

The naked robot

by Paul Curzon, Queen Mary University of London

From the archive

A naked robot holding a flower
Image by bamenny from Pixabay 

Why are so many film robots naked? We take it for granted that robots don’t wear clothes, and why should they?

They are machines, not humans, after all. On the other hand, the quest to create artificial intelligence involves trying to create machines that share the special ingredients of humanity. One of the things that is certainly special about humans in comparison to other animals is the way we like to clothe and decorate our bodies. Perhaps we should think some more about why we do it but the robots don’t!

Shame or showoff?

The creation story in the Christian Bible suggests humans were thrown out of the Garden of Eden when Adam and Eve felt the need to cover up – when they developed shame. Humans usually wear more than just the bare minimum though, so wearing clothing can’t be all about shame. Nor is it just about practicalities like keeping warm. Turn up at an interview covering your body with the wrong sort of clothes and you won’t get the job. Go to a fancy dress party in the clothes that got you the job and you will probably feel really uncomfortable the moment you see that everyone else is wearing costumes. Clothes are about decorating our bodies as much as covering them.

Our urge to decorate our bodies certainly seems to be a deeply rooted part of what makes us human. After all, anthropologists consider finds like ancient beads as the earliest indications of humanity evolving from apehood. It is taken as evidence that there really was someone ‘in there’ back then. Body painting is used as another sign of our emerging humanity. We still paint our bodies millennia later too. Don’t think we’re only talking about children getting their faces painted – grownups do it too, as the vast make-up industry and the popularity of tattoos show. We put shiny metal and stones around our necks and on our hands too.

The fashion urge

Whatever is going on in our heads, clearly the robots are missing something. Even in the movies, the intelligent ones rarely feel the need to decorate their bodies. R2D2? C3PO? Wall-E? The exceptions are the ones created specifically to pass themselves off as human, like in Blade Runner.

You can of course easily program a robot to ‘want’ to decorate itself, or to refuse to leave its bedroom unless it has managed to drape some cloth over its body and shiny wire round its neck, but if it was just following a programmed rule would that be the same as when a human wears clothes? Would it be evidence of ‘someone in there’? Presumably not!

We do it because of an inner need to conform more than an inner need to wear a particular thing. That is what fashion is really all about. Perhaps programming an urge to copy others would be a start. In Wall-E, the robot shows early signs of this as he tries to copy what he sees the humans doing in the old films he watches. At one point he even uses a hubcap as a prop hat for a dance. Human decoration may have started as a part of rituals too.

Where to now?

Is this need to decorate our bodies something special, something linked to what makes us human? Should we be working on what might lead to robots doing something similar of their own accord? When archaeologists are hunting through the rubble in thousands of years’ time, will there be something other than beads that would confirm their robot equivalent to self-awareness? If robots do start to decorate and cover up their bodies because they want to rather than because it was what some God-like programmer coded them to do, surely something special will have happened. Perhaps that will be the point when the machines have to leave their Garden of Eden too.




This blog is funded through EPSRC grant EP/W033615/1.

The Ultimate (do nothing) machine

by Jo Brodie, Queen Mary University of London

A black box with an on-off switch set to ON. The top flips open and a robotic finger pokes out to push the switch back to OFF.
This ultimate machine is a commercially produced version of Minsky’s idea. Image by Drpixie from Wikimedia CC-BY-SA-4.0

In 1952 computer scientist and playful inventor Marvin Minsky designed a machine which did one thing, and one thing only: it switched itself off. It was just a box with a motor, a switch, and something to flip (toggle) the switch off again after someone turned it on. Science fiction writer Arthur C. Clarke thought there was something ‘unspeakably sinister’ about a machine that exists just to switch itself off, and hobbyist makers continue to create their own variations today.




This article was funded by UKRI, through Professor Ursula Martin’s grant EP/K040251/2 and grant EP/W033615/1.

An ode to technology

by Paul Curzon, Queen Mary University of London

Cunning contraptions date back to ancient civilisations.

A female statue staring with head turned
Image by Devanath from Pixabay

People have always been fascinated by automata: robot-style contraptions allowing inanimate animal and human figures to move, long before computers could take the place of a brain.

Records show they were created in ancient Egypt, China, and Greece. In the Renaissance, Leonardo da Vinci designed them for entertainment, and more recently magicians have bedazzled audiences with them.

The island of Rhodes was a centre for mechanical engineering in Ancient Greek times, and the Greeks were great inventors who loved automata. According to an Ode by Pindar the island was covered with automata:

The animated figures stand.

Adorning every public street.

And seem to breathe in stone,

Or move their marble feet.

Pindar

More on …

Cunning Computational Contraptions

Related Magazines …


This article was funded by UKRI, through Professor Ursula Martin’s grant EP/K040251/2 and grant EP/W033615/1.

Swallow a slug-bot to catch a …

by Paul Curzon, Queen Mary University of London

Imagine swallowing a slug (hint: not only a yucky thought, but also not a good idea, as it could kill you)…now imagine swallowing a slug-bot… also yucky, but in the future it might save your life.

When people accidentally swallow solid objects that won’t pass through their digestive system, or are toxic, it can be a big problem. Once an object passes beyond your stomach it becomes hard to get at.

That is where the slug shaped robot comes in. The idea of scientists at the Chinese University of Hong Kong is that a robot like a slug could crawl down your throat to retrieve whatever you had swallowed.

If you think of robots as solid, hard things then that would be the last thing you might want to swallow (aside from an actual slug), and certainly not to catch the previous solid thing you swallowed. You may be right. However, that is where the soft slug-shaped robot comes in.

It is easy to make or buy slime-like “silly” putty. Add iron filings to slime putty and you can make it stretch and sway and even move around with magnets yourself. You can buy such magnetic slime at science museums…it is fun to play with though you definitely shouldn’t swallow it.

The scientists have taken that general idea, though, and, using special materials, created a similar, highly controllable bot that can be moved around using a magnet-based control system. It is made of a special material that is magnetic and slime-like, but coated in silicon dioxide to stop it being poisonous.

They have shown that they can control it to squeeze through narrow gaps and encircle small objects, carrying them away with it…essentially what would be needed to recover objects that have been swallowed.

It needs a lot more work to make sure it is safe to really be swallowed. Also, to be a real autonomous robot, it would need to have sensors included somehow, and be connected to some sort of intelligent system to automatically control its behaviour. However, with more research that all may become possible.

So in the future if you don’t fancy swallowing a slug-bot, you’d better be far more careful about what else you swallow first. Of course, if it turns out slug like robots can break down, so get stuck themselves, you may then be in a position of needing to swallow a bird-bot to catch the slug-bot. How absurd …




The cs4fn blog is funded by EPSRC, through grant EP/W033615/1.

How to get a head in robotics (includes a free papercraft activity with a robot that expresses ’emotions’)

by Paul Curzon, Queen Mary University of London

EMYS robot

If humans are ever to get to like and live with robots we need to understand each other. One of the ways that people let others know how they are feeling is through the expressions on their faces. A smile or a frown on someone’s face tells us something about how they are feeling and how they are likely to react. Some scientists think it might be possible for robots to express feelings this way too, but understanding how a robot can usefully express its ‘emotions’ (what its internal computer program is processing and planning to do next), is still in its infancy. A group of researchers in Poland, at Wroclaw University of Technology, have come up with a clever new design for a robot head that could help a computer show its feelings. It’s inspired by the Teenage Mutant Ninja Turtles cartoon and movie series.

The real Teenage Mutant Ninja Turtle
Their turtle-inspired robotic head, called EMYS (which stands for EMotive headY System), is cleverly also the name of a European pond turtle, Emys orbicularis. Taking his inspiration from cartoons, the project’s principal ‘head’ designer Jan Kedzierski created a mechanical marvel that can convey a whole range of different emotions by tilting a pair of movable discs, one of which contains highly flexible eyes and eyebrows.

The real Emys orbicularis (European pond turtle)

Eye see
The lower disc imitates the movements of the human lower jaw, while the upper disk can mimic raising the eyebrows and wrinkling the forehead. There are eyelids and eyebrows linked to each eye. Have a look at your face in the mirror, then try pulling some expressions like sadness and anger. In particular look at what these do to your eyes. In the robot, as in humans, the eyelids can move to cover the eye. This helps in the expression of emotions like sadness or anger, as your mirror experiment probably showed.

Pop eye
But then things get freaky and fun. Following the best traditions of cartoons, when EMYS is ‘surprised’ the robot’s eyes can shoot out to a distance of more than 10 centimetres! This well-known ‘eyes out on stalks’ cartoon technique, which deliberately over-exaggerates how people’s eyes widen and stare when they are startled, is something we instinctively understand even though our eyes don’t really do this. It makes use of the fact that cartoons take the real world to extremes, and audiences understand and are entertained by this sort of comical exaggeration. In fact it’s been shown that people are faster at recognising cartoons of people than recognising the unexaggerated original.

High tech head builder
The mechanical internals of EMYS consist of lightweight aluminium, while the covering external elements, such as the eyes and discs, are made of lightweight plastic using 3D rapid prototyping technology. This technology allows a design on the computer to be ‘printed’ in plastic in three dimensions. The design in the computer is first converted into a stack of thin slices. Each slice of the design, from the bottom up, individually oozes out of a printer and on to the slice underneath, so layer-by-layer the design in the computer becomes a plastic reality, ready for use.
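The layer-by-layer idea can be sketched in a few lines of code. The example below is just a toy illustration, slicing an imaginary sphere-shaped part into printable layers; the numbers and the shape are made up, and this is not the real software used to print EMYS’s parts:

```python
# Toy sketch of 3D-printing slicing: a solid shape (here a sphere) is
# cut into thin horizontal layers, which a printer would then build up
# from the bottom, one slice at a time.
import math

def slice_sphere(radius, layer_height):
    """Return (height, cross-section radius) for each horizontal slice."""
    layers = []
    z = -radius
    while z <= radius:
        # The sphere's cross-section at height z is a circle.
        r = math.sqrt(max(0.0, radius**2 - z**2))
        layers.append((round(z, 2), round(r, 2)))
        z += layer_height
    return layers

# Each line is one slice the printer would ooze out, bottom up.
for z, r in slice_sphere(radius=1.0, layer_height=0.5):
    print(f"layer at height {z}: print a circle of radius {r}")
```

The widest layer comes out in the middle of the sphere, just as you would expect from stacking up ever-changing circles.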

Facing the future
A ‘gesture generator’ computer program controls the way the head behaves. Expressions like ‘sad’ and ‘surprised’ are broken down into a series of simple commands to the high-speed motors, moving the various lightweight parts of the face. In this way EMYS can behave in an amazingly fluid way – its eyes can ‘blink’, its neck can turn to follow a person’s face or look around. EMYS can even shake or nod its head. EMYS is being used on the Polish group’s social robot FLASH (FLexible Autonomous Social Helper) and also with other robot bodies as part of the LIREC project (www.lirec.eu [archived]). This big project explores the question of how robot companions could interact with humans, and helps find ways for robots to usefully show their ‘emotions’.
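To get a feel for what a ‘gesture generator’ might look like inside, here is a toy sketch in Python. The expression names and motor positions are invented for this example, not EMYS’s real control code:

```python
# A minimal sketch of a 'gesture generator': a named expression is
# broken down into simple commands for the head's motors.
# Motor names and target positions (0-100) are made up for illustration.
EXPRESSIONS = {
    "sad":       {"upper_disc": 20, "lower_disc": 60, "eyelids": 70},
    "surprised": {"upper_disc": 90, "lower_disc": 80, "eyelids": 0,
                  "eyes_out": 100},
    "neutral":   {"upper_disc": 50, "lower_disc": 50, "eyelids": 10},
}

def gesture_commands(expression):
    """Turn an expression name into a list of simple motor commands."""
    targets = EXPRESSIONS[expression]
    return [f"MOVE {motor} TO {pos}"
            for motor, pos in sorted(targets.items())]

for command in gesture_commands("surprised"):
    print(command)
```

A real controller would also sequence and time these moves smoothly, which is exactly where EMYS’s fluid blinking and head-turning comes from.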

Do try this at home
In this issue, there is a chance for you to program an EMYS-like robot. Follow the instructions on the Emotion Machine in the centre of the magazine (see printable version below) and build your own EMYS. By selecting a series of different commands in the Emotion Engine boxes, the expression on EMYS’s face will change. How many different expressions can you create? What are the instructions you need to send to the face for a particular expression? What emotion do you think that expression looks like – how would you name it? What would you expect the robot to be ‘feeling’ if it pulled that face?

Print, cut out and make your own emotional robot. The strips of paper at the top (‘sliders’) containing the expressions and letters are slotted into the grooves on the robot’s face, and happy or annoyed faces can be created by moving the sliders.

Go further
Why not draw your own sliders, with different eye shapes, mouth shapes and so on. Explore and experiment! That’s what computer scientists do.

*****************************

This article was originally published on CS4FN (Computer Science For Fun) and on page 7 of issue 13 of the CS4FN magazine. You can download a free PDF copy of that issue, as well as all of our other free magazines and booklets.

 

Standup Robots

‘How do robots eat pizza?’… ‘One byte at a time’. Computational Humour is real, but it’s not jokes about computers, it’s computers telling their own jokes.

Robot performing
Image from istockphoto

Computers can create art, stories, slogans and even magic tricks. But can computers perform themselves? Can robots invent their own jokes? Can they tell jokes?

Combining Artificial Intelligence, computational linguistics and humour studies (yes, you can study how to be funny!), a team of Scottish researchers made an early attempt at computerised standup comedy! They came up with Standup (System To Augment Non-speakers’ Dialogue Using Puns): a program that generates riddles for kids with language difficulties. Standup has a dictionary and a joke-building mechanism, but does not perform; it just creates the jokes. The software is available to download, so you can judge for yourself whether the puns are funny. What makes a pun funny? It is about a word having two meanings at exactly the same time in a sentence. It is also about generating an expectation that you then break: a key idea at the core of creativity too.
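To see how a riddle builder can exploit double meanings, here is a toy sketch in Python. Its tiny ‘dictionary’ of ambiguous words and its single joke template are made up for illustration; the real Standup system is far more sophisticated:

```python
# Toy riddle builder: set up an expectation with one meaning of a
# word, then break it with the other. The 'dictionary' of words with
# two meanings is invented for this example.
AMBIGUOUS = {
    "byte": ("a unit of computer data", "a mouthful of food"),
    "mouse": ("a pointing device", "a small rodent"),
}

def make_riddle(word):
    """Build a riddle from a word with two meanings."""
    meaning1, meaning2 = AMBIGUOUS[word]
    return f"What is {meaning1} that is also {meaning2}? A {word}!"

print(make_riddle("mouse"))
```

Even this toy version shows the two ingredients the article describes: a dictionary of double meanings, plus a mechanism for slotting them into a joke shape.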

A research team at Virginia Tech in the US created a system that started to learn about funny pictures. Having defined a ‘funniness score’ they created a computational model for humorous scenes, and trained it to predict funniness, perhaps with an eye to spotting pics for social media posting, or not.

But are there funny robots out there? Yes! RoboThespian programmed by researchers at Queen Mary University of London, and Data, created by researchers at Carnegie Mellon University are both robots programmed to do stand-up comedy. Data has a bank of jokes and responds to audience reaction. His developers don’t actually know what he will do when he performs, as he is learning all the time. At his first public gig, he got the crowd laughing, but his timing was poor. You can see his performance online, in a TED Talk.

RoboThespian did a gig at the London Barbican alongside human comedians. The performance was a live experiment to understand whether the robot could ‘work the audience’ as well as a human comedian. They found that even relatively small changes in the timing of delivery make a big difference to audience response.

What have these all got in common? Artificial Intelligence, machine learning and studies to understand what humour actually is are being combined to make something that is funny. Comedy is perhaps the pinnacle of creativity. It’s certainly not easy for a human to write even one joke, so think how hard it is to distill that skill into algorithms and train a computer to create loads of them.

You have to laugh!

Watch RoboThespian [EXTERNAL]

– Jane Waite, Queen Mary University of London, Summer 2017

Download Issue 22 of the cs4fn magazine “Creative Computing” here

Lots more computing jokes on our Teaching London Computing site