Aaron and the art of art

Aaron is a successful American painter. Aaron’s delicate and colourful compositions on canvas sell well in the American art market, and have been exhibited worldwide, including at London’s Tate Modern gallery and the San Francisco Museum of Modern Art. Oh, and by the way, Aaron is a robot!

Yes, Aaron is a robot, controlled by artificial intelligence, and part of a lifelong experiment undertaken by the late Harold Cohen to create a creative machine. Aaron never paints the same picture twice; it doesn’t simply recall pictures from some big database. Instead Aaron has been programmed to work autonomously. That is, once it starts there is no further human intervention: Aaron just draws and paints, following the rules for art that it has been taught.

Perfecting the art of painting

Aaron’s computer program has grown and developed over the years and, like other famous painters, Aaron has passed through a number of artistic periods. Back in the early 1970s all Aaron could do was draw simple shapes, albeit shapes that looked hand drawn – not the sorts of precise geometric shapes that normal computer graphics produced. No, Aaron was going to be a creative artist. In the late 1970s Aaron learned something about artistic perspective, namely that objects in the foreground of a picture are drawn larger than objects in its background. By the late 80s Aaron could draw human figures, knowing how the various shapes of the human body join together, and later learning how those shapes change as a body moves in three dimensions. Now Aaron knows how to add colour to its drawings, to get those clever compositions of shades just spot on and to produce bold, unique pictures, painted with brush on canvas by its robotic arm.

It’s what you know that counts

When creating a new painting Aaron draws on two types of knowledge. First, Aaron knows about things in the real world: the shapes that make up the human body, or a simple tree. This so-called declarative (declared) knowledge is encoded in rules in Aaron’s programming. It’s a little like human memory: you know something about how the different shapes in the world work, and that information is stored somewhere in your brain. The second type of knowledge Aaron uses is called procedural knowledge. Procedural knowledge allows you to move (proceed) from a start to an end through a chain of connected steps. Aaron, for example, knows how to proceed through painting the areas of a scene to get the colour balance correct and, in particular, to get the tone or brightness of each colour right. Tone is often more artistically important than the actual colours themselves. Inside Aaron’s computer program these two types of knowledge, declarative and procedural, continuously interact with each other in complex ways. Perhaps this blending of the two types of knowledge is the root of artistic creativity?
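
We don’t know how Cohen actually organised Aaron’s code, but a toy sketch in Python can illustrate the distinction (everything here is invented for illustration). The declarative knowledge is pure data – facts about trees and perspective – while the procedural knowledge is a recipe of steps that uses those facts:

# A hypothetical sketch, NOT Aaron's actual code, of the two kinds of knowledge.

# Declarative knowledge: facts about things in the world, stored as data.
TREE_FACTS = {
    "parts": ["trunk", "branches", "leaves"],
    "foreground_scale": 1.0,  # nearer objects are drawn larger...
    "background_scale": 0.4,  # ...than objects in the background
}

# Procedural knowledge: how to get from start to finish through
# a chain of connected steps.
def paint_tree(in_foreground: bool) -> None:
    scale = (TREE_FACTS["foreground_scale"] if in_foreground
             else TREE_FACTS["background_scale"])
    for part in TREE_FACTS["parts"]:  # work through the parts in order,
        print(f"painting {part} at scale {scale}")  # adjusting as we go

paint_tree(in_foreground=True)   # a foreground tree, drawn large
paint_tree(in_foreground=False)  # a background tree, drawn small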

Creating Creativity

Though Aaron is a successful artist, capable of producing pleasing and creative pictures, its computer program still has many limitations. And though the pictures look impressive, that’s not enough. To really understand creativity we need to examine the process by which they were made. With Aaron we have an ‘artist’ that we can take to pieces and examine in detail. Studying what Aaron can do, given we know exactly what has been programmed into it, lets us examine human creativity too. What about it is different from the way humans paint, for example? What would we need to add to Aaron to make its process of painting more like human creativity?

Not quite human

Unlike a human artist, Aaron cannot go back and correct what it does. Studies of great artists’ paintings often show that under the top layer of paint there are many other parts of the picture that have been painted out, or initial sketches that were redrawn as the artist progressed through the work, perfecting it as they went. Aaron always starts in the foreground of the picture and moves on to painting the background later, whereas human artists can chop and change which part of a picture to work on to get it just right. Perhaps in the future, with human help, Aaron or robots like it will develop new human-like painting skills and produce even better paintings. Until then the art world will need to content itself with Aaron’s early period work.

the CS4FN team (updated from the archive)

Some of Aaron’s (and Harold Cohen’s) work is on display at the Tate Modern until June 2025 as part of the Electric Dreams exhibition.

This page and talk are funded by EPSRC on research agreement EP/W033615/1.

The Blue Planet?

A Blue planet
Image by spieriz from Pixabay (cropped)

How much should we change the world to make it easier for our machines to work?

Plant scientists have spotted a problem they can solve. Weeding robots are finding it difficult to weed. It is a hard problem for them: all those weeds look just like the crops they aren’t supposed to destroy, so the robots are pulling up the wrong things. What is a robot to do? Should we make it easy for them?

Plant scientists have seen a need for their technology from an industry looking for solutions anywhere it can. Robots are good at distinguishing colour: that is easy. So why not just genetically modify weeds to be blue? This is possible, as there are already lots of genes causing blueness in plants (think blueberries). Problem solved. The robots won’t get it wrong again and the crops are safe.

What could possibly go wrong? Well, to work, the genes would need to be spread widely, and perhaps they could escape and get into our crops, or into other plants that are just there to be plants, or plants in the food chain. We could end up with a blue planet, a bit like the red one the Martians brought in The War of the Worlds. Alternatively, evolution might step up and continually produce mutant weeds that subvert the gene, given that carrying it gets them killed. Perhaps all these problems can be guaranteed to be avoided, though the wise person does not bet against natural selection finding a way round problems presented to it in the long term.

Isn’t it time we learnt our lesson and stopped changing the planet to make our machines’ lives easier? Of course, we have been doing that for a long time – think of all the roads scarring the countryside so that cars work, or the rails so that trains work. Perhaps, when innovating, we should think more about the needs of the planet, as well as of people, rather than the needs of our machines – especially since eventually (if we don’t destroy ourselves first) we will undoubtedly have machines clever enough to work it out for themselves.

There are always lots of ways of solving problems, and it is important to think about the planet now, not just our machines. Perhaps robots should simply not weed until they can do it without us having to change the problem (and the planet) for them!

Paul Curzon, Queen Mary University of London

This blog is funded through EPSRC grant EP/W033615/1.

Sarah Angliss: Hugo is no song bird

What was the first technology for recording music: CDs? Records? 78s? The phonograph? No. Trained songbirds came before all of them.

Composer, musician, engineer and visiting fellow at Goldsmiths, University of London, Sarah Angliss usually has a robot on stage performing live with her. These robots are not slick high-tech cyber-beings, but junk-modelled automata. One, named Hugo, sports a spooky ventriloquist doll’s head! Sarah builds and programs her robots herself.

She is also a sound historian, and worked on a Radio 4 documentary, ‘The Bird Fancyer’s Delight’, uncovering how birds have been used to provide music across the ages. During the 1700s people trained songbirds to sing human-invented tunes in their homes. You could buy special manuals showing how to train your pet bird. By playing young birds a tune over and over again, in the absence of other birds to put them right, they would adopt that song as their own. Playing the recorder was one way to train them, but special instruments were also invented to do the job automatically.

With the invention of the phonograph, the popularity of home songbirds plummeted, but the practice didn’t completely die out. Blackbirds, thrushes, canaries, budgies, bullfinches and other songbirds have continued to be schooled to learn songs that they would never sing in the wild.

Jane Waite, Queen Mary University of London


This blog is funded by EPSRC on research agreement EP/W033615/1.

Testing AIs in Minecraft

by Paul Curzon, Queen Mary University of London

What makes a good environment for child AI learning development? Possibly the same as for human child learning development: Minecraft.

Lego is one of the best toys there is for children’s learning and development. The name Lego comes from the Danish phrase ‘leg godt’, meaning ‘play well’. In the virtual world, Minecraft has of course taken up the mantle. A large part of why they are wonderful games is that they are open-ended and flexible: there are infinite possibilities over what you can build and do. Rather than focussing on something limited to learn, as many other games do, they support open-ended creativity and so educational development. Given how positive it can be for children, it shouldn’t be surprising that Minecraft is now being used to help AIs develop too.

Games have long been used to train and test artificial intelligence programs. Early programs were developed to play, and ultimately beat, humans at specific games like Checkers, Chess and later Go. That mastered, AIs moved on to learning individual arcade games as a way to extend their abilities. A key part of our intelligence is flexibility, though: we can learn new games. Aiming to copy this, AIs were trained to follow suit, and showed they could learn to play multiple arcade games well.

This still misses a vital part of our flexibility, though. The thing about all these games is that the whole game experience is designed around the task the player has to complete. Everything is there for a reason; it is all an integral part of the game. There are no pieces in a chess game that are just there to look nice and will never, ever play a part in winning or losing. Likewise, all the rules matter. When problem solving in real life, though, most of the world – the objects, the way things behave, and so on – is not there explicitly to help you solve the problem. It is not even there as a designed distractor. And the real world doesn’t have just a few distractors, it has lots and lots. Looking round my living room, for example, there are thousands of objects, but only one will help me turn on the TV.

AIs that are trained on games may, therefore, just become good at working in such unreal environments. They may need to be told what matters and what to ignore to solve problems. Real problems are much messier, so put these AIs in the real world, or even a more realistic virtual world, and they may turn out not to be very clever at all. Tests of their skills based on such tasks may not really test them at all.

Researchers at the University of the Witwatersrand in South Africa decided to tackle this issue using yet another game: Minecraft. Because Minecraft is an open-ended virtual world, tackling challenges created in it involves working in a world that is much more than just the problem itself. The Witwatersrand team’s resulting MinePlanner system is a collection of 45 challenges, some easy, some harder. They include gathering tasks (like finding and gathering wood) and building tasks (like building a log cabin), as well as tasks that combine the two. Each comes in three versions. In the easy version nothing is irrelevant. The medium version contains a variety of extraneous things that are of no use to the task. The hard version is set in a full Minecraft world where there are thousands of objects that might be used.
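
The details below are invented for illustration (MinePlanner’s actual task format will differ), but a Python sketch shows the idea: the goal stays the same while ever more irrelevant objects are added to the world the planner has to reason about.

import random

# An invented illustration of the easy/medium/hard idea, NOT MinePlanner's
# actual task format. The goal never changes; only the clutter does.
GOAL = {"collect": "wood"}
RELEVANT = ["oak_tree", "axe"]                  # everything needed for the goal
CLUTTER = ["flower", "rock", "sheep", "torch"]  # objects of no use to the task

def make_task(difficulty: str) -> dict:
    world = list(RELEVANT)
    if difficulty == "medium":
        world += random.choices(CLUTTER, k=20)    # some extraneous objects
    elif difficulty == "hard":
        world += random.choices(CLUTTER, k=5000)  # a full, cluttered world
    return {"goal": GOAL, "objects": world}

# A naive planner that considers every object faces a search space thousands
# of times larger in the hard version, for exactly the same goal.
for level in ["easy", "medium", "hard"]:
    print(level, len(make_task(level)["objects"]), "objects to reason about")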

To tackle these challenges an AI (or human) needs not just to solve the problem set, but also to work out for themselves what in the Minecraft world is relevant to the task they are trying to perform and what isn’t. What matters and what doesn’t?

The team hope that by setting such tests they will help encourage researchers to develop more flexible intelligences, taking us closer to having real artificial intelligence. The problems are proposed as a benchmark for others to test their AIs against. The Witwatersrand team have already put existing state-of-the-art AI planning systems to the test. They weren’t actually that great at solving the problems and even the best could not complete the harder tasks.

So it is back to school for the AIs, but hopefully now they will get a much better, more flexible and more fun education playing games like Minecraft. Let’s just hope the robots get to play with Lego too, so they don’t get left behind educationally.

This blog is funded by EPSRC on research agreement EP/W033615/1.

Designing robots that care

by Nicola Plant, Queen Mary University of London

Think of the perfect robot companion. A robot you can hang out with, chat to and who understands how you feel. Robots can already understand some of what we say and talk back. They can even respond to the emotions we express in the tone of our voice. But, what about body language? We also show how we feel by the way we stand, we describe things with our hands and we communicate with the expressions on our faces. Could a robot use body language to show that it understands how we feel? Could a robot show empathy?

If a robot companion did show this kind of empathetic body language we would likely feel that it understood us, and shared our feelings and experiences. For robots to be able to behave like this though, we first need to understand more about how humans use movement to show empathy with one another.

Think about how you react when a friend talks about their headache. You wouldn’t stay perfectly still. But what would you do? We’ve used motion capture to track people’s movements as they talk to each other. Motion capture is the technology used in films to make computer-animated creatures like Gollum in The Lord of the Rings, or the apes in Planet of the Apes. Lots of cameras are used together to create a very precise computer model of the movements being recorded. Using motion capture, we’ve been able to see what people actually do when chatting about their experiences.

It turns out that we share our understanding of things like a headache by performing it together. We share the actions of the headache as if we have it ourselves. If I hit my head, wince and say ‘ouch’, you might wince and say ‘ouch’ too – you give a multimodal performance, with actions and words, to show me you understand how I feel.

So should we just program robots to copy us? It isn’t as simple as that. We don’t copy exactly. A perfect copy wouldn’t show understanding of how we feel. A robot doing that would seem like a parrot, repeating things without any understanding. For the robot to show that it understands how you feel it must perform a headache like it owns it – as though it were really its own! That means behaving in a similar way to you, but adapted to the unique type of headache it has.
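
There is no single standard algorithm for this, but a toy Python sketch (with invented numbers) shows the difference between parroting and an adapted response: the robot mirrors the kind of gesture it sees, not its exact parameters.

import random

# A toy illustration (all numbers invented): respond with the same KIND of
# gesture as the person, but with the robot's own variation of it, rather
# than an exact, parrot-like copy.
def empathetic_response(observed_gesture: dict) -> dict:
    response = dict(observed_gesture)  # the same type of action...
    response["intensity"] = observed_gesture["intensity"] * random.uniform(0.6, 0.9)  # ...but gentler,
    response["delay_s"] = random.uniform(0.3, 0.8)  # and a moment later,
    return response  # as though the headache were its own

wince = {"action": "wince", "intensity": 0.8}
print(empathetic_response(wince))  # e.g. a slightly softer wince, just after yours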

Designing the way robots should behave in social situations isn’t easy. If we work out exactly how humans interact with each other to share their experiences though, we can use that understanding to program robot companions. Then one day your robot friend will be able to hang out with you, chat and show they understand how you feel. Just like a real friend.

multimodal = two or more different ways of doing something. With communication that might be spoken words, facial expressions and hand gestures.


See also (previous post and related career options)


We have recently written about the AMPER project, which uses a tablet-based AI tool / robot to support people with dementia and their carers. It prompts the person to discuss events from their younger life and adapts to their needs. We also linked this with information about the types of careers people working in this area might have. The examples given were from a project based in the Netherlands called ‘Dramaturgy for Devices’, which uses lessons learned from the study of theatre and theatrical performances to design social robots whose behaviour feels more natural and friendly to the humans who’ll be using them.


See our collection of posts about Career paths in Computing.


EPSRC supports this blog through research grant EP/W033615/1.

Al-Jazari: the father of robotics

Al Jazari's hand washing automaton
Image by user:Grenavitar, Public domain, via Wikimedia Commons

Science fiction films are full of humanoid robots acting as servants, workers, friends or colleagues. The first were created during the Islamic Golden Age, a thousand years ago. 

Robots and automata have been the subject of science fiction for over a century, and their history in myth goes back millennia – but so does the actual building of lifelike animated machines. The Ancient Greeks and Egyptians built automata: animal or human-like contraptions that seemed to come to life. These early automata were illusions with no practical use, though, beyond entertaining or amazing people.

It was the great inventor of mechanical gadgets Ismail Al-Jazari, working in the Islamic Golden Age of science, engineering and art in the 12th century, who first built robot-like machines with actual purposes. Powered by water, his automata acted as servants doing specific tasks. One machine was a humanoid automaton that acted as a servant during the ritual purification of hand washing before saying prayers. It poured water into a basin from a jug and then handed over a towel, mirror and comb. It used a toilet-style flushing mechanism to deliver the water from a tank. Other inventions included a waitress automaton that served drinks and robotic musicians that played instruments from a boat. The latter may even have been programmable.

We know about Al-Jazari’s machines because he not only created mechanical gadgets and automata, he also wrote a book about them: The Book of Knowledge of Ingenious Mechanical Devices. It’s possible that it inspired Leonardo Da Vinci who, in addition to being a famous painter of the Italian Renaissance, was a prolific inventor of machines. 

Such “robots” were not everyday machines. The hand washing automaton was made for the King. Al-Jazari’s book, however, didn’t just describe the machines, it explained how to build them: possibly the first textbook to cover automata. If you weren’t a King, then perhaps you could, at least, have a go at making your own servants.

Paul Curzon, Queen Mary University of London

EPSRC supports this blog through research grant EP/W033615/1. 

Future Friendly: Focus on Kerstin Dautenhahn

by Peter W McOwan, Queen Mary University of London

(from the archive)

Large robot facing a man in his home
Robot at home Image by Meera Patil from Pixabay

Kerstin Dautenhahn is a biologist with a mission: to help us make friends with robots. Kerstin was always fascinated by the natural world around her, so it was no surprise when she chose to study Biology at the University of Bielefeld in Germany. Afterwards she went on to take a Diploma in Biology, doing research on the leg reflexes of stick insects – a strange start, it may seem, for someone who would later become one of the world’s foremost robotics researchers. But it was through this fascinating bit of biology that Kerstin became interested in the ways that living things process information and control their body movements, an area scientists call biological cybernetics. This interest in trying to understand biology made her want to build things to test her understanding: things based on ideas copied from biological animals but run by computers. These things would be robots.

Follow that robot

From humble beginnings building small robots that followed one another over a hilly landscape, she started to realise that biology was a great source of ideas for robotics and, in particular, that the social intelligence animals use to live and work with each other could be modelled and used to create sociable robots.

She started to ask fascinating questions like “What’s the best way for a robot to interrupt you if you are reading a newspaper – by gesturing with its arms, blinking its lights or making a sound?” and, perhaps most importantly, “When would a robot become your friend?” First at the University of Hertfordshire, and now as a Professor at the University of Waterloo, she leads a world-famous research group looking to build friendly robots with social intelligence.

Good robot / Bad robot – East vs West

Kerstin, like many other robotics researchers, is worried that most people tend to look on robots as potentially evil. If we look at the way robots are portrayed in the movies, that’s often how it seems: it makes a good story to have a mechanical baddie. But in reality robots can provide a real service to humans: helping the disabled, assisting around the home, even becoming friends and companions. The baddie robot idea tends to dominate in the West, but in Japan robots are very popular and robotics research is advancing at a phenomenal rate. There has been a long history in Japan of people finding mechanical things that mimic natural things interesting and attractive. It is partly this cultural difference that has made Japan a world leader in robot research. Kerstin and others like her are trying to get those of us in the West to change our opinions by building friendly robots and looking at how we relate to them.

Polite Robots roam the room

When at the University of Hertfordshire, Kerstin decided that the best way to see how people would react to a robot around the house was to rent a flat near the university and fill it with robots. Rather than examining how people interacted with robots in a laboratory, moving the experiments to a real home, with bookcases, biscuits, sofas and coffee tables, made it real. She and her team looked at how to give their robots social skills: what was the best way for a robot to approach a person, for example? At first they thought the best approach would be straight from the front, but they found that humans felt this was too aggressive, so the robots were trained to come up gently from the side. The people in the house were also given special ‘comfort buttons’, devices that let them indicate how they were feeling in the company of robots. Again interesting things happened: it turned out that quite a lot of people (though not all) were on the whole happy for these robots to be close to them – closer, in fact, than they would normally let a human approach. Kerstin explains: ‘This is because these people see the robot as a machine, not a person, and so are happy to be in close proximity. You are happy to move close to your microwave, and it’s the same for robots’. These are exciting first steps as we start to understand how to build robots with socially acceptable manners. But it turns out that robots need good looks as well as good manners if they are going to make it in human society.

Looks are everything for a robot?

How we interact with robots also depends on how the robots look. Researchers had found previously that if you make a robot look too much like a human being, people expect it to be a human being, with all the social and other skills that humans have. If it doesn’t have these, we find interaction very hard. It’s like working with a zombie, and it can be very frightening. This fall in acceptability of robots that look like, but aren’t quite, human is what researchers call the ‘uncanny valley’: people prefer to encounter a robot that looks like a robot and acts like a robot. Kerstin’s group found this effect too, so they designed their robots to look and act the way we would expect robots to look and act, and things got much more sociable. But they also kept looking at how we act with more human-like robots, and built KASPAR, a robot toddler with a very realistic rubber face capable of showing expressions and smiling, and video-camera eyes that allow the robot to react to your behaviours. He has arms, so can wave goodbye or greet you with a friendly gesture. More recently he was extended with multi-modal technology that allowed several children to play with him at the same time. He’s very lifelike, and the hope was that as KASPAR’s programming grew and his abilities improved, he, or some descendant of him, would emerge from the uncanny valley to become someone’s friend – in particular, a friend to children with autism.

Autism – mind blindness and robots

The fact that most robots at present look and act like robots can give them a big advantage in supporting children with autism. Autism is a condition that prevents you from developing an understanding of how to interact socially with the world. A current theory to explain the condition is that those who are autistic cannot form a correct understanding of others’ intentions: it’s called mind blindness. For example, if I came into the room wearing a hideous hat and asked you ‘Do you like my lovely new hat?’ you would probably think, ‘I don’t like the hat, but he does, so I should say I like it so as not to hurt his feelings’: you have a mental model of my state of mind (that I like my hat). An autistic person is likely to respond ‘I don’t like your hat’, if that is what they feel. Autistic people cannot create this mental model, so find it hard to make friends and generally interact with people, as they can’t predict what people are likely to say, do or expect.

Playing with Robot toys

It’s different with robots: many autistic children have an affinity with robots. Robots don’t do unexpected things. Their behaviour is much simpler, because they act like robots. Kerstin’s group examined how this interaction with robot toys could help some autistic children develop skills that let them interact better with other people. By controlling the robot’s behaviours, some of the children can develop ways to mimic social skills, which may ultimately improve their quality of life. There were some promising results, and the work continues as one way of trying to help those with this socially isolating condition.

Future friendly

It’s only polite that the last word goes to Kerstin from her time at Hertfordshire:

‘I firmly believe that robots as assistants can potentially be very useful in many application areas. For me as a researcher, working in the field of human-robot interaction is exciting and great fun. In our team we have people from various disciplines working together on a daily basis, including computer scientists, engineers and psychologists. This collaboration, where people need to have an open mind towards other fields, as well as imagination and creativity, is necessary in order to make robots more social.’

In the future, when robots become our workmates, colleagues and companions, it will be in part down to Kerstin and her team’s pioneering efforts as they work towards making our robot future friendly.



EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Strictly Judging Objects

Elegant ballroom dancers
Image by Paulette Butler from Pixabay

Strictly Come Dancing has been popular for a long time. It’s not just the celebs, the pros, or the dancing that make it must-see TV – it’s having the right mix of personalities among the judges too. Craig Revel Horwood has made a career of being rude, if always fair. By contrast, Darcey Bussell was always far more supportive. Bruno Tonioli is supportive but always excited. Len Goodman was similarly supportive but calm. Shirley Ballas aims for supportive but strict, while Motsi Mabuse was always supportive and enthusiastic. It’s often the tension between the judges that makes good TV (remember those looks Darcey always gave Craig, never mind when they started actually arguing). However, if you believe Doctor Who, the future of judges is robots, like AnneDroid in the space-age version of The Weakest Link… so let’s look at the Bot future. How might you go about designing computer judges, and how might objects help?

Write the code

We need to write a program. We will use pseudocode (a mix of code and English) here, rather than any particular programming language, to make things easier to follow.

The first thing to realise is we don’t want to have to program each judge separately. That would mean describing the rules for every new judge from scratch each time they swap. We want to do as little as possible to describe each new one. Judges have a lot in common so we want to pull out those common patterns and code them up just once.

What makes a judge?

First let’s describe a basic judge. We will create a plan, a bit like an architect’s plan of a building. Programmers call this a ‘class’. The thing to realise about classes is that a class for a Judge is NOT the code of any actual working judge, just the code of how to create one: a blueprint. This blueprint can be used to build as many individual judges as you need.

What’s the X-factor that makes a judge a judge? First we need to decide on some basic properties or attributes of judges. We can make a list of them, and what the possibilities for each are. The things common to all judges are that they have names and personalities, and they make judgements on people. Let’s simply say a judge’s personality can be either supportive or rude, and their judgement is just a mark out of 10 for whoever they last watched.

Name : String
CharacterTrait : SUPPORTIVE, RUDE
Judgement : 1, 2, 3, 4, 5, 6, 7, 8, 9, 10

We have just created some new ‘types’, in programming terminology. A type is just a grouping of values. The type CharacterTrait has two possible values (SUPPORTIVE or RUDE), whereas the type Judgement has 10 possible values. We also used one common, existing type: String. Strings are just sequences of letters, numbers or symbols, so we are saying something of type Name is any such sequence. (This allows for futuristic judges as well as hip-hop and rapper ones: perhaps one day, in retirement, C-3PO will become a judge, but so, one day, might the likes of 50 Cent.)

Let’s start to describe Judges as people with a name and a personality, capable of thinking of a mark.

DESCRIPTION OF A Judge:
    Name name
    CharacterTrait personality
    Judgement mark

This says that each judge will be described by three variables: one called name, one called personality and one called mark. This kind of variable is called an instance variable, because each judge we create from this blueprint will have their own copy, or instance, of the instance variables that describe that judge. So we might originally have created a Len judge and so on, but many series later find we need a Motsi one and then an Anton one. Each new judge needs their own copies of the variables that describe them.

All we are saying here in the class (the blueprint) is that whenever we create a Judge it will have a name, a personal character (it will be either RUDE or SUPPORTIVE) and a potential mark.

For any given judge we will always refer to their name using variable name, and their character trait using variable personality. Each new judge will also have a current judgement, which we will refer to as mark: a number between 1 and 10. Notice we use the types we created – Name, CharacterTrait and Judgement – to specify the possible values each of these variables can hold.

Best behaviour

We are now able to say in our judge blueprint, our class, whether a judge is rude or supportive, but we haven’t actually said what that means. We need to set out the actual behaviours associated with being rude and supportive. We will do this here in a fairly simple way, just to illustrate. Let’s assume that the personality shows in the things judges say when they give their judgement. A rude judge will say “It was a disaster” unless they are awarding a mark above 8/10. For high marks they will grudgingly say “You were ok I suppose”. We can translate this into commands for how to give a judgement.

IF (personality IS RUDE) AND (mark <= 8)
THEN SAY “It was a disaster”

IF (personality IS RUDE) AND (mark > 8)
THEN SAY “You were ok I suppose”

It would be easy to give them lots more things to choose to say in a similar way: it’s just more rules. We can do a similar thing for a supportive judge. They will say “You were stunning” if they award more than 5 out of 10, and otherwise say “You tried hard”.

TO GiveJudgement:
    IF (personality IS RUDE) AND (mark <= 8)
    THEN SAY “It was a disaster”     

    IF (personality IS RUDE) AND (mark > 8)
    THEN SAY “You were ok I suppose”

    IF (personality IS SUPPORTIVE) AND (mark > 5)
    THEN SAY “You were stunning”

    IF (personality IS SUPPORTIVE) AND (mark <= 5)
    THEN SAY “You tried hard”

A ten from Len

The other thing that judges do is actually come up with their judgement: their mark. For real judges it would be based on rules about what they saw – a lack of pointed toes pulls it down, lots of wiggle at the right times pushes it up… To keep it simple here, we will assume they actually just think of a random number – essentially throwing a 10-sided dice under the desk with the numbers 1-10 on it!

TO MakeJudgement:
    mark = RANDOM (1 TO 10)

Finally, judges can reveal their mark.

TO RevealMark:
    SAY mark

Notice this doesn’t mean they say the word “mark”. mark is a variable, so this means say whatever is currently stored in that judge’s mark variable.

Putting that all together to make our full judge class we get:

DESCRIPTION OF A Judge:
    Name name
    CharacterTrait personality
    Judgement mark

    TO GiveJudgement:
        IF (personality IS RUDE) AND (mark <= 8)
        THEN SAY “It was a disaster”

        IF (personality IS RUDE) AND (mark > 8)
        THEN SAY “You were ok I suppose”

        IF (personality IS SUPPORTIVE) AND (mark > 5)
        THEN SAY “You were stunning”

        IF (personality IS SUPPORTIVE) AND (mark <= 5)
        THEN SAY “You tried hard”

    TO MakeJudgement:
        mark = RANDOM (1 TO 10)

    TO RevealMark:
        SAY mark

What is a class?

So what is a class? A class says how to build an object. It defines properties or attributes (like name, personality and current mark) but it also defines behaviours: a judge can speak, make a judgement and reveal its current mark. These behaviours are defined by methods – mini-programs that specify how any Judge should behave. Our class says that each Judge will have its own set of these methods, using that Judge’s own instance variables to store its properties and decide what to do.

So a class is a blueprint that tells us how to make particular things: objects. It defines their instance variables – what the state of each judge consists of – and some rules to apply in particular situations. We have so far made a class definition for making Judges. It is important to realise that we haven’t made any actual objects (no actual judge) yet, though: defining a class does not in itself give you any objects. No actual Judges (no Bruno, no Motsi, …) exist to judge anything yet. We need to write specific commands to create them, as we will see.

We can store away our blueprint and just pull it out to make use of it when we need to create some actual judges (eg once a year in the summer when this year’s judges are announced).

Kind words for our contestants?

Suppose Strictly is starting up (as it is as I write, but let’s suppose all the judges will be androids this year) and we want to create some judges, starting with a rude judge called Craig Devil Droidwood. We can use our class as the blueprint to do so. We need to say what his personality is. (Judges only think of a mark when they actually see an act, so we don’t have to give a mark now.)

CraigDevilDroidwood IS A NEW Judge 
                    WITH name “Craig Devil Droidwood”
                    AND personality RUDE

This creates a new judge called Craig Devil Droidwood and makes it RUDE. We have instantiated the class to give a Judge object. We store this object in a variable called CraigDevilDroidwood.

For a supportive judge that we decide to call Len Goodroid we would just say (instantiating the class in a different way):

LenGoodroid IS A NEW Judge 
            WITH name “Len Goodroid”
            AND personality SUPPORTIVE

Another supportive judge DarC3PO BussL would be created with

DarC3POBussL IS A NEW Judge 
             WITH name “DarC3PO BussL”
             AND personality SUPPORTIVE

Whereas in the class we described a blueprint to use to create a Judge, here we are actually using that blueprint and making different Judges from it. This way we can quickly and easily make new judge clones without copying out the whole description again. These commands, executed at the start of our program (and TV programme), actually create the objects. They create instances of class Judge, which just means they create actual virtual judges, each with their own name and personality. They also each have their own copy of the rules for the behaviour of judges.

Execute them

Once actual judges are created, they can execute commands to start the judging. First the program tells them to make judgements using their judgement method. We execute the MakeJudgement method associated with each separate judge object in turn. Each has the same instructions, but those instructions work on the particular judge’s instance variables, so do different things.

EXECUTE MakeJudgement OF CraigDevilDroidwood
EXECUTE MakeJudgement OF DarC3POBussL
EXECUTE MakeJudgement OF LenGoodroid

Then the program has commands telling them to say what they think,

EXECUTE GiveJudgement OF CraigDevilDroidwood
EXECUTE GiveJudgement OF DarC3POBussL
EXECUTE GiveJudgement OF LenGoodroid

and finally give their mark.

EXECUTE RevealMark OF CraigDevilDroidwood
EXECUTE RevealMark OF DarC3POBussL
EXECUTE RevealMark OF LenGoodroid

In our actual program this would sit in a loop so our program might be something like:

CraigDevilDroidwood IS A NEW Judge 
                    WITH name “Craig Devil Droidwood”
                    AND personality RUDE
DarC3POBussL        IS A NEW Judge 
                    WITH name “DarC3PO BussL”
                    AND personality SUPPORTIVE
LenGoodroid         IS A NEW Judge 
                    WITH name “Len Goodroid”
                    AND personality SUPPORTIVE

FOR EACH contestant DO THE FOLLOWING
    EXECUTE MakeJudgement OF CraigDevilDroidwood
    EXECUTE MakeJudgement OF DarC3POBussL
    EXECUTE MakeJudgement OF LenGoodroid

    EXECUTE GiveJudgement OF CraigDevilDroidwood
    EXECUTE GiveJudgement OF DarC3POBussL
    EXECUTE GiveJudgement OF LenGoodroid

    EXECUTE RevealMark OF CraigDevilDroidwood
    EXECUTE RevealMark OF DarC3POBussL
    EXECUTE RevealMark OF LenGoodroid

So we can now create judges to our heart’s content, fixing their personalities and putting the words in their mouths based on our single description of what a Judge is. Of course, our behaviours so far are simple. We really want to add more kinds of personality, like strict judges (Shirley) and excited ones (Bruno). Ideally we want to be able to create different combinations, making perhaps excited rude judges as well as excited supportive ones. That really just takes more rules.

A classless society?

Computer Scientists are lazy beings – if they can find a way to do something that involves less work, they do it, allowing them to stay in bed longer. The idea we have been using to save work here is just that of describing classes of things and their properties and behaviour. Scientists have been doing a similar thing for a long time:

Birds have feathers (a property) and lay eggs (a behaviour).

Spiders have eight legs (a property) and make silk (a behaviour)

We can say something is a particular instance of a class of thing, and that tells us a lot about it without having to spell it all out each time – even for fictional things: e.g. Hedwig is a bird (so feathers and eggs); Charlotte is a spider (so eight legs and silk). The class captures the common patterns behind the things we are describing. The difference when Computer Scientists write them is that, because they are programs, they can then come alive!

All change

We have specified what it means to be a robotic judge, and we’ve only had to specify the basics of Judgeness once to do it. That means that if we decide to change anything in the basic judge (like giving them a better way to come up with a mark than doing it randomly, or having them choose things to say from a big database of supportive or rude comments), changing it in the plan will apply to all the judges, of whatever kind. That is one of the most powerful reasons for programming in this way.

We could create robot performers in a similar way (after all don’t all the winners seem to merge into one in the long run?). We would then also have to write some instructions about how to work out who won – does the audience have a vote? How many get knocked out each week? … and so on.

Of course, we’ve not written a full program, even for judges, just sketched a program in pseudocode. The next step is to convert this into a program in an object-oriented programming language like Python or Java. Is that hard? Why not give it a try and judge for yourself?
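
To get you started, here is one possible translation of the Judge class into Python – just a sketch, and one design among many, so yours may well look different:

import random

class Judge:
    """A Strictly judge: a name, a personality and a current mark."""

    def __init__(self, name: str, personality: str):
        self.name = name                # instance variables: each judge
        self.personality = personality  # gets its own copies of these
        self.mark = None

    def make_judgement(self) -> None:
        # Think of a mark: throw a 10-sided dice under the desk.
        self.mark = random.randint(1, 10)

    def give_judgement(self) -> None:
        # Say something rude or supportive, depending on the mark.
        if self.personality == "RUDE":
            print("It was a disaster" if self.mark <= 8
                  else "You were ok I suppose")
        else:  # SUPPORTIVE
            print("You were stunning" if self.mark > 5
                  else "You tried hard")

    def reveal_mark(self) -> None:
        print(self.mark)

# Instantiate the class to create the actual judge objects...
judges = [Judge("Craig Devil Droidwood", "RUDE"),
          Judge("DarC3PO BussL", "SUPPORTIVE"),
          Judge("Len Goodroid", "SUPPORTIVE")]

# ...then loop over the contestants, just as in the pseudocode.
for contestant in ["contestant 1", "contestant 2"]:
    for judge in judges:
        judge.make_judgement()
    for judge in judges:
        judge.give_judgement()
    for judge in judges:
        judge.reveal_mark()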

Paul Curzon and Peter W. McOwan, Queen Mary University of London

Revised from TLC blog and the cs4fn archive.

This blog is funded through EPSRC grant EP/W033615/1.

The First Law of Humans

Preventing robots from hurting us

A chess board with pieces lined up
Image by Pexels from Pixabay 

The first rule for humans when around robots is apparently that they should not do anything too unexpected near a robot…

A 7-year-old child playing in a chess tournament has had his finger broken by a chess-playing robot, which grabbed it as the boy made his move. The child was blamed by one of the organisers, who claimed that it happened because the boy “broke the safety rules”! The organisers also apparently claimed that the robot itself was perfectly safe!

What seems to have happened is that, after the robot played its move, the boy played his own move very quickly, before the robot had finished. Somehow this triggered the wrong lines of code in the robot’s program: instructions that were intended for some other situation. With the boy’s hand over the board at the wrong time, they led the robot to grab his finger and not let go.

Spoiler Alert

The situation immediately brings to mind the classic science fiction story “Moxon’s Master” by Ambrose Bierce, published way back in 1899. It is the story of a chess-playing automaton (i.e. robot) and what happens when it is checkmated in a game with its creator, Moxon: it flies into a dangerous rage. There, though, the problems arise because the robot has developed emotions, and so emotional reactions. In both situations, however, a robot intended simply to play chess is capable of harming a human.

The three laws of robotics

Isaac Asimov is famous for his laws of robotics: fundamental, unbreakable rules built into the ‘brain’ of all robots in his fictional world precisely to stop this sort of situation. The rules he formulated were:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
A robot carefully touching fingers with a human
Image by Pete Linforth from Pixabay 

Clearly, had they been in place, the chess robot would not have harmed the boy, and Moxon’s automaton would not have been able to do anything too bad as a result of its temper either.

Asimov devised his rules as a result of a discussion with the science fiction magazine editor John W. Campbell, Jr. He then spent much of his science fiction career writing robot stories about how humans could end up being hurt despite the apparently clear rules. That aside, their key idea was that, to ensure robots were safe around people, they would need built-in logic that could not be circumvented to stop them hurting anyone: a fail-safe system monitoring their actions that would take over when breaches were possible. Clearly this chess-playing robot was not “perfectly safe”, and not even fail-safe: if it had been, the boy would not have been harmed whatever he did. The robot did not have anything at all akin to a working, unbreakable First Law programmed into it.

Dangerous Machines

Asimov’s robots were intelligent and able to think for themselves in a more general sense than any robot that currently exists. The First Law essentially prevented them from deciding to harm a human, not just from doing so by accident. Perhaps the day will soon come when robots can start to think for themselves, so perhaps a first law will soon be important. In any case, machines can harm humans without being able to think. That humans need to be wary around robots is obvious from the numerous injuries, and even fatalities, in factories using industrial robots in the decades since they were introduced. They are dangerous machines. Fortunately, the carnage inflicted is at least not quite that of the industrial accidents of the Industrial Revolution. It is still a problem, though. People do have to take care and follow safety rules around them!

Rather than humans having to obey safety laws, perhaps we ought to be taking Asimov’s laws more seriously for all robots. Why can’t those laws just be built in? It is certainly an interesting research problem to think about. The idea of a fail-safe is standard in engineering, so it’s not that general idea that is the problem. The problem is that, rather than intelligence being needed for robots to harm us, intelligence is needed to avoid them doing so.

Implementing the First Law

Let’s imagine building the First Law into chess-playing robots, and in particular the one that hurt the boy. For starters, the robot would need to recognise that the boy WAS a human, so should not be harmed. It would also need to recognise that his finger was a part of him, and that gripping the finger would harm him. It would need to know that it was gripping his finger (not a piece) at the time. It would then need a way to stop before it was too late, and to do no harm in the stopping. It clearly needs to understand a lot about the world to be able to avoid hurting people in general.

Some of this is almost within our grasp. Computers can certainly do a fairly good job of recognising humans now, through image recognition code. They can even recognise individuals, so that first fundamental part – knowing what is and isn’t a human – is more or less possible now, just not yet perfectly. Recognising objects in general is harder. The chess robot presumably has code for recognising pieces already, though a finger perhaps even looks like a piece, at least to a robot. To avoid causing harm in any situation, it needs to be able to recognise what lots of objects are, not just chess pieces. It also needs to differentiate them from what is part of a human, not just what is a human. Object recognition like this is possible, at least in well-defined situations. It is much harder to manage in general, even when the precise objects have never been encountered before. Harder still, though, is probably recognising all the ways the robot could harm the human in front of it, including with any of the objects that are around.

Staying in control

The code to do all this would also have to be, in some sense, at a higher level of control than the code making the robot take actions, as it has to be able to overrule them ALWAYS. For the chess robot, there was presumably a bug that allowed it to grip a human’s finger, as no programmer will have intended that, so it isn’t just about monitoring the code itself. The fail-safe code has to monitor what is actually happening in the world and be in a position to take over. It also can’t just make the robot freeze, as that may be enough to break a finger already in the robot’s grip (and that may have been part of the problem for the boy’s finger). Nor can it just move its arm back suddenly: what if another child (a toddler perhaps) has just crawled up behind it? It has to monitor the effects of its own commands too! A simple version of such a monitor is probably straightforward, though. The robot’s computer architecture just needs to be designed accordingly. One way robots are designed is for new modules to be built on top of existing ones, giving new, more complex behaviour as a result, which possibly fits what is needed here. Having additional computers acting as monitors, taking over when others go wrong, is also not really that difficult (bugs in their own code aside) and is a standard idea for mission-critical systems.

So it is probably all the complexity of the world, with unexpected things happening in it, that makes a general version of the First Law hard at the moment… If Asimov’s laws in all their generality are currently a little beyond us, perhaps we should just think about the problem in another, more limited way (at least for now)…

Can a chess playing robot be safe?

In the chess game situation, if anything is moving in front of the robot then it should perhaps just keep well out of the way. To do that just needs monitoring code that can detect movement in a small fixed area. It doesn’t need to understand anything about the world apart from movement versus non-movement. That is easily within the realms of what computers can do – even some simple toy robots can detect movement. The monitoring code would still need to be able to override the rest of the code, of course, bugs included.
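
As a toy Python sketch (with invented names and thresholds, nothing like a real robot controller), the key design point is that the safety check wraps around every motion command, rather than being buried inside the chess-playing code:

# A toy sketch (invented names and thresholds): a movement watchdog that
# sits ABOVE the chess-playing code and can veto any arm motion.
def movement_detected(prev_frame, frame, threshold=30):
    # Crude motion check: has any pixel over the board changed much?
    return any(abs(a - b) > threshold for a, b in zip(prev_frame, frame))

def safe_move_arm(move_command, prev_frame, frame):
    # The watchdog gets the final say, whatever the chess code wants to do.
    if movement_detected(prev_frame, frame):
        return "RETREAT SLOWLY"  # keep well out of the way
    return move_command          # only then carry out the move

print(safe_move_arm("GRAB PIECE AT e4", [0, 0, 0], [0, 0, 0]))   # no motion: play on
print(safe_move_arm("GRAB PIECE AT e4", [0, 0, 0], [0, 99, 0]))  # a hand! retreat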

Why, also, could the robot grip a finger with enough pressure to break it anyway? Perhaps it just needed more accurate sensors in its fingers to avoid doing harm, together with a reflex that simply lets go if it feels too much resistance back. After all, chess pieces don’t resist much!
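
Again as a sketch only (the force numbers are made up), such a reflex might release the grip the moment the resistance exceeds anything a chess piece could offer:

# Another toy sketch (made-up numbers): release the grip the moment the
# fingers feel more resistance than a chess piece could ever give back.
MAX_PIECE_RESISTANCE = 0.5  # hypothetical force units

def grip(force_sensor_readings):
    for force in force_sensor_readings:  # readings arrive as the grip closes
        if force > MAX_PIECE_RESISTANCE:
            return "LET GO"              # too much resistance: not a piece!
    return "HOLDING PIECE"               # otherwise keep holding gently

print(grip([0.1, 0.2, 0.3]))  # a chess piece: fine
print(grip([0.1, 0.9]))       # a finger pushing back: release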

And one last idea, if a little more futuristic. A big research area at the moment is soft robotics: robots that are soft and squidgy, not hard and solid, precisely so they can do less harm. Perhaps if the chess robot’s claw-like fingers had instead been totally soft and squishy, it would not have harmed the boy even if it did grab his finger.

Had the robot’s designers tried hard enough, they surely could have come up with solutions to make it safer, even if they didn’t have good enough debugging skills to prevent the actual bug that caused the problem. It needs safety to be a very high priority from the outset, though: and certainly safety that isn’t just pushed onto humans to be responsible for, as the organisers did.

We shouldn’t be blaming children for not obeying safety rules when they are given what is essentially a hard, industrial robot to play with. Doing so just lets the robot makers off the hook from even trying to make their robots safer, when they clearly could do more. When disasters happen, don’t blame the people: improve the system. We should also be thinking far more about doing the research that will one day allow us to actually implement Asimov’s laws in all robots, and so build a safety culture into robotics. Then perhaps people would not have to be quite so wary around robots, and certainly would not have to follow safety rules themselves. That surely is the robot’s job.

Paul Curzon, Queen Mary University of London

This blog is funded through EPSRC grant EP/W033615/1.

Dressing it up

Why it might be good for robots to wear clothes

(Robot) dummies in different clothes standing in a line up a slope
Image by Peter Toporowski from Pixabay 

Even though most robots still walk around naked, the Swedish Institute of Computer Science (SICS) in Stockholm has explored how to produce fashion-conscious robots.

The applied computer scientists there were looking for ways to make the robots of today easier for us to get along with. As part of the LIREC project to build the first robot friends for humans, they examined how our views of simple robots change when we can clothe and customise them. Does this make the robots more believable? Do people want to interact more with a fashionable robot?

How do you want it?

These days most electronic gadgets allow the human user to customise them. For example, on a phone you can change the background wallpaper or colour scheme, the ringtone or how the menus work. The ability of the owner to change the so-called ‘look and feel’ of software is called end-user programming. It’s essentially up to you how your phone looks and what it does.

Dinosaurs waking and sleeping

The Swedish team began by taking current off-the-shelf robots and adding dress-up elements to them. Enter Pleo, a toy dinosaur ‘pet’ able to learn as you play with it. Now add in that fashion twist: what happens when you can play dress-up with the dinosaur? Pleo’s costumes change its behaviour, a bit like what happens when you customise your phone. For example, if you give Pleo a special watchdog necklace, the robot remains active and ‘on guard’. Change the costume from necklace to pyjamas, and the robot slowly switches into ‘sleep’ mode. The costumes or accessories you choose communicate electronically with the robot’s program, and its behaviour follows suit in a way you can decide. The team explored whether this changed the way people played with the robots.
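
We don’t know how Pleo’s software actually does this, but the general pattern is simple enough to sketch in Python (the names here are invented): each accessory electronically reports an identity code, and the program maps codes to behaviour modes:

# An invented sketch of the general idea, not Pleo's real software: each
# accessory reports an ID, which selects a behaviour mode.
COSTUME_BEHAVIOURS = {
    "watchdog_necklace": "on_guard",  # stay active and alert
    "pyjamas": "sleep",               # wind down into sleep mode
}

def on_costume_change(costume_id: str) -> str:
    # Fall back to normal behaviour for unrecognised accessories.
    return COSTUME_BEHAVIOURS.get(costume_id, "default_play")

print(on_costume_change("pyjamas"))    # -> sleep
print(on_costume_change("party_hat"))  # -> default_play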

Clean sweeps

In another experiment the researchers played dress-up with a robot vacuum cleaner. The cleaner rolls around the house sweeping the floor, and had already proven a hit with many consumers. It bleeps happily as its on-board computer works out the best path to bust your carpet dust. The SICS team gave the vacuum a special series of stick-on patches which could add to its basic programming. They found that choosing the right patch could change the way humans perceive the robot’s actions. Different patches can make humans think the robot is curious, aggressive or nervous. There’s even a shyness patch that makes the robot hide under the sofa.

What’s real?

If humans are to live in a world populated by robots that are there to help them, the robots need to be able to play by our rules. Humans have whole parts of their brains given over to predicting how other humans will react. For example, we can empathise with others because we know that other beings have thoughts like ours, and we can imagine what they think. This often spills over into anthropomorphism, where we give human characteristics to non-human animals or non-living things. Classic examples are where people believe their car has a particular personality, or think their computer is being deliberately annoying – they are just machines, but our brains tend to attach motives to the behaviours we see.

Real-er robots?

Robots can produce very complex behaviours depending on the situations they are in and the ways we have interacted with them, which creates the illusion that they have some sort of ‘personality’ or motives in the way they are acting. This can help robots seem more natural and able to fit in with the social world around us. It can also improve the ways they provide us with assistance, because they seem that bit more believable. Projects like SICS’s ‘actDresses’ help by providing new ways for human users to customise the actions of their robots in a very natural way – in this case, by getting the robots to dress for the part.

– Peter W McOwan and the CS4FN team, Queen Mary University of London (Updated from the archive)

This blog is funded through EPSRC grant EP/W033615/1.