Service Model: a review

A robot butler outline on a blood red background
Image by OpenClipart-Vectors from Pixabay

Artificial Intelligences are just tools that do nothing but follow their programming. They are not self-aware and have no capacity for self-determination. They are a what, not a who. So what is it like to be a robot just following its (complex) program, making decisions based on data alone? What is it like to be an artificial intelligence? What is the real difference between being self-aware and not? What is the difference from being human? These are the themes explored by the dystopian (or is it utopian?) and funny science fiction novel “Service Model” by Adrian Tchaikovsky.

In a future where the tools of computer science and robotics have been used to make human lives as comfortable as conceivably possible, Charles(TM) is a valet robot looking after his Master’s every whim. His every action is controlled by a task list turned into sophisticated, human-facing interaction. Charles is designed to be totally logical but also totally loyal. What could go wrong? Everything, it turns out, when he apparently murders his master. Why did it happen? Did he actually do it? Is there a bug in his program? Has he been infected by a virus? Was he being controlled by others as part of an uprising? Has he become self-aware and able to make his own decision to turn on his evil master? And what should he do now? Will his task list continue to guide him once he is in a totally alien context he was never designed for, and where those around him are apparently being illogical?

The novel explores important topics we all need to grapple with, in a fun but serious way. It looks at what AI tools are for and the difference between a tool and a person, even when they are doing the same jobs. Is it actually good to replace the work of humans with programs just because we can? Who actually benefits and who suffers? AI is being promoted as a silver bullet that will solve our economic problems. But we have been replacing humans with computers for decades now based on that promise, yet prices still go up and inequality seems to do nothing but rise, with ever more children living in poverty. Who is actually benefiting? A small number of billionaires certainly are. Is everyone? We have many better “toys” that superficially make life easier and more comfortable: we can buy anything we want from the comfort of our sofas, self-driving cars will soon take us anywhere we want, we can get answers to any question we care to ask, ever more routine jobs are done by machines, and many areas of work, boring or otherwise, are becoming a thing of the past with a promise of utopia. But are we solving problems or making them with our drive to automate everything? Is it good for society as a whole or just good for vested interests? Are we losing track of what is most important about being human? Charles will perhaps help us find out.

Thinking about the consequences of technology is an important part of any computer science education and all CS professionals should think about the ethics of what they are involved in. Reading great science fiction such as this is one good way to explore the consequences, though as Ursula K. Le Guin said: the best science fiction doesn’t predict the future, it tells us about ourselves in the present. Following in the tradition of “The Machine Stops” and “I, Robot”, “Service Model” (and the short story “Human Resources” that comes with it) does that, if in a satirical way. It is a must-read for anyone involved in the design of AI tools, especially those promoting the idea of utopian futures.

Paul Curzon, Queen Mary University of London


Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

The robot always wins

Children playing Rock Paper Scissors (Janken)
image by HeungSoon from Pixabay

Researchers in Japan made a robot arm that always wins at rock, paper, scissors (supposedly a game of pure chance). Not with ultra-clever psychology, which is the way the best humans play, but with old-fashioned cheating. The robot uses high-speed motors and a precise computer vision system to recognise whether its human opponent is making the sign for rock, paper or scissors. One millisecond later, it can play the sign that beats whatever the human chooses. Because the whole process is so quick, it looks to humans like the robot is playing at the same time. See for yourself by clicking below to watch the video of this amazing cheating robot.

Above: Janken (rock-paper-scissors) Robot with 100% winning rate (26 June 2012)
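Under the hood, once the vision system has recognised the human’s half-formed sign, picking the winning reply is trivial. Here is a minimal sketch in Python (the function name is made up – the real robot’s control code is not public):

```python
# The sign that beats each human sign: the robot's whole "strategy"
# is this lookup table, applied one millisecond after recognition.
BEATS = {
    "rock": "paper",
    "paper": "scissors",
    "scissors": "rock",
}

def robot_reply(human_sign: str) -> str:
    """Return the sign that beats what the camera just saw."""
    return BEATS[human_sign]

print(robot_reply("rock"))  # paper
```

The hard part, of course, is not this lookup but the high-speed vision and motors that let the robot see, decide and move before a human can notice the delay.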

– Paul Curzon, Queen Mary University of London

Did you know?

The word ‘robot’ came to the English language over 100 years ago in the early 1920s. Before that the words ‘automaton’ or ‘android’ were used. In 1920 Czech playwright Karel Čapek published his play “R.U.R.” (Rossum’s Universal Robots, or Rossumovi Univerzální Roboti) and his brother Josef suggested using ‘roboti’, from the Slavic / Czech word meaning ‘forced labour’. In the late 1930s there was a performance of the play at the People’s Palace in London’s Stepney Green / Mile End – this building is now part of Queen Mary University of London (some of our computer science lectures take place there) and, one hundred years on, QMUL also has a Centre for Advanced Robotics.

More on … cheating

1. Winning at Rock Paper Scissors – Numberphile

Above: an entertaining look at a research paper investigating potential winning strategies (January 2015).

2. Bullseye! Mark Rober’s intelligent dart board

Above: our earlier article on Mark Rober’s robotic dartboard which, like the rock-paper-scissors robot, uses high-speed cameras to sense a dart, computing to work out where it will land, and high-speed motors to move itself into position so your throw gets a high score.

3. The Intelligent Piece of Paper Activity

Above: a strategy for never losing at noughts and crosses (tic-tac-toe) – as long as you go first.


Related Magazine …

More on robotics

Above: our portal gathers together lots of our articles on robots and robotics.



An experiment in buoyancy

Here is a little science experiment anyone can do to help understand the physics of marine animals and their buoyancy. It gives insight into how animals such as ancient ammonites and modern-day cuttlefish can move up and down at will just by changing the density of internal fluids.* (See Ammonite propulsion of underwater robots). It also shows how marine robots could do the same with a programmed ammonite brain.

First take a beaker of water and a biro pen top. Put a small piece of blu tack over the top of the pen top (to cover the holes that are there to hopefully stop you suffocating if you were to swallow one – never chew pen tops!). Next, put a larger blob of blu tack round the bottom of the pen top. You will have to use trial and error to get the right amount. Your aim is to make the pen top float vertically upright in the water, with the smaller piece of blu tack just floating above the surface. Try it, by carefully placing the pen top vertically into the water. If it doesn’t float like that, dry the blu tack then add or remove a bit until it does float correctly.

It now has neutral buoyancy. The force of gravity pulling it down is the same as the buoyancy force (or upthrust) pushing it upwards, caused by the air trapped in the top of the lid… so it stays put, neither sinking nor rising.

Now fill a drink bottle with water all the way to the top. Then add a little more water so the water curves up above the top of the bottle (held in place by surface tension). Carefully drop in the weighted pen top and screw the top of the bottle on tightly.

The pen top should now just float in the water at some depth. It is acting just like the swim bladder of a fish, with the air in the pen top preventing the weight of the blu tack pulling it down to the bottom.

Now, squeeze the side of the bottle. As you squeeze, the pen top should suddenly sink to the bottom! Let go and it rises back up. What is happening? The force of gravity is still pulling down the same as it was (the mass hasn’t changed), so if it is sinking the buoyancy force pushing up must be less than it was.

What is happening? Squeezing increases the pressure inside the bottle, so the water compresses the air in the pen top, reducing its volume and increasing the density of your little diving bell. With less trapped air, it displaces less water, so the buoyancy force pushing up is smaller and it sinks.
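You can put rough numbers on this with Boyle’s law (pressure times volume stays constant for the trapped air) and Archimedes’ principle (upthrust equals the weight of the water displaced). Here is a sketch in Python, with illustrative numbers rather than measured ones:

```python
RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def compressed_volume(v1, p1, p2):
    """Boyle's law: new volume of the trapped air when pressure rises from p1 to p2."""
    return v1 * p1 / p2

def upthrust(displaced_volume):
    """Archimedes' principle: buoyancy force = weight of water displaced."""
    return RHO_WATER * G * displaced_volume

air = 1e-6  # 1 cubic centimetre of trapped air, in m^3
squeezed = compressed_volume(air, 101_000, 111_000)  # squeezing raises pressure ~10%
print(upthrust(air) > upthrust(squeezed))  # True: less upthrust, so it sinks
```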

That is essentially the trick that ammonites evolved many, many millions of years ago: squeezing the gas inside their shell to suddenly sink and get away quickly when they sensed danger. It is what cuttlefish still do today, squeezing the gas in their cuttlebone so that the cuttlefish becomes denser.

So, if you were basing a marine robot on an ammonite (with movement also possible by undulating its arms, and by jet propulsion, perhaps) then your programming task for controlling its movement would involve it being able to internally squeeze an air space by just the right amount at the right time!
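A first sketch of that control program might be a simple ‘bang-bang’ controller: squeeze the air space a little each cycle while too shallow, relax it while too deep. Everything here (names and numbers alike) is invented for illustration:

```python
def control_step(depth, target_depth, air_volume,
                 squeeze_step=0.05e-6, min_air=0.5e-6, max_air=2.0e-6):
    """One control cycle: return the new internal air volume (m^3).

    Shrinking the air space makes the robot denser, so it sinks;
    letting the air expand again makes the robot rise.
    """
    if depth < target_depth:    # too shallow: squeeze to sink
        air_volume = max(min_air, air_volume - squeeze_step)
    elif depth > target_depth:  # too deep: relax to rise
        air_volume = min(max_air, air_volume + squeeze_step)
    return air_volume

vol = control_step(depth=2.0, target_depth=5.0, air_volume=1.0e-6)
print(vol < 1.0e-6)  # True: the robot squeezes its air space to descend
```

A real ammonite robot brain would need something smoother (a PID controller, say), but the principle is the same.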

In fact, several groups of researchers have created marine robots based on ammonites. For example, a group at the University of Utah has been doing so to better understand the real but extinct ammonites themselves, including how they actually moved. The team has been testing different shell shapes to see if some work better than others, and so just how efficient ammonite shell shapes really were. By programming an ammonite robot brain you could similarly, for example, better understand how ammonites controlled their movement and how effective it really was in practice (not just in theory).

Science can now be done in a completely different way from the traditional one of discovery, observation and experiment alone. You can do computer and robotic modelling too, running experiments on your creations. If you want to study marine biology, or even fancy being a palaeontologist with a difference, understanding long-extinct life, you can now do it through robotics and computer science, not just by watching animals or digging up fossils (though understanding some physics is still important to get you started).

– Paul Curzon, Queen Mary University of London

More on …

*Thanks to the Dorset Wildlife Trust at the Chesil Beach Visitor Centre, Portland, where I personally learnt about ammonite and cuttlefish propulsion in a really fun science talk on the physics of marine biology, including a demonstration of this experiment.


Ammonite propulsion of underwater robots

Ammonite statue showing creature inside its shell
Image by M W from Pixabay

Intending to make a marine robot that will operate under the ocean? Time to start learning not just engineering and computing, but the physics of marine biology! And it turns out you can learn a lot from ammonites: marine creatures that ruled the oceans for hundreds of millions of years before dying out with the dinosaurs. Perhaps your robot needs a shell, not for protection, but to help it move efficiently.

If you set yourself the task of building an underwater robot, perhaps to work with divers in exploring wrecks or studying marine life, you immediately have to solve a problem that is different from those facing traditional land-based robotics researchers. Most of the really cool videos of the latest robots tend to show how great they are at balancing on two legs, doing some martial art, perhaps, or even gymnastics. Or maybe they are hyping how good the robots are at running through the forest like a wolf, now on four legs. Once you go underwater, all that exciting stuff with legs becomes a bit pointless. Now it’s all about floating, not balancing. So what do you do?

The obvious thing perhaps is to just look at boats, submarines and torpedoes and design a propulsion system with propellers, maybe using an AI to design the most efficient propeller shape, then write some fancy software to control it as efficiently as possible. Alternatively, you could look at what the fish do and copy them!

What do fish do? They don’t have propellers! The most obvious thing is they have tails and fins and wiggle a lot. Perhaps your marine robot could be streamlined like a fish and, well, swim its way through the sea. Swimming involves the fish using its muscles to make waves ripple along its body, pushing against the water. In exerting a force on the water, by Newton’s laws, the water pushes back and the fish moves forward.

Of course, your robot is likely to be heavy so will sink. That raises the other problem. Unlike on land, in water you need to be able to move up (and down) too. Being heavy, moving down is easy. But then that is the same for fish. All that fishy muscle is heavier than water so sinks too. Unless they have evolved a way to solve the problem, fish sink to the bottom and have to actively swim upwards if they want to be anywhere else. Some live on the bottom so that is exactly what they want. Maybe your robot is to crawl about on the sea floor too, so that may be right for it too.

Many, many other fish don’t want to be at the bottom. They float without needing to expend any energy to do so. How? They evolved a swim bladder that uses the physics of buoyancy to make them naturally float, neither rising nor sinking. They have what is called neutral buoyancy. Perhaps that would be good for your robot too, not least to preserve its batteries for more important things like moving forwards. How do swim bladders do it? They are basically bags of air that give the fish buoyancy – a bit like you wearing a life jacket. Get the amount of air right and the buoyancy, which provides an upward force, can exactly counteract the force of gravity pulling your robot down to the depths. The result is that the robot just floats under the water where it is. It now has to actively swim if it wants to move down towards the sea floor. So, if you want your robot to do more than crawl around on the bottom, designing in a swim bladder is a good idea.
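You can even estimate how big the bag of air needs to be. For neutral buoyancy, the weight of water displaced by the robot’s body plus its swim bladder must equal the robot’s own weight. A back-of-the-envelope sketch in Python (illustrative numbers, and ignoring the tiny mass of the air itself):

```python
RHO_WATER = 1000.0  # density of water, kg/m^3

def bladder_volume(robot_mass, body_volume):
    """Air volume (m^3) needed so the robot neither rises nor sinks.

    Neutral buoyancy: (body_volume + air_volume) * RHO_WATER == robot_mass.
    """
    return robot_mass / RHO_WATER - body_volume

# A 12 kg robot whose solid body displaces 10 litres (0.010 m^3):
print(bladder_volume(12.0, 0.010))  # roughly 0.002 m^3, i.e. about 2 litres
```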

Perhaps you can save more energy and simplify things even more, though. Perhaps your robot could learn from ammonites. These fearsome predators are long extinct, dying out with the dinosaurs, and are now found only as fossils, but they evolved a really neat way to move up and down in the water. Ammonites were once believed to be curled-up snakes turned to stone, but they were actually molluscs (like snails) and the distinctive spiral structure preserved in fossils was their shell. They didn’t live deep in the spiral though, just in the last chamber at the mouth of the spiral, with their multi-armed, octopus-like body sticking out the end to catch prey. So what were the rest of the chambers for? Filled with liquid or gas, they would act exactly like a swim bladder, providing buoyancy control. However, it is likely that, as with the similar modern-day nautilus, the ammonite could squeeze the gas or liquid of its spiral shell into a smaller volume, changing its density. Doing that changes its buoyancy: with increased density the buoyancy is less, so gravity exerts a greater force than the lift the shell’s contents are giving, and it suddenly sinks. Decrease the density by letting the gas or liquid expand and it rises again.

You can see how it works with this simple experiment.

You don’t need a shell, of course; other creatures have evolved more sophisticated versions. A cuttlebone does the same job. It is an internal organ of the cuttlefish (which are not fish but cephalopods, like octopus and squid, so related to ammonites). Cuttlebones are the white elongated discs that you find washed up on the beach (especially along the south and west coasts in the UK). They are really hard on one side but slightly softer on the other. They act like an adjustable swim bladder. The hard upper side prevents gas escaping (whilst also adding a layer of armour). The soft lower side is full of microscopic chambers that the cuttlefish can push gas into or pull gas out of at will, with the same effect as the ammonite’s shell.

This whole mechanism is essentially how the buoyancy tanks of a submarine work. First used in the original practical submarine, the Nautilus of 1800, they are flooded and emptied to make a submarine sink and rise.

Build the idea of a cuttlebone or ammonite shell into your robot and it can rise and sink at will with minimal energy wasted. Cuttlefish, though, also have another method of propulsion (aside from undulating their bodies) that allows them to escape from danger in a hurry: jet propulsion. By ejecting water stored in their mantle through their syphon (a tube), they can suddenly give themselves lots of acceleration, just like a jet engine gives a plane. That would normally be a very inefficient form of propulsion, using lots of energy. However, experiments show that when it is combined with negative buoyancy, such as a cuttlebone provides, this jet propulsion is actually much more efficient than it otherwise would be. So the cuttlebone saves energy again. And a rare ammonite fossil with the preserved muscles of the actual animal suggests that ammonites had similar jet propulsion too. Given some ammonites grew as large as several metres across, that would have been an amazing sight to see!

To be a great robotics engineer, rather than inventing everything from scratch, you could do well to learn from biological physics. Some of the best solutions are already out there and may even be older than the dinosaurs. You might then find your programming task is to program the equivalent of the brain of an ammonite.

Paul Curzon, Queen Mary University of London

More on …

Thanks to the Dorset Wildlife Trust at the Chesil Beach Visitor Centre, Portland, where I personally learnt about ammonite and cuttlefish propulsion in a really fun science talk on the physics of marine biology.


Film Futures: The Lord of the Rings

Image by Ondřej Neduchal from Pixabay

What if there was Computer Science in Middle Earth? Computer scientists and digital artists are behind the fabulous special effects and computer-generated imagery we see in today’s movies, but for a bit of fun, in this series we look at how movie plots could change if they involved computer scientists. Here we look at an alternative version of the film series (and of course book trilogy): The Lord of the Rings.

***SPOILER ALERT***

The Lord of the Rings is an Oscar-winning film series by Peter Jackson. It follows the story of Frodo as he tries to destroy the darkly magical, controlling One Ring of Power by throwing it into the fires of Mount Doom in Mordor. This involves a three-film epic journey across Middle Earth where he and “the company of the Ring” are chased by the Nazgûl, the Ringwraiths of the evil Sauron. Their aim is to get to Mordor without being killed and the Ring taken from them and returned to Sauron, who created it, or stolen by Gollum, who once owned it.

The Lord of the Rings: with computer science

In our computer science film future version, Frodo discovers there is a better way than setting out on a long and dangerous quest. Aragorn has been tinkering with drones in his spare time, and so builds a remotely controlled drone to carry the Ring to Mount Doom. Frodo pilots it from the safety of Rivendell. However, on its first test flight, its radio signal is jammed by the magic of Saruman from his tower. The drone crashes and is lost. It looks like the company must set off on a quest after all.

However, the wise Elf, the Lady Galadriel suggests that they control the drone by impossible-to-jam fibre optic cable. The Elves are experts at creating such cables using them in their highly sophisticated communication networks that span Middle Earth (unknown to the other peoples of Middle Earth), sending messages encoded in light down the cables.

They create a huge spool containing the hundreds of miles of cable needed. Having also learnt from their first attempt, they build a new drone that uses stealth technology devised by Gandalf to make it invisible to the magic of Wizards, bouncing magical signals off it in a way that means even the ever-watchful Eye of Sauron does not detect it until it is too late. The new drone sets off trailing a fine strand of silk-like cable behind it, with the One Ring within. At its destination, the drone is piloted into the lava of Mount Doom, destroying the Ring forever. Sauron’s power collapses, and peace returns to Middle Earth. Frodo does not suffer from post-traumatic stress disorder, and lives happily ever after, though what becomes of Gollum is unknown (he was last seen on Mount Doom through the drone’s camera, chasing after it as it was piloted into the crater).

In real life…

Drones are being touted for lots of roles, from delivering packages to people’s doors to helping in disaster emergency areas. They have most quickly found their place as a weapon, however. At regular intervals a new technology changes war forever, whether it is the longbow, the musket, the cannon, the tank, the plane… The most recent technology to change warfare on the battlefield has been the introduction of drones. It is essentially the use of robots in warfare, just remote-controlled, flying ones rather than autonomous humanoid ones, Terminator-style (but watch this space – the military are not ones to hold back on a ‘good’ idea). The vast majority of deaths in the Russia-Ukraine war on both sides have been caused by drone strikes. Now countries around the world are scrambling to update their battle readiness, adding drones into their defence plans.

The earliest drones to be used on the battlefield were remote controlled by radio. The trouble with anything controlled that way is that it is very easy to jam: either by sending your own signals at higher power to take over control, or, more easily, by just swamping the airwaves with signal so the one controlling the drone does not get through. The need to avoid weapons being jammed is not a new problem. In World War II, some early torpedoes were radio controlled to their target, but that became ineffectual as jamming technology was introduced. Movie star Hedy Lamarr is famous for patenting a mechanism whereby a torpedo could be controlled by radio signals that jumped from frequency to frequency, making it harder to jam (without knowing the exact sequence and timing of the frequency jumps). In London, torpedo stations protecting the Thames from enemy shipping had torpedoes controlled by wire so they could be guided all the way to the target. Unfortunately it was not a great success: the only time one was used in a test it blew up a harmless passing fishing boat (luckily no one died).
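Lamarr’s idea still underpins spread-spectrum radio today. The core trick is that sender and receiver share a secret seed, so both can generate the same apparently random sequence of channels while a jammer cannot predict the next hop. A toy sketch (her patent with George Antheil used 88 frequencies, one per piano key):

```python
import random

def hop_sequence(seed, n_hops, n_channels=88):
    """Generate a pseudorandom channel sequence from a shared secret seed."""
    rng = random.Random(seed)  # both ends seed their generator identically
    return [rng.randrange(n_channels) for _ in range(n_hops)]

sender = hop_sequence(seed=1234, n_hops=5)
receiver = hop_sequence(seed=1234, n_hops=5)
print(sender == receiver)  # True: both ends hop between channels in lockstep
```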

And that is the solution adopted by both sides in the Ukraine war to overcome jamming. Drones flying across the front lines are controlled by miles of fibre-optic cable run out on spools (tens of miles rather than the hundreds we suggested above). The light signals controlling the drone pass down the glass fibre, so cannot be jammed or interfered with. As a result, the front lines in Ukraine are now criss-crossed with gossamer-thin fibres, left behind once the drones hit their target or are taken out by the opposing side. It looks as though the war is being fought by robotic spiders (which one day may be the case, but not yet). With the advent of fibre-optic drone control, the war has changed again and new defences against this new technology are needed. By the time they are effective, the technology will likely have morphed into something new once more.

– Paul Curzon, Queen Mary University of London


Robot runners

The first ever half marathon allowing humanoid robots to run against humans was held in Beijing this weekend (April 2025). 12,000 humans ran the event alongside 21 robots…and for now the humans definitely are the winners.

A robot called Tiangong Ultra was the robot winner, one of six robots that managed to finish. It completed the half marathon in just over 2 hours 40 minutes. The fastest human, for comparison, finished in 1 hour 2 minutes, and Jacob Kiplimo of Uganda holds the half-marathon world record of 56 minutes 42 seconds, set in February 2025 in Barcelona. The first official world record, from 1960, was 1 hour 7 minutes. The robots, therefore, have a long way to go.

The robots struggled in various ways reminiscent of human runners, such as overheating and finding it hard even to keep standing (though for humans the latter usually only happens towards the end, not on the start line, as with one robot!). While humans need to constantly take on water and nutrients, the winning robot similarly needed several battery changes. Its winning performance was put down to it copying the way human marathon runners run, according to Tang Jian, chief technology officer of the Beijing Innovation Centre of Human Robotics, which built it. It also has relatively long legs, which are certainly an advantage for human runners too (given it had mastered standing on such long legs in the first place).

Totally autonomous marathon running is relatively difficult for a machine because it takes physical ability, including dealing with kerbs, rough road surfaces and the like, but also navigating the course and avoiding other runners. In this race the robots each had a team of human ‘trainers’ with them, in some cases giving them physical support, but also there for safety (though one robot took out its trainer as it crashed into the side barriers!).

So the robots still have a lot of progress to make before they take the world record and show themselves to be superhuman as runners (as they have already done in games including chess, Go, poker, Jeopardy! and more). Expect the records to tumble quickly, though, now they have entered the race.

Of course, a robot does not need to run on two legs at all, apart from satisfying our human-centred preferences. Whilst it is a great, fun challenge for robotics researchers that helps push forward our understanding, it is plausible that the future of robotics lies in some other form of locomotion: centipede-like, perhaps, with hundreds of creepy-crawly legs, or maybe we will settle on centaur-like robots (four legs being better than two for stability and speed). After all, evolution only settled on two legs because it has to work with what came before, and standing upright is a way to free up our hands to do other things… so if designing from scratch, why not go for four legs and two arms?

So the future of robot marathons is likely to involve a large number of categories, from centipedal all the way down to humanoid. And expect robot Formula 1 for wheeled self-driving robots too in any future robot Olympics. Will other robots ever enjoy watching such sport? That remains to be seen.


The wrong trousers? Not any more!

A metal figure sitting on the floor head down
Image by kalhh from Pixabay

Inspired by the Wallace & Gromit film ‘The Wrong Trousers’, Jonathan Rossiter of the University of Bristol builds robotic trousers. We could all need them as we get older.

Think of a robot and you probably think of something metal: something solid and hard. But a new generation of robot researchers are exploring soft robotics: robots made of materials that are squishy. When it comes to wearable robots, being soft is obviously a plus. That is the idea behind Jonathan’s work. He is building trousers to help people stand and walk.

Being unable to get out of an armchair without help can be devastating to a person’s life. There are many conditions like arthritis and multiple sclerosis, never mind just plain old age, that make standing up difficult. It gets to us all eventually and having difficulty moving around makes life hard and can lead to isolation and loneliness. The less you move about, the harder it gets to do, because your muscles get weaker, so it becomes a vicious circle. Soft robotic trousers may be able to break the cycle.

We are used to the idea of walking sticks, frames, wheelchairs and mobility scooters helping people get around. Robotic clothes may be next. Early versions of Jonathan’s trousers include tubes like a string of sausages that, when pumped full of air, become more solid, shortening as they bulge out and so straightening the leg. Experiments have shown that inflating trousers fitted with them can make a robot wearing them stand. The problem is that you need to carry gas canisters around, and put up with the psshhht! sound whenever you stand!

The team have more futuristic (and quieter) ideas though. They are working on designs based on ‘electroactive polymers’. These are fabrics that change when electricity is applied. One group that can be made into trousers, a bit like lycra tights, silently shrinks with an electric current: exactly what you need for robotic trousers. To make it work you need a computer control system that shrinks and expands them in the right places at the right times to move the leg wearing them. You also need to be able to store enough energy in a light enough way that the trousers can be used without frequent recharging.
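What might that control system look like? At its heart is a feedback loop: sense where the leg is, compare it with where it should be at this point in the standing-up movement, and adjust the current driving the fabric. A deliberately simplified sketch (all names and numbers invented for illustration; real assistive devices need far more care and safety checking):

```python
def actuator_current(knee_angle, target_angle, gain=0.02, max_current=1.0):
    """Proportional control: drive the polymer harder the further the knee
    is from its target angle, clamped to what the fabric can safely take."""
    error = target_angle - knee_angle
    return max(-max_current, min(max_current, gain * error))

# Knee bent at 90 degrees, aiming for a nearly straight 170 degrees:
print(actuator_current(knee_angle=90.0, target_angle=170.0))  # 1.0 (clamped)
```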

It’s still early days, but one day they hope to build a working system that really can help older people stand. Jonathan promises he will eventually build the right trousers.

– Paul Curzon, Queen Mary University of London (from the archive)

More on …

The rise of the robots [PORTAL]




Music-making mates for Mortimer

Image of Mortimer provided by Louis McCallum for this article

Robots are cool. Fact. But can they keep you interested for more than a short time? Over months? Years even? Louis McCallum of Queen Mary University of London tells us about his research using Mortimer, a drumming robot.

Roboticists (that’s what we’re called) have found it hard to keep humans engaged with robots once the novelty wears off. They’re either too simple and boring, or promise too much and disappoint. So, at Queen Mary University of London we’ve built a robot called Mortimer that can not only play the drums, but also listen to humans play the piano and jam along. He can talk (a bit) and smile too. We hope people will build long-term relationships with him through the power of music.

Robots have been part of our lives for a long time, but we rarely see them. They’ve been building our cars and assembling circuit boards in factories, not dealing with humans directly. Designing robots to have social interactions is a completely different challenge that involves engineering and artificial intelligence, but also psychology and cognitive science. Should a robot be polite? How long and accurate should a robot’s memory be? What type of voice should it have and how near should it get to you?

It turns out that making a robot interact like a human is tricky: even the slightest errors make people feel weird. Just getting a robot to speak naturally and understand what we’re saying is far from easy. And if we could, would we get bored of them asking the same questions every day? Would we believe their concern if they asked how we were feeling?

Would we believe their concern
if they asked how we were feeling?

Music is emotionally engaging but in a way that doesn’t seem fake or forced. It also changes constantly as we learn new skills and try new ideas. Although there have been many examples of family bands, duetting couples, and band members who were definitely not friends, we think there are lots of similarities between our relationships with the people we play music with and ‘voluntary non-kin social relationships’ (as roboticists call them – ‘friendships’ to most people!). In fact, we have found that people get the same confidence-boosting reassurance and guidance from friends as they do from the people they play music with.

So, even if we are engaged with a machine, is it enough? People might spend lots of time playing with a guitar or drum machine, but is that a social relationship? We tested whether people would treat Mortimer differently if it was presented as a robot you could socially interact with or simply as a clever music machine. We found that people played uninterrupted for longer, and stopped the robot while it was playing less often, if they thought they could socially interact with it. They also spent more time looking at the robot when not playing, and less time looking at the piano when playing. We think this shows they were not only engaged with playing music together but also treating him in a social manner, rather than just as a machine. In fact, just because he had a face, people talked to Mortimer even though they’d been told he couldn’t hear or understand them!

So, if you want to start a relationship with a creative robot, perhaps you should learn to play an instrument!

– Louis McCallum, Queen Mary University of London (from the archive)

Watch the video Louis made with the Royal Institution about Mortimer:

Magazines …

Front cover of CS4FN issue 29 - Diversity in Computing


Soft squidgy robots

A smiling octopus
Image by OpenClipart-Vectors from Pixabay

Think of a robot and you probably think of something hard, metal, solid. Bang into one and it would hurt! But researchers are inventing soft robots, ones that are either completely squidgy or have squidgy skins.

Researchers often copy animals when looking for new ideas for robots, and lots of animals are soft. Some have no bones in them at all, nor even hard shells to keep them safe: think slugs and octopuses. The first soft robot that was “fully autonomous”, meaning it could move completely on its own, was called Octobot. Shaped like an octopus, its body was made of silicone gel. It moved through the water by blowing gas into hollow tubes in its arms, inflating them like balloons to straighten them, before letting the gas out again.
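The robot’s actual control system used a chemical reaction and a soft microfluidic circuit, but the alternating inflate-and-vent pattern that drove its arms can be illustrated with a toy Python sketch (all names here are invented):

```python
# Toy illustration of an alternating inflate/vent actuation cycle:
# gas straightens an arm, venting relaxes it. At each time step, one
# half of the arms is inflated while the other half is vented.

def actuation_cycle(steps, arms=8):
    """Return the state of every arm at each of `steps` time steps."""
    states = []
    for t in range(steps):
        group_a_inflated = (t % 2 == 0)  # even steps: group A inflated
        states.append(["inflated" if (i % 2 == 0) == group_a_inflated
                       else "vented"
                       for i in range(arms)])
    return states

# Two steps of a four-armed robot: the groups swap over each step.
for state in actuation_cycle(2, arms=4):
    print(state)
```

Swapping which group is inflated at each step is what lets the arms push against the water in turn, so the body moves without any motors or rigid parts.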

Soft, squidgy animals are very successful in nature. They can squeeze into tiny spaces for safety or to chase prey, for example. Soft squidgy machines may be useful for similar reasons. There are plenty of good reasons for making robots soft, including

  • they are less dangerous around people, 
  • they can squeeze into small spaces,
  • they can be made of materials that biodegrade, so they are better for the planet, and
  • they can be better at gently gripping fragile things.

Soft robots might be good around people for example in caring roles. Squeezing into small spaces could be very useful in disaster areas, looking for people who are trapped. Tiny ones might move around inside an ill person’s body to find out what is wrong or help make them better.

Soft robotics is an important current research area with lots of potential. The future of robotics may well be squidgy.

– Paul Curzon, Queen Mary University of London


Aaron and the art of art

Aaron is a successful American painter. Aaron’s delicate and colourful compositions on canvas sell well in the American art market, and have been exhibited worldwide, in London’s Tate Modern gallery and the San Francisco Museum of Modern Art for example. Oh and by the way, Aaron is a robot!

Yes, Aaron is a robot, controlled by artificial intelligence, and part of a lifelong experiment undertaken by the late Harold Cohen to create a creative machine. Aaron never paints the same picture twice; it doesn’t simply recall pictures from some big database. Instead Aaron has been programmed to work autonomously. That is, once it starts there is no further human intervention, Aaron just draws and paints following the rules for art that it has been taught.

Perfecting the art of painting

Aaron’s computer program has grown and developed over the years, and like other famous painters, has passed through a number of artistic periods. Back in the early 1970s all Aaron could do was draw simple shapes, albeit shapes that looked hand drawn – not the sorts of precise geometric shapes that normal computer graphics produced. No, Aaron was going to be a creative artist. In the late 1970s Aaron learned something about artistic perspective, namely that objects in the foreground are larger than objects in a picture’s background. In the late 80s Aaron could start to draw human figures, knowing how the various shapes of the human body were joined together, and then learning how to change these shapes as a body moved in three dimensions. Now Aaron knows how to add colour to its drawings, to get those clever compositions of shades just spot on and to produce bold, unique pictures, painted with brush on canvas by its robotic arm.

It’s what you know that counts

When creating a new painting Aaron draws on two types of knowledge. First Aaron knows about things in the real world: the shapes that make up the human body, or a simple tree. This so-called declarative (declared) knowledge is encoded in rules in Aaron’s programming. It’s a little like human memory: you know something about how the different shapes in the world work, and this information is stored somewhere in your brain. The second type of knowledge Aaron uses is called procedural knowledge. Procedural knowledge allows you to move (process) from a start to an end through a chain of connected steps. Aaron, for example, knows how to proceed through painting the areas of a scene to get the colour balance correct and, in particular, the tone or brightness of the colours right. That is often more artistically important than the actual colours themselves. Inside Aaron’s computer program these two types of knowledge, declarative and procedural, are continuously interacting with each other in complex ways. Perhaps this blending of the two types of knowledge is the root of artistic creativity?
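To make the distinction concrete, here is a minimal, entirely invented Python sketch: the dictionary holds declarative knowledge (facts about what a subject is made of), while the function holds procedural knowledge (the ordered steps for turning those facts into a picture). Aaron’s real program is vastly more sophisticated than this:

```python
# Declarative knowledge: facts about what shapes make up each subject.
SUBJECT_PARTS = {
    "figure": ["head", "torso", "arms", "legs"],
    "tree":   ["trunk", "branches", "leaves"],
}

def paint(subject):
    """Procedural knowledge: the steps from blank canvas to picture."""
    plan = []
    for part in SUBJECT_PARTS[subject]:   # look up the declarative facts
        plan.append(f"outline {part}")
    plan.append("balance colour tones")   # tone matters more than hue
    plan.append("fill with colour")
    return plan

print(paint("tree"))
# ['outline trunk', 'outline branches', 'outline leaves',
#  'balance colour tones', 'fill with colour']
```

Even in this tiny example the two kinds of knowledge interact: the procedure is generic, but the declarative facts it consults change what actually ends up on the canvas.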

Creating Creativity

Though a successful artist, and capable of producing pleasing and creative pictures, Aaron’s computer program still has many limitations. Though the pictures look impressive, that’s not enough. To really understand creativity we need to examine the process by which they have been made. We have an ‘artist’ that we can take to pieces and examine in detail. Studying what Aaron can do, given we know exactly what’s been programmed into it, allows us to examine human creativity. What about it is different from the way humans paint, for example? What would we need to add to Aaron to make its process of painting more similar to human creativity?

Not quite human

Unlike a human artist, Aaron cannot go back and correct what it does. Studies of great artists’ paintings often show that under the top layer of paint there are many other parts of the picture that have been painted out, or initial sketches that have been redrawn as the artist progresses through the work, perfecting it as they go. Aaron always starts in the foreground of the picture and moves toward painting the background later, whereas human artists can chop and change which part of a picture to work on to get it just right. Perhaps in the future, with human help, Aaron or robots like it will develop new human-like painting skills and produce even better paintings. Until then the art world will need to content itself with Aaron’s early period work.

the CS4FN team (updated from the archive)

Some of Aaron’s (and Harold Cohen’s) work is on display at Tate Modern until June 2025 as part of the Electric Dreams exhibition.
