Service Model: a review

A robot butler outline on a blood red background
Image by OpenClipart-Vectors from Pixabay

Artificial Intelligences are just tools that do nothing but follow their programming. They are not self-aware and have no capacity for self-determination. They are a what, not a who. So what is it like to be a robot just following its (complex) program, making decisions based on data alone? What is it like to be an artificial intelligence? What is the real difference between being self-aware and not? What is the difference from being human? These are the themes explored by the dystopian (or is it utopian?) and funny science fiction novel “Service Model” by Adrian Tchaikovsky.

In a future where the tools of computer science and robotics have been used to make human lives as comfortable as conceivably possible, Charles(TM) is a valet robot looking after his Master’s every whim. His every action is controlled by a task list turned into sophisticated human-facing interaction. Charles is designed to be totally logical but also totally loyal. What could go wrong? Everything, it turns out, when he apparently murders his master. Why did it happen? Did he actually do it? Is there a bug in his program? Has he been infected by a virus? Was he being controlled by others as part of an uprising? Has he become self-aware and made his own decision to turn on his evil master? And what should he do now? Will his task list continue to guide him once he is in a totally alien context he was never designed for, and where those around him are apparently being illogical?

The novel explores important topics we all need to grapple with, in a fun but serious way. It looks at what AI tools are for and the difference between a tool and a person, even when they are doing the same jobs. Is it actually good to replace the work of humans with programs just because we can? Who actually benefits and who suffers? AI is being promoted as a silver bullet that will solve our economic problems. But we have been replacing humans with computers for decades now on the back of that promise, yet prices still go up and inequality seems to do nothing but rise, with ever more children living in poverty. Who is actually benefiting? A small number of billionaires certainly are. Is everyone? We have many better “toys” that superficially make life easier and more comfortable: we can buy anything we want from the comfort of our sofas, self-driving cars will soon take us anywhere we want, we can get answers to any question we care to ask, and ever more routine jobs are done by machines. Many areas of work, boring or otherwise, are becoming a thing of the past, with a promise of utopia. But are we solving problems or making them with our drive to automate everything? Is it good for society as a whole or just good for vested interests? Are we losing track of what is most important about being human? Charles will perhaps help us find out.

Thinking about the consequences of technology is an important part of any computer science education and all CS professionals should think about the ethics of what they are involved in. Reading great science fiction such as this is one good way to explore the consequences, though as Ursula Le Guin said: the best science fiction doesn’t predict the future, it tells us about ourselves in the present. Following in the tradition of “The Machine Stops” and “I, Robot”, “Service Model” (and the short story “Human Resources” that comes with it) does that, if in a satirical way. It is a must read for anyone involved in the design of AI tools, especially those promoting the idea of utopian futures.

Paul Curzon, Queen Mary University of London

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Oh no! Not again…

What a mess. There’s flour all over the kitchen floor. A fortnight ago I opened the cupboard to get sugar for my hot chocolate. As I pulled out the sugar, it knocked against the bag of flour which was too close to the edge… Luckily the bag didn’t burst and I cleared it up quickly before anyone found out. Now it’s two weeks later and exactly the same thing just happened to my brother. This time the bag did burst and it went everywhere. Now he’s in big trouble for being so clumsy!

Flour cascading everywhere
Cropped image by Anastasiya Komarova from Pixabay

In safety-critical industries, like healthcare and the airline industry especially, it is really important that there is a culture of reporting incidents, including near misses. It also turns out to be important that the mechanisms for reporting issues are appropriately designed, and that there is a no-blame culture, so that people are encouraged to report incidents and to do so accurately and without ambiguity.

Was the flour incident my brother’s fault? Should he have been more careful? He didn’t choose to put the sugar in a high cupboard with the flour. Maybe it was my fault? I didn’t choose to put the sugar there either. But I didn’t tell anyone about the first time it happened. I didn’t move the sugar to a lower cupboard so it was easier to reach either. So maybe it was my fault after all? I knew it was a problem, and I didn’t do anything about it. Perhaps thinking about blame is the wrong thing to do!

Now think about your local hospital.

James is a nurse, working in intensive care. Penny is really ill and is being given insulin by a machine that pumps it directly into her vein. The insulin is causing a side effect though – a drop in blood potassium level – and that is life threatening. They don’t have time to set up a second pump, so the doctor decides to stop the insulin for a while and to give a dose of potassium through a second tube controlled by the same pump. James sets up the bag of potassium and carefully programs the pump to deliver it, then turns his attention to his next task. A few minutes later, he glances at the pump again and realises that he forgot to release the clamp on the tube from the bag of potassium. Penny is still receiving insulin, not the potassium she urgently needs. He quickly releases the clamp, and the potassium starts to flow. An hour later, Penny’s blood potassium levels are pretty much back to normal: she’s still ill, but out of danger. Phew! Good job he noticed in time and no-one else knows about the mistake!

Two weeks later, James’ colleague, Julia, is on duty. She makes a similar mistake treating a different patient, Peter. Except that she doesn’t notice her mistake until the bag of insulin has emptied. Because it took so long to spot, Peter needs emergency treatment. It’s touch-and-go for a while, but luckily he recovers.

Julia reports the incident through the hospital’s incident reporting system, so at least it can be prevented from happening again. She is wracked with guilt for making the mistake, but also hopes fervently that she won’t be blamed, and so punished, for what happened.

Don’t miss the near misses

Why did it happen? There are a whole bunch of problems that are nothing to do with Julia or James. Why wasn’t it standard practice to always have a second pump set up for critically ill patients in case such emergency treatment is needed? Why can’t the pump detect which bag the fluid is being pumped from? Why isn’t it really obvious whether the clamp is open or closed? Why can’t the pump detect that itself? If the first incident – a ‘near miss’ – had been reported, perhaps some of these problems might have been spotted and fixed. How many other times has it happened but not been reported?

What can we learn from this? One thing is that there are lots of ways of setting up and using systems, and some may well make them safer. Another is that reporting “near misses” is really important. They are a valuable source of learning that can alert other people to mistakes they might make and lead to a search for ways of making the system safer, perhaps by redesigning the equipment or changing the way it is used, for example – but only if people tell others about the incidents. Reporting near-misses can help prevent the same thing happening again.

The above was just a story, but it’s based on an account of a real incident… one that has been reported so it might just save lives in the future.

Report it!

The mechanisms used, as well as the culture around reporting incidents, can make a big difference to whether incidents are reported at all. Even when incidents are reported, though, the reporting systems and culture can help or hinder the learning that results.

Chrystie Myketiak at Queen Mary analysed actual incident reports for the kind of language used by those writing them. She found that the people doing the reporting used different strategies in the way they wrote the reports depending on the kind of incident. In situations where there was no obvious implication that a person had made a mistake (such as where sterilization equipment had not worked successfully) they used one kind of language. Where those involved were likely to be seen to be responsible, so blamed (when a wrong number had been entered into a medical device, for example), they used a different kind of language.

In the latter, where “user errors” might have been involved, those doing the reporting were more likely to write in a way that hid the identity of any person involved, saying “The pump was programmed”, for example, or writing about “he” or “she” rather than a named person. They were also more likely to write in a way that added ambiguity. For example, in the user error reports it was less clear whether the person making the report was the one involved, or whether someone else, such as a witness or someone not involved at all, was writing it.

Writing in these kinds of ways, and the fact that the language differed from reports where no one was likely to be blamed, suggests that those completing the reports were aware that their words might be misinterpreted by those who read them. The fact that people might be blamed hung over the reporting.

The result of adding what Chrystie called “precise ambiguity” might mean important information was inadvertently concealed, making it harder to understand why the incident happened and so work out how best to avoid it. As a result, patient safety might then not be improved even though the incident was reported. This shows one of the reasons why a strong culture of no-fault reporting is needed if a system is to be made as safe as possible. In the airline industry, which is incredibly safe, there is a clear system of no-fault reporting, with pilots, for example, being praised for reporting near misses rather than being punished for any mistake that led to the near miss.

This work was part of the EPSRC funded CHI+MED research project, led by Ann Blandford at UCL, looking at design for patient safety. In separate work on the project, Alexis Lewis, at Swansea University, explored how best to design the actual incident reporting forms as part of her PhD. A variety of forms are used in hospitals across the UK and she examined more than 20 different ones. Many had features that would make it harder than necessary for nurses and doctors to report incidents accurately, even if they wanted to report openly so that hospital staff would learn as much as possible from the incidents that did happen. Some forms failed to ask about important facts and many didn’t encourage feedback. It wasn’t clear how much detail, or even what, should be reported. She used the results to design a new reporting form that avoided the problems and that could be built into a system that encourages the reporting of incidents. Ultimately her work led to changes to the reporting form and process used within at least one health board she was working with.

People make mistakes, but safety does not come from blaming those that make them. That just discourages a learning culture. To really improve safety you need to praise those that report near misses, as well as ensuring the forms and mechanisms they must use to do so help them provide the information needed.

Updated from the archive, written by the CHI+MED team.

More on …

Magazines …

Our Books …



Much ado about nothing

by Paul Curzon, Queen Mary University of London

A blurred image of a hospital ward
Image by Tyli Jura from Pixabay

The nurse types in a dose of 100.1 mg [milligrams] of a powerful drug and presses start. It duly injects 1001 mg into the patient without telling the nurse that it didn’t do what it was told. You wouldn’t want to be that patient!

Designing a medical device is difficult. It’s not creating the physical machine that causes problems so much as writing the software that controls everything that that machine does. The software is complex and it has to be right. But what do we mean by “right”? The most obvious thing is that when a nurse sets it to do something, that is exactly what it does.

Getting it right is subtler than that though. It must also be easy to use and not mislead the nurse: the human-computer interface has to be right too. The interface is the part of the software that allows you to interact with a gadget – what buttons you press to get things done and what feedback you are given. There are some basic principles to follow when designing interfaces. One is that the person using it should always be clearly told what it is doing.

Manufacturers need ways to check their devices meet these principles: to know that they got it right.

It’s not just the manufacturers, though. Regulators have the job of checking that machines that might harm people are ‘right’ before they allow them to be sold. That’s really difficult given the software could be millions of lines long. Worse, they only have a short time to give an answer.

Million to one chances are guaranteed to happen.

Problems may only happen once in a million times a device is used. They are virtually impossible to find by having someone try possibilities to see what happens, the traditional way software is checked. Of course, if a million devices are bought, then a million to one chance will happen to someone, somewhere almost immediately!

Paolo Masci at Queen Mary University of London has come up with a way to help, and in doing so found a curious problem. He’s been working with the US regulator for medical devices – the FDA – and developed a way to use maths to find problems. It involves creating a mathematical description of what critical parts of the interface program do. Properties, like the user always knowing what is going on, can then be checked using maths. Paolo tried it out on the code for entering numbers on a real medical device and found some subtle problems. He showed that if you typed in certain numbers, the machine actually treated them as a number ten times bigger. Type in a dose of 100.1 and the machine really did set the dose to be 1001. It ignored the decimal point because on such a large dose it assumed small fractions were irrelevant. However, another part of the code allowed you to continue typing digits. Worse still, the device ignored the decimal point silently. It made no attempt to help a nurse notice the change. A busy nurse would need to be extremely vigilant to see the tiny decimal point was missing given the lack of warning.
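The kind of bug Paolo found can be sketched in a few lines. This is a hypothetical reconstruction in Python, not the device’s actual code: the function name, the threshold value and the key-handling logic are all assumptions, made purely to illustrate how silently ignoring a decimal point can turn a dose into one ten times bigger.

```python
# Hypothetical sketch of the number-entry bug described above.
# Real infusion pump code differs; this just shows how ignoring a
# decimal point on "large" values silently multiplies the dose by ten.

MAX_FRACTION_DOSE = 100  # assumed threshold above which fractions are "irrelevant"

def buggy_enter(keys):
    """Process a sequence of key presses into a dose value."""
    value = 0.0
    decimal_place = 0  # 0 means no decimal point seen (or it was ignored)
    for key in keys:
        if key == ".":
            # Bug: on large doses the decimal point is silently ignored,
            # so later digits keep scaling the whole number up.
            if value < MAX_FRACTION_DOSE:
                decimal_place = 1
        else:
            digit = int(key)
            if decimal_place:
                value += digit / (10 ** decimal_place)
                decimal_place += 1
            else:
                value = value * 10 + digit
    return value

print(buggy_enter("10.1"))   # 10.1 as expected
print(buggy_enter("100.1"))  # 1001.0 -- decimal point silently dropped!
```

A mathematical model of code like this lets a property such as “the displayed number always matches the keys pressed” be checked for every possible key sequence, which is how cases like “100.1” come to light.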

A useful thing about Paolo’s approach is that it gives you the button presses that lead to the problem. With that you can check other devices very quickly. He found that medical devices from three other manufacturers had exactly the same problem. Different teams had all programmed in the same problem. None had thought that if their code ignored a decimal point, it ought to warn the nurse about it rather than create a number ten times bigger. It turns out that different programmers are likely to think the same way and so make the same mistakes (see ‘Double or Nothing‘).

Now the problem is known, nurses can be warned to be extra careful and the manufacturers can update the software. Better still they and the regulators now have an easy way to check their programmers haven’t made the same mistake in future devices. In future, whether vigilant or not, a nurse won’t be able to get it wrong.


Further reading

This article was first published on the CS4FN website (archived copy) and there is a copy on page 8 of issue 17 of the CS4FN magazine which you can download below.


Related Magazine …



Pit-stop heart surgery

by Paul Curzon, Queen Mary University of London

(Updated from the archive)

Image by Peter Fischer from Pixabay

The Formula 1 car screams to a stop in the pit-lane. Seven seconds later, it has roared away again, back into the race. In those few seconds it has been refuelled and all four wheels changed. Formula 1 pit-stops are the ultimate in high-tech team work. Now the Ferrari pit stop team have helped improve the hospital care of children after open-heart surgery!

Open-heart surgery is obviously a complicated business. It involves a big team of people working with a lot of technology to do a complicated operation. Both during and after the operation the patient is kept alive by computer: lots of computers, in fact. A ventilator is breathing for them, other computers are pumping drugs through their veins and yet more are monitoring them so the doctors know how their body is coping. Designing how this is done is not just about designing the machines and what they do. It is also about designing what the people do – how the system as a whole works is critical.

Pass it on

One of the critical times in open-heart surgery is actually after it is all over. The patient has to be moved from the operating theatre to the intensive care unit where a ‘handover’ happens. All the machines they were connected to have to be removed, moved with them or swapped for those in the intensive care unit. Not only that, a lot of information has to be passed from the operating team to the care team. The team taking over need to know the important details of what happened and especially any problems, if they are to give the best care possible.

A research team from the University of Oxford and Great Ormond Street Hospital in London wondered if hospital teams could learn anything from the way other critical teams work. This is an important part of computational thinking – the way computer scientists solve problems. Rather than starting from scratch, find a similar problem that has already been solved and adapt its solution for the new situation.

Rather than starting from scratch,
find a similar problem
that has already been solved

Just as the pit-stop team are under intense time pressure, the operating theatre team are under pressure to be back in the operating theatre for the next operation as soon as possible. In a handover from surgery there is lots of scope for small mistakes to be made that slow things down or cause problems that need to be fixed. In situations like this, it’s not just the technology that matters but the way everyone works together around it. The system as a whole needs to be well designed and pit stop teams are clearly in the lead.

Smooth moves

To find out more, the research team watched the Ferrari F1 team practice pit-stops as well as talking to the race director about how they worked. They then talked to operating theatre and intensive care unit teams to see how the ideas might work in a hospital handover. They came up with lots of changes to the way the hospital did the handover.

For example, in a pit-stop there is one person coordinating everything – the person with the ‘lollipop’ sign that reminds the driver to keep their brakes on. In the hospital handover there was no person with that job. In the new version the anaesthetist was given the overall job for coordinating the team. Once the handover was completed that responsibility was formally passed to the intensive care unit doctor. In Formula 1 each person has only one or two clear tasks to do. In the hospital people’s roles were less obvious. So each person was given a clear responsibility: the nurses were made responsible for issues with draining fluids from the patient, anaesthetist for ventilation issues, and so on. In Formula 1 checklists are used to avoid people missing steps. Nothing like that was used in the handover so a checklist was created, to be used by the team taking on the patient.

These and other changes led to what the researchers hoped would be a much improved way of doing handovers. But was it better?

Calm efficiency saves the day

To find out they studied 50 handovers – roughly half before the change was made and half after. That way they had a direct way of seeing the difference. They used a checklist of common problems noting both mistakes made and steps that proved unusually difficult. They also noted how well the teams worked together: whether they were calm and supported each other, planned what they did, whether equipment was available when needed, and so on.

They found that the changes led to clearly better handovers. Fewer errors were made both with the technology and in passing on information. Better still, while the best performance still happened when the teams worked well, the changes meant that teamwork problems became less critical. Pit-stops and open-heart surgery may be a world apart, with one being about getting every last millisecond of speed and the other about giving as good care as possible. But if you want to improve how well technology and people work together, you need to think about more than just the gadgets. It is worth looking for solutions anywhere: children can be helped to recover from heart surgery even by the high-octane glitz of Formula 1.

More on …

Magazines …



Barbara Liskov: Byzantine birthdays

The scroll of a cello
Image by HeungSoon from Pixabay

The ACM Turing Award, which Barbara won, is the computing equivalent of a Nobel Prize. She was awarded it for more than 40 years’ work. Her early work on programming languages has been used in every significant new language that has been invented since. More recently she has moved on to problems about ‘distributed systems’: computer systems involving lots of separate computers that have to work together. That’s where the arguing comes in.

You may not think of computers as argumentative, but some of them do actually bicker quite a lot, and for good reason. Many hi-tech systems we depend on rely on the right computers getting their way, and the problems involved are surprisingly tricky to get right. Barbara Liskov’s contributions to this fiendishly difficult problem of making sure the right computers win their arguments helped her scoop the world’s top computing prize in 2009.

Barbara has been working on an area of distributed computing called ‘Byzantine fault tolerance’. It’s all about providing a good service despite the fact that just about anything can go wrong to the computers or the network connecting them. It’s so important because those computers could be doing anything from running an ecommerce site over the Internet to keeping an airliner in the air.

Happy birthday

Here’s a thought experiment to show you how tricky distributed computing problems can be. Alice is a cellist in an orchestra and it turns out that the birthday of the conductor, Cassie, is on the day they are playing a concert. Jo, the conductor’s partner, has asked Alice to arrange for the orchestra to play Happy Birthday mid-concert as a surprise. The rest of the orchestra were enthusiastic. The only trouble is they didn’t have the chance to agree which break to do it in. In fact, no one’s sure they’re definitely doing it at all, and they are now all sitting in their places ready to start.

For it to work they all have to start precisely together or it will just sound horrible. If anyone starts on their own they will just look and sound silly. Can it be done, or should they all just forget the whole thing?

Alice decides to go ahead. She can tell the others it’s on by whispering to those next to her and telling them to pass the message on. As long as her message gets to enough people across the different sections it will be ok. Won’t it?

Actually no.

The problem is: how will she know enough people did get the message? It has to be passed when people aren’t playing, so some could presumably not get it in time. How will she know? If the whispers didn’t get through and she starts to play, she will be the embarrassed one.

Are you in?

That seems an easy thing to solve though – when each person gets the message they just send one back saying, “I’m in”. If she gets enough back, she knows it’s on, doesn’t she? Ahh! There the problem lies. She knows, but no one else can be sure she knows. If any are in doubt they won’t play and it will still go horribly wrong. How does she let everyone know that enough people are willing to go for it? Alice is in the same situation she was at the start! She doesn’t know it will happen for sure and neither does anyone else.

She can start whispering messages again saying that enough people have agreed but that won’t help in the end either. How does she know all the new messages get through?

Change the problem

A computer scientist might have a solution for Alice – change the problem. Following this advice, she starts by whispering to everyone that she will stand up and conduct at an appointed time. Are they in? Now all she needs to be sure of is that when she stands up, enough people have agreed to play so that she won’t look silly. The others send back their message saying ‘I’m in’, but no one else needs to know in advance whether the song is definitely going ahead. If she doesn’t stand up they won’t play. If she does, they go ahead.

Delivering a good service

General knowledge
It’s called Byzantine Fault Tolerance after some imaginary generals in the ancient days of the Eastern Roman Empire, whose capital was Byzantium. The original problem was about how two generals could know to attack a city at the same time.

Byzantine fault tolerance is about designing this kind of system: one that involves lots of ‘agents’ (people or computers) that have to come to an agreement about what they know and will do. The aim is for them to work together to deliver a service. That service might be for an orchestra to play Happy Birthday, but is more likely to be something like taking airline bookings over the Internet, or even deciding on action to take to keep the airliner they are flying in the air. The separate computers have to agree as a group even when some could stop working, make mistakes due to software bugs or even behave maliciously due to a virus at any point. Can a way be engineered that allows the system as a whole to still behave correctly and deliver that service?

This is the problem Barbara Liskov has been working on with Miguel Castro at MIT. Of course they weren’t thinking of birthdays and orchestras. They were interested in providing the service of looking after documents so they can be accessed anytime, anywhere. A simple computer does this with a file system. It keeps track of the names of your documents and where it has stored them in its memory. When you open a document it uses the records it has kept to go and fetch it from wherever it was put. With this kind of file system, though, if something goes wrong with your machine you could lose your documents forever.

Spread it around

A way to guard against this is to create a file system that distributes copies of each file to different computers around the Internet. When you make changes, those changes are also sent to all the other computers with copies. Then if the copy on one machine is corrupted, perhaps by a hacker or just by a disk crash, the file system as a whole can still give you the correct document. That is where the computers need to start arguing. When you ask for your document back how do all the computers with (possibly different) copies decide which is the correct, uncorrupted version? That sounds easy, but as the orchestra example shows, as soon as you create a situation where the different agents (whether human or computer) are distributed, and worse you can’t trust anyone or anything for sure, there are lots of subtle ways it can all go horribly wrong.

The way Barbara and Miguel’s solution to this problem works is similar to what Alice was doing. One computer acts as what is called the ‘primary’ (Alice played this role). It is where the request from the client (Jo) goes. The primary sends out a request to all the backup machines for the document, just like Alice’s whispered messages. All the backups reply with their copy of the document. As soon as more than some predetermined number come back with the same document, then that is the good copy.
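The counting step can be sketched very simply. This toy Python function is not Barbara and Miguel’s actual algorithm (their protocol involves several rounds of messages between the replicas); it just illustrates the idea of accepting a document once enough replicas agree. The function name, the example document names and the choice of threshold are assumptions: with n = 3f + 1 replicas, of which at most f may be faulty, f + 1 matching replies are enough to trust the answer.

```python
# Toy sketch of the quorum idea: accept a document once at least
# f + 1 replicas return identical copies, where f is the maximum
# number of faulty replicas tolerated.

from collections import Counter

def pick_good_copy(replies, f):
    """Return the document that at least f + 1 replicas agree on, or None."""
    counts = Counter(replies)
    doc, votes = counts.most_common(1)[0]
    return doc if votes >= f + 1 else None

# Four replicas (f = 1): one corrupted copy is outvoted.
replies = ["my_thesis_v2", "my_thesis_v2", "CORRUPTED", "my_thesis_v2"]
print(pick_good_copy(replies, f=1))  # my_thesis_v2
```

The point is that no single replica is trusted: only agreement across enough independent copies counts, just as Alice only stands up to conduct once enough of the orchestra have said they are in.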

Not so simple

Of course the detail of Barbara and Miguel’s method is a lot trickier than that. They’ve had to figure out how to cope if something goes wrong with the primary (Alice herself) to ensure that the client still gets their document. Their version also works without any synchronisation to make things happen in lockstep (suppose Alice is at the back so can’t stand up and conduct to keep things in time). There are lots of other details in Barbara and Miguel’s version too. Messages are timestamped, for example, so that the recipients can tell if a message is new or just a copy of an old one.
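The timestamping idea can be sketched too. This illustrative Python fragment is not the real message format of their protocol: it just shows how remembering the highest timestamp seen from each sender lets a replica throw away stale or replayed copies of old messages.

```python
# Sketch of the timestamping idea: each replica remembers the highest
# timestamp (sequence number) it has seen from each sender and discards
# anything older, so delayed or replayed copies of old messages are ignored.

last_seen = {}  # sender -> highest timestamp accepted so far

def accept(sender, timestamp, payload):
    """Accept a message only if it is newer than any previous one from sender."""
    if timestamp <= last_seen.get(sender, -1):
        return None  # stale or replayed message: drop it
    last_seen[sender] = timestamp
    return payload

print(accept("backup-1", 1, "copy A"))  # copy A
print(accept("backup-1", 1, "copy A"))  # None (replay dropped)
print(accept("backup-1", 2, "copy B"))  # copy B
```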

Practically perfect

The particularly special thing about Barbara and Miguel’s way of providing fault tolerance, though, is that it doesn’t take an excessively long time. Various people had come up with solutions before, but they were so slow no one could really use them. The new method is so fast it’s almost as if you weren’t bothering about fault tolerance at all. Better still, the fact that it doesn’t need synchronisation – no conducting – means it also works when the replicated services are on the Internet where the computers act independently and there is no way to keep them in lockstep.

Barbara’s work might never actually help with an orchestral surprise like Alice’s. However, because of it, future computers will be in a better position to shrug off their own kind turning rogue due to hackers, cosmic rays or even just plain old breakdowns. Not a bad reason to have a byzantine argument.

– Paul Curzon, Queen Mary University of London

(from the archive, originally in our special issue “The women are here”)

More on …

Related Magazines …



The First Law of Humans

Preventing robots from hurting us

A chess board with pieces lined up
Image by Pexels from Pixabay 

The first rule for humans around robots is apparently that they should not do anything too unexpected…

A 7-year-old child playing chess in a chess tournament has had his finger broken by a chess-playing robot, which grabbed it as the boy made his move. The child was blamed by one of the organisers, who claimed that it happened because the boy “broke the safety rules”! The organisers also apparently claimed that the robot itself was perfectly safe!

What seems to have happened is, after the robot played its move, the boy played his own move very quickly before the robot had finished. Somehow this triggered the wrong lines of code in the robot’s program: instructions that were intended for some other situation. In the actual situation of the boy’s hand being over the board at the wrong time it led the robot to grab his finger and not let go.

Spoiler Alert

The situation immediately brings to mind the classic science fiction story “Moxon’s Master” by Ambrose Bierce, published way back in 1899. It is the story of a chess playing automaton (ie robot) and what happens when it is checkmated when playing a game with its creator, Moxon. It flies into a dangerous rage. There, however, the problems apparently arise because the robot has developed emotions and so emotional reactions. In both situations, however, a robot intended simply to play chess is capable of harming a human as a result.

The three laws of robotics

Isaac Asimov is famous for his laws of robotics: fundamental, unbreakable rules built into the ‘brain’ of all robots in his fictional world, precisely to stop this sort of situation. The rules he formulated were:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
A robot carefully touching fingers with a human
Image by Pete Linforth from Pixabay 

Clearly, had they been in place, the chess robot would not have harmed the boy, and Moxon’s automaton would not have been able to do anything too bad as a result of its temper either.
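The three laws form a strict priority ordering: a later law only applies if it doesn’t conflict with an earlier one. As a toy illustration (my own sketch, not anything from Asimov or a real robot), a proposed action could be vetted against the laws in order, with the action described by hypothetical flags:

```python
# A sketch of Asimov's Three Laws as a strict priority ordering: a proposed
# action is vetoed by the first law it breaks. The flag names are invented
# for illustration only.

def allowed(action):
    """Return True if the action passes all three laws, checked in priority order."""
    # First Law: never harm a human, by action or by inaction.
    if action.get("harms_human") or action.get("ignores_human_in_danger"):
        return False
    # Second Law: obey human orders, unless obeying would break the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: self-preservation, subordinate to the first two laws.
    if action.get("endangers_self") and not action.get("needed_to_obey_or_protect"):
        return False
    return True

print(allowed({"harms_human": True}))   # False: harming a human is always vetoed
print(allowed({"disobeys_order": True,
               "order_would_harm_human": True}))  # True: refusing a harmful order is fine
```

Of course, as the article goes on to explain, the hard part is not this ordering but filling in those flags: deciding, from sensors alone, what actually counts as harming a human.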

Asimov devised his rules as a result of a discussion with the science fiction magazine editor John W. Campbell, Jr. He then spent much of his science fiction career writing robot stories about how humans could end up being hurt despite the apparently clear rules. That aside, their key idea was that, to ensure robots were safe around people, they would need built-in logic, impossible to circumvent, to stop them from hurting anyone. They needed a fail-safe system monitoring their actions that would take over when breaches were possible. Clearly this chess-playing robot was not “perfectly safe”, and not even fail-safe: if it had been, the boy would not have been harmed whatever he did. The robot did not have anything at all akin to a working, unbreakable First Law programmed into it.

Dangerous Machines

Asimov’s robots were intelligent and able to think for themselves in a more general sense than any robot that currently exists. The First Law essentially prevented them from deciding to harm a human, not just from doing so by accident. The day may come, though, when robots really can think for themselves, and then a First Law will matter. In any case, machines can harm humans without being able to think. That humans need to be wary around robots is obvious from the numerous injuries, and even fatalities, in factories using industrial robots in the decades since they were introduced. They are dangerous machines. Fortunately, the carnage inflicted on children is at least not on the scale of the industrial accidents of the Industrial Revolution. It is still a problem, though. People do have to take care and follow safety rules around them!

Rather than humans having to obey safety laws, perhaps we ought to be taking Asimov’s Laws more seriously for all robots. Why can’t those laws just be built in? It is certainly an interesting research problem to think about. The idea of a fail-safe is standard in engineering, so it is not that general idea that is the problem. The problem is that, rather than intelligence being needed for robots to harm us, intelligence is needed for them to avoid doing so.

Implementing the First Law

Let’s imagine building the First Law into chess-playing robots, and in particular into the one that hurt the boy. For starters, the robot would have needed to recognise that the boy WAS a human, so should not be harmed. It would also need to recognise that his finger was a part of him and that gripping it would hurt him. It would need to know that it was gripping his finger (not a piece) at the time. It would then need a way to stop before it was too late, and to do no harm in the stopping. A robot clearly needs to understand a lot about the world to be able to avoid hurting people in general.

Some of this is almost within our grasp. Computers can certainly do a fairly good job of recognising humans now using image recognition code. They can even recognise individuals, so that first fundamental part, knowing what is and isn’t a human, is more or less possible now, just not perfectly yet. Recognising objects in general is harder. The chess robot presumably has code for recognising pieces already, though to a robot a finger perhaps even looks like a piece. To avoid causing harm in any situation, it needs to be able to recognise what lots of objects are, not just chess pieces, and to tell them apart from what is part of a human. Object recognition like this is possible in well-defined situations. It is much harder to manage in general, especially when the precise objects have never been encountered before. Harder still is probably recognising all the ways of doing harm to the human in front of it, including with any of the objects that are around.
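The recognition step could gate every grip command: refuse to grip anything not positively identified as a chess piece. Here is a minimal sketch of that idea, where the classifier is just a stub over labelled objects (the real classifier is exactly the hard research problem described above):

```python
# A hypothetical sketch of a recognition gate on the grip command.
# is_human_flesh() stands in for a real image/tactile classifier; here it is
# just a stub over pre-labelled sensor readings.

def is_human_flesh(obj):
    # Placeholder: real recognition of fingers, hands and arms in arbitrary
    # scenes is the genuinely hard part.
    return obj["label"] in {"finger", "hand", "arm"}

def safe_to_grip(obj):
    """Only grip objects recognised as chess pieces; never anything human,
    and never anything the robot cannot identify."""
    if is_human_flesh(obj):
        return False
    return obj["label"] == "chess_piece"

print(safe_to_grip({"label": "chess_piece"}))  # True
print(safe_to_grip({"label": "finger"}))       # False
print(safe_to_grip({"label": "unknown"}))      # False: refuse when unsure
```

Note the design choice in the last case: an object the robot cannot classify is treated as unsafe to grip, which is the fail-safe default.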

Staying in control

The code to do all this would also have to be, in some sense, at a higher level of control than the code making the robot take actions, as it has to be able to overrule them ALWAYS. For the chess robot, there was presumably a bug that allowed it to grip a human’s finger, as no programmer will have intended that, so it isn’t about monitoring the code itself. The fail-safe code has to monitor what is actually happening in the world and be in a position to take over. It also can’t just make the robot freeze, as that may be enough to break a finger already in the robot’s grip (and that may have been part of the problem for the boy). Nor can it just jerk the arm back suddenly: what if another child (a toddler perhaps) has just crawled up behind it? It has to monitor the effects of its own commands too!

A simple version of such a monitor is probably straightforward, though. The robot’s computer architecture just needs to be designed accordingly. One way robots are designed is for new modules to be built on top of existing ones, giving new, more complex behaviour as a result, which possibly fits what is needed here. Having additional computers acting as monitors, taking over when others go wrong, is also not really that difficult (bugs in their own code aside) and is a standard idea for mission-critical systems.
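The layered idea can be sketched very simply: a safety monitor sits above the ordinary chess-playing controller and, on every cycle, can veto or replace its command based on what it observes in the world, not on what the controller’s code intended. This is only an illustration, with invented state flags and command names:

```python
# A minimal sketch of layered control: the safety monitor always has the
# final say over the chess controller's commands.

def chess_controller(state):
    # Ordinary behaviour: reach for the next piece.
    return "grip"

def safety_monitor(state, command):
    """Override layer, checked on every cycle whatever the controller says."""
    if state.get("hand_in_workspace"):
        # Neither freeze mid-grip nor fling the arm back: release gently and
        # retreat slowly, monitoring the effect of the override itself.
        return "release_and_retreat_slowly"
    return command

def control_cycle(state):
    # The monitor wraps the controller, so a controller bug cannot bypass it.
    return safety_monitor(state, chess_controller(state))

print(control_cycle({"hand_in_workspace": False}))  # "grip"
print(control_cycle({"hand_in_workspace": True}))   # "release_and_retreat_slowly"
```

The point of the structure is that the monitor wraps the controller, so even a buggy controller command passes through the safety check before reaching the arm.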

So it is probably all the complexity of the world, with unexpected things happening in it, that makes a general version of the First Law hard at the moment… If Asimov’s laws in all their generality are currently a little beyond us, perhaps we should just think about the problem in another, more limited way (at least for now)…

Can a chess playing robot be safe?

In the chess game situation, if anything is moving in front of the robot then it should perhaps just keep well out of the way. To do so needs only monitoring code that can detect movement in a small fixed area. It doesn’t need to understand anything about the world apart from movement versus non-movement. That is easily within what computers can do – even some simple toy robots can detect movement. The monitoring code would still need to be able to override the rest of the code of course, bugs included.
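Movement detection really is that simple in principle: compare successive camera frames over the board area and flag movement when enough pixels change. A toy version, using plain lists of pixel brightness values in place of real camera frames and thresholds picked purely for illustration:

```python
# A toy frame-differencing movement detector: no understanding of the world
# is needed beyond movement versus non-movement.

def movement_detected(prev_frame, frame, pixel_change=10, min_pixels=3):
    """Return True if enough pixels changed brightness by more than pixel_change."""
    changed = sum(
        1 for a, b in zip(prev_frame, frame) if abs(a - b) > pixel_change
    )
    return changed >= min_pixels

still = [100] * 20                       # the board area, nothing happening
moved = [100] * 20
moved[5:9] = [180, 180, 180, 180]        # a hand passing over four pixels

print(movement_detected(still, still))   # False: board is static, safe to move
print(movement_detected(still, moved))   # True: something moved, so wait
```

The `pixel_change` threshold ignores small lighting flickers, and `min_pixels` ignores single-pixel sensor noise; a real system would tune both, but the principle is no more complicated than this.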

Why, also, could the robot grip a finger with enough pressure to break it, anyway? Perhaps it just needed more accurate pressure sensors in its fingers, together with control code that made it let go if it felt too much resistance back. After all, chess pieces don’t resist much!
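The “let go if it resists” idea amounts to a grip loop that ramps up pressure only while the resistance stays in the range a chess piece would produce, and releases immediately otherwise. A sketch, with made-up units and sensor models (a real gripper would read an actual force sensor):

```python
# A sketch of a resistance-limited grip: abort the grip the moment anything
# pushes back harder than a chess piece could. Units and thresholds are
# invented for illustration.

MAX_PIECE_RESISTANCE = 2.0  # chess pieces barely resist at all

def grip(read_resistance, target_pressure=1.5, step=0.5):
    """Ramp up grip pressure, releasing if resistance exceeds the piece range."""
    pressure = 0.0
    while pressure < target_pressure:
        pressure += step
        if read_resistance(pressure) > MAX_PIECE_RESISTANCE:
            return "released"          # something soft is pushing back: let go
    return "gripped"

piece = lambda p: 0.2 * p              # a piece gives almost nothing back
finger = lambda p: 5.0 * p             # a finger resists far more

print(grip(piece))    # "gripped"
print(grip(finger))   # "released"
```

Because the check runs at every small pressure increase, the grip aborts long before reaching a force that could hurt anyone.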

And one last idea, if a little more futuristic. A big research area at the moment is soft robotics: robots that are soft and squidgy, not hard and solid, precisely so they can do less harm. Perhaps if the chess robot’s claw-like fingers had been totally soft and squishy, it would not have harmed the boy even if it did grab his finger.

Had the robot’s designers tried hard enough, they surely could have come up with solutions to make it safer, even if their debugging skills were not good enough to prevent the actual bug that caused the problem. That needs safety to be a very high priority from the outset, though: and certainly safety that isn’t just pushed onto humans to be responsible for, as the organisers did.

We shouldn’t be blaming children for not obeying safety rules when they are given what is essentially a hard, industrial robot to play with. Doing so just lets the robot makers off the hook from even trying to make their robots safer, when they clearly could do more. When disasters happen, don’t blame the people: improve the system. Better still, perhaps we should be thinking far more about doing the research that will one day allow us to actually implement Asimov’s Laws in all robots, and so have a safety culture built into robotics. Then people would not have to be quite so wary around robots, and certainly would not have to follow safety rules themselves. That surely is the robot’s job.

– Paul Curzon, Queen Mary University of London



