Let the brain take the strain

Cockpit controls
Image by Michi S from Pixabay

Whenever humans have complicated, repetitive jobs to do, designers set to work making computer systems that do those jobs automatically. Autopilot systems in airplanes are a good example. Flying a commercial airliner is incredibly complex, so a computer system helps the pilots by doing a lot of the boring, repetitive stuff automatically. But in any automated system, there has to be a balance between human and computer so that the human still has ultimate control. It’s a strange characteristic of human-computer interaction: the better an automated program, the more its users rely on it, and the more dangerous it can be.

The problem is that the unpredictable always happens. Automated systems run into situations the designers haven’t anticipated, and humans are still much better at dealing with the unexpected. If humans can’t take back control from the system, accidents can happen. For example, some airplanes used to have autopilots that took control of a landing until the wheels touched the ground. But then, one rainy night, a runway in Warsaw was so wet that the plane began skidding along the runway when it touched down. The skid was so severe that the sensors never registered the touchdown of the plane, and so the pilots couldn’t control the brakes. The airplane only stopped when it had overshot the runway. The designers had relied so much on the automation that the humans couldn’t fix the problem.

Many designers now think it’s better to give some control back to the operators of any automated system. Instead of doing everything, the computer helps the user by giving them feedback. For example, if a smart car detects that it’s too close to the car ahead of it, the accelerator becomes more difficult to press. The human brain is still much better than any computer system at coming up with solutions to unexpected situations. Computers are much better off letting our brains do the tricky thinking.

– Paul Curzon, Queen Mary University of London


This article was first published on the original CS4FN website and a copy is available on page 19 of Issue 15 of the CS4FN magazine, which you can download as a PDF. All of our previous issues are free to download as PDFs.



Much ado about nothing

A blurred image of a hospital ward
Image by Tyli Jura from Pixabay

The nurse types a dose of 100.1 mg (milligrams) of a powerful drug into a medical device and presses start. The device duly injects 1001 mg into the patient without telling the nurse that it didn’t do what it was told. You wouldn’t want to be that patient!

Designing a medical device is difficult. It’s not creating the physical machine that causes problems so much as writing the software that controls everything the machine does. The software is complex and it has to be right. But what do we mean by “right”? The most obvious thing is that when a nurse sets the device to do something, that is exactly what it does.

Getting it right is subtler than that, though. The device must also be easy to use and must not mislead the nurse: the human-computer interface has to be right too. The interface is the software that allows you to interact with a gadget – which buttons you press to get things done and what feedback you are given. There are some basic principles to follow when designing interfaces. One is that the person using a device should always be clearly told what it is doing.

Manufacturers need ways to check their devices meet these principles: to know that they got it right.

It’s not just the manufacturers, though. Regulators have the job of checking that machines that might harm people are ‘right’ before they allow them to be sold. That’s really difficult given that the software could be millions of lines long. Worse, they only have a short time to give an answer.

Million-to-one chances are guaranteed to happen.

Problems may only happen once in every million uses of a device. That makes them virtually impossible to find by having someone try possibilities to see what happens – the traditional way software is checked. Of course, if a million devices are bought, then a million-to-one chance will happen to someone, somewhere, almost immediately!
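
To see why, here is a quick back-of-the-envelope check (a sketch in Python; the once-per-million failure rate is the article’s figure, everything else is illustrative):

```python
# If a failure occurs once per million uses, the chance of at least one
# failure across a million uses (say, one use on each of a million devices):
p_failure = 1e-6
uses = 1_000_000
p_at_least_one = 1 - (1 - p_failure) ** uses
print(f"{p_at_least_one:.0%}")  # about 63% -- and real devices are used
                                # many times a day, so it is near-certain
```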

Paolo Masci at Queen Mary University of London has come up with a way to help, and in doing so found a curious problem. Working with the US regulator for medical devices – the FDA – he developed a way to use maths to find problems. It involves creating a mathematical description of what critical parts of the interface program do. Properties, like the user always knowing what is going on, can then be checked using maths.

Paolo tried it out on the number-entry code of a real medical device and found some subtle problems. He showed that if you typed in certain numbers, the machine actually treated them as a number ten times bigger. Type in a dose of 100.1 and the machine really did set the dose to 1001. It ignored the decimal point because, on such a large dose, it assumed small fractions were irrelevant. However, another part of the code allowed you to continue typing digits. Worse still, the device ignored the decimal point silently: it made no attempt to help the nurse notice the change. A busy nurse would need to be extremely vigilant to spot that the tiny decimal point was missing, given the lack of warning.
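
To see how such a bug can slip in, here is a minimal sketch of key-by-key number entry in Python. It is an illustration of the kind of logic described above, not the actual device’s code: the function names, the keypad and the 100-unit threshold are all assumptions.

```python
MAX_FRACTION_DOSE = 100.0  # assumed threshold above which fractions are treated as irrelevant

def enter_dose(keys):
    """Simulate pressing keys on the dose keypad; return the stored dose."""
    display = ""
    for key in keys:
        if key == ".":
            if "." in display:
                continue  # a second decimal point is ignored (assumption)
            if display and float(display) >= MAX_FRACTION_DOSE:
                # THE BUG: on a large dose the decimal point is silently
                # dropped, so any digit typed after it multiplies the
                # dose by ten instead of adding a fraction.
                continue
        display += key
    return float(display) if display not in ("", ".") else 0.0

print(enter_dose("100.1"))  # 1001.0 -- ten times the intended dose
print(enter_dose("10.1"))   # 10.1 -- small doses behave as expected
```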

A useful thing about Paolo’s approach is that it gives you the exact button presses that lead to the problem. With those, you can check other devices very quickly. He found that medical devices from three other manufacturers had exactly the same problem: different teams had all programmed in the same mistake. None had thought that if their code ignored a decimal point, it ought to warn the nurse rather than create a number ten times bigger. It turns out that different programmers are likely to think the same way, and so make the same mistakes (see ‘Double or Nothing’).
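
Paolo’s method works by mathematical modelling rather than trial and error, but the idea of checking a property can be illustrated with a brute-force search over short key sequences (my sketch, reusing the hypothetical enter_dose above). The property is that the dose the device stores must equal the dose the keys spell out:

```python
from itertools import product

def typed_value(keys):
    """The number a key sequence spells out, decimal point respected."""
    try:
        return float("".join(keys))
    except ValueError:
        return None  # skip malformed sequences such as "1..2"

def find_counterexamples(max_presses=5):
    """Yield key sequences where the stored dose differs from the typed dose."""
    for length in range(1, max_presses + 1):
        for keys in product("0123456789.", repeat=length):
            typed = typed_value(keys)
            if typed is not None and enter_dose("".join(keys)) != typed:
                yield "".join(keys)

first = next(find_counterexamples())
print(first, "is stored as", enter_dose(first))
# prints: 100.0 is stored as 1000.0
```

Each counterexample is simply a sequence of button presses, which is exactly what makes it so quick to try the same test on other devices.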

Now that the problem is known, nurses can be warned to be extra careful and the manufacturers can update the software. Better still, they and the regulators now have an easy way to check that their programmers haven’t made the same mistake in future devices. In future, whether vigilant or not, a nurse won’t be able to get it wrong.

– Paul Curzon, Queen Mary University of London



This blog is funded by EPSRC on research agreement EP/W033615/1.
