Oh no! Not again…

What a mess. There’s flour all over the kitchen floor. A fortnight ago I opened the cupboard to get sugar for my hot chocolate. As I pulled out the sugar, it knocked against the bag of flour which was too close to the edge… Luckily the bag didn’t burst and I cleared it up quickly before anyone found out. Now it’s two weeks later and exactly the same thing just happened to my brother. This time the bag did burst and it went everywhere. Now he’s in big trouble for being so clumsy!

Flour cascading everywhere
Image by Anastasiya Komarova from Pixabay

In safety-critical industries, like healthcare and the airline industry especially, it is really important that there is a culture of reporting incidents, including near misses. It turns out that it also matters that the mechanisms for reporting issues are appropriately designed, and that there is a no-blame culture, so that people are encouraged to report incidents and to do so accurately and without ambiguity.

Was the flour incident my brother’s fault? Should he have been more careful? He didn’t choose to put the sugar in a high cupboard with the flour. Maybe it was my fault? I didn’t choose to put the sugar there either. But I didn’t tell anyone about the first time it happened. I didn’t move the sugar to a lower cupboard so it was easier to reach either. So maybe it was my fault after all? I knew it was a problem, and I didn’t do anything about it. Perhaps thinking about blame is the wrong thing to do!

Now think about your local hospital.

James is a nurse, working in intensive care. Penny is really ill and is being given insulin by a machine that pumps it directly into her vein. The insulin is causing a side effect though – a drop in blood potassium level – and that is life threatening. They don’t have time to set up a second pump, so the doctor decides to stop the insulin for a while and to give a dose of potassium through a second tube controlled by the same pump. James sets up the bag of potassium and carefully programs the pump to deliver it, then turns his attention to his next task. A few minutes later, he glances at the pump again and realises that he forgot to release the clamp on the tube from the bag of potassium. Penny is still receiving insulin, not the potassium she urgently needs. He quickly releases the clamp, and the potassium starts to flow. An hour later, Penny’s blood potassium levels are pretty much back to normal: she’s still ill, but out of danger. Phew! Good job he noticed in time and no-one else knows about the mistake!

Two weeks later, James’ colleague, Julia, is on duty. She makes a similar mistake treating a different patient, Peter. Except that she doesn’t notice her mistake until the bag of insulin has emptied. Because it took so long to spot, Peter needs emergency treatment. It’s touch-and-go for a while, but luckily he recovers.

Julia reports the incident through the hospital’s incident reporting system, so at least it can be prevented from happening again. She is wracked with guilt for making the mistake, but also fervently hopes that she won’t be blamed, and so punished, for what happened.

Don’t miss the near misses

Why did it happen? There are a whole bunch of problems that are nothing to do with Julia or James. Why wasn’t it standard practice to have a second pump set up for critically ill patients in case such emergency treatment was needed? Why can’t the pump detect which bag the fluid is being pumped from? Why isn’t it really obvious whether the clamp is open or closed? Why can’t the pump detect that? If the first incident – a ‘near miss’ – had been reported, perhaps some of these problems might have been spotted and fixed. How many other times has it happened but not been reported?

What can we learn from this? One thing is that there are lots of ways of setting up and using systems, and some may well make them safer. Another is that reporting ‘near misses’ is really important. They are a valuable source of learning: they can alert other people to mistakes they might make, and they can lead to a search for ways of making the system safer, perhaps by redesigning the equipment or changing the way it is used – but only if people tell others about the incidents. Reporting near misses can help prevent the same thing happening again.

The above was just a story, but it’s based on an account of a real incident… one that has been reported so it might just save lives in the future.

Report it!

The mechanisms used to report incidents, as well as the culture around reporting, can make a big difference to whether incidents are reported at all. And even when incidents are reported, the reporting systems and culture can help or hinder the learning that results.

Chrystie Myketiak at Queen Mary analysed actual incident reports for the kind of language used by those writing them. She found that the people doing the reporting used different strategies in the way they wrote the reports, depending on the kind of incident. In situations where there was no obvious implication that a person had made a mistake (such as where sterilisation equipment had not worked properly) they used one kind of language. Where those involved were likely to be seen as responsible, and so blamed (such as when a wrong number had been entered into a medical device), they used a different kind of language.

In the latter, where ‘user errors’ might have been involved, those doing the reporting were more likely to write in a way that hid the identity of any person involved, for example saying “The pump was programmed” or writing about ‘he’ or ‘she’ rather than a named person. They were also more likely to write in a way that added ambiguity. For example, in the user-error reports it was less clear whether the person making the report was the one involved or whether someone else, such as a witness or someone not involved at all, was writing it.

Writing in these ways, and the fact that the style differed from reports where no one was likely to be blamed, suggests that those completing the reports were aware that their words might be misinterpreted by those who read them. The fact that people might be blamed hung over the reporting.

The result of adding what Chrystie called “precise ambiguity” might be that important information was inadvertently concealed, making it harder to understand why the incident happened and so to work out how best to avoid it. As a result, patient safety might not be improved even though the incident was reported. This shows one of the reasons why a strong culture of no-fault reporting is needed if a system is to be made as safe as possible. In the airline industry, which is incredibly safe, there is a clear system of no-fault reporting, with pilots, for example, being praised for reporting near misses rather than being punished for any mistake that led to them.

This work was part of the EPSRC-funded CHI+MED research project, led by Ann Blandford at UCL, looking at design for patient safety. In separate work on the project, Alexis Lewis, at Swansea University, explored how best to design the actual incident reporting forms as part of her PhD. A variety of forms are used in hospitals across the UK and she examined more than 20 different ones. Many had features that would make it harder than necessary for nurses and doctors to report incidents accurately and openly, even when they wanted to, so that hospital staff could learn as much as possible from the incidents that did happen. Some forms failed to ask about important facts and many didn’t encourage feedback. It wasn’t clear how much detail, or even what, should be reported. She used the results to design a new reporting form that avoided the problems and that could be built into a system that encourages the reporting of incidents. Ultimately her work led to changes to the reporting form and process used within at least one health board she was working with.

People make mistakes, but safety does not come from blaming those that make them. That just discourages a learning culture. To really improve safety you need to praise those who report near misses, as well as ensuring that the forms and mechanisms they must use to do so help them provide the information needed.

Updated from the archive, written by the CHI+MED team.

EPSRC supports this blog through research grant EP/W033615/1. 

Much ado about nothing

by Paul Curzon, Queen Mary University of London

A blurred image of a hospital ward
Image by Tyli Jura from Pixabay

The nurse types in a dose of 100.1 mg [milligrams] of a powerful drug and presses start. It duly injects 1001 mg into the patient without telling the nurse that it didn’t do what it was told. You wouldn’t want to be that patient!

Designing a medical device is difficult. It’s not creating the physical machine that causes problems so much as writing the software that controls everything the machine does. The software is complex and it has to be right. But what do we mean by “right”? The most obvious thing is that when a nurse sets it to do something, that is exactly what it does.

Getting it right is subtler than that, though. The device must also be easy to use and must not mislead the nurse: the human-computer interface has to be right too. The interface is the part of the software that allows you to interact with a gadget – what buttons you press to get things done and what feedback you are given. There are some basic principles to follow when designing interfaces. One is that the person using the device should always be clearly told what it is doing.

Manufacturers need ways to check their devices meet these principles: to know that they got it right.

It’s not just the manufacturers, though. Regulators have the job of checking that machines that might harm people are ‘right’ before they allow them to be sold. That’s really difficult given that the software could be millions of lines long. Worse, they only have a short time to give an answer.

Million to one chances are guaranteed to happen.

Problems may only happen once in a million times a device is used. They are virtually impossible to find by having someone try possibilities to see what happens, the traditional way software is checked. Of course, if a million devices are bought, then a million to one chance will happen to someone, somewhere almost immediately!

Paolo Masci at Queen Mary University of London has come up with a way to help, and in doing so found a curious problem. He’s been working with the US regulator for medical devices – the FDA – and has developed a way to use maths to find problems. It involves creating a mathematical description of what critical parts of the interface program do. Properties, like the user always knowing what is going on, can then be checked using maths. Paolo tried it out on the number-entry code of a real medical device and found some subtle problems. He showed that if you typed in certain numbers, the machine actually treated them as a number ten times bigger. Type in a dose of 100.1 and the machine really did set the dose to be 1001. It ignored the decimal point because, on such a large dose, it assumed small fractions were irrelevant; however, another part of the code allowed you to continue typing digits. Worse still, the device ignored that decimal point silently. It made no attempt to help a nurse notice the change. A busy nurse would need to be extremely vigilant to see the tiny decimal point was missing, given the lack of warning.
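To see how a bug like that can arise, here is a minimal sketch in Python of the kind of flawed number-entry logic described above. It is an illustration only, not any manufacturer’s real code: the threshold of 100 and the exact rules are our inventions, chosen to reproduce the behaviour.

```python
def enter_digits(keys: str) -> float:
    """A deliberately flawed number-entry routine, sketching the bug described above."""
    value = 0.0
    decimal_place = 0             # how many digits have been typed after the point
    for key in keys:
        if key == '.':
            decimal_place = 1     # start accepting fractional digits
            continue
        digit = int(key)
        if value >= 100:
            # The bug: on 'large' values the decimal point is silently ignored,
            # so every further digit makes the whole number ten times bigger.
            value = value * 10 + digit
        elif decimal_place:
            value += digit / 10 ** decimal_place
            decimal_place += 1
        else:
            value = value * 10 + digit
    return value

print(enter_digits("100.1"))   # 1001.0 -- ten times the intended dose, no warning
print(enter_digits("10.1"))    # 10.1   -- small doses behave as expected
```

A mathematical description of code like this lets you check a property such as ‘the value always matches what the keystrokes suggest’ – a property that fails here for keys like 100.1.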

A useful thing about Paolo’s approach is that it gives you the button presses that lead to the problem. With that you can check other devices very quickly. He found that medical devices from three other manufacturers had exactly the same problem. Different teams had all programmed in the same problem. None had thought that if their code ignored a decimal point, it ought to warn the nurse about it rather than create a number ten times bigger. It turns out that different programmers are likely to think the same way and so make the same mistakes (see ‘Double or Nothing‘).

Now that the problem is known, nurses can be warned to be extra careful and the manufacturers can update the software. Better still, they and the regulators now have an easy way to check that their programmers haven’t made the same mistake in future devices. In future, whether vigilant or not, a nurse won’t be able to get it wrong.


Further reading

This article was first published on the CS4FN website (archived copy) and there is a copy on page 8 of issue 17 of the CS4FN magazine which you can download below.


This page is funded by EPSRC on research agreement EP/W033615/1.


Double or nothing: an extra copy of your software, just in case

by Paul Curzon, Queen Mary University of London

Ariane 5 on the launchpad
Ariane 5 on the launch pad. Photo Credit: (NASA/Chris Gunn) Public Domain via Wikimedia Commons.

If you spent billions of dollars on a gadget you’d probably like it to last more than a minute before it blows up. That’s what happened to a European Space Agency rocket. How do you make sure the worst doesn’t happen to you? How do you make machines reliable?

A powerful way to improve reliability is to use redundancy: double things up. A plane with four engines can keep flying if one fails. Worried about a flat tyre? You carry a spare in the boot. These situations are about making physical parts reliable. Most machines are a combination of hardware and software though. What about software redundancy?

You can have spare copies of software too. Rather than a single version of a program you can have several copies running on different machines. If one program goes wrong another can take over. It would be nice if it was that simple, but software is different to hardware. Two identical programs will fail in the same way at the same time: they are both following the same instructions so if one goes wrong the other will too. That was vividly shown by the maiden flight of the Ariane 5 rocket. Less than 40 seconds from launch things went wrong. The problem was to do with a big number that needed 64 bits of storage space to hold it. The program’s instructions moved it to a storage place with only 16 bits. With not enough space, the number was mangled to fit. That led to calculations by its guidance system going wrong. The rocket veered off course and exploded. The program was duplicated, but both versions were the same so both agreed on the same wrong answers. Seven billion dollars went up in smoke.
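You can see the kind of mangling involved in a few lines of Python. This simulates squeezing a too-big value into a 16-bit signed slot; the variable name and number are made up for illustration, not taken from the Ariane code.

```python
def to_int16(value: int) -> int:
    """Simulate storing an integer in a 16-bit signed storage slot."""
    low_bits = value & 0xFFFF   # keep only the bottom 16 bits
    return low_bits - 0x10000 if low_bits >= 0x8000 else low_bits

horizontal_bias = 500_000            # a made-up, too-large sensor value
print(to_int16(horizontal_bias))     # -24288: silently mangled into nonsense
```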

Can you get round this? One solution is to get different teams to write programs to do the same thing. The separate teams may make mistakes, but surely they won’t all get the same thing wrong! Run the programs on different machines and let them vote on what to do. Then, as long as more than half agree on the right answer, the system as a whole will do the right thing. That’s the theory, anyway. Unfortunately, in practice it doesn’t always work. Nancy Leveson, an expert in software safety from MIT, ran an experiment where different programmers were given the same programs to write. She found they wrote code that gave the same wrong answers. Even if it had used independently written redundant code, it’s still possible Ariane 5 would have exploded.
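The voting part is easy enough to code. Here is a minimal Python sketch; the three ‘versions’ are toy stand-ins we invented to show why the scheme can fail.

```python
from collections import Counter

def vote(results):
    """Return the majority answer from redundant versions, or fail loudly."""
    answer, count = Counter(results).most_common(1)[0]
    if count * 2 > len(results):
        return answer
    raise RuntimeError("No majority: the redundant versions disagree")

# Three 'independently written' versions of the same calculation (toy stand-ins).
version_a = lambda x: x * 2        # correct
version_b = lambda x: x * 2 + 1    # shares a mistake...
version_c = lambda x: x * 2 + 1    # ...with this one, as in Leveson's experiment

# The two versions with the common mistake outvote the correct one.
print(vote([f(21) for f in (version_a, version_b, version_c)]))   # 43, not 42
```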

Redundancy is a big help but it can’t guarantee software works correctly. When designing systems to be highly reliable you have to assume things will still go wrong. You must still have ways to check for problems and to deal with them so that a mistake (whether by human or machine) won’t turn into a disaster.



This page is funded by EPSRC on research agreement EP/W033615/1.


Pit-stop heart surgery

by Paul Curzon, Queen Mary University of London

(Updated from the archive)

Image by Peter Fischer from Pixabay

The Formula 1 car screams to a stop in the pit-lane. Seven seconds later, it has roared away again, back into the race. In those few seconds it has been refuelled and all four wheels changed. Formula 1 pit-stops are the ultimate in high-tech team work. Now the Ferrari pit stop team have helped improve the hospital care of children after open-heart surgery!

Open-heart surgery is obviously a complicated business. It involves a big team of people working with a lot of technology to do a complicated operation. Both during and after the operation the patient is kept alive by computer: lots of computers, in fact. A ventilator is breathing for them, other computers are pumping drugs through their veins and yet more are monitoring them so the doctors know how their body is coping. Designing how this is done is not just about designing the machines and what they do. It is also about designing what the people do – how the system as a whole works is critical.

Pass it on

One of the critical times in open-heart surgery is actually after it is all over. The patient has to be moved from the operating theatre to the intensive care unit, where a ‘handover’ happens. All the machines they are connected to have to be removed, moved with them or swapped for those in the intensive care unit. Not only that, but a lot of information has to be passed from the operating team to the care team. The team taking over need to know the important details of what happened, especially any problems, if they are to give the best care possible.

A research team from the University of Oxford and Great Ormond Street Hospital in London wondered if hospital teams could learn anything from the way other critical teams work. This is an important part of computational thinking – the way computer scientists solve problems. Rather than starting from scratch, find a similar problem that has already been solved and adapt its solution for the new situation.

Rather than starting from scratch,
find a similar problem
that has already been solved

Just as the pit-stop team are under intense time pressure, the operating theatre team are under pressure to be back in the operating theatre for the next operation as soon as possible. In a handover from surgery there is lots of scope for small mistakes to be made that slow things down or cause problems that need to be fixed. In situations like this, it’s not just the technology that matters but the way everyone works together around it. The system as a whole needs to be well designed and pit stop teams are clearly in the lead.

Smooth moves

To find out more, the research team watched the Ferrari F1 team practice pit-stops as well as talking to the race director about how they worked. They then talked to operating theatre and intensive care unit teams to see how the ideas might work in a hospital handover. They came up with lots of changes to the way the hospital did the handover.

For example, in a pit-stop there is one person coordinating everything – the person with the ‘lollipop’ sign that reminds the driver to keep their brakes on. In the hospital handover there was no person with that job. In the new version, the anaesthetist was given the overall job of coordinating the team. Once the handover was completed, that responsibility was formally passed to the intensive care unit doctor. In Formula 1 each person has only one or two clear tasks to do. In the hospital, people’s roles were less obvious. So each person was given a clear responsibility: the nurses were made responsible for issues with draining fluids from the patient, the anaesthetist for ventilation issues, and so on. In Formula 1, checklists are used to avoid people missing steps. Nothing like that was used in the handover, so a checklist was created, to be used by the team taking on the patient.

These and other changes led to what the researchers hoped would be a much improved way of doing handovers. But was it better?

Calm efficiency saves the day

To find out they studied 50 handovers – roughly half before the change was made and half after. That way they had a direct way of seeing the difference. They used a checklist of common problems, noting both mistakes made and steps that proved unusually difficult. They also noted how well the teams worked together: whether they were calm and supported each other, planned what they did, whether equipment was available when needed, and so on.

They found that the changes led to clearly better handovers. Fewer errors were made both with the technology and in passing on information. Better still, while the best performance still happened when the teams worked well, the changes meant that teamwork problems became less critical. Pit-stops and open-heart surgery may be a world apart, with one being about getting every last millisecond of speed and the other about giving the best care possible. But if you want to improve how well technology and people work together, you need to think about more than just the gadgets. It is worth looking for solutions anywhere: children can be helped to recover from heart surgery even by the high-octane glitz of Formula 1.

EPSRC supports this blog through research grant EP/W033615/1. 

Nurses in the mist

by Paul Curzon, Queen Mary University of London

(From the archive)

A gorilla hugging a baby gorilla
Image by Angela from Pixabay

What do you do when your boss tells you “go and invent a new product”? Lock yourself away and stare out the window? Go for a walk, waiting for inspiration? Medical device system engineers Pat Baird and Katie Hansbro did some anthropology.

Dian Fossey is perhaps the most famous anthropologist. She spent over a decade living in the jungle with gorillas so that she could understand them in a way no one had done before. She started to see what it was really like to be a gorilla, showing that their fierce King Kong image was wrong and that they are actually gentle giants: social animals with individual personalities and strong family ties. Her book and film, ‘Gorillas in the Mist’, tells the story.

Pat and Katie work for Baxter Healthcare. They are responsible for developing medical devices like the infusion pumps hospitals use to pump drugs into people to keep them alive or reduce their pain. Hospitals don’t buy medical devices like we buy phones, of course. They aren’t bought just because they have lots of sexy new features. Hospitals buy new medical devices if they solve real problems. They want solutions that save lives, or save money, and if possible both! To invent something new that sells, you ideally need to solve problems your competitors aren’t even aware of. Challenged to come up with something new, Pat and Katie wondered whether, given how productive the equivalent was for Dian Fossey, immersing themselves in hospitals with nurses would give the advantage their company was after. Their idea was that understanding what it was really like to be a nurse would make a big difference to their ability to design medical devices: devices that helped with the real problems nurses had, rather than those the sales people said were problems. After all, the sales people only talk to the managers, and the managers don’t work on the wards. They were right.

Taking notes

They took a team on a 3-month hospital tour, talking to people, watching them do their jobs and keeping notes of everything. They noted things like the layout of rooms and how big they were, recorded the temperature, how noisy it was, how many flashing lights and so on. They spent a lot of time in the critical care wards, where infusion pumps were used the most, but they also went to lots of other wards and found the pumps being used in other ways. They didn’t just talk to nurses either. Patients are moved around to have scans or change wards, so they followed them, talking to the porters doing the pushing. They observed the rooms where the devices were cleaned and stored. They looked for places where people were doing ad hoc things, like sticking post-it note reminders on machines. That might be an opportunity for them to help. They looked at the machines around the pumps. That told them about opportunities to make the devices fit into the bigger tasks the nurses were using them as part of.

The hot Texan summer was a problem

So did Katie and Pat come up with a new product, as their boss wanted? Yes. They developed a whole new service that is bringing in the money, but they did much more too. They showed that anthropology brings lots of advantages for medical device companies. One part of Pat’s job, for example, is to troubleshoot when his customers are having problems. He found after the study that, because he understood so much more about how pumps were used, he could diagnose problems more easily. That saved time and money for everyone. For example, touch screen pumps were being damaged. It was because, when they were stored together on a shelf, their clips were scratching the ones behind. They had also seen patients sitting outside in the ambulance bays with their pumps for long periods, smoking. Not their problem, apart from the fact that it was Texas and the temperature outside was higher than the safe operating limit of the electronics. Hospitals don’t get that hot, so no one imagined there might be a problem. Now they knew.

Porters shouldn’t be missed

Pat and Katie also showed that to design a really good product you have to design for people you might not even think about, never mind talk to. By watching the porters they saw there was a problem when a patient was on lots of drugs, each with its own pump. The porter pushing the bed also had to pull along a gaggle of pumps. How do you do that? Drag them behind by the tubes? Maybe the manufacturers can design in a way to make it easy. No one had ever bothered talking to the porters before. After all, they are the low-paid people, doing the grunt jobs, expected to be invisible. Except they are important and their problems matter to patient safety. The advantages didn’t stop there, either. Because of all that measuring, the company had the raw data to create models of lots of different ward environments that all the team could use when designing. It meant they could explore in a virtual environment how well introducing new technology might fix problems (or even see what problems it would cause).

All in all, anthropology was a big success. It turns out observing the detail matters. It gives a commercial advantage, and all that mundane knowledge of what really goes on allowed the designers to redesign their pumps to fix potential problems. That makes the machines more reliable, and saves money on repairs. It’s better for everyone.

Talking to porters, observing cupboards, watching ambulance bays: sometimes it’s the mundane things that make the difference. To be a great systems designer you have to deeply understand all the people and situations you are designing for, not just the power users and the normal situations. If you want to innovate, like Pat and Katie, take a leaf out of Dian Fossey’s book. Try anthropology.

EPSRC supports this blog through research grant EP/W033615/1. 

Screaming Headline Kills!!!

A pile of newspapers
Image by congerdesign from Pixabay

Most people in hospital get great treatment but if something does go wrong the victims often want something good to come of it. They want to understand why it happened and be sure it won’t happen to anyone else. Medical mistakes can make a big news story though with screaming headlines vilifying those ‘responsible’. It may sell papers but it could also make things worse.

If press and politicians are pressurising hospitals to show they have done something, hospitals may just sack the person who made the mistake. They may then not improve things, meaning the same thing could happen again if it was an accident waiting to happen. Worse, if we’re too quick to blame and punish someone, other people will be reluctant to report their mistakes, and without that sharing we can’t learn from them. One of the reasons flying is so safe is that pilots always report ‘near misses’, knowing they will be praised for doing so rather than getting into trouble. It’s far better to learn from mistakes where nothing really bad happens than wait for a tragedy.

Share mistakes to learn from them

Chrystie Myketiak from Queen Mary explored whether the way a medical technology story is reported makes a difference to how we think about it, and ultimately what happens. She analysed news stories about three similar incidents in the UK, America and Canada. She wanted to see what the papers said, but also how they said it. The press often sensationalise stories, but Chrystie found that this didn’t always happen. Some news stories did imply that the person who’d made the mistake was the problem (it’s rarely that simple!) but others were more careful to highlight that they were busy people working under stressful conditions and that the mistakes only happened because there were other problems. Regulations in Canada mean the media can’t report on specific details of a story while it is being investigated. Chrystie found that, in the incidents she looked at, this led to much more reasoned reporting. In that kind of environment hospitals are more likely to improve rather than just blame staff. How the hospital handled a case also affected what was written – being open and honest about a problem is better than ignoring requests for comment and pretending there isn’t a problem.

Everyone makes mistakes (if you don’t believe that, the next time you’re at a magic show, make sure none of the tricks fool you!). Often mistakes happen because the system wasn’t able to prevent them. Rather than blame, retrain or sack someone, it’s far better to improve the system. That way something good will come of tragedies.

– Paul Curzon, Queen Mary University of London (From the archive)

This page is funded by EPSRC on research agreement EP/W033615/1.


Microwave Racing

Making everyday devices easier to use

An image of a microwave (cartoon), all in grey with dials and a button.
Microwave image by Paul from Pixabay

When you go shopping for a new gadget, like a smartphone or perhaps a microwave, are you mostly wowed by its sleek looks? Do you drool over its long list of extra functionality? Do you then not use those extra functions because you don’t know how? Rather than just drooling, why not go to the races to help find a device you will actually use, because it is easy to use!

On your marks, get set… microwave

Take an everyday gadget like a microwave. They have been around a while, so manufacturers have had a long time to improve their designs and make them easy to use. You wouldn’t expect there to be problems, would you! There are lots of ways a gadget can be harder to use than necessary – more button presses, lots of menus to get lost in, more special key sequences to forget, easy opportunities to make mistakes, no obvious feedback to tell you what it’s doing… Just trying to do simple things with each alternative is one way to check out how easy they are to use. How simple is it to cook some peas with your microwave? Could it be even simpler? Dom Furniss, a researcher at UCL, decided to video some microwave racing as a fun way to find out…

Everyday devices still cause people problems even when they are trying to do really simple things. What is clear from microwave racing is that some really are easier to use than others. Does it matter? Perhaps not, if it’s just an odd minute wasted here or there cooking dinner, or if actually, despite your drooling in the shop, you don’t really care that you never use any of those ‘advanced’ features because you can never remember how to.

Better design helps avoid mistakes

Would it matter to you more, though, if the device in question was a medical device that keeps a patient alive, but where a mistake could kill? There are lots of such gadgets: infusion pumps, for example. They are the machines you are hooked up to in a hospital via tubes. They pump life-saving drugs, nutrient-rich solutions or extra fluids to keep you hydrated directly into your body. If the nurse makes a mistake setting the rate or volume then it could make you worse rather than better. Surely then you want the device to help the nurse get it right.

Making safer medical devices is what CHI+MED, the research project Dom worked* on, was actually about. While the consequences are completely different, the core task in setting an infusion pump is actually very similar to setting a microwave – “set a number for the volume of drug and another for the rate to infuse it and hit start” versus “set a number for the power and another for the cooking time, then hit start”. The same types of design solutions (both good and bad) crop up in both cases. Nurses have to set such gadgets day in, day out. In an intensive care unit, they will be using several at a time with each patient. Do you really want to waste lots of minutes of such a nurse’s time day in, day out? Do you want a nurse to easily be able to make mistakes in doing so?

User feedback

What the microwave racing video shows is that the designers of gadgets can make them trivially simple to use. They can also make them very hard to use if they focus more on the looks and functions of the thing than on ease of use. Manufacturers are only likely to take ease of use seriously if the people doing the buying make it clear that they care. Mostly we give the impression that we want features, so that is what we get. Microwave racing may not be the best way to do it (follow the links below to explore more about the ways professionals actually evaluate devices), but next time you are out looking for a new gadget, check how easy it is to use before you buy… especially if the gadget is an infusion pump and you happen to be the person placing orders for a hospital!

– Dom Furniss and Paul Curzon, 2015

*The CHI+MED project ended in 2015 and this issue of CS4FN was one of the project’s outputs.

Magazines …

This article was originally published on the CS4FN website and on page 16 of Issue 17 of CS4FN, “Machines making medicine safer“, which is free to download as a PDF, along with all of our other free material, here: https://cs4fndownloads.wordpress.com/

This page is funded by EPSRC on research agreement EP/W033615/1.


Playing Bridge, but not as we know it – the sound of the Human Harp

Looking upwards at the curve of a bright white suspension bridge gleaming in the sunshine with a blue sky behind it
Elizabeth Quay Bridge in Australia. Image by Sam Wilson, CC BY-SA 4.0, via Wikimedia Commons

Clifton, Forth and Brooklyn are all famous suspension bridges where, through a feat of engineering greatness, the roadway hangs from cables slung from sturdy towers. The Human Harp project, created by Di Mainstone, Artist in Residence at Queen Mary, involves attaching digital sensors to bridge cables, connected by lines to the performer’s clothing. As the bridge vibrates with traffic and people, and as the performer moves, the angle and length of the lines are measured and different sounds produced. In effect, human and bridge become one augmented instrument, making music mutually. Find out more at www.humanharp.org


– Paul Curzon, Queen Mary University of London

Magazines

This article was originally published on CS4FN and a copy can also be found (on page 17) in Issue 17 of CS4FN, Machines making medicine safer, which you can download as a PDF.

All of our free material can be downloaded here: https://cs4fndownloads.wordpress.com


This page is funded by EPSRC on research agreement EP/W033615/1.


The red sock of doom – trying to catch mistakes before they happen

A red sock in with your white clothes wash – guess what happened next? What can you do to prevent it from happening again? Why should a computer scientist care? It turns out that red socks have something to teach us about medical gadgets.

Washing machine with red sock in white washing. Image by Dominic Furniss from Errordiary and Flickr, CC BY-NC-SA 2.0, provided for this article

How can we stop red socks from ever turning our clothes pink again? We need a strategy. Here are some possibilities.

  • Don’t wear red socks.
  • Take a ‘how to wash your clothes’ course.
  • Never make mistakes.
  • Get used to pink clothes.

Let’s look at them in turn – will they work?

Don’t wear red socks: That might help but it’s not much use if you like red socks or if you need them to match your outfit. And how would it help when you wear purple, blue or green socks? Perhaps your clothes will just turn green instead.

Take a ‘how to wash your clothes’ course: Training might help: you’d certainly learn that a red sock and white clothes shouldn’t be mixed, though you probably knew that anyway. It won’t stop you making a similar mistake again.

Never make misteaks: Just never leave a red sock in your white wash. If only! Unfortunately, everyone makes mistakes – that’s why we have erasers on pencils and a delete key on computers – so this idea just won’t work.

Get used to pink clothes: Maybe, but it’s not ideal. It might not be so great turning up to school in a pink shirt if everyone else is wearing a white one.

What if the problem’s more serious?

We can probably live with pink clothes, but what happens if a similar mistake is made in a hospital? Not socks, but medicines. We know everyone makes mistakes, so how do we stop those mistakes from harming patients? Special machines are used in hospitals to pump medicine directly into a patient’s arm, for example, and a nurse needs to tell the machine how much medicine to give – if the dose is wrong the patient won’t get better, and might even get worse.

What have we learned from our red sock strategies? We can’t stop giving patients medicine and we don’t want to get used to mistakes, so our first and fourth strategies won’t work. We can give nurses more training, but everyone makes mistakes even when trained, so the second suggestion isn’t good enough either, and it doesn’t stop someone else making the same mistake. As for never making mistakes, we’ve already seen that simply isn’t an option.

We need to stop thinking of mistakes as a problem that people make and instead as a problem that systems thinking can solve. That way we can find solutions that work for everyone. One possibility is to check whether changes to the device might make mistakes less likely in the first place.

Errors? Or arrows?

Most medical machines are controlled with a panel with numbered keys (a number keypad), like on mobile phones, or up and down arrows (an arrow keypad), like you sometimes get on alarm clocks. CHI+MED researchers have been asking questions like: which way is best for entering numbers quickly, but also which is best for entering numbers accurately? They’ve been running experiments where people use different keypads and are timed, and their mistakes are recorded. The researchers also track where people are looking while they use the keypads. Another approach has been to create mathematical descriptions of the different keypads and then mathematically explore how bad different errors might be.

It turns out that if you can see the numbers on a keypad in front of you it’s very easy to type them in quickly, though not always correctly! You need to check the display to see if you have actually put in the right ones. Worse, the mistakes that are made are often massive – ten times too much or more. The arrow keypads are a little slower to use, but because people are already looking at the display (to see what numbers are appearing) they can help nurses be more accurate: not only are fewer mistakes made, but those that are made tend to be smaller.
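To give a flavour of that kind of mathematical exploration, here is a toy Python model of a single keystroke slip on each keypad style. It is our own simplification, not CHI+MED’s actual model: we assume one slip means a dropped or doubled key on a number pad, and one press too many or too few on an arrow pad.

```python
import random

def numberpad_slip(intended: float) -> float:
    """One random slip on a number keypad: a key dropped or pressed twice."""
    keys = str(intended)
    i = random.randrange(len(keys))
    if random.random() < 0.5:
        mangled = keys[:i] + keys[i + 1:]          # a key missed out
    else:
        mangled = keys[:i] + keys[i] + keys[i:]    # a key doubled
    try:
        return float(mangled)
    except ValueError:
        return intended             # gibberish like '10..5' would be rejected

def arrowpad_slip(intended: float, step: float = 0.1) -> float:
    """One random slip on an arrow keypad: one press too many or too few."""
    return intended + random.choice([-step, step])

intended, trials = 10.5, 10_000
print(max(abs(numberpad_slip(intended) - intended) for _ in range(trials)))
# up to 100.0: e.g. '110.5' from a doubled '1', or '105' when the '.' is dropped
print(max(abs(arrowpad_slip(intended) - intended) for _ in range(trials)))
# always 0.1: an arrow slip is only ever one step out
```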

Smart machines help users

A medical device that actively helps users avoid mistakes helps everyone using it (and the patients it’s being used on!). Changing the interface to reduce errors isn’t the only solution, though. Modern machines have ‘intelligent drug libraries’ that contain information about the medicines and what sort of doses are likely and safe. Someone might still mistakenly tell the machine to give too high a dose, but now it can catch the error and ask the nurse to double-check. That’s like having a washing machine that can spot a brightly coloured sock in a white wash and refuses to switch on till it has been removed.
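At heart, a drug library check is just a range check the pump runs before it will start. Here is a minimal sketch; the drug name and the limits are invented for the example, whereas real libraries are clinically curated for each hospital.

```python
# Illustrative soft and hard rate limits per drug, in mL per hour (made-up numbers).
DRUG_LIBRARY = {
    "insulin": {"soft_low": 0.5, "soft_high": 15.0, "hard_high": 50.0},
}

def check_rate(drug: str, rate: float) -> str:
    """Decide whether a programmed rate should run, warn, or be refused."""
    limits = DRUG_LIBRARY[drug]
    if rate > limits["hard_high"]:
        return "REFUSE: above the hard limit, the pump will not start"
    if rate < limits["soft_low"] or rate > limits["soft_high"]:
        return "WARN: unusual dose, ask the nurse to double-check"
    return "OK: start infusing"

print(check_rate("insulin", 101.0))   # a tenfold slip is refused, not infused
print(check_rate("insulin", 10.1))    # the intended dose passes
```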

Building machines with a better ability to catch errors (remember, we all make mistakes) and helping users to recover from them easily is much more reliable than trying to get rid of all possible errors by training people. It’s not about avoiding red socks, or errors, but about putting better systems in place to make sure that we find them before we press that big ‘Start’ button.

Further reading / watching
You can find a copy of this article on pages 4 and 5 of issue 17 of CS4FN (Machines Making Medicine Safer). This article was originally published on CHI+MED.