Computers compress files to save space. But compression can also allow them to create music!
Music is special. It’s one of the things, like language, that make us human, separating us from animals. It’s also special as art, because it doesn’t exist as an object in the world – it depends on human memory. “But what about CDs? They’re objects in the world”, you might say, and you’d be right, but the CD is not the music. The CD contains data files of numbers. Those numbers are translated by electronics into the movements of a loudspeaker, to create sound waves. Even the sound waves aren’t music! They only become music when a human hears them, because understanding music is about noticing repetition, variation and development in its structure. That’s why songs have verses and choruses: so we can find a starting point to understand their structure. In fact, we’re so good at understanding musical structure, we don’t even notice we’re doing it. What’s more, music affects us emotionally: we get excited (using the same chemicals that get us excited when we’re in love or ready to flee danger) when we hear the anthem section of a trance track, or recognise the big theme returning at the end of a symphony.
Surprisingly, brains seem to understand musical structure in a way that’s like the algorithms computer scientists use to compress data. It’s better to store data compressed than uncompressed, because it takes less storage space. We think that’s why brains do it too.
Even more surprisingly, brains also seem to be able to learn the best way to store compressed music data. Computers use bits as their basic storage unit, but we can make groups of bits work like other things (numbers, words, pictures, angry birds…); brains seem to do something similar. For example, pitch (high vs. low notes) in sequence is an important part of music: we build melodies by lining up notes of different pitch one after the other. As we learn to hear music (starting before birth, and continuing throughout life), we learn to remember pitch in ever more efficient ways, giving our compression algorithms better and better chances to compress well. And so we remember music better.
Our team use compression algorithms to understand how music works in the human mind. We have discovered that, when our programs compress music, they can sometimes predict musical structures, even if neither they nor any human has “heard” them before. To compress something, you find large sections of repeated data and replace each with a label saying “this is one of those”. It’s like labelling a book with its title: if you’ve read Lord of the Rings, when I say the title you know what I mean without me telling the story. If we do this to the internal structure of music, there are little repetitions everywhere, and the order in which they appear is what makes up the music’s structure.
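The labelling trick can be sketched in a few lines of code. This is only a toy illustration (the function names and the fixed four-note run length are invented for this sketch, not the team’s actual research software): it replaces any repeat of an earlier four-note run with a short “this is one of those” label, and can undo the substitution to get the music back.

```python
def compress(notes, size=4):
    """Replace repeats of earlier runs of `size` notes with ('ref', position) labels."""
    seen = {}   # maps a run of notes to where it first appeared
    out = []
    i = 0
    while i < len(notes):
        chunk = tuple(notes[i:i + size])
        # Only label a repeat if the first copy ends before this one starts
        if len(chunk) == size and seen.get(chunk, i) + size <= i:
            out.append(('ref', seen[chunk]))   # "this is one of those"
            i += size
        else:
            if len(chunk) == size and chunk not in seen:
                seen[chunk] = i
            out.append(notes[i])
            i += 1
    return out

def decompress(data, size=4):
    """Expand the labels back into the notes they stand for."""
    out = []
    for item in data:
        if isinstance(item, tuple) and item[0] == 'ref':
            out.extend(out[item[1]:item[1] + size])  # copy the earlier run
        else:
            out.append(item)
    return out

melody = ['C', 'E', 'G', 'E'] * 3 + ['F', 'A']  # a repetitive little 'tune'
packed = compress(melody)
assert decompress(packed) == melody   # nothing is lost...
assert len(packed) < len(melody)      # ...but it takes less space
```

Real compressors (and, we think, brains) are far cleverer about which repetitions to label, but the principle is the same: repetition is what makes compression possible.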
If we compress music, but then decompress it in a different way, we can get a new piece of music in a similar style or genre. We have evidence that human composers do that too!
What our programs are doing is learning to create new music. There’s a long way to go before they produce music you’ll want to dance to – but we’re getting there!
Photo credit: Galton Box by Klaus-Dieter Keller, Public Domain, via Wikimedia Commons, via the Wikipedia page for the Galton board
Have you played the seaside arcade game where shiny metal balls drop down to ping, ping off little metal pegs and settle in one of a series of channels? After you have fired lots of balls, did you notice a pattern as the silver spheres collected in the channels? A smooth, glistening dome of tiny balls forms: a bell curve. High scores are harder to get than lower ones. Francis Galton pops up again*, but this time as a fellow Victorian trendsetter for future computer design.
Francis Galton invented this special combination of row after row of offset pins and narrow receiving channels to demonstrate a statistical idea called the normal distribution: the bell curve. Balls are more likely to bounce their way to the centre, distributing themselves in an elegant sweep down to the left and right edges of the board. But instead of ball bearings, Galton used beans, so it was called the bean machine. The point here, though, is that the machine does a computation – it computes the bell curve.
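You can see the computation the bean machine does by simulating it. In this sketch (the names are invented for illustration), each ball bounces left or right at random at every row of pins, and the number of right-bounces decides which channel it lands in:

```python
import random

def galton(balls=10000, rows=10):
    """Simulate a bean machine: count how many balls land in each channel."""
    channels = [0] * (rows + 1)
    for _ in range(balls):
        # Each pin bounces the ball right (1) or left (0) with equal chance;
        # the total number of right-bounces is the channel it lands in.
        position = sum(random.random() < 0.5 for _ in range(rows))
        channels[position] += 1
    return channels

counts = galton()
# The middle channels collect far more balls than the edges: a bell curve.
```

Print `counts` and you will see the dome shape appear in the numbers: the machine has computed the bell curve, one bouncing ball at a time.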
Skip forward 100 years and ‘Boson Samplers’, based on Galton’s bean machine, are being used to drive forward the next big thing in computer design, quantum computers.
Instead of beans or silver balls, computer scientists fire photons – particles of light – through minuscule channels on optical chips. These tiny bundles of energy bounce and collide to create a unique pattern: a distribution, though one that a normal digital computer would find hard to calculate. Set the chip up in different ways and the patterns that result correspond to different computations: the device is computing answers to the calculations set for it.
Through developing these specialised quantum circuits scientists are bouncing beams of light forwards on the path that will hopefully lead to conventional digital technology being replaced with the next generation of supercomputers.
*Francis Galton appears earlier in Issue 20: you can read more about him on page 15 of the PDF. Although a brilliant mathematician, he held views about people that are unacceptable today. In 2020 University College London (UCL) changed the name of its Galton Lecture Theatre, which had previously been named in his honour, to Lecture Theatre 115.
EPSRC supports this blog through research grant EP/W033615/1.
One spring Saturday afternoon in San Francisco, a queue of people stretched down the pavement from a neighbourhood market. There was no shortage of other food shops nearby, so why were hundreds of people waiting to buy everything from crisps to cat litter at this one place? Because that shop had pledged to donate more than a fifth of that day’s profits to improving its environmental footprint.
Pillow fights and parties
The organisation behind the busy shopping day is called Carrotmob. The tactics they used to summon so many people to the tiny market in San Francisco had already been working all over the world for less serious stuff. From a huge pillow fight in New York’s Times Square to a mass disco at Victoria Station in London where people danced along to their MP3 players, the concept of the flashmob can seem to create a party out of thin air. From a simple idea, word can spread over social networking sites, email and word of mouth until a few people have turned into a huge crowd.
Start the bidding
Carrotmob’s founder, Brent Schulkin, wanted to try and entice businesses into going green using a language he thought they’d understand: cash. In return for getting loads of new customers to buy things, the owners had to give back some of their windfall profit to the Earth. To test his idea he went round to food shops in his neighbourhood. He said he could bring lots of extra customers to the shop on a particular day, and asked each of them how much of that day’s profit they’d be willing to spend on making their businesses more environmentally friendly. K&D Market won the bidding war by promising to spend 22% of the profits putting in greener lighting and making their fridges more energy-efficient. Now that K&D had agreed to the deal, Brent had to bring in the punters. He needed a flashmob.
Flashmobs work because it’s now so easy to stay in touch with large numbers of people. If we find out about a cool event we can share it with all our friends just by making one post on sites like Facebook or Twitter. We can make plans to do something as a group just by sending a few texts. When lots of people spread word around like this, suddenly a small idea like Carrotmob, armed with only a website and a few videos, can drop an hour-long queue on the doorstep of a market in San Francisco.
Success!
It’s not easy to enjoy yourself when you’re waiting for an hour to buy a packet of instant noodles, but that’s another advantage of the flashmob: the party atmosphere, the feeling that you’re part of something big. The results were big: the impromptu shoppers brought in more than $9000 – four times what the shop ordinarily rings up on a Saturday afternoon. Lots of the purchases went to a food bank, so even more people shared in the benefits. In the end the shop did well, the Earth did well, and the Carrotmobbers got a party. Plus the good feeling you get from helping the environment probably stays with you longer than the good feeling from getting hit in the face with a pillow.
The Formula 1 car screams to a stop in the pit-lane. Seven seconds later, it has roared away again, back into the race. In those few seconds it has been refuelled and all four wheels changed. Formula 1 pit-stops are the ultimate in high-tech team work. Now the Ferrari pit stop team have helped improve the hospital care of children after open-heart surgery!
Open-heart surgery is obviously a complicated business. It involves a big team of people working with a lot of technology to do a complicated operation. Both during and after the operation the patient is kept alive by computer: lots of computers, in fact. A ventilator is breathing for them, other computers are pumping drugs through their veins and yet more are monitoring them so the doctors know how their body is coping. Designing how this is done is not just about designing the machines and what they do. It is also about designing what the people do – how the system as a whole works is critical.
Pass it on
One of the critical times in open-heart surgery is actually after it is all over. The patient has to be moved from the operating theatre to the intensive care unit where a ‘handover’ happens. All the machines they were connected to have to be removed, moved with them or swapped for those in the intensive care unit. Not only that, a lot of information has to be passed from the operating team to the care team. The team taking over need to know the important details of what happened and especially any problems, if they are to give the best care possible.
A research team from the University of Oxford and Great Ormond Street Hospital in London wondered if hospital teams could learn anything from the way other critical teams work. This is an important part of computational thinking – the way computer scientists solve problems. Rather than starting from scratch, find a similar problem that has already been solved and adapt its solution for the new situation.
Rather than starting from scratch, find a similar problem that has already been solved
Just as the pit-stop team are under intense time pressure, the operating theatre team are under pressure to be back in the operating theatre for the next operation as soon as possible. In a handover from surgery there is lots of scope for small mistakes to be made that slow things down or cause problems that need to be fixed. In situations like this, it’s not just the technology that matters but the way everyone works together around it. The system as a whole needs to be well designed and pit stop teams are clearly in the lead.
Smooth moves
To find out more, the research team watched the Ferrari F1 team practice pit-stops as well as talking to the race director about how they worked. They then talked to operating theatre and intensive care unit teams to see how the ideas might work in a hospital handover. They came up with lots of changes to the way the hospital did the handover.
For example, in a pit-stop there is one person coordinating everything – the person with the ‘lollipop’ sign that reminds the driver to keep their brakes on. In the hospital handover there was no person with that job. In the new version the anaesthetist was given the overall job of coordinating the team. Once the handover was completed that responsibility was formally passed to the intensive care unit doctor. In Formula 1 each person has only one or two clear tasks to do. In the hospital people’s roles were less obvious. So each person was given a clear responsibility: the nurses were made responsible for issues with draining fluids from the patient, the anaesthetist for ventilation issues, and so on. In Formula 1 checklists are used to avoid people missing steps. Nothing like that was used in the handover, so a checklist was created, to be used by the team taking on the patient.
These and other changes led to what the researchers hoped would be a much improved way of doing handovers. But was it better?
Calm efficiency saves the day
To find out they studied 50 handovers – roughly half before the change was made and half after. That way they had a direct way of seeing the difference. They used a checklist of common problems noting both mistakes made and steps that proved unusually difficult. They also noted how well the teams worked together: whether they were calm and supported each other, planned what they did, whether equipment was available when needed, and so on.
They found that the changes led to clearly better handovers. Fewer errors were made both with the technology and in passing on information. Better still, while the best performance still happened when the teams worked well, the changes meant that teamwork problems became less critical. Pit-stops and open-heart surgery may be worlds apart, with one being about gaining every last millisecond of speed and the other about giving the best care possible. But if you want to improve how well technology and people work together, you need to think about more than just the gadgets. It is worth looking for solutions anywhere: children can be helped to recover from heart surgery even by the high-octane glitz of Formula 1.
Paul Curzon, Queen Mary University of London (Updated from the archive)
In a galaxy far, far away cyber security matters. So much so, that the whole film Rogue One is about it. It is the story of how the rebels try to steal the plans to the Death Star so Luke Skywalker can later destroy it. Protecting information is everything. The key is good authentication. The Empire screws up!
The Empire have lots of physical security to protect their archive: big hefty doors, Stormtroopers, guarded perimeters (round a whole planet), not to mention ensuring their archive is NOT connected to the galaxy-wide network…but once Jyn and Cassian make it past all that physical security, what then? They need to prove they are allowed to access the data. They need to authenticate! Authentication is about how you tell who a person is and so what they are, and are not, allowed to do. The Empire have a high-tech authentication system. To gain access you have to have the right handprint. Luckily, for the rest of the series, Jyn easily subverts it.
Sharing a secret
Authentication is based on the idea that those allowed in (a computer, a building, a network,…) possess something that no one else has: a shared secret. That is all a password is: a secret known to only you and the computer. The PIN you use to lock your phone is a secret shared between you and your phone. The trouble is that secrets are hard to remember and if we write them down or tell them to someone else they no longer work as a secret.
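A shared secret only works if the computer can check it without giving it away. One common approach (this is a minimal sketch, not any particular real system, and real systems add per-user ‘salt’ and deliberately slow hashing) is for the computer to store only a scrambled ‘hash’ of the password, and compare hashes at login:

```python
import hashlib
import hmac

# The computer stores only a hash of the secret, never the secret itself.
# (Password and function names here are invented for illustration.)
stored_hash = hashlib.sha256(b'my-secret-password').hexdigest()

def check(attempt):
    """Hash the attempt and compare it with the stored hash."""
    attempt_hash = hashlib.sha256(attempt.encode()).hexdigest()
    # compare_digest compares in constant time, to avoid leaking
    # information about the secret through how long the check takes
    return hmac.compare_digest(attempt_hash, stored_hash)

assert check('my-secret-password')   # the right secret gets in
assert not check('guess1234')        # a wrong guess does not
```

Even if someone steals the stored hash, they still don’t have the secret itself, which is the whole point of storing it scrambled.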
A secure token
A different kind of authentication is based on physical things or ‘tokens’. You only get in if you have one. Your door key provides this kind of check on your identity. Your bank card provides it too. Tokens work as long as only the people allowed access actually possess them. They have to be impossibly hard to copy to be secure. They can also be stolen or lost (and you can forget to take them with you when you set off to save the Galaxy).
Biometrics
Biometrics, as used by the Empire, avoid these problems. They rely on a feature unique to each person, like their fingerprint; other biometrics rely on the uniqueness of the pattern in your iris, or of your voice print. They have the advantage that you can’t lose them or forget them. They can’t be stolen or inadvertently given to someone else. Of course, for each galactic species, from Ewok to Wookiee, you need a feature unique to each member of that species.
Just because biometrics are high-tech doesn’t mean they are foolproof, as the Empire found out. If a biometric can be copied, and the copy can fool the system, then it can be broken. The rebels didn’t even need to copy the handprint. They just killed a person who had access and put his hand against the reader. If it works when the person is dead, their hand is just a token that someone else can possess. In real-life 21st century Japan, at least one unfortunate driver had his finger cut off by thieves stealing his car, as it used his fingerprint as the key! Biometric readers need to be able to tell whether the thing being read is part of a living person.
The right side of the door
Of course if the person with access can be coerced, biometrics are no help. Perhaps all Cassian needed to do was hold a blaster to the archivist’s head to get in. If a person with access is willing to help it may not matter whether they have to be alive or not (except of course to them). Part of the flaw in the Empire’s system is that the archivist was outside the security perimeter. You could get to him and his console without any authentication. Better to have him working on the other side of the door, the other side of the authentication system.
Anything one can do …
The Empire could have used ‘Multi-factor authentication’: ask for several pieces of evidence. Your bank cashpoint asks for a shared secret (something you know – your PIN) and a physical token (something you possess – your bank card). Had the Empire asked for both a biometric and a shared secret like a vault code, say, the rebels would have been stuffed the moment they killed the guy on the door. You have to be careful in your choice of factors too. Had the two things been a key and handprint, the archive would have been no more secure than with the handprint alone. Kill the guard and you have both.
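Multi-factor checking is simple to express in code. In this hypothetical sketch (the names, PIN and token values are invented for illustration), access requires both the right secret and the right token, so stealing either factor alone gets an attacker nowhere:

```python
# Invented example records: each user has a secret (something they know)
# and a token (something they have).
USERS = {
    'cassian': {'pin': '1977', 'card': 'token-42'},
}

def authenticate(name, pin, card):
    """Grant access only if BOTH factors are correct."""
    user = USERS.get(name)
    if user is None:
        return False
    return pin == user['pin'] and card == user['card']

assert authenticate('cassian', '1977', 'token-42')       # both factors: in
assert not authenticate('cassian', '1977', 'stolen')     # right PIN, wrong card
assert not authenticate('cassian', '0000', 'token-42')   # right card, wrong PIN
```

The design point from the article applies here too: the two factors must be of genuinely different kinds, or killing the guard (or stealing one wallet) hands an attacker both at once.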
We’re in!
A bigger problem is that, once in, the rebels had access to everything. Individual items, including the index, should have been separately protected. Once the rebels find the file containing the schematics for the Death Star and beam it across the Galaxy, anyone can then read it without any authentication. If each file had been separately protected then the Empire could still have foiled the rebel plot. Even your computer can do that: you can set individual passwords on individual files. The risk here is that if you require more passwords than a person can remember, legitimate people could lose access.
Level up!
Levels help. Rather than require lots of passwords, you put documents and people into clearance levels. When you authenticate you are given access to documents of your clearance level or lower. Only if you have “Top Secret” clearance are you able to access “Top Secret” documents. The Empire would still need a way to ensure information can never be leaked to a lower clearance level area though (like beaming it across the galaxy).
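Clearance levels boil down to a simple comparison. In this toy sketch (the code and names are our illustration, loosely following the classic multi-level security idea), each person and each document gets a level, and reading is allowed only when the reader’s level is at least the document’s:

```python
# Each clearance level maps to a rank; a higher rank means more secret.
LEVELS = {'Unclassified': 0, 'Secret': 1, 'Top Secret': 2}

def can_read(person_level, document_level):
    """Read access: only documents at your clearance level or lower."""
    return LEVELS[person_level] >= LEVELS[document_level]

def can_send(document_level, destination_level):
    """The Empire's missing rule: data must never flow DOWN a level."""
    return LEVELS[destination_level] >= LEVELS[document_level]

assert can_read('Top Secret', 'Secret')           # cleared: allowed
assert not can_read('Secret', 'Top Secret')       # not cleared: blocked
assert not can_send('Top Secret', 'Unclassified') # no leaking downwards!
```

The last check is exactly what the Death Star archive lacked: beaming Top Secret plans out across the galaxy is a transfer to an unclassified destination, and a levels system worth having must forbid it.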
So if you ever invent something as important to your plans as a Death Star, don’t rely on physical security and a simple authentication system. For that matter, don’t put your trust in your mastery of the Force alone either, as Darth Vader discovered to his cost. Instead of a rebel planet, your planet-destroying-planet may just be destroyed itself, along with your plans for galactic domination.
In a galaxy far, far away cyber security matters quite a lot. So much so, in fact, that the whole film Rogue One is about it. The plot is all about the bad guys trying to keep their plans secret, and the good guys trying to steal them.
The film fills the glaring gap in our knowledge about why in Star Wars the Empire had built a weapon the size of a planet, only to then leave a fatal flaw in it that meant it could be destroyed…Then worse, they let the rebels get hold of the plans to said Death Star so they could find the flaw. Protecting information is everything.
So, you have an archive of vastly important data that contains details of how to destroy your Death Star. What do you do with it to keep the information secure? Whilst there are glaring flaws in the Empire’s data security plan, there is at least one aspect of their measures that, while looking a bit backward, is actually quite shrewd. They use physical security. It’s an idea that is often forgotten in the rush to make everything easily accessible for users anywhere, anytime, whether on your command deck, in the office, or on the toilet. That, of course, applies to hackers too. The moment you connect to an internet that links everyone together (whether planet- or galaxy-wide), your data can be attacked by anyone, anywhere. Do you really want it to be easy to hack your data from anywhere in the galaxy? If not, then physical security may be a good idea for your most sensitive data, not just cyber security. The idea is that you create a security system that requires physically being there to get at the most sensitive data, and then you put in barriers like walls, locks, cameras and armed guards (as appropriate) – the physical security – to make sure only those who should be there can be.
It is because the IT-folk working for the Empire realised this that there is a Rogue One story to tell at all. Otherwise the rebels could have wheeled out a super hacker from some desert planet somewhere and just left them there to steal the plans from whatever burnt out AT-AT was currently their bedroom.
Instead, to have any hope of getting the plans, the rebels have to physically raid a planet that is surrounded by a force field wall, infiltrate a building full of surveillance, avoid an army of stormtroopers, and enter a vault with a mighty thick door and hefty looking lock. That’s quite a lot of physical security!
It gets worse for the rebels though. Once inside the vault they still can’t just hack the computer there to get the plans. It is stored in a tower with a big gap and massive drop between you and it. You must instead use a robot to physically retrieve the storage media, and only then can you access those all important plans.
Pretty good security on paper. Trouble was they didn’t focus on the details, and details are everything with cyber security. Security is only as strong as the weakest link. Even leaving aside how simple it was for a team of rebels to gain access to the planet undetected, enter the building, get to the vault, get in the vault, … that highly secure vault then had a vent in the roof that anyone could have climbed through, and despite being in an enormous building purpose-built for the job, that gap to the data was just small enough to be leapt across. Oh well. As we said detail is what matters with security. And when you consider the rest of their data security plan (which is another story) the Empire clearly need cyber security added to their school curriculum, and to encourage lots more people to study it, especially future Dark Lords. Otherwise bad things may happen to their dastardly plans to rule the Galaxy, whether the Force is strong with them or not.
What do you do when your boss tells you “go and invent a new product”? Lock yourself away and stare out the window? Go for a walk, waiting for inspiration? Medical device system engineers Pat Baird and Katie Hansbro did some anthropology.
Dian Fossey is perhaps the most famous anthropologist. She spent over a decade living in the jungle with gorillas so that she could understand them in a way no one had done before. She started to see what it was really like to be a gorilla, showing that their fierce King Kong image was wrong and that they are actually gentle giants: social animals with individual personalities and strong family ties. Her book and film, ‘Gorillas in the Mist’, tells the story.
Pat and Katie work for Baxter Healthcare. They are responsible for developing medical devices like the infusion pumps hospitals use to pump drugs into people to keep them alive or reduce their pain. Hospitals don’t buy medical devices like we buy phones, of course. They aren’t bought just because they have lots of sexy new features. Hospitals buy new medical devices if they solve real problems. They want solutions that save lives, or save money, and if possible both! To invent something new that sells, you ideally need to solve problems your competitors aren’t even aware of. Challenged to come up with something new, Pat and Katie wondered whether, given immersion had been so productive for Dian Fossey, immersing themselves in hospitals with nurses might give their company the advantage it was after. Their idea was that understanding what it was really like to be a nurse would make a big difference to their ability to design medical devices: devices that helped with the real problems nurses had, rather than those the sales people said were problems. After all, the sales people only talk to the managers, and the managers don’t work on the wards. They were right.
Taking notes
They took a team on a 3-month hospital tour, talking to people, watching them do their jobs and keeping notes of everything. They noted things like the layout of rooms and how big they were, the temperature, how noisy it was, how many flashing lights there were, and so on. They spent a lot of time in the critical care wards where infusion pumps were used the most, but they also went to lots of other wards and found the pumps being used in other ways. They didn’t just talk to nurses either. Patients are moved around to have scans or change wards, so they followed them, talking to the porters doing the pushing. They observed the rooms where the devices were cleaned and stored. They looked for places where people were doing ad hoc things like sticking post-it note reminders on machines. That might be an opportunity for them to help. They looked at the machines around the pumps. That told them about opportunities for making the devices fit into the bigger tasks the nurses were using them as part of.
The hot Texan summer was a problem
So did Katie and Pat come up with a new product as their boss wanted? Yes. They developed a whole new service that is bringing in the money, but they did much more too. They showed that anthropology brings lots of advantages for medical device companies. One part of Pat’s job, for example, is to troubleshoot when his customers are having problems. He found after the study that, because he understood so much more about how pumps were used, he could diagnose problems more easily. That saved time and money for everyone. For example, touch screen pumps were being damaged. It was because, when they were stored together on a shelf, their clips were scratching the ones behind. They had also seen patients sitting outside in the ambulance bays with their pumps for long periods, smoking. Not their problem, apart from the fact that this was Texas, and the temperature outside was higher than the safe operating limit of the electronics. Hospitals don’t get that hot, so no one imagined there might be a problem. Now they knew.
Porters shouldn’t be missed
Pat and Katie also showed that to design a really good product you had to design for people you might not even think about, never mind talk to. By watching the porters they saw there was a problem when a patient was on lots of drugs each with its own pump. The porter pushing the bed also had to pull along a gaggle of pumps. How do you do that? Drag them behind by the tubes? Maybe the manufacturers can design in a way to make it easy. No one had ever bothered talking to the porters before. After all they are the low paid people, doing the grunt jobs, expected to be invisible. Except they are important and their problems matter to patient safety. The advantages didn’t stop there, either. Because of all that measuring, the company had the raw data to create models of lots of different ward environments that all the team could use when designing. It meant they could explore in a virtual environment how well introducing new technology might fix problems (or even see what problems it would cause).
All in all anthropology was a big success. It turns out observing the detail matters. It gives a commercial advantage, and all that mundane knowledge of what really goes on allowed the designers to redesign their pumps to fix potential problems. That makes the machines more reliable, and saves money on repairs. It’s better for everyone.
Talking to porters, observing cupboards, watching ambulance bays: sometimes it’s the mundane things that make the difference. To be a great systems designer you have to deeply understand all the people and situations you are designing for, not just the power users and the normal situations. If you want to innovate, like Pat and Katie, take a leaf out of Dian Fossey’s book. Try anthropology.
Magicians often fool their audience into ‘looking over there’ (literally or metaphorically), getting them to pay attention to the wrong thing so that they’re not focusing on what the magician is doing and can enjoy the trick without seeing how it was done. Computers, phones and medical devices let you interact with them using a human-friendly interface (such as a ‘graphical user interface’) which makes them easier to use, but which can also hide the underlying computing processes from view. Normally that’s exactly what you want, but if there’s a problem, one that you’d really need to know about, how well does the device make that clear? Sometimes the design of the device itself can mask important information; sometimes the way in which devices are used can mask it too. Here is a case where nurses were blamed, but it was later found that the medical devices involved, blood glucose meters, had (unintentionally) tripped everyone up. A useful workaround seemed to be working well, but caused problems later on.
Negligent nurses? Or dodgy digital?
by Harold Thimbleby, Swansea University and Paul Curzon, Queen Mary University of London
It’s easy to get excited about new technology and assume it must make things better. It’s rarely that easy. Medical technology is a case in point, as one group of nurses found out. It was all about one simple device and wearable ID bracelets. Nurses were taken to court, blamed for what went wrong.
The nurses taken to court worked in a stroke unit and were charged with wilfully neglecting their patients. Around 70 others were also disciplined though not sent to court.
There were problems with many nurses’ record-keeping. A few were selected to be charged by the police on the rather arbitrary basis that they had more odd records than the others.
Critical Tests
The case came about because of a single complaint. As the hospital, and then police, investigated, they found more and more oddities, with lots of nurses suddenly implicated. They all seemed to have fabricated their records. Repeatedly, their paper records did not tally with the computer logs. Therefore, the nurses must have been making up the patient records.
The gadget at the centre of the story was a portable glucometer. Glucometers allow the blood-glucose (aka blood sugar) levels of patients to be tested. This matters. If blood-sugar problems are not caught quickly, seriously ill patients could die.
Whenever they did a test, the nurses recorded it in the patient’s paper record. The glucometer system also had a better, supposedly infallible, way to do this. The nurse scanned their ID badge using the glucometer, telling it who they were. They then scanned the patient’s barcode bracelet, and took the patient’s blood-sugar reading. They finally wrote down what the glucometer said in the paper records, and the glucometer automatically added the reading to that patient’s electronic record.
Over and over again, the nurses were claiming in the notes of patients that they had taken readings, when the computer logs showed no reading had been taken. As machines don’t lie, the nurses must all be liars. They had just pretended to take these vital tests. It was a clear case of lazy nurses colluding to have an easy life!
What really happened?
In court, witnesses gave evidence. A new story unfolded. The glucometers were not as simple as they seemed. No-one involved actually understood them, how the system really worked, or what had actually happened.
In reality the nurses were looking after their patients … despite the devices.
The real story starts with those barcode bracelets that the patients wore. Sometimes the reader couldn’t read the barcode. You’ve probably seen this happen in supermarkets: every so often the scanner can’t tell what is being scanned. The nurses needed to sort it out quickly, as they had lots of ill patients to look after. Luckily, there was a quick and easy solution. They could just scan their own ID badge twice: once, correctly, as the staff ID, and then a second time in place of the patient ID. The system accepted this ‘double tapping’. That made the glucometer happy so they could use it, but of course they weren’t using a valid patient ID.
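We don’t know how the real glucometer software was written, but the gap the nurses were exploiting can be sketched in a few lines. In this invented Python sketch (all IDs are made up), a naive device only checks that *something* was scanned as the patient, so a staff badge scanned twice is accepted; a stricter check would have refused the double tap and forced the underlying barcode problem to be fixed:

```python
# A minimal sketch, NOT the real glucometer software. All IDs are invented.

KNOWN_PATIENTS = {"P1001", "P1002", "P1003"}   # hypothetical patient IDs
KNOWN_STAFF = {"S042", "S317"}                 # hypothetical staff IDs

def start_test_naive(staff_scan, patient_scan):
    """Accepts any non-empty second scan as the 'patient'."""
    if staff_scan in KNOWN_STAFF and patient_scan:
        return {"staff": staff_scan, "patient": patient_scan}
    raise ValueError("scan rejected")

def start_test_checked(staff_scan, patient_scan):
    """Refuses to start unless the second scan really is a patient ID."""
    if staff_scan not in KNOWN_STAFF:
        raise ValueError("unknown staff ID")
    if patient_scan not in KNOWN_PATIENTS:
        raise ValueError("not a valid patient ID: reading would be orphaned")
    return {"staff": staff_scan, "patient": patient_scan}

# Double tapping: scanning the staff badge twice satisfies the naive check,
# so the reading goes ahead attached to a 'patient' that doesn't exist.
orphaned = start_test_naive("S042", "S042")
print(orphaned["patient"])  # prints S042: a staff ID where a patient ID belongs
```

The stricter version would have been a minor inconvenience at the bedside, but it would have made the unreadable-bracelet problem visible to the hospital instead of hiding it.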
As they wrote the test result in the patient’s paper record, no harm was done. When checked, it turned out that over 200 nurses had sometimes used double tapping to take readings. It was a well-known (at least by nurses), and commonly used, work-around for a problem with the barcode system.
The system was also much more complicated than it seemed. It involved a complex computing network and a lot of complex software, not just a glucometer. Records often didn’t make it to the computer database, for a variety of reasons: the network went down, manually entered details contained mistakes, the database sometimes crashed, and the way the glucometers had been programmed meant they had no way to check that the data they sent to the database actually got there. Results didn’t go straight to the patient record either: they were only transferred when the glucometer was docked (for recharging), but the devices were constantly in use so might not be docked for days. Indeed, a fifth of the entries in the database had an error flag indicating something had gone wrong. In reality, you just couldn’t rely on the electronic record. It was the nurses’ old-fashioned paper records that were the ones you could trust.
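One detail here is worth dwelling on: the glucometers sent data with no way of checking it had arrived. Sending without waiting for an acknowledgement (‘fire and forget’) is a classic source of silent data loss. The sketch below is not the real system; the flaky network is simulated deterministically (every fifth send fails) purely for illustration:

```python
# A sketch of silent data loss, assuming a made-up flaky network where
# every fifth send fails. Nothing here is from the real hospital system.

database = {}      # stands in for the hospital database
send_count = 0

def network_send(record):
    """Simulated unreliable delivery: every fifth send is lost."""
    global send_count
    send_count += 1
    if send_count % 5 == 0:
        return False                      # delivery failed
    database[record["id"]] = record
    return True                           # an acknowledgement the sender COULD check

def upload_fire_and_forget(records):
    for r in records:
        network_send(r)                   # return value ignored: failures vanish

def upload_with_retry(records):
    for r in records:
        while not network_send(r):        # keep trying until acknowledged
            pass

readings = [{"id": i, "glucose": 5.0} for i in range(100)]

upload_fire_and_forget(readings)
lost = 100 - len(database)
print(lost)              # prints 20: twenty readings gone, and no error anywhere

database.clear()
upload_with_retry(readings)
print(len(database))     # prints 100: with acknowledgements, nothing is lost
```

The difference between the two upload functions is one line of checking, but it is the difference between a database you can trust and one where a fifth of the data quietly never arrives.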
The police had got it the wrong way round! They thought the computers were reliable and the nurses untrustworthy, but the nurses were doing a good job and the computers were somehow failing to record the patient information. Worse, they were failing to record that they were failing to record things correctly! … So nobody realised.
Disappearing readings
What happened to all the readings with invalid patient IDs? There was no place to file them so the system silently dropped them into a separate electronic bin of unknowns. They could then be manually assigned, but no way had been set up to do that.
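The ‘bin of unknowns’ is easy to picture in code. In this invented sketch, a reading whose patient ID matches nothing is parked in a holding list. Whether anyone ever finds out depends entirely on whether the software bothers to say so:

```python
# An illustrative sketch only: all IDs and readings are made up.

patient_records = {"P1001": [], "P1002": []}   # known patients
unknowns = []                                  # the 'bin of unknowns'

def file_reading(patient_id, glucose):
    """File a reading, or silently park it if the patient ID is unrecognised."""
    if patient_id in patient_records:
        patient_records[patient_id].append(glucose)
    else:
        unknowns.append((patient_id, glucose))
        # The real system stopped here. Even one line of reporting, such as
        # printing how many readings were awaiting manual matching, would
        # have surfaced the problem long before anyone went to court.

file_reading("P1001", 5.4)   # filed normally
file_reading("S042", 6.1)    # a double-tapped staff ID: parked, and no one told
print(len(unknowns))         # prints 1
```

A system that counts and reports its own failures is harder to build than one that hides them, but it is the only kind whose records can honestly be called evidence.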
During the trial the defence luckily noticed an odd discrepancy in the computer logs: the data was spiky in an unexplained way. On some days, for example, hardly any readings seemed to be taken. One odd trough corresponded to a day the manufacturer said they had visited the hospital. They were asked to explain what they had done…
The hospital had asked them to get the data ready to give to the police. The manufacturer’s engineer who visited therefore ‘tidied up’ the database, deleting all the incomplete records…including all the ones the nurses had supposedly fabricated! The police had no idea this had been done.
Suddenly, no evidence
When this was revealed in court, the judge ruled that all the prosecution’s evidence was unusable. The prosecution therefore said they had no evidence at all to present. In this situation the trial ‘collapses’: it immediately stopped, and the nurses were cleared as completely innocent.
The trial had already blighted the careers of lots of good nurses though. In fact, some of the other nurses had pleaded guilty because they had no memory of what had actually happened, but had been confronted with the ‘fact’ that they must have been negligent as “the computers could not lie”. Some were jailed. In the UK, you can be given a much shorter jail sentence, or maybe none at all, if you plead guilty, so it can make sense to plead guilty even if you know you aren’t: you only need to think the court will find you guilty. Which isn’t the same thing.
Silver bullets?
Governments see digitalisation as a silver bullet to save money and improve care. It can do that if you get it right. But digital is much harder to get right than most people realise. In the story here, not getting the digital right — and not understanding it — caused serious problems for lots of nurses.
It takes skill and deep understanding to design digital things to work in a way that really makes things better. It’s hard for hospitals to understand the complexities in what they are buying. Ultimately, it’s nurses and doctors who make it work. They have to.
Because digital technology is hard to design well, they shouldn’t be automatically blamed when things go wrong.
Most people in hospital get great treatment, but if something does go wrong the victims often want something good to come of it. They want to understand why it happened and be sure it won’t happen to anyone else. Medical mistakes can make a big news story, though, with screaming headlines vilifying those ‘responsible’. It may sell papers, but it could also make things worse.
If the press and politicians are pressurising hospitals to show they have done something, the hospitals may simply sack the person who made the mistake. They may then not improve the system, meaning the same thing could happen again if it was an accident waiting to happen. Worse, if we’re too quick to blame and punish someone, other people will be reluctant to report their mistakes, and without that sharing we can’t learn from them. One of the reasons flying is so safe is that pilots always report ‘near misses’, knowing they will be praised for doing so rather than getting into trouble. It’s far better to learn from mistakes where nothing really bad happens than to wait for a tragedy.
Share mistakes to learn from them
Chrystie Myketiak from Queen Mary explored whether the way a medical technology story is reported makes a difference to how we think about it, and ultimately to what happens. She analysed news stories about three similar incidents in the UK, America and Canada. She wanted to see what the papers said, but also how they said it. The press often sensationalise stories, but Chrystie found that this didn’t always happen. Some news stories did imply that the person who’d made the mistake was the problem (it’s rarely that simple!) but others were more careful to highlight that they were busy people working under stressful conditions and that the mistakes only happened because there were other problems. Regulations in Canada mean the media can’t report on specific details of a story while it is being investigated. Chrystie found that, in the incidents she looked at, this led to much more reasoned reporting. In that kind of environment hospitals are more likely to improve rather than just blame staff. How the hospital handled a case also affected what was written: being open and honest about a problem is better than ignoring requests for comment and pretending there isn’t a problem.
Everyone makes mistakes (if you don’t believe that, the next time you’re at a magic show, make sure none of the tricks fool you!). Often mistakes happen because the system wasn’t able to prevent them. Rather than blame, retrain or sack someone, it’s far better to improve the system. That way something good will come of tragedies.
– Paul Curzon, Queen Mary University of London (From the archive)
Simulators are a common way to train for skills that are dangerous or difficult to practise for real. Pilots, for example, do lots of training on flight simulators. Doctors also use simulators to train for surgery, and the simulators are increasingly accurate. They can even make it feel like you’re working on the real thing by giving you feedback through your sense of touch: haptics. Practise sinus or eye surgery on a haptic simulator, for example, and your practice sessions will feel real. Haptics can help not just doctors, but vets in training too – and it can help not just at the head end but, errr, at the other end too.
Trainee vets have to learn how to feel for animals’ organs. In small animals like dogs and cats you can do that just by feeling the outside of their tummies, but in larger animals like cows or horses you have to actually put your hands inside them. That’s right, up there. Now, this is a very difficult skill to learn properly. A teacher can’t demonstrate it, because the student can’t see what they’re doing. Likewise, when the student tries it, the teacher can’t see whether they’re doing it right. Usually, teacher and student just have to rely on describing what they’re doing (and how the animal reacts, of course).
Fortunately for teacher, student and especially animal, Sarah Baillie and her colleagues at the Royal Veterinary College invented a simulator called the Haptic Cow. It’s a haptic model of a cow’s rear end, complete with ‘Ouchometer’ – a graph that shows whether the student’s movements are too gentle to be effective, just right, or too rough to be safe. By using the Haptic Cow, students get an accurate idea of what they’ll be doing in their real jobs, the teachers can see better feedback of how well the student’s doing, and real cows don’t have to worry about being practised on. For doctors, vets and their patients, haptics are helping to make sure that practice doesn’t have to mean petrifying.
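At its heart, an ‘Ouchometer’ is just a classifier over measured force. The thresholds and units below are invented (the article doesn’t give the real scale), but they sketch the idea of turning a student’s movements into feedback a teacher can see:

```python
def ouchometer(force):
    """Classify a palpation force. Thresholds here are purely hypothetical."""
    if force < 2.0:
        return "too gentle to be effective"
    if force <= 5.0:
        return "just right"
    return "too rough to be safe"

# A training session becomes a stream of classified samples to plot or review:
session = [1.2, 3.5, 4.9, 6.3]
feedback = [ouchometer(f) for f in session]
```

The real Haptic Cow presumably does far more, but even this toy version shows why such a display helps: it makes visible a judgement that, inside the animal, neither teacher nor student can otherwise see.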