Oh no! Not again…

What a mess. There’s flour all over the kitchen floor. A fortnight ago I opened the cupboard to get sugar for my hot chocolate. As I pulled out the sugar, it knocked against the bag of flour which was too close to the edge… Luckily the bag didn’t burst and I cleared it up quickly before anyone found out. Now it’s two weeks later and exactly the same thing just happened to my brother. This time the bag did burst and it went everywhere. Now he’s in big trouble for being so clumsy!

Flour cascading everywhere. Cropped image by Anastasiya Komarova from Pixabay

In safety-critical industries like healthcare and aviation, it is really important that there is a culture of reporting incidents, including near misses. It turns out to matter, too, that the mechanisms for reporting issues are appropriately designed, and that there is a no-blame culture, so that people are encouraged to report incidents and to do so accurately and without ambiguity.

Was the flour incident my brother’s fault? Should he have been more careful? He didn’t choose to put the sugar in a high cupboard with the flour. Maybe it was my fault? I didn’t choose to put the sugar there either. But I didn’t tell anyone about the first time it happened. I didn’t move the sugar to a lower cupboard so it was easier to reach either. So maybe it was my fault after all? I knew it was a problem, and I didn’t do anything about it. Perhaps thinking about blame is the wrong thing to do!

Now think about your local hospital.

James is a nurse, working in intensive care. Penny is really ill and is being given insulin by a machine that pumps it directly into her vein. The insulin is causing a side effect though – a drop in blood potassium level – and that is life threatening. They don’t have time to set up a second pump, so the doctor decides to stop the insulin for a while and to give a dose of potassium through a second tube controlled by the same pump. James sets up the bag of potassium and carefully programs the pump to deliver it, then turns his attention to his next task. A few minutes later, he glances at the pump again and realises that he forgot to release the clamp on the tube from the bag of potassium. Penny is still receiving insulin, not the potassium she urgently needs. He quickly releases the clamp, and the potassium starts to flow. An hour later, Penny’s blood potassium levels are pretty much back to normal: she’s still ill, but out of danger. Phew! Good job he noticed in time and no-one else knows about the mistake!

Two weeks later, James’ colleague, Julia, is on duty. She makes a similar mistake treating a different patient, Peter. Except that she doesn’t notice her mistake until the bag of insulin has emptied. Because it took so long to spot, Peter needs emergency treatment. It’s touch-and-go for a while, but luckily he recovers.

Julia reports the incident through the hospital’s incident reporting system, so at least it can be prevented from happening again. She is wracked with guilt for making the mistake, but also hopes fervently that she won’t be blamed and so punished for what happened.

Don’t miss the near misses

Why did it happen? There are a whole bunch of problems that are nothing to do with Julia or James. Why wasn’t it standard practice to always have a second pump set up for critically ill patients in case such emergency treatment is needed? Why can’t the pump detect which bag the fluid is being pumped from? Why isn’t it really obvious whether the clamp is open or closed? Why can’t the pump detect that itself? If the first incident – a ‘near miss’ – had been reported, perhaps some of these problems might have been spotted and fixed. How many other times has it happened but not been reported?

What can we learn from this? One thing is that there are lots of ways of setting up and using systems, and some may well make them safer. Another is that reporting “near misses” is really important. They are a valuable source of learning that can alert other people to mistakes they might make, and lead to a search for ways of making the system safer, perhaps by redesigning the equipment or changing the way it is used – but only if people tell others about the incidents. Reporting near misses can help prevent the same thing happening again.

The above was just a story, but it’s based on an account of a real incident… one that has been reported so it might just save lives in the future.

Report it!

The mechanisms used to report incidents, as well as the culture around reporting, can make a big difference to whether incidents are reported at all. And even when incidents are reported, the reporting systems and culture can help or hinder the learning that results.

Chrystie Myketiak at Queen Mary analysed actual incident reports for the kind of language used by those writing them. She found that the people doing the reporting used different strategies in the way they wrote the reports depending on the kind of incident. In situations where there was no obvious implication that a person had made a mistake (such as where sterilization equipment had not worked successfully) they used one kind of language. Where those involved were likely to be seen as responsible, and so blamed (eg where a wrong number had been entered into a medical device), they used a different kind of language.

In the latter, where “user errors” might have been involved, those doing the reporting were more likely to write in a way that hid the identity of any person involved, eg saying “The pump was programmed” or writing about ‘he’ or ‘she’ rather than naming the person. They were also more likely to write in a way that added ambiguity. For example, in the user error reports it was less clear whether the person making the report was the one involved, or whether someone else was writing it, such as a witness or someone not involved at all.

Writing in these kinds of ways, and the fact that the style differed from that of reports where no one was likely to be blamed, suggests that those completing the reports were aware that their words might be misinterpreted by those who read them. The fact that people might be blamed hung over the reporting.

The result of adding what Chrystie called “precise ambiguity” might mean important information was inadvertently concealed, making it harder to understand why the incident happened and so to work out how best to avoid it. As a result, patient safety might not be improved even though the incident was reported. This shows one of the reasons why a strong culture of no-fault reporting is needed if a system is to be made as safe as possible. In the airline industry, which is incredibly safe, there is a clear system of no-fault reporting, with pilots, for example, being praised for reporting near misses rather than being punished for any mistake that led to the near miss.

This work was part of the EPSRC-funded CHI+MED research project led by Ann Blandford at UCL looking at design for patient safety. In separate work on the project, Alexis Lewis, at Swansea University, explored how best to design the actual incident reporting forms as part of her PhD. A variety of forms are used in hospitals across the UK and she examined more than 20 different ones. Many had features that would make it harder than necessary for nurses and doctors to report incidents accurately, even if they wanted to report openly so that hospital staff would learn as much as possible from the incidents that did happen. Some forms failed to ask about important facts and many didn’t encourage feedback. It wasn’t clear how much detail, or even what, should be reported. She used the results to design a new reporting form that avoided these problems and that could be built into a system that encourages the reporting of incidents. Ultimately her work led to changes to the reporting form and process used within at least one health board she was working with.

People make mistakes, but safety does not come from blaming those that make them. That just discourages a learning culture. To really improve safety you need to praise those that report near misses, as well as ensuring the forms and mechanisms they must use to do so help them provide the information needed.

Updated from the archive, written by the CHI+MED team.

EPSRC supports this blog through research grant EP/W033615/1. 

Avoiding loneliness with StudyBuddy

A lonely girl in the corner of a red space, head on knees. Image by Foundry Co from Pixabay

University has always been a place where you make great friends for life. Social media means everyone can easily make as many online friends as they like, and ever more students go to university, meaning more potential friends to make. So surely things now are better than ever. And yet many students suffer from loneliness while at university. We somehow seem to have ever greater disconnection the more connections we make. Klara Brodahl realised there was a novel need here that no one was addressing well and decided to try to solve it for the final year project of her computer science degree. Her solution was StudyBuddy and with the support of an angel investor she has now set up a startup company and is rolling it out for real.

A loneliness epidemic

In the digital age, university students face an unexpected challenge: loneliness. Although they’re more “connected” than ever through social media and virtual interactions, the quality of these connections is often shallow. A 2023 study, for example, found that 92% of students in the UK feel lonely at some point during their university life. This “loneliness epidemic” has profound effects, contributing to issues like anxiety, depression and struggles with their degree programme.

During her own university years, Klara Brodahl had experienced first hand the challenge of forming meaningful friendships in an environment where everyone seemed socially engaged online but wasn’t always connected in real life. She soon discovered that it wasn’t just her: it was a struggle shared by students across the country. Inspired by this, she set out to write a program that would fill the void in students’ lives and bridge the gap between studying and social life.

Combatting loneliness in the real world

She came up with StudyBuddy: a mobile app designed to combat student loneliness by supporting genuine, in-person connections between university students, not just virtual ones. Her aim was that it would help students meet, study, and connect in real time and in shared spaces. 

She realised that technology does have the potential to strengthen social bonds, but how it’s designed and used makes all the difference. The social neuroscientist John Cacioppo has pointed out that using social media primarily as a destination in its own right often leaves people feeling distant and dissatisfied. However, when technology is designed to serve as a bridge to offline human engagement, it can reduce loneliness and improve well-being. StudyBuddy embodies this approach by encouraging students to connect in person rather than trying to replace meeting face-to-face.

Study together in the real world

Part of making this work is in having reasons to meet for real. Klara realised that the need to study, and the fact that doing this in groups rather than alone can help everyone do better, could provide the excuse for this. StudyBuddy, therefore, integrates study goals with social interaction, allowing friendships to form around shared academic interests—an ideal icebreaker for those who feel nervous in traditional social settings.

The app uses location-based technology to connect students for co-study sessions, making in-person meetings easy and natural. Through a live map, students can see where others are checked in nearby at study spots like libraries, cafes, or student common areas. They can join existing study groups or start their own. The app uses university ID verification to help ensure connections are built on a trusted network.
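
To give a flavour of the kind of data and logic behind such a live map, here is a minimal Python sketch (our own illustration with invented names, not StudyBuddy’s actual code) of check-ins and a nearby search:

    import math
    from dataclasses import dataclass

    @dataclass
    class CheckIn:
        student_id: str   # verified against the university ID system
        lat: float        # where the student checked in
        lon: float
        spot: str         # eg "Library, 2nd floor"

    def distance_km(a, b):
        # Great-circle (haversine) distance between two check-ins
        dlat = math.radians(b.lat - a.lat)
        dlon = math.radians(b.lon - a.lon)
        h = (math.sin(dlat / 2) ** 2
             + math.cos(math.radians(a.lat)) * math.cos(math.radians(b.lat))
             * math.sin(dlon / 2) ** 2)
        return 6371 * 2 * math.asin(math.sqrt(h))

    def nearby(me, checkins, radius_km=0.5):
        # Other students checked in within walking distance,
        # ready to show on the live map
        return [c for c in checkins
                if c.student_id != me.student_id
                and distance_km(me, c) <= radius_km]

A real app would need much more (sessions, privacy controls, the ID verification itself), but the core idea of matching people by place really is this simple.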

From idea to startup company

Klara didn’t originally plan for StudyBuddy to become a real company. Like many graduates, she thought starting a business was something to perhaps try later, once she had some professional experience from a more ‘normal’ graduate job. However, when the graduate scheme she won a place on after graduating was unexpectedly delayed, she found herself with time on her hands. Rather than do nothing she decided to keep working on the app as a side project. It was at this point that StudyBuddy caught the attention of an angel investor, whose enthusiasm for the app gave Klara the confidence to keep going.

When her graduate scheme finally began, she was therefore already deeply invested in StudyBuddy. Trying to manage both roles, she quickly realised she preferred the challenge and creativity of her startup work over the graduate scheme. And when it became impossible to balance both, she took a leap of faith, quitting her graduate job to focus on StudyBuddy full-time, a decision that has since paid off. She gained early positive feedback, ran a pilot at Queen Mary University of London, and won early funding from investors willing to invest in what was essentially still an idea, rather than a product with a known market. As a result StudyBuddy has gradually turned into a useful, mission-driven platform, providing students with a safe, real-world way to connect.

Making a difference

StudyBuddy has the potential to transform the university experience by reducing loneliness and fostering authentic, in-person friendships. By rethinking what engagement in the digital age means, the app also serves as a model for how technology can promote meaningful social interaction more generally. Klara has shown that with thoughtful design, technology can be a powerful tool for bridging digital and physical divides, creating a campus environment where students thrive both academically and socially. Her experience also shows how the secret to being a great entrepreneur is to be able to see a human need that no one else has seen or solved well. Then, if you can come up with a creative solution that really solves that need, your ideas can become reality and really make a difference to people’s lives.

– Klara Brodahl, StudyBuddy and Paul Curzon, Queen Mary University of London

This page and talk are funded by EPSRC on research agreement EP/W033615/1.

The illusion of good software design

Image by CS4FN, based on an illusion by Ouchi

When disasters involving technology occur, human error is often given as the reason, but even experts make mistakes using poor technology. Rather than blame the person, human error should be seen as a design failure. Bad design can make mistakes more likely and good design can often eliminate them. Optical illusions and magic tricks show how we can design things that cause everyone to make the same systematic mistake, and we need to use the same understanding of the brain when designing software and hardware. This is especially important if the gadgets are medical devices where mistakes can have terrible consequences. The best computer scientists and programmers don’t just understand technology, they understand people too, and especially our brain’s fallibilities. If they don’t, then mistakes using their software and gadgets are more likely. If people make mistakes, don’t blame the person, fix the design and save lives.

Illusions

Optical illusions and magic tricks give us a window onto the limits of our brains. Even when you know an optical illusion is an illusion you cannot stop seeing the effect. For example, this image of an eye is completely flat and stationary: nothing is moving. And yet if you move your head very slightly from side to side the centre pops out and seems to move separately from the rest of the eye.

Illusions occur because our brains have limited resources and take short cuts in processing the vast amount of information that our senses deliver. These short cuts allow us to understand what we see faster, and with fewer resources. Illusions happen when the short cuts are applied in situations where they do not work.

What this means is that we do not see the world as it really is, but a simplified version constructed by our subconscious brain and provided to our conscious brain. It is very much like the film The Matrix, except it is our own brains providing the fake version of the world we experience rather than alien computers.

Attention

The way we focus our attention is one example of this. You may think that you see the world as it is, but you only directly see the things you focus on: your brain fills in the rest rather than constantly feeding you the actual information. It does this based on what it last saw there, but also by simply completing patterns. The following illusion shows this in action. There are 12 black dots, and as you move your attention from one to the next you can see and count them all. However, you cannot see them all at once. The ones in your peripheral vision disappear as you look away, as the powerful pattern of grey lines takes over. You are not seeing everything that is there to be seen!

Find more on the links to magic in our book “Conjuring with Computation”.

Our brains also have very limited working memory and limited attention. Magicians exploit this to design “magical systems” where a whole audience makes the same mistake at the same time. They design the magic so that these limitations are triggered and people miss things that are there to be seen, forget things they knew a few moments before, and so on. For example, by distracting the audience’s attention they make them miss something that was there to be seen.

What does this mean to computer scientists?

When we design the way we interact with a computer system, whether software or hardware, it is also possible to trigger the same limitations a magician or optical illusion does. A good interaction designer therefore does the opposite of a magician and, for example: draws a user’s attention to things that must not be missed at a critical time; ensures they do not forget things that are important; helps them keep track of the state of the system; and gives good feedback so they know what has happened.

Most software is poorly designed, leading to people making mistakes: not all the time, but some of the time. The best designs help people avoid making mistakes in the first place, and also help them spot and fix mistakes as soon as they do make them.

Examples of poor medical device design

The following are examples of the interfaces of actual medical devices found in a day of exploration by one researcher (Paolo Masci) at a single very good hospital (in the US).

When the nurse or doctor types the key sequence 1 0 0 . 1 as a drug dose rate, this infusion pump, without any explicit warning other than the number being displayed, registers the number entered as 1001.

Probably, the programmer had been told that when doses are as large as 100, fractional doses are relatively so small that they make no difference. A user typing in such fractional amounts is likely making an error, as such a dose is unlikely to be prescribed. The typing of the decimal point is therefore just ignored as a mistake by the infusion pump. Separately (perhaps coded by a different programmer in the team, or at a different time), until the ENTER key is pressed the code treats the number as incomplete. Any further digits typed are therefore just accepted as part of the number.
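
To see how easily such a flaw can arise, here is a minimal Python sketch of a number-entry routine with exactly this behaviour. It is our own illustration of the flaw described above, not the pump’s actual code:

    def buggy_dose_entry(keys):
        # Build up the displayed dose one key press at a time,
        # silently ignoring a decimal point once the number
        # reaches 100: the design flaw described above
        display = ""
        for key in keys:
            if key == ".":
                if "." in display:
                    continue  # ignore a repeated decimal point
                if display and float(display) >= 100:
                    continue  # decimal point silently dropped: the flaw
            display += key
        return float(display)

    print(buggy_dose_entry("100.1"))  # prints 1001.0: ten times too much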

This different design, by a different manufacturer, also treats the key sequence as 1001 (though in that case the 1001 is rejected as it exceeds the maximum allowable rate, the rejection itself caused by the same issue of the device silently ignoring the decimal point).

This suggests that two different coding teams independently coded in the same design flaw, leading to the same user error.

What is wrong with that?

Devices should never silently ignore or correct input if bad mistakes are to be avoided. Here, that original design flaw could lead to a dose 10 times too big being infused into a patient, and that could kill. It relies on the person typing the number noticing that the decimal point has been ignored (with no help from the device). Decimal points are small and easily missed, of course. Also, the user’s attention cannot be guaranteed to be on the machine and, in fact, with a digit keypad for entering numbers that attention is likely to be on the keys. Alarms or other distractions elsewhere could easily mean they do not notice the missing decimal point (which is a tiny thing to see).

An everyday example of the same kind of problem, showing how easily mistakes are missed, is the auto-completion and auto-correction of spelling mistakes in texts and word processors. Goofs where an auto-corrected word is missed are very common. Anything that common needs to be designed away in a safety-critical system.

Design Rules

One of the ways that such problems can be avoided is by programmers following interaction design rules. The machine (and the programmer writing the code) does not know what a user is trying to input when they make a mistake. Here, perhaps the mistake was pressing 0 twice rather than pressing the decimal point. One design rule is therefore that a program should NEVER correct any user error silently. In the case of user errors, the program should raise awareness of the error, and not allow further input until the error is corrected. The program should explicitly draw the person’s attention to the problem (eg by changing colour, flashing, beeping, etc). This involves using the same understanding of cognitive psychology as a magician, to control their attention. Whereas a magician would be taking their attention away from the thing that matters, the programmer draws their attention to it.

It should make clear in an easily understandable error message what the problem is (eg here: “Doses over 99 should not include decimal fractions. Please delete the decimal point.”). It should then leave the user to make the correction (eg deleting the decimal point), not do it itself.
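
Continuing our hypothetical sketch from above, a version following this design rule accepts each key press so the display shows exactly what was typed, then stops and warns rather than silently fixing anything:

    class DoseEntryError(Exception):
        # Raised so the interface can flash, beep and show the message
        pass

    def safe_dose_entry(keys):
        # Never silently correct input: show exactly what was typed,
        # then refuse to go on until the user fixes the problem
        display = ""
        for key in keys:
            display += key
            whole_part = display.split(".")[0]
            if "." in display and whole_part and float(whole_part) >= 100:
                raise DoseEntryError(
                    "Doses over 99 should not include decimal fractions. "
                    "Please delete the decimal point.")
        return float(display)

Crucially, it is the person, not the machine, who decides what the correct number should be.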

By following design rules such as this, programmers can stop user errors, which are bound to happen, from causing a big problem.

Avoiding errors

Sometimes, though, with the way we design software interfaces and their interaction, we can do even better than this. So far we are letting people make mistakes and then telling them, to help them pick up the pieces afterwards. With better design we can sometimes help them avoid making the mistake in the first place, or spot the mistake themselves as soon as they make it.

Doing this is again about controlling user attention as a magician does. An interaction designer needs to do this in the opposite way to the magician though, directing the user’s attention to the place it needs to be to see what is really happening as they take actions, rather than away from it.

To use a digit keypad, the user’s attention has to be on their fingers so they can see where to press for a given digit. They look at the keypad, not the screen. The design of the digit keypad draws their attention to the wrong place. However, there are lots of ways to enter numbers and the digit keypad is only one. One other way is to use cursor keys (left, right, up and down) and have a cursor on the screen move to the position where a digit will be changed. Now, once the person’s finger is on, say, the up arrow, attention naturally moves to the screen as that button is just pressed repeatedly until the correct digit is reached. The user is watching what is happening, watching the program’s output rather than their input, so is now less likely to make a mistake. If they do overshoot, their attention is in the right place to see it and immediately correct it. Experiments showed that this design did lead to fewer large errors, though it is slower. With numbers, though, accuracy is more likely to matter than absolute speed, especially in medical situations.

There are still subtleties to the design though – should a digit roll over from 9 back to 0, for example? If it does, should the next digit increase by 1 automatically? Probably not, as these are the kinds of things that lead to other errors (out by a factor of 10). Instead, going up from 9 should lead to a warning.
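
A sketch of that up-arrow behaviour (again our own illustration, not any real pump’s code) might look like this:

    def press_up(digits, cursor):
        # digits is a list like [0, 9, 5] for the display "095";
        # cursor is the position of the digit being edited
        if digits[cursor] == 9:
            # No roll-over to 0 and no automatic carry into the next
            # digit: either could silently change the dose by a factor
            # of ten. Warn the user instead.
            return digits, "This digit is already at its maximum (9)"
        digits[cursor] = digits[cursor] + 1
        return digits, None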

Learn from magicians

Magicians are experts at making people make mistakes without them even realising they have. The delight in magic comes from being so easily fooled that the impossible seems to have happened. When writing software we need to use the same understanding of our cognitive resources, and how to manipulate them, to prevent our users making mistakes. There are many ways to do this, but we should certainly never write software that silently corrects user errors. We should control the user’s attention from the outset, using similar techniques to a magician, so that their attention is in the right place to avoid problems. Ideally a number entry system such as cursor keys, rather than a digit keypad, should be used, as then the attention of the user is more likely to be on the number entered in the first place.

– Paul Curzon, Queen Mary University of London


Related Careers

Careers related to this article include:

  • Interaction designer
    • Responsible for the design of not just the interface but how a device or software is used. Applies creativity and existing design rules to come up with solutions. Has a deep understanding both of technical issues and of the limitations of human cognition (how our brains work).
  • Usability consultant
    • Give advice on making software and gadgets generally easier to use, evaluating designs for features that will make them hard to use or increase the likelihood of errors, and finding problems at an early stage.
  • User experience (UX) consultant
    • Give advice on ensuring users of software have a good, positive experience and that using it is not, for example, frustrating.
  • Medical device developer
    • Develop software or hardware for medical devices used in hospitals or increasingly in the home by patients. Could be improvements to existing devices or completely novel devices based on medical or biomedical breakthroughs, or on computer science breakthroughs, such as in artificial intelligence.
  • Research and Development Scientist
    • Do experiments to learn more about the way our brains work, and/or apply it to give computers and robots a way to see the world like we do. Use it to develop and improve products for a spin-off company.



This page is funded by EPSRC on research agreement EP/W033615/1.


Ask About Asthma

by Daniel Gill, Queen Mary University of London

An inhaler being pressed so that the mist of drug can be seen emerging. Image by CNordic from Pixabay

This week (9-15 September), as many young people are heading back to school after their summer holiday, NHS England is suggesting that teachers, employers and government workers #AskAboutAsthma. The goal is to raise awareness of the experiences of those with asthma, and to suggest techniques to put in place to help children and young people with asthma live their best lives.

One of the key bits of kit in the arsenal of people with asthma is an inhaler. When used, an inhaler can administer medication directly into the lungs and airways as the user breathes in. For those with asthma, an inhaler can help to reduce the inflammation in the airways that might otherwise prevent air from entering the lungs, especially during an asthma attack.

It’s only recently, however, that inhalers are getting the technology treatment. Smart inhalers can help to remind those with asthma to take their medication as prescribed (a common challenge for those with asthma) as well as tracking their use, which can be shared with doctors, carers, or parents. Some smart inhalers can also identify whether the correct inhaler technique is being used. Researchers have been able to achieve this by putting the audio of people using an inhaler through a neural network (a form of artificial intelligence), which can then classify between good and bad technique.
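
As a simplified flavour of that general approach (not the researchers’ actual system, and using a simple off-the-shelf classifier where they used a neural network), recordings of inhaler use can be summarised as sound features and used to train a model:

    import numpy as np
    import librosa  # audio analysis library
    from sklearn.ensemble import RandomForestClassifier

    def features(path):
        # Summarise a recording as MFCCs: a compact, standard
        # description of how its sound changes over time
        audio, rate = librosa.load(path)
        mfcc = librosa.feature.mfcc(y=audio, sr=rate, n_mfcc=13)
        return mfcc.mean(axis=1)  # one fixed-size vector per clip

    # Hypothetical labelled recordings: 1 = good technique, 0 = bad
    clips = ["good1.wav", "good2.wav", "bad1.wav", "bad2.wav"]
    labels = [1, 1, 0, 0]

    model = RandomForestClassifier().fit(
        np.array([features(c) for c in clips]), labels)
    print(model.predict([features("new_recording.wav")]))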

As with any medical technology, these smart inhalers need to be tested with people with asthma to check that they are safe and effective, and importantly to check that they are better than the existing solutions. One such study started in Leicester in July 2024, where smart inhalers (in this case, ones that clip onto existing inhalers) are being given to around 300 children in the city. The researchers will wait to see if these children have better outcomes than those who are using regular inhalers.

This sort of technology is a great example of what computer scientists call the “Internet of Things” (IoT). This refers to small computers which might be embedded within other devices which can interact over the internet… think smart lights in your home that connect to a home assistant, or fridges that can order food when you run out. 

A lot of medical devices are being integrated into the internet like this… a smart watch can track the wearer’s heart rate continuously and store it in a database for later, for example. Will this help us to live happier, healthier lives though? Or could we end up finding concerning patterns where there are none?

EPSRC supports this blog through research grant EP/W033615/1.

AMPER: AI helping future you remember past you

by Jo Brodie, Queen Mary University of London

Have you ever heard a grown up say “I’d completely forgotten about that!” and then share a story from some long-forgotten memory? While most of us can remember all sorts of things from our own life history it sometimes takes a particular cue for us to suddenly recall something that we’d not thought about for years or even decades. 

As we go through life we add more and more memories to our own personal library, but those memories aren’t neatly organised like books on a shelf. For example, can you remember what you were doing on Thursday 20th September 2018 (or can you think of a way that would help you find out)? You’re more likely to be able to remember what you were doing on the last Tuesday in December 2018 (but only because it was Christmas Day!). You might not spontaneously recall a particular toy from your childhood but if someone were to put it in your hands the memories about how you played with it might come flooding back.

Accessing old memories

In Alzheimer’s Disease (a type of dementia) people find it harder to form new memories or retain more recent information, which can make daily life difficult and bewildering, and they may lose their self-confidence. Their older memories, the ones made when they were younger, are often less affected, however. The memories are still there but might need drawing out with a prompt to help bring them to the surface.

Perhaps a newspaper advert will jog your memory in years to come… Image by G.C. from Pixabay

An EPSRC-funded project at Heriot-Watt University in Scotland is developing a tablet-based ‘story facilitator’ agent (a software program designed to adapt its response to human interaction) which contains artificial intelligence to help people with Alzheimer’s disease and their carers. The device, called ‘AMPER’*, could improve wellbeing and a sense of self in people with dementia by helping them to uncover their ‘autobiographical memories’, about their own life and experiences – and also help their carers remember them ‘before the disease’.

Our ‘reminiscence bump’

We form some of our most important memories between our teenage years and early adulthood – we start to develop our own interests in music and the subjects that we like studying, we might experience first loves, perhaps going to university, starting a career and maybe a family. We also all live through a particular period of time where we’re each experiencing the same world events as others of the same age, and those experiences are fitted into our ‘memory banks’ too. If someone was born in the 1950s then their ‘reminiscence bump’ will be events from the 1970s and 1980s – those memories are usually more available and therefore people affected by Alzheimer’s disease would be able to access them until more advanced stages of the disease process. Big important things that, when we’re older, we’ll remember more easily if prompted.

In years to come you might remember fun nights out with friends.
Image by ericbarns from Pixabay

Talking and reminiscing about past life events can help people with dementia by reinforcing their self-identity, and increasing their ability to communicate – at a time when they might otherwise feel rather lost and distressed. 

“AMPER will explore the potential for AI to help access an individual’s personal memories residing in the still viable regions of the brain by creating natural, relatable stories. These will be tailored to their unique life experiences, age, social context and changing needs to encourage reminiscing.”

Dr Mei Yii Lim, who came up with the idea for AMPER(3).

Saving your preferences

AMPER comes pre-loaded with publicly available information (such as photographs, news clippings or videos) about world events that would be familiar to an older person. It’s also given information about the person’s likes and interests. It offers examples of these as suggested discussion prompts and the person with Alzheimer’s disease can decide with their carer what they might want to explore and talk about. Here comes the clever bit – AMPER also contains an AI feature that lets it adapt to the person with dementia. If the person selects certain things to talk about instead of others then in future the AI can suggest more things that are related to their preferences over less preferred things. Each choice the person with dementia makes now reinforces what the AI will show them in future. That might include preferences for watching a video or looking at photos over reading something, and the AI can adjust to shorter attention spans if necessary. 
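
A toy Python sketch of that kind of preference learning (our own illustration of the general idea, with made-up topics, not AMPER’s actual algorithm) could keep a score per topic and nudge it with every choice:

    import random

    # Hypothetical topic scores: higher means suggested more often
    preferences = {"1970s music": 1.0, "family photos": 1.0,
                   "news clippings": 1.0, "sport videos": 1.0}

    def record_choice(chosen, offered):
        # Reinforce the chosen topic and slightly demote the
        # topics that were offered but passed over
        preferences[chosen] += 0.5
        for topic in offered:
            if topic != chosen:
                preferences[topic] = max(0.1, preferences[topic] - 0.1)

    def suggest(how_many=2):
        # Pick topics at random, weighted by learned preference, so
        # favourites appear more often but others still get a turn
        topics = list(preferences)
        weights = [preferences[t] for t in topics]
        return random.choices(topics, weights=weights, k=how_many)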

“Reminiscence therapy is a way of coordinated storytelling with people who have dementia, in which you exercise their early memories, which tend to be retained much longer than more recent ones, and produce an interesting interactive experience for them, often using supporting materials — so you might use photographs for instance.”

Prof Ruth Aylett, the AMPER project’s lead at Heriot-Watt University(4).

When we look at a photograph, for example, the memories it brings up haven’t been organised neatly in our brain like a database. Our memories form connections with all our other memories, more like the branches of a tree. We might remember the people that we’re with in the photo, then remember other fun events we had with them, perhaps places that we visited and the sights and smells we experienced there. AMPER’s AI can mimic the way our memories branch and show new information prompts based on the person’s previous interactions.

Although AMPER can help someone with dementia rediscover themselves and their memories, it can also help carers in care homes (who didn’t know them when they were younger) learn more about the person they’re caring for.

*AMPER stands for ‘Agent-based Memory Prosthesis to Encourage Reminiscing’.


Suggested classroom activities – find some prompts!

  • What’s the first big news story you and your class remember hearing about? Do you think you will remember that in 60 years’ time?
  • What sort of information about world or local events might you gather to help prompt the memories for someone born in 1942, 1959, 1973 or 1997? (Remember that their reminiscence bump will peak in the 15 to 30 years after they were born – some of them may still be in the process of making their memories the first time!).

See also

If you live near Blackheath in South East London why not visit the Age Exchange and reminiscence centre which is an arts charity providing creative group activities for those living with dementia and their carers. It has a very nice cafe.

Related careers

The AMPER project is interdisciplinary, mixing robots and technology with psychology, healthcare and medical regulation.

We have information about four similar-ish job roles on our TechDevJobs blog that might be of interest. This was a group of job adverts for roles in the Netherlands related to the ‘Dramaturgy^ for Devices’ project. This is a project linking technology with the performing arts to adapt robots’ behaviour and improve their social interaction and communication skills.

Below is a list of four job adverts (which have now closed!) which include information about the job description, the types of people that the employers were looking for and the way in which they wanted them to apply. You can find our full list of jobs that involve computer science directly or indirectly here.

^Dramaturgy refers to the study of the theatre, plays and other artistic performances.

Dramaturgy for Devices – job descriptions

References

1. Agent-based Memory Prosthesis to Encourage Reminiscing (AMPER) Gateway to Research
2. The Digital Human: Reminiscence (13 November 2023) BBC Sounds – a radio programme that talks about the AMPER Project.
3. Storytelling AI set to improve wellbeing of people with dementia (14 March 2022) Heriot-Watt University news
4. AMPER project to improve life for people with dementia (14 January 2022) The Engineer


EPSRC supports this blog through research grant EP/W033615/1.

Pit-stop heart surgery

by Paul Curzon, Queen Mary University of London

(Updated from the archive)

Image by Peter Fischer from Pixabay

The Formula 1 car screams to a stop in the pit-lane. Seven seconds later, it has roared away again, back into the race. In those few seconds it has been refuelled and all four wheels changed. Formula 1 pit-stops are the ultimate in high-tech team work. Now the Ferrari pit stop team have helped improve the hospital care of children after open-heart surgery!

Open-heart surgery is obviously a complicated business. It involves a big team of people working with a lot of technology to do a complicated operation. Both during and after the operation the patient is kept alive by computer: lots of computers, in fact. A ventilator is breathing for them, other computers are pumping drugs through their veins and yet more are monitoring them so the doctors know how their body is coping. Designing how this is done is not just about designing the machines and what they do. It is also about designing what the people do – how the system as a whole works is critical.

Pass it on

One of the critical times in open-heart surgery is actually after it is all over. The patient has to be moved from the operating theatre to the intensive care unit where a ‘handover’ happens. All the machines they were connected to have to be removed, moved with them or swapped for those in the intensive care unit. Not only that, a lot of information has to be passed from the operating team to the care team. The team taking over need to know the important details of what happened and especially any problems, if they are to give the best care possible.

A research team from the University of Oxford and Great Ormond Street Hospital in London wondered if hospital teams could learn anything from the way other critical teams work. This is an important part of computational thinking – the way computer scientists solve problems. Rather than starting from scratch, find a similar problem that has already been solved and adapt its solution for the new situation.

Rather than starting from scratch,
find a similar problem
that has already been solved

Just as the pit-stop team are under intense time pressure, the operating theatre team are under pressure to be back in the operating theatre for the next operation as soon as possible. In a handover from surgery there is lots of scope for small mistakes to be made that slow things down or cause problems that need to be fixed. In situations like this, it’s not just the technology that matters but the way everyone works together around it. The system as a whole needs to be well designed and pit stop teams are clearly in the lead.

Smooth moves

To find out more, the research team watched the Ferrari F1 team practice pit-stops as well as talking to the race director about how they worked. They then talked to operating theatre and intensive care unit teams to see how the ideas might work in a hospital handover. They came up with lots of changes to the way the hospital did the handover.

For example, in a pit-stop there is one person coordinating everything – the person with the ‘lollipop’ sign that reminds the driver to keep their brakes on. In the hospital handover there was no person with that job. In the new version the anaesthetist was given the overall job for coordinating the team. Once the handover was completed that responsibility was formally passed to the intensive care unit doctor. In Formula 1 each person has only one or two clear tasks to do. In the hospital people’s roles were less obvious. So each person was given a clear responsibility: the nurses were made responsible for issues with draining fluids from the patient, anaesthetist for ventilation issues, and so on. In Formula 1 checklists are used to avoid people missing steps. Nothing like that was used in the handover so a checklist was created, to be used by the team taking on the patient.

These and other changes led to what the researchers hoped would be a much improved way of doing handovers. But was it better?

Calm efficiency saves the day

To find out they studied 50 handovers – roughly half before the change was made and half after. That way they had a direct way of seeing the difference. They used a checklist of common problems noting both mistakes made and steps that proved unusually difficult. They also noted how well the teams worked together: whether they were calm and supported each other, planned what they did, whether equipment was available when needed, and so on.

They found that the changes led to clearly better handovers. Fewer errors were made both with the technology and in passing on information. Better still, while the best performance still happened when the teams worked well, the changes meant that teamwork problems became less critical. Pit-stops and open-heart surgery may be a world apart, with one being about getting every last millisecond of speed and the other about giving the best care possible. But if you want to improve how well technology and people work together, you need to think about more than just the gadgets. It is worth looking for solutions anywhere: children can be helped to recover from heart surgery even by the high-octane glitz of Formula 1.



EPSRC supports this blog through research grant EP/W033615/1. 

Nurses in the mist

by Paul Curzon, Queen Mary University of London

(From the archive)

A gorilla hugging a baby gorilla
Image by Angela from Pixabay

What do you do when your boss tells you “go and invent a new product”? Lock yourself away and stare out the window? Go for a walk, waiting for inspiration? Medical device system engineers Pat Baird and Katie Hansbro did some anthropology.

Dian Fossey is perhaps the most famous anthropologist. She spent over a decade living in the jungle with gorillas so that she could understand them in a way no one had done before. She started to see what it was really like to be a gorilla, showing that their fierce King Kong image was wrong and that they are actually gentle giants: social animals with individual personalities and strong family ties. Her book ‘Gorillas in the Mist’, and the film based on it, tell the story.

Pat and Katie work for Baxter Healthcare. They are responsible for developing medical devices like the infusion pumps hospitals use to pump drugs into people to keep them alive or reduce their pain. Hospitals don’t buy medical devices like we buy phones, of course. They aren’t bought just because they have lots of sexy new features. Hospitals buy new medical devices if they solve real problems. They want solutions that save lives, or save money, and if possible both! To invent something new that sells, you ideally need to solve problems your competitors aren’t even aware of. Challenged to come up with something new, Pat and Katie wondered whether, given the equivalent was so productive for Dian Fossey, immersing themselves in hospitals with nurses would give the advantage their company was after. Their idea was that understanding what it was really like to be a nurse would make a big difference to their ability to design medical devices: ones that helped with the real problems nurses had, rather than those that the sales people said were problems. After all, the sales people only talk to the managers, and the managers don’t work on the wards. They were right.

Taking notes

They took a team on a 3-month hospital tour, talking to people, watching them do their jobs and keeping notes of everything. They noted things like the layout of rooms and how big they were, recorded the temperature, how noisy it was, how many flashing lights and so on. They spent a lot of time in the critical care wards where infusion pumps were used the most but they also went to lots of other wards and found the pumps being used in other ways. They didn’t just talk to nurses either. Patients are moved around to have scans or change wards, so they followed them, talking to the porters doing the pushing. They observed the rooms where the devices were cleaned and stored. They looked for places where people were doing ad hoc things like sticking post it note reminders on machines. That might be an opportunity for them to help. They looked at the machines around the pumps. That told them about opportunities for making the devices fit into the bigger tasks the nurses were using them as part of.

The hot Texan summer was a problem

So did Katie and Pat come up with a new product as their boss wanted? Yes. They developed a whole new service that is bringing in the money, but they did much more too. They showed that anthropology brings lots of advantages for medical device companies. One part of Pat’s job, for example, is to troubleshoot when his customers are having problems. He found after the study that, because he understood so much more about how pumps were used, he could diagnose problems more easily. That saved time and money for everyone. For example, touch screen pumps were being damaged. It was because when they were stored together on a shelf their clips were scratching the ones behind. They had also seen patients sitting outside in the ambulance bays with their pumps for long periods, smoking. Not their problem, apart from the fact that it was Texas and the temperature outside was higher than the safe operating limit of the electronics. Hospitals don’t get that hot so no one imagined there might be a problem. Now they knew.

Porters shouldn’t be missed

Pat and Katie also showed that to design a really good product you had to design for people you might not even think about, never mind talk to. By watching the porters they saw there was a problem when a patient was on lots of drugs each with its own pump. The porter pushing the bed also had to pull along a gaggle of pumps. How do you do that? Drag them behind by the tubes? Maybe the manufacturers can design in a way to make it easy. No one had ever bothered talking to the porters before. After all they are the low paid people, doing the grunt jobs, expected to be invisible. Except they are important and their problems matter to patient safety. The advantages didn’t stop there, either. Because of all that measuring, the company had the raw data to create models of lots of different ward environments that all the team could use when designing. It meant they could explore in a virtual environment how well introducing new technology might fix problems (or even see what problems it would cause).

All in all anthropology was a big success. It turns out observing the detail matters. It gives a commercial advantage, and all that mundane knowledge of what really goes on allowed the designers to redesign their pumps to fix potential problems. That makes the machines more reliable, and saves money on repairs. It’s better for everyone.

Talking to porters, observing cupboards, watching ambulance bays: sometimes it’s the mundane things that make the difference. To be a great systems designer you have to deeply understand all the people and situations you are designing for, not just the power users and the normal situations. If you want to innovate, like Pat and Katie, take a leaf out of Dian Fossey’s book. Try anthropology.



EPSRC supports this blog through research grant EP/W033615/1. 

Negligent nurses? Or dodgy digital? – device design can unintentionally mask errors

Magicians often fool their audience into ‘looking over there’ (literally or metaphorically), getting them to pay attention to the wrong thing so that they’re not focusing on what the magician is doing and can enjoy the trick without seeing how it was done. Computers, phones and medical devices let you interact with them using a human-friendly interface (such as a ‘graphical user interface’) which makes them easier to use, but which can also hide the underlying computing processes from view. Normally that’s exactly what you want, but if there’s a problem, and one that you’d really need to know about, how well does the device make that clear? Sometimes the design of the device itself can mask important information, and sometimes the way in which devices are used can mask it too. Here is a case where nurses were blamed but it was later found that the medical devices involved, blood glucose meters, had (unintentionally) tripped everyone up. A useful workaround seemed to be working well, but caused problems later on.

At the end you can find more links between magic and computer science, and human-computer interaction.

Negligent nurses? Or dodgy digital?

by Harold Thimbleby, Swansea University and Paul Curzon, Queen Mary University of London

It’s easy to get excited about new technology and assume it must make things better. It’s rarely that easy. Medical technology is a case in point, as one group of nurses found out. It was all about one simple device and wearable ID bracelets. Nurses were taken to court, blamed for what went wrong.

The nurses taken to court worked in a stroke unit and were charged with wilfully neglecting their patients. Around 70 others were also disciplined though not sent to court.

There were problems with many nurses’ record-keeping. A few were selected to be charged by the police on the rather arbitrary basis that they had more odd records than the others.

Critical Tests

The case came about because of a single complaint. As the hospital, and then police, investigated, they found more and more oddities, with lots of nurses suddenly implicated. They all seemed to have fabricated their records. Repeatedly, their paper records did not tally with the computer logs. Therefore, the nurses must have been making up the patient records.

The gadget at the centre of the story was a portable glucometer. Glucometers allow the blood-glucose (aka blood sugar) levels of patients to be tested. This matters. If blood-sugar problems are not caught quickly, seriously ill patients could die.

Whenever they did a test, the nurses recorded it in the patient’s paper record. The glucometer system also had a better, supposedly infallible, way to do this. The nurse scanned their ID badge using the glucometer, telling it who they were. They then scanned the patient’s barcode bracelet, and took the patient’s blood-sugar reading. They finally wrote down what the glucometer said in the paper records, and the glucometer automatically added the reading to that patient’s electronic record.

Over and over again, the nurses were claiming in the notes of patients that they had taken readings, when the computer logs showed no reading had been taken. As machines don’t lie, the nurses must all be liars. They had just pretended to take these vital tests. It was a clear case of lazy nurses colluding to have an easy life!

What really happened?

In court, witnesses gave evidence. A new story unfolded. The glucometers were not as simple as they seemed. No-one involved actually understood them, how the system really worked, or what had actually happened.

In reality the nurses were looking after their patients … despite the devices.

The real story starts with those barcode bracelets that the patients wore. Sometimes the reader couldn’t read the barcode. You’ve probably seen this happen in supermarkets. Every so often the reader can’t tell what is being scanned. The nurses needed to sort it out as they had lots of ill patients to look after. Luckily, there was a quick and easy solution. They could just scan their own ID twice. The system accepted this ‘double tapping’. The first scan was their correct staff ID. The second scan was of their staff card ID instead of the patient ID. That made the glucometer happy so they could use it, but of course they weren’t using a valid patient ID.

As they wrote the test result in the patient’s paper record no harm was done. When checked, over 200 nurses sometimes used double tapping to take readings. It was a well-known (at least by nurses), and commonly used, work-around for a problem with the barcode system.

The system was also much more complicated than it seemed. It involved a complex computing network and a lot of complex software, not just a glucometer. Records often didn’t make it to the computer database for a variety of reasons: the network went down, manually entered details contained mistakes, the database sometimes crashed, and the way the glucometers had been programmed meant they had no way to check that the data they sent to the database actually got there. Results didn’t go straight to the patient record anyway. That only happened when the glucometer was docked (for recharging), but the devices were constantly in use so might not be docked for days. Indeed, a fifth of the entries in the database had an error flag indicating something had gone wrong. In reality, you just couldn’t rely on the electronic record. It was the nurses’ old-fashioned paper records that you could really trust.
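
One of the lessons is about how readings were sent. In software terms it is the difference between ‘fire and forget’ and acknowledged delivery, sketched here in Python with hypothetical helper names standing in for the device’s networking and alarm code (our illustration, not the real system’s):

    def send_fire_and_forget(reading, connection):
        # What the article describes: transmit and hope. If the
        # network or database fails, the reading is lost and
        # neither the device nor the nurse ever finds out.
        connection.transmit(reading)

    def send_acknowledged(reading, connection, alert_user, max_tries=3):
        # A safer pattern: keep the reading until the server
        # confirms it was stored, and raise the alarm otherwise
        for attempt in range(max_tries):
            if connection.transmit_and_wait_for_ack(reading):
                return True
        alert_user("Reading NOT saved to the patient record: "
                   "make sure the paper record is kept!")
        return False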

The police had got it the wrong way round! They thought the computers were reliable and the nurses untrustworthy, but the nurses were doing a good job and the computers were somehow failing to record the patient information. Worse, they were failing to record that they were failing to record things correctly! … So nobody realised.

Disappearing readings

What happened to all the readings with invalid patient IDs? There was no place to file them so the system silently dropped them into a separate electronic bin of unknowns. They could then be manually assigned, but no way had been set up to do that.

During the trial the defence luckily noticed an odd discrepancy in the computer logs. It was really spiky in an unexplained way. On some days hardly any readings seemed to be taken, for example. One odd trough corresponded to a day the manufacturer said they had visited the hospital. They were asked to explain what they had done…

The hospital had asked them to get the data ready to give to the police. The manufacturer’s engineer who visited therefore ‘tidied up’ the database, deleting all the incomplete records…including all the ones the nurses had supposedly fabricated! The police had no idea this had been done.

Suddenly, no evidence

When this was revealed in court, the judge ruled that all the prosecution’s evidence was unusable. The prosecution said, therefore, they had no evidence at all to present. In this situation, the trial ‘collapses’: the nurses were completely innocent, and the trial immediately stopped.

The trial had already blighted the careers of lots of good nurses though. In fact, some of the other nurses pleaded guilty as they had no memory of what had actually happened but had been confronted with the ‘fact’ that they must have been negligent as “the computers could not lie”. Some were jailed. In the UK, you can be given a much shorter jail sentence, or maybe none at all, if you plead guilty. It can make sense to plead guilty even if you know you aren’t — you only need to think the court will find you guilty. Which isn’t the same thing.

Silver bullets?

Governments see digitalisation as a silver bullet to save money and improve care. It can do that if you get it right. But digital is much harder to get right than most people realise. In the story here, not getting the digital right — and not understanding it — caused serious problems for lots of nurses.

It takes skill and deep understanding to design digital things to work in a way that really makes things better. It’s hard for hospitals to understand the complexities in what they are buying. Ultimately, it’s nurses and doctors who make it work. They have to.

Because digital technology is so hard to design well, the nurses and doctors who use it shouldn’t automatically be blamed when things go wrong.


This article was originally published on the CS4FN website and a copy can be found in Issue 25 of the CS4FN magazine, below.



EPSRC supports this blog through research grant EP/W033615/1.

Screaming Headline Kills!!!

A pile of newspapers
Image by congerdesign from Pixabay

Most people in hospital get great treatment, but if something does go wrong the victims often want something good to come of it. They want to understand why it happened and to be sure it won’t happen to anyone else. Medical mistakes can make a big news story though, with screaming headlines vilifying those ‘responsible’. That may sell papers, but it could also make things worse.

If the press and politicians pressurise hospitals to show they have done something, the hospital may simply sack the person who made the mistake. They may then not improve anything, meaning the same thing could happen again if it was an accident waiting to happen. Worse, if we’re too quick to blame and punish someone, other people will be reluctant to report their mistakes, and without that sharing we can’t learn from them. One of the reasons flying is so safe is that pilots always report ‘near misses’, knowing they will be praised for doing so rather than getting into trouble. It’s far better to learn from mistakes where nothing really bad happens than to wait for a tragedy.

Share mistakes to learn from them

Chrystie Myketiak from Queen Mary explored whether the way a medical technology story is reported makes a difference to how we think about it, and ultimately to what happens. She analysed news stories about three similar incidents in the UK, America and Canada. She wanted to see what the papers said, but also how they said it. The press often sensationalise stories, but Chrystie found that this didn’t always happen. Some news stories did imply that the person who’d made the mistake was the problem (it’s rarely that simple!) but others were more careful to highlight that they were busy people working under stressful conditions and that the mistakes only happened because there were other problems.

Regulations in Canada mean the media can’t report on specific details of a story while it is being investigated. Chrystie found that, in the incidents she looked at, this led to much more reasoned reporting. In that kind of environment hospitals are more likely to improve rather than just blame staff. How the hospital handled a case also affected what was written – being open and honest about a problem is better than ignoring requests for comment and pretending there isn’t a problem.

Everyone makes mistakes (if you don’t believe that, the next time you’re at a magic show, make sure none of the tricks fool you!). Often mistakes happen because the system wasn’t able to prevent them. Rather than blame, retrain or sack someone, it’s far better to improve the system. That way something good will come of tragedies.

– Paul Curzon, Queen Mary University of London (From the archive)


Understanding matters of the heart – creating accurate computer models of human organs

by Paul Curzon, Queen Mary University of London

Ada Lovelace, the ‘first programmer’, thought the possibilities of computer science stretched far wider than anyone else of her time imagined. For example, she mused that one day we might be able to create mathematical models of the human nervous system, essentially describing how electrical signals move around the body. The University of Oxford’s Blanca Rodriguez is interested in matters of the heart. She’s a bioengineer creating accurate computer models of human organs.

How do you model a heart? Well, you first have to create a 3D model of its structure. You start with MRI scans. They give you a series of pictures of slices through the heart. Turning that into a 3D model takes some serious computer science: image processing that works out, from the pictures, what is tissue and what isn’t. Next you do something called mesh generation. That involves breaking the model up into lots of small, simple parts. What you get is not just a picture of the surface of the organ but an accurate model of its internal structure.
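As a flavour of that first step, here is a toy sketch in Python. Real medical image processing is far more sophisticated than a single brightness threshold, and the random arrays below are stand-ins for real MRI slices, but it shows the idea: classify each pixel of each slice as tissue or not, then stack the slices into a 3D volume.

```python
# Toy sketch: 'segment' each scan slice into tissue / not-tissue with a
# simple brightness threshold, then stack the slices into a 3D volume.
# The random arrays stand in for real MRI slices.
import numpy as np

rng = np.random.default_rng(0)
slices = [rng.random((64, 64)) for _ in range(50)]  # 50 fake MRI slices

TISSUE_THRESHOLD = 0.5  # invented cut-off: brighter than this counts as tissue

segmented = [s > TISSUE_THRESHOLD for s in slices]  # True where there is 'tissue'
volume = np.stack(segmented)  # shape (50, 64, 64): a 3D model of the structure

print("3D volume shape:", volume.shape)
print("voxels classified as tissue:", int(volume.sum()))
```

Mesh generation then chops a volume like that into lots of tiny elements (typically tetrahedra), over which the equations describing the heart can be solved.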

So far so good, but it’s still just the structure. The heart is a working, beating thing, not just a sculpture. To understand it you need to see how it works. Blanca and her team are interested in simulating the electrical activity in the heart – how electrical pulses move through it. To do this they create models of the way individual cells propagate an electrical signal. Once you have this, you can combine it with the model of the heart’s structure to give a model of how the whole organ works. You essentially have a lot of equations, and solving them gives a simulation of how electrical signals propagate from cell to cell.
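Here is a deliberately cartoonish sketch of signal propagation. Blanca’s real models solve systems of differential equations describing cell electrophysiology; this toy ‘excitable medium’ just has each cell resting, excited, or recovering, with excitement spreading to resting neighbours, so you can watch a wave of activation ripple across a grid of cells.

```python
# A toy 'excitable medium': each cell is resting (0), excited (1) or
# recovering (2). An excited neighbour excites a resting cell, so a
# wave of excitation spreads outwards, loosely like an electrical
# signal spreading from heart cell to heart cell.
import numpy as np

RESTING, EXCITED, RECOVERING = 0, 1, 2
grid = np.zeros((7, 7), dtype=int)
grid[3, 3] = EXCITED  # stimulate the middle cell

def step(grid):
    new = grid.copy()
    excited = (grid == EXCITED)
    # does each cell have an excited neighbour (up/down/left/right)?
    has_excited_neighbour = np.zeros_like(excited)
    has_excited_neighbour[1:, :] |= excited[:-1, :]
    has_excited_neighbour[:-1, :] |= excited[1:, :]
    has_excited_neighbour[:, 1:] |= excited[:, :-1]
    has_excited_neighbour[:, :-1] |= excited[:, 1:]
    new[(grid == RESTING) & has_excited_neighbour] = EXCITED
    new[grid == EXCITED] = RECOVERING   # excited cells start recovering
    new[grid == RECOVERING] = RESTING   # recovered cells can fire again
    return new

for t in range(4):
    print(f"t={t}:\n{grid}\n")
    grid = step(grid)
```

The recovering state matters: it stops the wave from immediately re-exciting the cells behind it, which is roughly why real heart tissue conducts waves in one direction rather than churning chaotically.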

The models Blanca’s team have created are based on a healthy rabbit heart. Now that they have them, they can simulate the heart working and see if the simulation corresponds to the results from lab experiments. If it does, that suggests their understanding of how cells work together is correct. When the results don’t match, that is still good, as it gives new questions to research: it would mean something about their initial understanding was wrong, driving new work to fix the problem and so improve the models.

Once the models have been validated in this way – shown to be an accurate description of the way a rabbit’s heart works – they can be used to explore things you just can’t do with experiments: what happens when changes are made to the structure of the virtual heart, or how drugs change the way it works, for example. That can lead to new drugs.

They can also use it to explore how the human heart works. For example, early work has looked at the heart’s response to an electric shock. Essentially the heart reboots! That’s why when someone’s heart stops in hospital, the emergency team give it a big electric shock to get it going again. The model predicts in detail what actually happens to the heart when that is done. One of the surprising things is it suggests that how well an electric shock works depends on the particular structure of the person’s heart! That might mean treatment could be more effective if tailored for the person.

Computer modelling is changing the way science is done. It doesn’t replace experiments. Instead, clinical work, modelling and experiments combine to give us a much deeper understanding of the way the world – and that includes our own hearts – works.



This article was originally published on the CS4FN website and a copy can be found on p16 of Issue 20 of the CS4FN magazine, which is free to download along with all of our booklets and magazines.


The charity Cardiac Risk in the Young raises awareness of cardiac electrical rhythm abnormalities and supports testing (electrocardiograms and echocardiograms) for all young people aged 14-35.

EPSRC supports this blog through research grant EP/W033615/1.