Working in Computer Science: An Autistic Perspective (Part 1)

by Daniel Gill, Queen Mary University of London

Autism is a condition with many associated challenges, but for some people it also brings benefits. This mix is especially apparent in the workplace, where autistic people often find it difficult to get along with colleagues (and their boss), and to complete the work that has been set for them. It’s not all negative though: many autistic people find work in which they thrive, and given the right circumstances and support, an autistic person can succeed in such an environment.

We often rightly hear about the greats in computer science: Ada Lovelace, Alan Turing, Lynn Conway (who sadly passed away earlier this month) – but let us not forget the incredible teams of computer scientists working around the clock: maintaining the Internet, building the software we use every day, and teaching the next generation. For this two-part article, I have spoken with Stephen Parry, an autistic computer scientist who, after working in industry for 20 years, now teaches the subject at a sixth-form college in Sheffield. His autistic traits have caused him challenges throughout his career, but his experience is not unique – many autistic computer scientists face the same challenges.

Stephen’s experience with programming started at the age of 14, after he was introduced to computers on a curriculum enrichment course. He decided against taking a then “really rubbish” O-Level (now GCSE) Computer Science course, and the existence of the accompanying A-Level “just didn’t come up on my radar”. He was, however, able to take home the college’s sole RML 380Z for the summer, a powerful computer for the time, with which he was able to continue practising programming.

When it came time to go to university, he opted first to study chemistry, a subject he had been studying at A-Level. However, after a short time he realised that he wasn’t as interested in chemistry as he had first thought, so he decided to switch to computer science. In our discussions, he praised the computer science course at the University of Sheffield:

“[I] really enjoyed [the course] and got on well with it. So, I kind of drifted into it as far as doing it seriously is concerned. But it’s been a hobby of mine since I was 14 years old, and once I was on the degree, I mean, the degree at Sheffield was a bit like a sweetie shop. It really was absolutely brilliant. We did all kinds of weird and wonderful stuff, all of it [was] really interesting and engaging, and the kind of stuff that you wouldn’t get by either playing around on your own or going out into [the] workplace. As I’ve always said, that’s what a university should be. It should expose you to the kind of stuff that you can’t get anywhere else, the stuff that employers haven’t realised they need yet.”

Of autistic people who go to university, research shows they are much more likely than the general population to pick STEM subjects [EXTERNAL]. For lots of autistic people, the clear, logical and fundamental understanding behind scientific subjects is a great motivator. Stephen describes how this is something that appeals to him.

“[What] I enjoy about computer science is how it teaches you how the computer actually works at a fundamental level. So, you’re not just playing with a black box anymore – it’s something you understand. And especially for someone on the [autism] spectrum, that’s a really important aspect of anything you do. You want to understand how things work. If you’re working with something, and you don’t understand how it works, usually it’s not very satisfying and kind of frustrating. Whereas, if you understand the principles going on inside of it then, when you know you’ve got it, it kind of unlocks it for you.”

While autistic traits often result in challenges for autistic people, there are some that can be a real benefit to someone working in computer science. A previous CS4FN article described how positive traits like ‘attention to detail’ and ‘resilience and determination’ link well to programming. Stephen agrees that these traits can help him to solve problems:

“If I get focused on a problem, the hyper focus kicks in, and I will just keep plugging away until it’s done, fixed or otherwise overcome. I know it’s both a benefit and hazard – it’s a double edged sword, but at the same time, you know you have to have that attention to detail and that, to put it another way, sheer bloody mindedness to be determined that you’re going to make it work, or you’re going to understand how it works, and that does come definitely from the [autism] spectrum.”

Although he enjoyed the content greatly, Stephen had a rocky degree, both in and out of lectures. However, some unexpected benefits arose from being at university; he both found faith and met his future wife. These became essential pillars of support, as he prepared to enter the workforce. This he did, working both as a programmer and in a variety of IT admin and technical support roles. 

About 78% of autistic adults are currently out of work [EXTERNAL] (compared with 20% of the general population). This is, in part, a reflection of the fact that some autistic people are unable to work because of their condition. But many others who want to work cannot, because they do not get the support they need (and are legally entitled to) in their workplace.

At this time, however, Stephen wasn’t aware of his condition, only receiving his diagnosis in his 40s. He described how this transition from university to work was very challenging.

“I moved into my first job, and I found it very, very difficult because I didn’t know that I’ve got this sort of difference – this different way my brain works that affects everything that you do. I didn’t know when I came across difficulties, it was difficult to understand why, at least to an extent, for me and for other people, it was deeply frustrating. I mean, speak to just about every manager I’ve ever had, and the same sort of pattern tends to come out. Most of them recognised that I was very difficult to manage because I found myself very difficult to manage. But time management is an issue with everything – trying to complete tasks to any kind of schedule, trying to plan anything. Oh, my days, when I hear the word SMART. [It’s an] acronym [meaning] specific, measurable, achievable, realistic and time specific. I hear that, and it just it makes me feel physically ill sometimes, because I cannot. I cannot SMART plan.”

During his time in work, however, he had some good luck. Despite the challenges associated with autism, some managers took advantage of the positive skills he brought to the table:

“I found that a real challenge, interpersonally speaking, things like emotional regulation and stuff like that, which I struggle with, and I hate communicating on the phone and various other things, make me not the most promising employee. But the managers that I’ve had over the years that have valued me the most are the ones who recognised the other side of the coin, which is [that] over the years, I have absorbed so much knowledge about computer science and there are very [few] problems that you can come across that I don’t have some kind of insight into.”

This confidence in a range of areas in computer science is also a result of Stephen’s ability to link lots of areas and experiences together, a positive skill that some autistic people have:

“I found that with the mixture of different job roles I did, i.e. programming, support, network admin and database admin, my autism helped me form synergies between the different roles, allowing me to form links and crossover knowledge between the different areas. So, for example, as a support person with programming experience, I had insight into why the software I was helping the user with did not work as desired (e.g. the shortcuts or mistakes the programmer had likely made) and how maybe to persuade it to work. As a programmer with support experience, you had empathy with the user and what might give them a better UX, as well as how they might abuse the software. All this crossover, also set me up for being able to teach confidently on a huge range of aspects of CS.”

For autistic students who are planning on working in a computer science career, he has this to say:

“As an autistic person, and I would say this to anybody with [autism], you need to cultivate the part of you that really wants to get on well with people and wants to be able to care about people and understand people. Neurotypical people get that ability out of the box, and some of them take it for granted. I tend to find that the autistic people who actually find that they can understand people, that they work at it until they can, [are] often more conscientious as a result. And I think it’s important that if you’re an autistic person, to learn how to be positive about people and affirm people, and interact with them in positive ways, because it can make you a more caring and more valuable human being as a result.”

“Look for jobs where you can really be an asset, where your neurodiversity is the asset to what you’re trying to do, but at the same time, don’t be afraid to try to, and learn how to engage with people. Although it’s harder, it’s often more rewarding as a result.”

After working in industry for 20 years, the last half of which as a contractor, Stephen decided to take a considerable pay cut and become a computer science teacher. In the second part of this article, we will continue our conversation and find out what led him to make the career change to teaching.


Involving disabled people in the design of ICT tools and devices

by Jo Brodie, Queen Mary University of London

The World Health Organisation currently estimates that around 1.3 billion people, or one in six people on Earth, “experience significant disability”. Designers creating devices and tools for people to use need to make sure that the products they develop can be used by as many people as possible, not just non-disabled people, so that everyone can benefit from them.

Disabled people can face lots of barriers in the workplace, including some that seem simple to address – problems using everyday ICT and other tech. While there are a lot of fantastic Assistive Technology (AT) products, unfortunately not all are suitable, and some are abandoned by disabled people because they don’t serve their needs.

One challenge is that some of the people who have been doing the designing might not have direct experience of disability themselves, so they are less able to think about their design from that perspective. Solutions to this can include making sure that disabled computer scientists and human-computer interaction researchers are part of the team of designers and creators in the first place, or making it easier for other disabled people to be involved at an early stage of design. This means that their experience and ideas can contribute to making the end product more relevant and useful for them and others. Alongside this there is education and advocacy – helping more young computer scientists, technologists and human-computer interaction designers to start thinking early about how their future products can be more inclusive.

An EPSRC project, “Inclusive Public Activities for Information and Communication Technologies”, has been looking at some practical ways to help. Run by Prof. Cathy Holloway and Dr. Maryam Bandukda and their wider team at UCL, they have established a panel of disabled academics and professionals who can be ‘critical friends’ to researchers planning new projects. By co-creating a set of guidelines for researchers, they are providing a useful resource, but it also means that disabled voices are heard at an early stage of the design process so that projects start off in the right direction.

Prof. Holloway and Dr. Bandukda are based at the Global Disability Innovation Hub (GDI Hub) in the department of computer science at UCL. GDI Hub is a global leader in disability innovation and inclusion and has research reaching over 30 million people in 60 countries. The GDI Hub also educates people to increase awareness of disability, reduce stigma and lay the groundwork for more disability-aware designers to benefit people in the future with better products.

An activity that the UCL team ran in February 2024, for schools in East London, was a week-long inclusive ICT “Digital Skills and Technology Innovation” bootcamp. They invited students in Year 9 and above to learn about 3D printing, 3D modelling, laser cutting, AI and machine learning using Python, and augmented and virtual reality experiences, along with a chance to visit Google’s Accessible Discovery Centre and use their skills to “tackle real-world challenges”.

What are some examples of Assistive Technology?

Screen-reading software can help blind or visually impaired people by reading aloud the words on the page. This is something that can help sighted people too: your document can read itself to you while you do something else. The entire world of audio books exists for this reason! D/deaf people can take part more easily in Zoom conversations if text-to-caption software is available so they can read what’s being said. That can also help those whose hearing is fine but who speak a different language and might miss some words. Similarly, you can dictate your clever ideas to your computer or device, which will type them for you. This can be helpful for someone with limited use of their hands, or just someone who’d rather talk than type – this might also explain the popularity of devices and tools like Alexa or Siri.

Web designers want to (and may need to*) make their websites accessible to all their visitors. You can help too – a simple thing that you can do is to add ALT Text (alternative text) to images. If you ever share an image or gif to social media it’s really helpful to describe what’s in the image for screen readers so that people who can’t view it can still understand what you meant.
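If you are curious what this looks like under the hood, here is a tiny sketch (ours, not from any real accessibility checker) that uses Python and the BeautifulSoup library to spot images on a web page that have no ALT text. The page snippet and file names are made up for illustration.

    # A minimal sketch: flag images that have no ALT text.
    # Assumes the third-party package beautifulsoup4 is installed.
    from bs4 import BeautifulSoup

    html = """
    <img src="cat.jpg" alt="A ginger cat asleep on a laptop keyboard">
    <img src="chart.png">
    """

    soup = BeautifulSoup(html, "html.parser")
    for img in soup.find_all("img"):
        alt = img.get("alt", "").strip()
        if alt:
            print(f"OK: {img['src']} has ALT text: {alt!r}")
        else:
            print(f"Missing ALT text: {img['src']} - a screen reader has nothing to read out here")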

*Thanks to regulations that were adopted in 2018, the designers of public sector websites (e.g. government and local council websites where people pay their council tax or apply for benefits) must make sure that their pages meet certain accessibility standards because “people may not have a choice when using a public sector website or mobile app, so it’s important they work for everyone. The people who need them the most are often the people who find them hardest to use”.

Further reading

You can find out more about the ‘Inclusive Public Activities for ICT’ project here. Maryam is one of five EPSRC Public Engagement in ICT Champions.

DIX Manifesto
DIX puts disability front and center in the design process, and in so doing aims to create accessible, creative new HCI solutions that will be better for everyone

You might have come across UI (User Interface) and UX (User Experience); DIX is Disability Interaction – how disabled people use various tech.

📃 More articles on Neurodiversity and Disability in Computer Science

Examples of computer science and disability-related jobs

Both of the jobs listed below are CLOSED and are included for your information only.

Closed job for Londoners who live in the borough of Islington
Closed job for people to work at the GDI Hub, which features in the article

See also our collection of Computer Science & Research posts.



Designing for autistic people

by Daniel Gill and Paul Curzon, Queen Mary University of London

What should you be thinking about when designing for a specific group with specific needs, such as autistic people? Queen Mary students were set this task and on the whole did well. The lessons though are useful when designing any technology, whether apps or gadgets.

A futuristic but complicated interface with lots of features: feature bloat?
Image by Tung Lam from Pixabay

The Interactive Systems Design module at QMUL includes a term-long realistic team interaction design project with the teaching team acting as clients. The topic changes each year but is always open-ended and aimed at helping some specific group of people. The idea is to give experience designing for a clear user group not just for anyone. A key requirement is always that the design, above all, must be very easy to use, without help. It should be intuitively obvious how to use it. At the end of the module, each team pitches their design in a short presentation as well as a client report.

This year the aim was to create something to support autistic people. What their design does, and how, was left to the teams to decide from their early research and prototyping. They had to identify a need themselves. As a consequence, the teams came up with a wide range of applications and tools to support autistic people in very different ways.

How do you come up with an idea for a design? It should be based on research. The teams had to follow a specific (if simplified) process. The first step was to find out as much as they could about the user group and other stakeholders being designed for: here autistic people and, if appropriate, their carers. The key thing is to identify their unmet goals and needs. There are lots of ways to do this: from book research (charities, for example, often provide good background information) and informally talking to people from the stakeholder group, to more rigorous methods of formal interviews, focus groups and even ethnography (where you embed yourself in a community).

Many of the QMUL teams came up with designs that clearly supported autistic people, but some projects were only quite loosely linked with autism. While the needs of autistic people were considered in the concept and design, they did not fully focus on supporting autistic people. More feedback directly from autistic people, both at the start and throughout the process, would likely have made the applications much more suitable. (That of course is quite hard in this kind of student role-playing scenario, though some groups were able to do so.) That, though, is a key idea the module is aiming to teach – how important it is to involve users and their concerns closely throughout the design process, both in coming up with designs and evaluating them. Old-fashioned waterfall models from software engineering, where designs are only tested with users at the end, are just not good enough.

From the research, the teams were then required to create design personas. These are detailed, realistic but fictional people with names, families, and lives. The more realistic the character the better (computer scientists need to be good at fiction too!). Personas are intended to represent the people being designed for in a concrete and tangible way throughout the design process. They help to ensure the designers really do design for real people, not some abstract person that shape-shifts to fit the needs of their ideas. Doing the latter can lead to concepts being pushed forward just because the designer is excited by their ideas rather than because they are actually useful. Throughout the design the team refer back to them – does this idea work for Mo and the things he is trying to do?

An important part of good persona design concerns stereotypes. The QMUL groups avoided stereotypes of autistic people. One group went further, though: they included the positive traits that their autistic persona had, not just negative ones. They didn’t see their users in a simplistic way. Thinking about positive attributes is really, really important when designing for neurodivergent people, and for those with physical disabilities too, to help make the persona a realistic person. That group’s persona was therefore outstanding. Alan Cooper, who came up with the idea of design personas, argued that stereotypes (such as a nurse persona being female) were good in that they could give people a quick and solid idea of the person. However, this is a very debatable view. It seems to go against the whole idea of personas. Most likely you miss the richness of real people and end up designing for a fictional person that doesn’t represent that group of people at all. The aim of personas is to help the designers see the world from the perspective of their users, so here of autistic people. A stereotype can only diminish that.

Multicolour jigsaw ribbon
Image by Oberholster Venita from Pixabay

Another core lesson of the module is the importance of avoiding feature bloat. Lots of software and gadgets are far harder to use than they need be because they are packed with features: features that are hardly ever, possibly never, used. What could have been simple-to-use apps, focusing on some key tasks, are instead turned into ‘do everything’ apps. A really good video call app becomes a file store, a messaging place, chat rooms, a phone booth, a calendar, a movie player, and more. Suddenly it’s much harder to make video calls. Because there are so many features and so many modes, all needing their own controls, the important things the design was supposed to help you do become hard to do (think of a TV remote control – the more features, the more buttons, until the important ones are lost). That undermines the aim that good design should make key tasks intuitively easy. The difficulty when designing such systems is balancing the desire to put as many helpful features as possible into a single application against the complexity that this adds. That can be bad for neurotypical people, who may find it hard to use. For neurodivergent people it can be much worse – they can find themselves overwhelmed. When presented with such a system, if they can use it at all, they might have to develop their own strategies to overcome the information overload it causes. For example, they might need to learn the interface bit by bit. For something being designed specifically for neurodiverse people, that should never happen. Some of the QMUL teams’ applications were too complicated in this way. This seems to be one of the hardest things for designers to learn, as adding ideas and features seems like a good thing, but it is vitally important not to make this mistake when designing for autistic people.

Perhaps one of the most important points that arose from the designs was that many of the applications presented were designed to help autistic people change to fit into the world. While this would certainly be beneficial, it is important to realise that such systems are only necessary because the world is generally not welcoming for autistic people. It is much better if technology is designed to change the world instead. 


Designing robots that care

by Nicola Plant, Queen Mary University of London

(See end for links to related careers)

Think of the perfect robot companion. A robot you can hang out with, chat to and who understands how you feel. Robots can already understand some of what we say and talk back. They can even respond to the emotions we express in the tone of our voice. But, what about body language? We also show how we feel by the way we stand, we describe things with our hands and we communicate with the expressions on our faces. Could a robot use body language to show that it understands how we feel? Could a robot show empathy?

If a robot companion did show this kind of empathetic body language we would likely feel that it understood us, and shared our feelings and experiences. For robots to be able to behave like this though, we first need to understand more about how humans use movement to show empathy with one another.

Think about how you react when a friend talks about their headache. You wouldn’t stay perfectly still. But what would you do? We’ve used motion capture to track people’s movements as they talk to each other. Motion capture is the technology used in films to make computer-animated creatures like Gollum in Lord of the Rings, or the Apes in the Planet of the Apes. Lots of cameras are used together to create a very precise computer model of the movements being recorded. Using motion capture, we’ve been able to see what people actually do when chatting about their experiences.

It turns out that we share our understanding of things like a headache by performing it together. We share the actions of the headache as if we have it ourselves. If I hit my head, wince and say ‘ouch’, you might wince and say ‘ouch’ too – you give a multimodal performance, with actions and words, to show me you understand how I feel.

So should we just program robots to copy us? It isn’t as simple as that. We don’t copy exactly. A perfect copy wouldn’t show understanding of how we feel. A robot doing that would seem like a parrot, repeating things without any understanding. For the robot to show that it understands how you feel it must perform a headache like it owns it – as though it were really its own! That means behaving in a similar way to you, but adapted to the unique type of headache it has.

Designing the way robots should behave in social situations isn’t easy. If we work out exactly how humans interact with each other to share their experiences though, we can use that understanding to program robot companions. Then one day your robot friend will be able to hang out with you, chat and show they understand how you feel. Just like a real friend.

multimodal = two or more different ways of doing something. With communication that might be spoken words, facial expressions and hand gestures.


This article was previously published on the original CS4FN website and a copy is on page 16 of issue 19 of the CS4FN magazine, which you can read by clicking on the magazine cover below.




See also (previous post and related career options)


We have recently written about the AMPER project, which uses a tablet-based AI tool / robot to support people with dementia and their carers. It prompts the person to discuss events from their younger life and adapts to their needs. We also linked this with information about the types of careers people working in this area might do – the examples given were for a project based in the Netherlands called ‘Dramaturgy for Devices’, which uses lessons learned from the study of theatre and theatrical performances in designing social robots so that their behaviour feels more natural and friendly to the humans who’ll be using them.


See our collection of posts about Career paths in Computing.



AMPER: AI helping future you remember past you

by Jo Brodie, Queen Mary University of London

Have you ever heard a grown up say “I’d completely forgotten about that!” and then share a story from some long-forgotten memory? While most of us can remember all sorts of things from our own life history it sometimes takes a particular cue for us to suddenly recall something that we’d not thought about for years or even decades. 

As we go through life we add more and more memories to our own personal library, but those memories aren’t neatly organised like books on a shelf. For example, can you remember what you were doing on Thursday 20th September 2018 (or can you think of a way that would help you find out)? You’re more likely to be able to remember what you were doing on the last Tuesday in December 2018 (but only because it was Christmas Day!). You might not spontaneously recall a particular toy from your childhood but if someone were to put it in your hands the memories about how you played with it might come flooding back.

Accessing old memories

In Alzheimer’s Disease (a type of dementia) people find it harder to form new memories or retain more recent information, which can make daily life difficult and bewildering, and they may lose their self-confidence. Their older memories, the ones that were made when they were younger, are often less affected, however. The memories are still there but might need drawing out with a prompt to help bring them to the surface.

Perhaps a newspaper advert will jog your memory in years to come… Image by G.C. from Pixabay

An EPSRC-funded project at Heriot-Watt University in Scotland is developing a tablet-based ‘story facilitator’ agent (a software program designed to adapt its response to human interaction) which contains artificial intelligence to help people with Alzheimer’s disease and their carers. The device, called ‘AMPER’*, could improve wellbeing and a sense of self in people with dementia by helping them to uncover their ‘autobiographical memories’, about their own life and experiences – and also help their carers remember them ‘before the disease’.

Our ‘reminiscence bump’

We form some of our most important memories between our teenage years and early adulthood – we start to develop our own interests in music and the subjects that we like studying, we might experience first loves, perhaps going to university, starting a career and maybe a family. We also all live through a particular period of time where we’re each experiencing the same world events as others of the same age, and those experiences are fitted into our ‘memory banks’ too. If someone was born in the 1950s then their ‘reminiscence bump’ will be events from the 1970s and 1980s – those memories are usually more available and therefore people affected by Alzheimer’s disease would be able to access them until more advanced stages of the disease process. Big important things that, when we’re older, we’ll remember more easily if prompted.

In years to come you might remember fun nights out with friends.
Image by ericbarns from Pixabay

Talking and reminiscing about past life events can help people with dementia by reinforcing their self-identity, and increasing their ability to communicate – at a time when they might otherwise feel rather lost and distressed. 

“AMPER will explore the potential for AI to help access an individual’s personal memories residing in the still viable regions of the brain by creating natural, relatable stories. These will be tailored to their unique life experiences, age, social context and changing needs to encourage reminiscing.”

Dr Mei Yii Lim, who came up with the idea for AMPER(3).

Saving your preferences

AMPER comes pre-loaded with publicly available information (such as photographs, news clippings or videos) about world events that would be familiar to an older person. It’s also given information about the person’s likes and interests. It offers examples of these as suggested discussion prompts and the person with Alzheimer’s disease can decide with their carer what they might want to explore and talk about. Here comes the clever bit – AMPER also contains an AI feature that lets it adapt to the person with dementia. If the person selects certain things to talk about instead of others, then in future the AI can suggest more things related to those preferences. Each choice the person with dementia makes now reinforces what the AI will show them in future. That might include a preference for watching a video or looking at photos over reading something, and the AI can adjust to shorter attention spans if necessary.
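To make the idea concrete, here is a toy sketch in Python of that kind of preference learning. It is our illustration only, not AMPER’s actual code: each time the person picks a type of prompt, that type gets a slightly bigger weight, so it is suggested more often in future.

    import random

    # Start with every kind of prompt equally likely.
    weights = {"photos": 1.0, "videos": 1.0, "news clippings": 1.0}

    def suggest():
        """Pick a prompt type with probability proportional to its weight."""
        kinds = list(weights)
        return random.choices(kinds, weights=[weights[k] for k in kinds])[0]

    def record_choice(kind):
        """The person chose this kind of prompt, so nudge its weight up."""
        weights[kind] += 0.5

    # Simulate someone who always goes for photos: the suggestions
    # gradually drift towards what they actually chose.
    for _ in range(10):
        suggestion = suggest()
        record_choice("photos")

    print(weights)  # 'photos' now far outweighs the other prompt types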

“Reminiscence therapy is a way of coordinated storytelling with people who have dementia, in which you exercise their early memories which tend to be retained much longer than more recent ones, and produce an interesting interactive experience for them, often using supporting materials — so you might use photographs for instance.”

Prof Ruth Aylett, the AMPER project’s lead at Heriot-Watt University(4).

When we look at a photograph, for example, the memories it brings up haven’t been organised neatly in our brain like a database. Our memories form connections with all our other memories, more like the branches of a tree. We might remember the people that we’re with in the photo, then remember other fun events we had with them, perhaps places that we visited and the sights and smells we experienced there. AMPER’s AI can mimic the way our memories branch and show new information prompts based on the person’s previous interactions.

Although AMPER can help someone with dementia rediscover themselves and their memories, it can also help carers in care homes (who didn’t know them when they were younger) learn more about the person they’re caring for.

*AMPER stands for ‘Agent-based Memory Prosthesis to Encourage Reminiscing’.


Suggested classroom activities – find some prompts!

  • What’s the first big news story you and your class remember hearing about? Do you think you will remember that in 60 years’ time?
  • What sort of information about world or local events might you gather to help prompt the memories of someone born in 1942, 1959, 1973 or 1997? (Remember that their reminiscence bump will peak in the 15 to 30 years after they were born – some of them may still be in the process of making those memories for the first time!)

See also

If you live near Blackheath in South East London why not visit the Age Exchange reminiscence centre, an arts charity providing creative group activities for those living with dementia and their carers. It has a very nice cafe.

Related careers

The AMPER project is interdisciplinary, mixing robots and technology with psychology, healthcare and medical regulation.

We have information about four similar-ish job roles on our TechDevJobs blog that might be of interest. This was a group of job adverts for roles in the Netherlands related to the ‘Dramaturgy^ for Devices’ project. This is a project linking technology with the performing arts to adapt robots’ behaviour and improve their social interaction and communication skills.

Below is a list of four job adverts (which have now closed!) which include information about the job description, the types of people that the employers were looking for and the way in which they wanted them to apply. You can find our full list of jobs that involve computer science directly or indirectly here.

^Dramaturgy refers to the study of the theatre, plays and other artistic performances.

Dramaturgy for Devices – job descriptions

References

1. Agent-based Memory Prosthesis to Encourage Reminiscing (AMPER) Gateway to Research
2. The Digital Human: Reminiscence (13 November 2023) BBC Sounds – a radio programme that talks about the AMPER Project.
3. Storytelling AI set to improve wellbeing of people with dementia (14 March 2022) Heriot-Watt University news
4. AMPER project to improve life for people with dementia (14 January 2022) The Engineer



Competitive Zen

A hooded woman's intense concentration focussing on the eyes
Image by Walkerssk from Pixabay

To become a Jedi Knight you must have complete control of your thoughts. As you feel the force you start to control your surroundings and make objects move just by thinking. Telekinesis is clearly impossible, but could technology give us the same ability? The study of brain-computer interfaces is an active area of research. How can you make a computer sense and react to a person’s brain activity in a useful way?

Imagine the game of Mindball. Two competitors face each other across a coffee table. A ball sits at the centre. The challenge is to push the ball to your opponent’s end before they push it down to you. The twist is you can use the power of thought alone.

Sound like science fiction? It’s not! I played it at the Dundee Sensation Science Centre many, many years ago where it was a practical and fun demonstration of the then nascent area of brain-computer interfaces.

Each player wears a headband containing electrodes that pick up their brain waves – specifically alpha and theta waves. These are shown as lines on a monitor for all to see. The more relaxed you are, the more you can shut down your brain, the more your brain wave lines fall to the bottom of the screen and start to flatline together. These signals are linked to a computer that drives competing magnets in the table. They pull the metal ball more strongly towards the most agitated person. The more you relax, the more the ball moves away from you… unless of course your opponent can out-relax you.
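As a rough illustration of the idea (ours, not the real Mindball software), a program could estimate how much of each player’s signal is calm alpha-wave activity (roughly 8 to 12 Hz) and push the ball towards whoever has less of it. A Python sketch with made-up signals:

    import numpy as np

    def alpha_fraction(signal, sample_rate=256):
        """Fraction of the signal's power in the 8-12 Hz alpha band."""
        power = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
        in_band = (freqs >= 8) & (freqs <= 12)
        return power[in_band].sum() / power.sum()

    rng = np.random.default_rng(0)
    t = np.arange(0, 2, 1 / 256)  # two seconds of pretend EEG
    relaxed_player = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(len(t))
    agitated_player = rng.standard_normal(len(t))  # mostly noise, little alpha

    if alpha_fraction(relaxed_player) > alpha_fraction(agitated_player):
        print("Ball edges towards player 2, the less relaxed one")
    else:
        print("Ball edges towards player 1")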

Of course it’s not so easy to play. All around the crowd heckle, cheering on their favourite and trying to put off the opponent. You have to ignore it all. You have to think of nothing. Nothing but calm.

The ball gradually edges away from you. You see you are about to win but your excitement registers, and that makes it all go wrong! The ball hurtles back towards you. Relax again. See nothing. Make everything go black around you. Control your thoughts. Stay relaxed. Millimetre by millimetre the ball edges away again until finally it crosses the line and you have won.

It’s not just a game of course. There are some serious uses. It is about learning to control your brain – something that helps people trying to overcome stress, addiction and more. Similar technology can also be used by people who are paralysed, and unable to speak, to control a computer. The most recent systems, which combine this technology with machine learning to learn which brain patterns correspond to different thoughts, can pick up words people are thinking.

For now though it’s about play. It’s a lot of fun, just moving a ball apparently by telekinesis. Imagine what mind games will be like when embedded in more complex gaming experiences!

– Paul Curzon, Queen Mary University of London (updated from the archive)


Pit-stop heart surgery

by Paul Curzon, Queen Mary University of London

(Updated from the archive)

Image by Peter Fischer from Pixabay

The Formula 1 car screams to a stop in the pit-lane. Seven seconds later, it has roared away again, back into the race. In those few seconds it has been refuelled and all four wheels changed. Formula 1 pit-stops are the ultimate in high-tech team work. Now the Ferrari pit stop team have helped improve the hospital care of children after open-heart surgery!

Open-heart surgery is obviously a complicated business. It involves a big team of people working with a lot of technology to do a complicated operation. Both during and after the operation the patient is kept alive by computer: lots of computers, in fact. A ventilator is breathing for them, other computers are pumping drugs through their veins and yet more are monitoring them so the doctors know how their body is coping. Designing how this is done is not just about designing the machines and what they do. It is also about designing what the people do – how the system as a whole works is critical.

Pass it on

One of the critical times in open-heart surgery is actually after it is all over. The patient has to be moved from the operating theatre to the intensive care unit where a ‘handover’ happens. All the machines they were connected to have to be removed, moved with them or swapped for those in the intensive care unit. Not only that, a lot of information has to be passed from the operating team to the care team. The team taking over need to know the important details of what happened and especially any problems, if they are to give the best care possible.

A research team from the University of Oxford and Great Ormond Street Hospital in London wondered if hospital teams could learn anything from the way other critical teams work. This is an important part of computational thinking – the way computer scientists solve problems. Rather than starting from scratch, find a similar problem that has already been solved and adapt its solution for the new situation.

Rather than starting from scratch,
find a similar problem
that has already been solved

Just as the pit-stop team are under intense time pressure, the operating theatre team are under pressure to be back in the operating theatre for the next operation as soon as possible. In a handover from surgery there is lots of scope for small mistakes to be made that slow things down or cause problems that need to be fixed. In situations like this, it’s not just the technology that matters but the way everyone works together around it. The system as a whole needs to be well designed and pit stop teams are clearly in the lead.

Smooth moves

To find out more, the research team watched the Ferrari F1 team practice pit-stops as well as talking to the race director about how they worked. They then talked to operating theatre and intensive care unit teams to see how the ideas might work in a hospital handover. They came up with lots of changes to the way the hospital did the handover.

For example, in a pit-stop there is one person coordinating everything – the person with the ‘lollipop’ sign that reminds the driver to keep their brakes on. In the hospital handover there was no person with that job. In the new version the anaesthetist was given the overall job for coordinating the team. Once the handover was completed that responsibility was formally passed to the intensive care unit doctor. In Formula 1 each person has only one or two clear tasks to do. In the hospital people’s roles were less obvious. So each person was given a clear responsibility: the nurses were made responsible for issues with draining fluids from the patient, anaesthetist for ventilation issues, and so on. In Formula 1 checklists are used to avoid people missing steps. Nothing like that was used in the handover so a checklist was created, to be used by the team taking on the patient.

These and other changes led to what the researchers hoped would be a much improved way of doing handovers. But was it better?

Calm efficiency saves the day

To find out they studied 50 handovers – roughly half before the change was made and half after. That way they had a direct way of seeing the difference. They used a checklist of common problems noting both mistakes made and steps that proved unusually difficult. They also noted how well the teams worked together: whether they were calm and supported each other, planned what they did, whether equipment was available when needed, and so on.

They found that the changes led to clearly better handovers. Fewer errors were made both with the technology and in passing on information. Better still, while the best performance still happened when the teams worked well, the changes meant that teamwork problems became less critical. Pit-stops and open-heart surgery may be a world apart, with one being about gaining every last millisecond of speed and the other about giving the best care possible. But if you want to improve how well technology and people work together, you need to think about more than just the gadgets. It is worth looking for solutions anywhere: children can be helped to recover from heart surgery even by the high-octane glitz of Formula 1.


Nurses in the mist

by Paul Curzon, Queen Mary University of London

(From the archive)

A gorilla hugging a baby gorilla
Image by Angela from Pixabay

What do you do when your boss tells you “go and invent a new product”? Lock yourself away and stare out the window? Go for a walk, waiting for inspiration? Medical device system engineers Pat Baird and Katie Hansbro did some anthropology.

Dian Fossey is perhaps the most famous anthropologist. She spent over a decade living in the jungle with gorillas so that she could understand them in a way no one had done before. She started to see what it was really like to be a gorilla, showing that their fierce King Kong image was wrong and that they are actually gentle giants: social animals with individual personalities and strong family ties. Her book and film, ‘Gorillas in the Mist’, tells the story.

Pat and Katie work for Baxter Healthcare. They are responsible for developing medical devices like the infusion pumps hospitals use to pump drugs into people to keep them alive or reduce their pain. Hospitals don’t buy medical devices like we buy phones, of course. They aren’t bought just because they have lots of sexy new features. Hospitals buy new medical devices if they solve real problems. They want solutions that save lives, or save money, and if possible both! To invent something new that sells, you ideally need to solve problems your competitors aren’t even aware of. Challenged to come up with something new, Pat and Katie wondered whether, given the equivalent was so productive for Dian Fossey, immersing themselves in hospitals with nurses would give the advantage their company was after. Their idea was that understanding what it was really like to be a nurse would make a big difference to their ability to design medical devices: devices that helped with the real problems nurses had, rather than those that the sales people said were problems. After all, the sales people only talk to the managers, and the managers don’t work on the wards. They were right.

Taking notes

They took a team on a 3-month hospital tour, talking to people, watching them do their jobs and keeping notes of everything. They noted things like the layout of rooms and how big they were, recorded the temperature, how noisy it was, how many flashing lights and so on. They spent a lot of time in the critical care wards where infusion pumps were used the most but they also went to lots of other wards and found the pumps being used in other ways. They didn’t just talk to nurses either. Patients are moved around to have scans or change wards, so they followed them, talking to the porters doing the pushing. They observed the rooms where the devices were cleaned and stored. They looked for places where people were doing ad hoc things like sticking post it note reminders on machines. That might be an opportunity for them to help. They looked at the machines around the pumps. That told them about opportunities for making the devices fit into the bigger tasks the nurses were using them as part of.

The hot Texan summer was a problem

So did Katie and Pat come up with a new product as their boss wanted? Yes. They developed a whole new service that is bringing in the money, but they did much more too. They showed that anthropology brings lots of advantages for medical device companies. One part of Pat’s job, for example, is to troubleshoot when his customers are having problems. He found after the study that, because he understood so much more about how pumps were used, he could diagnose problems more easily. That saved time and money for everyone. For example, touch screen pumps were being damaged. It was because, when they were stored together on a shelf, their clips were scratching the ones behind. They had also seen patients sitting outside in the ambulance bays with their pumps for long periods, smoking. Not their problem, apart from the fact that it was Texas and the temperature outside was higher than the safe operating limit of the electronics. Hospitals don’t get that hot, so no one imagined there might be a problem. Now they knew.

Porters shouldn’t be missed

Pat and Katie also showed that to design a really good product you have to design for people you might not even think about, never mind talk to. By watching the porters they saw there was a problem when a patient was on lots of drugs, each with its own pump. The porter pushing the bed also had to pull along a gaggle of pumps. How do you do that? Drag them behind by the tubes? Maybe the manufacturers can design them in a way that makes it easy. No one had ever bothered talking to the porters before. After all, they are the low-paid people, doing the grunt jobs, expected to be invisible. Except they are important and their problems matter to patient safety. The advantages didn’t stop there, either. Because of all that measuring, the company had the raw data to create models of lots of different ward environments that all the team could use when designing. It meant they could explore in a virtual environment how well introducing new technology might fix problems (or even see what problems it would cause).

All in all anthropology was a big success. It turns out observing the detail matters. It gives a commercial advantage, and all that mundane knowledge of what really goes on allowed the designers to redesign their pumps to fix potential problems. That makes the machines more reliable, and saves money on repairs. It’s better for everyone.

Talking to porters, observing cupboards, watching ambulance bays: sometimes it’s the mundane things that make the difference. To be a great systems designer you have to deeply understand all the people and situations you are designing for, not just the power users and the normal situations. If you want to innovate, like Pat and Katie, take a leaf out of Dian Fossey’s book. Try anthropology.


Negligent nurses? Or dodgy digital? – device design can unintentionally mask errors

Magicians often fool their audience into ‘looking over there’ (literally or metaphorically), getting them to pay attention to the wrong thing so that they’re not focusing on what the magician is doing and can enjoy the trick without seeing how it was done. Computers, phones and medical devices let you interact with them using a human-friendly interface (such as a ‘graphical user interface’) which makes them easier to use, but which can also hide the underlying computing processes from view. Normally that’s exactly what you want, but if there’s a problem, and one that you’d really need to know about, how well does the device make that clear? Sometimes the design of the device itself can mask important information; sometimes the way in which devices are used can mask it too. Here is a case where nurses were blamed but it was later found that the medical devices involved, blood glucose meters, had (unintentionally) tripped everyone up. A useful workaround seemed to be working well, but caused problems later on.

At the end you can find more links between magic and computer science, and human-computer interaction.

Negligent nurses? Or dodgy digital?

by Harold Thimbleby, Swansea University and Paul Curzon, Queen Mary University of London

It’s easy to get excited about new technology and assume it must make things better. It’s rarely that easy. Medical technology is a case in point, as one group of nurses found out. It was all about one simple device and wearable ID bracelets. Nurses were taken to court, blamed for what went wrong.

The nurses taken to court worked in a stroke unit and were charged with wilfully neglecting their patients. Around 70 others were also disciplined though not sent to court.

There were problems with many nurses’ record-keeping. A few were selected to be charged by the police on the rather arbitrary basis that they had more odd records than the others.

Critical Tests

The case came about because of a single complaint. As the hospital, and then police, investigated, they found more and more oddities, with lots of nurses suddenly implicated. They all seemed to have fabricated their records. Repeatedly, their paper records did not tally with the computer logs. Therefore, the nurses must have been making up the patient records.

The gadget at the centre of the story was a portable glucometer. Glucometers allow the blood-glucose (aka blood sugar) levels of patients to be tested. This matters. If blood-sugar problems are not caught quickly, seriously ill patients could die.

Whenever they did a test, the nurses recorded it in the patient’s paper record. The glucometer system also had a better, supposedly infallible, way to do this. The nurse scanned their ID badge using the glucometer, telling it who they were. They then scanned the patient’s barcode bracelet, and took the patient’s blood-sugar reading. They finally wrote down what the glucometer said in the paper records, and the glucometer automatically added the reading to that patient’s electronic record.

Over and over again, the nurses were claiming in the notes of patients that they had taken readings, when the computer logs showed no reading had been taken. As machines don’t lie, the nurses must all be liars. They had just pretended to take these vital tests. It was a clear case of lazy nurses colluding to have an easy life!

What really happened?

In court, witnesses gave evidence. A new story unfolded. The glucometers were not as simple as they seemed. No-one involved actually understood them, how the system really worked, or what had actually happened.

In reality the nurses were looking after their patients … despite the devices.

The real story starts with those barcode bracelets that the patients wore. Sometimes the reader couldn’t read the barcode. You’ve probably seen this happen in supermarkets. Every so often the reader can’t tell what is being scanned. The nurses needed to sort it out as they had lots of ill patients to look after. Luckily, there was a quick and easy solution. They could just scan their own ID twice. The system accepted this ‘double tapping’. The first scan was their correct staff ID. The second scan was of their staff card ID instead of the patient ID. That made the glucometer happy so they could use it, but of course they weren’t using a valid patient ID.

As they wrote the test result in the patient’s paper record no harm was done. When checked, over 200 nurses sometimes used double tapping to take readings. It was a well-known (at least by nurses), and commonly used, work-around for a problem with the barcode system.

The system was also much more complicated than that anyway. It involved a complex computing network, and a lot of complex software, not just a glucometer. Records often didn’t make it to the computer database for a variety of reasons. The network went down, manually entered details contained mistakes, the database sometimes crashed, and the way the glucometers had been programmed meant they had no way to check that the data they sent to the database actually got there. Results didn’t go straight to the patient record anyway: that only happened when the glucometer was docked (for recharging), but the glucometers were constantly in use so might not be docked for days. Indeed, a fifth of the entries in the database had an error flag indicating something had gone wrong. In reality, you just couldn’t rely on the electronic record. It was the nurses’ old-fashioned paper records that really were the ones you could trust.

The police had got it the wrong way round! They thought the computers were reliable and the nurses untrustworthy, but the nurses were doing a good job and the computers were somehow failing to record the patient information. Worse, they were failing to record that they were failing to record things correctly! … So nobody realised.

Disappearing readings

What happened to all the readings with invalid patient IDs? There was no place to file them so the system silently dropped them into a separate electronic bin of unknowns. They could then be manually assigned, but no way had been set up to do that.
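Just to illustrate the design lesson, here is a toy Python sketch (ours, nothing to do with the real glucometer software) of the difference between silently binning a reading that doesn’t match a known patient and refusing loudly so that someone notices straight away.

    # Toy illustration of silent failure versus visible failure.
    patients = {"P123": [], "P456": []}    # known patient IDs and their readings
    unknowns = []                          # the silent 'bin of unknowns'

    def file_reading_silently(patient_id, reading):
        # Risky design: an unrecognised ID just vanishes into the bin.
        patients.get(patient_id, unknowns).append(reading)

    def file_reading_loudly(patient_id, reading):
        # Safer design: the problem is reported so it can be fixed there and then.
        if patient_id not in patients:
            raise ValueError(f"Unknown patient ID {patient_id!r}: reading not filed")
        patients[patient_id].append(reading)

    file_reading_silently("STAFF-42", 5.6)  # a 'double tap': lost without anyone noticing
    print(unknowns)                         # [5.6]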

During the trial the defence luckily noticed an odd discrepancy in the computer logs. It was really spiky in an unexplained way. On some days hardly any readings seemed to be taken, for example. One odd trough corresponded to a day the manufacturer said they had visited the hospital. They were asked to explain what they had done…

The hospital had asked them to get the data ready to give to the police. The manufacturer’s engineer who visited therefore ‘tidied up’ the database, deleting all the incomplete records…including all the ones the nurses had supposedly fabricated! The police had no idea this had been done.

Suddenly, no evidence

When this was revealed in court, the judge ruled that all the prosecution’s evidence was unusable. The prosecution said, therefore, they had no evidence at all to present. In this situation, the trial ‘collapses’: the nurses were completely innocent, and the trial immediately stopped.

The trial had already blighted the careers of lots of good nurses though. In fact, some of the other nurses pleaded guilty as they had no memory of what had actually happened but had been confronted with the ‘fact’ that they must have been negligent as “the computers could not lie”. Some were jailed. In the UK, you can be given a much shorter jail sentence, or maybe none at all, if you plead guilty. It can make sense to plead guilty even if you know you aren’t — you only need to think the court will find you guilty. Which isn’t the same thing.

Silver bullets?

Governments see digitalisation as a silver bullet to save money and improve care. It can do that if you get it right. But digital is much harder to get right than most people realise. In the story here, not getting the digital right — and not understanding it — caused serious problems for lots of nurses.

It takes skill and deep understanding to design digital things to work in a way that really makes things better. It’s hard for hospitals to understand the complexities in what they are buying. Ultimately, it’s nurses and doctors who make it work. They have to.

They shouldn’t be automatically blamed when things go wrong because digital technology is hard to design well.


This article was originally published on the CS4FN website and a copy can be found in Issue 25 of the CS4FN magazine, below.




Magic Book

There are a number of surprising parallels between magic and computer science and so we have a number of free magic booklets (The Magic of Computer Science 1, 2 and 3 among others) to tell you all about it. The booklets show you some magic and talk about the links with computing and computational thinking. From the way a magician presents a trick (and the way in which people interact with devices) to self-working tricks which behave just like an algorithm. For the keenest apprentices of magic we also have a new book ⬇️, Conjuring with Computation, which you can buy from bookshops or as an e-book. Here are a couple of free bonus chapters.

EPSRC supports this blog through research grant EP/W033615/1.