Simulators are a common way to train when gaining skills that are dangerous or difficult to practise for real. Pilots, for example, do lots of training on flight simulators. Doctors also use simulators to train for surgery, and the simulators are increasingly accurate. They can even make it feel like you’re working on the real thing by giving you feedback through your sense of touch: haptics. Practise sinus or eye surgery on a simulator, for example, and the session will feel real. Haptics can help not just doctors but trainee vets too – and it can help not just at the head end but, errr, at the other end too.
Trainee vets have to learn how to feel for animals’ organs. In small animals like dogs and cats you can do that just by feeling the outside of their tummies, but in larger animals like cows or horses you have to actually put your hands inside them. That’s right, up there. The trouble is, this is very difficult to learn to do properly. A teacher can’t demonstrate it, because the student can’t see what they’re doing. Likewise, when the student tries it, the teacher can’t see whether they’re doing it right. Usually they just rely on describing what they’re doing (and how the animal reacts, of course).
Fortunately for teacher, student and especially animal, Sarah Baillie and her colleagues at the Royal Veterinary College invented a simulator called the Haptic Cow. It’s a haptic model of a cow’s rear end, complete with ‘Ouchometer’ – a graph that shows whether the student’s movements are too gentle to be effective, just right, or too rough to be safe. By using the Haptic Cow, students get an accurate idea of what they’ll be doing in their real jobs, the teachers get better feedback on how well the student is doing, and real cows don’t have to worry about being practised on. For doctors, vets and their patients, haptics are helping to make sure that practice doesn’t have to mean petrifying.
When we watch a film, it’s not just the pictures that make the experience, it’s the soundtrack too. The music and sound effects play a big part in setting the mood of a film. They matter. If you get a sinking feeling in your stomach or shivers down your spine, it’s probably the music’s doing. QMUL’s Antonella Mazzoni wondered if other senses could contribute too … and designed a Mood Glove to find out.
Vibrations
We use touch as well as sight and sound to sense the world. This kind of ‘haptic feedback’ is used, for example, in phones that vibrate to tell us someone is calling. Antonella wondered if haptic feedback could heighten our moods while watching films in the way sound does. To test her ideas she created a series of gloves. They had simple electronics built into them that caused small pads to vibrate against the hand. She could control the order in which they vibrated, and also the strength and frequency of the vibration. Early experiments showed it was best to make the pads vibrate on the back of the hand: when placed on the palm they tended to tickle too much. She also found that the positions of the vibrations did not make a big difference to moods, so she placed them in a simple circle.
Moods
Our moods and emotions can be broken into two parts: our levels of ‘arousal’ and of ‘valence’. Arousal is to do with the intensity of the mood. Being angry, delighted, alarmed and excited are all high arousal moods, whereas being bored, tired, sleepy and calm are low arousal ones. Valence is instead about the level of pleasure involved. High valence moods are pleasant and include being delighted, happy or calm, whereas low valence moods are unpleasant, such as being afraid, annoyed, depressed or bored. Together they give a standard way to rate mood.
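To get a feel for the model, here is a tiny Python sketch that places some of the moods above on the two scales. The numbers are invented for illustration – only the high/low pattern matters.

```python
# A sketch of the two-dimensional mood model, with moods from the
# article placed on the two scales. The coordinates are invented for
# illustration (each scale runs from -1.0, low, to +1.0, high).
MOODS = {
    "delighted": (0.8, 0.7),    # high valence, high arousal
    "calm":      (0.6, -0.6),   # high valence, low arousal
    "afraid":    (-0.7, 0.8),   # low valence, high arousal
    "bored":     (-0.5, -0.7),  # low valence, low arousal
}

def describe(mood: str) -> str:
    valence, arousal = MOODS[mood]
    return (f"{mood}: {'high' if valence > 0 else 'low'} valence, "
            f"{'high' if arousal > 0 else 'low'} arousal")

for mood in MOODS:
    print(describe(mood))
```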
Antonella next collected lots of film clips for use in her experiments. A series of volunteers watched the clips while wearing the glove and rated the experience in terms of their arousal and valence. Using these ratings as a baseline, she then ran experiments to explore if, and how, different kinds of vibration in the glove changed the wearer’s mood while watching the clips.
Suspense
In one experiment, she investigated suspense. Suspense is where the audience knows something about the plot that the characters don’t, leading to a gradual build-up of tension or expectation. Suspense can be linked to both positive and negative feelings, so it is not specifically about valence; it involves gradually increasing arousal. It is something the score of a film can make a big difference to, transforming a clip with little suspense into one full of it. Antonella wondered if our sense of touch, through her Mood Glove, could deliver a similar enhancement. Perhaps, for example, a gradually building pattern of vibration on our hand could increase the build-up of arousal and so of suspense. To find out, she chose 60 film clips that involved suspense. Volunteers rated them in terms of valence and arousal, and she used the 16 with most agreement. These final choices included clips from films like Inception, North by Northwest and Gravity.
Effects
Next she designed some simple effects to test. In her ‘buildup’ effect there was a gradual increase in both the strength and the frequency of the vibration. The ‘fade in’ effect just increased the strength of the vibrations, starting from nothing and building to a peak. She also created an illusion that the effect moved across the hand, using the different vibration pads. A new set of volunteers watched the chosen film clips while wearing the glove, which gave different vibration patterns in time with each film. They rated their mood while watching the clips and Antonella also interviewed them about the experience afterwards. She found that the volunteers did experience heightened levels of suspense from certain kinds of vibration patterns for some clips. What worked differed from clip to clip, suggesting a need to design the effect to fit the film.
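To make this concrete, here is a little Python sketch of what the ‘fade in’ and ‘buildup’ envelopes might look like as code. The durations, strengths and vibration frequencies are our own guesses, not the values used in the real Mood Glove.

```python
# Two of the effects as simple 'envelope' functions. All the numbers
# here are assumptions for illustration only.
def fade_in(t: float, duration: float) -> float:
    """Strength ramps from 0 to full; frequency stays fixed."""
    return min(t / duration, 1.0)

def buildup(t: float, duration: float,
            base_hz: float = 60.0, top_hz: float = 250.0):
    """Both strength and vibration frequency rise together."""
    progress = min(t / duration, 1.0)
    return progress, base_hz + progress * (top_hz - base_hz)

# Sample the buildup effect once a second over a ten-second scene.
for second in range(11):
    strength, frequency = buildup(second, 10.0)
    print(f"t={second:2d}s strength={strength:.1f} frequency={frequency:.0f}Hz")
```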
Jobs
New technology creates new jobs that didn’t previously exist. You can see this in the ever-increasing length of film credits, as new kinds of special effects lead to new jobs. Perhaps in future there will be a new career to follow as a ‘haptic composer’ for films, just as there are currently jobs composing soundtracks.
Perhaps it could be the job for you!
Paul Curzon, Queen Mary University of London (from the archive)
Pick up any computer or smart gadget and you’ll find small, colourful pictures on the screen. These ‘icons’ tell you which app is which. You can just touch them or click on them to open the app. It’s quick and easy, but it wasn’t always like that.
Up until the 1980s if you wanted to run a program you had to type a written command to tell the device what to do. This made things slow and hard. You had to remember all the different commands to type. It meant that only people who felt quite confident with computers were able to play with them.
Computer scientists wanted everyone to be able to join in (they wanted to sell more computers too!) so they developed a visual, picture-based way of letting people tell their computers what to do, instead of typing in commands. It’s called a ‘Graphical User Interface’ or GUI.
An artist, Susan Kare, was asked to design some very simple pictures – icons – that would make using computers easier. If people wanted to delete a file they would click on an icon with her drawing of a little dustbin. If people wanted to edit a letter they were writing they could click on the icon showing a pair of scissors to cut out a bit of text. She originally designed them on squared paper, with each square representing a pixel on the screen. Over the years the pictures have become more sophisticated (and sometimes more confusing) but in the early days they were both simple and clear thanks to Susan’s skill.
by Jo Brodie, Queen Mary University of London (from the archive)
Try our pixel puzzles which use the same idea. Then invent your own icons or pixel puzzles. Can you come up with your own easily recognisable pictures using as few lines as possible?
The world is heading for catastrophe. We’re hooked on power-hungry devices: our mobile phones and iPods, our PlayStations and laptops. Wherever you turn people are using gadgets, and those gadgets are guzzling energy – energy that we desperately need to save. We are all doomed, doomed… unless of course a hero rides in on a white charger to save us from ourselves.
Don’t worry, the cognitive crash dummies are coming!
Actually the saviours may be people like Bonnie John, a professor of human-computer interaction, and her then grad student, Annie Lu Luo: people who design cognitive crash dummies. When they worked at Carnegie Mellon University it was their job to figure out ways of deciding how well gadgets are designed.
If you’re designing a bridge you don’t want to have to build it before finding out if it stays up in an earthquake. If you’re designing a car, you don’t want to find out it isn’t safe by having people die in crashes. Engineers use models – sometimes physical ones, sometimes mathematical ones – that show in advance what will happen. How big an earthquake can the bridge cope with? The mathematical model tells you. How slow must the car go to avoid killing the baby in the back? A crash test dummy will show you.
Even when safety isn’t the issue, engineers want models that can predict how well their designs perform. So what about designers of computer gadgets? Do they have any models to do predictions with? As it happens, they do. Their models are called ‘human behavioural models’, but think of them as ‘cognitive crash dummies’. They are mathematical models of the way people behave, and the idea is you can use them to predict how easy computer interfaces are to use.
There are lots of different kinds of human behavioural model. One such ‘cognitive crash dummy’ is called GOMS. When designers want to predict which of a few suggested interfaces will be the quickest to use, they can use GOMS to do it.
Suppose you are designing a new phone interface. There are loads of little decisions you’ll have to make that affect how easy the phone is to use. You can fit a certain number of buttons on the phone or touch screen, but what should you make the buttons do? How big should they be? Should you use gestures? You can use menus, but how many levels of menus should a user have to navigate before they actually get to the thing they are trying to do? More to the point, with the different variations you have thought up, how quickly will the person be able to do things like send a text message or reply to a missed call? These are questions GOMS answers.
To do a GOMS prediction you first think up a task you want to know about – sending a text message perhaps. You then write a list of all the steps that are needed to do it. Not just the button presses, but hand movements from one button to another, thinking time, time for the machine to react, and so on. In GOMS, your imaginary user already knows how to do the task, so you don’t have to worry about time spent fiddling around or making mistakes. That means that once you’ve listed all your separate actions, GOMS can work out how long the task will take just by adding up the times for all the separate actions. Those basic times have been worked out from lots and lots of experiments on a wide range of devices. They show, on average, how long it takes to press a button and how long users are likely to think about it first.
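Here is a toy Python sketch of that adding-up step. The action list and the per-action times are invented placeholders rather than the measured values from the GOMS literature, but the principle – a time for each action, summed over the task – is exactly the same.

```python
# A toy GOMS-style prediction for part of a 'send a text' task.
# The per-action times below are illustrative placeholders, not the
# published values from the GOMS literature.
OPERATOR_TIMES = {            # seconds per action (assumed)
    "press_key":   0.28,
    "move_hand":   0.40,
    "think":       1.20,
    "system_wait": 0.10,
}

send_text = [
    "think",        # decide what to do
    "move_hand",    # reach for the phone's message button
    "press_key",    # open messages
    "system_wait",  # the app opens
    "think",        # choose the recipient
    "press_key",    # select the contact
]

total = sum(OPERATOR_TIMES[action] for action in send_text)
print(f"Predicted task time: {total:.2f} s")
```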
GOMS in 60 seconds?
GOMS has been around since the 1980s, but wasn’t being used much by industrial designers. The problem is that it is very frustrating and time-consuming to work out all those steps for all the different tasks of a new gadget. Bonnie John’s team developed a tool called CogTool to help. You make a mock-up of your phone design in it, and tell it which buttons to press to do each task. CogTool then works out where the other actions, like hand movements and thinking time, are needed, and makes the predictions.
Bonnie John came up with an easier way to figure out how much human time and effort a new design uses, but what about the device itself? How about predicting which interface design uses less energy? That is where Annie Lu Luo came in. She had the great idea that you could take a GOMS list of actions and, instead of linking actions to times, work out how much energy the device uses for each action instead. By using GOMS together with a tool like CogTool, a designer can find out whether their design is the most energy efficient too.
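Sketched in code, her twist is simply to swap the table of times for a table of energy costs. All the numbers below are made up for illustration:

```python
# Sketch of the energy version: the same kind of GOMS action list,
# but each action mapped to battery cost instead of time. All the
# numbers are invented for illustration.
OPERATOR_ENERGY = {           # millijoules per action (assumed)
    "press_key":   5.0,       # key handling and screen update
    "think":       2.0,       # screen stays lit while the user thinks
    "system_wait": 8.0,       # radio and processor activity
}

reply_to_missed_call = ["think", "press_key", "system_wait", "press_key"]

energy = sum(OPERATOR_ENERGY[action] for action in reply_to_missed_call)
print(f"Predicted energy use: {energy:.1f} mJ")
```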
So it turns out you don’t need a white knight to help your battery usage, just Annie Lu Luo and her version of GOMS. Mobile phone makers saw the benefit of course. That’s why Annie walked straight into a great job on finishing university.
In 2009 Desi Cryer, who is Black, shared a light-hearted video with a serious message. He’d bought a new computer with a face-tracking camera… which didn’t track his face at all. It did track his White colleague Wanda’s face though. In the video he asked her to go in front of the camera and move from side to side, and the camera obediently tracked her face – wherever she moved, the camera followed. When Desi moved back in front of the camera, the tracking stopped again. He wondered if the computer might be racist…
Another video, this time from 2017, showed a dark-skinned man failing to get a soap dispenser to give him some soap. Nothing happened when he put his hand underneath the sensor, but as soon as his lighter-skinned friend put his hand under it – out popped some soap! The only way the first man could get any soap dispensed was to put a white tissue on his hand first. He wondered if the soap dispenser might be racist…
What’s going on?
Probably no-one set out to maliciously design a racist device, but designers need to check that their products work with a range of different people before putting them on the market. This can save the company embarrassment as well as creating something that more people want to buy.
Sensors working overtime
Both devices use a sensor that is activated (or in these cases isn’t) by a signal. A soap dispenser shines a beam of light downwards; a hand placed below it reflects some of that light back to a sensor next to the light source, and enough reflected light triggers the dispenser. Paler skin reflects more light than darker skin, so if the device was only tested on White people, the sensor won’t have been adjusted for the full range of skin tones and won’t respond appropriately. Similarly, cameras have historically been designed for White skin tones, meaning darker tones are not picked up as well.
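A tiny Python sketch shows how the bias can creep in: if the trigger threshold is set using only the reflected-light levels measured from pale-skinned testers, darker skin can fall below it. All the numbers are invented for illustration.

```python
# Why testing on only pale skin bakes in bias: the threshold below is
# 'calibrated' from pale-skinned testers, so darker skin, which
# reflects less light, never triggers it. All numbers are invented.
TRIGGER_THRESHOLD = 0.6   # fraction of light reflected back (assumed)

def dispense_soap(reflectance: float) -> bool:
    """Return True if enough light bounces back to trigger the sensor."""
    return reflectance >= TRIGGER_THRESHOLD

print(dispense_soap(0.8))   # paler skin: True, soap comes out
print(dispense_soap(0.4))   # darker skin: False, nothing happens
print(dispense_soap(0.9))   # the white tissue trick: True again
```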
Things can be improved!
It’s a good idea, when designing something that will be used by lots of different people, to make sure that it will work correctly with everyone. Having a diverse design team and, importantly, making sure that everyone feels empowered to contribute is a good way to start. Another is to test the design with different target audiences early in the design process so that changes can be made before it’s too late. How a company responds to feedback when they’ve made an oversight is also important. In the case of the computer company they acknowledged the problem and went to work to improve the camera’s sensitivity.
During the coronavirus pandemic many people bought a ‘pulse oximeter’, a device which clips painlessly onto a finger and measures how much oxygen is circulating in your blood (and your pulse). If the oxygen reading became too low, people were advised to go to hospital. Oximeters shine red and infrared light from the top clip through the finger and the light is absorbed differently depending on how much oxygen is present in the blood. A sensor on the lower clip measures how much light has got through, but the reading can be affected by skin colour (and coloured nail polish). People were concerned that pulse oximeters would overestimate the oxygen reading for someone with darker skin (that is, tell them they had more oxygen than they actually had) and that the devices might not detect a drop in oxygen quickly enough to warn them.
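For the curious, here is a rough Python sketch of the calculation inside an oximeter. Real devices use carefully calibrated curves; the linear formula below is a commonly quoted textbook approximation, not a medical algorithm.

```python
# A rough sketch of pulse-oximeter maths. The 'ratio of ratios' R
# compares the pulsing (AC) and steady (DC) parts of the red and
# infrared signals; the linear formula is a textbook approximation,
# NOT a medical algorithm. Extra absorption from skin pigment or nail
# polish changes the measured signals and so skews the estimate.
def spo2_estimate(ac_red: float, dc_red: float,
                  ac_ir: float, dc_ir: float) -> float:
    """Estimate blood oxygen saturation (%) from the light signals."""
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    return 110.0 - 25.0 * r

print(f"{spo2_estimate(0.02, 1.0, 0.03, 1.0):.0f}%")  # about 93%
```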
In response the UK Government announced in August 2022 that it would investigate this bias in a range of medical devices to ensure that future devices work effectively for everyone.
Imagine being able to pick up an ordinary banana and use it as a phone. That’s part of the vision of ‘invoked computing’, which is being developed by Japanese researchers. A lot of the computers in our lives are camouflaged – smartphones are more like computers than phones, after all – but invoked computing would mean that computers would be everywhere and nowhere at the same time.
The idea is that in the future, computer systems could monitor an entire environment, watching your movements. Whenever you wanted to interact with a computer, you would just need to make a gesture. For example, if you picked up a banana and held one end to your ear and the other to your mouth, the computer would guess that you wanted to use the phone. It would then use a fancy speaker system to direct the sound, so you would even hear the phone call as though it were coming from the banana.
Sometimes you might find yourself needing a bit more computing power, though, right? Not to worry. You can make yourself a laptop if you just find an old pizza box. Lift the lid and the system will project the video and sound straight on to the box.
At the moment the banana phone and pizza box laptop are the only ways that you can use invoked computing in the researchers’ system, but they hope to expand it so that you can use other objects. Then, rather than having to learn how to use your computers, your computers will have to learn how you would like to use them. And when you are finished using your phone, you could eat it.
A red sock in with your white clothes wash – guess what happened next? What can you do to prevent it from happening again? Why should a computer scientist care? It turns out that red socks have something to teach us about medical gadgets.
How can we stop red socks from ever turning our clothes pink again? We need a strategy. Here are some possibilities.
Don’t wear red socks.
Take a ‘how to wash your clothes’ course.
Never make mistakes.
Get used to pink clothes.
Let’s look at them in turn – will they work?
Don’t wear red socks: That might help but it’s not much use if you like red socks or if you need them to match your outfit. And how would it help when you wear purple, blue or green socks? Perhaps your clothes will just turn green instead.
Take a ‘how to wash your clothes’ course: Training might help – you’d certainly learn that a red sock and white clothes shouldn’t be mixed, though you probably knew that anyway. It won’t stop you making a similar mistake again.
Never make misteaks: Just never leave a red sock in your white wash. If only! Unfortunately everyone makes mistakes – that’s why we have erasers on pencils and a delete key on computers – this idea just won’t work.
Get used to pink clothes: Maybe, but it’s not ideal. It might not be so great turning up to school in a pink shirt if everyone else is wearing a white one.
What if the problem’s more serious?
We can probably live with pink clothes, but what happens if a similar mistake is made at a hospital? Not socks, but medicines. We know everyone makes mistakes so how do we stop those mistakes from harming patients? Special machines are used in hospitals to pump medicine directly into a patient’s arm, for example, and a nurse needs to tell it how much medicine to give – if the dose is wrong the patient won’t get better, and might even get worse.
What have we learned from our red sock strategies? We can’t stop giving patients medicine and we don’t want to get used to mistakes, so our first and fourth strategies won’t work. We can give nurses more training, but everyone makes mistakes even when trained, so the second suggestion isn’t good enough either – and never making mistakes, the third, just isn’t possible. Training also doesn’t stop someone else making the same mistake.
We need to stop thinking of mistakes as a problem that people make and instead as a problem that systems thinking can solve. That way we can find solutions that work for everyone. One possibility is to check whether changes to the device might make mistakes less likely in the first place.
Errors? Or arrows?
Most medical machines are controlled with a panel of numbered keys (a number keypad) like on mobile phones, or up and down arrows (an arrow keypad) like you sometimes get on alarm clocks. CHI+MED researchers have been asking questions like: which is best for entering numbers quickly, and which is best for entering numbers accurately? They’ve been running experiments where people use different keypads, are timed and have their mistakes recorded. The researchers also track where people are looking while they use the keypads. Another approach has been to create mathematical descriptions of the different keypads and then mathematically explore how bad different errors might be.
It turns out that if you can see the numbers on a keypad in front of you it’s very easy to type them in quickly, though not always correctly! You need to check the display to see if you have actually put in the right ones. Worse, the mistakes that are made are often massive – ten times too much or more. The arrow keypads are a little slower to use, but because people are already looking at the display (to see what numbers are appearing) they help nurses be more accurate: not only are fewer mistakes made, but those that are made tend to be smaller.
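A back-of-an-envelope Python sketch makes the difference in error size clear. The doses and the arrow step size are invented for illustration.

```python
# Why number-pad slips are so dangerous: one wrong keypress can scale
# a dose tenfold, while an arrow-keypad slip only nudges the value by
# one step. The doses and step size are invented for illustration.
intended_dose = 5.0   # ml

# Number keypad: an extra '0' turns 5 into 50.
number_pad_slip = 50.0
print(f"Number pad slip: {number_pad_slip / intended_dose:.0f}x the dose")

# Arrow keypad: one extra press of 'up', at 0.1 ml per step.
arrow_slip = intended_dose + 0.1
print(f"Arrow keypad slip: {arrow_slip / intended_dose:.2f}x the dose")
```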
Smart machines help users
A medical device that actively helps users avoid mistakes helps everyone using it (and the patients it’s being used on!). Changing the interface to reduce errors isn’t the only solution though. Modern machines have ‘intelligent drug libraries’ that contain information about the medicines and what sort of doses are likely and safe. Someone might still mistakenly tell the machine to give too high a dose but now it can catch the error and ask the nurse to double-check. That’s like having a washing machine that can spot a brightly coloured sock in a white wash and that refuses to switch on till it has been removed.
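In code, the heart of such a check is very simple. Here is a minimal Python sketch – the drug names and dose limits below are invented, whereas real drug libraries are compiled and checked by pharmacists:

```python
# A minimal 'intelligent drug library' check: the pump knows a safe
# range for each drug and asks for a double-check outside it. The
# drug names and limits are invented for illustration.
SAFE_DOSE_RANGES = {   # ml per hour: (minimum, maximum) - assumed
    "examplamycin": (0.5, 10.0),
    "fictodine":    (5.0, 100.0),
}

def check_dose(drug: str, dose: float) -> str:
    low, high = SAFE_DOSE_RANGES[drug]
    if low <= dose <= high:
        return f"OK: starting {drug} at {dose} ml/h"
    return (f"WARNING: {dose} ml/h of {drug} is outside the usual "
            f"{low}-{high} ml/h. Please double-check.")

print(check_dose("examplamycin", 2.0))    # a typical dose: fine
print(check_dose("examplamycin", 20.0))   # a tenfold slip: caught
```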
Building machines with a better ability to catch errors (remember, we all make mistakes) and helping users to recover from them easily is much more reliable than trying to get rid of all possible errors by training people. It’s not about avoiding red socks, or errors, but about putting better systems in place to make sure that we find them before we press that big ‘Start’ button.
Why might a computer scientist need to write fiction? To make sure she creates an app that people actually need.
Writing fiction doesn’t sound like the sort of skill a computer scientist might need. However, it’s part of my job at the moment. Working with expert rheumatologists Amy MacBrayne and Fran Humby, I am helping a design team understand what life with rheumatoid arthritis is like, so they can design software that is actually needed and so will be used and useful.
A big problem with developing software is that programmers tend to design things for themselves. However, programmers are not like the users of their software. They have different backgrounds and needs and they have been trained to think differently. Worse, they know the system they are developing inside out, unlike its users. An important first step in a project is to do background research to understand your users. If designing an app for people with rheumatoid arthritis, you need to know a lot about the lives of such people. To design a successful product, you particularly need to understand their unfulfilled goals. What do they want to be able to do that is currently hard or impossible?
What do you do with the research? Alan Cooper’s idea of ‘personas’ is a powerful next step – and this is where writing fiction comes in. Based on the research, you write descriptions of lots of fictional characters (personas), each representing a group of people with similar goals. They have names, photos and realistic lives. You also write scenarios about their lives that help you understand their goals. Next, you merge and narrow these personas down, dropping some, creating new ones, altering others. Your aim is to end up with just one, called the primary persona. The idea is that if you design for the primary persona, you will create something that meets the goals of the groups represented by the other personas it replaced.
The primary persona (let’s call her Samira) is then used throughout the design process as the person being designed for. If wondering whether some new feature or way of doing things is a good idea, the designers would ask themselves, “Would Samira actually want this? Would she be able to use it?” If they can think of her as a real person, it is much easier to make decisions than if thinking of some non-existent abstract “user” who becomes whatever each team member wants them to be. It helps stop ‘feature bloat’ where designers add in every great idea for a new feature they have but end up with a product so complex no one can, or wants to, use it.
As part of the Queen Mary PAMBAYESIAN project we have been talking to rheumatoid arthritis patients and their doctors to understand their needs and goals. I’ve then created a cast of detailed personas to represent the results. These can act as an initial set of personas to help future designers designing apps to support those with the disease.
If you thought creative writing wasn’t important to a computer scientist, think again. A good persona needs to be as powerfully written and as believable as a character in a good novel. So, you should practise writing fiction as well as writing programs.
Some diseases can’t be cured. Doctors and nurses just try to control the disease to stop it ruining people’s lives. Perhaps smartphone apps can pull off the trick of giving patients better care while giving clinicians more time to spend with the patients who most need them? A Venn diagram is at the centre of the Queen Mary team’s prototype.
What is rheumatoid arthritis?
Normally your immune system does a good job of fighting infection and keeping you healthy. But, if you have an autoimmune disease, it can also attack your healthy cells, causing inflammation and damage. Rheumatoid arthritis is like this: a painful condition that mostly affects hands, knees and feet as the person’s immune system attacks their joints, making them swell painfully. It affects around 400,000 people in the UK and is more common in women than men.
People with the disease alternate between periods when it is under control and they have few symptoms, and days or weeks of painful ‘flares’ when it is very, very bad. During these flares it especially affects a person’s ability to live a normal life. It can be hard to move around comfortably or do exercise, and it interferes with their ability to work. It can also leave them totally reliant on family and friends just to do everyday things like dress or eat, never mind go out. This can lead to depression and puts a strain on friendships.
Treating the disease
Treatment, which can include tablets, injections, physiotherapy and sometimes surgery, slows the disease, keeping it under control for long periods. Sufferers are also given advice on lifestyle changes. This all reduces the risk of joint damage and helps people live their life more fully.
At appointments, doctors collect information to help them see how the disease is progressing. A Disease Activity Score (DAS) calculator lets them combine measurements for pain, how tender or swollen their patient’s joints are and how many joints are affected. Regular blood tests keep track of the amount of inflammation and how the body is reacting to drugs. This helps them decide if they need to adjust the medication.
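One widely used version of the score is DAS28, which combines counts from 28 joints with a blood test result and the patient’s own 0–100 rating of how they feel. Here it is sketched in Python, with made-up example inputs:

```python
import math

# DAS28 (the version using the ESR blood test). The formula is the
# published one; the example inputs are made up.
def das28_esr(tender: int, swollen: int, esr: float,
              patient_global: float) -> float:
    """tender/swollen: joint counts out of 28; esr: inflammation marker
    (mm/h); patient_global: patient's overall rating, 0-100."""
    return (0.56 * math.sqrt(tender)
            + 0.28 * math.sqrt(swollen)
            + 0.70 * math.log(esr)
            + 0.014 * patient_global)

score = das28_esr(tender=6, swollen=4, esr=28.0, patient_global=50.0)
print(f"DAS28: {score:.1f}")  # roughly: below 2.6 suggests remission,
                              # above 5.1 high disease activity
```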
If it is caught early, modern medicine reduces the worst effects of the disease, helped by keeping a close eye on the Disease Activity Score, as treatments may need to be repeatedly adjusted to control flares. This requires regular hospital visits, which use up scarce healthcare resources and are very time-consuming for patients. It is hampered because hospital appointments may only happen twice a year due to the number of patients. Everyone wants to give more personalised care, but hospitals just can’t afford to provide it.
Supporting doctors
So, what do you do when there just aren’t enough doctors to see everyone as regularly as needed to maintain their patients’ wellbeing? One solution is to use remote monitoring with an app on a patient’s smartphone, so involving patients more directly in their own care. They can use such apps to regularly record their own disease activity measurements, sharing the information with their doctor to save visiting the hospital.
A smart app
This is an improvement, but the measurements still require expert monitoring and can take more of the doctor’s time. However, if smartphones can actually be made to be, well, smart, then they could help give advice between hospital visits and alert the hospital team, when needed, so they can step in. This might involve, for example, loading the app with background knowledge about rheumatoid arthritis, expert knowledge from lots of doctors, and creating an artificial intelligence to use this information effectively for each patient.
Hospital specialists and computer scientists at Queen Mary are developing such a prototype based on Bayesian networks as the artificial intelligence core. Bayesian networks are based on reasoning about the causes of things and how likely different things are to be the cause of something being observed. Building the prototype involves finding out if patients and clinicians find such tools useful and acceptable (some people might find clinic visits reassuring, while some may be keener to avoid taking the time off work, for example).
Smart and patient centred
This still focusses on a clinician’s view of treatment using drugs though. With a smartphone app we can perhaps do better and take the person’s life into account – but how? The first step is to understand patient goals. Patients would need to be willing to share lots of information about themselves so that the software can learn as much as possible about them. Eventually, this might be done using sensors that automatically detect information: how much pain they are in, how stiff their joints are, how much they move around, how long it takes them to get out of a chair, how much sleep they get, how often they meet others, if and when they take their medicine, and so on. Rather than just focussing on medical treatment it can then focus advice ‘holistically’ on the whole person.
The Queen Mary team’s approach is centred around three different things: helping people with physical independence so they can move around and look after themselves; empowering them to manage their condition and general well-being themselves; and participation in the sense of helping them socialise, keep friendships and maintain family bonds.
The Bayesian network processes the information about patients and computes their predicted levels of independence, empowerment and participation, working out how good or bad things are for them at the moment. This places them in one of the seven regions of a Venn diagram of the three dimensions, showing which areas need most attention. The app then gives appropriate advice, aiming to keep all three dimensions in balance, monitoring what happens, but also alerting the hospital when necessary.
So, for example, if the Bayesian network judges independence low, participation high and empowerment low, the patient is in the Venn diagram intersection of low empowerment and low independence. Advice in the following weeks, based on this area of the Venn diagram, would focus on things like coping with pain and stiffness, getting better sleep, as well as how to manage the disease in general.
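Here is a sketch of that Venn diagram step in Python. The advice text is a placeholder, not the project’s real content:

```python
# The Venn diagram step as code: three low/OK judgements pick out one
# of the seven regions needing attention (or none). The advice text
# is a placeholder, not the project's real content.
def venn_region(independence_low: bool, empowerment_low: bool,
                participation_low: bool) -> frozenset:
    needs = set()
    if independence_low:
        needs.add("independence")
    if empowerment_low:
        needs.add("empowerment")
    if participation_low:
        needs.add("participation")
    return frozenset(needs)

# The example from the text: independence low, empowerment low,
# participation high.
region = venn_region(True, True, False)
if region == {"independence", "empowerment"}:
    print("Focus advice on pain, stiffness, sleep and self-management.")
```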
By personalising advice and focusing on the whole person, it is hoped patients will get more appropriate care as soon as they need it, but doctors’ time will also be freed up to focus on the patients who most need their help.
Jo Brodie, Hamit Soyel and Paul Curzon, Queen Mary University of London, Spring 2021
Fatigue is a problem that people with a variety of long-term diseases can also suffer from.
This isn’t just normal tiredness, but something much, much worse: so bad that it is a struggle to do anything at all, destroying any chance of a normal life. Doctors can often do little to help beyond managing the underlying disease, then hoping the fatigue sorts itself out. Sometimes the fatigue stays with the person long, long after the disease itself has been dealt with. Maha Albarrak, for her PhD, is exploring how computer technology might help people cope. Her first step is to interview those suffering to find out what kind of help they really need. Then she will work closely with volunteers to come up with solutions to the problems that matter.
Paul Curzon, Queen Mary University of London, Spring 2021