Cognitive crash dummies

by Paul Curzon, Queen Mary University of London

The world is heading for catastrophe. We’re hooked on power-hungry devices: our mobile phones and iPods, our PlayStations and laptops. Wherever you turn people are using gadgets, and those gadgets are guzzling energy – energy that we desperately need to save. We are all doomed, doomed…unless of course a hero rides in on a white charger to save us from ourselves.

Don’t worry, the cognitive crash dummies are coming!

Actually the saviours may be people like Bonnie John, a professor of human-computer interaction, and her then grad student, Annie Lu Luo: people who design cognitive crash dummies. When working at Carnegie Mellon University, their job was to work out ways of deciding how well gadgets are designed.

If you’re designing a bridge you don’t want to have to build it before finding out if it stays up in an earthquake. If you’re designing a car, you don’t want to find out it isn’t safe by having people die in crashes. Engineers use models – sometimes physical ones, sometimes mathematical ones – that show in advance what will happen. How big an earthquake can the bridge cope with? The mathematical model tells you. How slow must the car go to avoid killing the baby in the back? A crash test dummy will show you.

Even when safety isn’t the issue, engineers want models that can predict how well their designs perform. So what about designers of computer gadgets? Do they have any models to do predictions with? As it happens, they do. Their models are called ‘human behavioural models’, but think of them as ‘cognitive crash dummies’. They are mathematical models of the way people behave, and the idea is you can use them to predict how easy computer interfaces are to use.

There are lots of different kinds of human behavioural model. One such ‘cognitive crash dummy’ is called ‘GOMS’. When designers want to predict which of a few suggested interfaces will be the quickest to use, they can use GOMS to do it.

Send in the GOMS

Suppose you are designing a new phone interface. There are loads of little decisions you’ll have to make that affect how easy the phone is to use. You can fit a certain number of buttons on the phone or touch screen, but what should you make the buttons do? How big should they be? Should you use gestures? You can use menus, but how many levels of menus should a user have to navigate before they actually get to the thing they are trying to do? More to the point, with the different variations you have thought up, how quickly will the person be able to do things like send a text message or reply to a missed call? These are questions GOMS answers.

To do a GOMS prediction you first think up a task you want to know about – sending a text message perhaps. You then write a list of all the steps that are needed to do it. Not just the button presses, but hand movements from one button to another, thinking time, time for the machine to react, and so on. In GOMS, your imaginary user already knows how to do the task, so you don’t have to worry about spending time fiddling around or making mistakes. That means that once you’ve listed all the separate actions, GOMS can work out how long the task will take just by adding up the times for each one. Those basic times have been worked out from lots and lots of experiments on a wide range of devices. They have shown, on average, how long it takes to press a button and how long users are likely to think about it first.
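
To give a feel for the arithmetic, here is a minimal sketch of a GOMS-style calculation in Python. The operator times and the action list for a short text-message task are illustrative guesses, not values from CogTool or any real study.

```python
# A minimal sketch of a GOMS-style (keystroke-level) time prediction.
# The operator times below are illustrative approximations only.
OPERATOR_TIMES = {
    "K": 0.28,  # press a key or button
    "P": 1.10,  # point at a target on the screen
    "H": 0.40,  # move hand between keypad and screen
    "M": 1.35,  # mentally prepare for the next step
}

def predict_seconds(steps):
    """Add up the time for every step in the task."""
    return sum(OPERATOR_TIMES[s] for s in steps)

# A made-up action list for "open messages and type a short reply".
send_text = ["H", "M", "P", "K", "M", "K", "K", "K", "P"]
print(f"Predicted task time: {predict_seconds(send_text):.2f} s")
```

Run on two different designs (two different action lists), the design with the smaller total is the quicker one to use.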

GOMS in 60 seconds?

GOMS has been around since the 1980s, but wasn’t being used much by industrial designers. The problem is that it is very frustrating and time-consuming to work out all those steps for all the different tasks for a new gadget. Bonnie John’s team developed a tool called CogTool to help. You make a mock-up of your phone design in it, and tell it which buttons to press to do each task. CogTool then works out where the other actions, like hand movements and thinking time, are needed and makes the predictions.

Bonnie John came up with an easier way to figure out how much human time and effort a new design uses, but what about the device itself? How about predicting which interface design uses less energy? That is where Annie Lu Luo came in. She had the great idea that you could take a GOMS list of actions and, instead of linking each action to a time, work out how much energy the device uses for it. By using GOMS together with a tool like CogTool, a designer can find out whether their design is the most energy efficient too.
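
The same kind of sum works for energy as for time. In the sketch below the per-action energy costs are entirely made up for illustration; real figures would come from measuring the actual device.

```python
# Same idea as the timing sketch above, but each action is linked to a
# made-up energy cost (in joules) instead of a time.
ENERGY_PER_ACTION = {
    "K": 0.05,   # button press (backlight, haptics)
    "P": 0.20,   # pointing while the screen is fully lit
    "H": 0.02,   # hand movement: device mostly idle
    "M": 0.30,   # thinking time: screen stays on, radio idles
}

def predict_joules(steps):
    """Add up the energy cost of every step in the task."""
    return sum(ENERGY_PER_ACTION[s] for s in steps)

send_text = ["H", "M", "P", "K", "M", "K", "K", "K", "P"]
print(f"Predicted energy use: {predict_joules(send_text):.2f} J")
```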

So it turns out you don’t need a white knight to help your battery usage, just Annie Lu Luo and her version of GOMS. Mobile phone makers saw the benefit of course. That’s why Annie walked straight into a great job on finishing university.


This article was originally published on the CS4FN website and appears on pages 12 and 13 of issue 9 (‘Programmed to save the world’) of the CS4FN magazine, which you can download (free) here along with all of our other free material.

See also the concept of ‘digital twins’ in this article from our Christmas Advent Calendar: Pairs: mittens, gloves, pair programming, magic tricks.



This blog is funded through EPSRC grant EP/W033615/1.

Future Friendly: Focus on Kerstin Dautenhahn

by Peter W McOwan, Queen Mary University of London

(from the archive)

Kerstin’s team, including the robot, waving. Copyright © Adaptive Systems Research Group

Kerstin Dautenhahn is a biologist with a mission: to help us make friends with robots. Kerstin was always fascinated by the natural world around her, so it was no surprise when she chose to study Biology at the University of Bielefeld in Germany. Afterwards she went on to take a Diploma in Biology, doing research on the leg reflexes of stick insects: a strange start, it may seem, for someone who would later become one of the world’s foremost robotics researchers. But it was through this fascinating bit of biology that Kerstin became interested in the ways that living things process information and control their body movements, an area scientists call biological cybernetics. This interest in trying to understand biology made her want to build things to test her understanding. These things would be based on ideas copied from biological animals but run by computers. These things would be robots.

Follow that robot

From humble beginnings building small robots that followed one another over a hilly landscape, she started to realise that biology was a great source of ideas for robotics, and in particular that the social intelligence animals use to live and work with each other could be modelled and used to create sociable robots.

She started to ask fascinating questions like “What’s the best way for a robot to interrupt you if you are reading a newspaper – by gesturing with its arms, blinking its lights or making a sound?” and perhaps most importantly “When would a robot become your friend?” First at the University of Hertfordshire, and now as a Professor at the University of Waterloo, she leads a world-famous research group trying to build friendly robots with social intelligence.

Good robot / Bad robot – East vs West

Kerstin, like many other robotics researchers, is worried that most people tend to look on robots as being potentially evil. If we look at the way robots are portrayed in the movies that’s often how it seems: it makes a good story to have a mechanical baddie. But in reality robots can provide a real service to humans, from helping disabled people and assisting around the home to even becoming friends and companions. The baddie robot idea tends to dominate in the west, but in Japan robots are very popular and robotics research is advancing at a phenomenal rate. There has been a long history in Japan of people finding mechanical things that mimic natural things interesting and attractive. It is partly this cultural difference that has made Japan a world leader in robot research. But Kerstin and others like her are trying to get those of us in the west to change our opinions by building friendly robots and looking at how we relate to them.

Polite Robots roam the room

When at the University of Hertfordshire, Kerstin decided that the best way to see how people would react to a robot around the house was to rent a flat near the university and fill it with robots. Rather than examining how people interacted with robots in a laboratory, moving the experiments to a real home, with bookcases, biscuits, sofas and coffee tables, made it all real. She and her team looked at how to give their robots social skills: what was the best way for a robot to approach a person, for example? At first they thought that the best approach would be straight from the front, but they found that humans felt this was too aggressive, so the robots were trained to come up gently from the side. The people in the house were also given special ‘comfort buttons’, devices that let them indicate how they were feeling in the company of robots. Again interesting things happened: it turned out that quite a lot of people, though not all, were on the whole happy for these robots to be close to them, closer in fact than they would normally let a human approach. Kerstin explains ‘This is because these people see the robot as a machine, not a person, and so are happy to be in close proximity. You are happy to move close to your microwave, and it’s the same for robots’. These are exciting first steps as we start to understand how to build robots with socially acceptable manners. But it turns out that robots need to have good looks as well as good manners if they are going to make it in human society.

Looks are everything for a robot?

This fall in acceptability
is called the ‘uncanny valley’

How we interact with robots also depends on how the robots look. Researchers had found previously that if you make a robot look too much like a human being, people expect it to be a human being, with all the social and other skills that humans have. If it doesn’t have these, we find interaction very hard. It’s like working with a zombie, and it can be very frightening. This fall in acceptability of robots that look like, but aren’t quite, human is what researchers call the ‘uncanny valley’, so people prefer to encounter a robot that looks like a robot and acts like a robot. Kerstin’s group found this effect too, so they designed their robots to look and act the way we would expect robots to look and act, and things got much more sociable. But they are still looking at how we act with more human-like robots, and built KASPAR, a robot toddler with a very realistic rubber face capable of showing expressions and smiling, and video camera eyes that allow the robot to react to your behaviour. He has arms so he can wave goodbye or greet you with a friendly gesture. Most recently he was extended with multi-modal technology that allows several children to play with him at the same time. He’s very lifelike, and the team’s hope was that as KASPAR’s programming grew and his abilities improved, he, or some descendant of him, would emerge from the uncanny valley to become someone’s friend – in particular, a friend to children with autism.

Autism – mind blindness and robots

The fact that most robots at present look and act like robots can give them a big advantage in supporting children with autism. Autism is a condition that prevents you from developing an understanding of how to interact socially with the world. A current theory to explain the condition is that those who are autistic cannot form a correct understanding of others’ intentions; it’s called mind blindness. For example, if I came into the room wearing a hideous hat and asked you ‘Do you like my lovely new hat?’ you would probably think, ‘I don’t like the hat, but he does, so I should say I like it so as not to hurt his feelings’: you have a mental model of my state of mind (that I like my hat). An autistic person is likely to respond ‘I don’t like your hat’ if this is what he feels. Autistic people cannot create this mental model, so they find it hard to make friends and generally interact with people, as they can’t predict what people are likely to say, do or expect.

Playing with Robot toys

It’s different with robots: many autistic children have an affinity with them. Robots don’t do unexpected things. Their behaviour is much simpler, because they act like robots. Kerstin’s group examined how interaction with robot toys can help some autistic children to develop skills that allow them to interact better with other people. By controlling the robot’s behaviours, some of the children can develop ways to mimic social skills, which may ultimately improve their quality of life. There were some promising results, and the work continues as one way of trying to help those living with this socially isolating condition.

Future friendly

It’s only polite that the last word goes to Kerstin from her time at Hertfordshire:

‘I firmly believe that robots as assistants can potentially be very useful in many application areas. For me as a researcher, working in the field of human-robot interaction is exciting and great fun. In our team we have people from various disciplines working together on a daily basis, including computer scientists, engineers and psychologists. This collaboration, where people need to have an open mind towards other fields, as well as imagination and creativity, are necessary in order to make robots more social.’

In the future, when robots become our workmates, colleagues and companions, it will be in part down to the pioneering efforts of Kerstin and her team as they work towards making our robot future friendly.



EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Devices that work for everyone #BlackHistoryMonth

A pulse oximeter on the finger of a Black person's hand

by Jo Brodie, Queen Mary University of London

In 2009 Desi Cryer, who is Black, shared a light-hearted video with a serious message. He’d bought a new computer with a face tracking camera… which didn’t track his face, at all. It did track his White colleague Wanda’s face though. In the video (below) he asked her to go in front of the camera and move from side to side and the camera obediently tracked her face – wherever she moved the camera followed. When Desi moved back in front of the camera it stopped again. He wondered if the computer might be racist…

The computer recognises Desi’s colleague Wanda, but not him

Another video (below), this time from 2017, showed a dark-skinned man failing to get a soap dispenser to give him some soap. Nothing happened when he put his hand underneath the sensor but as soon as his lighter-skinned friend put his hand under it – out popped some soap! The only way the first man could get any soap dispensed was to put a white tissue on his hand first. He wondered if the soap dispenser might be racist…

The soap dispenser only dispenses soap if it ‘sees’ a white hand

What’s going on?

Probably no-one set out to maliciously design a racist device but designers might need to check that their products work with a range of different people before putting them on the market. This can save the company embarrassment as well as creating something that more people want to buy. 

Sensors working overtime

Both devices use a sensor that is activated (or, in these cases, isn’t) by a signal. Soap dispensers shine a beam of light which bounces off a hand placed below it and some of that light is reflected back. Paler skin reflects more light (and so triggers the sensor) than darker skin. Next to the light is a sensor which responds to the reflected light – but if the device was only tested on White people then the sensor wasn’t adjusted for the full range of skin tones and so won’t respond appropriately. Similarly, cameras have historically been designed for White skin tones, meaning darker tones are not picked up as well.
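
As a toy illustration of why testing only on pale hands goes wrong, here is a short sketch in Python. The reflectance numbers and thresholds are invented purely for the example; real dispensers measure infrared reflectance in their own units.

```python
# A toy model of a reflectance-triggered soap dispenser.
def dispense(reflected_light, threshold):
    """Dispense soap only if enough light bounces back to the sensor."""
    return reflected_light >= threshold

# Suppose lighter skin reflects about 0.6 of the emitted light and
# darker skin about 0.3 (illustrative values only).
threshold_tested_on_pale_hands_only = 0.5
print(dispense(0.6, threshold_tested_on_pale_hands_only))  # True: soap
print(dispense(0.3, threshold_tested_on_pale_hands_only))  # False: no soap

# Testing with a wider range of hands would have pushed the threshold
# down (or prompted a better sensing method) so everyone gets soap.
threshold_tested_on_everyone = 0.2
print(dispense(0.3, threshold_tested_on_everyone))  # True: soap
```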

In the days when film was developed, technicians would use what was called a ‘Shirley’ card (a photograph of a White woman with brown hair) to colour-correct the photographs. The colour balancing meant darker skin tones didn’t come out as well; however, the problem was only really addressed because chocolate manufacturers and furniture companies complained that their different chocolates and dark brown wood products weren’t showing up correctly!

The Racial Bias Built Into Photography (25 April 2019) The New York Times

Things can be improved!

It’s a good idea, when designing something that will be used by lots of different people, to make sure that it will work correctly with everyone. Having a diverse design team and, importantly, making sure that everyone feels empowered to contribute is a good way to start. Another is to test the design with different target audiences early in the design process so that changes can be made before it’s too late. How a company responds to feedback when they’ve made an oversight is also important. In the case of the computer company they acknowledged the problem and went to work to improve the camera’s sensitivity. 

A problem with pulse oximeters

A pulse oximeter on the finger of a Black person's hand
Pulse oximeter image by Mufid Majnun from Pixabay
The oximeter is shown on the index finger of a Black person’s right hand.

During the coronavirus pandemic many people bought a ‘pulse oximeter’, a device which clips painlessly onto a finger and measures how much oxygen is circulating in your blood (and your pulse). If the oxygen reading became too low people were advised to go to hospital. Oximeters shine red and infrared light from the top clip through the finger and the light is absorbed differently depending on how much oxygen is present in the blood. A sensor on the lower clip measures how much light has got through, but the reading can be affected by skin colour (and coloured nail polish). People were concerned that pulse oximeters would overestimate the oxygen reading for someone with darker skin (that is, tell them they had more oxygen than they actually had) and that the devices might not detect a drop in oxygen quickly enough to warn them.
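
For the curious, here is a very rough sketch of the kind of calculation inside an oximeter. The ‘ratio of ratios’ idea is standard, but the simple straight-line conversion below is only a textbook approximation: real devices rely on calibration curves measured on volunteers, which is exactly where a lack of diversity in testing can creep in.

```python
# A very rough sketch of turning light readings into an oxygen estimate.
# The calibration line (110 - 25 * ratio) is an illustrative textbook
# approximation, not what any particular device actually uses.
def spo2_estimate(red_ac, red_dc, ir_ac, ir_dc):
    """Estimate blood oxygen from pulsing (AC) and steady (DC) light levels."""
    ratio = (red_ac / red_dc) / (ir_ac / ir_dc)
    return 110 - 25 * ratio  # illustrative calibration only

print(f"Estimated SpO2: {spo2_estimate(0.013, 1.0, 0.025, 1.0):.0f}%")  # ~97%
```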

In response the UK Government announced in August 2022 that it would investigate this bias in a range of medical devices to ensure that future devices work effectively for everyone.

Further reading

See also Is your healthcare algorithm racist? (from issue 27 of the CS4FN magazine).


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


This blog is funded through EPSRC grant EP/W033615/1.

Gadgets based on works of fiction

Why might a computer scientist need to write fiction? To make sure she creates an app that people actually need.

Portrait images of lots of people used as personas.

Writing fiction doesn’t sound like the sort of skill a computer scientist might need. However, it’s part of my job at the moment. Working with expert rheumatologists Amy MacBrayne and Fran Humby, I am helping a design team understand what life with rheumatoid arthritis is like, so they can design software that is actually needed and so will be used and useful.

A big problem with developing software is that programmers tend to design things for themselves. However, programmers are not like the users of their software. They have different backgrounds and needs and they have been trained to think differently. Worse, they know the system they are developing inside out, unlike its users. An important first step in a project is to do background research to understand your users. If designing an app for people with rheumatoid arthritis, you need to know a lot about the lives of such people. To design a successful product, you particularly need to understand their unfulfilled goals. What do they want to be able to do that is currently hard or impossible?

What do you do with the research? Alan Cooper’s idea of ‘Personas’ is a powerful next step – and this is where writing fiction comes in. Based on research, you write descriptions of lots of fictional characters (personas), each representing a group of people with similar goals. They have names, photos and realistic lives. You also write scenarios about their lives that help you understand their goals. Next, you merge and narrow these personas down, dropping some, creating new ones, altering others. Your aim is to eventually end up with just one, called a primary persona. The idea is that if you design for the primary persona, you will create something that meets the goals of the groups represented by the other personas it replaced.

The primary persona (let’s call her Samira) is then used throughout the design process as the person being designed for. If wondering whether some new feature or way of doing things is a good idea, the designers would ask themselves, “Would Samira actually want this? Would she be able to use it?” If they can think of her as a real person, it is much easier to make decisions than if thinking of some non-existent abstract “user” who becomes whatever each team member wants them to be. It helps stop ‘feature bloat’ where designers add in every great idea for a new feature they have but end up with a product so complex no one can, or wants to, use it.

As part of the Queen Mary PAMBAYESIAN project we have been talking to rheumatoid arthritis patients and their doctors to understand their needs and goals. I’ve then created a cast of detailed personas to represent the results. These can act as an initial set of personas to help future designers designing apps to support those with the disease.

If you thought creative writing wasn’t important to a computer scientist, think again. A good persona needs to be as powerfully written and as believable as a character in a good novel. So, you should practise writing fiction as well as writing programs.

Read some of our personas about living with rheumatoid arthritis here.

– Paul Curzon, Queen Mary University of London, Spring 2021

See the related Teaching London Computing Activity

Find out more about goal-directed design and personas from its creator in Alan Cooper’s wonderful book “The Inmates Are Running the Asylum” (the inmates are computer scientists!)

Download Issue 27 of the cs4fn magazine on Smart Health here.

This post and issue 27 of the cs4fn magazine have been funded by EPSRC as part of the PAMBAYESIAN project.

Machines Inventing Musical Instruments

by Paul Curzon, Queen Mary University of London

based on a 2016 talk by Rebecca Fiebrink

cupped hands in dark
Image by Milada Vigerova from Pixabay

Machine Learning is the technology driving driverless cars, recognising faces in your photo collection and more, but how could it help machines invent new instruments? Rebecca Fiebrink of Goldsmiths, University of London is finding out.

Rebecca is helping composers and instrument builders to design new musical instruments and giving them new ways to perform. Her work has also shown that machine learning provides an alternative to programming as a way to quickly turn design ideas into prototypes that can be tested.

Suppose you want to create a new drum machine-based musical instrument that is controlled by the wave of a hand: perhaps a fist means one beat, whereas waggling your fingers brings in a different beat. To program a prototype of your idea, you would need to write code that could recognize all the different hand gestures, perhaps based on a video feed. You would then have some kind of decision code that chose the appropriate beat. The second part is not too hard, perhaps, but writing code to recognize specific gestures in video is a lot harder, needing sophisticated programming skills. Rebecca wants even young children to be able to do it!

How can machine learning help? Rebecca has developed a machine learning program with a difference. It takes sensor input – sound, video, in fact just about any kind of sensor you can imagine. It then watches, listens…senses what is happening and learns to associate what it senses with different actions it should take. With the drum machine example, you would first select one of the kinds of beats. You then make the gesture that should trigger it: a fist perhaps. You do that a few times so it can learn what a fist looks like. It learns that the patterns it is sensing are to be linked with the beat you selected. Then you select the next beat and show it the next gesture – waggling your fingers – until it has seen enough examples. You keep doing this for each different gesture you want to use to control the instrument. In just a few minutes you have a working machine to try. It is learning by example how the instrument you want works. You can try it, and then adjust it by showing it new examples if it doesn’t quite do what you want.
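
As a flavour of what ‘learning by example’ might look like in code, here is a minimal sketch using a nearest-neighbour classifier. This is not Rebecca’s actual software; the feature vectors, labels and numbers are all made up for illustration, standing in for whatever a hand-tracking sensor would really report.

```python
# A minimal "learning by example" sketch: show a few labelled gestures,
# then let the model decide which beat a new gesture should trigger.
from sklearn.neighbors import KNeighborsClassifier

# Made-up training examples: each row is a sensed feature vector,
# each label says which beat that gesture should trigger.
examples = [
    [0.1, 0.1, 0.0],  # fist
    [0.2, 0.1, 0.1],  # fist
    [0.9, 0.8, 0.7],  # waggling fingers
    [0.8, 0.9, 0.8],  # waggling fingers
]
labels = ["beat_1", "beat_1", "beat_2", "beat_2"]

model = KNeighborsClassifier(n_neighbors=1).fit(examples, labels)

# At performance time: sense the hand, predict, and trigger that beat.
new_gesture = [[0.15, 0.12, 0.05]]
print(model.predict(new_gesture))  # -> ['beat_1']
```

If the instrument does the wrong thing, you simply add more examples and train again, which is the tinkering loop described below.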

It is learning by example how
the instrument you want works.

Rebecca realised that this approach of learning by example gives a really powerful new way to support creativity: to help designers design. In the traditional way machine learning is used, you start with lots of examples of the things that you want it to recognize – lots of pictures of cats and dogs, perhaps. You know the difference, so you label all these training pictures as cats or dogs, so the program knows which pictures to form the two patterns from. Your aim is for the machine to learn the difference between cat and dog patterns so it can decide for itself when it sees new pictures.

When designing something like a new musical instrument though, you don’t actually know exactly what you want at the start. You have a general idea but will work out the specifics as you go. You tinker with the design, trying new things and keeping the ideas that work, gradually refining your thoughts about what you want as you refine the design of the instrument. The machine learning program can even help by making mistakes – it might not have learnt exactly what you were thinking but as a result makes some really exciting sound you never thought of. You can then explore that new idea.

One of Rebecca’s motivations in wanting to design new instruments is to create accessible instruments that people with a wide range of illness and disability can play. The idea is to adapt the instrument to the kinds of movement the person can actually do. The result is a tailored instrument perfect for each person. An advantage of this approach is you can turn a whole room, say, into an instrument so that every movement does something: an instrument that it’s impossible not to play. It is a play space to explore.

Playing an instrument suddenly really is just playing.

More on …