Cognitive crash dummies

by Paul Curzon, Queen Mary University of London

The world is heading for catastrophe. We’re hooked on power-hungry devices: our mobile phones and iPods, our PlayStations and laptops. Wherever you turn people are using gadgets, and those gadgets are guzzling energy – energy that we desperately need to save. We are all doomed, doomed… unless of course a hero rides in on a white charger to save us from ourselves.

Don’t worry, the cognitive crash dummies are coming!

Actually the saviours may be people like Bonnie John, a professor of human-computer interaction, and her then grad student Annie Lu Luo: people who design cognitive crash dummies. When they were working at Carnegie Mellon University it was their job to figure out ways of deciding how well gadgets are designed.

If you’re designing a bridge you don’t want to have to build it before finding out if it stays up in an earthquake. If you’re designing a car, you don’t want to find out it isn’t safe by having people die in crashes. Engineers use models – sometimes physical ones, sometimes mathematical ones – that show in advance what will happen. How big an earthquake can the bridge cope with? The mathematical model tells you. How slow must the car go to avoid killing the baby in the back? A crash test dummy will show you.

Even when safety isn’t the issue, engineers want models that can predict how well their designs perform. So what about designers of computer gadgets? Do they have any models to do predictions with? As it happens, they do. Their models are called ‘human behavioural models’, but think of them as ‘cognitive crash dummies’. They are mathematical models of the way people behave, and the idea is you can use them to predict how easy computer interfaces are to use.

There are lots of different kinds of human behavioural model. One such ‘cognitive crash dummy’ is called ‘GOMS’. When designers want to predict which of a few suggested interfaces will be the quickest to use, they can use GOMS to do it.

Send in the GOMS

Suppose you are designing a new phone interface. There are loads of little decisions you’ll have to make that affect how easy the phone is to use. You can fit a certain number of buttons on the phone or touch screen, but what should you make the buttons do? How big should they be? Should you use gestures? You can use menus, but how many levels of menus should a user have to navigate before they actually get to the thing they are trying to do? More to the point, with the different variations you have thought up, how quickly will the person be able to do things like send a text message or reply to a missed call? These are questions GOMS answers.

To do a GOMS prediction you first think up a task you want to know about – sending a text message perhaps. You then write a list of all the steps that are needed to do it. Not just the button presses, but hand movements from one button to another, thinking time, time for the machine to react, and so on. In GOMS, your imaginary user already knows how to do the task, so you don’t have to worry about spending time fiddling around or making mistakes. That means that once you’ve listed all your separate actions GOMS can work out how long the task will take just by adding up the times for all the separate actions. Those basic times have been worked out from lots and lots of experiments on a wide range of devices. They have shown, on average, how long it takes to press a button and how long users are likely to think about it first.
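The adding-up idea can be sketched in a few lines of code. This is an illustrative toy, not a real GOMS tool: the action names and times below are rough stand-ins for the kinds of average operator times measured in those experiments.

```python
# A minimal GOMS-style sketch: the predicted task time is just the sum of
# the times for each basic action in the list. Times are in seconds and
# are illustrative, not real measured values.
ACTION_TIMES = {
    "keystroke": 0.28,   # press a key or button
    "point": 1.10,       # move hand/finger to a target
    "home": 0.40,        # move hand between keyboard and pointer
    "mental": 1.35,      # think/prepare before an action
}

def predict_task_time(actions):
    """Add up the time for every listed action (seconds)."""
    return sum(ACTION_TIMES[a] for a in actions)

# "Reply to a missed call": think, point at the entry, press call.
task = ["mental", "point", "keystroke"]
print(round(predict_task_time(task), 2))  # → 2.73
```

Comparing two candidate designs is then just a matter of listing each design's actions for the same task and seeing which sum is smaller.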

GOMS in 60 seconds?

GOMS has been around since the 1980s, but wasn’t being used much by industrial designers. The problem is that it is very frustrating and time-consuming to work out all those steps for all the different tasks for a new gadget. Bonnie John’s team developed a tool called CogTool to help. You make a mock-up of your phone design in it, and tell it which buttons to press to do each task. CogTool then works out where the other actions, like hand movements and thinking time, are needed and makes predictions.

Bonnie John came up with an easier way to figure out how much human time and effort a new design uses, but what about the device itself? How about predicting which interface design uses less energy? That is where Annie Lu Luo came in. She had the great idea that you could take a GOMS list of actions and, instead of linking actions to times, work out how much energy the device uses for each action instead. By using GOMS together with a tool like CogTool, a designer can find out whether their design is the most energy efficient too.
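Annie Lu Luo's idea can be sketched the same way as the time prediction: keep the GOMS-style list of actions, but swap per-action times for per-action energy costs. All the action names and numbers here are hypothetical, invented purely for illustration.

```python
# Same GOMS idea, applied to energy: sum a per-action energy cost instead
# of a per-action time. All numbers are made up for illustration.
ENERGY_COST = {          # joules per action (hypothetical)
    "keystroke": 0.05,
    "screen_redraw": 0.80,
    "radio_send": 2.50,
}

def predict_energy(actions):
    """Add up the energy cost of every listed action (joules)."""
    return sum(ENERGY_COST[a] for a in actions)

# Two candidate designs for sending the same text message:
design_a = ["keystroke"] * 8 + ["screen_redraw", "radio_send"]
design_b = ["keystroke"] * 5 + ["screen_redraw"] * 3 + ["radio_send"]
print(predict_energy(design_a) < predict_energy(design_b))  # → True: A wins
```

The design choice worth noticing is that the action list stays the same; only the lookup table changes, which is why one tool can predict both time and energy.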

So it turns out you don’t need a white knight to help your battery usage, just Annie Lu Luo and her version of GOMS. Mobile phone makers saw the benefit of course. That’s why Annie walked straight into a great job on finishing university.

This article was originally published on the CS4FN website and appears on pages 12 and 13 of issue 9 (‘Programmed to save the world‘) of the CS4FN magazine, which you can download (free) here along with all of our other free material.

See also the concept of ‘digital twins’ in this article from our Christmas Advent Calendar: Pairs: mittens, gloves, pair programming, magic tricks.

Related Magazine …

This blog is funded through EPSRC grant EP/W033615/1.

Bringing people closer when they’re far away

This article was written a few years ago, before the Covid pandemic led to many more of us keeping in touch from a distance…

by Paul Curzon, Queen Mary University of London

Photo shows two children playing with a tin-can telephone, which lets them talk to each other at a distance. Picture credit Jerry Loick KONZI and Wikipedia. Original photograph can be found here.

Living far away from the person you love is tough. You spend every day missing their presence. The Internet can help, and many couples in long-distance relationships use video chat to see more of each other. It’s not the same as being right there with someone else, but couples find ways to get as much connection as they can out of their video chats. Some researchers in Canada, at the University of Calgary and Simon Fraser University, interviewed couples in long-distance relationships to find out how they use video chat to stay connected.

Nice to see you

The first thing that the researchers found is perhaps what you might expect. Couples use video chat when it’s important to see each other. You can text little messages like ‘I love you’ to each other, or send longer stories in an email, and that’s fine. But seeing someone’s face when they’re talking to you feels much more emotionally close. One member of a couple said, “The voice is not enough. The relationship is so physical and visual. It’s not just about hearing and talking.” Others reported that seeing each other’s face helped them know what the other person was feeling. For one person, just seeing his partner’s face when she was feeling worn out helped him understand her state of mind. In other relationships, seeing one another helped avoid misunderstandings that come from trying to interpret tone of voice. Plus, having video helped couples show off new haircuts or clothes, or give each other tours of their surroundings.

Hanging out on video

The couples in the study didn’t use video chat just to have conversations. They also used it in a more casual way: to hang out with each other while they went about their lives. Their video connections might stay open for hours at a time while they did chores, worked, read, ate or played games. Long silences might pass. Couples might not even be visible to each other all the time. But each partner would, every once in a while, check back at the video screen to see what the other was up to. This kind of hanging out helped couples feel the presence of the other person, even if they weren’t having a conversation. One participant said of her partner, “At home, a lot of times at night, he likes to put on his PJs and turn out all the lights and sit there with a snack and, you know, watch TV… As long as you can see the form of somebody that’s a nice thing. I think it’s just the comfort of knowing that they’re there.”

Some couples felt connected by doing the same things together in different places. They shared evenings together in living rooms far away from each other, watching the same thing on television or even getting the same movie to watch and starting it at the same time. Some couples had dinner dates where they ordered the same kind of takeaway and ate it with each other through their video connection.

Designing to connect

This might not sound like research about human-computer interaction. It’s about the deepest kind of human interaction. But good computer design can help couples feel as connected as possible. The researchers also wanted to find out how they could help couples make their video chats better. Designers of the future might think about how to make gadgets that make video chat easier to do while getting on with other chores. It’s difficult to talk, film yourself, cook and move through the house all at the same time. What’s more, today’s gadgets aren’t really built to go everywhere in the house. Putting a laptop in a kitchen or propping one up in a bed doesn’t always work so well. The designers of operating systems need to work out how to do other stuff at the same time as video. If couples want to have a video chat connection open for hours, sometimes they might need to browse the web or write a text message at the same time. And what about couples who like to fall asleep next to one another? They might need night-vision cameras so they can see their partner without disturbing their sleep.

We’re probably going to have more long-distance relationships in the future. Easy, cheap travel makes it easier to move to faraway places. You can go to university abroad, and join a company with offices on every continent. It’s an awfully good thing that technology is making it easier to stay connected with the people who are important to us too. Video chat is not nearly as good as feeling your lover’s touch, but when you really miss someone, even watching them do chores helps.

This article was originally published on CS4FN and can also be found on pages 4 and 5 of CS4FN Issue 15, Does your computer understand you?, which you can download as a PDF. All of our free material can be downloaded here:

Related Magazine …


Swat a way to drive

by Peter W McOwan, Queen Mary University of London

(updated from the archive)

Flies are small, fast and rather cunning. Try to swat one and you will see just how efficient its brain is, even though it has so few brain cells that each one of them can be counted and given a number. A fly’s brain is a wonderful proof that, if you know what you’re doing, you can efficiently perform clever calculations with a minimum of hardware. The average household fly’s ability to detect movement in the surrounding environment, whether it’s a fly swat or your hand, is down to some cunning wiring in its brain.

Speedy calculations

Movement is measured by detecting something changing position over time. The ratio distance/time gives us the speed, and flies have built-in speed detectors. In the fly’s eye (a wonderful piece of optical engineering in itself, with hundreds of lenses forming the mosaic of the compound eye), each lens looks at a different part of the surrounding world, and so each registers whether something is at a particular position in space.

All the lenses are also linked by a series of nerve cells. These nerve cells each have a different delay. That means a signal takes longer to pass along one nerve than another. When a lens spots an object in its part of the world, say position A, this causes a signal to fire into the nerve cells, and these signals spread out with different delays to the other lenses’ positions.

The separation between the different areas that the lenses view (distance) and the delays in the connecting nerve cells (time) are such that a whole range of possible speeds are coded in the nerve cells. The fly’s brain just has to match the speed of the passing object with one of the speeds that are encoded in the nerve cells. When the object moves from A to B, the fly knows the correct speed if the first delayed signal from position A arrives at the same time as the new signal at position B. The arrival of the two signals is correlated. That means they are linked by a well-defined relation, in this case the speed they are representing.
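The delay-and-correlate scheme described above can be sketched in a few lines of code. This is a simplified hypothetical model, not the fly's actual wiring: two lenses report when they see the object, and the brain picks the delay whose shifted copy of lens A's signal best matches lens B's signal.

```python
# A toy delay-and-correlate speed detector. Each candidate delay encodes
# one speed; the matching speed is the one where A's delayed signal
# arrives at the same time as B's new signal (the two are correlated).
def detect_speed(signal_a, signal_b, delays):
    """signal_a/signal_b: lists of 0/1 samples from two neighbouring lenses.
    delays: candidate delays in samples, each encoding one possible speed.
    Returns the delay whose shifted copy of A best correlates with B."""
    best_delay, best_score = None, 0
    for d in delays:
        # Correlate A delayed by d samples with B.
        shifted_a = signal_a[:-d] if d else signal_a
        score = sum(a * b for a, b in zip(shifted_a, signal_b[d:]))
        if score > best_score:
            best_delay, best_score = d, score
    return best_delay

# Object passes lens A at t=2 and lens B at t=5: a delay of 3 matches.
a = [0, 0, 1, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 1, 0, 0]
print(detect_speed(a, b, delays=[1, 2, 3, 4]))  # → 3
```

In the fly the delays are fixed by the nerve wiring, so no searching loop is needed: every delay line is checked at once, in parallel, which is part of why so little hardware suffices.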

Do locusts like Star Wars?

Understanding the way that insects see gives us clever new ways to build things, and can also lead to some bizarre experiments. Researchers in Newcastle showed locusts edited highlights from the original Star Wars movie. Why, you might ask? Do locusts enjoy a good science fiction movie? It turns out that the researchers were looking to see if locusts could detect collisions – there are plenty of those in the battles between X-wing fighters and TIE fighters. They also wanted to know if this collision-detecting ability could be turned into a design for a computer chip. The work, part-funded by car-maker Volvo, used such a strange way to examine locusts’ vision that it won an Ig Nobel award in 2005. Ig Nobel awards are presented each year for weird and wonderful scientific experiments, and have the motto ‘Research that makes people laugh then think’.

Car crash: who is to blame?

So what happens if we start to use these insect ‘eye’ detectors in cars, building them in to help drivers avoid collisions?

We now have smart cars with artificial intelligence (AI) taking over from the driver, either completely or just to avoid hitting other things. An interesting question arises. When an accident does happen, who is to blame? Is it the car driver: are they in charge of the vehicle? Is the AI to blame? And who is responsible for that: the AI itself (if one day we give machines human-like rights), or the car manufacturer? Is it the computer scientists who wrote the program? If we do build cars with fly or locust-like intelligence, which avoid accidents like flies avoid swatting or can spot possible collisions like locusts, is it the insect whose brain was copied that is to blame?! What will insurance companies decide? What about the courts?

As computer science makes new things possible, society quickly needs to decide how to deal with them. Unlike the smart cars, these decisions aren’t something we can avoid.

More on …

Related Magazines …

cs4fn issue 4 cover
A hoverfly on a leaf

EPSRC supports this blog through research grant EP/W033615/1. 

Future Friendly: Focus on Kerstin Dautenhahn

by Peter W McOwan, Queen Mary University of London

(from the archive)

Kerstin's team including the robot waving
Kerstin’s team
Copyright © Adaptive Systems Research Group

Kerstin Dautenhahn is a biologist with a mission: to help us make friends with robots. Kerstin was always fascinated by the natural world around her, so it was no surprise when she chose to study Biology at the University of Bielefeld in Germany. Afterwards she went on to study for a Diploma in Biology, doing research on the leg reflexes of stick insects – a strange start, it may seem, for someone who would later become one of the world’s foremost robotics researchers. But it was through this fascinating bit of biology that Kerstin became interested in the ways that living things process information and control their body movements, an area scientists call biological cybernetics. This interest in trying to understand biology made her want to build things to test her understanding: things based on ideas copied from biological animals, but run by computers. These things would be robots.

Follow that robot

From humble beginnings building small robots that followed one another over a hilly landscape, she started to realise that biology was a great source of ideas for robotics, and in particular that the social intelligence that animals use to live and work with each other could be modelled and used to create sociable robots.

She started to ask fascinating questions like “What’s the best way for a robot to interrupt you if you are reading a newspaper – by gesturing with its arms, blinking its lights or making a sound?” and perhaps most importantly “When would a robot become your friend?” First at the University of Hertfordshire, and now as a Professor at the University of Waterloo, she leads a world-famous research group trying to build friendly robots with social intelligence.

Good robot / Bad robot – East vs West

Kerstin, like many other robotics researchers, is worried that most people tend to look on robots as being potentially evil. If we look at the way robots are portrayed in the movies that’s often how it seems: it makes a good story to have a mechanical baddie. But in reality robots can provide a real service to humans: helping the disabled, assisting around the home, even becoming friends and companions. The baddie robot idea tends to dominate in the west, but in Japan robots are very popular and robotics research is advancing at a phenomenal rate. There has been a long history in Japan of people finding mechanical things that mimic natural things interesting and attractive. It is partly this cultural difference that has made Japan a world leader in robot research. But Kerstin and others like her are trying to get those of us in the west to change our opinions by building friendly robots and looking at how we relate to them.

Polite Robots roam the room

When at the University of Hertfordshire, Kerstin decided that the best way to see how people would react to a robot around the house was to rent a flat near the university and fill it with robots. Rather than examining how people interacted with robots in a laboratory, moving the experiments to a real home, with bookcases, biscuits, sofas and coffee tables, made it real. She and her team looked at how to give their robots social skills: what was the best way for a robot to approach a person, for example? At first they thought that the best approach would be straight from the front, but they found that humans felt this was too aggressive, so the robots were trained to come up gently from the side. The people in the house were also given special ‘comfort buttons’, devices that let them indicate how they were feeling in the company of robots. Again interesting things happened: it turned out that quite a lot of people, though not all, were on the whole happy for these robots to be close to them – closer, in fact, than they would normally let a human approach. Kerstin explains: ‘This is because these people see the robot as a machine, not a person, and so are happy to be in close proximity. You are happy to move close to your microwave, and it’s the same for robots’. These are exciting first steps as we start to understand how to build robots with socially acceptable manners. But it turns out that robots need to have good looks as well as good manners if they are going to make it in human society.

Looks are everything for a robot?

This fall in acceptability
is called the ‘uncanny valley’

How we interact with robots also depends on how the robots look. Researchers had found previously that if you make a robot look too much like a human being, people expect it to be a human being, with all the social and other skills that humans have. If it doesn’t have these, we find interaction very hard. It’s like working with a zombie, and it can be very frightening. This fall in acceptability of robots that look like, but aren’t quite, human is what researchers call the ‘uncanny valley’: people prefer to encounter a robot that looks like a robot and acts like a robot. Kerstin’s group found this effect too, so they designed their robots to look and act the way we would expect robots to look and act, and things got much more sociable. But they were still interested in how we act with more human-like robots, and built KASPAR, a robot toddler with a very realistic rubber face capable of showing expressions and smiling, and video camera eyes that allow the robot to react to your behaviours. He possesses arms so he can wave goodbye or greet you with a friendly gesture. More recently he was extended with multi-modal technology that allowed several children to play with him at the same time. He’s very lifelike, and the team’s hope was that, as KASPAR’s programming grew and his abilities improved, he, or some descendant of him, would emerge from the uncanny valley to become someone’s friend – in particular, a friend to children with autism.

Autism – mind blindness and robots

The fact that most robots at present look and act like robots can give them a big advantage in supporting children with autism. Autism is a condition that prevents you from developing an understanding of how to interact socially with the world. A current theory to explain the condition is that those who are autistic cannot form a correct understanding of others’ intentions; it’s called mind blindness. For example, if I came into the room wearing a hideous hat and asked you ‘Do you like my lovely new hat?’ you would probably think, ‘I don’t like the hat, but he does, so I should say I like it so as not to hurt his feelings’: you have a mental model of my state of mind (that I like my hat). An autistic person is likely to respond ‘I don’t like your hat’, if that is what he feels. Autistic people cannot create this mental model, so they find it hard to make friends and generally interact with people, as they can’t predict what people are likely to say, do or expect.

Playing with Robot toys

It’s different with robots: many autistic children have an affinity with robots. Robots don’t do unexpected things. Their behaviour is much simpler, because they act like robots. Using robots, Kerstin’s group examined how this interaction with robot toys can help some autistic children develop skills that let them interact better with other people. By controlling the robot’s behaviours, some of the children can develop ways to mimic social skills, which may ultimately improve their quality of life. There were some promising results, and the work continues as one way to try to help those living with this socially isolating condition.

Future friendly

It’s only polite that the last word goes to Kerstin from her time at Hertfordshire:

‘I firmly believe that robots as assistants can potentially be very useful in many application areas. For me as a researcher, working in the field of human-robot interaction is exciting and great fun. In our team we have people from various disciplines working together on a daily basis, including computer scientists, engineers and psychologist. This collaboration, where people need to have an open mind towards other fields, as well as imagination and creativity, are necessary in order to make robots more social.’

In the future, when robots become our workmates, colleagues and companions it will be in part down to Kerstin and her team’s pioneering effort as they work towards making our robot future friendly.

More on …

Related Magazines …

cs4fn issue 4 cover
A hoverfly on a leaf

EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

The joke Turing test

A funny thing happened on the way to the computer

by Peter W. McOwan, Queen Mary University of London

(from the archive)

A cabbage smiling at you
Image by Lynn Greyling from Pixabay

Laugh and the world laughs with you, they say. But what if you’re a computer? Can a computer have a ‘sense of humour’?

Computer generated jokes can do more than give us a laugh. Human language in jokes can often be ambiguous: words can have two meanings. For example the word ‘bore’ can mean a person who is uninteresting, or could be to do with drilling … and if spoken it could be about a male pig. It’s often this slip between the meanings of words that makes jokes work (work that joke out for yourself). Understanding how human humour works, and building a computer program that can make us laugh, will give us a better understanding of how the human mind works … and human minds are never boring.

Many researchers believe that jokes come from the unexpected. As humans we have a brain that can try to ‘predict the future’, for example when catching a fast ball our brains have a simple learned mathematical model of the physics so we can predict where the ball will be and catch it. Similarly in stories we have a feel for where it should be going, and when the story takes an unexpected turn, we often find this funny. The shaggy dog story is an example; it’s a long series of parts of a story that build our expectations, only to have the end prove us wrong. We laugh (or groan) when the unexpected twist occurs. It’s like the ball suddenly doing three loop-the-loops then stopping in mid-air. It’s not what we expect. It’s against the rules and we see that as funny.

Some artificial intelligence researchers who are interested in understanding how language works look at jokes as a way to understand how we use language. Graeme Ritchie was one early such researcher, and funnily enough he presented his work at an April Fools’ Day Workshop on Computational Humour. Ritchie looked at puns: simple gags that work by a play on words, and created a computer program called JAPE that generates jokes.

How do we know if the computer has a sense of humour? Well how would we know a human comic had a sense of humour? We’d get them to tell a joke. Now suppose that we had a test where we had a set of jokes, some made by humans and some by computers, and suppose we couldn’t tell the difference? If you can’t tell which is computer generated and which is human generated then the argument goes that the computer program must, in some way, have captured the human ability. This is called a Turing Test after the computer scientist Alan Turing. The original idea was to use it as a test for intelligence but we can use the same idea as a test for an ability to be funny too.

So let’s finish with a joke (and test). Which of the following is a joke created by a computer program following Ritchie’s theory of puns, and which is a human’s attempt? Will humans or machines have the last laugh on this test?

Have your vote: which of these two jokes do you think was written by a computer and which by a human.

1) What’s fast and wiry?

… An aircraft hanger!

2) What’s green and bounces?

… A spring cabbage!

Make your choice before scrolling down to find the answer.

More on …

Related Magazines …

Issue 16 cover clean up your language


The answers

Could you tell which of the two jokes was written by a human and which by a computer?

Lots of cs4fn readers voted over several years and the voting went:

  • 58% of votes cast believed the aircraft hanger joke was computer generated
  • 42% of votes cast believed the spring cabbage joke was computer generated

In fact …

  • The aircraft hanger joke was the work of a computer.
  • The spring cabbage joke was the human generated cracker.

If the voters were doing no better than guessing then the votes would be about 50-50: no better than tossing a coin to decide. In that case the computer would be doing as well at being funny as the human. A vote share of 58-42 suggests (on the basis of this one joke only) that the computer is getting there, but perhaps doesn’t quite have as good a sense of humour as the human who invented the spring cabbage joke. A real test would use lots more jokes, of course. In a real experiment it would also be important that the jokes were not only generated by the human/computer but selected by them too (or possibly selected at random from ones they each picked out as their best). By using ones we selected, our sense of humour could be getting in the way of a fair test.

Pepper’s Ghost: an 1860s illusion used in ‘head-up displays’ ^JB

Three cute cartoon-styled plastic ghosts reflecting on a black glass panel. They are waving their arms and looking more scared than scary.

by Paul Curzon, Queen Mary University of London (first published in 2007)

A ghostly illustration including a woman in historic garb, an ornate candlestick, a grand chair and a mirror with grey curtains pulled back.
Ghostly stage image by S. Hermann / F. Richter from Pixabay

When Pepper’s Ghost first appeared on the stage as part of one of Professor Pepper’s shows on Christmas Eve, 1862 it stunned the audiences. This was more than just magic: it was miraculous. It was so amazing that some spiritualists were convinced Pepper had discovered a way of really summoning spirits. A ghostly figure appeared on the stage out of thin air, interacted with the other characters on the stage and then disappeared in an instant. This was no dark seance where ghostly effects happen in a darkened room: who knows what tricks are then being pulled in the dark to cause the effects. Neither was it modern day special effects where it is all done on film or in the virtual world of a computer. This was on a brightly lit stage in front of everyone’s eyes…

Stage setup for Pepper’s Ghost, from Wikipedia

Switch to the modern day and similar ghostly magic is now being used by fighter pilots. Have the military been funding X-files research? Well maybe, but there is nothing supernatural about Pepper’s Ghost. It is just an illusion. The show it first appeared in was a Science show, though it went on to amaze audiences as part of magic shows for years to come, and can still be found, for example in Disney Theme Parks, and onstage to make virtual band Gorillaz come to life.

Today’s “supernatural” often becomes tomorrow’s reality, thanks to technology. With Pepper’s ghost, 19th century magic has in fact become enormously useful 21st century hi-tech. 19th century magicians were more than just showmen, they were inventors, precision engineers and scientists, making use of the latest scientific results, frequently pushing technology forward themselves. People often think of magicians as being secretive, but they were also businessmen, often patenting the inventions behind their tricks, making them available for all to see but also ensuring their rivals could not use them without permission. The magic behind Pepper’s ghost was patented by Henry Dircks, a Liverpudlian engineer, in 1863 as a theatrical effect though it was probably originally invented much earlier – it was described in an Italian book back in 1558 by Baptista Porta.

Through the looking glass

So what was Pepper’s ghost? It’s a cliche to say that “it’s all done with mirrors”, but it is quite amazing what you can do with them if you both understand their physics and are innovative enough to think up extraordinary ways to use old ideas. Pepper’s ghost worked in a completely different way to the normal way mirrors are used in tricks though. It was done using a normal sheet of glass, not a silvered mirror at all. If you have ever looked at your image reflected in a window on a dark night you have seen a weak version of Pepper’s Ghost. The trick was to place a large, spotlessly clean sheet of glass at an angle in front of the stage between the actors and the audience. By using the stage lights in just the right way, it becomes a half mirror. Not only can the stage be seen through the glass, but so can anything placed at the right position off the stage where the glass is pointing. Better still, because of the physics of reflection, the reflected images don’t seem to be on the surface of the glass at all, but the same distance behind as the objects are in front. The actor playing the ghost would perform in a hidden black area so that he or she was the only thing that reflected light from that area. When the ghost was to appear a very strong light was shone on the actor. Suddenly the reflection would appear – and as long as they were standing the right distance from the mirror, they could appear anywhere desired on the stage. To make them disappear in an instant the light was just switched off.

Jump to the 21st century and a similar technique has reappeared. Now the ghosts are instrument panels. A problem with controlling a fighter plane is that you don’t have time to look down. You really want the data you need to keep control of your plane to be visible wherever you are looking outside the plane. It needs not just to be in the right position on the screen but at the right depth, so you don’t need to refocus your eyes. Most importantly, you must also be able to see out of the plane in an unrestricted way… You need the Pepper’s ghost effect. That is all “head-up” displays do, though the precise technology used varies.

C-130J: Co-pilot’s head-up display panel by Todd Lappin (2004)
C-130J is a large, four-engine turboprop military transport aircraft known as the Super Hercules.

Satnav systems in cars are very dangerous if you have to keep looking down to see where the thing actually means you to turn. “What? This left turn or the next one?” Use a head-up display and the instructions can hover in front of you, out on the road where your eyes are focussed. Better still, you can project a yellow line (say) as though it was painted on the road, showing you the way off into the distance: follow the Yellow Brick Road… Oh, and wasn’t the Wizard of Oz another great magician who used science and engineering rather than magic dust?

You can make your own Pepper’s Ghost complete with your favourite band appearing live on stage.

This article was originally published on the CS4FN website and can also be found on page 4 of Issue 5 (you can download a free PDF copy from the panel below). You can also download ALL of our free material here.

Related Magazine …

This blog is funded through EPSRC grant EP/W033615/1.

Featured image: Cute ghosts image by Alexa from Pixabay

Watching whales well – the travelling salesman problem ^JB

An aerial photograph of São Miguel lighthouse in the Azores showing the surrounding tree-covered cliff and winding road.

by Paul Curzon, Queen Mary University of London

Sasha owns a new tour company and her first tours are to the Azores, a group of volcanic islands in the Atlantic Ocean, off the coast of Portugal. They are one of the best places in the world to see whales and dolphins, so lots of people are signing up to go.

Sasha’s tour as advertised is to visit all nine islands in the Azores: São Miguel, Terceira, Faial, Pico, São Jorge, Santa Maria, Graciosa, Flores and Corvo. The holidaymakers go whale watching as well as visiting the attractions on each island, like swimming in the lava pools. Sasha’s first problem, though, is to sort out the itinerary. She has to work out the best order to visit the islands so her customers spend as little time as possible travelling, leaving more for watching whales and visiting volcanos. She also doesn’t want the tour to go back to the same island twice – and she needs it to end up back at the starting island, São Miguel, for the return flight back home.

Trouble in paradise

It sounds like it should be easy, but it’s actually an example of a computer science problem that dates back at least to the 1800s. It’s known as ‘The Travelling Salesman Problem’ because it is the same problem a salesman has who wants to visit a series of cities and get back to base at the end of the trip. It is surprisingly difficult.

It’s not that hard to come up with any old answer (just join the dots!), but it’s much tougher to come up with the best answer. Of course a computer scientist doesn’t want to just solve one-off problems like Sasha’s but to come up with a way of solving any variant of the problem. Sasha, of course, agrees – once she’s sorted out the Azores itinerary, she then needs to solve similar problems, like the day trip round São Miguel. Her customers will visit the lakes, the tea factory, the hot spring-fed swimming pool in the botanic gardens and so on. Not only that, once Sasha’s done with the Azores, she then needs to plan a wildlife tour of Florida. Knowing a quick way to do it would help her a lot.

The long way round

No one has yet come up with an efficient way to solve the Travelling Salesman Problem, though, and it is widely believed that no such method exists. You can find the best solution in theory of course: just try all the alternatives. Sasha could first work out how long the tour is if you go São Miguel, Terceira, Faial, Pico, São Jorge, Santa Maria, Graciosa, Flores, Corvo and back to São Miguel, then work out the time for a different order, swapping Corvo and Flores, say. Then she could try a different route, and keep on until she knew the length of every variation. She would then just pick the best. Trouble is, that takes forever.
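Trying every alternative like this is easy to write down as code, even if it is hopelessly slow for big tours. Here is a minimal Python sketch of the brute-force approach, using four made-up islands at invented coordinates (not real Azores positions or flight times):

```python
from itertools import permutations
from math import dist

# Hypothetical island positions, purely for illustration
places = {"A": (0, 0), "B": (0, 1), "C": (1, 1), "D": (1, 0)}

def tour_length(order):
    """Total length of a round trip visiting the islands in the given order."""
    stops = list(order) + [order[0]]  # return to the starting island
    return sum(dist(places[a], places[b]) for a, b in zip(stops, stops[1:]))

def brute_force(start="A"):
    """Try every possible ordering of the other islands and keep the best."""
    others = [p for p in places if p != start]
    best = min(permutations(others),
               key=lambda perm: tour_length((start,) + perm))
    return (start,) + best, tour_length((start,) + best)
```

This is guaranteed to find the shortest tour, but only because it checks every single ordering – which is exactly what becomes impossible as the number of destinations grows.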

Even this small problem with only 9 islands has over 20 000 solutions to check. Go up to a tour of 15 destinations and you have 43 billion calculations to do. Add a few more and it would take centuries for a fast computer running flat out to solve it. Bigger still and you find the computer would have to run for longer than the time left before the end of the universe. Hmmm. It’s a problem then.
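Where do numbers like 20 000 and 43 billion come from? Fix the starting island, and count a route and its reverse as the same tour: that leaves (n-1)!/2 distinct tours for n destinations. A quick check in Python:

```python
from math import factorial

def num_tours(n):
    """Distinct round trips through n places: fix the start, and a route
    and its reverse count as the same tour, giving (n-1)!/2."""
    return factorial(n - 1) // 2

print(num_tours(9))   # 20160 – the "over 20 000" for the nine islands
print(num_tours(15))  # 43589145600 – the "43 billion" for 15 destinations
```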

Be greedy

The solution is not to be such a perfectionist and to accept that a good solution will have to be good enough, even though it may not be the absolute best. One way to get a good solution is to use a ‘greedy’ algorithm. You start at São Miguel and go from there to the nearest island, from there to the nearest island not yet visited, and so on till you have done them all. That would probably work well for the Azores as they are in groups, so visiting the close ones in each group together makes sense. It doesn’t guarantee the best answer in all cases though.
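The greedy “nearest island next” idea is only a few lines of code. This sketch again uses invented coordinates purely for illustration:

```python
from math import dist

# Hypothetical island positions, purely for illustration
places = {"A": (0, 0), "B": (0, 1), "C": (1, 1), "D": (1, 0)}

def greedy_tour(start="A"):
    """Always hop to the nearest island not yet visited."""
    tour, unvisited = [start], set(places) - {start}
    while unvisited:
        here = places[tour[-1]]
        nearest = min(unvisited, key=lambda p: dist(here, places[p]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour  # finally fly back from the last island to the start
```

Notice it makes the locally best choice at every hop and never reconsiders – that is what makes it fast, and also why it can miss the overall best tour.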

Or just go climb a hill

Another way is to use a version of ‘hill climbing’. Here you take any old route and then try to optimise it by making small changes – swapping pairs of legs over, say: instead of going Faial to Pico and later Corvo to Flores, try Pico to Flores and Faial to Corvo, with the stretch in between travelled in the opposite order. If the change is an improvement, keep it and make later changes to that. Otherwise stick with the original. Either way, keep trying changes on the best solution you’ve found so far, until you run out of time.
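That “swap two legs and travel the stretch in between backwards” move is what computer scientists call a 2-opt swap. Here is a small hill-climbing sketch in Python, once more with invented coordinates rather than real distances:

```python
from math import dist

# Hypothetical island positions, purely for illustration
places = {"A": (0, 0), "B": (0, 1), "C": (1, 1), "D": (1, 0)}

def tour_length(tour):
    """Total length of the round trip, including the flight home."""
    stops = tour + [tour[0]]
    return sum(dist(places[a], places[b]) for a, b in zip(stops, stops[1:]))

def hill_climb(tour):
    """Keep reversing sections of the route while that shortens the trip."""
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(candidate) < tour_length(tour):
                    tour, improved = candidate, True
    return tour
```

It stops when no single swap helps any more, which may be a merely “pretty good” tour rather than the perfect one – exactly the trade-off described above.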

So Sasha may want to run a great tour company but there may not be enough time in the universe for her tours to be guaranteed perfect…unless of course she keeps them very small. After all, just visiting São Miguel and Terceira makes a great holiday anyway.

This article was originally published on the CS4FN website and a copy can also be found on pages 14-15 of issue 10 of the CS4FN magazine, which you can download as a PDF below. All of our free material can be downloaded here.


Hidden Figures: NASA’s brilliant calculators #BlackHistoryMonth

Full Moon and silhouetted tree tops

by Paul Curzon, Queen Mary University of London

Full Moon with a blue filter
Full Moon image by PIRO from Pixabay

NASA Langley was the birthplace of the U.S. space program where astronauts like Neil Armstrong learned to land on the moon. Everyone knows the names of astronauts, but behind the scenes a group of African-American women were vital to the space program: Katherine Johnson, Mary Jackson and Dorothy Vaughan. Before electronic computers were invented ‘computers’ were just people who did calculations and that’s where they started out, as part of a segregated team of mathematicians. Dorothy Vaughan became the first African-American woman to supervise staff there and helped make the transition from human to electronic computers by teaching herself and her staff how to program in the early programming language, FORTRAN.

FORTRAN code on a punched card, from Wikipedia.

The women switched from being the computers to programming them. These hidden women helped put the first American, John Glenn, in orbit, and over many years worked on calculations like the trajectories of spacecraft and their launch windows (the small period of time when a rocket must be launched if it is to get to its target). These complex calculations had to be correct. If they got them wrong, the mistakes could ruin a mission, putting the lives of the astronauts at risk. Get them right, as they did, and the result was a giant leap for humankind.

See the film ‘Hidden Figures’ for more of their story (trailer below).

This story was originally published on the CS4FN website and was also published in issue 23, The Women Are (Still) Here, on p21 (see ‘Related magazine’ below).

See more in ‘Celebrating Diversity in Computing

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


Gladys West: Where’s my satellite? Where’s my child? #BlackHistoryMonth

Satellite image of the Earth at night

by Paul Curzon, Queen Mary University of London

Satellites are critical to much modern technology, and especially GPS, which allows our smartphones, laptops and cars to work out their exact position on the surface of the Earth. This is central to all mobile technology, wearable or not, that relies on knowing where you are, from plotting a route to your nearest Indian restaurant to telling you where a person you might want to meet is. Many, many people were involved in creating GPS, but it was only in Black History Month of 2017 that the critical part Gladys West played became widely known.

Work hard, go far

As a child Gladys worked with her family in the fields of their farm in rural Virginia. That wasn’t the life she wanted, so she worked hard through school, leaving as the top student. She won a scholarship to university, and then landed a job as a mathematician at a US navy base.

There she solved the maths problems behind the positioning of satellites. She worked closely with the programmers to write the code to do calculations based on her maths. Nine times out of ten the results that came back weren’t exactly right, so much of her time was spent working out what was going wrong with the programs, as it was vital the results were very accurate.

Seasat and Geosat

Her work on the Seasat satellite won her a commendation. It was a revolutionary satellite designed to remotely monitor the oceans. It collected data about things like temperature, wind speed and wind direction at the sea’s surface, the heights of waves, as well as sensing data about sea ice. This kind of remote sensing has since had a massive impact on our understanding of climate change. Gladys specifically worked on the satellite’s altimeter. It was a radar-based sensor that allowed Seasat to measure its precise distance from the surface of the ocean below. She continued this work on later remote sensing satellites too, including Geosat, a later earth observation satellite.

Gladys West and Sam Smith look over data from the Global Positioning System,
which Gladys helped develop. Photo credit US Navy, 1985, via Wikipedia.


Knowing the positions of satellites is the foundation for GPS. The way GPS works is that our mobile receivers pick up a timed signal from several different satellites. Calculating where we are can only be done if you first know very precisely where those satellites were when they sent the signal. That is what Gladys’ work provided.

GPS Watches

You can now buy, for example, GPS watches: a watch that watches where you are. They can be used by people with dementia, who have serious memory problems, allowing their carers to find them if they go out on their own and then become confused about where they are. They also allow parents to know where their kids are all the time. Do you think that’s a good use?

Since so much technology now relies on knowing exactly where we are, Gladys’ work has had a massive impact on all our lives.

This article was originally published on the CS4FN website and a copy can also be found on page 14 of Issue 25 of CS4FN, “Technology worn out (and about)“, on wearable computing, which can be downloaded as a PDF, along with all our other free material, here.

This article is also republished during Black History Month and is part of our Diversity in Computing series, celebrating the different people working in computer science (Gladys West’s page).


Kakuro, Logic and Computer Science – problem-solving brain teasers

by Paul Curzon, Queen Mary University of London

To be a good computer scientist you have to enjoy problem solving. That is what it’s all about: working out the best way to do things. You also have to be able to think in a logical way: be a bit of a Vulcan. But what does that mean? It just means being able to think precisely, extracting all the knowledge possible from a situation by pure reasoning. It’s about being able to say what is definitely the case given what is already known… and it’s fun to do. That’s why there is a Sudoku craze going on as I write. Sudoku puzzles are just pure logical thinking puzzles. Personally I like Kakuro better. They are similar to Sudoku, but with a crossword format.

What is a Kakuro?

Kakuro Fragment
Part of a Kakuro puzzle

A Kakuro is a crossword-like grid, but where each square has to be filled in with a digit from 1-9, not a letter. Each horizontal or vertical block of digits must add up to the number given to its left or above it, respectively, and all the digits in each such block must be different. That part is similar to Sudoku, though unlike Sudoku, digits can be repeated on a line as long as they are in different blocks. Also, unlike Sudoku, you aren’t given any starting numbers, just a blank grid.

Where does logic come into it? Take the following fragment:

Kakuro Start - part of a Kakuro puzzle
Part of a Kakuro Puzzle

There is a horizontal block of two cells that must add up to 16. The ways that could be done using digits 1-9 are 9+7, 8+8 or 7+9. But it can’t be 8+8, as that needs two 8s in one block, which is not allowed, so we are left with just two possibilities: 9+7 or 7+9. Now look at the vertical blocks. One of them consists of two cells that add up to 17. That can only be 9+8 or 8+9. That doesn’t seem to have got us very far, as we still don’t know any numbers for sure. But now think about the top corner. We know from the across block that it is definitely 9 or 7, and from the down block that it is definitely 9 or 8. That means it must be 9, as that is the only value satisfying both restrictions.
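This “intersect the possibilities” reasoning is exactly the sort of thing a program can do. A tiny Python sketch of the corner-cell deduction above:

```python
def pair_digits(total):
    """Digits that can appear in a two-cell Kakuro block summing to `total`
    (each cell holds a digit 1-9, and the two digits must differ)."""
    return {a for a in range(1, 10) for b in range(1, 10)
            if a != b and a + b == total}

across = pair_digits(16)   # {7, 9} – from 9+7 or 7+9
down = pair_digits(17)     # {8, 9} – from 9+8 or 8+9
corner = across & down     # {9} – the only digit satisfying both blocks
print(corner)
```

Full Kakuro solvers work the same way on a larger scale, repeatedly narrowing down each cell’s candidate set until only one digit remains.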

A Kakuro for you to try

A Kakuro puzzle for you to try

Here is a full Kakuro to try. There is also a printer friendly pdf version. Check your answer at the very end of this post when you are done.

Being able to think logically is important because computer programming is about coming up with precise solutions that even a dumb computer can follow. To do that you have to make sure all the possibilities have been covered. Reasoning very much like in a Kakuro is needed to convince yourself and others that a program does do what it is supposed to.

This article was included on Day 11 (The proof of the pudding… mathematical proof) of the CS4FN Advent Calendar in December 2021. Before that it was originally published on CS4FN and can also be found on page 16 of CS4FN Issue 3, which you can download as a PDF below. All of our free material can be downloaded here:


The answer to the kakuro above

Answer for the kakuro
A correctly filled in answer for the kakuro puzzle