Pepper’s Ghost: an 1860s illusion used in ‘head-up displays’ ^JB

Three cute cartoon-styled plastic ghosts reflecting on a black glass panel. They are waving their arms and looking more scared than scary.

by Paul Curzon, Queen Mary University of London (first published in 2007)

A ghostly illustration including a woman in historic garb, an ornate candlestick, a grand chair and a mirror with grey curtains pulled back.
Ghostly stage image by S. Hermann / F. Richter from Pixabay

When Pepper’s Ghost first appeared on stage as part of one of Professor Pepper’s shows on Christmas Eve, 1862, it stunned audiences. This was more than just magic: it was miraculous. It was so amazing that some spiritualists were convinced Pepper had discovered a way of really summoning spirits. A ghostly figure appeared on the stage out of thin air, interacted with the other characters on the stage and then disappeared in an instant. This was no dark séance where ghostly effects happen in a darkened room and who knows what tricks are being pulled in the dark to cause them. Neither was it modern-day special effects, where it is all done on film or in the virtual world of a computer. This was on a brightly lit stage in front of everyone’s eyes…

Stage setup for Pepper’s Ghost, from Wikipedia

Switch to the modern day and similar ghostly magic is now being used by fighter pilots. Have the military been funding X-Files research? Well, maybe, but there is nothing supernatural about Pepper’s Ghost. It is just an illusion. The show it first appeared in was a science show, though it went on to amaze audiences as part of magic shows for years to come, and can still be found, for example, in Disney theme parks and on stage, bringing the virtual band Gorillaz to life.

Today’s “supernatural” often becomes tomorrow’s reality, thanks to technology. With Pepper’s Ghost, 19th century magic has in fact become enormously useful 21st century hi-tech. 19th century magicians were more than just showmen: they were inventors, precision engineers and scientists, making use of the latest scientific results and frequently pushing technology forward themselves. People often think of magicians as being secretive, but they were also businessmen, often patenting the inventions behind their tricks, making them available for all to see but also ensuring their rivals could not use them without permission. The magic behind Pepper’s Ghost was patented by Henry Dircks, a Liverpudlian engineer, in 1863 as a theatrical effect, though it was probably originally invented much earlier – it was described in an Italian book back in 1558 by Baptista Porta.

Through the looking glass

So what was Pepper’s Ghost? It’s a cliché to say that “it’s all done with mirrors”, but it is quite amazing what you can do with them if you both understand their physics and are innovative enough to think up extraordinary ways to use old ideas. Pepper’s Ghost worked quite differently from the way mirrors are normally used in tricks, though. It was done using a normal sheet of glass, not a silvered mirror at all. If you have ever looked at your image reflected in a window on a dark night you have seen a weak version of Pepper’s Ghost. The trick was to place a large, spotlessly clean sheet of glass at an angle in front of the stage, between the actors and the audience. By using the stage lights in just the right way, it becomes a half mirror. Not only can the stage be seen through the glass, but so can anything placed off the stage in the right position, where the glass is angled towards. Better still, because of the physics of reflection, the reflected images don’t seem to be on the surface of the glass at all, but the same distance behind it as the objects are in front. The actor playing the ghost would perform in a hidden black area so that he or she was the only thing that reflected light from that area. When the ghost was to appear, a very strong light was shone on the actor. Suddenly the reflection would appear – and as long as they were standing the right distance from the glass, they could appear anywhere desired on the stage. To make them disappear in an instant the light was just switched off.
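To see why the ghost seems to stand on the stage rather than on the glass, it helps to play with the mirror geometry yourself. Below is a minimal sketch (our own illustration, not part of Pepper’s original design) that reflects a point across the plane of an angled glass sheet; the coordinates and the 45-degree angle are invented for the example.

```python
# A minimal sketch of plane-mirror geometry: the audience perceives the hidden
# actor at their mirror image, the same distance behind the glass as the actor
# is in front of it. All numbers here are invented for illustration.
import numpy as np

def reflect_point(point, plane_point, plane_normal):
    """Reflect a 3D point across a plane given by a point on it and its normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    signed_distance = np.dot(point - plane_point, n)
    return point - 2 * signed_distance * n   # same distance, other side of the glass

glass_point = np.array([0.0, 0.0, 0.0])      # a point on the sheet of glass
glass_normal = np.array([1.0, 0.0, 1.0])     # normal of a pane angled at 45 degrees
actor = np.array([-2.0, 1.7, 0.5])           # hidden actor in the black area

print("The ghost appears to stand at:", reflect_point(actor, glass_point, glass_normal))
```

Move the actor and the reflected “ghost” moves with them, which is why the reflection could be made to appear anywhere on the stage.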

Jump to the 21st century and a similar technique has reappeared. Now the ghosts are instrument panels. A problem with controlling a fighter plane is that you don’t have time to look down. You really want the data you need to keep control of your plane to be visible wherever you are looking outside the plane. It needs to be not just in the right position but at the right depth, so you don’t need to refocus your eyes. Most importantly, you must also be able to see out of the plane in an unrestricted way… You need the Pepper’s Ghost effect. That is all head-up displays do, though the precise technology used varies.

C-130J: Co-pilot's head-up display panel
C-130J: Co-pilot’s head-up display panel by Todd Lappin (2004)
C-130J is a large, four-engine turboprop military transport aircraft known as the Super Hercules.

Satnav systems in cars are very dangerous if you have to keep looking down to see where the thing actually means you to turn. “What? This left turn or the next one?” Use a head-up display and the instructions can hover in front of you, out on the road where your eyes are focussed. Better still, you can project a yellow line (say) as though it were painted on the road, showing you the way off into the distance: Follow the Yellow Brick Road… Oh, and wasn’t the Wizard of Oz another great magician who used science and engineering rather than magic dust?

You can make your own Pepper’s Ghost complete with your favourite band appearing live on stage.

This article was originally published on the CS4FN website and can also be found on page 4 of Issue 5 (you can download a free PDF copy from the panel below). You can also download ALL of our free material here.


Related Magazine …


This blog is funded through EPSRC grant EP/W033615/1.

Featured image: Cute ghosts image by Alexa from Pixabay

Making sense of squishiness – 3D modelling the natural world

by Paul Curzon, Queen Mary University of London

Look out the window at the human-made world. It’s full of hard, geometric shapes – our buildings, the roads, our cars. They are made of solid things like tarmac, brick and metal that are designed to be rigid and stay that way. The natural world is nothing like that though. Things bend, stretch and squish in response to the forces around them. That provides a whole bunch of fascinating problems for computer scientists like Lourdes Agapito of Queen Mary, University of London to solve.

Computer scientists interested in creating 3-dimensional models of the world have so far mainly concentrated on modelling the hard things. Why? Because they are easier! You can see the results in computer-animated films like Toy Story, and in 3D worlds like Second Life that your avatar inhabits. Even the soft things tend to be modelled as if they were rigid.

Lourdes works in this general area creating 3D computer models, but she wants to solve the problems of creating them automatically just from the flat images in videos and is specifically interested in things that deform – the squishy things.

Look out the window and watch the world go by. As you watch a woman walk past you have no problem knowing that you are looking at the same person as you were a second ago – even if she becomes partially hidden as she walks behind the post box and turns to post a letter. The sun goes behind a cloud and the scene is suddenly darker. It starts to rain and she opens an umbrella. You can still recognise her as the same object. Your brain is pulling some amazing tricks to make this seem so mundane. Essentially it is creating a model of the world – identifying all the 3-dimensional objects that you see and tracking them over time. If we can do it, why can’t a computer?

Unlike hard surfaces, deformable ones don’t look the same from one still to the next. You don’t just have to worry about changes in lighting, objects being partially hidden, or the fact that they look different from a different angle. The object itself will be a different shape from one still to the next. That makes it far harder to work out which bits of one image are actually the same as the ones in the next. Lourdes has taken on a seriously hard problem.

Existing vision systems that create 3D objects have made things easier for themselves by using existing models. If a computer already has a model of a cube to compare what it sees with, then spotting a cube in the image stream is much easier than working it out from scratch. That doesn’t really generalise to deformable objects though because they vary too much. Another approach, used by the film industry, is to put highly visible markers on objects so that those markers can be tracked. That doesn’t help if you just want to point a camera out the window at whatever passes by though.

Software from Lourdes’ team creates a model of the human face as it deforms. A looping gif of a man’s face making different expressions next to a cartoon version which copies him. Red dots on his features are mapped to red dots on the cartoon face

Lourdes’ aim is to be able to point a camera at a deformable object and have a computer vision system create a 3D model simply by analysing the images. No markers, no existing models of what might be there, not even previous films to train it with, just the video itself. So far her team have created a system that can do this in some situations, such as with faces as a person changes their expression. Their next goal is to make the system work for a whole person as they are filmed doing arbitrary things. It’s the technical challenge that inspires Lourdes the most, though once the problems of deformable objects are solved there are of course applications. One immediately obvious area is in operating theatres. Keyhole surgery is now very common. It involves a surgeon operating remotely, seeing what they are doing by looking at flat video images from a fibre optic probe inside the body of the person being operated on. The image is flat, but the inside of the person that the surgeon is trying to make cuts in is 3-dimensional. It would be far less error prone if the surgeon was looking at an accurate 3D model built from the video feed rather than just a flat picture. Of course the inside of your body is made of exactly the kind of squishy deformable surfaces that Lourdes is interested in. Get the computer science right and technologies like this will save lives.

At the same time as tackling seriously hard if squishy computer science problems, Lourdes is also a mother of three. A major reason she can fit it all in, as she points out, is that she has a very supportive partner who shares in the childcare. Without him it would be impossible to balance all the work involved in leading a top European research team. It’s also important to get away from work sometimes. Running regularly helps Lourdes cope with the pressures and as we write she is about to run her first half marathon.

Lourdes may or may not be the person who turns her team’s solutions into the applications that in the future save lives in operating theatres, spot suspicious behaviour in CCTV footage or allow film-makers to quickly animate the actions of actors. Whoever does create the applications, we still need people like Lourdes who are just excited about solving the fundamental problems in the first place.


This article was originally published on the CS4FN website in ~2011. You can read more about Women in Computing here.


This blog is funded through EPSRC grant EP/W033615/1.

Watching whales well – the travelling salesman problem ^JB

An aerial photograph of São Miguel lighthouse in the Azores showing the surrounding tree-covered cliff and winding road.

by Paul Curzon, Queen Mary University of London

Sasha owns a new tour company and her first tours are to the Azores, a group of volcanic islands in the Atlantic Ocean, off the coast of Portugal. They are one of the best places in the world to see whales and dolphins, so lots of people are signing up to go.

Sasha’s tour as advertised is to visit all nine islands in the Azores: São Miguel, Terceira, Faial, Pico, São Jorge, Santa Maria, Graciosa, Flores and Corvo. The holidaymakers go whale watching as well as visiting the attractions on each island, like swimming in the lava pools. Sasha’s first problem, though, is to sort out the itinerary. She has to work out the best order to visit the islands so her customers spend as little time as possible travelling, leaving more for watching whales and visiting volcanos. She also doesn’t want the tour to go back to the same island twice – and she needs it to end up back at the starting island, São Miguel, for the return flight back home.

Trouble in paradise

It sounds like it should be easy, but it’s actually an example of a computer science problem that dates back at least to the 1800s. It’s known as ‘The Travelling Salesman Problem’ because it is the same problem a salesman has who wants to visit a series of cities and get back to base at the end of the trip. It is surprisingly difficult.

It’s not that hard to come up with any old answer (just join the dots!), but it’s much tougher to come up with the best answer. Of course a computer scientist doesn’t want to just solve one-off problems like Sasha’s but to come up with a way of solving any variant of the problem. Sasha, of course, agrees – once she’s sorted out the Azores itinerary, she then needs to solve similar problems, like the day trip round São Miguel. Her customers will visit the lakes, the tea factory, the hot spring-fed swimming pool in the botanic gardens and so on. Not only that, once Sasha’s done with the Azores, she then needs to plan a wildlife tour of Florida. Knowing a quick way to do it would help her a lot.

The long way round

No one has yet come up with a fast way to solve the Travelling Salesman Problem that is guaranteed to give the best answer, and it is generally believed that no such method is possible. You can find the best solution in theory, of course: just try all the alternatives. Sasha could first work out how long it is if you go São Miguel, Terceira, Faial, Pico, São Jorge, Santa Maria, Graciosa, Flores, Corvo and back to São Miguel, then work out the time for a different order, swapping Corvo and Flores, say. Then she could try a different route, and keep on till she knew the length of every variation. She would then just pick the best. Trouble is, that takes forever.

Even this small problem with only 9 islands has over 20 000 solutions to check. Go up to a tour of 15 destinations and you have 43 billion calculations to do. Add a few more and it would take centuries for a fast computer running flat out to solve it. Bigger still and you find the computer would have to run for longer than the time left before the end of the universe. Hmmm. It’s a problem then.
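To make the brute-force idea concrete, here is a rough sketch in Python. The island coordinates are invented purely for illustration (they are not real positions), but the structure shows where the explosion comes from: the loop runs once for every possible ordering.

```python
from itertools import permutations
import math

# Invented coordinates standing in for a handful of islands (not real data).
islands = {
    "Sao Miguel": (0.0, 0.0), "Terceira": (-1.5, 0.6),
    "Faial": (-4.0, 0.9), "Pico": (-3.8, 0.7), "Santa Maria": (0.3, -0.8),
}

def dist(a, b):
    (x1, y1), (x2, y2) = islands[a], islands[b]
    return math.hypot(x1 - x2, y1 - y2)

def tour_length(tour):
    # total length of a round trip (the tour ends where it starts)
    return sum(dist(a, b) for a, b in zip(tour, tour[1:]))

def best_tour(start="Sao Miguel"):
    others = [i for i in islands if i != start]
    best = None
    for order in permutations(others):            # (n-1)! orderings to try
        tour = [start, *order, start]
        if best is None or tour_length(tour) < tour_length(best):
            best = tour
    return best

shortest = best_tour()
print(shortest, round(tour_length(shortest), 2))
```

With five islands the loop runs only 24 times. This simple version checks every ordering separately, including each route’s mirror image, so for all nine islands it would run just over 40,000 times; counting a route and its reverse as the same tour halves that to the 20,000-odd mentioned above, and by 15 destinations you are into the tens of billions.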

Be greedy

The solution is not to be such a perfectionist and to accept that a good solution will have to be good enough, even though it may not be the absolute best. One way to get a good solution is to use a ‘greedy’ algorithm. You start at São Miguel and just go from there to the nearest island, from there to the nearest island not yet visited, and so on till you have done them all. That would probably work well for the Azores as they are in groups, so visiting the close ones in each group together makes sense. It doesn’t guarantee the best answer in all cases though.
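Here is a minimal sketch of that greedy, nearest-neighbour idea, using the same invented coordinates as the brute-force sketch (repeated so this snippet runs on its own).

```python
import math

# The same invented coordinates as above, repeated so this snippet stands alone.
islands = {
    "Sao Miguel": (0.0, 0.0), "Terceira": (-1.5, 0.6),
    "Faial": (-4.0, 0.9), "Pico": (-3.8, 0.7), "Santa Maria": (0.3, -0.8),
}

def dist(a, b):
    (x1, y1), (x2, y2) = islands[a], islands[b]
    return math.hypot(x1 - x2, y1 - y2)

def greedy_tour(start="Sao Miguel"):
    tour, unvisited = [start], set(islands) - {start}
    while unvisited:
        # always hop to the closest island not yet visited
        nearest = min(unvisited, key=lambda isle: dist(tour[-1], isle))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour + [start]                         # fly back home at the end

print(greedy_tour())
```

It is fast – each step just scans the remaining islands – but, as noted above, nothing guarantees the route it picks is the shortest one.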

Or just go climb a hill

Another way is to use a version of ‘hill climbing’. Here you take any old route and then try to optimise it by just making small changes – swapping pairs of legs over, say: instead of going Faial to Pico and later Corvo to Flores, try substituting Pico to Flores and Faial to Corvo, with the legs in between the same but travelled in the opposite order. If the change is an improvement, keep it and make later changes to that. Otherwise stick with the original. Either way, keep trying changes on the best solution you’ve found so far, until you run out of time.
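A sketch of that hill-climbing idea is below: start from any tour, repeatedly reverse a randomly chosen stretch of the route – which is exactly the leg-swap just described – and keep the change only when it shortens the trip. The coordinates are invented and repeated so the snippet stands alone.

```python
import math
import random

# Invented coordinates again, repeated so this snippet stands alone.
islands = {
    "Sao Miguel": (0.0, 0.0), "Terceira": (-1.5, 0.6),
    "Faial": (-4.0, 0.9), "Pico": (-3.8, 0.7), "Santa Maria": (0.3, -0.8),
}

def dist(a, b):
    (x1, y1), (x2, y2) = islands[a], islands[b]
    return math.hypot(x1 - x2, y1 - y2)

def tour_length(tour):
    return sum(dist(a, b) for a, b in zip(tour, tour[1:]))

def hill_climb(tour, tries=5000):
    best = tour[:]
    for _ in range(tries):
        # reverse a stretch of the route (keeping the fixed start and end),
        # which swaps one pair of legs for another
        i, j = sorted(random.sample(range(1, len(best) - 1), 2))
        candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
        if tour_length(candidate) < tour_length(best):
            best = candidate                      # an improvement: build on it
    return best

any_old_route = ["Sao Miguel", "Faial", "Terceira", "Santa Maria", "Pico", "Sao Miguel"]
improved = hill_climb(any_old_route)
print(improved, round(tour_length(improved), 2))
```

Stop whenever you run out of time: the answer is simply the best route found so far, good but not guaranteed perfect.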

So Sasha may want to run a great tour company but there may not be enough time in the universe for her tours to be guaranteed perfect…unless of course she keeps them very small. After all, just visiting São Miguel and Terceira makes a great holiday anyway.


This article was originally published on the CS4FN website and a copy can also be found on pages 14-15 of issue 10 of the CS4FN magazine, which you can download as a PDF below. All of our free material can be downloaded here.


Related Magazine …


This blog is funded through EPSRC grant EP/W033615/1.

A round up of our posts for #BlackHistoryMonth 2022

The five shades used for skin tone emojis


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


This blog is funded through EPSRC grant EP/W033615/1.

Recognising (and addressing) bias in facial recognition tech – the Gender Shades Audit #BlackHistoryMonth ^JB

The five shades used for skin tone emojis

Some people have a neurological condition called face blindness (also known as ‘prosopagnosia’) which means that they are unable to recognise people, even those they know well – this can include their own face in the mirror! They only know who someone is once they start to speak; until then they can’t be sure who it is. They can certainly detect faces, but they might struggle to classify them in terms of gender or ethnicity. Most people, in contrast, have an exceptionally good ability to detect and recognise faces – so good, in fact, that we even detect faces when they’re not actually there. This is called pareidolia: perhaps you can see a surprised face in the picture of USB sockets below.

A unit containing four sockets, 2 USB and 2 for a microphone and speakers.
Happy, though surprised, sockets

What if facial recognition technology isn’t as good at recognising faces as it has sometimes been claimed to be? If the technology is being used in the criminal justice system, and gets the identification wrong, this can cause serious problems for people (see Robert Williams’ story in “Facing up to the problems of recognising faces“).

In 2018 Joy Buolamwini and Timnit Gebru shared the results of research they’d done, testing three different commercial facial recognition systems. They found that these systems were much more likely to wrongly classify darker-skinned female faces compared to lighter- or darker-skinned male faces. In other words, the systems were not reliable.

“The findings raise questions about how today’s neural networks, which … (look for) patterns in huge data sets, are trained and evaluated.”

Study finds gender and skin-type bias in commercial artificial-intelligence systems
(11 February 2018) MIT News

The Gender Shades Audit

Facial recognition systems are trained to detect, classify and even recognise faces using a bank of photographs of people. Joy and Timnit examined two banks of images used to train facial recognition systems and found that around 80 per cent of the photos used were of people with lighter coloured skin. 

If the photographs aren’t fairly balanced in terms of having a range of people of different gender and ethnicity then the resulting technologies will inherit that bias too. Effectively the systems here were being trained to recognise light-skinned people.

The Pilot Parliaments Benchmark

They decided to create their own set of images and wanted to ensure that these covered a wide range of skin tones and had an equal mix of men and women (‘gender parity’). They did this by selecting photographs of members of various parliaments around the world which are known to have a reasonably equal mix of men and women, and selected parliaments from countries with predominantly darker skinned people (Rwanda, Senegal and South Africa) and from countries with predominantly lighter-skinned people (Iceland, Finland and Sweden). 

They labelled all the photos according to gender (they did have to make some assumptions based on name and appearance if pronouns weren’t available) and used the Fitzpatrick scale (see Different shades, below) to classify skin tones. The result was a set of photographs labelled as dark male, dark female, light male, light female with a roughly equal mix across all four categories – this time, 53 per cent of the people were light-skinned (male and female).

A composite image showing the range of skin tone classifications with the Fitzpatrick scale on top and the skin tone emojis below.

Different shades

The Fitzpatrick skin tone scale (top) is used by dermatologists (skin specialists) as a way of classifying how someone’s skin responds to ultraviolet light. There are six points on the scale with 1 being the lightest skin and 6 being the darkest. People whose skin tone has a lower Fitzpatrick score are more likely to burn in the sun and not tan, and are also at greater risk of melanoma (skin cancer). People with higher scores have darker skin which is less likely to burn and they have a lower risk of skin cancer. 

Below it is a variation of the Fitzpatrick scale, with five points, which is used to create the skin tone emojis that you’ll find on most messaging apps in addition to the ‘default’ yellow. 

Testing three face recognition systems

Joy and Timnit tested the three commercial face recognition systems against their new database of photographs – a fair test covering a wide range of faces that a recognition system might come across – and this is where they found that the systems were less able to correctly identify particular groups of people. The systems were very good at spotting lighter-skinned men and darker-skinned men, but were less able to correctly identify darker-skinned women, and women overall.
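The heart of such an audit is simply measuring accuracy separately for each group rather than reporting one overall number. The sketch below is our own illustration (not Joy and Timnit’s code), with invented records, just to show the calculation.

```python
# Count correct classifications per group instead of one overall accuracy score.
from collections import defaultdict

def accuracy_by_group(records):
    """records: list of (group, true_label, predicted_label) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        if truth == prediction:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Invented results purely to show the calculation.
test_results = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),      # misclassified
    ("darker-skinned female", "female", "female"),
]
for group, accuracy in accuracy_by_group(test_results).items():
    print(f"{group}: {accuracy:.0%} correct")
```

A single headline accuracy figure would hide the gap; breaking the results down by group is what revealed it.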

These tools, trained on sets of data that had a bias built into them, inherited those biases and this affected how well they worked. Joy and Timnit published the results of their research and it was picked up and discussed in the news as people began to realise the extent of the problem, and what this might mean for the ways in which facial recognition tech is used. 

“An audit of commercial facial-analysis tools found that dark-skinned faces are misclassified at a much higher rate than are faces from any other group. Four years on, the study is shaping research, regulation and commercial practices.”

The unseen Black faces of AI algorithms (19 October 2022) Nature

There is some good news though. The three companies made changes to improve their facial recognition systems, several US cities have already banned the use of this tech in criminal investigations, and more cities are calling for similar bans. People around the world are becoming more aware of the limitations of this type of technology and the harms to which it may (perhaps unintentionally) be put, and are calling for better regulation of these systems.

Further reading

Study finds gender and skin-type bias in commercial artificial-intelligence systems (11 February 2018) MIT News
Facial recognition software is biased towards white men, researcher finds (11 February 2018) The Verge
Go read this special Nature issue on racism in science (21 October 2022) The Verge

More technical articles

Joy Buolamwini and Timnit Gebru (2018) Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research 81:1-15.
The unseen Black faces of AI algorithms (19 October 2022) Nature News & Views


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


This blog is funded through EPSRC grant EP/W033615/1.

Devices that work for everyone #BlackHistoryMonth ^JB

A pulse oximeter on the finger of a Black person's hand

by Jo Brodie, Queen Mary University of London

In 2009 Desi Cryer, who is Black, shared a light-hearted video with a serious message. He’d bought a new computer with a face tracking camera… which didn’t track his face, at all. It did track his White colleague Wanda’s face though. In the video (below) he asked her to go in front of the camera and move from side to side and the camera obediently tracked her face – wherever she moved the camera followed. When Desi moved back in front of the camera it stopped again. He wondered if the computer might be racist…

The computer recognises Desi’s colleague Wanda, but not him

Another video (below), this time from 2017, showed a dark-skinned man failing to get a soap dispenser to give him some soap. Nothing happened when he put his hand underneath the sensor but as soon as his lighter-skinned friend put his hand under it – out popped some soap! The only way the first man could get any soap dispensed was to put a white tissue on his hand first. He wondered if the soap dispenser might be racist…

The soap dispenser only dispenses soap if it ‘sees’ a white hand

What’s going on?

Probably no-one set out to maliciously design a racist device but designers might need to check that their products work with a range of different people before putting them on the market. This can save the company embarrassment as well as creating something that more people want to buy. 

Sensors working overtime

Both devices use a sensor that is activated (or in these cases isn’t) by a signal. Soap dispensers shine a beam of light which bounces off a hand placed below it and some of that light is reflected back. Paler skin reflects more light (and so triggers the sensor) than darker skin. Next to the light is a sensor which responds to the reflected light – but if the device was only tested on White people then the sensor wasn’t adjusted for the full range of skin tones and so won’t respond appropriately. Similarly cameras have historically been designed for White skin tones meaning darker tones are not picked up as well.
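In effect the dispenser’s logic boils down to a single threshold test, something like the toy sketch below (invented numbers, not any real product’s firmware). If the threshold is only ever tuned on light skin, darker skin simply never triggers it.

```python
# Toy model of a reflectance-triggered dispenser: the pump runs only if enough
# of the emitted light bounces back to the sensor. Numbers are invented.
def should_dispense(reflected_light, threshold=0.5):
    return reflected_light >= threshold

readings = {
    "lighter skin": 0.7,
    "darker skin": 0.3,
    "white tissue": 0.9,
}
for surface, level in readings.items():
    print(surface, "->", "soap" if should_dispense(level) else "nothing")
```

Testing with the full range of skin tones would show straight away that the threshold, or the whole sensing approach, needs rethinking.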

In the days when film was developed, technicians would use what was called a ‘Shirley’ card (a photograph of a White woman with brown hair) to colour-correct the photographs. That colour balancing meant darker skin tones didn’t come out as well; the problem was only really addressed because chocolate manufacturers and furniture companies complained that their different chocolates and dark brown wood products weren’t showing up correctly!

The Racial Bias Built Into Photography (25 April 2019) The New York Times

Things can be improved!

It’s a good idea, when designing something that will be used by lots of different people, to make sure that it will work correctly with everyone. Having a diverse design team and, importantly, making sure that everyone feels empowered to contribute is a good way to start. Another is to test the design with different target audiences early in the design process so that changes can be made before it’s too late. How a company responds to feedback when they’ve made an oversight is also important. In the case of the computer company they acknowledged the problem and went to work to improve the camera’s sensitivity. 

A problem with pulse oximeters

A pulse oximeter on the finger of a Black person's hand
Pulse oximeter image by Mufid Majnun from Pixabay
The oximeter is shown on the index finger of a Black person’s right hand.

During the coronavirus pandemic many people bought a ‘pulse oximeter’, a device which clips painlessly onto a finger and measures how much oxygen is circulating in your blood (and your pulse). If the oxygen reading became too low people were advised to go to hospital. Oximeters shine red and infrared light from the top clip through the finger and the light is absorbed differently depending on how much oxygen is present in the blood. A sensor on the lower clip measures how much light has got through, but the reading can be affected by skin colour (and coloured nail polish). People were concerned that pulse oximeters would overestimate the oxygen reading for someone with darker skin (that is, tell them they had more oxygen than they actually had) and that the devices might not detect a drop in oxygen quickly enough to warn them.
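The calculation inside a pulse oximeter is, very roughly, a comparison of how strongly the pulsing part of the red and infrared signals is absorbed. The sketch below uses a commonly quoted, much-simplified linear approximation; real devices rely on calibration curves fitted to measured data, and the constants and readings here are illustrative only.

```python
# Much-simplified "ratio of ratios" estimate of blood oxygen saturation.
# Real oximeters use device-specific calibration curves; these constants and
# readings are illustrative, not taken from any actual device.
def estimate_spo2(ac_red, dc_red, ac_infrared, dc_infrared):
    # compare the pulsing (AC) part of each light signal to its steady (DC) part
    ratio = (ac_red / dc_red) / (ac_infrared / dc_infrared)
    return 110 - 25 * ratio          # a commonly quoted linear approximation

print(round(estimate_spo2(ac_red=0.02, dc_red=1.0, ac_infrared=0.03, dc_infrared=1.0), 1))
```

If pigmentation or nail polish changes how much light gets through, the measured ratio shifts and the estimate drifts with it, which is exactly the bias being investigated.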

In response the UK Government announced in August 2022 that it would investigate this bias in a range of medical devices to ensure that future devices work effectively for everyone.

Further reading

See also Is your healthcare algorithm racist? (from issue 27 of the CS4FN magazine).


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


This blog is funded through EPSRC grant EP/W033615/1.

Facing up to the problems of recognising faces #BlackHistoryMonth ^JB

by Jo Brodie and Paul Curzon

How the use of facial recognition technology caused the wrong Black man to be arrested.

The police were waiting for Robert Williams when he returned home from work in Detroit, Michigan. They arrested him for robbery in front of his wife and terrified daughters, aged two and five, and took him to a detention centre where he was kept overnight. During his interview an officer showed him two grainy CCTV photos of a suspect alongside a photo of Williams from his driving licence. All the photos showed a large Black man, but that’s where the similarity ended – it wasn’t Williams on CCTV but a completely different man. Williams held up the photos to his face and said “I hope you don’t think all Black people look alike”; the officer replied, “the computer must have got it wrong.”

Williams’ problems began several months before his arrest when video clips and images of the robbery from the CCTV camera were run through face recognition software used by the Detroit Police Department. The system has access to the photos from everyone’s driving licence and can compare different faces until it finds a potential match; in this case it falsely identified Robert Williams. No system is ever perfect, but studies have shown that face recognition technology is often better at correctly matching lighter-skinned faces than darker-skinned ones.

The way facial recognition works is not actually by comparing pictures but by comparing data. When a picture of a face is added to the system, lots of measurements are taken, such as how far apart the eyes are or what shape the nose is. Together those numbers make up a signature for that face, and the signature is added to the database. When looking for a match from, say, a CCTV image, the signature of the new image is worked out first. Then algorithms look for the signature in the database that is “nearest” to the new one. How well this works depends on the particular features chosen, amongst many other things. If the features chosen are a poor way to distinguish particular groups of people then there will be lots of bad matches. But how does it decide what is “nearest” anyway, given that in essence it is just comparing groups of numbers? More algorithms are used, based on, for example, machine learning. The system might be trained on lots of faces and told which match and which don’t, allowing it to look for patterns that are good ways to predict matches. If, however, it is trained on mainly light-skinned faces it is likely to be bad at spotting matches for faces of other ethnic backgrounds. It may actually decide that “all Black people look alike”.
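A toy version of that signature matching is sketched below (our own illustration, not the real police system): each face is reduced to a short list of numbers, and the database entry with the smallest distance to the new signature is reported as the match. The measurements and names are invented.

```python
import math

def distance(sig_a, sig_b):
    # straight-line (Euclidean) distance between two feature vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))

# Invented signatures: e.g. eye spacing, nose shape, jaw width as numbers.
database = {
    "person A": [0.62, 0.31, 0.87],
    "person B": [0.58, 0.35, 0.80],
    "person C": [0.91, 0.12, 0.44],
}

cctv_signature = [0.60, 0.33, 0.82]
best_match = min(database, key=lambda name: distance(database[name], cctv_signature))
print("Nearest signature in the database:", best_match)
```

Crucially, “nearest” is not the same as “same person”: if the chosen features are a poor way to tell some groups of faces apart, the nearest entry can easily be the wrong one.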

However, face recognition is only part of the story. A potential match is only a pointer towards someone who might be a suspect and it’s certainly not a ‘case closed’ conclusion – there’s still work to be done to check and confirm. But as Williams’ lawyer, Victoria Burton-Harris, pointed out, once the computer had suggested Williams as a suspect, that suggestion “framed and informed everything that officers did subsequently”. The man in the CCTV image wore a red baseball cap. It was for a team that Williams didn’t support (he’s not even a baseball fan) but no-one asked him about it. They also didn’t ask if he was in the area at the time (he wasn’t) or had an alibi (he did). Instead the investigators asked a security guard at the shop where the theft took place to look at some photos of possible suspects and he picked Williams from the line-up of images. Unfortunately the guard hadn’t been on duty on the day of the theft and had only seen the CCTV footage.

Robert Williams spent 30 hours in custody for a crime he didn’t commit after his face was mistakenly selected from a database. He was eventually released and the case dropped but his arrest is still on record along with his ‘mugshot’, fingerprints and a DNA sample. In other words he was wrongly picked from one database and has now been unfairly added to another. The experience for his whole family has been very traumatic and sadly his children’s first encounter with the police has been a distressing rather than a helpful one.

The American Civil Liberties Union (ACLU) has filed a lawsuit against the Detroit Police Department on Williams’ behalf for his wrongful arrest. It is not known how many people have been arrested because of face recognition technology, but given how widely it is used it’s likely that others will have been misidentified too. The ACLU and Williams have asked for a public apology, for his police record to be cleared and for his images to be removed from any face recognition database. They have also asked that the Detroit Police Department stop using facial recognition in their investigations. If Robert Williams had lived in New Hampshire he’d never have been arrested as there is a law there which prevents face recognition software from being linked with driving licence databases.

In June 2020 Amazon, Microsoft and IBM denied the police any further access to their face recognition technology and IBM has also said that it will no longer work in this area because of concerns about racial profiling (targeting a person based on assumptions about their race instead of their individual actions) and violation of privacy and human rights. Campaigners are asking for a new law that protects people if this technology is used in future. But the ACLU and Robert Williams are asking for people to just stop using it – “I don’t want my daughters’ faces to be part of some government database. I don’t want cops showing up at their door because they were recorded at a protest the government didn’t like.”

Technology is only as good as the data and the algorithms it is based on. But that isn’t the whole story: even if it is very accurate, it is only as good as the way it is used. If, as a society, our aim is to protect people from bad things happening, perhaps some technologies should not be used at all.


This article was originally published on the Teaching London Computing website where you can find references and further reading.

One of the aims of our Diversity in Computing posters (see below) is to help a classroom of young people see the range of computer scientists which includes people who look like them and people who don’t look like them. You can download our posters free from the link below.


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


This blog is funded through EPSRC grant EP/W033615/1.

Hidden Figures – NASA’s brilliant calculators #BlackHistoryMonth ^JB

Full Moon and silhouetted tree tops

by Paul Curzon, Queen Mary University of London

Full Moon with a blue filter
Full Moon image by PIRO from Pixabay

NASA Langley was the birthplace of the U.S. space program, where astronauts like Neil Armstrong learned to land on the moon. Everyone knows the names of astronauts, but behind the scenes a group of African-American women were vital to the space program: Katherine Johnson, Mary Jackson and Dorothy Vaughan. Before electronic computers were invented, ‘computers’ were just people who did calculations, and that’s where these women started out, as part of a segregated team of mathematicians. Dorothy Vaughan became the first African-American woman to supervise staff there and helped make the transition from human to electronic computers by teaching herself and her staff how to program in the early programming language FORTRAN.

FORTRAN code on a punched card, from Wikipedia.

The women switched from being the computers to programming them. These hidden women helped put John Glenn, the first American to orbit the Earth, into space, and over many years worked on calculations like the trajectories of spacecraft and their launch windows (the small period of time when a rocket must be launched if it is to get to its target). These complex calculations had to be correct. If they got them wrong, the mistakes could ruin a mission, putting the lives of the astronauts at risk. Get them right, as they did, and the result was a giant leap for humankind.

See the film ‘Hidden Figures’ for more of their story (trailer below).

This story was originally published on the CS4FN website and was also published in issue 23, The Women Are (Still) Here, on p21 (see ‘Related magazine’ below).


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


Related Magazine …

This blog is funded through EPSRC grant EP/W033615/1.

Writing together: Clarence ‘Skip’ Ellis #BlackHistoryMonth ^JB

Small photo of Clarence 'Skip' Ellis

by Paul Curzon, Queen Mary University of London

Small photo of Clarence 'Skip' Ellis
Small photo of Clarence ‘Skip’ Ellis

Back in 1956, Clarence Ellis started his career at the very bottom of the computer industry. He was given a job, at the age of 15, as a “computer operator” … because he was the only applicant. He was also told that under no circumstances should he touch the computer! It’s lucky for all of us that he got the job, though! He went on to develop ideas that have made computers easier for everyone to use. Working at a computer was once a lonely endeavour: one person, on one computer, doing one job. Clarence Ellis changed that. He pioneered ways for people to use computers together effectively.

The graveyard shift

The company Clarence first worked for had a new computer. Just like all computers back then, it was the size of a room. He worked the graveyard shift and his duties were more those of a nightwatchman than a computer operator. It could have been a dead-end job, but it gave him lots of spare time and, more importantly, access to all the computer’s manuals … so he read them … over and over again. He didn’t need to touch the computer to learn how to use it!

Saving the day

His studying paid dividends. Only a few months after he started, the company had a potential disaster on its hands. They ran out of punch cards. Back then punch cards were used to store both data and programs. They used patterns of holes and non-holes as a way to store numbers in binary in a way a computer could read. Without punch cards the computer could not work!

It had to though, because the payroll program had to run before the night was out. If it didn’t then no-one would be paid that month. Because he had studied the manuals in detail, and more so than anyone else, Clarence was the only person who could work out how to reuse old punch cards. The problem was that the computer used a system called ‘parity checking’ to spot mistakes. In its simplest form parity checking of a punch card involves adding an extra binary digit (an extra hole or no-hole) on the end of each number. This is done in a way that ensures that the number of holes is even. If there is an even number of holes already, the extra digit is left as a non-hole. If, on the other hand there is an odd number of holes, a hole is punched as the extra digit. That extra binary digit isn’t part of the number. It’s just there so the computer can check if the number has been corrupted. If a hole was accidentally or otherwise turned into a non-hole (or vice versa), then this would show up. It would mean there was now an odd number of holes. Special circuitry in the computer would spot this and spit out the card, rejecting it. Clarence knew how to switch that circuitry off. That meant they could change the numbers on the cards by adding new holes without them being rejected.
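A small sketch of that even-parity idea is below: the extra bit is chosen so that the total number of holes (1s) is even, and any single flipped bit makes the check fail.

```python
def add_parity_bit(bits):
    # append 0 if the count of 1s is already even, otherwise append 1
    return bits + [sum(bits) % 2]

def parity_ok(bits_with_parity):
    # a card passes the check only if its total number of 1s is even
    return sum(bits_with_parity) % 2 == 0

card = add_parity_bit([1, 0, 1, 1, 0])   # three 1s, so the parity bit is 1
print(card, parity_ok(card))             # [1, 0, 1, 1, 0, 1] True

card[2] = 0                              # a hole is lost: corruption
print(card, parity_ok(card))             # now an odd number of 1s -> rejected
```

Clarence’s insight was not to fool the check card by card, but to switch the rejection circuitry off altogether so the re-punched cards would be accepted.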

After that success he was allowed to become a real operator and was relied on to troubleshoot whenever there were problems. His career was up and running.

Clicking icons

He later worked at Xerox PARC, a massively influential research centre. He was part of the team that invented graphical user interfaces (GUIs). With GUIs, Xerox PARC completely transformed the way we used computers. Instead of typing obscure and hard-to-remember commands, they introduced the now-standard ideas of windows, icons, dragging and dropping, using a mouse, and more. Clarence himself has been credited with inventing the idea of clicking on an icon to run a program.

Writing Together

As if that wasn’t enough of an impact, he went on to help make groupware a reality: software that supports people working together. His focus was on software that let people write a document together. With Simon Gibbs he developed a crucial algorithm called Operational Transformation. It allows people to edit the same document at the same time without it becoming hopelessly muddled. This is actually very challenging. You have to ensure that two (or more) people can change the text at exactly the same time, and even at the same place, without each ending up with a different version of the document.

The actual document sits on a server computer, which must make sure that its copy is always the same as the ones everyone is individually editing. When people type changes into their local copy, the master is sent messages informing it of the actions they performed. The trouble is that the order in which those messages arrive can change what happens. Clarence’s operational transformation algorithm solved this by changing the commands from each person into ones that work consistently whatever order they are applied in. It is the transformed operation that is applied to the master. That master version is the version everyone then sees as their local copy. Ultimately everyone sees the same version. This algorithm is at the core of programs like Google Docs that have ensured collaborative editing of documents is now commonplace.
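Here is a toy sketch of the transformation idea, far simpler than the full Ellis and Gibbs algorithm: when two people insert text at the same moment, one insertion’s position is shifted by the length of the other, so the edits give the same document whichever order they arrive in. The document text and the tie-break rule are invented for illustration.

```python
def transform_insert(op, against):
    """Adjust insert op = (position, text) as if 'against' had been applied first."""
    pos, text = op
    other_pos, other_text = against
    if pos < other_pos or (pos == other_pos and text < other_text):  # simple tie-break
        return (pos, text)
    return (pos + len(other_text), text)

def apply_insert(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

doc = "whales"
alice = (0, "watch ")        # Alice types at the start of the document
bob = (6, " well")           # Bob types at the end, at the same moment

# Whichever edit the server applies first, the other is transformed to fit:
one_order = apply_insert(apply_insert(doc, alice), transform_insert(bob, alice))
other_order = apply_insert(apply_insert(doc, bob), transform_insert(alice, bob))
print(one_order)             # "watch whales well"
print(other_order)           # the same, so everyone ends up with one version
```

The real algorithm also handles deletions, more than two users and chains of concurrent edits, but the principle is the same: transform each incoming operation so it still makes sense after the ones already applied.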

Clarence Ellis started his career with a lonely job. By the end of his career he had helped ensure that writing on a computer at least no longer needs to be a lonely affair.


This article was originally published on the CS4FN website. One of the aims of our Diversity in Computing posters (see below) is to help a classroom of young people see the range of computer scientists which includes people who look like them and people who don’t look like them. You can download our posters free from the link below.


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


This blog is funded through EPSRC grant EP/W033615/1.