Pepper’s Ghost: an 1860s illusion used in ‘head-up displays’

Three cute cartoon-styled plastic ghosts reflecting on a black glass panel. They are waving their arms and looking more scared than scary.

by Paul Curzon, Queen Mary University of London (first published in 2007)

A ghostly illustration including a woman in historic garb, an ornate candlestick, a grand chair and a mirror with grey curtains pulled back.
Ghostly stage image by S. Hermann / F. Richter from Pixabay

When Pepper’s Ghost first appeared on stage as part of one of Professor Pepper’s shows on Christmas Eve, 1862, it stunned audiences. This was more than just magic: it was miraculous. It was so amazing that some spiritualists were convinced Pepper had discovered a way of really summoning spirits. A ghostly figure appeared on the stage out of thin air, interacted with the other characters and then disappeared in an instant. This was no dark séance, where ghostly effects happen in a darkened room and who knows what tricks are being pulled in the dark to cause them. Neither was it modern-day special effects, where it is all done on film or in the virtual world of a computer. This happened on a brightly lit stage in front of everyone’s eyes…

Stage setup for Pepper’s Ghost, from Wikipedia

Switch to the modern day and similar ghostly magic is now being used by fighter pilots. Have the military been funding X-Files research? Well, maybe, but there is nothing supernatural about Pepper’s Ghost. It is just an illusion. The show it first appeared in was a science show, though it went on to amaze audiences as part of magic shows for years to come, and it can still be found today, for example in Disney theme parks and on stage, bringing the virtual band Gorillaz to life.

Today’s “supernatural” often becomes tomorrow’s reality, thanks to technology. With Pepper’s Ghost, 19th-century magic has in fact become enormously useful 21st-century hi-tech. 19th-century magicians were more than just showmen: they were inventors, precision engineers and scientists, making use of the latest scientific results and frequently pushing technology forward themselves. People often think of magicians as being secretive, but they were also businessmen, often patenting the inventions behind their tricks – making them available for all to see, but also ensuring their rivals could not use them without permission. The magic behind Pepper’s Ghost was patented by Henry Dircks, a Liverpudlian engineer, in 1863 as a theatrical effect, though it was probably invented much earlier: it was described in an Italian book by Baptista Porta back in 1558.

Through the looking glass

So what was Pepper’s Ghost? It’s a cliché to say that “it’s all done with mirrors”, but it is quite amazing what you can do with them if you both understand their physics and are innovative enough to think up extraordinary ways of using old ideas. Pepper’s Ghost worked completely differently to the way mirrors are normally used in tricks, though. It was done with an ordinary sheet of glass, not a silvered mirror at all. If you have ever looked at your reflection in a window on a dark night, you have seen a weak version of Pepper’s Ghost.

The trick was to place a large, spotlessly clean sheet of glass at an angle at the front of the stage, between the actors and the audience. Lit in just the right way, it becomes a half mirror: not only can the stage be seen through the glass, but so can anything placed in the right position off stage, where the glass is pointing. Better still, because of the physics of reflection, the reflected images don’t appear to be on the surface of the glass at all, but the same distance behind it as the objects are in front.

The actor playing the ghost would perform in a hidden black area, so that he or she was the only thing there reflecting light. When the ghost was due to appear, a very strong light was shone on the actor and the reflection suddenly appeared – and as long as the actor stood the right distance from the glass, the ghost could be made to appear anywhere desired on the stage. To make it disappear in an instant, the light was simply switched off.
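In fact the geometry is simple enough to work out for yourself. Below is a minimal sketch in Python (treating the stage as a flat 2D floor plan, with positions invented for illustration – this is not taken from Pepper’s actual stage plans): reflecting the hidden actor’s position across the plane of the glass gives the spot where the audience sees the ghost, exactly as far behind the glass as the actor is in front of it.

```python
# A minimal sketch of the reflection rule described above. The glass is a line
# in a 2D floor plan; the ghost appears at the hidden actor's position mirrored
# across that line. All positions here are made up purely for illustration.
import numpy as np

glass_point = np.array([0.0, 0.0])      # a point on the angled sheet of glass
glass_normal = np.array([1.0, -1.0])    # perpendicular to a 45-degree pane

def reflect(point):
    """Reflect a 2D stage position across the line of the glass."""
    n = glass_normal / np.linalg.norm(glass_normal)
    return point - 2 * np.dot(point - glass_point, n) * n

def distance_to_glass(point):
    """Perpendicular distance from a position to the glass."""
    n = glass_normal / np.linalg.norm(glass_normal)
    return abs(np.dot(point - glass_point, n))

hidden_actor = np.array([-3.0, 1.0])    # actor in the concealed black area
ghost = reflect(hidden_actor)           # where the audience sees the ghost

print("Hidden actor at", hidden_actor, "-> ghost appears at", ghost)
print("Both are the same distance from the glass:",
      distance_to_glass(hidden_actor), "and", distance_to_glass(ghost))
```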

Jump to the 21st century and a similar technique has reappeared. Now the ghosts are instrument panels. A problem with controlling a fighter plane is that you don’t have time to look down at the instruments. You want the data you need to keep control of your plane to be wherever you are looking outside the plane. It needs to be not just in the right position but at the right depth, so you don’t need to refocus your eyes. Most importantly, you must still be able to see out of the plane in an unrestricted way… You need the Pepper’s Ghost effect. That is all “head-up” displays do, though the precise technology used varies.

C-130J: Co-pilot’s head-up display panel, by Todd Lappin (2004)
C-130J is a large, four-engine turboprop military transport aircraft known as the Super Hercules.

Satnav systems in cars are very dangerous if you have to keep looking down to see where the thing actually means you to turn. “What? This left turn or the next one?” Use a head-up display and the instructions can hover in front of you, out on the road where your eyes are focussed. Better still, you can project a yellow line (say) as though it were painted on the road, showing you the way off into the distance: follow the yellow brick road… Oh, and wasn’t the Wizard of Oz another great magician who used science and engineering rather than magic dust?

You can make your own Pepper’s Ghost complete with your favourite band appearing live on stage.

This article was originally published on the CS4FN website and can also be found on page 4 of Issue 5 (you can download a free PDF copy from the panel below). You can also download ALL of our free material here.


Related Magazine …


This blog is funded through EPSRC grant EP/W033615/1.

Featured image: Cute ghosts image by Alexa from Pixabay

Making sense of squishiness – 3D modelling the natural world

by Paul Curzon, Queen Mary University of London

Look out the window at the human-made world. It’s full of hard, geometric shapes – our buildings, the roads, our cars. They are made of solid things like tarmac, brick and metal that are designed to be rigid and stay that way. The natural world is nothing like that though. Things bend, stretch and squish in response to the forces around them. That provides a whole bunch of fascinating problems for computer scientists like Lourdes Agapito of Queen Mary, University of London to solve.

Computer scientists interested in creating 3-dimensional models of the world have so far mainly concentrated on modelling the hard things. Why? Because they are easier! You can see the results in computer-animated films like Toy Story, and in 3D worlds like Second Life that your avatar inhabits. Even the soft-looking things tend to be modelled as if they were rigid.

Lourdes works in this general area creating 3D computer models, but she wants to solve the problems of creating them automatically just from the flat images in videos and is specifically interested in things that deform – the squishy things.

Look out the window and watch the world go by. As you watch a woman walk past you have no problem knowing that you are looking at the same person as you were a second ago – even if she becomes partially hidden as she walks behind the post box and turns to post a letter. The sun goes behind a cloud and the scene is suddenly darker. It starts to rain and she opens an umbrella. You can still recognise her as the same object. Your brain is pulling some amazing tricks to make this seem so mundane. Essentially it is creating a model of the world – identifying all the 3-dimensional objects that you see and tracking them over time. If we can do it, why can’t a computer?

Unlike hard surfaces, deformable ones don’t look the same from one still to the next. You don’t just have to worry about changes in lighting, about the object being partially hidden, or about it looking different from a different angle: the object itself will be a different shape from one still to the next. That makes it far harder to work out which bits of one image are actually the same as the bits in the next. Lourdes has taken on a seriously hard problem.

Existing vision systems that create 3D objects have made things easier for themselves by using existing models. If a computer already has a model of a cube to compare what it sees with, then spotting a cube in the image stream is much easier than working it out from scratch. That doesn’t really generalise to deformable objects though because they vary too much. Another approach, used by the film industry, is to put highly visible markers on objects so that those markers can be tracked. That doesn’t help if you just want to point a camera out the window at whatever passes by though.

Software from Lourdes’ team creates a model of the human face as it deforms. A looping gif of a man’s face making different expressions next to a cartoon version which copies him. Red dots on his features are mapped to red dots on the cartoon face

Lourdes’ aim is to be able to point a camera at a deformable object and have a computer vision system create a 3D model simply by analysing the images. No markers, no existing models of what might be there, not even previous films to train it with: just the video itself. So far her team have created a system that can do this in some situations, such as with faces as a person changes their expression. Their next goal is to make the system work for a whole person as they are filmed doing arbitrary things.

It’s the technical challenge that inspires Lourdes the most, though once the problems of deformable objects are solved there are of course applications. One immediately obvious area is the operating theatre. Keyhole surgery is now very common. It involves a surgeon operating remotely, seeing what they are doing by looking at flat video images from a fibre-optic probe inside the body of the person being operated on. The image is flat, but the inside of the person the surgeon is making cuts in is 3-dimensional. It would be far less error-prone if the surgeon were looking at an accurate 3D model built from the video feed rather than just a flat picture. Of course, the inside of your body is made of exactly the kind of squishy, deformable surfaces that Lourdes is interested in. Get the computer science right and technologies like this will save lives.
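Lourdes’ methods for deformable objects are far too sophisticated to squeeze in here, but the classic rigid-object trick gives a flavour of why flat video contains 3D information at all. The toy sketch below (synthetic points and simple orthographic cameras, in the spirit of Tomasi-Kanade factorisation – definitely not her system) tracks the same points across several frames, stacks their 2D coordinates into one big matrix, and recovers a 3D shape from how that matrix factorises.

```python
# A toy sketch of recovering 3D structure from 2D point tracks (rigid case only,
# synthetic data). Real deformable objects, as in Lourdes Agapito's work, need
# far more than this.
import numpy as np

rng = np.random.default_rng(0)

# Invent a rigid 3D object as a cloud of P points.
P = 40
shape_3d = rng.normal(size=(3, P))

# "Film" it from F viewpoints with simple orthographic cameras: each frame
# contributes two rows of the measurement matrix (the x and y image coordinates).
F = 12
rows = []
for _ in range(F):
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # a random camera rotation
    rows.append(q[:2] @ shape_3d)                  # project onto the image plane
W = np.vstack(rows)                                # 2F x P measurement matrix

# Centre each row, then factorise with the SVD: W splits into camera motion
# times 3D shape (up to an affine ambiguity), because its rank is only 3.
W = W - W.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(W, full_matrices=False)
motion = U[:, :3] * np.sqrt(s[:3])                 # 2F x 3: where the cameras were
structure = np.sqrt(s[:3])[:, None] * Vt[:3]       # 3 x P: the recovered 3D points

# The product reproduces the tracked 2D points almost exactly.
print("Reprojection error:", np.abs(W - motion @ structure).max())
```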

At the same time as tackling seriously hard if squishy computer science problems, Lourdes is also a mother of three. A major reason she can fit it all in, as she points out, is that she has a very supportive partner who shares in the childcare. Without him it would be impossible to balance all the work involved in leading a top European research team. It’s also important to get away from work sometimes. Running regularly helps Lourdes cope with the pressures and as we write she is about to run her first half marathon.

Lourdes may or may not be the person who turns her team’s solutions into the applications that in the future save lives in operating theatres, spot suspicious behaviour in CCTV footage or allow film-makers to quickly animate the actions of actors. Whoever does create the applications, we still need people like Lourdes who are just excited about solving the fundamental problems in the first place.


This article was originally published on the CS4FN website in ~2011. You can read more about Women in Computing here.


This blog is funded through EPSRC grant EP/W033615/1.

Watching whales well – the travelling salesman problem

An aerial photograph of São Miguel lighthouse in the Azores showing the surrounding tree-covered cliff and winding road.

by Paul Curzon, Queen Mary University of London

Sasha owns a new tour company and her first tours are to the Azores, a group of volcanic islands in the Atlantic Ocean, off the coast of Portugal. They are one of the best places in the world to see whales and dolphins, so lots of people are signing up to go.

Sasha’s tour as advertised is to visit all nine islands in the Azores: São Miguel, Terceira, Faial, Pico, São Jorge, Santa Maria, Graciosa, Flores and Corvo. The holidaymakers go whale watching as well as visiting the attractions on each island, like swimming in the lava pools. Sasha’s first problem, though, is to sort out the itinerary. She has to work out the best order to visit the islands so her customers spend as little time as possible travelling, leaving more for watching whales and visiting volcanoes. She also doesn’t want the tour to go back to the same island twice – and she needs it to end up back at the starting island, São Miguel, for the return flight home.

Trouble in paradise

It sounds like it should be easy, but it’s actually an example of a computer science problem that dates back at least to the 1800s. It’s known as ‘The Travelling Salesman Problem’ because it is the same problem faced by a salesman who wants to visit a series of cities and get back to base at the end of the trip. It is surprisingly difficult.

It’s not that hard to come up with any old answer (just join the dots!), but it’s much tougher to come up with the best answer. Of course a computer scientist doesn’t want to just solve one-off problems like Sasha’s but to come up with a way of solving any variant of the problem. Sasha, of course, agrees – once she’s sorted out the Azores itinerary, she then needs to solve similar problems, like the day trip round São Miguel. Her customers will visit the lakes, the tea factory, the hot spring-fed swimming pool in the botanic gardens and so on. Not only that, once Sasha’s done with the Azores, she then needs to plan a wildlife tour of Florida. Knowing a quick way to do it would help her a lot.

The long way round

No one has yet come up with a quick way of finding the best solution to the Travelling Salesman Problem, though, and it is generally believed to be impossible. You can find the best solution in theory, of course: just try all the alternatives. Sasha could first work out how long the tour is if you go São Miguel, Terceira, Faial, Pico, São Jorge, Santa Maria, Graciosa, Flores, Corvo and back to São Miguel, then work out the time for a different order, swapping Corvo and Flores, say. Then she could try a different route, and keep going until she knew the length of every variation. She would then just pick the best. Trouble is, that takes forever.

Even this small problem, with only 9 islands, has over 20 000 possible routes to check. Go up to a tour of 15 destinations and there are over 43 billion. Add a few more and it would take centuries for a fast computer running flat out to check them all. Bigger still and the computer would have to run for longer than the time left before the end of the universe. Hmmm. It’s a problem then.
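Here is a minimal sketch in Python of the brute-force idea, and of how quickly the number of routes explodes. The island names are real, but the travel times are invented purely so the example runs – a real planner would use actual flight and ferry times.

```python
# Brute force: try every ordering of the islands and keep the shortest round trip.
from itertools import permutations
from math import factorial
import random

islands = ["São Miguel", "Terceira", "Faial", "Pico", "São Jorge",
           "Santa Maria", "Graciosa", "Flores", "Corvo"]

random.seed(1)
time = {}                      # made-up symmetric travel times between islands
for a in islands:
    for b in islands:
        if a < b:
            time[a, b] = time[b, a] = random.randint(1, 10)

def tour_length(order):
    """Total time for São Miguel -> the other islands in this order -> São Miguel."""
    route = ["São Miguel"] + list(order) + ["São Miguel"]
    return sum(time[route[i], route[i + 1]] for i in range(len(route) - 1))

best = min(permutations(islands[1:]), key=tour_length)
print("Best tour:", " -> ".join(["São Miguel"] + list(best) + ["São Miguel"]))

# How the number of distinct round trips blows up (each counted once per direction):
print("9 islands:", factorial(8) // 2, "round trips")           # 20 160
print("15 destinations:", factorial(14) // 2, "round trips")    # about 43.6 billion
```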

Be greedy

The answer is not to be such a perfectionist: accept that a good solution will have to do, even though it may not be the absolute best. One way to get a good solution is to use a ‘greedy’ algorithm. You start at São Miguel and just go from there to the nearest island, from there to the nearest island not yet visited, and so on till you have done them all. That would probably work well for the Azores as they are in groups, so visiting the close ones in each group together makes sense. It doesn’t guarantee the best answer in all cases though.
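As a minimal sketch of that greedy approach (reusing the island list and invented travel times from the brute-force example above):

```python
# Greedy ('nearest neighbour') tour: always hop to the closest unvisited island.
def greedy_tour(start, islands, time):
    unvisited = set(islands) - {start}
    route = [start]
    while unvisited:
        nearest = min(unvisited, key=lambda island: time[route[-1], island])
        route.append(nearest)
        unvisited.remove(nearest)
    return route + [start]          # and finally fly home

print(" -> ".join(greedy_tour("São Miguel", islands, time)))
```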

Or just go climb a hill

Another way is to use a version of ‘hill climbing’. Here you take any old route and then try to improve it by making small changes – swapping pairs of legs over, say. Instead of going Faial to Pico and later Corvo to Flores, try Faial to Corvo and Pico to Flores, with the stretch in between followed in the opposite order. If the change is an improvement, keep it and make later changes to that; otherwise stick with the original. Either way, keep trying changes on the best solution you’ve found so far, until you run out of time.
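Here is a minimal sketch of hill climbing with that pair-of-legs swap (often called a ‘2-opt’ move), again reusing the invented data above and starting from the greedy tour:

```python
# Hill climbing: repeatedly reverse a stretch of the route and keep improvements.
import random

def route_time(route, time):
    return sum(time[route[k], route[k + 1]] for k in range(len(route) - 1))

def hill_climb(route, time, tries=10_000):
    best = route[:]
    for _ in range(tries):
        # Pick two cut points (never the fixed start/end island) and reverse
        # the stretch of the route between them: a '2-opt' swap.
        i, j = sorted(random.sample(range(1, len(best) - 1), 2))
        candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
        if route_time(candidate, time) < route_time(best, time):
            best = candidate        # keep the improvement, discard the rest
    return best

improved = hill_climb(greedy_tour("São Miguel", islands, time), time)
print(" -> ".join(improved), "- total time:", route_time(improved, time))
```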

So Sasha may want to run a great tour company but there may not be enough time in the universe for her tours to be guaranteed perfect…unless of course she keeps them very small. After all, just visiting São Miguel and Terceira makes a great holiday anyway.


This article was originally published on the CS4FN website and a copy can also be found on pages 14-15 of issue 10 of the CS4FN magazine, which you can download as a PDF below. All of our free material can be downloaded here.


Related Magazine …


This blog is funded through EPSRC grant EP/W033615/1.

A round up of our posts for #BlackHistoryMonth 2022

The five shades used for skin tone emojis


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


This blog is funded through EPSRC grant EP/W033615/1.

Recognising (and addressing) bias in facial recognition tech – the Gender Shades Audit #BlackHistoryMonth

The five shades used for skin tone emojis

Some people have a neurological condition called face blindness (also known as ‘prosopagnosia’) which means that they are unable to recognise people, even those they know well – sometimes including their own face in the mirror! They only know who someone is once that person starts to speak. They can certainly detect faces, though, even if they might struggle to classify them in terms of gender or ethnicity. Most people, in contrast, have an exceptionally good ability to detect and recognise faces – so good, in fact, that we even detect faces when they’re not actually there. This is called pareidolia: perhaps you can see a surprised face in the picture of USB sockets below.

A unit containing four sockets, 2 USB and 2 for a microphone and speakers.
Happy, though surprised, sockets

What if facial recognition technology isn’t as good at recognising faces as it has sometimes been claimed to be? If the technology is being used in the criminal justice system, and gets the identification wrong, this can cause serious problems for people (see Robert Williams’ story in “Facing up to the problems of recognising faces“).

In 2018 Joy Buolamwini and Timnit Gebru shared the results of research they’d done, testing three different commercial facial recognition systems. They found that these systems were much more likely to wrongly classify darker-skinned female faces than faces from any other group, including lighter- and darker-skinned male faces. In other words, the systems were far less reliable for some people than for others.

“The findings raise questions about how today’s neural networks, which … (look for) patterns in huge data sets, are trained and evaluated.”

Study finds gender and skin-type bias in commercial artificial-intelligence systems
(11 February 2018) MIT News

The Gender Shades Audit

Facial recognition systems are trained to detect, classify and even recognise faces using a bank of photographs of people. Joy and Timnit examined two banks of images used to train facial recognition systems and found that around 80 per cent of the photos used were of people with lighter coloured skin. 

If the photographs aren’t fairly balanced in terms of having a range of people of different gender and ethnicity then the resulting technologies will inherit that bias too. Effectively the systems here were being trained to recognise light-skinned people.

The Pilot Parliaments Benchmark

They decided to create their own set of images, ensuring that it covered a wide range of skin tones and had an equal mix of men and women (‘gender parity’). They did this by selecting photographs of members of various parliaments around the world that are known to have a reasonably equal mix of men and women, choosing parliaments from countries with predominantly darker-skinned people (Rwanda, Senegal and South Africa) and from countries with predominantly lighter-skinned people (Iceland, Finland and Sweden).

They labelled all the photos according to gender (making some assumptions based on name and appearance where pronouns weren’t available) and used the Fitzpatrick scale (see ‘Different shades’, below) to classify skin tones. The result was a set of photographs labelled as dark male, dark female, light male or light female, with a roughly equal mix across all four categories – this time, 53 per cent of the people (male and female) were lighter-skinned.

A composite image showing the range of skin tone classifications with the Fitzpatrick scale on top and the skin tone emojis below.

Different shades

The Fitzpatrick skin tone scale (top) is used by dermatologists (skin specialists) as a way of classifying how someone’s skin responds to ultraviolet light. There are six points on the scale with 1 being the lightest skin and 6 being the darkest. People whose skin tone has a lower Fitzpatrick score are more likely to burn in the sun and not tan, and are also at greater risk of melanoma (skin cancer). People with higher scores have darker skin which is less likely to burn and they have a lower risk of skin cancer. 

Below it is a variation of the Fitzpatrick scale, with five points, which is used to create the skin tone emojis that you’ll find on most messaging apps in addition to the ‘default’ yellow. 

Testing three face recognition systems

Joy and Timnit tested the three commercial face recognition systems against their new database of photographs – a fair test covering the wide range of faces a recognition system might come across – and this is where they found that the systems were less able to correctly classify particular groups of people. The systems were very good at classifying lighter-skinned men and darker-skinned men, but much less able to correctly classify darker-skinned women, and women overall.
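The key idea behind the audit is simple: don’t just measure one overall accuracy figure, break the errors down by group. The sketch below illustrates that step in Python; it is not the Gender Shades code, and the handful of records in it are invented.

```python
# A minimal, made-up illustration of a disaggregated audit: compute the error
# rate separately for each group rather than one overall accuracy.
from collections import defaultdict

# Hypothetical records: (group, did the system classify this photo correctly?)
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("darker-skinned male", True), ("darker-skinned male", False),
    ("lighter-skinned female", True), ("lighter-skinned female", True),
    ("darker-skinned female", False), ("darker-skinned female", True),
    # ... a real audit would use hundreds of labelled test photos per group
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in totals:
    rate = 100 * errors[group] / totals[group]
    print(f"{group}: {rate:.0f}% error rate over {totals[group]} photos")
```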

These tools, trained on sets of data that had a bias built into them, inherited those biases and this affected how well they worked. Joy and Timnit published the results of their research and it was picked up and discussed in the news as people began to realise the extent of the problem, and what this might mean for the ways in which facial recognition tech is used. 

“An audit of commercial facial-analysis tools found that dark-skinned faces are misclassified at a much higher rate than are faces from any other group. Four years on, the study is shaping research, regulation and commercial practices.”

The unseen Black faces of AI algorithms (19 October 2022) Nature

There is some good news though. The three companies made changes to improve their facial recognition systems, several US cities have already banned the use of this technology in criminal investigations, and more are considering doing the same. People around the world are becoming more aware of the limitations of this type of technology and the harms to which it may (perhaps unintentionally) be put, and are calling for better regulation of these systems.

Further reading

Study finds gender and skin-type bias in commercial artificial-intelligence systems (11 February 2018) MIT News
Facial recognition software is biased towards white men, researcher finds (11 February 2018) The Verge
Go read this special Nature issue on racism in science (21 October 2022) The Verge

More technical articles

Joy Buolamwini and Timnit Gebru (2018) Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research 81:1-15.
The unseen Black faces of AI algorithms (19 October 2022) Nature News & Views


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


This blog is funded through EPSRC grant EP/W033615/1.