Mary and Eliza Edwards: the mother and daughter human computers

A globe with lines of longitude marked. Image from Wikimedia Commons, public domain.

Mary Edwards was a computer, a human computer. Even more surprisingly for the time (the 1700s), she was a female computer (and so was her daughter Eliza).

In the early 1700s navigation at sea was a big problem. In particular, if you were lost in the middle of the Atlantic Ocean, there was no good way to determine your longitude: your position east to west. There were of course no satnavs at the time, not least because there would be no satellites for another 250 years!

Longitude could be worked out by taking sightings of the position of the sun, moon or planets at different times of the day, but only if you had an accurate time. Unfortunately, there was no good way to know the precise time when at sea. Then, in the mid-1700s, clockmaker John Harrison invented a clock that could survive a rough sea voyage and still keep highly accurate time. The problem then moved to helping mariners know where the moon and planets were supposed to be at any given time, so they could use the method.

As a result, the Board of Longitude (set up by the UK government to solve the problem), together with the Royal Greenwich Observatory, began publishing the Nautical Almanac in 1767. It consisted of exactly this kind of astronomical data for use by navigators at sea. For example, it contained tables of the position of the moon, specifically its angle in the sky relative to the sun and planets (known as lunar distances). But how were these angles known years in advance, to create the annual almanacs? Basic Newtonian physics allows the positions of the planets and the moon to be calculated, based on how everything in the solar system moves together, from their positions at a known time. From that, their positions in the sky at any future time can be worked out. Those answers went into the Nautical Almanac. Each year a new set of tables was needed, so the answers also had to be constantly recomputed.
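
The navigator’s final step, once the almanac and clock had given them Greenwich time, was simple arithmetic: the Earth turns through 15 degrees of longitude every hour, so the difference between local time (found from the sun) and Greenwich time gives your position east or west. Here is a minimal sketch of that step in Python; the times are invented for illustration.

```python
# A minimal sketch of finding longitude from time. The Earth turns
# 360 degrees in 24 hours, so each hour of difference between local
# time and Greenwich time is 15 degrees of longitude.

def longitude_from_times(local_noon_hours, greenwich_hours):
    """Longitude in degrees (negative = west of Greenwich)."""
    hours_difference = local_noon_hours - greenwich_hours
    return hours_difference * 15.0  # 360 degrees / 24 hours

# Example: the sun is highest in the sky (local noon, 12:00) when the
# ship's chronometer, set to Greenwich time, reads 15:00 (3pm).
print(longitude_from_times(12.0, 15.0))  # -45.0: 45 degrees west
```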

But who did the complex calculations? No calculators, computers or other machines that could do it automatically existed, nor would they for well over a century. It had to be done by human mathematicians. Computers then were just people, following algorithms precisely and accurately, to get jobs like this done. The Astronomer Royal, Nevil Maskelyne, recruited 35 male mathematicians to do the job. One was the Revd John Edwards (well-educated clergy were of course perfectly capable of doing maths in their spare time!). He was paid for calculations done at home from 1773 until he died in 1784.

However, when he died Maskelyne received a letter from his wife Mary, officially revealing that she had in fact been doing much of the calculation herself. With no family income any more, she asked if she could continue the work to support herself and her daughters. The work had been of high enough quality that John Edwards was kept on year after year, so Mary was clearly an asset to the project. Maskelyne had also visited the family several times, so knew them, and was possibly even unofficially aware of who was actually doing the work towards the end. He was open-minded enough to give her a full-time job, and she worked as a human computer until her death 30 years later. Women doing such work was not at all normal at the time, and this became apparent when Maskelyne himself died and the work started to dry up. The quality of her work, though, eventually persuaded the new Astronomer Royal to continue giving her work.

Just as she had helped her husband, her daughter Eliza helped her do the calculations, becoming proficient enough herself that when Mary died, Eliza took over the job, continuing the family business for another 17 years. Unfortunately, however, in 1832 the work was moved to a new body called ‘His Majesty’s Nautical Almanac Office’. At that point, despite Mary and Eliza having proved for half a century or more that they were at least as good as the men, government-imposed civil service rules came into force that meant women could no longer be employed to do the work.

Mary and Eliza, however, had done lots of good, helping mariners safely navigate the oceans for very many years through their work as computers.


The Digital Seabed: Data in Augmented Reality

A globe (North Atlantic visible) showing ocean depth information, with the path of HMS Challenger shown in red. Image by Daniel Gill.

For many of us, the deep sea is a bit of a mystery. But an exciting interactive digital tool at the National Museum of the Royal Navy is bringing the seabed to life!

It turns out that the sea floor is just as interesting as the land where we spend most of our time (unless you’re a crab, of course, in which case you spend most of your time on the sea floor). I recently learnt about the sea floor at the National Museum of the Royal Navy in Portsmouth, in their “Worlds Beneath the Waves” exhibition, which documents 150 years of deep-sea exploration.

One ship which revolutionised deep ocean study was HMS Challenger. Launched in London in 1858, it went on to make a 68,890 nautical-mile voyage all over the Earth’s oceans between 1872 and 1876. One of its scientific goals was to measure the depth of the seabed as it circled the Earth. To make these measurements, a long rope with a weight at one end was dropped into the water and sank to the bottom. The length of rope let out before the weight hit the floor was measured. It’s a simple process, but it worked!

Thankfully, modern technology has transformed bathymetry (measuring the depth and shape of the sea floor). Now, sea floor depths are measured using sonar (sound) and lidar (light) from ships, or using special sensors on satellites. All of these methods send signals down to the seabed and time how long it takes for a response. Knowing the speed of sound or light through air and water, you can calculate the distance to whatever reflected the signal.
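
The calculation itself is simple enough to sketch. Here is a toy version in Python, assuming a typical speed of sound in seawater of about 1500 metres per second (the real value varies with temperature, salinity and depth) and an invented echo time:

```python
# A minimal sketch of echo sounding: send a pulse, time the echo,
# halve the round trip.

SPEED_OF_SOUND_IN_SEAWATER = 1500.0  # metres per second, approximate

def depth_from_echo(round_trip_seconds):
    """Depth in metres from the round-trip time of a sonar ping."""
    # The pulse travels down to the seabed and back, so the one-way
    # distance is half the total distance travelled.
    return SPEED_OF_SOUND_IN_SEAWATER * round_trip_seconds / 2.0

print(depth_from_echo(4.0))  # a 4-second echo: about 3000 m of water
```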

You may be thinking, why do we need to know how deep the ocean is? Well, apart from the human desire to explore and map our planet, it’s also useful for navigation and safety: in smaller waterways and ports, it’s very helpful to know whether there’s enough water below the boat to stay afloat!

It’s also useful to look at fault lines, deep trenches (such as Challenger Deep, the deepest known point in the ocean, named after HMS Challenger) and the underwater mountain ranges that separate continental plates. Studying these can help us to predict earthquakes and understand continental drift (read more about continental drift).

The sand table with colours projected onto it showing height. Image by Daniel Gill.

We now have a much better understanding of the seabed, including detailed maps of sea floor topography around the world. So, we know what the ocean floor looks like at the moment, but how can we use this to understand the future of our waterways? This is where computers come in.

Near the end of the exhibition sits a table covered in sand, which has the current topography of the sand projected onto it. Where the sand is piled up higher it is coloured red and orange, and where it is lower, green and blue. Looking across the table you can see how sand at the same level, even far apart, falls within the same band of colour.

The projected image automatically adjusts (below) to the removal of the hill in red (above). Image by Daniel Gill.

But this isn’t even the coolest part! When you pick up and move sand around, the colours automatically adjust to the new sand topography, allowing you to shape the seabed at will. The sand itself, however, will flow and move depending on gravity, so an unrealistically tall tower will soon fall down and form a more rotund mound. 

Want to know what will happen if a meteor impacts? Grab a handful of sand and drop it onto the table (without making a mess) and see how the topographical map changes with time!

The technology above the table. Image by Daniel Gill.

So how does this work? Looking above the table, you can see an Xbox Kinect sensor and a projector. The Kinect works much like the lidar systems installed on ships – it sends beams of infrared light down onto the sand, which bounce back to the sensor, and the time taken is measured. This creates a depth map, just like ships do, but on a much smaller scale. This map is turned into colours and projected back onto the sand.
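
As a sketch of the idea, here is how a program might turn a grid of measured sand heights into colour bands. The thresholds, colours and heights are all invented for illustration, not taken from the real exhibit:

```python
# A minimal sketch of turning a depth map into colour bands, the way
# the sand table colours height.

def colour_for_height(height_cm):
    """Pick a colour band for a sand height (cm above the table)."""
    if height_cm < 2:
        return "blue"    # lowest: virtual water level
    elif height_cm < 5:
        return "green"   # low-lying sand
    elif height_cm < 8:
        return "orange"  # higher ground
    else:
        return "red"     # peaks

# A tiny 3x3 "depth map" of sand heights, as a grid of centimetres.
depth_map = [
    [1, 3, 9],
    [2, 6, 7],
    [0, 4, 5],
]
coloured = [[colour_for_height(h) for h in row] for row in depth_map]
for row in coloured:
    print(row)
```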

Virtual water fills the valleys. Image by Daniel Gill.

This is not the only feature of this table, however: it can also run physics simulations! By placing your hand over the sand, you can add virtual water, which flows realistically into the lower areas of sand, and even responds to the movement of sand.

The mixing of physical and digital representations of data like this is an example of augmented, or mixed, reality. It can help visualise things that you might otherwise find difficult to imagine, perhaps by simulating the effects of building a new dam, for example. Models like this can help experts and students, and, indeed, museum visitors, to see a problem in a different and more interactive way.

– Daniel Gill, Queen Mary University of London


An experiment in buoyancy

Here is a little science experiment anyone can do to help understand the physics of marine animals and their buoyancy. It helps give insight into how animals such as ancient ammonites and now cuttlefish can move up and down at will just by changing the density of internal fluids.* (See Ammonite propulsion of underwater robots). It also shows how marine robots could do the same with a programmed ammonite brain.

First take a beaker of water and a biro pen top. Put a small piece of blu tack over the top of the pen top (to cover the holes that are there to hopefully stop you suffocating if you were to swallow one – never chew pen tops!). Next, put a larger blob of blu tack round the bottom of the pen top. You will have to use trial and error to get the right amount. Your aim is to make the pen top float vertically upright in the water, with the smaller piece of blu tack just floating above the surface. Try it, by carefully placing the pen top vertically into the water. If it doesn’t float like that, dry the blu tack then add or remove a bit until it does float correctly.

It now has neutral buoyancy. The force of gravity pulling it down is the same as the buoyancy force (or upthrust) pushing it upwards, caused by the air trapped in the top of the lid… so it stays put, neither sinking nor rising.

Now fill a drink bottle with water all the way to the top. Then add a little more water so the water curves up above the top of the bottle (held in place by surface tension). Carefully, drop in the weighted pen top and screw on the top of the bottle tightly.

The pen top should now just float in the water at some depth. It is acting just like the swim bladder of a fish, with the air in the pen top preventing the weight of the blu tack pulling it down to the bottom.

Now, squeeze the side of the bottle. As you squeeze, the pen top should suddenly sink to the bottom! Let go and it rises back up. What is happening? The force of gravity is still pulling down the same as it was (the mass hasn’t changed), so if it is sinking, the buoyancy force pushing up must be less than it was.

What is happening? We are increasing the pressure inside the bottle, so the water compresses the air in the pen top, reducing its volume and so increasing the density of your little diving bell. Less volume means less water displaced, so less buoyancy force pushing up, and it sinks.
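
If you want to see the physics as numbers, here is a rough sketch in Python using Boyle’s law (pressure times volume stays constant for the trapped air). Every value is invented for illustration, but it shows how a small squeeze can tip the diver’s overall density past that of water:

```python
# A minimal sketch of the physics: squeezing the bottle raises the
# pressure, Boyle's law (P1 * V1 = P2 * V2) shrinks the trapped air,
# and the pen top's overall density rises. It sinks once its density
# exceeds that of water (about 1000 kg per cubic metre).

WATER_DENSITY = 1000.0  # kg/m^3

def diver_density(mass_kg, solid_volume_m3, air_volume_m3):
    """Overall density of the pen-top 'diver' (air mass is negligible)."""
    return mass_kg / (solid_volume_m3 + air_volume_m3)

mass = 2.0e-3            # 2 g of pen top plus blu tack
solid_volume = 1.0e-6    # 1 cm^3 of plastic and blu tack
air_volume = 1.05e-6     # just over 1 cm^3 of trapped air

p1 = 100_000.0  # pressure before squeezing, in pascals (about 1 atm)
for p2 in (100_000.0, 110_000.0):  # before and during the squeeze
    squeezed_air = p1 * air_volume / p2   # Boyle's law: V2 = P1*V1/P2
    d = diver_density(mass, solid_volume, squeezed_air)
    print(f"pressure {p2:.0f} Pa -> density {d:.0f} kg/m^3,",
          "sinks" if d > WATER_DENSITY else "floats")
```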

That is essentially the trick that ammonites evolved many, many millions of years ago: squeezing the gas inside their shell to suddenly sink and get away quickly when they sensed danger. It is what cuttlefish still do today, squeezing the gas in their cuttlebone so they become denser.

So, if you were basing a marine robot on an ammonite (with movement also possible by undulating its arms, and by jet propulsion, perhaps) then your programming task for controlling its movement would involve it being able to internally squeeze an air space by just the right amount at the right time!
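
As a taste of what that program might look like, here is a toy feedback loop in Python that squeezes or relaxes a virtual air space to hold a target depth. The “physics” and every constant are invented; a real robot would need real pressure sensors and careful tuning:

```python
# A minimal sketch of an 'ammonite brain' for depth control: a simple
# proportional controller that adjusts how squeezed the internal air
# space is, so the robot settles at a target depth.

target_depth = 5.0    # metres: the depth we want to hover at
depth = 0.0           # current depth (m)
velocity = 0.0        # downward velocity (m/s)
DT = 0.1              # simulation time step (s)
GAIN = 0.2            # control gain: how hard to react to depth error

for step in range(301):
    error = target_depth - depth          # positive: we are too shallow
    # Squeeze the air space: 0.5 is neutral buoyancy; more squeeze
    # makes us denser (sink), less makes us lighter (rise).
    squeeze = min(1.0, max(0.0, 0.5 + GAIN * error))
    accel = 2.0 * (squeeze - 0.5)         # toy buoyancy model (m/s^2)
    velocity = (velocity + accel * DT) * 0.8   # 0.8 models water drag
    depth = max(0.0, depth + velocity * DT)
    if step % 50 == 0:
        print(f"t={step*DT:5.1f}s  depth={depth:5.2f} m  squeeze={squeeze:.2f}")
```

Run it and the printed depth creeps up towards the 5 metre target as the controller backs the squeeze off towards neutral buoyancy.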

In fact, several groups of researchers have created marine robots based on ammonites. For example, a group at the University of Utah have been doing so to better understand the real, extinct ammonites themselves, including how they actually moved. The team have been testing different shell shapes to see if some work better than others, and so just how efficient ammonite shell shapes actually were. By programming an ammonite robot brain you could similarly, for example, better understand how they controlled their movement and how effective it really was in practice (not just in theory).

Science can now be done in a completely different way to the traditional approach of just using discovery, observation and experiment. You can do computer and robotic modelling too, running experiments on your creations. If you want to study marine biology, or even fancy being a palaeontologist with a difference, understanding long extinct life, you can now do it through robotics and computer science, not just by watching animals or digging up fossils (though understanding some physics is still important to get you started).

– Paul Curzon, Queen Mary University of London

*Thanks to the Dorset Wildlife Trust at the Chesil Beach Visitor Centre, Portland, where I personally learnt about ammonite and cuttlefish propulsion in a really fun science talk on the physics of marine biology, including a demonstration of this experiment.


Ammonite propulsion of underwater robots

Ammonite statue showing the creature inside its shell. Image by M W from Pixabay.

Intending to make a marine robot that will operate under the ocean? Time to start learning, not just engineering and computing, but the physics of marine biology! And, it turns out, you can learn a lot from ammonites: marine creatures that ruled the oceans for hundreds of millions of years before dying out with the dinosaurs. Perhaps your robot needs a shell, not for protection, but to help it move efficiently.

If you set yourself the task of building an underwater robot, perhaps to work with divers in exploring wrecks or studying marine life, you immediately have to solve a problem that is different to the ones traditional land-based robotics researchers face. Most of the really cool videos of the latest robots tend to show how great they are at balancing on two legs, doing some martial art, perhaps, or even gymnastics. Or maybe they are hyping how good they are at running through the forest like a wolf, now on four legs. Once you go underwater, all that exciting stuff with legs becomes a bit pointless. Now it’s all about floating, not balancing. So what do you do?

The obvious thing perhaps is to just look at boats, submarines and torpedoes and design a propulsion system with propellers, maybe using an AI to design the most efficient propeller shape, then write some fancy software to control it as efficiently as possible. Alternatively, you could look at what the fish do and copy them!

What do fish do? They don’t have propellers! The most obvious thing is they have tails and fins and wiggle a lot. Perhaps your marine robot could be streamlined like a fish and, well, swim its way through the sea. That involves the fish using its muscles to make waves ripple along its body, pushing against the water. In exerting a force on the water, by Newton’s laws, the water pushes back and the fish moves forward.

Of course, your robot is likely to be heavy so will sink. That raises the other problem. Unlike on land, in water you need to be able to move up (and down) too. Being heavy, moving down is easy. But then that is the same for fish. All that fishy muscle is heavier than water so sinks too. Unless they have evolved a way to solve the problem, fish sink to the bottom and have to actively swim upwards if they want to be anywhere else. Some live on the bottom so that is exactly what they want. Maybe your robot is to crawl about on the sea floor too, so that may be right for it too.

Many, many other fish don’t want to be at the bottom. They float without needing to expend any energy to do so. How? They evolved a swim bladder that uses the physics of buoyancy to make them naturally float, neither rising nor sinking. They have what is called neutral buoyancy. Perhaps that would be good for your robot too, not least to preserve its batteries for more important things like moving forwards. How do swim bladders do it? They are basically bags of air that give the fish buoyancy – a bit like you wearing a life jacket. Get the amount of air right and the buoyancy, which provides an upward force, can exactly counteract the force of gravity that is pulling your robot down to the depths. The result is that the robot just floats under the water where it is. It now has to actively swim if it wants to move down towards the sea floor. So, if you want your robot to do more than crawl around on the bottom, designing in a swim bladder is a good idea.

Perhaps you can save more energy and simplify things even more, though. Perhaps your robot could learn from ammonites. These fearsome predators, long extinct, dying out with the dinosaurs and now found only as fossils, evolved a really neat way to move up and down in the water. Ammonites were once believed to be curled up snakes turned to stone, but they were actually molluscs (like snails), and the distinctive spiral structure preserved in fossils was their shell. They didn’t live deep in the spiral though, just in the last chamber at the mouth of the spiral, with their multi-armed, octopus-like body sticking out the end to catch prey. So what were the rest of the chambers for? Filled with liquid or gas, they would act exactly like a swim bladder, providing buoyancy control. However, it is likely that, as with the similar modern day nautilus, the ammonite could squeeze the gas or liquid of its spiral shell into a smaller volume, changing its density. Doing that changes its buoyancy: with increased density the buoyancy is less, so gravity exerts a greater force than the lift the shell’s contents are giving and it suddenly sinks. Decrease the density by letting the gas or liquid expand and it rises again.
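
The underlying arithmetic is Archimedes’ principle: the upthrust equals the weight of water displaced, so shrinking the gas shrinks the displaced volume and the animal sinks. A minimal sketch, with invented numbers:

```python
# A minimal sketch of the buoyancy arithmetic (Archimedes' principle):
# the upward force equals the weight of water displaced, so changing
# the volume the shell takes up changes whether the animal rises or
# sinks.

WATER_DENSITY = 1000.0  # kg/m^3
G = 9.81                # m/s^2

def net_upward_force(mass_kg, displaced_volume_m3):
    buoyancy = WATER_DENSITY * displaced_volume_m3 * G  # upthrust
    weight = mass_kg * G
    return buoyancy - weight

mass = 1.0  # a 1 kg 'ammonite'
for volume in (1.05e-3, 1.0e-3, 0.95e-3):  # gas expanded -> squeezed
    f = net_upward_force(mass, volume)
    state = "rises" if f > 1e-6 else ("sinks" if f < -1e-6 else "hovers")
    print(f"volume {volume*1e3:.2f} litres: net force {f:+.2f} N -> {state}")
```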

You can see how it works with this simple experiment.

You don’t need a shell of course; other creatures have evolved more sophisticated versions. A cuttlebone does the same job. It is an internal organ of the cuttlefish (which are not fish but cephalopods, like octopus and squid, so related to ammonites). Cuttlebones are the white elongated discs that you find washed up on the beach (especially along the south and west coasts of the UK). They are really hard on one side but slightly softer on the other. They act like an adjustable swim bladder. The hard upper side prevents gas escaping (whilst also adding a layer of armour). The soft lower side is full of microscopic chambers that the cuttlefish can push gas into or pull gas out of at will, with the same effect as that of the ammonite’s shell.

This whole mechanism is essentially how the buoyancy tanks of a submarine work. First used in the original practical submarine, the Nautilus of 1800, they are flooded and emptied to make a submarine sink and rise.

Build the idea of a cuttlebone or ammonite shell into your robot and it can rise and sink at will with minimal energy wasted. Cuttlefish, though, also have another method of propulsion (aside from undulating their body) that allows them to escape from danger in a hurry: jet propulsion. By ejecting water stored in their mantle through their syphon (a tube), they can suddenly give themselves lots of acceleration, just like a jet engine gives a plane. That would normally be a very inefficient form of propulsion, using lots of energy. However, experiments show that when used with negative buoyancy, such as the cuttlebone provides, this jet propulsion is actually much more efficient than it would otherwise be. So the cuttlebone saves energy again. And a rare ammonite fossil with the preserved muscles of the actual animal suggests that ammonites had similar jet propulsion too. Given some ammonites grew as large as several metres across, that would have been an amazing sight to see!

To be a great robotics engineer, rather than inventing everything from scratch, you could do well to learn from biological physics. Some of the best solutions are already out there and may even be older than the dinosaurs. You might then find your programming task is to program the equivalent of the brain of an ammonite.

– Paul Curzon, Queen Mary University of London

Thanks to the Dorset Wildlife Trust at the Chesil Beach Visitor Centre, Portland, where I personally learnt about ammonite and cuttlefish propulsion in a really fun science talk on the physics of marine biology.


Film Futures: The Lord of the Rings

What if there was Computer Science in Middle Earth? Computer Scientists and digital artists are behind the fabulous special effects and computer generated imagery we see in today’s movies but, for a bit of fun, in this series we look at how movie plots could change if they involved Computer Scientists. Here we look at an alternative version of the film series (and of course book trilogy): The Lord of the Rings.

***SPOILER ALERT***

The Lord of the Rings is an Oscar-winning film series by Peter Jackson. It follows the story of Frodo as he tries to destroy the darkly magical, controlling One Ring of Power by throwing it into the fires of Mount Doom in Mordor. This involves a three-film epic journey across Middle Earth where he and “the company of the Ring” are chased by the Nazgûl, the Ringwraiths of the evil Sauron. Their aim is to get to Mordor without being killed and the Ring taken from them and returned to Sauron, who created it, or stolen by Gollum, who once owned it.

The Lord of the Rings: with computer science

In our computer science film future version, Frodo discovers there is a better way than setting out on a long and dangerous quest. Aragorn has been tinkering with drones in his spare time, and so builds a drone to carry the Ring to Mount Doom, controlled remotely. Frodo pilots it from the safety of Rivendell. However, on its first test flight, its radio signal is jammed by the magic of Saruman from his tower. The drone crashes and is lost. It looks like the company must set off on a quest after all.

However, the wise Elf, the Lady Galadriel, suggests that they control the drone by impossible-to-jam fibre optic cable. The Elves are experts at creating such cables, using them in their highly sophisticated communication networks that span Middle Earth (unknown to its other peoples), sending messages encoded in light down the cables.

They create a huge spool containing the hundreds of miles of cable needed. Having also learnt from their first attempt, they build a new drone that uses stealth technology devised by Gandalf to make it invisible to the magic of Wizards, bouncing magical signals off it in a way that means even the ever-watchful Eye of Sauron does not detect it until it is too late. The new drone sets off, trailing a fine strand of silk-like cable behind, with the One Ring within. At its destination, the drone is piloted into the lava of Mount Doom, destroying the Ring forever. Sauron’s power collapses, and peace returns to Middle Earth. Frodo does not suffer from post-traumatic stress disorder, and lives happily ever after, though what becomes of Gollum is unknown (he was last seen through the drone’s camera, chasing after it on Mount Doom as it was piloted into the crater).

In real life…

Drones are being touted for lots of roles, from delivering packages to people’s doors to helping in disaster areas. They have most quickly found their place as a weapon, however. At regular intervals a new technology changes war forever, whether it is the longbow, the musket, the cannon, the tank, the plane… The most recent technology to change warfare on the battlefield has been the introduction of drone technology. It is essentially the use of robots in warfare, just remote controlled, flying ones rather than autonomous humanoid ones, Terminator style (but watch this space – the military are not ones to hold back on a ‘good’ idea). The vast majority of deaths on both sides in the Russia-Ukraine war have been caused by drone strikes. Now countries around the world are scrambling to update their battle readiness, adding drones into their defence plans.

The earliest drones to be used on the battlefield were remote controlled by radio. The trouble with anything controlled that way is that it is very easy to jam – either by sending your own signals at higher power to take over control, or, more easily, by just swamping the airwaves with signal so the one controlling the drone does not get through. The need to avoid weapons being jammed is not a new problem. In World War II, some early torpedoes were radio controlled to their target, but that became ineffectual as jamming technology was introduced. Movie star Hedy Lamarr is famous for patenting a mechanism whereby a torpedo could be controlled by radio signals that jumped from frequency to frequency, making it harder to jam (without knowing the exact sequence and timing of the frequency jumps). In London, torpedo stations protecting the Thames from enemy shipping had torpedoes controlled by wire so they could be guided all the way to the target. Unfortunately it was not a great success: the only time one was used in a test, it blew up a harmless fishing boat passing by (luckily no one died).
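
The heart of Lamarr’s idea is easy to sketch: if sender and receiver share a secret seed, both can generate the same unpredictable sequence of channels, while a jammer who doesn’t know the seed cannot. Here is a toy version in Python (the channel count and seed are invented):

```python
# A minimal sketch of frequency hopping: sender and receiver share a
# secret seed, so both can generate the same pseudo-random sequence
# of channels, while a jammer cannot predict where the signal will
# be next.

import random

CHANNELS = 16          # how many radio channels are available
SHARED_SECRET = 42     # known to sender and receiver, not the jammer

def hop_sequence(seed, hops):
    rng = random.Random(seed)          # deterministic for a given seed
    return [rng.randrange(CHANNELS) for _ in range(hops)]

sender = hop_sequence(SHARED_SECRET, 10)
receiver = hop_sequence(SHARED_SECRET, 10)
print(sender)                  # the channels the signal will hop across
print(sender == receiver)      # True: both follow the same hops
```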

And that is the solution adopted by both sides in the Ukraine war to overcome jamming. Drones flying across the front lines are controlled by miles of fibre optic cable that is run out on spools (tens of miles rather than the hundreds we suggested above). The light signals controlling the drone pass down the glass fibre, so cannot be jammed or interfered with. As a result, the front lines in Ukraine are now criss-crossed with gossamer-thin fibres, left behind once the drones hit their target or are taken out by the opposing side. It looks as though the war is being fought by robotic spiders (which one day may be the case, but not yet). With the advent of fibre-optic drone control, the war has changed again and new defences against this new technology are needed. By the time they are effective, the technology will likely have morphed into something new once more.

– Paul Curzon, Queen Mary University of London


Why do we still have lighthouses?

Image by Tom from Pixabay

In an age of satellite navigation when all ships have high-tech navigation systems that can tell them exactly where they are to the metre, on accurate charts that show exactly where dangers lurk, why do we still bother to keep any working lighthouses?

Lighthouses were built around the Mediterranean from the earliest times, originally to help guide ships into ports rather than protect them from dangerous rocks or currents. The most famous ancient lighthouse was the great lighthouse of Pharos, at the entry to the port of Alexandria. Built by the Ancient Egyptians, it was one of the seven wonders of the ancient world.

In the UK, Trinity House, the charitable trust that still runs our lighthouses, was set up in Tudor times by Henry VIII, originally to provide warnings for shipping in the Thames. The first offshore lighthouse built to protect shipping from dangerous rocks was built on Eddystone at the end of the 17th century. It only survived for 5 years before it was washed away in a storm itself, along, sadly, with Henry Winstanley, who built it. However, in the centuries since then, Trinity House has repeatedly improved the design of its lighthouses, turning them into a highly reliable warning system that has saved countless lives across the centuries.

There are still several hundred lighthouses round the UK, with over 60 maintained by Trinity House. Each has a unique code spelled out in its flashing light that communicates to ships exactly where it is, and so what danger awaits them. But why are they still needed at all? They cost a lot of money to maintain, and the UK government doesn’t fund them: it is all done on donations and money Trinity House can raise. So why not just power them down and turn them into museums? Instead, their old lamps have been modernised and upgraded with powerful LED lights, automated and networked. They switch on automatically based on light sensors, sounding foghorns automatically too. If the LED light fails, a second automatically switches on in its place, and the control centre, now hundreds of miles away, is alerted. There are no plans to turn them all off and just leave shipping to look after itself. The reason is a lesson we could learn from in many other areas where computer technology is replacing “old-fashioned” ways of doing things.
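
The flashing code idea is easy to sketch in software: time the flashes, look the pattern up in a table, and you know which lighthouse (and which danger) you are looking at. The patterns below are invented for illustration, not real chart data:

```python
# A minimal sketch of how a flashing-light 'code' identifies a
# lighthouse: each one has a distinctive pattern of flashes and
# pauses, so timing the light tells you which one you are seeing.

# Pattern: (number of flashes, seconds per full cycle) -> lighthouse
LIGHT_CHARACTERISTICS = {
    (1, 10): "Fictional Point",
    (3, 15): "Example Rock",
    (2, 20): "Imaginary Head",
}

def identify(flashes_seen, cycle_seconds):
    return LIGHT_CHARACTERISTICS.get((flashes_seen, cycle_seconds),
                                     "unknown light")

# A navigator counts 3 flashes repeating every 15 seconds:
print(identify(3, 15))  # Example Rock -> its position is on the chart
```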

Yes, satellite navigation is a wonderful system that is a massive step forward for navigation. The problem, however, is that it is not completely reliable, for several reasons. GPS, for example, is a US system, developed originally for the military, and ultimately they retain control. They can switch the public version off at any time, and will if they think it is in their interests to do so. Elon Musk switched off his Starlink system, which he aims to be a successor to GPS, to prevent Ukraine from using it in the war with Russia. It was done in the middle of a Ukrainian military operation, causing that operation to fail. In July 2025, the Starlink system also demonstrated that it is not totally reliable anyway, as it went down for several hours, showing that satellite systems can fail for periods, even if not switched off intentionally, due to software bugs or other system issues. A third problem is that navigation signals can be intentionally jammed, whether as an act of war or terrorism, or just high-tech vandalism. Finally, a more everyday problem is that people are over-trusting of computer systems, which can give a false sense of security. Satellite navigation gives unprecedented accuracy and so is trusted to work to finer tolerances than people would manage without it. As a result, it has been noticed that ships now often travel closer to dangerous rocks than they used to. However, the sea is capricious and treacherous. Sail too close to the rocks in a storm and you could suddenly find yourself tossed upon them, the back of your ship broken, just as has happened repeatedly through history.

Physical lighthouses may be old technology, but they work as a very visible and dependable warning system, day or night. They can be used in parallel with satellite navigation: the red and white towers and powerful lights very clearly say, “there is danger here … be extra careful!” That very physical warning of a physical danger is worth having as a reminder not to take risks. The lighthouses are also still there, adding in redundancy, should the modern navigation systems go down just when a ship needs them, with nothing extra needing to be done, and so no delay.

It is not out of some sense of nostalgia that the lighthouses still work. Updated with modern technology of their own, they are still saving lives.

– Paul Curzon, Queen Mary University of London


The first Internet concert

Severe Tire Damage. Image by Strubin, CC BY-SA 4.0, via Wikimedia Commons.

Which band was the first to stream a concert live over the Internet? The Rolling Stones decided, in 1994, it should be them. After all, they were one of the greatest, most innovative rock bands of all time. A concert from their tour of that year, in Dallas, was therefore broadcast live. Mick Jagger addressed not just the 50,000 packed into the stadium but the whole world, with: “I wanna say a special welcome to everyone that’s, climbed into the Internet tonight and, uh, has got into the MBone. And I hope it doesn’t all collapse.” Unknown to them when planning this publicity coup, another band had got there first: a band of Computer Scientists from Xerox PARC, DEC and Apple – the research centres responsible for many innovations, including many of the ideas behind graphical user interfaces, networks and the multimedia Internet – had played live on the Internet the year before!

The band which actually went down in history was called Severe Tire Damage. Its members were Russ Haines and Mark Manasse (from DEC), Steven Rubin (a computer aided design expert from Apple) and Mark Weiser (famous for the ideas behind calm computing, from Xerox PARC). They were playing a concert at Xerox PARC on June 24, 1993. At the time, researchers there were working on a system called MBone, which provided a way to do multimedia over the Internet for the first time. Now we take that for granted (just about everyone with a computer or phone does Zoom and Teams calls, for example) but back then the Internet was only set up for exchanging text and images from one person to another. MBone, short for multicast backbone, allowed packets of data of any kind (so including video data) from one source to be sent to multiple Internet addresses, rather than just to one address. Sites that joined the MBone could send and receive multimedia data, including video, live to all the others in one broadcast. This meant that, for the first time, video calls between multiple people over the Internet were possible. The researchers needed to test the system, of course, so they set up a camera in front of Severe Tire Damage and live-streamed their performance to other researchers on the nascent MBone round the world (research can be fun at the same time as being serious!). Possibly there was only a single Australian researcher watching at the time, but it is the principle that counts!
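
The core multicast idea survives today as IP multicast, built into ordinary networking libraries. As a rough modern sketch of it in Python: one sender transmits to a group address and every subscribed receiver gets a copy. The group address and port here are arbitrary examples, and your network must allow multicast for it to work:

```python
# A minimal sketch of multicast: one send reaches every receiver that
# has joined the group, rather than a separate copy per recipient.

import socket
import struct

GROUP = "224.1.1.1"   # an IP multicast group address (example)
PORT = 5007

def send(message: bytes):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(message, (GROUP, PORT))  # one send, many receivers

def receive():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Tell the network stack we want packets sent to this group.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    print(sock.recv(1024))  # blocks until a packet for the group arrives

# send(b"live concert data") would reach every process that has
# called receive(), wherever it is on the (multicast-enabled) network.
```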

On hearing about the publicity around the Rolling Stones concert, and understanding the technology of course, they decided it was time for one more live Internet gig to secure their place in history. Immediately before the Rolling Stones started their gig, Severe Tire Damage broadcast their own live concert over the MBone to all those (including journalists) waiting for the main act to arrive online. In effect, they had set themselves up as an unbilled Internet opening act for the Stones, even though they were nowhere near Dallas. Of course, that is partly the point: you no longer had to all be in one place to be part of the same concert. So the Rolling Stones, sadly for them, weren’t even the first to play live over the Internet on that particular day, never mind ever!

– Paul Curzon, Queen Mary University of London


Margaret Hamilton: Apollo Emergency! Take a deep breath, hold your nerve and count to 5

Buzz Aldrin standing on the moon. Image by Neil Armstrong, NASA, via Wikimedia Commons – public domain.

You have no doubt heard of Neil Armstrong, first human on the moon. But have you heard of Margaret Hamilton? She was the lead engineer, responsible for the Apollo mission software that got him there, and ultimately for ensuring the lunar module didn’t crash land due to a last minute emergency.

Being a great software engineer means you have to think of everything. You are writing software that will run in the future encountering all the messiness of the real world (or real solar system in the case of a moon landing). If you haven’t written the code to be able to deal with everything then one day the thing you didn’t think about will bite back. That is why so much software is buggy or causes problems in real use. Margaret Hamilton was an expert not just in programming and software engineering generally, but also in building practically dependable systems with humans in the loop. A key interaction design principle is that of error detection and recovery – does your software help the human operators realise when a mistake has been made and quickly deal with it? This, it turned out, mattered a lot in safely landing Neil Armstrong and Buzz Aldrin on the moon.

As the lunar module was in its final descent, dropping from orbit to the moon with only minutes to landing, multiple alarms were triggered. An emergency was in progress at the worst possible time. What it boiled down to was that the system could only handle seven programs running at once, but Buzz Aldrin had just set an eighth running. Suddenly, the guidance system started replacing the normal screens with priority alarm displays, in effect shouting “EMERGENCY! EMERGENCY!” These were coded into the system but were supposed never to be shown, as the situations triggering them were supposed never to happen. The astronauts suddenly had to deal with situations that they should not have had to deal with, and they were minutes away from crashing into the surface of the moon.

Margaret Hamilton was in charge of the team writing the Apollo in-flight software, and was the person responsible for the emergency displays. By including them she was covering all bases, even those that were supposedly never going to arise. She did more than that, though. Long before the moon landing, she had thought through the consequences if these “never events” did ever happen. Her team had therefore also included code in the Apollo software to prioritise what the computer was doing. In the situation that happened, it worked out what was actually needed to land the lunar module and prioritised that, shutting down the other software that was no longer vital. That meant that despite the problems, as long as the astronauts did the right things and carried on with the landing, everything would ultimately be fine.
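
We don’t know exactly how the Apollo code was structured, but the core idea – priority scheduling with load shedding – can be sketched in a few lines of Python. The task names, priorities and the capacity of seven are invented or simplified for illustration:

```python
# A minimal sketch of priority scheduling with load shedding: when
# there is too much to do, keep the highest-priority tasks and drop
# the rest.

CAPACITY = 7  # suppose only seven tasks can run at once

tasks = [
    # (priority: lower number = more important, name)
    (1, "guidance equations"),
    (1, "engine control"),
    (2, "landing radar"),
    (3, "astronaut displays"),
    (4, "rendezvous radar"),   # the extra job that overloads things
    (5, "telemetry logging"),
    (6, "status lights"),
    (7, "self test"),
]

def schedule(tasks, capacity):
    """Keep the most important tasks; shed the rest."""
    by_priority = sorted(tasks)        # most important first
    return by_priority[:capacity], by_priority[capacity:]

keep, shed = schedule(tasks, CAPACITY)
print("running:", [name for _, name in keep])
print("shed:   ", [name for _, name in shed])  # lowest priority dropped
```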

Margaret Hamilton. Image by Daphne Weld Nichols, CC BY-SA 3.0, via Wikimedia Commons.

There was still a potential problem, though. When an emergency like this happened, the displays appeared immediately, so that the astronauts could understand the problem as soon as possible. Behind the scenes, however, the software was also dealing with the emergency itself, switching between programs and shutting down the ones not needed. Such switchovers took time on the 1960s Apollo computer, as computers then were much slower than today. It was only a matter of seconds, but the highly trained astronauts could easily process the warning information and start to deal with it faster than that. The problem was that if they pressed buttons, doing their part of the job of continuing with the landing, before the switchover completed, they would be sending commands to the original code, not the code that was still starting up to deal with the warning. That could be disastrous, and it is the kind of problem that can easily evade testing and only be discovered when code is running live, if the programmers do not deeply understand how their code works and spend time worrying about it.

Margaret Hamilton had thought all this through though. She had understood what could happen, and not only written the code, but also come up with a simple human instruction to deal with the human pilot and software being out of synch. Because she thought about it in advance, the astronauts knew about the issue and solution and so followed her instructions. What it boiled down to was “If a priority display appears, count to 5 before you do anything about it.” That was all it took for the computer to get back in synch and so for Buzz Aldrin and Neil Armstrong to recover the situation, land safely on the moon and make history.

Without Margaret Hamilton’s code and her deep understanding of it, we would most likely now be commemorating the 20th of July as the day the first humans died on the moon, rather than celebrating it as the day humans first walked on the moon.

– Paul Curzon, Queen Mary University of London


You cannot be serious! …Wimbledon line calls go wrong

Image by Felix Heidelberger from Pixabay (cropped)

The 2025 tennis championships are the first time Wimbledon has completely replaced its human line judges with an AI vision and decision system, Hawk-Eye. After only a week it caused controversy, with the system being updated after it failed to call a glaringly out ball in a Centre Court match between Brit Sonay Kartal and Anastasia Pavlyuchenkova. Apparently it had been switched off by mistake mid-game. This raises issues inherent in all computer technology replacing humans: that it can go wrong, the need for humans-in-the-loop, the possibility of human error in its use, and what you do when things do go wrong.

Perhaps because it is a vision system rather than generative AI, there has been little talk of whether Hawk-Eye is 100% accurate or not. Vision systems do not hallucinate in the way generative AI does, but they are still not infallible. The opportunity for players to appeal has been removed, however: in the original way Hawk-Eye was used, humans made the call and players could ask for Hawk-Eye to check. Now, Hawk-Eye makes a decision and basically that is it. A picture is shown on screen of a circle relative to the line, generated by Hawk-Eye to ‘prove’ the ball was in or out as claimed. It is then taken as gospel. Of course, it is just reflecting Hawk-Eye’s decision – what it “saw” – not reality, and not any sort of separate evidence. It is just a visual version of the call shouted. However, it is taken as though it were absolute proof, with no argument possible. If it is aiming to be really, really dependable, then Hawk-Eye will have multiple independent systems sensing in different ways and voting on the result, as that is one of the ways computer scientists have invented to program dependability. However, whether it is 100% accurate isn’t really the issue. What matters is whether it is more accurate, making fewer mistakes, than human line judges. Undoubtedly it is, so it is an improvement, and the odd uncaught mistake is not actually the point.
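
Voting for dependability is simple to sketch. Here is a toy version in Python: three independent “sensors” (invented stand-ins, not how Hawk-Eye actually works) each make a call and the majority wins, so one faulty reading cannot decide the point on its own:

```python
# A minimal sketch of dependability through voting: run several
# independent sensing systems and take the majority decision, so a
# single faulty component cannot decide the call.

from collections import Counter

def majority_vote(calls):
    """Return the call most sensors agree on."""
    winner, count = Counter(calls).most_common(1)[0]
    if count <= len(calls) // 2:
        return "no majority: flag for review"  # disagreement detected
    return winner

# Three independent line-call systems report on the same ball:
calls = ["out", "out", "in"]   # one sensor disagrees (maybe faulty)
print(majority_vote(calls))    # "out": the faulty reading is outvoted
```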

However, the mistake in this problem call was different. The operators of the system had switched it off mistakenly mid-match due to “human error”. That raises two questions. First, why was it designed so that a human could accidentally turn it off mid-match? Don’t blame that person, as it should not have been possible in the first place. Fix the system so it can’t happen again. That is what, within a day, the tournament’s organisers claimed to have done (whether resiliently remains to be seen).

However, the mistake raises another question. Wimbledon had not handed the match over to the machines completely. A human umpire was still in charge. There was a human in the loop. They, however, we were told, had no idea the system was switched off until the call for a ball very obviously out was not made. If that is so, why not? Hawk-Eye supposedly made two calls of “Stop”. Was that its way of saying “I am not working so stop the match”? If it was such a message to the umpire, it is not a very clear way to make it, and it guarantees to be disruptive. It sounds a lot like a 404 error message, added by a programmer for a situation that they do not expect to occur!

A basic requirement of a good interactive system is that the system state is visible: that Hawk-Eye was not even switched on should have been totally obvious from the controls the umpire had, well before the bad call. That needs to be fixed too, just in case there is still a way Hawk-Eye can be switched off. It also raises the question of how often the system has been accidentally switched off, or powered down temporarily for other reasons, with no one knowing, because there was no glaringly bad call to miss at the time.

Another issue is that the umpire supposedly did follow the proper procedure, which was not to just call the point (as might have happened in the past, given he apparently knew “the ball was out!”) but instead to have the point replayed. That was unsurprisingly considered unfair by the player who lost a point they should have won. Why couldn’t the umpire make a decision on the point? Perhaps because humans are no longer trusted at all, as they were before. As suggested by Pavlyuchenkova, there is no reason why there cannot be a video review process in place so that the umpire can make a proper decision. That would be a way to add back in a proper appeal process.

Also, as was pointed out, what happens if the system fully goes down? Does Wimbledon now have to just stop until Hawk-Eye is fixed: “AI stopped play”? There are lots of examples, over many decades as well as recently, of complex computer systems crashing. Hawk-Eye is a complex system, so problems are likely possible. Programmers make mistakes (especially when doing quick fixes to patch other problems, as was apparently just done). If you replace people with computers, you need a reliable and appropriate backup that can kick into place immediately, from the outset. A standard design principle is that programs should help avoid humans making mistakes, help them quickly detect them when they do, and help them recover.

A tennis match is not actually high stakes by human standards. No one dies because of mistakes (though a LOT of money is at stake), but the issues are very similar in a wide range of systems where people can die – from control of medical devices, military applications, space, aircraft and nuclear power plant control… in all of which computers are replacing humans. We need good solutions, and they need to be in place before something goes wrong, not after. An issue, as systems become more and more automated, is that the human left in the loop to avoid disaster has more and more trouble tracking what the machine is doing as they do less and less, making it harder to step in and correct problems in a timely way (as was likely the case with the Wimbledon umpire). The humans need to be not just a little bit in the loop but centrally so. How you do that for different situations is not easy to work out, but as tennis has shown, it can’t just be ignored. There are better solutions than the ones Wimbledon is using, but to even consider them you first have to accept that computers do make mistakes, and so know there is a problem to be solved.

– Paul Curzon, Queen Mary University of London


An AI Oppenheimer Moment?

A nuclear explosion mushroom cloud. Image by Harsh Ghanshyam from Pixabay.

All computer scientists should watch the staggeringly good film, Oppenheimer, by Christopher Nolan. It charts the life of J. Robert Oppenheimer, “father of the atom bomb”, and the team he put together at Los Alamos, as they designed and built the first weapons of mass destruction. The film is about science, politics and war, not computer science and all the science is quantum physics (portrayed incredibly well). Despite that, Christopher Nolan believes the film does have lessons for all scientists, and especially those in Silicon Valley.

Why? In an interview, he suggested that given the current state of Artificial Intelligence, the world is at “an Oppenheimer moment”. Computer scientists in the 2020s, just like physicists in the 1940s, are creating technology that could be used for great good but also cause great harm (including, in both cases, the possibility that we use it in a way that destroys civilisation). Should scientists and technologists stay outside the political realm and leave discussion of what to do with their technology to politicians, while the scientists do as they wish in the name of science? That leaves society playing a game of catch-up. Or do scientists and technologists have more responsibility than that?

Artificial Intelligence isn’t so obviously capable of doing bad things as an atomic bomb was, and still clearly is. There is also no clear imperative, such as Oppenheimer had, to get there before the fascist Nazi party, who were clearly evil and already using technology for evil (now the main imperative seems to be just to get there before someone else makes all the money, not you). It is, therefore, far easier for those creating AI technology to ignore both the potential and the real effects of their inventions on society. However, it is now clear AI can do, and already is doing, lots of bad as well as good. Many scientists understand this and are focussing their work on developing versions that are, for example, built to be transparent and accountable, are not biased, racist or homophobic… that do put children’s protection at the heart of what they do… Unfortunately, not all are. And there is one big elephant in the room. AI can be, and is being, put in control of weapons in wars that are actively taking place right now. There is an arms race to get there before the other side does. From mass identification of targets in the Middle East to AI-controlled drone strikes in the Ukraine war, military AI is a reality and is in control of killing people with only minimal, if any, real humans in the loop. Do we really want that? Do we want AIs in control of weapons of mass destruction? Or is that total madness that will lead only to our destruction?

Oppenheimer was a complex man, as the film shows. He believed in peace but, a brilliant theoretical physicist himself, he managed a group of the best scientists in the world in the creation of the greatest weapon of destruction ever built to that point, the first atom bomb. He believed it had to be used once so that everyone would understand that all-out nuclear war would end civilisation (it was of course used against Japan, not the already defeated Nazis, the original justification). However, he also spent the rest of his life working for peace, arguing that international agreements were vital to prevent such weapons ever being used again. In times of relative peace, people forget about the power we have to destroy everyone. The worries only surface again when there is international tension and wars break out, such as in the Middle East or Ukraine. We need to always remember the possibility is there, though, lest we use the weapons by mistake. Oppenheimer thought the bomb would actually end war, having come up with the idea of “mutually assured destruction” as a means for peace. The phrase aimed to remind people that these weapons could never be used. He worked tirelessly, arguing for international regulation and agreements to prevent their use.

Christopher Nolan was asked, if there were a special screening of the film in Silicon Valley, what message he would hope the computer scientists and technologists would take from it. His answer was that they should take home the message of the need for accountability. Scientists do have to be accountable for their work, especially when it is capable of having massively bad consequences for society. A key part of that is engaging with the public, industry and government; not with vested interests pushing for their own work to be allowed, but to make sure the public and policymakers do understand the science and technology, so there can be fully informed debate. Both international law and international policy are now a long way off the pace of technological development. The willingness of countries to obey international law is also disintegrating, and there is a subtle new difference from the 1940s: technology companies are now as rich and powerful as many countries, so corporate accountability is now needed too, not just agreements between countries.

Oppenheimer was vilified over his politics after the war, and his name is now forever linked with weapons of mass destruction. He certainly didn’t get everything right: there have been plenty of wars since, so he didn’t manage to end all war as he had hoped, though so far no nuclear war. However, despite the vilification, he did spend his life making sure everyone understood the consequences of his work. Asked if he believed we had created the means to kill tens of millions of Americans (everyone) at a stroke, his answer was a clear “Yes”. He did ultimately make himself accountable for the things he had done. That is something every scientist should do too. The Doomsday Clock is closer to midnight than ever (89 seconds to midnight, midnight being manmade global catastrophe). Let’s hope the Tech Bros and scientists of Silicon Valley are willing to become accountable too, never mind countries. All scientists and technologists should watch Oppenheimer and reflect.

– Paul Curzon, Queen Mary University of London
