Why do we still have lighthouses?

Image by Tom from Pixabay

In an age of satellite navigation, when ships have high-tech systems that can tell them exactly where they are to the metre, on accurate charts that show exactly where dangers lurk, why do we still bother to keep any working lighthouses?

Lighthouses were built around the Mediterranean from the earliest times, originally to help guide ships into ports rather than protect them from dangerous rocks or currents. The most famous ancient lighthouse was the great lighthouse of Pharos, at the entrance to the port of Alexandria. Built in Egypt under the Ptolemies, it was one of the Seven Wonders of the Ancient World.

In the UK, Trinity House, the charitable trust that still runs our lighthouses, was set up in Tudor times by Henry VIII, originally to provide warnings for shipping in the Thames. The first offshore lighthouse built to protect shipping from dangerous rocks was built on Eddystone at the end of the 17th century. It only survived for five years before it was itself washed away in a storm, along, sadly, with Henry Winstanley who built it. However, in the centuries since then Trinity House has repeatedly improved the design of its lighthouses, turning them into a highly reliable warning system that has saved countless lives across the centuries.

There are still several hundred lighthouses around the UK, with over 60 maintained by Trinity House. Each has a unique code spelled out in its flashing light that tells ships exactly which lighthouse they are looking at, and so what danger awaits them. But why are they still needed at all? They cost a lot of money to maintain, and the UK government doesn’t fund them: the money comes from ‘light dues’ charged to shipping and from funds Trinity House raises itself. So why not just power them down and turn them into museums? Instead, their old lamps have been modernised and upgraded with powerful LED lights, automated and networked. They switch on automatically based on light sensors, and sound their foghorns automatically too. If the LED light fails, a second automatically switches on in its place, and the control centre, now possibly hundreds of miles away, is alerted. There are no plans to turn them all off and just leave shipping to look after itself. The reason is a lesson we could learn from in many other areas where computer technology is replacing “old-fashioned” ways of doing things.
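As a rough sketch of the kind of automation described, here is a toy monitoring step in Python. The names, thresholds and messages are made up for illustration; this is not Trinity House’s real control software.

```python
# A toy sketch of automated lighthouse control: switch on at dusk, fail over
# to a backup LED if the main one dies, and alert the remote control centre.
# Names and thresholds are illustrative only.
def alert_control_centre(message: str) -> None:
    # In reality this would go over a network link to a control room far away.
    print("ALERT:", message)

def control_step(light_level: float, main_led_ok: bool, backup_led_ok: bool) -> str:
    DARK_THRESHOLD = 20.0          # example light-sensor level at which the lamp comes on
    if light_level >= DARK_THRESHOLD:
        return "lamp off (daylight)"
    if main_led_ok:
        return "main LED on"
    alert_control_centre("Main LED failed - switched to backup")
    if backup_led_ok:
        return "backup LED on"
    alert_control_centre("URGENT: both LEDs failed - light is out")
    return "light out"

print(control_step(light_level=5.0, main_led_ok=False, backup_led_ok=True))
```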

Yes, satellite navigation is a wonderful system and a massive step forward for navigation. The problem, however, is that it is not completely reliable, for several reasons. GPS, for example, is a US system, developed originally for the military, and ultimately the US military retains control. They can switch the public version off at any time, and will if they think it is in their interests to do so. Elon Musk denied Ukraine the use of his Starlink satellite system, which he aims to make a successor to GPS, to prevent it being used in the war with Russia. That happened in the middle of a Ukrainian military operation, causing that operation to fail. In July 2025, Starlink also showed it is not totally reliable anyway: it went down for several hours, a reminder that satellite systems can fail for periods, even if not switched off intentionally, due to software bugs or other system issues. A third problem is that navigation signals can be intentionally jammed, whether as an act of war, terrorism or just high-tech vandalism. Finally, a more everyday problem is that people are over-trusting of computer systems, which can give a false sense of security. Satellite navigation gives unprecedented accuracy and so is trusted to work to finer tolerances than people would work to without it. As a result, it has been noticed that ships now often travel closer to dangerous rocks than they used to. However, the sea is capricious and treacherous. Sail too close to the rocks in a storm and you could suddenly find yourself tossed upon them, the back of your ship broken, just as has happened repeatedly through history.

Physical lighthouses may be old technology but they work as a very visible and dependable warning system, day or night. They can be used in parallel with satellite navigation: the red and white towers and powerful lights very clearly say, “there is danger here … be extra careful!” That extra, very physical warning of a physical danger is worth having as a reminder not to take risks. The lighthouses also add redundancy: should the modern navigation systems go down just when a ship needs them, the lights are already there, with nothing extra needing to be done and so no delay.

It is not out of some sense of nostalgia that the lighthouses still work. Updated with modern technology of their own, they are still saving lives.

– Paul Curzon, Queen Mary University of London


The first Internet concert

Severe Tire Damage. Image by Strubin, CC BY-SA 4.0, via Wikimedia Commons

Which band was the first to stream a concert live over the Internet? The Rolling Stones decided, in 1994, it should be them. After all, they were one of the greatest, most innovative rock bands of all time. A concert from their tour of that year, in Dallas, was therefore broadcast live. Mick Jagger addressed not just the 50,000 packed into the stadium but the whole world, welcoming them with “I wanna say a special welcome to everyone that’s, climbed into the Internet tonight and, uh, has got into the MBone. And I hope it doesn’t all collapse.” Unknown to them when planning this publicity coup, another band had got there first: a band of computer scientists from Xerox PARC, DEC and Apple – research centres responsible for many innovations, including many of the ideas behind graphical user interfaces, networking and the multimedia Internet – had played live on the Internet the year before!

The band which actually went down in history was called Severe Tire Damage. Its members were Russ Haines and Mark Manasse (from DEC), Steven Rubin (a computer-aided design expert from Apple) and Mark Weiser (from Xerox PARC, famous for the ideas behind calm computing). They were playing a concert at Xerox PARC on June 24, 1993. At the time, researchers there were working on a system called MBone which provided a way to do multimedia over the Internet for the first time. Now we take that for granted (just about everyone with a computer or phone does Zoom and Teams calls, for example) but then the Internet was only set up for exchanging text and images from one person to another. MBone, short for multicast backbone, allowed packets of data of any kind (so including video data) from one source to be sent to multiple Internet addresses rather than just to one address. Sites that joined the MBone could send and receive multimedia data, including video, live to all the others in one broadcast. This meant that, for the first time, video calls between multiple people over the Internet were possible. They needed to test the system, of course, so they set up a camera in front of Severe Tire Damage and live-streamed their performance to other researchers on the nascent MBone round the world (research can be fun at the same time as being serious!). Possibly there was only a single Australian researcher watching at the time, but it is the principle that counts!
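The one-to-many idea at the heart of the MBone, multicast, is now a standard part of the Internet’s toolkit. Below is a minimal sketch of IP multicast in Python: one sender transmits a single packet to a group address, and every receiver that has joined the group gets a copy. The group address and port are arbitrary examples, and this uses ordinary UDP multicast rather than the MBone’s own tools.

```python
# A minimal sketch of one-to-many delivery with IP multicast.
# Run receive() on several machines (or terminals), then call send() once:
# every receiver that has joined the group gets the same packet.
import socket
import struct

GROUP = "224.1.1.1"   # example multicast group address
PORT = 5007           # example port

def send(message: str) -> None:
    """Send one packet to the group; every subscribed receiver gets a copy."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(message.encode(), (GROUP, PORT))

def receive() -> None:
    """Join the multicast group and print whatever arrives."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    while True:
        data, _addr = sock.recvfrom(1024)
        print(data.decode())
```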

On hearing about the publicity around the Rolling Stones concert, and understanding the technology of course, they decided it was time for one more live Internet gig to secure their place in history. Immediately before the Rolling Stones started their gig, Severe Tire Damage broadcast their own live concert over the MBone to all those (including journalists) waiting for the main act to arrive online. In effect they had set themselves up as an unbilled Internet opening act for the Stones, even though they were nowhere near Dallas. Of course, that is partly the point: you no longer all had to be in one place to be part of the same concert. So the Rolling Stones, sadly for them, weren’t even the first to play live over the Internet on that particular day, never mind ever!

– Paul Curzon, Queen Mary University of London


Let the brain take the strain

Cockpit controls. Image by Michi S from Pixabay

Whenever humans have complicated, repetitive jobs to do, designers set to work making computer systems that do those jobs automatically. Autopilot systems in airplanes are a good example. Flying a commercial airliner is incredibly complex, so a computer system helps the pilots by doing a lot of the boring, repetitive stuff automatically. But in any automated system, there has to be a balance between human and computer so that the human still has ultimate control. It’s a strange characteristic of human-computer interaction: the better an automated program, the more its users rely on it, and the more dangerous it can be.

The problem is that the unpredictable always happens. Automated systems run into situations the designers haven’t anticipated, and humans are still much better at dealing with the unexpected. If humans can’t take back control from the system, accidents can happen. For example, some airplanes used to have autopilots that took control of a landing until the wheels touched the ground. But then, one rainy night, a runway in Warsaw was so wet that the plane began skidding along the runway when it touched down. The skid was so severe that the sensors never registered the touchdown of the plane, and so the pilots couldn’t control the brakes. The airplane only stopped when it had overshot the runway. The designers had relied so much on the automation that the humans couldn’t fix the problem.
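As a greatly simplified sketch of the kind of interlock logic described (not the real aircraft’s software), automatic braking was only enabled once the sensors agreed the plane had landed:

```python
# A greatly simplified sketch (not real avionics code) of an interlock that
# only enables automatic braking once the sensors say the plane has landed.
def braking_allowed(weight_on_wheels: bool, wheels_spinning: bool) -> bool:
    # The flaw described above: the logic trusts the sensors completely and
    # gives the pilots no way to override it.
    return weight_on_wheels and wheels_spinning

# On the waterlogged runway the aircraft skidded, so neither condition was met
# even though it was on the ground - and the crew could not take control back.
print(braking_allowed(weight_on_wheels=False, wheels_spinning=False))  # False
```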

Many designers now think it’s better to give some control back to the operators of any automated system. Instead of doing everything, the computer helps the user by giving them feedback. For example, if a smart car detects that it’s too close to the car ahead of it, the accelerator becomes more difficult to press. The human brain is still much better than any computer system at coming up with solutions to unexpected situations. Computers are much better off letting our brains do the tricky thinking.

– Paul Curzon, Queen Mary University of London


This article was first published on the original CS4FN website and a copy is available on page 19 of Issue 15 of the CS4FN magazine, which, like all our previous issues, is free to download as a PDF.



Margaret Hamilton: Apollo Emergency! Take a deep breath, hold your nerve and count to 5

Buzz Aldrin standing on the moon. Image by Neil Armstrong, NASA, via Wikimedia Commons – Public Domain

You have no doubt heard of Neil Armstrong, the first human on the moon. But have you heard of Margaret Hamilton? She was the lead engineer responsible for the Apollo mission software that got him there, and ultimately for ensuring the lunar module didn’t crash-land due to a last-minute emergency.

Being a great software engineer means you have to think of everything. You are writing software that will run in the future, encountering all the messiness of the real world (or the real solar system, in the case of a moon landing). If you haven’t written the code to be able to deal with everything, then one day the thing you didn’t think about will bite back. That is why so much software is buggy or causes problems in real use. Margaret Hamilton was an expert not just in programming and software engineering generally, but also in building practically dependable systems with humans in the loop. A key interaction design principle is that of error detection and recovery – does your software help the human operators realise when a mistake has been made and quickly deal with it? This, it turned out, mattered a lot in safely landing Neil Armstrong and Buzz Aldrin on the moon.

As the lunar module was in its final descent, dropping from orbit to the moon with only minutes to landing, multiple alarms were triggered. An emergency was in progress at the worst possible time. What it boiled down to was that the system could only handle seven programs running at once, but Buzz Aldrin had just set an eighth running. Suddenly, the guidance system started replacing the normal screens with priority alarm displays, in effect shouting “EMERGENCY! EMERGENCY!” These were coded into the system but were never supposed to be shown, as the situations triggering them were never supposed to happen. The astronauts suddenly had to deal with situations that they should not have had to deal with, and they were minutes away from crashing into the surface of the moon.

Margaret Hamilton was in charge of the team writing the Apollo in-flight software, and the person responsible for the emergency displays. By adding them she was covering all bases, even ones that were supposedly never going to happen. She did more than that though. Long before the moon landing she had thought through the consequences if these “never events” did ever happen. Her team had therefore also included code in the Apollo software to prioritise what the computer was doing. In the situation that arose, it worked out what was actually needed to land the lunar module and prioritised that, shutting down the other software that was no longer vital. That meant that, despite the problems, as long as the astronauts did the right things and carried on with the landing, everything would ultimately be fine.
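One way to picture what that prioritisation means is a simple load-shedding scheduler: when there is not capacity for every job, keep the highest-priority ones and drop the rest. The sketch below is a toy illustration in Python; the job names and numbers are made up, not the Apollo Guidance Computer’s real task list or code.

```python
# A toy sketch of priority-based load shedding: under overload, keep only the
# jobs that matter most for the task in hand and shut the rest down.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    priority: int          # lower number = more important

def shed_load(jobs: list[Job], capacity: int) -> list[Job]:
    """Keep only the most important jobs when there is not room for them all."""
    ranked = sorted(jobs, key=lambda job: job.priority)
    for dropped in ranked[capacity:]:
        print(f"Shedding non-essential job: {dropped.name}")
    return ranked[:capacity]

# Illustrative job names only.
jobs = [Job("engine control", 1), Job("guidance", 1),
        Job("landing radar", 2), Job("rendezvous radar display", 9)]
still_running = shed_load(jobs, capacity=3)
```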

Margaret Hamilton. Image by Daphne Weld Nichols, CC BY-SA 3.0, via Wikimedia Commons

There was still a potential problem though. When an emergency like this happened, the displays appeared immediately so that the astronauts could understand the problem as soon as possible. However, behind the scenes the software was also dealing with it, switching between programs and shutting down the ones that were not needed. Such switchovers took time on the 1960s Apollo computer, as computers were much slower than today. It was only a matter of seconds, but the highly trained astronauts could easily process the warning information and start to deal with it faster than that. The problem was that, if they pressed buttons to do their part of the job of continuing the landing before the switchover completed, they would be sending commands to the original code, not to the code that was still starting up to deal with the warning. That could be disastrous, and it is the kind of problem that can easily evade testing and only be discovered when code is running live, if the programmers do not deeply understand how their code works and spend time worrying about it.

Margaret Hamilton had thought all this through though. She had understood what could happen, and not only written the code, but also come up with a simple human instruction to deal with the human pilot and software being out of synch. Because she thought about it in advance, the astronauts knew about the issue and solution and so followed her instructions. What it boiled down to was “If a priority display appears, count to 5 before you do anything about it.” That was all it took for the computer to get back in synch and so for Buzz Aldrin and Neil Armstrong to recover the situation, land safely on the moon and make history.

Without Margaret Hamilton’s code and deep understanding of it, we would most likely now be commemorating the 20th of July as the day the first humans died on the moon, rather than the day humans first walked on it.

– Paul Curzon, Queen Mary University of London


Pots fixing problematic acoustics

Surface waves. Image by Roger McLassus, CC BY-SA 3.0, via Wikimedia Commons.

Pots are buried in the walls of medieval churches and monasteries across Europe: in the UK, Sweden, Denmark and Serbia. Why? Are they just a weird form of decoration? Actually, they are there to fix problematic acoustics.

The problem

First of all, what do we mean by ‘problematic’ acoustics? When sound waves move around a room they reflect off the walls in a way that creates strange sound effects when they meet their reflections.

It happens because of what are called ‘standing waves’. Imagine dropping a pebble into a bath. The ripples create patterns in the water where they interfere with those that have bounced off the sides. As two ripples pass through each other in opposite directions, if the movement pushing the water up from one ripple exactly cancels out the movement pushing it down from the other, and keeps doing so, then at that point the water stays still. On either side, the two ripples reinforce each other rather than cancelling out, giving the peaks and troughs of the combined wave. The result is that the ripples appear to stop moving forward: a standing wave.

Sound waves are like water waves, except that the air molecules vibrate back and forth along the direction the sound travels rather than up and down as the water surface does. The same effects therefore happen when sound waves meet, and standing waves can form. This is bad for two reasons. Standing waves take more time to die away after the sound source has been silenced than other sounds. Worse, the sound’s volume varies around the room, depending on whether a point is one where the waves cancel out (no sound) or one where they reinforce each other (loud). That’s ‘problematic’ acoustics!

Standing wave. Image by Lucas Vieira, own work, Public Domain, via Wikimedia Commons
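You can see the effect in a few lines of code: add a wave travelling one way to its reflection travelling the other, and some points never move at all. This is just an illustrative sketch, not a model of any particular room.

```python
# A tiny sketch of how a standing wave arises: a wave travelling right plus its
# reflection travelling left has fixed "dead" points (nodes) that never move,
# while the points in between swing between loud and quiet.
import numpy as np

x = np.linspace(0, 2 * np.pi, 9)        # sample positions across the room
k, w = 2.0, 1.0                          # example wavenumber and frequency

for t in (0.0, 0.5, 1.0, 1.5):
    wave_right = np.sin(k * x - w * t)
    wave_left = np.sin(k * x + w * t)    # the reflection off the wall
    combined = wave_right + wave_left    # equals 2*sin(k*x)*cos(w*t)
    print([round(v, 2) for v in combined])
# The columns that stay at (essentially) zero in every row are the nodes:
# the silent spots in the room. The other columns swing up and down over time.
```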

These acoustic problems ultimately come about because of what is known as ‘resonance’. That is where a sound repeatedly bounces back and forth across a space at a particular frequency. Frequencies that are directly tied to the room’s dimensions cause the most problems. Called the ‘resonant frequencies’, they involve a whole number of wave troughs and crests fitting in the space between the walls. That is what leads to standing waves, as the original and reflected wave coincide exactly. The lowest resonant frequency is also called the ‘fundamental frequency’. It’s the one where a single wave (a single trough and crest) fits in the space.

There are three different types of resonance that develop in a room from sounds bouncing off the walls, called axial, tangential and oblique modes. Axial modes result from a sound bouncing back and forth between two facing walls. Tangential ones happen when the waves reflect around all four walls. Oblique modes are the most complicated and result from sound bouncing off the roof and floor too. Of all these, it turns out the worst are the axial modes. To improve the acoustics of a room you need to absorb the sounds at these resonant frequencies. But how?

The solution

OK, now we know the problem, but how do we deal with it? A solution is the ‘Helmholtz resonator’, named after a device created by Hermann von Helmholtz in the 1850s as part of his studies to identify the ‘tones’ of sounds. A Helmholtz resonator is just the phenomenon of air resonating in a cavity. It is the way you get a tone from blowing across the mouth of an empty bottle. The frequency of the tone is the resonant frequency of the bottle. If you change the volume of the air cavity or the length or diameter of the neck of the bottle you change its resonant frequency and so the tone.
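The relationship between the bottle’s shape and its tone is captured by the standard Helmholtz resonator formula, sketched below in Python. The bottle dimensions are made-up examples, and end corrections to the neck length are ignored to keep things simple.

```python
# The standard Helmholtz resonator formula: the resonant frequency depends on
# the neck's cross-sectional area A, the cavity volume V and the neck length L.
# (End corrections to the neck length are ignored in this simple sketch.)
import math

def helmholtz_frequency(neck_area, cavity_volume, neck_length, speed_of_sound=343.0):
    """Resonant frequency in Hz: f = (c / 2*pi) * sqrt(A / (V * L))."""
    return (speed_of_sound / (2 * math.pi)) * math.sqrt(
        neck_area / (cavity_volume * neck_length))

# A rough wine-bottle-sized example: 2 cm diameter neck, 8 cm long, 0.75 litre cavity.
neck_area = math.pi * 0.01 ** 2                          # m^2
print(round(helmholtz_frequency(neck_area, 0.75e-3, 0.08), 1), "Hz")
# Prints roughly 125 Hz: about the low hum you get blowing across a wine bottle.
# A bigger cavity or a longer, narrower neck gives a lower tone.
```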

A Helmholtz resonator actually absorbs sound at its resonant frequency and at a small range of nearby frequencies. This happens because, when a sound strikes the resonator’s opening, the air mass in the neck starts to vibrate strongly at that resonant frequency and tries to leave. That makes the pressure of the air in the cavity lower than outside, so the air is drawn back into the cavity. This process repeats, but energy is lost each time, which causes the wave at this particular resonant frequency to dissipate. That means that specific sound is absorbed by the resonator. Helmholtz resonators also reradiate the sound that is not absorbed in all directions from the opening. That means any energy that wasn’t absorbed is spread around the room, and that improves the room’s acoustics too.

So back to those pots in the walls of medieval churches. What are they for? Well, they would have acted as Helmholtz resonators, so they were presumably designed to remove low-frequency sounds and so correct the acoustics of the vaults and domes. Ashes have been found in some of the pots. That would have increased the range of sound frequencies absorbed as well as helped spread the unabsorbed sound. St Andrew’s Church in Lyddington, Rutland, built in the 14th century, has some of the finest examples of these acoustic jars in the UK. Helmholtz resonators obviously predate Helmholtz, actually going back to the ancient Greeks and Romans. The pots in churches are thought to be based on the ideas of the Roman architect Vitruvius, who discussed the use of resonant jars in the design of amphitheatres to improve the clarity of speakers’ voices.

Designers of acoustic spaces like concert halls now use a variety of techniques to fix acoustic problems, including Helmholtz resonators, resonant panels and tube traps. They are all efficient ways of absorbing low-frequency sounds. Helmholtz resonators, though, have the particular advantage of being able to treat localized ‘problematic’ frequencies.

Those church designers were apparently rather sophisticated acoustic engineers. They had to be, of course. It would have been a little unfortunate to build a church so everyone could hear the word of God, only to have those words resonate with the walls rather than with the congregation.

– Dimitrios Giannoulis, Queen Mary University of London


Magazines …

This article was originally published on the CS4FN archive website and can also be found on pages 8 and 9 of issue 4 of Audio! Mad About Music Technology, our series of magazines celebrating sound and tech.




You cannot be serious! …Wimbledon line calls go wrong

Image by Felix Heidelberger from Pixabay (cropped)

The 2025 tennis championships are the first time Wimbledon has completely replaced its human line judges with an AI vision and decision system, Hawk-Eye. After only a week it caused controversy, with the system being updated after it failed to call a ball that was glaringly out in a Centre Court match between Brit Sonay Kartal and Anastasia Pavlyuchenkova. Apparently it had been switched off by mistake mid-game. This raises issues inherent in all computer technology that replaces humans: such systems can go wrong, humans are needed in the loop, humans can make errors in using them, and something has to be done when things do go wrong.

Perhaps because it is a vision system rather than generative AI, there has been little talk of whether Hawk-Eye is 100% accurate or not. Vision systems do not hallucinate in the way generative AI does, but they are still not infallible. The opportunity for players to appeal has been removed, however: in the original way Hawk-Eye was used, humans made the call and players could ask for Hawk-Eye to check. Now, Hawk-Eye makes a decision and basically that is it. A picture is shown on screen of a circle relative to the line, generated by Hawk-Eye to ‘prove’ the ball was in or out as claimed. It is then taken as gospel. Of course, it is just reflecting Hawk-Eye’s decision – what it “saw” – not reality and not any sort of separate evidence. It is just a visual version of the call shouted. However, it is taken as though it were absolute proof with no argument possible. If it is aiming to be really, really dependable then Hawk-Eye will have multiple independent systems sensing in different ways and voting on the result, as that is one of the ways computer scientists have invented to program dependability. However, whether it is 100% accurate isn’t really the issue. What matters is whether it is more accurate, making fewer mistakes, than human line judges. Undoubtedly it is, so it is an improvement, and the occasional uncaught mistake is not really the point.
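To see what voting between redundant sensors means in practice, here is a minimal sketch in Python. It is an assumed illustration of the general technique, not Hawk-Eye’s published design.

```python
# A minimal sketch of majority voting between redundant, independent sensors:
# one classic way to program dependability. If the sensors disagree with no
# clear majority, the decision is handed back to a human.
from collections import Counter

def majority_call(calls: list[str]) -> str:
    """Return the call most sensors agree on; flag a tie for the umpire."""
    ranked = Counter(calls).most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return "NO CONSENSUS - refer to umpire"
    return ranked[0][0]

print(majority_call(["OUT", "OUT", "IN"]))   # OUT: two sensors outvote one
print(majority_call(["OUT", "IN"]))          # tied, so a human decides
```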

However, the mistake in this problem call was different. The operators of the system had switched it off mistakenly mid-match due to “human error”. That raises two questions. First, why was it designed so that a human could accidentally turn it off mid-match? Don’t blame that person: it should not have been possible in the first place. Fix the system so it can’t happen again. That is what, within a day, the All England Club claimed to have done (whether resiliently remains to be seen).

The mistake raises another question too. Wimbledon had not handed the match over to the machines completely. A human umpire was still in charge. There was a human in the loop. Yet, we were told, they had no idea the system was switched off until the call for a ball that was very obviously out was not made. If that is so, why not? Hawk-Eye supposedly made two calls of “Stop”. Was that its way of saying “I am not working, so stop the match”? If that was meant as a message to the umpire, it is not a very clear way to deliver it, and it is guaranteed to be disruptive. It sounds a lot like a 404 error message, added by a programmer for a situation that they do not expect to occur!

A basic requirement of a good interactive system is that the system state is visible: the fact that it was not even switched on should have been totally obvious from the controls the umpire had, well before the bad call. That needs to be fixed too, just in case there is still a way Hawk-Eye can be switched off. It also raises the question of how often the system has been accidentally switched off, or powered down temporarily for other reasons, with no one knowing, because there was no glaringly bad call to miss at the time.

Another issue is that the umpire supposedly did follow the proper procedure, which was not simply to call the point (as might have happened in the past, given he apparently knew “the ball was out!”) but instead to have the point replayed. That was, unsurprisingly, considered unfair by the player who lost a point they should have won. Why couldn’t the umpire make a decision on the point? Perhaps because humans are no longer trusted at all as they were before. As suggested by Pavlyuchenkova, there is no reason why there cannot be a video review process in place so that the umpire can make a proper decision. That would be a way to add back in a proper appeal process.

Also, as was pointed out, what happens if the system goes down completely? Does Wimbledon now have to just stop until Hawk-Eye is fixed: “AI stopped play”? There have been lots of examples, over many decades as well as recently, of complex computer systems crashing. Hawk-Eye is a complex system, so problems are certainly possible. Programmers make mistakes (especially when making quick fixes to other problems, as was apparently just done here). If you replace people with computers, you need a reliable and appropriate backup that can kick in immediately, in place from the outset. A standard design principle is that programs should help humans avoid making mistakes, help them quickly detect mistakes when they do happen, and help them recover.

A tennis match is not actually high stakes by human standards. No one dies because of mistakes (though a LOT of money is at stake), but the issues are very similar in a wide range of systems where people can die – from control of medical devices to military applications, space, aircraft and nuclear power plant control – in all of which computers are replacing humans. We need good solutions, and they need to be in place before something goes wrong, not after. One issue, as systems become more and more automated, is that the human left in the loop to avoid disaster has more and more trouble tracking what the machine is doing as they themselves do less and less, making it harder to step in and correct problems in a timely way (as was likely the case with the Wimbledon umpire). The humans need to be not just a little bit in the loop but centrally so. How you do that for different situations is not easy to work out, but as tennis has shown, it can’t just be ignored. There are better solutions than the one Wimbledon is using, but to even consider them you first have to accept that computers do make mistakes, and so recognise that there is a problem to be solved.

– Paul Curzon, Queen Mary University of London


An AI Oppenheimer Moment?

A nuclear explosion mushroom cloud. Image by Harsh Ghanshyam from Pixabay

All computer scientists should watch the staggeringly good film Oppenheimer, by Christopher Nolan. It charts the life of J. Robert Oppenheimer, “father of the atom bomb”, and the team he put together at Los Alamos as they designed and built the first weapons of mass destruction. The film is about science, politics and war, not computer science, and all the science is quantum physics (portrayed incredibly well). Despite that, Christopher Nolan believes the film does have lessons for all scientists, and especially those in Silicon Valley.

Why? In an interview, he suggested that, given the current state of Artificial Intelligence, the world is at “an Oppenheimer moment”. Computer scientists in the 2020s, just like physicists in the 1940s, are creating technology that could be used for great good but could also cause great harm (including, in both cases, the possibility that we use it in a way that destroys civilisation). Should scientists and technologists stay outside the political realm and leave discussion of what to do with their technology to politicians, while the scientists do as they wish in the name of science? That leaves society playing a game of catch-up. Or do scientists and technologists have more responsibility than that?

Artificial Intelligence isn’t so obviously capable of doing bad things as an atomic bomb was, and still clearly is. There is also no clear imperative, such as Oppenheimer had, to get there before the fascist Nazi party, who were clearly evil and already using technology for evil (now the main imperative seems to be just to get there before someone else makes all the money instead of you). It is, therefore, far easier for those creating AI technology to ignore both the potential and the real effects of their inventions on society. However, it is now clear AI can do, and already is doing, lots of bad as well as good. Many scientists understand this and are focussing their work on developing versions that are, for example, built to be transparent and accountable, that are not biased, racist or homophobic, that do put children’s protection at the heart of what they do… Unfortunately, not all are. And there is one big elephant in the room. AI can be, and is being, put in control of weapons in wars that are actively taking place right now. There is an arms race to get there before the other side does. From mass identification of targets in the Middle East to AI-controlled drone strikes in the Ukraine war, military AI is a reality and is in control of killing people with only minimal, if any, real humans in the loop. Do we really want that? Do we want AIs in control of weapons of mass destruction? Or is that total madness that will lead only to our destruction?

Oppenheimer was a complex man, as the film showed. He believed in peace but, a brilliant theoretical physicist himself, he managed a group of the best scientists in the world in the creation of the greatest weapon of destruction built to that point: the first atom bomb. He believed it had to be used once so that everyone would understand that all-out nuclear war would end civilisation (it was of course used against Japan, not the already defeated Nazis, the original justification). However, he also spent the rest of his life working for peace, arguing that international agreements were vital to prevent such weapons ever being used again. In times of relative peace people forget about the power we have to destroy everyone. The worries only surface again when there is international tension and wars break out, such as in the Middle East or Ukraine. We need to always remember that the possibility is there, though, lest we use such weapons by mistake. Oppenheimer thought the bomb would actually end war, having come up with the idea of “mutually assured destruction” as a means for peace. The phrase aimed to remind people that these weapons could never be used. He worked tirelessly, arguing for international regulation and agreements to prevent their use.

Christopher Nolan was asked, if there were a special screening of the film in Silicon Valley, what message he would hope the computer scientists and technologists would take from it. His answer was that they should take home the message of the need for accountability. Scientists do have to be accountable for their work, especially when it is capable of having massively bad consequences for society. A key part of that is engaging with the public, industry and government; not with vested interests pushing for their own work to be allowed, but to make sure the public and policymakers do understand the science and technology so there can be fully informed debate. Both international law and international policy are now a long way off the pace of technological development. The willingness of countries to obey international law is also disintegrating, and there is a new, subtle difference from the 1940s: technology companies are now as rich and powerful as many countries, so corporate accountability is now needed too, not just agreements between countries.

Oppenheimer was vilified over his politics after the war, and his name is now forever linked with weapons of mass destruction. He certainly didn’t get everything right: there have been plenty of wars since, so he didn’t manage to end all war as he had hoped, though so far no nuclear war. However, despite the vilification, he did spend his life making sure everyone understood the consequences of his work. Asked if he believed we had created the means to kill tens of millions of Americans (in effect, everyone) at a stroke, his answer was a clear “Yes”. He did ultimately make himself accountable for the things he had done. That is something every scientist should do too. The Doomsday Clock is closer to midnight than ever (89 seconds to midnight, midnight being manmade global catastrophe). Let’s hope the tech bros and scientists of Silicon Valley are willing to become accountable too, never mind countries. All scientists and technologists should watch Oppenheimer and reflect.

– Paul Curzon, Queen Mary University of London


Dr Who? Dr You???

Image by Eduard Solà, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons

When The Doctor in Dr Who knows their time is up – usually because they’ve been injured so badly that they are dying – like all Time Lords, they can regenerate. They transform into a completely different body. They end up with a new personality, new looks, a new gender, even new teeth. Could humans one day regenerate too?

Your body is constantly regenerating itself too. New cells are born to replace the ones that die. Your hair, nails and skin are always growing and renewing. Every year, you lose and regain so much that you could make a pile of dead cells that would weigh the same as your body. And yet with all this change, every morning you look in the mirror and you look and feel the same. No new personality, no new teeth. How does the human body keep such incredible control?

Here’s another puzzler. Even though our cells are always being renewed, you can’t regrow your arm if it gets cut off. We know it’s not impossible to regrow body parts: we do it for small things like cells and whole toenails, and some animals, like lizards, can regrow their tails. Why can we regrow some things but not others?

Creation of the shape

All of those questions are part of a field in biology called morphogenesis. The word is from Greek, and it means ‘creation of the shape’. Scientists who study morphogenesis are interested in how cells come together to create bodies. It might sound a long way from computing, but Alan Turing became interested in morphogenesis towards the end of his life. He was interested in finding out about patterns in nature – and patterns were something he knew a lot about as a mathematician. A paper he wrote in 1951 described a way that Turing thought animals could form patterns like stripes and spots on their bodies and in their fur. The mechanisms he described explain how uniform cells could end up turning into different things: not only different patterns in different places, but different body parts in different places. That work is now the foundation of a whole sub-discipline of biology.

Up for the chop

Turing died before he could do much work on morphogenesis, but lots of other scientists have taken up the mantle. One of them is Alejandro Sánchez Alvarado, who was born in Venezuela but works at the Stowers Institute for Medical Research in Kansas City, in the US. He is trying to get to the bottom of questions like how we regenerate our bodies. He thinks that some of the clues could come from working on flatworms that can regenerate almost any part of their body. A particular flatworm, called Schmidtea mediterranea, can regenerate its head and its reproductive organs. You can chop its body into almost 280 pieces and it will still regenerate.

A genetic mystery

The funny thing is, flatworms and humans aren’t as different as you might think. They have about the same number of genes as us, even though we’re so much bigger and seemingly more complicated. Even their genes and ours are mostly the same. All animals share a lot of the same, ancient genetic material. The difference seems to come from what we do with it. The good news there is that as the genes are mostly the same, if scientists can figure out how flatworm morphogenesis works, there’s a good chance that it will tell us something about humans too.

One gene does it all

Alejandro Sánchez Alvarado did one series of experiments on flatworms where he cut off their heads and watched them regenerate. He found that the process looked pretty similar to watching organs like lungs and kidneys grow in humans as well as other animals. He also found that there was a particular gene that, when knocked out, takes away the flatworm’s ability to regenerate.

What’s more, he tried again in other flatworms that can’t normally regenerate whole body parts – just cells, like us. Knocking out that gene made their organs, well, fall apart. That meant that the organs that fell apart would ordinarily have been kept together by regrowing cells, and that the same gene that allows for cell renewal in some flatworms takes care of regrowing whole bodies, Dr Who-style, in others. Phew. A lot of jobs for one gene.

Who knows, maybe Time Lords and humans share that same gene too. They’re like the lucky, regenerating flatworms and we’re the ones who are only just keeping things together. But if it’s any consolation, at least we know that our bodies are constantly working hard to keep us renewed. We still regenerate, just in a slightly less spectacular way.

– the CS4FN team (updated from the archive)


How did the zebra get its stripes?

Head of a fish with a distinctive stripy, spotty pattern. Image by geraldrose from Pixabay

There are many myths and stories about how different animals gained their distinctive patterns. In 1901, Rudyard Kipling wrote a “Just So Story” about how the leopard got its spots, for example. The myths are older than that though, such as a story told by the San people of Namibia (and others) of how the zebra got its stripes – during a fight with a baboon as a result of staggering through the baboon’s fire. These are just stories. It was a legendary computer scientist and mathematician, who was also interested in biology and chemistry, who worked out the actual way it happens.

Alan Turing is one of the most important figures in Computer Science, having made monumental contributions to the subject, including what is now called the Turing Machine (giving a model of what a computer might be before they existed) and the Turing Test (kick-starting the field of Artificial Intelligence). Towards the end of his life, in the 1950s, he also made a major contribution to biology. He came up with a mechanism that he believed could explain the stripy and spotty patterns of animals, and he has largely been proved right. As a result those patterns are now called Turing Patterns. His work is now the inspiration for a whole area of mathematical biology.

How animals come to have different patterns has long been a mystery, yet all sorts of animals, from fish to butterflies, have them. How do different zebra cells “know” they ultimately need to develop into either black ones or white ones, in a consistent way so that stripes (not spots, or no pattern at all) result, whereas leopard cells “know” they must grow into a creature with spots? Both start from similar groups of uniform cells without stripes or spots. How do some that end up in one place “know” to turn black and others ending up in another place “know” to turn white in such a consistent way?

There must be some physical process going on that makes it happen so that as cells multiply, the right ones grow or release pigments in the right places to give the right pattern for that animal. If there was no such process, animals would either have uniform colours or totally random patterns.

Mathematicians have always been interested in patterns. It is what maths is actually all about. And Alan Turing was a mathematician. However, he was a mathematician interested in computation, and he realised the stripy, spotty problem could be thought of as a computational kind of problem. Now we use computers to simulate all sorts of real phenomena, from the weather to how the universe formed, and in doing so we are thinking in the same kind of way. In doing this, we are turning a real, physical process into a virtual, computational one underpinned by maths. If the simulation gets it right, then this gives evidence that our understanding of the process is accurate. This way of thinking has given us a whole new way to do science, as well as of thinking more generally (so a new kind of philosophy), and it starts with Alan Turing.

Back to stripes and spots. Turing realised it might all be explained by Chemistry and the processes that resulted from it. Thinking computationally he saw that you would get different patterns from the way chemicals react as they spread out (diffuse). He then worked out the mathematical equations that described those processes and suggested how computers could be used to explore the ideas.

Diffusion is just a way by which chemicals spread out. Imagine dropping some black ink onto some blotting paper. It starts as a drop in the middle, but gradually the black spreads out in an increasing circle until there is not enough to spread further. The expanding circle stops. Now, suppose that instead of just ink we have a chemical (let’s call it BLACK, after its colour) that, as it spreads, also creates more of itself. Now BLACK will gradually spread out uniformly everywhere. So far, so expected. You would not expect spots or stripes to appear!

Next, however, let’s consider what Turing thought about. What happens if that chemical BLACK produces another chemical WHITE as well as more BLACK? Now, starting with a drop of BLACK, as it spreads out, it creates both more BLACK to spread further, but also WHITE chemicals as well. Gradually they both spread. If the chemicals don’t interact then you would end up with BLACK and WHITE mixed everywhere in a uniform way leading to a uniform greyness. Again no spots or stripes. Having patterns appear still seems to be a mystery.

However, suppose instead that the presence of the WHITE chemical actually stops BLACK creating more of itself in that region. Anywhere WHITE becomes concentrated gets to stay WHITE. If WHITE spreads (i.e. diffuses) faster than BLACK, then it reaches new places first, and those places become WHITE, with BLACK suppressed there. However, no new BLACK means no more new WHITE to spread further. Where there is already BLACK, however, it continues to create more BLACK, leading to areas that become solid BLACK. Over time these spread around and beyond the WHITE areas that stopped spreading, and also create new WHITE that again spreads faster. The result is a pattern. What kind of pattern depends on the speed of the chemical reactions and how quickly each chemical diffuses, but where those are the same, because it is the same chemicals, the same kind of pattern will result: zebras will end up with stripes and leopards with spots.
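Turing’s mechanism is simple enough to simulate in a few lines of code. The sketch below is a toy one-dimensional activator-inhibitor (reaction-diffusion) simulation in Python: BLACK makes more of itself but is suppressed by WHITE, WHITE is produced wherever BLACK is concentrated, and WHITE diffuses much faster. The particular equations and numbers are illustrative choices for demonstration, not Turing’s original ones, but they show a near-uniform starting state organising itself into stripes.

```python
# A toy 1D reaction-diffusion (activator-inhibitor) simulation.
# BLACK (activator) makes more of itself but is suppressed by WHITE (inhibitor);
# WHITE is produced where BLACK is concentrated and diffuses much faster.
import numpy as np

rng = np.random.default_rng(0)
N, STEPS, dt = 200, 20000, 0.02
D_BLACK, D_WHITE = 0.25, 12.5          # WHITE spreads much faster than BLACK

# Start almost uniform: the steady state (2, 2) plus a little random noise.
black = 2.0 + 0.01 * rng.random(N)
white = 2.0 + 0.01 * rng.random(N)

def laplacian(u):
    """How much each cell differs from its neighbours (drives diffusion)."""
    return np.roll(u, 1) + np.roll(u, -1) - 2 * u

for _ in range(STEPS):
    # BLACK self-amplifies (black**2) but is suppressed by WHITE (/ white);
    # WHITE is created wherever BLACK is concentrated and decays over time.
    new_black = black + dt * (D_BLACK * laplacian(black) + black**2 / white - black)
    new_white = white + dt * (D_WHITE * laplacian(white) + black**2 - 2 * white)
    black, white = new_black, np.maximum(new_white, 1e-6)  # guard against divide-by-zero

# Cells where BLACK ends up above average form the dark stripes.
print("".join("#" if b > black.mean() else "." for b in black))
```

Changing the diffusion speeds or reaction rates changes the width and spacing of the stripes; in two dimensions the same kind of rules can give spots instead.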

This is now called a Turing pattern and the process is called a reaction-diffusion system. It gives a way that patterns can emerge from uniformity. It doesn’t just apply to chemicals spreading but to cells multiplying and creating different proteins. Detailed studies have shown it is the mechanism in play in a variety of animals that leads to their patterns. It also, as Alan Turing suggested, provides a basis to explain the way the different shapes of animals develop despite starting from identical cells. This is called morphogenesis. Reaction-diffusion systems have also been suggested as the mechanism behind how other things occur in the natural world, such as how fingerprints develop. Despite being ignored for decades, Turing’s theory now provides a foundation for the idea of mathematical biology. It has spawned a whole new discipline within biology, showing how maths and computation can support our understanding of the natural world. Not something that the writers of all those myths and stories ever managed.

– Paul Curzon, Queen Mary University of London


If you go down to the woods today…

A girl walking through a meadow full of flowers within woods. Image by Jill Wellington from Pixabay

In the 2025 RHS Chelsea Flower Show there was one garden that was about technology as well as plants: the Avanade Intelligent Garden, which explored how AI might be used to support plants. Each of the trees contained probes that sensed and recorded data about them, which could then be monitored through an app. This takes pioneering research from over two decades ago a step further, incorporating AI into the picture and making it mainstream. Back then, a team led by Yvonne Rogers built an ‘Ambient Wood’, aiming to add excitement to a walk in the woods...

Mark Weiser had a dream of ‘Calm Computing’, and while computing sometimes seems ever more frustrating to use, the ideas led to lots of exciting research that saw at least some computers disappearing into the background. His vision was driven by a desire to remove the frustration of using computers, but also by the realization that the most profound technologies are the ones that you just don’t notice. He wanted technology to actively remove frustrations from everyday life, not just the ones caused by computers. He wrote of wanting to “make using a computer as refreshing as taking a walk in the woods.”

Not calm, but engaging and exciting!

No one argues that computers should be frustrating to use, but Yvonne Rogers, then of the Open University, had a different idea of what the new vision could be. Not calm. Anything but calm in fact (apart from frustrating of course). Not calm, but engaging and exciting!

Her vision of Weiser’s tranquil woods was not relaxing but provocative and playful. To prove the point her team turned some real woods in Sussex into an ‘Ambient Wood’. The Ambient Wood was an enhanced wood. When you entered it, you took probes with you that you could point and poke with. They allowed you to take readings of different kinds in easy ways. Time-hopping ‘periscopes’ placed around the woods allowed you to see those patches of woodland at other times of the year. There was also a special woodland den where you could see the bigger picture of the woods as all your readings were pulled together using computer visualisations.

Not only was the Ambient Wood technology visible and in your face but it made the invisible side of the wood visible in a way that provoked questions about the wildlife. You noticed more. You saw more. You thought more. A walk in the woods was no longer a passive experience but an active, playful one. Woods became the exciting places of childhood stories again but now with even more things to explore.

The idea behind the Ambient Wood, and similar ideas like Bristol’s Savannah project, where playing fields were turned into African savannah, was to revisit the original idea of computers but in a new context. Computers started as tools, and tools don’t disappear: they extend our abilities. Tools originally extended our physical abilities – a hammer allows us to hit things harder, a pulley to lift heavier things. They make us more effective and allow us to do things a mere human couldn’t do alone. Computer technology can do a similar thing, but for the human intellect… if we design it well.

“The most important thing the participants gained was a sense of wonderment at finding out all sorts of things and making connections through discovering aspects of the physical woodland (e.g., squirrel’s droppings, blackberries, thistles)”

– Yvonne Rogers

The Weiser dream was that technology invisibly watches the world and removes the obstacles in the way before you even notice them. It’s a little like the way servants to the aristocracy were expected to have everything always just right, but at the same time were not to be noticed by those they served. The way this is achieved is to have technology constantly monitoring, understanding what is going on and how it might affect us, and then calmly fixing things. The problem at the time was that it needed really ‘smart’ technology – a high level of artificial intelligence – to achieve, and that proved more difficult than anyone imagined (though perhaps we are now much closer than we were). Our behaviour and desires, however, are full of subtlety and much harder to read than was imagined. Even a super-intellect would probably keep getting it wrong.

There are also ethical problems. If we do ever achieve the dream of total calm we might not like it. It is very easy to be gung ho with technology and not realize the consequences. Calm computing needs monitors – the computer measuring everything it can so it has as much information as possible to make decisions from (see Big Sister is Watching You).

A classic example of how this can lead to people rejecting technology intended to help was a project to make a ‘smart’ residential home for the elderly. The idea was that by wiring up the house to track the residents and monitor them, the nurses would be able to provide much better care, and relatives would be able to see how things were going. The place was filled with monitors. For example, sensors in the beds measured residents’ weight while they slept. Each night the occupant’s weight could invisibly be taken and the nurses alerted to worrying weight loss over time. The smart beds could also detect tossing and turning, so someone having bad nights could be helped. A smart house could use similar technology to help you or me have a good night’s sleep and help us diet.

The problem was the beds could tell other things too: things that the occupants preferred to keep to themselves. Nocturnal visitors also showed up in the records. That’s the problem if technology looks after us every second of the day: the records may give away to others far more than we are happy with.

Yvonne’s vision was different. It was not that the computers try to second-guess everything but instead that they extend our abilities. It is quite easy for new technology to leave us poorer intellectually than we were. Calculators are a good example. Yes, we can do more complex sums quickly now, but at the same time, without a calculator many people can’t do the sums at all. Our abilities have both improved and been damaged at the same time. Generative AI seems to be currently heading the same way. What the probes do, instead, is extend our abilities, not reduce them: allowing us to see the woods in a new way, but to use the information however we wish. The probes encourage imagination.

The alternative to the smart house (or calculator) that pampers you, allowing your brain to stay in neutral, or the residential home that monitors you for the sake of the nurses and your relatives, is one where the sensors are working for you: where you are the one the bed reports to, helping you then make decisions about your health, or where the monitors you wear are (only) part of a game that you play because it’s fun.

What next? Yvonne suggested the same ideas could be used to help learning and exploration in other ways, such as understanding our bodies: “I’d like to see kids discover new ways of probing their bodies to find out what makes them tick.”

So if Yvonne’s vision is ultimately the way things turn out, you won’t be heading for a soporific future while the computer deals with real life for you. Instead it will be a future where the computers are sparking your imagination, challenging you to think, filling you with delight…and where the woods come alive again just as they do in the storybooks (and in the intelligent garden).

Paul Curzon, Queen Mary University of London

(adapted from the archive)


This blog is funded by EPSRC on research agreement EP/W033615/1.
