Jerry Elliot High Eagle: Saving Apollo 13

Apollo 13 Mission patch of three golden horses travelling from Earth to the moon
Image by NASA, public domain, via Wikimedia Commons

Jerry Elliot High Eagle was possibly the first Native American to work in NASA mission control. He worked for NASA for over 40 years, from the Apollo moon landings up until the space shuttle missions. He was a trained physicist with both Cherokee and Osage heritage and played a crucial part in saving the Apollo 13 crew when an explosion meant they might not get back to Earth alive.

The story of Apollo 13 is told in the Tom Hanks film Apollo 13. The aim was to land on the moon for a third time, following the two successful lunar missions of Apollo 11 and Apollo 12. That plan was aborted on the way there, however, after command module pilot Jack Swigert radioed his now famous, if often misquoted, words “Okay, Houston … we’ve had a problem here”. It was a problem that very soon seemed to mean the crew would die in space: an oxygen tank had just exploded. Instead of a moon landing, the mission turned into the most famous rescue attempt in history – could James Lovell, Jack Swigert and Fred Haise get back to Earth before their small spacecraft turned into a frozen, airless and lifeless coffin?

While the mission control team worked with the crew on how to keep the command and lunar modules habitable for as long as possible (they were rapidly running out of breathable air and water, had lost electrical power and were getting dangerously cold), Elliot worked on actually getting the craft back to Earth. He was the “retrofire officer” for the mission, which meant he was an expert in, and responsible for, the trajectory Apollo 13 took from the Earth to the moon and back. He had to compute a completely new trajectory from where they now were: one that would get them back to Earth as fast and as safely as possible. It looked impossible given the limited time the crew could stay alive. Elliot wasn’t a quitter, though, and motivated himself with the thought:

“The Cherokee people had the tenacity to persevere on the Trail of Tears … I have their blood and I can do this.” 

The Trail of Tears was the forced removal of Native Americans from their ancestral homelands by the US government in the 19th century, partly to make way for a gold rush. Today we would call it ethnic cleansing and genocide. Some 60,000 Native American people were moved, with the Cherokee forcibly marched a thousand miles to an area west of the Mississippi, thousands dying along the way.

The best solution for Apollo 13 was to keep going and slingshot around the far side of the moon, using the moon’s gravity, together with carefully timed engine burns, to push the spacecraft back towards Earth faster than those engines alone could manage. The trajectory he computed had to be absolutely accurate or the crew would not get home: he has suggested the accuracy needed was like “threading a needle from 70 feet away!” Get it wrong and the spacecraft could miss the Earth completely, or hit the atmosphere at the wrong angle and either burn up or skip back off into space.

Jerry Elliot High Eagle, of course, famously got it right: the crew returned safely to Earth, and Elliot was awarded the Presidential Medal of Freedom, the highest civilian honour in the US, for the role he played. Native American people also gave him the name High Eagle for his contributions to space exploration.

Paul Curzon, Queen Mary University of London


Service Model: a review

A robot butler outline on a blood red background
Image by OpenClipart-Vectors from Pixabay

Artificial intelligences are just tools that do nothing but follow their programming. They are not self-aware and have no capacity for self-determination. They are a what, not a who. So what is it like to be a robot just following its (complex) program, making decisions based on data alone? What is it like to be an artificial intelligence? What is the real difference between being self-aware and not? How different is that from being human? These are the themes explored by the dystopian (or is it utopian?) and funny science fiction novel “Service Model” by Adrian Tchaikovsky.

In a future where the tools of computer science and robotics have been used to make human lives as comfortable as conceivably possible, Charles(TM) is a valet robot looking after his Master’s every whim. His every action is driven by a task list turned into sophisticated, human-facing interaction. Charles is designed to be totally logical but also totally loyal. What could go wrong? Everything, it turns out, when he apparently murders his Master. Why did it happen? Did he actually do it? Is there a bug in his program? Has he been infected by a virus? Was he being controlled by others as part of an uprising? Has he become self-aware, able to make his own decision to turn on his evil Master? And what should he do now? Will his task list continue to guide him once he is in a totally alien context he was never designed for, where those around him are apparently being illogical?

The novel explores, in a fun but serious way, important topics we all need to grapple with. It looks at what AI tools are for, and the difference between a tool and a person even when they do the same jobs. Is it actually good to replace the work of humans with programs just because we can? Who actually benefits and who suffers? AI is being promoted as a silver bullet that will solve our economic problems, but we have been replacing humans with computers for decades on the back of that promise, and prices still go up while inequality seems to do nothing but rise, with ever more children living in poverty. Who is actually benefiting? A small number of billionaires certainly are. Is everyone? We have many better “toys” that superficially make life easier and more comfortable. We can buy anything we want from the comfort of our sofas, self-driving cars promise to take us anywhere we want, we can get answers to any question we care to ask, and ever more routine jobs, boring or otherwise, are done by machines, with a promise of utopia to come. But are we solving problems or making them with our drive to automate everything? Is it good for society as a whole, or just for vested interests? Are we losing track of what is most important about being human? Charles will perhaps help us find out.

Thinking about the consequences of technology is an important part of any computer science education, and all CS professionals should think about the ethics of what they are involved in. Reading great science fiction such as this is one good way to explore those consequences, though as Ursula Le Guin said: the best science fiction doesn’t predict the future, it tells us about ourselves in the present. Following in the tradition of “The Machine Stops” and “I, Robot”, “Service Model” (and the short story “Human Resources” that comes with it) does just that, if in a satirical way. It is a must-read for anyone involved in the design of AI tools, especially those promoting the idea of utopian futures.

Paul Curzon, Queen Mary University of London


Crystal ball coupons – what your data might be giving away

Big companies know far more about you than you think. You have very little privacy from their all-seeing algorithms. They may even have worked out some very, very personal things about you, that even your parents don’t know…

An outraged father in Minneapolis stormed into a supermarket complaining that his school-aged daughter was being sent coupons for baby clothes. The shop manager apologised … but it later turned out there was no mistake in the tiny tot offers. The teenager was expecting a baby but had not told her father. Her situation was revealed not by a crystal ball but by an algorithm. The shop was using Big Data processing algorithms that had noticed patterns in her shopping and linked them to “pregnant”. They had even worked out her likely delivery date. Her buying habits had triggered targeted marketing.

Algorithms linked her shopping patterns to “pregnant”

When we use a loyalty card or an online account, our sales activity is recorded. This data is added to a big database along with our details, the time, date, location and products bought (or browsed). It is then analysed. Patterns in behaviour can be tracked, and our habits, likes, dislikes and even changes in our personal situation deduced from those patterns. Sometimes this seems quite useful, other times a bit annoying; it can surprise us, and it can be wrong.
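How do the algorithms link shopping patterns to “pregnant”? The details of real retail models are secret, but the flavour of the idea is easy to sketch. Here is a toy Python version, completely invented for illustration, that scores a basket of purchases against weights a real system would have learned from millions of past shopping histories:

# A toy version of pattern-based prediction. The products and
# weights here are invented; real systems learn them statistically
# from millions of shopping histories.
indicator_weights = {
    "unscented lotion": 0.3,
    "vitamin supplements": 0.2,
    "cotton wool (large pack)": 0.25,
    "baby clothes": 0.5,
}

def pregnancy_score(basket):
    """Add up the weights of any indicator products in the basket."""
    return sum(indicator_weights.get(item, 0.0) for item in basket)

basket = ["bread", "unscented lotion", "vitamin supplements"]
if pregnancy_score(basket) > 0.4:
    print("Send baby-clothes coupons!")

A real system tracks far more products and can estimate extras like a likely delivery date, but the principle is the same: patterns in the data stand in for facts nobody ever told the shop.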

This kind of computing is not just used to sell products: it is also used to detect fraud and to predict where the next outbreak of flu will happen. Our banking behaviour is tracked to flag suspicious transactions and help stop theft and money laundering. When we search for ‘high temperature’ our activity might be added to the data used to predict flu trends. However, the models are not always right, as there can be a lot of ‘noise’ in the data. Maybe we bought baby clothes as a present for our aunt, and were googling temperatures because we wanted to go somewhere hot for our holiday.

Whether the predictions are spot on or not is perhaps not the most important thing. Maybe we should be considering whether we want our data saved, mined and used in these ways. A predictive pregnancy algorithm seems like an invasion of privacy, even like spying, especially if we don’t know about it. Predictive analytics is big; big data is really big and big business wants our data to make big profits. Think before you click!

Jane Waite, Queen Mary University of London (now at Raspberry Pi)


Hacking DNA

A jigsaw of DNA with pieces missing
Image by Arek Socha from Pixabay

DNA is the molecule of life. Our DNA stores the information of how to create us. Now it can be hacked.

DNA consists of two strands coiling round each other in a double helix. It’s made of four building blocks, or ‘nucleotides’, labelled A, C, G and T. Different orderings of these letters give the information for how to build each unique creature, you and me included. Sequences of DNA are analysed in labs by a machine called a gene sequencer, which works out the order of the letters and so tells us what’s in the DNA. When biologists talk of sequencing the human genome (or that of another animal or plant) they mean using a gene sequencer to work out the specific sequences in the DNA of that species. Gene sequencers are also used by forensic scientists to work out who might have been at the scene of a crime, and to predict whether a person has genetic disorders that might lead to disease.

DNA can be used to store information other than that of life: any information in fact. This may be the future of data storage. Computers use a code made of 0s and 1s. There is no reason why you can’t encode all the same information using A, C, G, T instead. For example, a string of 1s and 0s might be encoded by having each pair of bits represented by one of the four nucleotides: 00 = A, 01 = C, 10 = G and 11 = T. The idea has been demonstrated by Harvard scientists who stored a video clip in DNA.
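To make the idea concrete, here is a minimal Python sketch of that two-bits-per-nucleotide encoding, together with its decoder. It is our own illustration of the principle, not the scheme the Harvard team actually used:

# A toy illustration of storing binary data as DNA letters,
# using the mapping 00=A, 01=C, 10=G, 11=T described above.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a string of A/C/G/T, two bits per letter."""
    bits = "".join(format(byte, "08b") for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]]
                   for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Turn a string of A/C/G/T back into the original bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"Hi"
dna = encode(message)
print(dna)                      # CAGACGGC
assert decode(dna) == message   # we get the original bytes back

Real DNA storage schemes are cleverer than this – they avoid long runs of the same letter, which are hard to synthesise and read accurately, and add error-correcting codes – but the principle is exactly this substitution.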

It also leads to whole new cyber-security threats. A program is just data too, so can be stored in DNA sequences, for example. Researchers from the University of Washington have managed to hide a malicious program inside DNA that can attack the gene sequencer itself!

The gene sequencer not only works out the sequence of DNA symbols. Since it is controlled by a computer, it also converts that sequence into a binary form that can then be processed as normal, and because DNA sequences are long, the sequencer compresses them. The attack made use of a common kind of bug that malware often exploits: a ‘buffer overflow’ error. These arise when the person writing a program includes instructions to set aside a fixed amount of space to store data, but doesn’t include code to make sure only that amount of data is stored. Any extra data then overflows into the memory area beyond the space allocated. If what is stored there is later treated as instructions to execute, the effect can be to replace part of the program with new, malicious instructions.
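Python checks lengths for you, so you cannot write a real buffer overflow in it, but we can simulate the idea. In this toy sketch, entirely invented for illustration, ‘memory’ is a bytearray holding an 8-byte input buffer followed immediately by the next ‘instruction’ the machine will run. Copying input without checking its length lets an attacker’s data spill over and replace that instruction:

# A toy simulation of a buffer overflow. Real overflows happen in
# languages like C that don't check lengths; here raw memory is
# modelled as a bytearray just to show the principle.
BUFFER_SIZE = 8
memory = bytearray(BUFFER_SIZE + 8)     # 8-byte buffer, then other data
memory[BUFFER_SIZE:] = b"RUN:GOOD"      # pretend this is the instruction
                                        # the machine will execute next

def unsafe_copy(data: bytes) -> None:
    """Copy input into the buffer with NO length check (the bug)."""
    for i, byte in enumerate(data):
        memory[i] = byte                # happily writes past index 7!

unsafe_copy(b"AAAAAAAA" + b"RUN:EVIL")  # 8 bytes fill the buffer,
                                        # the rest spill over
print(bytes(memory[BUFFER_SIZE:]))      # b'RUN:EVIL' - instruction replaced

A safe version would refuse to copy more than BUFFER_SIZE bytes – exactly the kind of length check the vulnerable code was missing.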

When the gene sequencer reaches that malware DNA, it converts it back into 1s and 0s like everything else. If those bits are then treated as instructions and executed, the malware launches its attack and takes control of the computer that runs the sequencer. In principle, an attack like this could be used to fake the results of subsequent DNA tests, subverting court cases; to disrupt hospital testing; to steal sensitive genetic data; or to corrupt DNA-based memory.

Fortunately, the risks of exactly this attack causing problems in the real world are very low, but the team wanted to highlight the potential for DNA-based attacks in general. They pointed out how lax the development processes and controls were for much of the software used in these labs. The bigger risk right now is probably from scientists falling for spear phishing scams (where fake emails pretending to be from someone you know take you to a malware website) or just forgetting to change the default password on the sequencer.

Paul Curzon, Queen Mary University of London


Password strength and information entropy

Comparison of the password strength of Tr0ub4dor&3 (easy to guess, difficult to remember) and correcthorsebatterystaple (hard to guess but easy to remember if you turn it into a picture)
CREDIT: Randall Munroe, xkcd.com https://xkcd.com/936 – reprinted under a CC Attribution-NonCommercial 2.5 License

How do you decide whether a password is strong? Computer scientists have a mathematical way to do it. Based on an idea called information entropy, it’s part of “Information Theory”, invented by electrical engineer Claude Shannon back in 1948. This XKCD cartoon for computer scientists uses the idea to compare two different passwords. Unless you understand information theory, though, it is a bit mind-blowing to work out what is going on… so let’s explain the computer science!

Entropy is based on the number of guesses someone would need to make trying all the possibilities for a password one at a time – doing what computer scientists call a brute force attack. Think about a PIN on a mobile phone or cash point. Suppose it was a single digit – there would be 10 possibilities to try. If 2-digit PINs were required then there are 100 different possibilities. With the normal 4 digits you need 10,000 (10^4 = 10x10x10x10) guesses to be sure of getting it. Different symbol sets lead to more possibilities. If you had to use lower case letters instead of digits, there are 26 possibilities for length 1, so over 450,000 (26^4 = 26x26x26x26) guesses are needed for a 4-letter password. If upper case letters are allowed too, that goes up to more than 7 million (52 letters, so 52^4 = 52x52x52x52) guesses. If you know they used an actual word, though, you don’t have to try all the possibilities, just the words: there are only about 5000 four-letter words, so far fewer guesses are needed. So password strength depends on the number of symbols that could be used, but also on whether the PIN or password was chosen randomly (words aren’t a random sequence of letters!)

To make everything standard, Shannon did entropy calculations in binary, assuming a symbol set of only 0 and 1 (so all the answers become powers of 2 because he used 2 symbols). Entropy is then measured in the number of ‘bits’ needed to count all the guesses. Any other group of symbols is converted to binary first. If a cashpoint only had buttons A, B, C and D for PINs, then to do the calculation you count those 4 options in binary: 00 (A), 01 (B), 10 (C), 11 (D), and see that you need 2 bits to do it (2^2 = 2×2 choices). Real cashpoints have 10 digits and need just over 3 bits to represent all of a 1-digit PIN (2^3 = 2x2x2 = 8, not enough; 2^4 = 2x2x2x2 = 16, more than you need; so the answer is more than 3 but less than 4). Its entropy would be just over 3. To count the possibilities for a 4-digit PIN you need just over 13 bits, so that is its entropy. A single lower-case letter has 26 possibilities, needing just under 5 bits (2^5 = 32), so its entropy is about 4.7, and so on.
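If you want to check these numbers yourself, the calculation is just a logarithm: the entropy of N equally likely possibilities is log2(N) bits, and the bits add up as you add symbols. A quick Python sketch:

# Entropy of a randomly chosen password: log2 of the number of
# equally likely possibilities, i.e. length * log2(symbol count).
import math

def entropy_bits(symbols: int, length: int) -> float:
    """Bits needed to count every string of this length over the symbols."""
    return length * math.log2(symbols)

print(entropy_bits(10, 1))   # 1-digit PIN: ~3.3 bits
print(entropy_bits(10, 4))   # 4-digit PIN: ~13.3 bits
print(entropy_bits(26, 1))   # one lower-case letter: ~4.7 bits
print(entropy_bits(52, 4))   # 4 letters, either case: ~22.8 bits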

So entropy is not measured directly in terms of guesses (which would be very big numbers) but indirectly in terms of bits. If you determine the entropy to be 28 bits (as in the cartoon), that means the number of different possibilities to guess would fit in 28 bits if each guess was given its own unique binary code. 28 bits of binary can represent over 268 million different things (2^28), so that is the number of guesses actually needed. It is a lot of guesses, but not so many that a computer couldn’t try them all fairly quickly, as the cartoon points out!

Where do those 28 bits come from? Well, they assume the hacker is just trying to crack passwords that follow a common pattern people use (so are not that random). The hacker assumes the person did the following: take a word; maybe make the first letter a capital; swap digits in place of some similar-looking letters; and finally add a digit and a punctuation symbol at the end. It follows the general advice for passwords, and looks random … but is it?

How do we work out the entropy of a password invented that way? First, think about the base word the password is built around. The cartoon estimates it as 16 bits for a word of up to 9 lower-case letters, so is assuming there are 2^16 (i.e. about 65,000) possible base words. There are about 40,000 nine-letter words in English, so that would be an over-estimate if you knew the length for certain; perhaps not once you allow things like fictional names and shorter words too.

As the pattern allows the first letter to be upper-case, that adds 1 more bit for the two guesses now needed for each word tried: check it with an upper-case first letter, then with a lower-case one. Similarly, as any letter ‘o’ might have been swapped for 0, and any ‘a’ for 4 (as people commonly do), this adds 3 more bits (assuming there are at most 3 such opportunities per word). Finally, we need about 3 bits for the choice of digit and another 4 bits for the common punctuation characters added on the end. Another bit is added for the two possible orders of punctuation and digit. Add up all those bits and we have the 28 bits suggested in the cartoon (where bits are represented by little squares).

Now do a similar calculation for the other way of creating passwords suggested in the cartoon. If there are only about 2000 really common words a person might choose from, we need 11 bits per word (2^11 = 2048). If we string 4 such words together completely at random (not a recognisable phrase, with no link between the words) we get the much larger entropy of 44 bits overall. More bits means harder to crack, so this password will be much, much harder to break than the first: it takes over 17 trillion guesses rather than 268 million.
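Putting the cartoon’s numbers together, a few lines of Python show just how big the difference is (the rate of 1000 guesses per second is the cartoon’s assumption for an attack on a remote web service):

# The cartoon's two passwords compared, using its own bit counts.
pattern_bits = 16 + 1 + 3 + 3 + 4 + 1   # base word + capital + letter
                                        # swaps + digit + punctuation
                                        # + ordering = 28 bits
four_word_bits = 4 * 11                 # 4 words from ~2000 (2^11) = 44 bits

RATE = 1000                             # guesses per second (assumed)
SECONDS_PER_DAY = 60 * 60 * 24

days = 2**pattern_bits / RATE / SECONDS_PER_DAY
years = 2**four_word_bits / RATE / SECONDS_PER_DAY / 365

print(f"Tr0ub4dor&3 pattern: about {days:.0f} days to try every guess")
print(f"Four random words:   about {years:.0f} years")

Those extra 16 bits of entropy turn a three-day job into one taking over five centuries.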


The serious joke of the cartoon is that the rules we are told to follow lead to people creating passwords that are not very random at all, precisely because they have been created by following rules. That means they are easy to crack (but still hard to remember). If instead you use the four longish, unconnected words method, which doesn’t obey any of the rules we are told to follow, you actually get a password that is much easier to remember (if you turn it into a surreal picture and remember the picture), but harder to crack, because it actually has more randomness in it! That is the real lesson. However you create a password, it has to have lots of randomness in it to be strong. Entropy gives you a way to check how well you have done.

Paul Curzon, Queen Mary University of London


The robot always wins

Children playing Rock Paper Scissors (Janken)
Image by HeungSoon from Pixabay

Researchers in Japan made a robot arm that always wins at rock, paper, scissors (supposedly a game of pure chance). Not with the ultra-clever psychology that the best human players use, but with old-fashioned cheating. The robot uses high-speed motors and a precise computer vision system to recognise whether its human opponent is making the sign for rock, paper or scissors. One millisecond later, it plays the sign that beats whatever the human chose. Because the whole process is so quick, it looks to humans like the robot is playing at the same time. See for yourself by following the link below to watch the video of this amazing cheating robot.

Watch …

Paul Curzon, Queen Mary University of London

Did you know?

The word ‘robot’ came into the English language over 100 years ago, in the early 1920s. Before that the words ‘automaton’ or ‘android’ were used. In 1920 Czech playwright Karel Čapek published his play “R.U.R.” (Rossum’s Universal Robots, or Rossumovi Univerzální Roboti) and his brother Josef suggested calling the artificial workers ‘roboti’, from the Czech word ‘robota’, meaning ‘forced labour’. In the late 1930s there was a performance of the play at the People’s Palace in London’s Stepney Green / Mile End. This building is now part of Queen Mary University of London (some of our computer science lectures take place there) and, one hundred years on, QMUL also has a Centre for Advanced Robotics.


He attacked me with a dictionary!

Letters in an unreadable muddle
Image by JL G from Pixabay

You might be surprised at how many people have something short, simple (and stupid!) like ‘password’ as their password. Some people add a number to make it harder to guess (‘password1’) but unfortunately that doesn’t help. For decades the official advice has been to use a mixture of lower case (abc) and upper case (ABC) characters as well as numbers (123) and special characters (such as & or ^). To meet these rules some people substitute numbers for letters (for example 0 for O, or 4 for A, and so on). Following these rules might lead you to create something like “P4ssW0^d1”, which looks like it might be difficult to crack, but isn’t. The problem is that people tend to use the same substitutions, so password-crackers can predict, and so break, them too.

Hackers know the really common passwords people use like ‘password’, ‘qwerty’ and ‘12345678’ (and more) so will just try them as a matter of course until they very quickly come across one of the many suckers who used one. Even apparently less obvious passwords can be easy to crack, though. The classic algorithm used is a ‘dictionary attack’.

The simple version of this is to run a program that tries each word in an online dictionary, one at a time, as the password until it finds a word that works. It takes a program fractions of a second to check every word like this. Using foreign words doesn’t help, as hackers make dictionaries by combining those of every known language into one big universal dictionary. That might seem like a lot of words, but it’s not for a computer.
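Here is a minimal Python sketch of that simple version. Everything in it is invented for illustration – the check_password function is a stand-in for whatever real check the attacker is up against, and a real wordlist would have millions of entries – but the loop is the whole algorithm:

# A toy dictionary attack: try each word until one works.
def check_password(guess: str) -> bool:
    """Stand-in for a real login check or password-hash comparison."""
    return guess == "sunshine"      # the secret, for illustration only

wordlist = ["letmein", "dragon", "sunshine", "troubadour"]

for word in wordlist:
    if check_password(word):
        print("Cracked! The password was:", word)
        break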

You might think you can use imaginary words from fiction instead – names of characters in Lord of the Rings, perhaps, or the names of famous people. However, it is easy to compile lists of words like that too and add them to the password cracking dictionary. If it is a word somewhere on the web then it will be in a dictionary for hacking use.

Going a step further, a hacking program can take all these words and create versions with numbers added, 4 swapped for A, and so on. These new potential passwords become part of the attack dictionary too. More can be added by taking short words and combining them, including ones that appear in well-known phrases like ‘starwars’ or ‘tobeornottobe’.
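Generating those variants is easy to automate. Another sketch, again purely illustrative, of how a single dictionary word gets expanded with the common tricks:

# Expanding a dictionary word with predictable "clever" variations.
def variants(word: str):
    """Yield the variations crackers routinely add to their dictionaries."""
    swapped = word.replace("a", "4").replace("o", "0").replace("e", "3")
    for base in {word, word.capitalize(), swapped, swapped.capitalize()}:
        yield base
        for digit in "0123456789":
            yield base + digit       # the classic 'password1' trick

print(sorted(variants("password"))[:5])
# ['P4ssw0rd', 'P4ssw0rd0', 'P4ssw0rd1', 'P4ssw0rd2', 'P4ssw0rd3']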

The list gets bigger and bigger, but computers are fast, and hackers are patient, so that’s no big deal… so make sure your password isn’t in their dictionary!

– Jo Brodie and Paul Curzon, Queen Mary University of London

from the archive


Ninja White Hat Hacking

Female engineer working at a computer
Image by This_is_Engineering from Pixabay

Computer hackers are the bad guys, aren’t they? They cause mayhem: shutting down websites, releasing classified information, stealing credit card numbers, spreading viruses. They can cause lots of harm, even when they don’t mean to. Not all hackers are bad though. Some, called white hat hackers, are ethical hackers, paid by companies to test their security by actively trying to break in – it’s called penetration testing. It’s not just business, though: it was also turned into a card game.

Perhaps the most famous white hat hacker is Kevin Mitnick. He started out as a bad guy – at one point the most-wanted computer criminal in the US. Eventually the FBI caught him, and after spending five years in prison he reformed and became a white hat hacker who went on to run his own computer security company. The way he hacked systems had little to do with computer skills and everything to do with language skills. He did what’s called social engineering. A social engineer uses their skills of persuasion to con people into telling them confidential information, or maybe even into doing things for them, like downloading a program that contains spyware. Professional white hat hackers have to have all-round skills though: network, hardware and software hacking skills, not just social engineering ones. They need to understand a wide range of potential threats if they are to properly test a company’s security and help fix all the vulnerabilities.

Breaking the law and ending up in jail, like Kevin Mitnick, isn’t a great way to learn the skills for your long-term career though. A more normal way to become an expert is to go to university and take classes. Wouldn’t playing games be a much more fun way to learn than sitting in lectures, though? That was what Tamara Denning, Tadayoshi Kohno and Adam Shostack, computer security experts from the University of Washington, wondered. As a result, they teamed up with Steve Jackson Games and came up with a card game, Control-Alt-Hack(TM) (www.controlalthack.com), sadly no longer available. It was based on the cult tabletop card game Ninja Burger. Rather than being part of a Ninja Burger delivery team as in that game, in Control-Alt-Hack(TM) you are an ethical white hat hacker working for an elite security company. You have to complete white hat missions using your Ninja hacking skills: from shutting down an energy company to turning a robotic vacuum cleaner into a pet. The game is lots of fun, but the idea was that by playing it you would understand a lot more about the part computer security plays in everyone’s lives, and about the kinds of threats that security experts have to protect against.

We could all do with more of that. Lots of people like gaming, so why not learn something useful at the same time as having fun? Let’s hope more fun, commercial games about cyber security are invented in future. It would make a good cooperative game in the style of Pandemic perhaps, and there must be simple board game possibilities that would raise awareness of cyber security threats. It would be great if one day such games could inspire more people to a career as a security expert. We certainly need lots more cybersecurity experts keeping us all safe.

– Paul Curzon, Queen Mary University of London

adapted from the archives


RADAR winning the Battle of Britain

Plaque commemorating the Birth of RADAR
Image by Kintak, CC BY-SA 3.0, via Wikimedia Commons

The traditional story of how World War II was won is one of inspiring leaders, brilliant generals and plucky Brits with “Blitz Spirit”. In reality it is usually better technology that wins wars. Once that meant better weapons, but in World War II mathematicians and computer scientists were instrumental in winning the war by cracking the German codes using both maths and machines. It is easy to be a brilliant general when you know the other side’s plans in advance! Less celebrated but just as important, weathermen and electronic engineers were also instrumental in winning World War II, and especially the Battle of Britain, with the invention of RADAR. It is much easier to win an air battle when you know exactly where the opposition’s planes are. It was largely down to meteorologist and electronic engineer Robert Watson-Watt and his assistant Arnold Wilkins. Their story is told in the wonderful, but under-rated, film Castles in the Sky, starring Eddie Izzard.

****SPOILER ALERT****

In the 1930s, Nazi Germany looked like an ever-increasing threat as it ramped up its militarisation, building a vast army and air force. Britain was way behind in the size of its air force. Should Germany decide to bomb Britain into submission, it would be a totally one-sided battle. Something needed to be done.

A hopeful plan was hatched in the mid 1930s to build a death ray to zap the pilots of attacking planes. One of the engineers asked to look into the idea was Robert Watson-Watt, who worked for the Met Office and was an expert in the practical use of radio waves. He had pioneered the idea of tracking thunderstorms using the radio emissions from lightning as a warning system for planes, developing the idea as early as 1915. This ultimately led to the invention of “Huff-Duff”, shorthand for High Frequency Direction Finding, whereby radio sources could be accurately tracked from the signals they emitted. That system helped Britain win the U-boat war in the North Atlantic, as it allowed anti-submarine ships to detect and track U-boats when they surfaced to use their radios; Huff-Duff helped sink a quarter of the U-boats that were attacked. That in itself was vital if Britain was to survive the siege the U-boats were enforcing by sinking convoys of supplies from the US.

However, by the 1930s Watson-Watt was working on other applications based on his understanding of radio. His assistant, Arnold Wilkins, quickly proved that the death ray idea would never work, but pointed out that planes seemed to affect radio waves. Together they instead came up with the idea of creating a radio detection system for planes. Many others had played with similar ideas, including German engineers, but no one had made a working system.

Because the French coast was only 20 minutes’ flying time away, the only way to defend against German bombers would have been to have planes patrolling the skies constantly. But that required vastly more planes than Britain could possibly build. If enemy planes could be detected from sufficiently far away, then Spitfires could instead be scrambled to intercept them only when needed. That was the plan, but could it be made to work when so little progress had been made by others?

Watson-Watt and Wilkins set to work making a prototype, which they successfully demonstrated could detect a plane in the air (if only when it was close by). It was enough to get them money and a team to keep working on the idea. Watson-Watt followed a maxim of “Give them the third best to go on with; the second best comes too late, the best never comes”. With his radar system he did not come up with a perfect design, but with something that was good enough: his team used off-the-shelf components rather than designing better ones specifically for the job, and once they got something that worked, they put it into action. Unlike later, better systems, their original radar didn’t involve a sweeping radar beam that bounced off a plane when the sweep pointed at it, but a radio signal blasted in all directions, which meant it took lots of power. The position of a plane was determined by a direction-finding system Watson-Watt designed, based on where the radio signal bounced back from. It worked, though, and a network of antennas was set up in time for the Battle of Britain. Their radar system, codenamed Chain Home, could detect planes 100 miles away. That was plenty of time to scramble planes; the real difficulty was getting the information to the airfields quickly enough to scramble the pilots. That was eventually solved with a better communication system.

The Germans were aware of all the antennas appearing along the British coast but decided they must be a communications system. Carrots also helped fool them! You may have heard that carrots help you see in the dark. That was just war-time propaganda invented to explain away the ability of the Brits to detect bombers so soon… a story was circulated that, due to rationing, Brits were eating lots of carrots and had incredible eyesight as a result!

The Spitfires and their fighter pilots got all the glory and fame, but without radar they would not even have been off the ground before the bombers had dropped their payloads. Practical electronic engineering, in the hands of Robert Watson-Watt and Arnold Wilkins, was the real unsung hero of the Battle of Britain.

Paul Curzon, Queen Mary University of London

Postscript

In the 1950s Watson-Watt was caught speeding by a radar speed trap. He wrote a poem about it:

A Rough Justice

by Sir Robert Watson-Watt

Pity Sir Watson-Watt,
strange target of this radar plot

And thus, with others I can mention,
the victim of his own invention.

His magical all-seeing eye
enabled cloud-bound planes to fly

but now by some ironic twist
it spots the speeding motorist

and bites, no doubt with legal wit,
the hand that once created it.


Transitional Automaton: a poem

Image by Агзам Гайсин from Pixabay

My poetry collection, «Αλγόριθμοι Σιωπής» (Algorithms of Silence), explores the quiet, often unseen structures that shape our inner lives. As a computer scientist and a poet, I’m fascinated by the language we use to describe these systems – whether they are emotional, social, or computational. 

The following piece is an experiment that embodies this theme. It presents a single core idea – about choice, memory, and predetermination – in three different languages: the original Greek poem “Αυτόματον Μεταβατικόν,” an English transcreation, and a pseudocode version that translates the poem’s philosophical questions into the logic of an automaton.

– Vasileios Klimis, Queen Mary University of London

Transitional Automaton

Once,
a decision – small,
like a flaw in a cogwheel –
tilted the whole system toward a version
never written.

In the workshop of habits,
every choice left behind
a trace of activation;
you don’t see it,
but it returns
like a pulse
through a one-way gate.

I walk through a matrix of transitions
where each state defines the memory of the next.
Not infinite possibilities –
only those the structure permits.

Is this freedom?
Or merely the optimal illusion
of a system with elastic rules?

In moments of quiet
(but not of silence)
I feel the null persisting
not as absence,
but as a repository in waiting.
Perhaps that is where it resides,
all that was never activated.

If there is a continuation,
it will resemble a debug session
more than a crisis.

Not a moral crisis;
a recursion.
Who passes down to the final terminal
the most probable path?

The question is not
what we lived.
But which of the contingencies
remained active
when we
stopped calculating.


Αυτόματον μεταβατικόν

Κάποτε,
μια απόφαση – μικρή, σαν στρέβλωση σε οδοντωτό τροχό –
έγερνε το σύνολο προς μια εκδοχή
που δεν γράφτηκε ποτέ.

Στο εργαστήριο των συνηθειών
κάθε επιλογή άφηνε πίσω της
ένα ίχνος ενεργοποίησης·
δεν το βλέπεις,
αλλά επιστρέφει
σαν παλμός σε μη αντιστρεπτή πύλη.

Περπατώ μέσα σ’ έναν πίνακα μεταβάσεων
όπου κάθε κατάσταση ορίζει τη μνήμη της επόμενης.
Όχι άπειρες πιθανότητες –
μόνον όσες η δομή επιτρέπει.
Είναι ελευθερία αυτό;
Ή απλώς η βέλτιστη πλάνη
ενός συστήματος με ελαστικούς κανόνες;

Σε στιγμές σιγής (αλλά όχι σιωπής)
νιώθω το μηδέν να επιμένει
όχι ως απουσία,
αλλά ως αποθήκη αναμονής.
Ίσως εκεί διαμένει
ό,τι δεν ενεργοποιήθηκε.

Αν υπάρξει συνέχεια,
θα μοιάζει περισσότερο με debug session
παρά με κρίση.

Όχι κρίση ηθική·
μία αναδρομή.
Ποιος μεταβιβάζει στο τερματικό του τέλους
το πιο πιθανό μονοπάτι;

Η ερώτηση δεν είναι τι ζήσαμε.
Αλλά ποιο από τα ενδεχόμενα έμεινε ενεργό
όταν εμείς
σταματήσαμε να υπολογίζουμε.


Pseudocode Poem version

Pseudocode poems are poems written in pseudocode: the semi-formalised language used for writing algorithms and planning the design of programs. Here is the above poem as a pseudocode poem.

FUNCTION life_automaton(initial_state)
  DEFINE State_Transitions AS Matrix;
  DEFINE active_path AS Log;
  DEFINE potential_paths AS Set = {all_versions_never_written};

  current_state = initial_state;
  system.log("Initializing in the workshop of habits.");

  REPEAT

    WAIT FOR event.decision;
    // a decision — small, like a flaw in a cogwheel
    IF (event.decision.is_subtle) THEN
       previous_state = current_state;
       current_state = State_Transitions.calculate_next
                                (previous_state, event.decision);
       // it returns like a pulse through a one-way gate
       active_path.append(previous_state -> current_state);
       potential_paths.remove(current_state.version);
    END IF

    // Is this freedom? Or merely the optimal illusion
    // of a system with elastic rules?

   IF (system.isQuiet) THEN
       // I feel the null persisting
       // not as absence, but as a repository in waiting.
       // Perhaps that is where it resides, all that was never activated.
       PROCESS potential_paths.contemplate();
   END IF

  UNTIL system.isTerminated;
   
  // If there is a continuation,
  // it will resemble a debug session more than a crisis.
  // Not a moral crisis; a recursion.
  DEBUG_SESSION.run(active_path);

  // The question is not what we lived.
  // But which of the contingencies remained active
  // when we stopped calculating.
  RETURN final_state = active_path.getLast();
END FUNCTION

This blog is funded by EPSRC on research agreement EP/W033615/1.