Hacking DNA

A jigsaw of DNA with pieces missing
Image by Arek Socha from Pixabay

DNA is the molecule of life. Our DNA stores the information of how to create us. Now it can be hacked.

DNA consists of two strands coiling round each other in a double helix. It’s made of four building blocks, or ‘nucleotides’, labelled A, C, G, T. Different orders of these letters give the information of how to build each unique creature, you and me included. Sequences of DNA are analysed in labs by a machine called a gene sequencer. It works out the order of the letters and so tells us what’s in the DNA. When biologists talk of sequencing the human (or another animal or plant’s) genome they mean using a gene sequencer to work out the specific sequences in the DNA for that species. Gene sequencers are also used by forensic scientists to work out who might have been at the scene of a crime, and to predict whether a person has genetic disorders that might lead to disease.

DNA can be used to store information other than that of life: any information in fact. This may be the future of data storage. Computers use a code made of 0s and 1s. There is no reason why you can’t encode all the same information using A, C, G, T instead. For example, a string of 1s and 0s might be encoded by having each pair of bits represented by one of the four nucleotides: 00 = A, 01 = C, 10 = G and 11 = T. The idea has been demonstrated by Harvard scientists who stored a video clip in DNA.
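If you want to see the idea as code, here is a minimal Python sketch of that two-bits-per-letter mapping. The mapping itself (00 = A, 01 = C, 10 = G, 11 = T) comes from the paragraph above; real DNA storage schemes use more elaborate codes that also guard against errors.

```python
# A minimal sketch of the 2-bits-per-nucleotide encoding described above.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def bits_to_dna(bits: str) -> str:
    """Convert a string of 0s and 1s (even length) into DNA letters."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bits(dna: str) -> str:
    """Convert DNA letters back into the original bit string."""
    return "".join(BASE_TO_BITS[base] for base in dna)

message = "0100100001101001"          # the bits of the ASCII text "Hi"
dna = bits_to_dna(message)
print(dna)                            # the same information, written in DNA letters
print(dna_to_bits(dna) == message)    # True: the encoding is reversible
```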

It also leads to whole new cyber-security threats. A program is just data too, so can be stored in DNA sequences, for example. Researchers from the University of Washington have managed to hide a malicious program inside DNA that can attack the gene sequencer itself!

The gene sequencer does not just work out the sequence of DNA symbols. It is controlled by a computer, which converts that sequence into a binary form that can then be processed like any other data. As DNA sequences are long, the sequencer compresses them. The attack made use of a kind of bug that malware often exploits: the ‘buffer overflow’ error. These arise when the person writing a program includes instructions to set aside a fixed amount of space to store data, but doesn’t include code to make sure only that amount of data is stored. If more data arrives, it overflows into the memory beyond the area allocated to it. If executable code is stored there, the effect can be to overwrite the program with new, malicious instructions.
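To get a feel for what goes wrong, here is a toy Python sketch. Python itself checks lengths, so it can’t really overflow memory the way languages like C can; the list below just stands in for memory, with a fixed-size buffer followed by the program’s next instructions.

```python
# A toy model of the buffer overflow idea, not real machine code.
# The first 8 slots are the 'buffer' the programmer set aside; the slots
# after it stand for the program's next instructions.
memory = ["-"] * 8 + ["INSTR1", "INSTR2", "INSTR3"]

def copy_input_unchecked(data):
    """Copy input into the buffer with NO length check (the bug)."""
    for i, value in enumerate(data):
        memory[i] = value            # happily writes past slot 7

malicious_input = list("AAAAAAAA") + ["EVIL1", "EVIL2", "EVIL3"]
copy_input_unchecked(malicious_input)

print(memory)
# The 'instructions' after the buffer have been overwritten with the
# attacker's values: if they were executed next, the attacker's code runs.
```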

When the gene sequencer reads that malware DNA, the hidden program is converted back into 1s and 0s. If those bits are treated as instructions and executed, the attack launches and takes control of the computer that runs the sequencer. In principle, an attack like this could be used to fake the results of subsequent DNA tests (subverting court cases), disrupt hospital testing, steal sensitive genetic data, or corrupt DNA-based memory.

Fortunately, the risks of exactly this attack causing any problems in the real world are very low, but the team wanted to highlight the potential for DNA-based attacks in general. They pointed out how lax the development processes and controls were for much of the software used in these labs. The bigger risk right now is probably from scientists falling for spear phishing scams (where fake emails pretending to be from someone you know take you to a malware website) or just forgetting to change the default password on the sequencer.

Paul Curzon, Queen Mary University of London


Password strength and information entropy

Comparison of the password strength of Tr0ub4dor&3 (easy to guess, difficult to remember) and correcthorsebatterystaple (hard to guess but easy to remember if you turn it into a picture)
CREDIT: Randall Munroe, xkcd.com https://xkcd.com/936 – reprinted under a CC Attribution-NonCommercial 2.5 License

How do you decide whether a password is strong? Computer scientists have a mathematical way to do it, based on an idea called information entropy. It’s part of “Information Theory”, invented by electrical engineer Claude Shannon back in 1948. This XKCD cartoon for computer scientists uses the idea to compare two different passwords. Unless you understand information theory, though, working out what is going on is a bit mind-blowing… so let’s explain the computer science!

Entropy is based on the number of guesses someone would need to make trying all the possibilities for a password one at a time – doing what computer scientists call a brute force attack. Think about a PIN on a mobile phone or cashpoint. Suppose it was a single digit – there would be 10 possibilities to try. If 2-digit PINs were required then there are now 100 different possibilities. With the normal 4 digits you need 10,000 (10^4 = 10x10x10x10) guesses to be sure of getting it. Different symbol sets lead to more possibilities. If you had to use lower case letters instead of digits, there are 26 possibilities for length 1, so over 450,000 (26^4 = 26x26x26x26) guesses are needed for a 4-letter password. If upper case letters are possible too, that goes up to more than 7 million (52 letters, so 52^4 = 52x52x52x52) guesses. If you know they used a word, though, you don’t have to try all the possibilities, just the words. There are only about 5000 of those, so far fewer guesses are needed. So password strength depends on the number of symbols that could be used, but also on whether the PIN or password was chosen randomly (words aren’t a random sequence of letters!)
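You can check those numbers yourself with a few lines of Python (a sketch of the counting, not a cracking tool):

```python
# Counting the brute-force guesses described above:
# (size of the symbol set) raised to the power of (length of the PIN or password).
def guesses(symbols: int, length: int) -> int:
    return symbols ** length

print(guesses(10, 4))   # 4-digit PIN: 10,000
print(guesses(26, 4))   # 4 lower-case letters: 456,976 (over 450,000)
print(guesses(52, 4))   # 4 letters, upper or lower case: 7,311,616 (over 7 million)
# ...but if the attacker knows you picked an English word, only a few
# thousand guesses are needed, however many letters it has.
```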

To make everything standard, Shannon used binary to do entropy calculations, so assumed a symbol set of only 0 and 1 (so all the answers become powers of 2 because he used 2 symbols). He then measured entropy in the number of ‘bits’ needed to count all the guesses. Any other groups of symbols are converted to binary first. If a cashpoint only had buttons A, B, C and D for PINs, then to do the calculation you count those 4 options in binary: 00 (A), 01 (B), 10 (C), 11 (D) and see you need 2 bits to do it (2^2 = 2×2 choices). Real cashpoints have 10 digits and need just over 3 bits to represent all the possibilities for a 1-digit PIN (2^3 = 2x2x2 = 8 is not enough, 2^4 = 2x2x2x2 = 16 is more than you need, so the answer is more than 3 but less than 4). Its entropy would be just over 3. To count the possibilities for a 4-digit PIN you need just over 13 bits, so that is its entropy. A single letter from the lower case alphabet needs just under 5 bits (2^5 = 32, which is more than 26), so its entropy is about 4.7, and so on.
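In code, the entropy calculation is just a logarithm, as this small Python sketch shows:

```python
import math

# Entropy in bits: how many bits are needed to give every possible guess
# its own binary code, i.e. log2 of the number of possibilities.
def entropy_bits(symbols: int, length: int) -> float:
    return length * math.log2(symbols)

print(entropy_bits(4, 1))    # A/B/C/D cashpoint: exactly 2.0 bits
print(entropy_bits(10, 1))   # one digit: ~3.3 bits ("just over 3")
print(entropy_bits(10, 4))   # 4-digit PIN: ~13.3 bits ("just over 13")
print(entropy_bits(26, 1))   # one lower-case letter: ~4.7 bits
```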

So entropy is not measured directly in terms of guesses (which would be very big numbers) but indirectly in terms of bits. If you determine the entropy to be 28 bits (as in the cartoon), that means the number of different possibilities to guess would fit in 28 bits if each guess was given its own unique binary code. 28 bits of binary can be used to represent over 268 million different things (2^28), so that is the number of guesses actually needed. It is a lot of guesses, but not so many that a computer couldn’t try them all fairly quickly, as the cartoon points out!

Where do those 28 bits come from? Well, they assume the hacker is just trying to crack passwords that follow a common pattern people use (so are not that random). The hacker assumes the person did the following: take a word; maybe make the first letter a capital; swap digits in place of some similar looking letters; and finally add a symbol and a digit at the end. It follows the general advice for passwords, and looks random … but is it?

How do we work out the entropy of a password invented that way? First, think about the base word the password is built around. The cartoon estimates it as 16 bits for a word of up to 9 lower-case letters, so is assuming there are 2^16 (about 65,000) possible such base words. There are about 40,000 nine-letter words in English, so that’s an overestimate if you assume you know the length. Perhaps not if you allow things like fictional names and shorter words too.

As the pattern allows the first letter to be a capital, that adds 1 more bit for the two guesses now needed for each word tried: check it with an upper-case first letter, then check it with a lower-case one. Similarly, as any letter ‘o’ might have been swapped for 0, and any ‘a’ for 4 (as people commonly do), this adds 3 more bits (assuming there are at most 3 such opportunities per word). Finally, we need 3 bits for the 9 possible digits and another 4 bits for the common punctuation characters added on the end. Another bit is added for the two possible orders of punctuation and digit. Add up all those bits and we have the 28 bits suggested in the cartoon (where bits are represented by little squares).
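Here is the same sum written out as a Python sketch, using the cartoon’s rough estimates:

```python
# Adding up the bit estimates for the 'Tr0ub4dor&3'-style pattern.
word = 16           # an uncommon base word of up to 9 lower-case letters
caps = 1            # first letter may or may not be a capital
substitutions = 3   # up to 3 letter-for-digit swaps (o -> 0, a -> 4, ...)
digit = 3           # which digit was tacked on
punctuation = 4     # which common punctuation character was tacked on
order = 1           # digit-then-symbol or symbol-then-digit

total = word + caps + substitutions + digit + punctuation + order
print(total)        # 28 bits
print(2 ** total)   # 268,435,456 guesses: over 268 million
```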

Now do a similar calculation for the other way of creating passwords suggested in the cartoon. If there are only about 2000 really common words a person might choose from, we need 11 bits per word. If we string 4 such words together completely at random (not following a recognisable phrase, with no link between the words) we get the much larger entropy of 44 bits overall. More bits means harder to crack, so this password will be much, much harder to crack than the first. It takes over 17 trillion guesses rather than 268 million.
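And the matching Python sketch for the four-random-words scheme:

```python
import math

# The 'four random common words' pattern from the cartoon, for comparison.
bits_per_word = math.log2(2000)    # ~10.97, rounded to 11 as in the cartoon
total = 4 * round(bits_per_word)   # 44 bits for four unconnected words

print(total)       # 44
print(2 ** total)  # 17,592,186,044,416 guesses: over 17 trillion
print(2 ** 28)     # versus 268,435,456 for the 'Tr0ub4dor&3' pattern
```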


The serious joke of the cartoon is that the rules we are told to follow lead to people creating passwords that are not very random at all, precisely because they have been created following rules. That means they are easy to crack (but still hard to remember). If instead you use the four longish, unconnected words method, which doesn’t obey any of the rules we are told to follow, you actually get a password that is much easier to remember (if you turn it into a surreal picture and remember the picture) but harder to crack, because it actually has more randomness in it! That is the real lesson. However you create a password, it has to have lots of randomness in it to be strong. Entropy gives you a way to check how well you have done.

Paul Curzon, Queen Mary University of London


He attacked me with a dictionary!

Letters in an unreadable muddle
Image by JL G from Pixabay

You might be surprised at how many people have something short, simple (and stupid!) like ‘password’ as their password. Some people add a number to make it harder to guess (‘password1’) but unfortunately that doesn’t help. For decades the official advice has been to use a mixture of lower (abc) and upper case (ABC) characters as well as numbers (123) and special characters (such as & or ^). To meet these rules some people substitute numbers for letters (for example 0 for O or 4 for A and so on). Following these rules might lead you to create something like “P4ssW0^d1” which looks like it might be difficult to crack, but isn’t. The problem is that people tend to use the same substitutions, so password-crackers can predict, and so break, them too.

Hackers know the really common passwords people use like ‘password’, ‘qwerty’ and ‘12345678’ (and more) so will just try them as a matter of course until they very quickly come across one of the many suckers who used one. Even apparently less obvious passwords can be easy to crack, though. The classic algorithm used is a ‘dictionary attack’.

The simple version of this is to run a program that just tries each word in an online dictionary one at a time as a password until it finds a word that works. It takes a program fractions of a second to check every word like this. Using foreign words doesn’t help, as hackers make dictionaries by combining those for every known language into one big universal dictionary. That might seem like a lot of words but it’s not for a computer.

You might think you can use imaginary words from fiction instead – names of characters in Lord of the Rings, perhaps, or the names of famous people. However, it is easy to compile lists of words like that too and add them to the password cracking dictionary. If it is a word somewhere on the web then it will be in a dictionary for hacking use.

Going a step further, a hacking program can take all these words and create versions with numbers added, 4 swapped for A, and so on. These new potential passwords become part of the attack dictionary too. More can be added by taking short words and combining them, including ones that appear in well known phrases like ‘starwars’ or ‘tobeornottobe’.
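Here is a Python sketch of how such an attack dictionary might be grown from a few base words. (Real password-cracking tools apply thousands of ‘mangling’ rules like these to enormous word lists.)

```python
# A sketch of growing an attack dictionary from base words.
base_words = ["password", "qwerty", "dragon"]   # imagine millions more
substitutions = str.maketrans({"o": "0", "a": "4", "e": "3", "i": "1"})

candidates = set()
for word in base_words:
    candidates.add(word)                          # the word itself
    candidates.add(word.capitalize())             # Password
    candidates.add(word.translate(substitutions)) # p4ssw0rd
    for digit in "0123456789":
        candidates.add(word + digit)              # password1, password2, ...

print(len(candidates))          # the dictionary grows quickly...
print(sorted(candidates)[:5])   # ...but is still tiny work for a computer
```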

The list gets bigger and bigger, but computers are fast, and hackers are patient, so that’s no big deal…so make sure your password isn’t in their dictionary!

– Jo Brodie and Paul Curzon, Queen Mary University of London

from the archive


Ninja White Hat Hacking

Female engineer working at a computer
Image by This_is_Engineering from Pixabay

Computer hackers are the bad guys, aren’t they? They cause mayhem: shutting down websites, releasing classified information, stealing credit card numbers, spreading viruses. They can cause lots of harm, even when they don’t mean to. Not all hackers are bad though. Some, called white hat hackers, are ethical hackers, paid by companies to test their security by actively trying to break in – it’s called penetration testing. It’s not just business though, it was also turned into a card game.

Perhaps the most famous white hat hacker is Kevin Mitnick. He started out as a bad guy – the most-wanted computer criminal in the US. Eventually the FBI caught him, and after spending 5 years in prison he reformed and became a white hat hacker who now runs his own computer security company. The way he hacked systems had nothing to do with computer skills and everything to do with language skills. He did what’s called social engineering. A social engineer uses their skills of persuasion to con people into telling them confidential information or maybe even actually doing things for them, like downloading a program that contains spyware code. Professional white hat hackers have to have all-round skills though: network, hardware or software hacking skills, not just social engineering ones. They need to understand a wide range of potential threats if they are to properly test a company’s security and help them fix all the vulnerabilities.

Breaking the law and ending up in jail, like Kevin Mitnick, isn’t a great way to learn the skills for your long-term career though. A more normal way to become an expert is to go to university and take classes. Wouldn’t playing games be a much more fun way to learn than sitting in lectures, though? That was what Tamara Denning, Tadayoshi Kohno, and Adam Shostack, computer security experts from the University of Washington, wondered. As a result, they teamed up with Steve Jackson Games and came up with a card game, Control-Alt-Hack(TM) (www.controlalthack.com), sadly no longer available. It was based on the cult tabletop card game, Ninja Burger. Rather than being part of a Ninja Burger delivery team as in that game, in Control-Alt-Hack(TM) you are an ethical white hat hacker working for an elite security company. You have to complete white hat missions using your Ninja hacking skills: from shutting down an energy company to turning a robotic vacuum cleaner into a pet. The game is lots of fun, but the idea was that by playing it you would understand a lot more about the part that computer security plays in everyone’s lives and about the kinds of threats that security experts have to protect against.

We could all do with more of that. Lots of people like gaming, so why not learn something useful at the same time as having fun? Let’s hope more fun (and commercial) games about cyber security are invented in future. It would make a good cooperative game in the style of Pandemic perhaps, and there must be simple board game possibilities that would raise awareness of cyber security threats. It would be great if one day such games could inspire more people to a career as a security expert. We certainly need lots more cybersecurity experts keeping us all safe.

– Paul Curzon, Queen Mary University of London

adapted from the archives


Sea sounds sink ships

You might think that under the sea things are nice and quiet, but something fishy is going on down there. Our oceans are filled with natural noise. This is called ambient noise and comes from lots of different sources: from the sound of winds blowing waves on the surface, rain, distant ships and even underwater volcanoes. For marine life that relies on sonar or other acoustic ways to communicate and navigate, all the extra noise pollution that human activities, such as undersea mining and powerful ships’ sonars, have caused is an increasing problem. But it’s not only marine life that is affected by the levels of sea sounds; submarines also need to know something about all that ambient noise.

In the early 1900s the aptly named ‘Submarine Signal Company’ made their living by installing undersea bells near lighthouses. The sound of these bells was a warning to mariners about nearby navigation hazards: an auditory version of the lighthouse light.

The Second World War led to scientists taking undersea ambient noise more seriously as they developed deadly acoustic mines. These are explosive mines triggered by the sound of a passing ship. To make the acoustic trigger work reliably the scientists needed to measure ambient sound, or the mines would explode while simply floating in the water. Measurements of sound frequencies were taken in harbours and coastal waters, and from these a mathematical formula was computed that gave them the ‘Knudsen curves’. Named after the scientist who led the research, these curves show how undersea sound at different frequencies varies with surface wind speed and wave height. They allowed the acoustic triggers to be set to make the mines most effective.

– Peter McOwan, Queen Mary University of London



Was the first computer a ‘Bombe’?

Image from a set of wartime photos of GC&CS at Bletchley Park, Public domain, via Wikimedia Commons

A group of enthusiasts at Bletchley Park, the top secret wartime codebreaking base, rebuilt a primitive computing device used in the Second World War to help the Allies listen in on U-boat conversations. It was called ‘the Bombe’. Professor Nigel Smart, now at KU Leuven and an expert on cryptography, tells us more.

So what’s all this fuss about building “A Bombe”? What’s a Bombe?

The Bombe didn’t help win the war destructively like its explosive namesakes, but through intelligence. It was designed to find the passwords or ‘keys’ into the secret codes of ‘Enigma’: the famous encryption machine used both by the German army in the field and to communicate with U-Boats in the Atlantic. It effectively allowed the British to listen in to the Germans’ secret communications.

A Bombe is an electro-mechanical special purpose computing device. ‘Electro-mechanical’ because it works using both mechanics and electricity. It works by passing electricity through a circuit. The precise circuit used is modified mechanically on each step of the machine by drums that rotate. This set of rotating drums mirrors the way the Enigma machine used a set of discs which rotated as each letter was encrypted. The Bombe is a ‘special purpose’ computing device rather than a ‘general purpose’ computer because it can’t be used to solve any problem other than the one it was designed for.

Why Bombe?

There are many explanations of why it’s called a ‘Bombe’. The most popular is that it is named after an earlier, but unrelated, machine called the Bomba, built by the Polish to help break Enigma. The Bomba was also an electro-mechanical machine and got its name because as it ran it made a ticking sound, rather like a clock-based fuse on an exploding bomb.

What problem did it solve?

The Enigma machine used a different main key, or password, every day. The key was then altered slightly for each message using a small indicator sent at the beginning of that message. The goal of the codebreakers at Bletchley Park each day was to find the German key for that day. Once this was found it was easy to then decrypt all the day’s messages. The Bombe’s task was to find this day key. It was introduced when the procedures used by the Germans to operate the Enigma changed. That change meant that the existing techniques used by the Allies to break the Enigma codes could no longer be used: the codebreakers could no longer crack the German codes fast enough by hand.

So how did it help?

The basic idea was that many messages sent would contain some short piece of predictable text such as “The weather today will be….” Then, using this guess for the text being encrypted, the cryptographers would take each encrypted message in turn and decide whether it was likely that it could have been an encryption of the guessed text. The fact that the German army was trained to say and write “Heil Hitler” at any opportunity was a great help too!

The words “Heil Hitler” helped the Germans lose the war

If they found one that was a possible match they would analyze the message in more detail to produce a “menu”. A menu was just what computer scientists today call a ‘graph’. It is a set of nodes and edges, where the nodes are letters of the alphabet and the edges link the letters together a bit like the way a London tube map links stations (the nodes) by tube lines (the edges). If the graph had suitable mathematical properties that they checked for, then the codebreakers knew that the Bombe might be able to find the day key from the graph.

The menu, or graph, was then sent over to one of the Bombes. They were operated by a team of women – the world’s first team of computer operators. The operator programmed the Bombe by using wires to connect letters together on the Bombe according to the edges of the menu. The Bombe was then set running. Every so often it would stop and the operator would write down the possible day key it had just found. Finally another group checked this possible day key to see if the Bombe had produced the correct one. Sometimes it had, sometimes not.

So was the Bombe a computer?

By a computer today we usually mean something which can do many things. The reason the computer is so powerful is that we can purchase one piece of equipment and then use this to run many applications and solve many problems. It would be a big problem if we needed to buy one machine to write letters, one machine to run a spreadsheet, one machine to play “Grand Theft Auto” and one machine to play “Solitaire”. So, in this sense the Bombe was not a computer. It could only solve one problem: cracking the Enigma keys.

Whilst the operator programmed the Bombe using the menu, they were not changing the basic operation of the machine. The programming of the Bombe is more like the data entry we do on modern computers.

Alan Turing, who helped design the Bombe along with Gordon Welchman, is often called the father of the computer, but that’s not for his work on the Bombe. It’s for two other reasons. Firstly, before the war he had the idea of a theoretical machine which could be programmed to solve any problem, just like our modern computers. Then, after the war, he used the experience of working at Bletchley to help build some of the world’s first computers in the UK.

But wasn’t the first computer built at Bletchley?

Yes, Bletchley Park did build what we would now call the first computer. This was a machine called Colossus. Colossus was used to break a different German encryption machine called the Lorenz cipher. The Colossus was a true computer as it could be used not only to break the Lorenz cipher but also to solve a host of other problems. It also worked on digital data, namely the ones and zeros which modern computers now operate on.

Nigel Smart, KU Leuven


Film Futures: Tsotsi

A burnt out car
Image by Derek Sewell from Pixabay

Computer Scientists and digital artists are behind the fabulous special effects and computer generated imagery we see in today’s movies, but for a bit of fun, in this series, we look at how movie plots could change if they involved Computer Scientists. Here we look at an alternative version of the film Tsotsi.

***SPOILER ALERT***

The outstanding, and Oscar winning, film Tsotsi follows a week in the life of a ruthless Soweto township gang leader who calls himself Tsotsi (township slang for ‘thug’). Having clawed a feral existence together from childhood in extreme urban deprivation he has lost all compassion. After a violent car-jacking, he finds he has inadvertently kidnapped a baby. What follows, to the backing of raw “Kwaito” music, is his chance for redemption.

Introducing new technology does not
always have just the effect you intended …

Tsotsi: with computer science

In our computer science film future version the baby is still accidentally kidnapped, but luckily the baby has wealthy parents, so wasn’t born in the township and was chipped with a rice-sized device injected under the skin at birth. It both contains identity data and can be tracked for life using GPS technology. The police, having followed his progress walking across the scrubland with the baby, are waiting as Tsotsi arrives back at the township.

Tsotsi doesn’t get a chance to form a bond with the baby, so doesn’t have a life-changing experience. There is no opportunity for redemption. Instead on release from jail he continues on his violent crime spree with no sense of humanity whatsoever.

In real life…

In 2004 there was a proposal in Japan that children would be tagged in the way luggage is. Tagging is now a totally standard way of tracking goods as they are moved around warehouses, and a way to detect goods being shoplifted too. After all, if it is sensible to keep track of your suitcase in case it is lost, why wouldn’t you do the same for your even more important child? Fear of a child going missing is one of the biggest nightmares of being a parent. Take your eyes off a toddler for a few seconds in a shop and they could be gone. Similar proposals have repeatedly surfaced ever since. In 2010, for example, nursery school kids in Richmond, California were for a while required to wear jumpers containing RFID tags, supposedly to protect them. By placing sensors in appropriate places the children’s movements could be tracked, so if they left school they could quickly be found.

Of course, pet cats and dogs are often chipped with tags under their skin. So it has also been suggested that children be tagged in a similar way. Then they couldn’t remove whatever clothing contained the tag and disappear. Someone who had kidnapped them would of course cut it out as, for example, Aaron Cross in the Bourne Legacy has to do at one point. Not what you want to happen to your child!

In general, there is an outcry and such proposals are dropped. As was pointed out at the time of the California version, an RFID tag is not actually a very secure solution. There have been lots and lots of demonstrations of how such systems can be cracked (even at a distance). For example, the RFID tags used in US passports were cracked so that the passports could be copied at a distance. And if the system can be cracked, then bad actors can sit in a van outside a school, or follow a class on a school trip, and track those children. Not only does it undermine their privacy, it could put them in greater danger of the kind it was supposed to protect them from. Ahh, you might think, but if someone did kidnap a child then the chip would still show where they were! Except that if tags can be copied, a duplicate could be used to leave a virtual version of the child in the school where they should be.

Security and privacy matter, and cyber security solutions are NEVER as simple as they seem. There are so often unforeseen consequences, and fixing one problem just opens up new ones. Utopias can sometimes be dystopian.

– Paul Curzon, Queen Mary University of London (extended from the archive version)


Torchwood: in need of some backup

Multiple floors of an abandoned building
Image by Peter H from Pixabay

***SPOILER ALERT***

Disaster planning, that’s the Torchwood game. They are there to save the Earth whenever it needs saving from aliens (which is every week). Shame they blew it when it came to disaster planning for Torchwood itself!

We are coming

Torchwood is the BBC’s cult spin-off from Doctor Who. In the series, Children of the Earth, the world is threatened by the mysterious and brutal ‘456’ whose arrival is heralded when every child in the world simultaneously stops in their tracks and chants ‘We are coming’. The Torchwood team of Captain Jack Harkness, Gwen Cooper and Ianto Jones once more spring into action. Unfortunately, early on a little accident (we won’t say what so as not to spoil it) happens in their base buried under Cardiff. On the run and homeless for a while, they have only their wits in place of the normal hi-tech surveillance gadgetry. It’s so desperate at one point, they end up in an empty shell of a warehouse with only a sofa and the contents of their pockets with which to save the world!

Move it!

It’s such a shame that it comes to this when a little bit of disaster planning would have made it all so much easier to beat the aliens. A backup plan including a backup site is crucial in dealing with a disaster, whether earthquake or Martian hordes. Just because your home city has been hit by a tsunami or flattened to the ground by a meteorite doesn’t mean your company’s operations have to be disrupted.

Captain Jack knows all about disaster management of course. Kill him, and after a brief period of pain he jolts back to life and carries on as though nothing has happened. With some standard forward planning any organisation ought to operate just like that too.

The fact that when the disaster happens the Torchwood team have to come up with solutions on the fly shows that they not only had no backup, but hadn’t even thought about it. Tut tut!

If they had done some planning, what would have been their alternatives?

Cold war

The first alternative, for those organisations that need to survive a disaster, is to have a ‘cold site’ ready. In fact this is what Torchwood defaulted to in their warehouse. Lucky Ianto remembered it! A cold site is just a backup location that can be moved into. It doesn’t have software, data or even hardware ready, but at least everyone knows what to do and where to go. In time it can be up and running again. Clearly, given their remit of saving the Earth against war-hungry aliens, Torchwood needed something better than that.

Getting warmer

At the other end of the disaster planning spectrum is the ‘hot site’. It is a fully functioning copy of your main operations building. All the hardware is there, the software is there and so is the data. Everything that happens at the main site IT-wise is copied at the hot site too. Lose your main site to a nuclear bomb and you just carry on almost seamlessly at the hot site. (It obviously has to be located somewhere else suitably far away, not just next door, or it too will be as radioactively hot as the original and be of little use.) You can also have ‘warm sites’ of different degrees where, for example, you just have the hardware installed, or the data backups are only weekly rather than continuous.

Which kind of backup site is chosen depends on the organisation: what it can afford, balanced against the costs of downtime (and how much downtime the business can take and still survive). If it is critical to the survival of the planet, like Torchwood, then clearly you need to be at the warmer end of the backup scale!

Back to life

It’s a shame then that Torchwood’s IT management only focused on installing lots of fancy gadgets and ignored the more mundane side of things. If they had been a little more competent Jack and co might have sorted out the ‘456’ before it all got out of hand. Never mind. It all worked out OK in the end. Well, sort of.

– Paul Curzon, Queen Mary University of London (from the archive)


A puzzle, spies … and a beheading

A puzzle about secrets

Ayo wants to send her friends Guang and Elham, who live together, secret messages that only the person she sends each message to can read. She doesn’t want Guang to read the messages to Elham and vice versa.

An ornate padlock with key
Image by Bernd from Pixabay

Guang buys them all small lockable notebooks for Christmas. They are normal notebooks except that each can be locked shut using a small in-built padlock. Each padlock can be opened with a different single key. Guang suggests that they write messages in their notebook and post it and the key separately to the person they wish to send the message to. After reading the message that person tears the page out and destroys it, then returns the notebook and key. They try this and it appears to be working, apparently preventing the others from reading the messages that aren’t for them. They exchange lots of secrets… until one day Guang gets a locked notebook from Ayo with an extra message added on the end by Elham. It says “I can read your messages. I know all your secrets – Elham”. Elham has been reading Ayo’s messages to Guang all along and now knows all their secrets. She now wants them to know how clever she has been.

How did she do it? (And what does it have to do with the beheading of Mary Queen of Scots?)

Breaking the system

Elham has, of course, been getting to the post first, steaming open the envelopes, getting the key and notebook, reading the message (and for the last one adding her own note). She then seals them back in the envelopes and leaves them for Guang.

A similar thing happened to betray Mary Queen of Scots to her cousin Queen Elizabeth I. It led to Mary being beheaded.

Is there a better way?

Ayo suggests a solution that still uses the notebooks and keys, but in which no keys are posted anywhere. To prove her method works, she sends a secret message to Guang that Elham fails to read. How does she do it? See if you can work it out before reading on… and what is the link to computer science?

Mary Queen of Scots

Mary Queen of Scots
Image by Gordon Johnson from Pixabay

The girls face a similar problem to that faced by Mary Queen of Scots and countless spies and businesses with secrets to exchange before and since…how to stop people intercepting and reading your messages. Mary was beheaded because she wasn’t good enough at it. The girls in the puzzle discovered, just like Mary, that weak encryption is worse than no encryption as it gives false confidence that messages are secret.

There are two ways to make messages secret – hide them so no one realises there is a message there at all, or disguise the message so that it cannot be read even by someone who finds it (or both). Hiding the message is called steganography. Disguising a message so it cannot be read even if known about is called encryption. Mary Queen of Scots did both and ultimately lost her life because her encryption was easy to crack. Believing the encryption would protect her gave her the confidence to write things she otherwise would not have written.

House arrest

Mary had been locked up – under house arrest – for 18 years by Queen Elizabeth I, despite being captured only because she came to England asking her cousin Elizabeth to give her refuge after losing her Scottish crown. Elizabeth was worried that Mary and her allies would try to overthrow her and claim the English crown if given the chance. Better to lock her up before she even thought of treason? Towards the end of her imprisonment, in 1586, some of Mary’s supporters were in fact plotting to free her and assassinate Elizabeth. Unfortunately, they had no way of contacting Mary, as letters were allowed neither in nor out by her jailors. Then a stroke of good fortune arose. A young priest called Gilbert Gifford turned up claiming he had worked out a way to smuggle messages to and from Mary. He wrapped the messages in a leather package and hid them in the hollow bungs of barrels of beer. The beer was delivered by the brewer to Chartley Hall, where Mary was held, and the packages retrieved by one of Mary’s servants. This, a form of steganography, was really successful, allowing Mary to exchange a long series of letters with her supporters. Eventually the plotters decided they needed to get Mary’s agreement to the full plot. The leader of the coup, Anthony Babington, wrote a letter to Mary outlining all the details. To be absolutely safe he also encrypted the message using a cipher that Mary could read (decipher). He soon received an encrypted reply in Mary’s hand that agreed to the plot but also asked for the names of all the others involved. Babington responded with all the names. Unfortunately, unknown to Babington and Mary, the spies of Elizabeth were reading everything they wrote – and the request for names was not even from Mary.

Spies and a Beheading

Unfortunately for Mary and Babington, all their messages were being read by Sir Francis Walsingham, the ruthless Principal Secretary to Elizabeth and one of the most successful spymasters ever. Gifford was his double agent – the method of exchanging messages had been Walsingham’s idea all along. Each time he had a message to deliver, Gifford took it to Walsingham first, whose team of spies carefully opened the seal, copied the contents, redid the seal and sent it on its way. The encrypted messages were a little more of a problem, but Walsingham’s codebreaker could break the cipher. The approach, called frequency analysis, works for simple ciphers and involves using the frequency of letters in a message to guess which is which. For example, the most common letter in English is E, so the most common letter in an encrypted message is likely to stand for E. It is actually the way people nowadays solve the crossword-like code puzzles known as Cross References that can be found in puzzle books and the puzzle columns of newspapers.
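Here is a small Python sketch of the counting step of frequency analysis, using a made-up message scrambled with a simple substitution cipher:

```python
from collections import Counter

# Count how often each letter appears in a message scrambled with a simple
# substitution cipher, then compare the order with typical English letter
# frequencies (E, T, A, O, ... are the commonest).
def letter_frequencies(ciphertext: str):
    counts = Counter(c for c in ciphertext.upper() if c.isalpha())
    return counts.most_common()

ciphertext = "XLI UYMGO FVSAR JSB NYQTW SZIV XLI PEDC HSK"
for letter, count in letter_frequencies(ciphertext)[:5]:
    print(letter, count)

# In a long English message the commonest ciphertext letter usually stands
# for 'E'; short messages like this one can mislead, which is why codebreakers
# also look for repeated words and other patterns.
```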

When they read Babington’s letter they had the evidence to hang him, but they let the letter continue on its way because, once Mary replied, they would finally have the excuse to try her too. Up to that point (for the 18 years of her house arrest) Elizabeth had not had strong enough evidence to convict Mary – just worries. Walsingham wanted more though, so he forged the note asking for the names of the other plotters and added it to the end of one of Mary’s letters, encrypted in the same code. Babington fell for it, and all the plotters were arrested. Mary was tried and convicted. She was beheaded on February 8th 1587.

Private keys…public keys

What is Ayo’s method to get round their problem of messages being intercepted and read? Their main weakness was that they had to send the key as well as the locked message – if the key was intercepted then the lock was worthless. The alternative, which involves not sending keys anywhere, is the following…

Top Secret written on a notebook with flowers

Image by Paul Curzon

Suppose Ayo wants to send a message to Guang. She first asks Guang to post her notebook to her, without the key but with the padlock left open. Ayo writes her message in Guang’s book, then snaps it locked shut and posts it back. Guang has kept the key safe all along. She uses it to open the notebook, secure in the knowledge that the key has never left her possession. This is essentially the same as a method known by computer scientists as public key encryption – the method used on the Internet for secure message exchange, including banking. In this scheme, keys come in two halves: a “private key” and a “public key”. Each person has a secret “private key” of their own that they use to read all messages sent to them. They also have a “public key” that is the equivalent of Guang’s open padlock.

If someone wants to send me a message, they first get my public key – which anyone who asks can have, as it is not used to decrypt messages, just for other people to encrypt them (close the padlock) before sending them to me. It is of no use for decrypting any message (reopening the padlock). Only the person with the private key (the key to the padlock) can get at the message. So messages can be exchanged without the important decryption key going anywhere. It remains safe from interception.
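For the curious, here is a toy Python sketch of the idea using RSA, one well-known public key scheme, with tiny textbook numbers. Real keys are hundreds of digits long, and in practice you would use a proper cryptography library rather than rolling your own.

```python
# A toy version of public-key encryption (RSA with tiny textbook numbers).
public_key = (3233, 17)      # (n, e): shared with anyone, like the open padlock
private_key = (3233, 2753)   # (n, d): kept secret, like the padlock's key

def encrypt(message: int, key) -> int:
    n, e = key
    return pow(message, e, n)        # lock the message with the public key

def decrypt(ciphertext: int, key) -> int:
    n, d = key
    return pow(ciphertext, d, n)     # only the private key can unlock it

secret = 65                          # a message, encoded as a number
locked = encrypt(secret, public_key)
print(locked)                        # 2790: meaningless to an eavesdropper
print(decrypt(locked, private_key))  # 65: recovered with the private key
```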

Saving Mary

Would this have helped Mary? No. Her problem was not in exchanging keys but that she used a method of encryption that was easy to crack – in effect the lock itself was not very strong and could easily be picked. Walsingham’s code breakers were better at decryption than Babington was at encryption.

by Paul Curzon, Queen Mary University of London, updated from the archive


Byte Queens

Women have made vital contributions to computer science ever since Ada Lovelace debugged the first algorithm for an actual computer (written by Charles Babbage) almost 200 years ago (more on CS4FN’s Women Portal). Despite this, women make up only a fraction (25%) of the STEM workforce: only about a fifth of senior tech roles and only a fifth of computer science students are women. The problem starts early: research by the National Centre for Computing Education suggests that female students’ intention to study computing drops off between the ages of 8 and 13. Ilenia Maietta, a computer science student at Queen Mary, talks about her experiences of studying in a male-dominated field and how she is helping to build a network for other women in tech.

Ilenia’s love for science hasn’t wavered since childhood and she is now studying for a master’s degree in computer science – but back in sixth form, the decision was between computer science and chemistry:

“I have always loved science, and growing up my dream was to become a scientist in a lab. However, in year 12, I dreaded doing the practical experiments and all the preparation and calculations needed in chemistry. At the same time, I was working on my computer science programming project, and I was enjoying it a lot more. I thought about myself 10 years in the future and asked myself ‘Where do I see myself enjoying my work more? In a lab, handling chemicals, or in an office, programming?’ I fortunately have a cousin who is a biologist, and her partner is a software engineer. I asked them about their day-to-day work, their teams, the projects they worked on, and I realised I would not enjoy working in a science lab. At the same time I realised I could definitely see myself as a computer scientist, so maybe child me knew she wanted to be scientist, just a different kind.”

The low numbers of female students in computer science classrooms can have the knock-on effect of making girls feel like they don’t belong. This faulty stereotype that women don’t belong in computer science, together with the behaviour of male peers, continues to have an impact on Ilenia’s education:

“Ever since I moved to the UK, I have been studying STEM subjects. My school was a STEM school and it was male-dominated. At GCSEs, I was the only girl in my computer science class, and at A-levels only one of two. Most of the time it does not affect me whatsoever, but there were times it was (and is) incredibly frustrating because I am not taken seriously or treated differently because I am a woman, especially when I am equally knowledgeable or skilled. It is also equally annoying when guys start explaining to me something I know well, when they clearly do not (i.e. mansplaining): on a few occasions I have had men explain to me – badly and incorrectly – what my degree was to me, how to write code or explain tech concepts they clearly knew nothing about. 80% of the time it makes no difference, but that 20% of the time feels heavy.”

Many students choose computer science because of the huge variety of topics that you can go on to study. This was the case for Ilenia, especially being able to apply her new-found knowledge to lots of different projects:

“Definitely getting to explore different languages and trying new projects: building a variety of them, all different from each other has been fun. I really enjoyed learning about web development, especially last semester when I got to explore React.js: I then used it to make my own portfolio website! Also the variety of topics: I am learning about so many aspects of technology that I didn’t know about, and I think that is the fun part.”

“I worked on [the portfolio website] after I learnt about React.js and Next.js, and it was the very first time I built a big project by myself, not because I was assigned it. It is not yet complete, but I’m loving it. I also loved working on my EPQ [A-Level research project] when I was in school: I was researching how AI can be used in digital forensics, and I enjoyed writing up my research.”

Like many university students, Ilenia has had her fair share of challenges. She discussed the biggest of them all – imposter syndrome – as well as how she overcame it.

“I know [imposter syndrome is] very common at university, where we wonder if we fit in, if we can do our degree well. When I am struggling with a topic, but I am seeing others around me appear to understand it much faster, or I hear about these amazing projects other people are working on, I sometimes feel out of place, questioning if I can actually make it in tech. But at the end of the day, I know we all have different strengths and interests, so because I am not building games in my spare time, or I take longer to figure out something does not mean I am less worthy of being where I am: I got to where I am right now by working hard and achieving my goals, and anything I accomplish is an improvement from the previous step.”

Alongside her degree, Ilenia also supports a small organisation called Byte Queens, which aims to connect girls and women in technology with community support.

“I am one of the awardees for the Amazon Future Engineer Award by the Royal Academy of Engineering and Amazon, and one of my friends, Aurelia Brzezowska, in the programme started a community for girls and women in technology to help and support each other, called Byte Queens. She has a great vision for Byte Queens, and I asked her if there was anything I could do to help, because I love seeing girls going into technology. If I can do anything to remove any barriers for them, I will do it immediately. I am now the content manager, so I manage all the content that Byte Queens releases as I have experience in working with social media. Our aim is to create a network of girls and women who love tech and want to go into it, and support each other to grow, to get opportunities, to upskill. At the Academy of Engineering we have something similar provided for us, but we wanted this for every girl in tech. We are going to have mentoring programs with women who have a career in tech, help with applications, CVs, etc. Once we have grown enough we will run events, hackathons and workshops. It would be amazing if any girl or woman studying computer science or a technology related degree could join our community and share their experiences with other women!”

For women and girls looking to excel in computer science, Ilenia has this advice:

“I would say don’t doubt yourself: you got to where you are because you worked for it, and you deserve it. Do the best you can in that moment (our best doesn’t always look the same at different times of our lives), but also take care of yourself: you can’t achieve much if you are not taking care of yourself properly, just like you can’t do much with your laptop if you don’t charge it. And finally, take space: our generation has the possibility to reframe so much wrongdoing of the past generations, so don’t be afraid to make yourself, your knowledge, your skills heard and valued. Any opportunities you get, any goals you achieve are because you did it and worked for it, so take the space and recognition you deserve.”

Ilenia also highlighted the importance of taking opportunities to grow professionally and personally throughout her degree: “taking time to experiment with careers, hobbies, sports to discover what I like and who I want to become” mattered enormously. Following her degree, she wants to work in software development or cyber security. Once the stress of coursework and exams is gone, Ilenia intends to “try living in different countries for some time too”, though she thinks that “London is a special place for me, so I know I will always come back.”

Ilenia encourages all women in tech who are looking for a community and support to join the Byte Queens community and share with others: “the more, the merrier!”

– Ilenia Maietta and Daniel Gill, Queen Mary University of London

Visit the Byte Queens website for more details. Interested women can apply here.

EPSRC supports this blog through research grant EP/W033615/1.