Going Postal: A review

Semaphore tower showing all the flag positions
Image by Clker-Free-Vector-Images from Pixabay adapted by CS4FN

Anyone claiming to be a hard-core Computer Scientist would be ashamed to admit they hadn’t read Terry Pratchett. If you are one and you haven’t, then ‘Going Postal’ is a good place to start.

‘Going Postal’ is a must for anyone interested in networks. Not because it has any bearing on reality. It doesn’t. It’s about Discworld, a flat world held up on the backs of elephants, and where magic reigns. Technology is starting to get a foothold though. For example, cameras, computers and movies have all been invented…though they usually have an imp inside. Take cameras: they work because the imp has a paint box and an easel. Take too many sunsets and he’ll run out of pink! It is all incredibly silly…but it works and so does the technology.

Now telecommunications technology is gaining a foothold…Corrupt business is muscling in and the post office is struggling to survive. Who would want to send a letter when they can send a c-mail over the Clacks? The Clacks are a network of semaphore towers that allow messages to ‘travel at the speed of light’.

At each tower the operators

“pound keys, kick pedals and pull levers as fast as they can”

to forward the message to the next tower in the network, and so on to its destination. The Clacks are so fashionable, people have even started carrying pocket semaphore flags everywhere they go, so they can send messages to people on the other side of the room.

“But can you write
S.W.A.L.K. on a clacks?
Can you seal it with
a loving kiss?
Can you cry tears
on to a clacks,
can you smell it,
can you enclose
a pressed flower?
A letter is more than
just a message.”

Moist von Lipwig, a brilliant con-artist who just did one con too many, is given the job of saving the Post-office…his choice was ‘Take the job or die’. Not, actually, such a good deal given the last few Postmasters all died on the job … in the space of a few weeks.

Will he save the post office, or is the march of technology unstoppable?…and just who are the ‘Smoking GNU’ that you hear whispers about on the Clacks?

Reading this book has got to be the most fun way imaginable of learning about telecom networks, not to mention entrepreneurship and the effect of computers on society. None of the actual technology is the same as in our world of course, but the principle is the same: transmission codes, data and control signals, simplex and duplex transmissions, image encoding, internet nodes, encryption, e-commerce, phreakers and more…they are all there, which just goes to show computer science is not just about our current computer technology. It all applies even when there is no silicon in sight.

Oh, and this is the 33rd Discworld novel, so if you do get hooked, don’t expect to get much more done for the next few weeks as you catch up.

Paul Curzon, Queen Mary University of London


The Alien Cookbook

An alien looking on distraught that two bowls of soup are different, one purple, one green.
Image by CS4FN from original soup bowls by OpenClipart-Vectors and alien image by Clker-Free-Vector-Images from Pixabay

How to spot a bad chef when you’ve never tasted the food (OR How to spot a bad quantum simulator when you do not know what the quantum circuit it is simulating is supposed to do.)

Imagine you’re a judge on a wild cooking competition. The contestants are two of the best chefs in the world, Chef Qiskit and Chef Cirq. Today’s challenge is a strange one. You hand them both a mysterious, ancient cookbook found in a crashed spaceship. The recipe you’ve chosen is called “Glorp Soup”. The instructions are very precise and scientific: “… Heat pan to 451 degrees. Stir counter-clockwise for exactly 18.7 seconds. … Add exactly 3 grams of powdered meteorite (with the specified composition). …” The recipe is a perfectly clear algorithm, but since no human has ever made Glorp Soup, nobody knows what it’s supposed to taste, look, or smell like. Both chefs go to their identical kitchens with the exact same alien ingredients. After an hour, they present their dishes.

  • Chef Qiskit brings out a bowl of thick, bubbling, bright purple soup that smells like cinnamon.
  • Chef Cirq brings out a bowl of thin, clear, green soup that smells like lemons.

Now you have a fascinating situation. You have no idea which one is the “real” Glorp Soup. Maybe it’s supposed to be purple, or maybe it’s green. But you have just learned something incredibly important: at least one of your expert chefs made a mistake. They were given the exact same, precise recipe, but they produced two completely different results. You’ve found a flaw in one of their processes without ever knowing the correct answer.

This powerful idea is called Differential Testing.

Cooking with Quantum Rules

In our research, the “alien recipes” we use are called quantum circuits. These are the step-by-step instructions for a quantum computer. And the “chefs” are incredibly complex computer programs called quantum simulators, built by places like Google and IBM.

Scientists give these simulators a recipe (a circuit) to predict what a real quantum computer will cook up. These “dishes” could be the design for a new medicine or a new type of battery. If the simulator-chef gets the recipe wrong, the final result could be useless or even dangerous. But how do you check a chef’s work when the recipe is for a food you’ve never tasted? How do you test a quantum simulator when you do not know exactly what a quantum circuit should do?

FuzzQ: The Robot Quantum Food Critic

We can’t just try one recipe, one quantum circuit. We need to try thousands. So we built a robot “quantum food critic”, a program we call FuzzQ. FuzzQ’s job is to invent new “alien recipes” (i.e. quantum circuits) and see whether the two “chefs” cook the same dish (i.e. whether different simulators do the same thing when simulating the same circuit). This process of trying out thousands of different, and sometimes very weird, recipes is called Fuzzing.

Here’s how our quantum circuit food critic works:

  1. It writes a recipe: FuzzQ uses a rulebook for “alien cooking” to invent a new, unique, and often very strange quantum circuit.
  2. It gives the recipe to both chefs: It sends the exact same quantum circuit to “Chef Qiskit” (the Qiskit simulator) and “Chef Cirq” (the Cirq simulator).
  3. It tastes the soup: FuzzQ looks at the final result from both. If they’re identical, it assumes they’re correct. But if they differ, so one made the equivalent of a purple, bubbling soup and the other the equivalent of a clear, green soup, FuzzQ sounds the alarm. It has found a bug!
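
Here is a minimal sketch of the idea in Python (not the actual FuzzQ code, and with just one hard-coded circuit rather than thousands of randomly generated ones): it runs the same tiny circuit through the Qiskit and Cirq statevector simulators and sounds the alarm if the two answers differ by more than numerical rounding noise.

```python
import numpy as np
import cirq
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# One tiny "recipe": entangle two qubits. A real fuzzer invents
# thousands of random circuits instead of this single fixed one.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qiskit_state = Statevector.from_instruction(qc).data

q = cirq.LineQubit.range(2)
circuit = cirq.Circuit(cirq.H(q[0]), cirq.CNOT(q[0], q[1]))
cirq_state = cirq.Simulator().simulate(circuit).final_state_vector

# "Taste the soup": compare the two final quantum states. (A real comparison
# must also allow for an irrelevant global phase and for the two libraries
# numbering their qubits in opposite orders; this symmetric example avoids that.)
if np.allclose(qiskit_state, cirq_state, atol=1e-6):
    print("The two chefs served the same dish.")
else:
    print("Disagreement found! At least one simulator has a bug.")
```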

We had FuzzQ invent and “taste-test” (that is, check the results of) over 800,000 different quantum recipes.

The Tale of the Two Ovens 

Our robot critic found 8 major types of quantum “cooking” errors. One of the most interesting involved a simple instruction called a “SWAP”, and was discovered by looking at how the two chefs used their high-tech “ovens”.

Imagine both chefs have an identical oven with two compartments, a Top Oven and a Bottom Oven. They preheat them according to the recipe: the Top Oven to a very hot 250°C, and the Bottom Oven to a low 100°C. The recipe then has a smart-oven command:

 “Logically SWAP the Top Oven and Bottom Oven.”

Both chefs press the button to do the “SWAP”.

  • Chef Cirq’s oven works as expected. It starts the long process of cooling the top oven and heating the bottom one.
  • Chef Qiskit’s oven, however, is a “smarter” model. It takes a shortcut. It doesn’t change the temperatures at all but just swaps the labels on its digital display, so that the one at the top, previously labelled the Top Oven, is now labelled as the Bottom Oven, and vice versa. The screen now lies, showing Top Oven: 100°C and Bottom Oven: 250°C, even though the physical reality is the opposite: the one at the top is still incredibly hot at 250°C and the one below it is still 100°C.

The final instruction is: 

“Place the delicate soufflé into the physical TOP OVEN.”

  • Chef Cirq opens his top oven (i.e. the one positioned above the other and labelled Top Oven), which is now correctly at 100°C, having cooled down, and bakes a perfect soufflé.
  • Chef Qiskit, trusting his display, opens his top oven (i.e. the one positioned above the other but internally now labelled Bottom Oven) and puts his soufflé inside. But that physical oven at the top is still at 250°C. A few minutes later, he has a burnt, smoky crisp.

Our robot judge, FuzzQ, doesn’t need to know how to bake. It just looks at the two final soufflés. One is perfect, and the other is charcoal. The results are different, so FuzzQ sounds the alarm: “Disagreement found!”

This is how we found the bug. We didn’t need to know the “correct temperature”. We only needed to see that the two expert simulators, when given the same instructions, produced two wildly different outcomes. Knowing something is amiss, further investigation of what each quantum simulator did with those identical instructions can determine what actually went wrong, so the problematic quantum simulator can be improved. By finding these disagreements, we’re helping to make sure the amazing tools of quantum science are trustworthy.

Vasileios Klimis, Queen Mary University of London


Shh! Can you hear that diagram?

What does a diagram sound like? What does the shape of a sound feel like? Researchers at Queen Mary, University of London have been finding out.

At first sight listening to diagrams and feeling sounds might sound like nonsense, but for people who are visually impaired it is a practical issue. Even if you can’t see them, you can still listen to words, after all. Spoken books were originally intended for partially-sighted people, before we all realised how useful they were. Screen readers similarly read out the words on a computer screen making the web and other programs accessible. Blind people can also use touch to read. That is essentially all Braille is, replacing letters with raised patterns you can feel.

The written world is full of more than just words though. There are tables and diagrams, pictures and charts. How does a partially-sighted person deal with them? Is there a way to allow them to work with others creating or manipulating diagrams even when each person is using a different sense?

That’s what the Queen Mary researchers, working with the Royal National Institute for the Blind and the British Computer Association of the Blind, explored. Their solution was a diagram editor with a difference. It allows people to edit ‘node-and-link’ diagrams: like the London underground map, for example, where the stations are the nodes and the links show the lines between them. The diagram editor converts the graphical part of a diagram, such as shapes and positions, into sounds you can listen to and textured surfaces you can feel. It allows people to work together exploring and editing a variety of diagrams including flowcharts, circuit diagrams, tube maps, mind maps, organisation charts and software engineering diagrams. Each person, whether fully sighted or not, ‘views’ the diagram in the way that works for them.

The tool combines speech and non-speech sounds to display a diagram. For example, when the label of a node is spoken, it is accompanied by a bubble bursting sound if it’s a circle, and a wooden sound if it’s a square. The labels of highlighted nodes are spoken with a higher pitched voice to show that they are highlighted. Different types of links are also displayed using different sounds to match their line style. For example, the sound of a straight line is smoother than that of a dashed line. The idea for arrows came from listening to one being drawn on a chalk board. They are displayed using a short and a long sound where the short sound represents the arrow head, and the long sound represents its tail. Changing the order they are presented changes the direction of the arrow: either pointing towards or away from the node.
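
As a purely illustrative sketch (the names and sound files here are made up, not the real editor’s design), you can think of that mapping from diagram elements to audio cues as a small lookup table plus a couple of rules:

```python
# Hypothetical sketch of mapping diagram elements to audio cues.
NODE_SOUNDS = {"circle": "bubble_pop.wav", "square": "wood_knock.wav"}
LINK_SOUNDS = {"solid": "smooth_tone.wav", "dashed": "rough_tone.wav"}

def node_cues(shape, label, highlighted=False):
    """Cues played when a node is 'displayed' in sound: its shape sound,
    then its label spoken (higher pitched if the node is highlighted)."""
    pitch = "high" if highlighted else "normal"
    return [NODE_SOUNDS[shape], ("speak", label, pitch)]

def arrow_cues(points_towards_node):
    """An arrow is a short sound (the head) plus a long sound (the tail);
    swapping their order flips the direction the arrow is heard to point."""
    head, tail = "short_scratch.wav", "long_scratch.wav"
    return [head, tail] if points_towards_node else [tail, head]
```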

For the touch part, the team use a PHANTOM Omni haptic device, which is a robotic arm attached to a stylus that can be programmed to simulate feeling 3D shapes, textures and forces. For example, in the diagram editor nodes have a magnetic effect: if you move the stylus close to one the stylus gets pulled towards it. You can grab a node and move it to another location, and when you do, a spring like effect is applied to simulate dragging. If you let it go, the node springs back to its original location. Sound and touch are also integrated to reinforce each other. As you drag a node, you hear a chain like sound (like dragging a metal ball chained to a prisoner?!). When you drop it in a new location, you hear the sound of a dart hitting a dart board.

The Queen Mary research team tried out the editor in a variety of schools and work environments where visually impaired and sighted people use diagrams as part of their everyday activities, and it seemed to work well. It’s free to download, so why not try it yourself? You might see diagrams in a whole new light.

Paul Curzon, Queen Mary University of London



Jerry Elliot High Eagle: Saving Apollo 13

Apollo 13 Mission patch of three golden horses travelling from Earth to the moon
Image by NASA Public domain via Wikimedia Commons

Jerry Elliot High Eagle was possibly the first Native American to work in NASA mission control. He worked for NASA for over 40 years, from the Apollo moon landings up until the space shuttle missions. He was a trained physicist with both Cherokee and Osage heritage and played a crucial part in saving the Apollo 13 crew when an explosion meant they might not get back to Earth alive.

The story of Apollo 13 is told in the Tom Hanks film Apollo 13. The aim was to land on the moon for a third time following the previous two successful lunar missions of Apollo 11 and Apollo 12. That plan was aborted on the way there, however, after pilot Jack Swigert radioed his now famous if misquoted words “Okay, Houston … we’ve had a problem here”. It was a problem that very soon seemed to mean they would die in space: an oxygen tank had just exploded. Instead of being a moon landing, the mission turned into the most famous rescue attempt in history – could the crew of James Lovell, Jack Swigert and Fred Haise get back to Earth before their small space craft turned into a frozen, airless and lifeless space coffin?

While the mission control team worked with the crew on how to keep the command and lunar modules habitable for as long as possible (they were rapidly running out of breathable air, water and heat, and had lost electrical power), Elliot worked on actually getting the craft back to Earth. He was the “retrofire officer” for the mission, which meant he was an expert in, and responsible for, the trajectory Apollo 13 took from the Earth to the moon and back. He had to compute a completely new trajectory from where they now were, which would get them back to Earth as fast and as safely as possible. It looked impossible given the limited time the crew could possibly stay alive. Elliot wasn’t a quitter though and motivated himself by telling himself:

“The Cherokee people had the tenacity to persevere on the Trail of Tears … I have their blood and I can do this.” 

The Trail of Tears was the forced removal of Native Americans from their ancestral homelands by the US government in the 19th century, to make way for the gold rush. Now we would call this ethnic cleansing and genocide. 60,000 Native American people were moved, with the Cherokee forcibly marched a thousand miles to an area west of the Mississippi, thousands dying along the way.

The best solution for Apollo 13 was to keep going and slingshot round the far side of the moon, using the forces arising from its gravity, together with strategic use of the boosters, to push the space craft back to Earth more quickly than the boosters alone could. The trajectory he computed had to be absolutely accurate or the crew would not get home; he has suggested the accuracy needed was like “threading a needle from 70 feet away!” Get it wrong and the space craft could miss the Earth completely, or hit the atmosphere at the wrong speed or angle to make it back down safely.

Jerry Elliot High Eagle, of course, famously got it right: the crew survived, safely returning to Earth, and Elliot was awarded the Presidential Medal of Freedom, the highest civilian honour in the United States, for the role he played. The Native American people also gave him the name High Eagle for his contributions to space exploration.

Paul Curzon, Queen Mary University of London


Service Model: a review

A robot butler outline on a blood red background
Image by OpenClipart-Vectors from Pixabay

Artificial Intelligences are just tools that do nothing but follow their programming. They are not self-aware and have no capacity for self-determination. They are a what, not a who. So what is it like to be a robot just following its (complex) program, making decisions based on data alone? What is it like to be an artificial intelligence? What is the real difference between being self-aware and not? What is the difference from being human? These are the themes explored by the dystopian (or is it utopian?) and funny science fiction novel “Service Model” by Adrian Tchaikovsky.

In a future where the tools of computer science and robotics have been used to make human lives as comfortable as conceivably possible, Charles(TM) is a valet robot looking after his Master’s every whim. His every action is controlled by a task list, turned into sophisticated, human-facing interaction. Charles is designed to be totally logical but also totally loyal. What could go wrong? Everything, it turns out, when he apparently murders his master. Why did it happen? Did he actually do it? Is there a bug in his program? Has he been infected by a virus? Was he being controlled by others as part of an uprising? Has he become self-aware and able to make his own decision to turn on his evil master? And what should he do now? Will his task list continue to guide him once he is in a totally alien context he was never designed for, and where those around him are apparently being illogical?

The novel explores important topics we all need to grapple with, in a fun but serious way. It looks at what AI tools are for and the difference between a tool and a person, even when they are doing the same jobs. Is it actually good to replace the work of humans with programs just because we can? Who actually benefits and who suffers? AI is being promoted as a silver bullet that will solve our economic problems. Yet we have been replacing humans with computers for decades based on that promise, and prices still go up while inequality seems to do nothing but rise, with ever more children living in poverty. Who is actually benefiting? A small number of billionaires certainly are. Is everyone? We have many better “toys” that superficially make life easier and more comfortable – we can buy anything we want from the comfort of our sofas, self-driving cars will soon take us anywhere we want, we can get answers to any question we care to ask, ever more routine jobs are done by machines, and many areas of work, boring or otherwise, are becoming a thing of the past with a promise of utopia. But are we solving problems or creating them with our drive to automate everything? Is it good for society as a whole or just good for vested interests? Are we losing track of what is most important about being human? Charles will perhaps help us find out.

Thinking about the consequences of technology is an important part of any computer science education and all CS professionals should think about the ethics of what they are involved in. Reading great science fiction such as this is one good way to explore the consequences, though as Ursula Le Guin said: the best science fiction doesn’t predict the future, it tells us about ourselves in the present. Following in the tradition of “The Machine Stops” and “I, Robot”, “Service Model” (and the short story “Human Resources” that comes with it) does that, if in a satirical way. It is a must-read for anyone involved in the design of AI tools, especially those promoting the idea of utopian futures.

Paul Curzon, Queen Mary University of London


Crystal ball coupons – what your data might be giving away

Big companies know far more about you than you think. You have very little privacy from their all-seeing algorithms. They may even have worked out some very, very personal things about you, that even your parents don’t know…

An outraged father in Minneapolis stormed into a supermarket chain complaining that his school-aged daughter was being sent coupons for baby clothes. The shop manager apologised … but later they found there was no mistake in the tiny tot offers. The teenager was expecting a baby but had not told her father. Her situation was revealed not by a crystal ball but by an algorithm. The shop was using Big Data processing algorithms that noticed patterns in her shopping that they had linked to “pregnant”. They had even worked out her likely delivery date. Her buying habits had triggered targeted marketing.

Algorithms linked her shopping patterns to “pregnant”

When we use a loyalty card or an online account our sales activity is recorded. This data is added to a big database, with our details, the time, date, location and products bought (or browsed). It is then analysed. Patterns in behaviour can be tracked, and our habits, likes, dislikes and even changes in our personal situation deduced from those patterns. Sometimes this seems quite useful, other times a bit annoying; it can surprise us, and it can be wrong.

This kind of computing is not just used to sell products; it is also used to detect fraud and to predict where the next outbreak of flu will happen. Our banking behaviour is tracked to flag suspicious transactions and help stop theft and money laundering. When we search for ‘high temperature’ our activity might be added to the data used to predict flu trends. However, the models are not always right as there can be a lot of ‘noise’ in the data. Maybe we bought baby clothes as a present for our aunt, and were googling temperatures because we wanted to go somewhere hot for our holiday.

Whether the predictions are spot on or not is perhaps not the most important thing. Maybe we should be considering whether we want our data saved, mined and used in these ways. A predictive pregnancy algorithm seems like an invasion of privacy, even like spying, especially if we don’t know about it. Predictive analytics is big; big data is really big and big business wants our data to make big profits. Think before you click!

Jane Waite, Queen Mary University of London (now at Raspberry Pi)


Hacking DNA

A jigsaw of DNA with pieces missing
Image by Arek Socha from Pixabay

DNA is the molecule of life. Our DNA stores the information of how to create us. Now it can be hacked.

DNA consists of two strands coiling round each other in a double helix. It’s made of four building blocks, or ‘nucleotides’, labelled A, C, G, T. Different orderings of the letters give the information for how to build each unique creature, you and me included. Sequences of DNA are analysed in labs by a machine called a gene sequencer. It works out the order of the letters and so tells us what’s in the DNA. When biologists talk of sequencing the human (or another animal or plant’s) genome they mean using a gene sequencer to work out the specific sequences in the DNA for that species. Gene sequencers are also used by forensic scientists to work out who might have been at the scene of a crime, and to predict whether a person has genetic disorders that might lead to disease.

DNA can be used to store information other than that of life: any information in fact. This may be the future of data storage. Computers use a code made of 0s and 1s. There is no reason why you can’t encode all the same information using A, C, G, T instead. For example, a string of 1s and 0s might be encoded by having each pair of bits represented by one of the four nucleotides: 00 = A, 01 = C, 10 = G and 11 = T. The idea has been demonstrated by Harvard scientists who stored a video clip in DNA.
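
As a small illustration, here is how that particular bits-to-nucleotides pairing might look in code (the pairing itself is just one of many possible encodings):

```python
# Illustrative sketch: encode a stream of bits as DNA letters and back,
# using the pairing 00=A, 01=C, 10=G, 11=T from the text.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def bits_to_dna(bits: str) -> str:
    """Turn a string of 0s and 1s (even length) into a DNA sequence."""
    return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def dna_to_bits(dna: str) -> str:
    """Turn a DNA sequence back into the original string of bits."""
    return "".join(BASE_TO_BITS[base] for base in dna)

message = "0100100001101001"          # the bits of the ASCII text "Hi"
dna = bits_to_dna(message)            # "CAGACGGC"
assert dna_to_bits(dna) == message    # the round trip recovers the data
```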

It also leads to whole new cyber-security threats. A program is just data too, so can be stored in DNA sequences, for example. Researchers from the University of Washington have managed to hide a malicious program inside DNA that can attack the gene sequencer itself!

The gene sequencer not only works out the sequence of DNA symbols. As it is a computer, it converts that sequence into a binary form that can then be processed as normal. As DNA sequences are long, the sequencer compresses them. The attack made use of a common kind of bug, often exploited by malware: ‘buffer overflow’ errors. These arise when the person writing a program includes instructions to set aside a fixed amount of space to store data, but then doesn’t include code to make sure only that amount of data is stored. If more data arrives, it overflows into the memory area beyond the space allocated to it. If the program’s own instructions, or the data controlling what it does next, are stored there, the effect can be to overwrite them with new, malicious instructions.

When the gene sequencer reaches that malware DNA, it converts it back into 1s and 0s, and the hidden program emerges. If those bits end up being treated as instructions and executed, the malware launches its attack and takes control of the computer that runs the sequencer. In principle, an attack like this could be used to fake results for subsequent DNA tests (subverting court cases), disrupt hospital testing, steal sensitive genetic data, or corrupt DNA-based memory.

Fortunately, the risks of exactly this attack causing any problems in the real world are very low but the team wanted to highlight the potential for DNA based attacks, generally. They pointed out how lax the development processes and controls were for much of the software used in these labs. The bigger risk right now is probably from scientists falling for spear phishing scams (where fake emails pretending to be from someone you know take you to a malware website) or just forgetting to change the default password on the sequencer.

Paul Curzon, Queen Mary University of London


Password strength and information entropy

Comparison of the password strength of Tr0ub4dor&3 (easy to guess, difficult to remember) and correcthorsebatterystaple (hard to guess but easy to remember if you turn it into a picture)
CREDIT: Randall Munroe, xkcd.com https://xkcd.com/936 – reprinted under a CC Attribution-NonCommercial 2.5 License

How do you decide whether a password is strong? Computer scientists have a mathematical way to do it. Based on an idea called information entropy, it’s part of “Information Theory”, invented by electrical engineer Claude Shannon back in 1948. This XKCD cartoon for computer scientists uses the idea to compare two different passwords. Unless you understand information theory, though, working out what is going on in the detail is a bit mind-blowing… so let’s explain the computer science!

Entropy is based on the number of guesses someone would need to make trying all the possibilities for a password one at a time – doing what computer scientists call a brute force attack. Think about a PIN on a mobile phone or cash point. Suppose it was a single digit – there would be 10 possibilities to try. If 2-digit PINs were required then there are now 100 different possibilities. With the normal 4 digits you need 10,000 (10^4 = 10x10x10x10) guesses to be sure of getting it. Different symbol sets lead to more possibilities. If you had to use lower case letters instead of digits, there are 26 possibilities for length 1, so over 450,000 (26^4 = 26x26x26x26) guesses are needed for a 4-letter password. If upper case letters are possible that goes up to more than 7 million (52 letters so 52^4 = 52x52x52x52) guesses. If you know they used a word though, you don’t have to try all the possibilities, just the words. There are only about 5000 of those, so far fewer guesses are needed. So password strength depends on the number of symbols that could be used, but also on whether the PIN or password was chosen randomly (words aren’t a random sequence of letters!)

To make everything standard, Shannon used binary to do entropy calculations, so assumed a symbol set of only 0 and 1 (so all the answers become powers of 2 because he used 2 symbols). He then measured entropy in the number of ‘bits’ needed to count all the guesses. Any other groups of symbols are converted to binary first. If a cashpoint only had buttons A, B, C and D for PINs, then to do the calculation you count those 4 options in binary: 00 (A), 01 (B), 10 (C), 11 (D) and see you need 2 bits to do it (2^2 = 2×2 choices). Real cashpoints have 10 digits and need just over 3 bits to cover all the possibilities for a 1-digit PIN (2^3 = 2x2x2 = 8 so not enough, 2^4 = 2x2x2x2 = 16 so more than you need, so the answer is more than 3 but less than 4). Its entropy would be just over 3. To count the possibilities for a 4-digit PIN you need just over 13 bits, so that is its entropy. A single lower-case letter needs just under 5 bits to cover its 26 possibilities, so its entropy is about 4.7, and so on.
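
In other words, the entropy in bits is just the base-2 logarithm of the number of equally likely possibilities, which makes the numbers above easy to check:

```python
from math import log2

print(log2(4))      # A/B/C/D cashpoint: exactly 2 bits per button press
print(log2(10))     # one decimal digit: about 3.3 bits
print(log2(10**4))  # a 4-digit PIN: about 13.3 bits
print(log2(26))     # one lower-case letter: about 4.7 bits
print(log2(26**4))  # a 4-letter lower-case password: about 18.8 bits
```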

So entropy is not measured directly in terms of guesses (which would be very big numbers) but instead indirectly in terms of bits. If you determine the entropy to be 28 bits (as in the cartoon), that means the number of different possibilities to guess would fit in 28 bits if each guess was given its own unique binary code. 28 bits of binary can be used to represent over 268 million different things (2^28), so that is the number of guesses actually needed. It is a lot of guesses but not so many a computer couldn’t try them all fairly quickly as the cartoon points out!

Where do those 28 bits come from? Well, they assume the hacker is just trying to crack passwords that follow a common pattern people use (so are not that random). The hacker assumes the person did the following: take a word; maybe make the first letter a capital; swap digits in place of some similar looking letters; and finally add a digit and a symbol at the end. It follows the general advice for passwords, and looks random … but is it?

How do we work out the entropy of a password invented that way? First, think about the base word the password is built around. The cartoon estimates it as 16 bits for an up-to-9-letter word of lower-case letters, so is assuming there are 2^16 (i.e. about 65,000) possible such base words. There are about 40,000 nine-letter words in English, so that’s an overestimate if you are assuming you know the length. Perhaps not if you assume things like fictional names and shorter words are possible.

As the pattern followed is that the first letter could be uppercase, that adds 1 more bit for the two guesses now needed for each word tried: check if it’s upper-case, then check if it’s lower-case. Similarly, as any letter ‘o’ might have been swapped for 0, and any ‘a’ for 4 (as people commonly do) this adds 3 more bits (assuming there are at most 3 such opportunities per word). Finally, we need 3 bits for the 9 possible digits and another 4 bits for the common punctuation characters added on the end. Another bit is added for the two possible orders of punctuation and digit. Add up all those bits and we have the 28 bits suggested in the cartoon (where bits are represented by little squares).

Now do a similar calculation for the other way of creating passwords suggested in the cartoon. If there are only about 2000 really common words a person might choose from, we need 11 bits per word. If we string 4 such words together completely at random (not following a recognisable phrase, with no link between the words) we get the much larger entropy of 44 bits overall. More bits means harder to crack, so this password will be much, much harder than the first. It takes over 17 trillion guesses rather than 268 million.
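
Here is the same arithmetic written out as a short sketch, following the cartoon’s assumptions:

```python
from math import log2

# The "Tr0ub4dor&3" pattern, bit by bit (the cartoon's assumptions):
troubadour_bits = (
    16   # one of roughly 65,000 (2^16) candidate base words
    + 1  # first letter upper- or lower-case
    + 3  # up to 3 common letter-for-digit substitutions
    + 3  # which digit was tacked on
    + 4  # which common punctuation character was tacked on
    + 1  # digit and punctuation in either order
)

# The "correct horse battery staple" pattern:
common_words = 2048                       # 2^11 really common words
horse_bits = 4 * log2(common_words)       # 4 words chosen completely at random

print(troubadour_bits, horse_bits)        # 28 bits vs 44 bits
print(2**troubadour_bits, 2**horse_bits)  # ~268 million vs ~17.6 trillion guesses
```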


The serious joke of the cartoon is that the rules we are told to follow lead to people creating passwords that are not very random at all, precisely because they have been created following rules. That means they are easy to crack (but still hard to remember). If instead you used the 4 longish and unconnected word method, which doesn’t obey any of the rules we are told to follow, you actually get a password that is much easier to remember (if you turn it into a surreal picture and remember the picture), but harder to crack because it actually has more randomness in it! That is the real lesson. However you create a password, it has to have lots of randomness for it to be strong. Entropy gives you a way to check how well you have done.

Paul Curzon, Queen Mary University of London


The robot always wins

Children playing Rock Paper Scissors (Janken)
image by HeungSoon from Pixabay

Researchers in Japan made a robot arm that always wins at rock, paper, scissors (a game completely of chance). Not with ultra-clever psychology, which is the way that the best humans play, but with old-fashioned cheating. The robot uses high-speed motors and precise computer vision systems to recognise whether its human opponent is making the sign for rock, paper or scissors. One millisecond later, it can play the sign that beats whatever the human chooses. Because the whole process is so quick, it looks to humans like the robot is playing at the same time. See for yourself by clicking below to watch the video of this amazing cheating robot.

Above: Janken (rock-paper-scissors) Robot with 100% winning rate (26 June 2012)

– Paul Curzon, Queen Mary University of London

Did you know?

The word ‘robot’ came to the English language over 100 years ago in the early 1920s. Before that the words ‘automaton’ or ‘android’ were used. In 1920 Czech playwright Karel Čapek published his play “R.U.R.” (Rossum’s Universal Robots, or Rossumovi Univerzální Roboti) and his brother Josef suggested using ‘roboti’, from the Slavic / Czech word meaning ‘forced labour’. In the late 1930s there was a performance of the play at the People’s Palace in London’s Stepney Green / Mile End – this building is now part of Queen Mary University of London (some of our computer science lectures take place there) and, one hundred years on, QMUL also has a Centre for Advanced Robotics.

More on … cheating

1. Winning at Rock Paper Scissors – Numberphile

Above: an entertaining look at a research paper investigating potential winning strategies (January 2015).

2. Bullseye! Mark Rober’s intelligent dart board

Above: our earlier article on Mark Rober’s robotic darts board which, like the rock paper scissors robot, uses high-speed cameras to sense a dart, computing to work out where it will land, and high-speed motors to move itself into position so your throw gets a high score.

3. The Intelligent Piece of Paper Activity

Above: a strategy for never losing at noughts and crosses (tic-tac-toe) – as long as you go first.


Related Magazine …

More on robotics

Above: our portal gathers together lots of our articles on robots and robotics.



He attacked me with a dictionary!

Letters in an unreadable muddle
Image by JL G from Pixabay

You might be surprised at how many people have something short, simple (and stupid!) like ‘password’ as their password. Some people add a number to make it harder to guess (‘password1’) but unfortunately that doesn’t help. For decades the official advice has been to use a mixture of lower (abc) and upper case (ABC) characters as well as numbers (123) and special characters (such as & or ^). To meet these rules some people substitute numbers or symbols for letters (for example 0 for O or 4 for A and so on). Following these rules might lead you to create something like “P4ssW0^d1” which looks like it might be difficult to crack, but isn’t. The problem is that people tend to use the same substitutions so password-crackers can predict, and so break, them too.

Hackers know the really common passwords people use like ‘password’, ‘qwerty’ and ‘12345678’ (and more) so will just try them as a matter of course until they very quickly come across one of the many suckers who used one. Even apparently less obvious passwords can be easy to crack, though. The classic algorithm used is a ‘dictionary attack’.

The simple version of this is to run a program that just tries each word in an online dictionary, one at a time, as the password until it finds a word that works. It takes a program fractions of a second to check every word like this. Using foreign words doesn’t help as hackers make dictionaries by combining those for every known language into one big universal dictionary. That might seem like a lot of words but it’s not for a computer.

You might think you can use imaginary words from fiction instead – names of characters in Lord of the Rings, perhaps, or the names of famous people. However, it is easy to compile lists of words like that too and add them to the password cracking dictionary. If it is a word somewhere on the web then it will be in a dictionary for hacking use.

Going a step further, a hacking program can take all these words and create versions with numbers added, 4 swapped for A, and so on. These new potential passwords become part of the attack dictionary too. More can be added by taking short words and combining them, including ones that appear in well known phrases like ‘starwars’ or ‘tobeornottobe’.
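
A toy sketch of the idea (purely illustrative: a made-up four-word list, the common substitutions mentioned above, and an unsalted hash of the kind real systems should never use):

```python
import hashlib

# Tiny illustrative word list; real attack dictionaries hold millions of entries.
WORDS = ["password", "qwerty", "dragon", "troubadour"]
SUBS = {"a": "4", "o": "0", "e": "3"}   # the predictable letter-for-digit swaps

def variants(word):
    """Return the word plus some common manglings of it."""
    swapped = "".join(SUBS.get(c, c) for c in word)
    forms = {word, word.capitalize(), swapped, swapped.capitalize()}
    for base in list(forms):
        for digit in "0123456789":
            forms.add(base + digit)      # 'password' -> 'Password1', 'p4ssw0rd7', ...
    return forms

def crack(stolen_hash):
    """Try every mangled dictionary word against a stolen, unsalted SHA-256 hash.
    (Real systems salt and deliberately slow their hashes to make this harder.)"""
    for word in WORDS:
        for guess in variants(word):
            if hashlib.sha256(guess.encode()).hexdigest() == stolen_hash:
                return guess
    return None

# Even 'P4ssw0rd1' falls to this toy cracker almost instantly:
print(crack(hashlib.sha256(b"P4ssw0rd1").hexdigest()))
```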

The list gets bigger and bigger, but computers are fast, and hackers are patient, so that’s no big deal…so make sure your password isn’t in their dictionary!

– Jo Brodie and Paul Curzon, Queen Mary University of London



This page is funded by EPSRC on research agreement EP/W033615/1.
