You are what you know by Paul Curzon, Queen Mary University of London. Image: a path through the woods at dawn, from PIXABAY.com
“Carter headed into the trees, his hat pulled low. Up ahead was a dark figure, standing in the shadow of a tree. As he drew close, Carter gave the agreed code phrase confirming he was the new agent: “Could I borrow a match?” The dark figure stepped away from the tree, but rather than completing the exchange as Carter expected, he pulled a silenced gun. Before Carter could react, he heard the quiet spit of the gun and felt an excruciating pain in his chest. A moment later he was dead. Felix put the gun away, and quickly dragged the body into the bushes out of sight. He then went back to waiting. Soon another figure approached, but from the other direction. This time it was Felix who gave the pass phrase, which he now knew. “Could I borrow a match?” The new figure confidently responded, “Doesn’t everyone use a lighter these days?” Felix hadn’t known what he would say, but was happy to assume this was Carter’s real contact. He was in. “Hello. I’m Carter.” …
The trouble with using spy novel style passphrases to prove who you are is you still have to trust the other person. If they might have nefarious intentions, you want to prove who you are without giving anything else away. You certainly don’t want them to be able to take the information you give and use it to pretend to be you. Unfortunately, the above story is pretty much how passwords work, and why attacks like phishing, where someone sends emails pretending to be from your bank, are such a problem.
This is why phishing works
The story outlines the essential problem faced by all authentication systems trying to prove who someone is, or that they possess some secret information: in the process, you give up the secret to anyone there to hear it. Security protocols need ways for one agent to prove their identity to another such that no one can masquerade as them in future. Creating a secure authentication system is harder than you might think! To do it well takes serious skill. What you don’t do is just send a password!
A simple solution for some situations is used by banks. Rather than ask you for a whole account number, they ask you for a random set of its digits: perhaps the third, fifth and eighth digit one time, but completely different ones the next. Though anyone listening in has learnt some of the secret, they can’t masquerade as you, as they will be asked for different digits when they try. Take this idea to an extreme and you get the “Zero Knowledge Proof”, where none of the secret is given up: possibly one of the cleverest ideas of computer science.
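The bank's partial-digit challenge can be sketched in a few lines of code. This is a minimal illustration of the idea only (the account number, position counts and function names are all made up, not any real bank's protocol): the verifier asks for a fresh random subset of digit positions each time, so overhearing one exchange does not let you answer the next.

```python
import random

SECRET = "83914627"  # a made-up 8-digit account number

def make_challenge(length=8, k=3):
    """Pick k random (1-indexed) digit positions to ask for."""
    return sorted(random.sample(range(1, length + 1), k))

def respond(secret, challenge):
    """The genuine customer answers with just the requested digits."""
    return [secret[pos - 1] for pos in challenge]

def verify(secret, challenge, answer):
    """The bank checks the answer against its copy of the secret."""
    return answer == [secret[pos - 1] for pos in challenge]

# One round of the protocol: the genuine customer always passes.
challenge = make_challenge()
assert verify(SECRET, challenge, respond(SECRET, challenge))

# An eavesdropper who overheard this round only learnt 3 of the 8 digits;
# the next, independently random, challenge will almost certainly ask for
# at least one digit they do not have.
```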
Shafi Goldwasser and Zero Knowledge by Paul Curzon, Queen Mary University of London
Shafi Goldwasser is one of the greatest living computer scientists, having won the Turing Award in 2012 (equivalent to a Nobel Prize). Her work helped turn cryptography from a dark art into a science. If you’ve ever used a credit card through a web browser, for example, her work was helping you stay secure. Her greatest achievement, with Silvio Micali and Charles Rackoff, is the “Zero knowledge proof”.
Zero knowledge proofs deal with the problem that, to be really secure, security protocols often need to prove that some statement is true without giving anything else away (see “You are what you know“). A specific case is where an agent (software or human) wants to prove they know some secret, without actually giving the secret up.
Satisfy me this
There are three properties a zero knowledge proof must satisfy. Suppose Peggy is trying to convince Victor that some statement about a secret is true. Firstly, if Peggy’s statement is true then Victor must be convinced of this at the end. Secondly, if it is not actually true, there must only be a tiny chance that Peggy can convince Victor that it is true. Finally, Victor must not be able to cheat in any way that means he learns more about the secret beyond the truth of the statement. Shafi and colleagues not only came up with the idea, but showed that such proofs, unlikely as they seem, were possible.
Imagine the following situation (based on a scenario by Jean-Jacques Quisquater). A top secret biosecurity laboratory is protected so only authorised people can get in and out. The lab is at the end of a corridor that splits, each branch leading to a door at one of the two opposite ends of the lab. These two doors are the only ways in or out. The rest of the room is totally sealed (see diagram).
Now, Peggy claims she knows how to get in, and has told Victor she can steal a sample of the secret biotoxin held there if he pays her a million dollars. Victor wants to be sure she can get in, before paying. She wants to prove her claim is true, but without giving anything more away, and certainly not by showing him how she does it, or giving him the toxin. She doesn’t even want him to have any hard evidence he could use to convince others that she can get in, as then he could use it against her. How does she do it?
“I can get in”
She needs a Zero knowledge proof of her claim “I can get in”! Here is one way. Victor waits in the foyer, unable to see the corridor. Peggy goes to the fork, and chooses a branch to go down then waits at the door. Victor then goes to the fork, unable to see where she is but able to see both exit routes. He then chooses an exit corridor at random and tells Peggy to appear there. Peggy does, passing through the lab if need be.
If they do this enough times, with Victor choosing at random which side she should appear from, then he can be strongly certain that she really does know how to get in. After all, that is the only way to appear at the other side. More to the point, he still cannot get in himself and, even if he records everything he sees, he would have no way to convince anyone else that Peggy can get in. Even if he videoed everything he saw, that would not be convincing proof. A video showing Peggy appearing from the correct corridor would be easy to fake. Peggy has shown she can get into the room, but without giving up the secret of how, or giving Victor a way to prove to anyone else that she can do it.
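The repeated challenge rounds can be simulated to see why cheating fails. The sketch below is my own toy model of the two-door scenario, not part of the original article: a Peggy who knows the way in can always appear from the demanded side; a Peggy who doesn't is stuck with her initial guess and survives each round only half the time.

```python
import random

def run_rounds(knows_secret, rounds, rng):
    """Return True if Peggy appears from Victor's demanded side every round."""
    for _ in range(rounds):
        peggy_side = rng.choice(["left", "right"])  # her initial corridor
        demand = rng.choice(["left", "right"])      # Victor's random challenge
        if knows_secret:
            appears = demand       # she can walk through the lab if need be
        else:
            appears = peggy_side   # stuck on whichever side she picked
        if appears != demand:
            return False
    return True

# An honest prover convinces Victor every time.
assert run_rounds(True, 20, random.Random(42))

# A cheat survives each round with probability 1/2, so after 20 rounds the
# chance of fooling Victor is 2**-20: under one in a million.
assert 0.5 ** 20 < 1e-6
```

Notice that Victor learns nothing he could replay: the transcript is just a list of coin flips and appearances, which anyone could fake.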
So, strange as it seems, it is possible to prove you know a secret without giving anything more away about the secret. Thanks to Shafi and her co-researchers the idea is now a core part of computer security.
We’ve been posting a computing-themed article linked to the picture on the ‘front’ of the advent calendar for the last 17 days and today is no exception. The picture is of a Christmas cracker so today’s theme is going to be computer hacking and cracking – all about Cyber Security.
If you’ve missed any of our previous posts, please scroll to the end of this one where we have a full list.
The terms ‘cracker’ and ‘hacker’ are often used interchangeably to refer to people who break into computers, though generally the word hacker also has a friendlier meaning – someone who uses their skills to find a workaround or a solution (e.g. ‘a clever hack’) – whereas a cracker is probably someone who shouldn’t be in your system and is up to no good. Both can use very similar skills though – one is using them to benefit others, the other to benefit themselves.
We have an entire issue of the CS4FN magazine all about Cyber Security – it’s issue 24 and is called ‘Keep Out’ but we’ll let you in to read it. All you have to do is click on this very secret link, then click on the magazine’s front cover to download the PDF. But don’t tell anyone else…
Both the articles below were originally published in the magazine as well as on the CS4FN website.
You arrive in your holiday hotel and ask about Wi-Fi. Time to finish off your online game, connect with friends, listen to music, kick back and do whatever is your online thing. Excellent! The hotel Wi-Fi is free and better still you don’t even need one of those huge long codes to access it. Great news, or is it?
You always have to be very cautious around public Wi-Fi whether in hotels or cafes. One common attack is for the bad guys to set up a fake Wi-Fi with a name very similar to the real one. If you connect to it without realising, then everything you do online passes through their computer, including all those user IDs and passwords you send out to services you connect to. Even if the passwords they see are encrypted, they can crack them offline at their leisure.
Things just got more serious. A group has created a way to take over hotel Wi-Fi. In July 2017, the FireEye security team found a nasty bit of code, malware, linked to an email received by a series of hotels. The malware was called GAMEFISH. But this was no game and it certainly had a bad, in fact dangerous, smell! It was a ‘spear phishing’ attack on the hotel’s employees. This is an attack where fake emails try to get you to go to a malware site (phishing), but where the emails appear to be from someone you know and trust.
Once in the hotel network, so inside the security perimeter, the code searched for the machines running the hotel’s Wi-Fi and took them over. Once there it sat and watched, sniffing out passwords from the Wi-Fi traffic: what’s called a man-in-the-middle attack.
The report linked the malware to a very serious team of Russian hackers, called FancyBear (or APT28), who have been associated with high profile attacks on governments across the world. GAMEFISH used a software tool (an ‘exploit’) called EternalBlue, along with some code that compiled their Python scripts locally, to spread the attack. Would you believe, EternalBlue is thought to have been created by the US Government’s National Security Agency (NSA), but leaked by a hacker group! EternalBlue was used in the WannaCry ransomware too. This may all start to sound rather like a far-fetched thriller, but it is not. This is real! So think before you click to join an unsecured public Wi-Fi.
Just between the two of us: mentalism and covert channels
Secret information should stay secret. Beware ‘covert channels’ though. They are a form of attack where an illegitimate way of transferring information is set up. Stopping information leaking is a bit like stopping water leaking – even the smallest hole can be exploited. Magicians have been using covert channels for centuries, doing mentalism acts that wow audiences with their ‘telepathic’ powers.
The secret codes of Mentalism
In the 1950s Australian couple Sydney and Lesley Piddington took the entertainment world by storm. They had the nation perplexed, puzzled and entertained. They were seemingly able to communicate telepathically over great distances. It all started in World War 2 when Sydney was a prisoner of war. To keep up morale, he devised a mentalism act where he ‘read the minds’ of other soldiers. When he later married Lesley they perfected the act and became an overnight sensation, attracting BBC radio audiences of 20 million. They communicated random words and objects selected by the audience, even when Lesley was in a circling aeroplane or Sydney was in a diving bell in a swimming pool. To this day their secret remains unknown, though many have tried to work it out. Perhaps they used a hidden transmitter. After all, that was fairly new technology then. Or perhaps they were using their own version of an old mentalism trick: a code to transmit information hidden in plain sight.
Sydney had a severe stutter, and some suggested it was the pauses he made in words rather than the words themselves that conveyed the information. Using timing and silence to code information seems rather odd, but it can be used to great effect.
In the phone trick ‘Call the wizard’, for example, a member of the audience chooses any card from a pack. You then phone your accomplice. When they answer you say “I have a call for the wizard”. Your friend names the card suits: “Clubs … spades … diamonds … hearts”. When they reach the suit of the chosen card you say: “Thanks”.
Your phone friend now knows the suit and starts counting out the values, Ace to King. When they reach the chosen card value you say: “Let me pass you over”. Your accomplice now knows both suit and value so dramatically reveals the card to the person you pass the phone to.
This trick requires a shared understanding of the code words and the silence between them. When combined with the background count, information is passed. The silence is the code.
Timing can similarly be used by a program to communicate covertly out of a secure network. Information might be communicated by the time a message is sent rather than its contents, for example.
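A timing covert channel like this is easy to sketch in code. The example below is my own illustration, not a real attack tool: the gaps between otherwise innocent messages carry the bits, with a short gap meaning 0 and a long gap meaning 1. It uses logical timestamps rather than real clock delays so the demonstration is deterministic.

```python
SHORT, LONG = 1, 3  # gap lengths in arbitrary time units
THRESHOLD = 2       # receiver's dividing line between short and long

def send(bits):
    """Return the timestamps at which innocent-looking messages are sent."""
    t, times = 0, [0]
    for bit in bits:
        t += LONG if bit else SHORT
        times.append(t)
    return times

def receive(times):
    """Recover the bits from the gaps between consecutive timestamps."""
    return [1 if b - a > THRESHOLD else 0 for a, b in zip(times, times[1:])]

secret = [1, 0, 1, 1, 0, 0, 1]
assert receive(send(secret)) == secret  # the messages themselves say nothing
```

A monitor inspecting message contents sees nothing suspicious; the information is entirely in when the messages arrive.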
Codes on the table
Covert channels can be hidden in the existence and placement of things too. Here’s another trick.
The receiving performer leaves the room. A card is chosen from a pack by a volunteer. When the receiver arrives back they are instantly able to tell the audience the name of the card. The secret is in the table. Once the card has been selected, pack and box are replaced on the table. The agreed code might be:
If the box is face up and its flap is closed: Clubs.
If the box is face up and its flap is open: Spades.
If the box is face down and its flap is closed: Diamonds.
If the box is face down and its flap is open: Hearts.
That’s the suits taken care of. Now for the value. The performers agree in advance how to mentally chop up the card table into zones: top, middle and bottom of the table, and far right, right, left and far left. That’s 3 x 4 unique locations. 12 places for 12 values. The pack of cards is placed in the correct pre-agreed position, box face up or not, flap open or closed as needed. What about the 13th possibility? Have the audience member hold their hand out flat and leave the cards on it for them to ‘concentrate’ on.
Again a similar idea can be used as a covert channel to subvert a security system: information might be passed based on whether a particular file exists or not, say.
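That file-existence channel can be sketched directly. This is a toy illustration of my own (the `flag_i` naming is invented): bit i of the secret is signalled purely by whether a file exists in a directory both programs can see. No file contents are ever written, so a monitor reading the files finds nothing.

```python
import os
import tempfile

def send(directory, bits):
    """Signal each 1-bit by creating an empty file; absence means 0."""
    for i, bit in enumerate(bits):
        if bit:
            open(os.path.join(directory, f"flag_{i}"), "w").close()

def receive(directory, nbits):
    """Recover the bits just by checking which files exist."""
    return [1 if os.path.exists(os.path.join(directory, f"flag_{i}")) else 0
            for i in range(nbits)]

with tempfile.TemporaryDirectory() as d:
    secret = [1, 0, 0, 1, 1]
    send(d, secret)
    assert receive(d, len(secret)) == secret
```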
Making it up as you go along
These are just a couple of examples of the clever ideas mentalists have used to amaze and entertain audiences with feats of seemingly superhuman powers. Our cs4fn mentalism portal has more. Some claim they have the powers for real, but with two dedicated performers and a lot of cunning memory work, it’s often hard to decipher performers’ methods. Covert channels can be similarly hard to spot.
Perhaps the Piddingtons’ secret was actually a whole range of different methods. Just before she died Lesley Piddington is said to have told her son, “Even if I wanted to tell you how it was done, I don’t think I would be able”. However it was done, they were using some form of covert channel to cement their place in magic history. As Sydney said at the end of each show, “You be the judge”.
Governments pass laws to protect their citizens’ privacy. They outlaw reading email, listening in to phone calls and accessing data without permission or a court order. If you really care about privacy though, you need to protect your ‘metadata’ too: data about data. The police and security services can learn a lot from metadata when they put their minds to it. Anyone else with access can too. You need to take care of yours (and that of your cat).
Imagine you are on the run, hoping not to be found. You would make sure no one took a photograph of you that showed where you were, but you might think you were safe enough if you weren’t standing next to a distinctive landmark. Think again. John McAfee (who was on the run from the Police at the time), was located by a photograph, but not because of anything visible in the picture. It was because of the hidden metadata attached to it.
Metadata is all the indexing information that goes with data. So a film on DVD is actual data but the name of the film, the director, information about the actors, and so on – that’s its metadata. Similarly, the words you say in a phone call are the data, but who is calling who and from where is metadata.
With photos (the data), the metadata includes where the photo was taken and the camera it was taken with. Smartphones track very precise location information, and, unless you switch this setting off, this metadata of where it was taken is saved with the picture. If the photo is uploaded to a website, the metadata is uploaded too, and that’s what happened to John McAfee. A journalist interviewed him and took a photo, intending not to give away his whereabouts. However, he uploaded the photo which gave away the precise location in the metadata. This would have made it easier for law enforcement to catch up with him … had he not found out about the location-leak first and made his escape in the nick of time.
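Location metadata is typically stored in a photo's EXIF tags as degrees, minutes and seconds plus a hemisphere letter. The helper below is a generic sketch of that conversion (my own illustration with made-up coordinates, nothing specific to the McAfee photo): it turns the EXIF-style values into the decimal degrees a map service uses.

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds plus N/S/E/W reference
    to signed decimal degrees (south and west are negative)."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# e.g. 51° 31' 12" N, 0° 7' 48" W (an invented point in London)
lat = dms_to_decimal(51, 31, 12, "N")
lon = dms_to_decimal(0, 7, 48, "W")
assert round(lat, 2) == 51.52 and round(lon, 2) == -0.13
```

Anyone with the uploaded file can run exactly this arithmetic on its GPS tags, which is why stripping metadata before sharing matters.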
Location based services on smartphones are really useful. If you want to find out where you are, how far away something is, or use the inbuilt map apps on the phone to plan how to get somewhere, then you need to let the phone know where it is. But you might not want a photograph you take inside your home to share the information of where that house is to anyone who cares to know, stalkers and trolls included.
The website “I know where your cat lives” aims to make more people aware of the information they give away unintentionally. Plenty of people share photographs of their gorgeous pets on the Internet. Most also unwittingly give away the precise latitude and longitude of where they live. The site collects publicly available cat photos with their metadata and plots them on an aerial photo map, showing where each cat’s photo has been taken, and so likely where the owner lives.
German Green party MP, Malte Spitz, went a step further and published 6 months of records kept (at the time by law) by his phone company about him. To emphasise how scary it was privacy-wise he published it in the form of a minute by minute interactive map, so anyone could follow his exact location (just like the phone company) as though in real time from the location metadata his phone was giving away all the time. The metadata was combined with his freely available social networking data, allowing anyone to see not just where he was but often what he was doing. Germany no longer requires phone companies to keep this metadata, but other countries have antiterrorist laws that require similar information to be kept for everyone. You can explore Malte’s movements at (archived link: www.zeit.de/datenschutz/malte-spitz-data-retention) to get an idea of how your life is being tracked by metadata.
Don’t just look after your cat, look after your cat’s metadata too (not to mention your own)!
The great Tudor and Stuart philosopher Sir Francis Bacon was a scientist, a statesman and an author. He was also a pretty decent computer scientist. He published* a new form of cipher, now called Bacon’s Cipher, invented when he was a teenager. Its core idea is the foundation for the way all messages are stored in computers today.
The Tudor and Stuart eras were a time of plot and intrigue. Perhaps the most famous is the 1605 Gunpowder plot where Guy Fawkes tried to assassinate King James I by blowing up the Houses of Parliament. Secrets mattered! In his youth Bacon had worked as a secret agent for Elizabeth I’s spy chief, Walsingham, so knew all about ciphers. Not content with using those that existed he invented his own. The one he is best remembered for was actually both a cipher and a form of steganography. While a cipher aims to make a message unreadable, steganography is the science of secret writing: disguising messages so no one but the recipient knows there is a message there at all.
A Cipher …
Bacon’s method came in two parts. The first was a substitution cipher, where different symbols are substituted for each letter of the alphabet in the message. This idea dates back to Roman times. Julius Caesar used a version, substituting each letter for a letter from a fixed number of places down the alphabet (so A becomes E, B becomes F, and so on). Bacon’s key idea was to replace each letter of the alphabet with, not a number or letter, but its own series of a’s and b’s (see the cipher table). The Elizabethan alphabet actually had only 24 letters, so I and J have the same code, as do U and V, as they were interchangeable (J was the capital letter version of I, and similarly U of V).
In Bacon’s cipher everything is encoded in two symbols, so it is a binary encoding. The letters a and b are arbitrary; today we would use 0 and 1. This is the first use of binary as a way to encode letters (in the West at least). Today all text stored in computers is represented in this way, though the codes are different: that is all Unicode is. It allocates each character a binary pattern used to represent it in the computer. When the characters are to be displayed, the computer program just looks up which graphic pattern (the actual symbol as drawn) is linked to that binary pattern in the code being used. Unicode gives a binary pattern for every symbol in every human language (and some invented ones like Klingon).
The second part of Bacon’s cipher system was steganography. Steganography dates back to at least the Greeks, who supposedly tattooed messages on the shaved heads of slaves, then let their hair grow back before sending them as both messenger and message. The binary encoding of Bacon’s cipher was vital to make his steganography algorithm possible. However, the message was not actually written as a’s and b’s. Bacon realised that two symbols could stand for any two things. If you could make the difference hard to spot, you could hide the messages. Bacon invented two ways of handwriting each letter of the alphabet: two fonts. An ‘a’ in the encoded message meant use one font and a ‘b’ meant use the other. The secret message could then be hidden inside an innocent one. The letters written were no longer the message; the message was in the font used. As Bacon noted, once you have the message in binary you can think of other ways to hide it. One way used was with capital and lower-case letters, though only using the first letter of words to make it less obvious.
Suppose you wanted to hide the message ‘no’ in the innocuous message ‘hello world’. The message ‘no’ becomes ‘abbaa abbab’. So far this is just a substitution cipher. Next we hide it in ‘hello world’. Two different kinds of font are those with curls on the tails of letters, known as serif fonts, and those without curls, known as sans serif fonts. We can use a sans serif font to represent an ‘a’ in the coded message, and a serif font to represent a ‘b’. We just alternate the fonts following the pattern of the a’s and b’s: ‘abbaa abbab’. The message becomes
sans serif, serif, serif, sans serif, sans serif, sans serif, serif, serif, sans serif, serif.
Using those fonts for our message we get the final mixed font message to send:
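The whole scheme can be sketched in code. The encoder below follows the 24-letter Elizabethan alphabet described above (I/J share a code, as do U/V), turning each letter into five a’s and b’s; since a plain code listing can’t switch fonts, the hiding step uses Bacon’s other trick mentioned above, letter case, as its “two fonts”. The function names are my own.

```python
ALPHABET = "abcdefghiklmnopqrstuwxyz"  # 24 letters: no J (use I), no V (use U)

def bacon_encode(text):
    """Encode each letter as five a's/b's: its alphabet index in binary."""
    groups = []
    for ch in text.lower():
        ch = {"j": "i", "v": "u"}.get(ch, ch)  # fold J into I, V into U
        index = ALPHABET.index(ch)
        groups.append("".join("b" if index & (1 << k) else "a"
                              for k in range(4, -1, -1)))
    return " ".join(groups)

assert bacon_encode("no") == "abbaa abbab"  # matches the worked example

def hide(bits, carrier):
    """Hide an a/b string in a carrier text using letter case as the
    'two fonts': 'a' = lower case, 'b' = UPPER case."""
    out, i = [], 0
    for ch in carrier:
        if ch.isalpha() and i < len(bits):
            out.append(ch.upper() if bits[i] == "b" else ch.lower())
            i += 1
        else:
            out.append(ch)
    return "".join(out)

print(hide(bacon_encode("no").replace(" ", ""), "hello world"))
```

The carrier still reads as ‘hello world’; the secret is entirely in which letters are capitalised.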
Bacon the polymath
Bacon is perhaps best known as one of the principal advocates for rigorous science as a way of building up knowledge. He argued that scientists needed to do more than just come up with theories of how the world worked, and also guard against just seeing the results that matched their theories. He argued knowledge should be based on careful, repeated observation. This approach is the basis of the Scientific Method and one of the foundation stones of modern science.
Bacon was also a famous writer of the time, and one of many authors who has since been suggested as the person who wrote William Shakespeare’s plays. In his case it is because they claim to have found secret messages hidden in the plays in Bacon’s code. The idea that someone else wrote Shakespeare’s plays actually started just because some upper class folk with a lack of imagination couldn’t believe a person from a humble background could turn themselves into a genius. How wrong they were!
– Paul Curzon, Queen Mary University of London, Autumn 2017
*Thanks to Pete Langman, whose PhD was on Francis Bacon, for pointing out a mistake in the original version of this blog where I suggested the cipher was published in 1605, the year of the Gunpowder plot. It was actually first published in 1623 in De augmentis, which was a translation/enlargement of his 1605 Advancement of Learning.
He also pointed out that Bacon conceived the idea while working with Elizabethan spymaster Walsingham’s cipher expert at the time of the Babington plot to assassinate Elizabeth I, Thomas Phelippes, and with Mary, Queen of Scots’ jailer, Amias Paulet. Bacon also claimed the cipher was never broken!
Contactless payments seem magical. But don’t get caught out by someone magically scanning your card without you knowing. Almost £7 million was stolen by contactless card fraud in 2016 alone…
Contactless cards talk to the scanner by electromagnetic induction, discovered by Michael Faraday back in 1831. Changes in the current in a coil of wire, which for a contactless card is just an antenna in the form of a loop, create a changing magnetic field. If a loop antenna on another device is placed inside that magnetic field, then a voltage is created in its circuit. As the current in the first circuit changes, that in the other circuit copies it, and information is passed from one to the other. This works up to about 10cm away.
Picking pockets at a distance
Contactless cards don’t require authentication, like a PIN, to prove who is using them for small amounts. Anyone with the card and a reader can charge small amounts to it. Worse, if someone gets a reader within 10cm of the bag holding your card, they could even take money from it without your knowledge. That might seem unlikely, but then traditional pickpockets are easily capable of taking your wallet without you noticing, so just getting close isn’t hard by comparison! For that kind of fraud the crook has to have a legitimate reader to charge money. Even without one, though, they can read the number and expiry date from the card and use them to make online purchases.
A man in the middle
Security researchers have also shown that ‘relay’ attacks are possible, where a fake device passes messages between the shop and a card that is somewhere else. An attacker places a relay device near to someone’s actual card. It communicates with a fake card an accomplice is using in the shop. The shop’s reader queries the fake card which talks to its paired device. The paired device talks to the real card as though it were the one in the shop. It passes the answers from the real card back to the fake card which relays it on to the shop. Real reader and card get exactly the messages they would if the card was in the shop, just via the fake devices in between. Both shop and card think they are talking to each other even though they are a long way apart, and the owner of the real card knows nothing about it.
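The relay idea can be made concrete with a toy simulation. This is an illustrative sketch of my own (real cards compute cryptograms; the `auth(...)` string here is a stand-in): the shop's reader challenges whatever card is in front of it, and the attacker's fake card simply forwards every challenge to the distant real card and relays the answer back.

```python
class Card:
    """A stand-in for the real contactless card: it answers challenges."""
    def __init__(self, number):
        self.number = number
    def respond(self, challenge):
        # A real card would compute a cryptogram; we fake one deterministically.
        return f"auth({self.number},{challenge})"

class Relay:
    """The attacker's paired devices: a fake card in the shop plus a
    reader-like device held near the victim's real card."""
    def __init__(self, real_card):
        self.real_card = real_card
    def respond(self, challenge):
        # Forward the reader's challenge to the distant real card and
        # relay its answer back unchanged.
        return self.real_card.respond(challenge)

def reader_transaction(card_like, challenge="c123"):
    """The shop's reader: challenge whatever 'card' is presented."""
    return card_like.respond(challenge)

real = Card("4929-xxxx")
assert reader_transaction(Relay(real)) == reader_transaction(real)
# The reader sees identical messages either way, so it cannot tell the
# relayed card from the genuine one.
```

Real defences against this work on what the simulation leaves out: relaying takes time, so readers can bound the round-trip delay (distance bounding).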
Block the field
How do you guard against contactless attacks? Never hand over your card, always ask for a receipt and check your statements. You can also keep your card in a blocking sleeve: a metal case that protects the card from electromagnetic fields (even a homemade sleeve of tin foil should work). Then at least you force the pickpockets back to the Victorian, Artful Dodger-style method of actually stealing your wallet.
Of course Faraday was a Victorian, so a contactless attack is actually a Victorian way of stealing too!
– Jane Waite and Paul Curzon, Queen Mary University of London
To catch criminals, whether old-fashioned ones or cybercriminals, you need to understand the criminal mind. You need to understand how they think and how they work. Jeremiah Onaolapo, a PhD student at UCL, has been creating cyber-honeypots and finding out how cybercriminals really operate.
Hackers share user ids and passwords they have stolen on both open and hidden websites. But what do the criminals who then access those accounts do once inside? If your webmail account has been compromised, what will happen? Will you even know you’ve been hacked?
Looking after passwords is important. If someone hacks your account there is probably lots of information you wouldn’t want criminals to find: other passwords, bank or shopping site details, personal images, links to cloud sites with yet more information about you … By making use of the information they discover, they could cause havoc in your life. But what are cybercriminals most interested in? Do they use hacked accounts just to send spam or phish for more details? Do they search for bank details, launch attacks elsewhere, … or something completely different we aren’t aware of? How do you even start to study the behaviour of criminals without becoming one? Jeremiah knew how hard it is for researchers to study issues like this, so he created some tools to help that others can use too.
His system is based on the honeypot. Police and spies have used various forms of honeytraps, stings and baits successfully for a long time, and the idea is used in computing security too. The idea is that you set up a situation so attractive to people that they can’t resist falling into your trap. Jeremiah’s involved a set of webmail accounts. His accounts aren’t just normal accounts though. They are all fake, and have software built in that secretly records the activities of anyone accessing the account. They save any emails drafted or sent, details of the messages read, the locations the hackers come in from, and so on. The accounts look real, however. They are full of real messages, sent and received, but with all personal details, such as names and passwords or bank account details, fictionalised. New emails sent from them aren’t actually delivered but just go into a sinkhole server, where they are stored for further study. This means that no successful criminal activity can happen from the accounts. A lot can be learnt about any cybercriminals though!
In an early experiment Jeremiah created 100 such accounts and then leaked their passwords and user ids in different ways: on hacker forums and web pages. Over 7 months hundreds of hackers fell into the trap, accessing the accounts from 29 countries. What emerged were four main kinds of behaviours, not necessarily distinct: the curious, the spammers, the gold diggers and the hijackers. The curious seemed to just be intrigued to be in someone else’s account, but didn’t obviously do anything bad once there. Spammers just used the account to send vast amounts of spam email. Gold diggers went looking for more information like bank accounts or other account details. They were after personal information they could make money from, and also tried to use each account as a stepping stone to others. Finally, hijackers took over accounts, changing the passwords so the owner couldn’t get in themselves.
The accounts were used for all sorts of purposes including attempts to use them to buy credit card details and in one extreme case to attempt to blackmail someone else.
Similar behaviours were seen in a second experiment where the account details were only released on hidden websites used by hackers to share account details. In only a month this set of accounts was accessed over a thousand times from more than 50 countries. As might be expected these people were more sophisticated in what they did. More were careful to ensure they cleared up any evidence they had been there (not realising everything was separately being recorded). They wanted to be able to keep using the accounts for as long as possible, so tried to make sure no one knew the account was compromised. They also seemed to be better at covering the tracks of where they actually were.
The Good Samaritan
Not everyone seemed to be there to do bad things though. One person stood out. They seemed to be entering the accounts to warn people, sending messages from inside the account to everyone in the contact list telling them that the account had been hacked. That would presumably also mean those contacts would alert the real account owner. There are still good Samaritans!
One thing this shows is how important it is to look after your account details: ensure no one knows or can guess them. Don’t enter details in a web page unless you are really sure you are in a secure place both physically and virtually and never tell them to anyone else. Also change your passwords regularly so if they are compromised without you realising, they quickly become useless.
Of course, if you are a cybercriminal, you had better beware as that tempting account might just be a honeypot and you might just be the rat in the maze.