Smart tablets (to swallow)

The first ever smart pill has been approved for use. It’s like any other pill except that this one has a sensor inside it and it comes with a tracking device patch you wear to make sure you take it.

A big problem with medicine is remembering to take it. It’s common for people to be unsure whether they did take today’s tablet or not. Getting it wrong regularly can make a difference to how quickly you recover from illness. Many medicines are also very, very expensive. Mass-produced electronics, on the other hand, are cheap. So could the smart pill be a new, potentially useful, solution? The pill contains a sensor that is triggered when the pill dissolves and the sensor meets your stomach acids. When it does, the patch you wear detects its signal and sends a message to your phone to record the fact. The specially made sensor itself is harmless and safe to swallow. Your phone’s app can then, if you allow it, tell your doctor so that they know whether you are taking the pills correctly or not.

Smart pills could also be invaluable for medical researchers. In medical trials of new drugs, it is important to know whether patients took the pills correctly, but this is difficult to check. If a large number of patients don’t, that could be a reason why the drugs appeared less effective than expected. Smart pills could allow researchers to better work out how regularly a drug needs to be taken to still work.

More futuristically still, such pills may form part of a future health artificial intelligence system that is personalised to you. It would collect data about you and your condition from a wide range of sensors recording anything relevant, from whether you’ve taken pills to how active you’ve been, your heart rate, blood pressure and so on: in fact anything useful that can be sensed. Then, using big data techniques to crunch all that data about you, it would tailor your treatment. For example, such a system may be better able to work out how a drug affects you personally, and so be better able to match doses to your body. It may be able to give you personalised advice about what to eat and drink, even predicting when your condition could be about to get better or worse. This could make a massive difference to life for those with long-term illnesses like rheumatoid arthritis or multiple sclerosis, where symptoms flare up and die away unpredictably. It could also help the doctors who currently must find the right drug and dose for each person by trial and error.

Computing in future could be looking after your health personally, as long as you are willing to wear it both inside and out.

Paul Curzon, Queen Mary University of London, Spring 2021

More on

Magazines…

This post and issue 27 of the cs4fn magazine have been funded by EPSRC as part of the PAMBAYESIAN project.

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

i-pickpocket

Credit cards in a back pocket.
Image by Kris from Pixabay

Contactless payments seem magical. But don’t get caught out by someone magically scanning your card without you knowing. Almost £7 million was stolen by contactless card fraud in 2016 alone…

Victorian Hi-Tech

Contactless cards talk to the scanner by electromagnetic induction, discovered by Michael Faraday back in 1831. Changes in the current in a coil of wire, which for a contactless card is just an antenna in the form of a loop, create a changing magnetic field. If a loop antenna on another device is placed inside that magnetic field, then a voltage is induced in its circuit. As the current in the first circuit changes, the current in the other circuit copies it, and information is passed from one to the other. This works up to about 10cm away.

Picking pockets at a distance

For small amounts, contactless cards don’t require authentication, such as a PIN, to prove who is using them. Anyone with the card and a reader can charge small amounts to it. Worse, if someone gets a reader within 10cm of the bag holding your card, they could take money from it without your knowledge. That might seem unlikely, but traditional pickpockets are easily capable of taking your wallet without you noticing, so just getting close isn’t hard by comparison! For that kind of fraud the crook has to have a legitimate reader to charge money. Even without one, though, they can read the number and expiry date from the card and use them to make online purchases.

A man in the middle

Security researchers have also shown that ‘relay’ attacks are possible, where a fake device passes messages between the shop and a card that is somewhere else. An attacker places a relay device near to someone’s actual card. It communicates with a fake card an accomplice is using in the shop. The shop’s reader queries the fake card, which talks to its paired device. The paired device talks to the real card as though it were the one in the shop. It passes the answers from the real card back to the fake card, which relays them on to the shop. Real reader and card get exactly the messages they would if the card were in the shop, just via the fake devices in between. Both shop and card think they are talking to each other even though they are a long way apart, and the owner of the real card knows nothing about it.
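A toy sketch can show why the relay attack works (this is an illustration only, not a real payment protocol: the class and message names are made up). The reader only ever sees messages, so a device that faithfully forwards the real card’s answers is indistinguishable from the card itself:

```python
class Card:
    """The victim's real contactless card: answers whatever it is asked."""
    def respond(self, query):
        return f"card answer to {query}"

class Relay:
    """The attacker's paired devices, modelled as one object: it poses as a
    card in the shop but just forwards every query to the real card."""
    def __init__(self, real_card):
        self.real_card = real_card

    def respond(self, query):                 # looks like a card to the reader...
        return self.real_card.respond(query)  # ...but secretly asks the real card

class Reader:
    """The shop's reader: it cannot tell a relay from a real card."""
    def query(self, card_like):
        return card_like.respond("payment request")

victim = Card()
shop = Reader()
direct = shop.query(victim)           # card actually present in the shop
relayed = shop.query(Relay(victim))   # card somewhere else entirely
```

Because `direct` and `relayed` are identical, nothing in the messages themselves reveals the attack; real defences have to add something the relay cannot forward cheaply, such as tight timing checks on how long answers take to arrive.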

Block the field

How do you guard against contactless attacks? Never hand over your card, always ask for a receipt and check your statements. You can also keep your card in a blocking sleeve: a metal case that protects the card from electromagnetic fields (even using a homemade sleeve from tin foil should work). Then at least you force the pickpockets back to the Victorian, Artful Dodger style, method of actually stealing your wallet.

Of course Faraday was a Victorian, so a contactless attack is actually a Victorian way of stealing too!

Jane Waite and Paul Curzon, Queen Mary University of London


AI Detecting the Scribes of the Dead Sea Scrolls

The cave where most of the Dead Sea Scrolls were found. Image by Effi Schweizer, Public Domain from wikimedia

Computer science and artificial intelligence have provided a new way to do science: indeed, it was one of the earliest uses of the computer. They are now giving scholars new ways to do research in other disciplines, such as ancient history, too. Artificial Intelligence has been used in a novel way to help understand how the Dead Sea Scrolls were written, and it turns out scribes in ancient Judea worked in teams.

The Dead Sea Scrolls are a collection of almost a thousand ancient documents, written around two thousand years ago, that were found in caves near the Dead Sea. The collection includes the oldest known written version of the Bible.

Researchers from the University of Groningen (Mladen Popović, Maruf Dhali and Lambert Schomaker) used artificial intelligence techniques to analyse a digitised version of the longest scroll in the collection, known as the Great Isaiah Scroll. They picked one letter, aleph, that appears thousands of times through the document, and analysed it in detail.

Two kinds of artificial intelligence program were used. The first, feature extraction, was based on computer vision and image processing and was needed to recognise features in the images. At one level these features are the actual characters but, more subtly, the aim here was for the features to correspond to ink traces produced by the actual muscle movements of the scribes.

The second was machine learning. Machine learning programs are good at spotting patterns in data – grouping the data into things that are similar and things that are different. A typical textbook example would be giving the program images of cats and of dogs. It would spot the pattern of features that corresponds to dogs and the different pattern that corresponds to cats, and group each image into one or the other.

Here the data was all those alephs, or more specifically the features extracted from them. Essentially the aim was to find patterns based on the muscle movements of the original scribe of each letter. To the human eye the writing throughout the document looks very, very uniform, suggesting a single scribe wrote the whole thing. If that were the case, only one pattern would be found, with no clear way to split the letters into groups. Despite this, the artificial intelligence evidence suggests there were actually two scribes involved: there were two patterns.
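The grouping step can be sketched in code. This is not the researchers’ actual pipeline, just a minimal illustration of the idea: given made-up two-dimensional “stroke feature” values for a set of letters (say, stroke thickness and joint position), a simple k-means style algorithm splits them into two clusters without ever being told which letter came from which scribe.

```python
def two_means(points, iterations=10):
    """Split 2D feature points into two clusters: a minimal k-means sketch."""
    # Deterministic start: use the two points farthest apart as cluster centres.
    centres = list(max(
        ((p, q) for p in points for q in points),
        key=lambda pq: (pq[0][0] - pq[1][0]) ** 2 + (pq[0][1] - pq[1][1]) ** 2))
    groups = ([], [])
    for _ in range(iterations):
        groups = ([], [])
        for p in points:
            # Assign each point to its nearest centre (squared distance).
            dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centres]
            groups[dists.index(min(dists))].append(p)
        # Move each centre to the average of the points assigned to it
        # (with this initialisation neither group ends up empty here).
        centres = [(sum(x for x, _ in g) / len(g),
                    sum(y for _, y in g) / len(g)) for g in groups]
    return groups

# Hypothetical per-letter features: (stroke thickness, joint position).
scribe_a = [(1.0, 2.0), (1.1, 2.1), (0.9, 1.9), (1.05, 2.05)]
scribe_b = [(2.0, 3.0), (2.1, 3.1), (1.9, 2.9), (2.05, 3.05)]
groups = two_means(scribe_a + scribe_b)  # recovers the two "scribes"
```

The two groups come out matching the two hidden “scribes” exactly, even though the algorithm was never told how many letters each wrote.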

The research team found, by analysing the way the letters were written, that there were two clear groupings of letters. One group were written in one way and the other in a slightly different way. There were very subtle differences in the way strokes were written, such as in their thickness and the positions of the connections between strokes. This could just be down to variations in the way a single writer wrote letters at different times. However, the differences were not random, but very clearly split at a point halfway through the scroll. This suggests there were two writers who each worked on the different parts. Because the characters were otherwise so uniform, those two scribes must have been making an effort to carefully mirror each other’s writing style so the letters looked the same to the naked eye.

The research team have not only found out something interesting about the Dead Sea Scrolls, but also demonstrated a new way to study ancient handwriting. With a few exceptions, the scribes who wrote the ancient documents that survive to the modern day, like the Dead Sea Scrolls, are anonymous. Thanks to leading-edge computer science, we now have a new way to find out more about them.

Explore the digitised version of the Dead Sea Scrolls yourself at www.deadseascrolls.org.il

Paul Curzon, Queen Mary University of London


Losing the match? Follow the science. Change the kit!

Artificial Intelligence software has shown that two different Manchester United gaffers got it right in believing that kit and stadium seat colours matter if the team are going to win.

It is 1996. Sir Alex Ferguson’s Manchester United are doing the unthinkable. At half time they are losing 3-0 to lowly Southampton. Then the team return to the pitch for the second half and they’ve changed their kit. No longer are they wearing their normal grey away kit but are in blue and white, and their performance improves (if not enough to claw back such a big lead). The match becomes infamous for that kit change: the genius gaffer blaming the team’s poor performance on their kit seemed silly to most. Just play better football if you want to win!

Jump forward to 2021, and Manchester United Manager Ole Gunnar Solskjaer, who originally joined United as a player in that same year, 1996, tells a press conference that the club are changing the stadium seats to improve the team’s performance!

Is this all a repeat of previously successful mind games to deflect from poor performances? Or superstition, dressed up as canny management, perhaps. Actually, no. Both managers were following the science.

Ferguson wasn’t just following some gut instinct: he had been employing a vision scientist, Professor Gail Stephenson, who had been brought into the club to help improve the players’ visual awareness, getting them to exercise the muscles in their eyes, not just their legs! She had pointed out to Ferguson that the grey kit would make it harder for the players to pick each other out quickly. The Southampton match was presumably the final straw that gave him the excuse to follow her advice.

She was very definitely right, and modern vision Artificial Intelligence technology agrees with her! Colours do make it easier or harder to notice things, and slow decision making in a way that matters on the pitch. 25 years ago the problem was grey kit merging into the grey background of the crowd. Now it is red shirts merging into the background of an empty stadium of red seats.

It is all about how our brain processes the visual world and the saliency of objects. Saliency is just how much an object stands out and that depends on how our brain processes information. Objects are much easier to pick out if they have high contrast, for example, like a red shirt on a black background.
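A very crude sketch can illustrate the saliency-as-contrast idea (nothing like a real vision model, and the grids are made up): score each cell of a brightness grid by how much it differs from the average of its neighbours. A bright “shirt” on a dark background then scores far higher than the same shirt on a similarly bright background.

```python
def saliency_map(grid):
    """Crude saliency: how much each cell's brightness differs from the
    average brightness of its neighbouring cells."""
    rows, cols = len(grid), len(grid[0])
    scores = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Collect the up-to-8 neighbouring brightness values.
            neighbours = [grid[r + dr][c + dc]
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                          if (dr or dc)
                          and 0 <= r + dr < rows and 0 <= c + dc < cols]
            scores[r][c] = abs(grid[r][c] - sum(neighbours) / len(neighbours))
    return scores

# A bright "shirt" (0.9) on a dark background (0.1)...
dark_stand = [[0.1] * 5 for _ in range(5)]
dark_stand[2][2] = 0.9
# ...versus the same shirt on a similarly bright background (0.8).
red_seats = [[0.8] * 5 for _ in range(5)]
red_seats[2][2] = 0.9
```

Against the dark background the shirt cell scores about 0.8; against the bright one only about 0.1, mirroring why grey kit against a grey crowd, or red shirts against red seats, are hard to pick out.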

Peter McOwan and Hamit Soyel at Queen Mary combined vision research and computer science, creating an Artificial Intelligence (AI) that sees like humans in the sense that it predicts what will and won’t stand out to us, doing it in real time (see DragonflyAI: I see what you see). They used the program to analyse images from that infamous football match before and after the kit change and showed that the AI agreed with Gail Stephenson and Alex Ferguson. The players really were much easier for their team mates to see in the second half (see the DragonflyAI version of the scenes below).

Details matter and science can help teams that want to win in all sorts of ways. That includes computer scientists and Artificial Intelligence. So if you want an edge over the opposition, hire an AI to analyse the stadium scene at your next match. Changing the colour of the seats really could make a difference.

Paul Curzon, Queen Mary University of London


DragonflyAI: I see what you see

What use is a computer that sees like a human? Can’t computers do better than us? Well, such a computer can predict what we will and will not see, and there is BIG money to be gained doing that!

The Hong Kong Skyline. Image public domain from wikipedia


Peter McOwan’s team at Queen Mary spent 10 years doing exploratory research understanding the way our brains really see the world, exploring illusions, inventing games to test the ideas, and creating a computer model to test their understanding. Ultimately they created a program that sees like a human. But what practical use is a program that mirrors the oddities of the way we see the world? Surely a computer can do better than us: noticing all the things that we miss or misunderstand? Well, for starters the research opens up exciting possibilities for new applications, especially for marketeers.

The Hong Kong Skyline as seen by DragonflyAI (processed public domain image from wikipedia)


A fruitful avenue to emerge is ‘visual analytics’ software: applications that predict what humans will and will not notice. Our world is full of competing demands, overloading us with information. All around us things vie to catch our attention, whether a shop window display, a road sign warning of danger or an advertising poster.

Imagine: a shop has a big new promotion designed to entice people in, but no more people enter than normal. No one notices the display; their attention is elsewhere. Another company runs a web ad campaign, but it has no effect, as people’s eyes are pulled elsewhere on the screen. A third company pays to have its products appear in a blockbuster film. Again, a waste of money: in surveys afterwards no one knew the products had been there. A town council puts up a new warning sign at a dangerous bend in the road, but the crashes continue. These are all situations where predicting in advance where people will look allows you to get it right. In the past this was done either by long and expensive user testing, perhaps using software that tracks where people look, or by having teams of ‘experts’ discuss what they think will happen. What if a program made the predictions in a fraction of a second? What if you could tweak things repeatedly until your important messages could not be missed?

Queen Mary’s Hamit Soyel turned the research models into a program called DragonflyAI, which does exactly that. The program analyses all kinds of imagery in real-time and predicts the places where people’s attention will, and will not, be drawn. It works whether the content is moving or not, and whether it is in the real world, completely virtual, or both. This then gives marketeers the power to predict and so influence human attention to see the things they want. The software quickly caught the attention of big, global companies like NBC Universal, GSK and Jaywing who now use the technology.


Studying Comedy with Computers

A comedian at the mic. Image by Rob Slaven from Pixabay

Smart speakers like Alexa might know a joke or two, but machines aren’t very good at sounding funny yet. Comedians, on the other hand, are experts at sounding both funny and exciting, even when they’ve told the same joke hundreds of times. Maybe speech technology could learn a thing or two from comedians… that is what my research is about.

To test a joke, stand-up comedians tell it to lots of different audiences and see how they react. If no-one laughs, they might change the words of the joke or the way they tell it. If we can learn how they make their adjustments, maybe technology can borrow their tricks. How much do comedians change as they write a new show? Does a comedian say the same joke the same way at every performance? The first step is to find out.

The first step is to record many performances of the same live show and find the parts that match from one show to the next. It was much faster to write a program to find the same jokes in different shows than to find them all myself. My code goes through all the words and sounds a comedian said in one live show and looks for matching chunks in their other shows. Words need to be in the exact same order to count as a match: “Why did the chicken cross the road” is very different to “Why did the road cross the chicken”! The process of looking through a sequence to find a match is called “subsequence matching”, because you’re looking through one sequence (the whole set of words and sounds in a show) for a smaller sequence (the “sub” in “subsequence”). If a subsequence (little sequence) is found in lots of shows, it means the comedian says that joke the same way at every show. Subsequence matching is a brand new way to study comedy and other types of speech that are repeated, like school lessons or a favourite campfire story.
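A minimal sketch of that matching idea (not the actual research code, and with made-up example sentences): compare every pair of starting positions in two word lists and record each maximal run of identical words that is at least a few words long.

```python
def matching_chunks(show_a, show_b, min_words=4):
    """Find every maximal run of at least `min_words` words that appears,
    in the exact same order, in both transcripts."""
    a, b = show_a.lower().split(), show_b.lower().split()
    matches = set()
    for i in range(len(a)):
        for j in range(len(b)):
            # Skip starts that sit inside a longer run found earlier.
            if i and j and a[i - 1] == b[j - 1]:
                continue
            # Extend the run while the words keep agreeing.
            length = 0
            while (i + length < len(a) and j + length < len(b)
                   and a[i + length] == b[j + length]):
                length += 1
            if length >= min_words:
                matches.add(" ".join(a[i:i + length]))
    return matches

# Hypothetical snippets from two recordings of the "same" show.
night_1 = "so why did the chicken cross the road you ask"
night_2 = "I always wonder why did the chicken cross the road"
shared = matching_chunks(night_1, night_2)
```

Here the only chunk long enough to count is the joke itself, “why did the chicken cross the road”; the surrounding improvised words don’t match and are ignored.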

By comparing how comedians told the same jokes in lots of different shows, I found patterns in the way they told them. Although comedy can sound very improvised, a big chunk of comedians’ speech (around 40%) was exactly the same in different shows. Sounds like “ummm” and “errr” might seem like mistakes but these hesitation sounds were part of some matches, so we know that they weren’t actually mistakes. Maybe “umm”s help comedians sound like they’re making up their jokes on the spot.

Varying how long pauses are could be an important part of making speech sound lively, too. A comedian told a joke more slowly and evenly when they were recorded on their own than when they had an audience. Comedians work very hard to prepare their jokes so they are funny to lots of different people. Computers might, therefore, be able to borrow the way comedians test their jokes and change them. For example, one comedian kept only five of their original jokes in their final show! New jokes were added little by little around the old jokes, rather than being added in big chunks.

If you want to run an experiment at home, try recording yourself telling the same joke to a few different people. How much practice did you need before you could say the joke all at once? What did you change, including little sounds like “umm”? What didn’t you change? How did the person you were telling the joke to change how you told it?

There’s lots more to learn from comedians and actors, like whether they change their voice and movement to keep different people’s attention. This research is the first to use computers to study how performers repeat and adjust what they say, but hopefully just the beginning. 

Now, have you heard the one about the …

Vanessa Pope, Queen Mary University of London


Every Breath You Take: Reclaim the Internet

Image by Adam from Pixabay

The Police’s 1983 hit “Every Breath You Take” is up there in the top 100 pop songs ever. It seems a charming love song, and some couples even treat it as “their” song, playing it for the first dance at their wedding. Lyrics like “Every single day… I’ll be watching you” might, in a loving relationship, be a good and positive thing. As the Police’s Sting has said, though, the lyrics are about exactly the opposite.

It is being sung by a man obsessed with his former girlfriend. He is singing a threat. It is about sinister stalking and surveillance, about nasty use of power by a deranged man over a woman who once loved him.

Reclaim the Internet

Back in 1983 the web barely existed, but what the song describes is now happening every day, with online stalking, trolling and other abuse a big problem. What starts in the virtual world, we now see, spills over into the real world too. This is one reason why we need to Reclaim the Internet and why online privacy is important. We must all call out online abuse. Prosecutors need to treat it seriously. Social media companies need to find ways to prevent abusive content being posted and to remove it quickly. They need easier ways for us to protect our privacy and to know it is protected. They need to be up for the challenge.

Reclaim your privacy

The lyrics fit our lives in another way too, about another kind of relationship. When we click those unreadable consent forms for using a new app, we give permission for the technology companies that we love so much to watch over us. They follow the song as a matter of course (in a loving way they say). They are “watching you” as you keep your gadgets on you “every single day”; “every night you stay” online you are recorded along with anyone you are with online; they watch “every move you make” (physically with location aware devices and virtually, noting every click, every site visited, everything you are interested in they know from your searches); “every step you take” (recorded by your fitness tracker); and “every breath you take” (by your healthcare app); “every bond you break” is logged (as you unlike friends and as you leave websites never to go back); “every game you play” (of course), “every word you say” (everything you type is noted, but the likes of Alexa also record every sound too, shipping your words off to be processed by distant company servers). They really are watching you.

Let’s hope the companies really are loving and don’t turn out to have an ugly underside, changing personality and becoming abusive once they have us snared. Remember their actual aim is to make money for shareholders. They don’t actually love us back. We may fall out of love with them, but by then they will already know everything about us, and will still be watching every move we make. Perhaps you should not be giving up your privacy so freely.

You belong to me?

We probably can’t break our love affair, anyway. We’ve already sold them our souls (for nothing much at all). As the lyrics say: “You belong to me.”


Why would you accept inefficiency?

Three British Airways planes flying close together
Image by Angela from Pixabay

In May 2017, British Airways’ IT systems had a meltdown. Someone mistakenly disconnected the power for a short time. The fleet was grounded and tens of thousands of passengers were left stranded for days. One suggestion was that it was due to “cost cutting”. Willie Walsh, the head of BA’s parent group, came out fighting, defending the idea of doing things cheaply: “You talk about it as cost-cutting, I talk about it as efficiency … The idea that you would accept inefficiency – I just don’t get it.”

The fact that many business leaders don’t get it may be exactly the problem. Doing things more cheaply than the competition is an idea that is at the core of capitalism. It is often taken as a given. But, is it really always true?

The best and only the best

Computer Scientists actually use the word “efficiency” in a subtly different way. When they talk about a program or algorithm being efficient, they do not mean that it was cheap. They mean it did exactly the same job, but faster or with less memory. This is one of the really creative areas of computer science. Can you come up with an algorithm that does exactly the same thing but in fewer steps?

The business version of efficiency would be fine if it had the same underlying principle. Do it cheaper, yes, but only if it really does do exactly the same thing in all circumstances. To company bosses, however, the trade-off can become cutting costs at all costs. ‘Waste’ is anything you think no one will notice. You accept the 1 in a million chance of it not working at all, just as with the BA meltdown, taking the hit (or rather letting your passengers take it) because you think you will make more money overall as a result.

Even with algorithms we do accept inefficiency though. Engineering is often about trade-offs. Sometimes, you will accept inefficiency in the use of memory because it gives a way to get a faster algorithm. Sometimes you accept a slower algorithm because it is just easier to be certain your code really does do the right thing. Sometimes slow is good enough. Sometimes it is the bigger picture that matters. The fastest algorithms for searching for information require sorted data. That is why a dictionary is in alphabetical order. Finding the word you want is quick – you don’t have to check every word in turn to find the one you want. However, if you were only ever going to look for a single thing in a data source, you wouldn’t sort it first. You would use an inefficient search algorithm, because overall that would be faster than sorting and then searching once. Efficiency can be subtle.
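The dictionary example can be made concrete. Below is a sketch of the two search algorithms involved: a linear search that checks every item in turn, and a binary search that halves the range at every step but needs sorted data. For a single lookup, the linear search over unsorted data wins overall, because sorting first costs more than it saves.

```python
def linear_search(items, target):
    """Check every item in turn: no preparation needed, O(n) per lookup."""
    for position, item in enumerate(items):
        if item == target:
            return position
    return -1

def binary_search(sorted_items, target):
    """Halve the search range each step: O(log n), but only on sorted data."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1       # target must be in the upper half
        else:
            hi = mid - 1       # target must be in the lower half
    return -1

words = ["zebra", "apple", "mango", "kiwi"]   # an unsorted "data source"
shelf = sorted(words)                         # pay the sorting cost once
```

If you will only ever look up one word, `linear_search(words, ...)` is the better deal; look up thousands of words and the one-off sort followed by repeated `binary_search(shelf, ...)` calls quickly pays for itself.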

Inefficiently safe

There are actually even more powerful reasons for demanding inefficiency. In the area of safety-critical systems, computer scientists build in redundancy on purpose. When the consequence of the computer not working is that lives are lost, we definitely want inefficiency, as long as it is well-engineered inefficiency. Dependability and safety matter more.

An algorithm is a mathematical object. If it works, it always works. However, programs operate in the real world where things can go wrong. Hardware fails, clocks drift, criminals hack, technicians do silly things by accident (like unplug the power). Systems that matter have to be resilient. They have to cope with the unexpected, with the never before seen. One way that is achieved is by designing in inefficiency. For example, if your single computer goes down, you are stuffed. If instead two computers run the same program in parallel, then if one goes down the other can take over. Ahh, but how do you know which is wrong when they disagree? Be even more ‘inefficient’ and have three computers ‘wastefully’ doing the same thing. Then, if one goes rogue, the three vote on who is at fault … cut them out and carry on.
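The three-computer voting idea can be sketched like this (a toy illustration, not a real fault-tolerant system, with invented replica functions): run three replicas of the same computation, take the majority answer, and report which replica disagreed.

```python
from collections import Counter

def triple_vote(replicas, value):
    """Run three replicas of the same computation and take the majority
    answer, masking a single faulty replica."""
    answers = [replica(value) for replica in replicas]
    winner, votes = Counter(answers).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: more than one replica has failed")
    # Identify any replica whose answer disagrees with the majority.
    at_fault = [i for i, answer in enumerate(answers) if answer != winner]
    return winner, at_fault

# Two healthy replicas and one that has gone rogue (all hypothetical).
def replica_a(x): return x * 2
def replica_b(x): return x + x
def replica_c(x): return x * 2 + 1   # faulty

answer, faulty = triple_vote([replica_a, replica_b, replica_c], 21)
```

The system still produces the right answer and, as a bonus, knows exactly which replica to cut out and carry on without.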

Computer Scientists have developed many ingenious ways to build in guarantees of safety even when the world around conspires against us. To a cost cutter these extras may seem like inefficiency but the inefficiency is there, apparently unused most of the time, waiting to step in and avert disaster, waiting to save lives. Personally, I would accept inefficiency. I hope, for the sake of saved lives, society would too.

by Paul Curzon, Queen Mary University of London


The Cyber-Security Honeypot

To catch criminals, whether old-fashioned ones or cybercriminals, you need to understand the criminal mind. You need to understand how they think and how they work. Jeremiah Onaolapo, as a PhD student at UCL, has been creating cyber-honeypots and finding out how cybercriminals really operate.

Hackers share user IDs and passwords they have stolen on both open and hidden websites. But what do the criminals who then access those accounts do once inside? If your webmail account has been compromised, what will happen? Will you even know you’ve been hacked?

Looking after passwords is important. If someone hacks your account there is probably lots of information you wouldn’t want criminals to find: information they could use, such as other passwords, bank or shopping site details, personal images, links to cloud sites with yet more information about you … By making use of the information they discover, they could cause havoc in your life. But what are cybercriminals most interested in? Do they use hacked accounts just to send spam or phish for more details? Do they search for bank details, launch attacks elsewhere … or something completely different we aren’t aware of? How do you even start to study the behaviour of criminals without becoming one? Jeremiah knew how hard it is for researchers to study issues like this, so he created some tools to help, which others can use too.

His system is based on the honeypot. Police and spies have used various forms of honeytraps, stings and baits successfully for a long time, and the idea is used in computing security too: you set up a situation so attractive that people can’t resist falling into your trap. Jeremiah’s version involved a set of webmail accounts. His accounts aren’t just normal accounts, though. They are all fake, and have software built in that secretly records the activities of anyone accessing the account. They save any emails drafted or sent, details of the messages read, the locations the hackers come in from, and so on. The accounts look real, however. They are full of real messages, sent and received, but with all personal details, such as names and passwords or bank account details, fictionalised. New emails sent from them aren’t actually delivered but just go into a sinkhole server, where they are stored for further study. This means that no successful criminal activity can happen from the accounts. A lot can be learnt about any cybercriminals, though!

Experiments

In an early experiment Jeremiah created 100 such accounts and then leaked their passwords and user IDs in different ways: on hacker forums and web pages. Over 7 months hundreds of hackers fell into the trap, accessing the accounts from 29 countries. What emerged were four main kinds of behaviour, not necessarily distinct: the curious, the spammers, the gold diggers and the hijackers. The curious seemed to just be intrigued to be in someone else’s account, but didn’t obviously do anything bad once there. Spammers just used the account to send vast amounts of spam email. Gold diggers went looking for more information, like bank accounts or other account details: they were after personal information they could make money from, and also tried to use each account as a stepping stone to others. Finally, hijackers took over accounts, changing the passwords so the owner couldn’t get in themselves.

The accounts were used for all sorts of purposes, including attempts to buy credit card details and, in one extreme case, an attempt to blackmail someone.

Similar behaviours were seen in a second experiment, where the account details were released only on hidden websites that hackers use to share account details. In only a month this set of accounts was accessed over a thousand times from more than 50 countries. As might be expected, these people were more sophisticated in what they did. More were careful to clear up any evidence that they had been there (not realising everything was being recorded separately). They wanted to keep using the accounts for as long as possible, so tried to make sure no one knew an account was compromised. They also seemed better at covering the tracks of where they actually were.

The Good Samaritan

Not everyone seemed to be there to do bad things, though. One person stood out. They seemed to be entering the accounts to warn people, sending messages from inside each account to everyone in the contact list telling them that the account had been hacked. Presumably those contacts would then alert the real account owner. There are still Good Samaritans!

Take care

One thing this shows is how important it is to look after your account details: make sure no one knows or can guess them. Don’t enter details into a web page unless you are really sure you are somewhere secure, both physically and virtually, and never tell your passwords to anyone else. Also change your passwords regularly, so that if they are compromised without you realising, they quickly become useless.

Of course, if you are a cybercriminal, you had better beware as that tempting account might just be a honeypot and you might just be the rat in the maze.

Paul Curzon, Queen Mary University of London based on a talk by Jeremiah Onaolapo, UCL



This blog is funded by EPSRC on research agreement EP/W033615/1.


The very first computers

Victorian engineer Charles Babbage designed, though never built, the first mechanical computer. Computers had actually existed for a long time before he had his idea, though. British superiority at sea, and ultimately the Empire, already depended on them. They were used to calculate the books of numbers that British sailors relied on to navigate the globe. The original meaning of the word ‘computer’ was a person who did such calculations. The first computers were humans.

Globe with continents in binary
Image by Gordon Johnson from Pixabay (colour by CS4FN)

Babbage became interested in the idea of creating a mechanical computer in part because of computing work he did himself, calculating accurate versions of the numbers needed for a special book: ‘The Nautical Almanac’. It was a book of astronomical tables, the result of an idea of the Astronomer Royal, Nevil Maskelyne, and it was the earliest reliable way ships had to work out their longitudinal (i.e., east-west) position at sea. Without such tables, to cross the Atlantic you just set off and kept going until you hit land, just as Columbus did. The Nautical Almanac gave a way to work out how far west you were, all the time.

Maskelyne’s idea was based on the fact that the angle from the moon to a person on the Earth and back to a star was the same at the same moment wherever on Earth that person was looking from (as long as they could see both the star and the moon at once). This angle was called the lunar distance.

The lunar distance could be used to work out where you were because its value changed as time passed, but in a predictable way, following Newton’s laws of motion applied to the moon and planets. For a given place, Greenwich say, you could calculate what the lunar distance to different stars would be at any time in the future. This is essentially what the Almanac recorded.

Now, the time changes as you move east or west: dawn arrives later the further west you go, for example, because as the Earth rotates the sun comes into view at different times round the planet. That is why we have different time zones: the time in the USA is hours behind that in Britain, which is itself behind that in China. Suppose you know your local time, which you can check regularly from the position of the sun or moon, and you measure the lunar distance. You can look up in the Almanac the time in Greenwich at which that lunar distance occurs, and that gives you the current time in Greenwich. The greater the difference between that time and your local time, the further west (or east) you are. It is because Greenwich was used as the fixed point for working out the lunar distances that we now use Greenwich Mean Time as UK time. The time in Greenwich was the one that mattered!
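The final step, from a time difference to a position, is simple arithmetic: the Earth turns 360 degrees in 24 hours, so every hour your local time lags behind Greenwich puts you 15 degrees further west. A tiny sketch of the calculation (the function name is ours, just for illustration):

```python
def longitude_from_times(greenwich_hours, local_hours):
    """Longitude in degrees west of Greenwich.
    The Earth rotates 360 degrees in 24 hours, i.e. 15 degrees per hour,
    so each hour of time difference corresponds to 15 degrees of longitude."""
    return (greenwich_hours - local_hours) * 15.0

# It is local noon, but the Almanac look-up says it is 3 pm in Greenwich,
# so we are 3 hours behind Greenwich: 3 x 15 = 45 degrees west.
print(longitude_from_times(15.0, 12.0))  # 45.0
```

A sailor mid-Atlantic doing exactly this sum, with times read from the sun and the Almanac, was doing navigation by arithmetic: which is why the accuracy of the pre-computed tables mattered so much.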

This was all wonderful. Sailors just had to take astronomical readings, do some fairly simple calculations and a look-up in the Almanac to work out where they were. However, there was a big snag: it relied on all those numbers in the tables having been accurately calculated in advance. That took some serious computing power. Maskelyne therefore employed teams of human ‘computers’ across the country, paying them to do the calculations for him. These men and women were the first industrial computers.

Before pocket calculators were invented in the 1970s, the easiest way to do calculations, whether big multiplications, divisions, powers or square roots, was to use logarithms. The logarithm of a number is, roughly, the number of times you can divide it by 10 before you get to 1. Complicated calculations can be turned into simple ones using logarithms. The equivalent of the pocket calculator was therefore a book containing a table of logarithms. Log tables were the basis of all other calculations, including maritime ones. Babbage himself became a human computer, doing calculations for the Nautical Almanac. He calculated the most accurate book of log tables then available for the British Admiralty.
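The trick that made log tables so powerful is that adding logarithms is the same as multiplying the original numbers. A hard multiplication becomes two table look-ups, one addition, and one reverse look-up (the ‘antilog’). Here is the same trick using Python’s built-in log functions in place of the printed tables:

```python
import math

# To multiply two numbers, add their logarithms, then take the antilog.
a, b = 3456.0, 789.0

log_sum = math.log10(a) + math.log10(b)   # two 'table look-ups' and one addition
product = 10 ** log_sum                   # one reverse look-up (the antilog)

print(round(product))   # 2726784, the same as 3456 * 789
```

Division works the same way but with subtraction, and square roots become a simple halving of the logarithm, which is why one well-calculated book could stand in for so much arithmetic.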

The mechanical computer came about because Babbage was also interested in finding the most profitable ways to mechanise work in factories. He realised a machine could do more than weave cloth: it might also do calculations. More to the point, such a machine would do them with guaranteed accuracy, unlike people. He therefore spent his life designing and then trying to build such a machine. It was a revolutionary idea, and while his design worked, the level of precision engineering needed was beyond what could then be done. It was another hundred years before the first electronic computer was invented, again to replace human computers working in the national interest… but this time at Bletchley Park, doing the calculations needed to crack German military codes and so help win World War II.

Related Magazines …

Cover of Issue 20 of CS4FN, celebrating Ada Lovelace
