Smart tablets (to swallow)

The first ever smart pill has been approved for use. It’s like any other pill except that this one has a sensor inside it and it comes with a tracking device patch you wear to make sure you take it.

A big problem with medicine is remembering to take it. It’s common for people to be unsure whether they did take today’s tablet or not. Getting it wrong regularly can make a difference to how quickly you recover from illness. Many medicines are also very, very expensive. Mass-produced electronics, on the other hand, are cheap. So could the smart pill be a new, potentially useful, solution?

The pill contains a sensor that is triggered when the pill dissolves and the sensor meets your stomach acids. When it does, the patch you wear detects its signal and sends a message to your phone to record the fact. The specially made sensor itself is harmless and safe to swallow. Your phone’s app can then, if you allow it, tell your doctor so that they know whether you are taking the pills correctly or not.
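The chain of events can be sketched as a toy program. All the class and method names below are invented for illustration; a real smart-pill system would of course be far more involved:

```python
# Toy sketch of the smart-pill event chain: pill dissolves -> patch
# detects the signal -> phone app records it (and may tell the doctor).
import datetime

class PhoneApp:
    """Records doses; optionally shares them with the doctor."""
    def __init__(self, share_with_doctor=False):
        self.doses = []
        self.share_with_doctor = share_with_doctor

    def record_dose(self, when):
        self.doses.append(when)
        if self.share_with_doctor:
            print("Telling the doctor: dose taken at", when)

class Patch:
    """Worn on the skin: listens for the pill's signal."""
    def __init__(self, app):
        self.app = app

    def detect_signal(self):
        self.app.record_dose(datetime.datetime.now())

app = PhoneApp(share_with_doctor=True)
patch = Patch(app)
patch.detect_signal()   # the pill dissolves, triggering the sensor
```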

Smart pills could also be invaluable for medical researchers. In medical trials of new drugs, whether patients took the pills correctly is important to know but difficult to check. If a large number of patients don’t, that could be a reason why the drugs appeared less effective than expected. Smart pills could allow researchers to better work out how regularly a drug needs to be taken to still work.

More futuristically still, such pills may form part of a future health artificial intelligence system that is personalised to you. It would collect data about you and your condition from a wide range of sensors recording anything relevant: whether you’ve taken your pills, how active you’ve been, your heart rate, your blood pressure, and so on. In fact, anything useful that can be sensed. Then, using big data techniques to crunch all that data about you, it would tailor your treatment. For example, such a system may be better able to work out how a drug affects you personally, and so be better able to match doses to your body. It may be able to give you personalised advice about what to eat and drink, even predicting when your condition could be about to get better or worse. This could make a massive difference to life for those with long-term illnesses like rheumatoid arthritis or multiple sclerosis, where symptoms flare up and die away unpredictably. It could also help the doctors who currently must find the right drug and dose for each person by trial and error.

Computing in future could be looking after your health personally, as long as you are willing to wear it both inside and out.

– Paul Curzon, Queen Mary University of London, Spring 2021

Download Issue 27 of the cs4fn magazine on Smart Health here.

This post and issue 27 of the cs4fn magazine have been funded by EPSRC as part of the PAMBAYESIAN project.

i-pickpocket

Contactless payments seem magical. But don’t get caught out by someone magically scanning your card without you knowing. Almost £7 million was stolen by contactless card fraud in 2016 alone…

Victorian Hi-Tech

Contactless cards talk to the scanner by electromagnetic induction, discovered by Michael Faraday back in 1831. Changes in the current in a coil of wire, which for a contactless card is just an antenna in the form of a loop, create a changing magnetic field. If a loop antenna on another device is placed inside that magnetic field, then a voltage is created in its circuit. As the current in the first circuit changes, the current in the other circuit copies it, and information is passed from one to the other. This works up to about 10cm away.

Credit cards in a back pocket.
Image by TheDigitalWay from Pixabay 

Picking pockets at a distance

For small amounts, contactless cards don’t require authentication, like a PIN, to prove who is using them. Anyone with the card and a reader can charge small amounts to it. Worse, if someone gets a reader within 10cm of the bag holding your card, they could even take money from it without your knowledge. That might seem unlikely, but traditional pickpockets are easily capable of taking your wallet without you noticing, so just getting close isn’t hard by comparison! For that kind of fraud the crook has to have a legitimate reader to charge money. Even without one, though, they can read the number and expiry date from the card and use them to make online purchases.

A man in the middle

Security researchers have also shown that ‘relay’ attacks are possible, where a fake device passes messages between the shop and a card that is somewhere else. An attacker places a relay device near to someone’s actual card. It communicates with a fake card an accomplice is using in the shop. The shop’s reader queries the fake card, which talks to its paired device. The paired device talks to the real card as though it were the one in the shop. It passes the answers from the real card back to the fake card, which relays them on to the shop. Real reader and card get exactly the messages they would if the card were in the shop, just via the fake devices in between. Both shop and card think they are talking to each other even though they are a long way apart, and the owner of the real card knows nothing about it.
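The relay can be sketched as a toy program. Everything below (class names, messages) is invented for illustration; real cards use cryptographic payment protocols, and this sketch only shows the shape of the attack:

```python
# Toy simulation of a relay attack. The shop's reader challenges what
# it thinks is a card; the answer actually comes from the victim's
# real card, far away, via two relaying devices.

class RealCard:
    """The victim's card, sitting in their pocket."""
    def respond(self, challenge):
        # A real card would compute a cryptographic response;
        # here we just tag the challenge so we can see the flow.
        return f"signed({challenge})"

class RelayDevice:
    """Held near the victim: talks to the real card by radio."""
    def __init__(self, card):
        self.card = card
    def forward(self, challenge):
        return self.card.respond(challenge)

class FakeCard:
    """Presented at the shop: relays everything to its partner."""
    def __init__(self, relay):
        self.relay = relay
    def respond(self, challenge):
        return self.relay.forward(challenge)

# The shop's reader gets exactly the answer the real card would give.
real = RealCard()
fake = FakeCard(RelayDevice(real))
print(fake.respond("pay 5 pounds"))
```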

Block the field

How do you guard against contactless attacks? Never hand over your card, always ask for a receipt and check your statements. You can also keep your card in a blocking sleeve: a metal case that protects the card from electromagnetic fields (even a homemade sleeve made from tin foil should work). Then at least you force the pickpockets back to the Victorian, Artful Dodger style method of actually stealing your wallet.

Of course Faraday was a Victorian, so a contactless attack is actually a Victorian way of stealing too!

– Jane Waite and Paul Curzon, Queen Mary University of London

AI Detecting the Scribes of the Dead Sea Scrolls

Computer science and artificial intelligence have provided a new way to do science: it was in fact one of the earliest uses of the computer. They are now giving scholars new ways to do research in other disciplines, such as ancient history, too. Artificial Intelligence has been used in a novel way to help understand how the Dead Sea Scrolls were written, and it turns out scribes in ancient Judea worked in teams.

The Dead Sea Scrolls are a collection of almost a thousand ancient documents, written around two thousand years ago, that were found in caves near the Dead Sea. The collection includes the oldest known written version of the Bible.

The cave where most of the Dead Sea Scrolls were found.

Researchers from the University of Groningen (Mladen Popović, Maruf Dhali and Lambert Schomaker) used artificial intelligence techniques to analyse a digitised version of the longest scroll in the collection, known as the Great Isaiah Scroll. They picked one letter, aleph, that appears thousands of times through the document, and analysed it in detail.

Two kinds of artificial intelligence programs were used. The first, feature extraction, based on computer vision and image processing, was needed to recognise features in the images. At one level these features are the actual characters themselves, but more subtly the aim was for them to correspond to ink traces produced by the actual muscle movements of the scribes.

The second was machine learning. Machine learning programs are good at spotting patterns in data: grouping the data into things that are similar and things that are different. A typical textbook example would be giving the program images of cats and of dogs. It would spot the pattern of features that corresponds to dogs, and the different pattern of features that corresponds to cats, and group each image into one or the other.
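The grouping idea can be sketched with a toy clustering algorithm. This simple two-group version of k-means (for illustration only: the actual study used far richer features and more sophisticated methods) splits made-up ‘stroke thickness’ measurements into two clusters:

```python
# Toy 2-means clustering on made-up stroke-thickness numbers.
# Assumes the data really does fall into two groups.

def two_means(values, steps=10):
    a, b = min(values), max(values)          # initial cluster centres
    for _ in range(steps):
        group_a = [v for v in values if abs(v - a) <= abs(v - b)]
        group_b = [v for v in values if abs(v - a) > abs(v - b)]
        a = sum(group_a) / len(group_a)      # move centres to the mean
        b = sum(group_b) / len(group_b)
    return group_a, group_b

# Two subtly different 'scribes': thicknesses around 1.0 and around 1.2
strokes = [0.98, 1.02, 1.01, 0.99, 1.19, 1.22, 1.21, 1.18]
first, second = two_means(strokes)
print(first)    # the letters written one way
print(second)   # the letters written the slightly different way
```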

Here the data was all those alephs, or more specifically the features extracted from them. Essentially the aim was to find patterns based on the muscle movements of the original scribe of each letter. To the human eye the writing throughout the document looks very, very uniform, suggesting a single scribe wrote the whole document. If that were the case, only one pattern would be found, containing all the letters, with no clear way to split them. Despite this, the artificial intelligence evidence suggests there were actually two scribes involved. There were two patterns.

The research team found, by analysing the way the letters were written, that there were two clear groupings of letters. One group were written in one way and the other in a slightly different way. There were very subtle differences in the way strokes were written, such as in their thickness and the positions of the connections between strokes. This could just be down to variations in the way a single writer wrote letters at different times. However, the differences were not random, but very clearly split at a point halfway through the scroll. This suggests there were two writers who each worked on the different parts. Because the characters were otherwise so uniform, those two scribes must have been making an effort to carefully mirror each other’s writing style so the letters looked the same to the naked eye.

The research team have not only found out something interesting about the Dead Sea Scrolls, but also demonstrated a new way to study ancient handwriting. With a few exceptions, the scribes who wrote the ancient documents that have survived to the modern day, like the Dead Sea Scrolls, are generally anonymous. Thanks to leading-edge Computer Science, we have a new way to find out more about them.

Explore the digitised version of the Dead Sea Scrolls yourself at www.deadseascrolls.org.il

– Paul Curzon, Queen Mary University of London

Losing the match? Follow the science. Change the kit!

Artificial Intelligence software has shown that two different Manchester United gaffers got it right in believing that kit and stadium seat colours matter if the team are going to win.

It is 1996. Sir Alex Ferguson’s Manchester United are doing the unthinkable. At half time they are losing 3-0 to lowly Southampton. Then the team return to the pitch for the second half and they’ve changed their kit. No longer are they wearing their normal grey away kit but are in blue and white, and their performance improves (if not enough to claw back such a big deficit). The match becomes infamous for that kit change: the genius gaffer blaming the team’s poor performance on their kit seemed silly to most. Just play better football if you want to win!

Jump forward to 2021, and Manchester United Manager Ole Gunnar Solskjaer, who originally joined United as a player in that same year, 1996, tells a press conference that the club are changing the stadium seats to improve the team’s performance!

Is this all a repeat of previously successful mind games to deflect from poor performances? Or superstition, dressed up as canny management, perhaps. Actually, no. Both managers were following the science.

Ferguson wasn’t just following some gut instinct: he had been employing a vision scientist, Professor Gail Stephenson, who had been brought into the club to help improve the players’ visual awareness, getting them to exercise the muscles in their eyes, not just their legs! She had pointed out to Ferguson that the grey kit would make it harder for the players to pick each other out quickly. The Southampton match was presumably the final straw that gave him the excuse to follow her advice.

She was very definitely right, and modern vision Artificial Intelligence technology agrees with her! Colours do make it easier or harder to notice things, and can slow decision making in a way that matters on the pitch. 25 years ago the problem was the grey kit merging into the grey background of the crowd. Now it is red shirts merging into the background of an empty stadium of red seats.

It is all about how our brain processes the visual world and the saliency of objects. Saliency is just how much an object stands out and that depends on how our brain processes information. Objects are much easier to pick out if they have high contrast, for example, like a red shirt on a black background.
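The idea of saliency as ‘standing out from the surroundings’ can be sketched in code. This toy example (nothing like DragonflyAI’s real model, which predicts human attention far more subtly) scores each pixel of a tiny greyscale image by how different it is from its neighbours:

```python
# Crude saliency-as-contrast sketch on a tiny greyscale image
# (0 = black, 9 = white): things that differ from their
# surroundings stand out; things that blend in do not.

image = [
    [5, 5, 5, 5],
    [5, 5, 9, 5],   # one bright pixel on a mid-grey background
    [5, 5, 5, 5],
]

def saliency(img, row, col):
    """Score a pixel by its average difference from its neighbours."""
    diffs = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if (dr, dc) != (0, 0) and 0 <= r < len(img) and 0 <= c < len(img[0]):
                diffs.append(abs(img[row][col] - img[r][c]))
    return sum(diffs) / len(diffs)

# The bright pixel scores far higher than its uniform surroundings.
print(saliency(image, 1, 2))   # high: stands out
print(saliency(image, 0, 0))   # low: blends in
```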

Peter McOwan and Hamit Soyel at Queen Mary combined vision research and computer science, creating an Artificial Intelligence (AI) that sees like humans in the sense that it predicts what will and won’t stand out to us, doing it in real time (see DragonflyAI: I see what you see). They used the program to analyse images from that infamous football match before and after the kit change and showed that the AI agreed with Gail Stephenson and Alex Ferguson. The players really were much easier for their team mates to see in the second half (see the DragonflyAI version of the scenes below).

Dragonfly highlights areas of a scene that are more salient to humans so easier to notice. Red areas stand out the most. In the left image when wearing the grey kit, Ryan Giggs merges into the background. He is highly salient (red) in the right image where he is in the blue and white kit.

Details matter and science can help teams that want to win in all sorts of ways. That includes computer scientists and Artificial Intelligence. So if you want an edge over the opposition, hire an AI to analyse the stadium scene at your next match. Changing the colour of the seats really could make a difference.

Find out more about DragonflyAI: https://dragonflyai.co/ [EXTERNAL]

– Paul Curzon, Queen Mary University of London

DragonflyAI: I see what you see

What use is a computer that sees like a human? Can’t computers do better than us? Well, such a computer can predict what we will and will not see, and there is BIG money to be gained doing that!

The Hong Kong Skyline


Peter McOwan’s team at Queen Mary spent 10 years doing exploratory research understanding the way our brains really see the world, exploring illusions, inventing games to test the ideas, and creating a computer model to test their understanding. Ultimately they created a program that sees like a human. But what practical use is a program that mirrors the oddities of the way we see the world? Surely a computer can do better than us: noticing all the things that we miss or misunderstand? Well, for starters the research opens up exciting possibilities for new applications, especially for marketeers.

The Hong Kong Skyline as seen by DragonflyAI


A fruitful avenue to emerge is ‘visual analytics’ software: applications that predict what humans will and will not notice. Our world is full of competing demands, overloading us with information. All around us things vie to catch our attention, whether a shop window display, a road sign warning of danger or an advertising poster.

Imagine: a shop has a big new promotion designed to entice people in, but no more people enter than normal. No-one notices the display. Their attention is elsewhere. Another company runs a web ad campaign, but it has no effect, as people’s eyes are pulled elsewhere on the screen. A third company pays to have its products appear in a blockbuster film. Again, a waste of money: in surveys afterwards no one knew the products had been there. A town council puts up a new warning sign at a dangerous bend in the road but the crashes continue. These are all situations where predicting in advance where people will look allows you to get it right. In the past this was done either by long and expensive user testing, perhaps using software that tracks where people look, or by having teams of ‘experts’ discuss what they think will happen. What if a program made the predictions in a fraction of a second beforehand? What if you could tweak things repeatedly until your important messages could not be missed?

Queen Mary’s Hamit Soyel turned the research models into a program called DragonflyAI, which does exactly that. The program analyses all kinds of imagery in real-time and predicts the places where people’s attention will, and will not, be drawn. It works whether the content is moving or not, and whether it is in the real world, completely virtual, or both. This then gives marketeers the power to predict and so influence human attention to see the things they want. The software quickly caught the attention of big, global companies like NBC Universal, GSK and Jaywing who now use the technology.

Find out more about DragonflyAI: https://dragonflyai.co/ [EXTERNAL]

Studying Comedy with Computers

by Vanessa Pope, Queen Mary University of London

Smart speakers like Alexa might know a joke or two, but machines aren’t very good at sounding funny yet. Comedians, on the other hand, are experts at sounding both funny and exciting, even when they’ve told the same joke hundreds of times. Maybe speech technology could learn a thing or two from comedians… that is what my research is about.

Image by Rob Slaven from Pixabay 

To test a joke, stand-up comedians tell it to lots of different audiences and see how they react. If no-one laughs, they might change the words of the joke or the way they tell it. If we can learn how they make their adjustments, maybe technology can borrow their tricks. How much do comedians change as they write a new show? Does a comedian say the same joke the same way at every performance? The first step is to find out.

The first step, then, is to record many performances of a comedian’s live show and find the parts that match from one show to the next. It was much faster to write a program to find the same jokes in different shows than to find them all myself. My code goes through all the words and sounds a comedian said in one live show and looks for matching chunks in their other shows. Words need to be in exactly the same order to be a match: “Why did the chicken cross the road” is very different to “Why did the road cross the chicken”! The process of looking through a sequence to find a match is called “subsequence matching”, because you’re looking through one sequence (the whole set of words and sounds in a show) for a smaller sequence (the “sub” in “subsequence”). If a subsequence (little sequence) is found in lots of shows, it means the comedian says that joke the same way at every show. Subsequence matching is a brand new way to study comedy and other types of speech that are repeated, like school lessons or a favourite campfire story.
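A toy version of this matching idea can be written in a few lines of Python. This is just an illustration of subsequence matching (the example jokes and variable names are made up, and it is not the actual research code, which also handled sounds like “umm”):

```python
# Toy subsequence matching: find the longest run of words that two
# performances share. Each show is a list of words in the order spoken.

def longest_match(show_a, show_b):
    best = []
    # Try every possible starting point in each show and see how far
    # the two word sequences keep matching from there.
    for i in range(len(show_a)):
        for j in range(len(show_b)):
            k = 0
            while (i + k < len(show_a) and j + k < len(show_b)
                   and show_a[i + k] == show_b[j + k]):
                k += 1
            if k > len(best):
                best = show_a[i:i + k]
    return best

night1 = "so umm why did the chicken cross the road".split()
night2 = "right why did the chicken cross the road then".split()
print(" ".join(longest_match(night1, night2)))
# -> why did the chicken cross the road
```

Python’s standard library offers a faster, ready-made version of this kind of matching in `difflib.SequenceMatcher`, which real projects would likely reach for instead of the nested loops above.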

By comparing how comedians told the same jokes in lots of different shows, I found patterns in the way they told them. Although comedy can sound very improvised, a big chunk of comedians’ speech (around 40%) was exactly the same in different shows. Sounds like “ummm” and “errr” might seem like mistakes but these hesitation sounds were part of some matches, so we know that they weren’t actually mistakes. Maybe “umm”s help comedians sound like they’re making up their jokes on the spot.

Varying how long pauses are could be an important part of making speech sound lively, too. A comedian told a joke more slowly and evenly when they were recorded on their own than when they had an audience. Comedians work very hard to prepare their jokes so they are funny to lots of different people. Computers might, therefore, be able to borrow the way comedians test their jokes and change them. For example, one comedian kept only five of their original jokes in their final show! New jokes were added little by little around the old jokes, rather than being added in big chunks.

If you want to run an experiment at home, try recording yourself telling the same joke to a few different people. How much practice did you need before you could say the joke all at once? What did you change, including little sounds like “umm”? What didn’t you change? How did the person you were telling the joke to change how you told it?

There’s lots more to learn from comedians and actors, like whether they change their voice and movement to keep different people’s attention. This research is the first to use computers to study how performers repeat and adjust what they say, but hopefully it is just the beginning.

Now, have you heard the one about the …

For more information about Vanessa’s work visit https://vanessapope.co.uk/ [EXTERNAL]