The trouble with healthcare is that it’s becoming ever more expensive: new drugs, new treatments, more patients, the ever-increasing time needed with experts. Smart healthcare might be able to help.
We want everyone to get the care they need, but the costs keep growing. Perhaps computer scientists can help? Research groups worldwide are exploring ways to create computing technology to improve healthcare: intelligent programs that can support patients at home, helping to monitor them and to make decisions about what to do.
For example, say you are on powerful drugs to manage a long-term illness: should you have the vaccine? Can you have a baby? Is a flare-up of your disease about to hit you, and how can you avoid it? Is that new ache a side effect of the drugs? Do you need to change medicines? Do you need to see a specialist?
If smart programs can help support patients then the doctors and nurses can spend more time with those who actually need it, hospitals can save on expensive drugs that aren’t working, and patients can have better lives. But what kind of technology can deliver this sort of service?
In the current issue of cs4fn magazine, we explore one particular approach, being developed by the EPSRC-funded PAMBAYESIAN project at Queen Mary University of London and based on an area of computing called Bayesian networks, that might just be the answer. We also look at other ways computers can help deliver better healthcare for all, and at other uses of Bayesian networks.
‘How do robots eat pizza?’… ‘One byte at a time’. Computational Humour is real, but it’s not jokes about computers, it’s computers telling their own jokes.
Computers can create art, stories, slogans and even magic tricks. But can computers perform themselves? Can robots invent their own jokes? Can they tell jokes?
Combining Artificial Intelligence, computational linguistics and humour studies (yes, you can study how to be funny!), a team of Scottish researchers made an early attempt at computerised stand-up comedy. They came up with STANDUP (System To Augment Non-speakers' Dialogue Using Puns): a program that generates riddles for kids with language difficulties. STANDUP has a dictionary and a joke-building mechanism, but it does not perform; it just creates the jokes. You will have to judge for yourself whether the puns are funny. What makes a pun funny? It is about a word having two meanings at exactly the same time in a sentence. It is also about generating an expectation that you then break: a key idea at the core of creativity too.
A research team at Virginia Tech in the US built a system that learns what makes pictures funny. Having defined a ‘funniness score’, they created a computational model for humorous scenes and trained it to predict funniness – perhaps with an eye to spotting pics for social media posting, or not.
But are there funny robots out there? Yes! RoboThespian, programmed by researchers at Queen Mary University of London, and Data, created by researchers at Carnegie Mellon University, are both robots programmed to do stand-up comedy. Data has a bank of jokes and responds to audience reaction. His developers don’t actually know what he will do when he performs, as he is learning all the time. At his first public gig he got the crowd laughing, but his timing was poor. You can see his performance online, in a TED Talk.
RoboThespian did a gig at the London Barbican alongside human comedians. The performance was a live experiment to understand whether the robot could ‘work the audience’ as well as a human comedian. They found that even relatively small changes in the timing of delivery make a big difference to audience response.
What have these all got in common? Artificial Intelligence, machine learning and studies to understand what humour actually is are being combined to make something that is funny. Comedy is perhaps the pinnacle of creativity. It’s certainly not easy for a human to write even one joke, so think how hard it is to distill that skill into algorithms and train a computer to create loads of them.
You have to laugh!
Jane Waite, Queen Mary University of London, Summer 2017
The great Tudor and Stuart philosopher Sir Francis Bacon was a scientist, a statesman and an author. He was also a pretty decent computer scientist. He published* a new form of cipher, now called Bacon’s Cipher, invented when he was a teenager. Its core idea is the foundation for the way all messages are stored in computers today.
The Tudor and Stuart eras were a time of plot and intrigue. Perhaps the most famous is the 1605 Gunpowder plot where Guy Fawkes tried to assassinate King James I by blowing up the Houses of Parliament. Secrets mattered! In his youth Bacon had worked as a secret agent for Elizabeth I’s spy chief, Walsingham, so knew all about ciphers. Not content with using those that existed he invented his own. The one he is best remembered for was actually both a cipher and a form of steganography. While a cipher aims to make a message unreadable, steganography is the science of secret writing: disguising messages so no one but the recipient knows there is a message there at all.
A Cipher …
Bacon’s method came in two parts. The first was a substitution cipher, where different symbols are substituted for each letter of the alphabet in the message. This idea dates back to Roman times: Julius Caesar used a version, substituting each letter with the letter a fixed number of places down the alphabet (so A becomes E, B becomes F, and so on). Bacon’s key idea was to replace each letter of the alphabet not with a number or letter, but with its own series of a’s and b’s (see the cipher table). The Elizabethan alphabet actually had only 24 letters, so I and J have the same code, as do U and V, because they were then interchangeable (J and V were treated as variant forms of I and U).
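The cipher table itself is easy to rebuild. Here is a short Python sketch that generates the standard 24-letter version of Bacon’s codes: each letter’s position in the alphabet written as five binary digits, with 0 swapped for ‘a’ and 1 for ‘b’:

```python
# Rebuild Bacon's 24-letter cipher table: each letter's alphabet position,
# as 5 binary digits, with 0 written as 'a' and 1 written as 'b'.
alphabet = "ABCDEFGHIKLMNOPQRSTUWXYZ"  # 24 letters: no separate J or V

table = {}
for position, letter in enumerate(alphabet):
    code = format(position, "05b").replace("0", "a").replace("1", "b")
    table[letter] = code
table["J"] = table["I"]  # I/J shared a code, as did U/V
table["V"] = table["U"]

def encode(message):
    """Turn a message into Bacon's a/b code, one 5-symbol group per letter."""
    return " ".join(table[ch] for ch in message.upper() if ch in table)

print(encode("no"))  # abbaa abbab
```

Five symbols give 32 possible patterns, more than enough for a 24-letter alphabet.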
In Bacon’s cipher everything is encoded in two symbols, so it is a binary encoding. The letters a and b are arbitrary: today we would use 0 and 1. This is the first use of binary as a way to encode letters (in the West at least). Today all text stored in computers is represented in this way – though the codes are different – as that is all Unicode is: it allocates each character a binary pattern used to represent it in the computer. When the characters are to be displayed, the computer program just looks up which graphic pattern (the actual symbol as drawn) is linked to that binary pattern in the code being used. Unicode gives a binary pattern for every symbol in every human language (and there has even been a proposal for alien ones like Klingon).
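You can see the modern equivalent for yourself. In Python, for instance, each character’s Unicode code point can be printed as the binary pattern that represents it:

```python
# Each character stored in a computer is really just a binary pattern.
# Python can show the pattern for a character's Unicode code point:
def binary_pattern(ch):
    return format(ord(ch), "08b")  # 8 binary digits (enough for these characters)

for ch in "Hi!":
    print(ch, binary_pattern(ch))
# H 01001000
# i 01101001
# ! 00100001
```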
Image by CS4FN
Steganography
The second part of Bacon’s cipher system was steganography. Steganography dates back to at least the Greeks, who supposedly tattooed messages on the shaved heads of slaves, then let their hair grow back before sending them as both messenger and message. The binary encoding of Bacon’s cipher was vital to make his steganography algorithm possible. However, the message was not actually written as a’s and b’s. Bacon realised that two symbols could stand for any two things. If you could make the difference hard to spot, you could hide the messages. Bacon invented two ways of handwriting each letter of the alphabet – two fonts. An ‘a’ in the encoded message meant use one font and a ‘b’ meant use the other. The secret message could then be hidden inside an innocent one. The letters written were no longer the message; the message was in the font used. As Bacon noted, once you have the message in binary you could think of other ways to hide it. One way used was capital and lower-case letters, though only on the first letter of each word, to make it less obvious.
Suppose you wanted to hide the message “no” in the innocuous message ‘hello world’. The message ‘no’ becomes ‘abbaa abbab’. So far this is just a substitution cipher. Next we hide it in ‘hello world’. Two different kinds of fonts are those with curls on the tails of letters, known as serif fonts, and those without curls, known as sans serif fonts. We can use a sans serif font to represent an ‘a’ in the coded message, and a serif font to represent ‘b’. We just choose the font of each letter following the pattern of the a’s and b’s: ‘abbaa abbab’. The message becomes
sans serif, serif, serif, sans serif, sans serif, sans serif, serif, serif, sans serif, serif.
Using those fonts for our message, letter by letter, we get the final mixed-font version of ‘hello world’ to send.
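Since ordinary text can’t easily mix fonts, here is a Python sketch of the same trick using the capital-letters variant Bacon also noted: an ‘a’ in the code means write the cover letter in lower case, a ‘b’ means write it as a capital:

```python
# Hide an a/b coded message in a cover text: 'a' = lower case, 'b' = CAPITAL.
def hide(code, cover):
    symbols = iter(code.replace(" ", ""))
    result = []
    for ch in cover:
        if ch.isalpha():
            symbol = next(symbols)
            result.append(ch.upper() if symbol == "b" else ch.lower())
        else:
            result.append(ch)  # spaces and punctuation carry no code
    return "".join(result)

# 'no' encodes to 'abbaa abbab'; hide it in 'hello world'
print(hide("abbaa abbab", "hello world"))  # hELlo wORlD
```

The recipient just reads off the pattern of capitals to get the a’s and b’s back, then decodes them with the cipher table.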
Bacon the polymath
Bacon is perhaps best known as one of the principal advocates for rigorous science as a way of building up knowledge. He argued that scientists needed to do more than just come up with theories of how the world worked, and also guard against just seeing the results that matched their theories. He argued knowledge should be based on careful, repeated observation. This approach is the basis of the Scientific Method and one of the foundation stones of modern science.
Bacon was also a famous writer of the time, and one of many authors who have since been suggested as the person who wrote William Shakespeare’s plays. In his case it is because people claim to have found secret messages hidden in the plays in Bacon’s code. The idea that someone else wrote Shakespeare’s plays actually started just because some upper-class folk with a lack of imagination couldn’t believe a person from a humble background could turn himself into a genius. How wrong they were!
Paul Curzon, Queen Mary University of London, Autumn 2017
*Thanks to Pete Langman, whose PhD was on Francis Bacon, for pointing out a mistake in the original version of this blog, where I suggested the cipher was published in 1605, the year of the Gunpowder plot. It was actually first published in 1623 in De augmentis, a translation and enlargement of his 1605 Advancement of Learning.
He also pointed out that Bacon conceived the idea while working with Thomas Phelippes, cipher expert to the Elizabethan spymaster Walsingham at the time of the Babington plot to assassinate Elizabeth I, and with Mary, Queen of Scots’ jailer, Amias Paulet. Bacon also claimed the cipher was never broken!
This blog is funded by EPSRC on research agreement EP/W033615/1.
Suppose you want to send messages as fast as possible. What’s the best way to do it? That is what Polina Bayvel, a Professor at UCL, has dedicated her research career to: exploring the limits of how fast information can be sent over networks. It’s not just messages nowadays, of course, but videos, pictures, money, music, books – anything you can do over the Internet.
Send a text message and it arrives almost instantly. Sending messages hasn’t always been that quick, though. The Greeks used runners – in fact the Marathon athletic event originally commemorated a messenger who supposedly ran from the battlefield at Marathon to Athens to deliver the message “We won” before promptly dying. The fastest female marathon runner at the time of writing, 2011, Paula Radcliffe, could at her quickest deliver a message a marathon distance away in 2 hours, 15 minutes and 25 seconds (without dying!) … (now, in 2020, Brigid Kosgei is a minute or so faster).
Horses improved things (and the Greeks in fact normally used horseback messengers, but hey it was a good story). Unfortunately, even a horse can’t keep up the pace for hundreds of miles. The Pony Express pushed horse technology to its limits. They didn’t create new breeds of genetically modified fast horses, or anything like that. All it took was to create an organised network of normal ones. They set up pony stations every 10 miles or so right across North America from Missouri to Sacramento. Why every 10 miles? That’s the point a galloping horse starts to give up the ghost. The mail came thundering in to each station and thundered out with barely a break as it was swapped to a new fresh pony.
The pony express was swiftly overtaken by the telegraph. Like the switch to horses, this involved a new carrier technology – this time copper wire. Now the messages had to be translated first though, here into electrical signals in Morse code. The telegraph was followed by the telephone. With a phone it seems like you just talk and the other person just hears but of course the translation of the message into a different form is still happening. The invention of the telephone was really just the invention of a way to turn sound into an electrical code that could be sent along copper cables and then translated back again.
The Internet took things digital – in some ways a step back towards Morse code. Now everything, even sound and images, is turned into a code of ones and zeros instead of dots and dashes. In theory images could of course have been sent using a telegraph tapper in the same way … if you were willing to wait months for the code of the image to be tapped in and then decoded again. Better to just wait for computers that could do it fast to be invented.
In the early Internet, the message carrier was still good old copper wire. The trouble is, when you want to send lots of data, like a whole movie, copper wire and electricity start to look like the runners must have done to horse riders: slow, out-of-date technology. Optical fibres are the modern equivalent of the horse. They are just long, thin strands of glass. Instead of pulses of electricity carrying the coded messages, they now go on the back of pulses of light.
Up to this point it’s been mainly men taking the credit, but this is where Polina’s work comes in. She is both exploring the limits of what can be done with optical fibres in theory and building ever faster optical networks in practice. How much information can actually be sent down fibres, and what is the best way to do it? Can new optical materials make a difference? How can devices be designed to route information to the right place? Such ‘routers’ are just like mail-sorting depots for pulses of light. How can fibre optics best be connected into networks so that they work as efficiently as possible – allowing you and everyone else in your street to watch different movies at the same time, for example, without the film going all jerky? These are the kinds of questions that fascinate Polina, and she has built up an internationally respected team to help her answer them.
Why are optical fibres such a good way to send messages? Well, the obvious answer is that you can’t get much faster than light! Actually, you can’t get ANY faster than light: the speed of light is the fastest anything, including information, can travel according to Einstein’s laws. That’s not the end of the story though. Remember the worn-out marathon runner? It turns out that signals being sent down cables do something similar. Well, not actually getting out of breath and dying, but they do get weaker the further they travel. That means it gets harder to extract the information at the other end, and eventually there is a point where the message is just garbled noise. What’s the solution? It’s exactly the one the Pony Express came up with. You add what are called ‘repeaters’ every so often. They extract the message from the optical fibre and then send it down the next fibre, but now back at full strength again. One of the benefits of fibre optics is that signals can go much further before they need a repeater. That means the message gets to its destination faster, because those repeaters take time extracting and resending the message. That, in turn, leaves scope for improvement. The Pony Express made their ‘repeaters’ faster by giving the rider a horn to alert the stationmaster that they were arriving. He would then have time to get the next horse ready so it could leave the moment the mail was handed over. Researchers like Polina are looking for similar ways to speed up optical repeaters.
You can do more than play with repeaters to speed things up though. You can also bump up the amount of information you carry in one go. In particular you can send lots of messages at the same time over an optical fibre as long as they use different wavelengths. You can think of this as though one person is using a torch with a blue bulb to send a Morse code message using flashes of blue light (say), while someone else is doing the same thing with a red torch and red light. If two people at the other end are wearing tinted sunglasses then depending on the tint they will each see only the red pulses or only the blue ones and so only get the message meant for them. Each new frequency of light used gives a new message that can be sent at the same time.
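The torch analogy is easy to simulate. In this Python sketch (a toy model for illustration, not how real fibre hardware works), pulses of two ‘colours’ share one fibre, and a colour filter at the far end picks out each message:

```python
# Toy model of wavelength multiplexing: pulses of different 'colours' share
# one fibre; a filter at the far end recovers each message separately.
fibre = []

def send(colour, pulses):
    for p in pulses:
        fibre.append((colour, p))

def receive(colour):
    # The 'tinted sunglasses': keep only pulses of the chosen colour
    return "".join(p for c, p in fibre if c == colour)

send("red", "10110")   # one sender flashing a binary message in red light
send("blue", "01001")  # another sender using blue light at the same time

print(receive("red"))   # 10110
print(receive("blue"))  # 01001
```

Each extra colour is an extra, independent channel down the very same strand of glass.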
The tricky bit is not so much in doing that but in working out which senders can use which torch on which link at any particular time so there aren’t any clashes, bearing in mind that at any instant messages could be coming from anywhere in the network and trying to go anywhere. If two people try to use the same torch on the same link at the same time it all goes to pot. This is complicated further by the fact that at any time particular links could be very busy, or broken, meaning that different messages may also travel by different routes between the same places, just as you might drive a different way to normal if there is a jam. All this, together with other similar issues, means there are lots of hairy problems to worry about when trying to come up with the best possible optical network, as Polina is aiming to do.
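Finding a clash-free assignment is essentially a graph colouring problem. This Python sketch shows one simple greedy strategy, on a made-up set of routes (the network and messages here are invented for illustration): any two messages whose routes share a link must get different wavelengths:

```python
# Greedy wavelength assignment: messages whose routes share a fibre link must
# use different wavelengths. The routes below are invented for illustration.
routes = {
    "msg1": {"A-B", "B-C"},
    "msg2": {"B-C", "C-D"},  # shares link B-C with msg1
    "msg3": {"A-B"},         # shares link A-B with msg1
    "msg4": {"D-E"},         # shares no links, so can reuse a wavelength
}

assignment = {}
for msg, links in routes.items():
    # Wavelengths already used by messages sharing a link with this one
    taken = {assignment[other] for other in assignment if links & routes[other]}
    wavelength = 0
    while wavelength in taken:  # pick the lowest free wavelength
        wavelength += 1
    assignment[msg] = wavelength

print(assignment)  # {'msg1': 0, 'msg2': 1, 'msg3': 1, 'msg4': 0}
```

Greedy strategies like this are quick but not always optimal, which is one reason the real ‘routing and wavelength assignment’ problem keeps researchers busy.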
Polina has been highly successful working in this area. She has been made a Fellow of the Royal Academy of Engineering for her work and is also a Royal Society Wolfson Research Merit Award holder – an award only given to respected scientists of outstanding achievement and potential. She has also won the prestigious Patterson Medal, awarded for distinguished research in applied physics. It’s important to remember that modern engineering is a team game, though. As she notes, she has benefited hugely from having inspiring and supportive mentors, as well as superb students and colleagues. It is her ability to work well with other people that allowed her to build a critical mass in her research and so gain all the accolades. All that achieved, and she is a mother of two boys to boot. Bringing up children is, of course, a team game too.
Paul Curzon, Queen Mary University of London, Autumn 2011
In our stress-filled world with ever increasing levels of anxiety, it would be nice if technology could sometimes reduce stress rather than just add to it. That is the problem that QMUL’s Christine Farion set out to solve for her PhD. She wanted to do something stylish too, so she created a new kind of bag: a smart bag.
Christine realised that one thing that causes anxiety for a lot of people is forgetting everyday things. It is very common for us to forget keys, train tickets, passports and other everyday things we need for the day. Sometimes it’s just irritating. At other times it can ruin the day. Even when we don’t forget things, we waste time unpacking and repacking bags to make sure we really do have the things we need. Of course, the moment we unpack a bag to check, we increase the chance that something won’t be put back!
Electronic bags
Christine wondered if a smart bag could help. Over the space of several years, she built ten different prototypes using basic electronic kits, allowing her to explore lots of options. Her basic design has coloured lights on the outside of the bag, and a small scanner inside. To use the bag, you attach electronic tags to the things you don’t want to forget. They are like the ones shops use to keep track of stock and prevent shoplifting. Some tags are embedded into things like key fobs, while others can be stuck directly on to an object. Then when you pack your bag, you scan the objects with the reader as you put them in, and the lights show you they are definitely there. The different coloured lights allow you to create clear links – natural mappings – between the lights and the objects. For her own bag, Christine linked the blue light to a blue key fob with her keys, and the yellow light to her yellow hayfever tablet box.
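The bag’s check-in logic might be sketched like this (the tag IDs and colours here are made up for illustration, not taken from Christine’s actual design): scanning a tag simply turns on the light with the matching colour:

```python
# Hypothetical smart-bag check-in logic: each tag ID is linked to a light
# colour (a 'natural mapping'), and scanning a tag switches its light on.
tag_to_light = {
    "tag-keyfob": "blue",     # blue key fob -> blue light
    "tag-pillbox": "yellow",  # yellow tablet box -> yellow light
}
lights_on = set()

def scan(tag_id):
    if tag_id in tag_to_light:
        lights_on.add(tag_to_light[tag_id])

scan("tag-keyfob")
scan("tag-pillbox")
print(sorted(lights_on))  # ['blue', 'yellow'] - both items are in the bag
```

A glance at the lights then tells you what is packed, without unpacking anything.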
In the wild
One of the strongest things about her work was that she tested her bags extensively ‘in the wild’. She gave them to people who used them as part of their normal everyday life, asking them to report what did and didn’t work. This all fed into the designs for subsequent bags and allowed her to learn what really mattered to make this kind of bag work for the people using it. One of the key things she discovered was that the technology needed to be completely simple to use. If it wasn’t both obvious how to use and quick to do, it wouldn’t be used.
Christine also used the bags herself, keeping a detailed diary of incidents related to the bags and their design. This is called ‘autoethnography’. She even used one bag as her own main bag for a year and a half, building it completely into her life, fixing problems as they arose. She took it to work, shopping, to coffee shops … wherever she went.
Suspicious?
When she had shown people her prototype bags, one of the common worries was that the electronics would look suspicious and be a problem when travelling. She set out to find out, taking her bag on journeys around the country, on trains and even to airports, travelling overseas on several occasions. There were no problems at all.
Fashion matters
As a bag is a personal item we carry around with us, it becomes part of our identity. She found that appropriate styling is, therefore, essential in this kind of wearable technology. There is no point making a smart bag that doesn’t fit the look that people want to carry around. This is a problem with a lot of today’s medical technology, for example. Objects that help with medical conditions – like diabetic monitors or drug pumps, and even things as simple and useful as hearing aids or glasses – while ‘solving’ a problem, can lead to stigma if they look ugly. Fashion, on the other hand, does the opposite: it is all about being cool. Christine showed that by combining the design of the technology with an understanding of fashion, her bags were seen as cool. Rather than designing just a single functional smart bag, ideally you need a range of bags if the idea is to work for everyone.
Now, why don’t I have my glasses with me?
Paul Curzon, Queen Mary University of London, Autumn 2018
Researchers at MIT and Harvard have new skin in the game when it comes to monitoring people’s bodily health. They have developed a new wearable technology in the form of colour- and shape-changing tattoos. These tattoos work by using bio-sensitive inks, changing colour, fading away or appearing under different coloured illumination, depending on your body chemistry. They could, for example, change their colour, or shape as their parts fade away, depending on your blood glucose levels.
This kind of constantly on, constantly working body monitoring ensures that there is nothing to fall off, get broken or run out of power. That’s important in chronic conditions like diabetes where monitoring and controlling blood glucose levels is crucial to the person’s health. The project, called Dermal Abyss, brings together scientists and artists in a new way to create a data interface on your skin.
There are still lots of questions to answer, like how long will the tattoos last and would people be happy displaying their health status to anyone who catches a glimpse of their body art? How would you feel having your body stats displayed on your tats? It’s a future question for researchers to draw out the answer to.
Peter W. McOwan, Queen Mary University of London, Autumn 2018
Contact lenses, normally used to simply, but usefully, correct people’s vision, could in the future do far more.
Tiny microelectronic circuits, antennae and sensors can now be fabricated and set in the plastic of contact lenses. Researchers are looking at the possibility of using such sensors to sample and transmit the glucose level in the eye moisture: useful information for diabetics. Others are looking at lenses that can change your focus, or even project data onto the lens, allowing new forms of augmented and virtual reality.
Conveniently, you can turn the frequent natural motion from the blinks of your eye into enough power to run the sensors and transmitter, doing away with the need for charging. All this means that smart contact lenses could be a real eye opener for wearable tech.
Peter W. McOwan, Queen Mary University of London, Autumn 2018
The first ever smart pill has been approved for use. It’s like any other pill except that this one has a sensor inside it and it comes with a tracking device patch you wear to make sure you take it.
A big problem with medicine is remembering to take it. It’s common for people to be unsure whether they did take today’s tablet or not. Getting it wrong regularly can make a difference to how quickly you recover from illness. Many medicines are also very, very expensive. Mass-produced electronics, on the other hand, are cheap. So could the smart pill be a new, potentially useful, solution? The pill contains a sensor that is triggered when the pill dissolves and the sensor meets your stomach acids. When it does, the patch you wear detects its signal and sends a message to your phone to record the fact. The specially made sensor itself is harmless and safe to swallow. Your phone’s app can then, if you allow it, tell your doctor so that they know whether you are taking the pills correctly or not.
Smart pills could also be invaluable for medical researchers. In medical trials of new drugs, knowing whether patients took the pills correctly is important but difficult to know. If a large number of patients don’t, that could be a reason why the drugs appeared less effective than expected. Smart pills could allow researchers to better work out how regularly a drug needs to be taken to still work.
More futuristically still, such pills may form part of a future health artificial intelligence system that is personalised to you. It would collect data about you and your condition from a wide range of sensors recording anything relevant: from whether you’ve taken pills to how active you’ve been, your heart rate, blood pressure and so on: in fact anything useful that can be sensed. Then, using big data techniques to crunch all that data about you, it will tailor your treatment. For example, such a system may be better able to work out how a drug affects you personally, and so be better able to match doses to your body. It may be able to give you personalised advice about what to eat and drink, even predicting when your condition could be about to get better or worse. This could make a massive difference to life for those with long term illnesses like rheumatoid arthritis or multiple sclerosis, where symptoms flare up and die away unpredictably. It could also help the doctors who currently must find the right drug and dose for each person by trial and error.
Computing in future could be looking after your health personally, as long as you are willing to wear it both inside and out.
Paul Curzon, Queen Mary University of London, Spring 2021
Contactless payments seem magical. But don’t get caught out by someone magically scanning your card without you knowing. Almost £7 million was stolen by contactless card fraud in 2016 alone…
Victorian Hi-Tech
Contactless cards talk to the scanner by electromagnetic induction, discovered by Michael Faraday back in 1831. Changes in the current in a coil of wire – which for a contactless card is just an antenna in the form of a loop – create a changing magnetic field. If a loop antenna on another device is placed inside that magnetic field, a voltage is induced in its circuit. As the current in the first circuit changes, that in the other circuit copies it, and information is passed from one to the other. This works up to about 10cm away.
Picking pockets at a distance
For small amounts, contactless cards don’t require authentication, like a PIN, to prove who is using them. Anyone with the card and a reader can charge small amounts to it. Worse, if someone gets a reader within 10cm of the bag holding your card, they could even take money from it without your knowledge. That might seem unlikely, but traditional pickpockets are easily capable of taking your wallet without you noticing, so just getting close isn’t hard by comparison! For that kind of fraud the crook has to have a legitimate reader to charge money to. Even without one, though, they can read the number and expiry date from the card and use them to make online purchases.
A man in the middle
Security researchers have also shown that ‘relay’ attacks are possible, where a fake device passes messages between the shop and a card that is somewhere else. An attacker places a relay device near to someone’s actual card. It communicates with a fake card an accomplice is using in the shop. The shop’s reader queries the fake card, which talks to its paired device. The paired device talks to the real card as though it were the one in the shop. It passes the answers from the real card back to the fake card, which relays them on to the shop. Real reader and card get exactly the messages they would if the card was in the shop, just via the fake devices in between. Both shop and card think they are talking to each other even though they are a long way apart, and the owner of the real card knows nothing about it.
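The relay idea can be sketched in a few lines of Python (a toy model of the messages, not a real payment protocol): the shop’s reader cannot tell whether it is talking to the real card or to a pair of devices just forwarding its messages:

```python
# Toy model of a relay attack: the shop's reader can't tell a relayed card
# from a real one, because the relay just forwards every message unchanged.
class RealCard:
    def respond(self, challenge):
        # Stand-in for the card's real cryptographic reply
        return "signed(" + challenge + ")"

class Relay:
    """Fake card in the shop, radio-linked to a device near the victim."""
    def __init__(self, distant_card):
        self.distant_card = distant_card

    def respond(self, challenge):
        # Forward the shop's challenge to the far-away real card
        return self.distant_card.respond(challenge)

def shop_reader(card):
    return card.respond("pay 5 pounds, nonce 1234")

victim_card = RealCard()
print(shop_reader(Relay(victim_card)) == shop_reader(victim_card))  # True
```

Because the relay changes nothing, every cryptographic check still passes; defences against this rely on things like tight timing limits, not on the messages themselves.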
Block the field
How do you guard against contactless attacks? Never hand over your card, always ask for a receipt and check your statements. You can also keep your card in a blocking sleeve: a metal case that protects the card from electromagnetic fields (even using a homemade sleeve from tin foil should work). Then at least you force the pickpockets back to the Victorian, Artful Dodger style, method of actually stealing your wallet.
Of course Faraday was a Victorian, so a contactless attack is actually a Victorian way of stealing too!
Jane Waite and Paul Curzon, Queen Mary University of London
What use is a computer that sees like a human? Can’t computers do better than us? Well, such a computer can predict what we will and will not see, and there is BIG money to be gained doing that!
The Hong Kong Skyline. Image public domain from wikipedia
Peter McOwan’s team at Queen Mary spent 10 years doing exploratory research understanding the way our brains really see the world, exploring illusions, inventing games to test the ideas, and creating a computer model to test their understanding. Ultimately they created a program that sees like a human. But what practical use is a program that mirrors the oddities of the way we see the world? Surely a computer can do better than us: noticing all the things that we miss or misunderstand? Well, for starters the research opens up exciting possibilities for new applications, especially for marketeers.
The Hong Kong Skyline as seen by DragonflyAI (processed public domain image from wikipedia)
A fruitful avenue to emerge is ‘visual analytics’ software: applications that predict what humans will and will not notice. Our world is full of competing demands, overloading us with information. All around us things vie to catch our attention, whether a shop window display, a road sign warning of danger or an advertising poster.
Imagine: a shop has a big new promotion designed to entice people in, but no more people enter than normal. No one notices the display; their attention is elsewhere. Another company runs a web ad campaign, but it has no effect, as people’s eyes are pulled elsewhere on the screen. A third company pays to have its products appear in a blockbuster film. Again, a waste of money: in surveys afterwards, no one knew the products had been there. A town council puts up a new warning sign at a dangerous bend in the road, but the crashes continue. These are all situations where predicting in advance where people will look allows you to get it right. In the past this was done either by long and expensive user testing, perhaps using software that tracks where people look, or by having teams of ‘experts’ discuss what they think will happen. What if a program could make the predictions in a fraction of a second? What if you could tweak things repeatedly until your important messages could not be missed?
Queen Mary’s Hamit Soyel turned the research models into a program called DragonflyAI, which does exactly that. The program analyses all kinds of imagery in real time and predicts the places where people’s attention will, and will not, be drawn. It works whether the content is moving or not, and whether it is in the real world, completely virtual, or both. This gives marketeers the power to predict, and so influence, human attention, making sure people see the things they want them to see. The software quickly caught the attention of big global companies like NBC Universal, GSK and Jaywing, who now use the technology.