Diagnose? Delay delivery? Decisions, decisions. Decisions about diabetes in pregnancy

In the film Minority Report, a team of psychics – who can see into the future – predict who might cause harm, allowing the police to intervene before the harm happens. It is science fiction. But smart technology can, in a sense, see into the future too. It may be able to warn months in advance that a mother’s body might be about to harm her unborn baby, and so allow the harm to be prevented before it even happens.

Baby holding feet with feet in foreground.
Image by Daniel Nebreda from Pixabay

Gestational diabetes (or GDM) is a type of diabetes that appears only during pregnancy. Once the baby is born it usually disappears. Although it doesn’t tend to produce many symptoms, it can increase the risk of complications in pregnancy, so pregnant women are tested for it to avoid problems. Women who’ve had GDM are also at greater risk of developing Type 2 diabetes later on, joining an estimated 4 million people who have the condition in the UK.

Diabetes happens either when someone’s pancreas is unable to produce enough of a chemical called insulin, or because the body stops responding to the insulin that is produced. We need insulin to help us make use of glucose: a kind of sugar in our food that gives us energy. In Type 1 diabetes (commonly diagnosed in young people) the pancreas pretty much stops producing any insulin. In Type 2 diabetes (more commonly diagnosed in older people) the problem isn’t so much the pancreas (in fact in many cases it produces even more insulin), it’s that the person has become resistant to insulin. The result from either ‘not enough insulin’ or ‘plenty of insulin but can’t use it properly’ is that glucose isn’t able to get into our cells to fuel them. It’s a bit like being unable to open the fuel cap on a car, so the driver can’t fill it with petrol. This means higher levels of glucose circulate in the bloodstream and, unfortunately, high glucose can cause lots of damage to blood vessels.

During a normal pregnancy, women often become a little more insulin-resistant than usual anyway. This is an effect of pregnancy hormones from the placenta. From the point of view of the developing foetus, which is sharing a blood supply with mum, this is mostly good news as the blood arriving in the placenta is full of glucose to help the baby grow. That sounds great, but if the woman becomes too insulin-resistant and there’s too much glucose in her blood it can lead to accelerated growth (a very large baby) and increase the risk of complications during pregnancy and at birth. Not great for mum or baby. Doctors regularly monitor the blood glucose levels in a GDM pregnancy to keep both mother and baby in good health. Once taught, anyone can measure their own blood glucose levels using a finger-prick test, and people with diabetes do this several times a day. Monitoring at home like this, rather than relying on clinic visits, saves money and is also much more flexible for mothers.

In-depth screening of every pregnant woman to see if she has, or is at risk of, GDM costs money and is time-consuming, and most pregnant women will not develop GDM anyway. PAMBAYESIAN researchers at Queen Mary have developed a prototype intelligent decision-making tool, both to help doctors decide who needs further investigation and to help the women themselves decide when they need additional support from their healthcare team.

The team of computer scientists and maternity experts developed a Bayesian network with information based on expert knowledge about GDM, then trained it on real (anonymised) patient data. They are now evaluating its performance and refining it. There are different decision points throughout a GDM pregnancy. First, does the person have GDM or are they at increased risk (perhaps because of a family history)? If ‘yes’ then the next decision is how best to care for them, and whether to begin medical treatment or just give diet and lifestyle support. Later on in the pregnancy the woman and her doctor must consider when it is best for her to deliver her baby, and later still she needs ongoing support to prevent her GDM from leading to Type 2 diabetes. The tool is still in early development, but the hope is that, given blood glucose readings, the GDM Bayesian network will ultimately be able to take account of the woman’s risk factors (like age, ethnicity and previous GDM), use that information to predict how likely she is to develop the condition in this pregnancy, and suggest what should happen next.
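To give a flavour of how a Bayesian network combines risk factors with test results, here is a minimal sketch in Python. The structure, the single risk factor and all the probability numbers are invented purely for illustration; this is not the PAMBAYESIAN team’s actual model.

```python
# A toy Bayesian update: one risk factor and one test result are combined
# to estimate the chance of GDM. All numbers are invented for illustration.

def posterior_gdm(family_history: bool, high_glucose: bool) -> float:
    # Prior probability of GDM, depending on family history (made-up numbers).
    prior = 0.15 if family_history else 0.05

    # How likely a high fasting glucose reading is, with and without GDM
    # (again, invented likelihoods).
    p_high_given_gdm = 0.80
    p_high_given_no_gdm = 0.10

    # Likelihood of the observed reading under each hypothesis.
    if high_glucose:
        like_gdm, like_no = p_high_given_gdm, p_high_given_no_gdm
    else:
        like_gdm, like_no = 1 - p_high_given_gdm, 1 - p_high_given_no_gdm

    # Bayes' rule: P(GDM | evidence) is proportional to P(evidence | GDM) * P(GDM).
    numerator = like_gdm * prior
    evidence = numerator + like_no * (1 - prior)
    return numerator / evidence

print(posterior_gdm(family_history=True, high_glucose=True))    # risk goes up
print(posterior_gdm(family_history=False, high_glucose=False))  # risk goes down
```

A real network chains many such nodes together, so evidence about one thing (a glucose reading, say) updates beliefs about everything connected to it.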

Systems like this mean that one day your smartphone may be smart enough to help protect you and your unborn baby from future harm.

– Jo Brodie, Queen Mary University of London, Spring 2021

Download Issue 27 of the cs4fn magazine on Smart Health here.

This post and issue 27 of the cs4fn magazine have been funded by EPSRC as part of the PAMBAYESIAN project.

I’m feeling Moo-dy today

It has long been an aim of computer scientists to develop software that can work out how a person is feeling. Are you happy or sad, frustrated or lonely? If the software can tell then it can adapt to your moods, changing its behaviour or offering advice. Suresh Neethirajan from Wageningen University in the Netherlands has gone a step further. He has developed a program that detects the emotions of farm animals.

Image by Couleur from Pixabay 

Working out how someone is feeling is called “Sentiment Analysis” and there are lots of ways computer scientists have tried to do it. One way is based on looking at the words people speak or write. The way people speak, such as their tone of voice, also gives information about emotions. Another way is based on our facial expressions and body language. A simple version of sentiment analysis involves working out whether someone is feeling a positive emotion (like being happy or excited) versus a negative emotion (such as being sad or angry), rather than trying to determine the precise emotion.
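As a taste of the simplest word-based approach, here is a toy positive-versus-negative scorer in Python. The word lists are tiny and made up; real sentiment analysis systems use far larger dictionaries or machine learning.

```python
# A toy word-counting sentiment analyser: positive vs negative only.
POSITIVE = {"happy", "excited", "great", "love", "wonderful"}
NEGATIVE = {"sad", "angry", "awful", "hate", "terrible"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    # Count positive words, subtract negative ones.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this wonderful sunny day"))    # positive
print(sentiment("it makes me angry and sad"))          # negative
```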

Applications range from deciding how a person might vote to predicting what they might buy. A more futuristic use is to help medics make healthcare decisions. When the patient says they aren’t feeling too bad, are they actually fine or are they just being stoical, for example? And how much pain or stress are they actually suffering?

But why would you want to know the emotions of animals? One really important application is to know when an animal is, or is not, in distress. Knowing that can help a farmer look after that animal better, but also work out the best way to look after animals more generally. It might help farmers design nicer living conditions, and also work out more humane ways to slaughter animals that involve the least suffering. Avoiding cruel conditions is reason enough on its own, but with happy farm animals you might also improve the yield of milk, the quality of meat or how many offspring animals have in their lifetime. A farmer certainly shouldn’t want their animals to be so upset they start to self-harm, which can be a problem when animals are kept in poor conditions. Not only is it cruel, it can lead to infections which cost money to treat and whose treatment encourages resistance to antibiotics. Having accurate ways to quickly and remotely detect how animals are feeling would be a big step forward for animal welfare.

But how to do it? While some scientists are actually working on understanding animal language, recognising body language is an easier first step towards understanding animal emotions. A lot is actually known about animal expressions and body language, and what they mean. If a dog is wagging its tail, then it is happy, for example. Suresh focussed on facial expressions in cows and pigs. What kind of expressions do they have? Cows, for example, are likely to be relaxed if their eyes are half-closed and their ears are backwards or hung down. If you can see the whites of their eyes, on the other hand, then they are probably stressed. Pigs that are moving their ears around very quickly, by contrast, are likely to be stressed. If their ears are hanging and flipping in the direction of their eyes, though, then they are in a much more neutral state.

There are lots of steps to go through in creating a system to recognise emotions. The first for Suresh was to collect lots of pictures of cows and pigs from different farms. He collected almost 4000 images from farms in Canada, the USA and India. Each image was labelled by human experts according to whether it showed a positive, neutral or negative emotional state of the animal, based on what was already known about how animal expressions link to their emotions.

Sophisticated image processing software was then used to automatically pick out the animals’ faces as well as locate the individual features, such as eyes and ears. The orientation and other properties of those facial features, such as whether ears were hanging down or pricked up, were also determined. This processed data was then fed into a machine learning system to train it. Because the data was labelled, the program knew what a human judged the different expressions to mean in terms of emotions, and so could work out the patterns in the data that represented each emotional state.

Once trained, the system was given new images without the labels to judge how accurate it was. It made a judgement and this was compared to the human judgement of the state. Human and machine agreed 86% of the time. More work is needed before such a system could be used on farms, but it opens the possibility of using video cameras around a farm to raise the alarm when animals are suffering, for example.
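The overall pipeline – train a classifier on labelled facial features, then test it on images it has never seen – can be sketched in a few lines of Python with scikit-learn. The feature values and labels below are randomly generated stand-ins, not Suresh’s actual data, so the agreement printed will be near chance rather than the 86% the real system achieved.

```python
# Sketch of the train-then-test pipeline on stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Pretend features extracted from ~4000 images: ear angle, eye openness, etc.
X = rng.random((4000, 5))
# Pretend expert labels: 0 = negative, 1 = neutral, 2 = positive
y = rng.integers(0, 3, size=4000)

# Hold some images back so the system is judged on pictures it never trained on.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)          # learn patterns from the labelled examples
predictions = model.predict(X_test)  # judge the unseen images
print("Agreement with the expert labels:", accuracy_score(y_test, predictions))
```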

Machine learning is helping humans in lots of ways. With systems like this machine learning could soon be helping animals live better lives too.

Paul Curzon, Queen Mary University of London, Spring 2021

Standup Robots

‘How do robots eat pizza?’… ‘One byte at a time’. Computational Humour is real, but it’s not jokes about computers, it’s computers telling their own jokes.

Robot performing
Image from istockphoto

Computers can create art, stories, slogans and even magic tricks. But can computers perform themselves? Can robots invent their own jokes? Can they tell jokes?

Combining Artificial Intelligence, computational linguistics and humour studies (yes, you can study how to be funny!) a team of Scottish researchers made an early attempt at computerised standup comedy! They came up with Standup (System To Augment Non-speakers’ Dialogue Using Puns): a program that generates riddles for kids with language difficulties. Standup has a dictionary and a joke-building mechanism, but it does not perform; it just creates the jokes. You will have to judge for yourself whether the puns are funny. You can download the software from here. What makes a pun funny? It is about a word having two meanings at exactly the same time in a sentence. It is also about generating an expectation that you then break: a key idea at the core of creativity too.
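Standup’s dictionary and joke-building machinery is far more sophisticated than this, but the basic idea of slotting words with double meanings into a fixed riddle template can be sketched in Python. The tiny lexicon here is invented for illustration.

```python
# A toy riddle generator: plug pun words into a fixed riddle template.
# Each entry pairs a word with two of its meanings (a tiny, made-up lexicon).
PUN_WORDS = [
    ("bark", "the sound a dog makes", "the outside of a tree"),
    ("byte", "a chunk of computer data", "a bite of food"),
    ("net", "something for catching fish", "the internet"),
]

def make_riddle(word, meaning_a, meaning_b):
    # The template creates the expectation, the pun word breaks it.
    return (f"What do you get if you cross {meaning_a} with {meaning_b}? "
            f"A {word}!")

for entry in PUN_WORDS:
    print(make_riddle(*entry))
```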

A research team at Virginia Tech in the US created a system that started to learn about funny pictures. Having defined a ‘funniness score’ they created a computational model for humorous scenes, and trained it to predict funniness, perhaps with an eye to spotting pics for social media posting, or not.

But are there funny robots out there? Yes! RoboThespian programmed by researchers at Queen Mary University of London, and Data, created by researchers at Carnegie Mellon University are both robots programmed to do stand-up comedy. Data has a bank of jokes and responds to audience reaction. His developers don’t actually know what he will do when he performs, as he is learning all the time. At his first public gig, he got the crowd laughing, but his timing was poor. You can see his performance online, in a TED Talk.

RoboThespian did a gig at the London Barbican alongside human comedians. The performance was a live experiment to understand whether the robot could ‘work the audience’ as well as a human comedian. They found that even relatively small changes in the timing of delivery make a big difference to audience response.

What have these all got in common? Artificial Intelligence, machine learning and studies to understand what humour actually is are being combined to make something that is funny. Comedy is perhaps the pinnacle of creativity. It’s certainly not easy for a human to write even one joke, so think how hard it is to distil that skill into algorithms and train a computer to create loads of them.

You have to laugh!

Watch RoboThespian [EXTERNAL]

– Jane Waite, Queen Mary University of London, Summer 2017

Download Issue 22 of the cs4fn magazine “Creative Computing” here

Lots more computing jokes on our Teaching London Computing site

Sabine Hauert: Swarm Engineer

Based on a 2016 talk by Sabine Hauert at the Royal Society

Sabine Hauert is a swarm engineer. She is fascinated by the idea of making use of swarms of robots. Watch a flock of birds and you see that they have both complex and beautiful behaviours. It helps them avoid predators very effectively, for example, so much so that many animals behave in a similar way. Predators struggle to fix on any one bird in all the chaotic swirling. Sabine’s team at the University of Bristol are exploring how we can solve our own engineering problems: from providing communication networks in a disaster zone to helping treat cancer, all based on the behaviours of swarms of animals.

A murmuration – a flock of starlings

Sabine realised that flocks of birds have properties that are really interesting to an engineer. Their ability to scale is one. It is often easy to come up with solutions to problems that work in a small ‘toy’ system, but when you want to use it for real, the size of the problem defeats you. With a flock, birds just keep arriving, and the flock keeps working, getting bigger and bigger. It is common to see thousands of starlings behaving like this – around Brighton Pier most winter evenings, for example. Flocks can even be of millions of birds all swooping and swirling together, never colliding, always staying as a flock. It is an engineering solution that scales up to massive problems. If you can build a system to work like a flock, you will have a similar ability to scale.

Flocks of birds are also very robust. If one bird falls out of the sky, perhaps because it is caught by a predator, the flock itself doesn’t fail, it continues as if nothing happened. Compare that to most systems humans create. Remove one component from a car engine and it’s likely that you won’t be going anywhere. This kind of robustness from failure is often really important.

Swarms are an example of emergent behaviour. If you look at just one bird you can’t tell how the flock works as a whole. In fact, each is just following very simple rules. Each bird just tracks the positions of a few nearest neighbours, using that information to make simple decisions about how to move. That is enough for the whole complex behaviour of the flock to emerge. Despite all that fast and furious movement, the birds never crash into each other. Fascinated, Sabine started to explore how swarms of robots might be used to solve problems for people.
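Those “simple rules per bird” can be captured in a classic boids-style simulation. The sketch below uses made-up rule weights and only the usual three rules (stick together, line up, don’t collide); the point is that each bird only ever looks at its few nearest neighbours, yet flocking emerges.

```python
# A minimal boids-style flocking sketch: each bird follows simple local rules.
# The rule weights and numbers are arbitrary choices for illustration.
import math
import random

N = 30
birds = [{"x": random.uniform(0, 100), "y": random.uniform(0, 100),
          "vx": random.uniform(-1, 1), "vy": random.uniform(-1, 1)} for _ in range(N)]

def nearest(bird, k=5):
    # Each bird only knows about its k nearest neighbours.
    others = [b for b in birds if b is not bird]
    return sorted(others, key=lambda b: (b["x"] - bird["x"])**2 + (b["y"] - bird["y"])**2)[:k]

def step():
    for b in birds:
        neigh = nearest(b)
        # Cohesion: steer gently towards the average position of neighbours.
        cx = sum(n["x"] for n in neigh) / len(neigh)
        cy = sum(n["y"] for n in neigh) / len(neigh)
        b["vx"] += 0.01 * (cx - b["x"])
        b["vy"] += 0.01 * (cy - b["y"])
        # Alignment: match the neighbours' average direction of travel.
        b["vx"] += 0.05 * (sum(n["vx"] for n in neigh) / len(neigh) - b["vx"])
        b["vy"] += 0.05 * (sum(n["vy"] for n in neigh) / len(neigh) - b["vy"])
        # Separation: move away from any neighbour that is too close.
        for n in neigh:
            dx, dy = b["x"] - n["x"], b["y"] - n["y"]
            if math.hypot(dx, dy) < 2:
                b["vx"] += 0.1 * dx
                b["vy"] += 0.1 * dy
    for b in birds:
        b["x"] += b["vx"]
        b["y"] += b["vy"]

for _ in range(100):
    step()
print("Final positions of the first three birds:",
      [(round(b["x"], 1), round(b["y"], 1)) for b in birds[:3]])
```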

Her first idea was to create swarms of flying robots to work as a communications network, providing wi-fi coverage in places it would otherwise be hard to set up a network. This might be a good solution in a disaster area, for example, where there is no other infrastructure, but communication is vital. You want it to scale over the whole disaster area quickly and easily, and it has to be robust. She set about creating a system to achieve this.

The robots she designed were very simple, fixed-wing, propeller-powered model planes. Each had a compass so it knew which direction it was pointing and was able to talk to those nearest using wi-fi signals. It could also tell who its nearest neighbours were. The trick was to work out how to design the behaviour of each individual robot so that appropriate swarming behaviour emerged. At any time each had to decide how much to turn to avoid crashing into another, while maintaining the flock and its coverage. You could try to work out the best rules by hand. Instead, Sabine turned to machine learning.

“Throwing those flying robots

and seeing them flock

was truly magical”

The idea of machine learning is that instead of trying to devise algorithms that solve problems yourself, you write an algorithm for how to learn. The program then learns for itself, by trial and error, the best solution. Sabine created a simple first program for her robots that gave them fairly random behaviour. The machine learning program then used a process modelled on evolution to gradually improve it. After all, evolution worked for animals! The way this is done is that variations on the initial behaviour are trialled in simulators and only the most successful are kept. Further random changes are made to those and the new versions trialled again. This continues over thousands of generations, each generation getting that little bit better at flocking, until eventually the individual robots’ behaviour leads them to swarm together.
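That learn-by-evolution loop can be sketched in outline: try many variants in a simulator, keep the best, mutate them and repeat. In this illustration the “behaviour” is just a list of numbers and the scoring function is a stand-in; Sabine’s system scored real flocking behaviour in a flight simulator.

```python
# A bare-bones evolutionary loop: evaluate in a (stand-in) simulator,
# keep the most successful behaviours, mutate them and repeat.
import random

def simulate_and_score(behaviour):
    # Stand-in for running a flocking simulation and measuring how well the
    # robots stayed together and kept coverage. Here: the closer each number
    # is to an arbitrary target value, the higher the score.
    return -sum((b - 0.7) ** 2 for b in behaviour)

def mutate(behaviour):
    # Make small random changes to a behaviour's parameters.
    return [b + random.gauss(0, 0.05) for b in behaviour]

# Start from fairly random behaviour parameters (e.g. turning rules).
population = [[random.random() for _ in range(4)] for _ in range(20)]

for generation in range(1000):
    population.sort(key=simulate_and_score, reverse=True)
    survivors = population[:5]  # keep only the most successful behaviours
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=simulate_and_score)
print("Best behaviour parameters found:", [round(b, 2) for b in best])
```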

Sabine has now moved on to thinking about a situation where swarms of trillions of individuals are needed: nanomedicine. She wants to create nanobots that are each smaller than the width of a strand of hair and can be injected into cancer patients. Once inside the body they will search out and stick themselves to tumour cells. The tumour cells gobble them up, at which point they deliver drugs directly inside the rogue cell. How do you make them behave in a way that gives the best cancer treatment, though? For example, how do you stop them all just sticking to the same outer cancer cells? One way might be to give them a simple swarm behaviour that allows them to go to different depths and only then switch on their stickiness, allowing them to destroy all the cancer cells. This is the sort of thing Sabine’s team are experimenting with.

Swarm engineering has all sorts of other practical applications, and while Sabine is leading the way, some time soon we may need lots more swarm engineers, able to design swarm systems to solve specific problems. Might that be you?

Explore swarm behaviour using the Oxford Turtle system [EXTERNAL] (click the play button top centre) to see how to run a flocking simulation as well as program your own swarms.

Paul Curzon, Queen Mary University of London

What’s on your mind?

Telepathy is the supposed Extra Sensory Perception ability to read someone else’s mind at a distance. Whilst humans do not have that ability, brain-computer interface researchers at Stanford have just made a high-tech version of it a reality.

Image by Andrei Cássia from Pixabay

It has long been known that by using brain implants or electrodes on a person’s head it is possible to tell the difference between simple thoughts. Thinking about moving parts of the body gives particularly useful brain signals. Thinking about moving your right arm generates different signals to thinking about moving your left leg, for example, even if you are paralysed so cannot actually move at all. Telling two different things apart is enough to communicate – it is the basis of binary and so of how all computer-to-computer communication is done. This led to the idea of the brain-computer interface, where people communicate with and control a computer with their mind alone.

Stanford researchers made a big step forward in 2017, when they demonstrated that paralysed people could move a cursor on a screen by thinking of moving their hands in the appropriate direction. This created a point-and-click interface – a mind mouse – for the paralysed. Impressively, the speed and accuracy were as good as for people using keyboard applications.

Stanford researchers have now gone even further and used the same idea to turn mental handwriting into actual typing. The person just thinks of writing letters with an imagined pen on imagined paper; the brain-computer interface picks up the thoughts of those subtle movements and the computer converts them into actual letters. Again, the speed and accuracy are as good as most people’s typing. The paralysed participant concerned could communicate 18 words a minute and made virtually no mistakes at all: when the system was combined with auto-correction software, as we all now use to correct our typing mistakes, it got letters right 99% of the time.

The system has been made possible by advances in both neuroscience and computer science. Recognising the letters being mind-written involves distinguishing very subtle differences in patterns of neurons firing in the brain. Recognising patterns is, however, exactly what Machine Learning algorithms do. They are trained on lots of data and pick out patterns of similar data. If told what letter the person was actually trying to communicate, they can link that letter to the pattern detected. Each letter will not lead to exactly the same pattern of brain signals firing every time, but the patterns for one letter will largely clump together. Other letters will also group, but with slightly different patterns of firings. Once trained, the system works by taking the pattern of brain signals just seen, matching it to the nearest clump, and guessing that the letter belonging to that clump is the one being communicated. The fact that the system was highly accurate (94% before autocorrection) means the patterns of most letters are very distinct. A mind-written letter rarely fell into a gap between brain patterns, where it could as easily have been one letter as another.
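The “match the new pattern to the nearest clump” step is essentially a nearest-centroid classifier. Here is a minimal sketch; the tiny feature vectors are invented stand-ins for the recorded neural firing patterns.

```python
# Nearest-centroid sketch: each letter gets an average ("centroid") brain pattern
# learned from training data; a new pattern is matched to the closest centroid.
import math

# Invented training data: letter -> recorded patterns (tiny feature vectors).
training = {
    "a": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "b": [[0.1, 0.9, 0.7], [0.2, 0.8, 0.9]],
}

def centroid(patterns):
    # Average each feature across all recorded patterns for a letter.
    return [sum(vals) / len(vals) for vals in zip(*patterns)]

centroids = {letter: centroid(patterns) for letter, patterns in training.items()}

def classify(new_pattern):
    # Pick the letter whose centroid is nearest to the new brain pattern.
    return min(centroids, key=lambda letter: math.dist(new_pattern, centroids[letter]))

print(classify([0.85, 0.15, 0.15]))  # falls in the "a" clump
print(classify([0.15, 0.85, 0.80]))  # falls in the "b" clump
```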

So a computer based “telepathy” is possible. But don’t expect us all to be able to communicate by mind alone over the internet any time soon. The approach involves having implants surgically inserted into the brain: in this case two computer chips connecting to your brain via 100 electrodes. The operation is a massive risk to take, and while perhaps justifiable for someone with a problem as severe as total paralysis, it is less obvious it is a good idea for anyone else. However, this shows at least it is possible to communicate written messages by mind alone, and once developed further could make life far better for severely disabled people in the future.

Yet again science fiction is no longer fantasy: it is possible, just not, as the science fiction writers perhaps originally imagined, by the power of a person’s mind alone.

Paul Curzon, Queen Mary University of London, Spring 2021.

AI Detecting the Scribes of the Dead Sea Scrolls

Computer science and artificial intelligence have provided a new way to do science: it was in fact one of the earliest uses of the computer. They are now giving new ways for scholars to do research in other disciplines such as ancient history, too. Artificial Intelligence has been used in a novel way to help understand how the Dead Sea Scrolls were written, and it turns out scribes in ancient Judea worked in teams.

The Dead Sea Scrolls are a collection of almost a thousand ancient documents, written around two thousand years ago, that were found in caves near the Dead Sea. The collection includes the oldest known written version of the Bible.

The cave where most of the Dead Sea Scrolls were found.

Researchers from the University of Groningen used artificial intelligence techniques to analyse a digitised version of the longest scroll in the collection, known as the Great Isaiah Scroll. They picked one letter, aleph, that appears thousands of times through the document, and analysed it in detail.

Two kinds of artificial intelligence program were used. The first, feature extraction, based on computer vision and image processing, was needed to recognise features in the images. At one level these features are the actual characters, but more subtly the aim here was for the features to correspond to ink traces reflecting the actual muscle movements of the scribes.

The second was machine learning. Machine Learning programs are good at spotting patterns in data – grouping the data into things that are similar and things that are different. A typical textbook example would be giving the program images of cats and of dogs. It would spot the pattern of features that corresponds to dogs, and the different pattern of features that corresponds to cats, and group each image into one or the other.

Here the data was all those alephs, or more specifically the features extracted from them. Essentially the aim was to find patterns based on the muscle movements of the original scribe of each letter. To the human eye the writing throughout the document looks very, very uniform, suggesting a single scribe wrote the whole thing. If that were the case, only one pattern would be found, with all the letters part of it and no clear way to split them. Despite this, the artificial intelligence evidence suggests there were actually two scribes involved. There were two patterns.

The research team found, by analysing the way the letters were written, that there were two clear groupings of letters. One group was written in one way and the other in a slightly different way. There were very subtle differences in the way strokes were written, such as in their thickness and the positions of the connections between strokes. This could just be down to variations in the way a single writer wrote letters at different times. However, the differences were not random, but split very clearly at a point halfway through the scroll. This suggests there were two writers who each worked on a different part. Because the characters were otherwise so uniform, those two scribes must have been making an effort to carefully mirror each other’s writing style so the letters looked the same to the naked eye.
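The grouping step is the kind of job clustering algorithms do. The sketch below uses scikit-learn’s k-means to split made-up “stroke feature” vectors into two groups without ever being told which letter came from which scribe; the Groningen team’s actual analysis was far more sophisticated, but the idea of letting the data split itself is the same.

```python
# Clustering sketch: split letter-shape feature vectors into two groups.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Invented features (e.g. stroke thickness, joint position) for 200 alephs:
# the first 100 drawn around one writing style, the rest around a subtly different one.
scribe_one = rng.normal(loc=[1.0, 0.50], scale=0.05, size=(100, 2))
scribe_two = rng.normal(loc=[1.1, 0.45], scale=0.05, size=(100, 2))
alephs = np.vstack([scribe_one, scribe_two])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(alephs)

# If two scribes really wrote the scroll, the cluster labels should split
# roughly at the halfway point, just as the researchers found.
print("First half average label: ", kmeans.labels_[:100].mean())
print("Second half average label:", kmeans.labels_[100:].mean())
```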

The research team have not only found out something interesting about the Dead Sea Scrolls, but also demonstrated a new way to study ancient handwriting. With a few exceptions, the scribes who wrote the ancient documents that have survived to the modern day, like the Dead Sea Scrolls, are generally anonymous. Thanks to leading-edge Computer Science, we now have a new way to find out more about them.

Explore the digitised version of the Dead Sea Scrolls yourself at www.deadseascrolls.org.il

– Paul Curzon, Queen Mary University of London

Losing the match? Follow the science. Change the kit!

Artificial Intelligence software has shown that two different Manchester United gaffers got it right in believing that kit and stadium seat colours matter if the team are going to win.

It is 1996. Sir Alex Ferguson’s Manchester United are doing the unthinkable. At half time they are losing 3-0 to lowly Southampton. Then the team return to the pitch for the second half and they’ve changed their kit. No longer are they wearing their normal grey away kit but are in blue and white, and their performance improves (if not enough to claw back such a big lead). The match becomes infamous for that kit change: the genius gaffer blaming the team’s poor performance on their kit seemed silly to most. Just play better football if you want to win!

Jump forward to 2021, and Manchester United Manager Ole Gunnar Solskjaer, who originally joined United as a player in that same year, 1996, tells a press conference that the club are changing the stadium seats to improve the team’s performance!

Is this all a repeat of previously successful mind games to deflect from poor performances? Or superstition, dressed up as canny management, perhaps. Actually, no. Both managers were following the science.

Ferguson wasn’t just following some gut instinct; he had been employing a vision scientist, Professor Gail Stephenson, who had been brought into the club to help improve the players’ visual awareness, getting them to exercise the muscles in their eyes, not just their legs! She had pointed out to Ferguson that the grey kit would make it harder for the players to pick each other out quickly. The Southampton match was presumably the final straw that gave him the excuse to follow her advice.

She was very definitely right, and modern vision Artificial Intelligence technology agrees with her! Colours do make things easier or harder to notice, and that can slow decision making in a way that matters on the pitch. 25 years ago the problem was grey kit merging into the grey background of the crowd. Now it is that red shirts merge into the background of an empty stadium of red seats.

It is all about how our brain processes the visual world and the saliency of objects. Saliency is just how much an object stands out, and that depends on how our brain processes information. Objects are much easier to pick out if they have high contrast, for example, like a red shirt on a black background.
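Real saliency models like DragonflyAI are much richer than this, but the core idea that high-contrast regions stand out can be sketched by measuring how different each pixel is from its surroundings. This simple centre-surround contrast map is only an illustration of the principle, not the DragonflyAI algorithm.

```python
# A toy contrast-based saliency map: pixels that differ most from the local
# average stand out the most. (Not the DragonflyAI algorithm, just the core idea.)
import numpy as np
from scipy.ndimage import uniform_filter

def saliency_map(grey_image: np.ndarray, neighbourhood: int = 15) -> np.ndarray:
    # Average brightness of the surroundings of each pixel.
    local_mean = uniform_filter(grey_image.astype(float), size=neighbourhood)
    # How different each pixel is from its surroundings (centre-surround contrast).
    contrast = np.abs(grey_image - local_mean)
    return contrast / (contrast.max() + 1e-9)  # normalise to the range 0..1

# Example: a uniform grey "crowd" with one bright "shirt" patch.
image = np.full((100, 100), 0.5)
image[40:60, 40:60] = 1.0
print("Most salient value in the map:", saliency_map(image).max())
```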

Peter McOwan and Hamit Soyel at Queen Mary combined vision research and computer science, creating an Artificial Intelligence (AI) that sees like humans in the sense that it predicts what will and won’t stand out to us, doing it in real time (see DragonflyAI: I see what you see). They used the program to analyse images from that infamous football match before and after the kit change and showed that the AI agreed with Gail Stephenson and Alex Ferguson. The players really were much easier for their team mates to see in the second half (see the DragonflyAI version of the scenes below).

Dragonfly highlights areas of a scene that are more salient to humans, so easier to notice. Red areas stand out the most. In the left image, wearing the grey kit, Ryan Giggs merges into the background. He is highly salient (red) in the right image, where he is in the blue and white kit.

Details matter and science can help teams that want to win in all sorts of ways. That includes computer scientists and Artificial Intelligence. So if you want an edge over the opposition, hire an AI to analyse the stadium scene at your next match. Changing the colour of the seats really could make a difference.

Find out more about DragonflyAI: https://dragonflyai.co/ [EXTERNAL]

– Paul Curzon, Queen Mary University of London

DragonflyAI: I see what you see

What use is a computer that sees like a human? Can’t computers do better than us? Well, such a computer can predict what we will and will not see, and there is BIG money to be gained doing that!

The Hong Kong Skyline


Peter McOwan’s team at Queen Mary spent 10 years doing exploratory research understanding the way our brains really see the world, exploring illusions, inventing games to test the ideas, and creating a computer model to test their understanding. Ultimately they created a program that sees like a human. But what practical use is a program that mirrors the oddities of the way we see the world? Surely a computer can do better than us: noticing all the things that we miss or misunderstand? Well, for starters the research opens up exciting possibilities for new applications, especially for marketeers.

The Hong Kong Skyline as seen by DragonflyAI


A fruitful avenue to emerge is ‘visual analytics’ software: applications that predict what humans will and will not notice. Our world is full of competing demands, overloading us with information. All around us things vie to catch our attention, whether a shop window display, a road sign warning of danger or an advertising poster.

Imagine: a shop has a big new promotion designed to entice people in, but no more people enter than normal. No one notices the display; their attention is elsewhere. Another company runs a web ad campaign, but it has no effect, as people’s eyes are pulled elsewhere on the screen. A third company pays to have its products appear in a blockbuster film. Again, a waste of money: in surveys afterwards no one knew the products had been there. A town council puts up a new warning sign at a dangerous bend in the road but the crashes continue. These are all situations where predicting in advance where people will look allows you to get it right. In the past this was done either by long and expensive user testing, perhaps using software that tracks where people look, or by having teams of ‘experts’ discuss what they think will happen. What if a program could make the predictions in a fraction of a second beforehand? What if you could tweak things repeatedly until your important messages could not be missed?

Queen Mary’s Hamit Soyel turned the research models into a program called DragonflyAI, which does exactly that. The program analyses all kinds of imagery in real-time and predicts the places where people’s attention will, and will not, be drawn. It works whether the content is moving or not, and whether it is in the real world, completely virtual, or both. This then gives marketeers the power to predict and so influence human attention to see the things they want. The software quickly caught the attention of big, global companies like NBC Universal, GSK and Jaywing who now use the technology.

Find out more about DragonflyAI: https://dragonflyai.co/ [EXTERNAL]