The Emoji Crystal Ball

Fairground fortune tellers claim to be able to tell a lot about you by staring into a crystal ball. They could tell far more about you (that wasn’t made up) by staring at your public social media profile. Even your use of emojis alone gives away something of who you are.

Reflective ball with dots of lights
Image by Hier und jetzt endet leider meine Reise auf Pixabay from Pixabay

Walid Magdy’s research team at Edinburgh University are interested in how much people unknowingly give away about themselves when they use social media. They have found that it’s possible to work out an awful lot about you from your social media activity. One of their experiments involved exploring emojis. About a fifth of posts on Twitter include emojis, so they wondered whether anything could be predicted about people by ignoring what they wrote and looking only at the emojis they used. They found that the way people use emojis in Twitter posts alone gives away whether they are male or female, as well as their ethnic background.

They started by taking a large number of tweets known to be written by either men or women and stripped out the words, leaving only the emojis. They then counted how often each group used different emojis. The differences were revealing: men and women showed clearly different overall patterns of emoji use, each favouring some emojis much more than the other group did.

Next, they used the emoji data for some of the people to train a machine learning system (creating what is known as a classifier). The classifier was given all the emojis each person used, together with whether that person was a man or a woman. From this it built up a detailed picture of what a typical man’s emoji profile looks like, and similarly what a typical woman’s looks like.

Given a new set of tweets from a single person, the classifier could then try to predict man or woman based on whether that person’s profile was closer to the male or the female pattern of emoji use. Walid’s team found their emoji classifier’s predictions were right about 80% of the time – essentially as accurate as doing the same thing based on the words people wrote. When they tried a similar experiment with ethnicity (was the person black, white or of another ethnicity), the predictions were even more accurate, getting it right 84% of the time.
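
To make the idea concrete, here is a minimal Python sketch of that “nearest profile” approach. Everything in it – the emoji lists, the group names, the tiny amount of data – is invented for illustration; the real classifier was trained on huge numbers of real tweets.

```python
# Toy sketch: reduce each user to emoji frequencies, build an average
# profile per group, then classify a new user by the closest profile.
from collections import Counter
import math

# Hypothetical, made-up training data: emojis stripped from users' tweets.
training = {
    "group_A": [["😂", "❤️", "😂", "😍"], ["❤️", "😍", "😂"]],
    "group_B": [["👍", "😂", "🔥"], ["🔥", "👍", "👍"]],
}

EMOJIS = sorted({e for users in training.values() for u in users for e in u})

def profile(emoji_list):
    """Turn one user's emojis into a frequency vector over EMOJIS."""
    counts = Counter(emoji_list)
    total = sum(counts.values()) or 1
    return [counts[e] / total for e in EMOJIS]

def average(vectors):
    return [sum(col) / len(col) for col in zip(*vectors)]

# "Training": one average emoji profile per group.
centroids = {group: average([profile(u) for u in users])
             for group, users in training.items()}

def classify(emoji_list):
    """Predict the group whose average profile is nearest."""
    v = profile(emoji_list)
    return min(centroids, key=lambda group: math.dist(v, centroids[group]))

print(classify(["😂", "😍", "❤️"]))  # closer to group_A's profile
print(classify(["👍", "🔥", "👍"]))  # closer to group_B's profile
```

With enough real data the average profiles become detailed enough that most new users land clearly nearer one pattern than the other.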

A lot can be worked out about you from apparently innocuous information that is publicly available as a result of your social media use. Even emojis give away something of who you are 😦

Paul Curzon, Queen Mary University of London, Spring 2021

– Based on a talk given by Walid Magdy at QMUL, May 2021.

Is your healthcare algorithm racist?

Algorithms are taking over decision making, and this is especially so in healthcare. But could the algorithms be making biased decisions? Could their decisions be racist? Yes, and such algorithms are already being used.

A medical operation showing an anaesthetist and the head of the patient
Image by David Mark from Pixabay

There is now big money to be made from healthcare software. One of the biggest areas is intelligent algorithms that help healthcare workers make decisions. Some even completely take over the decision making. In the US, software is widely used, for example, to predict who will most benefit from interventions. The more you help a patient the more it costs. Some people may just get better without extra help, but for others that help can mean the difference between avoiding a disability or not, or even between life and death. How do you tell? It matters because money is limited, so someone has to choose. You need to be able to predict outcomes with or without potential treatments. That is the kind of thing machine learning technology is generally good at. By looking at the history of lots and lots of past patients, their treatments and what happened, these artificial intelligence programs can spot patterns in the data and then make predictions about new patients.

This is what current commercial software does. Ziad Obermeyer, from UC Berkeley, decided to investigate how well the systems made those decisions. Working with a team combining academics and clinicians, they looked specifically at the differences between black and white patients in one widely used system. It made decisions about whether to put patients on more expensive treatment programmes. What they found was that the system had a big racial bias in the decisions it made. For patients that were equally ill, it was much more likely to recommend white patients for treatment programmes.

One of the problems with machine learning approaches is that it is hard to see why they make the decisions they do. They just look for patterns in data, and who knows what patterns they will find to base their decisions on? The team had access to the data of a vast number of patients the algorithm had made recommendations about, the decisions made about them and the outcomes. This meant they could evaluate whether patients were treated fairly.

The data given to the algorithm specifically excluded race, supposedly to stop it making decisions on colour of skin. However, despite not having that information, that was ultimately what it was doing. How?

The team found that its decision making was based on predicting healthcare costs rather than how ill people actually were. The greater the cost saving of putting a person on a treatment programme, the more likely it was to recommend them. At first sight this seems reasonable, given the aim is to make the best use of a limited budget. The system was totally fair in allocating treatment based on cost. However, when the team looked at how ill people were, black people had to be much sicker than white people before they would be recommended for help. There are lots of reasons more money might be spent on white people, skewing the system. For example, they may be more likely to seek treatment earlier or more often. Being poor can make it harder to seek healthcare, because of difficulties getting to hospital, difficulties taking time off work, and so on. If more of the black people in the data used to train the system were poor, they will have sought help less often, so less will have been spent on them. The system had spotted patterns like this, and that was how it was making its decisions. Even though it wasn’t told who was black and who was white, it had learnt to be biased.

There is an easy way to fix the system. Instead of including data about costs and having it use that as the basis of decision making, you can use direct measures of how ill a person is: for example, using the number of different conditions the patient is suffering from and the rule of thumb that the more complications you have, the more you will benefit from treatment. The researchers showed that if the system was trained this way instead, the racial bias disappeared. Access to healthcare became much fairer.
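
A tiny Python sketch of that design choice, using scikit-learn. The feature names and numbers are entirely made up, and the real commercial system is far more complex, but the point shows up in the code: the only difference between the two models is the target you train them to predict – past cost, or a direct measure of illness.

```python
# Minimal sketch (invented data): what you choose as the training target
# determines what the model learns to predict.
from sklearn.linear_model import LinearRegression

# Hypothetical patient records: [age, blood_pressure, visits_last_year]
X = [[64, 140, 3], [51, 120, 1], [72, 160, 7], [45, 130, 2]]

past_cost = [3200, 800, 9100, 1500]   # dollars spent on each patient last year
num_conditions = [4, 1, 6, 2]         # active chronic conditions (illness proxy)

cost_model = LinearRegression().fit(X, past_cost)       # learns to predict spending
need_model = LinearRegression().fit(X, num_conditions)  # learns to predict need

new_patient = [[58, 150, 2]]
print(cost_model.predict(new_patient))  # who would cost the most to treat?
print(need_model.predict(new_patient))  # who is the most ill?
```

If spending in the training data is skewed, because some groups find it harder to access care, the cost model inherits that skew even though race never appears as a feature.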

If we are going to allow machines to take healthcare decisions for us based on their predictions, we have to make sure we know how they make those predictions, and make sure they are fair. You should not lose the chance of the help you need just because of your ethnicity, or because you are poor. We must take care not to build racist algorithms. Just because computers aren’t human doesn’t mean they can’t be humane.

– Paul Curzon, Queen Mary University of London, Spring 2021

Download Issue 27 of the cs4fn magazine on Smart Health here.

This post and issue 27 of the cs4fn magazine have been funded by EPSRC as part of the PAMBAYESIAN project.

I’m feeling Moo-dy today

It has long been an aim of computer scientists to develop software that can work out how a person is feeling. Are you happy or sad, frustrated or lonely? If the software can tell, then it can adapt to your moods, changing its behaviour or offering advice. Suresh Neethirajan from Wageningen University in the Netherlands has gone a step further. He has developed a program that detects the emotions of farm animals.

Image by Couleur from Pixabay 

Working out how someone is feeling is called “Sentiment Analysis” and there are lots of ways computer scientists have tried to do it. One way is based on looking at the words people speak or write. The way people speak, such as their tone of voice, also gives away information about emotions. Another way is based on our facial expressions and body language. A simple version of sentiment analysis involves working out whether someone is feeling a positive emotion (like being happy or excited) or a negative emotion (such as being sad or angry), rather than trying to determine the precise emotion.
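
As a flavour of the simplest word-based version, here is a toy positive-versus-negative scorer in Python. The word lists are invented and tiny; real sentiment analysis systems use machine learning over far richer features, as well as tone of voice and facial expressions.

```python
# Toy sentiment analysis: count positive and negative words from a
# small hand-made lexicon and compare the totals.
POSITIVE = {"happy", "great", "love", "excited", "good"}
NEGATIVE = {"sad", "angry", "terrible", "hate", "bad"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this and feel happy"))       # positive
print(sentiment("this is terrible and I feel sad"))  # negative
```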

Applications range from deciding how a person might vote to predicting what they might buy. A more futuristic use is to help medics make healthcare decisions. When a patient says they aren’t feeling too bad, are they actually fine or are they just being stoical, for example? And how much pain or stress are they actually suffering?

But why would you want to know the emotions of animals? One really important application is to know when an animal is, or is not, in distress. Knowing that can help a farmer look after that animal, and also work out the best way to look after animals more generally. It might help farmers design nicer living conditions, and work out more humane ways to slaughter animals that involve the least suffering. Avoiding cruel conditions is reason enough on its own, but with happy farm animals you might also improve the yield of milk, the quality of meat or how many offspring animals have in their lifetime. A farmer certainly shouldn’t want their animals to be so upset that they start to self-harm, which can be a problem when animals are kept in poor conditions. Not only is it cruel, it can lead to infections which cost money to treat, and treating them with antibiotics helps spread antibiotic resistance. Having accurate ways to quickly and remotely detect how animals are feeling would be a big step forward for animal welfare.

But how to do it? While some scientists are actually working on understanding animal language, recognising body language is an easier first step towards understanding animal emotions. A lot is actually known about animal expressions and body language, and what they mean. If a dog is wagging its tail, it is happy, for example. Suresh focussed on facial expressions in cows and pigs. What kind of expressions do they have? Cows, for example, are likely to be relaxed if their eyes are half-closed and their ears are backwards or hung down. If you can see the whites of their eyes, on the other hand, then they are probably stressed. Pigs that are moving their ears around very quickly, by contrast, are likely to be stressed. If their ears are hanging and flipping in the direction of their eyes, though, then they are in a much more neutral state.

There are lots of steps to go through in creating a system to recognise emotions. The first for Suresh was to collect lots of pictures of cows and pigs from different farms. He collected almost 4000 images from farms in Canada, the USA and India. Each image was labelled by human experts according to whether it showed a positive, neutral or negative emotional state, based on what was already known about how animal expressions link to their emotions.

Sophisticated image processing software was then used to automatically pick out the animals’ faces and locate the individual features, such as eyes and ears. The orientation and other properties of those facial features, such as whether the ears were hanging down or up, were also determined. This processed data was then fed into a machine learning system to train it. Because the data was labelled, the program knew what a human judged the different expressions to mean in terms of emotions, and so could work out which patterns in the data represented each emotional state.

Once trained, the system was given new images without the labels, to judge how accurate it was. It made a judgement for each and this was compared to the human judgement of the animal’s state. Human and machine agreed 86% of the time. More work is needed before such a system could be used on farms, but it opens up the possibility of using video cameras around a farm to raise the alarm when animals are suffering, for example.
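
Here is a minimal Python sketch of those final stages, using scikit-learn. The feature names and numbers are invented stand-ins for the measurements the image-processing step would produce; the real system was trained and tested on almost 4000 labelled images.

```python
# Minimal sketch: train a classifier on labelled facial-feature
# measurements, then measure agreement with humans on unseen examples.
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical features per image: [eye_openness, whites_visible, ear_angle]
X = [
    [0.4, 0, -30], [0.5, 0, -40], [0.3, 0, -35],  # relaxed-looking animals
    [0.9, 1,  20], [1.0, 1,  30], [0.8, 1,  25],  # stressed-looking animals
]
y = ["positive", "positive", "positive", "negative", "negative", "negative"]

# Hold some labelled images back so we can test on ones it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0, stratify=y)

model = DecisionTreeClassifier().fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))  # agreement with humans
```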

Machine learning is helping humans in lots of ways. With systems like this machine learning could soon be helping animals live better lives too.

Paul Curzon, Queen Mary University of London, Spring 2021

Standup Robots

‘How do robots eat pizza?’… ‘One byte at a time’. Computational Humour is real, but it’s not jokes about computers, it’s computers telling their own jokes.

Robot performing
Image from istockphoto

Computers can create art, stories, slogans and even magic tricks. But can computers perform themselves? Can robots invent their own jokes? Can they tell jokes?

Combining Artificial Intelligence, computational linguistics and humour studies (yes, you can study how to be funny!), a team of Scottish researchers made an early attempt at computerised standup comedy! They came up with Standup (System To Augment Non-speakers’ Dialogue Using Puns): a program that generates riddles for kids with language difficulties. Standup has a dictionary and a joke-building mechanism, but it does not perform; it just creates the jokes. You will have to judge for yourself whether the puns are funny. You can download the software from here. What makes a pun funny? It is about a word having two meanings at exactly the same time in a sentence. It is also about generating an expectation that you then break: a key idea at the core of creativity too.
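
To give a flavour of mechanical pun-building (a toy, not Standup’s actual mechanism), here is a tiny Python riddle generator: it takes a two-word phrase, swaps each word for a rough synonym from a small hand-made dictionary, and drops the result into a riddle template. Standup itself works from a much larger dictionary and a range of riddle types.

```python
# Toy riddle generator: describe a familiar phrase in other words,
# setting up an expectation that the punchline then snaps back to.
import random

SYNONYMS = {
    "hot": "spicy",
    "dog": "puppy",
    "light": "not very heavy",
    "house": "home",
}

PHRASES = ["hot dog", "light house"]

def make_riddle(phrase):
    first, second = phrase.split()
    clue = f"{SYNONYMS[first]} {SYNONYMS[second]}"
    return f"What do you call a {clue}? A {phrase}!"

print(make_riddle(random.choice(PHRASES)))
# e.g. What do you call a spicy puppy? A hot dog!
```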

A research team at Virginia Tech in the US created a system that started to learn about funny pictures. Having defined a ‘funniness score’ they created a computational model for humorous scenes, and trained it to predict funniness, perhaps with an eye to spotting pics for social media posting, or not.

But are there funny robots out there? Yes! RoboThespian, programmed by researchers at Queen Mary University of London, and Data, created by researchers at Carnegie Mellon University, are both robots programmed to do stand-up comedy. Data has a bank of jokes and responds to audience reaction. His developers don’t actually know what he will do when he performs, as he is learning all the time. At his first public gig, he got the crowd laughing, but his timing was poor. You can see his performance online, in a TED Talk.

RoboThespian did a gig at the London Barbican alongside human comedians. The performance was a live experiment to understand whether the robot could ‘work the audience’ as well as a human comedian. They found that even relatively small changes in the timing of delivery make a big difference to audience response.

What have these all got in common? Artificial Intelligence, machine learning and studies to understand what humour actually is are being combined to make something that is funny. Comedy is perhaps the pinnacle of creativity. It’s certainly not easy for a human to write even one joke, so think how hard it is to distill that skill into algorithms and train a computer to create loads of them.

You have to laugh!

Watch RoboThespian [EXTERNAL]

– Jane Waite, Queen Mary University of London, Summer 2017

Download Issue 22 of the cs4fn magazine “Creative Computing” here

Lots more computing jokes on our Teaching London Computing site

Sabine Hauert: Swarm Engineer

Based on a 2016 talk by Sabine Hauert at the Royal Society

Sabine Hauert is a swarm engineer. She is fascinated by the idea of making use of swarms of robots. Watch a flock of birds and you see that they have both complex and beautiful behaviours. It helps them avoid predators very effectively, for example, so much so that many animals behave in a similar way. Predators struggle to fix on any one bird in all the chaotic swirling. Sabine’s team at the University of Bristol are exploring how we can solve our own engineering problems: from providing communication networks in a disaster zone to helping treat cancer, all based on the behaviours of swarms of animals.

A murmuration – a flock of starlings

Sabine realised that flocks of birds have properties that are really interesting to an engineer. Their ability to scale is one. It is often easy to come up with a solution to a problem that works in a small ‘toy’ system, but when you want to use it for real, the size of the problem defeats you. With a flock, birds just keep arriving, and the flock keeps working, getting bigger and bigger. It is common to see thousands of starlings behaving like this – around Brighton Pier most winter evenings, for example. Flocks can even be made up of millions of birds, all swooping and swirling together, never colliding, always staying as a flock. It is an engineering solution that scales up to massive problems. If you can build a system to work like a flock, it will have a similar ability to scale.

Flocks of birds are also very robust. If one bird falls out of the sky, perhaps because it is caught by a predator, the flock itself doesn’t fail, it continues as if nothing happened. Compare that to most systems humans create. Remove one component from a car engine and it’s likely that you won’t be going anywhere. This kind of robustness from failure is often really important.

Swarms are an example of emergent behaviour. If you look at just one bird you can’t tell how the flock works as a whole. In fact, each bird is just following very simple rules. Each one tracks the positions of a few nearest neighbours, using that information to make simple decisions about how to move. That is enough for the whole complex behaviour of the flock to emerge. Despite all that fast and furious movement, the birds never crash into each other. Fascinated, Sabine started to explore how swarms of robots might be used to solve problems for people.
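
The classic way to capture those local rules in code is the “boids” model. Below is a minimal Python sketch (not Sabine’s code): each bird steers using only its few nearest neighbours – moving towards their centre, matching their velocity and keeping a minimum distance – and flock-like behaviour emerges with no leader and no global plan.

```python
# Minimal "boids" flocking sketch: cohesion, alignment and separation
# rules applied using only each bird's nearest neighbours.
import math
import random

class Bird:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(birds, k=5, too_close=5.0):
    for b in birds:
        # The k nearest neighbours, by distance.
        near = sorted((o for o in birds if o is not b),
                      key=lambda o: math.dist((b.x, b.y), (o.x, o.y)))[:k]
        cx = sum(o.x for o in near) / k - b.x    # cohesion: head for the centre
        cy = sum(o.y for o in near) / k - b.y
        ax = sum(o.vx for o in near) / k - b.vx  # alignment: match their velocity
        ay = sum(o.vy for o in near) / k - b.vy
        sx = sum(b.x - o.x for o in near
                 if math.dist((b.x, b.y), (o.x, o.y)) < too_close)  # separation
        sy = sum(b.y - o.y for o in near
                 if math.dist((b.x, b.y), (o.x, o.y)) < too_close)
        b.vx += 0.01 * cx + 0.05 * ax + 0.1 * sx
        b.vy += 0.01 * cy + 0.05 * ay + 0.1 * sy
    for b in birds:  # move everyone after all the decisions are made
        b.x += b.vx
        b.y += b.vy

flock = [Bird() for _ in range(50)]
for _ in range(100):
    step(flock)
```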

Her first idea was to create swarms of flying robots to work as a communications network, providing wi-fi coverage in places it would otherwise be hard to set up a network. This might be a good solution in a disaster area, for example, where there is no other infrastructure, but communication is vital. You want it to scale over the whole disaster area quickly and easily, and it has to be robust. She set about creating a system to achieve this.

The robots she designed were very simple fixed-wing, propeller-powered model planes. Each had a compass so it knew which direction it was pointing, and was able to talk to those nearest to it using wi-fi signals. It could also tell who its nearest neighbours were. The trick was to work out how to design the behaviour of an individual robot so that appropriate swarming behaviour emerged. At any moment each had to decide how much to turn so as to avoid crashing into another while maintaining the flock, and coverage. You could try to work out the best rules by hand. Instead, Sabine turned to machine learning.

“Throwing those flying robots

and seeing them flock

was truly magical”

The idea of machine learning is that instead of trying to devise algorithms that solve problems yourself, you write an algorithm for how to learn. The program then learns the best solution for itself by trial and error. Sabine created a simple first program for her robots that gave them fairly random behaviour. The machine learning program then used a process modelled on evolution to gradually improve it. After all, evolution worked for animals! The way this is done is that variations on the initial behaviour are trialled in simulators and only the most successful are kept. Further random changes are made to those, and the new versions are trialled again. This is continued over thousands of generations, each generation getting that little bit better at flocking, until eventually a behaviour for the individual robots emerges that leads them to swarm together.
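
Here is a minimal Python sketch of that evolutionary loop. The fitness function is just a made-up stand-in for the flight simulator that scored how well each candidate behaviour flocked; everything else – keep the best, make small random changes, repeat over many generations – is the loop described above.

```python
# Minimal evolutionary-learning sketch: evolve a few controller
# parameters by mutation and selection against a fitness score.
import random

def fitness(params):
    # Stand-in for "how well did the simulated swarm flock with these
    # parameters?" - in reality this score comes from a simulator.
    target = [0.3, -0.7, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def mutate(params, amount=0.1):
    return [p + random.gauss(0, amount) for p in params]

# Start from fairly random controller parameters.
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                     # keep the most successful
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]  # vary them to refill

best = max(population, key=fitness)
print(fitness(best))  # close to 0: the behaviour has homed in on the target
```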

Sabine has now moved on to thinking about a situation where swarms of trillions of individuals are needed: nanomedicine. She wants to create nanobots that are each smaller than the width of a strand of hair and can be injected into cancer patients. Once inside the body they will search out and stick themselves to tumour cells. The tumour cells gobble them up, at which point they deliver drugs directly inside the rogue cell. How do you make them behave in a way that gives the best cancer treatment, though? For example, how do you stop them all just sticking to the same outer cancer cells? One way might be to give them a simple swarm behaviour that allows them to go to different depths and only then switch on their stickiness, allowing them to destroy all the cancer cells. This is the sort of thing Sabine’s team are experimenting with.

Swarm engineering has all sorts of other practical applications, and while Sabine is leading the way, some time soon we may need lots more swarm engineers, able to design swarm systems to solve specific problems. Might that be you?

Explore swarm behaviour using the Oxford Turtle system [EXTERNAL] (click the play button top centre) to see how to run a flocking simulation as well as program your own swarms.

Paul Curzon, Queen Mary University of London

What’s on your mind?

Telepathy is the supposed Extra Sensory Perception ability to read someone else’s mind at a distance. Whilst humans do not have that ability, brain-computer interaction researchers at Stanford have just made the high tech version a virtual reality.

Image by Andrei Cássia from Pixabay

It has long been known that by using brain implants or electrodes on a person’s head it is possible to tell the difference between simple thoughts. Thinking about moving parts of the body gives particularly useful brain signals. Thinking about moving your right arm generates different signals to thinking about moving your left leg, for example, even if you are paralysed so cannot actually move at all. Telling two different things apart is enough to communicate – it is the basis of binary, and so of how all computer-to-computer communication is done. This led to the idea of the brain-computer interface, where people communicate with and control a computer with their mind alone.

Stanford researchers made a big step forward in 2017, when they demonstrated that paralysed people could move a cursor on a screen by thinking of moving their hands in the appropriate direction. This created a point-and-click interface – a mind mouse – for the paralysed. Impressively, the speed and accuracy were as good as for people using keyboard applications.

Stanford researchers have now gone a step further and used the same idea to turn mental handwriting into actual typing. The person just thinks of writing letters with an imagined pen on imagined paper; the brain-computer interface picks up the thoughts of those subtle movements and the computer converts them into actual letters. Again the speed and accuracy are as good as most people can manage when typing. The paralysed participant concerned could communicate 18 words a minute and made virtually no mistakes at all: when the system was combined with auto-correction software, like the kind we all now use to correct our typing mistakes, it got letters right 99% of the time.

The system has been made possible by advances in both neuroscience and computer science. Recognising the letters being mind-written involves distinguishing very subtle differences in the patterns of neurons firing in the brain. Recognising patterns is, however, exactly what machine learning algorithms do. They are trained on lots of data and pick out groups of similar data. If told what letter the person was actually trying to communicate, they can link that letter to the pattern detected. Each letter will not lead to exactly the same pattern of brain signals firing every time, but the patterns for one letter will largely clump together. Other letters will also clump, but with slightly different patterns of firings. Once trained, the system works by taking the pattern of brain signals just seen and matching it to the nearest clump. The computer then guesses that the letter being communicated is the one belonging to that nearest clump. If the system is highly accurate, as this one was at 94% (before autocorrection), then it means the patterns of most letters are very distinct. A mind-written letter rarely fell into a gap between patterns, where it could as easily have been one letter as another.
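
Here is a minimal Python sketch of that “nearest clump” idea, with made-up numbers standing in for the real recordings (which come from around 100 electrodes and use much more sophisticated machine learning): average the patterns seen for each letter during training, then decode a new pattern as the letter whose average it is closest to.

```python
# Minimal sketch: decode a new brain-signal pattern as the letter whose
# average training pattern it is nearest to.
import math

# Hypothetical training data: firing-rate vectors recorded while the
# person imagined writing each letter.
training = {
    "a": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "b": [[0.1, 0.9, 0.8], [0.2, 0.8, 0.9]],
}

# One average ("clump centre") per letter.
centroids = {letter: [sum(col) / len(col) for col in zip(*examples)]
             for letter, examples in training.items()}

def decode(pattern):
    return min(centroids, key=lambda L: math.dist(pattern, centroids[L]))

print(decode([0.85, 0.15, 0.2]))  # "a"
print(decode([0.15, 0.85, 0.8]))  # "b"
```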

So a computer based “telepathy” is possible. But don’t expect us all to be able to communicate by mind alone over the internet any time soon. The approach involves having implants surgically inserted into the brain: in this case two computer chips connecting to your brain via 100 electrodes. The operation is a massive risk to take, and while perhaps justifiable for someone with a problem as severe as total paralysis, it is less obvious it is a good idea for anyone else. However, this shows at least it is possible to communicate written messages by mind alone, and once developed further could make life far better for severely disabled people in the future.

Yet again science fiction is no longer fantasy. Communicating by the power of a person’s mind alone is possible, just not in the way the science fiction writers perhaps originally imagined.

Paul Curzon, Queen Mary University of London, Spring 2021.

Punk robots learn to pogo

It’s the second of three punk gigs in a row for Neurotic and the PVCs, and tonight they’re sounding good. The audience seem to be enjoying it too. All around the room the people are clapping and cheering, and in the middle of the mosh pit the three robots are dancing. They’re jumping up and down in the style of the classic punk pogo, and they’ve been doing it all night whenever they like the music most. Since Neurotic came on the robots can hardly keep still. In fact Neurotic and the PVCs might be the best, most perfect band for these three robots to listen to, since their frontman, Fiddian, made sure they learned to like the same music he does.

Programming punks

It’s a tough task to get a robot to learn what punk music sounds like, but there are lots of hints lurking in our own brains. Inside your brain are billions of connected cells called neurons that can send messages to one another. When and where the messages get sent depends on how strong each connection is, and we forge new connections whenever we learn something.

What the robots’ programmers did was to wire up a network of computerised connections like the ones in a real brain. Then they let the robots sample lots of different kinds of music and told them what it was, like reggae, pop, and of course, Fiddian’s collection of classic punk. That way the connections in the neural network got stronger and stronger – the more music the robots listened to, the easier it got for them to recognise what kind of stuff it was. When they recognised a style they’d been told to look out for, they would dance, firing a cylinder of compressed air to make them jump up and down.
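
As a rough illustration of connections getting stronger and weaker with each labelled example, here is a toy single-neuron “network” in Python. The audio features and numbers are invented, and the robots’ real network and sound analysis are far richer, but the training idea is the same: nudge the connection weights whenever the guess about a labelled piece of music is wrong.

```python
# Toy single-neuron classifier: connection weights are strengthened or
# weakened each time it gets a labelled example of music wrong.
import random

# Hypothetical features per song: [tempo, distortion, vocal_snarl]
songs = [
    ([0.9, 0.8, 0.9], 1),  # punk
    ([0.8, 0.9, 0.8], 1),  # punk
    ([0.4, 0.1, 0.2], 0),  # reggae
    ([0.6, 0.2, 0.1], 0),  # pop
]

weights = [random.uniform(-0.5, 0.5) for _ in range(3)]
bias = 0.0

def predict(features):
    activation = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 if activation > 0 else 0

# Training: adjust the connections whenever the guess is wrong.
for _ in range(100):
    for features, is_punk in songs:
        error = is_punk - predict(features)
        weights = [w + 0.1 * error * f for w, f in zip(weights, features)]
        bias += 0.1 * error

new_song = [0.85, 0.9, 0.85]  # sounds a lot like classic punk
print("pogo!" if predict(new_song) else "stand still")
```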

The robots’ first gig

The last step was to tell the robots to go out and enjoy some punk. The programmers turned off the robots’ neural connections to other kinds of music, so no Kylie or Bob Marley would satisfy them. They would only dance to the angry, churning sound of punk guitars. The robots got dressed up in spray-painted leather, studded belts and safety pins, so with their bloblike bodies they looked like extra-tough boxing gloves on sticks. Then the three two-metre tall troublemakers went to their first gig.

Whenever a band begins to play, the robots’ computer system analyses the sound coming from the stage. If the patterns in it look the same as the idea of punk music they’ve learned, the robots begin to dance. If the pattern isn’t quite right, they stand still. For lots of songs they hardly dance at all, which might seem weird since all the bands that are playing the gig call themselves punk bands. Except there are many different styles of punk music, and the robots have been brought up listening to Fiddian’s favourites. The other styles aren’t close enough to the robots’ idea of punk – they’ve developed taste, and it’s the same as Fiddian’s. Which is why the robots go crazy for Neurotic and the PVCs. Fiddian’s songs are influenced by classic punk like the Clash, the Sex Pistols and Siouxsie & the Banshees, which is exactly the music he’s taught the robots to love. As the robots jump wildly up and down, it’s clear that Neurotic and the PVCs now have three tall, tough, computerised superfans.

Machines Inventing Musical Instruments

by Paul Curzon, Queen Mary University of London

based on a 2016 talk by Rebecca Fiebrink

Gesturing hands copyright www.istock.com 1876387

Machine Learning is the technology driving driverless cars, recognising faces in your photo collection and more, but how could it help machines invent new instruments? Rebecca Fiebrink of Goldsmiths, University of London is finding out.

Rebecca is helping composers and instrument builders to design new musical instruments and giving them new ways to perform. Her work has also shown that machine learning provides an alternative to programming as a way to quickly turn design ideas into prototypes that can be tested.

Suppose you want to create a new drum machine-based musical instrument that is controlled by the wave of a hand: perhaps a fist means one beat, whereas waggling your fingers brings in a different beat. To program a prototype of your idea, you would need to write code that could recognize all the different hand gestures, perhaps based on a video feed. You would then have some kind of decision code that chose the appropriate beat. The second part is not too hard, perhaps, but writing code to recognize specific gestures in video is a lot harder, needing sophisticated programming skills. Rebecca wants even young children to be able to do it!

How can machine learning help? Rebecca has developed a machine learning program with a difference. It takes sensor input – sound, video, in fact just about any kind of sensor you can imagine. It then watches, listens…senses what is happening and learns to associate what it senses with different actions it should take. With the drum machine example, you would first select one of the kinds of beats. You then make the gesture that should trigger it: a fist perhaps. You do that a few times so it can learn what a fist looks like. It learns that the patterns it is sensing are to be linked with the beat you selected. Then you select the next beat and show it the next gesture – waggling your fingers – until it has seen enough examples. You keep doing this with each different gesture you want to control the instrument. In just a few minutes you have a working machine to try. It is learning by example how the instrument you are wanting works. You can try it, and then adjust it by showing it new examples if it doesn’t quite do what you want.
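
Here is a minimal Python sketch of that learn-by-example loop (not Rebecca’s actual software, and the gesture features are invented): a handful of labelled examples per gesture, a nearest-neighbour classifier, and a function you could call on each new frame of sensor data to choose the beat.

```python
# Minimal learn-by-example sketch: map gesture features to beats using
# a nearest-neighbour classifier trained on a few demonstrations.
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical hand features from a video or sensor feed:
# [finger_spread, hand_height, movement_speed]
examples = [
    ([0.1, 0.5, 0.2], "heavy_beat"),     # a fist, shown a few times
    ([0.2, 0.5, 0.1], "heavy_beat"),
    ([0.9, 0.6, 0.7], "skittery_beat"),  # waggling fingers
    ([0.8, 0.7, 0.8], "skittery_beat"),
]

X = [features for features, _ in examples]
y = [beat for _, beat in examples]

model = KNeighborsClassifier(n_neighbors=1).fit(X, y)

def react(features):
    """Called on every new frame: trigger the beat of the nearest example."""
    return model.predict([features])[0]

print(react([0.15, 0.5, 0.15]))   # heavy_beat
print(react([0.85, 0.65, 0.75]))  # skittery_beat
```

Adding a few more examples and retraining takes moments, which is what makes this kind of tinkering by example so quick to explore.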

It is learning by example how the instrument you are wanting works.

Rebecca realised that this approach of learning by example gives a really powerful new way to support creativity: to help designers design. In the traditional way machine learning is used, you start with lots of examples of the things that you want it to recognize – lots of pictures of cats and dogs, perhaps. You know the difference, so you label all these training pictures as cats or dogs, so the program knows which pictures to form each of the two patterns from. Your aim is for the machine to learn the difference between cat and dog patterns so it can decide for itself when it sees new pictures.

When designing something like a new musical instrument though, you don’t actually know exactly what you want at the start. You have a general idea but will work out the specifics as you go. You tinker with the design, trying new things and keeping the ideas that work, gradually refining your thoughts about what you want as you refine the design of the instrument. The machine learning program can even help by making mistakes – it might not have learnt exactly what you were thinking but as a result makes some really exciting sound you never thought of. You can then explore that new idea.

One of Rebecca’s motivations in wanting to design new instruments is to create accessible instruments that people with a wide range of illnesses and disabilities can play. The idea is to adapt the instrument to the kinds of movement the person can actually do. The result is a tailored instrument, perfect for each person. An advantage of this approach is that you can turn a whole room, say, into an instrument so that every movement does something: an instrument that it’s impossible not to play. It is a play space to explore.

Playing an instrument suddenly really is just playing.