by Peter W McOwan and Paul Curzon, Queen Mary University of London
(from the archive)
By understanding the way hoverflies mate, computer scientists found a way to sneak up on humans, giving a way to make games harder.
When hoverflies get the hots for each other they make some interesting moves. Biologists had noticed that as one hoverfly moves towards a second to try and mate, the approaching fly doesn’t go in a straight line. It makes a strange curved flight. Peter and his student Andrew Anderson thought this was an interesting observation and started to look at why it might be. They came up with a cunning idea. The hoverfly was trying to sneak up on its prospective mate unseen.
The route the approaching fly takes matches the movements of the prospective mate in such a way that, to the mate, the fly in the distance looks like it’s far away and ‘probably’ stationary.
How does it do this? Imagine you are walking across a field with a single tree in it, and a friend is trying to sneak up on you. Your friend starts at the tree and moves in such a way that they are always in direct line of sight between your current position and the tree. As they move towards you they are always silhouetted against the tree. Their motion towards you is mimicking the stationary tree’s apparent motion as you walk past it… and that’s just what the hoverfly does when approaching a mate. It’s a stealth technique called ‘active motion camouflage’.
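The geometry of the trick is simple enough to simulate. The sketch below is an illustrative toy, not Anderson and McOwan's actual model: it keeps a pursuer on the straight line from a fixed background point (the "tree") through the target's current position, creeping outwards along that line a little each step.

```python
# A minimal 2D sketch of active motion camouflage. The pursuer always
# sits on the line from a fixed background point to the target, so from
# the target's viewpoint it stays silhouetted against that point.

def camouflage_path(background, target_positions, speed_fraction=0.1):
    """Return the pursuer's positions, one per target position."""
    bx, by = background
    path = []
    r = 0.0  # distance travelled out from the background point
    for tx, ty in target_positions:
        # Unit vector from the background point towards the target.
        dx, dy = tx - bx, ty - by
        dist = (dx * dx + dy * dy) ** 0.5
        r = min(r + speed_fraction * dist, dist)  # creep closer each step
        path.append((bx + r * dx / dist, by + r * dy / dist))
    return path

# A target walking across the field, past the "tree" at the origin.
target = [(5.0, -2.0 + 0.5 * t) for t in range(10)]
print(camouflage_path((0.0, 0.0), target)[:3])
```

However the target moves, the pursuer is always exactly between the tree and the target, so it only ever appears to the target as a distant stationary point that slowly looms larger.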
By building a computer model of the mating flies, the team were able to show that this complex behaviour can actually be done with only a small amount of ‘brain power’. They went on to show that humans are also fooled by active motion camouflage. They did this by creating a computer game where you had to dodge missiles. Some of those missiles used active motion camouflage. The missiles using the fly trick were the most difficult to spot.
It just goes to show: there is such a thing as a useful computer bug.
There are many ways Artificial Intelligences might create art. Breeding a colony of virtual ants is one of the most creative.
Photogrowth from the University of Coimbra does exactly that. The basic idea is to take an image and paint an abstract version of it. Normally you would paint with brush strokes. In ant paintings you paint with the trails of hundreds of ants as they crawl over the picture, depositing ink rather than the normal chemical trails ants use to guide other ants to food. The colours in the original image act as food for the ants, which absorb energy from its bright parts. They then use up energy as they move around. They die if they don’t find enough food, but reproduce if they have lots. The results are highly novel swirl-filled pictures.
The program uses vector graphics rather than pixel-based approaches. In pixel graphics, an image is divided into a grid of squares and each allocated a colour. That means when you zoom in to an area, you just see larger squares, not more detail. With vector graphics, the exact position of the line followed is recorded. That line is just mapped on to the particular grid of the display when you view it. The more pixels in the display, the more detailed the trail is drawn. That means you can zoom in to the pictures and just see ever more detail of the ant trails that make them up.
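The difference is easy to see in code. This toy sketch (nothing to do with Photogrowth's real implementation) stores a trail segment as exact endpoints and "draws" it on grids of two sizes: the finer grid gets more points out of the very same stored line.

```python
# A toy illustration of why vector graphics zoom better than pixel
# graphics: the same stored line can be drawn at any resolution,
# producing more detail the finer the display grid.

def rasterise_line(start, end, grid_size):
    """Sample the line at one point per step across a square grid."""
    (x0, y0), (x1, y1) = start, end
    points = []
    for i in range(grid_size + 1):
        t = i / grid_size
        x = x0 + t * (x1 - x0)
        y = y0 + t * (y1 - y0)
        # Snap to the nearest cell of a grid_size x grid_size display.
        points.append((round(x * grid_size), round(y * grid_size)))
    return sorted(set(points))

line = ((0.0, 0.0), (1.0, 0.7))    # stored exactly, as a vector
low = rasterise_line(*line, 10)    # coarse display: few pixels
high = rasterise_line(*line, 100)  # fine display: much more detail
print(len(low), len(high))
```

A pixel image, by contrast, would have thrown the endpoints away and kept only the coarse grid, so zooming in could never recover the extra detail.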
Because the virtual ants wander around at random, each time you run the program you will get a different image. However, there are lots of ways to control how ants can move around their world, and exploring the options by hand would only ever uncover a small fraction of the possibilities. Photogrowth therefore uses a genetic algorithm. Rather than set all the options of ant behaviour for each image, you help design a fitness function for the algorithm. You do this by adjusting the importance of different aspects, like the thickness of trail left and the extent to which the ants will try and cover the whole canvas. In effect you become a breeder of a species of ant that produce trails, and so images, you will find pleasing. Once you’ve chosen the fitness function, the program evolves a colony of ants based on it, and they then paint you a picture with their trails.
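The breeding loop can be sketched as a standard genetic algorithm. This is a generic illustration, not Photogrowth's actual code: each "genome" is a set of ant-behaviour settings between 0 and 1, and the fitness function simply weights the aspects the breeder cares about.

```python
import random

# A generic genetic algorithm in the spirit of the article. The breeder
# sets the weights (e.g. favouring canvas coverage over trail thickness);
# the algorithm then evolves ant settings that maximise the weighted sum.

random.seed(1)  # make the run repeatable

def fitness(genome, weights):
    return sum(weights[k] * genome[k] for k in weights)

def evolve(weights, population_size=20, generations=50):
    keys = list(weights)
    population = [{k: random.random() for k in keys}
                  for _ in range(population_size)]
    for _ in range(generations):
        # Keep the fittest half, breed the rest from them.
        population.sort(key=lambda g: fitness(g, weights), reverse=True)
        survivors = population[: population_size // 2]
        children = []
        for _ in range(population_size - len(survivors)):
            mum, dad = random.sample(survivors, 2)
            child = {k: random.choice((mum[k], dad[k])) for k in keys}
            k = random.choice(keys)  # small random mutation
            child[k] = min(1.0, max(0.0, child[k] + random.uniform(-0.1, 0.1)))
            children.append(child)
        population = survivors + children
    return max(population, key=lambda g: fitness(g, weights))

best = evolve({"trail_thickness": 1.0, "coverage": 2.0})
print(best)
```

In Photogrowth the fitness would be judged on the picture the ants actually paint; here it is just a weighted sum, to keep the loop itself visible.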
The result is a painting painted by ants bred purely to create images that please you.
Ada Lovelace was close friends with John Crosse, and knew his father Andrew: the ‘real Frankenstein’. Andrew Crosse apparently created insect life from electricity, stone and water…
Andrew Crosse was a ‘gentleman scientist’ doing science for his own amusement, including work improving giant versions of the first batteries, called ‘voltaic piles’. He was given the nickname ‘the thunder and lightning man’ because of the way he used the batteries to produce giant discharges of electricity with bangs as loud as cannons.
He hit the headlines when he appeared to create life from electricity, Frankenstein-like. This was an unexpected result of his experiments using electricity to make crystals. He was passing a current through water containing dissolved limestone over a period of weeks. In one experiment, about a month in, a perfect insect appeared, apparently from nowhere, and soon after it started to move. More and more insects then appeared over time. He mentioned it to friends, which led to a story in a local paper. It was then picked up nationally. Some of the stories said he had created the insects, and this led to outrage and death threats over his apparent blasphemy of trying to take the place of God.
(Does this start to sound like a modern social networking storm, trolls and all?) In fact he appears to have believed, and others agreed, that the mineral samples he was using must have been contaminated with tiny insect eggs, which just naturally hatched. Scientific results are only accepted if they can be replicated. Others, who took care to avoid contamination, couldn’t get the same result. The secret of creating life had not been found.
While Mary Shelley, who wrote Frankenstein, did know Crosse, sadly perhaps, for the story’s sake, he can’t have been the inspiration for Frankenstein as has been suggested, given she wrote it decades earlier!
Mary Shelley, Frankenstein’s monster and artificial life
by Paul Curzon and Peter W McOwan, Queen Mary University of London
(Updated from the archive)
Shortly after Ada Lovelace was born, so long before she made predictions about future “creative machines”, Mary Shelley, a friend of her father (Lord Byron), was writing a novel. In her book, Frankenstein, inanimate flesh is brought to life. Perhaps Shelley foresaw what is actually to come, what computer scientists might one day create: artificial life.
Life it may not be, but engineers are now doing pretty well in creating humanoid machines that can do their own thing. Could a machine ever be considered alive? The 21st century is undoubtedly going to be the age of the robot. Maybe it’s time to start thinking about the consequences in case they gain a sense of self.
Frankenstein was obsessed with creating life. In Mary Shelley’s story, he succeeded, though his creation was treated as a “Monster” struggling to cope with the gift of life it was given. Many science fiction books and films have toyed with these themes: the film Blade Runner, for example, explored similar ideas about how intelligent life is created; androids that believe they are human, and the consequences for the creatures concerned.
Is creating intelligent life fiction? Not totally. Several groups of computer scientists are exploring what it means to create non-biological life, and how it might be done. Some are looking at robot life, working at the level of insect life-forms, for example. Others are looking at creating intelligent life within cyberspace.
For 70 years or more scientists have tried to create artificial intelligences. They have had a great deal of success in specific areas such as computer vision and chess playing programs. They are not really intelligent in the way humans are, though they are edging closer. However none of these programs really cuts it as creating “life”. Life is something more than intelligence.
A small band of computer scientists have been trying a different approach that they believe will ultimately lead to the creation of new life forms: life forms that could one day even claim to be conscious (and who would we be to disagree with them if they think they are?). These scientists believe life can’t be engineered in a piecemeal way, but that the whole being has to be created as a coherent whole. Their approach is to build the basic building blocks and let life emerge from them.
The outline of the idea could be seen in the game Sodarace, where you could build your own creatures that move around a virtual world, and even let them evolve. One approach to building creatures, such as a spider, would be to try and work out mathematical equations for how each leg moves and program those equations. The alternative, artificial life way, as used in Sodarace, is instead to program up the laws of physics, such as gravity and friction, and how masses, springs and muscles behave according to those laws. Then you just put these basic bits together in a way that corresponds to a spider. With this approach you don’t have to work out in advance every eventuality (what if it comes to a wall? Or a cliff? Or bumpy ground?) and write code to deal with it. Instead, natural behaviour emerges.
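Here is what "programming up the laws of physics" can look like in miniature. This toy simulation (nothing to do with Sodarace's real code) applies gravity, Hooke's-law springs, friction and a floor to a pair of point masses. Nobody writes code saying "fall, bounce, then settle", yet that is exactly what happens: the behaviour emerges from the physics.

```python
# A toy mass-and-spring physics step: gravity, spring forces, friction
# and a ground plane. Creatures would just be bigger networks of these.

GRAVITY = -9.8
DT = 0.01  # simulation time step in seconds

def step(masses, springs, friction=0.02):
    forces = [[0.0, GRAVITY] for _ in masses]
    for i, j, rest_length, stiffness in springs:
        (xi, yi, _, _), (xj, yj, _, _) = masses[i], masses[j]
        dx, dy = xj - xi, yj - yi
        length = (dx * dx + dy * dy) ** 0.5 or 1e-9
        f = stiffness * (length - rest_length)  # Hooke's law
        fx, fy = f * dx / length, f * dy / length
        forces[i][0] += fx
        forces[i][1] += fy
        forces[j][0] -= fx
        forces[j][1] -= fy
    new_masses = []
    for (x, y, vx, vy), (fx, fy) in zip(masses, forces):
        vx = (vx + fx * DT) * (1 - friction)
        vy = (vy + fy * DT) * (1 - friction)
        x, y = x + vx * DT, y + vy * DT
        if y < 0:  # hit the ground: bounce, losing half the speed
            y, vy = 0.0, -vy * 0.5
        new_masses.append((x, y, vx, vy))
    return new_masses

masses = [(0.0, 2.0, 0.0, 0.0), (1.0, 2.0, 0.0, 0.0)]  # x, y, vx, vy
springs = [(0, 1, 1.0, 100.0)]  # joins mass 0 and 1, rest length 1
for _ in range(1000):
    masses = step(masses, springs)
print([round(m[1], 2) for m in masses])
```

Add more masses, more springs, and "muscles" that change a spring's rest length over time, and you have the ingredients for a Sodarace-style creature.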
The artificial life community believe that not just life-like movement but life-like intelligence can emerge in a similar way. Rather than programming the behaviour of muscles you program the behaviour of neurones and then build brains out of them. That, it turns out, has been the key to the machine learning programs that are storming the world of Artificial Intelligence, turning it into an everyday tool. However, if aiming for artificial life, you would keep going: combine it with the basic biochemistry of an immune system, do a similar thing with a reproductive system, and so on.
Want to know more? A wonderful early book is Steve Grand’s: “Creation”, on how he created what at the time was claimed to be “the nearest thing to artificial life yet”… It started life as the game “Creatures”.
Then have a go at creating artificial life yourself (but be nice to it).
Chatbots, knowing where your files are, and winning at noughts and crosses with artificial intelligence.
Welcome to Day 10 of our CS4FN Christmas Computing Advent Calendar. We are just under halfway through our 25 days of posts, one every day between now and Christmas. You can see all our previous posts in the list at the end.
Today’s picture-theme is Holly (and ivy). Let’s see how I manage to link that to computer science 🙂
1. Holly – or Alexa or Siri
In the comedy TV series* Red Dwarf the spaceship has ‘Holly’, an intelligent computer who talks to the crew and answers their questions. Star Trek also has ‘Computer’, who can have quite technical conversations and give reports on the health of the ship and crew.
People are now quite familiar with talking to computers, or at least giving them commands. You might have heard of Alexa (Amazon) or Siri (Apple / iPhone) and you might even have talked to one of these virtual assistants yourself.
When this article (below) was written people were much less familiar with them. How can they know all the answers to people’s questions and why do they seem to have an intelligence?
Read the article and then play a game (see 3. Today’s Puzzle) to see if you think a piece of paper can be intelligent.
Meet the Chatterbots – talking to computers thanks to artificial intelligence and virtual assistants
*also a book!
2. Are you a filing cabinet or a laundry basket?
People have different ways of saving information on their computers. Some university teachers found that when they asked their students to open a file from a particular directory their students were completely puzzled. It turned out that the (younger) students didn’t think about files and where to put them in the same way that their (older) teachers did, and the reason is partly the type of device teachers and students grew up with.
Older people grew up using computers where the best way to organise things was to save a file in a particular folder to make it easy to find it again. Sometimes there would be several folders. For example you might have a main folder for Homework, then a year folder for 2021, then folders inside for each month. In the December folder you’d put your december.doc file. The file has a file name (december.doc) and an ‘address’ (Homework/2021/December/). Pretty similar to the link to this blog post which also uses the / symbol to separate all the posts made in 2021, then December, then today.
To find your december.doc file again you’d just open each folder by following that path: first Homework, then 2021, then December – and there’s your file. It’s a bit like looking for a pair of socks in your house – first you need to open your front door and go into your home, then open your bedroom door, then open the sock drawer and there are your socks.
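The same path idea appears directly in code. Python's pathlib module, for example, builds and picks apart paths using exactly the folders-inside-folders structure described above (using the Homework/2021/December example from the text):

```python
from pathlib import Path

# Build the path to december.doc one folder at a time, just as you
# would open each folder in turn to find the file.
path = Path("Homework") / "2021" / "December" / "december.doc"

print(path)         # the whole path
print(path.name)    # the file name: december.doc
print(path.parent)  # the 'address': Homework/2021/December
print(path.parts)   # each folder you open along the way
```

This is the sort of thing programming students need when their code has to read a file from a known location rather than search for it.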
Younger people have grown up with devices that make it easy to search for any file. It doesn’t really matter where the file is so people used to these devices have never really needed to think about a file’s location. People can search for the file by name, by some words that are in the file, or the date range for when it was created, even the type of file. So many options.
The first way, that the teachers were using, is like a filing cabinet in an office, with documents neatly packed away in folders within folders. The second way is a bit more like a laundry basket where your socks might be all over the house but you can easily find the pair you want by typing ‘blue socks’ into the search bar.
Which way do you use?
In most cases either is fine and you can just choose whichever way of searching for and finding your files works for you. If you’re learning programming, though, it can be really helpful to know a bit about file paths, because the code you’re creating might need to know exactly where a file is so that it can read from it. So now some university teachers on STEM (science, technology, engineering and maths) and computing courses are also teaching their students how to use the filing cabinet method. It could be useful for them in their future careers.
As the author says “Many consumer devices try to conceal the underlying file system from the user (for example, smart phones and some tablet computers). Graphical interfaces, applications, and even search have all made it possible for people to use these devices without being concerned with file systems. When you study Computer Science, you must look behind these interfaces.”
You might be wondering what any of this has to do with ivy. Well, whenever I’ve seen a real folder structure on a Windows computer (you can see one here) I’ve often thought it looked a bit like ivy 😉
The trick is based on a very old puzzle, at least one early version of which was by Sam Loyd. See this selection of vanishing puzzles for some variations. A very simple version of it appears in The Moscow Puzzles (puzzle 305) by Boris A. Kordemsky, where a line is made to disappear.
In the picture above five medium-length lines become four longer lines. It looks like a line has disappeared but its length has just been spread among the other lines, lengthening them.
If you’d like to have a go at drawing your own disappearing puzzle have a look here.
Sitting down and having a nice chat with a computer probably isn’t something you do every day. You may never have done it. We mainly still think of it as being a dream for the future. But there is lots of work being done to make it happen in the present, and the idea has roots that stretch far back into the past. It’s a dream that goes back to Alan Turing, and then even a little further.
The imitation game
Back around 1950, Turing was thinking about whether computers could be intelligent. He had a problem though. Once you begin thinking about intelligence, you find it is a tricky thing to pin down. Intelligence is hard to define even in humans, never mind animals or computers. Turing started to wonder if he could ask his question about machine intelligence in a different way. He turned to a Victorian parlour game called the imitation game for inspiration.
The imitation game was played with large groups at parties, but focused on two people, a man and a woman. They would go into a different room to be asked questions by a referee. The woman had to answer truthfully. The man answered in any way he believed would convince everyone else he was really the woman. Their answers were then read out to the rest of the guests. The man won the game if he could convince everyone back in the party that he was really the woman.
Pretending to be human
Turing reckoned that he could use a similar test for intelligence in a machine. In Turing’s version of the imitation game, instead of a man trying to convince everyone he’s really a woman, a computer pretends to be a human. Everyone accepts the idea that it takes a certain basic intelligence to carry on a conversation. If a computer could carry on a conversation so well that talking to it was just like talking to a human, the computer must be intelligent.
When Turing published his imitation game idea, it helped launch the field of artificial intelligence (AI). Today, the field pulls together biologists, computer scientists and psychologists in a quest to understand and replicate intelligence. AI techniques have delivered some stunning results. People have designed computers that can beat the best human at chess, diagnose diseases, and invest in stocks more successfully than humans.
A chat with a chatterbot
But what about the dream of having a chat with a computer? That’s still alive. Turing’s idea, demonstrating computer intelligence by successfully faking human conversation, became known as the Turing test. Turing thought machines would pass his test before the 20th century was over, but the goal has proved more elusive than that. People have been making better conversational chat programs, called chatterbots, since the 1960s, but no one has yet made a program that can fool everyone into thinking it’s a real human.
What’s up, Doc
On the other hand, some chatterbots have done pretty well. One of the first and still one of the most famous chatterbots was created in the mid-1960s. It was called ELIZA. Its trick was imitating the sort of conversation you might have with a therapist. ELIZA didn’t volunteer much knowledge itself, but tried to get the user to open up about what they were thinking. So the person might type “I don’t feel well”, and ELIZA would respond with “you say you don’t feel well?” In a normal social situation, that would be a frustrating response. But it’s a therapist’s job to get a patient to talk about themselves, so ELIZA could get away with it. For an early example of a chatterbot, ELIZA did pretty well, but after a few minutes of chatting users realised that ELIZA didn’t really understand what they were saying.
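ELIZA's reflect-it-back trick takes surprisingly little code. This is a heavily simplified sketch, not Weizenbaum's original program: it just swaps "I" for "you" (and similar) and turns the user's words back into a question.

```python
# A tiny ELIZA-style responder: reflect the user's words back as a
# question, the therapist's trick described in the article.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def respond(sentence):
    words = sentence.lower().rstrip(".!?").split()
    reflected = " ".join(REFLECTIONS.get(w, w) for w in words)
    return f"You say {reflected}?"

print(respond("I don't feel well"))  # You say you don't feel well?
```

The real ELIZA had pattern-matching rules and a stock of fallback phrases, but the core idea, turning the user's own words into the next question, is exactly this.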
Where have I heard this before?
One of the big problems in making a good chatterbot is coming up with sentences that sound realistic. That’s why ELIZA tried to keep its sentences simple and non-committal. A much more recent chatterbot called Cleverbot uses another brilliantly simple solution: it doesn’t try to make up sentences at all. It just stores all the phrases that it’s ever heard, and chooses from them when it needs to say something. When a human types a phrase to say to Cleverbot, its program looks for a time in the past when it said something similar, then reuses whatever response the human gave at the time. Given that Cleverbot has had 65 million chats on the Internet since 1997, it’s got a lot to choose from. And because its sentences were all originally entered by humans, Cleverbot can speak in slang or text speak. That can lead to strange conversations, though. A member of our team at cs4fn had an online chat with Cleverbot, and found it pretty weird to have a computer tell him “I want 2 b called Silly Sally”.
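The retrieval trick can be sketched in a few lines. This is an illustration of the idea described above, not Cleverbot's actual code: store every phrase heard alongside the human reply that followed it, then answer a new phrase by reusing the reply attached to the most similar stored phrase.

```python
# A sketch of conversation-by-retrieval: no sentences are generated,
# only reused. Similarity here is crude word overlap (Jaccard score).

def similarity(a, b):
    """Fraction of words two phrases share."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def reply(memory, phrase):
    # Find the most similar phrase we've heard before and reuse the
    # human reply that followed it.
    best = max(memory, key=lambda heard: similarity(heard, phrase))
    return memory[best]

memory = {
    "hello there": "hi! how are you?",
    "what is your name": "I want 2 b called Silly Sally",
    "do you like music": "I love music, especially pop",
}
print(reply(memory, "tell me your name"))
```

With three stored phrases the matches are rough; with 65 million conversations to draw on, the same mechanism starts to look eerily fluent.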
Computerised con artists
Most chatterbots are designed just for fun. But some chatterbots are made with more sinister intent. A few years ago, a program called CyberLover was stalking dating chat forums on the Internet. It would strike up flirty conversations with people, then try and get them to reveal personal details, which could then be used to steal people’s identities or credit card accounts. CyberLover even had different programmed personalities, from a more romantic flirter to a more aggressive one. Most people probably wouldn’t be fooled by a robot come-on, but that’s OK. CyberLover didn’t mind rejection: it could start up ten relationships every half an hour.
Chatterbots may be ready to hit the big time soon. Apple’s iPhone 4S includes Siri, a computerised assistant that can find answers to human questions – sometimes with a bit of attitude. Most of Siri’s humorous answers appear to be pre-programmed, but some of them come from Siri’s access to powerful search engines. Apple don’t want to give away their secrets, so they’re not saying much. But if computerised conversation continues advancing, we may not be too far off from a computer that can pass the Turing test. And while we’re waiting at least we’ve got better games to play than the Victorians had.
In the film Minority Report, a team of psychics – who can see into the future – predict who might cause harm, allowing the police to intervene before the harm happens. It is science fiction. But smart technology is able to see into the future. It may be able to warn months in advance when a mother’s body might be about to harm her unborn baby and so allow the harm to be prevented before it even happens.
Gestational diabetes (or GDM) is a type of diabetes that appears only during pregnancy. Once the baby is born it usually disappears. Although it doesn’t tend to produce many symptoms it can increase the risk of complications in pregnancy so pregnant women are tested for it to avoid problems. Women who’ve had GDM are also at greater risk of developing Type 2 diabetes later on, joining an estimated 4 million people who have the condition in the UK.
Diabetes happens either when someone’s pancreas is unable to produce enough of a chemical called insulin, or because the body stops responding to the insulin that is produced. We need insulin to help us make use of glucose: a kind of sugar in our food that gives us energy. In Type 1 diabetes (commonly diagnosed in young people) the pancreas pretty much stops producing any insulin. In Type 2 diabetes (more commonly diagnosed in older people) the problem isn’t so much the pancreas (in fact in many cases it produces even more insulin), it’s that the person has become resistant to insulin. The result from either ‘not enough insulin’ or ‘plenty of insulin but can’t use it properly’ is that glucose isn’t able to get into our cells to fuel them. It’s a bit like being unable to open the fuel cap on a car, so the driver can’t fill it with petrol. This means higher levels of glucose circulate in the bloodstream and, unfortunately, high glucose can cause lots of damage to blood vessels.
During a normal pregnancy, women often become a little more insulin-resistant than usual anyway. This is an effect of pregnancy hormones from the placenta. From the point of view of the developing foetus, which is sharing a blood supply with mum, this is mostly good news as the blood arriving in the placenta is full of glucose to help the baby grow. That sounds great, but if the woman becomes too insulin-resistant and there’s too much glucose in her blood it can lead to accelerated growth (a very large baby) and increase the risk of complications during pregnancy and at birth. Not great for mum or baby. Doctors regularly monitor the blood glucose levels in a GDM pregnancy to keep both mother and baby in good health. Once taught, anyone can measure their own blood glucose levels using a finger-prick test, and people with diabetes do this several times a day. Monitoring at home like this saves money and is also much more flexible for mothers.
In-depth screening of every pregnant woman, to see if she has, or is at risk of, GDM, costs money and is time-consuming, and most pregnant women will not develop GDM anyway. PAMBAYESIAN researchers at Queen Mary have developed a prototype intelligent decision-making tool, both to help doctors decide who needs further investigation and to help the women decide when they need additional support from their healthcare team.
The team of computer scientists and maternity experts developed a Bayesian network with information based on expert knowledge about GDM, then trained it on real (anonymised) patient data. They are now evaluating its performance and refining it. There are different decision points throughout a GDM pregnancy. First, does the person have GDM, or are they at increased risk (perhaps because of a family history)? If ‘yes’, then the next decision is how best to care for them and whether or not to begin medical treatment or just give diet and lifestyle support. Later on in the pregnancy the woman and her doctor must consider when it’s best for her to deliver her baby, and later still she needs ongoing support to prevent her GDM from leading to Type 2 diabetes. The tool is currently in early development, but it’s hoped that, given blood glucose readings, the GDM Bayesian network will ultimately be able to take account of the woman’s risk factors (like age, ethnicity and previous GDM), use that information to predict how likely she is to develop the condition in this pregnancy, and suggest what should happen next.
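At its heart, a Bayesian network automates reasoning like the following. This is a deliberately tiny sketch of Bayes' rule with made-up numbers for illustration only; the real PAMBAYESIAN network combines many more factors and expert knowledge.

```python
# A minimal Bayes-rule calculation: how a single raised glucose reading
# updates the probability of GDM. All numbers are invented for the sake
# of the example.

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | raised reading) via Bayes' rule."""
    p_raised = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_raised

# Illustrative numbers only: a 5% baseline risk, and a reading that is
# raised in 80% of true cases but also in 10% of everyone else.
print(round(posterior(0.05, 0.80, 0.10), 3))
```

A network chains many such updates together, so family history, age, ethnicity and a stream of glucose readings can all pull the probability up or down at once.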
Systems like this mean that one day your smartphone may be smart enough to help protect you and your unborn baby from future harm.
– Jo Brodie, Queen Mary University of London, Spring 2021
It has long been an aim of computer scientists to develop software that can work out how a person is feeling. Are you happy or sad, frustrated or lonely? If the software can tell, then it can adapt to your moods, changing its behaviour or offering advice. Suresh Neethirajan from Wageningen University in the Netherlands has gone a step further. He has developed a program that detects the emotions of farm animals.
Working out how someone is feeling is called “Sentiment Analysis” and there are lots of ways computer scientists have tried to do it. One way is based on looking at the words people speak or write. The way people speak, such as their tone of voice, also gives information about emotions. Another way is based on our facial expressions and body language. A simple version of sentiment analysis involves working out whether someone is feeling a positive emotion (like being happy or excited) versus a negative emotion (such as being sad or angry), rather than trying to determine the precise emotion.
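The simplest word-based version can be sketched in a few lines. This is a bare-bones illustration of the positive-versus-negative idea, not a real sentiment analyser: real systems use machine learning, and the word lists here are tiny invented samples.

```python
# Count positive and negative words and compare. Real sentiment
# analysis is far subtler ("not bad" would fool this sketch!).

POSITIVE = {"happy", "excited", "great", "calm", "relaxed"}
NEGATIVE = {"sad", "angry", "stressed", "upset", "scared"}

def sentiment(text):
    words = text.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I am happy and excited"))
print(sentiment("I feel sad and stressed"))
```

Even this crude version shows the shape of the task: turn raw signals (here, words; for animals, ear and eye positions) into a positive, neutral or negative judgement.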
Applications range from deciding how a person might vote to predicting what they might buy. A more futuristic use is to help medics make healthcare decisions. When the patient says they aren’t feeling too bad, are they actually fine or are they just being stoical, for example? And how much pain or stress are they actually suffering?
But why would you want to know the emotions of animals? One really important application is to know when an animal is, or is not, in distress. Knowing that can help a farmer look after that animal better, but also work out the best way to better look after animals more generally. It might help farmers design nicer living conditions, but also work out more humane ways to slaughter animals that involve the least suffering. Avoiding cruel conditions is reason enough on its own, but with happy farm animals you might also improve the yield of milk, the quality of meat or how many offspring animals have in their lifetime. A farmer certainly shouldn’t want their animals to be so upset they start to self-harm, which can be a problem when animals are kept in poor conditions. Not only is it cruel, it can lead to infections, which cost money to treat. It also spreads resistance to antibiotics. Having accurate ways to quickly and remotely detect how animals are feeling would be a big step forward for animal welfare.
But how to do it? While some scientists are actually working on understanding animal language, recognising body language is an easier first step to understand animal emotions. A lot is actually known about animal expressions and body language, and what they mean. If a dog is wagging its tail, then it is happy, for example. Suresh focussed on facial expressions in cows and pigs. What kind of expressions do they have? Cows, for example, are likely to be relaxed if their eyes are half-closed, and their ears are backwards or hung-down. If you can see the whites of their eyes, on the other hand then they are probably stressed. Pigs that are moving their ears around very quickly, by contrast, are likely to be stressed. If their ears are hanging and flipping in the direction of their eyes, though, then they are in a much more neutral state.
There are lots of steps to go through in creating a system to recognise emotions. The first for Suresh was to collect lots of pictures of cows and pigs from different farms. He collected almost 4000 images from farms in Canada, the USA and India. Each image was labelled by human experts according to whether it showed a positive, neutral or negative emotional state of the animal, based on what was already known about how animal expressions link to their emotions.
Sophisticated image processing software was then used to automatically pick out the animals’ faces and locate the individual features, such as eyes and ears. The orientation and other properties of those facial features, such as whether ears were hanging down or pricked up, were also determined. This processed data was then fed into a machine learning system to train it. Because the data was labelled, the program knew what a human judged the different expressions to mean in terms of emotions, and so could work out the patterns in the data that corresponded to each emotional state.
Once trained, the system was given new images without the labels to judge how accurate it was. It made a judgement and this was compared to the human judgement of the state. Human and machine agreed 86% of the time. More work is needed before such a system could be used on farms, but it opens up the possibility of using video cameras around a farm to raise the alarm when animals are suffering, for example.
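The train-then-test pipeline can be sketched end to end with a toy classifier. This is an illustration only, not the researchers' system: it pretends each image has already been reduced to a couple of numbers (invented features like ear position and eye openness), trains a simple nearest-neighbour rule on labelled examples, then measures accuracy on held-out examples, just as the article describes.

```python
# A toy version of the labelled-training / held-out-testing pipeline,
# using a 1-nearest-neighbour rule on made-up facial features.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(training, features):
    # Label a new example with the label of its nearest training example.
    nearest = min(training, key=lambda item: distance(item[0], features))
    return nearest[1]

# Invented labelled feature vectors: (ear position, eye openness).
training = [
    ((0.1, 0.2), "relaxed"),   # ears back, eyes half closed
    ((0.9, 0.9), "stressed"),  # ears moving fast, whites of eyes showing
    ((0.5, 0.5), "neutral"),
]

# Held-out examples the "system" has never seen, with expert labels.
test_set = [((0.2, 0.1), "relaxed"), ((0.8, 1.0), "stressed")]
correct = sum(classify(training, f) == label for f, label in test_set)
print(f"accuracy: {correct}/{len(test_set)}")
```

The real system replaces the two invented numbers with features extracted automatically from images, and the nearest-neighbour rule with a trained machine learning model, but the shape of the pipeline is the same.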
Machine learning is helping humans in lots of ways. With systems like this machine learning could soon be helping animals live better lives too.
– Paul Curzon, Queen Mary University of London, Spring 2021
‘How do robots eat pizza?’… ‘One byte at a time’. Computational Humour is real, but it’s not jokes about computers, it’s computers telling their own jokes.
Computers can create art, stories, slogans and even magic tricks. But can computers perform themselves? Can robots invent their own jokes? Can they tell jokes?
Combining Artificial Intelligence, computational linguistics and humour studies (yes, you can study how to be funny!) a team of Scottish researchers made an early attempt at computerised standup comedy! They came up with Standup (System to Augment Non Speakers Dialogue Using Puns): a program that generates riddles for kids with language difficulties. Standup has a dictionary and joke-building mechanism, but does not perform; it just creates the jokes. You will have to judge for yourself whether the puns are funny. You can download the software from here. What makes a pun funny? It is about a word having two meanings at exactly the same time in a sentence. It is also about generating an expectation that you then break: a key idea at the core of creativity too.
A research team at Virginia Tech in the US created a system that started to learn about funny pictures. Having defined a ‘funniness score’ they created a computational model for humorous scenes, and trained it to predict funniness, perhaps with an eye to spotting pics for social media posting, or not.
But are there funny robots out there? Yes! RoboThespian programmed by researchers at Queen Mary University of London, and Data, created by researchers at Carnegie Mellon University are both robots programmed to do stand-up comedy. Data has a bank of jokes and responds to audience reaction. His developers don’t actually know what he will do when he performs, as he is learning all the time. At his first public gig, he got the crowd laughing, but his timing was poor. You can see his performance online, in a TED Talk.
RoboThespian did a gig at the London Barbican alongside human comedians. The performance was a live experiment to understand whether the robot could ‘work the audience’ as well as a human comedian. They found that even relatively small changes in the timing of delivery make a big difference to audience response.
What have these all got in common? Artificial Intelligence, machine learning and studies to understand what humour actually is, are being combined to make something that is funny. Comedy is perhaps the pinnacle of creativity. It's certainly not easy for a human to write even one joke, so think how hard it is to distill that skill into algorithms and train a computer to create loads of them.
Based on a 2016 talk by Sabine Hauert at the Royal Society
Sabine Hauert is a swarm engineer. She is fascinated by the idea of making use of swarms of robots. Watch a flock of birds and you see that they have both complex and beautiful behaviours. It helps them avoid predators very effectively, for example, so much so that many animals behave in a similar way. Predators struggle to fix on any one bird in all the chaotic swirling. Sabine’s team at the University of Bristol are exploring how we can solve our own engineering problems: from providing communication networks in a disaster zone to helping treat cancer, all based on the behaviours of swarms of animals.
Sabine realised that flocks of birds have properties that are really interesting to an engineer. Their ability to scale is one. It is often easy to come up with solutions to problems that work in a small 'toy' system, but when you want to use them for real, the size of the problem defeats you. With a flock, birds just keep arriving, and the flock keeps working, getting bigger and bigger. It is common to see thousands of starlings behaving like this – around Brighton Pier most winter evenings, for example. Flocks can even be of millions of birds all swooping and swirling together, never colliding, always staying as a flock. It is an engineering solution that scales up to massive problems. If you can build a system to work like a flock, you will have a similar ability to scale.
Flocks of birds are also very robust. If one bird falls out of the sky, perhaps because it is caught by a predator, the flock itself doesn't fail, it continues as if nothing happened. Compare that to most systems humans create. Remove one component from a car engine and it's likely that you won't be going anywhere. This kind of robustness to failure is often really important.
Swarms are an example of emergent behaviour. If you look at just one bird you can't tell how the flock works as a whole. In fact, each bird is just following very simple rules. Each bird just tracks the positions of a few nearest neighbours, using that information to make simple decisions about how to move. That is enough for the whole complex behaviour of the flock to emerge. Despite all that fast and furious movement, the birds never crash into each other. Fascinated, Sabine started to explore how swarms of robots might be used to solve problems for people.
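Those simple local rules can be sketched in code. Below is a minimal 'boids'-style simulation, a well-known model of flocking, not the exact rules real starlings (or Sabine's robots) follow, and all the constants are illustrative. Each bird tracks only its few nearest neighbours and steers using three rules: cohesion, alignment and separation.

```python
# Minimal "boids"-style flocking sketch. Illustrative only: not the exact
# rules real starlings or Sabine's robots use; all constants are made up.
import math
import random

K_NEIGHBOURS = 3  # each bird only tracks this many nearest neighbours

def step(positions, velocities, dt=0.1):
    """Advance every bird one time step using three simple local rules."""
    new_vels = []
    for i, (p, v) in enumerate(zip(positions, velocities)):
        # Only look at the few nearest neighbours - no global knowledge.
        nearest = sorted((j for j in range(len(positions)) if j != i),
                         key=lambda j: math.dist(p, positions[j]))[:K_NEIGHBOURS]
        # Cohesion: steer gently towards the neighbours' centre.
        cx = sum(positions[j][0] for j in nearest) / len(nearest)
        cy = sum(positions[j][1] for j in nearest) / len(nearest)
        # Alignment: match the neighbours' average velocity.
        ax = sum(velocities[j][0] for j in nearest) / len(nearest)
        ay = sum(velocities[j][1] for j in nearest) / len(nearest)
        # Separation: push away from any neighbour that is too close.
        sx = sy = 0.0
        for j in nearest:
            d = math.dist(p, positions[j])
            if 0 < d < 0.5:
                sx += (p[0] - positions[j][0]) / d
                sy += (p[1] - positions[j][1]) / d
        vx = v[0] + 0.05 * (cx - p[0]) + 0.05 * (ax - v[0]) + 0.1 * sx
        vy = v[1] + 0.05 * (cy - p[1]) + 0.05 * (ay - v[1]) + 0.1 * sy
        speed = math.hypot(vx, vy)
        if speed > 2.0:  # cap the speed so the flock stays stable
            vx, vy = 2.0 * vx / speed, 2.0 * vy / speed
        new_vels.append((vx, vy))
    new_pos = [(p[0] + v[0] * dt, p[1] + v[1] * dt)
               for p, v in zip(positions, new_vels)]
    return new_pos, new_vels

# Start 20 birds at random positions with random velocities, then let
# the flock's behaviour emerge over 200 steps.
random.seed(1)
pos = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(20)]
vel = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
for _ in range(200):
    pos, vel = step(pos, vel)
```

No bird in the loop knows the shape of the whole flock, yet coordinated group motion emerges from the three local rules. Swap the bird's position and velocity for a robot's, and the same structure applies.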
Her first idea was to create swarms of flying robots to work as a communications network, providing wi-fi coverage in places it would otherwise be hard to set up a network. This might be a good solution in a disaster area, for example, where there is no other infrastructure, but communication is vital. You want it to scale over the whole disaster area quickly and easily, and it has to be robust. She set about creating a system to achieve this.
The robots she designed were very simple, fixed wing, propellor-powered model planes. Each had a compass so it knew which direction it was pointing and was able to talk to those nearest using wi-fi signals. It could also tell who its nearest neighbours were. The trick was to work out how to design the behaviour of one robot so that appropriate swarming behaviour emerged. At any time each had to decide how much to turn to avoid crashing into another, while maintaining the flock and its wi-fi coverage. You could try to work out the best rules by hand. Instead, Sabine turned to machine learning.
The idea of machine learning is that instead of trying to devise algorithms that solve problems yourself, you write an algorithm for how to learn. The program then learns the best solution for itself by trial and error. Sabine created a simple first program for her robots that gave them fairly random behaviour. The machine learning program then used a process modelled on evolution to gradually improve. After all, evolution worked for animals! The way this is done is that variations on the initial behaviour are trialled in simulators and only the most successful are kept. Further random changes are made to those and the new versions trialled again. This is continued over thousands of generations, each generation getting that little bit better at flocking until eventually a behaviour of individual robots results that leads to them swarming together.
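That evolutionary loop can be sketched in a few lines. In this toy version (a stand-in, not Sabine's actual system), a 'behaviour' is just a list of three numbers and 'fitness' is closeness to a hypothetical ideal setting, playing the role of a controller being scored in a flight simulator. Mutation sizes, population counts and the number of generations are all illustrative.

```python
# A toy version of the evolutionary loop described above - simplified;
# the real system evolved robot controllers scored in a flight simulator.
import random

random.seed(0)
TARGET = [0.3, -0.7, 0.5]  # the ideal behaviour (unknown to the learner)

def fitness(behaviour):
    """Higher is better: negative squared error from the ideal."""
    return -sum((b - t) ** 2 for b, t in zip(behaviour, TARGET))

def mutate(behaviour, strength=0.1):
    """Make a small random variation, like a genetic mutation."""
    return [b + random.gauss(0, strength) for b in behaviour]

# Start from fairly random behaviour...
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]

for generation in range(300):
    # Trial every variant "in the simulator" and keep only the best few.
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    # Breed the next generation by randomly varying the survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]

best = max(population, key=fitness)
```

Nothing in the loop is told what a good solution looks like in advance: random variation plus keep-the-best selection finds it, generation by generation, just as the robots' swarming behaviour was learned over thousands of simulated generations.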
Sabine has now moved on to thinking about a situation where swarms of trillions of individuals are needed: nanomedicine. She wants to create nanobots that are each smaller than the width of a strand of hair and can be injected into cancer patients. Once inside the body they will search out and stick themselves to tumour cells. The tumour cells gobble them up, at which point they deliver drugs directly inside the rogue cell. How do you make them behave in a way that gives the best cancer treatment though? For example, how do you stop them all just sticking to the same outer cancer cells? One way might be to give them a simple swarm behaviour that allows them to go to different depths and only then switch on their stickiness, allowing them to destroy all the cancer cells. This is the sort of thing Sabine's team are experimenting with.
Swarm engineering has all sorts of other practical applications, and while Sabine is leading the way, some time soon we may need lots more swarm engineers, able to design swarm systems to solve specific problems. Might that be you?
Explore swarm behaviour using the Oxford Turtle system [EXTERNAL] (click the play button top centre) to see how to run a flocking simulation as well as program your own swarms.