CS4FN Advent – Day 10: Holly, Ivy and Alexa – chatbots and the useful skill of file management. Plus win at noughts and crosses

Chatbots, knowing where your files are, and winning at noughts and crosses with artificial intelligence.

Welcome to Day 10 of our CS4FN Christmas Computing Advent Calendar. We are just under halfway through our 25 days of posts, one every day between now and Christmas. You can see all our previous posts in the list at the end.

Today’s picture-theme is Holly (and ivy). Let’s see how I manage to link that to computer science 🙂

Some holly with red berries

1. Holly – or Alexa or Siri

In the comedy TV series* Red Dwarf the spaceship has ‘Holly’, an intelligent computer who talks to the crew and answers their questions. Star Trek also has ‘Computer’, who can have quite technical conversations and give reports on the health of the ship and crew.

People are now quite familiar with talking to computers, or at least giving them commands. You might have heard of Alexa (Amazon) or Siri (Apple / iPhone) and you might even have talked to one of these virtual assistants yourself.

When this article (below) was written, people were much less familiar with them. How can they know all the answers to people’s questions, and why do they seem intelligent?

Read the article and then play a game (see 3. Today’s Puzzle) to see if you think a piece of paper can be intelligent.

Meet the Chatterbots – talking to computers thanks to artificial intelligence and virtual assistants

 

*also a book!

 

2. Are you a filing cabinet or a laundry basket?

People have different ways of saving information on their computers. Some university teachers found that when they asked their students to open a file from a particular directory, the students were completely puzzled. It turned out that the (younger) students didn’t think about files and where to put them in the same way that their (older) teachers did, and the reason is partly down to the type of device each group grew up with.

Older people grew up using computers where the best way to organise things was to save a file in a particular folder to make it easy to find it again. Sometimes there would be several folders. For example you might have a main folder for Homework, then a year folder for 2021, then folders inside for each month. In the December folder you’d put your december.doc file. The file has a file name (december.doc) and an ‘address’ (Homework/2021/December/). Pretty similar to the link to this blog post which also uses the / symbol to separate all the posts made in 2021, then December, then today.

Files and folders image by Ulrike Mai from Pixabay. Each brown folder contains files, and is itself contained in the drawer, and the drawer is contained in the cabinet.

To find your december.doc file again you’d just open each folder by following that path: first Homework, then 2021, then December – and there’s your file. It’s a bit like looking for a pair of socks in your house – first you need to open your front door and go into your home, then open your bedroom door, then open the sock drawer and there are your socks.

What your file and folder structure might look like.

Younger people have grown up with devices that make it easy to search for any file. It doesn’t really matter where the file is, so people used to these devices have never really needed to think about a file’s location. People can search for a file by name, by words that are in the file, by the date range for when it was created, or even by the type of file. So many options.

The first way, that the teachers were using, is like a filing cabinet in an office, with documents neatly packed away in folders within folders. The second way is a bit more like a laundry basket where your socks might be all over the house but you can easily find the pair you want by typing ‘blue socks’ into the search bar.

Which way do you use?

In most cases either is fine and you can just choose whichever way of searching or finding your files works for you. If you’re learning programming, though, it can be really helpful to know a bit about file paths, because the code you’re creating might need to know exactly where a file is so that it can read from it. So now some university teachers on STEM (science, technology, engineering and maths) and computing courses are also teaching their students how to use the filing cabinet method. It could be useful for them in their future careers.
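If you’re curious what that looks like in code, here’s a tiny Python sketch (using the made-up Homework/2021/December example from above) that builds a file path and tries to read the file at it:

```python
from pathlib import Path

# Build the 'address' piece by piece: Homework, then 2021, then December.
# (These folder and file names are just the example from the text above.)
homework_file = Path("Homework") / "2021" / "December" / "december.doc"

print(homework_file.name)     # the file name: december.doc
print(homework_file.parent)   # the 'address': Homework/2021/December

# A program can only read the file if the path really points at it.
if homework_file.exists():
    print(homework_file.read_text())   # works for plain-text files
else:
    print("No file at that path - check each folder name!")
```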

Want to find out more about files / file names / file paths and directory structures? Have a look at this great little tutorial https://superbasics.beholder.uk/file-system/

As the author says: “Many consumer devices try to conceal the underlying file system from the user (for example, smart phones and some tablet computers). Graphical interfaces, applications, and even search have all made it possible for people to use these devices without being concerned with file systems. When you study Computer Science, you must look behind these interfaces.”

You might be wondering what any of this has to do with ivy. Well, whenever I’ve seen a real folder structure on a Windows computer (you can see one here) I’ve often thought it looked a bit like ivy 😉

Creeping ivy at Blackheath station in London.

Further reading

File not found: A generation that grew up with Google is forcing professors to rethink their lesson plans (22 September 2021) The Verge

 

 

3. Today’s puzzle

Print or write out the instructions on page 5 of the PDF and challenge someone to a game of noughts and crosses… (there’s a good chance the bit of paper will win).

The Intelligent Piece of Paper activity.

 

4. Yesterday’s puzzle

The trick is based on a very old puzzle, at least one early version of which was by Sam Loyd. See this selection of vanishing puzzles for some variations. A very simple version of it appears in The Moscow Puzzles (puzzle 305) by Boris A. Kordemsky, where a line is made to disappear.

In the picture above five medium-length lines become four longer lines. It looks like a line has disappeared but its length has just been spread among the other lines, lengthening them.

If you’d like to have a go at drawing your own disappearing puzzle have a look here.

 

5. Previous Advent Calendar posts

CS4FN Advent – Day 1 – Woolly jumpers, knitting and coding (1 December 2021)

 

CS4FN Advent – Day 2 – Pairs: mittens, gloves, pair programming, magic tricks (2 December 2021)

 

CS4FN Advent – Day 3 – woolly hat: warming versus cooling (3 December 2021)

 

CS4FN Advent – Day 4 – Ice skate: detecting neutrinos at the South Pole, figure-skating motion capture, Frozen and a puzzle (4 December 2021)

 

CS4FN Advent – Day 5 – snowman: analog hydraulic computers (aka water computers), digital compression, and a puzzle (5 December 2021)

 

CS4FN Advent – Day 6 – patterned bauble: tracing patterns in computing – printed circuit boards, spotting links and a puzzle for tourists (6 December 2021)

 

CS4FN Advent – Day 7 – Computing for the birds: dawn chorus, birds as data carriers and a Google April Fool (plus a puzzle!) (7 December 2021)

 

CS4FN Advent – Day 8: gifts, and wrapping – Tim Berners-Lee, black boxes and another computing puzzle (8 December 2021)

 

CS4FN Advent – Day 9: gingerbread man – computing and ‘food’ (cookies, spam!), and a puzzle (9 December 2021)

 

CS4FN Advent – Day 10: Holly, Ivy and Alexa – chatbots and the useful skill of file management. Plus win at noughts and crosses – (10 December 2021) – this post

 

 

 

Meet the chatterbots – talking to computers thanks to artificial intelligence and virtual assistants

This article, by Paul Curzon (QMUL) was originally published on the CS4FN website.

A line of robots

Sitting down and having a nice chat with a computer probably isn’t something you do every day. You may never have done it. We mainly still think of it as being a dream for the future. But there is lots of work being done to make it happen in the present, and the idea has roots that stretch far back into the past. It’s a dream that goes back to Alan Turing, and then even a little further.

 

The imitation game
Back around 1950, Turing was thinking about whether computers could be intelligent. He had a problem though. Once you begin thinking about intelligence, you find it is a tricky thing to pin down. Intelligence is hard to define even in humans, never mind animals or computers. Turing started to wonder if he could ask his question about machine intelligence in a different way. He turned to a Victorian parlour game called the imitation game for inspiration.

The imitation game was played with large groups at parties, but focused on two people, a man and a woman. They would go into a different room to be asked questions by a referee. The woman had to answer truthfully. The man answered in any way he believed would convince everyone else he was really the woman. Their answers were then read out to the rest of the guests. The man won the game if he could convince everyone back in the party that he was really the woman.

Pretending to be human
Turing reckoned that he could use a similar test for intelligence in a machine. In Turing’s version of the imitation game, instead of a man trying to convince everyone he’s really a woman, a computer pretends to be a human. Everyone accepts the idea that it takes a certain basic intelligence to carry on a conversation. If a computer could carry on a conversation so well that talking to it was just like talking to a human, the computer must be intelligent.

When Turing published his imitation game idea, it helped launch the field of artificial intelligence (AI). Today, the field pulls together biologists, computer scientists and psychologists in a quest to understand and replicate intelligence. AI techniques have delivered some stunning results. People have designed computers that can beat the best human at chess, diagnose diseases, and invest in stocks more successfully than humans.

A chat with a chatterbot
But what about the dream of having a chat with a computer? That’s still alive. Turing’s idea, demonstrating computer intelligence by successfully faking human conversation, became known as the Turing test. Turing thought machines would pass his test before the 20th century was over, but the goal has proved more elusive than that. People have been making better conversational chat programs, called chatterbots, since the 1960s, but no one has yet made a program that can fool everyone into thinking it’s a real human.

What’s up, Doc
On the other hand, some chatterbots have done pretty well. One of the first and still one of the most famous chatterbots, ELIZA, was created in the 1960s. Its trick was imitating the sort of conversation you might have with a therapist. ELIZA didn’t volunteer much knowledge itself, but tried to get the user to open up about what they were thinking. So the person might type “I don’t feel well”, and ELIZA would respond with “you say you don’t feel well?” In a normal social situation, that would be a frustrating response. But it’s a therapist’s job to get a patient to talk about themselves, so ELIZA could get away with it. For an early example of a chatterbot, ELIZA did pretty well, but after a few minutes of chatting users realised that ELIZA didn’t really understand what they were saying.
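Here’s a tiny Python sketch of ELIZA’s trick, just to show the idea. It is nothing like the real ELIZA, but it can already manage the example exchange above:

```python
import random

# A toy sketch of ELIZA's trick (not the real ELIZA program): spot a simple
# pattern in what the user typed and turn it back into a question.
def eliza_reply(user_text: str) -> str:
    text = user_text.lower().strip().rstrip(".!")
    if text.startswith("i "):
        # Reflect the statement back, like the example in the article.
        return f"You say you {text[2:]}?"
    if "because" in text:
        return "Is that the real reason?"
    # Otherwise fall back on a non-committal therapist prompt.
    return random.choice(["Please go on.", "How does that make you feel?"])

print(eliza_reply("I don't feel well"))   # -> You say you don't feel well?
```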

Where have I heard this before?
One of the big problems in making a good chatterbot is coming up with sentences that sound realistic. That’s why ELIZA tried to keep its sentences simple and non-committal. A much more recent chatterbot called Cleverbot uses another brilliantly simple solution: it doesn’t try to make up sentences at all. It just stores all the phrases that it’s ever heard, and chooses from them when it needs to say something. When a human types a phrase to say to Cleverbot, its program looks for a time in the past when it said something similar, then reuses whatever response the human gave at the time. Given that Cleverbot has had 65 million chats on the Internet since 1997, it’s got a lot to choose from. And because its sentences were all originally entered by humans, Cleverbot can speak in slang or text speak. That can lead to strange conversations, though. A member of our team at cs4fn had an online chat with Cleverbot, and found it pretty weird to have a computer tell him “I want 2 b called Silly Sally”.
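Here’s a toy Python sketch of that retrieval idea. It is nothing like the real Cleverbot, and its ‘memory’ of past chats is invented, but it shows how reusing old human replies works:

```python
import re

# A toy retrieval chatbot in the spirit described above (nothing like the
# real Cleverbot): it never invents sentences. It stores pairs of
# (something it once said, what a human said back), finds the past moment
# most similar to the new message, and reuses that human reply.
# The stored chats here are made up.
memory = [
    ("hi! how are you?", "not bad thanks, u?"),
    ("what is your name?", "I want 2 b called Silly Sally"),
    ("do you like pizza?", "yes, one byte at a time ;-)"),
]

def words(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def similarity(a, b):
    """Crude score: the fraction of words the two phrases share."""
    return len(words(a) & words(b)) / max(len(words(a) | words(b)), 1)

def reply(human_text):
    # Find the most similar thing the bot has said before...
    bot_said, human_replied = max(memory, key=lambda pair: similarity(human_text, pair[0]))
    # ...and reuse whatever a human said back at that moment.
    return human_replied

print(reply("what's your name"))   # -> I want 2 b called Silly Sally
```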

Computerised con artists
Most chatterbots are designed just for fun. But some chatterbots are made with more sinister intent. A few years ago, a program called CyberLover was stalking dating chat forums on the Internet. It would strike up flirty conversations with people, then try and get them to reveal personal details, which could then be used to steal people’s identities or credit card accounts. CyberLover even had different programmed personalities, from a more romantic flirter to a more aggressive one. Most people probably wouldn’t be fooled by a robot come-on, but that’s OK. CyberLover didn’t mind rejection: it could start up ten relationships every half an hour.

Chatterbots may be ready to hit the big time soon. Apple’s iPhone 4S includes Siri, a computerised assistant that can find answers to human questions – sometimes with a bit of attitude. Most of Siri’s humorous answers appear to be pre-programmed, but some of them come from Siri’s access to powerful search engines. Apple don’t want to give away their secrets, so they’re not saying much. But if computerised conversation continues advancing, we may not be too far off from a computer that can pass the Turing test. And while we’re waiting, at least we’ve got better games to play than the Victorians had.

Diagnose? Delay delivery? Decisions, decisions. Decisions about diabetes in pregnancy

In the film Minority Report, a team of psychics – who can see into the future – predict who might cause harm, allowing the police to intervene before the harm happens. It is science fiction. But smart technology is able to see into the future. It may be able to warn months in advance when a mother’s body might be about to harm her unborn baby and so allow the harm to be prevented before it even happens.

Baby holding feet with feet in foreground.
Image by Daniel Nebreda from Pixabay

Gestational diabetes (or GDM) is a type of diabetes that appears only during pregnancy. Once the baby is born it usually disappears. Although it doesn’t tend to produce many symptoms it can increase the risk of complications in pregnancy so pregnant women are tested for it to avoid problems. Women who’ve had GDM are also at greater risk of developing Type 2 diabetes later on, joining an estimated 4 million people who have the condition in the UK.

Diabetes happens either when someone’s pancreas is unable to produce enough of a chemical called insulin, or because the body stops responding to the insulin that is produced. We need insulin to help us make use of glucose: a kind of sugar in our food that gives us energy. In Type 1 diabetes (commonly diagnosed in young people) the pancreas pretty much stops producing any insulin. In Type 2 diabetes (more commonly diagnosed in older people) the problem isn’t so much the pancreas (in fact in many cases it produces even more insulin), it’s that the person has become resistant to insulin. The result from either ‘not enough insulin’ or ‘plenty of insulin but can’t use it properly’ is that glucose isn’t able to get into our cells to fuel them. It’s a bit like being unable to open the fuel cap on a car, so the driver can’t fill it with petrol. This means higher levels of glucose circulate in the bloodstream and, unfortunately, high glucose can cause lots of damage to blood vessels.

During a normal pregnancy, women often become a little more insulin-resistant than usual anyway. This is an effect of pregnancy hormones from the placenta. From the point of view of the developing foetus, which is sharing a blood supply with mum, this is mostly good news as the blood arriving in the placenta is full of glucose to help the baby grow. That sounds great, but if the woman becomes too insulin-resistant and there’s too much glucose in her blood it can lead to accelerated growth (a very large baby) and increase the risk of complications during pregnancy and at birth. Not great for mum or baby. Doctors regularly monitor the blood glucose levels in a GDM pregnancy to keep both mother and baby in good health. Once taught, anyone can measure their own blood glucose levels using a finger-prick test, and people with diabetes do this several times a day.

In-depth screening of every pregnant woman, to see if she has, or is at risk of, GDM costs money and is time-consuming, and most pregnant women will not develop GDM anyway. PAMBAYESIAN researchers at Queen Mary have developed a prototype intelligent decision-making tool, both to help doctors decide who needs further investigation and to help the women decide when they need additional support from their healthcare team. As well as saving money, this should be much more flexible for mothers.

The team of computer scientists and maternity experts developed a Bayesian network with information based on expert knowledge about GDM, then trained it on real (anonymised) patient data. They are now evaluating its performance and refining it. There are different decision points throughout a GDM pregnancy. First, does the person have GDM, or are they at increased risk (perhaps because of a family history)? If ‘yes’, then the next decision is how best to care for them, and whether to begin medical treatment or just give diet and lifestyle support. Later on in the pregnancy the woman and her doctor must consider when it’s best for her to deliver her baby, and later still she needs ongoing support to prevent her GDM from leading to Type 2 diabetes. The work is still at an early stage, but it’s hoped that, given blood glucose readings, the GDM Bayesian network will ultimately be able to take account of the woman’s risk factors (like age, ethnicity and previous GDM), predict how likely she is to develop the condition in this pregnancy, and suggest what should happen next.
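To give a flavour of the Bayesian idea (and only a flavour: the numbers below are completely invented, not from the PAMBAYESIAN project or any medical source), here is how a single piece of evidence updates a risk estimate using Bayes’ rule:

```python
# A minimal sketch of the Bayesian idea behind such a tool. These numbers
# are made up purely for illustration; they are NOT real medical figures.
p_gdm = 0.05                        # invented prior: chance of GDM before any test
p_high_given_gdm = 0.80             # invented: chance of a high reading if GDM present
p_high_given_no_gdm = 0.10          # invented: chance of a high reading otherwise

# Bayes' rule: P(GDM | high reading) = P(high | GDM) * P(GDM) / P(high reading)
p_high = p_high_given_gdm * p_gdm + p_high_given_no_gdm * (1 - p_gdm)
p_gdm_given_high = p_high_given_gdm * p_gdm / p_high

print(f"Updated chance of GDM after one high reading: {p_gdm_given_high:.0%}")
# With these invented numbers the risk rises from 5% to about 30%, which is
# the kind of update a Bayesian network performs, node by node.
```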

Systems like this mean that one day your smartphone may be smart enough to help protect you and your unborn baby from future harm.

– Jo Brodie, Queen Mary University of London, Spring 2021

Download Issue 27 of the cs4fn magazine on Smart Health here.

This post and issue 27 of the cs4fn magazine have been funded by EPSRC as part of the PAMBAYESIAN project.

I’m feeling Moo-dy today

It has long been an aim of computer scientists to develop software that can work out how a person is feeling. Are you happy or sad, frustrated or lonely? If the software can tell then it can adapt to your moods, changing its behaviour or offering advice. Suresh Neethirajan from Wageningen University in the Netherlands has gone a step further. He has developed a program that detects the emotions of farm animals.

Image by Couleur from Pixabay 

Working out how someone is feeling is called “Sentiment Analysis” and there are lots of ways computer scientists have tried to do it. One way is based on looking at the words people speak or write. The way people speak, such as their tone of voice, also gives information about emotions. Another way is based on our facial expressions and body language. A simple version of sentiment analysis involves working out whether someone is feeling a positive emotion (like being happy or excited) versus a negative emotion (such as being sad or angry), rather than trying to determine the precise emotion.
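Here’s a toy Python sketch of that simple positive-versus-negative idea, with tiny made-up word lists:

```python
# A toy positive-vs-negative sentiment checker of the simple kind described
# above: count cheerful and gloomy words. The tiny word lists are invented.
POSITIVE = {"happy", "excited", "great", "love", "good"}
NEGATIVE = {"sad", "angry", "awful", "hate", "bad", "lonely"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I am so happy and excited today"))   # positive
print(sentiment("I feel sad and a bit lonely"))        # negative
```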

Applications range from deciding how a person might vote to predicting what they might buy. A more futuristic use is to help medics make healthcare decisions. When the patient says they aren’t feeling too bad, are they actually fine or are they just being stoical, for example? And how much pain or stress are they actually suffering?

But why would you want to know the emotions of animals? One really important application is to know when an animal is, or is not, in distress. Knowing that can help a farmer look after that animal, but also help work out how best to look after animals more generally. It might help farmers design nicer living conditions, and also work out more humane ways to slaughter animals that involve the least suffering. Avoiding cruel conditions is reason enough on its own, but with happy farm animals you might also improve the yield of milk, the quality of meat or how many offspring animals have in their lifetime. A farmer certainly shouldn’t want their animals to be so upset they start to self-harm, which can be a problem when animals are kept in poor conditions. Not only is it cruel, it can lead to infections which cost money to treat. It also spreads resistance to antibiotics. Having accurate ways to quickly and remotely detect how animals are feeling would be a big step forward for animal welfare.

But how to do it? While some scientists are actually working on understanding animal language, recognising body language is an easier first step to understand animal emotions. A lot is actually known about animal expressions and body language, and what they mean. If a dog is wagging its tail, then it is happy, for example. Suresh focussed on facial expressions in cows and pigs. What kind of expressions do they have? Cows, for example, are likely to be relaxed if their eyes are half-closed, and their ears are backwards or hung-down. If you can see the whites of their eyes, on the other hand then they are probably stressed. Pigs that are moving their ears around very quickly, by contrast, are likely to be stressed. If their ears are hanging and flipping in the direction of their eyes, though, then they are in a much more neutral state.

There are lots of steps to go through in creating a system to recognise emotions. The first for Suresh was to collect lots of pictures of cows and pigs from different farms. He collected almost 4000 images from farms in Canada, the USA and India. Each image was labelled by human experts according to whether it showed a positive, neutral or negative emotional state of the animal, based on what was already known about how animal expressions link to their emotions.

Sophisticated image processing software was then used to automatically pick out the animals’ faces as well as locate the individual features, such as eyes and ears. The orientation and other properties of those facial features, such as whether ears were hanging down or up, were also determined. This processed data was then fed into a machine learning system to train it. Because the data was labelled, the program knew what a human judged the different expressions to mean in terms of emotions, and so could work out the patterns in the data that corresponded to each emotional state.

Once trained, the system was given new images without the labels to judge how accurate it was. It made a judgement and this was compared to the human judgement of the state. Human and machine agreed 86% of the time. More work is needed before such a system could be used on farms, but it opens up the possibility of using video cameras around a farm to raise the alarm when animals are suffering, for example.
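As a rough sketch of that train-then-test workflow (with made-up feature values, not the researchers’ real data or code), here’s what it might look like in Python using a standard machine learning library:

```python
# A sketch of the train-then-test workflow described above, using invented
# numbers. Each row stands for one animal photo after feature extraction:
# [eye_openness, ear_angle].
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Invented features: relaxed animals (label 0) have low values,
# stressed animals (label 1) high values, with some overlap.
relaxed = rng.normal([0.3, 0.2], 0.1, size=(200, 2))
stressed = rng.normal([0.8, 0.7], 0.1, size=(200, 2))
X = np.vstack([relaxed, stressed])
y = np.array([0] * 200 + [1] * 200)

# Keep some labelled images back for testing, just as the real system
# was judged against human labels it had not seen during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("Agreement with the 'human' labels:", accuracy_score(y_test, model.predict(X_test)))
```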

Machine learning is helping humans in lots of ways. With systems like this machine learning could soon be helping animals live better lives too.

Paul Curzon, Queen Mary University of London, Spring 2021

Standup Robots

‘How do robots eat pizza?’… ‘One byte at a time’. Computational Humour is real, but it’s not jokes about computers, it’s computers telling their own jokes.

Robot performing
Image from istockphoto

Computers can create art, stories, slogans and even magic tricks. But can computers perform themselves? Can robots invent their own jokes? Can they tell jokes?

Combining Artificial Intelligence, computational linguistics and humour studies (yes, you can study how to be funny!), a team of Scottish researchers made an early attempt at computerised standup comedy! They came up with Standup (System to Augment Non Speakers Dialogue Using Puns): a program that generates riddles for kids with language difficulties. Standup has a dictionary and joke-building mechanism, but does not perform, it just creates the jokes. You will have to judge for yourself whether the puns are funny. You can download the software from here. What makes a pun funny? It is about a word having two meanings at exactly the same time in a sentence. It is also about generating an expectation that you then break: a key idea at the core of creativity too.
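Just to show the flavour of pun-building (this toy sketch is nothing like the real Standup system), here are a few lines of Python that hunt for words with two meanings and slot them into a riddle template. You can judge the results for yourself:

```python
# A toy riddle builder, loosely inspired by the idea above (not the real
# Standup system): find words that have both an everyday meaning and a
# computing meaning, then drop one into a fixed template so the sentence
# carries two meanings at once. The word lists are made up.
EVERYDAY = {"mouse": "a small furry animal",
            "bug": "a creepy-crawly",
            "cookie": "a biscuit"}
COMPUTING = {"mouse": "a pointing device",
             "bug": "a mistake in a program",
             "server": "a computer that hands out web pages"}

TEMPLATE = "What is {everyday} but also {computing}? A {word}!"

for word in EVERYDAY.keys() & COMPUTING.keys():   # words with two meanings at once
    print(TEMPLATE.format(word=word,
                          everyday=EVERYDAY[word],
                          computing=COMPUTING[word]))
```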

A research team at Virginia Tech in the US created a system that started to learn about funny pictures. Having defined a ‘funniness score’ they created a computational model for humorous scenes, and trained it to predict funniness, perhaps with an eye to spotting pics for social media posting, or not.

But are there funny robots out there? Yes! RoboThespian, programmed by researchers at Queen Mary University of London, and Data, created by researchers at Carnegie Mellon University, are both robots programmed to do stand-up comedy. Data has a bank of jokes and responds to audience reaction. His developers don’t actually know what he will do when he performs, as he is learning all the time. At his first public gig, he got the crowd laughing, but his timing was poor. You can see his performance online, in a TED Talk.

RoboThespian did a gig at the London Barbican alongside human comedians. The performance was a live experiment to understand whether the robot could ‘work the audience’ as well as a human comedian. They found that even relatively small changes in the timing of delivery make a big difference to audience response.

What have these all got in common? Artificial Intelligence, machine learning and studies to understand what humour actually is are being combined to make something that is funny. Comedy is perhaps the pinnacle of creativity. It’s certainly not easy for a human to write even one joke, so think how hard it is to distil that skill into algorithms and train a computer to create loads of them.

You have to laugh!

Watch RoboThespian [EXTERNAL]

– Jane Waite, Queen Mary University of London, Summer 2017

Download Issue 22 of the cs4fn magazine “Creative Computing” here

Lots more computing jokes on our Teaching London Computing site

Sabine Hauert: Swarm Engineer

Based on a 2016 talk by Sabine Hauert at the Royal Society

Sabine Hauert is a swarm engineer. She is fascinated by the idea of making use of swarms of robots. Watch a flock of birds and you see that they have both complex and beautiful behaviours. It helps them avoid predators very effectively, for example, so much so that many animals behave in a similar way. Predators struggle to fix on any one bird in all the chaotic swirling. Sabine’s team at the University of Bristol are exploring how we can solve our own engineering problems: from providing communication networks in a disaster zone to helping treat cancer, all based on the behaviours of swarms of animals.

A murmuration: a flock of starlings

Sabine realised that flocks of birds have properties that are really interesting to an engineer. Their ability to scale is one. It is often easy to come up with solutions to problems that work in a small ‘toy’ system, but when you want to use it for real, the size of the problem defeats you. With a flock, birds just keep arriving, and the flock keeps working, getting bigger and bigger. It is common to see thousands of Starlings behaving like this – around Brighton Pier most winter evenings, for example. Flocks can even be of millions of birds all swooping and swirling together, never colliding, always staying as a flock. It is an engineering solution that scales up to massive problems. If you can build a system to work like a flock, you will have a similar ability to scale.

Flocks of birds are also very robust. If one bird falls out of the sky, perhaps because it is caught by a predator, the flock itself doesn’t fail, it continues as if nothing happened. Compare that to most systems humans create. Remove one component from a car engine and it’s likely that you won’t be going anywhere. This kind of robustness to failure is often really important.

Swarms are an example of emergent behaviour. If you look at just one bird you can’t tell how the flock works as a whole. In fact, each is just following very simple rules. Each bird just tracks the positions of a few nearest neighbours, using that information to make simple decisions about how to move. That is enough for the whole complex behaviour of the flock to emerge. Despite all that fast and furious movement, the birds never crash into each other. Fascinated, Sabine started to explore how swarms of robots might be used to solve problems for people.
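Here is a minimal Python sketch of those simple rules, in the style of the classic ‘boids’ flocking model. The numbers are invented and it is only an illustration, not Sabine’s actual system:

```python
import numpy as np

# A minimal boids-style sketch of the idea above: each bird only looks at a
# few nearest neighbours and follows simple rules (steer together, stay
# close, don't collide). All parameters are invented for illustration.
N, NEIGHBOURS, STEPS = 50, 5, 100
rng = np.random.default_rng(1)
pos = rng.uniform(0, 100, (N, 2))       # positions of the birds
vel = rng.uniform(-1, 1, (N, 2))        # their velocities

for _ in range(STEPS):
    for i in range(N):
        # Find the few nearest neighbours of bird i (index 0 is itself).
        dists = np.linalg.norm(pos - pos[i], axis=1)
        nearest = np.argsort(dists)[1:NEIGHBOURS + 1]
        # Rule 1 (alignment): steer towards the neighbours' average heading.
        vel[i] += 0.05 * (vel[nearest].mean(axis=0) - vel[i])
        # Rule 2 (cohesion): drift towards the neighbours' average position.
        vel[i] += 0.01 * (pos[nearest].mean(axis=0) - pos[i])
        # Rule 3 (separation): move away from any neighbour that is too close.
        too_close = nearest[dists[nearest] < 2.0]
        if len(too_close):
            vel[i] -= 0.05 * (pos[too_close].mean(axis=0) - pos[i])
    pos += vel

print("Spread of the flock after", STEPS, "steps:", pos.std(axis=0))
```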

Her first idea was to create swarms of flying robots to work as a communications network, providing wi-fi coverage in places it would otherwise be hard to set up a network. This might be a good solution in a disaster area, for example, where there is no other infrastructure, but communication is vital. You want it to scale over the whole disaster area quickly and easily, and it has to be robust. She set about creating a system to achieve this.

The robots she designed were very simple, fixed-wing, propellor-powered model planes. Each had a compass so it knew which direction it was pointing and was able to talk to those nearest using wi-fi signals. It could also tell who its nearest neighbours were. The trick was to work out how to design the behaviour of each individual robot so that appropriate swarming behaviour emerged. At any time each had to decide how much to turn to avoid crashing into another, while still maintaining the flock and its coverage. You could try to work out the best rules by hand. Instead, Sabine turned to machine learning.

“Throwing those flying robots

and seeing them flock

was truly magical”

The idea of machine learning is that instead of trying to devise algorithms that solve problems yourself, you write an algorithm for how to learn. The program then learns for itself, by trial and error, the best solution. Sabine created a simple first program for her robots that gave them fairly random behaviour. The machine learning program then used a process modelled on evolution to gradually improve it. After all, evolution worked for animals! The way this is done is that variations on the initial behaviour are trialled in simulators and only the most successful are kept. Further random changes are made to those and the new versions trialled again. This is continued over thousands of generations, each generation getting that little bit better at flocking, until eventually the individual robots’ behaviour leads to them swarming together.
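A toy version of that evolutionary loop might look like this in Python. The ‘simulator’ here is just a stand-in scoring function with an invented target, not a real flight simulator:

```python
import random

# A toy version of the evolutionary loop described above. The "behaviour"
# is just a list of numbers and simulate() is a stand-in scoring function;
# the real system evolved flight rules inside a proper simulator.
def simulate(behaviour):
    # Invented fitness: pretend the ideal behaviour has all values near 0.7.
    return -sum((b - 0.7) ** 2 for b in behaviour)

def mutate(behaviour):
    # Make a small random change to an existing behaviour.
    return [b + random.gauss(0, 0.05) for b in behaviour]

population = [[random.random() for _ in range(4)] for _ in range(20)]
for generation in range(200):
    scored = sorted(population, key=simulate, reverse=True)
    survivors = scored[:5]                                       # keep only the best
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=simulate)
print("Best evolved behaviour:", [round(b, 2) for b in best])
```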

Sabine has now moved on to thinking about a situation where swarms of trillions of individuals are needed: nanomedicine. She wants to create nanobots that are each smaller than the width of a strand of hair and can be injected into cancer patients. Once inside the body they will search out and stick themselves to tumour cells. The tumour cells gobble them up, at which point they deliver drugs directly inside the rogue cell. How do you make them behave in a way that gives the best cancer treatment though? For example, how do you stop them all just sticking to the same outer cancer cells? One way might be to give them a simple swarm behaviour that allows them to go to different depths and only then switch on their stickiness, allowing them to destroy all the cancer cells. This is the sort of thing Sabine’s team are experimenting with.

Swarm engineering has all sorts of other practical applications, and while Sabine is leading the way, some time soon we may need lots more swarm engineers, able to design swarm systems to solve specific problems. Might that be you?

Explore swarm behaviour using the Oxford Turtle system [EXTERNAL] (click the play button top centre) to see how to run a flocking simulation as well as program your own swarms.

Paul Curzon, Queen Mary University of London

What’s on your mind?

Telepathy is the supposed Extra Sensory Perception ability to read someone else’s mind at a distance. Whilst humans do not have that ability, brain-computer interaction researchers at Stanford have just made the high tech version a virtual reality.

Image by Andrei Cássia from Pixabay

It has long been known that by using brain implants or electrodes on a person’s head it is possible to tell the difference between simple thoughts. Thinking about moving parts of the body gives particularly useful brain signals. Thinking about moving your right arm generates different signals to thinking about moving your left leg, for example, even if you are paralysed so cannot actually move at all. Telling two different things apart is enough to communicate – it is the basis of binary and so how all computer-to-computer communication is done. This led to the idea of the brain-computer interface, where people communicate with and control a computer with their mind alone.
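Here’s a tiny sketch of why telling just two thoughts apart is enough. Call one thought 0 and the other 1, and eight detected ‘thoughts’ in a row become one letter (the list of thoughts below is made up, standing in for real brain signals):

```python
# Telling just two thoughts apart is enough to send anything: call one
# thought 0 and the other 1, then group the bits into letters. The
# "detected thoughts" below are invented, standing in for brain signals.
detected = ["left leg", "right arm", "left leg", "left leg",
            "right arm", "left leg", "left leg", "left leg"]   # 8 'thoughts'

bits = "".join("1" if thought == "right arm" else "0" for thought in detected)
letter = chr(int(bits, 2))      # 8 bits make one character code
print(bits, "->", letter)       # 01001000 -> H
```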

Stanford researchers made a big step forward in 2017, when they demonstrated that paralysed people could move a cursor on a screen by thinking of moving their hands in the appropriate direction. This created a point-and-click interface – a mind mouse – for the paralysed. Impressively, the speed and accuracy was as good as for people using keyboard applications.

Stanford researchers have now gone a step further and used the same idea to turn mental handwriting into actual typing. The person just thinks of writing letters with an imagined pen on imagined paper; the brain-computer interface picks up the thoughts of those subtle movements and the computer converts them into actual letters. Again the speed and accuracy is as good as most people manage when typing. The paralysed participant concerned could communicate 18 words a minute and made virtually no mistakes at all: when the system was combined with auto-correction software, like the kind we all now use to correct our typing mistakes, it got letters right 99% of the time.

The system has been made possible by advances in both neuroscience and computer science. Recognising the letters being mind-written involves distinguishing very subtle differences in patterns of neurons firing in the brain. Recognising patterns is, however, exactly what Machine Learning algorithms do. They are trained on lots of data and pick out patterns of similar data. If told what letter the person was actually trying to communicate, they can link that letter to the pattern detected. Each letter will not lead to exactly the same pattern of brain signals firing each time, but the patterns will largely clump together. Other letters will also group, but with slightly different patterns of firings. Once trained, the system works by taking the pattern of brain signals just seen and matching it to the nearest clump. The computer then guesses that the nearest clump is the letter being communicated. If the system is highly accurate, as this one was at 94% (before autocorrection), it means the patterns of most letters are very distinct: a mind-written letter rarely fell into the gap between clumps, where it could as easily have been one letter as another.
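Here is a small sketch of that ‘nearest clump’ idea with made-up numbers (it is not the Stanford system): average many noisy examples of each letter to find its clump centre, then read a new pattern by finding the closest centre:

```python
import numpy as np

# A sketch of the "nearest clump" idea described above, with made-up data.
# Each pattern of brain signals is reduced to a short list of numbers;
# in the real system these would come from 100 electrodes.
rng = np.random.default_rng(2)
centres = {"a": np.array([0.2, 0.8, 0.1]),    # invented 'typical' patterns
           "b": np.array([0.7, 0.3, 0.9]),
           "c": np.array([0.5, 0.5, 0.5])}

# Training: average many noisy examples of each letter to find its clump centre.
clumps = {letter: np.mean(rng.normal(centre, 0.05, (50, 3)), axis=0)
          for letter, centre in centres.items()}

def read_letter(new_pattern):
    # Guess the letter whose clump centre is nearest to the new pattern.
    return min(clumps, key=lambda letter: np.linalg.norm(new_pattern - clumps[letter]))

mind_written = rng.normal(centres["b"], 0.05)   # someone imagines writing 'b'
print("The computer reads:", read_letter(mind_written))
```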

So a computer-based “telepathy” is possible. But don’t expect us all to be able to communicate by mind alone over the internet any time soon. The approach involves having implants surgically inserted into the brain: in this case two computer chips connecting to your brain via 100 electrodes. The operation is a massive risk to take, and while perhaps justifiable for someone with a problem as severe as total paralysis, it is less obvious it is a good idea for anyone else. However, this shows that it is at least possible to communicate written messages by mind alone, and once developed further it could make life far better for severely disabled people in the future.

Yet again, science fiction is no longer fantasy: it is possible to communicate by the power of a person’s mind alone, just not quite in the way the science fiction writers originally imagined.

Paul Curzon, Queen Mary University of London, Spring 2021.

AI Detecting the Scribes of the Dead Sea Scrolls

Computer science and artificial intelligence have provided a new way to do science: it was in fact one of the earliest uses of the computer. They are now giving new ways for scholars to do research in other disciplines such as ancient history, too. Artificial Intelligence has been used in a novel way to help understand how the Dead Sea Scrolls were written, and it turns out scribes in ancient Judea worked in teams.

The Dead Sea Scrolls are a collection of almost a thousand ancient documents, written around two thousand years ago, that were found in caves near the Dead Sea. The collection includes the oldest known written version of the Bible.

The cave where most of the Dead Sea Scrolls were found.

Researchers from the University of Groningen used artificial intelligence techniques to analyse a digitised version of the longest scroll in the collection, known as the Great Isaiah Scroll. They picked one letter, aleph, that appears thousands of times through the document, and analysed it in detail.

Two kinds of artificial intelligence programs were used. The first, feature extraction, based on computer vision and image processing, was needed to recognise features in the images. At one level these features are the actual characters, but more subtly, the aim was for the features to correspond to ink traces produced by the actual muscle movements of the scribes.

The second was machine learning. Machine Learning programs are good at spotting patterns in data – grouping the data into things that are similar and things that are different. A typical text book example would be giving the program images of cats and of dogs. It would spot the patterns of features that correspond to dogs, and the different pattern of features that corresponds to cats and group each image into one or the other pattern.

Here the data was all those alephs, or more specifically the features extracted from them. Essentially the aim was to find patterns that were based on the muscle movements of the original scribe of each letter. To the human eye the writing throughout the document looks very, very uniform, suggesting a single scribe wrote the whole document. If that were the case, only one pattern would be found, containing all the letters, with no clear way to split them. Despite this, the artificial intelligence evidence suggests there were actually two scribes involved. There were two patterns.

The research team found, by analysing the way the letters were written, that there were two clear groupings of letters. One group were written in one way and the other in a slightly different way. There were very subtle differences in the way strokes were written, such as in their thickness and the positions of the connections between strokes. This could just be down to variations in the way a single writer wrote letters at different times. However, the differences were not random, but very clearly split at a point halfway through the scroll. This suggests there were two writers who each worked on the different parts. Because the characters were otherwise so uniform, those two scribes must have been making an effort to carefully mirror each other’s writing style so the letters looked the same to the naked eye.
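As an illustration of the clustering step (with invented feature values, not the Groningen team’s actual pipeline), here’s how two subtly different groups of alephs could be separated automatically:

```python
import numpy as np
from sklearn.cluster import KMeans

# A sketch of the clustering step with invented numbers. Each row stands
# for one aleph, described by two extracted features (say, average stroke
# thickness and the position of a join between strokes).
rng = np.random.default_rng(3)
first_half = rng.normal([1.00, 0.40], 0.03, (500, 2))    # invented 'scribe A' letters
second_half = rng.normal([1.04, 0.43], 0.03, (500, 2))   # invented 'scribe B' letters
alephs = np.vstack([first_half, second_half])

# Ask for two groups and see how well they line up with the two halves.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(alephs)
print("Cluster sizes:", np.bincount(labels))
print("Fraction of first-half alephs in their majority cluster:",
      max(np.mean(labels[:500] == 0), np.mean(labels[:500] == 1)))
```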

The research team have not only found out something interesting about the Dead Sea Scrolls, but also demonstrated a new way to study ancient handwriting. With a few exceptions, the scribes who wrote the ancient documents that have survived to the modern day, like the Dead Sea Scrolls, are generally anonymous, but thanks to leading-edge Computer Science, we have a new way to find out more about them.

Explore the digitised version of the Dead Sea Scrolls yourself at www.deadseascrolls.org.il

– Paul Curzon, Queen Mary University of London

Losing the match? Follow the science. Change the kit!

Artificial Intelligence software has shown that two different Manchester United gaffers got it right in believing that kit and stadium seat colours matter if the team are going to win.

It is 1996. Sir Alex Ferguson’s Manchester United are doing the unthinkable. At half time they are losing 3-0 to lowly Southampton. Then the team return to the pitch for the second half and they’ve changed their kit. No longer are they wearing their normal grey away kit but are in blue and white, and their performance improves (if not enough to claw back such a big lead). The match becomes infamous for that kit change: the genius gaffer blaming the team’s poor performance on their kit seemed silly to most. Just play better football if you want to win!

Jump forward to 2021, and Manchester United Manager Ole Gunnar Solskjaer, who originally joined United as a player in that same year, 1996, tells a press conference that the club are changing the stadium seats to improve the team’s performance!

Is this all a repeat of previously successful mind games to deflect from poor performances? Or superstition, dressed up as canny management, perhaps. Actually, no. Both managers were following the science.

Ferguson wasn’t just following some gut instinct, he had been employing a vision scientist, Professor Gail Stephenson, who had been brought in to the club to help improve the players’ visual awareness, getting them to exercise the muscles in their eyes not just their legs! She had pointed out to Ferguson that the grey kit would make it harder for the players to pick each other out quickly. The Southampton match was presumably the final straw that gave him the excuse to follow her advice.

She was very definitely right, and modern vision Artificial Intelligence technology agrees with her! Colours do make it easier or harder to notice things, and can slow decision making in a way that matters on the pitch. 25 years ago the problem was grey kit merging into the grey background of the crowd. Now it is that red shirts merge into the background of an empty stadium of red seats.

It is all about how our brain processes the visual world and the saliency of objects. Saliency is just how much an object stands out and that depends on how our brain processes information. Objects are much easier to pick out if they have high contrast, for example, like a red shirt on a black background.
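Here’s a very rough sketch of that contrast idea (nothing like the real DragonflyAI model): score each pixel of a scene by how different its brightness is from the average, so a light shirt on dark seats stands out:

```python
import numpy as np

# A very rough sketch of the contrast idea behind saliency (not the real
# DragonflyAI model): score each pixel by how far its brightness is from
# the scene's average brightness. The scene below is invented.
scene = np.full((8, 8), 0.1)        # dark background (e.g. empty red/black seats)
scene[3:5, 3:5] = 0.9               # one bright patch (a light-coloured shirt)

saliency = np.abs(scene - scene.mean())
most_salient = np.unravel_index(np.argmax(saliency), scene.shape)
print("Most eye-catching spot:", most_salient)   # the bright patch stands out
```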

Peter McOwan and Hamit Soyel at Queen Mary combined vision research and computer science, creating an Artificial Intelligence (AI) that sees like humans in the sense that it predicts what will and won’t stand out to us, doing it in real time (see DragonflyAI: I see what you see). They used the program to analyse images from that infamous football match before and after the kit change and showed that the AI agreed with Gail Stephenson and Alex Ferguson. The players really were much easier for their team mates to see in the second half (see the DragonflyAI version of the scenes below).

Dragonfly highlights areas of a scene that are more salient to humans so easier to notice. Red areas stand out the most. In the left image when wearing the grey kit, Ryan Giggs merges into the background. He is highly salient (red) in the right image where he is in the blue and white kit.

Details matter and science can help teams that want to win in all sorts of ways. That includes computer scientists and Artificial Intelligence. So if you want an edge over the opposition, hire an AI to analyse the stadium scene at your next match. Changing the colour of the seats really could make a difference.

Find out more about DragonflyAI: https://dragonflyai.co/ [EXTERNAL]

– Paul Curzon, Queen Mary University of London

DragonflyAI: I see what you see

What use is a computer that sees like a human? Can’t computers do better than us? Well, such a computer can predict what we will and will not see, and there is BIG money to be gained doing that!

The Hong Kong Skyline


Peter McOwan’s team at Queen Mary spent 10 years doing exploratory research understanding the way our brains really see the world, exploring illusions, inventing games to test the ideas, and creating a computer model to test their understanding. Ultimately they created a program that sees like a human. But what practical use is a program that mirrors the oddities of the way we see the world? Surely a computer can do better than us: noticing all the things that we miss or misunderstand? Well, for starters the research opens up exciting possibilities for new applications, especially for marketeers.

The Hong Kong Skyline as seen by DragonflyAI


A fruitful avenue to emerge is ‘visual analytics’ software: applications that predict what humans will and will not notice. Our world is full of competing demands, overloading us with information. All around us things vie to catch our attention, whether a shop window display, a road sign warning of danger or an advertising poster.

Imagine, a shop has a big new promotion designed to entice people in, but no more people enter than normal. No-one notices the display. Their attention is elsewhere. Another company runs a web ad campaign, but it has no effect, as people’s eyes are pulled elsewhere on the screen. A third company pays to have its products appear in a blockbuster film. Again, a waste of money. In surveys afterwards no one knew the products had been there. A town council puts up a new warning sign at a dangerous bend in the road but the crashes continue. These are examples of situations where predicting where people look in advance allows you to get it right. In the past this was either done by long and expensive user testing, perhaps using software that tracks where people look, or by having teams of ‘experts’ discuss what they think will happen. What if a program made the predictions in a fraction of a second beforehand? What if you could tweak things repeatedly until your important messages could not be missed?

Queen Mary’s Hamit Soyel turned the research models into a program called DragonflyAI, which does exactly that. The program analyses all kinds of imagery in real-time and predicts the places where people’s attention will, and will not, be drawn. It works whether the content is moving or not, and whether it is in the real world, completely virtual, or both. This then gives marketeers the power to predict and so influence human attention to see the things they want. The software quickly caught the attention of big, global companies like NBC Universal, GSK and Jaywing who now use the technology.

Find out more about DragonflyAI: https://dragonflyai.co/ [EXTERNAL]