Gladys West: Where’s my satellite? Where’s my child? #BlackHistoryMonth

Satellite image of the Earth at night

by Paul Curzon, Queen Mary University of London

Satellites are critical to much modern technology, and especially GPS. GPS allows our smartphones, laptops and cars to work out their exact position on the surface of the Earth. This is central to all mobile technology, wearable or not, that relies on knowing where you are, from plotting a route to your nearest Indian restaurant to telling you where a person you might want to meet is. Many, many people were involved in creating GPS, but it was only in Black History Month of 2017 that the critical part Gladys West played became widely known.

Work hard, go far

As a child Gladys worked with her family in the fields of their farm in rural Virginia. That wasn’t the life she wanted, so she worked hard through school, leaving as the top student. She won a scholarship to university, and then landed a job as a mathematician at a US navy base.

There she solved the maths problems behind the positioning of satellites. She worked closely with the programmers who wrote the code to do the calculations based on her maths. Nine times out of ten the results that came back weren’t exactly right, so much of her time was spent working out what was going wrong with the programs, as it was vital the results were very accurate.

Seasat and Geosat

Her work on the Seasat satellite won her a commendation. Seasat was a revolutionary satellite designed to remotely monitor the oceans. It collected data about things like temperature, wind speed and wind direction at the sea’s surface and the heights of waves, as well as sensing data about sea ice. This kind of remote sensing has since had a massive impact on our understanding of climate change. Gladys specifically worked on the satellite’s altimeter, a radar-based sensor that allowed Seasat to measure its precise distance from the surface of the ocean below. She continued this work on later remote sensing satellites too, including Geosat, another Earth observation satellite.

Gladys West and Sam Smith look over data from the Global Positioning System,
which Gladys helped develop. Photo credit US Navy, 1985, via Wikipedia.

GPS

Knowing the positions of satellites is the foundation for GPS. The way GPS works is that our mobile receivers pick up a timed signal from several different satellites. Calculating where we are can only be done if you first know very precisely where those satellites were when they sent the signal. That is what Gladys’ work provided.
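
To make that idea concrete, here is a toy sketch of a position fix (illustrative only, with invented numbers: real GPS works in 3D, uses at least four satellites, and must also solve for the receiver’s clock error). Given satellite positions and signal travel times, the receiver must sit where the distance circles cross:

```python
# Toy 2D "GPS" position fix: distances come from signal travel time,
# and the circle equations are linearised so they can be solved directly.
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def locate(sat_positions, travel_times):
    """Estimate a 2D position from three or more satellite fixes."""
    sats = np.asarray(sat_positions, dtype=float)
    dists = SPEED_OF_LIGHT * np.asarray(travel_times, dtype=float)
    # Subtracting the first circle equation from the others leaves
    # a linear system A @ position = b.
    A = 2.0 * (sats[1:] - sats[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(sats[1:] ** 2, axis=1) - np.sum(sats[0] ** 2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Three made-up satellites; the receiver is really at (0, 0).
sats = [(20e6, 0.0), (0.0, 20e6), (-15e6, -15e6)]
times = [np.linalg.norm(s) / SPEED_OF_LIGHT for s in sats]
print(locate(sats, times))  # approximately [0. 0.]
```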

GPS Watches

You can now, for example, buy GPS watches, allowing you to wear a watch that watches where you are. They can also be used by people with dementia, who have serious memory problems, allowing their carers to find them if they go out on their own and become confused about where they are. They also allow parents to know where their kids are all the time. Do you think that’s a good use?

Since so much technology now relies on knowing exactly where we are, Gladys’ work has had a massive impact on all our lives.

This article was originally published on the CS4FN website and a copy can also be found on page 14 of Issue 25 of CS4FN, “Technology worn out (and about)”, on wearable computing, which can be downloaded as a PDF, along with all our other free material, here: https://cs4fndownloads.wordpress.com/

This article is also republished during Black History Month and is part of our Diversity in Computing series, celebrating the different people working in computer science (Gladys West’s page).


This blog is funded through EPSRC grant EP/W033615/1.

Microwave health check – using wearable tech to monitor elite athletes’ health

Microwave health check

by Tina Chowdhury, Institute of Bioengineering, School of Engineering and Materials Science, Queen Mary University of London

Black and white photo of someone sweating after exertion
Image by un-perfekt from Pixabay

Microwaves aren’t just useful for cooking your dinner. Passing through your ears, they might help check your health in future, especially if you are an elite athlete. Bioengineer Tina Chowdhury tells us about her multidisciplinary team’s work with the National Physical Laboratory (NPL).

Lots of wearable gadgets work out things about us by sensing our bodies. They can tell who you are just by tapping into your biometric data, like fingerprints, the features of your face or the patterns in your eyes. They can even do some of this remotely, without you knowing you’ve been identified. Smart watches and fitness trackers tell you how fast you are running, how fit you are, whether you are healthy, how many calories you have burned and how well you are sleeping (or not sleeping). They can also work out things about your heart, like how well it beats. This is done using optical sensor technology: shining light at your skin and measuring how much of it is scattered by the blood flowing through it.
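
To get a feel for how that optical trick becomes a number, here is a deliberately simple sketch (every value is invented, and real devices do far more filtering): it counts the peaks in a made-up pulse-like brightness signal to estimate a heart rate.

```python
# Toy pulse-rate estimate from a PPG-like optical signal: count the peaks.
import math

SAMPLE_RATE = 50                      # samples per second (invented)
PULSE_HZ = 1.2                        # a 72 beats-per-minute "pulse"
signal = [math.sin(2 * math.pi * PULSE_HZ * t / SAMPLE_RATE)
          for t in range(10 * SAMPLE_RATE)]  # 10 seconds of samples

# A sample counts as a "beat" if it is a local maximum above a threshold.
peaks = sum(1 for i in range(1, len(signal) - 1)
            if signal[i - 1] < signal[i] > signal[i + 1] and signal[i] > 0.5)

minutes = len(signal) / SAMPLE_RATE / 60
print(f"estimated heart rate: {peaks / minutes:.0f} bpm")  # ~72
```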

Microwave Sensors

With PhD student Wesleigh Dawsmith and electronic engineer Rob Donnan, a specialist in microwaves and antennas, we are working on a different kind of sensor to check the health of elite athletes. Instead of visible light we use invisible microwaves, the kind of radiation that gives microwave ovens their name. Microwave-based wearables have the potential to provide real-time information about how our bodies are coping under stress, such as when we are exercising: a health check without having to go to hospital. The technology uses a microwave antenna and wireless circuitry to measure how much of the microwaves are absorbed as they pass through the ear lobe. The amount absorbed is linked to dehydration as we sweat and overheat during exercise. We can also use the microwave sensor to track important biomarkers like glucose, sodium, chloride and lactate, which can be signs of dehydration and give early warnings of illnesses like diabetes. The sensor sounds an alarm telling the person that they need medication, or that they are getting dehydrated and should drink some water.
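
The alarm logic might look something like this rough sketch (not the team’s real code: the threshold, smoothing window and readings are all invented for illustration):

```python
# Illustrative dehydration alert from a stream of absorption readings.
from collections import deque

THRESHOLD = 0.42   # hypothetical absorption level meaning "dehydrated"
WINDOW = 10        # smooth out noise by averaging the last 10 readings

def monitor(readings):
    recent = deque(maxlen=WINDOW)
    for value in readings:
        recent.append(value)
        if len(recent) == WINDOW and sum(recent) / WINDOW > THRESHOLD:
            yield "ALERT: possible dehydration - drink some water"

# Fake data: absorption creeping up as an athlete sweats.
simulated = [0.30 + 0.002 * i for i in range(100)]
print(next(monitor(simulated)))
```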

Making it work

We are working with Richard Dudley at NPL to turn these ideas into a wearable, microwave-based dehydration tracker. His team has spent eight years working on HydraSense, a device that clips onto the ear lobe, measuring microwaves with a flexible antenna earphone.

Blue and yellow sine wave patterns representing light
Image by Gerd Altmann from Pixabay

A big question is whether the ear device will prove practical to wear while actually doing exercise, for example keeping good enough contact with the skin. Another is whether it can be made fashionable, perhaps worn as jewellery. A third issue is that the system is designed for athletes, but most people are not professional athletes doing strenuous exercise. Will the technology work for people just living their normal day-to-day lives too? In that everyday situation, sensing microwave dynamics in the ear lobe may not turn out to be as good as an all-in-one solution that tracks your biometrics for the entire day. The long term aim is to develop health wearables that bring together lots of different smart sensors, all packaged into a small space like a watch, that can help people in all situations, sending them real-time alerts about their health.

This article was originally published on the CS4FN website and a copy can also be found on page 8 of Issue 25 of CS4FN, “Technology worn out (and about)”, on wearable computing, which can be downloaded as a PDF, along with all our other free material, here: https://cs4fndownloads.wordpress.com/

This blog is funded through EPSRC grant EP/W033615/1.

Dressing it up

Why it might be good for robots to wear clothes

by Peter W McOwan and the CS4FN team, Queen Mary University of London

Updated from the archive

(Robot) dummies in different clothes standing in a line up a slope
Image by Peter Toporowski from Pixabay 

Even though most robots still walk around naked, the Swedish Institute of Computer Science (SICS) in Stockholm explored how to produce fashion-conscious robots.

The applied computer scientists there were looking for ways to make the robots of today easier for us to get along with. As part of the LIREC project to build the first robot friends for humans, they examined how our views of simple robots change when we can clothe and customise them. Does this make the robots more believable? Do people want to interact more with a fashionable robot?

How do you want it?

These days most electronic gadgets allow the human user to customise them. For example, on a phone you can change the background wallpaper or colour scheme, the ringtone or how the menus work. The ability of the owner to change the so-called ‘look and feel’ of software is called end-user programming. It’s essentially up to you how your phone looks and what it does.

Dinosaurs waking and sleeping

The Swedish team began by taking current off-the-shelf robots and adding dress-up elements to them. Enter Pleo, a toy dinosaur ‘pet’ able to learn as you play with it. Now add in that fashion twist. What happens when you can play dress up with the dinosaur? Pleo’s costumes change its behaviour, kind of like what happens when you customise your phone. For example, if you give Pleo a special watchdog necklace the robot remains active and ‘on guard’. Change the costume from necklace to pyjamas, and the robot slowly switches into ‘sleep’ mode. The costumes or accessories you choose communicate electronically with the robot’s program, and its behaviour follows suit in a way you can decide. The team explored whether this changed the way people played with them.
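
In software terms the mechanism is simple. Here is a hedged sketch of one way it might work (the tag names and modes are invented for illustration, not Pleo’s real protocol):

```python
# A costume's electronic tag selects the robot's behaviour mode.
COSTUME_MODES = {
    "watchdog_necklace": "guard",  # stay active and 'on guard'
    "pyjamas": "sleep",            # slowly switch into sleep mode
}

def on_costume_detected(tag, current_mode="awake"):
    mode = COSTUME_MODES.get(tag, current_mode)
    print(f"Detected '{tag}': behaviour mode is now '{mode}'")
    return mode

on_costume_detected("pyjamas")     # -> sleep
on_costume_detected("party_hat")   # unknown tag: behaviour unchanged
```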

Clean sweeps

In another experiment the researchers played dress up with a robot vacuum cleaner. The cleaner rolls around the house sweeping the floor, and had already proven a hit with many consumers. It bleeps happily as its on-board computer works out the best path to bust your carpet dust. The SICS team gave the vacuum a special series of stick-on patches, which could add to its basic programming. They found that choosing the right patch could change the way the humans perceive the robot’s actions. Different patches can make humans think the robot is curious, aggressive or nervous. There’s even a shyness patch that makes the robot hide under the sofa.

What’s real?

If humans are to live in a world populated by robots there to help them, the robots need to be able to play by our rules. Humans have whole parts of their brains given over to predicting how other humans will react. For example, we can empathise with others because we know that other beings have thoughts like ours, and we can imagine what they think. This often spills over into anthropomorphism, where we give human characteristics to non-human animals or non-living things. Classic examples are where people believe their car has a particular personality, or think their computer is being deliberately annoying – they are just machines, but our brains tend to attach motives to the behaviours we see.

Real-er robots?

Robots can produce very complex behaviours depending on the situations they are in and the ways we have interacted with them, which creates the illusion that they have some sort of ‘personality’ or motives in the way they are acting. This can help robots seem more natural and able to fit in with the social world around us. It can also improve the ways they provide us with assistance, because they seem that bit more believable. Projects like SICS’s ‘actDresses’ help by providing new ways for human users to customise the actions of their robots in a very natural way – in this case by getting the robots to dress for the part.




This blog is funded through EPSRC grant EP/W033615/1.

The naked robot

by Paul Curzon, Queen Mary University of London

From the archive

A naked robot holding a flower
Image by bamenny from Pixabay 

Why are so many film robots naked? We take it for granted that robots don’t wear clothes, and why should they?

They are machines, not humans, after all. On the other hand, the quest to create artificial intelligence involves trying to create machines that share the special ingredients of humanity. One of the things that is certainly special about humans in comparison to other animals is the way we like to clothe and decorate our bodies. Perhaps we should think some more about why we do it but the robots don’t!

Shame or showoff?

The creation story in the Christian Bible suggests humans were thrown out of the Garden of Eden when Adam and Eve felt the need to cover up – when they developed shame. Humans usually wear more than just the bare minimum though, so wearing clothing can’t be all about shame. Nor is it just about practicalities like keeping warm. Turn up at an interview covering your body with the wrong sort of clothes and you won’t get the job. Go to a fancy dress party in the clothes that got you the job and you will probably feel really uncomfortable the moment you see that everyone else is wearing costumes. Clothes are about decorating our bodies as much as covering them.

Our urge to decorate our bodies certainly seems to be a deeply rooted part of what makes us human. After all, anthropologists consider finds like ancient beads as the earliest indications of humanity evolving from apehood. It is taken as evidence that there really was someone ‘in there’ back then. Body painting is used as another sign of our emerging humanity. We still paint our bodies millennia later too. Don’t think we’re only talking about children getting their faces painted – grownups do it too, as the vast make-up industry and the popularity of tattoos show. We put shiny metal and stones around our necks and on our hands too.

The fashion urge

Whatever is going on in our heads, clearly the robots are missing something. Even in the movies the intelligent ones rarely feel the need to decorate their bodies. R2D2? C3PO? Wall-E? The exceptions are the ones created specifically to pass themselves off as human, like in Blade Runner.

You can of course easily program a robot to ‘want’ to decorate itself, or to refuse to leave its bedroom unless it has managed to drape some cloth over its body and shiny wire round its neck, but if it was just following a programmed rule would that be the same as when a human wears clothes? Would it be evidence of ‘someone in there’? Presumably not!

We do it because of an inner need to conform more than an inner need to wear a particular thing. That is what fashion is really all about. Perhaps programming an urge to copy others would be a start. In Wall-E, the robot shows early signs of this as he tries to copy what he sees the humans doing in the old films he watches. At one point he even uses a hubcap as a prop hat for a dance. Human decoration may have started as a part of rituals too.

Where to now?

Is this need to decorate our bodies something special, something linked to what makes us human? Should we be working on what might lead to robots doing something similar of their own accord? When archaeologists are hunting through the rubble in thousands of years’ time, will there be something other than beads that would confirm their robot equivalent to self-awareness? If robots do start to decorate and cover up their bodies because they want to rather than because it was what some God-like programmer coded them to do, surely something special will have happened. Perhaps that will be the point when the machines have to leave their Garden of Eden too.




This blog is funded through EPSRC grant EP/W033615/1.

Shirts that keep score

by the CS4FN team, Queen Mary University of London

From the archive

Basketball player with shirt in mouth
Image by 愚木混株 Cdd20 from Pixabay 

When you are watching a sport in person, a quick glance at the scoreboard should tell you everything you need to know about what’s going on. But why not try to put that information right in the action? How much better would it be if all the players’ shirts could display not just the score, but how well each individual is doing?

Light up, light up

An Australian research group from the University of Sydney has made it happen. They rigged up two basketball teams’ shirts with displays that showed instant information as they played one another. The players (and everyone else watching the game) could see information that usually stays hidden, like how many fouls and points each player had. The displays were simple coloured bands in different places around the shirt, all connected up with tiny wires sewn into the shirts like thread. For every point a player got, for example, one of the bands on the player’s waist would light up. Each foul a player got made a shoulder band light up. There was also a light on players’ backs reserved for the leading team. Take the lead and all your team’s lights turned on, but lose it again and they went dark with defeat.
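
A sketch of that display logic might look like the following (the names are invented for illustration; this is not the Sydney team’s actual software):

```python
# Each shirt mirrors one player's stats as bands of light.
from dataclasses import dataclass

@dataclass
class ShirtDisplay:
    points: int = 0             # one waist band lit per point
    fouls: int = 0              # one shoulder band lit per foul
    team_leading: bool = False  # back light on while the team leads

    def light_state(self):
        return {"waist_bands": self.points,
                "shoulder_bands": self.fouls,
                "back_light": self.team_leading}

shirt = ShirtDisplay()
shirt.points += 1          # the player scores a point
shirt.team_leading = True  # and their team takes the lead
print(shirt.light_state())
```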

Sweaty but safe

All those displays were controlled by an on-board computer that each player harnessed to his or her body. That computer, in turn, was wirelessly connected to a central computer that kept track of winners, losers, fouls and baskets. The designers had to be careful about certain things, though. In case a player fell over and crushed their computer, the units were designed with ‘weak spots’ on purpose so they would detach rather than crumple underneath the player. And, since no one wants to get electrocuted while playing their favourite sport, the designers protected all the gear against moisture and sweat.

Keeping your head in the game

In the end, it was the audience at the game who got the most out of the system. They were able to track the players more closely than they normally would, and it helped those in the crowd who didn’t know much about basketball to understand what was going on. The players themselves had less time to think about what was on everyone’s clothes, as they were busy playing the game, but the system did help them a few times. One player said that she could see when her teammate had a high score, “and it made me want to pass to her more, as she had a ‘hot hand'”. Another said that it was easier to tell when the clock was running down, so she knew when to play harder. Plus, just seeing points on their shirts gave the players more confidence.

There’s so much information available to you when you watch a game on television that, in a weird way, actually being in the stadium could make you less informed. Maybe in the future, the fans in the stands will see everything the TV audience does as well, when the players wear all their statistics on their shirts! We’ll see what the sponsors think of that…




This blog is funded through EPSRC grant EP/W033615/1.

Full metal jacket: the fashion of Iron Man

by Peter W McOwan and Paul Curzon, Queen Mary University of London

Spoiler Alert

Industrialist Tony Stark always dresses for the occasion, even when that particular occasion happens to be a fight with the powers of evil. His clothes are driven by computer science: the ultimate in wearable computing.

In the Iron Man comic and movie franchise Anthony Edward Stark, Tony to his friends, becomes his crime fighting alter ego by donning his high tech suit. The character was created by Marvel comic legend Stan Lee and first hit the pages in 1963. The back story tells how industrial armaments engineer and international playboy Stark is kidnapped and forced to work to develop new forms of weapons, but instead manages to escape by building a flying armoured suit.

Though the escape is successful, Stark suffers a major heart injury during the kidnap ordeal, becoming dependent on technology to keep him alive. The experience forces him to reconsider his life, and the crime avenging Iron Man is born. Lee’s ‘businessman superhero’ has proved extremely popular and in recent years the Iron Man movies, starring Robert Downey Jr, have been box office hits. But as Tony himself would be the first to admit, there is more than a little computer science supporting Iron Man’s superhero standing.

Suits you

The Iron Man suit is an example of a powered exoskeleton. The technology surrounding the wearer amplifies the movement of the body, a little like a wearable robot. This area of research is often called ‘human performance augmentation’ and there are a number of organisations interested in it, including universities and, unsurprisingly, defence companies like Stark Industries. Their researchers are building real exoskeletons which have powers uncannily like those of the Iron Man suit.

To make the exoskeleton work the technology needs to be able to accurately read the exact movements of the wearer, then have the robot components duplicate them almost instantly. Creating this fluid mechanical shadow means the exoskeleton needs to contain massive computing power, able to read the forces being applied and convert them into signals to control the robot servo motors without any delay. Slow computing would cause mechanical drag for the wearer, who would feel like they were wading through treacle. Not a good idea when you’re trying to save the world.

Pump it up

Humans move by using their muscles in what are called antagonistic pairs. There are always two muscles on either side of the joint that pull the limb in different directions. For example, in your upper arm there are the muscles called the biceps and the triceps. Contracting the biceps muscle bends your elbow up, and contracting your triceps straightens your elbow back. It’s a clever way to control biological movement using just a single type of shortening muscle tissue rather than needing one kind that shortens and another that lengthens.

In an exoskeleton, the robot actuators (the things that do the moving) take the place of the muscles, and we can build these to move however we want. But as the robot’s movements need to shadow the person’s movements inside, the computer needs to understand how humans move. As the human bends their elbow to lift an object, sensors in the exoskeleton measure the forces applied, and the onboard computer calculates how to move the exoskeleton to minimise the resulting strain on the person’s hand. In strength amplifying exoskeletons the actuators are high pressure hydraulic pistons, meaning the human operators can lift considerable weight. The hydraulics support the load; the human’s movements provide the control.

I knew you were going to do that

It is important that the human user doesn’t need to expend any effort in moving the exoskeleton; people get tired very easily if they have to counteract even a small but continual force. To allow this to happen the computer system must ensure that all the sensors read zero force whenever possible. That way the robot does the work and the human is just moving inside the frame. The sensors can take thousands of readings per second from all over the exoskeleton: arms, legs, back and so on.

This information is used to predict what the user is trying to do. For example, when you are lifting a weight the computer begins by calculating where all the various exoskeleton ‘muscles’ need to be to mirror your movements. Then the robot arm is instructed to grab the weight before the user exerts any significant force, so you get no strain but a lot of gain.
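
As a very rough illustration of that ‘zero force’ goal, here is a toy proportional controller (the gain and values are invented; a real exoskeleton’s control system is vastly more sophisticated):

```python
# Drive the actuator so the force the wearer feels decays towards zero.
def control_step(measured_force, gain=0.8):
    # Move the frame in the direction the wearer is pushing, in
    # proportion to how hard they are pushing.
    return gain * measured_force

force = 10.0  # newtons the wearer momentarily exerts on the frame
for step in range(5):
    actuator_move = control_step(force)
    force -= actuator_move  # moving with the wearer relieves the force
    print(f"step {step}: force on wearer = {force:.2f} N")
```

Each loop iteration the frame moves with the wearer, so the sensor reading shrinks rapidly towards zero: the robot does the work and the human just moves inside it.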

Flight suit?

Exoskeleton systems exist already. Soldiers can march further with heavy packs by having an exoskeleton provide some extra mechanical support that mimics their movements. There are also medical applications that help paralysed patients walk again. Sadly, current exoskeletons still don’t have the ability to let you run faster or do other complex activities like fly.

Flying is another area where the real trick is in the computer programming. Iron Man’s suit is covered in smart ‘control surfaces’ that move under computer control to allow him to manoeuvre at speed. Tony Stark controls his suit through a heads-up display and voice control in his helmet, technology that at least we do have today. Could we have fully functional Iron Man suits in the future? It’s probably just a matter of time, technology and computer science (and visionary multi-millionaire industrialists too).




This blog is funded through EPSRC grant EP/W033615/1.

Let buttons be buttons

by Paul Curzon, Queen Mary University of London

Assorted buttons including Rebecca Stewart's integrated circuit button
Image by Melly95 from Pixabay with added integrated circuit button by Rebecca Stewart

We are used to the idea that we use buttons with electronics to switch things on and off, but Rebecca Stewart and Sophie Skach decided to use real buttons in the old-fashioned sense: as a fashionable way to fasten up clothes.

Rebecca created integrated circuit buttons – electronics, sensors and a battery inside an actual button. Sophie then built them into a stylish jacket that included digital embroidery, embedding lighting and the circuitry to control it into the fabric of the jacket.

How do you control the light effects?

You just button and unbutton the jacket, of course!
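
One way the control logic could work, as a hedged sketch (the pattern names are invented; this is not the real jacket’s firmware):

```python
# Pick a light pattern from how many smart buttons are fastened.
PATTERNS = ["off", "slow pulse", "ripple", "sparkle", "full glow"]

def light_pattern(buttons_fastened):
    return PATTERNS[min(buttons_fastened, len(PATTERNS) - 1)]

print(light_pattern(0))  # jacket open     -> off
print(light_pattern(4))  # fully buttoned  -> full glow
```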


Design your own

If you are interested in fashion design, why not design a jacket, dress or shirt of your own that uses wearable technology? What would it do and how would you control it?



This blog is funded through EPSRC grant EP/W033615/1.

CS4FN Advent – Day 2 – Pairs: mittens, gloves, pair programming, magic tricks

Welcome to the second ‘window’ of the CS4FN Christmas Computing Advent Calendar. The picture on the ‘box’ was a pair of mittens, so today’s focus is on pairs, and a little bit on gloves. Sadly no pear trees though.

A pair of cyan blue Christmas mittens with a black and white snowflake pattern on each.

1. i-pickpocket

In this article, by a pair (ho ho) of computer scientists (Jane Waite and Paul Curzon), you can find out how paired devices can be used to steal money from people, picking pockets at a distance.

Credit cards in denim jeans. Image by TheDigitalWay from Pixabay

A web card for the i-pickpocket article on the CS4FN website.

2. Gestural gloves

Working with scientists, musician Imogen Heap developed Mi.Mu gloves, a wearable musical instrument in glove form that lets the wearer map hand movements (gestures) to particular musical effects (pairing a gesture to an action). The gloves contain sensors that measure the speed and position of the hands and send this information wirelessly to a controlling computer, which then triggers the sound effect that the musician previously mapped to that hand movement.
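
At heart, the pairing is just a mapping from gesture to action. A minimal sketch (these gesture and effect names are invented, not the real Mi.Mu software):

```python
# Each recognised gesture is paired with a musical action.
GESTURE_ACTIONS = {
    "closed_fist": "start_recording_loop",
    "open_palm": "stop_recording_loop",
    "wrist_flick": "add_reverb",
}

def on_gesture(gesture):
    action = GESTURE_ACTIONS.get(gesture)
    if action is not None:
        print(f"gesture '{gesture}' triggers '{action}'")

on_gesture("wrist_flick")
```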

You can watch Imogen talk about and demo the gloves here and in the video below, which also looks at the ways in which the gloves might help disabled people to make music.

Further reading

The glove that controls your cords… (a CS4FN article by Jane Waite)

3. Pair programming

‘Pair programming’ involves two people working together at one computer to write and edit code. One person is the ‘Driver’, who writes the code and explains what it’s going to do; the other is the ‘Navigator’, who observes and makes suggestions and corrections. This brings two different perspectives to the same code, which is edited, reviewed and debugged in real time. Importantly, the two people in the mini-team switch roles regularly. Pair programming is widely used in industry and increasingly in the classroom – it can really help people who are learning to program to talk through what they’re doing with someone else (you may have done this yourself in class). Some people prefer to work by themselves, and pair programming takes up two people’s time instead of one, but it can also produce better code with fewer bugs. It does need good communication between the two people working on the task though (and good communication is a very important skill in computer science!).

Here’s a short video from Code.org which shows how it’s done.

4. Digital Twins

A digital twin is a computer-based model that represents a real, physical thing (such as a jet engine or car component) and which behaves as closely as possible to the real thing. Taking information from the real-world version and applying it to the digital twin lets engineers and designers test things virtually, to see how the physical object would behave under different circumstances and to help spot (and fix) problems.
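
Here is a toy sketch of the idea (an invented engine model, far simpler than any real digital twin): the twin mirrors real sensor readings and lets you try things virtually that would be risky to try on the physical machine.

```python
# A virtual engine that mirrors real sensor data and answers what-ifs.
class EngineTwin:
    MAX_SAFE_TEMP = 900.0  # invented safety limit, degrees Celsius

    def __init__(self):
        self.temperature = 0.0

    def sync(self, sensor_reading):
        """Mirror the latest reading from the physical engine."""
        self.temperature = sensor_reading

    def safe_to_increase_load(self, expected_rise):
        """Test virtually what would be risky to try for real."""
        return self.temperature + expected_rise < self.MAX_SAFE_TEMP

twin = EngineTwin()
twin.sync(850.0)                        # reading from the real engine
print(twin.safe_to_increase_load(100))  # -> False: don't risk it
```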


5. A magic trick: two cards make a pair

You will need

  • some playing cards
  • your hands (no mittens)
  • another pair of mitten-free hands to do the trick on

Find a pack of cards and take out 15 (it doesn’t matter which ones: pick a card, any card, but 15 of them). Ask someone to put their hands on a table with their fingers spread as if they’re playing a piano. You are going to do a magic trick that involves slotting pairs of cards between their fingers (10 fingers gives 8 spaces). As you do this you’ll both say “two cards make a pair”. Take the first pair and slot them into the first space on their left hand (between their little finger and their ring finger), and both of you say “two cards make a pair”.

The magician puts pairs of cards between the assistant’s fingers.

Repeat with another pair of cards between ring finger and middle finger (“two cards make a pair”) and twice again between middle and index, and between index and thumb – saying “two cards make a pair” each time you do. You’ve now got 8 cards in 4 pairs in their left hand.

Repeat the same process on their right hand saying “two cards make a pair” each time (but you only have 7 cards left so can only make 3 pairs). There’s one card left over which can go between their index finger and thumb.

The magician removes the cards and puts them into two piles.

Then you’ll take back each pair of cards and lay them on the table, separating them into two different piles – each time saying “two cards make a pair”. Again you’ll have one left over. Ask the person to choose which pile it goes on. You, the magician, are going to magically move the card from the pile they’ve chosen to the other pile, but you’re going to do it invisibly by hiding the card in your palm (‘palming’). To find out how to do the trick, and how this can be used to think about the ways in which “self-working” magic tricks are like algorithms, have a look at the full instructions and video below.

6. Something to print and colour in

Did you work out yesterday’s colour-in puzzle from Elaine Huen? Here’s the answer.

Today’s Christmas colour-in puzzle is by Elisa Huen. A clue is “helps deliver the Christmas presents”. (The answer will be in Day 3 of the CS4FN advent calendar, tomorrow).

The creation of this post was funded by UKRI, through grant EP/K040251/2 held by Professor Ursula Martin, and forms part of a broader project on the development and impact of computing.

What’s on your mind?

Telepathy is the supposed extra-sensory perception ability to read someone else’s mind at a distance. Whilst humans do not have that ability, brain-computer interaction researchers at Stanford have just made the high-tech version a (virtual) reality.

Image by Andrei Cássia from Pixabay

It has long been known that by using brain implants or electrodes on a person’s head it is possible to tell the difference between simple thoughts. Thinking about moving parts of the body gives particularly useful brain signals. Thinking about moving your right arm generates different signals to thinking about moving your left leg, for example, even if you are paralysed so cannot actually move at all. Telling two different things apart is enough to communicate – it is the basis of binary and so of all computer-to-computer communication. This led to the idea of the brain-computer interface, where people communicate with and control a computer by mind alone.
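
To see why two distinguishable thoughts are enough, here is a tiny sketch: treat one thought as 0 and the other as 1, and any letter can be spelt out as five of them (a toy a-z code for illustration, not a real brain-computer interface protocol):

```python
# Two distinguishable signals are enough to send any message in binary.
def encode(message):
    """Each letter a-z becomes five binary 'thoughts'."""
    return [format(ord(c) - ord("a"), "05b") for c in message.lower()]

def decode(bit_strings):
    return "".join(chr(int(bits, 2) + ord("a")) for bits in bit_strings)

signals = encode("hi")   # -> ['00111', '01000']
print(decode(signals))   # -> hi
```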

Stanford researchers made a big step forward in 2017, when they demonstrated that paralysed people could move a cursor on a screen by thinking of moving their hands in the appropriate direction. This created a point-and-click interface – a mind mouse – for the paralysed. Impressively, the speed and accuracy was as good as for people using keyboard applications.

Stanford researchers have now gone a step further, using the same idea to turn mental handwriting into actual typing. The person just thinks of writing letters with an imagined pen on imagined paper; the brain-computer interface picks up the thoughts of those subtle movements and the computer converts them into actual letters. Again the speed and accuracy is as good as most people’s typing. The paralysed participant concerned could communicate 18 words a minute and made virtually no mistakes at all: when the system was combined with auto-correction software, like we all now use to correct our typing mistakes, it got letters right 99% of the time.

The system has been made possible by advances in both neuroscience and computer science. Recognising the letters being mind-written involves distinguishing very subtle differences in the patterns of neurons firing in the brain. Recognising patterns is, however, exactly what machine learning algorithms do. They are trained on lots of data and pick out patterns of similar data. If told what letter the person was actually trying to communicate, they can link that letter to the pattern detected. Each letter will not produce exactly the same pattern of brain signals every time, but the patterns for one letter will largely clump together, while other letters form clumps with slightly different patterns of firings. Once trained, the system works by taking the pattern of brain signals just seen and matching it to the nearest clump: the computer guesses that the nearest clump is the letter being communicated. The fact that the system was highly accurate, at 94% before autocorrection, means the patterns of most letters are very distinct: a mind-written letter rarely fell into a gap between brain patterns, where it could as easily have been one letter as another.
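
A minimal sketch of that ‘nearest clump’ idea, a nearest-centroid classifier (the two-number ‘brain patterns’ here are made up; real recordings are vastly higher-dimensional):

```python
# Nearest-centroid classification of made-up "brain patterns".
import numpy as np

def train(examples):
    """Average each letter's training patterns into one centroid."""
    return {letter: np.mean(patterns, axis=0)
            for letter, patterns in examples.items()}

def classify(centroids, pattern):
    """Guess the letter whose centroid is nearest the new pattern."""
    return min(centroids,
               key=lambda letter: np.linalg.norm(centroids[letter] - pattern))

examples = {
    "a": [np.array([1.0, 1.1]), np.array([0.9, 1.0])],
    "b": [np.array([4.0, 3.9]), np.array([4.1, 4.0])],
}
centroids = train(examples)
print(classify(centroids, np.array([1.05, 0.95])))  # -> a
```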

So computer-based “telepathy” is possible. But don’t expect us all to be able to communicate by mind alone over the internet any time soon. The approach involves having implants surgically inserted into the brain: in this case two computer chips connecting to the brain via 100 electrodes. The operation is a massive risk to take, and while perhaps justifiable for someone with a problem as severe as total paralysis, it is less obviously a good idea for anyone else. However, this shows it is at least possible to communicate written messages by mind alone, and once developed further it could make life far better for severely disabled people in the future.

Yet again science fiction is no longer fantasy: it is possible, just not, perhaps, in the way the science fiction writers originally imagined – by the power of a person’s mind alone.

Paul Curzon, Queen Mary University of London, Spring 2021.