Microwave health check

Using wearable tech to monitor elite athletes’ health

Microwaves aren’t just useful for cooking your dinner. Passed through your ears, they might help check your health in future, especially if you are an elite athlete. Bioengineer Tina Chowdhury tells us about her multidisciplinary team’s work with the National Physical Laboratory (NPL).

Lots of wearable gadgets work out things about us by sensing our bodies. They can tell who you are just by tapping into your biometric data, like fingerprints, features of your face or the patterns in your eyes. They can even do some of this remotely, without you knowing you have been identified. Smart watches and fitness trackers tell you how fast you are running, how fit and healthy you are, how many calories you have burned and how well you are sleeping (or not sleeping). They also work out things about your heart, like how well it beats. This is done using optical sensor technology: shining light at your skin and measuring how much is scattered by the blood flowing through it.

Microwave sensors

With PhD student Wesleigh Dawsmith and electronic engineer, microwave and antenna specialist, Rob Donnan, we are working on a different kind of sensor to check the health of elite athletes. Instead of using visible light we use invisible microwaves, the kind of radiation that gives microwave ovens their name. Microwave-based wearables have the potential to provide real-time information about how our bodies are coping under stress, such as when we are exercising: like a health check without having to go to hospital. The technology uses a microwave antenna and wireless circuitry to measure how much microwave radiation is absorbed as it passes through the ear lobe. The amount absorbed is linked to dehydration as we sweat and overheat during exercise. We can also use the microwave sensor to track important biomarkers like glucose, sodium, chloride and lactate, which can be a sign of dehydration and give warnings of illnesses like diabetes. The sensor sounds an alarm telling the person that they need medication, or are getting dehydrated and so need to drink some water.

Making it work

We are working with Richard Dudley at NPL to turn these ideas into a wearable, microwave-based dehydration tracker. NPL has spent eight years working on HydraSenseNPL, a device that clips onto the ear lobe, measuring microwaves with a flexible antenna earphone.

A big question is whether the ear device will become practical to actually wear while doing exercise, for example keeping a good enough contact with the skin. Another is whether it can be made fashionable, perhaps being worn as jewellery. Another issue is that the system is designed for athletes, but most people are not professional athletes doing strenuous exercise. Will the technology work for people just living their normal day-to-day life too? In that everyday situation, sensing microwave dynamics in the ear lobe may not turn out to be as good as an all-in-one solution that tracks your biometrics for the entire day. The long term aim is to develop health wearables that bring together lots of different smart sensors, all packaged into a small space like a watch, that can help people in all situations, sending them real-time alerts about their health.

Tina Chowdhury, Institute of Bioengineering, School of Engineering and Materials Science, Queen Mary University of London

More on …

This article was originally published on the CS4FN website and a copy can also be found on page 8 of Issue 25 of CS4FN, “Technology worn out (and about)”, on wearable computing, which can be downloaded as a PDF, along with all our other free material, here: https://cs4fndownloads.wordpress.com/




This blog is funded by EPSRC on research agreement EP/W033615/1.


Dressing it up

Why it might be good for robots to wear clothes

(Robot) dummies in different clothes standing in a line up a slope
Image by Peter Toporowski from Pixabay 

Even though most robots still walk around naked, the Swedish Institute of Computer Science (SICS) in Stockholm has explored how to produce fashion-conscious robots.

The applied computer scientists there were looking for ways to make the robots of today easier for us to get along with. As part of the LIREC project to build the first robot friends for humans they examined how our views of simple robots change when we can clothe and customise them. Does this make the robots more believable? Do people want to interact more with a fashionable robot?

How do you want it?

These days most electronic gadgets allow the human user to customise them. For example, on a phone you can change the background wallpaper or colour scheme, the ringtone or how the menus work. The ability of the owner to change the so-called ‘look and feel’ of software is called end-user programming. It’s essentially up to you how your phone looks and what it does.

Dinosaurs waking and sleeping

The Swedish team began by taking current off-the-shelf robots and adding dress-up elements to them. Enter Pleo, a toy dinosaur ‘pet’ able to learn as you play with it. Now add in that fashion twist. What happens when you can play dress up with the dinosaur? Pleo’s costumes change its behaviour, kind of like what happens when you customise your phone. For example, if you give Pleo a special watchdog necklace the robot remains active and ‘on guard’. Change the costume from necklace to pyjamas, and the robot slowly switches into ‘sleep’ mode. The costumes or accessories you choose communicate electronically with the robot’s program, and its behaviour follows suit in a way you can decide. The team explored whether this changed the way people played with them.

Clean sweeps

In another experiment the researchers played dress up with a robot vacuum cleaner. The cleaner rolls around the house sweeping the floor, and has already proven a hit with many consumers. It bleeps happily as its on-board computer works out the best path to bust your carpet dust. The SICS team gave the vacuum a special series of stick-on patches, which could add to its basic programming. They found that choosing the right patch could change the way humans perceive the robot’s actions. Different patches can make humans think the robot is curious, aggressive or nervous. There’s even a shyness patch that makes the robot hide under the sofa.

What’s real?

If humans are to live in a world populated by robots there to help them, the robots need to be able to play by our rules. Humans have whole parts of their brains given over to predicting how other humans will react. For example, we can empathise with others because we know that other beings have thoughts like us, and we can imagine what they think. This often spills over into anthropomorphism, where we give human characteristics to non-human animals or non-living things. Classic examples are where people believe their car has a particular personality, or think their computer is being deliberately annoying – they are just machines, but our brains tend to attach motives to the behaviours we see.

Real-er robots?

Robots can produce very complex behaviours depending on the situations they are in and the ways we have interacted with them, which creates the illusion that they have some sort of ‘personality’ or motives in the way they are acting. This can help robots seem more natural and able to fit in with the social world around us. It can also improve the ways they provide us with assistance, because they seem that bit more believable. Projects like SICS’s ‘actDresses’ help by providing new ways for human users to customise the actions of their robots in a very natural way: in this case, by getting the robots to dress for the part.

– Peter W McOwan and the CS4FN team, Queen Mary University of London (Updated from the archive)


The naked robot

A naked robot holding a flower
Image by bamenny from Pixabay 

Why are so many film robots naked? We take it for granted that robots don’t wear clothes, and why should they?

They are machines, not humans, after all. On the other hand, the quest to create artificial intelligence involves trying to create machines that share the special ingredients of humanity. One of the things that is certainly special about humans in comparison to other animals is the way we like to clothe and decorate our bodies. Perhaps we should think some more about why we do it but the robots don’t!

Shame or showoff?

The creation story in the Christian Bible suggests humans were thrown out of the Garden of Eden when Adam and Eve felt the need to cover up – when they developed shame. Humans usually wear more than just the bare minimum though, so wearing clothing can’t be all about shame. Nor is it just about practicalities like keeping warm. Turn up at an interview covering your body with the wrong sort of clothes and you won’t get the job. Go to a fancy dress party in the clothes that got you the job and you will probably feel really uncomfortable the moment you see that everyone else is wearing costumes. Clothes are about decorating our bodies as much as covering them.

Our urge to decorate our bodies certainly seems to be a deeply rooted part of what makes us human. After all, anthropologists consider finds like ancient beads as the earliest indications of humanity evolving from apehood. It is taken as evidence that there really was someone ‘in there’ back then. Body painting is used as another sign of our emerging humanity. We still paint our bodies millennia later too. Don’t think we’re only talking about children getting their faces painted – grownups do it too, as the vast make-up industry and the popularity of tattoos show. We put shiny metal and stones around our necks and on our hands too.

The fashion urge

Whatever is going on in our heads, clearly the robots are missing something. Even in the movies the intelligent ones rarely feel the need to decorate their bodies. R2D2? C3PO? Wall-E? The exceptions are the ones created specifically to pass themselves off as human, like in Blade Runner.

You can of course easily program a robot to ‘want’ to decorate itself, or to refuse to leave its bedroom unless it has managed to drape some cloth over its body and shiny wire round its neck, but if it was just following a programmed rule would that be the same as when a human wears clothes? Would it be evidence of ‘someone in there’? Presumably not!

We do it because of an inner need to conform more than an inner need to wear a particular thing. That is what fashion is really all about. Perhaps programming an urge to copy others would be a start. In Wall-E, the robot shows early signs of this as he tries to copy what he sees the humans doing in the old films he watches. At one point he even uses a hubcap as a prop hat for a dance. Human decoration may have started as a part of rituals too.

Where to now?

Is this need to decorate our bodies something special, something linked to what makes us human? Should we be working on what might lead to robots doing something similar of their own accord? When archaeologists are hunting through the rubble in thousands of years’ time, will there be something other than beads to confirm the robots’ equivalent of self-awareness? If robots do start to decorate and cover up their bodies because they want to, rather than because it was what some God-like programmer coded them to do, surely something special will have happened. Perhaps that will be the point when the machines have to leave their Garden of Eden too.

Paul Curzon, Queen Mary University of London

From the archive


Shirts that keep score

Basketball player with shirt in mouth
Image by 愚木混株 Cdd20 from Pixabay 

When you are watching a sport in person, a quick glance at the scoreboard should tell you everything you need to know about what’s going on. But why not try to put that information right in the action? How much better would it be if all the players’ shirts could display not just the score, but how well each individual is doing?

Light up, light up

An Australian research group from the University of Sydney has made it happen. They rigged up two basketball teams’ shirts with displays that showed instant information as they played one another. The players (and everyone else watching the game) could see information that usually stays hidden, like how many fouls and points each player had. The displays were simple coloured bands in different places around the shirt, all connected up with tiny wires sewn into the shirts like thread. For every point a player got, for example, one of the bands on the player’s waist would light up. Each foul a player got made a shoulder band light up. There was also a light on players’ backs reserved for the leading team. Take the lead and all your team’s lights turned on, but lose it again and they went dark with defeat.

Sweaty but safe

All those displays were controlled by an on-board computer that each player harnessed to his or her body. That computer, in turn, was wirelessly connected to a central computer that kept track of winners, losers, fouls and baskets. The designers had to be careful about certain things, though. In case a player fell over and crushed their computer, the units were designed with ‘weak spots’ on purpose so they would detach rather than crumple underneath the player. And, since no one wants to get electrocuted while playing their favourite sport, the designers protected all the gear against moisture and sweat.

Keeping your head in the game

In the end, it was the audience at the game who got the most out of the system. They were able to track the players more closely than they normally would, and it helped those in the crowd who didn’t know much about basketball to understand what was going on. The players themselves had less time to think about what was on everyone’s clothes, as they were busy playing the game, but the system did help them a few times. One player said that she could see when her teammate had a high score, “and it made me want to pass to her more, as she had a ‘hot hand'”. Another said that it was easier to tell when the clock was running down, so she knew when to play harder. Plus, just seeing points on their shirts gave the players more confidence. There’s so much information available to you when you watch a game on television that, in a weird way, actually being in the stadium could make you less informed. Maybe in the future, the fans in the stands will see everything the TV audience does as well, when the players wear all their statistics on their shirts! We’ll see what the sponsors think of that…

the CS4FN team, Queen Mary University of London (From the archive)


Full metal jacket: the fashion of Iron Man

Spoiler Alert

Black and White drawing of Iron Man
Image by Victoria from Pixabay

Industrialist Tony Stark always dresses for the occasion, even when that particular occasion happens to be a fight with the powers of evil. His clothes are driven by computer science: the ultimate in wearable computing.

In the Iron Man comic and movie franchise Anthony Edward Stark, Tony to his friends, becomes his crime fighting alter ego by donning his high tech suit. The character was created by Marvel comic legend Stan Lee and first hit the pages in 1963. The back story tells how industrial armaments engineer and international playboy Stark is kidnapped and forced to work to develop new forms of weapons, but instead manages to escape by building a flying armoured suit.

Though the escape is successful, Stark suffers a major heart injury during the kidnap ordeal, becoming dependent on technology to keep him alive. The experience forces him to reconsider his life, and the crime avenging Iron Man is born. Lee’s ‘businessman superhero’ has proved extremely popular and in recent years the Iron Man movies, starring Robert Downey Jr, have been box office hits. But as Tony himself would be the first to admit, there is more than a little computer science supporting Iron Man’s superhero standing.

Suits you

The Iron Man suit is an example of a powered exoskeleton. The technology surrounding the wearer amplifies the movement of the body, a little like a wearable robot. This area of research is often called ‘human performance augmentation’ and there are a number of organisations interested in it, including universities and, unsurprisingly, defence companies like Stark Industries. Their researchers are building real exoskeletons which have powers uncannily like those of the Iron Man suit.

To make the exoskeleton work the technology needs to be able to accurately read the exact movements of the wearer, then have the robot components duplicate them almost instantly. Creating this fluid mechanical shadow means the exoskeleton needs to contain massive computing power, able to read the forces being applied and convert them into signals to control the robot servo motors without any delay. Slow computing would cause mechanical drag for the wearer, who would feel like they were wading through treacle. Not a good idea when you’re trying to save the world.

Pump it up

Humans move by using their muscles in what are called antagonistic pairs. There are always two muscles on either side of the joint that pull the limb in different directions. For example, in your upper arm there are the muscles called the biceps and the triceps. Contracting the biceps muscle bends your elbow up, and contracting your triceps straightens your elbow back. It’s a clever way to control biological movement using just a single type of shortening muscle tissue rather than needing one kind that shortens and another that lengthens.

In an exoskeleton, the robot actuators (the things that do the moving) take the place of the muscles, and we can build these to move however we want. But as the robot’s movements need to shadow those of the person inside, the computer needs to understand how humans move. As the human bends their elbow to lift up an object, sensors in the exoskeleton measure the forces applied, and the onboard computer calculates how to move the exoskeleton to minimise the resulting strain on the person’s hand. In strength-amplifying exoskeletons the actuators are high-pressure hydraulic pistons, meaning that the human operators can lift considerable weight. The hydraulics support the load; the human’s movements provide the control.

I knew you were going to do that

It is important that the human user doesn’t need to expend any effort in moving the exoskeleton; people get tired very easily if they have to counteract even a small but continual force. To allow this to happen the computer system must ensure that all the sensors read zero force whenever possible. That way the robot does the work and the human is just moving inside the frame. The sensors can take thousands of readings per second from all over the exoskeleton: arms, legs, back and so on.

This information is used to predict what the user is trying to do. For example, when you are lifting a weight the computer begins by calculating where all the various exoskeleton ‘muscles’ need to be to mirror your movements. Then the robot arm is instructed to grab the weight before the user exerts any significant force, so you get no strain but a lot of gain.
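The ‘zero force’ goal can be pictured as a tiny control loop: at each step the frame moves a little in the direction of whatever force the wearer is applying, driving that force back towards zero. This is only an illustrative simulation with made-up numbers, not a real exoskeleton controller:

```python
# Illustrative zero-force control loop: the frame follows the wearer's
# push so the force the wearer feels shrinks towards zero.
# The gain and the force decay are invented for the sketch.

def control_step(joint_position, sensed_force, gain=0.5):
    """Move the joint a little in the direction of the sensed force."""
    return joint_position + gain * sensed_force

position = 0.0
force = 2.0                          # the wearer starts pushing on the frame
for _ in range(20):
    position = control_step(position, force)
    force *= 0.5                     # as the frame catches up, the force falls

print(round(force, 6))               # the wearer now feels almost no force
print(round(position, 3))            # the frame has moved to follow the push
```

A real controller does this thousands of times a second, across every joint at once, which is why slow computing would feel like wading through treacle.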

Flight suit?

Exoskeleton systems exist already. Soldiers can march further with heavy packs by having an exoskeleton provide some extra mechanical support that mimics their movements. There are also medical applications that help paralysed patients walk again. Sadly, current exoskeletons still don’t have the ability to let you run faster or do other complex activities like fly.

Flying is another area where the real trick is in the computer programming. Iron Man’s suit is covered in smart ‘control surfaces’ that move under computer control to allow him to manoeuvre at speed. Tony Stark controls his suit through a heads-up display and voice control in his helmet, technology that at least we do have today. Could we have fully functional Iron Man suits in the future? It’s probably just a matter of time, technology and computer science (and visionary multi-millionaire industrialists too).

Peter W McOwan and Paul Curzon, Queen Mary University of London


Let buttons be buttons

Buttons including one in the middle containing an integrated circuit
Image by Melanie from Pixabay with added integrated circuit button image by CS4FN

We are used to the idea that we use buttons with electronics to switch things on and off, but Rebecca Stewart and Sophie Skach decided to use real buttons in the old-fashioned sense: a fashionable way to fasten up clothes.

Rebecca created integrated circuit buttons – electronics, sensors and a battery inside an actual button. Sophie then built them into a stylish jacket that included digital embroidery, embedding lighting and the circuitry to control it into the fabric of the jacket.

How do you control the light effects?

You just button and unbutton the jacket, of course!


Design your own

If you are interested in fashion design, why not design a jacket, dress or shirt of your own that uses wearable technology? What would it do and how would you control it?

– Paul Curzon, Queen Mary University of London


What’s on your mind?

Telepathy is the supposed extra-sensory perception ability to read someone else’s mind at a distance. Whilst humans do not have that ability, brain-computer interaction researchers at Stanford have just made the high-tech version a reality.

Man holding out hand in front as though mind reading
Image by Andrei Cássia from Pixabay

It has long been known that, by using brain implants or electrodes on a person’s head, it is possible to tell the difference between simple thoughts. Thinking about moving parts of the body gives particularly useful brain signals. Thinking about moving your right arm generates different signals from thinking about moving your left leg, for example, even if you are paralysed so cannot actually move at all. Telling two different things apart is enough to communicate: it is the basis of binary, and so of how all computer-to-computer communication is done. This led to the idea of the brain-computer interface, where people communicate with and control a computer with their mind alone.
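Being able to tell just two thoughts apart really is enough to spell out anything, because each letter can be sent as a short sequence of binary choices. Here is a toy sketch of that idea; the ‘arm’ and ‘leg’ labels are hypothetical stand-ins for two distinguishable thoughts, not how any real system works:

```python
# Toy sketch: spelling letters using only two distinguishable "thoughts".
# 'arm' stands for binary 1 and 'leg' for binary 0 (hypothetical labels).

def letter_to_thoughts(letter):
    """Encode an uppercase letter as five binary thought signals."""
    n = ord(letter) - ord('A')           # A=0, B=1, ..., Z=25
    bits = format(n, '05b')              # five bits cover all 26 letters
    return ['arm' if b == '1' else 'leg' for b in bits]

def thoughts_to_letter(thoughts):
    """Decode five thought signals back into a letter."""
    bits = ''.join('1' if t == 'arm' else '0' for t in thoughts)
    return chr(int(bits, 2) + ord('A'))

signals = letter_to_thoughts('C')
print(signals)                           # ['leg', 'leg', 'leg', 'arm', 'leg']
print(thoughts_to_letter(signals))       # C
```

Real brain-computer interfaces are far noisier than this, of course: every ‘bit’ has to be recognised from messy brain signals, which is where pattern recognition comes in.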

Stanford researchers made a big step forward in 2017, when they demonstrated that paralysed people could move a cursor on a screen by thinking of moving their hands in the appropriate direction. This created a point and click interface – a mind mouse – for the paralysed. Impressively, the speed and accuracy were as good as for people using keyboard applications.

Stanford researchers have now gone a step further, using the same idea to turn mental handwriting into actual typing. The person just thinks of writing letters with an imagined pen on imagined paper; the brain-computer interface picks up the thoughts of these subtle movements and the computer converts them into actual letters. Again, the speed and accuracy are as good as most people can type. The paralysed participant concerned could communicate 18 words a minute and made virtually no mistakes at all: when the system was combined with auto-correction software, like the kind we all now use to correct our typing mistakes, it got letters right 99% of the time.

The system has been made possible by advances in both neuroscience and computer science. Recognising the letters being mind-written involves distinguishing very subtle differences in patterns of neurons firing in the brain. Recognising patterns is, however, exactly what machine learning algorithms do. They are trained on lots of data and pick out patterns of similar data. If told what letter the person was actually trying to communicate, they can link that letter to the pattern detected. Each letter will not produce exactly the same pattern of brain signals every time, but the patterns for one letter will largely clump together, while other letters form clumps with slightly different patterns of firings. Once trained, the system works by taking the pattern of brain signals just seen and matching it to the nearest clump. The computer then guesses that the nearest clump is the letter being communicated. If the system is highly accurate, as this one was at 94% (before autocorrection), then the patterns of most letters are very distinct: a letter being mind-written rarely fell into a gap between brain patterns, where it could as easily have been one letter as another.
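The ‘nearest clumping’ idea can be sketched as a nearest-centroid classifier. The three-number feature vectors below are invented purely for illustration; a real system would learn the clusters from thousands of recorded brain-signal patterns:

```python
# Sketch of nearest-centroid classification: a new pattern is labelled
# with whichever training cluster centre it lies closest to.
import math

# Average pattern seen during training for each letter (made-up numbers).
centroids = {
    'a': (0.9, 0.1, 0.2),
    'b': (0.1, 0.8, 0.3),
    'c': (0.2, 0.2, 0.9),
}

def classify(pattern):
    """Return the letter whose training centroid is nearest to pattern."""
    return min(centroids, key=lambda c: math.dist(pattern, centroids[c]))

print(classify((0.85, 0.15, 0.25)))   # a  (close to the 'a' cluster)
print(classify((0.15, 0.75, 0.35)))   # b
```

When a new pattern sits almost exactly between two centroids the guess is unreliable: that is the gap between brain patterns, and why high accuracy means the clumps are well separated.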

So a computer-based “telepathy” is possible. But don’t expect us all to be able to communicate by mind alone over the internet any time soon. The approach involves having implants surgically inserted into the brain: in this case two computer chips connecting to the brain via 100 electrodes. The operation is a massive risk to take, and while perhaps justifiable for someone with a problem as severe as total paralysis, it is less obviously a good idea for anyone else. However, this shows it is at least possible to communicate written messages by mind alone, and once developed further it could make life far better for severely disabled people in the future.

Yet again science fiction is no longer fantasy: communicating by the power of a person’s mind alone is possible, just not in the way the science fiction writers perhaps originally imagined.

by Paul Curzon, Queen Mary University of London, Spring 2021.

Smart bags

In our stress-filled world with ever increasing levels of anxiety, it would be nice if technology could sometimes reduce stress rather than just add to it. That is the problem that QMUL’s Christine Farion set out to solve for her PhD. She wanted to do something stylish too, so she created a new kind of bag: a smart bag.

White smart handbag with LEDs
Image by Christine Farion

Christine realised that one thing that causes anxiety for a lot of people is forgetting everyday things. It is very common for us to forget keys, train tickets, passports and other everyday things we need for the day. Sometimes it’s just irritating. At other times it can ruin the day. Even when we don’t forget things, we waste time unpacking and repacking bags to make sure we really do have the things we need. Of course, the moment we unpack a bag to check, we increase the chance that something won’t be put back!

Electronic bags

Christine wondered if a smart bag could help. Over the space of several years, she built ten different prototypes using basic electronic kits, allowing her to explore lots of options. Her basic design has coloured lights on the outside of the bag, and a small scanner inside. To use the bag, you attach electronic tags to the things you don’t want to forget. They are like the ones shops use to keep track of stock and prevent shoplifting. Some tags are embedded into things like key fobs, while others can be stuck directly on to an object. Then when you pack your bag, you scan the objects with the reader as you put them in, and the lights show you they are definitely there. The different coloured lights allow you to create clear links – natural mappings – between the lights and the objects. For her own bag, Christine linked the blue light to a blue key fob with her keys, and the yellow light to her yellow hayfever tablet box.
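The scan-and-light logic at the heart of the bag can be sketched in a few lines. The tag identifiers are made up for illustration, though the blue-for-keys and yellow-for-tablets colour mapping follows Christine’s own bag:

```python
# Sketch of the smart bag: each tagged item maps to a coloured light,
# and a light turns on once that item's tag has been scanned into the bag.

wanted = {
    'tag-keys': 'blue',        # blue key fob
    'tag-tablets': 'yellow',   # yellow hayfever tablet box
    'tag-wallet': 'green',     # hypothetical third item
}

def light_states(scanned_tags):
    """Return whether each coloured light should be on."""
    return {colour: (tag in scanned_tags) for tag, colour in wanted.items()}

print(light_states({'tag-keys', 'tag-wallet'}))
# the yellow light stays off: the tablets haven't been packed yet
```

The fixed colour-to-item mapping is the point: it gives the natural mapping the design relies on, so a glance at the lights answers “have I packed it?” without opening the bag.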

In the wild

Black informal bag with LEDs
Image by Christine Farion

One of the strongest things about her work was that she tested her bags extensively ‘in the wild’. She gave them to people who used them as part of their normal everyday life, asking them to report what did and didn’t work. This all fed into the designs for subsequent bags and allowed her to learn what really mattered to make this kind of bag work for the people using it. One of the key things she discovered was that the technology needed to be completely simple to use: if it wasn’t both obvious how to use and quick to do, it wouldn’t be used.

Christine also used the bags herself, keeping a detailed diary of incidents related to the bags and their design. This is called ‘autoethnography’. She even used one bag as her own main bag for a year and a half, building it completely into her life, fixing problems as they arose. She took it to work, shopping, to coffee shops … wherever she went.

Suspicious?

When she had shown people her prototype bags, one of the common worries was that the electronics would look suspicious and be a problem when travelling. She set out to find out, taking her bag on journeys around the country, on trains and even to airports, travelling overseas on several occasions. There were no problems at all.

Red smart handbag with LEDs
Image by Christine Farion

Fashion matters

As a bag is a personal item we carry around with us, it becomes part of our identity. Christine found that appropriate styling is therefore essential in this kind of wearable technology. There is no point making a smart bag that doesn’t fit the look people want to carry around. This is a problem with a lot of today’s medical technology, for example. Objects that help with medical conditions, like diabetic monitors or drug pumps, and even things as simple and useful as hearing aids or glasses, may ‘solve’ a problem but can lead to stigma if they look ugly. Fashion, on the other hand, does the opposite: it is all about being cool. Christine showed that by combining the design of the technology with an understanding of fashion, her bags were seen as cool. Rather than designing just a single functional smart bag, ideally you need a range of bags if the idea is to work for everyone.

Now, why don’t I have my glasses with me?

by Paul Curzon, Queen Mary University of London, Autumn 2018

Download Issue 25 of the cs4fn magazine, “Technology Worn Out (and about)”, on wearable computing, here.

The computer vs the casino: Wearable tech cheating

Close up of roulette wheel with ball on black 17
Image by Greg Montani from Pixabay

What happened when a legend of computer science took on the Las Vegas casinos? The answer, surprisingly, was the birth of wearable computing.

There have always been people looking to beat the system, to get the odds that little bit in their favour and clean up at the casino. Over the years maths and technology have been used, from a hidden mechanical arm up your sleeve allowing you to swap cards, to the more cerebral card counting. In the latter, a player keeps a running total of the cards played so they can estimate when high-value cards will be dealt. One popular game to try to beat was roulette.

A spin of the wheel

Roulette, whose name comes from the French for ‘little wheel’, involves a dish containing a circular rotating part marked into red and black numbers. A simple version of the game was developed by the French mathematician Pascal, and it evolved over the centuries into a popular betting game. The central disc is spun and, as it rotates, a small ball is thrown into the dish. Players bet on the number at which the ball will eventually stop. The game is based on probability, but like most casino games there is a house advantage: the probabilities mean that the casino will tend to win more money than it loses.
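The house advantage is easy to check with a little arithmetic. In the European version there are 37 pockets (0 to 36), but a winning single-number bet pays only 35 to 1, so on average every bet loses a little (a sketch, assuming a fair wheel):

```python
# Expected value of a 1-unit single-number bet in European roulette:
# 37 equally likely pockets, but a win pays only 35 to 1.
from fractions import Fraction

pockets = 37
payout = 35                          # profit on a winning 1-unit bet

p_win = Fraction(1, pockets)
expected = p_win * payout + (1 - p_win) * (-1)

print(expected)                      # -1/37
print(float(expected))               # about -0.027: lose ~2.7% of each stake
```

That steady 2.7% edge is why no betting system based on probabilities alone can beat the wheel in the long run.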

Gamblers tried to work out betting strategies to win, but the random nature of where the ball stops thwarted them. In fact, the pattern of numbers produced by multiple roulette spins is so random that mathematicians and scientists have used such numbers as a random-number generator. Methods using them are even called Monte Carlo methods, after the famous casino town. They are ways to calculate difficult mathematical functions by taking thousands of samples of their value at random places.
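A classic Monte Carlo example makes the idea concrete: estimate pi by throwing random points at a unit square and counting how many land inside the quarter circle. The fraction that do approaches pi/4 as the number of samples grows.

```python
# Monte Carlo estimate of pi: sample random points in the unit square
# and count the fraction landing inside the quarter circle of radius 1.
import random

random.seed(42)  # fixed seed so the run is repeatable

def estimate_pi(samples=100_000):
    inside = sum(1 for _ in range(samples)
                 if random.random()**2 + random.random()**2 <= 1)
    return 4 * inside / samples

print(estimate_pi())  # close to 3.14159
```

The answer is only approximate, but it gets better with more samples, and the same trick works for integrals that are far too hard to solve exactly.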

A mathematical system of betting wasn’t going to work to beat the game, but there was one possible weakness to be exploited: the person who ran the game and threw the ball into the wheel, the croupier.

No more bets please

There is a natural human instinct to spin the wheel and throw the ball in a consistent pattern. Each croupier, having played thousands of games, has a slight bias in the speed and force with which they spin the wheel and throw the ball in. If you could see where the wheel was when the spin started and the ball went in, you could use the short time before betting was suspended to make a rough guess of the area where the ball was more likely to land, giving you an edge. This is called ‘clocking the wheel’, but it requires great skill: you have to watch many games with the same croupier to gain even a tiny chance of working out where their ball will go. This isn’t cheating in the same way as physically tampering with the wheel using weights and magnets (which is illegal); it is the skill of the gambler’s observation that gives the edge. Casinos became aware of it, so they frequently changed the croupier running each game, meaning players couldn’t watch one long enough to work out their pattern. But if there were some technological way to work it out quickly, perhaps the game could be beaten.

Blackjack and back room

Enter Ed Thorp, a young mathematician at MIT in the late 1950s. Along with his interest in maths and physics he had a love of gambling. Using his access to one of the world’s few room-filling IBM computers at the university, he was able to run the probabilities in card games, and used the results to write a scientific paper on a method for winning at blackjack. This paper brought him to the attention of Claude Shannon, the famous and rather eccentric father of information theory. Shannon loved to invent things: a flame-throwing trumpet, an insult machine and other weird and wonderful devices filled the basement workshop of his home. It was there that he and Ed decided to try to take on the casinos at roulette, and built arguably the first wearable computer.

Sounds like a win

The device comprised a pressure switch hidden in a shoe. When the ball was spun and passed a fixed point on the wheel, the wearer pressed the switch. A computer timer, strapped to the wrist, started and was used to track the progress of the ball as it passed around the wheel, using technology in place of human skill to clock the wheel. A series of musical tones told the person using the device where the ball would stop, with each tone representing a different sector of the wheel. They tested the device in secret and found that it gave them an expected gain of 44% on their bets. They decided to try it for real … and it worked! However, the fine wires that connected the computer to the earpiece kept breaking, so they gave up after winning only a few dollars. The device, though very simple and built for a single purpose, is in the computing museum at MIT. The inventors eventually published the details in a 1998 scientific paper called “The Invention of the First Wearable Computer”.
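The prediction step can be sketched in code. This is a toy model, not Thorp and Shannon’s actual circuitry: it assumes each lap of the ball takes a constant amount longer than the last as it slows, that the ball drops when a lap would take some threshold time, and it maps the landing point onto 8 sectors of the wheel (the hypothetical `drop_lap_time` threshold is something a real player would have to measure by observation).

```python
# Toy model of 'clocking the wheel': given the times of the ball's
# recent revolutions past a fixed reference point, extrapolate which
# of 8 sectors it is likely to stop in. Assumes constant slowdown.

def predict_octant(lap_times, drop_lap_time=1.2):
    """lap_times: seconds taken by recent successive revolutions,
    oldest first. The ball is assumed to drop off the rim when a
    lap would take drop_lap_time seconds."""
    # How much longer each lap takes than the one before.
    per_lap = (lap_times[-1] - lap_times[0]) / (len(lap_times) - 1)
    # Revolutions (possibly fractional) left before the ball drops.
    laps_left = (drop_lap_time - lap_times[-1]) / per_lap
    # The fractional part says how far past the reference point the
    # ball ends up; map that fraction onto 8 sectors (0-7).
    return int((laps_left % 1) * 8)

# Laps of 0.8s, 0.9s, 1.0s: slowing by 0.1s per lap, so 2.5 laps to
# go before a 1.25s lap, landing halfway round from the reference.
print(predict_octant([0.8, 0.9, 1.0], drop_lap_time=1.25))  # sector 4
```

A real system has far messier physics (the ball bounces between frets, the wheel itself rotates), which is why even the original device only shifted the odds rather than guaranteeing a win.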

The long arm of the law reaches out

Others followed with similar systems built into shoes, and developed more computers and software to help cheat at blackjack too, but by the mid-1980s the casino authorities had become wise to this way of winning, and new laws were introduced to prevent the use of technology to gain an unfair advantage in casino games. It definitely is now cheating. If you look at the rules for casinos today, they specifically exclude the use of mobile phones at the table, for example, just in case your phone is running some clever app to scam the casino.

From its rather strange beginning, wearable computing has spun out into new areas and applications, and quite where it will go next is anybody’s bet.

by Peter W. McOwan, Queen Mary University of London, Autumn 2018

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.
