Hedy Lamarr: The movie star, the piano player and the torpedo

Hedy Lamarr. Image credit: eBay, Public domain, via Wikimedia Commons

Hedy Lamarr was a movie star. Back in the 1940s, in Hollywood’s Golden Age, she was considered one of the screen’s most beautiful women and appeared in several blockbusters. But Hedy was more than just good looks and acting skill. Though many people remember Hedy for her pithy quote “Any girl can be glamorous. All she has to do is stand still and look stupid”, at the outbreak of World War 2 she and composer George Antheil invented a secret communication technique for a torpedo radio guidance system!

Their creative idea was based on the mechanism behind the ‘player piano’ – an automatic piano where the tune is controlled by a roll of paper with punched holes. The idea was to use what is now known as ‘frequency hopping’ to prevent the torpedo’s control signal being jammed by the enemy. Normal radio communication involves the sender picking a radio frequency and then sending all communication at that frequency. Anyone who tunes in to that frequency can listen in, but can also jam it by sending their own more powerful signal at the same frequency. That’s why non-digital radio stations constantly tell you their frequency: “96.2 FM” or whatever. Frequency hopping instead involves jumping from frequency to frequency throughout the broadcast. Only someone who shares the secret of exactly when the jumps will be made, and to what frequencies, can then pick up the broadcast – or jam it. That is essentially what the piano roll could do: it stored the secret.
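To make the idea concrete, here is a minimal sketch of frequency hopping in Python. Everything in it – the list of frequencies, the shared secret and the message – is invented for illustration (the real Lamarr–Antheil design used 88 frequencies, one for each piano key):

```python
import random

FREQUENCIES = [88.1, 90.5, 94.3, 96.2, 99.7, 101.9, 104.6, 107.8]  # MHz

def hop_schedule(shared_secret, length):
    """Derive the sequence of frequencies to hop between.
    Sender and receiver run this with the same secret, so they get
    the same schedule - the job the piano roll did."""
    rng = random.Random(shared_secret)
    return [rng.choice(FREQUENCIES) for _ in range(length)]

message = "DIVE DIVE DIVE"
sender = hop_schedule("piano-roll", len(message))
receiver = hop_schedule("piano-roll", len(message))
enemy = hop_schedule("wrong-guess", len(message))

# One character per hop: a receiver with the right secret hears
# everything; an enemy guessing frequencies catches almost nothing.
heard = "".join(c for c, s, r in zip(message, sender, receiver) if s == r)
missed = "".join(c for c, s, e in zip(message, sender, enemy) if s == e)
print(heard)   # the whole message: the schedules match exactly
print(missed)  # at best the odd lucky character
```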

Though the US Navy didn’t actually use the method during World War 2, it did use the principles during the Cuban missile crisis in the 1960s. The idea behind the method also underpins today’s GPS, Wi-Fi, Bluetooth and mobile phone technologies. In 2014 Hedy was inducted into the US National Inventors Hall of Fame.

Peter W McOwan, Queen Mary University of London (from the archive)






EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Lynn Conway: revolutionising chip design

Lynn Conway. Photo from Wikimedia by Charles Rogers, CC BY-SA 2.5

University of Michigan professor and transgender activist Lynn Conway, along with Carver Mead, completely changed the way we think about, do and teach VLSI (Very Large Scale Integration) chip design. Their revolutionary book on VLSI design quickly became the standard text used to teach the subject around the world. It wasn’t just a book though: it was a whole new way of doing electronics. Their ideas formed the foundation of the way the electronics industry subsequently worked, and still works today. Calling her impact totally transformational is not an exaggeration. Before this, she had worked for IBM, part of a team making major advances in microprocessor design. She was, however, sacked by IBM for being transgender when she decided to transition. Times and views have fortunately been transformed too, and IBM subsequently apologised for its blatant discrimination!

A core part of the electronics revolution Mead and Conway triggered was to start thinking of electronics design as more like software. They advocated using special software design packages and languages that allowed hardware designers to put together a circuit design essentially by programming it. Once a design was completed, tools in the package could simulate the behaviour of the circuit, allowing it to be thoroughly tested before the circuit was physically built. The result was that designs were less likely to fail and creating them was much quicker. Even better, once tested, the design could then be compiled directly to silicon: the programmed version could be used to automatically create the precise layout and wiring of components, below the transistor level, to be laid on to the chip for fabrication.
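As a toy illustration of that ‘circuit as software’ idea, here is a half adder (a circuit that adds two binary digits) described as ordinary Python and exhaustively simulated before anything is built. Real VLSI tools use dedicated hardware description languages and simulate far more detail; this is just the principle:

```python
def half_adder(a, b):
    """A half adder described as code: two gates wired together."""
    total = a ^ b   # XOR gate produces the sum bit
    carry = a & b   # AND gate produces the carry bit
    return total, carry

# 'Thoroughly tested before physically built': with only two
# inputs we can simulate every possible case.
for a in (0, 1):
    for b in (0, 1):
        total, carry = half_adder(a, b)
        assert 2 * carry + total == a + b   # it really adds
        print(f"a={a} b={b} -> sum={total} carry={carry}")
```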

This software approach allowed levels of abstraction to be used much more easily in electronics design: bigger components being created from smaller ones, in turn built from smaller ones still. Once designed, the detailed implementation of those smaller components could be ignored in the design of larger components. A key part of this was Conway’s idea of scalable design rules to follow as the designs grew. Designers could focus on higher-level design, building on previous designs, with the details of creating the physical chips automated from the high-level designs.
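The flavour of those scalable rules can be sketched in a few lines: dimensions are written as multiples of a single scale unit, λ (lambda), so retargeting a design to a finer fabrication process just means changing λ. The rule values below echo the spirit of the Mead–Conway textbook rules but are illustrative only:

```python
DESIGN_RULES = {            # in units of lambda
    "min_poly_width": 2,
    "min_metal_width": 3,
    "min_wire_spacing": 2,
}

def rules_for_process(lambda_microns):
    """Scale the abstract rules to physical sizes for one process."""
    return {rule: value * lambda_microns
            for rule, value in DESIGN_RULES.items()}

print(rules_for_process(3.0))  # a 1970s-era process
print(rules_for_process(0.5))  # a finer process: same design, just smaller
```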

This shift is similar (though probably even more transformational) to the switch from programming in low-level languages to writing programs in high-level languages and allowing a compiler to create the actual low-level code that is run. Just as that allowed vastly larger programs to be written, the use of electronic design automation software and languages allowed massively larger circuits to be created.

Conway’s ideas also led to MOSIS: an Internet-based service whereby different designs by different customers could be combined onto one wafer for production. This meant that the fabrication costs of prototyping were no longer prohibitively expensive. Suddenly, creating designs was cheap and easy, a boon for both university and industrial research as well as for VLSI education. Conway, for example, pioneered the idea of allowing her students to create their own VLSI designs as part of her university course, with their designs all being fabricated together and the resulting chips quickly returned. Large numbers of students could now learn VLSI design in a practical way, gaining hands-on experience while still at university. This improvement in education, together with the ease with which small companies could suddenly prototype new ideas, made possible the subsequent boom in hi-tech start-up companies at the end of the 20th century.

Before Mead and Conway, chip design was done slowly by hand by a small elite and needed big industry support. Afterwards, it could be done quickly and easily by just about anyone, anywhere.

Paul Curzon, Queen Mary University of London



EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Sorry to bug you: Grace Hopper

Close-up, head on, of a flying moth. Image by Josch13 from Pixabay

In the 2003 film The Matrix Reloaded, Neo, Morpheus, Trinity and crew continue their battle with the machines that have enslaved the human race in the virtual reality of the Matrix. To find the Oracle, who can explain what’s going on (which, given the twisty plot in the Matrix films, is always a good idea), Trinity needs to break into a power station and switch off some power nodes so the others can enter the secret floor. The computer terminal displays that she is disabling 27 power nodes, numbers 21 to 48. Unfortunately, that’s actually 28 nodes, not 27! A computer that can’t count and shows the wrong message!

Sadly, there are far too many programs with mistakes in them. These mistakes are known as bugs because back in 1947 Grace Hopper, one of the female pioneers of computer science, found an error caused by a moth trapped between the points at Relay 70, Panel F, of the Mark II Aiken Relay Calculator being tested at Harvard University. She removed the moth and attached it to her test logbook, writing ‘First actual case of bug being found’, and so popularised the term ‘debugging’ for testing and fixing a computer program.

Grace Hopper is famous for more than just the word ‘bug’ though. She was one of the most influential of the early computer pioneers, responsible for perhaps the most significant idea in helping programmers to write large, bug-free programs.

As a Lieutenant in the US Navy Reserve, having volunteered after Pearl Harbor, Grace was one of the first three programmers of Harvard’s IBM Mark I computer, the first fully automatic programmed computer.

She didn’t just program those early computers though: she came up with innovations in the way computers were programmed. The programs for those early computers all had to be made up of so-called ‘machine instructions’. These are the simplest operations the computer can do, such as adding two numbers, moving data from a place in memory to a register (a place where arithmetic can be done in a subsequent operation), jumping to a different instruction in the program, and so on.

Programming in such basic instructions is a bit like giving someone directions to the station but having to tell them exactly where to put their foot for every step. Grace’s idea was that you could write programs in a language closer to human language where each instruction in this high-level language stood for lots of the machine instructions – equivalent to giving the major turns in those directions rather than every step.
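Here is that difference sketched in Python. The single high-level line does what four instructions of an invented, machine-style instruction set spell out step by step:

```python
# High level: one line says what we want.
print(2 + 3 * 4)

# Machine level: every step spelled out (the instruction names
# are made up for illustration).
program = [
    ("LOAD", "R1", 3),     # put 3 into register R1
    ("LOAD", "R2", 4),     # put 4 into register R2
    ("MUL",  "R1", "R2"),  # R1 = R1 * R2
    ("ADD",  "R1", 2),     # R1 = R1 + 2
]

registers = {}
for op, target, arg in program:
    if op == "LOAD":
        registers[target] = arg
    elif op == "MUL":
        registers[target] *= registers[arg]
    elif op == "ADD":
        registers[target] += arg

print(registers["R1"])  # 14 again - same answer, many more steps
```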

The ultimate result was COBOL: the first widely used high-level programming language. At a stroke her ideas made programming much easier to do and much less error-prone. Big programs were now a possibility.

For this idea of high-level languages to work though, you needed a way to convert a program written in a high-level language like COBOL into those machine instructions that a computer can actually do. It can’t fill in the gaps on its own! Grace had the answer – the ‘compiler’. It is just another computer program, but one that does a specialist task: the conversion. Grace wrote the first ever compiler, for a language called A-0, as well as the first COBOL compiler. The business computing revolution was up and running.
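And here is the compiler idea in miniature: a program whose specialist task is converting a high-level request into machine-style instructions like those above. It is only a toy – Grace’s A-0 and COBOL compilers tackled vastly more – but the core job, the conversion, is the same:

```python
def compile_sum_of_product(n, n1, n2):
    """Compile 'n + n1 * n2' into toy machine instructions."""
    return [
        ("LOAD", "R1", n1),
        ("LOAD", "R2", n2),
        ("MUL",  "R1", "R2"),
        ("ADD",  "R1", n),
    ]

# The programmer writes one line; the compiler writes the steps.
for instruction in compile_sum_of_product(2, 3, 4):
    print(instruction)
```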

High-level languages like COBOL have allowed far larger programs to be written than is possible in machine-code, and so ultimately the expansion of computers into every part of our lives. Of course even high-level programs can still contain mistakes, so programmers still need to spend most of their time testing and debugging. As the Oracle would no doubt say, “Check for moths, Trinity, check for moths”.

– Peter W McOwan and Paul Curzon, Queen Mary University of London (from the archive)



EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Future Friendly: Focus on Kerstin Dautenhahn

by Peter W McOwan, Queen Mary University of London

(from the archive)

A large robot facing a man in his home. Image by Meera Patil from Pixabay

Kerstin Dautenhahn is a biologist with a mission: to help us make friends with robots. Kerstin was always fascinated by the natural world around her, so it was no surprise when she chose to study Biology at the University of Bielefeld in Germany. Afterwards she went on to a Diploma in Biology, doing research on the leg reflexes of stick insects – a strange start, it may seem, for someone who would later become one of the world’s foremost robotics researchers. But it was through this fascinating bit of biology that Kerstin became interested in the ways that living things process information and control their body movements, an area scientists call biological cybernetics. This interest in trying to understand biology made her want to build things to test her understanding. These things would be based on ideas copied from biological animals but be run by computers. These things would be robots.

Follow that robot

From humble beginnings building small robots that followed one another over a hilly landscape, she started to realise that biology was a great source of ideas for robotics, and in particular that the social intelligence animals use to live and work with each other could be modelled and used to create sociable robots.

She started to ask fascinating questions like “What’s the best way for a robot to interrupt you if you are reading a newspaper – by gesturing with its arms, blinking its lights or making a sound?” and perhaps most importantly “When would a robot become your friend?” First at the University of Hertfordshire, and now as a Professor at the University of Waterloo, she leads a world-famous research group looking to build friendly robots with social intelligence.

Good robot / Bad robot – East vs West

Kerstin, like many other robotics researchers, is worried that most people tend to look on robots as potentially evil. If we look at the way robots are portrayed in the movies that’s often how it seems: it makes a good story to have a mechanical baddie. But in reality robots can provide a real service to humans, from helping the disabled and assisting around the home to even becoming friends and companions. The baddie robot idea tends to dominate in the West, but in Japan robots are very popular and robotics research is advancing at a phenomenal rate. There has been a long history in Japan of people finding mechanical things that mimic natural things interesting and attractive. It is partly this cultural difference that has made Japan a world leader in robot research. But Kerstin and others like her are trying to get those of us in the West to change our opinions by building friendly robots and looking at how we relate to them.

Polite Robots roam the room

When at the University of Hertfordshire, Kerstin decided that the best way to see how people would react to a robot around the house was to rent a flat near the university and fill it with robots. Rather than examine how people interacted with robots in a laboratory, moving the experiments to a real home, with bookcases, biscuits, sofas and coffee tables, made it real. She and her team looked at how to give their robots social skills: what was the best way for a robot to approach a person, for example? At first they thought that the best approach would be straight from the front, but they found that humans felt this was too aggressive, so the robots were trained to come up gently from the side. The people in the house were also given special ‘comfort buttons’, devices that let them indicate how they were feeling in the company of robots. Again interesting things happened: it turned out that quite a lot of people, though not all, were on the whole happy for these robots to be close to them – closer, in fact, than they would normally let a human approach. Kerstin explains: ‘This is because these people see the robot as a machine, not a person, and so are happy to be in close proximity. You are happy to move close to your microwave, and it’s the same for robots’. These are exciting first steps as we start to understand how to build robots with socially acceptable manners. But it turns out that robots need to have good looks as well as good manners if they are going to make it in human society.

Looks are everything for a robot?


How we interact with robots also depends on how the robots look. Researchers had found previously that if you make a robot look too much like a human being, people expect it to be a human being, with all the social and other skills that humans have. If it doesn’t have these, we find interaction very hard. It’s like working with a zombie, and it can be very frightening. This fall in acceptability of robots that look like, but aren’t quite, human is what researchers call the ‘uncanny valley’: people prefer to encounter a robot that looks like a robot and acts like a robot. Kerstin’s group found this effect too, so they designed their robots to look and act the way we would expect robots to look and act, and things got much more sociable. But they are still looking at how we act with more human-like robots, and built KASPAR, a robot toddler, which has a very realistic rubber face capable of showing expressions and smiling, and video camera eyes that allow the robot to react to your behaviours. He possesses arms so he can wave goodbye or greet you with a friendly gesture. Most recently he was extended with multi-modal technology that allows several children to play with him at the same time. He’s very lifelike, and the team’s hope was that as KASPAR’s programming grew and his abilities improved, he, or some descendant of him, would emerge from the uncanny valley to become someone’s friend – in particular, a friend to children with autism.

Autism – mind blindness and robots

The fact that most robots at present look and act like robots can give them a big advantage in supporting children with autism. Autism is a condition that prevents you from developing an understanding of how to interact socially with the world. A current theory to explain the condition is that those who are autistic cannot form a correct understanding of others’ intentions; it’s called mind blindness. For example, if I came into the room wearing a hideous hat and asked you ‘Do you like my lovely new hat?’ you would probably think, ‘I don’t like the hat, but he does, so I should say I like it so as not to hurt his feelings’: you have a mental model of my state of mind (that I like my hat). An autistic person is likely to respond ‘I don’t like your hat’, if that is what they feel. Autistic people cannot create this mental model, so find it hard to make friends and generally interact with people, as they can’t predict what people are likely to say, do or expect.

Playing with Robot toys

It’s different with robots: many autistic children have an affinity with them. Robots don’t do unexpected things. Their behaviour is much simpler, because they act like robots. Kerstin’s group examined how this interaction with robot toys can help some autistic children develop skills that allow them to interact better with other people. By controlling the robot’s behaviours, some of the children can develop ways to mimic social skills, which may ultimately improve their quality of life. There were some promising results, and the work continues as one way to try to help those with this socially isolating condition.

Future friendly

It’s only polite that the last word goes to Kerstin from her time at Hertfordshire:

‘I firmly believe that robots as assistants can potentially be very useful in many application areas. For me as a researcher, working in the field of human-robot interaction is exciting and great fun. In our team we have people from various disciplines working together on a daily basis, including computer scientists, engineers and psychologists. This collaboration, where people need to have an open mind towards other fields, as well as imagination and creativity, is necessary in order to make robots more social.’

In the future, when robots become our workmates, colleagues and companions it will be in part down to Kerstin and her team’s pioneering effort as they work towards making our robot future friendly.



EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Making sense of squishiness – 3D modelling the natural world

Look out the window at the human-made world. It’s full of hard, geometric shapes – our buildings, the roads, our cars. They are made of solid things like tarmac, brick and metal that are designed to be rigid and stay that way. The natural world is nothing like that though. Things bend, stretch and squish in response to the forces around them. That provides a whole bunch of fascinating problems for computer scientists like Lourdes Agapito of Queen Mary, University of London to solve.

Computer scientists interested in creating 3-dimensional models of the world have so far mainly concentrated on modelling the hard things. Why? Because they are easier! You can see the results in computer-animated films like Toy Story, and in the 3D worlds, like Second Life, that your avatar inhabits. Even the soft things there tend to be modelled as rigid.

Lourdes works in this general area creating 3D computer models, but she wants to solve the problem of creating them automatically just from the flat images in videos, and is specifically interested in things that deform – the squishy things.

Look out the window and watch the world go by. As you watch a woman walk past you have no problem knowing that you are looking at the same person as you were a second ago – even if she becomes partially hidden as she walks behind the post box and turns to post a letter. The sun goes behind a cloud and the scene is suddenly darker. It starts to rain and she opens an umbrella. You can still recognise her as the same object. Your brain is pulling some amazing tricks to make this seem so mundane. Essentially it is creating a model of the world – identifying all the 3-dimensional objects that you see and tracking them over time. If we can do it, why can’t a computer?

Unlike hard surfaces, deformable ones don’t look the same from one still to the next. You don’t just have to worry about changes in lighting, about objects being partially hidden, and about them appearing different from a different angle. The object itself will be a different shape from one still to the next. That makes it far harder to work out which bits of one image are actually the same as the ones in the next. Lourdes has taken on a seriously hard problem.

Existing vision systems that create 3D objects have made things easier for themselves by using existing models. If a computer already has a model of a cube to compare what it sees with, then spotting a cube in the image stream is much easier than working it out from scratch. That doesn’t really generalise to deformable objects though because they vary too much. Another approach, used by the film industry, is to put highly visible markers on objects so that those markers can be tracked. That doesn’t help if you just want to point a camera out the window at whatever passes by though.

Lourdes’ aim is to be able to point a camera at a deformable object and have a computer vision system create a 3D model simply by analysing the images. No markers, no existing models of what might be there, not even previous films to train it with: just the video itself. So far her team have created a system that can do this in some situations, such as with faces as a person changes their expression. Their next goal is to make their system work for a whole person as they are filmed doing arbitrary things. It’s the technical challenge that inspires Lourdes the most, though once the problems of deformable objects are solved there are, of course, applications.

One immediately obvious area is in operating theatres. Keyhole surgery is now very common. It involves a surgeon operating remotely, seeing what they are doing by looking at flat video images from a fibre-optic probe inside the body of the person being operated on. The image is flat, but the inside of the person that the surgeon is trying to make cuts in is 3-dimensional. It would be far less error-prone if what the surgeon was looking at was an accurate 3D model built from the video feed rather than just a flat picture. Of course, the inside of your body is made of exactly the kind of squishy deformable surfaces that Lourdes is interested in. Get the computer science right and technologies like this will save lives.

At the same time as tackling seriously hard if squishy computer science problems, Lourdes is also a mother of three. A major reason she can fit it all in, as she points out, is that she has a very supportive partner who shares in the childcare. Without him it would be impossible to balance all the work involved in leading a top European research team. It’s also important to get away from work sometimes. Running regularly helps Lourdes cope with the pressures and as we write she is about to run her first half marathon.

Lourdes may or may not be the person who turns her team’s solutions into the applications that in the future save lives in operating theatres, spot suspicious behaviour in CCTV footage or allow film-makers to quickly animate the actions of actors. Whoever does create the applications, we still need people like Lourdes who are just excited about solving the fundamental problems in the first place.

Paul Curzon, Queen Mary University of London



This blog is funded by EPSRC on research agreement EP/W033615/1.


Recognising (and addressing) bias in facial recognition tech #BlackHistoryMonth

By Jo Brodie and Paul Curzon, Queen Mary University of London

A unit containing four sockets: two USB and two for a microphone and speakers. Happy, though surprised, sockets. Photo taken by Jo Brodie in 2016 at Gladesmore School in London.

Some people have a neurological condition called face blindness (also known as ‘prosopagnosia’) which means that they are unable to recognise people, even those they know well – this can include their own face in the mirror! They only know who someone is once they start to speak, but until then they can’t be sure who it is. They can certainly detect faces though, even if they might struggle to classify them in terms of gender or ethnicity. In general, though, most people have an exceptionally good ability to detect and recognise faces – so good, in fact, that we even detect faces when they’re not actually there. This is called pareidolia: perhaps you see a surprised face in the picture of USB sockets above.

How about computers? There is a lot of hype about face recognition technology as a simple solution to help police forces prevent crime, spot terrorists and catch criminals. What could be bad about being able to pick out wanted people automatically from CCTV images, and so quickly catch them?

What if facial recognition technology isn’t as good at recognising faces as it has sometimes been claimed to be, though? If the technology is being used in the criminal justice system, and gets the identification wrong, this can cause serious problems for people (see Robert Williams’ story in “Facing up to the problems of recognising faces“).

“An audit of commercial facial-analysis tools found that dark-skinned faces are misclassified at a much higher rate than are faces from any other group. Four years on, the study is shaping research, regulation and commercial practices.”

The unseen Black faces of AI algorithms (19 October 2022) Nature

In 2018 Joy Buolamwini and Timnit Gebru shared the results of research they’d done, testing three different commercial facial recognition systems. They found that these systems were much more likely to wrongly classify darker-skinned female faces compared to lighter- or darker-skinned male faces. In other words, the systems were not reliable. (Read more about their research in “The gender shades audit“).

“The findings raise questions about how today’s neural networks, which … (look for) patterns in huge data sets, are trained and evaluated.”

Study finds gender and skin-type bias in commercial artificial-intelligence systems (11 February 2018) MIT News
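The heart of such an audit is easy to state, though the data gathering is not: don’t report one overall accuracy figure, break the errors down by group. The sketch below uses invented numbers purely to show how a respectable-looking average can hide a far higher error rate for one group:

```python
# Per group: (correctly classified, total tested) - invented data.
results = {
    "lighter-skinned men":   (995, 1000),
    "lighter-skinned women": (960, 1000),
    "darker-skinned men":    (940, 1000),
    "darker-skinned women":  (690, 1000),
}

correct = sum(c for c, t in results.values())
total = sum(t for c, t in results.values())
print(f"Overall accuracy: {correct / total:.1%}")  # looks fine: ~89.6%

# The breakdown tells a very different story.
for group, (c, t) in results.items():
    print(f"{group}: {(t - c) / t:.1%} error rate")
```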

Their work has shown that face recognition systems do have biases and so are currently not at all fit for purpose. There is some good news though. The three companies whose products they studied made changes to improve their facial recognition systems, and several US cities have already banned the use of this tech in criminal investigations. More cities are calling for bans too, and in Europe the EU is moving closer to banning the use of live face recognition technology in public places. Others, however, are still rolling it out. It is important not just to believe the hype about new technology, but to make sure we understand its limitations and risks.

More on

Further reading

More technical articles

• Joy Buolamwini and Timnit Gebru (2018) Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research 81:1-15. [EXTERNAL]
• The unseen Black faces of AI algorithms (19 October 2022) Nature News & Views [EXTERNAL]


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.


EPSRC supports this blog through research grant EP/W033615/1.

Hidden Figures: NASA’s brilliant calculators

Full Moon with a blue filter. Image by PIRO from Pixabay

NASA Langley was the birthplace of the U.S. space program, where astronauts like Neil Armstrong learned to land on the moon. Everyone knows the names of the astronauts, but behind the scenes a group of African-American women were vital to the space program, among them Katherine Johnson, Mary Jackson and Dorothy Vaughan. Before electronic computers were invented, ‘computers’ were just people who did calculations, and that’s where these women started out, as part of a segregated team of mathematicians. Dorothy Vaughan became the first African-American woman to supervise staff there and helped make the transition from human to electronic computers by teaching herself and her staff how to program in the early programming language FORTRAN.

The women switched from being the computers to programming them. These hidden women helped put the first American, John Glenn, in orbit, and over many years worked on calculations like the trajectories of spacecraft and their launch windows (the small period of time when a rocket must be launched if it is to get to its target). These complex calculations had to be correct. If they got them wrong, the mistakes could ruin a mission, putting the lives of the astronauts at risk. Get them right, as they did, and the result was a giant leap for humankind.

See the film ‘Hidden Figures’ for more of their story.

Paul Curzon, Queen Mary University of London

(from the archive)



This blog is funded by EPSRC on research agreement EP/W033615/1.


Gladys West: Where’s my satellite? Where’s my child?

Satellites are critical to much modern technology, and especially GPS, which allows our smartphones, laptops and cars to work out their exact position on the surface of the earth. This is central to all mobile technology, wearable or not, that relies on knowing where you are, from plotting a route to your nearest Indian restaurant to telling you where a person you might want to meet is. Many, many people were involved in creating GPS, but it was only in Black History Month of 2017 that the critical part Gladys West played became widely known.

Work hard, go far

As a child Gladys worked with her family in the fields of their farm in rural Virginia. That wasn’t the life she wanted, so she worked hard through school, leaving as the top student. She won a scholarship to university, and then landed a job as a mathematician at a US Navy base.

There she solved the maths problems behind the positioning of satellites. She worked closely with the programmers to write the code that did calculations based on her maths. Nine times out of ten the results that came back weren’t exactly right, so much of her time was spent working out what was going wrong with the programs, as it was vital that the results were very accurate.

Seasat and Geosat

Her work on the Seasat satellite won her a commendation. It was a revolutionary satellite designed to remotely monitor the oceans. It collected data about things like temperature, wind speed and wind direction at the sea’s surface, the heights of waves, as well as sensing data about sea ice. This kind of remote sensing has since had a massive impact on our understanding of climate change. Gladys specifically worked on the satellite’s altimeter. It was a radar-based sensor that allowed Seasat to measure its precise distance from the surface of the ocean below. She continued this work on later remote sensing satellites too, including Geosat, a later earth observation satellite.
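The principle behind a radar altimeter fits in a few lines: time how long a microwave pulse takes to bounce off the sea surface and return, then convert that time into a distance. The numbers below are rough illustrations, not Seasat’s real data:

```python
SPEED_OF_LIGHT = 299_792_458  # metres per second

def altitude_m(round_trip_seconds):
    # The pulse travels down AND back, so halve the total distance.
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# Seasat orbited at roughly 800 km, so echoes took around 5.3 ms.
print(f"{altitude_m(0.00533):,.0f} m")  # roughly 799,000 m
```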

Gladys West and Sam Smith look over data from the Global Positioning System, which Gladys helped develop. Image: U.S. Navy, 1985, Public domain, via Wikimedia Commons

GPS

Knowing the positions of satellites is the foundation for GPS. The way GPS works is that our mobile receivers pick up a timed signal from several different satellites. Calculating where we are can only be done if you first know very precisely where those satellites were when they sent the signal. That is what Gladys’ work provided.
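Here is a two-dimensional sketch of that calculation. Real GPS solves the same geometry in 3D, with a fourth satellite to correct the receiver’s clock; all the positions and distances below are invented for illustration:

```python
def locate(sats, dists):
    """Find our position from three known satellite positions and
    our distances to them (worked out from signal travel times)."""
    (x1, y1), (x2, y2), (x3, y3) = sats
    d1, d2, d3 = dists
    # Subtracting pairs of circle equations leaves two linear
    # equations a*x + b*y = c, solved here by Cramer's rule.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

satellites = [(0.0, 20.0), (15.0, 18.0), (-12.0, 16.0)]
distances = [20.0, 23.4, 20.0]
print(locate(satellites, distances))  # close to (0, 0): that's us
```

Get the satellite positions slightly wrong and the answer moves too, which is why the accuracy Gladys insisted on mattered so much.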

GPS Watches

You can now, for example, buy GPS watches, allowing you to wear a watch that watches where you are. They can be used by people with dementia, who have bad memory problems, allowing their carers to find them if they go out on their own and become confused about where they are. They also allow parents to know where their kids are all the time. Do you think that’s a good use?

Since so much technology now relies on knowing exactly where we are, Gladys’ work has had a massive impact on all our lives.

Paul Curzon, Queen Mary University of London

Poster

Poster of Gladys West, by Richard Butterworth

This article is also republished during Black History Month.



This blog is funded by EPSRC on research agreement EP/W033615/1.


Microwave health check

Using wearable tech to monitor elite athletes’ health

Microwaves aren’t just useful for cooking your dinner. Passing through your ears, they might help check your health in future, especially if you are an elite athlete. Bioengineer Tina Chowdhury tells us about her multidisciplinary team’s work with the National Physical Laboratory (NPL).

Lots of wearable gadgets work out things about us by sensing our bodies. They can tell who you are just by tapping into your biometric data, like fingerprints, features of your face or the patterns in your eyes. They can even do some of this remotely, without you knowing you’ve been identified. Smart watches and fitness trackers tell you how fast you are running, how fit you are, whether you are healthy, how many calories you have burned and how well (or badly) you are sleeping. They also work out things about your heart, like how well it beats. This is done using optical sensor technology, shining light at your skin and measuring how much is scattered by the blood flowing through it.

Microwave Sensors

With PhD student Wesleigh Dawsmith and electronic engineer, microwave and antenna specialist Rob Donnan, we are working on a different kind of sensor to check the health of elite athletes. Instead of using visible light we use invisible microwaves, the kind of radiation that gives microwave ovens their name. These microwave-based wearables have the potential to provide real-time information about how our bodies are coping under stress, such as when we are exercising: health checks without having to go to hospital. The technology measures how much of the microwaves passing through the ear lobe are absorbed, using a microwave antenna and wireless circuitry. How much is absorbed is linked to dehydration as we sweat and overheat during exercise. We can also use the microwave sensor to track important biomarkers like glucose, sodium, chloride and lactate, which can be signs of dehydration and give warnings of illnesses like diabetes. The sensor sounds an alarm telling the person that they need medication, or are getting dehydrated so need to drink some water.
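As a rough sketch of that alerting logic, here it is in Python. Every number is an invented placeholder: calibrating what real ear-lobe readings mean is exactly the hard research problem the team is working on:

```python
BASELINE = 0.62    # a well-hydrated athlete's absorption reading (made up)
THRESHOLD = 0.08   # allowed drift before warning (made up)

def check_hydration(absorption):
    if abs(absorption - BASELINE) > THRESHOLD:
        return "ALERT: possible dehydration - drink some water"
    return "ok"

for reading in (0.63, 0.66, 0.72):  # readings during a training session
    print(reading, check_hydration(reading))
```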

Making it work

We are working with Richard Dudley at the NPL to turn these ideas into a wearable, microwave-based dehydration tracker. The lab has spent eight years working on HydraSense, a device that clips onto the ear lobe, measuring microwaves with a flexible antenna earphone.

A big question is whether the ear device will become practical to actually wear while doing exercise, for example keeping a good enough contact with the skin. Another is whether it can be made fashionable, perhaps by being worn as jewellery. Another issue is that the system is designed for athletes, but most people are not professional athletes doing strenuous exercise. Will the technology work for people just living their normal day-to-day lives too? In that everyday situation, sensing microwave dynamics in the ear lobe may not turn out to be as good as an all-in-one solution that tracks your biometrics for the entire day. The long-term aim is to develop health wearables that bring together lots of different smart sensors, all packaged into a small space like a watch, that can help people in all situations, sending them real-time alerts about their health.

Tina Chowdhury, Institute of Bioengineering, School of Engineering and Materials Science, Queen Mary University of London



This blog is funded by EPSRC on research agreement EP/W033615/1.
