The Digital Seabed: Data in Augmented Reality

A globe (North Atlantic visible) showing ocean depth information, with the path of HMS Challenger shown in red. Image by Daniel Gill.

For many of us, the deep sea is a bit of a mystery. But an exciting interactive digital tool at the National Museum of the Royal Navy is bringing the seabed to life!

It turns out that the sea floor is just as interesting as the land where we spend most of our time (unless you’re a crab, of course, in which case you spend most of your time on the sea floor). I recently learnt about the sea floor at the National Museum of the Royal Navy in Portsmouth, in their “Worlds Beneath the Waves” exhibition, which documents 150 years of deep-sea exploration.

One ship which revolutionised deep-ocean study was HMS Challenger. It set sail in December 1872 and went on to make a 68,890 nautical-mile journey across the earth’s oceans. One of its scientific goals was to measure the depth of the seabed as it circled the earth. To make these measurements, a long rope with a weight at one end was dropped into the water and allowed to sink. The length of rope paid out before the weight hit the bottom gave the depth. It’s a simple process, but it worked!

Thankfully, modern technology has caught up with bathymetry (the study of the sea floor). Now, sea floor depths are measured using sonar (sound) and lidar (light) from ships, or using special sensors on satellites. All of these methods send a signal down to the seabed and time how long it takes for a reflection to come back. Knowing the speed of sound or light through air and water, you can calculate the distance to whatever reflected the signal.
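As a rough sketch of that arithmetic, here is a minimal Python example, assuming a typical speed of sound in seawater of about 1,500 metres per second (real surveys correct for temperature, salinity and pressure):

```python
# Depth from a sonar echo: the signal travels down and back,
# so halve the round-trip time.
SPEED_OF_SOUND_SEAWATER = 1500.0  # metres per second, approximate

def depth_from_echo(round_trip_seconds: float) -> float:
    return SPEED_OF_SOUND_SEAWATER * round_trip_seconds / 2

# An echo taking 14.6 seconds suggests a depth of about 10,950 m,
# roughly the depth of Challenger Deep.
print(f"{depth_from_echo(14.6):.0f} m")
```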

You may be thinking, why do we need to know how deep the ocean is? Well, apart from the human desire to explore and map our planet, it’s also useful for navigation and safety: in smaller waterways and ports, it’s very helpful to know whether there’s enough water below the boat to stay afloat!

It’s also useful to look at fault lines, deep valleys (such as Challenger Deep, the deepest known point in the ocean, named after HMS Challenger) and the underwater mountain ranges which separate continental plates. Studying these can help us to predict earthquakes and understand continental drift (read more about continental drift).

The sand table with colours projected onto it showing height. Image by Daniel Gill.

We now have a much better understanding of the seabed, including detailed maps of sea floor topography around the world. So, we know what the ocean floor looks like at the moment, but how can we use this to understand the future of our waterways? This is where computers come in.

Near the end of the exhibition sits a table covered in sand, with the sand’s current topography projected onto it. Where the sand is piled higher it is coloured red and orange; lower areas are green and blue. Looking across the table you can see how sand at the same level, even far apart, falls within the same band of colour.

The projected image automatically adjusts (below) to the removal of the hill in red (above). Image by Daniel Gill.

But this isn’t even the coolest part! When you pick up and move sand around, the colours automatically adjust to the new sand topography, allowing you to reshape the seabed at will. The sand itself, however, flows and settles under gravity, so an unrealistically tall tower will soon fall down and form a more rotund mound.

Want to know what will happen if a meteor impacts? Grab a handful of sand and drop it onto the table (without making a mess) and see how the topographical map changes with time!

The technology above the table. Image by Daniel Gill.

So how does this work? Looking above the table, you can see an Xbox Kinect sensor and a projector. The Kinect works much like the lidar systems installed on ships: it sends beams of infrared light down onto the sand, which bounce back to the sensor, and the time taken is measured. This creates a depth map, just as ships’ systems do, but on a much smaller scale. The map is turned into colours and projected back onto the sand.
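Here is a minimal sketch of that colouring step in Python, assuming the heights arrive as a grid of numbers; the bands and colours are illustrative, not the exhibit’s actual values:

```python
import numpy as np

# Band a height map into colours for projection. Heights would really
# come from the Kinect's depth frames; random numbers stand in here.
BANDS = [  # (normalised height threshold, colour), lowest first
    (0.00, "blue"),
    (0.25, "green"),
    (0.50, "orange"),
    (0.75, "red"),
]

def colour_map(heights: np.ndarray) -> np.ndarray:
    """Assign each point a colour name based on its normalised height."""
    norm = (heights - heights.min()) / (np.ptp(heights) + 1e-9)
    colours = np.empty(heights.shape, dtype=object)
    for threshold, name in BANDS:  # higher bands overwrite lower ones
        colours[norm >= threshold] = name
    return colours

print(colour_map(np.random.rand(4, 4)))
```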

Virtual water fills the valleys. Image by Daniel Gill.

This is not the only feature of this table, however: it can also run physics simulations! By placing your hand over the sand, you can add virtual water, which flows realistically into the lower areas of sand, and even responds to the movement of sand.

The mixing of physical and digital representations of data like this is an example of augmented, or mixed, reality. It can help visualise things that you might otherwise find difficult to imagine, such as the effects of building a new dam. Models like this can help experts and students, and, indeed, museum visitors, to see a problem in a different and more interactive way.

– Daniel Gill, Queen Mary University of London



This page is funded by EPSRC on research agreement EP/W033615/1.


If you go down to the woods today…

A girl walking through a meadow full of flowers within woods
Image by Jill Wellington from Pixabay

In the 2025 RHS Chelsea Flower Show there was one garden that was about technology as well as plants: the Avanade Intelligent Garden, exploring how AI might be used to support plants. Each of the trees contained probes that sensed and recorded data about it, which could then be monitored through an app. This takes pioneering research from over two decades ago a step further, incorporating AI and making it mainstream. Back then a team led by Yvonne Rogers built an ‘Ambient Wood’, aiming to add excitement to a walk in the woods...

Mark Weiser had a dream of ‘Calm Computing’ and, while computing sometimes seems ever more frustrating to use, his ideas led to lots of exciting research that saw at least some computers disappearing into the background. His vision was driven by a desire to remove the frustration of using computers, but also by the realization that the most profound technologies are the ones that you just don’t notice. He wanted technology to actively remove frustrations from everyday life, not just the ones caused by computers. He wrote of wanting to “make using a computer as refreshing as taking a walk in the woods.”

Not calm, but engaging and exciting!

No one argues that computers should be frustrating to use, but Yvonne Rogers, then of the Open University, had a different idea of what the new vision could be. Not calm. Anything but calm, in fact (though not frustrating, of course). Not calm, but engaging and exciting!

Her vision of Weiser’s tranquil woods was not relaxing but provocative and playful. To prove the point her team turned some real woods in Sussex into an ‘Ambient Wood’: an enhanced wood. When you entered it you took probes with you that you could point and poke with, allowing you to take readings of different kinds in easy ways. Time-hopping ‘periscopes’ placed around the woods allowed you to see those patches of woodland at other times of the year. There was also a special woodland den where you could see the bigger picture of the woods, as all your readings were pulled together using computer visualisations.

Not only was the Ambient Wood technology visible and in your face but it made the invisible side of the wood visible in a way that provoked questions about the wildlife. You noticed more. You saw more. You thought more. A walk in the woods was no longer a passive experience but an active, playful one. Woods became the exciting places of childhood stories again but now with even more things to explore.

The idea behind the Ambient Wood, and similar projects like Bristol’s Savannah, where playing fields were turned into a virtual African savannah, was to revisit the original idea of computers but in a new context. Computers started as tools, and tools don’t disappear: they extend our abilities. Tools originally extended our physical abilities – a hammer allows us to hit things harder, a pulley to lift heavier things. They make us more effective and allow us to do things a mere human couldn’t do alone. Computer technology can do a similar thing but for the human intellect… if we design it well.

“The most important thing the participants gained was a sense of wonderment at finding out all sorts of things and making connections through discovering aspects of the physical woodland (e.g., squirrel’s droppings, blackberries, thistles)”

– Yvonne Rogers

The Weiser dream was that technology invisibly watches the world and removes the obstacles in the way before you even notice them. It’s a little like the way servants to the aristocracy were expected to always have everything just right while never being noticed by those they served. The way this is achieved is to have technology constantly monitoring, understanding what is going on and how it might affect us, and then calmly fixing things. The problem at the time was that this needed really ‘smart’ technology – a high level of artificial intelligence – to achieve, and that proved more difficult than anyone imagined (though perhaps we are now much closer than we were). Our behaviour and desires, however, are full of subtlety, and much harder to read than was imagined. Even a super-intellect would probably keep getting it wrong.

There are also ethical problems. If we do ever achieve the dream of total calm we might not like it. It is very easy to be gung ho with technology and not realize the consequences. Calm computing needs monitors – the computer measuring everything it can so it has as much information as possible to make decisions from (see Big Sister is Watching You).

A classic example of how this can lead to people rejecting technology intended to help comes from a project to make a ‘smart’ residential home for the elderly. The idea was that by wiring up the house to track and monitor the residents, the nurses would be able to provide much better care, and relatives would be able to see how things were going. The place was filled with monitors. For example, sensors in the beds measured residents’ weight while they slept. Each night the occupant’s weight could invisibly be taken and the nurses alerted to worrying weight loss over time. The smart beds could also detect tossing and turning, so someone having bad nights could be helped. A smart house could use similar technology to help you or me get a good night’s sleep, or help us diet.
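A sketch of that nightly check in Python (the thresholds here are hypothetical, not from the real project):

```python
# Flag a worrying downward trend in nightly bed-sensor weight readings.
def worrying_weight_loss(nightly_weights_kg: list[float]) -> bool:
    """One reading per night, oldest first; look at the last 30 nights."""
    window = nightly_weights_kg[-30:]
    if len(window) < 2:
        return False
    drop = (window[0] - window[-1]) / window[0]
    return drop > 0.05  # alert the nurses if more than 5% is lost

print(worrying_weight_loss([72.0] * 20 + [68.0] * 10))  # True
```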

The problem was the beds could tell other things too: things that the occupants preferred to keep to themselves. Nocturnal visitors also showed up in the records. That’s the problem if technology looks after us every second of the day: the records may give away to others far more than we are happy with.

Yvonne’s vision was different. It was not that the computers should try to second-guess everything, but that they should extend our abilities. It is quite easy for new technology to leave us intellectually poorer than we were. Calculators are a good example. Yes, we can do more complex sums quickly now, but at the same time, without a calculator, many people can’t do the sums at all. Our abilities have both improved and been damaged at the same time. Generative AI seems to be heading the same way. What the probes do, instead, is extend our abilities, not reduce them: allowing us to see the woods in a new way, but to use the information however we wish. The probes encourage imagination.

The alternative to the smart house (or calculator) that pampers you, allowing your brain to stay in neutral, or the residential home that monitors you for the sake of the nurses and your relatives, is one where the sensors are working for you: where you are the one the bed reports to, helping you make decisions about your health, or where the monitors you wear are (only) part of a game that you play because it’s fun.

What next? Yvonne suggested the same ideas could be used to help learning and exploration in other ways, understanding our bodies: “I’d like to see kids discover new ways of probing their bodies to find out what makes them tick.”

So if Yvonne’s vision is ultimately the way things turn out, you won’t be heading for a soporific future while the computer deals with real life for you. Instead it will be a future where the computers are sparking your imagination, challenging you to think, filling you with delight…and where the woods come alive again just as they do in the storybooks (and in the intelligent garden).

Paul Curzon, Queen Mary University of London

(adapted from the archive)



This page is funded by EPSRC on research agreement EP/W033615/1.


The tale of the mote and the petrel

by Paul Curzon, Queen Mary University of London
(Updated from the archive)

Giant petrel flying over ice and rock
Image by Eduardo Ruiz from Pixabay

Biology and computer science can meet in some unexpected, not to mention inhospitable, places. Who would have thought, for example, that the chemical soup in the nests of petrels studied by field biologists might help in the development of futuristic dust-sized computers?

Just Keep Doubling

One of the most successful predictions in Computer Science was made by Gordon Moore, co-founder of Intel. Back in 1965 he suggested that the number of transistors that can be squeezed onto an integrated circuit – the hardware computer processors are made of – doubled every few years: computers get ever more powerful and ever smaller. In the 60 or so years since Moore’s paper it has remained an amazingly accurate prediction. Will it continue to hold though or are we reaching some fundamental limit? Researchers at chip makers are confident that Moore’s Law can be relied on for the foreseeable future. The challenge will be met by the material scientists, the physicists and the chemists. Computer scientists must then be ready for the Law’s challenge too: delivering the software advances so that its trends are translated into changes in our everyday lives. It will lead to ever more complex systems on a single chip and so ever smaller computers that will truly disappear into the environment.
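As arithmetic, the prediction is just repeated doubling. A toy Python version, using the commonly quoted doubling period of two years:

```python
# Moore's Law: transistor counts that double every two years
# grow exponentially.
def transistors(start_count: int, years: float, doubling_period: float = 2.0) -> float:
    return start_count * 2 ** (years / doubling_period)

# The Intel 4004 of 1971 had about 2,300 transistors; 50 years of
# doubling predicts tens of billions, roughly matching the biggest
# chips of the early 2020s.
print(f"{transistors(2_300, 50):,.0f}")
```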

Dusting computers

Motes are one technology developed on the back of this trend. The aim is to create dust-sized computers. For example, the world’s smallest computer as of 2015 was the Michigan Micro Mote. It was only a few millimetres across, but was a fully working computer system able to power itself, sense the world, process the data it collected and communicate that data to other computers. In 2018 IBM announced a computer with sides a millimetre long. Rising to the challenge, the Michigan team soon announced a new mote with sides a third of a millimetre! The shrinking of motes is not likely to stop!

Scatter motes around the environment and they form unobservable webs of intelligent sensors. Scatter them on a battlefield to detect troop movements or on or near roads to monitor traffic flow or pollution. Mix them in concrete and monitor the state of a bridge. Embed them in the home to support the elderly or in toys to interact with the kids. They are a technology that drives the idea of the Internet of Things where everyday objects become smart computers.

Battery technology has long been the only big problem that remains.

What barriers must be overcome to make dust-sized motes a ubiquitous reality? Much of the area of a computer is taken up by its connections to the outside world – all those pins allowing things to be plugged in. These can now be replaced by wireless communications. Computers contain multiple chips, each housing separate processors. It is not the transistors that are the problem but the packaging: the chip casings are both bulky and expensive. Now we have ‘multicore’ chips – large numbers of processors on a single small chip, courtesy of Moore’s Law. This gives computer scientists significant challenges over how to develop software that runs on such complicated hardware and uses the resources well. Power can come from solar panels, allowing motes to recharge constantly, even from indoor light. Even then, though, they still need batteries to store the energy. Battery technology is the only big problem that remains.

Enter the Petrels

But how do you test a device like that? Enter the petrels. Intel’s approach is not to test futuristic technology on average users but to look for extreme ones who believe a technology will deliver them massive benefits. In the case of motes, their early extreme users were field biologists who wanted to keep tabs on birds in extremely harsh field conditions. Not only is it physically difficult for humans to observe sea birds’ nests on inhospitable cliffs, but human presence disturbs the birds. The solution: scatter motes in the nests to detect heat, humidity and the like, from which the state and behaviour of the birds can be deduced. A nest is an extremely harsh environment for a computer, though, both physically and chemically. A whole bunch of significant problems, overlooked by normal lab testing, had to be overcome. The challenge of deploying motes in such a harsh environment led to major improvements in the technology.


Moore’s Law is with us for a while yet, and with the efforts of material scientists, physicists, chemists, computer scientists and even field biologists and the sea birds they study it will continue to revolutionise our lives.



EPSRC supports this blog through research grant EP/W033615/1. 

Devices that work for everyone

Cartoon of the invisible man - only the clothes are visible

In 2009 Desi Cryer, who is Black, shared a light-hearted video with a serious message. He’d bought a new computer with a face tracking camera… which didn’t track his face, at all. It did track his White colleague Wanda’s face though. In the video (below) he asked her to go in front of the camera and move from side to side and the camera obediently tracked her face – wherever she moved the camera followed. When Desi moved back in front of the camera it stopped again. He wondered if the computer might be racist…

The computer recognises Desi’s colleague Wanda, but not him

Another video (below), this time from 2017, showed a dark-skinned man failing to get a soap dispenser to give him any soap. Nothing happened when he put his hand underneath the sensor, but as soon as his lighter-skinned friend put his hand under it, out popped some soap! The only way the first man could get any soap dispensed was to put a white tissue on his hand first. He wondered if the soap dispenser might be racist…

The soap dispenser only dispenses soap if it ‘sees’ a white hand

What’s going on?

Probably no-one set out to maliciously design a racist device, but designers need to check that their products work with a range of different people before putting them on the market. This can save the company embarrassment as well as creating something that more people want to buy.

Sensors working overtime

Both devices use a sensor that is activated (or, in these cases, isn’t) by a signal. Soap dispensers shine a beam of light which bounces off a hand placed below them, and some of that light is reflected back. Paler skin reflects more light (and so triggers the sensor) than darker skin. Next to the light is a sensor which responds to the reflected light – but if the device was only tested on White people then the sensor won’t have been adjusted for the full range of skin tones, and so won’t respond appropriately. Similarly, cameras have historically been designed for White skin tones, meaning darker tones are not picked up as well.
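A toy Python version of the dispenser’s decision, with made-up reflectance numbers, shows why a threshold tuned only on pale skin fails:

```python
TRIGGER_THRESHOLD = 0.4  # fraction of light reflected back;
                         # tuned (badly) using only pale-skinned testers

def hand_detected(reflected_fraction: float) -> bool:
    return reflected_fraction >= TRIGGER_THRESHOLD

print(hand_detected(0.65))  # paler skin reflects more light -> True
print(hand_detected(0.25))  # darker skin reflects less -> False: no soap
# A better design calibrates across the full range of skin tones, or
# triggers on a change in reflected light rather than an absolute level.
```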

Things can be improved!

It’s a good idea, when designing something that will be used by lots of different people, to make sure that it will work correctly with everyone. Having a diverse design team and, importantly, making sure that everyone feels empowered to contribute is a good way to start. Another is to test the design with different target audiences early in the design process so that changes can be made before it’s too late. How a company responds to feedback when they’ve made an oversight is also important. In the case of the computer company they acknowledged the problem and went to work to improve the camera’s sensitivity. 

A problem with pulse oximeters

During the coronavirus pandemic many people bought a ‘pulse oximeter’, a device which clips painlessly onto a finger and measures how much oxygen is circulating in your blood (and your pulse). If the oxygen reading became too low people were advised to go to hospital. Oximeters shine red and infrared light from the top clip through the finger, and the light is absorbed differently depending on how much oxygen is present in the blood. A sensor on the lower clip measures how much light has got through, but the reading can be affected by skin colour (and coloured nail polish). People were concerned that pulse oximeters would overestimate the oxygen reading for someone with darker skin (that is, tell them they had more oxygen than they actually had) and that the devices might not detect a drop in oxygen quickly enough to warn them. The sketch below shows the arithmetic behind the reading.
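An oximeter compares the pulsing (AC) and steady (DC) parts of the red and infrared signals, the so-called ‘ratio of ratios’. The linear formula below is only a textbook approximation; real devices use empirically calibrated curves, and it is that calibration where skin-tone bias can creep in:

```python
def spo2_estimate(ac_red: float, dc_red: float, ac_ir: float, dc_ir: float) -> float:
    r = (ac_red / dc_red) / (ac_ir / dc_ir)  # the 'ratio of ratios'
    return 110 - 25 * r  # approximate %, reasonable only mid-range

print(f"{spo2_estimate(0.02, 1.0, 0.04, 1.0):.0f}%")  # R = 0.5 -> ~98%
```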

In response the UK Government announced in August 2022 that it would investigate this bias in a range of medical devices to ensure that future devices work effectively for everyone.

In the days when film was developed, technicians would use what was called a ‘Shirley card’ (a photograph of a White woman with brown hair) to colour-correct the photographs. The colour balancing meant darker skin tones didn’t come out as well; the problem was only really addressed because chocolate manufacturers and furniture companies complained that their chocolates and dark brown wood products weren’t showing up correctly!

The Racial Bias Built Into Photography, The New York Times (25 April 2019)

– Jo Brodie, Queen Mary University of London

More on …

Magazines …

Front cover of CS4FN issue 29 - Diversity in Computing

See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right





This page is funded by EPSRC on research agreement EP/W033615/1.


Bullseye! The intelligent dart board

A dart in the bullseye of a dartboard
Image by Tim Bastian from Pixabay

Mark Rober, an engineer and YouTuber who used to work for NASA, has created a dartboard that moves to meet your dart and land you the best score. Throw a dart at his board and infra-red motion-capture cameras track its path, software (and some maths) predicts where it will land, and motors move the dartboard into a better position to up the score, all in real time!
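A minimal sketch of the prediction step in Python (not Mark Rober’s actual code): fit the tracked positions to simple ballistic curves, then extrapolate to the plane of the board:

```python
import numpy as np

t = np.array([0.00, 0.05, 0.10, 0.15])  # camera timestamps (s), made up
x = np.array([0.00, 0.30, 0.60, 0.90])  # metres towards the board
y = np.array([1.50, 1.56, 1.59, 1.60])  # height: a parabola under gravity

fx = np.polyfit(t, x, 1)  # horizontal motion: roughly constant speed
fy = np.polyfit(t, y, 2)  # vertical motion: quadratic (gravity)

BOARD_DISTANCE = 2.37  # metres, the regulation throwing distance
t_hit = (BOARD_DISTANCE - fx[1]) / fx[0]  # when x(t) reaches the board
y_hit = np.polyval(fy, t_hit)             # predicted height at impact
print(f"dart arrives at t={t_hit:.2f}s at height {y_hit:.2f}m")
```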

– Jo Brodie, Queen Mary University of London



This article was funded by UKRI, through Professor Ursula Martin’s grant EP/K040251/2 and grant EP/W033615/1.


The Hive at Kew

Art meets bees, science and electronics

(from the archive)

A boy lying in the middle of the Hive at Kew Gardens.
Image by Paul Curzon

Combine an understanding of science with electronics skills and the creativity of an artist and you can get inspiring, memorable and fascinating experiences. That is what the Hive, an art installation at Kew Gardens in London, does. It is a massive sculpture linked to a subtle sound and light experience, surrounded by a wildflower meadow, but based on the work of scientists studying bees.

The Hive is a giant aluminium structure that represents a bee hive. Once inside, you see it is covered with LED lights that flicker on and off, apparently randomly. They aren’t random, though: they are controlled by a real bee hive elsewhere in the gardens. Each pulse of a light represents bees communicating in that real hive, where the artist Wolfgang Buttress placed accelerometers. These are simple sensors, like those in phones or a BBC micro:bit, that sense movement. The sensitive ones in the bee hive pick up vibrations caused by bees communicating with each other, and the signals generated are used to control the lights in the sculpture.
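You could build a toy version of the idea with a BBC micro:bit’s own accelerometer and display. A MicroPython sketch (the threshold is arbitrary; the real installation maps hive vibrations to thousands of LEDs):

```python
from microbit import accelerometer, display, sleep

THRESHOLD = 1200  # total acceleration in milli-g; tune for your setup

while True:
    x, y, z = accelerometer.get_values()
    if abs(x) + abs(y) + abs(z) > THRESHOLD:
        display.set_pixel(2, 2, 9)  # centre LED on, full brightness
    else:
        display.set_pixel(2, 2, 0)  # centre LED off
    sleep(50)  # milliseconds
```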

A new way to communicate

This is where the science comes in. The work was inspired by Martin Bencsik’s team at Nottingham Trent University, who in 2011 discovered a new kind of communication between bees using vibrations. Before bees swarm, when a large part of the colony splits off to create a new hive, they make a specific kind of vibration as they prepare to leave. The scientists discovered this using the set-up copied by Wolfgang Buttress: accelerometers placed in bee hives to help them understand bee behaviour. Monitoring hives like this could help scientists understand the current decline of bees, not least because large numbers of bees die when they swarm to search for a new nest.

Hear the vibrations through your teeth

Good vibrations

The Kew Hive has one last experience to surprise you: you can hear vibrations too. In the base of the Hive you can listen to the soundtrack through your teeth. Cover your ears, place a small coffee-stirrer-style stick between your teeth, and put the other end of the stick into a slot. Suddenly you can hear the sounds of the bees and music. Vibrations pass down the stick, through your teeth and the bones of your jaw, to be picked up in a different way by your ears.

A clever use of simple electronics has taught scientists something new and created an amazing work of art.

– Paul Curzon, Queen Mary University of London



EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 


Fencing the moon

Lunar module Eagle from the Apollo 11 mission getting ready to land, seen from the command module. Probes below each foot tell when the lunar module has almost landed.
Image by NASA from Wikimedia (public domain)

The Apollo lunar modules that landed on the moon were guided by a complex mixture of computer program control and human control. Neil Armstrong and the other astronauts essentially operated a semi-automatic autopilot, switching pre-programmed routines on and off. One of the many problems the astronauts had to deal with was that the engines had to be shut down before the craft actually landed. Too soon and they would land heavily with a crunch; too late and they could kick up the surface, and the dust might cause the lunar module to explode. But how to know when?

They had ground-sensing radar, but would it be accurate enough? They needed to know when they were only feet above the surface. The solution was a cunning contraption: essentially a sensor button on the end of a long stick. These sensors dangled below each foot of the lunar module (see image). When one touched the surface the button pressed in, a light came on in the control panel, and the astronaut knew to switch the engines off. Essentially, this sensor is the same as an epee: a fencing sword. In a fencing match the sword registers a hit when the button on its tip is pressed against the opponent’s body. Via a wire running down the sword and out behind the fencer, the press switches on a light on the scoreboard, telling the referee who made the hit. So the lunar module effectively had a fencing bout with the moon… and won.

– Paul Curzon, Queen Mary University of London



This cs4fn blog is funded by EPSRC, through grant EP/W033615/1.


How do you solve a problem like arthritis?

A Venn diagram of low participation, low empowerment and low independence with images linked to each: people eating in a restaurant, a person holding out their arms at the top of a peak, and two people walking.
Composite Image by CS4FN using inset images from Pixabay

Some diseases can’t be cured. Doctors and nurses just try to control them to stop them ruining people’s lives. Perhaps smartphone apps can pull off the trick of giving patients better care while giving clinicians more time to spend with the patients who most need them? A Venn diagram is at the centre of the Queen Mary team’s prototype.

What is rheumatoid arthritis?

Normally your immune system does a good job of fighting infection and keeping you healthy. But, if you have an autoimmune disease, it can also attack your healthy cells, causing inflammation and damage. Rheumatoid arthritis is like this: a painful condition that mostly affects hands, knees and feet as the person’s immune system attacks their joints, making them swell painfully. It affects around 400,000 people in the UK and is more common in women than men.

People with the disease alternate between periods when it is under control, with few symptoms, and days or weeks of painful ‘flares’ when it is very, very bad. During these flares it especially affects a person’s ability to live a normal life. It can be hard to move around comfortably or do exercise, and it interferes with their ability to work. It can also leave them totally reliant on family and friends just to do everyday things like dress or eat, never mind go out. This can lead to depression and puts a strain on friendships.

Treating the disease

Treatment, which can include tablets, injections, physiotherapy and sometimes surgery, slows the disease, keeping it under control for long periods. Sufferers are also given advice on lifestyle changes. This all reduces the risk of joint damage and helps people live their life more fully.

At appointments, doctors collect information to help them see how the disease is progressing. A Disease Activity Score (DAS) calculator lets them combine measurements for pain, how tender or swollen their patient’s joints are and how many joints are affected. Regular blood tests keep track of the amount of inflammation and how the body is reacting to drugs. This helps them decide if they need to adjust the medication.
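One widely used version of the score, DAS28, has a published formula combining counts of tender and swollen joints (out of 28 examined), a blood-test measure of inflammation (ESR) and the patient’s own 0–100 rating of their health. A Python sketch with illustrative numbers:

```python
from math import sqrt, log

def das28_esr(tender: int, swollen: int, esr: float, patient_global: float) -> float:
    return (0.56 * sqrt(tender) + 0.28 * sqrt(swollen)
            + 0.70 * log(esr) + 0.014 * patient_global)

score = das28_esr(tender=6, swollen=4, esr=30, patient_global=50)
print(f"DAS28 = {score:.1f}")  # above 5.1 counts as high disease
                               # activity, below 2.6 as remission
```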

If it is caught early, modern medicine reduces the worst effects of the disease, helped by keeping a close eye on the Disease Activity Score, as treatments may need to be repeatedly adjusted to control flares. This requires regular hospital visits, which use up scarce healthcare resources and are very time-consuming for patients. It is hampered because hospital appointments may only happen twice a year due to the number of patients. Everyone wants to give more personalised care, but hospitals just can’t afford to provide it.

Supporting doctors

So, what do you do when there just aren’t enough doctors to see everyone as regularly as needed to maintain their patients’ wellbeing? One solution is to use remote monitoring with an app on a patient’s smartphone, so involving patients more directly in their own care. They can use such apps to regularly record their own disease activity measurements, sharing the information with their doctor to save visiting the hospital.

A smart app

This is an improvement, but the measurements still require expert monitoring and can take more of the doctor’s time. However, if smartphones can actually be made to be, well, smart, then they could help give advice between hospital visits and alert the hospital team, when needed, so they can step in. This might involve, for example, loading the app with background knowledge about rheumatoid arthritis, expert knowledge from lots of doctors, and creating an artificial intelligence to use this information effectively for each patient.

Hospital specialists and computer scientists at Queen Mary are developing such a prototype based on Bayesian networks as the artificial intelligence core. Bayesian networks are based on reasoning about the causes of things and how likely different things are to be the cause of something being observed. Building the prototype involves finding out if patients and clinicians find such tools useful and acceptable (some people might find clinic visits reassuring, while some may be keener to avoid taking the time off work, for example).

Smart and patient centred

This still focusses on a clinician’s view of treatment using drugs though. With a smartphone app we can perhaps do better and take the person’s life into account – but how? The first step is to understand patient goals. Patients would need to be willing to share lots of information about themselves so that the software can learn as much as possible about them. Eventually, this might be done using sensors that automatically detect information: how much pain they are in, how stiff their joints are, how much they move around, how long it takes them to get out of a chair, how much sleep they get, how often they meet others, if and when they take their medicine, and so on. Rather than just focussing on medical treatment it can then focus advice ‘holistically’ on the whole person.

The Queen Mary team’s approach is centred around three different things: helping people with physical independence so they can move around and look after themselves; empowering them to manage their condition and general well-being themselves; and participation in the sense of helping them socialise, keep friendships and maintain family bonds.

The Bayesian network processes the information about patients and computes their predicted levels of independence, empowerment and participation, working out how good or bad things are for them at the moment. This places them in one of seven regions of a Venn diagram of the three dimensions, showing which areas need most attention. It then gives appropriate advice, aiming to keep all three dimensions in balance, monitoring what happens, but also alerting the hospital when necessary.

So, for example, if the Bayesian network judges independence low, participation high and empowerment low, the patient is in the Venn diagram intersection of low empowerment and low independence. Advice in the following weeks, based on this area of the Venn diagram, would focus on things like coping with pain and stiffness, getting better sleep, as well as how to manage the disease in general.
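A sketch of that placement step in Python (the low/ok judgements would really come from the Bayesian network; the logic here is just illustrative):

```python
def venn_region(independence: str, empowerment: str, participation: str) -> frozenset:
    """Return the set of dimensions currently judged 'low', picking one
    of the seven regions of the three-circle Venn diagram."""
    levels = {"independence": independence,
              "empowerment": empowerment,
              "participation": participation}
    return frozenset(name for name, level in levels.items() if level == "low")

# The example from the text: the 'low empowerment and low independence'
# intersection of the Venn diagram.
print(venn_region("low", "low", "high"))
```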

By personalising advice and focusing on the whole person, it is hoped patients will get more appropriate care as soon as they need it, but doctors’ time will also be freed up to focus on the patients who most need their help.

– Jo Brodie, Hamit Soyel and Paul Curzon, Queen Mary University of London, Spring 2021

Download Issue 27 of the cs4fn magazine on Smart Health here.

This post and issue 27 of the cs4fn magazine have been funded by EPSRC as part of the PAMBAYESIAN project.

Are you there yet?

Plenty of people love the Weasley family’s clock from the Harry Potter books and films. It shows where members of the family are at any given time. Instead of numbers giving the time, the clock face has locations where someone might be (home, school, shopping) and the many hands on the clock show the family members. The wizarding world uses magic to make their whereabouts clock work, but muggles (and squibs) can use mobile network data to build a simple version, and use Bayesian networks to improve it.

A cell phone tower looking up from inside to a blue sky
Image by Alistair McIntyre from Pixabay

Your mobile phone is in contact with several cell towers in the mobile provider’s network. When you send a message, it goes first to the nearest cell tower before passing through the network to reach your friend’s phone. As you move around, from home to school for example, you pass several towers. The closer you are to a tower, the stronger its signal, and the network uses this to estimate where you are, based on the signal strength at several towers. This means that, as long as your phone is with you, it can act as a sensor for your location and track you, just like the Weasleys’ whereabouts clock.
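A toy Python version of that estimate, weighting each tower’s (made-up) position by its signal strength; real networks use more careful models of timing and signal propagation:

```python
towers = {  # tower positions (x, y) in km, made up for illustration
    "school":      (0.0, 0.0),
    "high street": (1.0, 0.5),
    "home":        (2.0, 1.0),
}
strengths = {"school": 0.1, "high street": 0.5, "home": 0.9}

total = sum(strengths.values())
x = sum(strengths[name] * pos[0] for name, pos in towers.items()) / total
y = sum(strengths[name] * pos[1] for name, pos in towers.items()) / total
print(f"estimated position: ({x:.2f}, {y:.2f}) km")  # nearest to home
```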

You could also have a similar system at home that monitors your location, so that it switches on the lights and heating as you get closer to home to welcome you back. On a typical day you might head home somewhere between 3 and 6pm (depending on after-school events), and as you leave school the connection to your phone from the tower nearest the school will weaken, while connections to the other cell towers on your route home strengthen. But what if you appear to be heading home at 11 in the morning? Perhaps you are, or maybe the signal has simply dropped at the tower nearest the school, so that a tower nearer your home is now receiving the strongest signal!

A system using Bayesian logic to decide ‘near home’ or ‘not near home’ can be trained to put things into context. Unless you are ill, it’s unlikely that you’d be heading home before the afternoon, so these expected timings can give a likelihood score for an event (such as you heading home). A Bayesian network takes a piece of information (‘person might be nearby’) and considers it in the context of previous knowledge (‘and that’s expected at this time of day, so probably true’ or ‘but they are unlikely to be nearby now, so more information is needed’). Unlike machine learning approaches that just look for patterns in data, a Bayesian network builds in from the outset how one thing does or does not cause another. Here it builds in the different possible causes of the signal dropping at a cell tower.
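Bayes’ rule makes that ‘context’ concrete. A minimal Python sketch with made-up priors and likelihoods: the same strong-signal-near-home reading means different things at 4pm and at 11am:

```python
def p_heading_home(prior: float, p_signal_if_home: float, p_signal_if_not: float) -> float:
    """P(home | signal) = P(signal | home) * P(home) / P(signal)."""
    p_signal = p_signal_if_home * prior + p_signal_if_not * (1 - prior)
    return p_signal_if_home * prior / p_signal

print(p_heading_home(0.70, 0.8, 0.2))  # ~0.90: at 4pm, probably true
print(p_heading_home(0.05, 0.8, 0.2))  # ~0.17: at 11am, stay sceptical
```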

You could also set up a similar system in a home using wifi points to predict where you are and so what you are doing. Information like that could then feed data into a personalised artificial intelligence looking after you. Not all magic has to be run by magic!

by Jo Brodie, Queen Mary University of London, Spring 2021

Download Issue 27 of the cs4fn magazine on Smart Health here.

This post and issue 27 of the cs4fn magazine have been funded by EPSRC as part of the PAMBAYESIAN project. This article was inspired by the blog post Presence Detection Part 1: Home Assistant & Bayesian Probability and a previous cs4fn article on making a Whereabouts Clock.