The Digital Seabed: Data in Augmented Reality

A globe (North Atlantic visible) showing ocean depth information, with the path of HMS Challenger shown in red. Image by Daniel Gill.

For many of us, the deep sea is a bit of a mystery. But an exciting interactive digital tool at the National Museum of the Royal Navy is bringing the seabed to life!

It turns out that the sea floor is just as interesting as the land where we spend most of our time (unless you’re a crab, of course, in which case you spend most of your time on the sea floor). I recently learnt about the sea floor at the National Museum of the Royal Navy in Portsmouth, in their “Worlds Beneath the Waves” exhibition, which documents 150 years of deep-sea exploration.

One ship which revolutionised deep ocean study was HMS Challenger. It set sail in 1872 and went on to make a 68,890 nautical-mile journey all over the earth’s oceans. One of its scientific goals was to measure the depth of the seabed as it circled the earth. To make these measurements, a long rope with a weight at one end was dropped into the water; the weight sank to the bottom, and the length of rope let out before it hit the sea floor gave the depth. It’s a simple process, but it worked!

Thankfully, modern technology has caught up with bathymetry (the study of the sea floor). Now, sea floor depths are measured using sonar (sound) and lidar (light) from ships, or using special sensors on satellites. All of these methods send signals down to the seabed and time how long it takes for the echo to come back. Knowing the speed of sound or light through air and water, you can calculate the distance to whatever reflected the signal.
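
As a rough sketch (not what real survey instruments run, and using an average speed of sound, which in real seawater varies with temperature, salinity and pressure), the sums behind an echo sounder look something like this:

```python
# Rough sketch of the time-of-flight idea behind sonar depth sounding.
# The speed of sound in seawater (about 1500 m/s) is only an average:
# in reality it varies with temperature, salinity and pressure.

SPEED_OF_SOUND_SEAWATER = 1500.0  # metres per second (approximate)

def depth_from_echo(round_trip_seconds: float,
                    speed: float = SPEED_OF_SOUND_SEAWATER) -> float:
    """Estimate seabed depth from the time an echo takes to return.

    The signal travels down to the seabed and back up again,
    so the one-way distance is half of speed * time.
    """
    return speed * round_trip_seconds / 2.0

# Example: an echo that takes 4 seconds to return suggests a depth
# of roughly 3000 metres.
print(depth_from_echo(4.0))  # 3000.0
```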

You may be thinking, why do we need to know how deep the ocean is? Well, apart from the human desire to explore and map our planet, it’s also useful for navigation and safety: in smaller waterways and ports, it’s very helpful to know whether there’s enough water below the boat to stay afloat!

It’s also useful to look at fault lines, the deep valleys (such as Challenger Deep, the deepest known point in the ocean, named after HMS Challenger), and underwater mountain ranges which separate continental plates. Studying these can help us to predict earthquakes and understand continental drift (read more about continental drift).

The sand table with colours projected onto it showing height. Image by Daniel Gill.

We now have a much better understanding of the seabed, including detailed maps of sea floor topography around the world. So, we know what the ocean floor looks like at the moment, but how can we use this to understand the future of our waterways? This is where computers come in.

Near the end of the exhibition sits a table covered in sand, with the sand’s current topography projected onto it. Where the sand is piled up higher it is coloured red and orange; lower areas are green and blue. Looking across the table you can see how sand at the same level, even far apart, falls within the same band of colour.

The projected image automatically adjusts (below) to the removal of the hill in red (above). Image by Daniel Gill.

But this isn’t even the coolest part! When you pick up and move sand around, the colours automatically adjust to the new sand topography, allowing you to shape the seabed at will. The sand itself, however, flows and settles under gravity, so an unrealistically tall tower will soon collapse into a more rotund mound.

Want to know what would happen if a meteor struck? Grab a handful of sand, drop it onto the table (without making a mess) and see how the topographical map changes with time!

The technology above the table. Image by Daniel Gill.

So how does this work? Looking above the table, you can see an Xbox Kinect sensor and a projector. The Kinect works much like the lidar systems installed on ships: it sends beams of infrared light down onto the sand, which bounce back to the sensor, and the time they take is measured. This creates a depth map, just as ships do, but on a much smaller scale. The map is turned into colours and projected back onto the sand.
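
As a rough sketch of that last step (not the exhibit’s actual software), here is how a grid of measured sand heights could be turned into colour bands ready to project back down. The band boundaries and colours are invented for illustration:

```python
import numpy as np

# A minimal sketch of turning a height map into colour bands, in the
# spirit of the sand table (not its actual software). Heights are in
# centimetres above the table surface; the band boundaries and colours
# below are invented for illustration.
BANDS = [
    (2.0,    (0,   0,   255)),  # up to 2 cm  -> blue  (low "sea")
    (5.0,    (0,   200, 0)),    # 2-5 cm      -> green (low ground)
    (8.0,    (255, 165, 0)),    # 5-8 cm      -> orange
    (np.inf, (255, 0,   0)),    # above 8 cm  -> red   (high "hills")
]

def colourise(height_map: np.ndarray) -> np.ndarray:
    """Map each cell of a 2D height map to an RGB colour band."""
    colours = np.zeros(height_map.shape + (3,), dtype=np.uint8)
    lower = -np.inf
    for upper, rgb in BANDS:
        mask = (height_map > lower) & (height_map <= upper)
        colours[mask] = rgb
        lower = upper
    return colours

# Example: a tiny 2x3 patch of sand heights (in cm).
heights = np.array([[1.0, 3.0, 6.0],
                    [9.0, 4.5, 0.5]])
print(colourise(heights))
```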

Virtual water fills the valleys. Image by Daniel Gill.

This is not the only feature of this table, however: it can also run physics simulations! By placing your hand over the sand, you can add virtual water, which flows realistically into the lower areas of sand, and even responds to the movement of sand.
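
The real table runs a much more sophisticated fluid simulation, but the basic idea can be sketched very simply: treat the surface as a grid of cells, and on every step let a little water flow from each cell to any neighbour whose combined sand-plus-water level is lower. Here is a toy version of that idea (definitely not the exhibit’s code):

```python
import numpy as np

# A very simplified water-flow sketch. Each cell has a fixed sand height
# and a water depth; on every step a little water moves from a cell to
# any neighbour whose combined sand+water level is lower.

def flow_step(sand: np.ndarray, water: np.ndarray, rate: float = 0.25) -> np.ndarray:
    """Return a new water array after one flow step."""
    level = sand + water
    new_water = water.copy()
    rows, cols = sand.shape
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    drop = level[r, c] - level[nr, nc]
                    if drop > 0:
                        # Move a fraction of the available water downhill,
                        # never more than roughly enough to level the cells.
                        moved = min(water[r, c] * rate, drop / 2)
                        new_water[r, c] -= moved
                        new_water[nr, nc] += moved
    return new_water

# Example: a small sloping valley with all the water dropped in one corner.
sand = np.array([[3.0, 2.0, 1.0],
                 [2.0, 1.0, 0.5],
                 [1.0, 0.5, 0.0]])
water = np.zeros_like(sand)
water[0, 0] = 2.0
for _ in range(50):
    water = flow_step(sand, water)
print(np.round(water, 2))  # the water pools in the lowest corner of the valley
```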

The mixing of physical and digital representations of data like this is an example of augmented, or mixed, reality. It can help visualise things that you might otherwise find difficult to imagine, by simulating the effects of building a new dam, for example. Models like this can help experts and students, and indeed museum visitors, to see a problem in a different and more interactive way.

– Daniel Gill, Queen Mary University of London



This page is funded by EPSRC on research agreement EP/W033615/1.


Signing Glasses

Glasses sitting on top of a mobile phone.
Image by Km Nazrul Islam from Pixabay

In a recent episode of Dr Who, The Well, Deaf actress Rose Ayling-Ellis plays a Deaf character, Aliss. Aliss is a survivor of some, at first unknown, disaster that has befallen a mining colony 500,000 years in the future. The Doctor and current companion Belinda arrive with troopers. Discovering Aliss is deaf, they communicate with her using a nifty futuristic gadget of the troopers’ that picks up everything they say and converts it into text, projected in front of them, so she can read their words as they speak.

Such a gadget is not so futuristic actually (other than a whole group of troopers carrying them). Dictation programs have existed for a long time and now, with faster computers and modern natural language processing techniques, they can convert speech to text in real time from a variety of speakers without lots of personal training (though they still do make mistakes). Holographic displays also exist, though one as portable as the troopers’ is still a stretch. An alternative that definitely exists is augmented reality glasses specifically designed for deaf people (though they are still expensive). A deaf or hard of hearing person who owns a pair can read what is spoken through their glasses in real time as a person speaks to them, with the computing power provided by their smart phone, for example. The text could even be displayed so that it appears to be out in the world (not on the lenses), as though it were floating next to the person speaking. The effect would be pretty much the same as in the programme, but without the troopers having to bring gadgets of their own: just Aliss wearing glasses.
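
If you want to play with the idea yourself, free speech recognition tools already get you surprisingly far. Here is a minimal sketch using the Python SpeechRecognition library (just one option among many, and nowhere near as polished as real captioning glasses); it needs a microphone and an internet connection:

```python
# A minimal speech-to-text sketch using the Python SpeechRecognition
# library (one option among many; real captioning glasses use faster,
# more specialised systems). The recorded audio is sent to Google's
# free web speech API to be converted into text.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate to background noise
    print("Say something...")
    audio = recognizer.listen(source)            # record until you pause

try:
    text = recognizer.recognize_google(audio)    # convert the speech to text
    print("Caption:", text)
except sr.UnknownValueError:
    print("Sorry, I couldn't make that out.")
except sr.RequestError as err:
    print("Speech service unavailable:", err)
```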

Aliss (and Rose) used British Sign Language of course, and she and the Doctor were communicating directly using it, so one might have hoped that by 500,000 years in the future someone would have had the idea of projecting sign language rather than text. After all, British Sign Language is a language in its own right with a different grammatical structure to English. It is therefore likely that it would be easier for a native BSL user to watch sign language than to read text in English.

Some Deaf people might also object to glasses that translate into English because it undermines their first language, and so their culture. However, glasses that translate into sign language can do the opposite and reinforce sign language, helping people (whether deaf or not) learn the language by being immersed in it. Services like this do in fact already exist, connecting Deaf people to expert sign language interpreters who see and hear what they do and translate for them – whether through glasses or laptops.

Of course, all the above is about allowing Deaf people (like Aliss) to fit into a non-deaf world (like that of the troopers), allowing her to understand them. The same technology could also be used to allow everyone else to fit into a Deaf world. Aliss’s signing could have been turned into text for the troopers in the same way. Similarly, augmented reality glasses connected to a computer vision system could translate sign language into English, allowing non-deaf people wearing the glasses to understand people who are signing.

So it’s not just Deaf people who should be wearing sign language translation glasses. Perhaps one day we all will. Then we would be able to understand (and over time hopefully learn) sign language and actively support the culture of Deaf people ourselves, rather than just making them adapt to us.

– Paul Curzon, Queen Mary University of London


Photogrammetry for fun, preservation and research – digitally stitching together 2D photographs to visualise the 3D world.

Composite image of one green glass bottle made from three photographs. Image by Jo Brodie

Imagine you’re the costume designer for a major new film about a historical event that happened 400 years ago. You’d need to dress the actors so that they look like they’ve come from that time (no digital watches!) and might want to take inspiration from some historical clothing that’s being preserved in a museum. If you live near the museum and can get permission to see (or even handle) the material, that makes it a bit easier, but perhaps the ideal item is in another country or too fragile to handle.

This is where 3D imaging can help. Photographs are nice but don’t let you get a sense of what an object is like when viewed from different angles, and they don’t really give a sense of texture. Video can be helpful, but you don’t get to control the view. One way around that is to take lots of photographs, from different angles, then ‘stitch’ them together to form a three dimensional (3D) image that can be moved around on a computer screen – an example of this is photogrammetry.

In the (2D) example above I’ve manually combined three overlapping close-up photos of a green glass bottle, to show what the full size bottle actually looks like. Photogrammetry is a more advanced version (but does more or less the same thing) which uses computer software to line up the points that overlap and can produce a more faithful 3D representation of the object.
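
If you are curious about how the software finds those overlapping points, the sketch below uses the OpenCV library’s ORB feature detector to spot and match distinctive points between two overlapping photos. Real photogrammetry software goes much further, working out where each camera was and triangulating a 3D point cloud from the matches; the filenames here are just placeholders:

```python
# A sketch of the "line up the overlapping points" step that photogrammetry
# software automates, using OpenCV's ORB feature detector. Real photogrammetry
# goes much further, estimating where each camera was and triangulating a 3D
# point cloud from the matches. The filenames are placeholders: you would need
# two overlapping photos of the same object.
import cv2

img1 = cv2.imread("bottle_left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("bottle_right.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)   # find distinctive corners and blobs
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match each feature in one photo with its most similar feature in the other.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"Found {len(matches)} matching points between the two photos")

# Draw the 50 best matches side by side so you can see the overlap.
vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:50], None)
cv2.imwrite("matches.jpg", vis)
```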

Below you can see a looping gif of the glass bottle being rotated first in one direction and then the other. This animation is the result of a 3D ‘scan’ made from only 29 photographs using the free software app Polycam. With more photographs you could end up with an even more impressive result. You can interact with the original scan here – you can zoom in and turn the bottle to view it from any angle you choose.

A looping gif of the 3D Polycam file being rotated one way then the other. Image by Jo Brodie

You might walk around your object and take many tens of images from slightly different viewpoints with your camera. Once your photogrammetry software has lined the images up on a computer, you can share the result and someone else can then walk around the same object – but virtually!

Photogrammetry is being used by hobbyists (it’s fun!) but is also being used in lots of different ways by researchers. One example is the field of ‘restoration ecology’: in particular, monitoring damage to coral reefs over time, and also checking whether particular reef recovery strategies are succeeding. Reef researchers can use several cameras at once to take lots of overlapping photographs, from which they can then create three dimensional maps of the area. A new project recently funded by NERC* called “Photogrammetry as a tool to improve reef restoration” will investigate the technique further.

Photogrammetry is also being used to preserve our understanding of delicate historic items such as Stuart embroideries at The Holburne Museum in Bath. These beautiful craft pieces were made in the 1600s using another type of 3D technique. ‘Stumpwork’ or ‘raised embroidery’ used threads and other materials to create pieces with a layered three dimensional effect. Here’s an example of someone playing a lute to a peacock and a deer.

“Satin worked with silk, chenille threads, purl, shells, wood, beads, mica, bird feathers, bone or coral; detached buttonhole variations, long-and-short, satin, couching, and knot stitches; wood frame, mirror glass, plush”, 1600s. Photo CC0 from the Metropolitan Museum of Art, uploaded by Pharos on Wikimedia.

A project funded by the AHRC* (“An investigation of 3D technologies applied to historic textiles for improved understanding, conservation and engagement”) is investigating a variety of 3D tools, including photogrammetry, to recreate digital copies of the Stuart embroideries so that people can experience a version of them without the glass cases that the real ones are safely stored in.

Using photogrammetry (and other 3D techniques) means that many more people can enjoy, interact with and learn about all sorts of things, without having to travel or damage delicate fabrics, or corals.

*NERC (Natural Environment Research Council) and AHRC (Arts and Humanities Research Council) are two organisations that fund academic research in universities. They are part of UKRI (UK Research & Innovation), the wider umbrella group that includes several research funding bodies.

Other uses of photogrammetry

Cultural heritage and ecology are the examples highlighted in this post, but photogrammetry is also used in interactive games (particularly virtual reality), engineering, crime scene forensics and the film industry: Mad Max: Fury Road, for example, used the technique to create a number of its visual effects. Hobbyists also create 3D versions of all sorts of objects (called ‘3D assets’) and sell them to games designers to include in their games for players to interact with.

Careers

This was an example job advert (since closed) for a photogrammetry role in virtual reality.

Further reading

Other CS4FN posts about the use of 3D imaging

“The team behind the idea scanned several works of art using very accurate laser scanners that build up a 3D picture of the thing being scanned. From this they created a 3D model of the work. This then allowed a person wearing the gloves to feel as though they were touching the actual sculpture, feeling all the detail.”

See also our collection of Computer Science & Research posts.



Art Touch and Talk Tour Tech


by Paul Curzon, Queen Mary University of London

What could a blind or partially-sighted person get from a visit to an art gallery? Quite a lot, if the art gallery puts its mind to it. Even more if they make use of technology. So much so, we may all want the enhanced experience.

A sculpture of a head and shoulders, heavily textured with a network of lines and points
Image by NoName_13 from Pixabay

The best art galleries provide special tours for blind and partially-sighted people. One kind involves a guide or curator explaining paintings and other works of art in depth. It is not exactly like a normal guided tour that might focus on the history or importance of a painting. The best will give both an overview of the history and importance whilst also giving a detailed description of the whole picture as well as the detail, emphasising how each part was painted. They might, for example, describe the brush strokes and technique as well as what is depicted. They help the viewer create a really detailed mental model of the painting.

One visually-impaired guide who now gives such tours at galleries such as Tate Britain, Lisa Squirrel, has argued that these tours give a much deeper and richer understanding of the art than a normal tour and certainly more than someone just looking at the pictures and reading the text as they wander around. Lisa studied Art History at university and before visiting a gallery herself reads lots and lots about the works and artists she will visit. She found that guided tours by sighted experts using guided hand movements in front of a painting helped her build really good internal models of the works in her mind. Combined with her extensive knowledge from reading, she wasn’t building just a picture of the image depicted but of the way it was painted too. She gained a deep understanding of the works she explored including what was special about them.

The other kind of tour art galleries provide is a touching tour. It involves blind and partially-sighted visitors being allowed to touch selected works of art as part of a guided tour where a curator also explains the art. Blind art lover Georgina Kleege has suggested that touch tours give a much richer experience than a normal tour, and should be put on for everyone for this reason. It is again about more than just feeling the shape and so working out its form, “seeing” what a sighted person would take in at a glance. It is about gaining a whole different sensory experience of the work: its texture, for example, not a lesser version of what it looks like.

How might technology help? Well, the company NeuroDigital Technologies has developed a haptic glove system for the purpose. Haptic gloves contain vibration pads that stimulate the wearer’s skin in different, very fine ways so as to fool the brain into thinking it is touching things of different shapes and textures. Their system has over a thousand different vibration patterns to simulate the feel of touching different surfaces. The gloves also contain sensors that determine their precise position in space as the person moves their hands around.

The team behind the idea scanned several works of art using very accurate laser scanners that build up a 3D picture of the thing being scanned. From this they created a 3D model of the work. This then allowed a person wearing the gloves to feel as though they were touching the actual sculpture, feeling all the detail. More than that, the team could augment the experience to give enhanced feelings in places in shadow, for example, or to emphasise different parts of the work.

A similar system could be applied to historical artefacts too, allowing people to “feel”, not just see, the Rosetta Stone, for example. Perhaps it could also be applied to paintings, to allow a person to feel the brush strokes in a way that just could not otherwise be done. This would give an enhanced version of the experience Lisa found so useful, of having her hand guided in front of a painting while the brush strokes and areas were described. Different colours might also be coded with different vibration patterns, allowing a series of different enhanced touch tours of a painting: first exploring its colours, then its brush strokes, and so on.
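
As a purely hypothetical sketch of that last idea, the code below imagines looking up a vibration pattern from the colour of the pixel under a fingertip. The glove call, the pattern numbers and the colour rules are all invented for illustration; a real glove system would have its own software interface:

```python
# A purely hypothetical sketch of coding colours as vibration patterns.
# There is no real glove API here: the glove.play_pattern() call at the
# bottom stands in for whatever a real haptic glove's software provides,
# and the pattern numbers and colour rules are invented for illustration.
from PIL import Image

# Invented mapping from rough colour families to vibration pattern IDs.
COLOUR_PATTERNS = {
    "red": 17,
    "green": 42,
    "blue": 63,
    "other": 1,
}

def colour_family(r: int, g: int, b: int) -> str:
    """Very crudely classify a pixel by its strongest colour channel."""
    value, name = max((r, "red"), (g, "green"), (b, "blue"))
    return name if value > 80 else "other"

def pattern_at(image: Image.Image, x: int, y: int) -> int:
    """Pick a vibration pattern for the pixel under the fingertip."""
    r, g, b = image.convert("RGB").getpixel((x, y))
    return COLOUR_PATTERNS[colour_family(r, g, b)]

# painting = Image.open("painting.jpg")           # placeholder filename
# glove.play_pattern(pattern_at(painting, x, y))  # hypothetical glove call
```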

What about talking tours? Can technology help there? AIs can already describe pictures, but early versions at least were trained on the descriptions people have given to images on the Internet: “a black cat sitting on top of the TV looking cute”, or, for the Mona Lisa, “a young woman staring at you”. That in itself wouldn’t cut it. Neither would training the AI on the normal brief descriptions on the gallery walls next to works of art. However, art books and websites are full of detail, and more recent AIs can give very detailed descriptions of art works if asked. These descriptions include what the picture looks like overall, the components, colours, brushstrokes and composition, symbolism, historical context and more (at least for famous paintings). With specific training from curators and art historians the AIs will only get better.

What is still missing for a blind person, though, from the kind of experience Lisa has when exploring a painting with a guide, is the link to the actual picture in space: having the guide move her hand in front of the painting as the parts are described. However, all that is needed to fill that gap is to combine a chat-based AI with a haptic glove system (and provide a way to link descriptions to spatial locations on the image). Then the descriptions can be linked to the position of a hand moving in space in front of a virtual version of the picture. Combine that with the kind of system already invented to help blind people navigate, where vibrations on a walking stick indicate directions and times to turn, and the gloves can not only give haptic sensations of the picture or sculpture, but also guide the person’s movement over it.

Whether you have such an experience in a gallery, in front of the work of art, or in your own front room, blind and partially-sighted people could soon be getting much better experiences of art than sighted people. At which point, as Georgina Kleege suggested for normal touch tours, everyone else will likely want the full “blind” experience too.


One in the eye for wearable tech

Contact lenses, normally used simply (but usefully) to correct people’s vision, could in the future do far more.

Eye reflecting city lights
Image by kp yamu Jayanath from Pixabay

Tiny microelectronic circuits, antennae and sensors can now be fabricated and set in the plastic of contact lenses. Researchers are looking at the possibility of using such sensors to sample and transmit the glucose level in the eye moisture: useful information for diabetics. Others are looking at lenses that can change your focus, or even project data onto the lens, allowing new forms of augmented and virtual reality.

Conveniently, you can turn the frequent natural motion from the blinks of your eye into enough power to run the sensors and transmitter, doing away with the need for charging. All this means that smart contact lenses could be a real eye opener for wearable tech.

by Peter W. McOwan, Queen Mary University of London, Autumn 2018