Tony Stockman: Sonification

Two different coloured wave patterns superimposed on one another on a black background with random dots like a starscape.
Image by Gerd Altmann from Pixabay

Tony Stockman, who was blind from birth, was a Senior Lecturer at QMUL until his retirement. A leading academic in the field of sonification of data (turning data into sound), he eventually became President of the “International Community for Auditory Display”, the community of researchers working in this area.

Traditionally, we put a lot of effort into finding the best ways to visualise data so that people can easily see the patterns in it. This is an idea that Florence Nightingale, of lady of the lamp fame, pioneered with Crimean War data about why soldiers were dying. Data visualisation is considered so important that it is taught in primary schools, where we all learn about pie charts, histograms and the like. You can make a career out of data visualisation, working in the media creating visualisations for news programmes and newspapers, for example, and finding a good visualisation is massively important for a researcher trying to help people understand their results. In Big Data, a good visualisation can help you gain new insights into what is really happening in your data. Those who can come up with good visualisations can become stars, because they can make such a difference (like Florence Nightingale, in fact).

Many people, of course, Tony included, cannot see, or are partially sighted, so visualisation is not much help! Tony therefore worked on sonifying data instead, exploring how to map data onto sounds rather than imagery in a way that does the same thing: making the patterns obvious and understandable.

His work in this area started with his PhD, where he was exploring how breathing affects changes in heart rate. He needed a way both to check for noise in the recordings and to present the results so that he could analyse and understand them. So he invented a simple way to turn data into sound, mapping, for example, frequencies in the data to sound frequencies. By listening he could find places in his data where interesting things were happening, and then investigate the actual numbers. He did this out of necessity, just to make his research possible, but decades later discovered there was by then a whole research community working on uses of, and good ways to do, sonification.
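How might such a direct mapping work? Here is a minimal sketch in Python, assuming we simply scale each data value to a pitch in an audible range and write the result out as a sequence of tones (an illustration of the general idea, not Tony’s actual software):

```python
import math
import struct
import wave

RATE = 44100  # audio samples per second

def sonify(data, out_file="sonified.wav", low=220.0, high=880.0, note_len=0.2):
    """Map each data value to a pitch between low and high (in Hz)
    and write the resulting sequence of tones to a WAV file."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1  # avoid dividing by zero for flat data
    samples = []
    for value in data:
        # Linearly map the value onto the chosen pitch range
        freq = low + (value - lo) / span * (high - low)
        for i in range(int(RATE * note_len)):
            samples.append(math.sin(2 * math.pi * freq * i / RATE))
    with wave.open(out_file, "w") as w:
        w.setnchannels(1)    # mono
        w.setsampwidth(2)    # 16-bit samples
        w.setframerate(RATE)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

# A mostly regular series with one anomaly: listen for the high note
sonify([1, 2, 1, 2, 1, 9, 1, 2, 1, 2])
```

Play the file and the odd value stands out immediately as a jump in pitch, without ever looking at the numbers.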

He went on to explore how sonification could be used to give overviews of data for both sighted and non-sighted people. We are very good at spotting patterns in sound – that is all music is, after all – and abnormalities in a sound pattern can stand out even more than when visualised.

Another area of his sonification research involved developing auditory interfaces, for example to allow people to hear diagrams. One of the most famous and successful data visualisations ever is the London Tube Map designed by Harry Beck, who became famous as a result: it made the Tube so easy to understand by using abstract nodes and lines that ignore real distances. Tony’s team explored ways to present similar node-and-line diagrams, what computer scientists call graphs. After all, it is all well and good having a screen reader to read text, but it is not a lot of good if all it tells you, reading the ALT text, is that you have the Tube Map in front of you. This kind of graph is used in all sorts of everyday situations, but is especially important if you want to get around on public transport.

There is still a lot more to be done before media that involves imagery as well as text is fully accessible, but Tony showed that it is definitely possible to do better. He also showed throughout his career that being blind did not have to hold him back from being an outstanding computer scientist as well as a leading researcher, even if he did have to innovate for himself from the start to make it possible.


Shh! Can you hear that diagram?

What does a diagram sound like? What does the shape of a sound feel like? Researchers at Queen Mary, University of London have been finding out.

At first sight, listening to diagrams and feeling sounds might sound like nonsense, but for people who are visually impaired it is a practical issue. Even if you can’t see words, you can still listen to them, after all. Spoken books were originally intended for partially sighted people, before we all realised how useful they were. Screen readers similarly read out the words on a computer screen, making the web and other programs accessible. Blind people can also use touch to read. That is essentially all Braille is: replacing letters with raised patterns you can feel.

The written world is full of more than just words, though. There are tables and diagrams, pictures and charts. How does a partially sighted person deal with them? Is there a way to allow them to work with others creating or manipulating diagrams, even when each person is using a different sense?

That’s what the Queen Mary researchers, working with the Royal National Institute for the Blind and the British Computer Association of the Blind, explored. Their solution was a diagram editor with a difference. It allows people to edit ‘node-and-link’ diagrams: like the London underground map, for example, where the stations are the nodes and the links show the lines between them. The diagram editor converts the graphical part of a diagram, such as shapes and positions, into sounds you can listen to and textured surfaces you can feel. It allows people to work together exploring and editing a variety of diagrams including flowcharts, circuit diagrams, tube maps, mind maps, organisation charts and software engineering diagrams. Each person, whether fully sighted or not, ‘views’ the diagram in the way that works for them.

The tool combines speech and non-speech sounds to display a diagram. For example, when the label of a node is spoken, it is accompanied by a bubble bursting sound if it’s a circle, and a wooden sound if it’s a square. The labels of highlighted nodes are spoken with a higher pitched voice to show that they are highlighted. Different types of links are also displayed using different sounds to match their line style. For example, the sound of a straight line is smoother than that of a dashed line. The idea for arrows came from listening to one being drawn on a chalk board. They are displayed using a short and a long sound where the short sound represents the arrow head, and the long sound represents its tail. Changing the order they are presented changes the direction of the arrow: either pointing towards or away from the node.
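As a rough sketch of how such a mapping might be coded (the sound file names and cue format here are hypothetical stand-ins, not the team’s actual implementation), each diagram element can be turned into an ordered list of audio cues for a player to work through:

```python
# Hypothetical sound files standing in for the real recordings
NODE_SOUNDS = {"circle": "bubble_pop.wav", "square": "wood_tap.wav"}
LINK_SOUNDS = {"solid": "smooth_tone.wav", "dashed": "broken_tone.wav"}

def render_node(label, shape, highlighted=False):
    """A node is its shape's sound followed by its spoken label,
    pitched higher if the node is highlighted."""
    pitch = 1.5 if highlighted else 1.0
    return [("play", NODE_SOUNDS[shape]), ("speak", label, pitch)]

def render_arrow(style, towards_node):
    """An arrow is a short sound (the head) and a long sound (the tail);
    swapping their order flips the direction the arrow points."""
    head = ("play", LINK_SOUNDS[style], 0.1)  # short: the arrow head
    tail = ("play", LINK_SOUNDS[style], 0.6)  # long: the arrow tail
    return [head, tail] if towards_node else [tail, head]

cues = render_node("Oxford Circus", "circle", highlighted=True)
cues += render_arrow("solid", towards_node=False)
print(cues)  # the sequence an audio player would work through
```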

For the touch part, the team use a PHANTOM Omni haptic device, a robotic arm attached to a stylus that can be programmed to simulate feeling 3D shapes, textures and forces. For example, in the diagram editor nodes have a magnetic effect: if you move the stylus close to one, the stylus gets pulled towards it. You can grab a node and move it to another location, and when you do, a spring-like effect is applied to simulate dragging. If you let it go, the node springs back to its original location. Sound and touch are also integrated to reinforce each other. As you drag a node, you hear a chain-like sound (like dragging a metal ball chained to a prisoner?!). When you drop it in a new location, you hear the sound of a dart hitting a dart board.
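The forces involved are simple physics. Here is a sketch, with made-up constants and 2D positions for simplicity, of the two effects described: the magnetic pull towards a nearby node and the Hooke’s law spring that drags a moved node back home:

```python
import math

MAGNET_RANGE = 30.0  # how close the stylus must be before the pull starts
MAGNET_GAIN = 0.5    # strength of the magnetic effect (made-up constant)
SPRING_K = 0.8       # spring stiffness for the drag effect (made-up constant)

def magnetic_pull(stylus, node):
    """Force pulling the stylus towards a node once it is close enough."""
    dx, dy = node[0] - stylus[0], node[1] - stylus[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > MAGNET_RANGE:
        return (0.0, 0.0)
    # Pull harder the closer the stylus gets, along the direction to the node
    strength = MAGNET_GAIN * (MAGNET_RANGE - dist) / MAGNET_RANGE
    return (strength * dx / dist, strength * dy / dist)

def spring_force(dragged, home):
    """Hooke's law (F = -kx): the further a node is dragged from home,
    the harder it pulls back towards its original location."""
    return (SPRING_K * (home[0] - dragged[0]),
            SPRING_K * (home[1] - dragged[1]))

print(magnetic_pull(stylus=(95.0, 100.0), node=(100.0, 100.0)))
print(spring_force(dragged=(120.0, 100.0), home=(100.0, 100.0)))
```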

The Queen Mary research team tried out the editor in a variety of schools and work environments where visually impaired and sighted people use diagrams as part of their everyday activities, and it seemed to work well. It’s free to download, so why not try it yourself? You might see diagrams in a whole new light.

Paul Curzon, Queen Mary University of London



Signing Glasses

Glasses sitting on top of a mobile phone.
Image by Km Nazrul Islam from Pixabay

In a recent episode of Dr Who, The Well, Deaf actress Rose Ayling-Ellis plays a Deaf character, Aliss. Aliss is a survivor of some, at first unknown, disaster that has befallen a mining colony 500,000 years in the future. The Doctor and current companion Belinda arrive with troopers. Discovering Aliss is deaf, they communicate with her using a nifty futuristic gadget of the troopers that picks up everything they say and converts it into text, projected in front of them, allowing her to read what they say as they speak.

Such a gadget is not actually so futuristic (other than a group of troopers carrying them around). Dictation programs have existed for a long time and now, with faster computers and modern natural language processing techniques, they can convert speech to text in real time from a variety of speakers without lots of training on each individual voice (though they still do make mistakes). Holographic displays also exist, though one as portable as the troopers had is still a stretch. An alternative that definitely exists is augmented reality glasses specifically designed for deaf people (though they are still expensive). A deaf or hard of hearing person who owns a pair can read what is spoken through their glasses in real time as a person speaks to them, with the computing power provided by their smartphone, for example. The text can also be displayed so that it appears to be out in the world (not on the lenses), as though it were appearing next to the person speaking. The effect would be pretty much the same as in the programme, but without the troopers having to bring gadgets of their own, just Aliss wearing glasses.

Aliss (and Rose) used British Sign Language of course, and she and the Doctor were communicating directly using it, so one might have hoped that by 500,000 years in the future someone would have had the idea of projecting sign language rather than text. After all, British Sign Language is a language in its own right with a different grammatical structure to English. It is therefore likely that a native BSL speaker would find it easier to watch sign language than to read English text.

Some Deaf people might also object to glasses that translate into English because that undermines their first language and so their culture. However, glasses that translate into sign language could do the opposite and reinforce it, helping people (whether deaf or not) learn the language by being immersed in it. Services like this do in fact already exist, connecting Deaf people to expert sign language interpreters who see and hear what they do and translate for them, whether through glasses or laptops.

Of course, all the above is about allowing Deaf people (like Aliss) to fit into a non-deaf world (like that of the troopers), allowing her to understand them. The same technology could also be used to allow everyone else to fit into a Deaf world. Aliss’s signing could have been turned into text for the troopers in the same way. Similarly, augmented reality glasses connected to a computer vision system could translate sign language into English, allowing non-deaf people wearing glasses to understand people who are signing.

So it’s not just Deaf people who should be wearing sign language translation glasses. Perhaps one day we all will. Then we would be able to understand (and over time hopefully learn) sign language, and actively support the culture of Deaf people ourselves, rather than just making them adapt to us.

– Paul Curzon, Queen Mary University of London


Sign Language for Train Departures

BSL for CS4FN
Image by Daniel Gill

This week (5-11th May) is Deaf Awareness Week, an opportunity to celebrate d/Deaf* people, communities, and culture, and to advocate for equal access to communication and services for the d/Deaf and hard of hearing. A recent step forward is that sign language has started appearing on screens at railway stations.

*”deaf” with a lower-case “d” refers to the audiological experience of deafness, or those who might have become deafened or hard of hearing in later life and so might identify closer to the hearing community. “Deaf” with an upper-case “D” refers to the cultural experience of deafness, or those who might have been born Deaf and therefore identify with the Deaf community. This is similar to how people might describe themselves as “having a disability” versus “being disabled”.

If you’re like me and travel by train a lot (long-time CS4FN readers will be aware of my love of railway timetabling), you may have seen these relatively new British Sign Language (BSL) screens at various railway stations.

They work by automatically converting train departure information into BSL by stitching together pre-recorded videos of BSL signs. Pretty cool stuff! 
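A sketch of the stitching idea (with hypothetical clip file names; a real system would need a recorded clip for every station, time and platform):

```python
# Clips for the fixed parts of an announcement (hypothetical file names)
SIGN_CLIPS = {"train": "bsl_train.mp4", "to": "bsl_to.mp4",
              "platform": "bsl_platform.mp4"}

def departure_to_clips(destination, time, platform):
    """Turn one row of the departure board into an ordered list of
    pre-recorded sign clips, played back-to-back on the screen."""
    return [
        SIGN_CLIPS["train"],
        SIGN_CLIPS["to"],
        f"bsl_station_{destination.lower()}.mp4",   # one clip per station
        f"bsl_time_{time.replace(':', '')}.mp4",    # one clip per time
        SIGN_CLIPS["platform"],
        f"bsl_number_{platform}.mp4",               # one clip per number
    ]

print(departure_to_clips("Norwich", "10:30", 9))
```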

When I first saw these, though, there was one small thing that piqued my interest – if d/Deaf people can see the screen, why not just read the text? I was sure it wasn’t an oversight: Network Rail and train operators worked closely with d/Deaf charities and communities when designing the system. So, being a researcher in training, I decided to look into it.

A train information screen with sign language
Image by Daniel Gill

It turns out that there are several reasons.

There have been many years of research investigating reading comprehension in d/Deaf people compared to their hearing peers. In one 2015 paper, a cohort of d/Deaf children had significantly weaker reading comprehension skills than hearing children of both the same chronological age and the same reading age.

Although this gap does seem to close with age, some d/Deaf people may be far more comfortable and skilful using BSL to communicate and receive information. It should be emphasised that BSL is considered a separate language and is structured very differently to spoken and written English. As an example, take the statement:

“I’m on holiday next month.”

In BSL, you put the time first, followed by topic and then comment, so you’d end up with:

“next month – holiday – me”

As one could imagine, trying to read English (a second language for many d/Deaf people) with its wildly different sentence structure could be a challenge… especially as you’re rushing through the station looking for the correct platform for your train!
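If a sentence has already been split into its parts, the reordering itself is trivial to sketch (real BSL translation involves far more than reordering words, of course; this just illustrates the structural difference):

```python
def bsl_gloss(time, topic, comment):
    """BSL ordering: time first, then topic, then comment."""
    return " - ".join([time, topic, comment])

# "I'm on holiday next month" becomes:
print(bsl_gloss("next month", "holiday", "me"))  # next month - holiday - me
```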

Sometimes, as computer scientists, we’re encouraged to remove redundancies and make our systems simpler and easier to use. But something that appears redundant to one person could be extremely useful to another – so as we go on to create tools and applications, we need to make sure that all target users are involved in the design process.

Daniel Gill, Queen Mary University of London


The wrong trousers? Not any more!

A metal figure sitting on the floor head down
Image by kalhh from Pixabay

Inspired by the Wallace & Gromit film ‘The Wrong Trousers’, Jonathan Rossiter of the University of Bristol builds robotic trousers. We could all need them as we get older.

Think of a robot and you probably think of something metal: something solid and hard. But a new generation of robot researchers are exploring soft robotics: robots made of materials that are squishy. When it comes to wearable robots, being soft is obviously a plus. That is the idea behind Jonathan’s work. He is building trousers to help people stand and walk.

Being unable to get out of an armchair without help can be devastating to a person’s life. There are many conditions like arthritis and multiple sclerosis, never mind just plain old age, that make standing up difficult. It gets to us all eventually and having difficulty moving around makes life hard and can lead to isolation and loneliness. The less you move about, the harder it gets to do, because your muscles get weaker, so it becomes a vicious circle. Soft robotic trousers may be able to break the cycle.

We are used to the idea of walking sticks, frames, wheelchairs and mobility scooters to help people get around. Robotic clothes may be next. Early versions of Jonathan’s trousers include tubes like a string of sausages that, when pumped full of air, become more solid, shortening as they bulge out and so straightening the leg. Experiments have shown that inflating trousers fitted with them can make a robot wearing them stand. The problem is that you need to carry gas canisters around, and put up with the psshhht! sound whenever you stand!

The team have more futuristic (and quieter) ideas though. They are working on designs based on ‘electroactive polymers’. These are fabrics that change when electricity is applied. One group that can be made into trousers, a bit like lycra tights, silently shrinks when a current is applied: exactly what you need for robotic trousers. To make it work you need a computer control system that shrinks and expands them in the right places at the right time to move the leg wearing them. You also need to be able to store enough energy in a light enough way that the trousers can be used without frequent recharging.

It’s still early days, but one day they hope to build a working system that really can help older people stand. Jonathan promises he will eventually build the right trousers.

– Paul Curzon, Queen Mary University of London (from the archive)

More on …

The rise of the robots [PORTAL]



My first signs

Alexander Graham Bell was inspired by the deafness of his mother to develop new technologies to help. Lila Harrar, then a computer science student at Queen Mary, University of London, was also inspired by a deaf person to do something to make a difference. Her chance came when she had to think of something to do for her undergraduate project.

Sign language relief sculpture on a stone wall: “Life is beautiful, be happy and love each other”, by Czech sculptor Zuzana Čížková on Holečkova Street in Prague-Smíchov, by a school for the deaf
Image (cropped) ŠJů, Wikimedia Commons, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons

Her inspiration came from working with a deaf colleague in a part-time job on the shop floor at Harrods. The colleague often struggled to communicate with customers, so Lila decided to do something to encourage hearing as well as deaf people to learn sign language. She developed an interactive tutor program that teaches both deaf and non-deaf users Sign Language. Her software included games and quizzes along with the learning sections, and it caught the attention of the company Microbooks, who were so impressed that they decided to commercialise it. As Lila discovered, you need both creativity and logical thinking skills to do well at Computer Science – with both, together with a bit of business savvy, perhaps you could become the country’s next great innovator.

– Peter W. McOwan and Paul Curzon, Queen Mary University of London


Involving disabled people in the design of ICT tools and devices

by Jo Brodie, Queen Mary University of London

Image by Gerd Altmann from Pixabay (CROPPED)

The World Health Organisation currently estimates that around 1.3 billion people, or one in six people on Earth, “experience significant disability”. Designers creating devices and tools need to make sure that the products they develop can be used by as many people as possible, not just non-disabled people, so that everyone can benefit from them.

Disabled people can face lots of barriers in the workplace, including some that seem simple to address: problems using everyday ICT and other tech. While there are a lot of fantastic Assistive Technology (AT) products, unfortunately not all are suitable, and some are abandoned by disabled people because they don’t serve their needs.

One challenge is that some of the people who have been doing the designing might not have direct experience of disability themselves, so they are less able to think about their design from that perspective. Solutions can include making sure that disabled computer scientists and human-computer interaction researchers are part of the team of designers and creators in the first place, or making it easier for other disabled people to be involved at an early stage of design. This means that their experience and ideas can contribute to making the end product more relevant and useful for them and others. Alongside this there is education and advocacy – helping more young computer scientists, technologists and human-computer interaction designers to start thinking early about how their future products can be more inclusive.

An EPSRC project, “Inclusive Public Activities for Information and Communication Technologies”, has been looking at some practical ways to help. Run by Prof. Cathy Holloway and Dr. Maryam Bandukda and their wider team at UCL, it has established a panel of disabled academics and professionals who can act as ‘critical friends’ to researchers planning new projects. By co-creating a set of guidelines for researchers they are providing a useful resource, but it also means that disabled voices are heard at an early stage of the design process, so that projects start off in the right direction.

Prof. Holloway and Dr. Bandukda are based at the Global Disability Innovation Hub (GDI Hub) in the department of computer science at UCL. GDI Hub is a global leader in disability innovation and inclusion and has research reaching over 30 million people in 60 countries. The GDI Hub also educates people to increase awareness of disability, reduce stigma and lay the groundwork for more disability-aware designers to benefit people in the future with better products.

An activity that the UCL team ran in February 2024, for schools in East London, was a week-long inclusive ICT “Digital Skills and Technology Innovation” bootcamp. They invited students in Year 9 and above to learn about 3D printing, 3D modelling, laser cutting, AI and machine learning using Python, and augmented and virtual reality experiences, along with a chance to visit Google’s Accessible Discovery Centre and use their skills to “tackle real-world challenges”.

What are some examples of Assistive Technology?

Screen-reading software can help blind or visually impaired people by reading aloud the words on the page. This is something that can help sighted people too: your document can read itself to you while you do something else. The entire world of audio books exists for this reason! D/deaf people can take part more easily in Zoom conversations if text-to-caption software is available, so they can read what’s being said. That can also help those whose hearing is fine but who speak a different language and might miss some words. Similarly, you can dictate your clever ideas to your computer or device, which will type them for you. This can be helpful for someone with limited use of their hands, or just someone who’d rather talk than type – this might also explain the popularity of devices and tools like Alexa or Siri.

Web designers want to (and may need to*) make their websites accessible to all their visitors. You can help too – a simple thing that you can do is to add ALT text (alternative text) to images. If you ever share an image or gif on social media, it’s really helpful to describe what’s in the image for screen readers, so that people who can’t view it can still understand what you meant.

*Thanks to regulations adopted in 2018, the designers of public sector websites (e.g. government and local council websites where people pay their council tax or apply for benefits) must make sure that their pages meet certain accessibility standards, because “people may not have a choice when using a public sector website or mobile app, so it’s important they work for everyone. The people who need them the most are often the people who find them hardest to use”.
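If you build web pages yourself, a quick check for missing ALT text is easy to sketch using Python’s built-in HTML parser (a rough check, not a full accessibility audit):

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Report any <img> tag with a missing or empty alt attribute."""
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            if not attributes.get("alt"):
                print("Missing ALT text:", attributes.get("src", "?"))

checker = AltChecker()
checker.feed('<img src="tube_map.png">'
             '<img src="beck.jpg" alt="Harry Beck\'s Tube map">')
```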

More on …

Careers

Examples of computer science and disability-related jobs

Both of the jobs listed below are CLOSED and are shown for information only.

  • [CLOSED] Islington Council, Digital Accessibility Apprentice (f/t), £24k, closed 7 July
    • Are you interested in web design and do you want to help empower disabled people to become fully engaged within the community? This is a great opportunity to learn about the rapidly growing digital accessibility industry. Qualified and experienced digital accessibility specialists are sought after.
  • [CLOSED] Global Disability Innovation Hub, Communications and Engagement Officer, £32k, London / hybrid, closed 4 July 2024
    • This role is focused on maximising comms-based engagement across the GDI Hub’s portfolio, supporting GDI Hub’s growing outreach across project-based deliverables and organisational comms channels (e.g. social media, websites, content generation).


Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.
