Involving disabled people in the design of ICT tools and devices

by Jo Brodie, Queen Mary University of London

Image by Gerd Altmann from Pixabay (CROPPED)

The World Health Organisation currently estimates that around 1.3 billion people, or one in six people on Earth, “experience significant disability”. Designers creating devices and tools need to make sure that the products they develop can be used by as many people as possible, not just non-disabled people, so that everyone can benefit from them.

Disabled people can face lots of barriers in the workplace, including some that seem simple to address – problems using everyday ICT and other tech. While there are a lot of fantastic Assistive Technology (AT) products, unfortunately not all are suitable, and products that don’t serve disabled people’s needs end up being abandoned.

One challenge is that some of the people doing the designing might not have direct experience of disability themselves, so they are less able to think about their design from that perspective. Solutions can include making sure that disabled computer scientists and human-computer interaction researchers are part of the team of designers and creators in the first place, or making it easier for other disabled people to be involved at an early stage of design. This means that their experience and ideas can contribute to making the end product more relevant and useful for them and others. Alongside this there is education and advocacy – helping more young computer scientists, technologists and human-computer interaction designers to start thinking early about how their future products can be more inclusive.

An EPSRC project, “Inclusive Public Activities for Information and Communication Technologies”, has been looking at some practical ways to help. Run by Prof. Cathy Holloway, Dr. Maryam Bandukda and their wider team at UCL, the project has established a panel of disabled academics and professionals who can be ‘critical friends’ to researchers planning new projects. By co-creating a set of guidelines for researchers, the panel provides a useful resource, but it also means that disabled voices are heard at an early stage of the design process, so that projects start off in the right direction.

Prof. Holloway and Dr. Bandukda are based at the Global Disability Innovation Hub (GDI Hub) in the department of computer science at UCL. GDI Hub is a global leader in disability innovation and inclusion and has research reaching over 30 million people in 60 countries. The GDI Hub also educates people to increase awareness of disability, reduce stigma and lay the groundwork for more disability-aware designers to benefit people in the future with better products.

An activity that the UCL team ran in February 2024, for schools in East London, was a week-long inclusive ICT “Digital Skills and Technology Innovation” bootcamp. They invited students in Year 9 and above to learn about 3D printing, 3D modelling, laser cutting, AI and machine learning using Python, and augmented and virtual reality experiences, along with a chance to visit Google’s Accessible Discovery Centre and use their skills to “tackle real-world challenges”.

What are some examples of Assistive Technology?

Screen-reading software can help blind or visually impaired people by reading aloud the words on the page. This is something that can help sighted people too: your document can read itself to you while you do something else. The entire world of audio books exists for this reason! D/deaf people can take part more easily in Zoom conversations if live captioning (speech-to-text) software is available, so they can read what’s being said. That can also help those whose hearing is fine but who speak a different language and might miss some words. Similarly, you can dictate your clever ideas to your computer or device, which will type them for you. This can be helpful for someone with limited use of their hands, or just someone who’d rather talk than type – this might also explain the popularity of devices and tools like Alexa or Siri.

Web designers want to (and may need to*) make their websites accessible to all their visitors. You can help too – a simple thing that you can do is to add ALT Text (alternative text) to images. If you ever share an image or gif to social media it’s really helpful to describe what’s in the image for screen readers so that people who can’t view it can still understand what you meant.

*Thanks to regulations that were adopted in 2018, the designers of public sector websites (e.g. government and local council websites where people pay their council tax or apply for benefits) must make sure that their pages meet certain accessibility standards because “people may not have a choice when using a public sector website or mobile app, so it’s important they work for everyone. The people who need them the most are often the people who find them hardest to use”.

More on …

Careers

Examples of computer science and disability-related jobs

Both of the jobs listed below are CLOSED and are shown for information only.

  • [CLOSED] Islington Council, Digital Accessibility Apprentice (f/t), £24k, closed 7 July
    • Are you interested in web design and do you want to help empower disabled people to become fully engaged within the community? This is a great opportunity to learn about the rapidly growing digital accessibility industry. Qualified and experienced digital accessibility specialists are sought after.
  • [CLOSED] Global Disability Innovation Hub, Communications and Engagement Officer, £32k, London / hybrid, closed 4 July 2024
    • This role is focused on maximising comms-based engagement across the GDI Hub’s portfolio, supporting GDI Hub’s growing outreach across project-based deliverables and organisational comms channels (e.g. social media, websites, content generation).



Finding work experience, or a job in computer science

How to find a job. The letter O of the word How is replaced with the circular part of a cartoon magnifying glass and the letter O of the word Job is replaced with a cog or gearwheel.
Image by M. H. from Pixabay

We’re occasionally asked by school pupils, their parents and teachers about where young people can find out about work experience in something to do with computer science. We’ve put together some general information which we hope is helpful, and there’s also information further down the page that might be useful for people who’ve finished a computing degree and are wondering “What’s next?”.

Work experience for school students

(This section was originally published on our website for teachers – Teaching London Computing).

Supermarkets – not in the store but in the office, learning about inventory software used to manage stock for in-store shopping as well as online shopping (e.g. Ocado).

Shops – more generally pretty much every shop has an online presence and may want to display items for sale (perhaps also using software to handle payment).

Websites – someone who’s a blacksmith might not use a computer in their work directly, but the chances are they’d want to advertise their metal-flattening skills to a wider audience which is only really possible with a web presence.

Websites involve technical aspects (not necessarily Python types of things but certainly HTML and CSS / JavaScript) but also making websites accessible for users with visual impairments, e.g. labelling elements helpfully and remembering to add ALT TEXT for users of screenreaders. Technical skills are important but thinking about the end-user is super-important too, and often a skill that people pick up ‘on the job’ rather than being trained in (though that is changing).

Usability – making websites or physical products (e.g. home appliances, cameras, phones, printers, microwaves) easier to use by finding out how easily users can interact with them and considering options for improvement. For computing systems this involves HCI (human-computer interaction) and UX (user experience – e.g. how frustrating is a website?).

Transport – here in London we have buses with a GPS transponder that emits a signal which is picked up by sensors, co-ordinated and translated into real-time information about the whereabouts of various buses on the system. Third-party apps can also use some of this data to provide a service for people who want to know the quickest route to a particular place.

Council services – it’s possible to pay parking fines, council tax, utility bills and other things online. The programs involved here need to keep people’s private data secure as well.

Banks – are heavy users of ‘fintech’ (financial technology) and security systems, though that might preclude them taking on people in a work experience setting. Similarly GP surgeries have dedicated IT systems (such as EMIS) for handling confidential patient information and appointments. Even if they can’t take on tech work experience students they may have other work experience opportunities.

Places that offer (or have previously offered) work experience

  • ARM: Manchester, Sheffield, Cambridge
  • BT: Virtual work experience

Other resources

Indeed.com website
How to find work experience (Year 12 student guide)

TechDevJobs website
Our ‘jobs in computing’ resource (homepage) should give you an idea of the different sectors which employ all sorts of computer scientists to do all sorts of different things (see the list of jobs organised by sector). There are about 80 jobs there so far; it doesn’t cover everything though (that’s almost an impossible task!).

There are obvious computing-related jobs, such as a software company looking for a software developer, but there’s also a job for a lawyer-researcher (someone who is able to practise as a lawyer if necessary but is going to be doing research) working on Cloud Computing. There are all sorts of regulatory aspects to computing, some currently under consideration by the UK Government, covering data leaks, privacy, appropriateness of use, how securely information is stored, and what penalties there are for misuse.

Possibly a local law firm is doing some work in this area and might be open to offering work experience.

Other resources for recent graduates

The TechDev Jobs website (listed above in Other resources) is a great place to start. The jobs ‘advertised’ are usually closed but the collection lists several organisations that are currently employing people in the field of computer science (in the widest sense) and we are adding more all the time. Finding out about jobs is also about finding out about different sectors, some of which you might not have heard of yet – but they are all potential sources of jobs for people with computing skills.

Recent graduates or soon-to-graduate students may be able to help newer students get to grips with things in the Year 1 modules. Sometimes it’s not the computer science and programming that they or the lecturers need assistance with but really practical stuff like logging on and finding the relevant resources.

Education / schools: the UK Government has a ‘Get into Teaching’ website with a page on Becoming a computing teacher. You can also find teacher vacancies at the TES website – here are the jobs currently available for secondary teachers, and you can filter by type of role and location.

The Find A Job website from DWP (https://findajob.dwp.gov.uk/search) can be filtered by location and keyword too. Put in a keyword and see what pops up, then filter by salary etc.

Further study: if you’re interested in continuing your studies you might consider a Master’s degree (MSc) in computer science, and see the panel below for information on studying for a PhD, for which you are usually paid.

The Prospects website has a page called What can I do with a computer science degree?, which should give you an idea of options and help you widen your search.

The Entry Level Games site isn’t a jobs board but if you’re interested in games design then it gives you a really helpful overview of some of the typical roles, what’s needed to do those roles and information from people who’ve done those jobs.

If you are interested in creating assistive technology or making computing more inclusive you might be interested in the work of the Global Disability Innovation Hub.

Networking is also a good idea to build up contacts and hear about different roles, some people find LinkedIn useful as an online version of networking and as a great place to hear about newly-opened vacancies. You can also take part in local hackathons, or volunteer at code clubs etc. This sort of thing is useful for your CV too.

There are probably organisations near you and it’s fairly likely that they’ll be using computers in one way or another, and you might be useful to them. Open up Google Maps and navigate to where you’re living, then zoom in and see what organisations are nearby. Make a note of them and if they have a vacancies page save that link in a document so that you can visit it every so often and see if a relevant new job has been added. Or contact them speculatively with your CV.

If you have a Gmail account you can set up Google Alerts. Whenever a new web page that satisfies your search criteria is published (e.g. a new job vacancy), you’ll get a daily email with a summary of what’s been added and the link to find out more. This is a way of bringing the job adverts to you!


Photogrammetry for fun, preservation and research

Digitally stitching together 2D photographs to visualise the 3D world

Composite image of one green glass bottle made from three photographs. Image by Jo Brodie

Imagine you’re the costume designer for a major new film about a historical event that happened 400 years ago. You’d need to dress the actors so that they look like they’ve come from that time (no digital watches!) and might want to take inspiration from some historical clothing that’s being preserved in a museum. If you live near the museum, and can get permission to see (or even handle) the material, that makes it a bit easier, but perhaps the ideal item is in another country or too fragile for handling.

This is where 3D imaging can help. Photographs are nice but don’t let you get a sense of what an object is like when viewed from different angles, and they don’t really give a sense of texture. Video can be helpful, but you don’t get to control the view. One way around that is to take lots of photographs, from different angles, then ‘stitch’ them together to form a three dimensional (3D) image that can be moved around on a computer screen – an example of this is photogrammetry.

In the (2D) example above I’ve manually combined three overlapping close-up photos of a green glass bottle, to show what the full size bottle actually looks like. Photogrammetry is a more advanced version (but does more or less the same thing) which uses computer software to line up the points that overlap and can produce a more faithful 3D representation of the object.
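
If you’re curious what “lining up the points that overlap” might look like in code, here is a minimal sketch of the very first step – finding matching feature points between two overlapping photos – using the free OpenCV library for Python. This isn’t what Polycam or professional photogrammetry packages actually run (they add camera calibration, triangulation and much more), and the filenames here are made up for illustration.

```python
import cv2

# Two overlapping photos of the same object (hypothetical filenames).
img1 = cv2.imread("bottle_left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("bottle_right.jpg", cv2.IMREAD_GRAYSCALE)

# Find distinctive keypoints in each photo and describe them.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match the descriptors: each match is a point seen in both photos.
# Full photogrammetry software triangulates thousands of such matches
# into 3D positions to build the model you can rotate on screen.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "candidate point correspondences found")
```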

In the media below you can see a looping gif of the glass bottle being rotated first in one direction and then the other. This video is the result of a 3D ‘scan’ made from only 29 photographs using the free software app Polycam. With more photographs you could end up with a more impressive result. You can interact with the original scan here – you can zoom in and turn the bottle to view it from any angle you choose.

A looping gif of the 3D Polycam file being rotated one way then the other. Image by Jo Brodie

You might walk around your object and take many tens of images from slightly different viewpoints with your camera. Once your photogrammetry software has lined the images up on a computer you can share the result and then someone else would be able to walk around the same object – but virtually!

Photogrammetry is being used by hobbyists (it’s fun!) but is also being used in lots of different ways by researchers. One example is the field of ‘restoration ecology’, in particular monitoring damage to coral reefs over time, but also monitoring to see if particular reef recovery strategies are successful. Reef researchers can use several cameras at once to take lots of overlapping photographs from which they can then create three dimensional maps of the area. A new project recently funded by NERC* called “Photogrammetry as a tool to improve reef restoration” will investigate the technique further.

Photogrammetry is also being used to preserve our understanding of delicate historic items such as Stuart embroideries at The Holburne Museum in Bath. These beautiful craft pieces were made in the 1600s using another type of 3D technique. ‘Stumpwork’ or ‘raised embroidery’ used threads and other materials to create pieces with a layered three dimensional effect. Here’s an example of someone playing a lute to a peacock and a deer.

“Satin worked with silk, chenille threads, purl, shells, wood, beads, mica, bird feathers, bone or coral; detached buttonhole variations, long-and-short, satin, couching, and knot stitches; wood frame, mirror glass, plush”, 1600s. Photo CC0 from Metropolitan Museum of Art uploaded by Pharos on Wikimedia.

A project funded by the AHRC* (“An investigation of 3D technologies applied to historic textiles for improved understanding, conservation and engagement”) is investigating a variety of 3D tools, including photogrammetry, to recreate digital copies of the Stuart embroideries so that people can experience a version of them without the glass cases that the real ones are safely stored in.

Using photogrammetry (and other 3D techniques) means that many more people can enjoy, interact with and learn about all sorts of things, without having to travel or damage delicate fabrics, or corals.

*NERC (Natural Environment Research Council) and AHRC (Arts and Humanities Research Council) are two organisations that fund academic research in universities. They are part of UKRI (UK Research & Innovation), the wider umbrella group that includes several research funding bodies.

Other uses of photogrammetry

Examples from cultural heritage and ecology are highlighted in this post, but photogrammetry is also used in interactive games (particularly virtual reality), engineering, crime scene forensics and the film industry – Mad Max: Fury Road, for example, used the technique to create a number of its visual effects. Hobbyists also create 3D versions (called ‘3D assets’) of all sorts of objects and sell these to games designers to include in their games for players to interact with.

Jo Brodie, Queen Mary University of London

More on …

Careers

This is a past example of a job advert in this area (since closed) for a photogrammetry role in virtual reality.

Also see our collection of Computer Science & Research posts.



Music & Computing: TouchKeys: getting more from your keyboard

Image by Elisa from Pixabay

Even if you’re the best keyboard player in the world the sound you can get from any one key is pretty much limited to ‘loud’ or ‘soft’, ‘short’ or ‘long’ depending on how hard and how quickly you press it. The note’s sound can’t be changed once the key is pressed. At best, on a piano, you can make it last longer using the sustain pedal. A violinist, on the other hand, can move their finger on the string while it’s still being played, changing its pitch to give a nice vibrato effect. Wouldn’t it be fun if keyboard players could do similar things?

Andrew McPherson and other digital music researchers at QMUL and Drexel University came up with a way to give keyboard performers more room to express themselves like this. TouchKeys is a thin plastic coating, overlaid on each key of a keyboard, but barely noticeable to the keyboard player. The coating contains sensors and electronics that can change the sound when a key is touched. The TouchKeys’ electronics connect to the keyboard’s own controller and so change the sounds already being made, expanding the keyboard’s range. This opens up a whole world of new sonic possibilities to a performer.

The sensors can follow the position and movement of your fingers and respond appropriately in real-time, extending the range of sounds you can get from your keyboard. By wiggling your finger from side-to-side on a key you can make a vibrato effect, or change the note’s pitch completely by sliding your finger up and down the key. The technology is similar to a phone’s touchscreen where different movements (‘gestures’) make different things happen. An advantage of the system is that it can easily be applied to a keyboard a musician already knows how to play, so they’ll find it easy to start to use without having to make big changes to their style of playing.
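
To give a flavour of how a finger movement might be turned into a pitch change, here is a tiny Python sketch using the mido MIDI library. It is not TouchKeys’ actual code (that runs on dedicated sensor hardware); the mapping function and the numbers in it are illustrative assumptions, and it assumes a MIDI output device is available.

```python
import mido

def position_to_pitchbend(position, centre=0.5):
    """Map a finger's position along a key (0.0 to 1.0) to a MIDI
    pitch-bend value; MIDI pitch bend runs from -8192 to +8191."""
    offset = max(-1.0, min(1.0, (position - centre) * 2))
    return int(offset * 8191)

out = mido.open_output()                                  # default MIDI port
out.send(mido.Message('note_on', note=60, velocity=80))   # press middle C
for pos in (0.5, 0.6, 0.7, 0.8):                          # slide finger up...
    out.send(mido.Message('pitchwheel', pitch=position_to_pitchbend(pos)))
out.send(mido.Message('note_off', note=60))               # release the key
```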

The researchers wanted to get TouchKeys out of the lab and into the hands of more musicians, so they teamed up with members of London’s Music Hackspace community, who run courses in electronic music, to create some initial versions for sale. Early adopters were able to choose either a DIY kit to add to their own keyboard, wire up and start to play, or a ready-to-play keyboard with the TouchKeys system already installed.

The result is that lots of musicians are already using TouchKeys to get more from their keyboard in exciting new ways.

Jo Brodie and Paul Curzon, Queen Mary University of London


Watch …

  • Making technology to make music
    • Earlier this year Professor Andrew McPherson gave his inaugural lecture (a public lecture given by an academic who has been promoted) at Imperial College London where he is continuing his research. Watch his lecture.




Joyce Weisbecker: was a teenager the first indie games developer?


by Paul Curzon, Queen Mary University of London

Video games were once considered to be only of interest to boys, and the early games industry was dominated by men. Despite that, a teenage girl, Joyce Weisbecker, was one of the pioneers of commercial game development.

Originally, video games were seen as toys for boys. Gradually it was realised that there was a market for female game players too, if only suitably interesting games were developed, so the games companies eventually started to tailor games for them. That also meant, very late in the day, they started to employ women as games programmers. Now it is a totally normal thing to do. However, women were also there from the start, designing games. The first female commercial games programmer (and possibly the first independent developer) was Joyce Weisbecker. Working as an independent contractor she wrote her first games for sale in 1976 for the RCA Studio II games console, which was released in January 1977.

RCA Studio II video games console
Image by WikimediaImages from Pixabay

Joyce was only a teenager when she started to learn to program computers and wrote her first games. She learnt on a computer that her engineer father designed and built at home called FRED (Flexible Recreational and Educational Device). He worked for RCA (originally the Radio Corporation of America), one of the major electronics, radio, TV and record companies of the 20th century. The company diversified their business into computers and Joyce’s father designed them for RCA (as well as at home for a hobby). He also invented a programming language called CHIP-8 that was used to program the RCA computers. This all meant Joyce was in a position to learn CHIP-8 and then to write programs for RCA computers including their new RCA Studio II games console before the machine was released, as a post-high school summer job.

The code for two games that she wrote in 1976, called Snake Race and Jackpot, was included in the manual for an RCA microcomputer called the COSMAC VIP, and she also wrote more programs for it the following year. These computers came in kit form for the buyer to build themselves. Her programs were example programs included for the owner to type in and then play once they had built the machine. Including them meant their new computer could do something immediately.

She also wrote the first game that she was paid for that summer of 1976. It was for the RCA Studio II games console, and it earned her $250 – well over $1000 in today’s money, so worth having for a teenager who would soon be going on to college. It was a quiz program called TV School House I. It pitted two people against each other, answering questions on topics such as maths, history and geography, with two levels of difficulty. Questions were read from question booklets and whoever typed in the multiple choice answer number the fastest got the points for a question, with more points the faster they were. There is currently a craze for apps that augment physical games and this was a very early version of the genre.

Speedway screen from Wikimedia

She quickly followed it with racing and chase games, Speedway and Tag, though as displays were still very limited then, with only tiny screens, the graphics of all these games were very, very simple – e.g. racing rectangles around a blocky, rectangular racing track.

Unfortunately, the RCA games console itself was a commercial failure as it couldn’t compete with consoles like the Atari 2600, so RCA soon ended production. Joyce, meanwhile, retired from the games industry, still a teenager, ultimately becoming a radar signal processing engineer.

While games like Pong had come much earlier, the Atari 2600, which is credited with launching the first video game boom, was released in 1977, and its version of Space Invaders, one of the most influential video games of all time, followed in 1980. Joyce really was at the forefront of commercial games design. As a result her papers related to games programming, including letters and program listings, are now archived in the Strong National Museum of Play in New York.


Happy #WorldEmojiDay 2024 – here’s an emoji film quiz & some computer science history

Emoji! 💻 😁

World Emoji Day is celebrated on the 17th of July every year (why?) and so we’ve put together a ‘Can you guess the film from the emoji’ quiz and added some emoji-themed articles about computer science and the history of computing.

  1. An emoji film quiz
  2. Emoji accessibility, and a ‘text version’ of the quiz
  3. Computer science articles about emoji

Emoji are small digital pictures that behave like text – you can slot them easily into sentences (you don’t have to ‘insert an image’ from a file or worry about the picture pushing the text out of the way). You can even make them bigger or smaller with the text (🎬 – compare the one in the section title below). People use them as a quick way of sharing a thought or emotion, or adding a comment like a thumbs up, so they’re (sort of) a form of data representation. Even so, communication with emoji can be just as easily misunderstood as communication using words alone. Different age groups might read the same emoji and understand something quite different from it. What do you think 🙂 (‘slightly smiling face’ emoji) means? What do people older or younger than you think it means? Lots of people think it means “I’m quite happy about this” but others use it in a more sarcastic way.
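
You can see the ‘behaves like text’ idea for yourself in a few lines of Python: each emoji is just a Unicode character with a numeric code point, stored as ordinary bytes like any other letter.

```python
laptop = "💻"
print(len(laptop))              # 1 - a single character, just like 'A'
print(hex(ord(laptop)))         # 0x1f4bb - its Unicode code point
print(laptop.encode("utf-8"))   # b'\xf0\x9f\x92\xbb' - 4 bytes when stored
print("World Emoji Day 🎉".replace("🎉", "🥳"))  # string tools just work
```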

1. An emoji film quiz 🎬

You can view the quiz online or download and print from Word or PDF versions. If you’re in a classroom with a projector the PowerPoint file is the one you want.

More Computational Thinking Puzzles

2. Emoji accessibility, and a text version of the quiz

We’ve included a text version for blind or visually impaired people which can either be read out by someone or by a screen reader. Use the ‘Text quiz’ files in Word or PDF above.

More generally, when people share photographs and other images on social media it’s helpful if they add some information about the image to the ‘Alt Text’ (alternative text) box. This tells people who can’t easily see the image what’s in the picture. Screenreaders will also tell people what the emojis are in a tweet or text message, but if you use too many… it might sound like this 😬.

3. Computer science articles about emoji

This next article is about the history of computing and the development of the graphical icons for apps that started life being drawn on gridded paper by Susan Kare. You could print some graph / grid paper and design your own!

A copy of this post can also be found as a permanent page at https://cs4fn.blog/emoji/



Art Touch and Talk Tour Tech

A sculpture of a head and shoulders, heavily textured with a network of lines and points
Image by NoName_13 from Pixabay

What could a blind or partially-sighted person get from a visit to an art gallery? Quite a lot, if the art gallery puts its mind to it. Even more if it makes use of technology. So much so that we may all end up wanting the enhanced experience.

The best art galleries provide special tours for blind and partially-sighted people. One kind involves a guide or curator explaining paintings and other works of art in depth. It is not exactly like a normal guided tour that might focus on the history or importance of a painting. The best will give an overview of the history and importance whilst also giving a detailed description of the whole picture as well as the detail, emphasising how each part was painted. They might, for example, describe the brush strokes and technique as well as what is depicted. They help the viewer create a really detailed mental model of the painting.

One visually-impaired guide who now gives such tours at galleries such as Tate Britain, Lisa Squirrel, has argued that these tours give a much deeper and richer understanding of the art than a normal tour and certainly more than someone just looking at the pictures and reading the text as they wander around. Lisa studied Art History at university and before visiting a gallery herself reads lots and lots about the works and artists she will visit. She found that guided tours by sighted experts using guided hand movements in front of a painting helped her build really good internal models of the works in her mind. Combined with her extensive knowledge from reading, she wasn’t building just a picture of the image depicted but of the way it was painted too. She gained a deep understanding of the works she explored including what was special about them.

The other kind of tour art galleries provide is a touching tour. It involves blind and partially-sighted visitors being allowed to touch selected works of art as part of a guided tour where a curator also explains the art. Blind art lover Georgina Kleege has suggested that touch tours give a much richer experience than a normal tour, and should also be put on for all for this reason. It is again about more than just feeling the shape and so working out its form (that is, “seeing” what a sighted person would take in at a glance). It is about gaining a whole different sensory experience of the work: its texture, for example, not just a lesser version of what it looks like.

How might technology help? Well, the company NeuroDigital Technologies has developed a haptic glove system for the purpose. Haptic gloves are gloves that contain vibration pads that stimulate the skin of the wearer in different, very fine ways so as to fool the wearer’s brain into thinking it is touching things of different shapes and textures. Their system has over a thousand different vibration patterns to simulate different feelings of touching surfaces. The gloves also contain sensors that determine their precise position in space as the person moves their hands around.

The team behind the idea scanned several works of art using very accurate laser scanners that build up a 3D picture of the thing being scanned. From this they created a 3D model of the work. This then allowed a person wearing the gloves to feel as though they were touching the actual sculpture, feeling all the detail. More than that, the team could augment the experience to give enhanced feelings in places in shadow, for example, or to emphasise different parts of the work.

A similar system could be applied to historical artifacts too: allowing people to “feel”, not just see, the Rosetta Stone, for example. Perhaps it could also be applied to paintings to allow a person to feel the brush strokes in a way that could just not otherwise be done. This would give an enhanced version of the experience Lisa felt was so useful, of having her hand guided in front of a painting and the brush strokes and areas being described. Different colours might also be coded with different vibration patterns, allowing a series of different enhanced touch tours of a painting, first exploring its colours, then its brush strokes, and so on.

What about talking tours? Can technology help there? AIs can already describe pictures, but early versions at least were trained on the descriptions people have given to images on the Internet: “a black cat sitting on top of the TV looking cute”, or, for the Mona Lisa, “a young woman staring at you”. That in itself wouldn’t cut it. Neither would training the AI on the normal brief descriptions on the gallery walls next to works of art. However, art books and websites are full of detail and more recent AIs can give very detailed descriptions of art works if asked. These descriptions include what the picture looks like overall, the components, colours, brushstrokes and composition, symbolism, historical context and more (at least for famous paintings). With specific training from curators and art historians the AIs will only get better. What is still missing for a blind person, though, from the kind of experience Lisa has when experiencing a painting with a guide, is the link to the actual picture in space – having the guide move her hand in front of the painting as the parts are described. However, all that is needed to fill that gap is to combine a chat-based AI with a haptic glove system (and provide a way to link descriptions to spatial locations on the image). Then, the descriptions can be linked to positions of a hand moving in space in front of a virtual version of the picture. Combine that with the kind of system already invented to help blind people navigate, where vibrations on a walking stick indicate directions and times to turn, and the gloves can then not only give haptic sensations of the picture or sculpture, but also guide the person’s movement over it.

Whether you have such an experience in a gallery, in front of the work of art, or in your own front room, blind and partially sighted people could soon be getting much better experiences of art than sighted people. At which point, as Georgina Kleege suggested for normal touch tours, everyone else will likely want the full “blind” experience too.

Paul Curzon, Queen Mary University of London


Accessible Technology in the Voting Booth


by Daniel Gill, Queen Mary University of London

Voting at an election: people depositing their voting slips
AI-generated image by Vilius Kukanauskas from Pixabay

On Thursday 4th July 2024, millions of adults around the UK went to their local polling station to vote for their representative in the House of Commons. However, for the 18% of adults who have a disability, this can be considerably more challenging. The right of voters to vote independently and secretly is extremely important, yet many blind and partially sighted people cannot do so without assistance. Thankfully this is changing, and this election was hailed as the most accessible yet. So how does technology enable blind and partially sighted people to vote independently?

There are two main challenges when it comes to voting for blind and partially sighted people. The names of candidates are listed down the left-hand side of the ballot paper, so, firstly, a voter needs to find the row of the person they want to vote for. Then, secondly, they need to put a cross in the box to the right. The image below gives an example of what the ballot paper looks like:

A mock up of a "CS4FN" voting slip with candidates
HOPPER, Grace
TURING, Alan Mathison
BENIOFF, Paul Anthony
LOVELACE, Ada

To solve the first problem, we can turn to audio. An audio device can be used to play a recording of the candidates as they appear on the ballot paper. Some charities also provide a phone number to call before the election, with a person who can read this list out. This is great, of course, but it does rely on the voter remembering the position of the person that they want to vote for. A blind or partially sighted voter is also allowed to use a text reader device, or perhaps a smart phone with a special app, to read out what is on the ballot paper in the booth.

Lots of blind and partially sighted people are able to read braille: a way of representing English words using bumps on the paper (read more about braille in this CS4FN article). One might think that this would solve all the problems but, in fact, there is a requirement that all the ballot papers for each constituency have a standard design to ensure they can be counted efficiently and without error.
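
As an aside, braille is itself a neat piece of data representation that you can play with in code. The sketch below, in Python, uses the Unicode braille block (which starts at U+2800, with one bit per raised dot) and the standard dot patterns for a few letters; the helper function is just for illustration.

```python
def braille(*dots):
    """Build a Unicode braille character: dots 1-8 each set one bit
    above the block's starting code point, U+2800."""
    code = 0x2800
    for dot in dots:
        code |= 1 << (dot - 1)
    return chr(code)

# Standard grade 1 dot patterns for a few letters.
letters = {"a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5)}
print("".join(braille(*letters[ch]) for ch in "bead"))  # prints ⠃⠑⠁⠙
```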

The solution to the second problem is far more practical: the excitingly named tactile voting device. This is a simple plastic device which is placed on top of the ballot paper. Each of the boxes on the ballot paper (as shown to the right of the image above) has a flap above it with its position number embossed on it. When the voter finds the number of the person they want to vote for, they simply turn over the flap, revealing a perfectly aligned square guide over where the box is. The voter can then use that guide to draw the cross in the box.

This whole process is considerably more complicated than it is for those without disabilities – and you might be thinking, “there must be an easier way!” Introducing the McGonagle Reader (MGR)! This device combines both solutions into one device that can be used in the voting booth. Like the tactile voting device, it has flaps which cover each of the boxes for drawing the cross. But next to those are buttons which, when pressed, read out the information of the candidate for that row. This can save lots of time, removing the need to remember the position of each candidate – a voter can simply go down the page, find who they want to vote for and turn over the correct flap.

When people have the right to vote, it is especially important to ensure that they have the ability to use that right. This means that no matter the cost or the logistics, everyone should have access to the tools they need to vote for their representative. Progress is now being made but a lot more work still needs to be done.

To help ensure this happens in future, the RNIB want to know the experiences of those who voted or didn’t vote in the UK 2024 general election – see the survey linked from the RNIB page here.


The basics of Quantum Computing: Qubits


by Paul Curzon, Queen Mary University of London

An eye looking at two blue spheres
Image by Gerd Altmann from Pixabay

Reality is weird, very weird. The first thing you have to do to understand the reality of reality is to drop your common sense. Only then can you start to understand it, especially when it comes to the quantum world of the very small. Our brains evolved to naturally make sense of human scale things, rather than the very large or very small. Accept the weirdness, though, and there are lots of opportunities, especially for computer scientists. That is why it is now an exciting area of research with theoretical physicists, engineers and computer scientists working together to make progress.

Not common sense

Imagine you are a person trying to understand the world a thousand years ago. Clearly the world MUST be flat. It looks flat and suggesting you are standing on a sphere is just ridiculous. People living on the other side would obviously fall off if it was a sphere! Except the world is a sphere and people in Australia (or Europe if you are Australian) don’t fall off. Common sense doesn’t work (until you understand how gravity works). That’s why science is so powerful. Common sense also doesn’t work for understanding the reality of the very small. This branch of physics, quantum physics, is very important for computer scientists not only because the building blocks of our computers are becoming ever smaller, but because when you get so small that the laws of quantum physics matter, computers can work in new, exciting ways, ways that are far better than our current computers.

Bits and qubits

Let’s start with binary, the fundamental way we represent information in a computer. The basic building block of information is the bit. A bit is something that can have one of two states. It can be a 1 or a 0. That means a bit can store some information. These two states of 1 and 0 might be physically represented in lots of ways, such as a high voltage stored versus a low voltage stored, or a pulse of light versus no pulse of light, or someone’s hand up versus their hand down. If you have two bits then you can store one of 4 pieces of information in them because of the possible combinations (00, 01, 10 and 11); with three bits you can store 8 different things. Those collections of bits can then stand for different numbers (that is all binary is), and by building big circuits from simple basic circuits that do simple manipulations on bits (i.e., logic gates) we can do ever more complex calculations with them and ultimately everything our current computers are capable of.
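
You can check that doubling rule for yourself: each extra bit doubles the number of combinations, so n bits give 2^n states. A few lines of Python make the pattern obvious.

```python
from itertools import product

# n bits can be set in 2**n different ways.
for n in (1, 2, 3):
    states = ["".join(bits) for bits in product("01", repeat=n)]
    print(n, "bit(s):", len(states), "states ->", " ".join(states))

# 1 bit(s): 2 states -> 0 1
# 2 bit(s): 4 states -> 00 01 10 11
# 3 bit(s): 8 states -> 000 001 010 011 100 101 110 111
```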

The spin of an electron

A pedestrian light showing green/walk
Image by Hans from Pixabay

Bits can be represented by anything that has 2 states. So suppose you want to represent your bits using something really small like electrons. Electrons have a property called spin. You can imagine them as spinning balls of charge (though they are not exactly spinning like a spinning ball … electrons aren’t balls and they aren’t actually rotating in the normal sense – remember reality is weird so these analogies are just there to help give an idea, but it is never as simple as that). Now, electrons can “spin” in exactly one of two ways, called spin up and spin down. There are only two possible kinds of spin because in the quantum world things come in discrete amounts, not continuous ones. They jump from one state to another, like a pedestrian (walk/don’t walk) traffic light going from red to green instantly, rather than gradually changing between them (such as the way a car gradually speeds up to the speed limit). An electron is either spin up or spin down, like the pedestrian lights, never something in between.

Now, it is possible to set the spin of an electron and to measure whether its spin is up or down, so an electron can, in principle, be used to store a binary bit given it has two states (spin up for 1 and spin down for 0, say). However, this is where weirdness really comes in. It turns out that it is possible for an electron to be both spin up and spin down at once as long as the spin is not measured, due to the way the quantum world works. A quantum pedestrian light doing a similar thing would have only one light that could be red or green. However, it would be both red and green at the same time UNTIL someone looked at it to see which state it was in (so measured the state). At that point it would become, and the person would only see, one colour or the other. This is called quantum superposition. To understand this it is better to think about reality being about probabilities not certainties. Imagine that the electron is like a tossed coin that is still in the air. It has a probability of being Heads and of being Tails. Only when it lands (so is measured) is it actually one or the other. An electron is combining both possibilities until the spin is measured.

The quantum tortoise and the hare

You may have the quaint idea that reality is made of sub-atomic particles (like electrons or protons) that are solid little bits of matter, very ball-like, existing in one place at any given time. Actually they aren’t like that at all. It is better to think of particles as just having probabilities of being at one place or another – they are kind of smeared across space, everywhere at once, like a ripple pattern across a pond, just with different probabilities of actually being in any place when their position is measured. When you do measure their position you find they definitely are in one place or another, appearing to be a particle again, not a wave.

It may help to think of this in terms of watching slow moving tortoises and fast moving hares passing you as they race. The position of a slow moving tortoise you see wander by is easy to call: it has a very high probability of being in a particular place. The position of a fast moving hare that whizzes past is much harder to call: it has a far lower probability of being in a given place at any time. However, without looking you can’t tell. You just know the probabilities. Of course with particles it isn’t exactly like that, just as an electron’s spin isn’t exactly like a ball spinning. It is only when a particle’s position is actually checked (i.e. measured) that it is definitely at a known place and that smeared probability collapses to certainty. A quantum tortoise and hare racing past would be in all possible positions round the race track, just with different probabilities. Suppose you only checked (so measured their position) at the finish line. It is only because of that measurement that the probabilities of where they were through the race turn into specific, measured and so known, positions, with a quantum hare or a quantum tortoise having actually won.

This weirdness is linked to the fact that the fundamental components that reality is made up of are both particles in given places (think of an electron or a proton) and waves passing through space (think of light or ripples in a pond) at the same time. So light behaves like a particle and like a wave. Similarly, an electron does too. 

Electron spin as Qubits

Other properties of sub-atomic particles act in the same way as a particle’s position being smeared across lots of possibilities at once. This includes the spin of an electron. Until it is measured, an electron is superposed in both a spin up and spin down state at the same time (spinning both ways at once!): there is just a probability that the electron is in each state, it isn’t actually definitely in either. That means as long as you do not measure its spin, the electron as a device storing a piece of information is storing both 1 and 0 at the same time, each with a given probability. As such it behaves differently to an actual bit which must be either 1 or 0. We therefore call such an electron-based storage a qubit rather than a bit. 
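
We can mimic this behaviour (though not the quantum speed-up!) with an ordinary simulation. In the minimal Python sketch below, a qubit is just a pair of numbers called amplitudes, and ‘measuring’ it picks 0 or 1 at random with probability given by the squared amplitudes – the coin landing.

```python
import random

def measure(alpha, beta):
    """Collapse a qubit in the superposition alpha|0> + beta|1>:
    returns 0 with probability |alpha|^2, else 1."""
    return 0 if random.random() < abs(alpha) ** 2 else 1

alpha = beta = 2 ** -0.5      # equal superposition: both outcomes 50/50
counts = [0, 0]
for _ in range(10_000):
    counts[measure(alpha, beta)] += 1
print(counts)                  # roughly [5000, 5000], like tossed coins
```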

In theory, we can do computations on qubits, manipulating and combining them in simple ways using the quantum equivalent of logic gates. Once we have created quantum logic gates to do simple manipulations, we can combine those gates into bigger and bigger circuits that do more complicated quantum calculations. As long as the states of the qubits are not measured, all the states through the circuit are superposed with particular probabilities. Unlike a normal circuit which does one series of computations based on its inputs, these quantum circuits are in effect doing all possible computations of that circuit at once. It is only when we measure the answer at the output, say, that the qubits in the circuit are fixed at either 1 or 0 and an actual result is delivered. This is like the tortoise and hare being everywhere (whatever racing strategy they followed) with some probability until we measure the result at the finish line (the output of the race). Because all states existed at once, lots of computation happens simultaneously, which means that such a circuit can, in theory, and with the right algorithms, deliver answers far, far faster than a conventional circuit could possibly do, given the latter can only do one computation at a time.
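
In the mathematics, a quantum logic gate is just a small matrix applied to a qubit’s amplitudes. Here is an illustrative simulation on a normal computer (so no real quantum advantage) of two standard gates, the NOT-like X gate and the Hadamard gate, which creates an equal superposition; it assumes the numpy library.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                # the state |0>: definitely 0
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])                 # quantum NOT: swaps |0> and |1>
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)   # Hadamard: makes a superposition

print(X @ ket0)      # [0. 1.]  -> the state |1>
state = H @ ket0
print(state)         # [0.707 0.707] -> |0> and |1> at once
print(state ** 2)    # [0.5 0.5] -> the measurement probabilities
```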

From theory to practice

That is the theory, and it is gradually being realised in practice. Qubits can be created and their values changed. Various quantum logic gates have also now been invented and so small quantum computers do now exist. Quantum algorithms to do certain tasks quickly have been invented. Since the original ideas were mooted, progress has been relatively slow, but now that the ideas have been shown to work in practice, more and more is being achieved, making it an exciting time to be doing quantum computing research.

More on …

  • Quantum Computing (to come)


Gutta-Percha: how a tree launched a global telecom revolution


by Paul Curzon, Queen Mary University of London

(from the archive)

Rubber tree being tapped
Image  from Pixabay

Obscure plants and animals can turn out to be surprisingly useful. The current mass extinction of animal and plant species needs to be stopped for lots of reasons but an obvious one is that we risk losing forever materials that could transform our lives. Gutta-percha is a good example from the 19th century. It provided a new material with uses ranging from electronic engineering to bioengineering. It even transformed the game of golf. Perhaps its greatest claim to fame though is that it kick-started the worldwide telecoms boom of the 19th century that ultimately led to the creation of global networks including the Internet.

Gutta-percha trees are native to South East Asia and Australia. Their sap is similar to rubber. It’s actually a natural polymer: a kind of material made of gigantic molecules built up of smaller structures that are repeated over and over again. Plastics, amber, silk, rubber and wool are all made of polymers. Though very similar to rubber, Gutta-percha, unlike rubber, is biologically inert – it doesn’t react with biological materials – and that was the key to its usefulness. It was discovered by Western explorers in the middle of the 17th century, though local Malay people already knew about it and used it.

Chomping wires

So how did it play a part in creating the first global telecom network? Back in the 19th century, the telegraph was revolutionising the way people communicated. It meant messages could be sent across the country in minutes. The trouble was when the messages got to the coast they ground to a halt. Messages could only travel across an ocean as fast as a boat could take them. They could whiz from one end of America to the other in minutes but would then take several weeks to make it to Europe. The solution was to lay down undersea telegraph cables. However, to carry electricity an undersea cable needs to be protected and no one had succeeded in doing that. Rubber had been tried as an insulating layer for the cables but marine animals and plants just attacked it, and once the cable was open to the sea it became useless for sending signals. Gutta-percha on the other hand is a great insulator too but it doesn’t degrade in sea-water.

As it was the only known material that worked, soon all marine cables used Gutta-percha and as a result the British businessmen who controlled its supply became very rich. Soon telegraph cables were being laid everywhere – the original global telecoms network. To start with the network carried telegraph signals, then it was upgraded to voice, and now it is based on fibre-optics – the backbone of the Internet.

Rotting teeth

Gutta-percha has also been used by dentists – just as marine animals don’t attack it, it doesn’t degrade inside the human body either. That, together with it being easy to shape, makes it perfect for dental work. For example, it is used in root canal operations. The pulp and other tissue deep inside a rotting tooth are removed by the dentist leaving an empty chamber. Gutta-percha turns out to be an ideal material to fill the space, though medical engineers and materials scientists are trying to develop synthetic materials like Gutta-percha that have even better properties for use in medicine and dentistry.

Dimpled balls

That just leaves golf! Early golf balls were filled with feathers. In 1848 Robert Adams Paterson came up with the idea of making them out of Gutta-percha since it was much easier to make than the laborious process of sewing balls of feathers. It was quickly realised, if by accident, that after they had been used a few times they would fly further. It turned out this was due to the dimples that were made in the balls each time they were hit. The dimples improved the aerodynamics of the ball. That’s why modern golf balls are intentionally covered in dimples.

So gutta-percha has revolutionised global communications, changed the game of golf and even helped people with rotting teeth. Not bad for a tree.

This blog is funded through EPSRC grant EP/W033615/1.