Sounding out a Sensory Garden

A girl in a garden holding an orange flower
Image by Joel Santana (Joelfotos) from Pixabay

When construction of the Norman Jackson Children’s Centre in London started, the local council commissioned artists to design a sensory garden full of wonderful sights and sounds so the 3 to 5-year-old children using the centre could have fun playing there. A sand pit, water feature, metal tree and willow pods all seemed pretty easy to install and wouldn’t take much looking after, but what about sound? How do you bring interesting sound to an outdoor space and make it fun for young children? Nela Brown from Queen Mary was given the job.

After thinking about the problem for a while she came up with an idea for an interactive sound installation. She wanted to entertain any children visiting the centre, but she especially wanted it to benefit children with poor language skills. She wanted it to be informal but have educational and social value, even though it was outside.

You name it, they press it!

Somewhere around the age of 18 months, children become fascinated with pressing buttons. Toys, TV remotes, light switches, phones: you name it, they want to press it. Given the chance to press all the buttons at the same time or in quick succession, that is exactly what young children will do. They will also get bored pretty quickly and move on to something else if their toy just makes lots of noise with little variety or interest.

Nela had to use her experience and understanding of the way children play and learn to work out a suitable ‘user interface’ for the installation. That is, she had to design how the children would interact with it and experience the effects. The user interface had to look interesting enough to catch the attention of children playing in the garden in the first place. It also, obviously, had to be easy to use. As part of her preparation, Nela watched children playing, both to get ideas and to get a feel for how they learn and play.

Sit on it!

She decided to use a panel of buttons, built into a seat, that triggered sounds. One important way to make any gadget easier to use is for it to give ‘real-time feedback’. That is, it should do something, like play a sound or change colour, as soon as you press any button, so you know immediately that the button press did something. To achieve this, and make them even more interesting, her buttons would both change colour and play sound when pressed. She also decided the panel would need to be programmed so children wouldn’t do what they usually do: press all of the buttons at once, get bored and walk away.

Nela recorded traditional stories, poems and nursery rhymes with parents and children from the local area, and composed music to fit around the stories. She also researched different online sound libraries to find interesting sound effects and soundscapes. Of the three buttons, one played the soundscapes, another played the sound effects and the last played a mixture of stories, poems and nursery rhymes. Nela hoped the variety would make it all more interesting for the children and so keep their attention longer, and that by including stories and nursery rhymes she would be helping with their language skills.

Can we build it?

Coming up with the ideas was only part of the problem. It then had to be built. It had to be weatherproof, vandal-proof and allow easy access to any parts that might need replacing. As the installation had to avoid disturbing people in the rest of the garden, furniture designer Joe Mellows made two enclosed seats out of cedar wood cladding, each big enough for two children, which could house the installation and keep the sound where only the children playing with it would hear it. A speaker was built into the ceiling and two control panels made of aluminium were built into the side. The bottom panel had a special sensor which could ‘sense’ when a child was sitting in (or standing on) the seat. It was an ultrasonic range finder – a bit like bat senses, using echoes from high-frequency sounds humans can’t hear to work out where objects are. The sensor had to be covered with stainless steel mesh so the children couldn’t poke their fingers through it and injure themselves or break the sensor. The top panel had three buttons that changed colour and played sound files when pressed.
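The maths an ultrasonic range finder relies on is simple enough to sketch. Sound travels at roughly 343 metres per second in air, so the sensor times how long a ping takes to echo back and halves the round trip. This toy version (the distances and threshold are invented for illustration, not taken from the actual installation) shows how that time could be turned into a “seat occupied” decision:

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at room temperature

def echo_to_distance(echo_time_s: float) -> float:
    """Convert a round-trip echo time into a distance in metres.

    The ping travels out to the object and back, so halve the round trip.
    """
    return SPEED_OF_SOUND * echo_time_s / 2

def seat_occupied(echo_time_s: float, empty_seat_distance_m: float = 0.9) -> bool:
    """Guess someone is sitting down if the echo comes back from
    noticeably nearer than the empty seat would reflect it."""
    return echo_to_distance(echo_time_s) < empty_seat_distance_m - 0.1

# An echo arriving after 2 milliseconds means something about 0.34 m away:
# much nearer than the empty seat, so probably a child sitting there.
print(echo_to_distance(0.002))
print(seat_occupied(0.002))
```

The same halving-the-echo idea is what bats, and the installation’s sensor, do continuously in hardware.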

Interaction designer Gabriel Scapusio did the wiring and the programming. Data from the sensors and buttons was sent via a cable, along with speaker cables, through a pipe underground to a computer and amplifier housed in the Children’s Centre. The computer controlling the music and colour changes was programmed using Max/MSP, an interactive visual programming environment for music, audio and media that has been used for years by a wide range of people: performers, composers, artists, scientists, teachers and students.

The panels in each seat were connected to an Arduino, an open-source electronics prototyping platform. It is intended for artists, designers, hobbyists and anyone interested in creating interactive objects or environments, so is based on flexible, easy-to-use hardware and software.

The next job was to make sure it really did work as planned. The volume from the speakers was tested and adjusted according to the approximate head position of young children so it was audible enough for comfortable listening without interfering with the children playing in the rest of the garden. Finally it was crunch time. Would the children actually like it and play with it?

The sensory garden did make a difference – the children had lots of fun playing in it and within a few days of the opening one boy with poor language skills was not just seen playing with the installation but listening to lots of stories he wouldn’t otherwise have heard. Nela’s installation has lots of potential to help children like this by provoking and then rewarding their curiosity with something interesting that also has a useful purpose. It is a great example of how, by combining creative and technical skills, projects like these can really make a difference to a child’s life.

the CS4FN team (from the archive)

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Tony Stockman: Sonification

Two different coloured wave patterns superimposed on one another on a black background with random dots like a starscape.
Image by Gerd Altmann from Pixabay

Tony Stockman, who was blind from birth, was a Senior Lecturer at QMUL until his retirement. A leading academic in the field of sonification of data, turning data into sound, he eventually became the President of the “International Community for Auditory Display”: the community of researchers working in this area.

Traditionally, we put a lot of effort into finding the best ways to visualise data so that people can easily see the patterns in it. This is an idea that Florence Nightingale, of lady-of-the-lamp fame, pioneered with Crimean War data about why soldiers were dying. Data visualisation is considered so important that it is taught in primary schools, where we all learn about pie charts, histograms and the like. You can make a career out of data visualisation, working in the media creating visualisations for news programmes and newspapers, for example, and finding a good visualisation is massively important when working as a researcher, to help people understand your results. In Big Data, a good visualisation can help you gain new insights into what is really happening in your data. Those who can come up with good visualisations can become stars, because they can make such a difference (like Florence Nightingale, in fact).

Many people, of course, Tony included, cannot see, or are partially sighted, so visualisation is not much help! Tony therefore worked on sonifying data instead, exploring how you can map data onto sounds rather than imagery in a way that does the same thing: makes the patterns obvious and understandable.

His work in this area started with his PhD, where he was exploring how breathing affects changes in heart rate. He needed a way both to check for noise in the recordings and to present the results so that he could analyse and understand them. So he invented a simple way to turn data into sound, for example by mapping frequencies in the data onto sound frequencies. By listening he could find places in his data where interesting things were happening and then investigate the actual numbers. He did this out of necessity, just to make it possible to do research, but decades later discovered there was by then a whole research community working on uses of sonification and good ways to do it.
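Tony’s basic trick, mapping numbers in the data to audible pitches, can be sketched in a few lines. This is my own toy illustration of the idea, not his actual method: each value is linearly rescaled into a comfortable hearing range, so a spike in the data becomes a sudden jump in pitch that the ear picks out immediately.

```python
def sonify(data, low_hz=220.0, high_hz=1760.0):
    """Map each data value linearly onto a pitch between low_hz and high_hz.

    The smallest value becomes the lowest pitch and the largest the highest,
    so patterns and outliers in the data become audible as a melody.
    """
    lo, hi = min(data), max(data)
    if hi == lo:  # flat data: everything maps to the middle pitch
        return [(low_hz + high_hz) / 2] * len(data)
    return [low_hz + (x - lo) / (hi - lo) * (high_hz - low_hz) for x in data]

# Made-up heartbeat intervals in seconds, with one anomaly
heart_intervals = [0.8, 0.82, 0.81, 1.4, 0.8]
pitches = sonify(heart_intervals)
# The anomalous 1.4 maps to the top pitch and would stand out as a squeak
```

Feeding the resulting frequencies to any tone generator turns a column of numbers into something you can listen to for anomalies.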

He went on to explore how sonification could be used to give overviews of data for both sighted and non-sighted people. We are very good at spotting patterns in sound – that is all music is, after all – and abnormalities in a sound pattern can stand out even more than when visualised.

Another area of his sonification research involved developing auditory interfaces, for example to allow people to hear diagrams. One of the most famous and successful data visualisations is the London Tube Map designed by Harry Beck, who is now famous for the way it made the network so easy to understand, using abstract nodes and lines that ignored real distances. Tony’s team explored ways to present similar node-and-line diagrams, what computer scientists call graphs. After all, it is all well and good having screen readers to read text, but it’s not a lot of good if all a screen reader can tell you, by reading the ALT text, is that you have the Tube Map in front of you. This kind of graph is used in all sorts of everyday situations, and graphs are especially important if you want to get around on public transport.

There is still a lot more to be done before media that involves imagery as well as text is fully accessible, but Tony showed that it is definitely possible to do better. He also showed throughout his career that being blind did not have to hold him back from being an outstanding computer scientist as well as a leading researcher, even if he did have to innovate for himself from the start to make it possible.


Sign Language for Train Departures

BSL for CS4FN
Image by Daniel Gill

This week (5-11th May) is Deaf Awareness Week, an opportunity to celebrate d/Deaf* people, communities, and culture, and to advocate for equal access to communication and services for the d/Deaf and hard of hearing. A recent step forward is that sign language has started appearing on railway stations.

*”deaf” with a lower-case “d” refers to the audiological experience of deafness, or those who might have become deafened or hard of hearing in later life, so might identify closer to the hearing community. “Deaf” with an upper-case “D” refers to the cultural experience of deafness, or those who might have been born Deaf and therefore identify with the Deaf community. This is similar to how people might describe themselves as “having a disability” versus “being disabled”.

If you’re like me and travel by train a lot (long-time CS4FN readers will be aware of my love of railway timetabling), you may have seen these relatively new British Sign Language (BSL) screens at various railway stations.

They work by automatically converting train departure information into BSL by stitching together pre-recorded videos of BSL signs. Pretty cool stuff! 
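At its core, a system like this can be thought of as a lookup-and-concatenate pipeline: break the departure information into sign glosses, look each one up in a library of pre-recorded clips, and play them back to back. Here is a toy sketch of that idea; the gloss names and clip filenames are invented for illustration, not taken from the real system.

```python
# Hypothetical library mapping sign glosses to pre-recorded video clips
CLIP_LIBRARY = {
    "TRAIN": "train.mp4",
    "PLATFORM": "platform.mp4",
    "2": "two.mp4",
    "LONDON": "london.mp4",
    "DELAYED": "delayed.mp4",
}

def stitch_signs(glosses):
    """Turn a sequence of sign glosses into an ordered playlist of clips.

    Raises KeyError for any sign missing from the library, rather than
    silently dropping information a d/Deaf traveller needs.
    """
    return [CLIP_LIBRARY[g] for g in glosses]

playlist = stitch_signs(["TRAIN", "LONDON", "PLATFORM", "2"])
# playlist: ["train.mp4", "london.mp4", "platform.mp4", "two.mp4"]
```

The hard part, of course, is not the concatenation but recording a large enough clip library and choosing the right gloss order, which is where the BSL grammar discussed below comes in.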

When I first saw these, though, there was one small thing that piqued my interest – if d/Deaf people can see the screen, why not just read the text? I was sure it wasn’t an oversight: Network Rail and train operators worked closely with d/Deaf charities and communities when designing the system. So, being a researcher in training, I decided to look into it.

A train information screen with sign language
Image by Daniel Gill

It turns out that there are several lines of reasoning behind the answer.

There have been many years of research investigating reading comprehension in d/Deaf people compared to their hearing peers. In a 2015 paper, a cohort of d/Deaf children had significantly weaker reading comprehension skills than hearing children of both the same chronological age and the same reading age.

Although this gap does seem to close with age, some d/Deaf people may be far more comfortable and skilful using BSL to communicate and receive information. It should be emphasised that BSL is considered a separate language and is structured very differently to spoken and written English. As an example, take the statement:

“I’m on holiday next month.”

In BSL, you put the time first, followed by topic and then comment, so you’d end up with:

“next month – holiday – me”

As one could imagine, trying to read English (a second language for many d/Deaf people) with its wildly different sentence structure could be a challenge… especially as you’re rushing through the station looking for the correct platform for your train!
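The reordering in the example above can be sketched as a tiny function. This is a deliberately crude toy of my own, real BSL involves far more than shuffling words (facial expression, placement, classifiers and so on), but it shows why translation is more than a word-for-word swap:

```python
def english_to_bsl_gloss(time, topic, comment):
    """Reorder sentence parts into BSL's time-topic-comment order.

    A toy illustration only: real BSL translation is much richer
    than reordering the words of an English sentence.
    """
    return " – ".join([time, topic, comment])

# "I'm on holiday next month" becomes:
gloss = english_to_bsl_gloss("next month", "holiday", "me")
# gloss: "next month – holiday – me"
```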

Sometimes, as computer scientists, we’re encouraged to remove redundancies and make our systems simpler and easier to use. But something that appears redundant to one person could be extremely useful to another – so as we go on to create tools and applications, we need to make sure that all target users are involved in the design process.

Daniel Gill, Queen Mary University of London

More on…

Magazines …

Front cover of CS4FN issue 29 - Diversity in Computing


Robert Weitbrecht and his telecommunication device for the deaf

Robert Weitbrecht was born deaf. He went on to become an award-winning electronics scientist who invented the acoustic coupler (or modem) and a teletypewriter (or teleprinter) system allowing deaf people to communicate via a normal phone call.

A telephone modem: the telephone slots into a teletypewriter, here with a screen rather than a printer.
Image by Juan Russo from Pixabay

If you grew up in the UK in the 1970s with any interest in football, then you may think of teleprinters fondly. They were the way you found out the football results at the final whistle, watching for your team’s result on the final score TV programme. Reporters at football grounds across the country typed in the results, which then appeared to the nation one at a time as a teleprinter slowly typed them at the bottom of the screen.

Teleprinters were a natural, if gradual, development from the telegraph and Morse code. Over time a different, simpler binary-based code was developed. Then, by attaching a keyboard and creating a device to convert key presses into the binary code to be sent down the wire, you could type messages instead of tapping out a code. Anyone could now do it, so typists replaced Morse code specialists. The teleprinter was born. In parallel, of course, the telephone was invented, allowing people to talk to each other by converting the sound of someone speaking into an electrical signal that was then converted back into sound at the other end. Then you didn’t even need to type, never mind tap, to communicate over long distances. Telephone lines took over. However, typed messages still had their uses, as the football results example showed.

Another advantage of the teletypewriter/teleprinter approach over the phone was that it could be used by deaf people. However, teleprinters originally worked over separate networks, as the phone network was built to carry analogue voice data, and the companies controlling phone networks across the world generally didn’t allow others to mess with their hardware. You couldn’t replace the phone handsets with your own device that created electrical pulses to send directly over the phone line. Phone lines were for talking over, via one of the phone company’s handsets. However, phone lines were universal, so if you were deaf you really needed to be able to communicate over the phone, not over some special network that no one else had. But how could that work, at a time when you couldn’t replace the phone handset with a different device?

Robert Weitbrecht solved the problem after being prompted to do so by a deaf orthodontist, James Marsters. He created an acoustic coupler – a device that converted between sound and electrical signals – that could be used with a normal phone. It suppressed echoes, which improved the sound quality. Using old, discarded teletypewriters he created a usable system. Slot the phone mouthpiece and earpiece into the device, and the machine “talked” over the phone in an R2D2-like language of beeps to other machines like it. It turned the electrical signals from a teletypewriter into beeps that could be sent down a phone line via its mouthpiece. It also decoded beeps received via the phone earpiece back into the electrical form needed by the teleprinter. You typed at one end, and what you typed came out on the teleprinter at the other (and vice versa). Deaf and hard of hearing people could now communicate with each other over a normal phone line and normal phones! The idea of a Telecommunications Device for the Deaf that worked with normal phones was born. However, such devices were still not strictly legal in the US, so James Marsters and others lobbied Washington to allow them.
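That “language of beeps” is an early example of what is now called frequency-shift keying (FSK): each binary digit is sent as a short burst of one of two tones, which survive the trip through a telephone mouthpiece and earpiece where raw electrical pulses would not. A sketch of the idea (the two frequencies here are illustrative choices, not necessarily the exact ones Weitbrecht’s modem used):

```python
MARK_HZ = 1400   # tone meaning binary 1 (illustrative frequency)
SPACE_HZ = 1800  # tone meaning binary 0 (illustrative frequency)

def fsk_encode(bits):
    """Turn a string of bits into the sequence of tones to send down the line."""
    return [MARK_HZ if b == "1" else SPACE_HZ for b in bits]

def fsk_decode(tones):
    """Turn received tones back into bits: the job of the far-end coupler."""
    return "".join("1" if t == MARK_HZ else "0" for t in tones)

tones = fsk_encode("10110")
assert fsk_decode(tones) == "10110"  # round trip recovers the typed message
```

A real coupler also has to generate and detect the tones as audio and cope with line noise, but the encode/decode mapping is the heart of it.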

The idea (and legalisation) of acoustic couplers then inspired others to develop similar modems for other purposes, and in particular to allow computers to communicate via the telephone network using dial-up modems. You no longer needed special physical networks for computers to link to each other: they could just talk over the phone! Dial-up bulletin boards were an early application, where you could dial up a computer and leave messages that others could dial up and read via their computers… and from that idea ultimately emerged chat rooms, social networks and the myriad other ways we now do group communication by typing.

The first ever (long distance) phone call between two deaf people (Robert Weitbrecht and James Marsters) using a teletypewriter / teleprinter was:

“Are you printing now? Let’s quit for now and gloat over the success.”

Yes, let’s.

– Paul Curzon, Queen Mary University of London


The wrong trousers? Not any more!

A metal figure sitting on the floor head down
Image by kalhh from Pixabay

Inspired by the Wallace & Gromit film ‘The Wrong Trousers’, Jonathan Rossiter of the University of Bristol builds robotic trousers. We could all need them as we get older.

Think of a robot and you probably think of something metal: something solid and hard. But a new generation of robot researchers are exploring soft robotics: robots made of materials that are squishy. When it comes to wearable robots, being soft is obviously a plus. That is the idea behind Jonathan’s work. He is building trousers to help people stand and walk.

Being unable to get out of an armchair without help can be devastating to a person’s life. There are many conditions like arthritis and multiple sclerosis, never mind just plain old age, that make standing up difficult. It gets to us all eventually and having difficulty moving around makes life hard and can lead to isolation and loneliness. The less you move about, the harder it gets to do, because your muscles get weaker, so it becomes a vicious circle. Soft robotic trousers may be able to break the cycle.

We are used to the idea of walking sticks, frames, wheelchairs and mobility scooters to help people get around. Robotic clothes may be next. Early versions of Jonathan’s trousers include tubes like a string of sausages that, when pumped full of air, become more solid, shortening as they bulge out and so straightening the leg. Experiments have shown that inflating trousers fitted with them can make a robot wearing them stand. The problem is that you need to carry gas canisters around, and put up with the psshhht! sound whenever you stand!

The team have more futuristic (and quieter) ideas though. They are working on designs based on ‘electroactive polymers’. These are fabrics that change when electricity is applied. One group that can be made into trousers, a bit like lycra tights, silently shrinks with an electric current: exactly what you need for robotic trousers. To make it work you need a computer control system that shrinks and expands them in the right places at the right time to move the leg wearing them. You also need to be able to store enough energy in a light enough way that the trousers can be used without frequent recharging.

It’s still early days, but one day they hope to build a working system that really can help older people stand. Jonathan promises he will eventually build the right trousers.

– Paul Curzon, Queen Mary University of London (from the archive)

More on …

The rise of the robots [PORTAL]



Turn Right in Tenejapa

Designing software that is inclusive for global markets is easy. All you have to do is get an AI to translate everything in the interface into multiple languages… or perhaps not: to do it properly is harder than that! Not everyone thinks like you do.

Coloured arrows turning and pointing in lots of different directions on a curved surface
Image by Gerd Altmann from Pixabay

Suppose you are the successful designer of a satellite navigation system. You’ve made lots of money selling it in the UK and the US and are now ready to take on the world. You want to be inclusive. It should be natural and easy to use by all. You therefore aim to produce versions for every known language. It should be easy, shouldn’t it? The basic system is fine. It can use satellite signals to work out where it is. You already have maps of everywhere, based on Google Earth, that you have been selling to the English speakers. It can work out routes and give perfectly good directions just as the user needs them – like “Turn left 200 metres ahead”. It is already based on Unicode, the international standard for storing characters, so can cope with characters from all languages. All you need to do now is get a team of translators to come up with the equivalents of the small number of phrases used by the device (which, of course, will also involve switching units from, say, metres to yards and the like, but that is easy for a computer) and add a language selection mechanism. You have thought of everything. Simple…

Not so simple, actually. You may need more than just translators, and you may need more than just to change the words. As linguists have discovered, for example, a third of known languages have no concept of left and right. Since language helps determine the way we think, that also suggests the people who speak those languages don’t use the concepts. “Turn right” is meaningless. It has no equivalent.

So how do such people give directions or otherwise describe positions? Well, it turns out many use a method that for a long time some linguists suggested would never occur. Experiments have also shown that not only do they talk that way, they may also think that way.

Take Tzeltal. It is spoken widely in Mexico. A dialect spoken by about 15,000 people in the indigenous community of Tenejapa has been studied closely by Stephen Levinson and Penelope Brown. It is a large area roughly covering one slope of a mountainous region. The language has no general notion of left or right. Unlike European languages, where we refer to directions based on the way we are facing (known as a relative frame of reference), Tzeltal directions use what is known as an absolute frame of reference. It is as though its speakers have a compass in their heads and do the equivalent of referring to North, South, East and West all the time. Rather than “The cup is to the left of the teapot”, they might say the equivalent of “The cup is North of the teapot”. How did this system arise? Well, they don’t actually refer to North and South directly, but to something more like uphill and downhill, even when away from the mountainside: they subconsciously keep track of where uphill would be. So they are saying something more like “The cup is on the uphill side of the teapot”.

In Tenejapa they think differently about direction too

Experiments have suggested they think differently too. Show Europeans a series of objects arranged on a table so they “point” to the left, turn the people through 180 degrees and ask them to arrange the same objects on the table now in front of them, and they will generally put them “pointing” to their left again. In similar experiments, native Tzeltal speakers tended to put them “pointing” to their right (still pointing uphill, or whatever the original absolute direction was). Similar things apply when they make gestures. It’s not just the words they use that are different: it is the way they internally represent the world.

So back to the drawing board with the navigation system. If you really want it to be completely natural for all, then for each language you need more than just translators. You need linguists who understand the way people think and speak about directions in each language. Then you will have to do more than just change the words the system outputs: you will have to recode the navigation system to work the way its users think. A natural system for Tzeltal speakers would need to keep track of the Tenejapan uphill and give directions relative to that.
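The recoding needed can be sketched: instead of comparing the next road’s bearing with the direction of travel (“turn left”), compare it with a fixed reference bearing such as the local uphill direction. This toy version is my own illustration, with invented bearings and phrasing, not a real navigation system:

```python
def relative_direction(heading_deg, road_bearing_deg):
    """European-style instruction, relative to the direction of travel."""
    turn = (road_bearing_deg - heading_deg) % 360
    if turn < 45 or turn > 315:
        return "straight on"
    return "turn left" if turn > 180 else "turn right"

def absolute_direction(uphill_bearing_deg, road_bearing_deg):
    """Tenejapan-style instruction, relative to a fixed uphill bearing.

    The reference never changes as the traveller turns: only the slope matters.
    """
    offset = (road_bearing_deg - uphill_bearing_deg) % 360
    if offset < 45 or offset > 315:
        return "go uphill"
    if 135 < offset < 225:
        return "go downhill"
    return "go across the slope"

# Same junction, two frames of reference:
print(relative_direction(0, 90))   # depends on which way you are facing
print(absolute_direction(90, 90))  # depends only on where uphill is
```

Notice the relative version needs the traveller’s current heading while the absolute one does not: that is exactly the difference between the two ways of thinking.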

It isn’t just directions, of course; there are many ways that our languages and cultures lead us to think and act differently. Design metaphors are also used a lot in interactive systems, but they only work if they fit the users’ culture. For example, things are often ordered left to right as that is the way we read… except who is “we” there? Not everyone reads left to right!

Writing software for international markets isn’t as easy as it seems. You have to have good knowledge not just of local languages but also of differences in culture and deep differences in the way different people see the world… If you want to be an international success then you will be better at it if you work in a way that shows you understand and respect those from elsewhere.

by Paul Curzon, Queen Mary University of London, adapted from the archive


Designing an interactive prayer mat

Successful interactive systems design is often based on detecting a need for which really good solutions do not yet exist, then coming up with a realistic solution others haven’t thought of. The real key is then having the technical and design skill to actually build it, as well as the perseverance to go through lots of rounds of prototyping to get it right. Even then it is still a long haul, needing different people and business skills to end up with a successful product. Kamal Ali showed how it’s done with the development of My Salah Mat, an interactive prayer mat to help young children learn to pray.

A child in prayer bowing low to God in the direction of Mecca
Image by Samer Chidiac from Pixabay

He realised there was a need while watching his 4-year-old struggling to get his feet and hands, forehead and nose in the right place to pray: correctly bowing low to God in the direction of Mecca. Instead, he kept lying on his tummy. Kamal’s first thought was to try to buy something that would help.

He searched for something suitable – perhaps a mat with the positions marked in some child-friendly way – and was surprised when he could find nothing. Thinking it was a good idea anyway, and with a background in product design, he set about creating a Photoshop prototype himself. One of the advantages of prototyping is that it encourages “design-by-doing”, and just in doing that he had new ideas: children need help with the words of prayers too, so why not write them on the mat in child-friendly ways? From there, the next step was realising the mat could be interactive, with buttons to press so it could read out instructions. After all, young children may struggle with reading themselves: it is important to really know your users and what will and will not work for them!

As he was already running a company, he knew how to get a physical prototype made, so after working on the idea with a friend he created the first one. From there, there were lots more rounds of prototyping, to get the look and feel right for young kids, for example, and to ensure it would fill their need really, really well.

He also focussed on one clear group, young children, and designed for their need. Once that design was successful, the company then developed a very different design based on the same idea for adults/reverts. That is an important interaction design lesson. Different groups of potential users may need different designs, and trying to design one product for everyone may not end up working for anyone. Find a specific group and design really well for them!

In the process of creating the design Kamal started to wonder why he was doing it. He realised it was not to make money – he was really thinking of it as a social venture. It was not about profit but all about doing social good: as he has said:

“I finally realised that my motivation was to create a high quality product that could help children learn how to pray Salah. Most importantly, children would want to pray and interact with the different aspects of Salah. This was my true motivation and the most important thing to me.”

Great interactive system product design takes inspiration, skill and a lot of perseverance, but the real key is to be able to identify a real unfulfilled need and come up with realistic solutions that both fill the need and that people want. That is not just about having an idea; it is about doing rounds and rounds of prototyping and trial and error with the people who will be the users, to get the design right. If you do get it right, you can do all sorts of good.

by Paul Curzon, Queen Mary University of London.


My first signs

Alexander Graham Bell was inspired by the deafness of his mother to develop new technologies to help. Lila Harrar, then a computer science student at Queen Mary University of London, was also inspired by a deaf person to do something to make a difference. Her chance came when she had to think of something to do for her undergraduate project.

Sign language relief sculpture on a stone wall: “Life is beautiful, be happy and love each other”, by Czech sculptor Zuzana Čížková on Holečkova Street in PragueSmíchov, by a school for the deaf
Image (cropped) ŠJů, Wikimedia Commons, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons

Her inspiration came from working with a deaf colleague in a part-time job on the shop floor at Harrods. The colleague often struggled to communicate with customers, so Lila decided to do something to encourage hearing as well as deaf people to learn Sign Language. She developed an interactive tutor program that teaches both deaf and non-deaf users Sign Language. Her software included games and quizzes along with the learning sections – and she caught the attention of the company Microbooks, who were so impressed that they decided to commercialise it. As Lila discovered, you need both creativity and logical thinking skills to do well at Computer Science – with both, together with a bit of business savvy, perhaps you could become the country’s next great innovator.

– Peter W. McOwan and Paul Curzon, Queen Mary University of London



Working in Computer Science: An Autistic Perspective (Part 2)

by Daniel Gill, Queen Mary University of London

In Part 1, we spoke to Stephen Parry about his experiences of working in computer science as an autistic person. In this second part, we discuss his move from that stressful working environment to teaching A-Level computer science, and how rewarding he has found teaching as a career.

Following a tough experience at his last workplace, Stephen decided he needed a change. He used this as a prompt to start thinking about alternatives:

“[When] things aren’t working out, you need to take a step back and work out what the problem is before it becomes really serious. I still hadn’t had a diagnosis by that point, so things probably would have gone very differently if I had, but I took a step back after that job. I was fed up of being stressed, trying to help people [who] have already got far too much money make more money, and then being told that I was being paid too much. That was kind of my experience from my last employer. And so, I decided that I wanted to get stressed for something worthwhile instead: my mum had been a teacher, so I’d always had it in mind as a possibility.”

Stephen did, of course, have some financial reservations.

“I’d always thought it was financially too much of a step down, which a lot of people in the computer science industry will find out. I did take pretty much a 50% pay cut to become a trainee teacher: in fact, worse than that. But it’s amazing when you want to do something, what differences that makes! And there’s plenty of people out there that will sacrifice a salary to start their own business, and all the power to them. But people don’t think [like this] when they’re thinking about becoming a teacher, for example, which I think is wrong. Yes, teachers should be better paid than they are, but they’re never going to be as well paid as programmers or team leaders or whatever in industry. You shouldn’t expect that to be the case, because we’re public servants at the end of the day, and we’re here for the job as much as we are for the money. We want our roof over our head, but we’re not looking to get mega rich. We’re there to make a difference.”

While considering this change of profession, Stephen reflected on his existing skills, and whether they fit the role of teaching. With support from his wife and a DWP (Department for Work and Pensions) work coach, he was reminded of his ability to “explain technical stuff to [people] in a language [they] could understand.”

Stephen had the opportunity to get his first experience of teaching as a classroom volunteer. Alongside a qualified teacher, he was able to lead a lesson – which he found particularly exciting:

“It was a bit like being on drugs. It was exhilarating. I sort of sat there thinking, you know, this is something I really want to do.”

It was around this time that Stephen got his autism diagnosis. For autistic people, receiving a diagnosis can bring a lot of mixed emotions. For some, it brings a huge sense of relief – finally understanding who they are, and how that has shaped their actions and behaviour throughout their life. For others it can come as a shock [EXTERNAL]. For Stephen, the news meant reconsidering his choice of a career in teaching:

“I had to stop and think, because, when you get your diagnosis for the first time as an adult or as an older person anyway, it does make you stop and think about who you are. It does somewhat challenge your sense of self.”

“It kind of turns your world a bit on its head. So, it did knock me a fair bit. It did knock my sense of self. But then I began to sort of put pieces together and realise just what an impact it had on my working life up until that point. And then the question came across, can I still do the job? Am I going to be able to teach? Is it really an appropriate course of action to take? I didn’t get the answer straight away, but certainly over the months and the years, I came to the conclusion it was a bit like when I talk to students who say, ‘should I do computer science?’ And I say to them, ‘well, can you program?’ ‘Yes.’ ‘Yes, you do need to do computer science.’ It’s not just you can if you want to – it’s a ‘you should do CS.’ It’s the same thing if you’re on the spectrum, or you’re in another minority, a significant minority like that, where you’re able to engage with a teaching role: you should do.”

Stephen did go on to complete teacher training, and has now worked as an A-Level and GCSE teacher for 15 years. His time in industry still pays off, however, as he is able to enlighten future computer science students about the workplace:

“Well, you know the experiences I’ve had as a person in industry, where else are the students going to be exposed to that second-hand? Hopefully they’ll be exposed to it first-hand, but, if I can give them a leg up, and an introduction to that, being forewarned and forearmed and all that, then that’s what should happen. 

“I do spend a chunk of my teaching explaining what it’s like working in industry: explaining the difficulties of dealing with management; (1) when you think you know better, you might not know better – you don’t know yet; (2) if you do, keep your mouth shut until the problem occurs, then offer a positive and constructive solution. Hopefully they won’t say ‘why didn’t you say something sooner?’ If they do, just say, ‘Well, I wasn’t sure it was my place to, I’m only new.’”

Teaching is famously a very rewarding career path, and this is no different for Stephen. In our discussion, he outlined a few things that he enjoyed about teaching:

“It’s [a] situation where what you do, lives on. If I drop dead tomorrow, all that stuff that I learned about; how different procedure calls work or whatever, could potentially just disappear into the ether. But because I’ve shared it with all my students, they will hopefully make use of it, and it will carry on. And it’s a way of having a legacy, which I think we all want, to a certain extent.”

“Young people nowadays, particularly those of us on the spectrum, but it applies to all, the world does everything possible at the moment to destroy most young people’s self-esteem. Really, really knock people flat. Society is set up that way. Our social media is set up that way. Our traditional media is set up that way. It’s all about making people feel pretty useless, pretty rubbish in the hope, in some cases, of selling them something that will make them feel better, which never does, or in other cases, just make someone else feel good by making someone else feel small. It’s kind of the more the darker side of humanity coming out that teaching is an opportunity to counter that. If you can make a young person feel good about themselves; if you can help them conquer something that they’re not able to do; if you could help them realise that it doesn’t matter if they can’t, they’re still just as important and wonderful and valuable as a human being.”

“The extracurricular activities that I do: ‘Exploring the Christian faith’ here at college. And part of that is helping people [to] find a spiritual worth they didn’t realise they had. So, you get that opportunity as a teacher, which a bus driver doesn’t get, for example. Bus drivers are very useful – they do a wonderful job. But once they’ve dropped you off, that’s the end of the job. Sometimes we’re a bit like bus drivers as teachers. You go out the door with your grades, and that’s fine, but then some people keep coming back. I haven’t spotted the existential elastic yet, but it’s there somewhere. I’m sure I didn’t attach it. But that is another one of the things that motivates me to be a teacher.”

Stephen Parry now teaches at a sixth-form college near Sheffield. The author would like to thank Stephen for taking time out of his busy schedule to take part in this interview.


Working in Computer Science: An Autistic Perspective (Part 1)

by Daniel Gill, Queen Mary University of London

Autism is a condition with many associated challenges, but for some people it also brings benefits. This distinction is especially apparent in the workplace, where autistic people often find it difficult to get along with colleagues (and their boss), and to complete the work set for them. It is not all negative though: many autistic people find work in which they thrive, and given the right circumstances and support, an autistic person can succeed in such an environment.

We often rightly hear about the greats in computer science; Ada Lovelace, Alan Turing, Lynn Conway (who sadly passed away earlier this month) – but let us not forget the incredible teams of computer scientists working around the clock; maintaining the Internet, building the software we use every day, and teaching the next generation. For this two-part article, I have spoken with Stephen Parry, an autistic computer scientist, who, after working in industry for 20 years, now teaches the subject in a sixth-form college in Sheffield. His autistic traits have caused him challenges throughout his career, but this is not a unique experience – many autistic computer scientists also face the same challenges.

Stephen’s experience with programming started at the age of 14, after he was introduced to computers on a curriculum enrichment course. He decided against taking the then “really rubbish” O-Level (the predecessor of GCSEs) Computer Science course, and the existence of the accompanying A-Level “just didn’t come up on my radar”. He was, however, able to take the college’s sole RML 380Z, a powerful computer for the time, home for the summer, with which he was able to keep practising programming.

When it came time to go to university, he opted first to study chemistry, a subject he had taken at A-Level. After a short time, though, he realised that he wasn’t as interested in chemistry as he had first thought, so he decided to switch to computer science. In our discussions, he praised the computer science course at the University of Sheffield:

“[I] really enjoyed [the course] and got on well with it. So, I kind of drifted into it as far as doing it seriously is concerned. But it’s been a hobby of mine since I was 14 years old, and once I was on the degree, I mean, the degree at Sheffield was a bit like a sweetie shop. It really was absolutely brilliant. We did all kinds of weird and wonderful stuff, all of it [was] really interesting and engaging, and the kind of stuff that you wouldn’t get by either playing around on your own or going out into [the] workplace. As I’ve always said, that’s what a university should be. It should expose you to the kind of stuff that you can’t get anywhere else, the stuff that employers haven’t realised they need yet.”

Research shows that autistic people who go to university are much more likely than the general population to pick STEM subjects [EXTERNAL]. For lots of autistic people, the clear, logical and fundamental understanding behind scientific subjects is a great motivator. Stephen describes how this is something that appeals to him:

“[What] I enjoy about computer science is how it teaches you how the computer actually works at a fundamental level. So, you’re not just playing with a black box anymore – it’s something you understand. And especially for someone on the [autism] spectrum, that’s a really important aspect of anything you do. You want to understand how things work. If you’re working with something, and you don’t understand how it works, usually it’s not very satisfying and kind of frustrating. Whereas, if you understand the principles going on inside of it then, when you know you’ve got it, it kind of unlocks it for you.”

While autistic traits often result in challenges for autistic people, there are some which can present a benefit to someone in computer science. A previous CS4FN article described how positive traits like ‘attention to detail’ and ‘resilience and determination’ link well to programming. Stephen agrees that these traits can help him to solve problems:

“If I get focused on a problem, the hyper focus kicks in, and I will just keep plugging away until it’s done, fixed or otherwise overcome. I know it’s both a benefit and hazard – it’s a double edged sword, but at the same time, you know you have to have that attention to detail and that, to put it another way, sheer bloody mindedness to be determined that you’re going to make it work, or you’re going to understand how it works, and that does come definitely from the [autism] spectrum.”

Although he enjoyed the content greatly, Stephen had a rocky degree, both in and out of lectures. However, some unexpected benefits arose from being at university; he both found faith and met his future wife. These became essential pillars of support, as he prepared to enter the workforce. This he did, working both as a programmer and in a variety of IT admin and technical support roles. 

About 78% of autistic adults are currently out of work [EXTERNAL] (compared with 20% of the general population). This is, in part, because some autistic people are unable to work as a result of their condition. But many others, despite wanting to work, cannot because they do not get the support they need (and are legally entitled to) in their workplace.

At this time, however, Stephen wasn’t aware of his condition, only receiving his diagnosis in his 40s. He described how this transition from university to work was very challenging.

“I moved into my first job, and I found it very, very difficult because I didn’t know that I’ve got this sort of difference – this different way my brain works that affects everything that you do. I didn’t know when I came across difficulties, it was difficult to understand why, at least to an extent, for me and for other people, it was deeply frustrating. I mean, speak to just about every manager I’ve ever had, and the same sort of pattern tends to come out. Most of them recognised that I was very difficult to manage because I found myself very difficult to manage. But time management is an issue with everything – trying to complete tasks to any kind of schedule, trying to plan anything. Oh, my days, when I hear the word SMART. [It’s an] acronym [meaning] specific, measurable, achievable, realistic and time specific. I hear that, and it just it makes me feel physically ill sometimes, because I cannot. I cannot SMART plan.”

During his time in work, however, Stephen also had some good luck. Despite the challenges associated with autism, some managers made the most of the positive skills he brought to the table:

“I found that a real challenge, interpersonally speaking, things like emotional regulation and stuff like that, which I struggle with, and I hate communicating on the phone and various other things, make me not the most promising employee. But the managers that I’ve had over the years that have valued me the most are the ones who recognised the other side of the coin, which is [that] over the years, I have absorbed so much knowledge about computer science and there are very [few] problems that you can come across that I don’t have some kind of insight into.”

This confidence in a range of areas in computer science is also a result of Stephen’s ability to link lots of areas and experiences together, a positive skill that some autistic people have:

“I found that with the mixture of different job roles I did, i.e. programming, support, network admin and database admin, my autism helped me form synergies between the different roles, allowing me to form links and crossover knowledge between the different areas. So, for example, as a support person with programming experience, I had insight into why the software I was helping the user with did not work as desired (e.g. the shortcuts or mistakes the programmer had likely made) and how maybe to persuade it to work. As a programmer with support experience, you had empathy with the user and what might give them a better UX, as well as how they might abuse the software. All this crossover, also set me up for being able to teach confidently on a huge range of aspects of CS.”

For autistic students who are planning on working in a computer science career, he has this to say:

“As an autistic person, and I would say this to anybody with [autism], you need to cultivate the part of you that really wants to get on well with people and wants to be able to care about people and understand people. Neurotypical people get that ability out of the box, and some of them take it for granted. I tend to find that the autistic people who actually find that they can understand people, that they work at it until they can, [are] often more conscientious as a result. And I think it’s important that if you’re an autistic person, to learn how to be positive about people and affirm people, and interact with them in positive ways, because it can make you a more caring and more valuable human being as a as a result.”

“Look for jobs where you can really be an asset, where your neurodiversity is the asset to what you’re trying to do, but at the same time, don’t be afraid to try to, and learn how to engage with people. Although it’s harder, it’s often more rewarding as a result.”

After working in industry for 20 years, the last half of which as a contractor, Stephen decided to take a considerable pay cut and become a computer science teacher. In the second part of this article, we will continue our conversation and find out what led him to choose a career change to teaching.
