Sarah Angliss: Hugo is no song bird

by Jane Waite, Queen Mary University of London

What was the first technology for recording music: CDs? Records? 78s? The phonograph? No. Trained songbirds came before all of them.

Composer, musician, engineer and visiting fellow at Goldsmiths University, Sarah Angliss usually has a robot on stage performing live with her. These robots are not slick, high-tech cyber-beings but junk-modelled automata. One, named Hugo, sports a spooky ventriloquist doll’s head! Sarah builds and programs her robots herself.

She is also a sound historian, and worked on a Radio 4 documentary, ‘The Bird Fancyer’s Delight’, uncovering how birds have been used to provide music across the ages. During the 1700s, people trained songbirds to sing human-invented tunes in their homes. You could buy special manuals showing how to train your pet bird. If young birds heard a tune played over and over again, with no other birds around to put them right, they would adopt that song as their own. Playing the recorder was one way to train them, but special instruments were also invented to do the job automatically.

With the invention of the phonograph, the popularity of home songbirds plummeted, but the practice didn’t completely die out. Blackbirds, thrushes, canaries, budgies, bullfinches and other songbirds have continued to be schooled to learn songs that they would never sing in the wild.


This article was first published on our archived CS4FN site, and a copy is also on page 9 of issue 21 of the CS4FN magazine “Computing Sounds Wild”. You can download a free PDF copy from the link below, along with all of our free material.


Related Magazine …


More from Sarah Angliss


Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.


Music & Computing: TouchKeys: getting more from your keyboard

By Jo Brodie and Paul Curzon, Queen Mary University of London

Even if you’re the best keyboard player in the world, the sound you can get from any one key is pretty much limited to ‘loud’ or ‘soft’, ‘short’ or ‘long’, depending on how hard and how quickly you press it. The note’s sound can’t be changed once the key is pressed. At best, on a piano, you can make it last longer using the sustain pedal. A violinist, on the other hand, can move their finger on the string while it’s still being played, changing its pitch to give a nice vibrato effect. Wouldn’t it be fun if keyboard players could do similar things?

Andrew McPherson and other digital music researchers at QMUL and Drexel University came up with a way to give keyboard performers more room to express themselves like this. TouchKeys is a thin plastic coating, overlaid on each key of a keyboard but barely noticeable to the keyboard player. The coating contains sensors and electronics that can change the sound when a key is touched. The TouchKeys electronics connect to the keyboard’s own controller and change the sounds already being made, expanding the keyboard’s range. This opens up a whole world of new sonic possibilities to a performer.

The sensors can follow the position and movement of your fingers and respond appropriately in real time, extending the range of sounds you can get from your keyboard. By wiggling your finger from side to side on a key you can make a vibrato effect, or change the note’s pitch completely by sliding your finger up and down the key. The technology is similar to a phone’s touchscreen, where different movements (‘gestures’) make different things happen. An advantage of the system is that it can easily be applied to a keyboard a musician already knows how to play, so they’ll find it easy to start using without having to make big changes to their style of playing.
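To get a feel for the idea, here is a rough sketch of how a finger slide along a key could be turned into a standard MIDI pitch-bend value. This is an invented illustration, not TouchKeys’ actual firmware or mapping (the function name and position scale are assumptions):

```python
def touch_to_pitch_bend(pos, pos_at_press, bend_range=2.0):
    """Hypothetical mapping from finger position to MIDI pitch bend.

    pos and pos_at_press are the finger's position along the key,
    from 0.0 (front edge) to 1.0 (back). Sliding relative to where
    the key was first touched bends the pitch.
    """
    semitones = (pos - pos_at_press) * bend_range
    # MIDI pitch bend is a 14-bit value: 8192 means "no bend",
    # and the full range conventionally spans +/- 2 semitones.
    bend = int(8192 + (semitones / 2.0) * 8191)
    return max(0, min(16383, bend))

print(touch_to_pitch_bend(0.5, 0.5))   # no movement: 8192 (no bend)
print(touch_to_pitch_bend(1.0, 0.5))   # slide up the key: 12287 (bend up)
```

A real system would stream values like these to the synthesiser many times a second as the finger moves, which is what makes the vibrato and slide effects possible.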

They wanted to get TouchKeys out of the lab and into the hands of more musicians, so they teamed up with members of London’s Music Hackspace community, who run courses in electronic music, to create some initial versions for sale. Early adopters could choose either a DIY kit to add to their own keyboard, wire up and start to play, or a ready-to-play keyboard with the TouchKeys system already installed.

The result is that lots of musicians are already using TouchKeys to get more from their keyboard in exciting new ways.


Earlier this year Professor Andrew McPherson gave his inaugural lecture (a public lecture given by an academic who has been promoted) at Imperial College London where he is continuing his research. You can watch his lecture – Making technology to make music – below.

Further reading

Andrew McPherson’s work on the Bela platform




Singing bird – a human choir, singing birdsong

by Jane Waite, Queen Mary University of London

Image by Dieter from Pixabay

“I’m in a choir.” “Really? What do you sing?” “I did a blackbird last week, but I think I’m going to be a woodpecker today. I do like a robin, though!”

This is no joke! Marcus Coates, a British artist, got up very early and, working with wildlife sound recordist Geoff Sample, used 14 microphones to record the dawn chorus over lots of chilly mornings. They slowed the sounds down and matched each species of bird with a different type of human voice. Next they created a film of 19 people making birdsong: each person sang a different bird in their own habitat, a car, a shed, even a lady in the bath! The 19 tracks are played together to make the dawn chorus. See it on YouTube below.

Marcus talks about his work, and the installation at Brighton Fabrica.

Marcus didn’t stop there: he wrote a new birdsong score. Yes, for people to sing, a new top-ten bird hit, but they have to do it very slowly. People sing ‘bird’ about 20 times slower than birds sing ‘bird’: ‘whooooooop’, ‘whooooooop’, ‘tweeeeet’. For a special performance, a choir learned the new song, a new dawn chorus. They sang the slowed-down version live, which was recorded, speeded back up and played to the audience. I was there! It was amazing! A human performance became a minute of tweeting joy. Close your eyes and ‘whoop’, you were in the woods at the crack of dawn!
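The slowing down and speeding back up is, in essence, the old tape trick: replay the same samples at a different rate, and duration and pitch change together by the same factor. A minimal sketch of the idea (the numbers are illustrative, not from the actual recordings):

```python
import numpy as np

def retime(samples, rate, factor):
    """Tape-style speed change: keep the samples, change the playback
    rate. Slowing by `factor` stretches the duration and lowers the
    pitch by the same factor."""
    return samples, int(rate / factor)

rate = 44100
t = np.arange(int(rate * 0.5)) / rate      # half a second of time points
whistle = np.sin(2 * np.pi * 4000 * t)     # a 4 kHz bird-like whistle

slow, slow_rate = retime(whistle, rate, 20)
# Played at the new rate, the 4 kHz whistle sounds at 4000/20 = 200 Hz,
# comfortably in human singing range, and lasts 20 times longer.
print(len(slow) / slow_rate)               # 10.0 seconds instead of 0.5
```

Running the recording of the choir through the inverse step (replaying at 20 times the rate) restores birdsong-like pitch and speed, which is exactly what the audience heard.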

Computationally thinking a performance

Computational thinking is at the heart of the way computer scientists solve problems. Marcus Coates doesn’t claim to be a computer scientist: he is an artist who looks for ways to see how people are like other animals. But we can get an idea of what computational thinking is all about by looking at how he created his sounds. First, he and wildlife sound recordist Geoff Sample had to focus on the individual bird sounds in the original recordings and ignore detail they didn’t need. That is abstraction: listening for each bird and working out which aspects of its sound were important. They looked for patterns to isolate each voice. Sometimes the birds’ performance was messy and they could not hear particular species clearly, so they were constantly checking for quality. For each bird, they listened and listened until they found just the right ‘slow it down’ speed. Different birds needed different speeds for people to be able to mimic them, and different kinds of human voices suited each bird type: attention to detail mattered enormously. They had to check the results carefully, evaluating, making sure each really did sound like the appropriate bird and that all fitted together into the dawn chorus soundscape. They also had to create a bird language, another abstraction, a score written as track notes, and that is just an algorithm for making sounds!

Fun to try

Use your computational thinking skills to create a notation for an animal’s voice, a pet perhaps? A dog, hamster or cat language: what different sounds do they make, and how can you note them down? What might the algorithm for that early-morning “I want my breakfast” look like? Can you make those sounds and communicate with your pet? Or maybe stick to tweeting? (You can follow @cs4fn on Twitter too.)
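To make the idea concrete, here is a toy version of such a notation (an invented example, not from the article). The score is just data, and ‘performing’ it means following the steps in order, which is all an algorithm is:

```python
# An invented notation for a cat's early-morning "I want my breakfast"
# routine: each step is a pair of (sound, number of repeats).
breakfast_score = [
    ("miaow", 2),
    ("purr", 1),
    ("MIAOW", 3),   # louder and more insistent
]

def perform(score):
    """Follow the score step by step: the score is the algorithm."""
    calls = []
    for sound, repeats in score:
        calls.extend([sound] * repeats)
    return calls

print(perform(breakfast_score))
# ['miaow', 'miaow', 'purr', 'MIAOW', 'MIAOW', 'MIAOW']
```

Once the notation exists, anyone (or any program) can follow it and reproduce the performance, just as the choir followed Marcus’s score.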

Enjoy the slowed-down performance of this pet starling which has added a variety of mimicked sounds to its song repertoire.


This article was originally published on the CS4FN website and can also be found on page 15 in the magazine linked below. It also featured on Day 7 of our CS4FN Christmas Computing Advent Calendar.



Every Breath You Take: Reclaim the Internet

The Police’s 1983 hit “Every Breath You Take” is up there in the top 100 pop songs ever. It seems a charming love song, and some couples even treat it as “their” song, playing it for the first dance at their wedding. Lyrics like “Every single day…I’ll be watching you” might, in a loving relationship, seem a good and positive thing. As the Police’s Sting has said, though, the lyrics are about exactly the opposite.

Image by Adam from Pixabay

It is being sung by a man obsessed with his former girlfriend. He is singing a threat. It is about sinister stalking and surveillance, about nasty use of power by a deranged man over a woman who once loved him.

Reclaim the Internet

Back in 1983 the web barely existed, but what the song describes is now happening every day, with online stalking, trolling and other abuse a big problem. What starts in the virtual world, we now see, spills over into the real world too. This is one reason why we need to Reclaim the Internet and why online privacy is important. We must all call out online abuse. Prosecutors need to treat it seriously. Social media companies need to find ways to prevent abusive content being posted and to remove it quickly. They need easier ways for us to protect our privacy and to know it is protected. They need to be up for the challenge.

Reclaim your privacy

The lyrics fit our lives in another way too, about another kind of relationship. When we click those unreadable consent forms for a new app, we give the technology companies that we love so much permission to watch over us. They follow the song as a matter of course (in a loving way, they say). They are “watching you” as you keep your gadgets on you “every single day”. “Every night you stay” online, you are recorded, along with anyone you are with online. They watch “every move you make” (physically, with location-aware devices, and virtually, noting every click, every site visited and, from your searches, everything you are interested in); “every step you take” (recorded by your fitness tracker); and “every breath you take” (by your healthcare app). “Every bond you break” is logged (as you unlike friends and leave websites never to go back); “every game you play” (of course); “every word you say” (everything you type is noted, and the likes of Alexa record every sound too, shipping your words off to be processed on distant company servers). They really are watching you.

Let’s hope the companies really are loving and don’t turn out to have an ugly underside, changing personality and becoming abusive once they have us snared. Remember their actual aim is to make money for shareholders. They don’t actually love us back. We may fall out of love with them, but by then they will already know everything about us, and will still be watching every move we make. Perhaps you should not be giving up your privacy so freely.

You belong to me?

We probably can’t break our love affair, anyway. We’ve already sold them our souls (for nothing much at all). As the lyrics say: “You belong to me.”

More on…

Magazines …


Our Books …

