Composing from Compression

by Geraint Wiggins, Queen Mary University of London

Computers compress files to save space. But compression can also let them create music!

Recoloured Cranium head abstract image by Gordon Johnson from Pixabay

Music is special. It’s one of the things, like language, that makes us human, separating us from animals. It’s also special as art, because it doesn’t exist as an object in the world – it depends on human memory. “But what about CDs? They’re objects in the world”, you might say and you’d be right, but the CD is not the music. The CD contains data files of numbers. Those numbers are translated by electronics into the movements in a loudspeaker, to create sound waves. Even the sound waves aren’t music! They only become music when a human hears them, because understanding music is about noticing repetition, variation and development in its structure. That’s why songs have verses and choruses: so we can find a starting point to understand their structure. In fact, we’re so good at understanding musical structure, we don’t even notice we’re doing it. What’s more, music affects us emotionally: we get excited (using the same chemicals that get us excited when we’re in love or ready to flee danger) when we hear the anthem section of a trance track, or recognise the big theme returning at the end of a symphony.

Surprisingly, brains seem to understand musical structure in a way that’s like the algorithms computer scientists use to compress data. It’s better to store data compressed than uncompressed, because it takes less storage space. We think that’s why brains do it too.

Even more surprisingly, brains also seem to be able to learn the best way to store compressed music data. Computers use bits as their basic storage unit, but we can make groups of bits work like other things (numbers, words, pictures, angry birds…); brains seem to do something similar. For example, pitch (high vs. low notes) in sequence is an important part of music: we build melodies by lining up notes of different pitch one after the other. As we learn to hear music (starting before birth, and continuing throughout life), we learn to remember pitch in ever more efficient ways, giving our compression algorithms better and better chances to compress well. And so we remember music better.

Our team use compression algorithms to understand how music works in the human mind. We have discovered that, when our programs compress music, they can sometimes predict musical structures, even if neither they nor a human have “heard” them before. To compress something, you find large sections of repeated data and replace each with a label saying “this is one of those”. It’s like labelling a book with its title: if you’ve read Lord of the Rings, when I say the title you know what I mean without me telling the story. If we do this to the internal structure of music, there are little repetitions everywhere, and the order that they appear is what makes up the music’s structure.
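If you like to program, here is a toy sketch of that idea in Python (our illustration only, not the team’s actual research software; the little melody and the phrase length are invented). It hunts for four-note phrases that repeat and swaps each repeat for a short label:

```python
from collections import Counter

melody = ["C", "D", "E", "C",   # phrase A
          "E", "F", "G",        # phrase B
          "C", "D", "E", "C",   # phrase A again
          "E", "F", "G"]        # phrase B again

def compress(notes, length=4):
    """Replace every repeated run of `length` notes with a short label."""
    # Count how often each length-note phrase appears in the melody
    phrases = Counter(tuple(notes[i:i + length])
                      for i in range(len(notes) - length + 1))
    labels = {}    # phrase -> label, filled in as we go
    output = []
    i = 0
    while i < len(notes):
        phrase = tuple(notes[i:i + length])
        if phrases.get(phrase, 0) > 1:                  # it repeats, so label it
            output.append(labels.setdefault(phrase, f"P{len(labels)}"))
            i += length
        else:
            output.append(notes[i])                     # unique, keep the note
            i += 1
    return output, labels

compressed, dictionary = compress(melody)
print(compressed)    # ['P0', 'E', 'F', 'G', 'P0', 'E', 'F', 'G']
print(dictionary)    # {('C', 'D', 'E', 'C'): 'P0'}
```

The compressed version is shorter than the original, and the labels and their order are exactly the kind of structure we are talking about.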

If we compress music, but then decompress it in a different way, we can get a new piece of music in a similar style or genre. We have evidence that human composers do that too!
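Here is the same toy idea going the other way (again just an illustrative sketch with invented phrases, not how real composition software works): keep the labelled building blocks, but put them back together in a new order.

```python
import random

# Made-up phrase dictionary and structure, standing in for a compressed piece
dictionary = {"P0": ["C", "D", "E", "C"], "P1": ["E", "F", "G"]}
structure = ["P0", "P1", "P0", "P1"]          # the original piece's structure

def decompress(sequence, phrases):
    """Expand each label back into its run of notes."""
    notes = []
    for item in sequence:
        notes.extend(phrases.get(item, [item]))   # labels expand, plain notes stay
    return notes

new_structure = structure[:]
random.shuffle(new_structure)                      # e.g. ['P1', 'P0', 'P1', 'P0']
print(decompress(new_structure, dictionary))       # a new melody from the same building blocks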

What our programs are doing is learning to create new music. There’s a long way to go before they produce music you’ll want to dance to – but we’re getting there!


This article was first published on the original CS4FN website and a copy can be found on page 12 in Issue 18 of the CS4FN magazine: Machines that are creative. You can download a free PDF copy below, along with all of our other free magazines and booklets at our downloads site.


Related Magazine …

EPSRC supports this blog through research grant EP/W033615/1.

Daphne Oram: the dawn of music humans can’t play

by Paul Curzon, Queen Mary University of London

Music notes over paint brush patterns
Image by Gerd Altmann from Pixabay

What links James Bond, a classic 1950s radio comedy series and a machine for creating music by drawing? … Electronic music pioneer: Daphne Oram.

Oram was one of the earliest musicians to experiment with electronic music, and was the first woman to create an electronic instrument. She realised that the advent of electronic music meant composers no longer had to worry about whether anyone could actually physically perform the music they composed. If you could write it down in a machine-readable way then machines could play it electronically. That idea opened up whole new sounds and forms of music and is an idea that pop stars and music producers still make use of today.

She learnt to play music as a child and was good enough to be offered a place at the Royal College of Music, though turned it down. She also played with radio electronics with her brothers, creating radio gadgets and broadcasting music from one room to another. Combining music with electronics became her passion and she joined the BBC as a sound engineer. This was during World War 2 and her job included being the person ready during a live music broadcast to swap in a recording at just the right point if, for example, there was an air raid that meant the performance had to be abandoned. The show, after all, had to go on.

Composing electronic music

She went on to take this idea of combining an electronic recording with live performance further and composed a piece of music called Still Point that fully combined orchestral and electronic music in a completely novel way. The BBC turned down the idea of broadcasting it, however, so it was not played for 70 years, until it was rediscovered after her death and ultimately performed at a BBC Prom.

Composers no longer had to worry about whether anyone could actually physically perform the music they composed

She started instead to compose electronic music and sounds for radio shows for the BBC, which is where the comedy series link came in. She created sound effects for a sketch for the Goon Show (the show which made the names of comics including Spike Milligan and Peter Sellers). She constantly played with new techniques. Years later it became standard for pop musicians to mess with tapes of music to get interesting effects, speeding them up and down, rerecording fragments, creating loops, running tapes backwards, and so on. These kinds of effects were part of the amazing sounds of the Beatles, for example. Oram was one of the first to experiment with these kinds of effects and use them in her compositions – long before pop star producers.

One of the most influential things she did was set up the BBC Radiophonic Workshop which went on to revolutionise the way sound effects and scores for films and shows were created. Oram though left the BBC shortly after it was founded, leaving the way open for other BBC pioneers like Delia Derbyshire. Oram felt she wasn’t getting credit for her work, and couldn’t push forward with some of her ideas. Instead Oram set herself up as an independent composer, creating effects for films and theatre. One of her contracts involved creating electronic music that was used on the soundtracks of the early Bond films starring Sean Connery – so Shirley Bassey is not the only woman to contribute to the Bond sound!

The Music Machine

While her film work brought in the money, she continued with her real passion which was to create a completely new and highly versatile way to create music…by drawing. She built a machine – the Oramics Machine – that read a composition drawn onto film reels. It fulfilled her idea of having a machine that could play anything she could compose (and fulfilled a thought she had as a child when she wondered how you could play the notes that fell between the keys on a piano!).

Image by unknown photographer from wikimedia.

The 35mm film at the heart of her system dates all the way back to the 19th century, when George Eastman, Thomas Edison and William Kennedy Dickson pioneered the invention of film-based photography and then movies. It involved a light-sensitive layer being painted onto strips of film, with holes down the side that allowed the film to be advanced. This gave Oram a recording medium. She could etch or paint subtle shapes and patterns onto the film. In a movie, light was shone through the film, projecting the pictures on it onto the screen. Oram instead used light sensors to detect the patterns on the film and convert them into electronic signals. Electronic circuitry she designed (and was awarded patents for) controlled cathode ray tubes that showed the original drawn patterns, but now as electrical signals. Ultimately these electrical signals drove speakers.

Key to the flexibility of the system was that different aspects of the music were controlled by patterns on different films. One, for example, controlled the frequency of the sound, others the timbre or tone quality, and others the volume. These different control signals were then combined by Oram’s circuitry. Combining the fine control of the drawings with the multiple films meant she had created a music machine far more flexible in the sound it could produce than any traditional instrument or orchestra. Modern music production facilities use very similar approaches today, though based on software systems rather than the 1960s technology available to Oram.
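Modern software can mimic the idea. Here is a rough Python analogy (our own sketch, not Oram’s actual design; the “drawn” curves, note values and file name are all invented): two hand-drawn control curves, one for pitch and one for volume, are combined to synthesise a sound, much as Oram’s separate films were combined by her circuitry.

```python
import math
import wave
import struct

SAMPLE_RATE = 44100
DURATION = 2.0                       # seconds

def curve(points, t):
    """Linearly interpolate a hand-'drawn' curve given as (time, value) points."""
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return points[-1][1]

pitch_film  = [(0.0, 220.0), (1.0, 440.0), (2.0, 330.0)]   # frequency in Hz
volume_film = [(0.0, 0.0),   (0.5, 1.0),   (2.0, 0.0)]     # 0 = silent, 1 = full

samples = []
phase = 0.0
for n in range(int(SAMPLE_RATE * DURATION)):
    t = n / SAMPLE_RATE
    phase += 2 * math.pi * curve(pitch_film, t) / SAMPLE_RATE   # pitch film sets frequency
    samples.append(curve(volume_film, t) * math.sin(phase))     # volume film sets loudness

with wave.open("oramics_sketch.wav", "w") as f:                 # listen to the result
    f.setnchannels(1); f.setsampwidth(2); f.setframerate(SAMPLE_RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```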

Ultimately, Daphne Oram was ahead of her time as a result of combining her two childhood fascinations of music and electronics in a way that had not been done before. She may not be as famous as the great record producers who followed her, but they owe a lot to her ideas and innovation.

More on …

Related Magazines …


EPSRC supports this blog through research grant EP/W033615/1. 

The first computer music

by Paul Curzon, Queen Mary University of London

(updated from the archive)

Robot with horn
Image by www_slon_pics from Pixabay

The first recorded music by a computer program was the result of a flamboyant flourish added on the end of a program that played draughts in the early 1950s. It played God Save the King.

The first computers were developed towards the end of the Second World War to do the number crunching needed to break the German codes. After the War several groups around the world set about manufacturing computers, including three in the UK. This was still a time when computers filled whole rooms and it was widely believed that a whole country would only need a few. The uses envisioned tended to involve lots of number crunching.

A small group of people could see that computers could be much more fun than that. One of them was school teacher Christopher Strachey. When he was introduced to the Pilot ACE computer on a visit to the National Physical Laboratory, he set about writing a program in his spare time that could play against humans at draughts. Unfortunately, the computer didn’t have enough memory for his program.

He knew Alan Turing, one of those wartime pioneers, from when they were both at university before the War. He luckily heard that Turing, now at the University of Manchester, was working on the new Ferranti Mark I computer which would have more memory, so wrote to him to see if he could get to play with it. Turing invited him to visit and on the second visit, having had a chance to write a version of the program for the new machine, he was given the chance to try to get his draughts program to work on the Mark I. He was left to get on with it that evening.

He astonished everyone the next morning by having the program working and ready to demonstrate. He had worked through the night to debug it. Not only that, as it finished running, to everyone’s surprise, the computer played the National Anthem, God Save the King. As Frank Cooper, one of those there at the time said: “We were all agog to know how this had been done.” Strachey’s reputation as one of the first wizard programmers was sealed.

The reason it was possible to play sounds on the computer at all was nothing to do with music. A special command called ‘Hoot’ had been included in the set of instructions programmers could use (called the ‘order code’ at the time) when programming the Mark I computer. The computer was connected to a loudspeaker and Hoot was used to signal things like the end of the program – alerting the operators. Apparently it hadn’t occurred to anyone there but Strachey that it was everything you needed to create the first computer music.
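Roughly speaking, the trick is that a single hoot is just a click, but clicks repeated fast enough blur into a note, so by timing hoots carefully a program can play a tune. Here is a sketch of that idea in Python rather than Mark I order code (the note frequencies and timings are picked arbitrarily for illustration):

```python
import wave, struct

SAMPLE_RATE = 44100

def hoot_tone(frequency, duration):
    """Build a click train: a short pulse repeated `frequency` times a second."""
    samples = [0.0] * int(SAMPLE_RATE * duration)
    period = int(SAMPLE_RATE / frequency)
    for i in range(0, len(samples), period):
        for j in range(i, min(i + period // 2, len(samples))):
            samples[j] = 0.5                 # pulse 'on' for half the period
    return samples

# A short made-up tune, as (frequency in Hz, length in seconds) pairs
tune = [(392, 0.4), (392, 0.4), (440, 0.6), (392, 0.6), (523, 0.6), (494, 0.8)]

audio = []
for freq, length in tune:
    audio += hoot_tone(freq, length)

with wave.open("hoot_tune.wav", "w") as f:
    f.setnchannels(1); f.setsampwidth(2); f.setframerate(SAMPLE_RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in audio))
```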

He also programmed it to play Baa Baa Black Sheep and went on to write a more general program that would allow any tune to be played. When a BBC Live Broadcast Unit visited the University in 1951 to see the computer for Children’s Hour, the Mark I gave the first ever broadcast performance of computer music, playing Strachey’s music: the UK National Anthem, Baa Baa Black Sheep and also In the Mood.

While this was the first recorded computer music it is likely that Strachey was beaten to creating the first actual programmed computer music by a team in Australia who had similar ideas and did a similar thing probably slightly earlier. They used the equivalent hoot on the CSIRAC computer developed there by Trevor Pearcey and programmed by Geoff Hill. Both teams were years ahead of anyone else and it was a long time before anyone took the idea of computer music seriously.

Strachey went on to be a leading figure in the design of programming languages, responsible for many of the key advances that have led to programmers being able to write the vast and complex programs of today.

The recording made of the performance has recently been rediscovered and restored so you can now listen to the performance yourself:


More on …

Related Magazines …


This blog is funded by UKRI, through grant EP/W033615/1.

Diamond Dogs: Bowie’s algorithmic creativity

by Paul Curzon, Queen Mary University of London

(Updated from the archive)

Bowie black and white portrait
Image by Cristian Ferronato from Pixabay

Rock star David Bowie co-wrote a program that generated lyric ideas. It gave him inspiration for some of his most famous songs. It generated sentences at random based on something called the ‘cut-up’ technique: an algorithm for writing lyrics that he was already doing by hand. You take sentences from completely different places, cut them into bits and combine them in new ways. The randomness in the algorithm creates strange combinations of ideas and he would use ones that caught his attention, sometimes building whole songs around the ideas they expressed.

Tools for creativity

Rather than being an algorithm that is creative in itself, it is perhaps more a tool to help people (or perhaps other algorithms) be more creative. Both kinds of algorithm are of course useful. It does help highlight an issue with any “creative algorithm”, whether creating new art, music or writing. If the algorithm produces lots of output and a human then chooses the ones to keep (and show others), then where is the creativity? In the algorithm or in the person? That selection process of knowing what to keep and what to discard (or keep working on) seems to be a key part of creativity. Any truly creative program should therefore include a module to do such vetting of its work!

All that aside, an algorithm is certainly part of the reason Bowie’s song lyrics were often so surreal and intriguing!


Write a cut-up technique program

Why not try and write your own cut-up technique program to produce lyrics? You will likely need to use the string-processing libraries of whatever language you choose. You could feed it things like the text of webpages or news reports. If you don’t program yet, do it by hand: cut up magazines, shuffle the part-sentences, then glue them back together.
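If you do program, a minimal version might look something like this Python sketch (our example, assuming you paste in your own source text; Bowie’s actual program isn’t public, so this only copies the general idea):

```python
import random

source_text = """Paste in sentences from completely different places here.
News reports work well. So do old diary entries and adverts.
The stranger the mix of sources the better the results tend to be."""

# Cut the text into short fragments of two or three words
words = source_text.split()
fragments = []
i = 0
while i < len(words):
    size = random.choice([2, 3])
    fragments.append(" ".join(words[i:i + size]))
    i += size

# Shuffle the fragments and glue them back together as 'lyric' lines
random.shuffle(fragments)
for line_start in range(0, len(fragments), 3):
    print(" ".join(fragments[line_start:line_start + 3]))
```

Run it a few times and keep only the lines that catch your attention, just as Bowie did.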


More on …

Related Magazines …


This blog is funded by UKRI, through grant EP/W033615/1.

A Wookie for three minutes please – how Foley artists can manipulate natural and synthesised sounds for film, TV and radio

by Jane Waite and Paul Curzon, Queen Mary University of London.
This story was originally published on CS4FN and in an issue of the magazine (see below).

Theatre producers, radio directors and film-makers have been trying to create realistic versions of natural sounds for years. Special effects teams break frozen celery stalks to mimic breaking bones, smack coconut shells on hard packed sand to hear horses gallop, rustle cellophane for crackling fire. Famously, in the first Star Wars movie the Wookie sounds are each made up of up to six animal clips combined, including a walrus! Sometimes the special effects people even record the real thing and play it at the right time! (Not a good idea for the breaking bones though!) The person using props to create sounds for radio and film is called a Foley artist, named after the work of Jack Donovan Foley in the 1920s. Now the Foley artist is drawing on digital technology to get the job done.

Black and white photo of a walrus being offered a fish, with one already in its mouth
“Are you sure that’s a microphone?” Walrus photo by Kabomani-Tapir from Pixabay

Designing sounds

Sound designers have a hard job finding the right sounds. So how about creating sound automatically using algorithms? Synthetic sound! Research into sound creation is a hot topic, not just for special effects but also to help understand how people hear and for use in many other sound based systems. We can create simple sounds fairly easily using musical instruments and synthesisers, but creating sounds from nature, animal sounds and speech is much more complicated.

The approaches used to recognize sounds can be the basis of generating sounds too. You can either try to hand-craft a set of rules that describe what makes the sound sound the way it does, or you can write algorithms that work it out for themselves.

Paying patterns attention

One method, developed as a way to automatically generate synthetic sound, is based on looking for patterns in the sounds. Computer scientists often create mathematical models to better understand things, as well as to recognize and generate computer versions of them. The idea is to look at (or here listen to) lots of examples of the thing being studied. As patterns become obvious they also start to identify elements that don’t have much impact. Those features are ignored so the focus stays on the most important parts. In doing this they build up a general model, or view, that describes all possible examples. This skill of ignoring unimportant detail is called abstraction, and if you create a general view, a model of something, this is called generalisation: both important parts of computational thinking. The result is a hand-crafted model for generating that sound.

That’s pretty difficult to do though, so instead computer scientists write algorithms to do it for them. Now, rather than a person trying to work out what is, or is not important, training algorithms work it out using statistical rules. The more data they see, the stronger the pattern that emerges, which is why these approaches are often referred to as ‘Big Data’. They rely on number crunching vast data sets. The learnt pattern is then matched against new data, looking for examples, or as the basis of creating new examples that match the pattern.

The rain in train(ing)

Number crunching based on Big Data isn’t the only way though, sometimes general patterns can be identified from knowledge of the thing being investigated. For example, rain isn’t one sound but is made up of lots of rain drops all doing a similar thing. Natural sounds often have that kind of property. So knowledge of a phenomenon can be used to create a basic model to build a generator around. This is an approach Richard Turner, now at Cambridge University, has pioneered, analysing the statistical properties of natural sounds. By creating a basic model and then gradually tweaking it to match the sound-quality of lots of different natural sounds, his algorithms can learn what natural sounds are like in general. Then, given a specific natural ‘training’ sound, it can generate synthetic versions of that sound by choosing settings that match its features. You could give it a recorded sample of real rain, for example. Then his sound processing algorithms apply a bunch of maths that pull out the important features of that particular sound based on the statistical models. With the critical features identified, and plugged in to his general model, a new sound of any length can then be generated that still matches the statistical pattern of, and so sounds like, the original. Using the model you can create lots of different versions of rain, that all still sound like rain, lots of different campfires, lots of different streams, and so-on.
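To get a feel for the “rain is lots of raindrops” idea, here is a toy Python sketch (our illustration, not Richard’s statistical model; all the numbers are invented). It builds a rain-like sound by adding together thousands of tiny random bursts:

```python
import random, math, wave, struct

SAMPLE_RATE = 44100
DURATION = 3.0
N_DROPS = 4000

samples = [0.0] * int(SAMPLE_RATE * DURATION)

for _ in range(N_DROPS):
    start = random.randrange(len(samples))
    length = random.randint(50, 400)            # each drop lasts a few milliseconds
    loudness = random.uniform(0.02, 0.1)
    for j in range(start, min(start + length, len(samples))):
        decay = math.exp(-(j - start) / 80.0)   # each drop dies away quickly
        samples[j] += loudness * decay * random.uniform(-1, 1)

peak = max(abs(s) for s in samples) or 1.0      # keep the mix within range
with wave.open("toy_rain.wav", "w") as f:
    f.setnchannels(1); f.setsampwidth(2); f.setframerate(SAMPLE_RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s / peak * 32767)) for s in samples))
```

Change the number of drops, their length or their loudness and you get lighter drizzle or a heavier downpour – different versions of rain that all still sound like rain.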

For now, the celery stalks are still in use, as are the walrus clips, but it may not be long before film studios completely replace their Foley bag of tricks with computerised solutions like Richard’s. One Wookie for three minutes and a dawn chorus for five, please.



Become a Foley Artist with Sonic Pi

You can have a go at being a Foley artist yourself. Sonic Pi is a free live-coding synth for music creation that is powerful enough for professional musicians, yet designed to get beginners into live coding: combining programming with composing to make live music.

It was designed for use with a Raspberry Pi computer, which is a cheap way to get started, though it works with other computers too. It’s also a great, fun way to start learning to program.

Play with anything, and everything, you find around the house, junk or otherwise. See what sounds it makes. Record it, and then see what it makes you think of out of context. Build up your own library of sounds, labelling them with things they sound like. Take clips of films, mute the sound and create your own soundscape for them. Store the sound clips and then manipulate them in Sonic Pi, and see if you can use them as the basis of different sounds.

Listen to the example sound clips made with Sonic Pi on their website, then start adapting them to create your own sounds, your own music. What is the most ‘natural sound’ you can find or create using Sonic Pi?


This article was also originally published in issue 21 of the CS4FN magazine ‘Computing Sounds Wild’ on p16. You can download a PDF copy of Issue 21, as well as all of our previous published material, free, at the CS4FN downloads site.

Computing Sounds Wild explores the work of scientists and engineers who are using computers to understand, identify and recreate wild sounds, especially those of birds. We see how sophisticated algorithms that allow machines to learn, can help recognize birds even when they can’t be seen, so helping conservation efforts. We see how computer models help biologists understand animal behaviour, and we look at how electronic and computer generated sounds, having changed music, are now set to change the soundscapes of films. Making electronic sounds is also a great, fun way to become a computer scientist and learn to program.

Front cover of CS4FN Issue 21 – Computing sounds wild


Stopping sounds getting left behind: the Bela computer (from @BelaPlatform)

By Jo Brodie and Paul Curzon, Queen Mary University of London

Computer-based musical instruments are hugely flexible and becoming ever more popular. They have had one disadvantage though. The sound could drag behind the musician in a way that made some digital instruments seem unplayable. Thanks to a new computer called Bela, that problem may now be a thing of the past.


A Bela computer surrounded by transistors, resistors, sensors, integrated circuits, buttons & switches. Credit: Andrew McPherson

If you pluck a guitar string or thwack a drum the sound you hear is instantaneous. Well, nearly. There’s a tiny delay. The sound still has to leave the instrument and travel to your ear. The vibration of the string or drum skin pushes the air back and forth, and vibrating air is all a sound is. Your ear receives the sound as soon as that vibrating air gets to you. Then your brain has to recognise it as a sound (and tell you what kind of sound it is, which direction it came from, which instrument produced it and so on!). The time it takes for the sound and then your brain to do all that is measured in tens of milliseconds – a millisecond being a thousandth of a second. It is called ‘latency‘, not because the delay makes it ‘late’ (though it does!), but from the Latin word latens, which means hidden or concealed, because the time between the signal being created and it being received is hidden from us.

Digital instruments take slightly longer than physical instruments, however, because electronic circuitry and computer processing is involved. It’s not just the sound going through air to ear but a digital signal whizzing through a circuit, or being processed by a computer, first to generate the sound which then goes through air to ear.

Your ear (actually your brain) will detect two sounds as being separate if there’s a gap of around 30 milliseconds between them. Drop that gap down to around 10 milliseconds between the sounds and you’ll hear them as a single sound. If that circuit-whizzing adds 10-20 milliseconds then you’re going to notice that the instrument is lagging behind you, making it feel unplayable. Reducing a digital instrument’s latency is therefore a very important part of improving the experience for the musician.
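You can do the sums yourself. Here is a tiny Python sketch that adds up a latency budget (the individual delays are made-up examples, not measurements of any real instrument):

```python
PERCEPTION_LIMIT_MS = 10   # gaps under roughly 10 ms blur into one sound

delays_ms = {
    "sensor reads the key press": 5,
    "computer processes the audio": 12,
    "sound travels 2 m through air": 6,   # sound moves at roughly 343 m/s
}

total = sum(delays_ms.values())
print(f"Total latency: {total} ms")
if total > PERCEPTION_LIMIT_MS:
    print("The player will feel the instrument lagging behind them.")
else:
    print("The delay is too small to notice - the instrument feels instant.")
```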

In 2014 Andrew McPherson and colleagues at Queen Mary University of London aimed to solve this problem. They developed Bela, a tiny computer, similar in size to a Raspberry Pi or Arduino, that can be used in a variety of digital instruments but which is special because it has an ultra-low latency of only around 2 milliseconds – super fast.

How does it do it? A computer can seem to run slowly if it is trying to do lots of things at the same time (e.g. lots of apps running or too many windows open at once). That is when the experience for the user can be a bit glitchy. Bela works by prioritising the audio signal above ALL other activities to ensure that, no matter what else the computer is doing, the gap between input (pressing a key) and output (hearing a sound) is barely noticeable. The small size of Bela also makes it completely portable and so easy to use in musical performances without needing the performer to be tethered to a large computer.

There is definitely a demand for such a computer amongst musicians. Andrew and the team wanted to make Bela available, so began fundraising through Kickstarter to create more kits. Their fundraiser reached £5,000 within four hours and within a month they’d raised £54,000, so production could begin and they launched a company, Augmented Instruments Ltd, to sell the Bela hardware kits.

Bela allows musicians to stop worrying about the sounds getting left behind. Instead, they can just get on with playing and creating amazing sounds.

See Bela in action on YouTube. Follow them on Twitter.

Featured image credit: Andrew McPherson.


Die another Day? Or How Madonna crashed the Internet

A lone mike under bright stage lights

From the cs4fn archive …

When pop star Madonna took to the stage at Brixton Academy in 2000 for a rare appearance she made Internet history and caused more than a little Internet misery. Her concert performance was webcast; that is, it was broadcast in real time over the Internet. A record-breaking audience of 9 million tuned in, and that’s where the trouble started…

The Internet’s early career

The Internet started its career as a way of sending text messages between military bases. What was important was that the message got through, even if parts of the network were damaged, say during times of war. The vision was to build a communications system that could not fail; even if individual computers did, the Internet would never crash. The text messages were split up into tiny packets of information and each of these was sent over the wire with an address and its position in the message. Going via a series of computer links it reached its destination, a bit like someone sending a car home bit by bit through the post and then rebuilding it. Because it’s split up the different bits can go by different routes.
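Here is a toy Python sketch of that idea (an illustration only, not real networking code): chop a message into numbered packets, jumble their order as if they had taken different routes, then use the numbers to rebuild the message at the far end.

```python
import random

message = "MEET AT THE BASE AT DAWN"
PACKET_SIZE = 5

# Each packet carries its position in the message as well as its chunk of data
packets = [(pos, message[i:i + PACKET_SIZE])
           for pos, i in enumerate(range(0, len(message), PACKET_SIZE))]

random.shuffle(packets)            # packets may arrive by different routes, out of order

received = sorted(packets)         # the position numbers let us put them back in order
print("".join(data for _, data in received))   # MEET AT THE BASE AT DAWN
```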

Express yourself (but be polite please)

To send all these bits of information a set of protocols (ways of communicating between the computers making up the Internet) were devised. When passing on a packet of information the sending machine first asks the receiving machine if it is both there and ready. If it replies yes then the packet is sent. Then, being a polite protocol, the sender asks the receiver if the packets all arrived safely. This way, with the right address, the packets can find the best way to go from A to B. If on the way some of the links in the chain are damaged and don’t reply, the messages can be sent by a different route. Similarly if some of the packets get lost in transit between links and need to be resent, or packets are delayed because they have to go by a roundabout route, the protocol can work round it. It’s just a matter of time before all the packets arrive at the final destination and can be put back in order. With text the time taken to get there doesn’t really matter that much.

The Internet gets into the groove

The problem with live pop videos, like a Madonna concert, is that it’s no use if the last part of the song arrives first, or you have to wait half an hour for the middle chorus to turn up, or the last word in a sentence vanishes. It needs to all arrive in real time. After all, that is how it’s being sung. So to make webcasting work there needs to be something different, a new way of sending the packets. It needs to be fast and it needs to deal with lots more packets, as video images carry a gigantic amount of data. The solution is to add something new to the Internet, called an overlay network. This sits on top of the normal wiring but behaves very differently.

The Internet turns rock and roll rebel

So the new real time transmission protocol gets a bit rock and roll, and stops being quite so polite. It takes the packets and throws them quickly onto the Internet. If the receiver catches them, fine. If it doesn’t, then so what? The sender is too busy to check like in the old days. It has to keep up with the music! If the packets are kept small, an odd one lost won’t be missed. This overlay network, called the Mbone, lets people tune into the transmissions like a TV station. All these packets are being thrown around and if you want to you can join in and pick them up.
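A sketch of the “impolite” streaming idea might look like this (again just a toy illustration in Python, not the real Mbone protocols): lost packets are simply skipped rather than asked for again, so the stream never stops to wait.

```python
import random

frames = [f"frame {n}" for n in range(10)]       # ten chunks of live video

played = []
for frame in frames:
    if random.random() < 0.2:                    # roughly 1 in 5 packets never arrives
        played.append("(glitch)")                # no time to resend - just skip it
    else:
        played.append(frame)

print(" | ".join(played))    # e.g. frame 0 | frame 1 | (glitch) | frame 3 | ...
```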

Crazy for you

Like dozens of cars all racing to get through a tunnel there were traffic jams. It was Internet gridlock.

The Madonna webcast was one of the first real tests of this new type of approach. She had millions of eager fans, but it was early days for the technology. Most people watching had slow dial-up modems rather than broadband. Also the computers making up the links in the Internet were few in number and of limited power. As more and more people tuned in to watch, more and more packets needed to be sent and more and more of the links started to clog up. Like dozens of cars all racing to get through a tunnel there were traffic jams. Packets that couldn’t get through tried to find other routes to their destination … which also ended up blocked. If they did finally arrive they couldn’t get through onto the viewer’s PC as the connection was slow, and if they did, very many were too late to be of any use. It was Internet gridlock.

Who’s that girl?

Viewers suffered as the pictures and sound cut in and out. Pictures froze then jumped. Packets arrived well after their use-by date, meaning earlier images had been shown missing bits and looking fuzzy. You couldn’t even recognise Madonna on stage. Some researchers found that packets had, for example, passed over seven different networks to reach a PC in a hotel just four miles away. The packets had taken the scenic route round the world, and arrived too late for the party. It wasn’t only the Madonna fans who suffered. The broadcast made use of the underlying wiring of the Internet and it had filled up with millions of frantic Madonna packets. Anyone else trying to use the Internet at the time discovered that it had virtually ground to a halt and was useless. Madonna’s fans had effectively crashed the Internet!

Webcasts in Vogue

Today’s webcasts have moved on tremendously using the lessons learned from the early days of the Madonna Internet crash. Today video is very much a part of the Internet’s day-to-day duties: the speed of the computer links of the Internet and their processing power have increased massively; more homes have broadband so the packets can get to your PC faster; satellite uplinks now allow the network to identify where the traffic jams are and route the data up and over them; extra links are put into the Internet to switch on at busy times; there are now techniques to unnoticeably compress videos down to small numbers of packets, and intelligent algorithms have been developed to reroute data effectively round blocks. We can also now combine the information flowing to the viewers with information coming back from them, allowing interactive webcasts. With the advent of digital television this service is now in our homes and not just on our PCs.

Living in a material world

It’s because of thousands of scientists working on new and improved technology and software that we can now watch as the housemates’ antics stream live from the Big Brother house, vote from our armchair for our favourite talent show contestant or ‘press red’ and listen to the director’s commentary as we watch our favourite TV show. Like water and electricity, the Internet is now an accepted part of our lives. However, as we come up with even more popular TV shows and concerts, strive to improve the quality of sound and pictures, more people upgrade to broadband and more and more video information floods the Internet … will the Internet Die another Day?

Peter W. McOwan and Paul Curzon, Queen Mary University of London, 2006

Read more about women in computing in the cs4fn special issue “The Women are Here”.

Punk robots learn to pogo

It’s the second of three punk gigs in a row for Neurotic and the PVCs, and tonight they’re sounding good. The audience seem to be enjoying it too. All around the room the people are clapping and cheering, and in the middle of the mosh pit the three robots are dancing. They’re jumping up and down in the style of the classic punk pogo, and they’ve been doing it all night whenever they like the music most. Since Neurotic came on the robots can hardly keep still. In fact Neurotic and the PVCs might be the best, most perfect band for these three robots to listen to, since their frontman, Fiddian, made sure they learned to like the same music he does.

Programming punks

It’s a tough task to get a robot to learn what punk music sounds like, but there are lots of hints lurking in our own brains. Inside your brain are billions of connected cells called neurons that can send messages to one another. When and where the messages get sent depends on how strong each connection is, and we forge new connections whenever we learn something.

What the robots’ programmers did was to wire up a network of computerised connections like the ones in a real brain. Then they let the robots sample lots of different kinds of music and told them what it was, like reggae, pop, and of course, Fiddian’s collection of classic punk. That way the connections in the neural network got stronger and stronger – the more music the robots listened to, the easier it got for them to recognise what kind of stuff it was. When they recognised a style they’d been told to look out for, they would dance, firing a cylinder of compressed air to make them jump up and down.
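Here is a tiny Python flavour of that kind of learning (nothing like the robots’ real neural network, and the “features” of each song are invented): a single artificial neuron nudges its connection weights whenever it guesses a training song’s style wrongly.

```python
# Each training song is (tempo in beats per minute, guitar distortion 0-1); label 1 = punk
training_songs = [
    ((180, 0.9), 1), ((170, 0.8), 1), ((190, 0.95), 1),   # classic punk
    ((90, 0.1), 0),  ((100, 0.2), 0), ((70, 0.05), 0),    # reggae / pop
]

weights = [0.0, 0.0]
bias = 0.0
RATE = 0.01

for _ in range(100):                       # listen to the training songs over and over
    for (tempo, distortion), is_punk in training_songs:
        features = (tempo / 200.0, distortion)             # scale roughly to 0-1
        guess = 1 if sum(w * f for w, f in zip(weights, features)) + bias > 0 else 0
        error = is_punk - guess
        # Connections get stronger (or weaker) each time the guess is wrong
        weights = [w + RATE * error * f for w, f in zip(weights, features)]
        bias += RATE * error

new_song = (185, 0.85)                     # something Neurotic and the PVCs might play
features = (new_song[0] / 200.0, new_song[1])
dance = sum(w * f for w, f in zip(weights, features)) + bias > 0
print("Pogo!" if dance else "Stand still.")
```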

The robots’ first gig

The last step was to tell the robots to go out and enjoy some punk. The programmers turned off the robots’ neural connections to other kinds of music, so no Kylie or Bob Marley would satisfy them. They would only dance to the angry, churning sound of punk guitars. The robots got dressed up in spray-painted leather, studded belts and safety pins, so with their bloblike bodies they looked like extra-tough boxing gloves on sticks. Then the three two-metre tall troublemakers went to their first gig.

Whenever a band begins to play, the robots’ computer system analyses the sound coming from the stage. If the patterns in it look the same as the idea of punk music they’ve learned, the robots begin to dance. If the pattern isn’t quite right, they stand still. For lots of songs they hardly dance at all, which might seem weird since all the bands that are playing the gig call themselves punk bands. Except there are many different styles of punk music, and the robots have been brought up listening to Fiddian’s favourites. The other styles aren’t close enough to the robots’ idea of punk – they’ve developed taste, and it’s the same as Fiddian’s. Which is why the robots go crazy for Neurotic and the PVCs. Fiddian’s songs are influenced by classic punk like the Clash, the Sex Pistols and Siouxsie & the Banshees, which is exactly the music he’s taught the robots to love. As the robots jump wildly up and down, it’s clear that Neurotic and the PVCs now have three tall, tough, computerised superfans.