“Get rhythm when you get the blues” – as Country legend Johnny Cash’s lyrics suggest, rhythm cheers people up. We can all hear, feel and see it. We can clap, tap or beatbox. It comes naturally, but how? We don’t really know. You can help find out by playing a game based on some music that involves nothing but clapping. If you were one of the best back in 2015, you could have been invited to play live with a London orchestra.
We can all play a rhythm using both our bodies and instruments, though for most of us maybe with only a single cowbell rather than a full drum kit. By performing simple rhythms with other people we can make really complex sounds, both playing music and playing traditional clapping games. Rhythm matters. It plays a part in social gatherings and performance in cultures and traditions across the world. It even defines different types of music from jazz to classical, from folk to pop and rock.
Lots of people with a great sense of rhythm, whether musicians or children playing complex clapping games in the playground, have never actually studied how to do it though. So how do we learn rhythm? Our team, based at Queen Mary, joined up with the London Sinfonietta chamber orchestra and app developers Touch Press to find out, using a piece of music called Clapping Music.
Clapping Music is a 4-minute piece by the minimalist composer Steve Reich. The whole thing is based on one rhythmic pattern that two people clap together. One person claps the pattern without changing it – known as the static pattern. The other changes the pattern, shifting the rhythm by one beat every twelve repetitions. The result is an ever-changing cycle of surprisingly complicated rhythms. In spite of its apparent simplicity, it's really challenging to play and has inspired all sorts of people from rock legend David Bowie to the virtuoso, deaf percussionist Dame Evelyn Glennie. You can learn to play Clapping Music and help us to understand how we learn rhythm at the same time.
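To get a feel for how the shifting works, here is a tiny Python sketch. It uses one common way of writing Reich's 12-beat pattern down (an 'x' for a clap, a '.' for a rest), so treat it as illustrative rather than a definitive transcription of the score:

# A sketch of how Clapping Music's shifting pattern works.
# 'x' marks a clap, '.' marks a rest.
PATTERN = "xxx.xx.x.xx."   # one common way of writing down Reich's 12-beat rhythm

def shifted(pattern, shift):
    """Rotate the pattern round by 'shift' beats."""
    return pattern[shift:] + pattern[:shift]

# Player 1 claps the static pattern throughout.
# Player 2 shifts one beat further each section until the two are back in unison.
for shift in range(len(PATTERN) + 1):
    print("Player 1:", PATTERN, "   Player 2:", shifted(PATTERN, shift))

Each time round the loop, player 2's copy of the pattern is rotated one beat further, until after twelve shifts the two players are back in unison.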
Our team created a free game for the iPhone and iPad also called Clapping Music. You play for real against the static pattern. To get the best score you must keep tapping accurately as the pattern changes, but stay in step with the static rhythm. It’s harder than it sounds!
We analysed the anonymous gameplay data, together with basic information about the people playing, like their age and musical experience. By looking at how people progress through the game we explored how people of different ages and experience develop rhythmic skills.
The project has led to some interesting computer science in designing the algorithms that measure how accurate a person's tapping is. It sounds easy but is actually quite challenging. For example, we don't want to penalise someone playing the right pattern slightly delayed more than another person playing completely the wrong pattern. It has also thrown up questions about game design. How do we set and change how difficult the game is? Players, however skillful, must feel challenged to improve, but it must not be so difficult that they can't do it.
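The game's real scoring algorithm is more sophisticated than we can show here, but this Python sketch (with made-up numbers) gives the flavour of the problem: allow for a consistent delay first, and only then measure how far each tap is from where it should be.

# A toy sketch of the scoring problem (not the real game's algorithm).
# Times are in seconds. The idea: allow for a constant delay, then measure
# how far each expected beat is from the player's nearest tap.

def score_taps(expected, taps):
    if not taps:
        return 0.0
    # Estimate any constant delay: a player who is consistently slightly
    # late is still playing the right pattern.
    offsets = [min(taps, key=lambda t: abs(t - e)) - e for e in expected]
    delay = sum(offsets) / len(offsets)
    # Remaining error once the delay is taken out.
    errors = [min(abs((t - delay) - e) for t in taps) for e in expected]
    avg_error = sum(errors) / len(errors)
    # Turn the error into a 0-100 score (an average error of 0.1 s scores zero).
    return max(0.0, 100.0 * (1.0 - avg_error / 0.1))

beats = [0.0, 0.5, 1.0, 1.5]             # when the claps should happen
player = [0.03, 0.52, 1.04, 1.53]        # consistently a little late
print(round(score_taps(beats, player)))  # still scores in the nineties

A player who is consistently a few hundredths of a second late still scores well here, which is the kind of behaviour a fair scoring algorithm needs.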
You don't need to be a musician to play; in fact we would love as many people as possible to download it and get tapping and clapping! High scorers were invited to take part in live performance events on stage with members of the London Sinfonietta back in 2015. Get the app, get tapping, get rhythm (and have some fun – you won't get the blues)!
– by Marcus Pearce and Samantha Duffy, Queen Mary University of London
Updated from the archive
This post was originally published in our CS4FN magazine (issue 19) in 2015, so the tense has been updated to reflect that it’s now 2025.
Many names stand out as pioneers of electronic music, combining computer science, electronics and music to create new and amazing sounds. Kraftwerk would top many people's lists of the most influential bands and Jean-Michel Jarre must surely be up there. Giorgio Moroder returned to the limelight with Daft Punk, having previously invented electronic disco by producing Donna Summer's "I Feel Love". Will.i.am, La Roux or Goldfrapp might be on your playlist. One of the most influential creators of electronic music, a legend to those in the know, is barely known by comparison though: Delia Derbyshire.
Delia worked for the BBC Radiophonic Workshop, the department tasked with producing innovative music to go with the BBC's innovative programming, and played a major part in its fame. She had originally tried to get a job at Decca Records but was told they didn't employ women in their recording studios (a big loss for them!). In creating the sounds and soundscapes behind hundreds of TV and radio programmes, long before electronic music went mainstream, her ideas have influenced just about everyone in the field, whether they have heard of her or not.
The first person to realise that machines would one day be able not just to play music but also to compose it was the Victorian programmer, and Countess, Ada Lovelace.
So have you heard her work? You will almost certainly know her most famous piece of music. She created the original electronic version of the Doctor Who theme long before pop stars were playing electronic music. Each individual note was created separately, by cutting, splicing, speeding up and slowing down recordings of things like a plucked string and white noise. So why haven't you heard of her? It's time more people had.
Mike Lynch was one of Britain's most successful entrepreneurs. An electrical engineer, he built his businesses around machine learning long before it was a buzz phrase. He also drew heavily on a branch of maths called Bayesian statistics, which is concerned with understanding how likely things, even apparently unlikely things, are to actually happen. This was so central to his success that he named his superyacht, Bayesian, after it. Tragically, he died when Bayesian sank in a freak, extremely unlikely accident. The gods of the sea are cruel.
Mike started his path to becoming an entrepreneur at school. He was interested in music, and especially in the then new but increasingly exciting digital synthesisers that were being used by pop bands and were in the middle of revolutionising music. He couldn't afford one of his own, though, as they cost thousands. He was sure he could design and build one to sell more cheaply. So he set about doing it.
He continued working on his synthesiser project as a hobby at Cambridge University, where he originally studied science, but changed to his by-then passion of electrical engineering. A risk of visiting his room was that you might painfully step on a resistor or capacitor, as they got everywhere. That was not surprising given that his living room was also his workshop. By this point he was also working more specifically on the idea of setting up a company to sell his synthesiser designs. He eventually got his first break in the business world when chatting to someone in a pub who was in the music industry. They were inspired enough to give him the few thousand pounds he needed to finance his first startup company, Lynett Systems.
By now he was doing a PhD in electrical engineering, funded by EPSRC, and went on to become a research fellow building both his research and innovation skills. His focus was on signal processing which was a natural research area given his work on synthesisers. They are essentially just computers that generate sounds. They create digital signals representing sounds and allow you to manipulate them to create new sounds. It is all just signal processing where the signals ultimately represent music.
However, Mike’s research and ideas were more general than just being applicable to audio. Ultimately, Mike moved away from music, and focussed on using his signal processing skills, and ideas around pattern matching to process images. Images are signals too (resulting from light rather than sound). Making a machine understand what is actually in a picture (really just lots of patches of coloured light) is a signal processing problem. To work out what an image shows, you need to turn those coloured blobs into lines, then into shapes, then into objects that you can identify. Our brains do this seamlessly so it seems easy to us, but actually it is a very hard problem, one that evolution has just found good solutions to. This is what happens whether the image is that captured by the camera of a robot “eye” trying to understand the world or a machine trying to work out what a medical scan shows.
This is where the need for maths comes in: to work out probabilities, how likely different things are. Part of the task of recognising lines, shapes and objects is working out how likely one possibility is over another. How likely is it that that band of light is a line? How likely is it that that line is part of this shape rather than that one? And so on. Bayesian statistics gives a way to compute probabilities based on the information you already know (or suspect). When the likelihood of events is seen through this lens, things that seem highly unlikely can turn out to be highly probable (or vice versa), so it can give much more accurate predictions than traditional statistics. Mike's PhD used this way of calculating probabilities even though some statisticians disdained it. Because of that it was shunned by some in the machine learning community too, but Mike embraced it and made it central to all his work, which gave his programs an edge.
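At its heart Bayesian reasoning is one formula, Bayes' rule. The little Python sketch below, with made-up numbers and nothing to do with any of Mike's actual software, shows the idea applied to a toy image recognition question:

# Bayes' rule in miniature (made-up numbers, just to illustrate the idea).
# Question: given a bright streak of pixels, how likely is it to be a real
# line in the scene rather than just noise?

p_line = 0.01                 # prior: only 1% of image patches contain a line
p_bright_given_line = 0.9     # real lines almost always show up as bright streaks
p_bright_given_noise = 0.05   # noise occasionally produces one too

# Bayes' rule: P(line | bright) = P(bright | line) * P(line) / P(bright)
p_bright = p_bright_given_line * p_line + p_bright_given_noise * (1 - p_line)
p_line_given_bright = p_bright_given_line * p_line / p_bright

print(round(p_line_given_bright, 2))   # about 0.15

Even though real lines almost always produce bright streaks, any particular bright streak is still probably noise, simply because lines are rare to begin with. Getting counter-intuitive answers like that right is exactly what made the Bayesian approach so powerful for pattern recognition.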
While Lynett Systems didn’t itself make him a billionaire, the experience from setting up that first company became a launch pad for other innovations based on similar technology and ideas. It gave him the initial experience and skills, but also meant he had started to build the networks with potential investors. He did what great entrepreneurs do and didn’t rest on his laurels with just one idea and one company, but started to work on new ideas, and new companies arising from his PhD research.
He realised one important market for image pattern recognition, that was ripe for dominating, was fingerprint recognition. He therefore set about writing software that could match fingerprints far faster and more accurately than anyone else. His new company, Cambridge Neurodynamics, filled a gap, with his software being used by police forces nationwide. That then led to other spin-offs using similar technology.
He was turning the computational thinking skills of abstraction and generalisation into a way to make money. By creating core general technology that solved the very general problems of signal processing and pattern matching, he could then relatively easily adapt and reuse it to apply to apparently different novel problems, and so markets, with one product leading to the next. By applying his image recognition solution to characters, for example, he created software (and a new company) that searched documents based on character recognition. That led on to a company searching databases, and finally to the company that made him famous, Autonomy.
One of his great loves was his dog, Toby, a friendly, enthusiastic beast. Mike's take on the idea of a search engine was fronted by Toby – in an early version, with his sights set on the nascent search engine market, his search engine's user interface involved a lovable cartoon dog who enthusiastically fetched the information you needed. However, in business, finding your market and getting the right business model is everything. Rather than competing with the big US search engine companies that were emerging, he switched to focussing on in-house business applications.

He realised businesses were becoming overwhelmed with the amount of information they held on their servers, whether in documents or emails, phone calls or videos. Filing cabinets were becoming history, replaced by an anarchic mess of files holding different media, individually organised, if at all, and containing "unstructured data". This kind of data contrasts with the then dominant idea that important data should be organised and stored in a database to make processing it easier. Mike realised that there was lots of data held by companies that mattered to them, but that just was not structured like that and never would be. There was a niche market there: providing a novel solution to a newly emerging business problem. Focussing on that, his search company, Autonomy, took off, gaining corporate giants as clients, including the BBC. As a hands-on CEO, with both the technical skills to write the code himself and the business skills to turn it into products businesses needed, he ensured the company quickly grew. It was ultimately sold for $11 billion. (The sale led to an accusation of fraud in the US, but he was acquitted of all the charges.)
Investing
From firsthand experience he knew that to turn an idea into reality you needed angel investors: people willing to take a chance on your ideas. With the money he made, he therefore started investing himself, pouring the money he was making from his companies into other people's ideas. To be a successful investor you need to invest in companies likely to succeed while avoiding ones that will fail. This is also about understanding the likelihood of different things, obviously something he was good at. When he ultimately sold Autonomy, he used the money to create his own investment company, Invoke Capital. Through it he invested in a variety of tech startups across a wide range of areas, from cyber security, crime and law applications to medical and biomedical technologies, using his own technical skills and deep scientific knowledge to help make the right decisions. As a result, he contributed to the thriving Silicon Fen community of UK startup entrepreneurs, who were doing, and continue to do, exciting things in and around Cambridge, turning research and innovation into successful, innovative companies. He did this not only through his own ideas but by supporting the ideas of others.
Mike was successful because he combined business skills with a wide variety of technical skills including maths, electronic engineering and computer science, even bioengineering. He didn’t use his success to just build up a fortune but reinvested it in new ideas, new companies and new people. He has left a wonderful legacy as a result, all the more so if others follow his lead and invest their success in the success of others too.
Becoming a successful entrepreneur often starts with seeing a need: a problem someone has that needs to be fixed. For David Ronan, the need was for anyone to be able to mix and master music; the problem was how hard that is to do. Now his company RoEx is fixing that problem by combining signal processing and artificial intelligence tools applied to music. It is based on his research, originally done as a PhD student.
Musicians want to make music, though by “make music” they likely mean playing or composing music. The task of fiddling with buttons, sliders and dials on a mixing desk to balance the different tracks of music may not be a musician’s idea of what making music is really about, even though it is “making music” to a sound engineer or producer. However, mixing is now an important part of the modern process of creating professional standard music.
This is in part a result of the multitrack recording revolution of the 1960s. Multitrack involves recording different parts of the music as different tracks, then combining them later, adding effects, combining them some more … George Martin with the Beatles pioneered its use for mainstream pop music in the 1960s and the Beach Boys created their unique "Pet Sounds" through this kind of multitrack recording too. Now, it is totally standard. Originally, though, recording music involved running a recording machine while a band, orchestra and/or singers did their thing together. If it wasn't good enough they would do it all again from the beginning (and again, and again…). This is similar to the way that actors will act the same scene over and over dozens of times until the director is happy. Once everyone was happy with the take (or recording), that was basically it and they moved on to the next song to record.
With the advent of multitracking, each musician could instead play or sing their part on their own. They didn't have to record at the same time or even be in the same place, as the separate parts could be mixed together into a single whole later. It then became the job of the engineers and the producer to put it all together. Part of this is adjusting the levels of each track so they are balanced: you want to hear the vocals, for example, and not have them drowned out by the drums. At this point the engineer can also fix mistakes, cutting in a rerecording of one small part to replace something that wasn't played quite right. Different special effects can also be applied to different tracks (playing one track at a different speed or even backwards, with reverb or auto-tuned, for example). You can also take one singer and allow them to sing with multiple versions of themselves so that they are their own backing group, singing layered harmonies with themselves. One person can even play all the separate instruments as, for example, Prince often did on his recordings. The engineers and producer then create the final sound, making the final master recording. Some musicians, like Madonna, Ariana Grande and Taylor Swift, do take part in the production and engineering parts of making their records, or even take over completely, so they have total control of their sound. It takes experience though, and why shouldn't everyone have that amount of creative control?
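At its simplest, the level-balancing part of mixing is just a weighted sum of the tracks. The Python sketch below is a toy illustration of that one idea (it has nothing to do with any real mixing desk or with RoEx's software): each track gets its own gain, and the scaled samples are added up.

# A toy illustration of the heart of mixing: a weighted sum of tracks.
# Each track is just a list of sample values; the gains set the balance.
import math

def mixdown(tracks, gains):
    """Combine several tracks into one, each scaled by its own gain."""
    length = max(len(t) for t in tracks)
    mix = []
    for i in range(length):
        sample = sum(g * (t[i] if i < len(t) else 0.0)
                     for t, g in zip(tracks, gains))
        mix.append(sample)
    return mix

# Two pretend "tracks": a booming low drone and a higher vocal-like tone.
drums = [math.sin(2 * math.pi * 110 * n / 8000) for n in range(8000)]
vocal = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]

# Turn the drums down so the vocal isn't drowned out.
final_mix = mixdown([drums, vocal], gains=[0.3, 0.9])
print(len(final_mix), "samples in the mix")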
Doing all the mixing, correction and overdubbing can be laborious and takes a lot of skill, though. It can be very creative in itself too, which is why producers are often as famous as the artists they produce (think Quincy Jones or Nile Rodgers, for example). However, not everyone wanting to make their own music is interested in spending their time doing laborious mixing, but if you don't yet have the skill yourself and can't afford to pay a producer what do you do?
That was the need that David spotted. He wanted to do for music what Instagram filters did for images, and make it easy for anyone to make and publish their own professional standard music. Based in part on his PhD research he developed tools that could do the mixing, leaving a musician to focus on experimenting with the sound itself.
David had spent several years leading the research team of an earlier startup he helped found called AI Music. It worked on adaptive music: music that changes based on what is happening around it, whether in the world or in a video game being played. It was later bought by Apple. This was the highlight of his career to that point and it helped cement his desire to continue to be an innovator and entrepreneur.
With the help of Queen Mary, where he did his PhD, he therefore decided to set up his new company RoEx. It provides an AI-driven mixing and mastering service. You choose basic mixing options and can experiment with different results, so you still have creative control. However, you no longer need expensive equipment, nor need to build the skills to use it. The process becomes far faster too. Mixing your music becomes much more about experimenting with the sound, the machine having taken over the laborious parts: working out the optimum way to mix different tracks and producing a professional quality master recording at the end.
David didn’t just see a need and have an idea of how to solve it, he turned it into something that people want to use by not only developing the technology, but also making sure he really understood the need. He worked with musicians and producers through a long research and development process to ensure his product really works for any musician.
What was the first technology for recording music: CDs? Records? 78s? The phonograph? No. Trained songbirds came before all of them.
Composer, musician, engineer and visiting fellow at Goldsmiths University, Sarah Angliss usually has a robot on stage performing live with her. These robots are not slick high-tech cyber-beings, but junk-modelled automata. One, named Hugo, sports a spooky ventriloquist doll's head! Sarah builds and programs her robots herself.
She is also a sound historian, and worked on a Radio 4 documentary, 'The Bird Fancyer's Delight', uncovering how birds have been used to provide music across the ages. During the 1700s people trained songbirds to sing human-invented tunes in their homes. You could buy special manuals showing how to train your pet bird. By playing young birds a tune over and over again, and in the absence of other birds to put them right, they would adopt that song as their own. Playing the recorder was one way to train them, but special instruments were also invented to do the job automatically.
With the invention of the phonograph, home songbird popularity plummeted but it didn’t completely die out. Blackbirds, thrushes, canaries, budgies, bullfinches and other songbirds have continued to be schooled to learn songs that they would never sing in the wild.
By Jo Brodie and Paul Curzon, Queen Mary University of London
Even if you're the best keyboard player in the world the sound you can get from any one key is pretty much limited to 'loud' or 'soft', 'short' or 'long', depending on how hard and how quickly you press it. The note's sound can't be changed once the key is pressed. At best, on a piano, you can make it last longer using the sustain pedal. A violinist, on the other hand, can move their finger on the string while it's still being played, changing its pitch to give a nice vibrato effect. Wouldn't it be fun if keyboard players could do similar things?
Andrew McPherson and other digital music researchers at QMUL and Drexel University came up with a way to give keyboard performers more room to express themselves like this. TouchKeys is a thin plastic coating, overlaid on each key of a keyboard, but barely noticeable to the keyboard player. The coating contains sensors and electronics that can change the sound when a key is touched. The TouchKeys electronics connect to the keyboard's own controller and so change the sounds already being made, expanding the keyboard's range. This opens up a whole world of new sonic possibilities to a performer.
The sensors can follow the position and movement of your fingers and respond appropriately in real-time, extending the range of sounds you can get from your keyboard. By wiggling your finger from side to side on a key you can make a vibrato effect, or you can change the note's pitch completely by sliding your finger up and down the key. The technology is similar to a phone's touchscreen where different movements ('gestures') make different things happen. An advantage of the system is that it can easily be applied to a keyboard a musician already knows how to play, so they'll find it easy to start to use without having to make big changes to their style of playing.
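To give a feel for the kind of mapping involved, here is our own illustrative guess in Python (not the actual TouchKeys software): finger position along the key bends the pitch, and side-to-side wiggling adds a vibrato wobble on top.

# An illustrative guess at the flavour of the mapping (not TouchKeys code):
# finger position along the key bends the pitch; side-to-side wiggling
# adds a vibrato wobble on top.
import math

def bent_pitch(base_freq, finger_y, wiggle_amount, time):
    """
    base_freq:     the note's normal frequency in Hz
    finger_y:      0.0 (bottom of the key) to 1.0 (top of the key)
    wiggle_amount: 0.0 (finger still) to 1.0 (vigorous side-to-side movement)
    time:          seconds since the note started
    """
    bend = (finger_y - 0.5) * 2.0                                      # up to +/- 1 semitone
    vibrato = 0.3 * wiggle_amount * math.sin(2 * math.pi * 6 * time)   # a 6 Hz wobble
    semitones = bend + vibrato
    return base_freq * 2 ** (semitones / 12)

print(round(bent_pitch(440.0, 0.75, 0.5, 0.1), 1))   # an A bent slightly sharp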
They wanted to get TouchKeys out of the lab and into the hands of more musicians, so they teamed up with members of London's Music Hackspace community, who run courses in electronic music, to create some initial versions for sale. Early adopters were able to choose either a DIY kit to add to their own keyboard, wire up and start to play, or a ready-to-play keyboard with the TouchKeys system already installed.
The result is that lots of musicians are already using TouchKeys to get more from their keyboard in exciting new ways.
Earlier this year Professor Andrew McPherson gave his inaugural lecture (a public lecture given by an academic who has been promoted) at Imperial College London where he is continuing his research. You can watch his lecture – Making technology to make music – below.
Computers compress files to save space. But compression also allows them to create music!
Music is special. It's one of the things, like language, that makes us human, separating us from animals. It's also special as art, because it doesn't exist as an object in the world – it depends on human memory. "But what about CDs? They're objects in the world", you might say, and you'd be right, but the CD is not the music. The CD contains data files of numbers. Those numbers are translated by electronics into the movements of a loudspeaker, to create sound waves. Even the sound waves aren't music! They only become music when a human hears them, because understanding music is about noticing repetition, variation and development in its structure. That's why songs have verses and choruses: so we can find starting points to understand their structure. In fact, we're so good at understanding musical structure, we don't even notice we're doing it. What's more, music affects us emotionally: we get excited (using the same chemicals that get us excited when we're in love or ready to flee danger) when we hear the anthem section of a trance track, or recognise the big theme returning at the end of a symphony.
Surprisingly, brains seem to understand musical structure in a way that’s like the algorithms computer scientists use to compress data. It’s better to store data compressed than uncompressed, because it takes less storage space. We think that’s why brains do it too.
Even more surprisingly, brains also seem to be able to learn the best way to store compressed music data. Computers use bits as their basic storage unit, but we can make groups of bits work like other things (numbers, words, pictures, angry birds…); brains seem to do something similar. For example, pitch (high vs. low notes) in sequence is an important part of music: we build melodies by lining up notes of different pitch one after the other. As we learn to hear music (starting before birth, and continuing throughout life), we learn to remember pitch in ever more efficient ways, giving our compression algorithms better and better chances to compress well. And so we remember music better.
Our team use compression algorithms to understand how music works in the human mind. We have discovered that, when our programs compress music, they can sometimes predict musical structures, even if neither they nor a human have “heard” them before. To compress something, you find large sections of repeated data and replace each with a label saying “this is one of those”. It’s like labelling a book with its title: if you’ve read Lord of the Rings, when I say the title you know what I mean without me telling the story. If we do this to the internal structure of music, there are little repetitions everywhere, and the order that they appear is what makes up the music’s structure.
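Here is a toy Python illustration of that labelling idea (nothing like the real research software, and using a made-up melody): repeated four-note motifs get replaced by one-letter labels.

# A toy version of the labelling idea: spot repeated four-note motifs in a
# melody and replace each repeat with a short label.
melody = ["C", "D", "E", "C",    # motif A
          "C", "D", "E", "C",    # motif A again
          "E", "F", "G", "E",    # motif B
          "E", "F", "G", "E"]    # motif B again

def compress(notes, motif_length):
    """Replace repeats of earlier motifs with labels 'A', 'B', ..."""
    motifs = []    # the distinct motifs seen so far, in order
    labels = []    # the compressed version of the melody
    for start in range(0, len(notes), motif_length):
        motif = tuple(notes[start:start + motif_length])
        if motif not in motifs:
            motifs.append(motif)
        labels.append(chr(ord("A") + motifs.index(motif)))
    return labels, motifs

labels, motifs = compress(melody, 4)
print(labels)    # ['A', 'A', 'B', 'B']: 16 notes stored as 4 labels plus 2 motifs

# "Decompress" the labels in a different order to get a brand new melody
# built from the same ingredients.
new_melody = [note for label in ["A", "B", "B", "A"]
              for note in motifs[ord(label) - ord("A")]]
print(new_melody)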
If we compress music, but then decompress it in a different way, we can get a new piece of music in a similar style or genre. We have evidence that human composers do that too!
What our programs are doing is learning to create new music. There’s a long way to go before they produce music you’ll want to dance to – but we’re getting there!
This article was first published on the original CS4FN website and a copy can be found on page 12 in Issue 18 of the CS4FN magazine: Machines that are creative. You can download a free PDF copy below, along with all of our other free magazines and booklets at our downloads site.
What links James Bond, a classic 1950s radio comedy series and a machine for creating music by drawing? … Electronic music pioneer: Daphne Oram.
Oram was one of the earliest musicians to experiment with electronic music, and was the first woman to create an electronic instrument. She realised that the advent of electronic music meant composers no longer had to worry about whether anyone could actually physically perform the music they composed. If you could write it down in a machine readable way then machines could play it electronically. That idea opened up whole new sounds and forms of music and is an idea that pop stars and music producers still make use of today.
She learnt to play music as a child and was good enough to be offered a place at the Royal College of Music, though turned it down. She also played with radio electronics with her brothers, creating radio gadgets and broadcasting music from one room to another. Combining music with electronics became her passion and she joined the BBC as a sound engineer. This was during World War 2 and her job included being the person ready during a live music broadcast to swap in a recording at just the right point if, for example, there was an air raid that meant the performance had to be abandoned. The show, after all, had to go on.
Composing electronic music
She went on to take this idea of combining an electronic recording with live performance further and composed a piece of music called Still Point that fully combined orchestral and electronic music in a completely novel way. The BBC turned down the idea of broadcasting it, however, so it was not played for 70 years, until it was rediscovered after her death, ultimately being played at a BBC Prom.
Composers no longer had to worry about whether anyone could actually physically perform the music they composed
She started instead to compose electronic music and sounds for radio shows for the BBC, which is where the comedy series link comes in. She created sound effects for a sketch for the Goon Show (the show which made the names of comics including Spike Milligan and Peter Sellers). She constantly played with new techniques. Years later it became standard for pop musicians to mess with tapes of music to get interesting effects, speeding them up and down, rerecording fragments, creating loops, running tapes backwards, and so on. These kinds of effects were part of the amazing sounds of the Beatles, for example. Oram was one of the first to experiment with these kinds of effects and use them in her compositions – long before pop star producers.
One of the most influential things she did was set up the BBC Radiophonic Workshop, which went on to revolutionise the way sound effects and scores for films and shows were created. Oram, though, left the BBC shortly after it was founded, leaving the way open for other BBC pioneers like Delia Derbyshire. Oram felt she wasn't getting credit for her work, and couldn't push forward with some of her ideas. Instead she set herself up as an independent composer, creating effects for films and theatre. One of her contracts involved creating electronic music that was used on the soundtracks of the early Bond films starring Sean Connery – so Shirley Bassey is not the only woman to contribute to the Bond sound!
The Music Machine
While her film work brought in the money, she continued with her real passion which was to create a completely new and highly versatile way to create music…by drawing. She built a machine – the Oramics Machine – that read a composition drawn onto film reels. It fulfilled her idea of having a machine that could play anything she could compose (and fulfilled a thought she had as a child when she wondered how you could play the notes that fell between the keys on a piano!).
The 35mm film that was the basis of her system dates all the way back to the 19th century, when George Eastman, Thomas Edison and William Kennedy Dickson pioneered the invention of film-based photography and then movies. It involved a light-sensitive layer painted on strips of film, with holes down the side that allowed the film to be advanced. This gave Oram a recording medium. She could etch or paint subtle shapes and patterns on to the film. In a movie, light is shone through the film, projecting the pictures on the film on to the screen. Oram instead used light sensors to detect the patterns on the film and convert them into electronic signals. Electronic circuitry she designed (and was awarded patents for) controlled cathode ray tubes that showed the original drawn patterns, but now as electrical signals. Ultimately these electrical signals drove speakers. Key to the flexibility of the system was that different aspects of the music were controlled by patterns on different films. One, for example, controlled the frequency of the sound, others the timbre or tone quality, and others the volume. These different control signals for the music were then combined by Oram's circuitry. Combining the fine control of the drawings with the multiple films meant she had created a music machine far more flexible in the sound it could produce than any traditional instrument or orchestra. Modern music production facilities use very similar approaches today, though based on software systems rather than the 1960s technology available to Oram.
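The real machine did all of this with film, light sensors and analogue circuitry, but the core idea translates naturally into a few lines of code. The Python sketch below is a loose modern analogy, not a simulation of the actual Oramics Machine: two "drawn" control curves, one for pitch and one for volume, are combined into sound samples.

# The Oramics idea as a loose modern analogy: separate "drawn" curves
# control the pitch and the volume, and are combined into sound samples.
# (The real machine did this with film, light sensors and analogue circuitry.)
import math

sample_rate = 8000
duration = 2.0                                 # seconds
n_samples = int(sample_rate * duration)

# Two control curves, standing in for shapes painted on two strips of film:
# the pitch glides from 220 Hz up to 440 Hz, the volume swells then fades.
def pitch_curve(t):
    return 220.0 + 220.0 * (t / duration)

def volume_curve(t):
    return math.sin(math.pi * t / duration)

# Combine the control curves, sample by sample, into audio.
samples = []
phase = 0.0
for i in range(n_samples):
    t = i / sample_rate
    phase += 2 * math.pi * pitch_curve(t) / sample_rate
    samples.append(volume_curve(t) * math.sin(phase))

print(len(samples), "samples of a gliding, swelling tone")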
Ultimately, Daphne Oram was ahead of her time as a result of combining her two childhood fascinations of music and electronics in a way that had not been done before. She may not be as famous as the great record producers who followed her, but they owe a lot to her ideas and innovation.
The first recorded music by a computer program was the result of a flamboyant flourish added on the end of a program that played draughts in the early 1950s. It played God Save the King.
The first computers were developed towards the end of the second world war to do the number crunching needed to break the German codes. After the War several groups set about manufacturing computers around the world: including three in the UK. This was still a time when computers filled whole rooms and it was widely believed that a whole country would only need a few. The uses envisioned tended to be to do lots of number crunching.
A small group of people could see that they could be much more fun than that, one of them being schoolteacher Christopher Strachey. When he was introduced to the Pilot ACE computer on a visit to the National Physical Laboratory, he set about writing, in his spare time, a program that could play against humans at draughts. Unfortunately, the computer didn't have enough memory for his program.
He knew Alan Turing, one of those wartime pioneers, when they were both at university before the War. He luckily heard that Turing, now working at the University of Manchester, was working on the new Ferranti Mark I computer, which would have more memory, so wrote to him to see if he could get to play with it. Turing invited him to visit and on the second visit, having had a chance to write a version of the program for the new machine, he was given the chance to try to get his draughts program to work on the Mark I. He was left to get on with it that evening.
He astonished everyone the next morning by having the program working and ready to demonstrate. He had worked through the night to debug it. Not only that, as it finished running, to everyone's surprise, the computer played the National Anthem, God Save the King. As Frank Cooper, one of those there at the time, said: "We were all agog to know how this had been done." Strachey's reputation as one of the first wizard programmers was sealed.
The reason it was possible to play sounds on the computer at all had nothing to do with music. A special command called 'Hoot' had been included in the set of instructions programmers could use (called the 'order code' at the time) when programming the Mark I computer. The computer was connected to a loudspeaker and Hoot was used to signal things like the end of the program, alerting the operators. Apparently it hadn't occurred to anyone there but Strachey that this was everything you needed to create the first computer music.
He also programmed it to play Baa Baa Black Sheep and went on to write a more general program that would allow any tune to be played. When a BBC Live Broadcast Unit visited the University in 1951 to see the computer for Children's Hour, the Mark I gave the first ever broadcast performance of computer music, playing Strachey's music: the UK National Anthem, Baa Baa Black Sheep and also In the Mood.
While this was the first recorded computer music it is likely that Strachey was beaten to creating the first actual programmed computer music by a team in Australia who had similar ideas and did a similar thing probably slightly earlier. They used the equivalent hoot on the CSIRAC computer developed there by Trevor Pearcey and programmed by Geoff Hill. Both teams were years ahead of anyone else and it was a long time before anyone took the idea of computer music seriously.
Strachey went on to be a leading figure in the design of programming languages, responsible for many of the key advances that have led to programmers being able to write the vast and complex programs of today.
The recording made of the performance has recently been rediscovered and restored so you can now listen to the performance yourself:
Rock star David Bowie co-wrote a program that generated lyric ideas. It gave him inspiration for some of his most famous songs. It generated sentences at random based on something called the 'cut-up' technique: an algorithm for writing lyrics that he was already following by hand. You take sentences from completely different places, cut them into bits and combine them in new ways. The randomness in the algorithm creates strange combinations of ideas and he would use ones that caught his attention, sometimes building whole songs around the ideas they expressed.
Tools for creativity
Rather than being an algorithm that is creative in itself, it is perhaps more a tool to help people (or perhaps other algorithms) be more creative. Both kinds of algorithm are of course useful. It does help highlight an issue with any “creative algorithm”, whether creating new art, music or writing. If the algorithm produces lots of output and a human then chooses the ones to keep (and show others), then where is the creativity? In the algorithm or in the person? That selection process of knowing what to keep and what to discard (or keep working on) seems to be a key part of creativity. Any truly creative program should therefore include a module to do such vetting of its work!
All that aside, an algorithm is certainly part of the reason Bowie's song lyrics were often so surreal and intriguing!
Write a cut-up technique program
Why not try writing your own cut-up technique program to produce lyrics? You will likely need to use the string processing libraries of whatever language you choose. You could feed it things like the text of webpages or news reports. If you don't program yet, do it by hand: cut up magazines and shuffle the part-sentences before gluing them back together. If you do program, there is a minimal sketch below to get you started.
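Here is one minimal sketch in Python (the sentences are just placeholder fragments, so swap in your own text): it chops each sentence roughly in half, shuffles the halves and glues them back together into new lines.

# A minimal cut-up lyric generator: chop sentences into fragments,
# shuffle them, and glue them back together. Swap in your own text!
import random

sentences = [
    "the city sleeps under electric stars",
    "she kept the photograph in a silver tin",
    "nobody remembers the name of the storm",
    "the radio hums a tune from another year",
]

# Cut each sentence roughly in half.
fragments = []
for sentence in sentences:
    words = sentence.split()
    middle = len(words) // 2
    fragments.append(" ".join(words[:middle]))
    fragments.append(" ".join(words[middle:]))

# Shuffle the fragments and stitch them into new lines.
random.shuffle(fragments)
for i in range(0, len(fragments), 2):
    print(fragments[i], fragments[i + 1])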
– Paul Curzon, Queen Mary University of London (Updated from the archive)