Composing ancient Korean music

Nearly 600 years ago King Sejong the Great of Korea published ‘Hangul’, a new and improved writing system for his people. To celebrate, he asked his court scholars to write an epic poem in Hangul, then asked his musicians to compose music to accompany it. The result was Yongbieocheonga, or ‘Songs of the Dragons Flying to Heaven’.

It was performed by musicians playing wind and stringed instruments: the Daegeum and Piri (wind instruments), the Haegeum and Ajaeng (bowed string instruments), and the Geomungo and Gayageum (plucked string instruments). These are the same instruments the AI you’ll meet below composed for. Each instrument had its own melody written out for the musician to follow. Only one piece of the written music survives fully intact (it is still performed!). Melodies from other pieces have survived, but only for a single instrument. That means those pieces can’t be played by a group of musicians, because all the other harmonies are missing.

A team of computer scientists decided to recreate the missing 15th-century Korean harmonies from just the single melodies (in the way the Bach Google Doodle does – see You’ll Be Bach!). They wanted to expand the ability of their AI tools to make sense of music beyond Western music.

They first taught their AI musician to recognise Korean music written in Hangul. Then it learnt which notes sound best when played together by different instruments. Finally, to generate music that could be played, it matched melodies to rhythms.
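The article doesn’t say how the team’s model works inside, but the ‘which notes sound best played together’ step can be sketched in a toy way: count, across training pieces, which harmony note most often sounds at the same time as each melody note, then suggest the most common partner. A minimal sketch (the note names and training data here are made up for illustration, not the team’s real data or method):

```python
from collections import Counter, defaultdict

# Toy training data: each piece is a list of (melody_note, harmony_note)
# pairs that sound together. The note names are illustrative only.
training_pieces = [
    [("hwang", "tae"), ("tae", "jung"), ("hwang", "tae"), ("im", "nam")],
    [("hwang", "tae"), ("im", "nam"), ("tae", "go")],
]

# Learn: count which harmony note most often accompanies each melody note.
partners = defaultdict(Counter)
for piece in training_pieces:
    for melody_note, harmony_note in piece:
        partners[melody_note][harmony_note] += 1

def harmonise(melody):
    """Suggest the most frequently seen partner for each melody note."""
    return [partners[note].most_common(1)[0][0] for note in melody]

print(harmonise(["hwang", "im", "tae"]))
```

A real system learns from whole pieces at once and balances melody, harmony and rhythm together, but the core idea – learn patterns from surviving music, then reuse them – is the same.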

It created a melody for each different instrument. The researchers then asked Korean musicians to perform the whole piece and to judge how well the AI musician had done. Happily, they thought that the music worked well and sounded correct. They could perform it with only a few small tweaks. 

You can listen to one of the performances and find out more below.

Jo Brodie and Paul Curzon, Queen Mary University of London


Watch…

More on …

We have LOTS of articles about music, audio and computer science. Have a look in these themed portals for more:



The Music and AI pages are sponsored by the EPSRC (UKRI3024: DA EPSRC university doctoral landscape award additional funding 2025 – Queen Mary University of London).

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


Separate your stems

Two cartoon faces, both purple, but the one on the left is a bluer purple and the one on the right is a redder purple. Two speech bubbles say "I have more blue" for the bluer purple and "I have more red" for the redder purple.
Image by CS4FN

AI can unmix music and isolate vocals

Purple can be created by mixing together red and blue paint. You can probably tell which of the faces in the image has more blue and which has more red. Does music work the same way?

Your brain can recognise the red and blue in purple while still seeing it as a whole colour. Music is similar. When you listen to a song your ears and brain hear all the sounds at once. The singing, guitars, drums and keyboard parts are mixed together, but you can also focus on the singing, or the keyboards or ….

Computer scientists have gone a step further with Artificial Intelligence. By training AI tools on lots of different songs they have taught them to do “source separation” – unmixing a recorded song back into its separate bits. Those separate bits are called stems. It is like taking purple paint and unmixing it to give blue and red again!
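Real source-separation tools are deep neural networks trained on huge numbers of songs (Spleeter and Demucs are well-known open-source examples). The core idea of un-mixing can still be shown in a toy way: if two ‘stems’ sit in different frequency ranges, a simple frequency mask can pull them back apart. A minimal sketch with NumPy (real music overlaps in frequency, which is exactly why the AI is needed):

```python
import numpy as np

rate = 8000                      # samples per second
t = np.arange(rate) / rate       # one second of time stamps

# Two "stems": a low hum (the 'bass') and a high tone (the 'vocal').
bass = np.sin(2 * np.pi * 110 * t)
vocal = np.sin(2 * np.pi * 880 * t)
mix = bass + vocal               # mixing is just adding the waveforms

# Un-mix with a frequency mask: keep only the bins below/above 400 Hz.
spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), 1 / rate)
low = np.fft.irfft(np.where(freqs < 400, spectrum, 0))
high = np.fft.irfft(np.where(freqs >= 400, spectrum, 0))

# The recovered stems are very close to the originals.
print(np.max(np.abs(low - bass)), np.max(np.abs(high - vocal)))
```

This only works because the two sources were chosen not to overlap in frequency. Singing, drums and guitars all share frequencies, so AI tools instead learn what each kind of sound ‘looks like’ in a recording and estimate much cleverer masks.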

A wide grey vase with two flowers in it (one red, one blue) at opposite ends of the vase with their stems definitely very separated.
Stems adapted from a plant pot image by HASSAN DYB from Pixabay.

“Not that kind of stem!”

Did you know?

Photographer Todd McLellan carefully takes gadgets apart and photographs all the pieces (search the web for his “Things Come Apart” series). When a piece of music has been blended together, though, an AI separating it again is a bit more like trying to un-bake a cake!

Jo Brodie and Paul Curzon, Queen Mary University of London




Jamming with JAM_BOT – an AI musician

A robot with a keyboard stomach playing the keyboard.
Image by CS4FN

Jordan Rudess is a rock keyboard player whose concerts sell out around the world. He works with a team of computer scientists at the MIT Media Lab to make his synthesisers do amazing things. Together they created an AI musician called JAM_BOT to play with him on-stage.

The team taught Jordan’s bot the different ways he plays by giving it lots of his music. It learnt about the rhythms and melodies he uses, and could then compose its own versions of his music when prompted.

JAM_BOT AI plays along on-stage

Jordan also trained JAM_BOT to play with him. It could carry on playing music that Jordan had started, or create a backing track to music he was currently playing. Jordan was able to choose how JAM_BOT played with him on stage using the keys on his keyboard.
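The article doesn’t reveal JAM_BOT’s internals, but ‘carrying on music someone has started’ can be illustrated with a much older, simpler technique: a first-order Markov chain that learns which note tends to follow which, then keeps sampling. A toy sketch (the melodies are made up, not Jordan’s, and real systems use far richer models):

```python
import random
from collections import defaultdict

# Toy training melodies as MIDI note numbers (illustrative only).
melodies = [
    [60, 62, 64, 62, 60, 64, 65, 67],
    [60, 64, 65, 67, 65, 64, 62, 60],
]

# Learn first-order transitions: which notes have followed each note.
follows = defaultdict(list)
for melody in melodies:
    for a, b in zip(melody, melody[1:]):
        follows[a].append(b)

def continue_melody(start, length, seed=0):
    """Carry on from `start` by sampling the learned transitions."""
    rng = random.Random(seed)
    out = list(start)
    for _ in range(length):
        out.append(rng.choice(follows[out[-1]]))
    return out

print(continue_melody([60, 62], length=6))
```

Because the continuation is sampled, running it with different seeds gives different but similar-sounding endings – a tiny echo of how an AI partner can respond differently every night on stage.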

What happened next?

The resulting concert was a mix of performer and AI with a delighted audience (and computer science team). Afterwards Jordan said “It’s been pretty mind-blowing to create this tech-based version of myself – like looking into a real-time musical mirror.”

Jo Brodie and Paul Curzon, Queen Mary University of London

More on …

  • A model of virtuosity (2024) MIT News [EXTERNAL] – Acclaimed keyboardist Jordan Rudess’s collaboration with the MIT Media Lab culminates in live improvisation between an AI “jam_bot” and the artist.



You’ll be Bach! – create music with the Bach Google Doodle

The Bach Google Doodle is an AI musician which has learned the patterns in over 300 pieces of music from Johann Sebastian Bach, a famous 18th century German composer. The AI musician will take the notes you give it and suggest harmonies in Bach’s style. It takes a melody and creates backing melodies for different instruments that sound pleasing. 

Visit the Bach Google Doodle, put some notes together, press ‘Harmonize’ and see what you think of the result. If you don’t like its first suggestion, press ‘Harmonize’ again to get another.

How to use it

Once on the page click the large play symbol (a white triangle) to open the doodle, and then again to run the intro demo (which you can skip on later visits).

Use your mouse to place notes at different positions on the five horizontal lines. If you hover over a note, an X will appear so you can delete it and place it somewhere else. If you press and hold a note, an option will appear to let you sharpen it (raise it by a semitone) or flatten it (lower it by a semitone). Press the play icon to hear what your composition sounds like, then press HARMONIZE to activate the AI. It will look at your piece of music and suggest a backing track (harmonies). You can then click the smiley or cross face to say whether or not you liked it.
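A semitone is the smallest step on a piano keyboard. In MIDI note numbers, sharpening a note adds 1 and flattening subtracts 1, and each semitone multiplies the note’s frequency by the twelfth root of two. A small sketch, assuming standard concert tuning (A4 = 440 Hz, MIDI note 69):

```python
A4_MIDI, A4_HZ = 69, 440.0   # standard concert pitch

def midi_to_hz(note):
    """Frequency of a MIDI note: each semitone multiplies by 2**(1/12)."""
    return A4_HZ * 2 ** ((note - A4_MIDI) / 12)

c4 = 60                               # middle C
print(round(midi_to_hz(c4), 2))       # middle C's frequency in Hz
print(round(midi_to_hz(c4 + 1), 2))   # sharpened: one semitone up (C#4)
print(round(midi_to_hz(c4 - 1), 2))   # flattened: one semitone down (B3)
```

Twelve semitone steps multiply the frequency by exactly 2, which is why a note an octave up sounds like ‘the same note, higher’.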

Hover your mouse cursor over all the other bits of the page too – there are lots of fun things to play with including some Easter eggs.

About the doodle

🎹 Celebrating Johann Sebastian Bach was Google’s first-ever AI-powered doodle and “is an interactive experience encouraging players to compose a two measure melody of their choice. With the press of a button, the Doodle then uses machine learning to harmonize the custom melody into Bach’s signature music style (or a Bach 80’s rock style hybrid if you happen to find a very special easter egg in the Doodle…)”.

▶️ You can also watch Google’s short video ‘Behind the Doodle’ on YouTube.

Jo Brodie and Paul Curzon, Queen Mary University of London

