Software for Justice

A jury is given misleading information in court by an expert witness. An innocent person goes to prison as a result. This shouldn’t happen, but unfortunately it does, and more often than you might hope. It’s not because the experts or lawyers are trying to mislead but because of some tricky mathematics. Fortunately, a team of computer scientists at Queen Mary, University of London are leading the way in fixing the problem.

The Queen Mary team, led by Professor Norman Fenton, is trying to ensure that forensic evidence involving probability and statistics can be presented without making errors, even when the evidence is incredibly complex. Their solution is based on specialist software they have developed.

Many cases in courts rely on evidence like DNA and fibre matching for proof. When police investigators find traces of this kind of evidence from the crime scene they try to link it to a suspect. But there is a lot of misunderstanding about what it means to find a match. Surprisingly, a DNA match between, say, a trace of blood found at the scene and blood taken from a suspect does not mean that the trace must have come from the suspect.

Forensic experts talk about a ‘random match probability’. It is just the probability that the suspect’s DNA matches the trace if it did not actually come from him or her. Even a one-in-a-billion random match probability does not prove it was the suspect’s trace. Worse, the random match probability an expert witness might give is often either wrong or misleading. This can be because it fails to take account of potential cross-contamination, which happens when samples of evidence accidentally get mixed together, or even when officers leave traces of their own DNA from handling the evidence. It can also be wrong due to mistakes in the way the evidence was collected or tested. Other problems arise if family members aren’t explicitly ruled out, as that makes the random match probability much higher. When the forensic match is from fibre or glass, the random match probabilities are even more uncertain.

The potential to get the probabilities wrong isn’t restricted to errors in the match statistics, either. Suppose the match probability is one in ten thousand. When the experts or lawyers present this evidence they often say things like: “The probability that the trace came from anybody other than the defendant is one in ten thousand.” That statement sounds OK but it isn’t true.

The problem is called the prosecutor’s fallacy. You can’t actually conclude anything about the probability that the trace belonged to the defendant unless you know something about the number of potential suspects. Suppose this is the only evidence against the defendant and that the crime happened on an island where the defendant was one of a million adults who could have committed the crime. Then the random match probability of one in ten thousand actually means that about one hundred of those million adults match the trace. So the probability of innocence is ninety-nine out of a hundred! That’s very different from the one in ten thousand probability implied by the statement given in court.
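The island arithmetic is easy to check for yourself. Here is a quick sketch in Python (the numbers are the article’s illustrative ones, not from any real case):

```python
population = 1_000_000                  # adults who could have done it
random_match_probability = 1 / 10_000   # the forensic match statistic

# Expected number of innocent people who match purely by chance:
innocent_matches = (population - 1) * random_match_probability  # about 100

# The defendant is one of (innocent_matches + 1) matching people,
# and only one of them is guilty:
p_innocent_given_match = innocent_matches / (innocent_matches + 1)

print(round(p_innocent_given_match, 2))  # 0.99, not 0.0001
```

Try changing the population size: the answer changes dramatically, which is exactly why the random match probability on its own tells you so little.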

Norman Fenton’s work is based around a theorem, called Bayes’ theorem, which gives the correct way to calculate these kinds of probabilities. The theorem is over 250 years old but it is widely misunderstood and, in all but the simplest cases, is very difficult to calculate properly. Most cases include many pieces of related evidence – including evidence about the accuracy of the testing processes. To keep everything straight, experts need to build a model called a Bayesian network. It’s like a graph that maps out the different possibilities and the chances that they are true. You can imagine that in almost any court case, this gets complicated awfully quickly. It is only in the last 20 years that researchers have discovered ways to perform the calculations for Bayesian networks, and written software to help them. What Norman and his team have done is develop methods specifically for modelling legal evidence as Bayesian networks in ways that are understandable by lawyers and expert witnesses.
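Bayes’ theorem itself is only a line of arithmetic: P(H | E) = P(E | H) × P(H) / P(E). Here is a minimal sketch of how it combines a prior with a match probability (all numbers invented for illustration; a real case needs a full Bayesian network, not one formula):

```python
# Bayes' theorem: P(source | match) = P(match | source) * P(source) / P(match)

p_source = 1 / 1_000_000        # prior: defendant is one of a million adults
p_match_if_source = 1.0         # the true source always matches
p_match_if_not = 1 / 10_000     # the random match probability

# Total probability of seeing a match at all:
p_match = (p_match_if_source * p_source
           + p_match_if_not * (1 - p_source))

p_source_given_match = p_match_if_source * p_source / p_match
print(round(p_source_given_match, 4))  # 0.0099: about 1 in 101
```

This reproduces the island answer: roughly a 1 in 101 chance the defendant is the source, so about 99% probability of innocence on this evidence alone.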

Norman and his colleague Martin Neil have provided expert evidence (for lawyers) using these methods in several high-profile cases. Their methods help lawyers to determine the true value of any piece of evidence – individually or in combination. They also help show how to present probabilistic arguments properly.

Unfortunately, although scientists accept that Bayes’ theorem is the only viable method for reasoning about probabilistic evidence, it’s not often used in court, and is even a little controversial. Norman is leading an international group to help bring Bayes’ theorem a little more love from lawyers, judges and forensic scientists. Although changes in legal practice happen very slowly (lawyers still wear powdered wigs, after all), hopefully in the future the difficult job of judging evidence will be made easier and fairer with the help of Bayes’ theorem.

If that happens, then thanks to some 250-year-old maths combined with some very modern computer science, fewer innocent people will end up in jail. Given the innocent person in the dock could one day be you, you will probably agree that’s a good thing.

Paul Curzon, Queen Mary University of London (originally published in 2011)

More on … justice

  • Edie Schlain Windsor and same sex marriage
    • Edie was a computer scientist whose marriage to another woman was deemed ineligible for certain rights provided (at that time) only in a marriage between a man and a woman. She fought for those rights and won.



This blog is funded by EPSRC on research agreement EP/W033615/1.


Singing bird – a human choir, singing birdsong

Image by Dieter from Pixabay

“I’m in a choir”. “Really, what do you sing?” “I did a blackbird last week, but I think I’m going to be woodpecker today, I do like a robin though!”

This is no joke! Marcus Coates, a British artist, got up very early and, working with a wildlife sound recordist, Geoff Sample, used 14 microphones to record the dawn chorus over lots of chilly mornings. They slowed the sounds down and matched each species of bird with a different type of human voice. Next they created a film of 19 people making bird song. Each person sang a different bird in their own habitat: a car, a shed, even a lady in the bath! The 19 tracks are played together to make the dawn chorus. See it on YouTube below.
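The ‘slowing down’ step is, at heart, a simple computation: spreading the same samples over more time. Here is a toy sketch of the idea (not the recordists’ actual tools, which work on real audio files):

```python
def stretch(samples, factor):
    """Stretch a list of audio samples to `factor` times the length,
    filling the gaps by linear interpolation."""
    out = []
    n = len(samples)
    for i in range((n - 1) * factor + 1):
        pos = i / factor                  # position in the original
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

chirp = [0.0, 1.0, 0.0, -1.0, 0.0]        # one tiny 'tweet'
slow = stretch(chirp, 20)                  # 20 times slower
print(len(slow))  # 81 samples where there were 5
```

Played back at the original rate, the stretched version lasts 20 times longer and drops in pitch too, which is what lets human voices follow the birds.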

Marcus didn’t stop there: he wrote a new bird song score. Yes, for people to sing a new top-ten bird hit, but they have to do it very slowly. People sing ‘bird’ about 20 times slower than birds do: ‘whooooooop’, ‘whooooooop’, ‘tweeeeet’. For a special performance, a choir learned the new song, a new dawn chorus. They sang the slowed-down version live, which was recorded, speeded back up and played to the audience. I was there! It was amazing! A human performance became a minute of tweeting joy. Close your eyes and ‘whoop’, you were in the woods at the crack of dawn!

Computationally thinking a performance

Computational thinking is at the heart of the way computer scientists solve problems. Marcus Coates doesn’t claim to be a computer scientist; he is an artist who looks for ways to see how people are like other animals. But we can get an idea of what computational thinking is all about by looking at how he created his sounds. First, he and wildlife sound recordist Geoff Sample had to focus on the individual bird sounds in the original recordings, ignoring detail they didn’t need: doing abstraction, listening for each bird, working out what aspects of the bird sound were important. They looked for patterns, isolating each voice. Sometimes the birds’ performance was messy and they could not hear particular species clearly, so they were constantly checking for quality. For each bird, they listened and listened until they found just the right ‘slow it down’ speed. Different birds needed different speeds for people to be able to mimic them, and different kinds of human voices suited each bird type: attention to detail mattered enormously. They had to check the results carefully, evaluating, making sure each really did sound like the appropriate bird and that all fitted together into the dawn chorus soundscape. They also had to create a bird language, another abstraction, a score as track notes, and that is just an algorithm for making sounds!

Fun to try

Use your computational thinking skills to create a notation for an animal’s voice, a pet perhaps? A dog, hamster or cat language: what different sounds do they make, and how can you note them down? What might the algorithm for that early morning “I want my breakfast” look like? Can you make those sounds and communicate with your pet? Or maybe stick to tweeting? (You can follow @cs4fn on Twitter too).
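To get you started, here is one entirely invented way a ‘breakfast demand’ score might look as a little program (your pet’s language will certainly differ!):

```python
# A made-up notation: each sound is a (symbol, repeats) pair,
# and the algorithm is just the ordered list of pairs.
breakfast_demand = [
    ("miaow", 3),    # insistent opening
    ("purr", 1),     # softening you up
    ("MIAOW!", 2),   # escalation
]

def perform(score):
    """Flatten the score into the sequence of sounds to make."""
    return [sound for sound, repeats in score for _ in range(repeats)]

print(" ".join(perform(breakfast_demand)))
# miaow miaow miaow purr MIAOW! MIAOW!
```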

Enjoy the slowed-down performance of this pet starling which has added a variety of mimicked sounds to its song repertoire.

Jane Waite, Queen Mary University of London


Watch …



Ethics – What would you do?

Signs pointing RIGHT to right and WRONG to the left
Right / Wrong image by Tumisu from Pixabay

You often hear about unethical behaviours, be it in politicians or popstars, but getting to grips with ethics, which deals with issues about what behaviours are right and wrong, is an important part of computer science too. Find out about it and at the same time try our ethical puzzle below and learn something about your own ethics…

Is that legal?

Ethics are about the customs and beliefs that a society has about the way people should be treated. These beliefs can be different in different countries, sometimes even between different regions of the same country, which is why it’s always important to know something about the local area when going on holiday. You don’t want to upset the local folk. Ethics tend to form the basis of countries’ laws and regulations, combining general agreement with practicality. Sticking your tongue out may be rude and so unethical, but the police have better things to do than arrest every rude school kid. Similarly, slavery was once legal, but was it ever ethical? Laws and ethics also have other differences; individuals tend to judge unethical behaviour, and shun those who behave inappropriately, while countries judge illegal behaviour – using a legal system of courts, judges and juries to enforce laws with penalties.

Dilemmas, what to do?

Now imagine you have the opportunity to go treading on the ethical and legal toes of people across the world from the PC in your home. Suddenly the geographical barriers that once separated us vanish. The power of computer science, like any technology, can be used for good or evil. What is important is that those who use it understand the consequences of their actions, and choose to act legally and ethically. Understanding legal requirements, for example contracts, computer misuse and data protection are important parts of a computer scientist’s training, but can you learn to be ethical?

Computer scientists study ethics to help them prepare for situations where they have to make decisions. This is often done by considering ethical dilemmas. These are a bit like the computer science equivalent of soap opera plots. You have a difficult problem, a dilemma, and have to make a choice. You suddenly discover you have an unknown long-lost sister living on the other side of the Square. Do you make contact or not? (On TV this choice is normally followed by a drum roll as the episode ends.)

Give it a go

Here is your chance to try an ethical dilemma for yourself. Read the alternatives and choose what you would do in this situation. Then click on the poll choice. Like all good ‘personality tests’ you find out something about yourself: in this case which type of ethical approach you have in the situation according to some famous philosophers. There are also some fascinating facts to impress your mates. We’ll share the answers tomorrow.

Your Dilemma and your ethical personality

You are working for a company who are about to launch a new computer game. The adverts have gone out, the newspapers and TV are ready for the launch … then the day before, you are told that there is a bug, a mistake, in the software. It means players sometimes can’t kill the dragon at the end of the game. If you hit the problem the only solution is to start the final level again. It can be fixed, they think, but it will take about a week or so to track it down. The computer code is hard to fix as it’s been written by 10 different people and 5 of them have gone on a backpacking holiday so can’t be contacted.

Peter McOwan, Queen Mary University of London

What the answers mean about you at the end!



The answers

If you picked Option 1

1) Go ahead and launch. After all, there are still plenty of parts to the game that do work and are fun, there will always be some errors, and for this game in particular thousands have been signing up for text alerts to tell them when it’s launched. It will make many thousands happy.

That means you follow an ethical approach called ‘Act utilitarianism’.

Act Happy

The main principle of this theory, put forward by philosopher Jeremy Bentham, is to create the most happiness (another name for happiness here is utility, thus utilitarianism). For each situation you behave (act) in a way that increases the happiness of the largest number of people, and this is how you decide what is wrong or right. You may take different actions in similar situations. So you choose to launch a flawed game if you know that you have pre-sales of a hundred thousand, but another time decide not to launch a different flawed game where there are only one thousand pre-sales, as you won’t be making so many people unhappy. It’s about considering the utility of each action you take. There is no hard and fast rule.

If you picked Option 2

2) Cancel the launch until the game is fixed properly, no one should have to buy a game that doesn’t work 100 per cent.

That means you follow an ethical approach called ‘Duty Theory’

Do your Duty

Duty theories are based on the idea of there being universal principles, such as ‘you should never ever lie, whatever the circumstances’. This is also known as the deontological approach to ethics (philosophers like to bring in long words to make simple things sound complicated!). The German philosopher Immanuel Kant was one of the main players in this field. His ‘Categorical Imperative’ (like I said, long words…) said “only act in a way that you would want everyone else to act” (…simple idea!). So if you don’t think there should ever be mistakes in software then don’t make any yourself. This can be quite tough!

If you picked Option 3

3) Go ahead and launch. After all it’s almost totally working and the customers are looking forward to it. There will always be some errors in programs: it’s part of the way complicated software is, and a delay to game releases leads to disappointment.

You would be following the approach called ‘Rule utilitarianism’.

Spread a little happiness

Say something nice to everyone you meet today…it will drive them crazy

The main principle of this flavour of utilitarianism, put forward by philosopher John Stuart Mill, is to create the most happiness (happiness here is called utility, thus utilitarianism). You follow general rules that increase the happiness of the largest number of people, and this is how you decide what’s wrong or right. So in our dilemma the rule could be ‘even if the game isn’t 100% correct, people are looking forward to it and we can’t disappoint them’. Here the rule increases happiness, and we apply it again in the future if the same situation occurs.

CS4FN Advent 2023 – Day 18: cracker or hacker? Cyber security

It’s Day 18 of the CS4FN Christmas Computing Advent Calendar. We’ve been posting a computing-themed article linked to the picture on the ‘front’ of the advent calendar for the last 17 days and today is no exception. The picture is of a Christmas cracker so today’s theme is going to be computer hacking and cracking – all about Cyber Security.

If you’ve missed any of our previous posts, please scroll to the end of this one and click on the Christmas tree for the full list.

A cracker, ready to pop. Image drawn and digitised by Jo Brodie.

The terms ‘cracker’ and ‘hacker’ are often used interchangeably to refer to people who break into computers, though generally the word hacker also has a friendlier meaning – someone who uses their skills to find a workaround or a solution (e.g. ‘a clever hack’) – whereas a cracker is probably someone who shouldn’t be in your system and is up to no good. Both can use very similar skills though: one uses them to benefit others, the other to benefit themselves.

We have an entire issue of the CS4FN magazine all about Cyber Security – it’s issue 24 and is called ‘Keep Out’ but we’ll let you in to read it. All you have to do is click on this very secret link, then click on the magazine’s front cover to download the PDF. But don’t tell anyone else…

Both the articles below were originally published in the magazine as well as on the CS4FN website.

Piracy on the open Wi-fi

by Jane Waite, Queen Mary University of London.

You arrive in your holiday hotel and ask about Wi-Fi. Time to finish off your online game, connect with friends, listen to music, kick back and do whatever is your online thing. Excellent! The hotel Wi-Fi is free and better still you don’t even need one of those huge long codes to access it. Great news, or is it?

Pirate flag and wifi picture adapted from an image by OpenClipart-Vectors from Pixabay

You always have to be very cautious around public Wi-Fi whether in hotels or cafes. One common attack is for the bad guys to set up a fake Wi-Fi with a name very similar to the real one. If you connect to it without realising, then everything you do online passes through their computer, including all those user IDs and passwords you send out to services you connect to. Even if the passwords they see are encrypted, they can crack them offline at their leisure.
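Here is a toy illustration of why captured ‘encrypted’ (in practice, usually hashed) passwords can be attacked offline: the attacker simply hashes guesses until one matches. Real attacks use enormous wordlists and specialised hardware; this sketch is for understanding only.

```python
import hashlib

def crack(captured_hash, wordlist):
    """Try each guess until its hash matches the captured one."""
    for guess in wordlist:
        if hashlib.sha256(guess.encode()).hexdigest() == captured_hash:
            return guess
    return None

wordlist = ["letmein", "password", "qwerty", "hunter2"]
captured = hashlib.sha256(b"hunter2").hexdigest()  # what the attacker sniffed
print(crack(captured, wordlist))  # hunter2
```

This is why long, unusual passwords matter: a password that isn’t in any wordlist can’t be found this way.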

Things just got more serious. A group has created a way to take over hotel Wi-Fi. In July 2017, the FireEye security team found a nasty bit of code, malware, linked to an email received by a series of hotels. The malware was called GAMEFISH. But this was no game and it certainly had a bad, in fact dangerous, smell! It was a ‘spear phishing’ attack on the hotel’s employees. This is an attack where fake emails try to get you to go to a malware site (phishing), but where the emails appear to be from someone you know and trust.

Once in the hotel network, so inside the security perimeter, the code searched for the machines running the hotel’s Wi-Fi and took them over. Once there it sat and watched, sniffing out passwords from the Wi-Fi traffic: what’s called a man-in-the-middle attack.

The report linked the malware to a very serious team of Russian hackers, called FancyBear (or APT28), who have been associated with high-profile attacks on governments across the world. GAMEFISH used a software tool (an ‘exploit’) called EternalBlue, along with some code that compiled their Python scripts locally, to spread the attack. Would you believe, EternalBlue is thought to have been created by the US Government’s National Security Agency (NSA), but leaked by a hacker group! EternalBlue was used in the WannaCry ransomware too. This may all start to sound rather like a far-fetched thriller, but it is not. This is real! So think before you click to join an unsecured public Wi-Fi.

Just between the two of us: mentalism and covert channels

by Peter W McOwan, Queen Mary University of London.

Secret information should stay secret. Beware ‘covert channels’ though. They are a form of attack where an illegitimate way of transferring information is set up. Stopping information leaking is a bit like stopping water leaking – even the smallest hole can be exploited. Magicians have been using covert channels for centuries, doing mentalism acts that wow audiences with their ‘telepathic’ powers.

The secret codes of Mentalism

In the 1950s, the Australian couple Sydney and Lesley Piddington took the entertainment world by storm. They had the nation perplexed, puzzled and entertained. They were seemingly able to communicate telepathically over great distances. It all started in World War 2 when Sydney was a prisoner of war. To keep up morale, he devised a mentalism act where he ‘read the minds’ of other soldiers. When he later married Lesley they perfected the act and became an overnight sensation, attracting BBC radio audiences of 20 million. They communicated random words and objects selected by the audience, even when Lesley was in a circling aeroplane or Sydney was in a diving bell in a swimming pool. To this day their secret remains unknown, though many have tried to work it out. Perhaps they used a hidden transmitter. After all, that was fairly new technology then. Or perhaps they were using their own version of an old mentalism trick: a code to transmit information hidden in plain sight.

Sounds mysterious

Sydney had a severe stutter, and some suggested it was the pauses he made in words rather than the words themselves that conveyed the information. Using timing and silence to code information seems rather odd, but it can be used to great effect.

In the phone trick ‘Call the wizard’, for example, a member of the audience chooses any card from a pack. You then phone your accomplice. When they answer you say “I have a call for the wizard”. Your friend names the card suits: “Clubs … spades … diamonds … hearts”. When they reach the suit of the chosen card you say: “Thanks”.

Your phone friend now knows the suit and starts counting out the values, Ace to King. When they reach the chosen card value you say: “Let me pass you over”. Your accomplice now knows both suit and value so dramatically reveals the card to the person you pass the phone to.

This trick requires a shared understanding of the code words and the silence between them. When combined with the background count, information is passed. The silence is the code.

Timing can similarly be used by a program to communicate covertly out of a secure network. Information might be communicated by the time a message is sent rather than by its contents, for example.
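Here is a sketch of that idea in code. The secret lives entirely in the gaps between messages, not in their contents, just like the silences in the phone trick (the timings and threshold are invented for illustration; a real channel would send actual network messages):

```python
SHORT, LONG = 1.0, 3.0     # invented gaps, in seconds

def encode(bits, start=0.0):
    """Turn secret bits into a schedule of innocent-looking send times."""
    times, t = [start], start
    for bit in bits:
        t += LONG if bit else SHORT
        times.append(t)
    return times

def decode(times):
    """Recover the bits from the gaps between arrivals."""
    return [1 if (b - a) > 2.0 else 0 for a, b in zip(times, times[1:])]

print(decode(encode([1, 0, 1, 1])))  # [1, 0, 1, 1]
```

Anyone inspecting the messages themselves sees nothing suspicious, which is exactly what makes covert channels so hard to spot.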

Codes on the table

Covert channels can be hidden in the existence and placement of things too. Here’s another trick.

The receiving performer leaves the room. A card is chosen from a pack by a volunteer. When the receiver arrives back they are instantly able to tell the audience the name of the card. The secret is in the table. Once the card has been selected, pack and box are replaced on the table. The agreed code might be:

If the box is face up and its flap is closed: Clubs.
If the box is face up and its flap is open: Spades.
If the box is face down and its flap is closed: Diamonds.
If the box is face down and its flap is open: Hearts.

That’s the suits taken care of. Now for the value. The performers agree in advance how to mentally chop up the card table into zones: top, middle and bottom of the table, and far right, right, left and far left. That’s 3 x 4 unique locations. 12 places for 12 values. The pack of cards is placed in the correct pre-agreed position, box face up or not, flap open or closed as needed. What about the 13th possibility? Have the audience member hold their hand out flat and leave the cards on it for them to ‘concentrate’ on.
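The whole scheme is really just a lookup table, which is easy to sketch in code (the zone names below are our own labels for the 12 positions, not the performers’ actual code):

```python
# Two yes/no choices pick the suit (4 combinations)...
SUITS = {(True, False): "Clubs", (True, True): "Spades",
         (False, False): "Diamonds", (False, True): "Hearts"}

# ...and one of 12 table zones picks the value (3 rows x 4 columns),
# with the volunteer's hand as the 13th 'position'.
ZONES = ["top far-left", "top left", "top right", "top far-right",
         "middle far-left", "middle left", "middle right", "middle far-right",
         "bottom far-left", "bottom left", "bottom right", "bottom far-right"]

VALUES = ["Ace", "2", "3", "4", "5", "6", "7", "8", "9", "10",
          "Jack", "Queen", "King"]

def decode(face_up, flap_open, zone):
    """What the receiving performer reads off at a glance."""
    if zone == "volunteer's hand":          # the 13th position
        value = VALUES[12]                  # King
    else:
        value = VALUES[ZONES.index(zone)]
    return value + " of " + SUITS[(face_up, flap_open)]

print(decode(True, True, "middle right"))   # 7 of Spades
```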

Again a similar idea can be used as a covert channel to subvert a security system: information might be passed based on whether a particular file exists or not, say.

Making it up as you go along

These are just a couple of examples of the clever ideas mentalists have used to amaze and entertain audiences with feats of seemingly superhuman powers. Our cs4fn mentalism portal has more. Some claim they have the powers for real, but with two dedicated performers and a lot of cunning memory work, it’s often hard to decipher performers’ methods. Covert channels can be similarly hard to spot.

Perhaps the Piddingtons’ secret was actually a whole range of different methods. Just before she died, Lesley Piddington is said to have told her son, “Even if I wanted to tell you how it was done, I don’t think I would be able”. However it was done, they were using some form of covert channel to cement their place in magic history. As Sydney said at the end of each show, “You be the judge”.


Advert for our Advent calendar
Click the tree to visit our CS4FN Christmas Computing Advent Calendar


CS4FN Advent 2023 – Day 16: candy cane or walking aid: designing for everyone, human computer interaction

Welcome to Day 16 of the CS4FN Christmas Computing Advent Calendar in which we’re posting a blog post every day in December until (and including) Christmas Day.

In this series of posts we’re celebrating both the breadth of computing research and the history of our own CS4FN project, which has been inspiring young people about computing and supporting teachers in teaching the topic, in part by distributing free magazines to subscribing UK schools since 2005 (ask your teacher to subscribe for next year’s magazine).

Today’s advent calendar picture is of a candy cane which made me think both of walking aids and of support sticks that alert others that the person using it is blind or visually impaired.

Above: A white candy cane with green and red stripes. Image drawn and digitised by Jo Brodie.

We’ve worked with several people over the years to write about their research into making life easier for people with a variety of disabilities. Issue 19 of our magazine (“Touch it, feel it, hear it!”) focused on the DePiC project (‘Design Patterns for Inclusive Collaboration’) which included work on helping visually impaired sound engineers to use recording studio equipment, and you can read one of the articles (see ‘2. The Haptic Wave’) from that magazine below.

Another of our CS4FN magazines (issue 27, called “Smart Health: decisions, decisions, decisions“) was about Bayesian mathematics and its use in computing. One of those uses might be an app with the potential to help people with arthritis get medical support when they most need it (rather than having to wait until their next appointment): download the magazine by clicking on its title and scroll to pages 16 & 17 (p9 of the 11-page PDF). Our writing also supports the (obvious) case that disabled people must be involved at the design and decision-making stages.

1. Design for All (and by All!)

by Paul Curzon, QMUL.

Making things work for everyone

Designing for the disabled – that must be a niche market, mustn’t it? Actually, no. One in five people have a disability of some kind! More surprising still, the disabled have been the inspiration behind some of the biggest companies in the world. Some of the ideas out there might eventually give us all superpowers.

Just because people have disabilities doesn’t mean they can’t be the designers, the innovators themselves of course. Some of the most innovative people out there were once labelled ‘disabled’. Just because you are different doesn’t mean you aren’t able!

Where do innovators get their ideas from? Often they come from people driven to support people currently disadvantaged in society. The resulting technologies then not only help those with disabilities but become the everyday objects we all rely on. A classic example is the idea of reducing the kerbs on pavements to make it possible for people in wheelchairs to get around. Turns out of course that they also help people with pushchairs, bikes, roller-blades and more. That’s not just a one-off example, some of the most famous inventors and biggest companies in the world have their roots in ‘design for all’.

Designing for more extreme situations pushes designers into thinking creatively, thinking out of the box. That’s when totally new solutions turn up. Designing for everyone is just a good idea!

2. Blind driver filches funky feely sound machine! The Haptic Wave

by Jane Waite, QMUL.

In his recent music video, the blind musician Joey Stuckey commandeers then drives off in a car, and yes, he is blind. How can a blind person drive a car, and what has that got to do with him trying to filch a sound machine? Maybe taking the car was just a stunt, but he really did try to run off with a novel sound machine!

As well as fronting his band, Joey is an audio engineer. Unlike driving a car, which is all about seeing things around you (signs, cars, pedestrians), being an audio engineer seems a natural job for someone who is blind. It’s about recording, mixing and editing music, speech and sound effects. What matters most is that the person has a good ear. Having the right skills could easily lead to a job in the music industry, in TV and films, or even in the games industry. It’s also an important job. Getting the sound right is critical to the experience of a film or game. You don’t want to be struggling to hear mumbling actors, or the sound effects to drown out a key piece of information in a game.

Mixing desks

Once upon a time, audio engineers used massive physical mixing desks. That was largely OK for a blind person as they could remember the positions of the controls as well as feel the buttons. As the digital age has marched on, mixing desks have been replaced by Digital Audio Workstations. These are computer programs, and the trouble is that despite being about sound, they are based on vision.

When we learn about sound we are shown pictures of wavy lines: sound waves. Later, we might use an oscilloscope or music editing software, and see how, if we make a louder sound, the curves get taller on the screen: the amplitude. We get to hear the sound and see the sound wave at the same time. That’s this multimodal idea again, two ways of sensing the same thing.

Peter Francken in his home studio. Image from Wikimedia Commons. Image licensed under the Creative Commons Attribution-Share Alike 3.0 Unported, 2.5 Generic, 2.0 Generic and 1.0 Generic license.


But hang on, sound isn’t really a load of wavy lines curling out of our mouths, and shooting away from guitar strings. Sound is energy and atoms pushing up against each other. But we think of sound as a sound wave to help us understand it. That’s what a computer scientist calls abstraction: representing things in a simpler way. Sound waves are an abstraction, a simplified representation, of sound itself.

The representation of sound as sound waves, as a waveform, helps us work with sound, and with Digital Audio Workstations it is now essential for audio engineers. The engineer works with lines, colours, blinks and particularly sound waves on a screen as they listen to the sound. They can see the peaks and troughs of the waves, helping them find the quiet, loud and distinctive moments of a piece of music at a glance, for example. That’s great as it makes the job much easier… but only if you are fully sighted. It makes things impossible for someone with a visual impairment. You can’t see the sound waves on the editing screen. Touching a screen tells you nothing. Even though it’s ultimately about sounds, doing your job has been made as hard as driving a car. This is rather sad given computers have the potential to make many kinds of work much more accessible to all.

Feel the sound

The DePIC research team, a group of people from Goldsmiths, Queen Mary University of London and the University of Bath with a mission to solve problems involving the senses, decided to fix it. They brought together computer scientists, design experts, cognitive scientists and, most importantly of all, audio engineers who have visual impairments. Working together over two years in workshops, they shared their experiences and ideas, developing, testing and improving prototypes to figure out how a visually impaired engineer might ‘see’ sound waves. One result was the first ever plug-in software for professional Digital Audio Workstations that makes peak level meters completely accessible. It uses ‘sonification’: it turns those visual signals into sound! They also created the HapticWave, a device that enables a user to feel rather than see a sound wave.
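To get a feel for how sonification works, here is a minimal sketch. It is not the DePIC team’s actual plug-in: the meter range and the frequencies are invented for illustration. The idea is simply that a louder signal is heard as a higher-pitched tone.

```python
# Minimal sonification sketch: map a peak meter level (in dB,
# from -60 = near silence up to 0 = full scale) onto a tone
# frequency, so a louder signal is heard as a higher pitch.
# Illustration only - not the DePIC team's actual plug-in.

def level_to_frequency(level_db, f_min=220.0, f_max=880.0):
    """Map a peak level in dB linearly onto a two-octave frequency range."""
    level_db = max(-60.0, min(0.0, level_db))  # clamp to the meter's range
    fraction = (level_db + 60.0) / 60.0        # 0.0 (quiet) .. 1.0 (loud)
    return f_min + fraction * (f_max - f_min)

print(level_to_frequency(-60))  # 220.0 - near silence, lowest tone
print(level_to_frequency(-30))  # 550.0 - halfway up the meter
print(level_to_frequency(0))    # 880.0 - full scale, highest tone
```

A real plug-in would then synthesise the tone continuously as the level changes, but the mapping above is the heart of the trick.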

The HapticWave

The HapticWave combines novel hardware and software to provide a new interface to the traditional Digital Audio Workstation. The hardware includes a long wooden box with a plastic slider. As you move the slider left and right you move backwards and forwards through the music. On the slider there is a small brass button, called a fader. Tiny embossed stripes on the side of the slider let you know where the fader is relative to the middle and ends of the slider. It moves up and down in sync with the height of the sound wave. So in a quiet moment the fader rests at the centre of its travel. When the music is loud, the fader zooms to the top. As you slide backwards and forwards through the music the little button shoots up and down, up and down, tracing the waveform. You feel the volume changing. Music with heavy banging beats has the brass button zooming up and down, so mind your fingers!
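The mapping at the heart of the device can be sketched in a few lines. This is an illustration only (the 100 mm travel is an invented number, and the real HapticWave drives a motorised fader): an audio sample between -1 and +1 becomes a physical height, with silence resting at the centre.

```python
# Sketch of the HapticWave mapping: an audio sample in [-1.0, 1.0]
# becomes a fader height, with silence (0.0) at the centre of the
# travel. The 100 mm travel length is invented for illustration.

def fader_position(sample, travel_mm=100.0):
    """Map one audio sample to a fader height in millimetres."""
    sample = max(-1.0, min(1.0, sample))  # clamp to valid sample range
    centre = travel_mm / 2.0
    return centre + sample * centre

print(fader_position(0.0))   # 50.0  - centre: a quiet moment
print(fader_position(1.0))   # 100.0 - top: a loud positive peak
print(fader_position(-1.0))  # 0.0   - bottom: a loud negative peak
```

As the user scrubs through the track, calling this for each sample under the slider makes the fader trace the waveform they would otherwise have seen on screen.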

So back to the title of the article! Joey trialled the HapticWave at a research workshop and rather wanted to take one home. He loved it so much he jokingly tried distracting the researchers to get one. But he didn’t get away with it – maybe his getaway car just wasn’t fast enough!


Find out more about disabled computer scientists, and how computer science and human interaction design can help people with disabilities.

3. An audio illusion, and an audiovisual one

This one-minute video illustrates an interesting audio illusion, demonstrating that our brains are ‘always using prior information to make sense of new information coming in’.

The McGurk Effect

You can read more about the McGurk effect on page 7 of issue 5 of the CS4FN magazine, called ‘The Perception Deception’.


Advert for our Advent calendar
Click the tree to visit our CS4FN Christmas Computing Advent Calendar

EPSRC supports this blog through research grant EP/W033615/1.

CS4FN Advent 2023 – Day 7: Computing for the birds: dawn chorus, birds as data carriers and a Google April Fool (plus a puzzle!)

Welcome to Day 7 of our advent calendar. Yesterday’s post was about Printed Circuit Boards; today’s theme is the Christmas robin redbreast, which features on lots of Christmas cards and today is making a special appearance on our CS4FN Computing advent calendar.

A little robin redbreast. Image drawn and digitised by Jo Brodie.

In this longer post we’ll focus on the ways computer scientists are learning about our feathered friends and we’ll also make room for some of the bird-brained April Fools jokes in computing too.

We hope you enjoy it, and there’s also a puzzle at the end.

1. Computing Sounds Wild – bird is the word

Our free CS4FN magazine, Computing Sounds Wild (you can download a copy here), features the word ‘bird’ 60 times so it’s definitely very bird-themed.

An interest in nature and an interest in computers don’t obviously go well together. For a band of computer scientists interested in sound, though, they very much do. In this issue we explore the work of scientists and engineers using computers to understand, identify and recreate wild sounds, especially those of birds. We see how sophisticated algorithms that allow machines to learn can help recognise birds even when they can’t be seen, so helping conservation efforts. We see how computer models help biologists understand animal behaviour, and we look at how electronic and computer-generated sounds, having changed music, are now set to change the soundscapes of films. Making electronic sounds is also a great, fun way to become a computer scientist and learn to program.

2. Singing bird – a human choir singing birdsong

by Jane Waite, QMUL
This article was originally published on the CS4FN website and can also be found on page 15 in the magazine linked above.

“I’m in a choir”. “Really, what do you sing?” “I did a blackbird last week, but I think I’m going to be woodpecker today, I do like a robin though!”

This is no joke! Marcus Coates, a British artist, got up very early and, working with a wildlife sound recordist, Geoff Sample, used 14 microphones to record the dawn chorus over lots of chilly mornings. They slowed the sounds down and matched each species of bird with a different type of human voice. Next they created a film of 19 people making bird song, each person singing a different bird in their own habitat: a car, a shed, even a lady in the bath! The 19 tracks are played together to make the dawn chorus. See it on YouTube below.

Marcus didn’t stop there: he wrote a new bird song score. Yes, a new top-ten bird hit for people to sing, but they have to do it very slowly. People sing ‘bird’ about 20 times slower than birds sing ‘bird’: ‘whooooooop’, ‘whooooooop’, ‘tweeeeet’. For a special performance, a choir learned the new song, a new dawn chorus. They sang the slowed-down version live, which was recorded, sped back up and played to the audience. I was there! It was amazing! A human performance became a minute of tweeting joy. Close your eyes and ‘whoop’, you were in the woods at the crack of dawn!
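The simplest way to slow a digital recording down can be sketched in a few lines: repeat every sample, then play the result at the original rate. (A rough illustration, not the technique Marcus and Geoff necessarily used: professional tools use cleverer methods that can keep the pitch unchanged.) Note that slowing by 20 this way also drops the pitch, by log2(20), roughly 4.3 octaves, which is exactly what helps bring fast, high birdsong down towards human singing range.

```python
# Naive digital slow-down: repeat each sample N times, then play
# back at the original sample rate. Slowing by 20 this way also
# lowers the pitch by about 4.3 octaves (log2 of 20).

def slow_down(samples, factor):
    """Stretch a list of audio samples by an integer factor."""
    stretched = []
    for s in samples:
        stretched.extend([s] * factor)  # hold each sample 'factor' times longer
    return stretched

# One tiny cycle of a made-up 'birdsong' waveform:
birdsong = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
slowed = slow_down(birdsong, 20)
print(len(birdsong), len(slowed))  # 8 160
```

Playing the 160-sample version at the original rate takes 20 times as long as the original 8 samples, which is the slow-down.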

Computationally thinking a performance

Computational thinking is at the heart of the way computer scientists solve problems. Marcus Coates doesn’t claim to be a computer scientist; he is an artist who looks for ways to see how people are like other animals. But we can get an idea of what computational thinking is all about by looking at how he created his sounds. First, he and wildlife sound recordist Geoff Sample had to focus on the individual bird sounds in the original recordings, ignoring detail they didn’t need: doing abstraction, listening for each bird, working out which aspects of its sound were important. They looked for patterns, isolating each voice. Sometimes the birds’ performance was messy and they could not hear particular species clearly, so they were constantly checking for quality. For each bird, they listened and listened until they found just the right ‘slow it down’ speed. Different birds needed different speeds for people to be able to mimic them, and different kinds of human voices suited each bird type: attention to detail mattered enormously. They had to check the results carefully, evaluating, making sure each really did sound like the appropriate bird and that all fitted together into the dawn chorus soundscape. They also had to create a bird language, another abstraction, a score as track notes, and that is just an algorithm for making sounds!

3. Sophisticated songbird singing – how do they do it?

by Dan Stowell, QMUL
This article was originally published on the CS4FN website and can also be found on page 14 in the magazine linked above.

How do songbirds make such complex sounds? The answer is on a different branch of the tree of evolution…
We humans have a set of vocal folds (or vocal cords) in our throats, and they vibrate when we speak to make pitched sound. Air from the lungs passes over them and they chop up the column of air, letting more or less through and so making sound waves. This vocal ‘equipment’ is similar in mammals like monkeys and dogs, our evolutionary neighbours. But songbirds are not so similar to us. They make sounds too, but they evolved this skill separately, and so their ‘equipment’ is different: they actually have two sets of vocal folds, one for each lung.

Image by Dieter_G from Pixabay

Sometimes if you hear an impressive, complex sound from a bird, it’s because the bird is actually using the two sides of their voice-box together to make what seems like a single extra-long or extra-fancy sound. Songbirds also have very strong muscles in their throat that help them change the sound extremely quickly. Biologists believe that these skills evolved so that the birds could tell potential mates and rivals how healthy and skilful they were.

So if you ever wondered why you can’t quite sing like a blackbird, now you have a good excuse!

4. Data transmitted on the wing

Computers are great at moving data from one place to another, and the internet lets you download or share a file very quickly. Before I had the internet at home, if I wanted to work on a file on my home computer I had to save a copy from my work computer onto a memory stick and plug it into my laptop at home. Once I ‘got connected’ at home I was able to email myself the file as an attachment and use my home broadband to pick it up. Now I don’t even need to do that. I can save a file on my work computer, it synchronises with the ‘cloud’, and when I get home I can pick up where I left off. When I was using the memory stick my rate of data transfer was entirely down to the speed of road traffic as I sat on the bus on the way to work. Fairly slow, but the data definitely arrived in one piece.
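You can actually put a number on the memory-stick method: divide the amount of data carried by the time the journey takes. The figures below (a 1 GB stick, a 40-minute bus ride) are invented for illustration.

```python
# Effective bandwidth of physically carrying data: how many megabits
# per second does a memory stick on a bus actually deliver?
# The data size and journey time below are made-up examples.

def effective_bandwidth_mbps(data_megabytes, journey_minutes):
    """Megabits per second for data carried over a physical journey."""
    megabits = data_megabytes * 8      # 1 megabyte = 8 megabits
    seconds = journey_minutes * 60
    return megabits / seconds

# A 1 GB (1000 MB) memory stick on a 40-minute bus ride:
print(round(effective_bandwidth_mbps(1000, 40), 2))  # 3.33 (Mbit/s)
```

Not bad, and a bus full of memory sticks would beat many broadband connections, just with a rather long delay before the first byte arrives.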

In 1990 a joke memo was published for April Fool’s Day suggesting the use of homing pigeons as a form of internet, with the birds carrying small packets of data. The memo, called ‘IP over Avian Carriers’ (that is, a bird-based internet), was written in a mock-serious tone (you can read it here), but although it was meant for fun the idea has actually been used in real life too. Photographers in remote areas with minimal internet signal have used homing pigeons to send their pictures back.

The beautiful (and quite possibly wi-fi ready, with those antennas) Victoria Crowned Pigeon. Not a carrier pigeon admittedly, but much more photogenic.  Image by Foto-Rabe from Pixabay

A company in the US which offers adventure holidays, including rafting, used homing pigeons to return rolls of film (before digital photography took over) to the company’s base. The guides and their guests would take loads of photos while having fun rafting on the river and the birds would speed the photos back to the base, where they could be developed, so that when the adventurous guests arrived later their photos were ready for them.

Further reading

Pigeons keep quirky Poudre River rafting tradition afloat (17 July 2017) Coloradoan.

5. Serious fun with pigeons

On April Fool’s Day in 2002 Google ‘admitted’ to its users that the reason their web search results appeared so quickly and were so accurate was because, rather than using automated processes to grab the best result, Google was actually using a bank of pigeons to select the best results: millions of pigeons viewing web pages and pecking the best one for you when you type in your search question. Pretty unlikely, right?

In a rather surprising non-April Fool twist, some researchers decided to test how well pigeons can distinguish different types of information in medical photographs. They trained the pigeons by getting them to view pictures of tissue samples taken from healthy people as well as pictures taken from people who were ill. The pigeons had to peck one of two coloured buttons, and in doing so learned which pictures were of healthy tissue and which were diseased. If they pecked the correct button they got an extra food reward.

Pigeon, possibly pondering people’s photographs. Image by Davgood Kirshot from Pixabay

The researchers then tested the pigeons with a fresh set of pictures, to see if they could apply their learning to pictures they’d not seen before. Incredibly the pigeons were pretty good at separating the pictures into healthy and unhealthy, with an 80 per cent hit rate.
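The pigeons’ reward-driven trial and error is strikingly similar to how the simplest machine learning works. Here is a toy sketch (everything about it is invented: the real study used actual tissue-slide images, not a single number per picture): each picture is reduced to one made-up feature, say stain darkness, and a reward signal nudges the learner after every wrong peck. We then test it on fresh pictures, just as the researchers did.

```python
# Toy perceptron mimicking the pigeons' reward learning: each picture
# is one made-up 'darkness' feature, label 1 = diseased, 0 = healthy.
# Purely illustrative - the real study used actual tissue images.

def train(examples, passes=20, rate=0.1):
    """Learn a weight and bias by nudging them after each wrong guess."""
    weight, bias = 0.0, 0.0
    for _ in range(passes):
        for feature, label in examples:
            guess = 1 if weight * feature + bias > 0 else 0
            error = label - guess            # the 'reward' signal
            weight += rate * error * feature
            bias += rate * error
    return weight, bias

training = [(0.9, 1), (0.8, 1), (0.7, 1), (0.2, 0), (0.1, 0), (0.3, 0)]
fresh = [(0.85, 1), (0.75, 1), (0.15, 0), (0.25, 0)]  # unseen 'pictures'

w, b = train(training)
correct = sum(1 for f, y in fresh if (1 if w * f + b > 0 else 0) == y)
print(correct, "of", len(fresh), "fresh pictures classified correctly")
```

Like the pigeons, the program is never told the rule; it just gets feedback on each guess and gradually settles on a dividing line that works on pictures it has never seen.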

Further reading

Principle behind Google’s April Fools’ pigeon prank proves more than a joke (27 March 2019) The Conversation.

6. Today’s puzzle

You can download this as a PDF to PRINT or as an editable PDF that you can fill in on a COMPUTER.

You might wonder “What do these kriss-kross puzzles have to do with computing?” Well, you need to use a bit of logical thinking to fill one in and come up with a strategy. If there’s only one word of a particular length then it has to go in that space and can’t fit anywhere else. You’re then using pattern matching to decide which other words can fit in the spaces around it and which match the letters where they overlap. Younger children might just enjoy counting the letters and writing them out, or practising phonics or spelling.

We’ll post the answer tomorrow.

7. Answer to yesterday’s puzzle

Image by Paul Curzon / CS4FN.

The creation of this post was funded by UKRI, through grant EP/K040251/2 held by Professor Ursula Martin, and forms part of a broader project on the development and impact of computing.




Blade: the emotional computer.

Zabir talking to Blade who is reacting
Image taken from video by Zabir for QMUL

Communicating with computers is clunky to say the least – we even have to go to IT classes to learn how to talk to them. It would be so much easier if they went to school to learn how to talk to us. If computers are to communicate more naturally with us we need to understand more about how humans interact with each other.

The most obvious way that we communicate is through speech – we talk, we listen – but actually our communication is far more subtle than that. People pick up lots of information about our emotions and what we really mean from our expressions and the tone of our voice – not just from what we actually say. Zabir, a student at Queen Mary, was interested in this so decided to experiment with these ideas for his final year project. He used a kit called Lego Mindstorms that makes it really easy to build simple robots. The clever stuff comes in because, once built, Mindstorms creations can be programmed with behaviour. The result was Blade.

In the video above you can see Blade the robot respond. Video by Zabir for QMUL

Blade, named after the Wesley Snipes film, was a robotic face capable of expressing emotion and responding to the tone of the user’s voice. Shout at Blade and he would look sad. Talk softly and, even though he could not understand a word of what you said he would start to appear happy again. Why? Because your tone says what you really mean whatever the words – that’s why parents talk gobbledegook softly to babies to calm them.

Blade was programmed using a neural network, a computer science model of the way the brain works, so he had a brain similar to ours in some simple ways. Blade learnt how to express emotions very much like children learn – by tuning the connections (his neurons) based on his experience. Zabir spent a lot of time shouting and talking softly to Blade, teaching him what the tone of his voice meant and so how to react. Blade’s behaviour wasn’t directly programmed, it was the ability to learn that was programmed.
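Blade’s real network learned from examples of Zabir’s voice, but the input end of the idea can be captured in a toy sketch. Everything here is invented for illustration (the threshold, the mood scale, the update rule): measure how loud a snippet of speech is, its root-mean-square energy, and shift a mood value accordingly.

```python
import math

# Toy sketch of Blade's input: measure how loud a snippet of speech
# is (root-mean-square energy) and shift the robot's mood. Blade
# really used a trained neural network; the threshold and update
# rule here are invented for illustration.

def loudness(samples):
    """RMS energy of a list of audio samples in [-1.0, 1.0]."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def update_mood(mood, samples, threshold=0.5, step=0.2):
    """mood runs from -1.0 (sad) to +1.0 (happy)."""
    if loudness(samples) > threshold:   # shouting: mood drops
        mood -= step
    else:                               # soft talking: mood recovers
        mood += step
    return max(-1.0, min(1.0, mood))

shout = [0.9, -0.8, 0.95, -0.9]       # big swings = loud
whisper = [0.05, -0.04, 0.06, -0.05]  # tiny swings = soft

mood = 0.0
mood = update_mood(mood, shout)    # shouted at: mood falls to -0.2
mood = update_mood(mood, whisper)  # spoken to softly: back to 0.0
print(round(mood, 2))  # 0.0
```

The crucial difference is that Blade’s threshold wasn’t hand-picked like this one: it emerged from training, which is exactly the point of the paragraph above.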

Eventually we had to take Blade apart, which was surprisingly sad. He really did seem to be more than a bunch of Lego bricks. Something about his very human-like expressions pulled on our emotions: the same trick that cartoonists pull with the big eyes of characters they want us to love.

Zabir went on to work in the City for the merchant bank JP Morgan.

– Paul Curzon, Queen Mary University of London


⬇️ This article has also been published in two CS4FN magazines – first published on p13 in Issue 4, Computer Science and BioLife, and then again on page 18 in Issue 26 (Peter McOwan: Serious Fun), our magazine celebrating the life and research of Peter McOwan (who co-founded CS4FN with Paul Curzon and researched facial recognition). There’s also a copy on the original CS4FN website. You can download free PDF copies of both magazines below, and any of our other magazines and booklets from our CS4FN Downloads site.

The video below, ‘Why faces are special’, from Queen Mary University of London asks the question: “How does our brain recognise faces? Could robots do the same thing?”

Peter McOwan’s research into face recognition informed the production of this short film. Designed to be accessible to a wide audience, the film was selected as one of the 55 finalists from 1450 films submitted to the CERN CineGlobe film festival 2012.

Related activities

We have some fun paper-based activities you can do at home or in the classroom.

  1. The Emotion Machine Activity
  2. Create-A-Face Activity
  3. Program A Pumpkin

See more details for each activity below.

1. The Emotion Machine Activity

From our Teaching London Computing website. Find out about programs and sequences and how high-level language is translated into low-level machine instructions.

2. Create-A-Face Activity

From our Teaching London Computing website. Get people in your class (or at home if you have a big family) to make a giant robotic face that responds to commands.

3. Program A Pumpkin

Especially for Hallowe’en, a slightly spookier, pumpkin-ier version of The Emotion Machine above.


Related Magazine …





Beheading Hero’s mechanical horse

Pegasus image by Dorota Kudyba from Pixabay

An early ‘magical’ (nearly headless) automaton from Ancient Greece

Stories of Ancient Greece abound with myths but also with amazing inventions. Some of the earliest automatons, mechanical precursors of robots, were created by the Ancient Greeks. Intended to delight and astound or to serve as religious idols, they brought statues of animals and people to life. One story holds that Hero of Alexandria invented a magical, mechanical horse that not only moved and drank water, but was also impossible to behead. It just carried on drinking as you sliced a sword clean through its neck. The head remained solidly attached to the body. Myth or mystery? How could it be done?

The Ancient Greeks were clever. With many inventions we think of as modern, the Greeks got there first. They even invented the first known computer. Hero of Alexandria was one of the cleverest, an engineer and prolific inventor. Despite living in the first century, he invented the first known steam engine (long before the famous ones from the start of the industrial revolution), the first vending machine, a musical instrument that was the first wind-powered machine, and even the pantograph, a parallelogram structure used to make exact copies of drawings, enlarged or reduced. Did Hero invent a magical mechanical horse? He did, and you really could slice cleanly through its robotic neck with a sword, leaving the head in place.

Magic, myth and mystery

Queen Mary’s Peter McOwan was fascinated by magic, and especially Hero’s horse, as a child, and was keen to build one. When TEMI, a European project, was funded, he had his chance. TEMI aimed to bring more showmanship, magic and mystery to schools to increase motivation: by making lessons more like detective work, solving mysteries, they can be lots more fun. The project needed lots of mysteries, just like Hero’s horse, and artist Tim Sargent was commissioned to recreate the horse.

If you’re ever in Athens, you can see a version of Hero’s horse, as well as many other Greek inventions at Kotsanas Museum of Ancient Greek Technology.

How does it work?

The challenge was to create a version that used only Ancient Greek technology – no electricity or electromagnets, only mechanical means like gears, bearings, levers, cogs and the like. It was actually done with a clever rotating wheel. As the sword slices through a gap in the neck, it always connects head and body together first in front, then behind the blade. Can you work out how it was done? See a video of the mechanism in action below, with Peter introducing it.

Paul Curzon, Queen Mary University of London

Watch …


Related Magazine …


Issue 26 of the CS4FN magazine is a memorial issue for Peter McOwan, who died in June 2019. Peter, along with Paul Curzon, was one of the co-founders of CS4FN.

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

How far can you hear? Modelling distant birdsong.


by Dan Stowell, Queen Mary University of London

Blackbird singing at sunrise to an orange sky
Sunrise blackbird image by No-longer-here from Pixabay

How do we know how many birds there are out there: in the countryside, and in the city? Usually, it’s because people have been sent out to count the birds – by sight but especially by sound. Often you can hear a bird singing even when it’s hidden from sight so listening can be a much more effective way of counting.

In the UK, volunteers have been out counting birds for decades, co-ordinated by organisations such as the British Trust for Ornithology (BTO). But pretty quickly they came up against a problem: you can’t always detect every bird around you, even if you’re an expert at it. Birds get harder to detect the further away they are. To come up with good numbers, the BTO estimates what fraction of the birds you are likely to miss, according to how far away you are, and uses that to improve the estimate from the volunteer surveys.

But, Alison Johnston and others at the BTO noticed that it’s even more complicated than that: you can hear some types of bird very clearly over a long distance, while other birds make a sound that disappears into the background easily. If a pigeon is cooing in the forest, maybe you can’t hear it beyond a few metres. Whereas the twit-twoo of an owl might carry much further. So they measured how likely it is that one of their volunteers will hear each species, at different distances.

They created mathematical models that take these factors into account. Implemented in programs, the models can then adjust the reports coming in from the volunteers doing the counting. This is how volunteers and computers are combined in ‘citizen science’ work, which gathers observations from people all around the country. Sightings and numbers are collected, but the raw numbers themselves don’t give you the correct picture – they need to be adjusted using mathematical models that help fill in the gaps.
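The core of the adjustment can be sketched very simply: if you expect to detect only a fraction of the birds at a given distance, divide the count you got by that fraction. The half-normal-shaped detection curve below is a standard textbook choice, but the ‘sigma’ values for the pigeon and the owl are invented for illustration, not the BTO’s actual figures.

```python
import math

# Sketch of detection-adjusted counting: estimate how many birds are
# really there from how many you heard. The half-normal curve shape
# is a common modelling choice; the sigma values are invented.

def detection_probability(distance_m, sigma):
    """Chance of detecting a bird: 1.0 up close, falling with distance.
    sigma sets how far that species' sound carries."""
    return math.exp(-(distance_m ** 2) / (2 * sigma ** 2))

def estimated_count(observed, distance_m, sigma):
    """Scale up the observed count by the missed fraction."""
    return observed / detection_probability(distance_m, sigma)

# A quiet cooing pigeon (small sigma) vs a far-carrying owl (large sigma),
# both listened for at 50 metres:
print(round(detection_probability(50, sigma=25), 3))   # 0.135 - most pigeons missed
print(round(detection_probability(50, sigma=100), 3))  # 0.882 - most owls heard
print(round(estimated_count(3, 50, sigma=25)))         # 22 - hearing 3 pigeons suggests ~22
```

So the same raw count of three birds means something very different for a quiet species than for a loud one, which is exactly why the raw numbers need the model.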


You can perfect your own recognition of British birdsong with the audio clips here.


Related Magazine …






Threads & Yarns – textiles and electronics

At first sight nothing could be more different from textiles than electronics. Put opposites together, though, and you can maybe even bring historical yarns to life. That’s what Queen Mary’s G.Hack team helped do. An all-woman group of electronic engineering and computer science research students, they helped build an interactive art installation combining textiles and personal stories about health.

In June 2011 the G.Hack team was asked by Jo Morrison and Rebecca Hoyes from Central Saint Martins College of Art and Design to help make their ‘Threads & Yarns‘ artwork interactive. It was commissioned by the Wellcome Trust as a part of their 75th Anniversary celebrations. They wanted to present personal accounts about the changes that have taken place in health and well-being over the 75 years since they were founded.

Flowers powered

Jo and Rebecca had been working on the ‘Threads & Yarns’ artwork for 6 months. It was inspired by the floor tiling at the Victoria and Albert Museum in London and was made up of 125 individually created material flowers spread over a 5 metre long white perspex table. They wanted some of the flowers to be interactive, lighting up and playing sounds linked to stories about health and well-being at the touch of a button.

Central Saint Martins College Textile students worked with senior citizens from the Euston and Camden area, recording the stories they told as they made the flowers. G.Hack then ran a workshop with the students to show them how physical computing could be built into textiles and so create interactive flowers. Short sound bites from the recorded stories were eventually included in nine of the flowers.

The interactive part was built using an open source (i.e., free and available for anyone to use) hardware platform called Arduino. It makes physical computing accessible to anyone, giving an easy way to create programs that control lights, buttons and other sensors.

The audio stories of the senior citizens were edited down into 1-minute sound bites and stored on a memory card like those used in digital cameras. Each of the nine flowers was lit by eight Light Emitting Diodes (LEDs). These are low-energy lights so they don’t heat up, which is important if they are going to be built into fabrics. They are found in most household electronics, for example to show whether a gadget is turned on or off. When a button is pressed on the ‘Threads & Yarns’ artwork, it triggers the audio of a story to be played and simultaneously lights the LEDs on the linked flower, switching off again when the audio story finishes.

Smooth operators

The artwork had to work without problems throughout the day so the G.Hack team had to make sure everything would definitely go smoothly. The day before the opening of the exhibition they did final testing of the interactive flowers in their electronics workshop. They then worked with Central Saint Martins and museum staff to install the electronics into the artwork. They designed the system to be modular. This was both to allow the electronics to be separate from the artwork itself as well as to ease combining the two. On the day of the exhibition, the team arrived early to test everything one more time before the opening. They also stayed throughout the day to be on call in case of any problems.

The weeks leading up to the opening of the exhibition were busy for G.Hack, with lots of late nights spent testing, troubleshooting and soldering in the workshop, but it was all worth it: the final artwork looked fantastic and received a lot of positive feedback from people visiting the exhibition. It was a really positive experience all round! G.Hack and Central Saint Martins formed a bond that will likely extend into future partnerships. ‘Threads & Yarns’, meanwhile, is off on a UK ‘tour’.

Art may have brought the textiles, history and health stories together as embodied in the flowers. It’s the electronics that brought the yarn to life though.

Paul Curzon, Queen Mary University of London, June 2011


G.Hack

G.Hack was a supportive and friendly space for women to do hands-on experimental production fusing art and technology at Queen Mary University of London. As a group they aimed to strengthen each other’s confidence and ability in using a wide range of different technologies. They supported each other’s research and helped each other extend their expertise in science and technology through public engagement, collaborating with other universities and commercial companies.

The members of G.Hack involved in ‘Threads & Yarns’ were Nela Brown, Pollie Barden, Nicola Plant, Nanda Khaorapapong, Alice Clifford, Ilze Black and Kavin Preethi Narasimhan.


Related Magazines




