The complexity of searching to speak

A close up of a blue eye
Image by Melanie from Pixabay

Locked-in to the Game of 20-Questions

One of the worst medical conditions I can imagine is locked-in syndrome. It leaves you with all your mental abilities intact, but totally paralysed except perhaps for the blink of an eye. A perfectly working mind is locked inside a useless body, able to sense everything around it but unable to communicate. Despite this, one of the most uplifting books I have read is “The Diving Bell and the Butterfly”. It is the autobiography of Jean-Dominique Bauby, written after he woke up in a hospital bed with locked-in syndrome. In the book, he describes life with locked-in syndrome, including how he communicated with medical staff, friends and family, and even dictated the book itself. He did so without any technological help.

The book was written via human-human interaction of a heroic form. Put yourself in his position, waking up in a hospital bed totally paralysed. All you can do is blink one eyelid, and all you have is a helper with a pen and paper to write down your “words”. What would be the best way for you to communicate so that you could write a whole book?

How did Bauby do it?

The system Bauby used involved the helper reading the alphabet aloud “A…B…C …” When the letter he was thinking of was spoken he blinked. The helper would write that letter down and then start again, letter after letter. Try it with a friend: communicate your initials to them that way…now think about that being the only way you have to talk to anyone. I hope your name isn’t Zebedee Zacharius Zog or Zara Zootle!

Bauby realized that the ABC algorithm could be improved upon. He had been Editor-in-chief of the French women’s magazine Elle before he ended up in that hospital bed, so he knew about language. He knew that some letters are more common than others in natural language, so he got the helper to read out the letters in order of their frequency in French: “E…S…A…R…” That way the helper got to the common letters more quickly. A similar trick has been used through the ages to crack secret codes (see our Beheading story elsewhere).

Now, as a computer scientist, I immediately start thinking I could have made his life so much better (even before the step of replacing the human helper with blink-detection gadgets and the like). It is a problem about the “complexity”, the efficiency, of algorithms.

In the worst case, perhaps dictating a story where someone does nothing but snore for the whole book, “Zzzzz”, it takes 26 questions per letter. On average over the whole book roughly 13 letters will be said per letter dictated. Bauby’s modification improves things down to about 10 questions per letter, but the worst case is still 26. Thinking as a computer scientist the problem is a search problem (searching for one letter in 26) and the solution he used is known as linear search.

Linear search is a “linear” algorithm with “linear” complexity, meaning the time it takes to give an answer is proportional to the amount of data. Double the data and the algorithm takes twice as long to find the answer. Computer scientists write this as O(n) as shorthand for saying it is linear.
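Bauby’s frequency-ordered method is still linear search. It can be sketched in a few lines of Python; the ordering below is only an approximation of the French frequency order mentioned above, and the function name is our own invention for illustration:

```python
# A sketch of Bauby's letter-by-letter method: the helper reads letters
# aloud in (approximate) French frequency order, and we count one
# "question" per letter spoken until the blink.

FREQUENCY_ORDER = "ESARINTULOMDPCFBVHGJQZYXKW"  # approximate ordering

def questions_for(letter, order=FREQUENCY_ORDER):
    """Linear search: how many letters must be spoken before the blink?"""
    return order.index(letter) + 1

print(questions_for("E"))  # 1 question: the most common letter
print(questions_for("W"))  # 26 questions: the worst case
```

Common letters are found quickly, but a rare letter still needs a full scan of all 26.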

Do it in 5

Other search algorithms are far faster than linear search. From some simple computer science I learnt as an undergraduate, I know that a search through 26 things needs at most 5 true/false, blink/no-blink questions, not 26!

How do we do it? By using the same strategy as is used in the children’s game of 20 questions. It is a search problem too: a search to find the name of a famous person out of thousands, possibly millions of people. And yet it does not take thousands or millions of questions to win. It is all in the questions that you ask.

Played well, you do not ask as the first question “Is it Einstein?”, the equivalent of “Is it E?”. Rather you first ask: “Are they female?” Why is that a better question? Because it rules out half the possibilities whatever the answer. Start with a million possibilities, always ask a halving question, and you quickly narrow down to one. Try it – start with 1 million and see how many times you have to halve it before you get down to 1:

1 000 000 people are possible to start with
500 000 people left after 1 question
250 000 people left after 2 questions
125 000 people left after 3 questions
62 500 people left after 4 questions
31 250 people left after 5 questions
15 625 people left after 6 questions
7 813 people left after 7 questions
3 907 people left after 8 questions
1 954 people left after 9 questions
977 people left after 10 questions
489 people left after 11 questions
245 people left after 12 questions
123 people left after 13 questions
62 people left after 14 questions
31 people left after 15 questions
16 people left after 16 questions
8 people left after 17 questions
4 people left after 18 questions
2 people left after 19 questions
1 person left after 20 questions

(We round up when the split is uneven, since the answer might be in the larger half.)

So starting with a million people, if you always ask halving questions, it only takes 20 questions (in the worst case) to find the person they were thinking of.

The equivalent halving question for the alphabet is “Is it before N?” Keep asking questions like that about letters in the remaining portion of the alphabet, rather than asking if it was a specific famous person, and you get down to a single letter in no more than 5 questions.


26 letters are possible to start with
13 letters left after 1 question
7 letters left after 2 questions
4 letters left after 3 questions
2 letters left after 4 questions
1 letter left after 5 questions

Tweak it based on letter frequencies and you can do even better for some letters.

Algorithms that halve the size of the problem at each step like this are called logarithmic algorithms. The number of steps no longer increases proportionally with the amount of data, but with the logarithm of the amount of data. If we have n pieces of data to search, it takes log(n) steps to find the answer. This is because the logarithm (to base 2) of a number is just the number of times you can halve that number before you get to 1, so the notation log(n) is just a maths way of counting the halvings. As we saw, logarithmic algorithms like this are massively more efficient. When you double the amount of data (say from half a million pieces of data to search to a million), you only need one extra step to find the answer (rather than half a million more). Our new halving algorithm has logarithmic efficiency, which we write O(log n). O(log n) algorithms are in general massively faster than O(n) algorithms.
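The halving strategy is just binary search. Here is a minimal Python sketch (our own illustration, not code from any real system) that asks “Is it before this letter?” questions and counts the blink/no-blink answers needed:

```python
# Binary search over the alphabet. Each "question" is of the form
# "Is it before <letter>?" and needs only a blink/no-blink (True/False)
# answer. We count the questions asked.

import string

def find_letter(secret):
    """Binary search for one letter; returns (letter, questions asked)."""
    letters = list(string.ascii_uppercase)
    questions = 0
    while len(letters) > 1:
        mid = len(letters) // 2
        questions += 1                  # ask "Is it before letters[mid]?"
        if secret < letters[mid]:
            letters = letters[:mid]     # yes: keep the first half
        else:
            letters = letters[mid:]     # no: keep the second half
    return letters[0], questions

print(find_letter("Z"))  # ('Z', 5): even the worst case takes 5 questions
```

Running it over the whole alphabet confirms that no letter ever needs more than 5 questions.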

Bauby should have got the helper to ask such halving questions. Think about it: 5 questions at worst rather than 26, multiplied up by all the letters in his book. If only he had known some computer science, how much easier his life would have been. How much easier it would have been to write the book. Now we have worked out a method, we can think about how we could automate it with suitable technology. How wonderfully computer science can improve lives.

But wait a minute. Perhaps the computer scientist would have ensured his book was never completed and his life was even more of a hell. We did not start with technology, but we did start with computer science. Perhaps we should have started with the person. We have been counting questions asked, but asking them is the job of the helper, for whom it may be tedious but not difficult. What if blinking is a great effort for him? His solution involved him blinking only once per letter. Ours requires him to blink up to 5 times. Multiply that by a whole book and it is 5 times harder for him if blinking is hard. Furthermore, his solution is easy for anyone walking in to understand. Ours is complex and might need some explaining before a visitor understands it, and Bauby is not going to be the one to do the explaining.

It worked for him

One thing is certain about Bauby’s solution – it worked for him. He wrote a whole book that way after all. Perhaps the helper did more than just write down his words. Perhaps they opened the curtains, talked to him about the outside world or just provided some daily human warmth. Perhaps the whole point of writing the book was that it gave him an excuse to have a person there to communicate with all the time. The communication method would not then be serving the needs of the book, but the book serving a deep need for direct communication with a person. Replace the human and perhaps you have replaced the thing that was actually keeping him alive. In extreme usability situations such as this, the important thing is that the user really is involved throughout. It is they who ultimately have to adapt the available resources into something that works for them, not only technically but also emotionally and socially. Otherwise we may devise a “solution” that is in theory wonderful and in practice hell on earth. Complexity of algorithms is important, but computer scientists have to think about much more than just maths and efficient solutions.

Paul Curzon, Queen Mary University of London (updated from the archive)

Further Reading

Find out about what life is like with locked in syndrome by reading: The Diving Bell and the Butterfly by J-D Bauby. Fourth Estate.

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


Beatrice Worsley: pioneering programmer

Many of the earliest programmers were women, just as before them many of the human computers were women. It was only later that somehow programming changed in people’s heads to be something men did. One of the earliest was the Mexican-born Canadian Beatrice Worsley. She wrote the first program to be run on the EDSAC computer and gained one of the earliest pure Computer Science PhDs on programming.

Beatrice was outstanding at both Maths and Science at school, and at university she demonstrated it by coming top of the class in many of the subjects she took during her degree, gaining grades corresponding to a First every year. She spent the war as a Wren in the Canadian Navy (including working as a researcher for the Navy) before going back to university to do a Masters at MIT. There she completed a project that involved surveying virtually all the computing devices of the time (whether completed or planned). Her survey not only included the very primitive computers being built immediately after the war, but also the earlier mechanical calculators, such as the ones IBM made its name creating, and differential analysers. The latter are analogue computing devices that were precursors to today’s digital machines but work in a completely different way. Rather than converting data into 0s and 1s, they manipulate physical equivalents of actual values, represented by wheels and discs. Unlike the digital computers to come, which could be applied to any problem, they solved just one kind of mathematical problem, so were not flexible in the way modern computers are.

This Masters thesis set her up for her future career as a computer scientist: she had realised by then that computing was the future. Initially, she got a job helping run IBM mechanical calculators in Toronto. As part of this job she actually built her own working differential analyser out of the children’s construction set Meccano.

At this point, Maurice Wilkes at Cambridge University was building a new kind of machine called EDSAC. This differed from the very first digital computers in that it stored its programs: the program it followed was just data within computer memory, not something physically hard-wired into the machine. This followed the ideas first spelled out by Alan Turing in his description of a Turing Machine and developed by John von Neumann as the von Neumann architecture. It had the basic design we now think of as the basis of modern computers.

Beatrice was sent to Cambridge to find out more about it while it was being constructed, and got involved in getting it to work. It successfully ran its first stored program on 6 May 1949. That first program calculated a table of squares of numbers, and it was written by Beatrice, making her the first programmer of what is arguably the first fully-fledged computer as we now know it (as opposed to a demonstration prototype). As a result she also wrote one of the earliest research papers about programs, describing this and other early programs and how they worked on EDSAC. She stayed on in Cambridge to do a PhD on the topic, ultimately becoming one of the first people to gain a Computer Science PhD – probably the first PhD about digital computers as we now know them. It built on the work of computing giants Alan Turing and Claude Shannon in discussing how programs could run efficiently not just on idealised machines such as Turing Machines, but on real ones like EDSAC. She also wrote more scientific programs for EDSAC, including one to correct pendulum measurements at sea, presumably drawing in part on her time as a Wren researcher.

Back in Canada, she also helped design an early programming language, Transcode, and co-wrote a sophisticated compiler for it based on a deep understanding of the hardware. This was because the Ferranti computer at the University of Toronto was incredibly tricky to program in its machine code. Worsley could do it, but many others struggled. Transcode, based on an idea by John Backus, in effect simulated an easy-to-program computer running on top of it. As a result hundreds of people learnt to program it. Linked to this, and starting with her PhD thesis, she pioneered the use of programming libraries to make programming easier throughout her career.

Beatrice Worsley was not the very first person to write a program, or to get a Computing-linked PhD, but she was certainly one of the first, as well as one of the first people to work professionally as a computer scientist. She was certainly a computing pioneer whose programs made history and whose programming research made a solid contribution to the nascent discipline of programming. Computer Science certainly wasn’t a man’s world at the start, and there is no reason why it should be now.

Paul Curzon, Queen Mary University of London



Why ‘Correct’ Computers Can Be So Wrong

Have you ever followed a recipe perfectly, only to find the cake tastes… well, a bit weird? You followed every rule, measured every ingredient, and checked every step. The recipe said you did it right. But your taste buds, and your grandma’s disappointed sigh, told you something was definitely wrong. In the world of computer science, we have this problem all the time. We call it the difference between verification and validation.

Verification is like checking the recipe. Did you follow the rules exactly as written? A computer program called a “verifier” can look at the code a programmer wrote and check it against the “recipe” (the formal specification). If the code matches the recipe, it gets a big green checkmark. It’s officially “correct.”

But validation is like the taste test. Does the final cake actually taste good? Does it make people happy? This is about asking: did we make the right thing?

The Case of the Annoying Vending Machine

Imagine a new high-tech vending machine. Its specification (its recipe) says: “If a user inserts money and the selected item is out of stock, return the money.”

A programmer builds the machine. A verifier checks it. You put in a pound, press “B2” for salt and vinegar crisps, and it’s out of stock. The machine correctly spits your pound coin back out onto the floor.

Is the machine correct? According to the recipe, yes! Verification passed. Is it the right machine? No! It’s incredibly annoying!

Your expectation as a human was probably completely different. You expected it to say, “Sorry, B2 is out of stock. Please choose something else.” You expected it to hold onto your money and let you make another choice. Cheese and Onion Crisps are just as good. The machine followed the rules, but it completely misunderstood what you, the user, actually wanted. It was correct, but wrong.

This is the frustrating gap where most “bad” software lives. It follows its own strange rules perfectly but seems to have no common sense about what people actually expect.
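To make the gap concrete, here is a hypothetical Python sketch of the vending machine. The function names and behaviour strings are our own invention for illustration; both functions satisfy the written specification for the out-of-stock case, but only one matches what users expect:

```python
# Two vending machines. Both satisfy the spec "if the selected item is
# out of stock, return the money" for the out-of-stock case, so both
# pass verification. Only one passes the human "taste test".

def verified_machine(in_stock):
    """Follows the spec to the letter."""
    if not in_stock:
        return "return money"    # spec satisfied... coin on the floor
    return "dispense item"

def expected_machine(in_stock):
    """What users assumed: hold the money and ask again."""
    if not in_stock:
        return "hold money, ask for another choice"
    return "dispense item"

# Same situation, same green tick from the verifier, very different experience:
print(verified_machine(False))   # return money
print(expected_machine(False))   # hold money, ask for another choice
```

Verification can only check the code against the spec; it cannot tell you that the spec itself was the wrong thing to ask for.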

Teaching the Genie to Read Minds (Almost!)

So how do we fix this? Our research introduces an idea called Semantic Expectation Logic (SEL). It sounds complicated, but the core idea is simple: what if we could teach the computer to understand not just its own recipe, but also the collection of fuzzy, unwritten expectations that people have in their heads?

Think of it like upgrading from a rule-following genie to a mind-reading one.

A rule-following genie gives you exactly what you wish for. If you wish to “be on a flight”, it might put you on the wing of a 747. Technically correct, but not what you expected!

SEL is our attempt to build a “mind-reading” genie for software. We do it in three steps:

  1. Write Down the Rules of the World: We tell the computer about fundamental truths. For the vending machine, a rule might be “Money can only be returned after a transaction is finished or explicitly cancelled by the user.”
  2. Ask People What They Expect: Instead of guessing, we show people what the machine does in different situations. “The machine just spat your coin on the floor. Is that what you expected?” We collect all these “yes” and “no” answers, along with why. We call this “mining expectations”.
  3. Check for Gaps: Our SEL tool then looks at what the machine actually does and compares it to both the “Rules of the World” and the “Mined Expectations”.

It might find:

  • A Bug: The machine sets itself on fire. (It violates a Rule of the World: “Vending machines should not set themselves on fire.”)
  • A Validation Failure (The “Aha!” Moment): The machine follows its own rules perfectly but violates a strong expectation. Our tool would flag this: “Warning! 98% of people expected the machine to ask for another choice, but it just returned the money. This is a Surprise!”

We even created a “Surprise Factor” metric that gives a score from 0% to 100% for how surprising a piece of software is. A low score means the software behaves as people expect. A high score means you’ve built a technically “correct” but incredibly annoying vending machine.
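A drastically simplified version of such a score can be sketched in a few lines. This is our own illustration, assuming the mined expectations are just yes/no votes on “is this what you expected?”; the real SEL metric is more sophisticated:

```python
# A minimal "Surprise Factor" style score: the percentage of people
# who did NOT expect the behaviour they were shown. votes is a list of
# booleans, True meaning "yes, that's what I expected".

def surprise_factor(votes):
    """Percentage of people surprised by the observed behaviour."""
    surprised = sum(1 for expected in votes if not expected)
    return 100 * surprised / len(votes)

# 98 of 100 people did NOT expect the coin spat onto the floor:
votes = [False] * 98 + [True] * 2
print(surprise_factor(votes))  # 98.0 - a very surprising machine
```

A low score means the software behaves as people expect; a high score flags a validation failure even when verification passes.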

By making expectations a central part of the testing process, we can start building software that isn’t just correct according to its own bizarre logic, but is also correct in a way that actually makes sense to the people using it. We can build the cake that not only follows the recipe but also tastes great.

Vasileios Klimis, Queen Mary University of London

More on …

Getting Technical


Johnny Ball’s ‘Two Wrongs Do Make a Right’ Trick

(or how to stack properly)

We present a CS4FN exclusive (from our archive): a mathematical card trick contributed by TV celebrity mathematician, Johnny Ball. The BAFTA-winning father of radio DJ Zoë Ball is famous for his TV shows like Think of a Number, Think Again and Johnny Ball Explains All, plus numerous brilliant books on maths and science.

The trick is as follows, in Johnny’s words…

There is a very simple card trick that I often use, as I can get to the explanation very quickly. In fact, a mathematically minded audience is already some way to working out why it works as soon as it does work. It’s called the Two Wrongs Do Make a Right Trick.

Just memorize the top card of a pack; let’s assume it’s the 4 of Hearts. Hand the pack to someone and ask them to choose any smallish number (eg between 1 and 10), then to deal that number of cards face down. Then announce the next card dealt will be the 4 of Hearts. Oops – it isn’t? Get them to place all the dealt cards on top of the pack again. Now ask them to choose a number bigger than the first one, but not so big that we run out of cards (eg between 10 and 20). They now deal that number face down, and the next card dealt will be the 4 of Hearts. Oops – wrong again. Let’s now see if we can make two wrongs equal a right.

Place the cards back on top. Now ask for a last chance. Take the smaller number from the larger. They now deal that number face down, and the next card dealt is the 4 of Hearts. Success!

Why does it work?

Hidden in this distracting presentation, the tale of your mistakes played up for laughs, is some clever mathematics. It uses a magic technique called a stack, which is a set-up sequence of cards in a known order. Sometimes magicians will put a pre-defined stack of cards, for example four aces, on top of the deck, do a false shuffle, and then be able to deal a winning poker hand because they know exactly what cards will be dealt. In this trick your ‘mistakes’ create the stack for you.

X stacks the deck

Starting with your known card on top (that’s your secret card), your spectator chooses a number from 1 to 10. Let’s call that number X. You deal down X cards so that the first card down is the secret card, next down is the second from the original top of the pile and so on. At the end of your count of X there are X cards on the table, with the secret card on the bottom. Oops, mistake number one. Don’t worry, you can regroup. Just place the mistaken card back on top of the pile of undealt cards, then stack those X dealt cards back on top, and your deck now looks like this from the top down: (X-1) indifferent cards, the secret card, and the rest of the pack.

Y marks the spot

Second try – this time you ask for a bigger number, a number between 10 and 20. Let’s call that new number Y. Counting and dealing down Y cards, where number Y is bigger than number X, means that once Y cards are on the table you can reveal mistake two, oops again. But let’s ‘think of the numbers’ of the cards on the table. That pile has Y cards, and because we stacked the deck in the first phase the secret card is now at position X from the bottom of these Y cards, meaning there are Y-X cards on top of it. Exactly the right position for the finale, where you subtract X from Y and count down (oh wait, that’s another mathematical TV programme).

Think again – with some real numbers!

The algebra of Xs and Ys says it works, but to see that it is true let’s test it with some specific numbers, say X=5 and Y=15. The first deal stacks the secret card at position 5 from the top. We then do the second deal of 15 cards, reversing as we go. Onto the table goes 1, 2, 3, 4, SECRET, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15. Pick up this pile and put it on top of the pack. The secret card is now in a new stack. From the top down it’s at 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, SECRET, rest of pack. Now count down 10 cards from the top of this new stack (that’s Y-X = 15-5 = 10): dealing 10 cards from the final stack gives us 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, and the next card is the SECRET card! Magic.
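If you would rather test than trust the algebra, the whole trick can be simulated. Here is a Python sketch (our own illustration) that tries every allowed pair of numbers; it assumes each mistaken reveal card is placed back on the undealt pile before the dealt pile goes back on top, as described above:

```python
# Simulate the Two Wrongs Do Make a Right trick for every allowed
# choice of X (1-10) and Y (11-20), checking the finale always works.

def trick_works(x, y, deck_size=52):
    deck = ["SECRET"] + [f"card{i}" for i in range(2, deck_size + 1)]
    # First deal: x cards (pile order reversed), wrong reveal goes back first.
    dealt, deck = deck[:x], deck[x:]
    reveal, deck = deck[0], deck[1:]          # the (x+1)th card: wrong!
    deck = dealt[::-1] + [reveal] + deck      # dealt pile back on top
    # Second deal: y cards (reversed again), wrong reveal back again.
    dealt, deck = deck[:y], deck[y:]
    reveal, deck = deck[0], deck[1:]
    deck = dealt[::-1] + [reveal] + deck
    # Finale: deal y - x cards; the next card should be the secret.
    return deck[y - x] == "SECRET"

print(all(trick_works(x, y) for x in range(1, 11)
                            for y in range(11, 21)))  # True
```

Testing all 100 cases agrees with the proof: the trick never fails.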

Proving algorithms work

A magic trick like this is what computer scientists call an ‘algorithm’: steps that if you follow them guarantee some result – here that your secret card is revealed. What we showed above is a logical, mathematical proof that the trick works. Now, programs are just algorithms (written in a programming language) so in the same way as we proved our trick works we can prove logically that a program we write always works too!

Even if you do such a proof, it is also a good idea with software to do lots of testing too – putting in some values to check it works for specific cases (lots more than the one example we checked). Doing that first can also help you come up with the proof.

Finale: the stack pays off

Stacks, in this case a series of cards in a given order that you deal from the top of the pack, are also used in computer science, though the meaning is slightly different. There, a stack is a data structure, a place to store information, that works like a stack of chairs. You can add a chair to the top of a stack of chairs, or take one from the top, but not from the middle, and it’s the same with adding and removing data in a computer science stack.

Computer scientists like to have ways of thinking of data, normally numbers in the computer memory, in such a way that they know where things are so they can use them when they need them. A stack is an example of what computer scientists call an abstract data type, a data structure with rules for how you add and remove data to and from it. The stack is a ‘container’ of data (or of nodes, if the things stored are more complex than just plain numbers and link off to other parts of memory). Computer scientists do two basic things to stacks: they ‘push’ and they ‘pop’. In a push a new element is added to the top of the stack, and all the elements underneath effectively move down a place. It’s like adding those cards on top of the secret card. In a pop the top element is removed from the stack, and up jump all the elements underneath, like dealing a card from the top of the deck. Your laptop and your mobile phone will have a stack somewhere in them. The software will be pushing and popping and doing lots of other clever computer science tricks to keep track of the information and how it’s moving around. The programmer will probably use even smarter things like stack pointers: a note for the computer telling it where the top of the stack currently is.
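In Python, for instance, a humble list can act as a stack, with the end of the list playing the part of the top (the card names below are just for illustration):

```python
# A stack sketched as a Python list: append is push, pop is pop.
# The end of the list is the top of the stack.

stack = []

stack.append("6 of Clubs")    # push: add to the top
stack.append("SECRET card")   # push: now on top, the rest "move down"

top = stack.pop()             # pop: remove the top element
print(top)                    # SECRET card
print(stack)                  # ['6 of Clubs']
```

Only the top is ever touched: exactly the chairs-on-a-stack rule.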

One too many?

One of the nastiest (and easiest to make) errors in computer software can come from something called a stack overflow. A program pops from or pushes to the top of the stack. That position in memory is kept track of using the ‘stack pointer’, which is just like a sign pointing to where the top of the stack is. So we push and pop from where the stack pointer is pointing, changing the position of the pointer as we do. A stack normally has a fixed maximum size (like hitting the ceiling with a stack of chairs). A stack overflow is where you shoot off the end of the stack: you try to push a value onto the stack when it is already full. It is like a magician counting past 52 cards. It can only end in tears. Sometimes this nasty trick is used by computer hackers to gain control of a computer system. Once the stack has been overshot, the computer surrenders. The software has literally lost its place, and the hackers can move in and take over.

Fortunately maths can come to the rescue by creating tests to ensure that the stack doesn’t pop its clogs or push up the daisies. Even better you can prove that the program controlling the stack (just an algorithm) never lets you overflow the stack in the same way that we proved our trick always works.
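Such a run-time guard can be sketched as a bounded stack whose push refuses to overflow. This is a simple Python illustration of the idea; the class, its names and the 52-item capacity (a nod to the deck of cards) are our own invention:

```python
# A bounded stack: push checks for overflow before adding, pop checks
# for underflow before removing, so neither can corrupt memory.

class BoundedStack:
    def __init__(self, capacity=52):
        self.capacity = capacity
        self.items = []        # end of the list = top of the stack

    def push(self, item):
        if len(self.items) >= self.capacity:
            raise OverflowError("stack overflow: the stack is full")
        self.items.append(item)

    def pop(self):
        if not self.items:
            raise IndexError("stack underflow: the stack is empty")
        return self.items.pop()

s = BoundedStack(capacity=2)
s.push("ace")
s.push("king")
# s.push("queen") would raise OverflowError rather than overshoot
print(s.pop())  # king
```

Better still, as the article says, you can prove the guard condition always holds, so the error branch is never even reached.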

Let’s pop over and give the last word to Johnny

Ever ready to make things simple to understand and easy to do, Johnny points out:

You can reduce the trick to choosing just 1 and then 2 with just three cards, and show that the manoeuvring moves the top card down by the difference in the numbers chosen.

Now why didn’t we think of that?

Peter W McOwan, Paul Curzon, Queen Mary University of London

but critically of course (and also inspiring us about Maths since we were kids), Johnny Ball



Granite: games for good?

Are video games good or bad for you? One game certainly did a lot of good for Bruce Mouat. He started playing a video game called Granite when he was only 8. It is a curling game. As an adult he leads the GB curling team at the Olympics and is one of the best players in the world, having won multiple world gold medals.

Curling is a sport requiring physical skill, strategy and nerve. It involves sliding polished stones of super-hard granite down an ice track. The aim is to land your stones nearest the centre of a series of concentric circles 40 or so metres away. Despite the distance, millimetre accuracy is often needed. It is called curling because the stones do not run in a straight line but curve their way into the target, in part controlled by sweepers who sweep the ice in front of the stones to alter the curl of the path as well as the distance.

It is an incredibly strategic sport, with the players having to think several moves ahead, but then having to put that strategy into practice with their skill. In that way it is very much like snooker. That is where the video game came in. Obsessed with the video game as a child, Mouat became outstanding at the decision making in the sport by playing it against his granny. The reason the video game helped is that it is more than just a game: it can just as well be called a simulation, like a flight simulator. Flight simulators are considered so accurate that pilots train on them to gain the hours needed to become skilful at controlling a plane. That is what the video game Granite does for curling.

To work as a simulator it needs what is called a physics engine. The programmers have to code the actual physics of polished granite stones sliding on ice and of ice being swept, covering all the small differences made by ice temperature, the speed of the sweeping and so on. The programmers, who were married curlers themselves, made the physics so accurate that everything in the game behaves as closely as possible to the way it would in reality. That means you can really learn strategy by playing it, even if you then still have to back that up with hours of practice playing real curling, to gain the physical skill to land stones exactly where you want them. It can also, of course, help inspire people to start playing the sport for real.

In the 2026 Olympic final, Bruce and his team came away with their second silver medal, won in consecutive Olympic Games, this time beaten by Canada. Even in losing the final match, Bruce Mouat made some absolutely amazing shots, both in terms of strategy and in perfect physical skill under extreme pressure.

So playing a video game can certainly be for good: when the physics is good enough, it acts as a simulator and so can be a way to put in the hours of practice needed to be world leading, while just having fun. So have a go at playing Granite and maybe you will also gain a love of the game, and of the sport, and one day become a Winter Olympian yourself. Or if you love some other game or sport and learn to program, perhaps you could code a perfect simulator for it, giving future players who enjoy your game an edge in the sport.



The Proof of Love

A pseudocode poem in a red heart:
Violets are violet
if roses are red and violets are blue
then
    Life is sweet
else
    I love you
Image by CS4FN

For Valentine’s Day we created this card with a pseudocode love poem. But what does it mean? Is it a statement of love or not? Now that Valentine’s Day has gone, and logic and rational thinking are starting to reassert themselves, here is a logical argument about what it means: an informal proof of love.

We are using our own brand of poetic pseudocode here, and what matters is the formal semantics of the language (ie its mathematical meaning).

What we will do is simplify the code to code that is mathematically equivalent but far simpler, explaining the semantics as we do. We will be reasoning in a similar way to doing algebra, just replacing equals with equals.

First let’s write the poem out in a more program-looking notation, making clearer what are statements (actions) and what are expressions (things that evaluate to a value). In the English reading version we are using the verb TO BE in both ways, which confuses things a little.

Let’s make clear the difference between variables (that hold values) and values themselves. The colours are all values (data that can be stored) – so we will write them in capitals: RED, BLUE, VIOLET. The flowers are variables (places to store values) so we will leave them in lowercase: violets, roses. Then the title becomes an assignment (written more formally as :=). It sets the value of the variable violets to the value VIOLET. (We are pedantic and believe the colour of violet flowers is violet, not blue!)

What comes after the keyword IF is an expression – it evaluates to a boolean (TRUE value or FALSE value) so is referred to as a boolean expression. You can think of boolean expressions as tests – true or false questions. We will use == to indicate a true/false question about whether two things are the same value.

The other two lines are statements – think of them as print statements that print a message as the action they do.

The algorithm of the poem then is:

violets := VIOLET;
IF ((roses == RED) AND (violets == BLUE))
THEN
PRINT "Life is Sweet"
ELSE
PRINT "I love you"


Now let us simplify the algorithm to an equivalent version a step at a time. In the assignment of the first line, the variable violets is set to value VIOLET and then not changed, so we can substitute the value (a colour in uppercase) VIOLET for variable violets (in lowercase) where it appears, and remove the assignment. This gives a simpler version of the algorithm that does exactly the same:

IF ((roses == RED) AND (VIOLET == BLUE))
THEN
PRINT "Life is Sweet"
ELSE
PRINT "I love you"

Now in the boolean expression, we have the subexpression

(VIOLET==BLUE)

but VIOLET and BLUE are two different values, and the expression is asking whether they are the same or not. It evaluates to TRUE if they are the same and FALSE if they are different. It is therefore equivalent to FALSE, so the whole expression becomes:

(roses==RED) AND FALSE

The original algorithm is equivalent to

IF ((roses == RED) AND FALSE)
THEN
PRINT "Life is Sweet"
ELSE
PRINT "I love you"

Now, for any boolean expression X, (X AND FALSE) will always evaluate to FALSE, because AND needs both sides to be TRUE for the whole to be TRUE. The whole boolean expression therefore simplifies to FALSE and the algorithm to:

IF (FALSE)
THEN
PRINT "Life is Sweet"
ELSE
PRINT "I love you"

Notice how this means it does not matter what colour the roses are at all. Whether roses are red, white, blue or something else, the algorithm result will not be affected.

The semantics of an IF statement is that it evaluates exactly one of its two statements. Where its test (boolean expression) evaluates to TRUE, it executes the first THEN branch (here the Life is Sweet branch) and ignores the other ELSE branch (the I love you branch). Where its test evaluates to FALSE it instead ignores the THEN branch completely and executes the ELSE branch.

Here the test is FALSE, so it ignores the THEN branch and is equivalent to the ELSE branch. The algorithm as a whole is exactly equivalent to the far simpler algorithm:

PRINT "I love you"


Going back to the poem, it is therefore logically completely equivalent to the one-line poem:
I Love You

We have given a mathematical proof of love! The idea though is that only computer scientists with a formal semantics (ie maths meaning) of the pseudocode language used would see it!
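If you would rather check the reasoning by brute force than by algebra, here is the poem as a short runnable Python sketch. The colour strings, and the spare colour WHITE, are just illustrative values we have added: whatever colour you pick for roses, the result is the same.

```python
# Colour values (WHITE is just an extra illustrative colour)
RED, BLUE, VIOLET, WHITE = "RED", "BLUE", "VIOLET", "WHITE"

def poem(roses):
    violets = VIOLET          # the title's assignment: violets := VIOLET
    if roses == RED and violets == BLUE:
        return "Life is Sweet"
    else:
        return "I love you"

# Whatever colour the roses are, the answer is the same
for roses in (RED, WHITE, BLUE):
    print(poem(roses))        # prints "I love you" every time
```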

More on …

Paul Curzon, Queen Mary University of London



Music AI Kriss Kross Puzzle

A Kriss Kross Puzzle 
Puzzle design credit: https://puzzlemaker.discoveryeducation.com/criss-cross/


Download and print the puzzle

Answers are at the bottom of https://cs4fn.blog/bitof6 where you can also read a copy of the magazine articles about Music and Artificial Intelligence.

Clues

  • 1. _ _ _ _ _ a piece of text with musical symbols instead of letters that tells a performer which
    notes to play, also a piece of music that accompanies a film (5 letters)
  • 2. and 10. _ _ _ _ _ _ (6 letters) separation is when computer scientists use AI to take a piece of music
    and split it into its _ _ _ _ _ (5 letters) – read more about this in ‘Separate your stems’
  • 3. The _ _ _ _ _ _ is the main part of the tune you might sing along to (6 letters)
  • 4. A piece of music is made up of lots of different _ _ _ _ _ (5 letters)
  • 5. We measure how loud something is in _ _ _ _ _ _ _ _ (8 letters)
  • 6. A sequence of instructions that tell a computer what to do _ _ _ _ _ _ _ _ _ (9 letters)
  • 7. If you halve the length of a guitar string the note is an _ _ _ _ _ _ (6 letters)
  • 8. A guitar-like harp-lute from Ghana _ _ _ _ _ _ _ _ (8 letters) – read more about this in ‘The day the music didn’t die’
  • 9. How high or how low a musical note is _ _ _ _ _ (5 letters)
  • 10. (see 2.)

Jo Brodie, Queen Mary University of London


More on…

We have LOTS of articles about music, audio and computer science. Have a look in these themed portals for more:


The Music and AI pages are sponsored by the EPSRC (UKRI3024: DA EPSRC university doctoral landscape award additional funding 2025 – Queen Mary University of London).



How machines “hear” music

Listen to a song and you might tap your feet. Computers can “listen” to music but they don’t have feet to tap! They don’t have ears or a brain either so they don’t “listen” in the way that you do. They use maths.

Turning sound into numbers

A computer is just a machine that does calculations on numbers. It doesn’t really “hear” music. To it everything is just numbers. Its programs convert sounds into numbers that it can do maths with.

When someone plucks a guitar, the string vibrates (wobbles back and forth). That sends a pulse of energy (a sound wave) through the air. Our ears detect that pulse. A computer measures the sound wave. A song has lots of different sound waves mixed together, and they can all be described with numbers that a computer measures.

One measurement is pitch – how high and squeaky or how low and rumbly the sound is. A guitar string playing a higher note vibrates faster than a lower note, sending its energy pulses into the air more quickly. We measure that as the number of sound waves arriving each second (called the frequency).

A wave that starts red then becomes blue as the waves squash together
If we could see a sound wave it might look a bit like this. The red sound wave has a lower frequency than the blue sound wave, where the distance between each ‘wobble’ narrows. Image by CS4FN

The red and blue wavy line shows what a sound wave might look like if we could see it. The blue part of the wave is vibrating faster than the red, so has a higher frequency. Humans hear it as a higher note; computers ‘hear’ it by sensing more sound waves each second.

A wave that starts red then becomes blue as the waves squash together. A black wave matches it exactly apart from being taller.
Image by CS4FN

Another measurement is the volume, or how loud the sound is. That relates to how hard the guitarist plucked the string so how ‘tall’ the sound wave is. The wavy black line has the same frequency as the red and blue wave but the black sound wave is bigger: it has a larger amplitude. Humans hear it as louder, computers record bigger numbers.

Once a computer has recorded the measurements as numbers, it can then do maths on the numbers. That is where things get interesting. Programs can then change the numbers to make new and different sounds. Or they can use algorithms to generate their own numbers, then play them as music!
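As a minimal sketch of what those numbers look like (the sample rate, frequency and amplitude below are just example choices of ours, not from the article): to a computer, one second of a pure 440 Hz tone is simply a list of measurements.

```python
import math

SAMPLE_RATE = 8000   # measurements per second (an example choice)
FREQUENCY = 440.0    # pitch: waves per second (Hz)
AMPLITUDE = 0.5      # volume: how 'tall' the wave is

# One second of a pure tone is just a list of numbers to the computer
samples = [AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * t / SAMPLE_RATE)
           for t in range(SAMPLE_RATE)]

# Doubling FREQUENCY would give a note an octave higher;
# doubling AMPLITUDE would make the same note louder.
print(len(samples))   # 8000: one number per measurement
```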

How loud?

Volume is measured in decibels (dB for short). A lower number means the sound is quieter, a higher number means it is louder. The loudest a UK car is allowed to be is 70 dB.

How loud do you think these sounds are?

Table with volumes
How Loud?
Sound    Volume
Car 70dB
Doorbell ?
Jet plane taking off ?
Breathing ?
Vacuum cleaner ?
Balloon Popping ?
Whispering ?
Rainfall ?
A robin singing ?
Loudest shout ever by a teacher ?

Answers at https://cs4fn.blog/bitof6/

Jo Brodie and Paul Curzon, Queen Mary University of London




All the notes?

A boy with headphones surrounded by swirling music
Boy listening to music image by Olena from Pixabay

There are infinitely many musical notes, just like there are infinitely many colours. That matters if you are designing a new digital musical instrument. You have a lot more choice than on a piano!

Octaves 

Most Western music divides the range of notes equally into groups of 12 notes (‘octaves’) that musicians use. The gap between any two adjacent notes sounds the same. This is known as equal temperament tuning.

Activity: Play the 12 notes 

You can play the 12 notes of an octave on the online piano https://bit.ly/pianoCS4FN. Play Middle C (marked with a red dot), then press each key in turn, including the black keys. After 12 keys you have played all 12 notes of an octave.

Music as colour

The rainbow picture (below) shows there are many colours to pick from not just red, orange, yellow… A set of crayons would be enormous if it included every possible colour! Instead you get a selection just as in the picture: we picked 3 colours equally spaced apart: red, yellow and blue. Western music does the same thing with sound, picking 12 notes that sound equally spaced.

A spectrum of colour running from red to blue with red, yellow and blue selected equal distances apart
Image by CS4FN

There are lots of other notes that you could sing within an octave. Traditional music often uses different sets of notes. The Arabic system divides an octave into 24 notes, for example. They have more ‘sound crayons’ to play with! You could even start singing on a low note and continually raise your pitch until you reached the higher note, like sweeping through every colour in a musical rainbow.

If you sing a note, then sing the same note an octave higher (eg Middle C then the next C up), your vocal cords vibrate twice as fast! The frequency of the top note is double that of the lower one.
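That doubling is exactly where the 12 equally spaced notes come from: in equal temperament, each step up multiplies the frequency by the 12th root of 2, so 12 steps double it. A sketch in Python, taking Middle C as roughly 261.63 Hz (a conventional tuning value, our assumption here):

```python
# Equal temperament: each of the 12 steps in an octave multiplies
# the frequency by the 12th root of 2.
SEMITONE = 2 ** (1 / 12)

middle_c = 261.63  # Hz: a conventional frequency for Middle C
note = middle_c
for step in range(1, 13):
    note *= SEMITONE
    print(f"step {step:2d}: {note:7.2f} Hz")

# After 12 steps the frequency has doubled: the next C up the piano.
print(round(note / middle_c))  # 2
```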

Jo Brodie and Paul Curzon, Queen Mary University of London

