The Sweet Learning Computer: Learning Ladder

The board for the ladder game with the piece on the bottom rung
The Ladder board. Image by Paul Curzon

Can a machine learn from its mistakes, until it plays a game perfectly, just by following rules? Donald Michie worked out a way in the 1960s. He made a machine out of matchboxes and beads called MENACE that did just that. Our version plays the game Ladder and is made of cups and sweets. Punish the machine when it loses by eating its sweets!

Let’s play the game, Ladder. It is played on a board like a ladder, with a single piece (an X) placed on the bottom rung. Players take it in turns to move the piece 1, 2 or 3 places up the ladder. You win if your move takes the piece to the top of the ladder, reaching the target. We will play on a ladder with 10 rungs as on the right (but you can play on larger ladders).

To make the learning machine, you need 9 plastic cups and lots of wrapped sweets coloured red, green and purple. Spread out the sheets showing the possible board positions (see below) and place a cup on each. Put coloured sweets in each cup to match the arrows: for most positions there are red, green and purple arrows, so you put a red, green and purple sweet in those cups. Once all cups have sweets matching the arrows, your machine is ready to play (and learn).

The machine plays first. Each cup sits on a possible board position that your machine could end up in. Find the cup that matches the board position the game is in when it is its go.  Shut your eyes and take a sweet at random from that cup, placing it next to the cup. Make the move indicated by the arrow of that colour. Then the machine’s human opponent makes a move. Once they have moved the machine plays in the same way again, finding the position and taking a sweet to decide its move. Keep playing alternately like this until someone wins. If the machine ends up in a position with no sweets in that cup, then it resigns.

The possible board positions showing possible moves with coloured arrows.
The 9 board positions with arrows showing possible moves. Place a cup on each board position with sweets corresponding to the arrows. Image by Paul Curzon

If the machine loses, then eat the sweet corresponding to the last move it made. It will never make that mistake again! Win or lose, put all the other sweets back.

The initial cup for board position 8, with a red and purple sweet.
The initial cup for board position 8, with a red and purple sweet. Image by Paul Curzon

Now, play lots of games like that, punishing the machine by eating the sweet of its last move each time it loses. The machine will play badly at first. It’s just making moves at random. The more it loses, the more sweets (losing moves) you eat, so the better it gets. Eventually, it will play perfectly. No one told it how to win – it learnt from its mistakes because you ate its sweets! Gradually the sweets left encode rules of how to win.

Try slightly different rules. At the moment we just punish bad moves. You could reward all the moves that led to it by adding another sweet of the same colour too. Now the machine will be more likely to make those moves again. What other variations of rewards and punishments could you try?

Why not write a program that learns in the same way – but using data values in arrays to represent moves instead of sweets. Not so yummy!
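Here is one way such a program might look: a minimal Python sketch of the cups-and-sweets machine, with lists standing in for cups and numbers standing in for the sweets. The random opponent, the 2000 training games and all the names here are just illustrative choices, not the only way to do it.

```python
import random

TOP = 10                       # the winning rung
cups = {p: [m for m in (1, 2, 3) if p + m <= TOP]  # "sweets" in each cup
        for p in range(1, TOP)}                    # one cup per rung 1..9

def machine_move(pos):
    """Take a random sweet from the cup for this position (None = resign)."""
    return random.choice(cups[pos]) if cups[pos] else None

def play_game(opponent):
    """The machine moves first; returns True if the machine wins."""
    pos, last = 1, None        # the piece starts on the bottom rung
    while True:
        move = machine_move(pos)
        if move is None:       # empty cup: the machine resigns (a loss)
            break
        last = (pos, move)
        pos += move
        if pos == TOP:
            return True        # machine reached the top: it wins
        pos += opponent(pos)   # the opponent (here: random) moves
        if pos == TOP:
            break              # opponent wins
    # Punish the loss: eat the sweet for the machine's last move.
    if last is not None and last[1] in cups[last[0]]:
        cups[last[0]].remove(last[1])
    return False

def random_opponent(pos):
    return random.choice([m for m in (1, 2, 3) if pos + m <= TOP])

# Train: play lots of games; losing sweets get eaten, winning moves survive.
for _ in range(2000):
    play_game(random_opponent)

print(cups)  # the surviving sweets encode the rules for winning
```

After enough games, the sweets left should steer the machine towards rungs 2 and 6, or straight to the top – a winning strategy that no one programmed in.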

– Paul Curzon, Queen Mary University of London

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Film Futures: Brassed Off

The pit head of a colliery at sunset with a vivid red sky behind the setting sun
Image from Pixabay

Computer Scientists and digital artists are behind the fabulous special effects and computer generated imagery we see in today’s movies, but for a bit of fun, in this series, we look at how movie plots could change if they involved Computer Scientists. Here we look at an alternative version of the film Brassed Off.

***SPOILER ALERT***

Brassed Off, starring Pete Postlethwaite, Tara Fitzgerald and Ewan McGregor, is set at a time when the UK coal and steel industries were being closed down with terrible effects on local communities across the North of England and Wales. It tells the story of the closing of the fictional Grimley Pit (based on the real mining village of Grimethorpe), from the point of view of the members of the colliery brass band and their families. The whole village relies on the pit for their livelihoods.

Danny, the band’s conductor, is passionate about the band and wants to keep it going, even if the pit closes. Many of the other band members are totally despondent and just want to take the money that is on offer if they agree to the closure without a fight. They feel they have no future, and have given up hope over both the pit and the band (why have a colliery band if there is no colliery?).

Gloria, a company manager who grew up in the village, arrives to conduct a feasibility study for the company, to determine whether the pit is profitable, as justification for keeping it open or closing it down. A wonderful musician, she joins the band but doesn’t tell them that she is now management (including not telling her childhood boyfriend, and band member, Andy).

The story follows the battle to keep the pit open, and the effects on the community if it closes, through the eyes of the band members as they take part in what may be their final ever brass band competition…

Brassed Off: with computer science

In our computer science film future version, the pit is still closing and Gloria is still management, but with a Computer Science PhD in digital music, she has built a flugelhorn-playing robot with a creative AI brain. It can not only play brass band instruments but arrange and compose too. On arriving at Grimley she asks if her robot can join the band. Initially, everyone is against the idea, but on hearing how good it is, and how it will help them do well in the national brass band competition, they relent. The band, with robot, go all the way to the finals and ultimately win…

The pit, however, closes and there are no jobs at all, not even low-quality work in local supermarkets (automatic tills and robot shelf-stackers have replaced humans) or call centres (now replaced by chatbots). Gloria also loses her job in a shake-out of middle managers as the AIs take over the knowledge economy jobs. Luckily, she is OK: with university friends, she starts a company building robot musicians, which is an amazing success. The band never make the finals again as bands full of Gloria’s flugelhorn- and cornet-playing robots take over (also taking the last of the band’s self-esteem). In future years, all the brass bands in the competition are robot bands as, with all the pits closing, the communities around them collapse. The world’s last ever flugelhorn player is a robot. Gloria and Andy never do get to kiss…

In real life…

Could a robot play a musical instrument? One existed centuries before the computer age. In 1737 Jacques de Vaucanson revealed his flute-playing automaton to the public. A human-height figure, it played a real flute, which could be swapped for another to prove the machine could really play a real instrument. Robots have since played various instruments, including drums, and a cello-playing robot has performed with an orchestra in Malmö. While robot orchestras and bands are likely, it seems less likely that humans would stop playing as a result.

Can an AI compose music? The Victorian Ada Lovelace predicted they one day would, a century before the first computer was ever built. She realised this just from thinking about the machines that Charles Babbage was trying to build. Her prediction eventually came true. Now, of course, generative AI is being used to compose music, and can do so in any style, whether classical or pop. How good, or creative, it is may be debated, but it won’t be long before they have super-human music composition powers.

So, a flugelhorn playing robot, that also composes music, is not a pipe dream!

What about the social costs that are the real theme of the film though? When the UK pits and steelworks closed, whole communities were destroyed with great, and long lasting, social cost. It was all well and good for politicians to say there were new jobs being created by the new service and knowledge economy, but that was no help when no thought or money had actually been put into helping communities make the transition. “Get on your bike” was their famous, if ineffective, solution. For example, if the new jobs were to be in technology, as suggested, then massive technology training programmes for those put out of work were needed, along with financial support in the meantime. Instead, whole communities were effectively left to rot and inequality increased massively. Areas in the North of England and Wales that had been the backbone of the UK economy still haven’t really recovered 40 years later.

Are we about to make the same mistakes again? We are certainly arriving at a similar point, but now it is those knowledge economy jobs that were supposed to be the saviours 40 years ago that are under threat from AI. There may well be new jobs as old ones disappear… but even if there are, will the people who lose their jobs be in a position to take the new ones, or are we heading towards a whole new lost generation? As back then, without serious planning and support, including successful efforts to reduce inequality in society, the changes coming could again cause devastation, this time much more widespread. As it stands, technology is increasing, not decreasing, inequality. We need to start now, including coming up with a new economic model of how the world will work that actively reduces inequality in society. Many science fiction writers have written of utopian futures where people only work for fun (e.g. Arthur C Clarke’s classic “Childhood’s End” is one I’m reading at the moment), but that only happens if wealth is not sucked up by the lucky few. (In “Childhood’s End” it takes alien invaders to force out inequality.)

We can avoid a dystopian future, but only if we try…really hard.


Google’s “PigeonRank” and arty-pigeon intelligence

pigeon
Pigeon, possibly pondering people’s photographs.
Image by Davgood Kirshot from Pixabay

On April Fool’s Day in 2002 Google ‘admitted’ to its users that the reason their web search results appeared so quickly and were so accurate was because, rather than using automated processes to grab the best result, Google was actually using a bank of pigeons to select the best results. Millions of pigeons viewing web pages and pecking to pick the best one for you when you type in your search question. Pretty unlikely, right?

In a rather surprising non-April Fool twist some researchers decided to test out how well pigeons can distinguish different types of information in medical photographs.

Letting the pigeons learn from training data
They trained pigeons by getting them to view medical pictures of tissue samples taken from healthy people as well as pictures taken from people who were ill. The pigeons had to peck one of two coloured buttons and in doing so learned which pictures were of healthy tissue and which were diseased. If they pecked the correct button they got an extra food reward.

Seeing if their new knowledge is ‘generalisable’ (can be applied to unfamiliar images)
The researchers then tested the pigeons with a fresh set of pictures, to see if they could apply their learning to pictures they’d not seen before. Incredibly the pigeons were pretty good at separating the pictures into healthy and unhealthy, with an 80 per cent hit rate. Doctors and pathologists* probably don’t have to worry too much about pigeons stealing their jobs though, as the pigeons weren’t very good at the more complex cases. However this is still useful information. Researchers think that they might be able to learn something, about how humans learn to distinguish images, by understanding the ways in which pigeons’ brains and memory work (or don’t). There are some similarities between pigeons’ and people’s visual systems (the ways our eyes and brains help us understand an image).

[*pathology means the study of diseases. A pathologist is a medical doctor or clinical scientist who might examine tissue samples (or images of tissue samples) to help doctors diagnose and treat diseases.]

How well can you categorise?

This is similar to a way that some artificial intelligences work. A type of machine learning called supervised learning gives an artificial intelligence system a batch of photographs labelled ‘A’, e.g. cats, and a different batch of photographs labelled ‘B’, e.g. dogs. The system makes lots of measurements of all the pictures within the two categories and can use this information to decide if a new picture is ‘CAT’ or ‘DOG’ and also how confident it is in saying which one.
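As a rough illustration of this idea, here is a toy classifier in Python. The “measurements” are made up two-number summaries rather than real image data, and averaging each batch into a single centre point is just one simple technique; real systems measure far more features, but the principle of learning from labelled batches and reporting a confidence is the same.

```python
# A toy supervised-learning classifier: summarise each labelled batch,
# then label a new example by which summary it is closest to.

def centroid(points):
    """Average of a batch of (x, y) measurements."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def classify(point, centroids):
    """Label a new measurement and say how confident we are."""
    dists = {label: distance(point, c) for label, c in centroids.items()}
    best = min(dists, key=dists.get)
    confidence = 1 - dists[best] / sum(dists.values())  # nearer = surer
    return best, confidence

# Batch A and batch B: labelled training measurements (invented numbers).
training = {
    "CAT": [(2.0, 8.0), (2.5, 7.5), (1.8, 8.2)],
    "DOG": [(6.0, 3.0), (6.5, 2.5), (5.8, 3.3)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

label, conf = classify((2.2, 7.8), centroids)  # an unseen "picture"
print(label, round(conf, 2))  # CAT, with high confidence
```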

Can pigeons tell art apart?

Pigeons were also given a button to peck and shown artworks by Picasso or Monet. At first they’d peck the button randomly but soon learned that they’d get a treat if they pecked at the same time they were shown a Picasso. When a Monet appeared they got no treat. After a while they learned to peck when they saw the Picasso artworks and not peck when shown a Monet. But what happened if they were shown a Monet or Picasso painting that they hadn’t seen before? Amazingly they were pretty good, pecking for rewards when the new art was by Picasso and ignoring the button when it was a new Monet. Art critics can breathe a sigh of relief though. If the paintings were turned upside down the pigeons were back to square one and couldn’t tell them apart.

Like pigeons, even humans can get this wrong sometimes. In 2022 an art curator realised that a painting by Piet Mondrian had been displayed upside down for 75 years… I wonder if the pigeons would have spotted that.

– Jo Brodie, Queen Mary University of London



Part of a series of ‘whimsical fun in computing’ to celebrate April Fool’s (all month long!).

Find out about some of the rather surprising things computer scientists have got up to when they're in a playful mood.


Hiroshi Kawano and his AI abstract artist

Piet Mondrian is famous for his pioneering pure abstract paintings that consist of blocks of colour with thick black borders. This series of works is iconic now: you can buy designs based on them on socks, cards, bags, T-shirts, vases and more. He also inspired one of the first creative art programs. Written by Hiroshi Kawano, it created new abstract art after Mondrian.

An Artificial Mondrian style picture of blocks of primary colours with black borders.
Image by CS4FN after Mondrian inspired by Artificial Mondrian

Hiroshi Kawano was himself a pioneer of digital and algorithmic art. From 1964 he produced a series of works that were algorithmically created: they followed instructions to produce the designs, but the designs were all different because the instructions included random number generators – effectively turning art into a game of chance, throwing dice to see what to do next. Randomness can be brought in this way to make decisions about the sizes, positions, shapes and colours in the images, for example.

His Artificial Mondrian series from the late 1960s was more sophisticated than this, though. He first analysed Mondrian’s paintings, determining how frequently each colour appeared in each position on the canvas. This gave him a statistical profile of real Mondrian works. His Artificial Mondrian program then generated new designs based on coloured rectangles, but with the random number generator loaded to match the statistical pattern of Mondrian’s creative decisions when choosing what block of colour to paint in an area. The dice were in effect loaded to match Mondrian’s choices. The resulting design was not a Mondrian, but had the same mathematical signature as one that Mondrian might paint. One example, KD 29, is on display at the Tate Modern until June 2025 as part of the Electric Dreams exhibition (you can also buy a print from the Tate Modern shop).
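We don’t have Kawano’s actual statistics, but the loaded-dice idea can be sketched in a few lines of Python. The colour frequencies below are invented for illustration, standing in for the profile Kawano measured from real Mondrian paintings:

```python
import random

# Invented "statistical profile": how often each colour appears in each
# region of the canvas. Kawano measured the real frequencies himself.
COLOURS = ["white", "red", "blue", "yellow"]
PROFILE = {
    "top":    [0.55, 0.20, 0.15, 0.10],
    "middle": [0.70, 0.10, 0.10, 0.10],
    "bottom": [0.50, 0.15, 0.20, 0.15],
}

def artificial_mondrian(columns=4, seed=None):
    """Roll loaded dice, region by region, to pick each block's colour."""
    rng = random.Random(seed)
    return [rng.choices(COLOURS, weights=PROFILE[region], k=columns)
            for region in ("top", "middle", "bottom")]

for row in artificial_mondrian(seed=1):
    print(row)
```

Each run (with a different seed) gives a different grid of colour blocks, but over many runs the colours appear with roughly Mondrian-like frequencies – the dice are loaded, just as Kawano’s were.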

Kawano’s program didn’t actually paint, it just created the designs and then Hiroshi did the actual painting following the program’s design. Colour computer printers were not available then but the program could print out the patterns of black rectangles that he then coloured in.

Whilst far simpler, his program’s approach prefigures the way modern generative AI programs create images. They are trained on vast numbers of images, from the web, for example. They then create a new image based on what is statistically likely to match the prompt given. Ask for a cat and you get an image that statistically matches existing images labelled as cats. Like his program, generative AI combines algorithm, statistics from existing art, and randomness to create new images.

Is such algorithmic art really creative in the way an artist is creative though? It is quite easy (and fun) to create your own Mondrian inspired art, even without an AI. However, the real creativity of an artist is in coming up with such a new iconic and visually powerful art style in the first place, as Piet Mondrian did, not in just copying his style. The most famous artists are famous because they came up with a signature style. Only when the programs are doing that are they being as creative as the great human artists. Hiroshi Kawano’s art (as opposed to his program’s) perhaps does pass the test as he came up with a completely novel medium for creating art. That in itself was incredibly creative at the time.

Paul Curzon, Queen Mary University of London


ELIZA: the first chatbot to fool people

Chatbots are now everywhere. You seemingly can’t touch a computer without one offering its opinion, or trying to help. This explosion is a result of the advent of what are called Large Language Models: sophisticated programs that in part copy the way human brains work. Chatbots have been around far longer than the current boom, though. The earliest successful one, called ELIZA, was built in the 1960s by Joseph Weizenbaum, who had fled Nazi Germany with his Jewish family in the 1930s. Despite its simplicity, ELIZA was very effective at fooling people into treating it as if it were a human.

Head thinking in a speech bubble
Image adapted from one by Gerd Altmann from Pixabay

Weizenbaum was interested in human-computer interaction, and whether it could be done in a more human-like way than just by typing rigid commands as was done at the time. In doing so he set the ball rolling for a whole new metaphor for interacting with computers, distinct from typing commands or pointing and clicking on a desktop. It raised the possibility that one day we could control computers by having conversations with them, a possibility that is now a reality.

His program, ELIZA, was named after the character in the play Pygmalion and the musical My Fair Lady. That Eliza was a working-class woman who was taught to speak with a posh accent, gradually improving her speech, and part of the idea of ELIZA was that it could gradually improve based on its interactions. At its core, though, it was doing something very simple. It just looked for known words in the things the human typed and then output a sentence triggered by that keyword, such as a transformation of the original sentence. For example, if the person typed “I’m really unhappy”, it might respond “Why are you unhappy?”.

In this way it was just doing a more sophisticated version of the earliest “creative” writing program – Christopher Strachey’s Love Letter writing program. Strachey’s program wrote love letters by randomly picking keywords and putting them into a set of randomly chosen templates to construct a series of sentences.

The keywords that ELIZA looked for were built into its script written by the programmer, each allocated a score. It found all the keywords in the person’s sentence but used the one allocated the highest score. Words like “I” had a high score so were likely to be picked if present. A sentence starting “I am …” can be transformed into a response “Why are you …?” as in the example above. To make this seem realistic, the program needed to have a variety of different templates to provide enough variety of responses, though. To create the response, ELIZA broke down the sentence typed into component parts, picked out the useful parts of it and then built up a new response. In the above example, it would have pulled out the adjective “unhappy” to use in its output with the template part “Why are you …”, for example.

If no keyword was found, so ELIZA had no rule to apply, it could fall back on a memory mechanism where it stored details of the past statements typed by the person. This allowed it to go back to an earlier thing the person had said and use that instead. It just moved on to the next highest scoring keyword from the previous sentence and built a response based on that.
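A toy version of this keyword-scoring, template and memory mechanism might look like the Python sketch below. The keywords, scores and templates are invented for illustration, and a real ELIZA script was far richer (it also reflected pronouns, turning “my” into “your”, for instance):

```python
import random

# Each keyword has a score and some response templates;
# {} is filled with the text that followed the keyword.
RULES = {
    "i am":   (4, ["Why are you {}?", "How long have you been {}?"]),
    "mother": (3, ["Tell me more about your family."]),
    "always": (2, ["Can you think of a specific example?"]),
}

memory = []  # fragments stored for when no keyword matches

def respond(sentence):
    text = sentence.lower().strip(".!?")
    # Find every keyword present, then use the highest-scoring one.
    matches = [(score, kw, templates)
               for kw, (score, templates) in RULES.items() if kw in text]
    if matches:
        score, kw, templates = max(matches)
        fragment = text.split(kw, 1)[1].strip()
        if fragment:
            memory.append(fragment)  # remember it for later
        return random.choice(templates).format(fragment)
    if memory:  # no rule applies: fall back on an earlier statement
        return "Earlier you said {}. Tell me more.".format(memory.pop(0))
    return "Please go on."

print(respond("I am really unhappy"))
print(respond("It is a nice day"))  # no keyword: uses the memory
```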

ELIZA came with different “characters” that could be loaded into it, with different keywords and templates of how to respond. The reason ELIZA gained so much fame was its DOCTOR script, written to behave like a psychotherapist. In particular, it was based on the ideas of psychologist Carl Rogers, who developed “person-centred therapy”, where a therapist, for example, echoes back things that the person says, always asking open-ended questions (never yes/no ones) to get the patient talking. (Good job interviewers do a similar thing!)

The advantage of it “pretending” to be a psychotherapist like this is that it did not need to be based on a knowledge bank of facts to seem realistic. Compare that with, say, a chatbot that aims to have conversations about Liverpool Football Club. To be engaging it would need to know a lot about the club (or if not, appear evasive). If the person asked it “Who do you think the greatest Liverpool manager was?” then it would need to know the names of some former Liverpool managers! But then you might want to talk about strikers or specific games or … A chatbot aiming to have convincing conversations about any topic the person comes up with needs facts about everything! That is what modern chatbots do have, provided by sucking up and organising information from the web, for example. As a psychotherapist, DOCTOR never had to come up with answers, and echoing back the things the person said, or asking open-ended questions, was entirely natural in this context and even made it seem as though it cared about what the people were saying.

Because ELIZA came across as empathic in this way, the early people it was trialled on were very happy to talk to it in an uninhibited way. Weizenbaum’s secretary even asked him to leave while she chatted with it, as she was telling it things she would not have told him. That was despite the fact, or perhaps partly because, she knew she was talking to a machine. Others were convinced they were talking to a person just via a computer terminal. As a result it was suggested at the time that it might actually be used as a psychotherapist to help people with mental illness!

Weizenbaum was clear, though, that ELIZA was not an intelligent program, and it certainly didn’t care about anyone, even if it appeared to. It certainly would not have passed the Turing Test, proposed earlier by Alan Turing: the idea that a computer could only be called truly intelligent if people talking to it could not distinguish its answers from a person’s. Switch to any knowledge-based topic and the ELIZA DOCTOR script would flounder!

ELIZA was also the first in a less positive trend: making chatbots female because this is seen as something that makes men more comfortable. Weizenbaum chose a female character specifically because he thought it would be more believable as a supportive, emotional female. The Greek myth of Pygmalion, from which the play’s name derives, is about a male sculptor falling in love with a female sculpture he had carved, which then comes to life. Again this fits a trend of automata and robots, in films and in reality, being modelled after women simply to provide for the whims of men. Weizenbaum agreed he had made a mistake, saying that his decision to name ELIZA after a woman was wrong because it reinforces a stereotype of women. The fact that so many chatbots have since copied this mistake is unfortunate.

Because of his experiences with ELIZA he went on to become a critic of Artificial Intelligence (AI). Well before any program really could have been called intelligent (the time to do it!), he started to think about the ethics of AI use, as well as of the use of computers more generally (intelligent or not). He was particularly concerned about them taking over human tasks around decision making, worrying that human values would be lost if decision making was turned into computation: beliefs perhaps partly shaped by his experiences escaping Germany, where the act of genocide was turned into a brutally efficient bureaucratic machine, with human values completely lost. Ultimately, he argued that computers would be bad for society. They were created out of war and would be used by the military as a tool for war. In this, given, for example, the way many AI programs have been shown to have built-in biases, never mind the weaponisation of social media, spreading disinformation and intolerance in recent times, he was perhaps prescient.

by Paul Curzon, Queen Mary University of London

Fun to do

If you can program, why not have a go at writing an ELIZA-like program yourself… or perhaps a program that runs a job interview for a particular job based on the person specification for it.


I wandered lonely as a mass of dejected vapour – try some AI poetry

Ever used an online poem generator, perhaps to get started with an English assignment? They normally have a template and some word lists you can fill in, with a simple algorithm that randomly selects from the word lists to fill out the template. “I wandered lonely as a cloud” might become “I zoomed destitute as a rainbow” or “I danced homeless as a tree”. It would all depend on those word lists. Artificial Intelligence and machine learning researchers are aiming to be more creative.
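A bare-bones generator of this kind needs only word lists, a template and random choices. The word lists here are made up for illustration:

```python
import random

# Invented word lists and one template, in the style described above.
verbs = ["wandered", "zoomed", "danced", "drifted"]
adjectives = ["lonely", "destitute", "homeless", "dejected"]
nouns = ["cloud", "rainbow", "tree", "vapour"]

def poem_line():
    """Fill the template with randomly chosen words."""
    return "I {} {} as a {}".format(
        random.choice(verbs), random.choice(adjectives),
        random.choice(nouns))

print(poem_line())  # e.g. "I zoomed destitute as a rainbow"
```

Every line follows the same skeleton, so the “creativity” lives entirely in the word lists – which is exactly the limitation the researchers below are trying to get past.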

Stanford University, the University of Massachusetts and Google have created works that look like poems, by accident. They were using a machine learning Artificial Intelligence they had previously ‘trained’ on romantic novels to research the creation of captions for images, and how to translate text into different languages. They fed it a start and an end sentence, and let the AI fill in the gap. The results made sense, though they were ‘rather dramatic’: for example

“he was silent for a long moment
he was silent for a moment
it was quiet for a moment
it was dark and cold
there was a pause
it was my turn”

Is this a real poem? What makes a poem a poem is in itself an area of research, with some saying that to create a poem, you need a poet, and the poet should do certain things in their ‘creative act’. Researchers from Imperial College London and University College Dublin used this idea to evaluate their own poetry system. They checked to see if the poems they generated met the requirements of a special model for comparing creative systems. This involved things like checking whether the work formed a concept, and included measures such as flamboyance and lyricism.

Read some poems written by humans and compare them to poems created by online poetry generators. What makes them creative? Maybe that’s up to you!

Jane Waite, Queen Mary University of London


More on …

  • The algorithm that could not speak its name
    • See also this article about Christopher Strachey, who came up with the first example of a computer program that could create lines of text (from lists of words) to make up love poems.


A handshaking puzzle

By Przemysław Wałęga, Queen Mary University of London

Logical reasoning and proof, whether done using math notation or informally in your head, is an important tool of computer scientists. The idea of proving, however, is often daunting for beginners and it takes a lot of practice to master this skill. Here we look at a simple puzzle to get you started.

Two art models shaking hands
Image by GU LA from Pixabay

Computer Scientists use logical reasoning and proofs a lot. They can be used to ensure correctness of algorithms. Researchers doing theoretical computer science use proofs all the time, working out theories about computation.

Proving mathematical statements can be very challenging, though. Coming up with a proof often requires making observations about a problem and exploiting a variety of different proof methods. Making sure that the proof is correct, concise, and easy to follow matters too, but that in itself needs skill and a lot of practice. As a result, proving can be seen as a real art of mathematics.

Let’s think about a simple puzzle to show how logical thinking can be used when solving a problem. The puzzle can be solved without knowing any specific maths, so anyone can attempt it, but it will probably look very hard to start with.

Before you start working on it though, let me recommend that first you try to solve it entirely in your mind, that is, with no pen and paper (and definitely no computer!). 

The Puzzle

Here is the puzzle, which I heard at a New Year’s party from a friend Marcin:

Mrs. and Mr. Taylor hosted a party and invited four other couples. After the party, everyone gathered in the hallway to say their goodbyes with handshakes. No one shook hands with themselves (of course!) or their partner, and no one shook hands with the same person more than once. Each person kept track of how many people they had shaken hands with. At one point, Mr. Taylor shouted “STOP” and asked everyone to say how many people they had shaken hands with. He received nine different answers. 

How many people did Mrs Taylor shake hands with?

I will give you some hints to help you solve the puzzle, but first try to solve it on your own, and see how far you get. Maybe you will be able to solve the puzzle without any hints?

Why did I recommend solving the puzzle without pen and paper? Because our goal is to use logical and critical thinking instead of finding a solution in a “brute force” manner, that is, blindly listing all the possibilities and checking each of them to find a solution to the puzzle. As an example of a brute force way of solving a problem, take a crossword puzzle where you have all but one of the letters of a word. You have no idea what the clue is about, so instead you just try the 26 possible letters for the missing one, see which make a word, and then check which of those words fits the clue!

Notice that the setting of our puzzle is finite: there are 10 people shaking hands, so the number of ways they can shake hands is also finite, albeit far bigger than the 26 letters to check in the crossword problem. That means you could potentially list all the possible ways people might shake hands to solve the puzzle. This is, however, not what we are aiming for. We would like to solve the puzzle by analysing the structure of the problem instead of performing a brute force computation.

Also, it is important to realise that often mathematicians solve puzzles (or prove theorems) about situations in which the number of possibilities is infinite so the brute force approach of listing them all is not possible at all. There are also many situations where the brute force approach is applicable in theory, but in practice it would require considering too many cases: so many that even the most powerful computers would not be able to provide us with an answer in our lifetimes. 

Handshake
Image by Robert Owen-Wahl from Pixabay

For our puzzle, you may be tempted to list all possible handshake situations between 10 people. Before you start listing them, let’s check how much time you would need. You have to consider every pair that can be formed from 10 people. A mathematician refers to that as “10 choose 2”, the answer to which is that there are 45 possible pairs among 10 people (the first person pairs with 9 others, the next has 8 others to pair with, having been paired with the first already, and so on, and 9+8+…+1 = 45). However, 45 is not the number that we are looking for. Each of these pairs can either shake hands or not, and we need to consider all those different possibilities. There are 2^45 such handshake combinations. How big is this number? The number 2^10 is 1024, so it is approximately 1000. Hence 2^40 = (2^10)^4 (which is clearly smaller than our 2^45) is approximately 1000^4 = 1,000,000,000,000, that is, a trillion. Listing a trillion combinations should sound scary to you. Indeed, even if you were quick enough to write down one combination every second, a trillion of them would take you 31,688 years. Let’s not try this!
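These back-of-the-envelope numbers are easy to double-check with a few lines of code. Here is a minimal Python sketch (the variable names are just for illustration):

```python
import math

people = 10
pairs = math.comb(people, 2)   # "10 choose 2": every possible handshaking pair

# Each pair either shakes hands or not, giving 2^45 combinations in all.
combinations = 2 ** pairs

# Writing down a trillion combinations at one per second:
seconds_per_year = 60 * 60 * 24 * 365.25
years = 1_000_000_000_000 / seconds_per_year

print(pairs)                              # 45
print(combinations > 1_000_000_000_000)   # True: 2^45 is over a trillion
print(round(years))                       # 31688
```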

Of course, we can look more closely at the description of the puzzle to decrease the number of combinations. For example, we know that nobody shakes hands with their partner, which will already massively reduce the number. However, let’s try to solve the puzzle without using any external memory aids or computational power. Only our minds.

Can you solve it? A key trick that mathematicians and computer scientists use is to break down problems into simpler problems first (decomposition). You may not be able to solve this puzzle straight away, so instead think about what facts you can deduce about the situation.

If you need help, start by considering Hint 1 below. If you are still stuck, maybe Hint 2 will help? Answer these questions and you will be well on the way to solving the puzzle.

Hints

  1. Mr. Taylor received nine different answers. What are these answers?
  2. Knowing the numbers above, can you work out who is a partner of whom?

No luck solving the puzzle? Spend some more time on it before giving up! Then read on. If you managed to solve it, you can compare your way of thinking with the full solution below.

Solution

First we will answer Hint 1. We can show that the answers received by Mr. Taylor are 0, 1, 2, 3, 4, 5, 6, 7, and 8. There are 5 couples, meaning that there are 10 people at the party (Mr. and Mrs. Taylor + 4 other couples). Each person can shake hands with at least 0 people and at most 8 other people (since there are 10 people, and they cannot shake hands with themselves or their partner). Since Mr. Taylor received nine different answers from the other 9 people, they need to be 0, 1, 2, 3, 4, 5, 6, 7, and 8. This is an important observation which we will use in the second part of the solution.

Next, we will answer Hint 2. Let’s call P0 the person who answered 0, P1 the person who answered 1, …, P8 the person who answered 8. The person with the highest (or the lowest) number of handshakes is a good one to look at first.

  • Who is the partner of P8? P8 did not shake hands with themselves, nor with P0 (as P0 did not shake hands with anybody). So P8 must have shaken hands with all 8 of the other people. Since no one shakes hands with their partner, it follows that P0 is the partner of P8!
  • Who is the partner of P7? P7 did not shake hands with themselves, with P0, or with P1 (we already know that P1 shook hands with P8, and P1 shook hands with only one person). Since no one shakes hands with their partner, and partners do not shake hands, the partner of P7 must be one of the two people P7 did not shake hands with: P0 or P1. As P0 is already the partner of P8, P7 must be the partner of P1.
  • Following through with this analysis for P6 and P5, we can show that the following are partners: P8 and P0, P7 and P1, P6 and P2, P5 and P3. The only person among P0, … , P8 who is left without a partner is P4. So P4 needs to be Mrs. Taylor, the partner of Mr. Taylor, the one person left who didn’t give a number. 

Consequently, we have also shown that Mr. Taylor shook hands with 4 people.

Observe that the analysis above not only provides us with an answer to the puzzle: it also allows us to uniquely determine the handshake arrangement, as presented in the picture below (called a graph by computer scientists). Here, people are nodes (circles) and handshakes are represented as edges (lines) in the graph. Red edges correspond to handshakes with P8, blue edges are handshakes with P7, green with P6 and yellow with P5. Partners are located next to each other; for example, Mr. Taylor is the partner of P4.
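We can even check this unique arrangement mechanically. In the Python sketch below (an illustration, with names like `partner` and `degree` invented for the purpose), person Pi shakes hands with Pj exactly when i + j is at least 9, Mr. Taylor shakes hands with P5 to P8, and we confirm that every rule of the puzzle holds:

```python
# People: "P0".."P8" gave the answers 0..8; "T" is Mr. Taylor.
people = [f"P{i}" for i in range(9)] + ["T"]
partner = {"P8": "P0", "P7": "P1", "P6": "P2", "P5": "P3", "P4": "T"}
partner.update({v: k for k, v in partner.items()})  # partnership is symmetric

def shook(a, b):
    """Pi and Pj shook hands exactly when i + j >= 9; Mr. Taylor shook P5..P8."""
    if "T" in (a, b):
        other = a if b == "T" else b
        return other in {"P5", "P6", "P7", "P8"}
    return int(a[1]) + int(b[1]) >= 9

# Rule: no one shakes hands with their partner.
assert all(not shook(p, partner[p]) for p in people)

# Count everyone's handshakes.
degree = {p: sum(shook(p, q) for q in people if q != p) for p in people}

# P0..P8 announced the nine different answers 0 to 8 ...
assert [degree[f"P{i}"] for i in range(9)] == list(range(9))
# ... and both Mrs. Taylor (P4) and Mr. Taylor shook exactly 4 hands.
assert degree["P4"] == degree["T"] == 4
print("All the puzzle's conditions hold: the Taylors each shook 4 hands.")
```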

Image by CS4FN

Large Language Models

Although this article is about logical thinking, and not about tools to solve logic puzzles, it is interesting to see if current AI models are capable of solving the puzzle. As puzzles go it is relatively easy and occurs on the Internet in different settings and languages, so large language models (LLMs) trained on the Internet should be good at solving it. Let’s test them!

I asked ChatGPT-4o and Gemini 1.5 Flash to solve the puzzle. To my surprise, both of them struggled. They had problems understanding some parts of the problem formulation and made mathematical mistakes.

  • ChatGPT seems to wrongly assume that Mr. Taylor answered his own question about the number of handshakes, and so that each of the 10 people announced a different number of handshakes. As our solution of the puzzle shows, there exists exactly one graph representing the handshake situation, and in this graph the number of handshakes of Mr. and Mrs. Taylor is the same, namely 4. Hence, the additional assumption made by ChatGPT (that each of the 10 people had a different number of handshakes) makes the situation described in the puzzle impossible. Interestingly, ChatGPT was still able to “solve” this impossible puzzle, providing the correct answer… Does that mean that ChatGPT has solved the puzzle?
  • Gemini, in contrast, states that the puzzle cannot be solved without more information such as the order of arrivals. This is clearly wrong. We know how to solve the puzzle and the order of arrivals is actually completely irrelevant to the puzzle! 

This small experiment uses basic LLMs and does not aim to draw any conclusions about their capabilities. I am quite certain that one can find an LLM that can solve this simple puzzle without mistakes. However, the experiment does emphasise the need to check AI solutions yourself. To do that you need to gain the skills of doing proofs yourself!

In what follows I provide the answers obtained from the AI models and highlight in bold the parts which are wrong assumptions or reasoning.

Solution by ChatGPT:

Key Observations:

  1. There are 10 people in total at the party (5 couples: Mr. and Mrs. Taylor and 4 other couples).
  2. No one shakes hands with their own partner.
  3. No one shakes hands with themselves or the same person more than once.
  4. The handshake answers must be unique and range from 0 to 9 (since there are 10 people, including Mr. and Mrs. Taylor).

Solution:

  • Since there are 10 people total, the number of handshakes any one person could have ranges from 0 to 9.
  • If everyone gives a unique answer, it means:
    • One person shook hands with 0 people.
    • Another shook hands with 1 person.
    • Another with 2, and so on, up to 9.

Steps:

  1. The person who shook hands with 9 people: This person shook hands with everyone else, meaning their partner (who doesn’t shake hands with them) must have shaken hands with 0 people.
  2. The person who shook hands with 8 people: This person shook hands with everyone except their partner and the person who shook hands with 0.
  3. By continuing this pattern, we see that for every unique count of handshakes k, the person with k handshakes corresponds to a partner with 9−k handshakes.

Mrs. Taylor’s Handshakes:

  • Mr. Taylor shouted “STOP” after collecting answers from all 10 people. Since there are 9 different handshake counts, one of those counts must belong to Mrs. Taylor.
  • Let’s assume Mr. Taylor is among the group and is the one with the maximum or minimum handshakes.
  • The pairing pattern ensures Mrs. Taylor’s number of handshakes must fall within the range, but based on symmetry of couples logic !

The key is to match couples symmetrically:

If Mr. Taylor gets answers (0, 1, 2,…

Let’s refine the solution:

  • The possible handshake counts are 0,1,2,…,9, ensuring each number is unique.
  • The pairing works such that if one person shakes k hands, their partner shakes 9-k hands.

This logic means:

  • The person who shook 9 hands paired with the person who shook 0.
  • The person who shook 8 hands paired with the person who shook 1, and so on.

Mrs. Taylor’s Count:

If there are nine unique handshake counts (from 0 to 9), Mrs. Taylor must have shaken 4 hands because her partner, Mr. Taylor, who controls the setup, must fall at the midpoint to allow symmetrical pairing logic.

Solution by Gemini:

Certainly, let’s solve the handshake puzzle!

Understanding the Problem:

  • We have 10 people at the party (Mr. and Mrs. Taylor, plus 4 other couples).
  • No one shakes hands with themselves or their partner.
  • No one shakes hands with the same person more than once.
  • Each person keeps track of how many handshakes they’ve done.
  • Mr. Taylor asks everyone how many handshakes they’ve done, and gets 9 different answers.
  • We need to figure out how many handshakes Mrs. Taylor did.

Let’s Analyze:

The key to solving this is to consider the possible range of handshakes each person could have.

  • Lowest Possible Handshakes: If someone is the first to arrive and doesn’t shake hands with anyone else, they’ll have 0 handshakes.
  • Highest Possible Handshakes: The maximum number of handshakes depends on the order of arrivals and who shakes hands with whom. In the most efficient scenario, everyone could shake hands with everyone else except their partner. This would lead to a maximum of 8 handshakes for each person.

Now, let’s consider the 9 different answers:

Since there are 9 different answers, and the minimum is 0, the maximum number of handshakes someone could have is 8.

Let’s think about Mrs. Taylor:

  • She can’t have 0 handshakes because she would have to shake hands with someone eventually.
  • She can’t have 8 handshakes because Mr. Taylor would have to have the same number (since they are the only couple).

Therefore, Mrs. Taylor must have shaken hands with a number between 1 and 7.

Unfortunately, without more information about the order of arrivals and who shook hands with whom, we can’t pinpoint the exact number of handshakes Mrs. Taylor did.

Let me know if you’d like to explore a specific scenario with a particular order of arrivals and handshakes to determine Mrs. Taylor’s exact number!

More on …

Related Magazines …


Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by UKRI, through grant EP/W033615/1.

Nemisindo: breaking the sound barrier

Women's feet walking on a path
Image by ashokorg0 from Pixabay

Games are becoming ever more realistic. Now, thanks to the work of Joshua Reiss’s research team and their spinout company, Nemisindo, it’s not just the graphics that are amazing, the sound effects can be too.

There has been a massive focus over the years on improving the graphics in games. We’ve come a long way from Pong and its square ball and rectangular paddles. Year after year, decade after decade, new algorithms, new chips and new techniques have been invented that, combined with the capabilities of ever faster computers, have meant that we now have games with realistic, real-time graphics immersing us in the action as we play. And yet games are a multimedia experience, and realistic sounds matter too if the worlds are to be truly immersive. For decades film crews have included whole teams of Foley editors whose job is to create realistic everyday sounds (check out the credits next time you watch a film!). Whether the sound is of someone walking on a wooden floor in bare feet, walking on a crunchy path, opening thick, plush curtains, or an armoured knight clanging their way down a bare, black cliff, lots of effort goes into getting the sound just right.

Game sound effects are currently often based on choosing sounds from a sound library, but games, unlike films, are increasingly open. Just about anything can happen and make a unique noise while doing so. The chances of the sound library having all the right sounds get slimmer and slimmer.

Suppose a knight character in a game drops a shield. What should it sound like? Well, it depends on whether it is a wooden shield or a metal one. Did it land on its edge or fall horizontally, and was it curved so it rang like a bell? Is the floor mud or did it hit a stone path? Did it bounce or roll? Is the knight in an echoey hall, on a vast plain or clambering down those clanging cliffs…

All of this is virtually impossible to get exactly right if you’re relying on a library of sound samples. Instead of providing pre-recorded sounds as sound libraries do, the software of Josh and team’s company Nemisindo (which is the Zulu word for ‘sound effects’) creates new sounds from scratch exactly when they are needed, in real time as a game is played. This approach is called “procedural audio technology”. It allows the action in the game itself to determine the sound precisely: sounds are programmed by setting options linked to different action scenarios, rather than by selecting a specific recording. Aside from the flexibility it gives, this way of doing sound effects gives big advantages in terms of memory too: because sounds are created on the fly, large libraries of sounds no longer need to be stored with the program.
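Nemisindo's real models are far more sophisticated, but the core idea of procedural audio can be sketched in a few lines of Python (every parameter value below is invented purely for illustration): instead of fetching a recording, the program computes the samples of, say, an impact sound on demand, from parameters describing the scenario.

```python
import math
import random

random.seed(0)  # reproducible "randomness" for the noise burst

def impact_sound(material, sample_rate=44100, duration=0.5):
    """Toy procedural impact: a decaying noise burst plus a ringing tone.
    Harder materials ring longer and higher (made-up parameter values)."""
    decay, pitch = {"wood": (8.0, 220.0), "metal": (3.0, 880.0)}[material]
    samples = []
    for n in range(int(sample_rate * duration)):
        t = n / sample_rate
        envelope = math.exp(-decay * t)                 # loud, then dying away
        noise = 0.3 * random.uniform(-1.0, 1.0)         # the "thud"
        ring = 0.7 * math.sin(2 * math.pi * pitch * t)  # the "clang"
        samples.append(envelope * (noise + ring))
    return samples

wood, metal = impact_sound("wood"), impact_sound("metal")
# Metal decays more slowly, so its tail carries more energy: it rings on.
print(sum(abs(s) for s in metal[-1000:]) > sum(abs(s) for s in wood[-1000:]))
```

A real implementation would shape the spectrum, model resonances and send the samples to the audio device; the point is only that the sound is computed from the scenario's parameters at the moment it is needed.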

Nemisindo’s new software provides procedurally generated sounds for the Unreal game engine, allowing anyone building games with the engine to program a variety of action scenarios with realistic sounds tuned to the situation in their game as it happens…

In future, if that knight steps off the stone path just as she drops her shield, the sound generated will take the surface it actually lands on into account…

Procedural sound is the future of sound effects, so just as games are now stunning visually, expect them in future to become ever more stunning to listen to too. As they do, the whole experience will become ever more immersive… and what works for games works for other virtual environments too. All kinds of virtual worlds just became a lot more realistic. Getting the sound exactly right is no longer a barrier to a perfect experience.

Nemisindo has support from Innovate UK.

– Paul Curzon, Queen Mary University of London


Mike Lynch: sequencing success

Mike Lynch was one of Britain’s most successful entrepreneurs. An electrical engineer, he built his businesses around machine learning long before it was a buzz phrase. He also drew heavily on a branch of maths called Bayesian statistics which is concerned with understanding how likely, even apparently unlikely, things are to actually happen. This was so central to his success that he named his super yacht, Bayesian, after it. Tragically, he died on the yacht, when Bayesian sank in a freak, extremely unlikely, accident. The gods of the sea are cruel.

Synthesisers

A keyboard synthesiser
Image by Julius H. from Pixabay

Mike started his path to becoming an entrepreneur at school. He was interested in music, and especially the then-new but increasingly exciting digital synthesisers that were being used by pop bands and were in the middle of revolutionising music. He couldn’t afford one of his own, though, as they cost thousands. He was sure he could design and build one to sell more cheaply. So he set about doing it.

He continued working on his synthesiser project as a hobby at Cambridge University, where he originally studied science, but changed to his by-then passion of electrical engineering. A risk of visiting his room was that you might painfully step on a resistor or capacitor, as they got everywhere. That was not surprising given that his living room was also his workshop. By this point he was also working more specifically on the idea of setting up a company to sell his synthesiser designs. He eventually got his first break in the business world when chatting to someone in a pub who was in the music industry. They were inspired enough to give him the few thousand pounds he needed to finance his first startup company, Lynett Systems.

By now he was doing a PhD in electrical engineering, funded by EPSRC, and went on to become a research fellow building both his research and innovation skills. His focus was on signal processing which was a natural research area given his work on synthesisers. They are essentially just computers that generate sounds. They create digital signals representing sounds and allow you to manipulate them to create new sounds. It is all just signal processing where the signals ultimately represent music.

However, Mike’s research and ideas were more general than just being applicable to audio. Ultimately, Mike moved away from music, and focussed on using his signal processing skills, and ideas around pattern matching to process images. Images are signals too (resulting from light rather than sound). Making a machine understand what is actually in a picture (really just lots of patches of coloured light) is a signal processing problem. To work out what an image shows, you need to turn those coloured blobs into lines, then into shapes, then into objects that you can identify. Our brains do this seamlessly so it seems easy to us, but actually it is a very hard problem, one that evolution has just found good solutions to. This is what happens whether the image is that captured by the camera of a robot “eye” trying to understand the world or a machine trying to work out what a medical scan shows. 

This is where the need for maths comes in: to work out probabilities, how likely different things are. Part of the task of recognising lines, shapes and objects is working out how likely one possibility is over another. How likely is it that that band of light is a line, how likely is it that that line is part of this shape rather than that, and so on. Bayesian statistics gives a way to compute probabilities based on the information you already know (or suspect). When the likelihood of events is seen through this lens, things that seem highly unlikely can turn out to be highly probable (or vice versa), so it can give much more accurate predictions than traditional statistics. Mike’s PhD used this way of calculating probabilities even though some statisticians disdained it. Because of that it was shunned by some in the machine learning community too, but Mike embraced it and made it central to all his work, which gave his programs an edge.
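As a flavour of how a Bayesian update works, here is a tiny worked example in Python (all the probabilities are invented for illustration). Suppose only 10% of bright bands in an image really are lines, a real line triggers a strong edge response 90% of the time, and random texture triggers one 20% of the time. Bayes' theorem tells us how seeing a strong edge response should change our belief:

```python
# Bayes' theorem: P(line | edge) = P(edge | line) * P(line) / P(edge)
p_line = 0.10             # prior: how often a bright band really is a line
p_edge_given_line = 0.90  # a real line usually triggers the edge detector
p_edge_given_noise = 0.20 # random texture sometimes triggers it too

# Total probability of seeing a strong edge response at all:
p_edge = p_edge_given_line * p_line + p_edge_given_noise * (1 - p_line)

# Posterior: our updated belief after seeing the evidence.
p_line_given_edge = p_edge_given_line * p_line / p_edge
print(round(p_line_given_edge, 2))  # 0.33: three times likelier than before
```

The evidence has tripled our belief that the band is a line, even though lines are rare; stack up many such updates across lines, shapes and objects and you get a principled way to decide what an image most probably shows.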

While Lynett Systems didn’t itself make him a billionaire, the experience from setting up that first company became a launch pad for other innovations based on similar technology and ideas. It gave him the initial experience and skills, but also meant he had started to build the networks with potential investors. He did what great entrepreneurs do and didn’t rest on his laurels with just one idea and one company, but started to work on new ideas, and new companies arising from his PhD research.

Fingerprints

Fingerprint being scanned
Image by alhilgo from Pixabay

He realised one important market for image pattern recognition that was ripe for dominating was fingerprint recognition. He therefore set about writing software that could match fingerprints far faster and more accurately than anyone else. His new company, Cambridge Neurodynamics, filled a gap, with his software being used by police forces nationwide. That then led to other spin-offs using similar technology.

He was turning the computational thinking skills of abstraction and generalisation into a way to make money. By creating core general technology that solved the very general problems of signal processing and pattern matching, he could then relatively easily adapt and reuse it to apply to apparently different novel problems, and so markets, with one product leading to the next. By applying his image recognition solution to characters, for example, he created software (and a new company) that searched documents based on character recognition. That led on to a company searching databases, and finally to the company that made him famous, Autonomy.

Fetch

A puppy fetching a stick
Image from Pixabay

One of his great loves was his dog, Toby, a friendly enthusiastic beast. Mike’s take on the idea of a search engine was fronted by Toby – in an early version, with his sights set on the nascent search engine market, his search engine user interface involved a lovable cartoon dog who enthusiastically fetched the information you needed. However, in business finding your market and getting the right business model is everything. Rather than competing with the big US search engine companies that were emerging, he switched to focussing on in-house business applications. He realised businesses were becoming overwhelmed with the amount of information they held on their servers, whether in documents or emails, phone calls or videos. Filing cabinets were becoming history, replaced by an anarchic mess of files holding different media, individually organised, if at all, and containing “unstructured data”. This kind of data contrasts with the then dominant idea that important data should be organised and stored in a database to make processing it easier. Mike realised that there was lots of data held by companies that mattered to them, but that just was not structured like that and never would be. There was a niche market there to provide a novel solution to a newly emerging business problem. Focussing on that, his search company, Autonomy, took off, gaining corporate giants as clients including the BBC. As a hands-on CEO, with both the technical skills to write the code himself and the business skills to turn it into products businesses needed, he ensured the company quickly grew. It was ultimately sold for $11 billion. (The sale led to an accusation of fraud in the US but, innocent, he was acquitted of all the charges.)

Investing

From firsthand experience he knew that to turn an idea into reality you needed angel investors: people willing to take a chance on your ideas. With the money he made, he therefore started investing himself, pouring the money he was making from his companies into other people’s ideas. To be a successful investor you need to invest in companies likely to succeed while avoiding ones that will fail. This is also about understanding the likelihood of different things, obviously something he was good at. When he ultimately sold Autonomy, he used the money to create his own investment company, Invoke Capital. Through it he invested in a variety of tech startups across a wide range of areas, from cyber security, crime and law applications to medical and biomedical technologies, using his own technical skills and deep scientific knowledge to help make the right decisions. As a result, he contributed to the thriving Silicon Fen community of UK startup entrepreneurs, who did and continue to do exciting things in and around Cambridge, turning research and innovation into successful, innovative companies. He did this not only through his own ideas but by supporting the ideas of others.

Man on rock staring at the sun between 2 parallel worlds
Image by Patricio González from Pixabay

Mike was successful because he combined business skills with a wide variety of technical skills including maths, electronic engineering and computer science, even bioengineering. He didn’t use his success to just build up a fortune but reinvested it in new ideas, new companies and new people. He has left a wonderful legacy as a result, all the more so if others follow his lead and invest their success in the success of others too.

In memory of a friend

Paul Curzon, Queen Mary University of London


To be (CEO) or not to be (CEO)

Just because you start a start-up doesn’t mean you have to be the boss (the CEO) running the company… Hamit Soyel didn’t, and his research-based company, DragonflyAI, is flourishing.

Hamit’s computer science research (with Peter McOwan) at Queen Mary concerns understanding human (and animal) vision systems. Building on the research of neuroscientists, they created computational models of vision systems. These are just programs that work in the way we believe our brains process what we see. If our understanding is correct then the models should see as we see. For example, one aspect of this is how our attention is drawn to some things and not others. If the model is accurate, it should be able to predict things we will definitely notice, and predict things we probably won’t. It turned out their models were really good at this.
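As a toy illustration of the attention idea (nothing like the team's actual models, and with made-up numbers), a program can score each part of an image by how much it stands out from its surroundings, then predict that our eyes are drawn to the highest-scoring spot:

```python
# A toy "saliency map": attention is drawn to regions that stand out from
# their surroundings. Here saliency is simply how much a pixel differs
# from the average of its neighbours.
image = [
    [5, 5, 5, 5, 5],
    [5, 5, 5, 5, 5],
    [5, 5, 9, 5, 5],   # one bright spot in uniform surroundings
    [5, 5, 5, 5, 5],
]

def saliency(img, r, c):
    neighbours = [img[i][j]
                  for i in range(max(r - 1, 0), min(r + 2, len(img)))
                  for j in range(max(c - 1, 0), min(c + 2, len(img[0])))
                  if (i, j) != (r, c)]
    return abs(img[r][c] - sum(neighbours) / len(neighbours))

# Where does the model predict our eyes will be drawn first?
best = max(((r, c) for r in range(len(image)) for c in range(len(image[0]))),
           key=lambda rc: saliency(image, *rc))
print(best)  # (2, 2): the spot that stands out
```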

They realised that their models had applications in marketing and advertising (an advert that no one notices is a waste of money). They therefore created a startup company based on their research. Peter sadly died not long after the company was founded, leaving Hamit to make it a success. He had a choice to make, though. Often people who start a startup company set themselves up as the CEO: it is their company so they want control. To do this you need good business skills, and also to be willing to devote the time to make the business a success. You got to this point, however, because of your technical and creative skills.

When you start a company you want to make a difference, but to actually do that you need a strong team, and that team doesn’t have to be “behind” you, they can be “with” you – after all, the best teams are made up of specialists who work to their strengths as well as supporting and working well with each other. Perhaps your strengths lie elsewhere, rather than in running a business.

With support from Queen Mary Innovations, who helped him set up DragonflyAI and have supported it through its early years, Hamit decided his strengths were in the creative and technical side of the business, so he became the Chief Scientist and Inventor rather than the CEO. That role was handed to an expert, as were the other senior leadership roles such as Marketing and Sales, Operations and Customer Success. That meant Hamit could focus on what he did best in further developing the models, as well as in innovating new ideas. This approach also gives confidence to investors that the leadership team know what they are doing, and that if they like the ideas then the company will be a success.

As a result, Hamit’s business is now a big success having helped a whole series of global companies improve their marketing, including Mars and Coca-Cola. DragonflyAI also recently raised $6m in funding from investors to further develop the business.

As Hamit points out:

“By delegating operations to a professional leadership team, you can concentrate on areas you truly enjoy that fuel your passion and creativity, ultimately enhancing your fulfilment and contribution to your company and driving collective success.”

To be the CEO or not be the CEO depends on your skills and ambition, but you must also think about what is best for the company, as Hamit has pointed out. It is important to realise though that you do not have to be the CEO just because you founded the company.

Paul Curzon, Queen Mary University of London,

based on an interview between Hamit Soyel and Queen Mary Innovations
