Dr Who? Dr You???

Image by Eduard Solà, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons

When The Doctor in Dr Who knows their time is up – usually because they’ve been injured so badly that they are dying – like all Time Lords, they can regenerate. They transform into a completely different body. They end up with a new personality, new looks, a new gender, even new teeth. Could humans one day regenerate too?

Your body is constantly regenerating itself too. New cells are born to replace the ones that die. Your hair, nails and skin are always growing and renewing. Every year, you lose and regain so much that you could make a pile of dead cells that would weigh the same as your body. And yet with all this change, every morning you look in the mirror and you look and feel the same. No new personality, no new teeth. How does the human body keep such incredible control?

Here’s another puzzler. Even though our cells are always being renewed, you can’t regrow your arm if it gets cut off. We know it’s not impossible to regrow body parts: we manage it for small things like cells and even whole toenails, and some animals, like lizards, can regrow their tails. Why can we regrow some things but not others?

Creation of the shape

All of those questions are part of a field in biology called morphogenesis. The word comes from Greek, and means ‘creation of the shape’. Scientists who study morphogenesis are interested in how cells come together to create bodies. It might sound a long way from computing, but Alan Turing became interested in morphogenesis towards the end of his life. He wanted to find out about patterns in nature – and patterns were something he knew a lot about as a mathematician. A paper he published in 1952 described a way that Turing thought animals could form patterns like stripes and spots on their bodies and in their fur. The mechanisms he described explain how uniform cells could end up turning into different things: not only different patterns in different places, but different body parts in different places. That work is now the foundation of a whole sub-discipline of biology.

Up for the chop

Turing died before he could do much work on morphogenesis, but lots of other scientists have taken up the mantle. One of them is Alejandro Sánchez Alvarado, who was born in Venezuela but works at the Stowers Institute for Medical Research in Kansas City, in the US. He is trying to get to the bottom of questions like how we regenerate our bodies. He thinks that some of the clues could come from working on flatworms that can regenerate almost any part of their body. A particular flatworm, called Schmidtea mediterranea, can regenerate its head and its reproductive organs. You can chop its body into almost 280 pieces and it will still regenerate.

A genetic mystery

The funny thing is, flatworms and humans aren’t as different as you might think. They have about the same number of genes as us, even though we’re so much bigger and seemingly more complicated. Even their genes and ours are mostly the same. All animals share a lot of the same, ancient genetic material. The difference seems to come from what we do with it. The good news there is that as the genes are mostly the same, if scientists can figure out how flatworm morphogenesis works, there’s a good chance that it will tell us something about humans too.

One gene does it all

Alejandro Sánchez Alvarado did one series of experiments on flatworms where he cut off their heads and watched them regenerate. He found that the process looked pretty similar to watching organs like lungs and kidneys grow in humans as well as other animals. He also found that there was a particular gene that, when knocked out, takes away the flatworm’s ability to regenerate.

What’s more, he tried again in other flatworms that can’t normally regenerate whole body parts – just cells, like us. Knocking out that gene made their organs, well, fall apart. That meant that the organs that fell apart would ordinarily have been kept together by regrowing cells, and that the same gene that allows for cell renewal in some flatworms takes care of regrowing whole bodies, Dr Who-style, in others. Phew. A lot of jobs for one gene.

Who knows, maybe Time Lords and humans share that same gene too. They’re like the lucky, regenerating flatworms and we’re the ones who are only just keeping things together. But if it’s any consolation, at least we know that our bodies are constantly working hard to keep us renewed. We still regenerate, just in a slightly less spectacular way.

– the CS4FN team (updated from the archive)

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.


How did the zebra get its stripes?

Head of a fish with a distinctive stripy, spotty pattern
Image by geraldrose from Pixabay

There are many myths and stories about how different animals gained their distinctive patterns. In 1901, Rudyard Kipling wrote a “Just So Story” about how the leopard got its spots, for example. The myths are older than that though, such as a story told by the San people of Namibia (and others) of how the zebra got its stripes: by staggering through a baboon’s fire during a fight. These are just stories. It was a legendary computer scientist and mathematician, who was also interested in biology and chemistry, who worked out the actual way it happens.

Alan Turing is one of the most important figures in Computer Science having made monumental contributions to the subject, including what is now called the Turing Machine (giving a model of what a computer might be before they existed) and the Turing Test (kick-starting the field of Artificial Intelligence). Towards the end of his life, in the 1950s, he also made a major contribution to Biology. He came up with a mechanism that he believed could explain the stripy and spotty patterns of animals. He has largely been proved right. As a result those patterns are now called Turing Patterns. It is now the inspiration for a whole area of mathematical biology.

How animals come to have different patterns has long been a mystery, yet all sorts of animals, from fish to butterflies, have them. How do different zebra cells “know” they ultimately need to develop into either black ones or white ones, in a consistent way, so that stripes result (not spots, or no pattern at all), whereas leopard cells “know” they must grow into a creature with spots? Both start from similar groups of uniform cells without stripes or spots. How do some cells that end up in one place “know” to turn black, and others that end up in another place “know” to turn white, in such a consistent way?

There must be some physical process going on that makes it happen so that as cells multiply, the right ones grow or release pigments in the right places to give the right pattern for that animal. If there was no such process, animals would either have uniform colours or totally random patterns.

Mathematicians have always been interested in patterns. It is what maths is actually all about. And Alan Turing was a mathematician. However, he was a mathematician interested in computation, and he realised the stripy, spotty problem could be thought of as a computational kind of problem. Now we use computers to simulate all sorts of real phenomena, from the weather to how the universe formed, and in doing so we are thinking in the same kind of way. We are turning a real, physical process into a virtual, computational one underpinned by maths. If the simulation gets it right, then that gives evidence that our understanding of the process is accurate. This way of thinking has given us a whole new way to do science, as well as of thinking more generally (so a new kind of philosophy), and it starts with Alan Turing.

Back to stripes and spots. Turing realised it might all be explained by chemistry and the processes that result from it. Thinking computationally, he saw that you would get different patterns from the way chemicals react as they spread out (diffuse). He then worked out the mathematical equations that described those processes and suggested how computers could be used to explore the ideas.

Diffusion is just the way chemicals spread out. Imagine dropping some black ink onto blotting paper. It starts as a drop in the middle, but gradually the black spreads out in an increasing circle until there is not enough to spread further. The expanding circle stops. Now, suppose that instead of just ink we have a chemical (let’s call it BLACK, after its colour) that, as it spreads, also creates more of itself. Now BLACK will gradually spread out uniformly everywhere. So far, so expected. You would not expect spots or stripes to appear!
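The blotting-paper picture is easy to capture in a few lines of code. The sketch below (purely illustrative, not anything from Turing’s paper) simulates a drop of ink diffusing along a strip of paper divided into cells: at each step, ink flows between neighbouring cells from high concentration to low, so the drop spreads out while the total amount of ink stays the same.

```python
def diffuse(cells, rate):
    """One step of diffusion: ink flows between each pair of
    neighbouring cells, from high concentration to low
    (the ends of the strip act as walls)."""
    n = len(cells)
    new = cells[:]
    for i in range(n):
        for j in (i - 1, i + 1):          # the two neighbours
            if 0 <= j < n:
                new[i] += rate * (cells[j] - cells[i])
    return new

# A single drop of ink in the middle of the strip.
ink = [0.0] * 21
ink[10] = 1.0

for _ in range(50):
    ink = diffuse(ink, 0.2)

# The drop is now a wide, shallow bump centred where it started:
# spread out, but with the same total amount of ink.
```

Because ink only ever moves between cells (never appearing or vanishing), the total is conserved exactly, just as it would be on real blotting paper.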

Next, however, let’s consider what Turing thought about. What happens if that chemical BLACK produces another chemical WHITE as well as more BLACK? Now, starting with a drop of BLACK, as it spreads out, it creates both more BLACK to spread further, but also WHITE chemicals as well. Gradually they both spread. If the chemicals don’t interact then you would end up with BLACK and WHITE mixed everywhere in a uniform way leading to a uniform greyness. Again no spots or stripes. Having patterns appear still seems to be a mystery.

However, suppose instead that the presence of the WHITE chemical actually stops BLACK creating more of itself in that region. Anywhere WHITE becomes concentrated stays WHITE. If WHITE spreads (i.e. diffuses) faster than BLACK, then it gets to some places first, and they become WHITE, with BLACK suppressed there. However, no new BLACK means no more new WHITE to spread further. Where there is already BLACK, however, it continues to create more BLACK, leading to areas that become solid BLACK. Over time these spread around and beyond the white areas that stopped spreading, and also create new WHITE that again spreads faster. The result is a pattern. What kind of pattern depends on the speed of the chemical reactions and how quickly each chemical diffuses, but where those are the same, because the same chemicals are involved, the same kind of pattern will result: zebras will end up with stripes and leopards with spots.

This is now called a Turing pattern and the process is called a reaction-diffusion system. It gives a way that patterns can emerge from uniformity. It doesn’t just apply to chemicals spreading but to cells multiplying and creating different proteins. Detailed studies have shown it is the mechanism in play in a variety of animals that leads to their patterns. It also, as Alan Turing suggested, provides a basis to explain the way the different shapes of animals develop despite starting from identical cells. This is called morphogenesis. Reaction-diffusion systems have also been suggested as the mechanism behind how other things occur in the natural world, such as how fingerprints develop. Despite being ignored for decades, Turing’s theory now provides a foundation for the idea of mathematical biology. It has spawned a whole new discipline within biology, showing how maths and computation can support our understanding of the natural world. Not something that the writers of all those myths and stories ever managed.
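You can even watch this happen in a small simulation. The sketch below is a one-dimensional reaction-diffusion system in the activator-inhibitor style described above. It uses a standard Gierer-Meinhardt-style model rather than Turing’s exact equations, and the parameter values are purely illustrative, chosen so that a pattern forms on a small ring of cells. BLACK is the self-creating chemical; WHITE is made by BLACK, suppresses it, and diffuses 25 times faster.

```python
import random

def step(a, h, da, dh, mu_a, mu_h, sat, dt):
    """One explicit-Euler step of an activator-inhibitor
    reaction-diffusion system on a ring of cells.
    a = activator (BLACK), h = inhibitor (WHITE)."""
    n = len(a)
    new_a, new_h = a[:], h[:]
    for i in range(n):
        lap_a = a[i - 1] + a[(i + 1) % n] - 2 * a[i]   # diffusion terms
        lap_h = h[i - 1] + h[(i + 1) % n] - 2 * h[i]
        # BLACK makes more of itself, but WHITE suppresses that.
        made = a[i] * a[i] / (h[i] * (1 + sat * a[i] * a[i]))
        new_a[i] = a[i] + dt * (made - mu_a * a[i] + da * lap_a)
        # BLACK also makes WHITE, which decays and spreads faster.
        new_h[i] = h[i] + dt * (a[i] * a[i] - mu_h * h[i] + dh * lap_h)
    return new_a, new_h

random.seed(1)
N = 40
# Start near the uniform steady state of these equations
# (a about 1.93, h about 37.2 for the parameters below), plus a
# tiny random nudge: with no nudge at all, nothing would happen.
a = [1.93 + random.uniform(-0.05, 0.05) for _ in range(N)]
h = [37.2] * N

for _ in range(16000):
    a, h = step(a, h, da=0.2, dh=5.0, mu_a=0.05, mu_h=0.1,
                sat=0.01, dt=0.05)
# With these settings the tiny nudge grows into bands of high
# and low BLACK: a one-dimensional 'stripe' pattern.
```

The inhibitor diffusing faster than the activator is the crucial ingredient: if a perfectly uniform mix is left alone it stays uniform forever, and it takes only a tiny random nudge, amplified by the reactions, to tip it into stripes.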

– Paul Curzon, Queen Mary University of London


If you go down to the woods today…

A girl walking through a meadow full of flowers within woods
Image by Jill Wellington from Pixabay

In the 2025 RHS Chelsea Flower Show there was one garden that was about technology as well as plants: the Avanade Intelligent Garden, exploring how AI might be used to support plants. Each of the trees contained probes that sensed and recorded data about it, which could then be monitored through an app. This takes pioneering research from over two decades ago a step further, incorporating AI into the picture and making it mainstream. Back then a team led by Yvonne Rogers built an ‘ambient wood’, aiming to add excitement to a walk in the woods...

Mark Weiser had a dream of ‘Calm Computing’ and while computing sometimes seems ever more frustrating to use, the ideas led to lots of exciting research that saw at least some computers disappearing into the background. His vision was driven by a desire to remove the frustration of using computers but also the realization that the most profound technologies are the ones that you just don’t notice. He wanted technology to actively remove frustrations from everyday life, not just the ones caused by computers. He wrote of wanting to “make using a computer as refreshing as taking a walk in the woods.”

Not calm, but engaging and exciting!

No one argues that computers should be frustrating to use, but Yvonne Rogers, then of the Open University, had a different idea of what the new vision could be. Not calm. Anything but calm, in fact (though not frustrating, of course). Engaging and exciting!

Her vision of Weiser’s tranquil woods was not relaxing but provocative and playful. To prove the point, her team turned some real woods in Sussex into an ‘Ambient Wood’. The Ambient Wood was an enhanced wood. When you entered it you took probes with you that you could point and poke with, allowing you to take readings of different kinds in easy ways. Time-hopping ‘periscopes’ placed around the woods let you see those patches of woodland at other times of the year. There was also a special woodland den where you could see the bigger picture of the woods, as all your readings were pulled together using computer visualisations.

Not only was the Ambient Wood technology visible and in your face but it made the invisible side of the wood visible in a way that provoked questions about the wildlife. You noticed more. You saw more. You thought more. A walk in the woods was no longer a passive experience but an active, playful one. Woods became the exciting places of childhood stories again but now with even more things to explore.

The idea behind the Ambient Wood, and similar projects like Bristol’s Savannah, where playing fields are turned into African savannah, was to revisit the original idea of computers but in a new context. Computers started as tools, and tools don’t disappear – they extend our abilities. Tools originally extended our physical abilities: a hammer allows us to hit things harder, a pulley to lift heavier things. They make us more effective and allow us to do things a mere human couldn’t do alone. Computer technology can do a similar thing but for the human intellect… if we design it well.

“The most important thing the participants gained was a sense of wonderment at finding out all sorts of things and making connections through discovering aspects of the physical woodland (e.g., squirrel’s droppings, blackberries, thistles)”

– Yvonne Rogers

The Weiser dream was that technology invisibly watches the world and removes the obstacles in the way before you even notice them. It’s a little like the way servants to the aristocracy were expected to always have everything just right while never being noticed by those they served. The way this is achieved is to have technology constantly monitoring, understanding what is going on and how it might affect us, and then calmly fixing things. The problem at the time was that this needed really ‘smart’ technology – a high level of artificial intelligence – to achieve, and that proved more difficult than anyone imagined (though perhaps we are now much closer than we were). Our behaviour and desires are full of subtlety and much harder to read than was imagined. Even a super-intellect would probably keep getting it wrong.

There are also ethical problems. If we do ever achieve the dream of total calm we might not like it. It is very easy to be gung ho with technology and not realize the consequences. Calm computing needs monitors – the computer measuring everything it can so it has as much information as possible to make decisions from (see Big Sister is Watching You).

A classic example of how this can lead to people rejecting technology intended to help comes from a project to make a ‘smart’ residential home for the elderly. The idea was that by wiring up the house to track the residents and monitor them, the nurses would be able to provide much better care, and relatives would be able to see how things were going. The place was filled with monitors. For example, sensors in the beds measured residents’ weight while they slept. Each night the occupant’s weight could invisibly be taken and the nurses alerted to worrying weight loss over time. The smart beds could also detect tossing and turning, so someone having bad nights could be helped. A smart house could use similar technology to help you or me have a good night’s sleep, or help us diet.

The problem was the beds could tell other things too: things that the occupants preferred to keep to themselves. Nocturnal visitors also showed up in the records. That’s the problem if technology looks after us every second of the day: the records may give away to others far more than we are happy with.

Yvonne’s vision was different. It was not that the computers should try to second-guess everything, but that they should extend our abilities. It is quite easy for new technology to leave us intellectually poorer than we were. Calculators are a good example. Yes, we can do more complex sums quickly now, but at the same time, without a calculator, many people can’t do the sums at all. Our abilities have both improved and been damaged at the same time. Generative AI seems to be heading the same way. What the probes do, instead, is extend our abilities, not reduce them: allowing us to see the woods in a new way, but to use the information however we wish. The probes encourage imagination.

The alternative to the smart house (or calculator) that pampers you, allowing your brain to stay in neutral, or the residential home that monitors you for the sake of the nurses and your relatives, is one where the sensors are working for you. Where you are the one the bed reports to, helping you make decisions about your health, or where the monitors you wear are (only) part of a game that you play because it’s fun.

What next? Yvonne suggested the same ideas could be used to help learning and exploration in other ways, such as understanding our bodies: “I’d like to see kids discover new ways of probing their bodies to find out what makes them tick.”

So if Yvonne’s vision is ultimately the way things turn out, you won’t be heading for a soporific future while the computer deals with real life for you. Instead it will be a future where the computers are sparking your imagination, challenging you to think, filling you with delight…and where the woods come alive again just as they do in the storybooks (and in the intelligent garden).

– Paul Curzon, Queen Mary University of London

(adapted from the archive)


The Sweet Learning Computer: Learning Ladder

The board for the ladder game with the piece on the bottom rung
The Ladder board. Image by Paul Curzon

Can a machine learn from its mistakes, until it plays a game perfectly, just by following rules? Donald Michie worked out a way in the 1960s. He made a machine out of matchboxes and beads called MENACE that did just that. Our version plays the game Ladder and is made of cups and sweets. Punish the machine when it loses by eating its sweets!

Let’s play the game, Ladder. It is played on a board like a ladder, with a single piece (an X) placed on the bottom rung. Players take it in turns to move the piece 1, 2 or 3 places up the ladder. You win if you move the piece to the top of the ladder, reaching the target. We will play on a ladder with 10 rungs, as on the right (but you can play on larger ladders).

To make the learning machine, you need 9 plastic cups and lots of wrapped sweets coloured red, green and purple. Spread out the sheets showing the possible board positions (see below) and place a cup on each. Put coloured sweets in each cup to match the arrows: for most positions there are red, green and purple arrows, so you put a red, green and purple sweet in those cups. Once all cups have sweets matching the arrows, your machine is ready to play (and learn).

The machine plays first. Each cup sits on a possible board position that your machine could end up in. Find the cup that matches the board position the game is in when it is the machine’s turn. Shut your eyes and take a sweet at random from that cup, placing it next to the cup. Make the move indicated by the arrow of that colour. Then the machine’s human opponent makes a move. Once they have moved, the machine plays in the same way again, finding the position and taking a sweet to decide its move. Keep playing alternately like this until someone wins. If the machine ends up in a position with no sweets in that cup, then it resigns.

The possible board positions showing possible moves with coloured arrows.
The 9 board positions with arrows showing possible moves. Place a cup on each board position with sweets corresponding to the arrows. Image by Paul Curzon

If the machine loses, then eat the sweet corresponding to the last move it made. It will never make that mistake again! Win or lose, put all the other sweets back.

The initial cup for board position 8, with a red and purple sweet.
The initial cup for board position 8, with a red and purple sweet. Image by Paul Curzon

Now, play lots of games like that, punishing the machine by eating the sweet of its last move each time it loses. The machine will play badly at first. It’s just making moves at random. The more it loses, the more sweets (losing moves) you eat, so the better it gets. Eventually, it will play perfectly. No one told it how to win – it learnt from its mistakes because you ate its sweets! Gradually the sweets left encode rules of how to win.

Try slightly different rules. At the moment we just punish bad moves. You could reward all the moves that led to it by adding another sweet of the same colour too. Now the machine will be more likely to make those moves again. What other variations of rewards and punishments could you try?

Why not write a program that learns in the same way – but using data values in arrays to represent moves instead of sweets? Not so yummy!
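As a hint of how such a program might look, here is one possible sketch in Python. The cup and game details follow the article; the random opponent, the seed and the number of training games are our own choices. Cups become lists of moves, and eating a sweet becomes removing a value.

```python
import random

RUNGS = 10          # rungs 1..10; reach rung 10 to win
MOVES = (1, 2, 3)   # move the piece 1, 2 or 3 rungs up

def fresh_cups():
    """One 'cup' per board position 1..9, holding a 'sweet'
    for each legal move from that position."""
    return {pos: [m for m in MOVES if pos + m <= RUNGS]
            for pos in range(1, RUNGS)}

def play_and_learn(cups, rng):
    """Play one game: the machine (moving first) vs a random
    opponent. If the machine loses or resigns, eat the sweet
    for the last move it made."""
    pos = 1                    # piece starts on the bottom rung
    last = None                # (cup, sweet) of machine's last move
    while True:
        # --- machine's turn ---
        if not cups[pos]:      # cup empty: the machine resigns
            if last:
                cups[last[0]].remove(last[1])
            return "loss"
        move = rng.choice(cups[pos])
        last = (pos, move)
        pos += move
        if pos == RUNGS:
            return "win"
        # --- opponent's turn (plays completely at random) ---
        pos += rng.choice([m for m in MOVES if pos + m <= RUNGS])
        if pos == RUNGS:       # opponent reached the top: eat a sweet
            cups[last[0]].remove(last[1])
            return "loss"

rng = random.Random(42)
cups = fresh_cups()
for _ in range(5000):
    play_and_learn(cups, rng)
```

Play enough games and the losing moves are gradually eaten away: the sweets that eventually survive in cups 1, 3, 4 and 5 (move 1 rung first, then later jump straight to rung 6) spell out the winning strategy, just as with the real cups.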

– Paul Curzon, Queen Mary University of London


Signing Glasses

Glasses sitting on top of a mobile phone.
Image by Km Nazrul Islam from Pixabay

In a recent episode of Dr Who, The Well, Deaf actress Rose Ayling-Ellis plays a Deaf character, Aliss. Aliss is a survivor of some, at first unknown, disaster that has befallen a mining colony 500,000 years in the future. The Doctor and current companion Belinda arrive with troopers. Discovering Aliss is deaf, they communicate with her using a nifty futuristic gadget of the troopers that picks up everything they say and converts it into text, projected in front of them, allowing her to read what they say as they speak.

Such a gadget is not so futuristic actually (other than in a group of troopers carrying them). Dictation programs have existed for a long time and now, with faster computers and modern natural language processing techniques, they can convert speech to text in real time from a variety of speakers without lots of personal training (though they still do make mistakes). Holographic displays also exist, though one as portable as the troopers’ is still a stretch. An alternative that definitely exists is augmented reality glasses specifically designed for the deaf (though they are still expensive). A deaf or hard of hearing person who owns a pair can read what is spoken through their glasses in real time as a person speaks to them, with the computing power provided by their smartphone, for example. The text could also be displayed so that it appeared to be out in the world (not on the lenses), as though it were appearing next to the person speaking. The effect would be pretty much the same as in the programme, but without the troopers having had to bring gadgets of their own: just Aliss wearing glasses.

Aliss (and Rose) used British Sign Language of course, and she and the Doctor were communicating directly using it, so one might have hoped that by 500,000 years in the future someone might have had the idea of projecting sign language rather than text. After all, British Sign Language is a language in its own right, with a different grammatical structure to English. It is therefore likely that it would be easier for a native BSL speaker to see sign language than to read text in English.

Some Deaf people might also object to glasses that translate into English because it undermines their first language and so their culture. However, ones that translate into sign language can do the opposite and reinforce sign language, helping people learn the language by being immersed in it (whether deaf or not). Services like this do in fact already exist, connecting Deaf people to expert sign language interpreters who see and hear what they do and translate for them – whether through glasses or laptops.

Of course, all the above is about allowing Deaf people (like Aliss) to fit into a non-deaf world (like that of the troopers), allowing her to understand them. The same technology could also be used to allow everyone else to fit into a Deaf world. Aliss’s signing could have been turned into text for the troopers in the same way. Similarly, augmented reality glasses, connected to a computer vision system, could translate sign language into English, allowing non-deaf people wearing glasses to understand people who are signing.

So it’s not just Deaf people who should be wearing sign language translation glasses. Perhaps one day we all will. Then we would be able to understand (and over time hopefully learn) sign language and actively support the culture of Deaf people ourselves, rather than just making them adapt to us.

– Paul Curzon, Queen Mary University of London


Sign Language for Train Departures

BSL for CS4FN
Image by Daniel Gill

This week (5-11th May) is Deaf Awareness Week, an opportunity to celebrate d/Deaf* people, communities, and culture, and to advocate for equal access to communication and services for the d/Deaf and hard of hearing. A recent step forward is that sign language has started appearing on railway stations.

*”deaf” with a lower-case “d” refers to the audiological experience of deafness, or those who might have become deafened or hard of hearing in later life, so might identify closer to the hearing community. “Deaf” with an upper-case “D” refers to the cultural experience of deafness, or those who might have been born Deaf and therefore identify with the Deaf community. This is similar to how people might describe themselves as “having a disability” versus “being disabled”.

If you’re like me and travel by train a lot (long time CS4FN readers will be aware of my love of railway timetabling), you may have seen these relatively new British Sign Language (BSL) screens at various railway stations.

They work by automatically converting train departure information into BSL by stitching together pre-recorded videos of BSL signs. Pretty cool stuff! 
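The stitching idea itself is simple to sketch in code. In this toy version (the sign names and file names are invented, and it ignores BSL’s own grammar and sign order entirely), each piece of departure information maps to a pre-recorded clip, and a departure becomes a playlist of clips shown in order.

```python
# A toy lookup table from words to pre-recorded sign clips.
# These names are invented for illustration only.
clips = {
    "train": "sign_train.mp4",
    "platform": "sign_platform.mp4",
    "2": "sign_2.mp4",
    "London": "sign_london.mp4",
    "delayed": "sign_delayed.mp4",
}

def to_playlist(message):
    """Stitch a message into a list of video clips, skipping
    any word we have no recorded sign for."""
    return [clips[word] for word in message if word in clips]

departure = ["London", "train", "platform", "2"]
playlist = to_playlist(departure)
# playlist == ["sign_london.mp4", "sign_train.mp4",
#              "sign_platform.mp4", "sign_2.mp4"]
```

A real system has the much harder job of deciding which signs to use and in what order, which is exactly where the grammar differences discussed below come in.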

When I first saw these, though, there was one small thing that piqued my interest: if d/Deaf people can see the screen, why not just read the text? I was sure it wasn’t an oversight – Network Rail and train operators worked closely with d/Deaf charities and communities when designing the system – so, being a researcher in training, I decided to look into it.

A train information screen with sign language
Image by Daniel Gill

It turns out that there are several reasons.

There have been many years of research investigating reading comprehension in d/Deaf people compared to their hearing peers. In one 2015 paper, a cohort of d/Deaf children had significantly weaker reading comprehension skills than hearing children of both the same chronological age and the same reading age.

Although this gap does seem to close with age, some d/Deaf people may be far more comfortable and skilful using BSL to communicate and receive information. It should be emphasised that BSL is a separate language, structured very differently to spoken and written English. As an example, take the statement:

“I’m on holiday next month.”

In BSL, you put the time first, followed by topic and then comment, so you’d end up with:

“next month – holiday – me”

As one could imagine, trying to read English (a second language for many d/Deaf people) with its wildly different sentence structure could be a challenge… especially as you’re rushing through the station looking for the correct platform for your train!

Sometimes, as computer scientists, we’re encouraged to remove redundancies and make our systems simpler and easy-to-use. But something that appears redundant to one person could be extremely useful to another – so as we go on to create tools and applications, we need to make sure that all target users are involved in the design process.

Daniel Gill, Queen Mary University of London


Was the first computer a ‘Bombe’?

Image from a set of wartime photos of GC&CS at Bletchley Park, Public domain, via Wikimedia Commons

A group of enthusiasts at Bletchley Park, the top secret wartime codebreaking base, rebuilt a primitive computing device used in the Second World War to help the Allies listen in on U-boat conversations. It was called ‘the Bombe’. Professor Nigel Smart, now at KU Leuven and an expert on cryptography, tells us more.

So what’s all this fuss about building “A Bombe”? What’s a Bombe?

The Bombe didn’t help win the war destructively, like its explosive namesakes, but through intelligence. It was designed to find the passwords or ‘keys’ into the secret codes of ‘Enigma’: the famous encryption machine used both by the German army in the field and to communicate with U-boats in the Atlantic. It effectively allowed the British to listen in to the Germans’ secret communications.

A Bombe is an electro-mechanical, special-purpose computing device. ‘Electro-mechanical’ because it works using both mechanics and electricity: it works by passing electricity through a circuit, and the precise circuit used is modified mechanically on each step of the machine by drums that rotate. These rotating drums mirror the way the Enigma machine used a set of discs which rotated as each letter was encrypted. The Bombe is a ‘special-purpose’ computing device rather than a ‘general-purpose’ computer because it can’t be used to solve any problem other than the one it was designed for.

Why Bombe?

There are many explanations of why it’s called a ‘Bombe’. The most popular is that it is named after an earlier, but unrelated, machine built by the Polish to help break Enigma called the Bomba. The Bomba was also an electro-mechanical machine and was called that because as it ran it made a ticking sound, rather like a clock-based fuse on an exploding bomb.

What problem did it solve?

The Enigma machine used a different main key, or password, every day. It was then altered slightly for each message by a small indicator sent at the beginning of each message. The goal of the codebreakers at Bletchley Park each day was to find the German key for that day. Once this was found it was easy to then decrypt all the day’s messages. The Bombe’s task was to find this day key. It was introduced when the procedures used by the Germans to operate the Enigma changed. This meant that the existing techniques used by the Allies to break the Enigma codes could no longer be used. Humans alone could no longer crack the German codes fast enough.

So how did it help?

The basic idea was that many messages sent would contain some short piece of predictable text such as “The weather today will be….” Using this guess for the message being encrypted, the cryptographers would take each encrypted message in turn and decide whether it could plausibly be an encryption of the guessed text. The fact that the German army was trained to say and write “Heil Hitler” at any opportunity was a great help too!

The words “Heil Hitler” helped the Germans lose the war

If they found one that was a possible match they would analyze the message in more detail to produce a “menu”. A menu was just what computer scientists today call a ‘graph’. It is a set of nodes and edges, where the nodes are letters of the alphabet and the edges link the letters together a bit like the way a London tube map links stations (the nodes) by tube lines (the edges). If the graph had suitable mathematical properties that they checked for, then the codebreakers knew that the Bombe might be able to find the day key from the graph.
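The menu idea can be sketched in code. Below is a toy illustration, not the real Bombe procedure: each position where a letter of the guessed text (the ‘crib’) lines up against a letter of the ciphertext gives one edge of the graph, and one useful mathematical property the codebreakers looked for was a closed loop in that graph. The crib and ciphertext here are made up for illustration.

```python
# Toy sketch of a Bombe-style "menu": a graph whose nodes are
# letters and whose edges come from lining up a guessed crib
# against a ciphertext. (Made-up example data, not a real intercept.)

def build_menu(crib, ciphertext):
    """Return the menu as a set of edges (unordered letter pairs)."""
    edges = set()
    for p, c in zip(crib, ciphertext):
        if p != c:  # Enigma never encrypted a letter to itself
            edges.add(frozenset((p, c)))
    return edges

def has_cycle(edges):
    """Detect a closed loop in the menu using union-find.
    A loop was one property that made a menu usable."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for edge in edges:
        a, b = sorted(edge)
        ra, rb = find(a), find(b)
        if ra == rb:
            return True  # both ends already connected: a loop
        parent[ra] = rb
    return False

menu = build_menu("WETTER", "ETRWQA")
print(has_cycle(menu))  # → True: W-E, E-T and T-W form a loop
```

The union-find structure here is just a convenient modern way to spot loops; the codebreakers did the equivalent check by hand when deciding whether a menu was worth running on a Bombe.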

The menu, or graph, was then sent over to one of the Bombes. They were operated by a team of women – the world’s first team of computer operators. The operator programmed the Bombe by using wires to connect letters together on the Bombe according to the edges of the menu. The Bombe was then set running. Every so often it would stop and the operator would write down the possible day key which it had just found. Finally another group checked this possible day key to see if the Bombe had produced the correct one. Sometimes it had, sometimes not.

So was the Bombe a computer?

By a computer today we usually mean something which can do many things. The reason the computer is so powerful is that we can purchase one piece of equipment and then use this to run many applications and solve many problems. It would be a big problem if we needed to buy one machine to write letters, one machine to run a spreadsheet, one machine to play “Grand Theft Auto” and one machine to play “Solitaire”. So, in this sense the Bombe was not a computer. It could only solve one problem: cracking the Enigma keys.

Whilst the operator programmed the Bombe using the menu, they were not changing the basic operation of the machine. The programming of the Bombe is more like the data entry we do on modern computers.

Alan Turing, who helped design the Bombe along with Gordon Welchman, is often called the father of the computer, but that’s not for his work on the Bombe. It’s for two other reasons. Firstly, before the war he had the idea of a theoretical machine which could be programmed to solve any problem, just like our modern computers. Then, after the war, he used the experience of working at Bletchley to help build some of the world’s first computers in the UK.

But wasn’t the first computer built at Bletchley?

Yes, Bletchley Park did build the first computer, as we would call it. This was a machine called Colossus. Colossus was used to break a different German encryption machine called the Lorenz cipher. The Colossus was a true computer as it could not only be used to break the Lorenz cipher, but also to solve a host of other problems. It also worked on digital data, namely the ones and zeros that modern computers now operate on.

Nigel Smart, KU Leuven


Delia Derbyshire: Say it sounds like singing

Image by Gerd Altmann from Pixabay

Many names stand out as pioneers of electronic music, combining computer science, electronics and music to create new and amazing sounds. Kraftwerk would top many people’s lists of the most influential bands and Jean-Michel Jarre must surely be up there. Giorgio Moroder returned to the limelight with Daft Punk, having previously invented electronic disco in producing Donna Summer’s “I feel love”. Will.i.am, La Roux or Goldfrapp might be on your playlist. One of the most influential creators of electronic music, a legend to those in the know, is barely known by comparison though: Delia Derbyshire.

Delia worked for the BBC Radiophonic Workshop, the department tasked with producing innovative music to go with the BBC’s innovative programming, and played a major part in its fame. She had originally tried to get a job at Decca Records but was told they didn’t employ women in their recording studios (a big loss for them!). Creating the sounds and soundscapes behind hundreds of TV and radio programmes, long before electronic music went mainstream, she has influenced just about everyone in the field, whether they have heard of her or not.

The first person to realise that machines
would one day be able to not just play music
but also be able to compose it,
was Victorian programmer, and Countess, Ada Lovelace.

So have you heard her work? Her most famous piece of music you will most definitely know. She created the original electronic version of the Dr Who theme long before pop stars were playing electronic music. Each individual note was created separately, by cutting, splicing, speeding up and slowing down recordings of things like a plucked string and white noise. So why didn’t you know of her? It’s time more people did.

– Paul Curzon, Queen Mary University of London


The virtual Jedi

Image by Frank Davis from Pixabay

For Star Wars Day (May 4th), here is some Star Wars-inspired research from the archive…

Virtual reality can give users an experience that was previously only available a long time ago in a galaxy far, far away. Josh Holtrop, a graduate of Calvin College in the USA, constructed a Jedi training environment inspired by the scene from Star Wars in which Luke Skywalker goes up against a hovering droid that shoots laser beams at him. Fortunately, you don’t have to be blindfolded in the virtual reality version, like Luke was in the movie. All you need to wear over your eyes is a pair of virtual reality goggles with screens inside.

When you’re wearing the goggles, it’s as though you’re encased in a cylinder with rough metal walls. A bumpy metallic sphere floats in front of the glowing blade of your lightsaber – which in the real world is a toy version with a blue light and whooshy sound effects, though you see the realistic virtual version. The sphere in your goggles spins around, shooting yellow pellets of light toward you as it does. It’s up to you to bring your weapon around and deflect each menacing pulse away before it hits you. If you do, you get a point. If you don’t, your vision fills with yellow and you lose one of your ten lives.

Tracking movement with magnetism

It takes more than just some fancy goggles to make the Jedi trainer work, though. A computer tracks your movement in order to translate your position into the game. How does it know where you are? Because the whole time you’re playing the game, you’re also wandering through a magnetic field. The field comes from a small box on the ceiling above you and stretches for about a metre and a half in all directions. Sixty times every second, sensors attached to the headset and lightsaber check their position in the magnetic field and send that information to the computer. As you move your head and your sabre the sensors relay their position, and the view in your goggles changes. What’s more, each of your eyes receives a slightly different view, just like in real life, creating the feeling of a 3D environment.
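The “each eye receives a slightly different view” trick can be sketched very simply: from the one tracked head position, compute two camera positions, one per eye, offset by half the distance between a person’s eyes. This is an illustrative sketch, not the real system’s code, and the 6.4 cm eye separation is a typical figure assumed here, not a number from the article.

```python
# Sketch: turn one tracked head position into two per-eye camera
# positions. Assumes the head faces along the z axis, so the eyes
# are separated along x. EYE_SEPARATION is an assumed typical value.

EYE_SEPARATION = 0.064  # metres, typical adult interpupillary distance

def eye_positions(head_x, head_y, head_z):
    """Return (left, right) camera positions for a head at (x, y, z)."""
    half = EYE_SEPARATION / 2
    left = (head_x - half, head_y, head_z)
    right = (head_x + half, head_y, head_z)
    return left, right
```

Rendering the scene once from each of these two positions is what gives the brain the small left/right disparity it interprets as depth.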

Once the sensors have gathered all the information, it’s up to the software to create and animate the virtual 3D world – from the big cylinder you’re standing in to the tiny spheres the droid shoots at you. It controls the behaviour of the droid, too, making it move semi-randomly and become a tougher opponent as you go through the levels. Most users seem to get the hang of it pretty quickly. “Most of them take about two minutes to get used to the environment. Once they start using it, they get better at the game. Everybody’s bad at it the first sixty seconds,” Josh says. “My mother actually has the highest score for a beginner.”

The atom smasher

Much as every Jedi apprentice needs to find a way to train, there are uses for Josh’s system beyond gaming too. Another student, Jess Vriesma, wrote a program for the system that he calls the “atom smasher”. Instead of a helmet and lightsaber, each sensor represents a virtual atom. If the user guides the two atoms together, a bond forms between them. Two new atoms then appear, which the user can then add to the existing structure. By doing this over and over, you can build virtual molecules. The ultimate aim of the researchers at Calvin College was to build a system that lets you ‘zoom in’ to the molecule to the point where you could actually walk round inside it.

The team also bought themselves a shiny new magnetic field generator that lets them generate a field almost nine metres across. That’s big enough for two scientists to walk round the same molecule together. Or, of course, for two budding Jedi to spar against one another.

the CS4FN Team (from the archive)


Bits with Soul (via a puzzle)

Image by Gerd Altmann from Pixabay

In January 2025 computer scientist Simon Peyton Jones gave an inspiring lecture at Darwin College Cambridge on “Bits with Soul” about the joy, beauty, and creativity of computer science … from simple ideas of data representation comes all of virtual reality.

Our universe is built from elementary particles: quarks, electrons and the like. Out of quarks come protons and neutrons. Put those together with electrons in different ways to get different atoms. From atoms are built molecules, and from there comes ever more complexity, including the amazing reality of planets and suns, humans, trees, mushrooms and more. From small things ever more complex things are built, and ultimately all of creation.

The virtual world of our creation is made of bits combined using binary, but what are bits, and what is binary? Here is a puzzle that Simon Peyton Jones was set by his teacher as a child, to help him think about it. Once you have worked it out, think about how things might be built from bits: numbers, letters, words, novels, sounds, music, images, videos, banking systems, game worlds … and now artificial intelligences?

A bank cashier has a difficult customer. They always arrive in a rush wanting some amount of money, always up to £1000 in whole pounds, but a different amount from day to day. They want it instantly and are always angry at the wait while it is counted out. The cashier hatches a plan. She will have ready each day a set of envelopes that will each contain a different amount of money. By giving the customer the right set of envelope(s) she will be able to hand over the amount asked for immediately. Her first thought had been to have one envelope with £1 in, one envelope with £2 in, one with £3 and so on up to an envelope with £1000 in. However, that takes 1000 envelopes. That’s no good. With a little thought though she realised she could do it with only 10 envelopes if she puts the right amount of money in each. How much does she put in each of the 10 envelopes that allows her to give the customer whatever amount they ask for just by handing over a set of those envelopes?

Simon Peyton Jones gives the answer to the puzzle in the talk, and also explores how, from bits, comes everything we have built on computers, with all its beauty and complexity. Watch the video of Simon’s talk on YouTube to find out. [EXTERNAL]
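If you would rather check your answer than watch (spoiler ahead!), here is a short Python sketch of the solution. The ten envelope amounts are the powers of two, £1, £2, £4, … £512, which together cover every whole-pound amount up to £1023, because every number has a unique binary representation: picking envelopes is exactly picking which bits of the amount are 1.

```python
# Spoiler: the envelope puzzle is binary in disguise.
# Ten envelopes holding the powers of two cover every amount
# from £1 up to £1023 (so comfortably up to £1000).

ENVELOPES = [2 ** i for i in range(10)]  # [1, 2, 4, ..., 512]

def envelopes_for(amount):
    """Return the set of envelopes that sums to the requested amount:
    exactly the powers of two whose bit is set in the amount."""
    chosen = [e for e in ENVELOPES if amount & e]
    assert sum(chosen) == amount
    return chosen

print(envelopes_for(1000))  # → [8, 32, 64, 128, 256, 512]
```

Hand the angry customer £8 + £32 + £64 + £128 + £256 + £512 and they have their £1000 with no counting at all: each amount is just its own binary representation spelled out in envelopes.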

– Paul Curzon, Queen Mary University of London (inspired by Simon’s talk as I hope you will be)
