Sea sounds sink ships

You might think that under the sea things are nice and quiet, but something fishy is going on down there. Our oceans are filled with natural noise. This is called ambient noise and comes from lots of different sources: the sound of wind blowing waves on the surface, rain, distant ships and even underwater volcanoes. For marine life that relies on sonar or other acoustic ways to communicate and navigate, all the extra noise pollution that human activities, such as undersea mining and powerful ships' sonars, have caused is an increasing problem. But it is not only marine life that is affected by the levels of sea sounds: submarines also need to know something about all that ambient noise.

In the early 1900s the aptly named Submarine Signal Company made their living by installing undersea bells near lighthouses. The sound of these bells was a warning to mariners about nearby navigation hazards: an auditory version of the lighthouse light.

The Second World War led to scientists taking undersea ambient noise more seriously as they developed deadly acoustic mines: explosive mines triggered by the sound of a passing ship. To make the acoustic trigger work reliably the scientists needed to measure ambient sound, or the mines would explode while simply floating in the water. Measurements of sound frequencies were taken in harbours and coastal waters, and from these a mathematical formula was computed that gave them the 'Knudsen curves'. Named after the scientist who led the research, these curves showed how undersea ambient noise at different frequencies varies with surface wind speed and wave height. They allowed the acoustic triggers to be set to make the mines most effective.

– Peter McOwan, Queen Mary University of London



Shouting at Memory: Where Did My Write Go?

Bat sending out sound waves
Image by 13smok from Pixabay modified by CS4FN

How can computer scientists improve computer memory, making sure that when you save something it really is safe? If Vasileios Klimis, of Queen Mary University of London's Theory research group, has his way, they will be learning from bats.

Imagine spending hours building the perfect fortress in Minecraft, complete with lava moats and secret passages; or maybe you’re playing Halo, and you’ve just customised your SPARTAN with an epic new helmet. You press ‘Save’, and breathe a sigh of relief. But what happens next? Where does your digital castle or new helmet go to stay safe?

It turns out that when a computer saves something, it’s not as simple as putting a book on a shelf. The computer has lots of different places to put information, and some are much safer than others. Bats are helping us do it better!

The Bat in the Cave

Imagine you’re a bat flying around in a giant, dark cave. You can’t see, so how do you know where the walls are? You let out a loud shout!

SQUEAK!

A moment later, you hear the echo of your squeak bounce back to you. If the echo comes back really, really fast, you know the wall is very close. If it takes a little longer, you know the wall is further away. By listening to the timing of your echoes, you can build a map of the entire cave in your head without ever seeing it. This is called echolocation.

It turns out we can use this exact same idea to “see” inside a computer’s memory!

Fast Desks and Safe Vaults

A computer’s memory is a bit like a giant workshop with different storage areas.

  • There’s a Super-Fast Desk right next to the computer’s brain (the CPU). This is where it keeps information it needs right now. It’s incredibly fast to grab things from this desk, but there’s a catch: if the power goes out, everything on the desk is instantly wiped away and forgotten! If your data is here, it is not safe!
  • Further away, there’s a Big, Safe Vault. It takes a little longer to walk to the vault to store or retrieve things. But anything you put in the vault is safe, even if the power goes out. When you turn the computer back on, the information is still there.

When you press ‘Save’ in your game, you want your information to go from the fast-but-forgetful desk to the slower-but-safe vault. But how can we be sure it got there? We can’t just open up the computer and look!

Shouting and Listening for Echoes

This is where we use our bat’s trick. To check where a piece of information is, a computer scientist can tell the computer to do two things very quickly:

  1. SHOUT! First, it “shouts” by writing a piece of information, like your game score.
  2. LISTEN! Immediately after, it tries to read that same piece of information back. This is like listening for the “echo”.

If the echo comes back almost instantly, we know the information is still on the Super-Fast Desk nearby. But if the echo takes a little longer, it means the information had to travel all the way to the Big, Safe Vault and back!

By measuring the time of that echo, computer scientists can tell exactly where the write went. We can confirm that when you pressed ‘Save’, your information really did make it to the safe place.
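The real research does this with very precise, low-level timing measurements inside the memory system, but you can get a rough feel for the basic idea in ordinary Python (a toy sketch, not the actual research tool): reading values scattered through a huge list, which mostly lives far away in main memory, takes noticeably longer than reading values from a tiny list that fits on the Super-Fast Desk (the cache).

import random
import time

def time_random_reads(data, reads=1_000_000):
    # Pick the positions to read before starting the stopwatch
    positions = [random.randrange(len(data)) for _ in range(reads)]
    total = 0
    start = time.perf_counter()          # "shout"
    for p in positions:
        total += data[p]                 # "listen for the echo" of each read
    return time.perf_counter() - start

small = [1] * 10_000          # tiny: fits comfortably in the cache
large = [1] * 20_000_000      # huge: mostly lives out in main memory

print("small list:", round(time_random_reads(small), 3), "seconds")
print("large list:", round(time_random_reads(large), 3), "seconds")

On most machines the second number is noticeably bigger, because the computer keeps having to fetch data from further away. The researchers' version of the echolocation trick is far more precise, timing individual writes and read-backs to tell whether data has really reached non-volatile memory.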

The Real Names

In computer science, we have official names for these ideas:

  • The Super-Fast Desk is called the Cache.
  • The Big, Safe Vault is called Non-Volatile Memory (or NVM for short), which is a fancy way of saying it doesn’t forget when the power is off.
  • The whole system of close and far away memory is the Memory Hierarchy.
  • And this cool trick of shouting and listening is what we call Memory Echolocation.

So next time you save a game, you can imagine the computer shouting a tiny piece of information into its own secret cave and listening carefully for the echo to make sure your progress is safe and sound.

– Vasileios Klimis, Queen Mary University of London


AI owes us an explanation

Question mark and silhouette of hands holding a smart phone with question mark
Image by Chen from Pixabay

Why should AI tools explain why? Erhan Pisirir and Evangelia Kyrimi, researchers at Queen Mary University of London, explain why.

From the moment we start talking, we ask why. A three-year-old may ask fifty “whys” a day. ‘Why should I hold your hand when we cross the road?’ ‘Why do I need to wear my jacket?’ Every time their parent provides a reason, the toddler learns and makes sense of the world a little bit more.

Even when we are no longer toddlers trying to figure out why the spoon falls on the ground and why we should not touch the fire, it is still in our nature to question the reasons. The decisions and the recommendations given to us have millions of “whys” behind them. A bank might reject our loan application. A doctor might urge us to go to hospital for more tests. And every time, our instinct is to ask the same question: Why? We trust advice more when we understand it.

Nowadays the advice and recommendations come not only from other humans but also from computers with artificial intelligence (AI), such as a bank’s computer systems or health apps.  Now that AI systems are giving us advice and making decisions that affect our lives, shouldn’t they also explain themselves?

That’s the promise of Explainable AI: building machines that can explain their decisions or recommendations. These machines must be able to say what is decided, but also why, in a way we can understand.

From trees to neurons

For decades we have been trying to make machines think for us. A machine does not have the thinking, or reasoning, abilities of humans, so we need to give it instructions on how to think. When computers were less capable, these instructions were simpler. For example, they could look like a tree: think of a tree where each branch is a question with several possible answers, and each answer creates a new branch. Do you have a rash? Yes. Do you have a temperature? Yes. Do you have nausea? Yes. Are the spots purple? Yes. If you push a glass against them do they fade away? No … Go to the hospital immediately.
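Written as a program, that little tree of questions might look something like this sketch (a toy illustration only, not medical advice: as explained further down, real diagnosis is far more complicated than this).

def advice(rash, temperature, nausea, purple_spots, fade_under_glass):
    # A toy decision tree following the questions above (illustration only!)
    if rash:
        if temperature:
            if nausea:
                if purple_spots:
                    if not fade_under_glass:
                        return "Go to the hospital immediately"
    return "Check the NHS website for real guidance"

print(advice(rash=True, temperature=True, nausea=True,
             purple_spots=True, fade_under_glass=False))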

The tree of decisions naturally gives whys connected to the tips of the paths taken: you should go to the hospital because your collection of symptoms (a rash of purple spots, a temperature and nausea, and especially the fact that the spots do not fade under a glass) means it is likely you have Meningitis. Because it is life-threatening and can get worse very quickly, you need to get to a hospital urgently. An expert doctor can check reasoning like this and decide whether that explanation is actually good reasoning about whether someone has Meningitis or not, or, more to the point, should rush to the hospital.

Humans made computers much more capable of more complex tasks over time. With this, their thinking instructions became more complex too. Nowadays they might look like complicated networks instead of trees with branches: a network of neurons in a human brain, for example. These complex systems make computers great at answering more difficult questions successfully. But unlike with a tree of decisions, humans can no longer understand at a glance at its system of thinking how the computer reaches its final answer. It is no longer the case that following a simple path of branches through a decision tree gives a definite answer, never mind a why. Now there are loops and backtracks, splits and joins, and the decisions depend on weightings of answers, not just a definite Yes or No. For example, with Meningitis, according to the NHS website, there are many more symptoms than above and they can appear in any order or not at all. There may not even be a rash, or the rash may fade when pressure is applied. It is complicated and certainly not as simple as our decision tree suggests (the NHS says "Trust your instincts and do not wait for all the symptoms to appear or until a rash develops. You should get medical help immediately if you're concerned about yourself or your child."). Certainly, the situation is NOT simple enough to say from a decision tree, for example, "Do not worry, you do not have Meningitis because your spots are not purple and did fade in the glass test". An explanation like that could kill someone. The decision has to be made from a complex web of inter-related facts. AI tools effectively ask you to just trust their instincts!

Let us, for a moment, forget about branches and networks, and imagine that AI is a magician’s hat: something goes in (a white handkerchief) and something else at the tap of a wand magically pops out (a white rabbit).  With a loan application, for example, details such as your age, income, or occupation go in, and a decision comes out: approved or rejected.

Inside the magician’s hat

Nowadays researchers are trying to make the magician’s hat transparent so that you can have a sneak peek of what is going on in there (it shouldn’t seem like magic!). Was the rabbit in a secret compartment, did the magician move it from the pocket and put it in at the last minute or did it really appear out of nowhere (real magic)? Was the decision based on your age or income, or was it influenced by something that should be irrelevant like the font choice in your application?

Currently, explainable AI methods can answer different kinds of questions, though not always effectively (there is a toy sketch of the 'what if' kind after this list):

  • Why: Your loan was approved because you have a regular income record and have always paid back loans in the past.
  • Why not: Your loan application was rejected because you are 20 years old and are still a student.
  • What if: If you earned £1000 or more each month, your loan application would not have been rejected.

Researchers are inventing many different ways to give these explanations: for example, heat maps that highlight the most important pixels in an image, lists of pros and cons that show the factors for and against a decision, visual explanations such as diagrams or highlights, or natural-language explanations that sound more like everyday conversations.

What explanations are good for

The more interactions people have with AI, the more we see why AI explanations are important. 

  • Understanding why AI made a specific recommendation helps people TRUST the system more; for example, doctors (or patients) might want to know why AI flagged a tumour before acting on its advice. 
  • The explanations might expose discrimination and bias in AI recommendations, increasing FAIRNESS. Think about the loan rejection scenario again: what if the explanation shows that the reason for the AI's decision was your race? Is that fair?
  • The explanations can help researchers and engineers with DEBUGGING, helping them understand and fix problems with AI faster.
  • AI explanations are also becoming more and more required by LAW. The General Data Protection Regulation (GDPR) gives people a "right to explanation" for some automated decisions, especially in high-stakes areas such as healthcare and finance. 

The convincing barrister

One thing to keep in mind is that the presence of explanations does not automatically make an AI system perfect. Explanations themselves can be flawed. The biggest catch is when an explanation is convincing when it shouldn't be. Imagine a barrister with charming social skills who can spin a story and let a clearly guilty client walk free. AI explanations should not aim to be blindly convincing whether the AI is right or wrong. In the cases where the AI gets it all wrong (and from time to time it will), the explanations should make this clear rather than falsely reassuring the human.

The future 

Explainable AI isn’t an entirely new concept. Decades ago, early expert systems in medicine already included “why” buttons to justify their advice. But only in recent years explainable AI has become a major trend, because of AI systems becoming more powerful and with the increase of concerns about AI surpassing human decision-making but potenitally making some bad decisions.

Researchers are now exploring ways to make explanations more interactive and human-friendly, similar to the way we can ask ChatGPT questions like 'what influenced this decision the most?' or 'what would need to change for a different outcome?' They are trying to tailor the explanation's content, style and representation to the users' needs.

So next time AI makes a decision for you, ask yourself: could it tell me why? If not, maybe it still has some explaining to do.

– Erhan Pisirir and Evangelia Kyrimi, Queen Mary University of London


Perceptrons and the AI winter

Perceptron over a winter scene of an icy tree
A perceptron winter: winter image by Nicky ❤️🌿🐞🌿❤️ from Pixabay. Perceptron and all other images by CS4FN.

Back in the 1960s there was an AI winter…after lots of hype about how Artificial Intelligence tools would soon be changing the world, the hype fell short of the reality and the bubble burst, funding disappeared and progress stalled. One of the things that contributed was a simple theoretical result, the apparent shortcomings of a little device called a perceptron. It was the computational equivalent of an artificial brain cell and all the hype had been built on its shoulders. Now, variations of perceptrons are the foundation of neural networks and machine learning tools which are taking over the world…so what went wrong in the 1960s? A much misunderstood mathematical result about what a perceptron can and can’t do was part of the problem!

The idea of a perceptron dates back to the 1940s, but Frank Rosenblatt, a researcher at Cornell Aeronautical Laboratory, first built one in 1958 and so popularised the idea. A perceptron can be thought of as a simple gadget, or as an algorithm for classifying things. The basic idea is that it has lots of inputs, each 0 or 1, and one output, also 0 or 1 (so equivalent to taking true / false inputs and returning a true / false output). So, for example, a perceptron working as a classifier of whether something is a mammal or not might have inputs representing lots of features of an animal. These would be coded as 1 to mean that feature was true of the animal or 0 to mean false: INPUT: "A cow gives birth to live young" (true: 1), "A cow has feathers" (false: 0), "A cow has hair" (true: 1), "A cow lays eggs" (false: 0), etc. OUTPUT: (true: 1) meaning a cow has been classified as a mammal.

A perceptron makes decisions by applying weightings to all the inputs that increase the importance of some and lessen the importance of others. It then adds the results together, also adding in a fixed value, the bias. If the sum it calculates is greater than or equal to 0 then it outputs 1, otherwise it outputs 0. Each perceptron has different values for the bias and the weightings, depending on what it does. A simple perceptron is just computing the following bit of code for inputs in1, in2, in3, etc. (where we use a full stop to mean multiply):

IF bias + w1.in1 + w2.in2 + w3.in3 ... >= 0 
THEN OUTPUT 1 
ELSE OUTPUT 0

Because its output is one of just two values (1 or 0), splitting things into two classes, this version is called a binary classifier. You can set a perceptron's weights, essentially programming it to do a particular job, or you can let it learn the weightings (by applying learning algorithms to them). In the latter case it learns the right answers for itself. Here, we are interested in the fundamental limits of what perceptrons could possibly learn to do, so we do not need to focus on the learning side, just on what a perceptron's limits are. If we can't program it to do something then it can't learn to do it either!
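Written as a quick sketch in Python rather than pseudocode (just an illustration of the calculation described above, not any particular research system), a single perceptron is only a few lines:

def perceptron(bias, weights, inputs):
    # Add the bias to the weighted sum of the inputs
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    # Output 1 if the sum reaches 0, otherwise output 0
    return 1 if total >= 0 else 0

Setting the bias and the weights is what 'programs' it; a learning algorithm just adjusts those same numbers automatically.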

Machines made of lots of perceptrons were created and experiments were done with them to show what AIs could do. For example, Rosenblatt built one called Tobermory, with 12,000 weights, designed to do speech recognition. However, you can also explore the limits of what can be done computationally through theory: using maths and logic rather than just invention and experiment, and that kind of theoretical computer science is what others did for perceptrons. A key question in theoretical computer science is "What is computable?" Can your new invention compute anything a normal computer can? Alan Turing had previously proved an important result about the limits of what any computer could do, so what about an artificial intelligence made of perceptrons? Could it learn to do anything a computer could, or was it less powerful than that?

As a perceptron is something that takes 1s and 0s and returns a 1 or 0, it is a way of implementing logic: AND gates, OR gates, NOT gates and so on. If it can be used to implement all the basic logical operators then a machine made of perceptrons can do anything a computer can do, as computers are built up out of basic logical operators. So that raises a simple question: can you actually implement all the basic logical operators with appropriately set perceptrons? If not, then no perceptron machine will ever be as powerful as a computer made of logic gates! Two of the giants of the area, Marvin Minsky and Seymour Papert, investigated this. What they discovered contributed to the AI winter (but only because the result was misunderstood!)

Let us see what it involves. First, can we implement an AND gate with appropriate weightings and bias values with a perceptron? An AND gate has the following truth table, so that it only outputs 1 if both its inputs are 1:

Truth table for an AND gate

So to implement it with a perceptron, we need to come up with a positive or negative number for the bias, and other numbers for w1 and w2 that weight the two inputs. The numbers chosen need to lead to it giving output 1 only when the two inputs (in1 and in2) are both 1, and otherwise giving output 0.

bias + w1.in1 + w2.in2 >= 0 when in1 = 1 AND in2 = 1
bias + w1.in1 + w2.in2 < 0 otherwise

See if you can work out the answer before reading on.

A perceptron for an AND gate needs values set for bias, w1 and w2

It can be done by setting the value of the bias to -2 and making both weightings, w1 and w2, equal to 1. Then, because the two inputs, in1 and in2, can only be 1 or 0, it takes both inputs being 1 to overcome the bias of -2 and so raise the sum up to 0:

bias + w1.in1 + w2.in2 >= 0
-2 + 1.in1 + 1.in2 >= 0
-2 + 1.1 + 1.1 >=0
A perceptron implementing an AND gate
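Using the perceptron function sketched earlier, you can check those values against the whole truth table (again just an illustrative sketch):

# A bias of -2 with both weights set to 1 behaves as an AND gate
for in1 in (0, 1):
    for in2 in (0, 1):
        print(in1, in2, "->", perceptron(-2, [1, 1], [in1, in2]))
# Only the last line, for inputs 1 and 1, prints an output of 1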

So far so good. Now, see if you can work out weightings to make an OR gate and a NOT gate.

Truth table for an OR gate
Truth table for a NOT gate

It is possible to implement both an OR gate and a NOT gate as a perceptron (see the answers at the end).

However, Minsky and Papert proved that it was impossible to create another kind of logical operator, an XOR gate, with any values of bias and weightings in a perceptron. This is a logic gate that outputs 1 if its inputs are different, and 0 if its inputs are the same.

Truth table for an XOR gate

Can you prove it is impossible?

They had seemingly shown that a perceptron could not compute everything a computer could. Perceptrons were not as expressive, so not as powerful (and never could be as powerful), as a computer. There were things they could never learn to do, as there were things as simple as an XOR gate that they could not represent. This led some to believe the result meant AIs based on perceptrons were a dead end. It was better to just work with traditional computers and traditional computing (which by this point were much faster anyway). Along with the way that the promises of AI had been over-hyped with exaggerated expectations, and the fact that the applications that had emerged so far had been fairly insignificant, this seemingly damning theoretical blow led to funding for AI research drying up.

However, as current machine learning tools show, it was never that bad. The theoretical result had been misunderstood, and research into neural networks based on perceptrons eventually took off again in the 1990s.

Minsky and Papert’s result is about what a single perceptron can do, not about what multiple ones can do together. More specifically, if you have perceptrons in a single layer, each with inputs just feeding its own outputs, the theoretical limitations apply. However, if you make multiple layers of perceptrons, with the outputs of one layer of perceptrons feeding into the next, the negative result no longer applies. After all, we can make AND, OR and NOT gates from perceptrons, and by wiring them together so the outputs of one are the inputs of the next one, then we can build an XOR gate just as we can with normal logic gates!

An XOR gate from layers of perceptrons set as AND, OR and NOT operators

We can therefore build an XOR gate from perceptrons. We just need multi-layer perceptrons, an idea that was actually known about in the 1960s, including by Minsky and Papert. However, without funding, making further progress became difficult and the AI winter started, during which little research was done on any kind of Artificial Intelligence, and so little progress was made.

The theoretical result about the limits of what perceptrons could do was an important and profound one, but the limitations of the result needed to be understood too, and that means understanding the assumptions it is based on (it is not about multi-layer perceptrons). Now AI is back, though arguably being over-hyped again, so perhaps we should learn from the past! Theoretical work on the limits of what neural networks can and can't do is an active research area that is as vital as ever. Let's just make sure we understand what results mean before we jump to any conclusions. Right now, theoretical results about AI need more funding, not a new winter!

– Paul Curzon, Queen Mary University of London

This article is based on an introductory segment of a research seminar on the expressive power of graph neural networks by Przemek Walega, Queen Mary University of London, October 2025.


Answers

An OR gate perceptron can be made with bias = -1, w1 = w2 = 1

A NOT gate perceptron can be made with bias = 0, w1 = -1
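Putting the pieces together, and again using the perceptron function sketched earlier, a little two-layer network of these AND, OR and NOT perceptrons really does compute XOR (a toy sketch to check the idea):

def AND(a, b): return perceptron(-2, [1, 1], [a, b])
def OR(a, b):  return perceptron(-1, [1, 1], [a, b])
def NOT(a):    return perceptron(0, [-1], [a])

def XOR(a, b):
    # The second layer combines the outputs of the first layer of perceptrons
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))   # prints 1 only when the inputs differ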


Lego Computer Science: Algorithms and computational agents

Child following lego instructions
Image by Thomas G. from Pixabay

The idea of an algorithm is core to computer science. So what is an algorithm? If you have ever used the instructions from some Lego set for building a Lego building, car or animal, then you have followed algorithms for fun yourself and you have been a computational agent.

An algorithm is just a special kind of set of instructions to be followed to achieve something. That something could be anything (as long as someone is clever enough to come up with instructions to do it). The instructions could tell you how to multiply two numbers; how to compute the answer to some calculation; how best to rank search results so the most useful come first; or how to make a machine learn from data so it can tell pictures of dogs from cats or recognise faces. The instructions could also tell you how to build a TIE fighter from a box of lego pieces, how to build a duck out of 5 pieces of lego or, in fact, how to build anything you might want to build from lego.

The first special thing about the instructions of an algorithm is that they guarantee the desired result is achieved (if they are followed exactly) … every time. If you follow the steps taught in school for how to multiply numbers then you will get the answer right every time, whatever numbers you are asked to multiply. Similarly, if you follow the instructions that come with a lego box exactly, you will build exactly what is in the picture on the box. If you take it apart and build it again, it will come out the same the second time too.

For this to be possible and for instructions to be an algorithm, those instructions must be precise. There can be no doubt about what the next step is. In computer science, instructions are written in special languages like pseudocode or a programming language. Those languages are used because they are very precise (unlike English), with no doubt at all about what the instruction means to be done. Those nice people at Lego who write the booklets of instructions in each set put a lot of effort into making sure their instructions are precise (and easy to follow). Algorithms do not have to be written in words: Lego use diagrams rather than words to be precise about each step. Their drawings are very clear so there is no room for doubt about what needs to be done next.
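For example, here is one way the idea of multiplying by repeated addition might be written precisely enough for a computational agent to follow blindly (a sketch in Python, just one of many possible precise versions):

def multiply(a, b):
    # Multiply two whole numbers (b must be 0 or more) by repeated addition
    total = 0
    for _ in range(b):    # do the next step exactly b times
        total = total + a
    return total

print(multiply(6, 7))     # prints 42, every single time it is followed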

Computer scientists talk about "computational agents". A computational agent is something that can follow an algorithm precisely. It does so without needing to understand or know what the instructions do. It just follows them blindly. Computers are the most obvious things to act as computational agents. It is what they are designed to do. In fact, it is all they can do. They do not know what they are doing (they are just metal and silicon). They are machines that precisely follow instructions. But a human can act as a computational agent too, if they also just follow instructions. If you build a lego set following the instructions exactly, making no mistakes, then you are acting as a computational agent. If you miss a step, do steps in the wrong order, place a piece in the wrong place or (heaven forbid) do something creative and change the design as you go, then you are no longer being a computational agent. You are no longer following an algorithm. If you do act as a computational agent you will build whatever is on the box exactly, however big it is and even if you have no idea what you are building.

Acting as a computational agent can be a way to destress, a form of mindfulness where you switch off your mind. It can also be a good way to build up useful skills that matter as a programmer, like attention to detail, or, if you are following a program, helping you understand the semantics of programming languages and so learn to program better. It is also a good debugging technique, and part of code audits, where you step through a program to check it does do as intended (or find out where and why it doesn't).

Algorithms are the core of everything computers do. They can do useful work or they can be just fun to follow. I know which kind I like playing with best.

– Paul Curzon, Queen Mary University of London



The Lego Computer Science series was originally funded by UKRI, through grant EP/K040251/2 held by Professor Ursula Martin, and formed part of a broader project on the development and impact of computing.

Mary and Eliza Edwards: the mother and daughter human computers

The globe with lines of longitude marked
Lines of Longitude. Image from wikimedia, Public Domain.

Mary Edwards was a computer, a human computer. Even more surprisingly for the time (the 1700s), she was a female computer (and so was her daughter Eliza).

In the early 1700s navigation at sea was a big problem. In particular, if you were lost in the middle of the Atlantic Ocean, there was no good way to determine your longitude: your position east to west. There were of course no satnavs at the time, not least because there would be no satellites for nearly another 250 years! 

It could be done by taking sightings of the position of the sun, moon or planets at different times of the day, but only if you knew the accurate time. Unfortunately, there was no good way to know the precise time when at sea. Then, in the mid 1700s, an accurate clock that could survive a rough sea voyage and still keep good time was invented by clockmaker John Harrison. Now the problem moved to helping mariners know where the moon and planets were supposed to be at any given time so they could use the method.
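The reason an accurate clock matters is simple arithmetic: the Earth turns a full 360 degrees every 24 hours, so every hour of difference between your local time (found from the sun) and the time back at Greenwich corresponds to 15 degrees of longitude. A quick sketch of the calculation (the times here are made up):

def longitude_from_times(local_hour, greenwich_hour):
    # The Earth turns 15 degrees of longitude for every hour of time difference.
    # A positive answer means you are that many degrees west of Greenwich
    # (your local time is behind Greenwich time).
    return (greenwich_hour - local_hour) * 15

# Local noon observed when the Greenwich-set chronometer reads 15:00 (3 pm)
print(longitude_from_times(12, 15), "degrees west")   # prints 45 degrees west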

As a result, the Board of Longitude (set up by the UK government to solve the problem), together with the Royal Greenwich Observatory, started to publish the Nautical Almanac from 1767. It consisted of lots of such astronomical data for use by navigators at sea. For example, it contained tables of the position of the moon: specifically, its angle in the sky relative to the sun and planets, known as lunar distances. But how were these angles known years in advance to create the annual almanacs? Well, basic Newtonian physics allows the positions of the planets and the moon to be calculated from the way everything in the solar system moves together, starting from their positions at a known time. From that, their position in the sky at any time can be calculated. Those answers would be in the Nautical Almanac. Each year a new table was needed, so the answers also needed to be constantly recomputed.

But who did the complex calculations? No calculators, computers or other machines that could do it automatically would exist for a long time yet. It had to be done by human mathematicians. Computers then were just people, following algorithms precisely and accurately to get jobs like this done. The Astronomer Royal, Nevil Maskelyne, recruited 35 male mathematicians to do the job. One was the Revd John Edwards (well-educated clergy were of course perfectly capable of doing maths in their spare time!). He was paid for calculations done at home from 1773 until he died in 1784.

However, when he died, Maskelyne received a letter from his wife Mary, revealing officially that in fact she had been doing a lot of the calculations herself, and, with no family income any more, asking if she could continue to do the work to support herself and her daughters. The work had been of high enough quality that John Edwards had been kept on year after year, so Mary was clearly an asset to the project (and given that Maskelyne had visited the family several times, so knew them, he was possibly even unofficially aware of who was actually doing the work towards the end). He was open-minded enough to give her a full-time job. She worked as a human computer until her death 30 years later. Women doing such work was not at all normal at the time, and this became apparent when Maskelyne himself died and the work started to dry up. The quality of the work she did, though, eventually persuaded the new Astronomer Royal to continue to give her work.

Just as she had helped her husband, her daughter Eliza helped her do the calculations, becoming proficient enough herself that when Mary died, Eliza took over the job, continuing the family business for another 17 years. Unfortunately, however, in 1832 the work was moved to a new body called 'His Majesty's Nautical Almanac Office'. At that point, despite Mary and Eliza having proved they were at least as good as the men for half a century or more, government-imposed civil service rules came into force that meant women could no longer be employed to do the work.

Mary and Eliza, however, had done lots of good, helping mariners safely navigate the oceans for very many years through their work as computers.


The Digital Seabed: Data in Augmented Reality

A globe (North Atlantic visible) showing ocean depth information, with the path of HMS Challenger shown in red.
A globe (North Atlantic visible) showing ocean depth information, with the path of HMS Challenger shown in red. Image by Daniel Gill.

For many of us, the deep sea is a bit of a mystery. But an exciting interactive digital tool at the National Museum of the Royal Navy is bringing the seabed to life!

It turns out that the sea floor is just as interesting as the land where we spend most of our time (unless you're a crab, of course, in which case you spend most of your time on the sea floor). I recently learnt about the sea floor at the National Museum of the Royal Navy in Portsmouth, in their "Worlds Beneath the Waves" exhibition, which documents 150 years of deep-sea exploration.

One ship which revolutionised deep ocean study was HMS Challenger. It set sail in 1872 and went on to make a 68,890 nautical-mile journey all over the Earth's oceans. One of its scientific goals was to measure the depth of the seabed as it circled the Earth. To make these measurements, a long rope with a weight at one end was dropped into the water and sank to the bottom. The length of rope let out before the weight hit the floor was measured. It's a simple process, but it worked! 

Thankfully, modern technology has caught up with bathymetry (the measurement and study of the sea floor's depth and shape). Now, sea floor depths are measured using sonar (sound) and lidar (light) from ships, or using special sensors on satellites. All of these methods send signals down to the seabed and time how long it takes for a response to come back. Knowing the speed of sound or light through air and water, you can calculate the distance to whatever reflected the signal.
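The arithmetic behind this is simple enough to sketch: sound travels through seawater at roughly 1500 metres per second, and the echo has to go down and come back up, so the depth is half the round-trip distance (a rough illustration, ignoring how the speed varies with temperature, depth and saltiness):

def depth_from_echo(echo_time_seconds, speed_of_sound=1500):
    # The signal travels to the seabed and back, so halve the total distance
    return (speed_of_sound * echo_time_seconds) / 2

print(depth_from_echo(4.0), "metres")   # a 4 second echo: about 3000 metres deep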

You may be thinking: why do we need to know how deep the ocean is? Well, apart from the human desire to explore and map our planet, it's also useful for navigation and safety: in smaller waterways and ports, it's very helpful to know whether there's enough water below the boat to stay afloat!

It’s also useful to look at fault lines, the deep valleys (such as Challenger Deep, the deepest known point in the ocean, named after HMS Challenger), and underwater mountain ranges which separate continental plates. Studying these can help us to predict earthquakes and understand continental drift (read more about continental drift).

The sand table with colours projected onto it showing height.
The sand table with colours projected onto it showing height. Image by Daniel Gill.

We now have a much better understanding of the seabed, including detailed maps of sea floor topography around the world. So, we know what the ocean floor looks like at the moment, but how can we use this to understand the future of our waterways? This is where computers come in.

Near the end of the exhibition sits a table covered in sand, which has the current topography of that sand projected onto it. Where the sand is piled up higher it is coloured red and orange, and where it is lower, green and blue. Looking across the table you can see how sand at the same level, even far apart, is still within the same band of colour.

The projected image automatically adjusts (below) to the removal of the hill in red (above).
The projected image automatically adjusts (below) to the removal of the hill in red (above). Image by Daniel Gill.

But this isn’t even the coolest part! When you pick up and move sand around, the colours automatically adjust to the new sand topography, allowing you to shape the seabed at will. The sand itself, however, will flow and move depending on gravity, so an unrealistically tall tower will soon fall down and form a more rotund mound. 

 Want to know what will happen if a meteor impacts? Grab a handful of sand and drop it onto the table (without making a mess) and see how the topographical map changes with time!

The technology above the table.
The technology above the table. Image by Daniel Gill.

So how does this work? Looking above the table, you can see an Xbox Kinect sensor and a projector. The Kinect works much like the lidar systems installed on ships: it sends beams of infrared light down onto the sand, which bounce back to the sensor, and the time this takes is measured. This creates a depth map, just like the ones ships make, but on a much smaller scale. The map is turned into colours and projected back onto the sand. 
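The colouring step is then just a lookup from measured height to colour band. Something like this sketch captures the idea (the thresholds and colours here are made up; the real exhibit's software is more sophisticated):

def colour_for_height(height_cm):
    # Map a measured sand height to a colour band (made-up thresholds)
    if height_cm > 12:
        return "red"      # the tallest peaks
    elif height_cm > 8:
        return "orange"
    elif height_cm > 4:
        return "green"
    else:
        return "blue"     # the lowest valleys

print(colour_for_height(10))   # prints orange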

Virtual water fills the valleys.
Virtual water fills the valleys. Image by Daniel Gill.

This is not the only feature of this table, however: it can also run physics simulations! By placing your hand over the sand, you can add virtual water, which flows realistically into the lower areas of sand, and even responds to the movement of sand.

The mixing of physical and digital representations of data like this is an example of augmented, or mixed, reality. It can help visualise things that you might otherwise find difficult to imagine, perhaps by simulating the effects of building a new dam, for example. Models like this can help experts and students, and, indeed, museum visitors, to see a problem in a different and more interactive way.

– Daniel Gill, Queen Mary University of London


An experiment in buoyancy

Here is a little science experiment anyone can do to help understand the physics of marine animals and their buoyancy. It helps give insight into how animals such as ancient ammonites and now cuttlefish can move up and down at will just by changing the density of internal fluids.* (See Ammonite propulsion of underwater robots). It also shows how marine robots could do the same with a programmed ammonite brain.

First take a beaker of water and a biro pen top. Put a small piece of blu tack over the top of the pen top (to cover the holes that are there to hopefully stop you suffocating if you were to swallow one: never chew pen tops!). Next, put a larger blob of blu tack round the bottom of the pen top. You will have to use trial and error to get the right amount. Your aim is to make the pen top float vertically upright in the water, with the smaller piece of blu tack just above the surface. Try it by carefully placing the pen top vertically into the water. If it doesn't float like that, dry the blu tack then add or remove a bit until it does float correctly.

It now has neutral buoyancy. The force of gravity pulling it down is the same as the buoyancy force (or upthrust) pushing it upwards, caused by the air trapped in the top of the lid… so it stays put, neither sinking nor rising.

Now fill a drink bottle with water all the way to the top. Then add a little more water so the water curves up above the top of the bottle (held in place by surface tension). Carefully, drop in the weighted pen top and screw on the top of the bottle tightly.

The pen top should now just float in the water at some depth. It is acting just like the swim bladder of a fish, with the air in the pen top preventing the weight of the blu tack pulling it down to the bottom.

Now, squeeze the sides of the bottle. As you squeeze, the pen top should suddenly sink to the bottom! Let go and it rises back up. What is happening? The force of gravity is still pulling down the same as it was (the mass hasn't changed), so if it is sinking the buoyancy force pushing up must be less than it was.

We are increasing the pressure inside the bottle, so the water now compresses the air trapped in the pen top, reducing its volume and increasing the density of your little diving bell. The denser it is, the smaller the buoyancy force pushing up, so it sinks.
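You can put rough numbers on this with Archimedes' principle: the buoyancy force equals the weight of the water displaced, so it depends directly on the volume of the trapped air. A sketch of the sums (all the numbers are made up, and the volume of the plastic and blu tack is ignored to keep things simple):

WATER_DENSITY = 1000    # kilograms per cubic metre, roughly
GRAVITY = 9.8           # metres per second squared

def buoyancy_force(air_volume_m3):
    # Upward force in newtons on something displacing this volume of water
    return WATER_DENSITY * air_volume_m3 * GRAVITY

weight = 0.02                        # newtons pulling the pen top down (made up)
print(buoyancy_force(0.0000021))     # normal pressure: about 0.021 N, more than the weight, so it floats
print(buoyancy_force(0.0000018))     # squeezed: about 0.018 N, less than the weight, so it sinks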

That is essentially the trick that ammonites evolved many, many millions of years ago, squeezing the gas inside their shells to suddenly sink and get away quickly when they sensed danger. It is what cuttlefish still do today, squeezing the gas in their cuttlebone so that the cuttlefish becomes denser.

So, if you were basing a marine robot on an ammonite (with movement also possible by undulating its arms, and by jet propulsion, perhaps) then your programming task for controlling its movement would involve it being able to internally squeeze an air space by just the right amount at the right time!

In fact, several groups of researchers have created marine robots based on ammonites. A group at the University of Utah, for example, has been doing so to better understand the real, but extinct, ammonites themselves, including how they actually moved. The team have been testing different shell shapes to see if some work better than others, and so just how efficient ammonite shell shapes really were. By programming an ammonite robot brain you could similarly, for example, better understand how they controlled their movement and how effective it really was in practice (not just in theory).

Science can now be done in a completely different way to the traditional approach of just using discovery, observation and experiment. You can now do computer and robotic modelling too, running experiments on your creations. If you want to study marine biology, or even fancy being a palaeontologist with a difference, understanding long-extinct life, you can now do it through robotics and computer science, not just by watching animals or digging up fossils (though understanding some physics is still important to get you started).

– Paul Curzon, Queen Mary University of London


*Thanks to the Dorset Wildlife Trust at the Chesil Beach Visitor Centre, Portland, where I personally learnt about ammonite and cuttlefish propulsion in a really fun science talk on the physics of marine biology, including a demonstration of this experiment.


Ammonite propulsion of underwater robots

Ammonite statue showing creature inside its shell
Image by M W from Pixabay

Intending to make a marine robot that will operate under the ocean? Time to start learning not just engineering and computing, but the physics of marine biology! It turns out you can learn a lot from ammonites: marine creatures that ruled the oceans for millions of years and died out with the dinosaurs. Perhaps your robot needs a shell, not for protection, but to help it move efficiently.

If you set yourself the task of building an underwater robot, perhaps to work with divers in exploring wrecks or studying marine life, you immediately have to solve a problem that traditional land-based robotics researchers do not face. Most of the really cool videos of the latest robots tend to show how great they are at balancing on two legs, doing some martial art, perhaps, or even gymnastics. Or maybe they are hyping how good they are at running through the forest like a wolf, now on four legs. Once you go underwater, all that exciting stuff with legs becomes a bit pointless. Now it's all about floating, not balancing. So what do you do?

The obvious thing, perhaps, is to just look at boats, submarines and torpedoes and design a propulsion system with propellers, maybe using an AI to design the most efficient propeller shape, then write some fancy software to control it as efficiently as possible. Alternatively, you could look at what the fish do and copy them!

What do fish do? They don't have propellers! The most obvious thing is they have tails and fins and wiggle a lot. Perhaps your marine robot could be streamlined like a fish and, well, swim its way through the sea. That involves the fish using its muscles to make waves ripple along its body, pushing against the water. In exerting a force on the water, by Newton's laws, the water pushes back and the fish moves forward.

Of course, your robot is likely to be heavy so will sink. That raises the other problem. Unlike on land, in water you need to be able to move up (and down) too. Being heavy, moving down is easy. But then that is the same for fish. All that fishy muscle is heavier than water so sinks too. Unless they have evolved a way to solve the problem, fish sink to the bottom and have to actively swim upwards if they want to be anywhere else. Some live on the bottom so that is exactly what they want. Maybe your robot is to crawl about on the sea floor too, so that may be right for it too.

Many, many other fish don't want to be at the bottom. They float without needing to expend any energy to do so. How? They evolved a swim bladder that uses the physics of buoyancy to make them naturally float, neither rising nor sinking. They have what is called neutral buoyancy. Perhaps that would be good for your robot too, not least to preserve its batteries for more important things like moving forwards. How do swim bladders do it? They are basically bags of air that give the fish buoyancy, a bit like you wearing a life jacket. Get the amount of air right and the buoyancy, which provides an upward force, can exactly counteract the force of gravity that is pulling your robot down to the depths. The result is that the robot just floats under the water where it is. It now has to actively swim if it wants to move down towards the sea floor. So, if you want your robot to do more than crawl around on the bottom, designing in a swim bladder is a good idea.

Perhaps you can save more energy and simplify things even more, though. Perhaps your robot could learn from ammonites. These fearsome predators are long extinct, dying out with the dinosaurs and now found only as fossils, but they evolved a really neat way to move up and down in the water. Ammonites were once believed to be curled-up snakes turned to stone, but they were actually molluscs (like snails) and the distinctive spiral structure preserved in fossils was their shell. They didn't live deep in the spiral though, just in the last chamber at the mouth of the spiral, with their multi-armed, octopus-like body sticking out of the end to catch prey. So what were the rest of the chambers for? Filled with liquid or gas, they would act exactly like a swim bladder, providing buoyancy control. It is likely that, as with the similar modern-day nautilus, the ammonite could squeeze the gas or liquid in its spiral shell into a smaller volume, changing its density. Doing that changes its buoyancy: with increased density the buoyancy is less, so gravity exerts a greater force than the lift the shell's contents provide and it suddenly sinks. Decrease the density by letting the gas or liquid expand and it rises again.

You can see how it works with this simple experiment.

You don’t needs a shell of course, other creatures have evolved more sophisticated versions. A cuttlebone does the same job. It is an internal organ of the cuttlefish (which are not fish but cephalopods like octopus and squid, so related to ammonites). They are the white elongated disks that you find washed up on the beach (especially along the south and west coasts in the UK). They are really hard on one side but slightly softer on the other. They act like an adjustable swim bladder. The hard upper side prevents gas escaping (whilst also adding a layer of armour). The soft lower side is full of microscopic chambers that the cuttlefish can push gas into or pull gas out of at will with the same effect as that of the ammonites shell.

This whole mechanism is essentially how the buoyancy tanks of a submarine work. First used in the original practical submarine, the Nautilus of 1800, they are flooded and emptied to make a submarine sink and rise.

Build the idea of a cuttlebone or ammonite shell into your robot and it can rise and sink at will with minimal energy wasted. Cuttlefish, though, also have another method of propulsion (aside from undulating their body) that allows them to escape from danger in a hurry: jet propulsion. By ejecting water stored in their mantle through their syphon (a tube), they can suddenly give themselves lots of acceleration, just like a jet engine gives a plane. That would normally be a very inefficient form of propulsion, using lots of energy. However, experiments show that when combined with the buoyancy control provided by the cuttlebone, this jet propulsion is actually much more efficient than it would otherwise be. So the cuttlebone saves energy again. And a rare ammonite fossil with the preserved muscles of the actual animal suggests that ammonites had similar jet propulsion too. Given some ammonites grew as large as several metres across, that would have been an amazing sight to see!

To be a great robotics engineer, rather than inventing everything from scratch, you could do well to learn from biological physics. Some of the best solutions are already out there and may even be older than the dinosaurs. You might then find your programming task is to program the equivalent of the brain of an ammonite.
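What might the core of that ammonite brain look like? Here is a toy sketch of a control loop (the names, numbers and units are all invented for illustration) that squeezes or expands an internal gas space to hold a target depth, much as the real animal squeezed its shell chambers:

def buoyancy_control_step(current_depth, target_depth, gas_volume,
                          step=0.05, min_volume=0.5, max_volume=2.0):
    # One step of a very simple ammonite-style buoyancy controller.
    # More gas means more lift, so expand the gas space to rise
    # and squeeze it to sink. Returns the new gas volume.
    if current_depth > target_depth:        # too deep: expand the gas to rise
        gas_volume = min(max_volume, gas_volume + step)
    elif current_depth < target_depth:      # too shallow: squeeze the gas to sink
        gas_volume = max(min_volume, gas_volume - step)
    return gas_volume

# The robot is at 12 metres but wants to hover at 10 metres, so it expands its gas space
print(buoyancy_control_step(current_depth=12, target_depth=10, gas_volume=1.0))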

– Paul Curzon, Queen Mary University of London


Thanks to the Dorset Wildlife Trust at the Chesil Beach Visitor Centre, Portland, where I personally learnt about ammonite and cuttlefish propulsion in a really fun science talk on the physics of marine biology.


Film Futures: The Lord of the Rings

What if there was Computer Science in Middle Earth?…Computer Scientists and digital artists are behind the fabulous special effects and computer generated imagery we see in today’s movies, but for a bit of fun, in this series, we look at how movie plots could change if they involved Computer Scientists. Here we look at an alternative version of the film series (and of course book trilogy): The Lord of the Rings.

***SPOILER ALERT***

The Lord of the Rings is an Oscar-winning film series by Peter Jackson. It follows the story of Frodo as he tries to destroy the darkly magical, controlling One Ring of Power by throwing it into the fires of Mount Doom in Mordor. This involves a three-film epic journey across Middle Earth where he and "the company of the Ring" are chased by the Nazgûl, the Ringwraiths of the evil Sauron. Their aim is to get to Mordor without being killed, without the Ring being taken from them and returned to Sauron, who created it, and without it being stolen by Gollum, who once owned it.

The Lord of the Rings: with computer science

In our computer science film future version, Frodo discovers there is a better way than setting out on a long and dangerous quest. Aragorn has been tinkering with drones in his spare time, and so builds a drone to carry the Ring to Mount Doom, controlled remotely. Frodo pilots it from the safety of Rivendell. However, on its first test flight, its radio signal is jammed by the magic of Saruman from his tower. The drone crashes and is lost. It looks like the company must set off on a quest after all.

However, the wise Elf, the Lady Galadriel suggests that they control the drone by impossible-to-jam fibre optic cable. The Elves are experts at creating such cables using them in their highly sophisticated communication networks that span Middle Earth (unknown to the other peoples of Middle Earth), sending messages encoded in light down the cables.

They create a huge spool containing the hundreds of miles of cable needed. Having also learnt from their first attempt, they build a new drone that uses stealth technology devised by Gandalf to make it invisible to the magic of Wizards, bouncing magical signals off it in a way that means even the ever-watchful Eye of Sauron does not detect it until it is too late. The new drone sets off, trailing a fine strand of silk-like cable behind it, with the One Ring within. At its destination, the drone is piloted into the lava of Mount Doom, destroying the Ring forever. Sauron's power collapses, and peace returns to Middle Earth. Frodo does not suffer from post-traumatic stress disorder, and lives happily ever after, though what becomes of Gollum is unknown (he was last seen on Mount Doom through the drone's camera, chasing after it as it was piloted into the crater).

In real life…

Drones are being touted for lots of roles, from delivering packages to people's doors to helping in disaster areas. They have most quickly found their place as a weapon, however. At regular intervals a new technology changes war forever, whether it is the longbow, the musket, the cannon, the tank, the plane… The most recent technology to change warfare on the battlefield has been the introduction of drones. It is essentially the use of robots in warfare, just remote-controlled flying ones rather than autonomous humanoid ones, Terminator style (but watch this space: the military are not ones to hold back on a 'good' idea). The vast majority of deaths in the Russia-Ukraine war, on both sides, have been caused by drone strikes. Now countries around the world are scrambling to update their battle readiness, adding drones into their defence plans.

The earliest drones to be used on the battlefield were remote controlled by radio. The trouble with anything controlled that way is that it is very easy to jam: either by sending your own signals at higher power to take over control, or, more easily, by just swamping the airwaves with signal so that the one controlling the drone does not get through. The need to avoid weapons being jammed is not a new problem. In World War II, some early torpedoes were radio controlled to their target, but that became ineffectual as jamming technology was introduced. Movie star Hedy Lamarr is famous for patenting a mechanism whereby a torpedo could be controlled by radio signals that jumped from frequency to frequency, making it harder to jam (without knowing the exact sequence and timing of the frequency jumps). In London, torpedo stations protecting the Thames from enemy shipping had torpedoes controlled by wire so they could be guided all the way to the target. Unfortunately it was not a great success: the only time one was used in a test it blew up a harmless fishing boat passing by (luckily no-one died).
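The heart of Lamarr's idea is that sender and receiver agree in advance on an apparently random sequence of frequencies, so only they know which channel the signal will be on from moment to moment. A toy sketch of that idea (the channel numbers and the shared secret are made up, and the real patent used a mechanical, piano-roll style mechanism rather than software):

import random

def hop_sequence(shared_secret, hops=8):
    # Both sides seed a pseudo-random generator with the same secret,
    # so they generate exactly the same list of channels to hop between.
    rng = random.Random(shared_secret)
    channels = list(range(30, 60))          # made-up numbered radio channels
    return [rng.choice(channels) for _ in range(hops)]

print("sender:  ", hop_sequence("torpedo-42"))
print("receiver:", hop_sequence("torpedo-42"))   # identical, so they stay in sync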

And that is the solution adopted by both sides in the Ukraine war to overcome jamming. Drones flying across the front lines are controlled by miles of fibre-optic cable that is run out from spools (tens of miles rather than the hundreds we suggested above). The light signals controlling the drone pass down the glass fibre, so cannot be jammed or interfered with. As a result, the front lines in Ukraine are now criss-crossed with gossamer-thin fibres, left behind once the drones hit their target or are taken out by the opposing side. It looks as though the war is being fought by robotic spiders (which one day may be the case, but not yet). With this advent of fibre-optic drone control, the war has changed again and new defences against this new technology are needed. By the time they are effective, the technology will likely have morphed into something new once more.

– Paul Curzon, Queen Mary University of London



This blog is funded by EPSRC on research agreement EP/W033615/1.
