The Machine Stops: a review

Old rusting cogs and a clock
Image by Amy from Pixabay

How reliant on machines should we let ourselves become? E.M. Forster is best known for the novels behind classic period dramas, but he also wrote a brilliant science fiction short story about exactly that question: ‘The Machine Stops’. It is a story I first read in an English Literature lesson at school, a story that convinced me that English Literature could be really, really interesting!

Written in 1909, decades before the first computers were built, never mind the internet, video calls, digital music and streaming, the story describes a future with all of that: a future where humans live alone in identical underground rooms across the Earth, never leaving because there is no reason to leave, never meeting others because they can meet each other through the Machine. Everything is at hand at the touch of a button. Everything is provided by the Machine, whether food, water, light, entertainment, education, communication or even air.

The story covers the theme of whether we should let ourselves become disconnected from the physical world. Is part of what makes us human our embodiment in that world? Forster refers to this disconnection as “the sin against the body”, a theme returned to in the film WALL-E. Disconnected from the world, humans decline not only in body but also in spirit.

As the title suggests, the story also explores the problems of becoming over-reliant on technology and what then happens if the technology is taken away. It is about more than this, though: it is also about the issue of repeatedly accepting “good enough” as a replacement for the fidelity of physical and natural reality. What seems wonderfully novel and cool, convenient or just cheaper may not actually be as good as the original. Face-to-face human interaction is far richer than what we get through a video call, for example, and yet in the 21st century in-person meetings have rapidly given way to the latter.

Once we do become reliant on machines to service our every whim, what happens if those ever more connected machines break? Though written over a century ago, the question is very topical now, of course. With our ever increasing reliance on interconnected digital technology for energy, communication, transport, banking and more, we have started to see outages happen. These have arisen from bugs and cyber attacks, from ‘human error’ and from technology that turns out to be just not quite dependable enough, leading to country-wide and worldwide outages of the things that constitute modern living.

How we use technology is up to us all, of course, and like magpies we love shiny new toys, but losing all the skills and understanding just because the machine can now do the work for us may not be very wise in the long term. More generally, we need to make sure the technology we do make ourselves reliant on is really, really dependable: far more dependable than our current standards are in actual practice. That needs money and time, not rushed introductions, but also more Computer Science research on how to do dependability better in practice. Above all we need to make sure we continue to understand the systems we build well enough to maintain them in the long term.

Paul Curzon, Queen Mary University of London


Why do we still have lighthouses?

Image by Tom from Pixabay

In an age of satellite navigation when all ships have high-tech navigation systems that can tell them exactly where they are to the metre, on accurate charts that show exactly where dangers lurk, why do we still bother to keep any working lighthouses?

Lighthouses were built around the Mediterranean from the earliest times, originally to help guide ships into ports rather than to protect them from dangerous rocks or currents. The most famous ancient lighthouse was the Pharos, the great lighthouse at the entrance to the port of Alexandria. Built in Ptolemaic Egypt, it was one of the Seven Wonders of the Ancient World.

In the UK, Trinity House, the charitable organisation that still runs the lighthouses of England and Wales, was set up in Tudor times by Henry VIII, originally to provide warnings for shipping in the Thames. The first offshore lighthouse built to protect shipping from dangerous rocks was built on the Eddystone rocks at the end of the 17th century. It survived only five years before it was washed away in a storm itself, along, sadly, with Henry Winstanley, who built it. In the centuries since then, however, Trinity House has repeatedly improved the design of its lighthouses, turning them into a highly reliable warning system that has saved countless lives.

There are still several hundred lighthouses around the UK, with over 60 still maintained by Trinity House. Each has a unique code spelled out in its flashing light that tells ships exactly which lighthouse it is, and so what danger awaits them. But why are they still needed at all? They cost a lot of money to maintain, and the UK government doesn’t fund them: the money has to be raised by Trinity House itself. So why not just power them down and turn them into museums? Instead, their old lamps have been modernised and upgraded with powerful LED lights, automated and networked. They switch on automatically based on light sensors, sounding foghorns automatically too. If the LED light fails, a second automatically switches on in its place, and the control centre, now hundreds of miles away, is alerted. There are no plans to turn them all off and just leave shipping to look after itself. The reason is a lesson that we could learn from in many other areas where computer technology is replacing “old-fashioned” ways of doing things.
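In computing terms that is redundancy with automatic failover, plus an alert when the backup takes over. Here is a minimal sketch of the logic in Python, purely illustrative: the lamp objects and alert function are invented, and this is no claim about how Trinity House’s real control software works.

```python
# A minimal, purely illustrative sketch of automatic failover in Python.
# The Lamp class and alert function are invented for illustration.

class Lamp:
    def __init__(self, name, working=True):
        self.name = name
        self.working = working

    def switch_on(self):
        # Returns True if the lamp actually lights.
        return self.working

def alert_control_centre(message):
    print("ALERT to control centre:", message)

def light_up(main, spare, dark):
    """Switch a lamp on when the light sensor says it is dark."""
    if not dark:
        return
    if main.switch_on():
        print(main.name, "is lit")
    elif spare.switch_on():  # redundancy: fall back to the spare lamp
        print(spare.name, "is lit")
        alert_control_centre("main lamp failed, running on spare")
    else:
        alert_control_centre("both lamps failed, lighthouse is dark!")

# The main lamp has failed, so the spare takes over and an alert is raised.
light_up(Lamp("main LED", working=False), Lamp("spare LED"), dark=True)
```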

Yes, satellite navigation is a wonderful system that is a massive step forward for navigation. The problem, however, is that it is not completely reliable, for several reasons. GPS, for example, is a US system, developed originally for the military, and ultimately the US retains control. They can switch the public version off at any time, and will if they think it is in their interests to do so. Elon Musk switched off his Starlink system, which he aims to make a successor to GPS, to prevent Ukraine from using it in the war with Russia. It was done in the middle of a Ukrainian military operation, causing that operation to fail. In July 2025 the Starlink system also demonstrated that it is not totally reliable anyway, as it went down for several hours, showing that satellite systems can fail for periods, even if not switched off intentionally, due to software bugs or other system issues. A third problem is that navigation signals can be intentionally jammed, whether as an act of war or terrorism, or just high-tech vandalism. Finally, a more everyday problem is that people are over-trusting of computer systems, which can give a false sense of security. Satellite navigation gives unprecedented accuracy and so is trusted to work to finer tolerances than people would rely on without it. As a result, it has been noticed that ships now often travel closer to dangerous rocks than they used to. However, the sea is capricious and treacherous. Sail too close to the rocks in a storm and you could suddenly find yourself tossed upon them, the back of your ship broken, just as has happened repeatedly through history.

Physical lighthouses may be old technology but they work as a very visible and dependable warning system, day or night. They can be used in parallel with satellite navigation: the red and white towers and powerful lights very clearly say “there is danger here … be extra careful!” That extra, very physical warning of a physical danger is worth having as a reminder not to take risks. The lighthouses are also simply there, adding redundancy should the modern navigation systems go down just when a ship needs them, with nothing extra needing to be done and so no delay.

It is not out of some sense of nostalgia that the lighthouses still work. Updated with modern technology of their own, they are still saving lives.

– Paul Curzon, Queen Mary University of London


Margaret Hamilton: Apollo Emergency! Take a deep breath, hold your nerve and count to 5

Buzz Aldrin standing on the moon
Buzz Aldrin standing on the moon. Image by Neil Armstrong, NASA via Wikimedia Commons – Public Domain

You have no doubt heard of Neil Armstrong, the first human on the moon. But have you heard of Margaret Hamilton? She was the lead engineer responsible for the Apollo mission software that got him there, and ultimately for ensuring the lunar module didn’t crash-land due to a last-minute emergency.

Being a great software engineer means you have to think of everything. You are writing software that will run in the future, encountering all the messiness of the real world (or real solar system, in the case of a moon landing). If you haven’t written the code to be able to deal with everything, then one day the thing you didn’t think about will bite back. That is why so much software is buggy or causes problems in real use. Margaret Hamilton was an expert not just in programming and software engineering generally, but also in building practically dependable systems with humans in the loop. A key interaction design principle is that of error detection and recovery: does your software help the human operators realise when a mistake has been made and quickly deal with it? This, it turned out, mattered a lot in safely landing Neil Armstrong and Buzz Aldrin on the moon.

As the lunar module was in its final descent, dropping from orbit to the moon with only minutes to landing, multiple alarms were triggered. An emergency was in progress at the worst possible time. What it boiled down to was that the system could only handle seven programs running at once, but Buzz Aldrin had just set an eighth running. Suddenly, the guidance system started replacing the normal screens with priority alarm displays, in effect shouting “EMERGENCY! EMERGENCY!” These were coded into the system but were supposed never to be shown, as the situations triggering them were supposed never to happen. The astronauts suddenly had to deal with situations that they should never have had to deal with, and they were minutes away from crashing into the surface of the moon.

Margaret Hamilton was in charge of the team writing the Apollo in-flight software, and was the person responsible for the emergency displays. By adding them she was covering all the bases, even those that were supposedly never going to happen. She did more than that, though. Long before the moon landing happened she had thought through the consequences if these “never events” ever did happen. Her team had therefore also included code in the Apollo software to prioritise what the computer was doing. In the situation that arose, it worked out what was actually needed to land the lunar module and prioritised that, shutting down the other software that was no longer vital. That meant that, despite the problems, as long as the astronauts did the right things and carried on with the landing, everything would ultimately be fine.
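To give a flavour of the general idea (and only a flavour: this is a minimal Python sketch with invented task names and numbers, nothing like the real Apollo Guidance Computer code), priority scheduling with load shedding means keeping the most important jobs that fit and shedding the rest:

```python
# A minimal, purely illustrative sketch of priority scheduling with load
# shedding in Python. Task names, priorities and costs are invented.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int  # lower number means more important
    cost: int      # percentage of the computer's capacity the task needs

def shed_overload(tasks, capacity=100):
    """Keep the most important tasks that fit within capacity; shed the rest."""
    kept, shed, used = [], [], 0
    for task in sorted(tasks, key=lambda t: t.priority):
        if used + task.cost <= capacity:
            kept.append(task)
            used += task.cost
        else:
            shed.append(task)
    return kept, shed

tasks = [
    Task("landing guidance", priority=1, cost=60),
    Task("crew displays", priority=2, cost=20),
    Task("rendezvous radar processing", priority=5, cost=40),  # the extra load
]

kept, shed = shed_overload(tasks)
print("keep running:", [t.name for t in kept])
print("shut down   :", [t.name for t in shed])
```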

Margaret Hamilton
Margaret Hamilton. Image by Daphne Weld Nichols, CC BY-SA 3.0 via Wikimedia Commons

There was still a potential problem though. When an emergency like this happened, the displays appeared immediately so that the astronauts could understand the problem as soon as possible. However, behind the scenes the software itself was also dealing with it, switching between programs and shutting down the ones not needed. Such switchovers took time on the 1960s Apollo computer, as computers were much slower then than they are today. It was only a matter of seconds, but the highly trained astronauts could easily process the warning information and start to deal with it faster than that. The problem was that if they pressed buttons, doing their part of the job of continuing with the landing, before the switchover completed, they would be sending commands to the original code, not the code that was still starting up to deal with the warning. That could be disastrous, and it is the kind of problem that can easily evade testing and only be discovered when code is running live, if the programmers do not deeply understand how their code works and spend time worrying about it.

Margaret Hamilton had thought all this through though. She had understood what could happen, and not only written the code, but also come up with a simple human instruction to deal with the human pilot and software being out of synch. Because she thought about it in advance, the astronauts knew about the issue and solution and so followed her instructions. What it boiled down to was “If a priority display appears, count to 5 before you do anything about it.” That was all it took for the computer to get back in synch and so for Buzz Aldrin and Neil Armstrong to recover the situation, land safely on the moon and make history.

Without Margaret Hamilton’s code and deep understanding of it, we would most likely now be commemorating the 20th of July as the day the first humans died on the moon, rather than the day humans first walked on it.

– Paul Curzon, Queen Mary University of London


You cannot be serious! …Wimbledon line calls go wrong

Image by Felix Heidelberger from Pixabay (cropped)

The 2025 tennis championships are the first at which Wimbledon has completely replaced its human line judges with an AI vision and decision system, Hawk-Eye. After only a week it caused controversy, and the system had to be updated, when it failed to call a glaringly out ball in a Centre Court match between Brit Sonay Kartal and Anastasia Pavlyuchenkova. Apparently it had been switched off by mistake mid-game. This raises issues inherent in any computer technology that replaces humans: that it can go wrong, the need for humans in the loop, the possibility of human error in its use, and what you do when things do go wrong.

Perhaps because it is a vision system rather than generative AI, there has been little talk of whether Hawk-Eye is 100% accurate or not. Vision systems do not hallucinate in the way generative AI does, but they are still not infallible. The opportunity for players to appeal has been removed, however: in the original way Hawk-Eye was used, humans made the call and players could ask for Hawk-Eye to check. Now Hawk-Eye makes the decision and basically that is it. A picture is shown on screen of a circle relative to the line, generated by Hawk-Eye to ‘prove’ the ball was in or out as claimed. It is then taken as gospel. Of course, it is just reflecting Hawk-Eye’s decision – what it “saw” – not reality, and not any sort of separate evidence. It is just a visual version of the call shouted. However, it is treated as though it were absolute proof with no argument possible. If it is aiming to be really, really dependable then Hawk-Eye will have multiple independent systems sensing in different ways and voting on the result, as that is one of the ways computer scientists have invented to program dependability. However, whether it is 100% accurate isn’t really the issue. What matters is whether it is more accurate, making fewer mistakes, than human line judges do. Undoubtedly it is, so it is an improvement, and the occasional uncaught mistake is not really the point.
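The voting idea itself is simple to sketch. Below is a purely illustrative Python snippet (with invented sensor results, and no claim that Hawk-Eye actually works this way): several independent systems each make a call and the majority wins, so one faulty sensor cannot cause a bad call on its own.

```python
# A minimal, purely illustrative sketch of voting between independent
# sensing systems in Python. The calls below are invented examples.

def majority_call(calls):
    """Return 'OUT' if most of the independent systems say out, else 'IN'."""
    out_votes = sum(1 for call in calls if call == "OUT")
    return "OUT" if out_votes > len(calls) / 2 else "IN"

# One system misreads the bounce but is outvoted by the other two.
print(majority_call(["OUT", "OUT", "IN"]))  # prints OUT
```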

However, the mistake in this problem call was different. The operators of the system had mistakenly switched it off mid-match due to “human error”. That raises two questions. First, why was it designed so that a human could accidentally turn it off mid-match? Don’t blame that person: it should not have been possible in the first place. Fix the system so it can’t happen again. That is what, within a day, the tournament organisers claimed to have done (whether resiliently remains to be seen).
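A standard way to design out this kind of slip is an interlock: the dangerous action is simply refused unless it is clearly deliberate. Here is a minimal illustrative sketch in Python (the names are invented, and this is no claim about how Hawk-Eye’s real controls work):

```python
# A minimal, purely illustrative sketch in Python of an interlock that
# designs out the error: the off switch refuses to act mid-game unless
# deliberately overridden. All names are invented.

class LineCallSystem:
    def __init__(self):
        self.enabled = True
        self.game_in_progress = False

    def switch_off(self, override=False):
        if self.game_in_progress and not override:
            return "Refused: a game is in progress. Use override to force."
        self.enabled = False
        return "Line-calling system switched off."

system = LineCallSystem()
system.game_in_progress = True
print(system.switch_off())               # an accidental press is refused
print(system.switch_off(override=True))  # a deliberate, explicit action works
```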

However, the mistake raises another question. Wimbledon had not handed the match over to the machines completely. A human umpire was still in charge. There was a human in the loop. The umpire, however, had no idea the system was switched off, we were told, until the call for a ball that was very obviously out was not made. If that is so, why not? Hawk-Eye supposedly made two calls of “Stop”. Was that its way of saying “I am not working, so stop the match”? If it was such a message to the umpire, it is not a very clear way to make it, and is guaranteed to be disruptive. It sounds a lot like a 404 error message, added by a programmer for a situation that they do not expect to occur!

A basic requirement of a good interactive system is that the system state is visible: the fact that it was not even switched on should have been totally obvious from the controls the umpire had, well before the bad call. That needs to be fixed too, just in case there is still a way Hawk-Eye can be switched off. It also raises the question of how often the system has been accidentally switched off, or powered down temporarily for other reasons, with no one knowing, because there was no glaringly bad call to miss at the time.

Another issue is that the umpire apparently did follow the proper procedure, which was not simply to call the point (as might have happened in the past, given he apparently knew “the ball was out!”) but instead to have the point replayed. That was, unsurprisingly, considered unfair by the player who lost a point she should have won. Why couldn’t the umpire make a decision on the point? Perhaps because humans are no longer trusted at all as they were before. As suggested by Pavlyuchenkova, there is no reason why there cannot be a video review process in place so that the umpire can make a proper decision. That would be a way to add back in a proper appeal process.

Also, as was pointed out, what happens if the system fully goes down? Does Wimbledon now have to just stop until Hawk-Eye is fixed: “AI stopped play”? We have seen many examples, over many decades as well as recently, of complex computer systems crashing. Hawk-Eye is a complex system, so problems are entirely possible. Programmers make mistakes (especially when making quick fixes to other problems, as was apparently just done). If you replace people with computers, you need a reliable and appropriate backup that can kick in immediately, planned from the outset. A standard design principle is that programs should help humans avoid making mistakes, help them quickly detect mistakes when they do happen, and help them recover.

A tennis match is not actually high stakes by human standards. No one dies because of mistakes (though a LOT of money is at stake), but the issues are very similar in a wide range of systems where people can die – from control of medical devices, to military applications, space, aircraft and nuclear power plant control – in all of which computers are replacing humans. We need good solutions, and they need to be in place before something goes wrong, not after. One issue as systems become more and more automated is that the human left in the loop to avoid disaster has more and more trouble tracking what the machine is doing, as they do less and less themselves, making it harder to step in and correct problems in a timely way (as was likely the case with the Wimbledon umpire). The humans need to be not just a little bit in the loop but centrally so. How you do that for different situations is not easy to work out, but as tennis has shown it can’t just be ignored. There are better solutions than the one Wimbledon is using, but to even consider them you first have to accept that computers do make mistakes, and so know there is a problem to be solved.

– Paul Curzon, Queen Mary University of London


Philippa Gardner: bringing law and order to a wild west

Verified Trustworthy Software

Image by CS4FN

The computing world is a wild west, with bugs in software the norm, and malicious people and hostile countries making use of them to attack people, companies and other nations. We can do better. Just as in the original wild west, advances have happened faster than law and order has been able to keep up. Rather than just catching cyber criminals, we need to remove their opportunities. In software, the complexity of our computers and the programs they run has increased faster than ways have been developed and put in place to ensure they can be trusted. It is important that we can answer precisely questions such as “What does this code do?” and “Does it actually do what is intended?”, but we must also be able to assure ourselves of what code definitely does NOT do: that it doesn’t include trapdoors for criminals to subvert, for example. Philippa Gardner has dedicated her working life to rectifying this by providing ways to verify software: to mathematically prove that such trust properties hold of it.

Programs are incredibly complicated. Traditionally, software has been checked using testing. You run it on lots of input scenarios and check it does the right thing in those cases. If it does, you assume it works in all the cases you didn’t have time to check. That is not good enough if you want code to be really trustworthy: it is impossible to test all possibilities, so testing alone is just not enough. The only way to do it properly is to also use engineering methods based on mathematics. This is the case not just for application programs, but also for the software systems they run within, and that includes programming languages themselves. If you can’t trust the programming language then you can’t trust any programs written in that language. Building on decades of work by both her own team and others, Philippa has helped provide tools and techniques that mean complex industrial software, and the programming languages it is written in, can now be verified mathematically to be correct. Helping secure the web is one area where she is making a massive contribution, via the W3C WebAssembly (Wasm) initiative. She is helping ensure that the programs of the future that run over the web are trustworthy.

Programs written in programming languages are compiled (translated) into low-level code (binary 1s and 0s) that can actually be run on a computer. Each kind of computer has its own binary instructions. Rather than write a compiler for every different machine, compilers now often use common intermediate languages. The idea is that you have what is called a virtual machine – an imaginary one that does not really exist in hardware. You compile your code to run on the imaginary machine. A compiler is written for each language to compile it into the common low-level language of that virtual machine. Then a separate, much simpler, translator can be written to convert that code into code for a particular real machine. That two-step process is much easier than writing compilers for all combinations of languages and machines. It is also a good approach for making programs more trustworthy, as you can separately verify the separate, simpler parts. If programs compile to the virtual machine, then to be sure they cannot do harm (like overwrite areas of memory they shouldn’t be able to write to) you only have to be sure that programs running on the virtual machine cannot, in general, do such harm.
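To make the two-step idea concrete, here is a toy sketch in Python (with invented instruction names: this is not Wasm’s real instruction set). A tiny “compiler” turns arithmetic expressions into instructions for an imaginary stack machine, and a separate, much simpler interpreter plays the role of the virtual machine.

```python
# A toy, purely illustrative two-step translation in Python: a tiny
# "compiler" targets an imaginary stack-based virtual machine, and a
# separate, much simpler interpreter runs the result.

def compile_expr(expr):
    """Compile a nested tuple like ("+", 2, ("*", 3, 4)) into VM instructions."""
    if isinstance(expr, int):
        return [("PUSH", expr)]
    op, left, right = expr
    return compile_expr(left) + compile_expr(right) + [(op,)]

def run(instructions):
    """The virtual machine: execute the instructions using a stack."""
    stack = []
    for instruction in instructions:
        if instruction[0] == "PUSH":
            stack.append(instruction[1])
        else:  # an arithmetic instruction: pop two values, push the result
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instruction[0] == "+" else a * b)
    return stack.pop()

program = compile_expr(("+", 2, ("*", 3, 4)))
print(program)       # [('PUSH', 2), ('PUSH', 3), ('PUSH', 4), ('*',), ('+',)]
print(run(program))  # 14
```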

The aim of Wasm is to make all this a reality for web programming, where visiting a web page may run a program you can’t trust. Wasm is a low-level language, with an associated virtual machine, that compilers for other programming languages can target, and which is designed to be trustworthy itself even when programs are run over the web. It is based on a published formal specification of how the language and the virtual machine should behave.

As Philippa has pointed out, while some companies have good processes for ensuring their software is good enough, these are often kept secret. But given we all rely on such software, we need much better assurances: processes and tools need to be inspectable by anyone. That has been one of the areas she has focussed on, and working on Wasm is one way she has been doing it. Much of her work over 30 years or so has been around the development and use of logics that can be used to mathematically verify that concurrent programs are correct. Bringing that experience to Wasm has allowed her to work on the formal specification, conducting proofs of properties of Wasm that show it is trustworthy in various ways, and correcting definitions in the specification when problems are found. Her approach is now being adopted as the way to do such checking.

Her work with Wasm continues, but she has already made massive steps towards ensuring that the programs we use are safe and can be trusted. As a result, she was recently awarded the BCS Lovelace Medal for her efforts.


Double or nothing: an extra copy of your software, just in case

Ariane 5 on the launchpad
Ariane 5 on the launch pad. Photo Credit: (NASA/Chris Gunn) Public Domain via Wikimedia Commons.

If you spent billions of dollars on a gadget you’d probably like it to last more than a minute before it blows up. That’s what happened to a European Space Agency rocket. How do you make sure the worst doesn’t happen to you? How do you make machines reliable?

A powerful way to improve reliability is to use redundancy: double things up. A plane with four engines can keep flying if one fails. Worried about a flat tyre? You carry a spare in the boot. These situations are about making physical parts reliable. Most machines are a combination of hardware and software though. What about software redundancy?

You can have spare copies of software too. Rather than a single version of a program you can have several copies running on different machines. If one program goes wrong another can take over. It would be nice if it was that simple, but software is different to hardware. Two identical programs will fail in the same way at the same time: they are both following the same instructions so if one goes wrong the other will too. That was vividly shown by the maiden flight of the Ariane 5 rocket. Less than 40 seconds from launch things went wrong. The problem was to do with a big number that needed 64 bits of storage space to hold it. The program’s instructions moved it to a storage place with only 16 bits. With not enough space, the number was mangled to fit. That led to calculations by its guidance system going wrong. The rocket veered off course and exploded. The program was duplicated, but both versions were the same so both agreed on the same wrong answers. Seven billion dollars went up in smoke.
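To see how squeezing a big number into too small a space mangles it, here is a minimal sketch in Python (the real software was written in Ada, and the velocity figure here is invented purely for illustration): a value too big for 16 bits either triggers an error or, if forced in, comes out as something completely different.

```python
# A minimal sketch in Python of the kind of narrowing error involved.
# The velocity value is invented for illustration.
import struct

horizontal_velocity = 40_000.0  # too big for a 16-bit signed integer (max 32767)

# Trying to store the value as a 16-bit signed integer ("h") fails outright...
try:
    struct.pack("h", int(horizontal_velocity))
except struct.error as error:
    print("conversion failed:", error)

# ...and forcing it into 16 bits mangles the number instead of preserving it.
mangled = int(horizontal_velocity) & 0xFFFF
if mangled >= 0x8000:  # reinterpret the 16 bits as a signed number
    mangled -= 0x10000
print("value as stored:", mangled)  # -25536, nothing like 40000
```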

Can you get round this? One solution is to get different teams to write programs to do the same thing. The separate teams may make mistakes, but surely they won’t all get the same thing wrong! Run the versions on different machines and let them vote on what to do. Then, as long as more than half agree on the right answer, the system as a whole will do the right thing. That’s the theory anyway. Unfortunately, in practice it doesn’t always work. Nancy Leveson, an expert in software safety from MIT, ran an experiment in which different programmers were independently set the same programs to write. She found that they tended to make the same mistakes, writing code that gave the same wrong answers. Even if it had used independently written redundant code, it’s still possible Ariane 5 would have exploded.
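The voting part is straightforward to sketch. Here is a minimal illustrative example in Python (the three “versions”, one deliberately buggy, are invented): as long as a majority agree, a single faulty version is simply outvoted.

```python
# A minimal, purely illustrative sketch of N-version voting in Python.
# The three "versions" (one deliberately buggy) are invented examples.
from collections import Counter

def version_a(x):
    return x * x

def version_b(x):
    return x ** 2

def version_c(x):
    return x * x + 1  # a buggy version, for illustration

def vote(x, versions):
    """Run every version and return the majority answer."""
    answers = Counter(version(x) for version in versions)
    answer, count = answers.most_common(1)[0]
    if count > len(versions) / 2:
        return answer
    raise RuntimeError("no majority: the result cannot be trusted")

print(vote(6, [version_a, version_b, version_c]))  # 36: the buggy version is outvoted
```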

Redundancy is a big help but it can’t guarantee software works correctly. When designing systems to be highly reliable you have to assume things will still go wrong. You must still have ways to check for problems and to deal with them so that a mistake (whether by human or machine) won’t turn into a disaster.

Paul Curzon, Queen Mary University of London



Mary Clem: getting it right

by Paul Curzon, Queen Mary University of London

Mary Clem was a pioneer of dependable computing long before the first electronic computers existed. She was a computer herself – a human one – but her job became more like that of a programmer.

A tick on a target of red concentric zeros
Image by Paul Curzon

Back before there were computers, there were human computers: people who did the calculations that machines now do. The Victorian inventor Charles Babbage worked as one. It was the inspiration for him to try to build a steam-powered computer. Often, however, it was women who worked as human computers, especially in the first half of the 20th century. One was Mary Clem, who in the 1930s worked for Iowa State University’s statistical lab. Despite having no mathematical training and finding maths difficult at school, she found the work fascinating and rose to become the Chief Statistical Clerk. Along the way she devised a simple way to make sure her team didn’t make mistakes.

The start of stats

Big Data, the idea of processing lots of data to turn it into useful information, is all the rage now, but its origins lie at the start of the 20th century, driven by human computers using early calculating machines. The 1920s marked the birth of statistics as a practical mathematical science. A key idea was that of calculating whether there were correlations between different data sets, such as rainfall and crop growth, or holding agricultural fairs and improved farm output. Correlation is the first step to working out what causes what. It allows scientists to make progress in working out how the world works, and that can then be turned into improved profits by business, or into positive change by governments. It became big business between the wars, with lots of work for statistical labs.

Calculations and cards

Originally, in and before the 19th century, human computers did all the calculations by hand. Then simple calculating machines were invented, which the human computers could use to do the basic calculations needed. In 1890 Herman Hollerith invented his Tabulator machine (his company later became the computing powerhouse IBM). The Tabulator was originally just a counting machine created for the US census, though later versions could do arithmetic too, and the human computers started to use them in their work. The tabulator worked using punch cards: cards that held data in patterns of holes punched into them. A card representing a person in the census might have a hole punched in one place if they were male, and in a different place if they were female. You could then count the total number of people with any given property by counting the appropriate holes.

Mary was being more than a computer,
and becoming more like a programmer

Mary’s job ultimately didn’t just involve doing calculations. It also involved preparing punch cards for input into the machines (so representing data as different holes on a card), and she had to develop the formulae needed for the calculations for different tasks. Essentially she was creating simple algorithms for the human computers using the machines to follow, including preparing their input. Her work was therefore moving closer to that of a computer operator, and then of a programmer.

Zero check

She was also responsible for checking the calculations to make sure mistakes were not being made. If the calculations were wrong, the results were worse than useless. Human computers could easily make mistakes, but even with machines doing the calculations it was still possible for the formulae to be wrong or for mistakes to be made preparing the punch cards. Today we call this kind of checking of the correctness of programs verification and validation. Since accuracy mattered, this part of her job also mattered. Even today, professional programming teams spend far more time checking and testing their code than writing it.

Mary took the role of checking for mistakes very seriously and, like any modern computational thinker, started to work out better ways of doing it that were more likely to catch mistakes. She was a pioneer in the area of dependable computing. What she came up with was what she called the Zero Check. She realised that the best way to check for mistakes was to do more calculations. For the calculations she was responsible for, she noticed that it was possible to devise an extra calculation whereby, if the other answers (the ones actually needed) had been correctly calculated, then the answer to this new calculation is 0. This meant that, instead of checking lots of individual calculations with different answers (which is slow and in itself error prone), she could just do this extra calculation. Then, if the answer was not zero, she had found a mistake.

A trivial version of this general idea, when you are doing a single calculation, is just to do it a second time, but in a different way. Rather than checking manually whether the answers are the same, though, if you have a computer it can subtract the two answers. If there are no mistakes, the answer to this extra check calculation should be 0. All you have to do is look for zero answers to the extra subtractions. If you are checking lots of answers, then spotting zeros amongst non-zeros is easier for a human than checking whether pairs of numbers are the same.
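Here is a minimal sketch of the idea in Python (the little table of numbers is invented purely for illustration): the main job is to total the columns, the check calculation totals the rows instead, and subtracting the two grand totals should give zero.

```python
# A minimal sketch of the Zero Check idea in Python, on invented data.

rows = [
    [12, 7, 30],
    [ 5, 9, 14],
    [ 8, 3, 21],
]

# The main calculation: the total of each column.
column_totals = [sum(column) for column in zip(*rows)]

# The check calculation, done a different way: the grand total from row sums.
grand_total_by_rows = sum(sum(row) for row in rows)

# The zero check: if both calculations are right, this difference is 0.
zero_check = grand_total_by_rows - sum(column_totals)

print("column totals:", column_totals)
print("zero check   :", zero_check)  # 0 means the totals cross-check

if zero_check != 0:
    print("A mistake has been made somewhere: recalculate!")
```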

Defensive Programming

This idea of doing extra calculations to help detect errors is part of defensive programming. Programmers add extra checking code, or “assertions”, to their programs to automatically check that values calculated at different points in the program have the properties expected of them. If they don’t, then the program itself can do something about it (issue a warning, or apply a recovery procedure, for example).
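For example, here is a minimal sketch in Python of assertions checking properties that should always hold (the percentage-splitting function is invented for illustration):

```python
# A minimal sketch of defensive programming with assertions in Python.
# The percentage-splitting function is invented for illustration.

def split_into_percentages(counts):
    """Convert a list of counts into percentages that must add up to 100."""
    total = sum(counts)
    assert total > 0, "cannot compute percentages of a zero or empty total"
    percentages = [100 * count / total for count in counts]
    # Defensive check: the results should always sum to (almost exactly) 100.
    assert abs(sum(percentages) - 100) < 1e-9, "percentages do not add up!"
    return percentages

print(split_into_percentages([2, 3, 5]))  # [20.0, 30.0, 50.0]
```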

A similar idea is also used to catch errors whenever data is sent over networks. An extra calculation is done on the 1s and 0s being sent, and the answer is added on to the end of the message. When the data is received, a similar calculation is performed, with the answer indicating whether the data has been corrupted in transmission.
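A minimal sketch of that idea in Python looks like this (real networks use stronger codes such as CRCs, but the principle is the same):

```python
# A minimal sketch of a checksum in Python: an extra calculation on the
# bytes being sent is appended to the message and repeated by the
# receiver to detect corruption.

def checksum(data):
    """A simple checksum: the sum of all the bytes, kept to 8 bits."""
    return sum(data) % 256

def send(message):
    return message + bytes([checksum(message)])

def receive(packet):
    message, received_check = packet[:-1], packet[-1]
    if checksum(message) != received_check:
        raise ValueError("data corrupted in transmission")
    return message

packet = send(b"hello")
print(receive(packet))  # b'hello'

corrupted = bytes([packet[0] ^ 1]) + packet[1:]  # flip one bit in transit
try:
    receive(corrupted)
except ValueError as error:
    print("error detected:", error)
```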

A pioneering human computer

Mary Clem was a pioneer as a human computer, realising there could be more to the job than just doing computations. She realised that what mattered was that those computations were correct. Charles Babbage’s answer to the problem was to try to build a computing machine. Mary’s was to think about how to validate the computations done, whether by a human or a machine.

EPSRC supports this blog through research grant EP/W033615/1.