Mary Clem: getting it right

by Paul Curzon, Queen Mary University of London

Mary Clem was a pioneer of dependable computing long before the first electronic computers existed. She was a computer herself: a human one, whose job gradually became more like a programmer’s.

A tick on a target of red concentric zeros
Image by Paul Curzon

Back before there were computers there were human computers: people who did the calculations that machines now do. Victorian inventor Charles Babbage worked as one. It was the inspiration for him to try to build a steam-powered computer. Often, however, it was women who worked as human computers, especially in the first half of the 20th century. One was Mary Clem in the 1930s. She worked for Iowa State University’s statistical lab. Despite having no mathematical training and finding maths difficult at school, she found the work fascinating and rose to become the Chief Statistical Clerk. Along the way she devised a simple way to make sure her team didn’t make mistakes.

The start of stats

Big Data, the idea of processing lots of data to turn that data into useful information, is all the rage now, but its origins lie at the start of the 20th century, driven by human computers using early calculating machines. The 1920s marked the birth of statistics as a practical mathematical science. A key idea was that of calculating whether there were correlations between different data sets, such as between rainfall and crop growth, or between holding agricultural fairs and improved farm output. Correlation is the first step to working out what causes what. It allows scientists to make progress in working out how the world works, and that can then be turned into improved profits by business, or into positive change by governments. It became big business between the wars, with lots of work for statistical labs.
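To get a feel for the kind of calculation involved, here is a minimal sketch in Python. The numbers are invented purely for illustration (they are not real data from the lab), and the built-in correlation function needs Python 3.10 or later.

# A sketch of the kind of calculation a statistical lab did: measure how
# strongly two data sets vary together. The numbers are made up.
from statistics import correlation

rainfall = [52, 61, 48, 70, 65, 55]          # rainfall in mm
crop_yield = [3.1, 3.6, 2.9, 4.0, 3.8, 3.2]  # crop yield in tonnes per hectare

r = correlation(rainfall, crop_yield)
print(f"correlation coefficient: {r:.2f}")   # close to 1: strongly correlated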

Calculations and cards

Originally, in and before the 19th century, human computers did all the calculations by hand. Then simple calculating machines were invented, which the human computers could use to do the basic calculations needed. In 1890 Herman Hollerith invented his Tabulator machine (his company later became the computing powerhouse IBM). The Tabulator was originally just a counting machine created for the US census, though later versions could do arithmetic too. The human computers started to use them in their work. The tabulator worked using punch cards: cards that held data in patterns of holes punched into them. A card representing a person in the census might have a hole punched in one place if they were male, and in a different place if they were female. The total number of people with any given property could then be found by counting the appropriate holes.
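As a rough illustration of that counting idea, here is a toy sketch in Python. The card layout and the position names are made up for illustration; they are not Hollerith’s real card format.

# Toy model of tabulation: each card is the set of positions punched in it.
cards = [
    {"female", "age_20_29"},
    {"male", "age_30_39"},
    {"female", "age_30_39"},
]

def tabulate(cards, position):
    # Count the cards with a hole at the given position.
    return sum(1 for card in cards if position in card)

print(tabulate(cards, "female"))  # 2: two people recorded as female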

Mary was being more than a computer,
and becoming more like a programmer

Mary’s job ultimately didn’t just involve doing calculations: it also involved preparing punch cards for input into the machines (so representing data as different holes on a card). She also had to develop the formulae needed for the calculations of different tasks. Essentially she was creating simple algorithms for the human computers using the machines to follow, including preparing their input. Her work was therefore moving closer to that of a computer operator, and then to that of a programmer.

Zero check

She was also responsible for checking the calculations to make sure mistakes were not being made. If the calculations were wrong the results were worse than useless. Human computers could easily make mistakes in calculations, but even with machines doing the calculations it was still possible for the formulae to be wrong or for mistakes to be made preparing the punch cards. Today we call this kind of checking of the correctness of programs verification and validation. Since accuracy mattered, this part of her job also mattered. Even today professional programming teams spend far more time checking and testing their code than writing it.

Mary took the role of checking for mistakes very seriously, and like any modern computational thinker, started to work out better ways of doing it that were more likely to catch mistakes. She was a pioneer in the area of dependable computing. What she came up with was what she called the Zero Check. She realised that the best way to check for mistakes was to do more calculations. For the calculations she was responsible for, she noticed that it was possible to devise an extra calculation whereby, if the other answers (the ones actually needed) had been correctly calculated, then the answer to this new calculation would be 0. This meant that, instead of checking lots of individual calculations with different answers (which is slow and in itself error prone), she could just do this extra calculation. Then, if the answer was not zero she had found a mistake.

A trivial version of this general idea, when you are doing a single calculation, is just to do it a second time but in a different way. Rather than checking manually whether the two answers are the same, though, if you have a computer it can subtract one answer from the other. If there are no mistakes, the answer to this extra check calculation should be 0. All you have to do is look for zero answers to the extra subtractions. If you are checking lots of answers, then spotting zeros amongst non-zeros is easier for a human than checking whether pairs of numbers are the same.
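Here is a minimal sketch of that trivial version in Python. The particular formula is just an example (not one of Mary Clem’s actual statistical calculations): the same quantity is worked out in two different ways, and the extra check calculation subtracts one answer from the other.

# Zero check sketch: compute the same quantity two different ways and subtract.
data = [2.0, 4.0, 6.0, 8.0]
mean = sum(data) / len(data)

# First way: sum of squared deviations from the mean, computed directly.
answer1 = sum((x - mean) ** 2 for x in data)

# Second way: the algebraically equivalent form sum(x*x) - n * mean^2.
answer2 = sum(x * x for x in data) - len(data) * mean ** 2

# The extra check calculation: anything other than (almost exactly) zero
# means a mistake has been made somewhere.
print("zero check:", answer1 - answer2)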

Defensive Programming

This idea of doing extra calculations to help detect errors is part of defensive programming. Programmers add extra checking code, or “assertions”, to their programs to automatically check that values calculated at different points in the program meet expected properties. If they don’t, then the program itself can do something about it (issue a warning, or apply a recovery procedure, for example).
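As a hedged sketch of what such a check might look like in Python (the function and the properties checked are made up for illustration):

# Defensive programming sketch: extra checks that catch mistakes automatically.
def average(values):
    # Check the input makes sense before doing the calculation.
    assert len(values) > 0, "average needs at least one value"
    result = sum(values) / len(values)
    # Check the answer has a property we know must hold: an average always
    # lies between the smallest and largest value.
    assert min(values) <= result <= max(values), "bug: average out of range"
    return result

print(average([3, 5, 10]))  # 6.0, and both checks pass silently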

A similar idea is also used now to catch errors whenever data is sent over networks. An extra calculation is done on the 1s and 0s being sent and the answer is added on to the end of the message. When the data is received a similar calculation is performed with the answer indicating if the data has been corrupted in transmission. 
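A minimal sketch of the idea in Python (real networks use stronger checks, such as cyclic redundancy checks, but the principle is the same): do an extra calculation on the data, send the answer along with it, and redo the calculation at the other end.

# Checksum sketch: an extra calculation whose answer travels with the data.
def checksum(data):
    return sum(data) % 256

def send(data):
    return data + [checksum(data)]        # append the answer to the message

def receive(message):
    *data, sent_check = message
    return data, checksum(data) == sent_check

message = send([72, 101, 108, 108, 111])  # the bytes of "Hello"
print(receive(message))                   # (..., True): arrived intact

message[1] += 1                           # simulate corruption in transit
print(receive(message))                   # (..., False): error detected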

A pioneering human computer

Mary Clem was a pioneer as a human computer, realising there could be more to the job than just doing computations. She realised that what mattered was that those computations were correct. Charles Babbage’s answer to the problem was to try to build a computing machine. Mary’s was to think about how to validate the computation done (whether by a human or a machine).

More on …

Related Magazines …


EPSRC supports this blog through research grant EP/W033615/1. 

Black in Data

Lightbulb in a black circle surrounded by circles of colour representing data

Image based on combining big data and lightbulb images by Gerd Altmann from Pixabay

Careers do not have to be decided on from day one. You can end up in a good place in a roundabout way. That is what happened to Sadiqah Musa, and now she is helping make the paths easier for others to follow.

Sadiqah went to university at QMUL expecting to become an environmental scientist. Her first job was as a geophysicist analysing seismic data. It was a job she thought she would love and do forever. Unfortunately, she wasn’t happy, not least because of the lack of job security. The job was all about data, though, and that was a part she did still enjoy, and the computer science job of Data Analyst was now a sought-after role. She retrained and started a whole new exciting career. She currently works at the Guardian newspaper, where she met Devina Nembhard … who was the first Black woman she had ever worked with throughout her career.

Together they decided that was just wrong, and set out to change it. They created “Black in Data” to support people of colour in the industry: mentoring them, training them in the computer science skills they might be short of, like programming and databases, and helping them thrive. More than that, they also challenge the industry to take down the barriers that block diversity in the first place.

Paul Curzon, Queen Mary University of London

More on …

Related Magazines …


EPSRC supports this blog through research grant EP/W033615/1. 

Al-Jazari: the father of robotics

Al Jazari's hand washing automaton
Image by user:Grenavitar, Public domain, via Wikimedia Commons

Science fiction films are full of humanoid robots acting as servants, workers, friends or colleagues. The first were created during the Islamic Golden Age, a thousand years ago. 

Robots and automata have been the subject of science fiction for over a century, and their history in myth goes back millennia. So, too, does the actual building of lifelike animated machines. The Ancient Greeks and Egyptians built automata: animal or human-like contraptions that seemed to come to life. These early automata were illusions with no practical use, though, beyond entertainment or simply amazing people.

It was the great inventor of mechanical gadgets Ismail Al-Jazari, working in the Islamic Golden Age of science, engineering and art in the 12th century, who first built robot-like machines with actual purposes. Powered by water, his automata acted as servants doing specific tasks. One machine was a humanoid automaton that acted as a servant during the ritual purification of hand washing before saying prayers. It poured water into a basin from a jug and then handed over a towel, mirror and comb. It used a toilet-style flushing mechanism to deliver the water from a tank. Other inventions included a waitress automaton that served drinks and robotic musicians that played instruments from a boat. The musical automaton may even have been programmable.

We know about Al-Jazari’s machines because he not only created mechanical gadgets and automata, he also wrote a book about them: The Book of Knowledge of Ingenious Mechanical Devices. It’s possible that it inspired Leonardo Da Vinci who, in addition to being a famous painter of the Italian Renaissance, was a prolific inventor of machines. 

Such “robots” were not everyday machines. The hand washing automaton was made for the king. Al-Jazari’s book, however, didn’t just describe the machines, it explained how to build them: possibly the first textbook to cover automata. If you weren’t a king, then perhaps you could, at least, have a go at making your own servants.

Paul Curzon, Queen Mary University of London

More on …

Related Magazines …


EPSRC supports this blog through research grant EP/W033615/1. 

Mark Dean: A PC Success

An outline of a head showing the brain and spinal column on a digital background of binary and circuitry

Image by Gerd Altmann from Pixabay (cropped)

We have moved on to smartphones, tablets and smartwatches, but for 30 years the desktop computer ruled, and originally not just any desktop computer, the IBM PC. A key person behind its success was African American computer scientist, Mark Dean.

IBM is synonymous with computers. It became the computing industry powerhouse as a result of building large, room-sized computers for businesses. The original model of how computers would be used followed IBM’s president Thomas J Watson’s supposed quote that “there is a world market for about five computers.” They produced gigantic computers that could be dialled into by those who needed computing time. That prediction was very quickly shown to be wrong, though, as computer sales boomed.

Becoming more personal

Mark Dean was the first African American
to receive IBM’s highest honour.

By the end of the 1970s the computing world was starting to change. Small, but powerful, mini-computers had taken off and some companies were pushing the idea of computers for the desktop. IBM was at risk of being badly left behind… until they suddenly roared back into the lead with the IBM personal computer and almost overnight became the world leaders once more, revolutionising the way computers were seen, sold and used. Their predictions were still a little off: initial sales of the IBM PC were 8 times higher than they expected! Within a few years they were selling many hundreds of thousands a year and making billions of dollars. Soon every office desk had one and PC had become an everyday word used to mean computer.

Get on the bus

So who was behind this remarkable success? One of the design team who created the IBM PC was Mark Dean. As a consequence of his work on the PC, he became the first African American to be made an IBM fellow (IBM’s highest honour). One of his important contributions was in leading the development of the PC’s bus. Despite the name, a computer bus is more like a road than a vehicle, so its other name of data highway is perhaps better. It is the way the computer chip communicates with the outside world. A computer on its own is not really that useful to have on your desktop. It needs a screen, keyboard and so on.

A computer bus is a bit like your nervous system, used to send messages from your brain around your body. Just as your brain interacts with the world, receiving messages from your senses and taking action by sending messages to your muscles, all using your nervous system, a computer chip sends signals to its peripherals using the bus. Those peripherals include things like mouse, keyboard, printers, monitors, modems, external memory devices and more: the equivalents of its ways of sensing the world and interacting with it. The bus is in essence just a set of wires connecting into and out of the chip, each with an allocated use, together with a set of rules about how they are used. All peripherals then follow the same set of rules to communicate with the computer. It means you can easily swap peripherals in and out (unlike your body!). Later versions of the PC bus that Mark designed ultimately became an industry standard for desktop computers.
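Here is a loose software analogy in Python (nothing like the electronics of a real bus, and the devices and addresses are invented): the key point is that every peripheral follows the same agreed rules, so devices can be swapped without changing the computer.

# A software analogy of a bus: one agreed set of rules for every peripheral.
class Peripheral:
    def receive(self, message):
        raise NotImplementedError

class Printer(Peripheral):
    def receive(self, message):
        print("Printer prints:", message)

class Monitor(Peripheral):
    def receive(self, message):
        print("Monitor displays:", message)

class Bus:
    def __init__(self):
        self.devices = {}
    def plug_in(self, address, device):
        self.devices[address] = device    # swap peripherals in and out freely
    def send(self, address, message):
        self.devices[address].receive(message)

bus = Bus()
bus.plug_in(1, Printer())
bus.plug_in(2, Monitor())
bus.send(2, "Hello")  # the chip only needs an address and the shared rules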

Mark can fairly be called a key member of that PC development team, given he was responsible for a third of the patents behind the PC. He didn’t stop there though. He has continued to be awarded patents, most recently related to artificial neural networks inspired by neuroscience. He has moved on from making computer equivalents of the nervous system to computer equivalents of the brain itself.

Paul Curzon, Queen Mary University of London

More on …

Related Magazines …


EPSRC supports this blog through research grant EP/W033615/1. 

In space no one can hear you …

Red arrows aircraft flying close to the ground.
Image by Bruno Albino from Pixabay 
Image by SnottyBoggins from Pixabay (cropped)

Johanna Lucht could do maths before she learned language. Why? Because she was born deaf and there was little support for deaf people where she lived. Despite, or perhaps because of, that she became a computer scientist and works for NASA. 

Being deaf can be very, very disabling if you don’t get the right help. As a child, Johanna had no one to help her to communicate apart from her mother, who tried to teach her sign language from a book. Throughout most of her primary school years she couldn’t have any real conversations with anyone, never mind learn. She got the lifeline she needed when the school finally took on an interpreter, Keith Wann, to help her. She quickly learned American Sign Language working with him. Learning your first language is crucial to learning other things and suddenly she was able to learn in school like other children. She caught up remarkably quickly, showing that an intelligent girl had been locked in that silent, shy child. More than anything though, from Keith, she learned never to give up.

Her early ability in maths, now her favourite subject, came to the fore as she excelled at science and technology. By this point her family had moved from Germany, where she grew up, to Alaska, where there was much more support, an active deaf community for her to join and lots more opportunities that she started to take. She signed up for a special summer school on computing specifically for deaf people at the University of Washington, learning the programming skills that became the foundation for her future career at NASA. At only 17 she even returned to help teach the course. From there, she signed up to do Computer Science at university and applied for an internship at NASA. To her shock and delight she was given a place.

Hitting the ground running 

A big problem for pilots, especially of fighter aircraft, is that of “controlled flight into terrain”: a technical-sounding phrase that just means flying the plane into the ground, for no reason other than that flying a fighter aircraft as low as possible over hazardous terrain is extremely hard. The solution is a ground collision avoidance system: basically, the pilots need a computer to warn them when hazardous terrain is coming up and they are too close for comfort, so that they can take evasive action. Johanna helped work on the interface design, the part that pilots see and interact with. To be of any use in such high-pressure situations this communication has to be slick and very clear.
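As a toy illustration only (entirely made up, and nothing like the real system Johanna worked on), the heart of such a warning is a simple check: predict the aircraft’s clearance over the terrain ahead and warn if it drops below a safety margin.

# Toy ground collision warning: the model and all the numbers are invented.
def should_warn(altitude_m, terrain_height_m, descent_rate_m_s,
                lookahead_s=10, margin_m=150):
    predicted_altitude = altitude_m - descent_rate_m_s * lookahead_s
    predicted_clearance = predicted_altitude - terrain_height_m
    return predicted_clearance < margin_m

print(should_warn(900, terrain_height_m=600, descent_rate_m_s=20))  # True: warn the pilot
print(should_warn(900, terrain_height_m=300, descent_rate_m_s=5))   # False: safe for now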

She impressed those she was working with so much that she was offered a full-time job and so became an engineer at NASA Armstrong, working with a team designing, testing and integrating new research technology into experimental aircraft. She had to run tests with other technicians, the first problem being how to communicate effectively with the rest of the team. She succeeded twice as fast as her bosses expected, taking only a couple of days before the team were all working well together. Her experience of the challenges she had faced as a child was now providing her with the skills to do brilliantly in a job where teamwork and communication skills are vital.

Mission control 

Eventually, she gained a place in Mission Control. There, slick comms are vital too. The engineers have to monitor the flight including all the communication as it happens, and be able to react to any developing situation. Johanna worked with an interpreter who listened directly to all the flight communications, signing it all for her to see on a second monitor. Working with interpreters in a situation like this is in itself a difficult task and Johanna had to make sure not only that they could communicate effectively but that the interpreter knew all the technical language that might come up in the flight. Johanna had plenty of experience dealing with issues like that though, and they worked together well, with the result that in April 2017 Johanna became the first deaf person to work in NASA mission control on a live mission … where of course she did not just survive the job, she excelled. 

As Johanna has pointed out it is not deafness itself that disables people, but the world deaf people live in that does. When in a world that wasn’t set up for deaf people, she struggled, but as soon as she started to get the basic help she needed that all changed. Change the environment to one that does not put up obstacles and deaf people can excel like anyone else. In space no one can hear anyone scream or for that matter speak. We don’t let it stop our space missions though. We just invent appropriate technology and make the problems go away. 

– Paul Curzon, Queen Mary University of London

More on …

Read more about Johanna Lucht:

Related Magazines …



This blog is funded by EPSRC on research agreement EP/W033615/1.


Alexander Graham Bell: It’s good to talk

An antique phone

Image modified version of that by Christine Sponchia from Pixabay

by Peter W McOwan, Queen Mary University of London

(From the archive)

The famous inventor of the telephone, Alexander Graham Bell, was born in 1847 in Edinburgh, Scotland. His story is a fascinating one, showing that like all great inventions, a combination of talent, timing, drive and a few fortunate mistakes are what’s needed to develop a technology that can change the world.

A talented Scot

As a child the young Alexander Graham Bell, Aleck, as he was known to his family, showed remarkable talents. He had the ability to look at the world in a different way, and come up with creative solutions to problems. Aged 14, Bell designed a device to remove the husks from wheat by combining a nailbrush and paddle into a rotary-brushing wheel.

Family talk

The Bell family had a talent with voices. His grandfather had made a name for himself as a notable, but often unemployed, actor. Aleck’s mother was deaf, but rather than use her ear trumpet to talk to her like everyone else did, the young Alexander came up with the cunning idea that speaking to her in low, booming tones very close to her forehead would allow her to hear his voice through the vibrations his voice would make. This special bond with his mother gave him a lifelong interest in the education of deaf people, which, combined with his inventive genius and some odd twists of fate, was to change the world.

A visit to London, and a talking dog

While visiting London with his father, Aleck was fascinated by a demonstration of Sir Charles Wheatstone’s “speaking machine”, a mechanical contraption that made human-like noises. On returning to Edinburgh, their father challenged Aleck and his older brother to come up with a machine of their own. After some hard work and scrounging bits from around the place they built a machine with a mouth, throat, nose, movable tongue, and bellows for lungs, and it worked. It made human-like sounds. Delighted by his success, Aleck went a step further and massaged the mouth of his Skye terrier so that the dog’s growls were heard as words. Pretty wruff on the poor dog.

Speaking of teaching

By the time he was 16, Bell was teaching music and elocution at a boys’ boarding school. He was still fascinated by trying to help those with speech problems improve their quality of life, and was very successful in this, later publishing two well-respected books called ‘The Practical Elocutionist’ and ‘Stammering and Other Impediments of Speech’. Alexander and his brother toured the country giving demonstrations of their techniques to improve people’s speech. He also started his studies at the University of London, where a mistake in reading German was to change his life and lay the foundations for the telecommunications revolution.

A ‘silly’ language mistake that changed the world

At university, Bell became fascinated by the ideas of the German physicist Hermann Von Helmholtz. Von Helmholtz had produced a book, ‘On The Sensations of Tone’, in which he said that vowel sounds, a, e, i, o and u, could be produced using electrical tuning forks and resonators. However Bell couldn’t read German very well, and mistakenly believed that Von Helmholtz had written that vowel sounds could be transmitted over a wire. This misunderstanding changed history. As Bell later stated, “It gave me confidence. If I had been able to read German, I might never have begun my experiments in electricity.”

Tragedy and Travel

Things were going well for young Bell’s career when tragedy struck. Both his brothers and he contracted tuberculosis, a common disease at the time. His two brothers died and, at the age of 23, still suffering from the disease, Bell left Britain to move to Ontario in Canada to convalesce, and then to Boston to work in a school for deaf students.

The time for more than dots and dashes

His dreams of transmitting voices over a wire were still spinning round in his creative head. It just needed some new ideas to spark him off again. Samuel Morse had developed Morse code and the electric telegraph, which allowed single messages in the form of long and short electrical pulses, dots and dashes, to be transmitted rapidly along a wire over huge distances. Bell saw the similarity between the idea of being able to send multiple messages and the multiple notes in a musical chord: the “harmonic telegraph” could be a way to send voices.

Chance encounter

Again chance played its role in telecommunications history. At the electrical machine shop of Charles Williams, Bell ran into young Thomas Watson, a skilled electrical machinist able to build the devices that Bell was devising. The two teamed up and started to work toward making Bell’s dream a reality. To make it work they needed to invent two things: something to measure a voice at one end, and another device to reproduce the voice at the other: what we would today call the microphone and the speaker.

The speaker accident

June 2, 1875 was a landmark day for team Bell and Watson. Working in their laboratory they were trying to free a reed, a small flat piece of metal, which they had wound too tightly to the pole of an electromagnet. In trying to free it Watson produced a ‘twang’. Bell heard the twang and came running. It was a sound similar to the sounds in human speech; this was the solution to producing an electronic voice, a discovery that must have come as a relief for all the dogs in the Boston area.

The mercury microphone

Bell had also discovered that a wire vibrated by his voice while partially dipped in a conducting liquid, like mercury or battery acid, could be made to produce a changing electrical current. They now had a device where the voice could be transformed into an electronic signal. All that was needed was to put the two inventions together.

The first ’emergency’ phone call (allegedly)

On March 10, 1876, Bell and Watson set out to test their new system. The story goes that Bell knocked over a container with battery acid, which they were using as the conducting liquid in the ‘microphone’. Spilled acid tends to be nasty and Bell shouted out “Mr. Watson, come here. I want you!” Watson, working in the next room, heard Bell’s cry for help through the wire. The first phone call had been made, and Watson quickly went through to answer it. The telephone was invented, and Bell was only 29 years old.

The world listens

The telephone was finally introduced to the world at the Centennial Exhibition in Philadelphia in 1876. Bell quoted Hamlet over the phone line from the main building 100 yards away, causing the surprised Brazilian Emperor Dom Pedro to exclaim, “My God, it talks”, and talk it did. From there on, the rest, as they say, is history. The telephone spread throughout the world changing the way people lived their lives. Though it was not without its social problems. In many upper class homes it was considered to be vulgar. Many people considered it intrusive (just like some people’s view of mobile phones today!), but eventually it became indispensable.

Can’t keep a good idea down

Inventor Elisha Gray also independently designed his own version of the telephone. In fact both he and Bell rushed their designs to the US patent office within hours of each other, but Alexander Graham Bell patented his telephone first. With the massive amounts of money to be made, Elisha Gray and Alexander Graham Bell entered into a famous legal battle over who had invented the telephone first, and Bell had to fight many legal battles over his lifetime as others claimed they had invented the technology first. Bell won all the legal cases, partly, many claimed, because he was such a good communicator and had such a convincing talking voice. As is often the way, few people now remember the other inventors. In fact, it is now recognised that the Italian Antonio Meucci had invented a method of electronic voice communication earlier, though he did not have the funds to patent it.

Fame and Fortune under Forty

Bell became rich and famous, and he was only in his mid-thirties. The Bell Telephone Company was set up, and later went on to become AT&T, one of America’s foremost telecommunications giants.

Read Terry Pratchett’s brilliant book ‘Going Postal’ for a fun fantasy about inventing and making money from communication technology on Discworld.

More on …

EPSRC supports this blog through research grant EP/W033615/1. 

Recognising (and addressing) bias in facial recognition tech

A unit containing four sockets, 2 USB and 2 for a microphone and speakers.
Happy, though surprised, sockets. Photo taken by Jo Brodie in 2016 at Gladesmore School in London.

Some people have a neurological condition called face blindness (also known as ‘prosopagnosia’) which means that they are unable to recognise people, even those they know well – this can include their own face in the mirror! They only know who someone is once they start to speak, but until then they can’t be sure who it is. They can certainly detect faces, though they might struggle to classify them in terms of gender or ethnicity. In general, though, most people actually have an exceptionally good ability to detect and recognise faces, so good in fact that we even detect faces when they’re not actually there – this is called pareidolia – perhaps you see a surprised face in the picture of USB sockets.

How about computers? There is a lot of hype about face recognition technology as a simple solution to help police forces prevent crime, spot terrorists and catch criminals. What could be bad about being able to pick out wanted people automatically from CCTV images, and so quickly catch them?

What if facial recognition technology isn’t as good at recognising faces as it has sometimes been claimed to be, though? If the technology is being used in the criminal justice system, and gets the identification wrong, this can cause serious problems for people (see Robert Williams’ story in “Facing up to the problems of recognising faces“).

“An audit of commercial facial-analysis tools
found that dark-skinned faces are misclassified
at a much higher rate than are faces from any
other group. Four years on, the study is shaping
research, regulation and commercial practices.”

The unseen Black faces of AI algorithms
(19 October 2022) Nature

In 2018 Joy Buolamwini and Timnit Gebru shared the results of research they’d done, testing three different commercial facial recognition systems. They found that these systems were much more likely to wrongly classify darker-skinned female faces compared to lighter- or darker-skinned male faces. In other words, the systems were not reliable. (Read more about their research in “The gender shades audit“).

“The findings raise questions about
how today’s neural networks, which …
(look for) patterns in huge data sets,
are trained and evaluated.”

Study finds gender and skin-type bias
in commercial artificial-intelligence systems
(11 February 2018) MIT News
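The heart of such an audit is simple to sketch in Python (the records below are invented for illustration; they are not the Gender Shades data): run the system over labelled test faces and compare the error rate for each group.

# Audit sketch: compare a classifier's error rate across groups.
from collections import defaultdict

# Invented test results: each record says which group the face belonged to
# and whether the system classified it correctly.
results = [
    {"group": "darker-skinned female", "correct": False},
    {"group": "darker-skinned female", "correct": True},
    {"group": "darker-skinned female", "correct": False},
    {"group": "lighter-skinned male", "correct": True},
    {"group": "lighter-skinned male", "correct": True},
    {"group": "lighter-skinned male", "correct": True},
]

errors = defaultdict(int)
totals = defaultdict(int)
for r in results:
    totals[r["group"]] += 1
    if not r["correct"]:
        errors[r["group"]] += 1

for group in totals:
    print(f"{group}: {errors[group] / totals[group]:.0%} error rate")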

Their work has shown that face recognition systems do have biases and so are currently not at all fit for purpose. There is some good news though. The three companies whose products they studied made changes to improve their facial recognition systems, and several US cities have already banned the use of this tech in criminal investigations. More cities are calling for bans too and, in Europe, the EU is moving closer to banning the use of live face recognition technology in public places. Others, however, are still rolling it out. It is important not just to believe the hype about new technology, but to make sure we understand its limitations and risks.

Jo Brodie and Paul Curzon, Queen Mary University of London

More on

Further reading

More technical articles

• Joy Buolamwini and Timnit Gebru (2018) Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research 81:1-15. [EXTERNAL]
• The unseen Black faces of AI algorithms (19 October 2022) Nature News & Views [EXTERNAL]





This blog is funded by EPSRC on research agreement EP/W033615/1.


Hidden Figures: NASA’s brilliant calculators

Full Moon with a blue filter
Full Moon image by PIRO from Pixabay

NASA Langley was the birthplace of the U.S. space program, where astronauts like Neil Armstrong learned to land on the moon. Everyone knows the names of astronauts, but behind the scenes a group of African-American women were vital to the space program: Katherine Johnson, Mary Jackson and Dorothy Vaughan. Before electronic computers were invented, ‘computers’ were just people who did calculations, and that’s where they started out, as part of a segregated team of mathematicians. Dorothy Vaughan became the first African-American woman to supervise staff there and helped make the transition from human to electronic computers by teaching herself and her staff how to program in the early programming language FORTRAN.

The women switched from being the computers to programming them. These hidden women helped put the first American, John Glenn, in orbit, and over many years worked on calculations like the trajectories of spacecraft and their launch windows (the small period of time when a rocket must be launched if it is to get to its target). These complex calculations had to be correct. If they got them wrong, the mistakes could ruin a mission, putting the lives of the astronauts at risk. Get them right, as they did, and the result was a giant leap for humankind.

See the film ‘Hidden Figures’ for more of their story.

Paul Curzon, Queen Mary University of London

from the archive

More on …

Magazines …

Front cover of CS4FN issue 29 - Diversity in Computing



This blog is funded by EPSRC on research agreement EP/W033615/1.


Writing together: Clarence ‘Skip’ Ellis

Poster of Skip Ellis showing people working on a shared document
Poster by Richard Butterworth for CS4FN

Back in 1956, Clarence Ellis started his career at the very bottom of the computer industry. He was given a job, at the age of 15, as a “computer operator” … because he was the only applicant. He was also told that under no circumstances should he touch the computer! It’s lucky for all of us that he got the job, though! He went on to develop ideas that have made computers easier for everyone to use. Working at a computer was once a lonely endeavour: one person, on one computer, doing one job. Clarence Ellis changed that. He pioneered ways for people to use computers together effectively.

The graveyard shift

The company Clarence first worked for had a new computer. Just like all computers back then, it was the size of a room. He worked the graveyard shift and his duties were more those of a nightwatchman than a computer operator. It could have been a dead-end job, but it gave him lots of spare time and, more importantly, access to all the computer’s manuals … so he read them … over and over again. He didn’t need to touch the computer to learn how to use it!

Saving the day

His studying paid dividends. Only a few months after he started, the company had a potential disaster on its hands. They ran out of punch cards. Back then punch cards were used to store both programs and data. They used patterns of holes and non-holes as a way to store numbers as binary in a way a computer could read them. Without punch cards the computer could not work!

It had to though, because the payroll program had to run before the night was out. If it didn’t then no-one would be paid that month. Because he had studied the manuals in detail, and more so than anyone else, Clarence was the only person who could work out how to reuse old punch cards. The problem was that the computer used a system called ‘parity checking’ to spot mistakes. In its simplest form parity checking of a punch card involves adding an extra binary digit (an extra hole or no-hole) on the end of each number. This is done in a way that ensures that the number of holes is even. If there is an even number of holes already, the extra digit is left as a non-hole. If, on the other hand there is an odd number of holes, a hole is punched as the extra digit. That extra binary digit isn’t part of the number. It’s just there so the computer can check if the number has been corrupted. If a hole was accidentally or otherwise turned into a non-hole (or vice versa), then this would show up. It would mean there was now an odd number of holes. Special circuitry in the computer would spot this and spit out the card, rejecting it. Clarence knew how to switch that circuitry off. That meant they could change the numbers on the cards by adding new holes without them being rejected.
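A minimal sketch of that even-parity check in Python (a card is modelled simply as a list of bits, 1 for a hole and 0 for no hole; the layout is made up, not the real card format):

# Even parity sketch: the extra digit makes the total number of holes even.
def add_parity(bits):
    return bits + [sum(bits) % 2]   # append 1 if the count of holes is odd

def parity_ok(bits_with_parity):
    # The check the computer's circuitry performed: is the hole count even?
    return sum(bits_with_parity) % 2 == 0

card = add_parity([1, 0, 1, 1, 0])
print(parity_ok(card))  # True: the card is accepted

card[0] = 0             # a hole changed to a non-hole
print(parity_ok(card))  # False: with checking on, the card would be rejected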

After that success he was allowed to become a real operator and was relied on to troubleshoot whenever there were problems. His career was up and running.

Clicking icons

He later worked at Xerox PARC, a massively influential research centre. He was part of the team that invented graphical user interfaces (GUIs). With GUIs, Xerox PARC completely transformed the way we used computers. Instead of typing obscure and hard-to-remember commands, they introduced the now-standard ideas of windows, icons, dragging and dropping, using a mouse, and more. Clarence, himself, has been credited with inventing the idea of clicking on an icon to run a program.

Writing Together

As if that wasn’t enough of an impact, he went on to help make groupware a reality: software that supports people working together. His focus was on software that let people write a document together. With Simon Gibbs he developed a crucial algorithm called Operational Transformation. It allows people to edit the same document at the same time without it becoming hopelessly muddled. This is actually very challenging. You have to ensure that two (or more) people can change the text at exactly the same time, and even at the same place, without each ending up with a different version of the document.

The actual document sits on a server computer. It must make sure that its copy is always the same as the ones everyone is individually editing. When people type changes into their local copy, the master is sent messages informing it of the actions they performed. The trouble is that the order those messages arrive in can change what happens. Clarence’s operational transformation algorithm solved this by changing the commands from each person into ones that work consistently whatever order they are applied in. It is the transformed operation that is applied to the master. That master version is the version everyone then sees as their local copy. Ultimately everyone sees the same version. This algorithm is at the core of programs like Google Docs that have ensured collaborative editing of documents is now commonplace.
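A highly simplified sketch of the idea in Python, handling only two concurrent insert operations (real operational transformation, including Ellis and Gibbs’s original algorithm, also handles deletes, many users and many tricky cases):

# Operational transformation sketch for inserts into a shared string.
def apply(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

def transform(op, against):
    # Adjust op so it still means the same thing after 'against' is applied.
    pos, text = op
    other_pos, other_text = against
    if other_pos <= pos:
        pos += len(other_text)  # the other insert shifted our position right
    return (pos, text)

doc = "Hello world"
alice = (5, ",")   # Alice inserts "," after "Hello"
bob = (11, "!")    # at the same time, Bob adds "!" at the end

# The server applies Alice's edit, then Bob's edit transformed against it.
master = apply(apply(doc, alice), transform(bob, alice))
print(master)      # Hello, world!  (both edits survive, one consistent result)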

Clarence Ellis started his career with a lonely job. By the end of his career he had helped ensure that writing on a computer at least no longer needs to be a lonely affair.

Paul Curzon, Queen Mary University of London


More on …

Posters

  • Skip Ellis
  • Computer Science Hero Posters
    • One of the aims of our Diversity in Computing posters is to help a classroom of young people see the range of computer scientists which includes people who look like them and people who don’t look like them. You can download our posters free from the link above.

Magazines …

Front cover of CS4FN issue 29 - Diversity in Computing



This blog is funded by EPSRC on research agreement EP/W033615/1.


The original version of this article was funded by the Institute of Coding.

Ada Lovelace: Visionary

It is 1843, Queen Victoria is on the British throne. The industrial revolution has transformed the country. Steam, cogs and iron rule. The first computers won’t be successfully built for a hundred years. Through the noise and grime one woman sees the future. A digital future that is only just being realised.

Ada Lovelace is often said to be the first programmer. She wrote programs for a designed, but yet to be built, computer called the Analytical Engine. She was something much more important than a programmer, though. She was the first truly visionary person to see the real potential of computers. She saw they would one day be creative.

Charles Babbage had come up with the idea of the Analytical Engine – how to make a machine that could do calculations so we wouldn’t need to do them by hand. It would be another century before his ideas could be realised and the first computer was actually built. As he tried to get the money and build the computer, he needed someone to help write the programs to control it – the instructions that would tell it how to do calculations. That’s where Ada came in. They worked together to try to realise their joint dream, working out how to program as they went.

Ada also wrote “The Analytical Engine has no pretensions to originate anything.” So how does that fit with her belief that computers could be creative? Read on and see if you can unscramble the paradox.

Ada was a mathematician with a creative flair. While Charles had come up with the innovative idea of the Analytical Engine itself, he didn’t see beyond his original idea of the computer as a calculator. She saw that computers could do much more than that.

The key innovation behind her idea was that the numbers could stand for more than just quantities in calculations. They could represent anything – music for example. Today when we talk of things being digital – digital music, digital cameras, digital television, all we really mean is that a song, a picture, a film can all be stored as long strings of numbers. All we need is to agree a code of what the numbers mean – a note, a colour, a line. Once that is decided we can write computer programs to manipulate them, to store them, to transmit them over networks. Out of that idea comes the whole of our digital world.
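A tiny sketch of the idea in Python, with a made-up code mapping numbers to musical notes: the numbers are all the computer stores, and the agreed code turns them back into music.

# Digital encoding sketch: numbers stand for notes once a code is agreed.
CODE = {0: "C", 1: "D", 2: "E", 3: "F", 4: "G", 5: "A", 6: "B"}

tune_as_numbers = [2, 2, 3, 4, 4, 3, 2, 1]  # a melody stored purely as numbers

# Programs can manipulate the numbers (here, shift the tune up one step)...
shifted = [(n + 1) % 7 for n in tune_as_numbers]

# ...and the agreed code turns the numbers back into notes.
print([CODE[n] for n in tune_as_numbers])
print([CODE[n] for n in shifted])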

Ada saw even further though. She combined maths with a creative flair and so realised that not only could computers store and play music, they could also potentially create it – they could be composers. She foresaw the whole idea of machines being creative. She wasn’t just the first programmer, she was the first truly creative programmer.

Paul Curzon, Queen Mary University of London

More on …

Magazines



This blog is funded by EPSRC on research agreement EP/W033615/1.
