Al-Jazari: the father of robotics

by Paul Curzon, Queen Mary University of London

Al Jazari's hand washing automaton
Image from Wikipedia

Science fiction films are full of humanoid robots acting as servants, workers, friends or colleagues. The first real ones were created during the Islamic Golden Age, around a thousand years ago. 

Robots and automata have been the subject of science fiction for over a century, and their history in myth goes back millennia. So does the actual building of lifelike animated machines. The Ancient Greeks and Egyptians built automata: animal- or human-like contraptions that seemed to come to life. These early automata had no practical use, though. They were illusions, built to entertain or simply to amaze people. 

It was the great inventor of mechanical gadgets, Ismail Al-Jazari, working in the 12th century during the Islamic Golden Age of science, engineering and art, who first built robot-like machines with actual purposes. Powered by water, his automata acted as servants doing specific tasks. One was a humanoid automaton that acted as a servant during the ritual purification of hand washing before saying prayers. It poured water into a basin from a jug and then handed over a towel, mirror and comb, using a toilet-style flushing mechanism to deliver the water from a tank. Other inventions included a waitress automaton that served drinks and a boat carrying robotic musicians that played instruments. The musical boat may even have been programmable. 

We know about Al-Jazari’s machines because he not only created mechanical gadgets and automata, he also wrote a book about them: The Book of Knowledge of Ingenious Mechanical Devices. It’s possible that it inspired Leonardo Da Vinci who, in addition to being a famous painter of the Italian Renaissance, was a prolific inventor of machines. 

Such “robots” were not everyday machines: the hand washing automaton was made for the king. Al-Jazari’s book, however, didn’t just describe the machines, it explained how to build them, making it possibly the first textbook to cover automata. If you weren’t a king, then perhaps you could, at least, have a go at making your own servants. 

More on …

Related Magazines …


EPSRC supports this blog through research grant EP/W033615/1. 

A PC Success

by Paul Curzon, Queen Mary University of London

An outline of a head showing the brain and spinal column on a digital background of binary and circuitry

Image by Gerd Altmann from Pixabay

We have moved on to smartphones, tablets and smartwatches, but for 30 years the desktop computer ruled, and originally not just any desktop computer, the IBM PC. A key person behind its success was African American computer scientist, Mark Dean.

IBM is synonymous with computers. It became the computing industry powerhouse as a result of building large, room-sized computers for businesses. The original model of how computers would be used followed IBM president Thomas J Watson’s supposed quote that “there is a world market for about five computers.” They produced gigantic computers that could be dialled into by those who needed computing time. That prediction was very quickly shown to be wrong, though, as computer sales boomed.

Becoming more personal

Mark Dean was the first African American to receive IBM’s highest honour.

By the end of the 1970s the computing world was starting to change. Small, but powerful, mini-computers had taken off and some companies were pushing the idea of computers for the desktop. IBM was at risk of being badly left behind… until they suddenly roared back into the lead with the IBM personal computer and almost overnight became the world leaders once more, revolutionising the way computers were seen, sold and used. Their predictions were still a little off, though: initial sales of the IBM PC were eight times higher than they expected! Within a few years they were selling many hundreds of thousands a year and making billions of dollars. Soon every office desk had one and PC had become an everyday word used to mean computer.

Get on the bus

So who was behind this remarkable success? One of the design team who created the IBM PC was Mark Dean. As a consequence of his work on the PC, he became the first African American to be made an IBM fellow (IBM’s highest honour). One of his important contributions was in leading the development of the PC’s bus. Despite the name, a computer bus is more like a road than a vehicle, so its other name of data highway is perhaps better. It is the way the computer chip communicates with the outside world.

A computer on its own is not really that useful to have on your desktop. It needs a screen, keyboard and so on. A computer bus is a bit like the nervous system used to send messages from your brain around your body. Just as your brain interacts with the world, receiving messages from your senses and taking action by sending messages to your muscles, all using your nervous system, a computer chip sends signals to its peripherals using the bus. Those peripherals include things like the mouse, keyboard, printers, monitors, modems, external memory devices and more: the equivalents of its ways of sensing the world and interacting with it.

The bus is in essence just a set of connectors into the chip, wires in and out of it with different allocated uses, plus a set of rules about how they are used. All peripherals then follow the same set of rules to communicate with the computer. It means you can easily swap peripherals in and out (unlike your body!). Later versions of the PC bus that Mark designed ultimately became an industry standard for desktop computers.
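
To get a feel for the idea of one shared set of rules, here is a toy sketch in Python. It is purely illustrative (nothing like the real electrical signals of the IBM PC bus): any made-up peripheral that follows the same simple rule can be plugged in and talked to in the same way.

```python
class Bus:
    """A toy 'data highway': shared connection points plus one simple rule
    that every peripheral must follow to talk to the chip."""

    def __init__(self):
        self.peripherals = {}

    def plug_in(self, address, peripheral):
        # Devices can be swapped in and out freely, as long as they follow the rules.
        self.peripherals[address] = peripheral

    def send(self, address, data):
        # The rule: every message goes to an address, and the device there replies.
        return self.peripherals[address].receive(data)


class Keyboard:
    def receive(self, data):
        return "key pressed: A"


class Printer:
    def receive(self, data):
        return "printing: " + str(data)


bus = Bus()
bus.plug_in(1, Keyboard())
bus.plug_in(2, Printer())
print(bus.send(2, "Hello"))   # the chip talks to every peripheral in the same way
```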

Mark can fairly be called a key member of that PC development team, given he was responsible for a third of the patents behind the PC. He didn’t stop there though. He has continued to be awarded patents, most recently related to artificial neural networks inspired by neuroscience. He has moved on from making computer equivalents of the nervous system to computer equivalents of the brain itself.

More on …

Related Magazines …


EPSRC supports this blog through research grant EP/W033615/1. 

In space no one can hear you …

by Paul Curzon, Queen Mary University of London

Red arrows aircraft flying close to the ground.
Image by Bruno Albino from Pixabay 

Johanna Lucht could do maths before she learned language. Why? Because she was born deaf and there was little support for deaf people where she lived. Despite, or perhaps because of, that she became a computer scientist and works for NASA. 

Being deaf can be very, very disabling if you don’t get the right help. As a child, Johanna had no one to help her to communicate apart from her mother, who tried to teach her sign language from a book. Throughout most of her primary school years she couldn’t have any real conversations with anyone, never mind learn. She got the lifeline she needed when the school finally took on an interpreter, Keith Wann, to help her. She quickly learned American Sign Language working with him. Learning your first language is crucial to learning other things, and suddenly she was able to learn in school like other children. She caught up remarkably quickly, showing that an intelligent girl had been locked inside that silent, shy child. More than anything though, from Keith she learned never to give up. 

Her early ability in maths, now her favourite subject, came to the fore as she excelled at science and technology. By this point her family had moved from Germany where she grew up to Alaska where there was much more support, an active deaf community for her to join and lots more opportunities that she started to take. She signed up for a special summer school on computing specifically for deaf people at the University of Washington, learning the programming skills that became the foundation for her future career at NASA. At only 17 she even returned to help teach the course. From there, she signed up to do Computer Science at university and applied for an internship at NASA. To her shock and delight she was given a place. 

Hitting the ground running 

A big problem for pilots, especially of fighter aircraft, is “controlled flight into terrain”: a technical-sounding phrase that just means flying the plane into the ground for no good reason other than how difficult it is to fly a fighter aircraft as low as possible over hazardous terrain. The solution is a ground collision avoidance system: basically, the pilots need a computer to warn them when hazardous terrain is coming up and when they are too close for comfort and should take evasive action. Johanna helped work on the interface design: the part that pilots see and interact with. To be of any use in such high-pressure situations this communication has to be slick and very clear. 

She impressed those she was working with so much that she was offered a full-time job and so became an engineer at NASA Armstrong, working with a team designing, testing and integrating new research technology into experimental aircraft. She had to run tests with other technicians, the first problem being how to communicate effectively with the rest of the team. She succeeded twice as fast as her bosses expected, taking only a couple of days before the team were all working well together. Her experience of the challenges she had faced as a child was now providing her with the skills to do brilliantly in a job where teamwork and communication skills are vital. 

Mission control 

Eventually, she gained a place in Mission Control. There, slick comms are vital too. The engineers have to monitor the flight including all the communication as it happens, and be able to react to any developing situation. Johanna worked with an interpreter who listened directly to all the flight communications, signing it all for her to see on a second monitor. Working with interpreters in a situation like this is in itself a difficult task and Johanna had to make sure not only that they could communicate effectively but that the interpreter knew all the technical language that might come up in the flight. Johanna had plenty of experience dealing with issues like that though, and they worked together well, with the result that in April 2017 Johanna became the first deaf person to work in NASA mission control on a live mission … where of course she did not just survive the job, she excelled. 

As Johanna has pointed out it is not deafness itself that disables people, but the world deaf people live in that does. When in a world that wasn’t set up for deaf people, she struggled, but as soon as she started to get the basic help she needed that all changed. Change the environment to one that does not put up obstacles and deaf people can excel like anyone else. In space no one can hear anyone scream or for that matter speak. We don’t let it stop our space missions though. We just invent appropriate technology and make the problems go away. 

More on …

Read more about Johanna Lucht:

Related Magazines …


EPSRC supports this blog through research grant EP/W033615/1. 

Alexander Graham Bell: It’s good to talk

An antique phone

Image modified version of that by Christine Sponchia from Pixabay

by Peter W McOwan, Queen Mary University of London

(From the archive)

The famous inventor of the telephone, Alexander Graham Bell, was born in 1847 in Edinburgh, Scotland. His story is a fascinating one, showing that like all great inventions, a combination of talent, timing, drive and a few fortunate mistakes are what’s needed to develop a technology that can change the world.

A talented Scot

As a child the young Alexander Graham Bell, Aleck, as he was known to his family, showed remarkable talents. He had the ability to look at the world in a different way, and come up with creative solutions to problems. Aged 14, Bell designed a device to remove the husks from wheat by combining a nailbrush and paddle into a rotary-brushing wheel.

Family talk

The Bell family had a talent with voices. His grandfather had made a name for himself as a notable, but often unemployed, actor. Aleck’s mother was deaf, but rather than use her ear trumpet to talk to her like everyone else did, the young Alexander came up with the cunning idea that speaking to her in low, booming tones very close to her forehead would allow her to hear his voice through the vibrations it made. This special bond with his mother gave him a lifelong interest in the education of deaf people, which, combined with his inventive genius and some odd twists of fate, was to change the world.

A visit to London, and a talking dog

While visiting London with his father, Aleck was fascinated by a demonstration of Sir Charles Wheatstone’s “speaking machine”, a mechanical contraption that made human-like noises. On returning to Edinburgh, their father challenged Aleck and his older brother to come up with a machine of their own. After some hard work and scrounging bits from around the place they built a machine with a mouth, throat, nose, movable tongue and bellows for lungs, and it worked: it made human-like sounds. Delighted by his success, Aleck went a step further and massaged the mouth of his Skye terrier so that the dog’s growls were heard as words. Pretty wruff on the poor dog.

Speaking of teaching

By the time he was 16, Bell was teaching music and elocution at a boys’ boarding school. He was still fascinated by trying to help those with speech problems improve their quality of life, and was very successful in this, later publishing two well-respected books called ‘The Practical Elocutionist’ and ‘Stammering and Other Impediments of Speech’. Alexander and his brother toured the country giving demonstrations of their techniques to improve people’s speech. He also started his studies at the University of London, where a mistake in reading German was to change his life and lay the foundations for the telecommunications revolution.

A ‘silly’ language mistake that changed the world

At university, Bell became fascinated by the ideas of the German physicist Hermann von Helmholtz. Von Helmholtz had produced a book, ‘On The Sensations of Tone’, in which he said that vowel sounds, a, e, i, o and u, could be produced using electrical tuning forks and resonators. However, Bell couldn’t read German very well, and mistakenly believed that von Helmholtz had written that vowel sounds could be transmitted over a wire. This misunderstanding changed history. As Bell later stated, “It gave me confidence. If I had been able to read German, I might never have begun my experiments in electricity.”

Tragedy and Travel

Things were going well for young Bell’s career when tragedy struck. He and both his brothers contracted tuberculosis, a common disease at the time. His two brothers died and, at the age of 23, still suffering from the disease, Bell left Britain to move to Ontario in Canada to convalesce, and then to Boston to work in a school for deaf mutes.

The time for more than dots and dashes

His dreams of transmitting voices over a wire were still spinning round in his creative head. It just needed some new ideas to spark him off again. Samuel Morse had developed Morse code and the electric telegraph, which allowed single messages in the form of long and short electronic pulses, dots and dashes, to be transmitted rapidly along a wire over huge distances. Bell saw the similarities between the idea of being able to send multiple messages at once and the multiple notes in a musical chord: a “harmonic telegraph” could be a way to send voices.

Chance encounter

Again chance played its role in telecommunications history. At the electrical machine shop of Charles Williams, Bell ran into young Thomas Watson, a skilled electrical machinist able to build the devices that Bell was devising. The two teamed up and started to work towards making Bell’s dream a reality. To do that they needed to invent two things: something to measure a voice at one end, and another device to reproduce the voice at the other; what we would today call the microphone and the speaker.

The speaker accident

June 2, 1875 was a landmark day for team Bell and Watson. Working in their laboratory they were trying to free a reed, a small flat piece of metal, which they had wound too tightly to the pole of an electromagnet. In trying to free it Watson produced a ‘twang’. Bell heard the twang and came running. It was a sound similar to the sounds in human speech: this was the solution to producing an electronic voice, a discovery that must have come as a relief for all the dogs in the Boston area.

The mercury microphone

Bell had also discovered that a wire vibrated by his voice while partially dipped in a conducting liquid, like mercury or battery acid, could be made to produce a changing electrical current. They now had a device by which the voice could be transformed into an electronic signal. All that was needed was to put the two inventions together.

The first ’emergency’ phone call (allegedly)

On March 10, 1876, Bell and Watson set out to test their new system. The story goes that Bell knocked over a container with battery acid, which they were using as the conducting liquid in the ‘microphone’. Spilled acid tends to be nasty and Bell shouted out “Mr. Watson, come here. I want you!” Watson, working in the next room, heard Bell’s cry for help through the wire. The first phone call had been made, and Watson quickly went through to answer it. The telephone was invented, and Bell was only 29 years old.

The world listens

The telephone was finally introduced to the world at the Centennial Exhibition in Philadelphia in 1876. Bell quoted Hamlet over the phone line from the main building 100 yards away, causing the surprised Brazilian Emperor Dom Pedro to exclaim, “My God, it talks”, and talk it did. From there on, the rest, as they say, is history. The telephone spread throughout the world changing the way people lived their lives. Though it was not without its social problems. In many upper class homes it was considered to be vulgar. Many people considered it intrusive (just like some people’s view of mobile phones today!), but eventually it became indispensable.

Can’t keep a good idea down

Inventor Elisha Gray also independently designed his own version of the telephone. In fact both he and Bell rushed their designs to the US patent office within hours of each other, but Alexander Graham Bell patented his telephone first. With the massive amounts of money to be made, Elisha Gray and Alexander Graham Bell entered into a famous legal battle over who had invented the telephone first, and Bell had to fight many legal battles over his lifetime as others claimed they had invented the technology first. Bell won all the legal cases, partly, many claimed, because he was such a good communicator and had such a convincing talking voice. As is often the way, few people now remember the other inventors. In fact, it is now recognised that the Italian Antonio Meucci had invented a method of electronic voice communication earlier, though he did not have the funds to patent it.

Fame and Fortune under Forty

Bell became rich and famous, and he was only in his mid-thirties. The Bell Telephone Company was set up, and later went on to become AT&T, one of America’s foremost telecommunications giants.

Read Terry Pratchett’s brilliant book ‘Going Postal’ for a fun fantasy about inventing and making money from communication technology on Discworld.

Related Magazines and a new book…


EPSRC supports this blog through research grant EP/W033615/1. 

Recognising (and addressing) bias in facial recognition tech – the Gender Shades Audit #BlackHistoryMonth

The five shades used for skin tone emojis

Some people have a neurological condition called face blindness (also known as ‘prosopagnosia’) which means that they are unable to recognise people, even those they know well – this can include their own face in the mirror! They only know who someone is once they start to speak; until then they can’t be sure who it is. They can certainly detect faces, though they might struggle to classify them in terms of gender or ethnicity.

In general, though, most people actually have an exceptionally good ability to detect and recognise faces – so good, in fact, that we even detect faces when they’re not actually there. This is called pareidolia – perhaps you see a surprised face in this picture of USB sockets below.

A unit containing four sockets, 2 USB and 2 for a microphone and speakers.
Happy, though surprised, sockets

What if facial recognition technology isn’t as good at recognising faces as it has sometimes been claimed to be? If the technology is being used in the criminal justice system, and gets the identification wrong, this can cause serious problems for people (see Robert Williams’ story in “Facing up to the problems of recognising faces“).

In 2018 Joy Buolamwini and Timnit Gebru shared the results of research they’d done, testing three different commercial facial recognition systems. They found that these systems were much more likely to wrongly classify darker-skinned female faces compared to lighter- or darker-skinned male faces. In other words, the systems were not reliable.

“The findings raise questions about how today’s neural networks, which … (look for) patterns in huge data sets, are trained and evaluated.”

Study finds gender and skin-type bias in commercial artificial-intelligence systems
(11 February 2018) MIT News

The Gender Shades Audit

Facial recognition systems are trained to detect, classify and even recognise faces using a bank of photographs of people. Joy and Timnit examined two banks of images used to train facial recognition systems and found that around 80 per cent of the photos used were of people with lighter-coloured skin. 

If the photographs aren’t fairly balanced in terms of having a range of people of different gender and ethnicity then the resulting technologies will inherit that bias too. Effectively the systems here were being trained to recognise light-skinned people.

The Pilot Parliaments Benchmark

They decided to create their own set of images and wanted to ensure that these covered a wide range of skin tones and had an equal mix of men and women (‘gender parity’). They did this by selecting photographs of members of various national parliaments that are known to have a reasonably equal mix of men and women, choosing parliaments from countries with predominantly darker-skinned people (Rwanda, Senegal and South Africa) and from countries with predominantly lighter-skinned people (Iceland, Finland and Sweden). 

They labelled all the photos according to gender (they did have to make some assumptions based on name and appearance if pronouns weren’t available) and used the Fitzpatrick scale (see Different shades, below) to classify skin tones. The result was a set of photographs labelled as dark male, dark female, light male, light female with a roughly equal mix across all four categories – this time, 53 per cent of the people were light-skinned (male and female).
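
Checking the balance of a labelled benchmark like this comes down to counting how many photos fall into each group. Here is a rough sketch of that step in Python (with hypothetical labels, not the authors’ actual data or code):

```python
from collections import Counter

# Hypothetical labels, one per photo: (skin tone, gender)
labels = [
    ("darker", "female"), ("darker", "male"),
    ("lighter", "female"), ("lighter", "male"),
    ("darker", "female"), ("lighter", "male"),
    # ... one entry for every photo in the benchmark
]

counts = Counter(labels)
for group, n in counts.items():
    print(group, round(100 * n / len(labels)), "% of the benchmark")
```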

A composite image showing the range of skin tone classifications with the Fitzpatrick scale on top and the skin tone emojis below.

Different shades

The Fitzpatrick skin tone scale (top) is used by dermatologists (skin specialists) as a way of classifying how someone’s skin responds to ultraviolet light. There are six points on the scale with 1 being the lightest skin and 6 being the darkest. People whose skin tone has a lower Fitzpatrick score are more likely to burn in the sun and not tan, and are also at greater risk of melanoma (skin cancer). People with higher scores have darker skin which is less likely to burn and they have a lower risk of skin cancer. 

Below it is a variation of the Fitzpatrick scale, with five points, which is used to create the skin tone emojis that you’ll find on most messaging apps in addition to the ‘default’ yellow. 

Testing three face recognition systems

Joy and Timnit tested the three commercial face recognition systems against their new database of photographs – a fair test across a wide range of faces that a recognition system might come across – and this is where they found that the systems were less able to correctly identify particular groups of people. The systems were very good at spotting lighter-skinned men and darker-skinned men, but were less able to correctly identify darker-skinned women, and women overall.
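
The heart of an audit like this is simple to express in code: run the system on every photo, then work out the accuracy separately for each group rather than as one overall score. Here is a sketch of that idea (with hypothetical results, not the study’s own code):

```python
from collections import defaultdict

# Hypothetical results: (group, what the system said, the correct answer) per photo.
results = [
    ("lighter male", "male", "male"),
    ("darker female", "male", "female"),   # a misclassification
    ("darker female", "female", "female"),
    # ... one entry for every photo tested
]

right = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in results:
    total[group] += 1
    if predicted == actual:
        right[group] += 1

for group in total:
    print(group, round(100 * right[group] / total[group]), "% correct")
```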

These tools, trained on sets of data that had a bias built into them, inherited those biases and this affected how well they worked. Joy and Timnit published the results of their research and it was picked up and discussed in the news as people began to realise the extent of the problem, and what this might mean for the ways in which facial recognition tech is used. 

“An audit of commercial facial-analysis tools found that dark-skinned faces are misclassified at a much higher rate than are faces from any other group. Four years on, the study is shaping research, regulation and commercial practices.”

The unseen Black faces of AI algorithms (19 October 2022) Nature

There is some good news though. The three companies made changes to improve their facial recognition technology systems and several US cities have already banned the use of this tech in criminal investigations, and more cities are calling for it too. People around the world are becoming more aware of the limitations of this type of technology and the harms to which it may be (perhaps unintentionally) put and are calling for better regulation of these systems.

Further reading

Study finds gender and skin-type bias in commercial artificial-intelligence systems (11 February 2018) MIT News
Facial recognition software is biased towards white men, researcher finds (11 February 2018) The Verge
Go read this special Nature issue on racism in science (21 October 2022) The Verge

More technical articles

• Joy Buolamwini and Timnit Gebru (2018) Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research 81:1-15.
The unseen Black faces of AI algorithms (19 October 2022) Nature News & Views


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


This blog is funded through EPSRC grant EP/W033615/1.

Hidden Figures: NASA’s brilliant calculators #BlackHistoryMonth

Full Moon and silhouetted tree tops

by Paul Curzon, Queen Mary University of London

Full Moon with a blue filter
Full Moon image by PIRO from Pixabay

NASA Langley was the birthplace of the U.S. space program, where astronauts like Neil Armstrong learned to land on the moon. Everyone knows the names of astronauts, but behind the scenes a group of African-American women were vital to the space program: Katherine Johnson, Mary Jackson and Dorothy Vaughan. Before electronic computers were invented, ‘computers’ were just people who did calculations, and that’s where these women started out, as part of a segregated team of mathematicians. Dorothy Vaughan became the first African-American woman to supervise staff there and helped make the transition from human to electronic computers by teaching herself and her staff how to program in the early programming language FORTRAN.

FORTRAN code on a punched card, from Wikipedia.

The women switched from being the computers to programming them. These hidden women helped put the first American, John Glenn, in orbit, and over many years worked on calculations like the trajectories of spacecraft and their launch windows (the small period of time when a rocket must be launched if it is to get to its target). These complex calculations had to be correct. If they got them wrong, the mistakes could ruin a mission, putting the lives of the astronauts at risk. Get them right, as they did, and the result was a giant leap for humankind.

See the film ‘Hidden Figures’ for more of their story (trailer below).

This story was originally published on the CS4FN website and was also published in issue 23, The Women Are (Still) Here, on p21 (see ‘Related magazine’ below).

More on …


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


Related Magazine …

This blog is funded through EPSRC grant EP/W033615/1.

Writing together: Clarence ‘Skip’ Ellis #BlackHistoryMonth

by Paul Curzon, Queen Mary University of London

Small photo of Clarence 'Skip' Ellis
Clarence ‘Skip’ Ellis

Back in 1956, Clarence Ellis started his career at the very bottom of the computer industry. He was given a job, at the age of 15, as a “computer operator” … because he was the only applicant. He was also told that under no circumstances should he touch the computer! It’s lucky for all of us that he got the job, though. He went on to develop ideas that have made computers easier for everyone to use. Working at a computer was once a lonely endeavour: one person, on one computer, doing one job. Clarence Ellis changed that. He pioneered ways for people to use computers together effectively.

The graveyard shift

The company Clarence first worked for had a new computer. Just like all computers back then, it was the size of a room. He worked the graveyard shift and his duties were more those of a nightwatchman than a computer operator. It could have been a dead-end job, but it gave him lots of spare time and, more importantly, access to all the computer’s manuals … so he read them … over and over again. He didn’t need to touch the computer to learn how to use it!

Saving the day

His studying paid dividends. Only a few months after he started, the company had a potential disaster on its hands: they ran out of punch cards. Back then punch cards were used to store both data and programs. They used patterns of holes and non-holes as a way to store numbers as binary in a way a computer could read them. Without punch cards the computer could not work!

It had to, though, because the payroll program had to run before the night was out. If it didn’t, then no one would be paid that month. Because he had studied the manuals in detail, and more so than anyone else, Clarence was the only person who could work out how to reuse old punch cards. The problem was that the computer used a system called ‘parity checking’ to spot mistakes. In its simplest form, parity checking of a punch card involves adding an extra binary digit (an extra hole or non-hole) on the end of each number. This is done in a way that ensures that the number of holes is even. If there is an even number of holes already, the extra digit is left as a non-hole. If, on the other hand, there is an odd number of holes, a hole is punched as the extra digit. That extra binary digit isn’t part of the number. It’s just there so the computer can check whether the number has been corrupted. If a hole was accidentally (or otherwise) turned into a non-hole, or vice versa, then this would show up: there would now be an odd number of holes. Special circuitry in the computer would spot this and spit the card out, rejecting it. Clarence knew how to switch that circuitry off. That meant they could change the numbers on the cards by adding new holes without them being rejected.
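
Here is the even-parity idea in a short sketch, with Python standing in for the card machine’s circuitry, purely to illustrate the rule Clarence was switching off:

```python
def add_parity_bit(holes):
    """Add an extra bit so the total number of holes (1s) is even."""
    extra = sum(holes) % 2            # 1 if there is currently an odd number of holes
    return holes + [extra]

def looks_corrupted(card):
    """The check the computer performs: an odd number of holes means trouble."""
    return sum(card) % 2 != 0

card = add_parity_bit([1, 0, 1, 1])   # three holes, so a fourth is punched
print(card)                            # [1, 0, 1, 1, 1]
print(looks_corrupted(card))           # False: the card is accepted
card[1] = 1                            # punch an extra hole by mistake...
print(looks_corrupted(card))           # ...True: the card would be rejected
```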

After that success he was allowed to become a real operator and was relied on to troubleshoot whenever there were problems. His career was up and running.

Clicking icons

He later worked at Xerox PARC, a massively influential research centre. He was part of the team that invented graphical user interfaces (GUIs). With GUIs, Xerox PARC completely transformed the way we use computers. Instead of typing obscure and hard-to-remember commands, they introduced the now standard ideas of windows, icons, dragging and dropping, using a mouse, and more. Clarence himself has been credited with inventing the idea of clicking on an icon to run a program.

Writing Together

As if that wasn’t enough of an impact, he went on to help make groupware a reality: software that supports people working together. His focus was on software that let people write a document together. With Simon Gibbs he developed a crucial algorithm called Operational Transformation. It allows people to edit the same document at the same time without it becoming hopelessly muddled. This is actually very challenging. You have to ensure that two (or more) people can change the text at exactly the same time, and even at the same place, without each ending up with a different version of the document.

The actual document sits on a server computer, which must make sure that its copy is always the same as the ones everyone is individually editing. When people type changes into their local copy, the master is sent messages informing it of the actions they performed. The trouble is that the order in which those messages arrive can change what happens. Clarence’s operational transformation algorithm solved this by changing the commands from each person into ones that work consistently whatever order they are applied in. It is the transformed operation that is applied to the master. That master version is the version everyone then sees as their local copy. Ultimately everyone sees the same version. This algorithm is at the core of programs like Google Docs that have made collaborative editing of documents commonplace.
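
Here is the core idea in miniature: a sketch that handles only two simultaneous insertions. Real operational transformation, as in Clarence’s algorithm and in tools like Google Docs, also has to handle deletions and break ties consistently (for example by user id), but the sketch shows why transforming one edit against another makes the order of arrival stop mattering:

```python
def transform_insert(op, other):
    """Adjust op so it still makes sense after 'other' has already been applied.
    Each op is (position, text). If the other insertion happened at or before
    our position, shift our position right by the length of its text."""
    pos, text = op
    other_pos, other_text = other
    if other_pos <= pos:
        pos += len(other_text)
    return (pos, text)

def apply_insert(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

doc = "Hello world"
alice = (5, ",")     # Alice inserts a comma after "Hello"
bob = (11, "!")      # at the same moment, Bob adds "!" at the end

# Server receives Alice first, then Bob's edit transformed against Alice's...
v1 = apply_insert(apply_insert(doc, alice), transform_insert(bob, alice))
# ...or Bob first, then Alice's edit transformed against Bob's.
v2 = apply_insert(apply_insert(doc, bob), transform_insert(alice, bob))

assert v1 == v2 == "Hello, world!"   # everyone ends up with the same document
```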

Clarence Ellis started his career with a lonely job. By the end of his career he had helped ensure that writing on a computer at least no longer needs to be a lonely affair.


This article was originally published on the CS4FN website. One of the aims of our Diversity in Computing posters (see below) is to help a classroom of young people see the range of computer scientists which includes people who look like them and people who don’t look like them. You can download our posters free from the link below.

More on …


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


This blog is funded through EPSRC grant EP/W033615/1.

The original version of this article was funded by the Institute of Coding.

Ada Lovelace: Visionary

Cover of Issue 20 of CS4FN, celebrating Ada Lovelace

By Paul Curzon, Queen Mary University of London

It is 1843, Queen Victoria is on the British throne. The industrial revolution has transformed the country. Steam, cogs and iron rule. The first computers won’t be successfully built for a hundred years. Through the noise and grime one woman sees the future. A digital future that is only just being realised.

Ada Lovelace is often said to be the first programmer. She wrote programs for a designed, but yet to be built, computer called the Analytical Engine. She was something much more important than a programmer, though. She was the first truly visionary person to see the real potential of computers. She saw they would one day be creative.

Charles Babbage had come up with the idea of the Analytical Engine – how to make a machine that could do calculations so we wouldn’t need to do them by hand. It would be another century before his ideas could be realised and the first computer was actually built. As he tried to get the money and build the computer, he needed someone to help write the programs to control it – the instructions that would tell it how to do calculations. That’s where Ada came in. They worked together to try to realise their shared dream, working out between them how to program.

Ada also wrote “The Analytical Engine has no pretensions to originate anything.” So how does that fit with her belief that computers could be creative? Read on and see if you can unscramble the paradox.

Ada was a mathematician with a creative flair. While Charles had come up with the innovative idea of the Analytical Engine itself, he didn’t see beyond his original idea of the computer as a calculator. She saw that computers could do much more than that.

The key innovation behind her idea was that the numbers could stand for more than just quantities in calculations. They could represent anything – music for example. Today when we talk of things being digital – digital music, digital cameras, digital television, all we really mean is that a song, a picture, a film can all be stored as long strings of numbers. All we need is to agree a code of what the numbers mean – a note, a colour, a line. Once that is decided we can write computer programs to manipulate them, to store them, to transmit them over networks. Out of that idea comes the whole of our digital world.
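
For example, here is one possible (made-up) code in which the numbers 0 to 11 stand for the notes of an octave. Once the code is agreed, a program can store a tune as numbers and manipulate it, just as Ada foresaw. This is only an illustrative sketch, not how any real music format works:

```python
# An agreed code: numbers 0-11 stand for the twelve notes of an octave.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

tune = [0, 0, 7, 7, 9, 9, 7]   # the opening of "Twinkle Twinkle", stored as numbers

def decode(numbers):
    """Turn the stored numbers back into note names using the agreed code."""
    return [NOTE_NAMES[n] for n in numbers]

def transpose(numbers, steps):
    """Once music is numbers, a program can manipulate it: shift every note up."""
    return [(n + steps) % 12 for n in numbers]

print(decode(tune))                 # ['C', 'C', 'G', 'G', 'A', 'A', 'G']
print(decode(transpose(tune, 2)))   # the same tune, two semitones higher
```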

Ada saw even further, though. She combined maths with a creative flair, and so she realised that not only could computers store and play music, they could also potentially create it – they could be composers. She foresaw the whole idea of machines being creative. She wasn’t just the first programmer, she was the first truly creative programmer.

This article was originally published at the CS4FN website, along with lots of other articles about Ada Lovelace. We also have a special Ada Lovelace-themed issue of the CS4FN magazine which you can download as a PDF (click picture below).

See also: The very first computers and Ada Lovelace Day (2nd Tuesday of October). Help yourself to our Women in Computing posters PDF, or sign up to get FREE copies posted to your school (UK-based only, please).

 

The very first computers

A head with numbers circling round and the globe in the middle

Victorian engineer Charles Babbage designed, though never built, the first mechanical computer. The first computers had actually existed for a long time before he had his idea, though. British superiority at sea, and ultimately the Empire, was already dependent on them. They were used to calculate the books of numbers that British sailors relied on to navigate the globe. The original meaning of the word computer was actually a person who did these calculations. The first computers were humans.

Babbage became interested in the idea of creating a mechanical computer in part because of computing work he did himself, calculating accurate versions of numbers needed for a special book: ‘The Nautical Almanac’. It was a book of astronomical tables, the result of an idea of Astronomer Royal, Nevil Maskelyne. It was the earliest way ships had to reliably work out their longitudinal (i.e., east-west) position at sea. Without them, to cross the Atlantic, you just set off and kept going until you hit land, just as Columbus did. The Nautical Almanac gave a way to work out how far west you were all the time.

Maskelyne’s idea was based on the fact that the angle from the moon to a person on the Earth and back to a star was the same at the same time wherever that person was looking from (as long as they could see both the star and moon at once). This angle was called the lunar distance.

The lunar distance could be used to work out where you were because, as time passed, its value changed in a predictable way, based on Newton’s laws of motion applied to the planets. For a given place, Greenwich say, you could calculate what that lunar distance would be for different stars at any time in the future. This is essentially what the Almanac recorded.

Now the time changes as you move east or west: dawn gradually arrives later the further west you go, for example, because as the Earth rotates the sun comes into view at different times around the planet. That is why we have different time zones. The time in the USA is hours behind that in Britain, which itself is behind that in China. Now suppose you know your local time, which you can check regularly from the position of the sun or moon, and you know the lunar distance. You can look up in the Almanac the time in Greenwich at which that lunar distance occurs, and that gives you the current time in Greenwich. The greater the difference between that time and your local time, the further west (or east) you are. It is because Greenwich was used as the fixed point for working the lunar distances out that we now use Greenwich Mean Time as UK time. The time in Greenwich was the one that mattered!
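
The final step is simple arithmetic: the Earth turns 360 degrees in 24 hours, so every hour of difference between Greenwich time and your local time puts you another 15 degrees around the globe. A tiny sketch of that step (the Almanac itself was, of course, printed tables, not code):

```python
def longitude_from_times(greenwich_hours, local_hours):
    """Earth turns 360 degrees in 24 hours, so each hour of time difference
    is 15 degrees of longitude. A positive answer means west of Greenwich."""
    return (greenwich_hours - local_hours) * 15

# It is local noon, but the Almanac lookup says it is 3pm in Greenwich:
print(longitude_from_times(15, 12))   # 45 degrees west of Greenwich
```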

This was all wonderful. Sailors just had to take astronomical readings, do some fairly simple calculations and look up values in the Almanac to work out where they were. However, there was a big snag. It relied on all those numbers in the tables having been accurately calculated in advance. That took some serious computing power. Maskelyne therefore employed teams of human ‘computers’ across the country, paying them to do the calculations for him. These men and women were the first industrial computers.

Before pocket calculators were invented in the 1970s, the easiest way to do calculations, whether big multiplications, divisions, powers or square roots, was to use logarithms. The logarithm of a number is roughly the number of times you can divide it by 10 before you get to 1. Complicated calculations can be turned into simple ones using logarithms: multiplying two numbers, for example, becomes just adding their logarithms. Therefore the equivalent of the pocket calculator was a book containing a table of logarithms. Log tables were the basis of all other calculations, including maritime ones. Babbage himself became a human computer, doing calculations for the Nautical Almanac. He calculated the most accurate book of log tables then available for the British Admiralty.
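
Here is the trick in miniature, using Python’s maths library where a human computer would have used a printed table:

```python
import math

a, b = 3456, 789

# "Look up" the logarithms and add them: addition replaces multiplication.
log_of_answer = math.log10(a) + math.log10(b)

# Looking the result back up (an "anti-log") gives the product.
print(round(10 ** log_of_answer))   # 2726784
print(a * b)                        # 2726784, the same answer
```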

The mechanical computer came about because Babbage was also interested in finding the most profitable ways to mechanise work in factories. He realised a machine could do more than weave cloth: it might also do calculations. More to the point, such a machine would be able to do them with a guaranteed accuracy, unlike people. He therefore spent his life designing and then trying to build such a machine. It was a revolutionary idea, and while his design worked, the level of precision engineering needed was beyond what could be done at the time. It was another hundred years before the first electronic computer was invented – again to replace human computers working in the national interest… but this time at Bletchley Park, doing the calculations needed to crack the German military codes and so win World War II.

More on …

Florence Nightingale: rebel with a cause

Florence Nightingale, the most famous female Victorian after Queen Victoria, is known for her commitment to nursing, especially in the Crimean War. She rebelled against convention to become a nurse at a time when nursing was seen as a lowly job, not suitable for ‘ladies’. She broke convention in another less well-known, but much more significant way too. She was a mathematician – the first woman to be elected a member of the Royal Statistical Society. She also pioneered the use of pictures to present the statistical data that she collected about causes of war deaths and issues of sanitation and health. What she did was an early version of the current Big Data revolution in computer science.

Soldiers were dying in vast numbers in the field hospital she worked in, not directly from their original wounds but from the poor conditions. But how do you persuade people of something that (at least then) is so unintuitive? Even she originally got the cause of the deaths wrong, thinking they were due to poor nutrition, rather than the hospital conditions as her statistics later showed. Politicians, the people with power to take action, were incapable of understanding statistical reports full of numbers then (and probably now). She needed a way to present the information so that the facts would jump out to anyone. Only then could she turn her numbers into life-saving action. Her solution was to use pictures, often presenting her statistics as books of pie charts and circular histograms.
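
Today you can draw the same kind of ‘circular histogram’ (often called a polar area or coxcomb chart) in a few lines of code. This sketch uses Python’s matplotlib library (assumed installed) with made-up monthly figures purely for illustration, not Nightingale’s real data:

```python
import math
import matplotlib.pyplot as plt

# Illustrative, made-up monthly death counts (NOT Nightingale's actual statistics).
months = ["Apr", "May", "Jun", "Jul", "Aug", "Sep",
          "Oct", "Nov", "Dec", "Jan", "Feb", "Mar"]
deaths = [110, 95, 120, 300, 700, 850, 600, 420, 380, 250, 160, 90]

# One wedge per month, arranged around a circle: the area jumps out at a glance.
angles = [2 * math.pi * i / len(months) for i in range(len(months))]
ax = plt.subplot(projection="polar")
ax.bar(angles, deaths, width=2 * math.pi / len(months), bottom=0.0)
ax.set_xticks(angles)
ax.set_xticklabels(months)
plt.show()
```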

Whilst she didn’t invent them, Florence Nightingale certainly was responsible for demonstrating how effective they could be in promoting change, and so subsequently popularising their use. She undoubtedly saved more lives with her statistics than from her solitary rounds at night by lamplight.

Big Data is now a big thing. It is the idea that if you collect lots of data about something (which computers now make easy) then you (and computers themselves) can look for patterns and so gain knowledge and, for people, ultimately wisdom from it. Florence Nightingale certainly did that. Data visualisation is now an important area of computer science. As computers allow us to collect and store ever more data, it becomes harder and harder for people to make any sense of it all – to pick out the important nuggets of information that matter. Raw numbers are of little use if you can’t actually turn them into knowledge, or better still wisdom. Machine learning programs can number-crunch the data and make decisions from it, but it’s hard to know where those decisions came from. That often matters if we are to be persuaded. For humans, the right kind of picture for the right kind of data can do just that, as Florence Nightingale showed.

‘The Lady of the Lamp’: more than a nurse, but also a remarkable statistician and pioneer of a field of computer science…a Lady who made a difference by rebelling with a cause.

More on …

Related Magazines …


EPSRC supports this blog through research grant EP/W033615/1.