In space no one can hear you …

by Paul Curzon, Queen Mary University of London

Red Arrows aircraft flying close to the ground.
Image by Bruno Albino from Pixabay 

Johanna Lucht could do maths before she learned language. Why? Because she was born deaf and there was little support for deaf people where she lived. Despite, or perhaps because of, that she became a computer scientist and works for NASA. 

Being deaf can be very, very disabling if you don’t get the right help. As a child, Johanna had no one to help her communicate apart from her mother, who tried to teach her sign language from a book. Throughout most of her primary school years she couldn’t have any real conversations with anyone, never mind learn. She got the lifeline she needed when the school finally took on an interpreter, Keith Wann, to help her. Working with him, she quickly learned American Sign Language. Learning your first language is crucial to learning other things, and suddenly she was able to learn in school like other children. She caught up remarkably quickly, showing that an intelligent girl had been locked inside that silent, shy child. More than anything though, from Keith she learned never to give up.

Her early ability in maths, now her favourite subject, came to the fore as she excelled at science and technology. By this point her family had moved from Germany, where she grew up, to Alaska, where there was much more support, an active deaf community for her to join, and many more opportunities, which she started to take. She signed up for a special summer school on computing specifically for deaf people at the University of Washington, learning the programming skills that became the foundation for her future career at NASA. At only 17 she even returned to help teach the course. From there, she went on to study Computer Science at university and applied for an internship at NASA. To her shock and delight she was given a place.

Hitting the ground running 

A big problem for pilots, especially fighter pilots, is “controlled flight into terrain”: a technical-sounding phrase that just means flying the plane into the ground, for no reason other than that flying a fighter aircraft as low as possible over hazardous terrain is extremely difficult. The solution is a ground collision avoidance system: basically, the pilots need a computer to warn them when hazardous terrain is coming up and they are too close for comfort, so that they can take evasive action. Johanna helped work on the interface design: the part that pilots see and interact with. To be of any use in such high-pressure situations this communication has to be slick and very clear.
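To get a feel for the core idea, here is a minimal Python sketch of that warning logic. It is entirely our own invention for illustration, with made-up numbers; NASA’s real system predicts the aircraft’s trajectory in far more detail.

```python
# A toy terrain-warning check (our own illustration, not NASA's system):
# warn if the aircraft's clearance over any upcoming terrain sample
# drops below a safety margin.

TERRAIN_AHEAD_M = [210, 230, 310, 390, 460]   # hypothetical elevations (metres)
SAFETY_MARGIN_M = 150

def check_collision_risk(altitude_m, terrain_ahead_m, margin_m):
    """Return a warning if any upcoming terrain breaks the safety margin."""
    for step, elevation in enumerate(terrain_ahead_m):
        if altitude_m - elevation < margin_m:
            return f"PULL UP: terrain hazard {step + 1} steps ahead"
    return "clear"

print(check_collision_risk(500, TERRAIN_AHEAD_M, SAFETY_MARGIN_M))
# PULL UP: terrain hazard 4 steps ahead (clearance 500 - 390 = 110m < 150m)
```

The real challenge Johanna worked on was the interface: how to deliver a warning like this so a pilot instantly understands and trusts it.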

She impressed those she was working with so much that she was offered a full-time job, becoming an engineer at NASA Armstrong working with a team designing, testing and integrating new research technology into experimental aircraft. She had to run tests with other technicians, the first problem being how to communicate effectively with the rest of the team. She succeeded twice as fast as her bosses expected, taking only a couple of days before the team were all working well together. The experience gained from the challenges she had faced as a child was now providing her with the skills to do brilliantly in a job where teamwork and communication skills are vital.

Mission control 

Eventually, she gained a place in Mission Control. There, slick comms are vital too. The engineers have to monitor the flight including all the communication as it happens, and be able to react to any developing situation. Johanna worked with an interpreter who listened directly to all the flight communications, signing it all for her to see on a second monitor. Working with interpreters in a situation like this is in itself a difficult task and Johanna had to make sure not only that they could communicate effectively but that the interpreter knew all the technical language that might come up in the flight. Johanna had plenty of experience dealing with issues like that though, and they worked together well, with the result that in April 2017 Johanna became the first deaf person to work in NASA mission control on a live mission … where of course she did not just survive the job, she excelled. 

As Johanna has pointed out it is not deafness itself that disables people, but the world deaf people live in that does. When in a world that wasn’t set up for deaf people, she struggled, but as soon as she started to get the basic help she needed that all changed. Change the environment to one that does not put up obstacles and deaf people can excel like anyone else. In space no one can hear anyone scream or for that matter speak. We don’t let it stop our space missions though. We just invent appropriate technology and make the problems go away. 



EPSRC supports this blog through research grant EP/W033615/1. 

Alexander Graham Bell: It’s good to talk

An antique phone

Image: modified version of one by Christine Sponchia from Pixabay

by Peter W McOwan, Queen Mary University of London

(From the archive)

The famous inventor of the telephone, Alexander Graham Bell, was born in 1847 in Edinburgh, Scotland. His story is a fascinating one, showing that, as with all great inventions, a combination of talent, timing, drive and a few fortunate mistakes is what’s needed to develop a technology that can change the world.

A talented Scot

As a child the young Alexander Graham Bell, Aleck, as he was known to his family, showed remarkable talents. He had the ability to look at the world in a different way, and come up with creative solutions to problems. Aged 14, Bell designed a device to remove the husks from wheat by combining a nailbrush and paddle into a rotary-brushing wheel.

Family talk

The Bell family had a talent with voices. His grandfather had made a name for himself as a notable, but often unemployed, actor. Aleck’s mother was deaf, but rather than use her ear trumpet to talk to her like everyone else did, the young Alexander came up with the cunning idea that speaking in low, booming tones very close to her forehead would allow her to hear his voice through its vibrations. This special bond with his mother gave him a lifelong interest in the education of deaf people, which, combined with his inventive genius and some odd twists of fate, was to change the world.

A visit to London, and a talking dog

While visiting London with his father, Aleck was fascinated by a demonstration of Sir Charles Wheatstone’s “speaking machine”, a mechanical contraption that made human-like noises. On returning to Edinburgh, their father challenged Aleck and his older brother to come up with a machine of their own. After some hard work and scrounging bits from around the place they built a machine with a mouth, throat, nose, movable tongue, and bellows for lungs. It worked: it made human-like sounds. Delighted by his success, Aleck went a step further and massaged the mouth of his Skye terrier so that the dog’s growls were heard as words. Pretty wruff on the poor dog.

Speaking of teaching

By the time he was 16, Bell was teaching music and elocution at a boys’ boarding school. He was still fascinated by trying to help those with speech problems improve their quality of life, and was very successful in this, later publishing two well-respected books called ‘The Practical Elocutionist’ and ‘Stammering and Other Impediments of Speech’. Alexander and his brother toured the country giving demonstrations of their techniques to improve people’s speech. He also began his studies at the University of London, where a mistake in reading German was to change his life and lay the foundations for the telecommunications revolution.

A ‘silly’ language mistake that changed the world

At university, Bell became fascinated by the ideas of the German physicist Hermann von Helmholtz. Von Helmholtz had produced a book, ‘On The Sensations of Tone’, in which he said that vowel sounds, a, e, i, o and u, could be produced using electrical tuning forks and resonators. However, Bell couldn’t read German very well, and mistakenly believed that von Helmholtz had written that vowel sounds could be transmitted over a wire. This misunderstanding changed history. As Bell later stated, “It gave me confidence. If I had been able to read German, I might never have begun my experiments in electricity.”

Tragedy and Travel

Things were going well for young Bell’s career when tragedy struck. Bell and both his brothers contracted tuberculosis, a common disease at the time. His two brothers died and, at the age of 23, still suffering from the disease, Bell left Britain to convalesce in Ontario, Canada, and then moved to Boston to work in a school for deaf students.

The time for more than dots and dashes

His dreams of transmitting voices over a wire were still spinning round in his creative head. It just needed some new ideas to spark him off again. Samuel Morse had developed Morse code and the electric telegraph, which allowed single messages, in the form of long and short electrical pulses, dots and dashes, to be transmitted rapidly along a wire over huge distances. Bell saw the similarity between sending multiple messages and the multiple notes in a musical chord: his “harmonic telegraph” could be a way to send several messages at once, and perhaps even voices.

Chance encounter

Again chance played its role in telecommunications history. At the electrical machine shop of Charles Williams, Bell ran into young Thomas Watson, a skilled electrical machinist able to build the devices that Bell was devising. The two teamed up and started to work toward making Bell’s dream a reality. To do so they needed to invent two things: something to measure a voice at one end, and another device to reproduce the voice at the other – what we would today call the microphone and the speaker.

The speaker accident

June 2, 1875 was a landmark day for team Bell and Watson. Working in their laboratory they were trying to free a reed, a small flat piece of metal, which they had wound too tightly to the pole of an electromagnet. In trying to free it Watson produced a ‘twang’. Bell heard the twang and came running. It was a sound similar to the sounds in human speech; this was the solution to producing an electronic voice, a discovery that must have come as a relief for all the dogs in the Boston area.

The mercury microphone

Bell had also discovered that a wire vibrated by his voice while partially dipped in a conducting liquid, like mercury or battery acid, could be made to produce a changing electrical current. They now had a device that could transform the voice into an electronic signal. All that remained was to put the two inventions together.

The first ’emergency’ phone call (allegedly)

On March 10, 1876, Bell and Watson set out to test their new system. The story goes that Bell knocked over a container with battery acid, which they were using as the conducting liquid in the ‘microphone’. Spilled acid tends to be nasty and Bell shouted out “Mr. Watson, come here. I want you!” Watson, working in the next room, heard Bell’s cry for help through the wire. The first phone call had been made, and Watson quickly went through to answer it. The telephone was invented, and Bell was only 29 years old.

The world listens

The telephone was finally introduced to the world at the Centennial Exhibition in Philadelphia in 1876. Bell quoted Hamlet over the phone line from the main building 100 yards away, causing the surprised Brazilian Emperor Dom Pedro to exclaim, “My God, it talks”, and talk it did. From there on, the rest, as they say, is history. The telephone spread throughout the world, changing the way people lived their lives. It was not without its social problems, though. In many upper-class homes it was considered vulgar. Many people considered it intrusive (just like some people’s view of mobile phones today!), but eventually it became indispensable.

Can’t keep a good idea down

Inventor Elisha Gray also independently designed his own version of the telephone. In fact both he and Bell rushed their designs to the US patent office within hours of each other, but Alexander Graham Bell patented his telephone first. With massive amounts of money to be made, Elisha Gray and Alexander Graham Bell entered into a famous legal battle over who had invented the telephone first, and Bell had to fight many legal battles over his lifetime as others claimed they had invented the technology first. Bell won all the legal cases, partly, many claimed, because he was such a good communicator and had such a convincing talking voice. As is often the way, few people now remember the other inventors. In fact, it is now recognised that the Italian Antonio Meucci had invented a method of electronic voice communication earlier, though he did not have the funds to patent it.

Fame and Fortune under Forty

Bell became rich and famous, and he was only in his mid-thirties. The Bell Telephone Company was set up, and later went on to become AT&T, one of America’s foremost telecommunications giants.

Read Terry Pratchett’s brilliant book ‘Going Postal’ for a fun fantasy about inventing and making money from communication technology on Discworld.



EPSRC supports this blog through research grant EP/W033615/1. 

Recognising (and addressing) bias in facial recognition tech #BlackHistoryMonth

The five shades used for skin tone emojis

By Jo Brodie and Paul Curzon, Queen Mary University of London

A unit containing four sockets, 2 USB and 2 for a microphone and speakers.
Happy, though surprised, sockets

Some people have a neurological condition called face blindness (also known as ‘prosopagnosia’) which means that they are unable to recognise people, even those they know well – this can include their own face in the mirror! They only know who someone is once that person starts to speak. They can certainly detect faces, but they might struggle to classify them in terms of gender or ethnicity. Most people, in contrast, have an exceptionally good ability to detect and recognise faces – so good, in fact, that we even detect faces when they’re not actually there. This is called pareidolia: perhaps you see a surprised face in the picture of USB sockets above.

How about computers? There is a lot of hype about face recognition technology as a simple solution to help police forces prevent crime, spot terrorists and catch criminals. What could be bad about being able to pick out wanted people automatically from CCTV images, and so quickly catch them?

What if facial recognition technology isn’t as good at recognising faces as it has sometimes been claimed to be, though? If the technology is being used in the criminal justice system, and gets the identification wrong, this can cause serious problems for people (see Robert Williams’ story in “Facing up to the problems of recognising faces“).

“An audit of commercial facial-analysis tools found that dark-skinned faces are misclassified at a much higher rate than are faces from any other group. Four years on, the study is shaping research, regulation and commercial practices.”

– The unseen Black faces of AI algorithms (19 October 2022) Nature

In 2018 Joy Buolamwini and Timnit Gebru shared the results of research they’d done, testing three different commercial facial recognition systems. They found that these systems were much more likely to wrongly classify darker-skinned female faces compared to lighter- or darker-skinned male faces. In other words, the systems were not reliable. (Read more about their research in “The gender shades audit“).
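The heart of an audit like this is simple to express in code: instead of quoting one overall accuracy figure, you compare error rates group by group. Here is a minimal Python sketch of that idea, with invented data purely for illustration (the real audit used large sets of carefully labelled photographs):

```python
# A toy bias audit: compute the error rate separately for each group
# rather than one overall accuracy figure. Data below is made up.

from collections import defaultdict

# (group, classified_correctly) pairs for a hypothetical test set
results = [
    ("darker-skinned female", False), ("darker-skinned female", True),
    ("darker-skinned female", False), ("lighter-skinned male", True),
    ("lighter-skinned male", True),   ("lighter-skinned male", True),
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in totals:
    print(f"{group}: {errors[group] / totals[group]:.0%} error rate")
# A large gap between groups is the red flag an audit looks for.
```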

“The findings raise questions about how today’s neural networks, which … (look for) patterns in huge data sets, are trained and evaluated.”

– Study finds gender and skin-type bias in commercial artificial-intelligence systems (11 February 2018) MIT News

Their work has shown that face recognition systems do have biases and so are not currently fit for purpose. There is some good news though. The three companies whose products they studied made changes to improve their facial recognition systems, and several US cities have already banned the use of this tech in criminal investigations. More cities are calling for a ban too, and in Europe the EU is moving closer to banning the use of live face recognition technology in public places. Others, however, are still rolling it out. It is important not just to believe the hype about new technology, but to make sure we understand its limitations and risks.

More on

Further reading

More technical articles

• Joy Buolamwini and Timnit Gebru (2018) Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research 81:1-15. [EXTERNAL]
• The unseen Black faces of AI algorithms (19 October 2022) Nature News & Views [EXTERNAL]


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


EPSRC supports this blog through research grant EP/W033615/1.

Hidden Figures: NASA’s brilliant calculators #BlackHistoryMonth

Full Moon and silhouetted tree tops

by Paul Curzon, Queen Mary University of London

Full Moon with a blue filter
Full Moon image by PIRO from Pixabay

NASA Langley was the birthplace of the U.S. space program, where astronauts like Neil Armstrong learned to land on the moon. Everyone knows the names of astronauts, but behind the scenes a group of African-American women were vital to the space program: Katherine Johnson, Mary Jackson and Dorothy Vaughan. Before electronic computers were invented, ‘computers’ were just people who did calculations, and that’s where they started out, as part of a segregated team of mathematicians. Dorothy Vaughan became the first African-American woman to supervise staff there and helped make the transition from human to electronic computers by teaching herself, and her staff, how to program in the early programming language FORTRAN.

FORTRAN code on a punched card, from Wikipedia.

The women switched from being the computers to programming them. These hidden women helped put the first American, John Glenn, in orbit, and over many years worked on calculations like the trajectories of spacecraft and their launch windows (the small period of time when a rocket must be launched if it is to get to its target). These complex calculations had to be correct. If they got them wrong, the mistakes could ruin a mission, putting the lives of the astronauts at risk. Get them right, as they did, and the result was a giant leap for humankind.

See the film ‘Hidden Figures’ for more of their story.

This story was originally published on the CS4FN website and was also published in issue 23, The Women Are (Still) Here, on p21 (see ‘Related magazine’ below).

More on …


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


Related Magazine …

EPSRC supports this blog through research grant EP/W033615/1.

Writing together: Clarence ‘Skip’ Ellis #BlackHistoryMonth

by Paul Curzon, Queen Mary University of London

Small photo of Clarence 'Skip' Ellis
Clarence ‘Skip’ Ellis

Back in 1956, Clarence Ellis started his career at the very bottom of the computer industry. He was given a job, at the age of 15, as a “computer operator” … because he was the only applicant. He was also told that under no circumstances should he touch the computer! It’s lucky for all of us that he got the job, though! He went on to develop ideas that have made computers easier for everyone to use. Working at a computer was once a lonely endeavour: one person, on one computer, doing one job. Clarence Ellis changed that. He pioneered ways for people to use computers together effectively.

The graveyard shift

The company Clarence first worked for had a new computer. Just like all computers back then, it was the size of a room. He worked the graveyard shift and his duties were more those of a nightwatchman than a computer operator. It could have been a dead-end job, but it gave him lots of spare time and, more importantly, access to all the computer’s manuals … so he read them … over and over again. He didn’t need to touch the computer to learn how to use it!

Saving the day

His studying paid dividends. Only a few months after he started, the company had a potential disaster on its hands: they ran out of punch cards. Back then punch cards were used to store both data and programs. They used patterns of holes and non-holes to store numbers as binary in a way a computer could read. Without punch cards the computer could not work!

It had to though, because the payroll program had to run before the night was out. If it didn’t, then no one would be paid that month. Because he had studied the manuals in detail, more so than anyone else, Clarence was the only person who could work out how to reuse old punch cards. The problem was that the computer used a system called ‘parity checking’ to spot mistakes. In its simplest form, parity checking of a punch card involves adding an extra binary digit (an extra hole or non-hole) on the end of each number. This is done in a way that ensures that the number of holes is even. If there is an even number of holes already, the extra digit is left as a non-hole. If, on the other hand, there is an odd number of holes, a hole is punched as the extra digit. That extra binary digit isn’t part of the number. It’s just there so the computer can check if the number has been corrupted. If a hole was accidentally (or otherwise) turned into a non-hole, or vice versa, then this would show up: there would now be an odd number of holes. Special circuitry in the computer would spot this and spit out the card, rejecting it. Clarence knew how to switch that circuitry off. That meant they could change the numbers on the cards by adding new holes, without the cards being rejected.
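In modern code, even-parity checking looks something like this Python sketch (the card reader’s version was wired into hardware, of course; this is just to illustrate the idea):

```python
# A minimal sketch of even-parity checking, as used on punch cards.
# Holes are 1s, non-holes are 0s; one extra parity bit is chosen so
# the total count of 1s is always even.

def add_parity(bits):
    """Append a parity bit so the total number of 1s is even."""
    parity = sum(bits) % 2        # 1 if the count of 1s is currently odd
    return bits + [parity]

def check_parity(bits_with_parity):
    """Return True if the row passes the even-parity check."""
    return sum(bits_with_parity) % 2 == 0

row = add_parity([1, 0, 1, 1])    # three 1s, so the parity bit must be 1
print(row)                        # [1, 0, 1, 1, 1]
print(check_parity(row))          # True: the card is accepted

row[1] = 1                        # simulate a corrupted hole
print(check_parity(row))          # False: the card would be rejected
```

Switching the check off, as Clarence did, meant altered cards were no longer spat out, so old cards could be re-punched and reused.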

After that success he was allowed to become a real operator and was relied on to troubleshoot whenever there were problems. His career was up and running.

Clicking icons

He later worked at Xerox PARC, a massively influential research centre. He was part of the team that invented graphical user interfaces (GUIs). With GUIs, Xerox PARC completely transformed the way we use computers. Instead of typing obscure and hard-to-remember commands, they introduced the now standard ideas of windows, icons, dragging and dropping, using a mouse, and more. Clarence himself has been credited with inventing the idea of clicking on an icon to run a program.

Writing Together

As if that wasn’t enough of an impact, he went on to help make groupware a reality: software that supports people working together. His focus was on software that let people write a document together. With Simon Gibbs he developed a crucial algorithm called Operational Transformation. It allows people to edit the same document at the same time without it becoming hopelessly muddled. This is actually very challenging. You have to ensure that two (or more) people can change the text at exactly the same time, and even at the same place, without each ending up with a different version of the document.

The actual document sits on a server computer. It must make sure that its copy is always the same as the ones everyone is individually editing. When people type changes into their local copy, the master is sent messages informing it of the actions they performed. The trouble is the order that those messages arrive can change what happens. Clarence’s operational transformation algorithm solved this by changing the commands from each person into ones that work consistently whatever order they are applied. It is the transformed operation that is the one that is applied to the master. That master version is the version everyone then sees as their local copy. Ultimately everyone sees the same version. This algorithm is at the core of programs like Google Docs that have ensured collaborative editing of documents is now commonplace.
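Here is a toy Python sketch of the core idea, handling only simple text insertions. Real operational transformation, and systems like Google Docs, must handle deletions, formatting and many trickier cases; the function names here are our own:

```python
# A minimal sketch of operational transformation (OT) with insert-only
# edits on a shared string. Our own simplification for illustration,
# not Ellis's actual code.

def transform(op, against):
    """Shift op's position to account for a concurrent insert."""
    pos, text = op
    other_pos, other_text = against
    if other_pos <= pos:          # the other edit landed earlier in the text,
        pos += len(other_text)    # so our insertion point slides right
    return (pos, text)

def apply(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

doc = "Hello world"
alice = (5, ",")                  # Alice inserts "," after "Hello"
bob = (11, "!")                   # Bob, at the same moment, appends "!"

# The server applies Alice's edit, then Bob's edit transformed against
# it. Without the transform, Bob's "!" would land in the wrong place.
master = apply(apply(doc, alice), transform(bob, alice))
print(master)                     # Hello, world!
```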

Clarence Ellis started his career with a lonely job. By the end of his career he had helped ensure that writing on a computer at least no longer needs to be a lonely affair.


This article was originally published on the CS4FN website. One of the aims of our Diversity in Computing posters (see below) is to help a classroom of young people see the range of computer scientists which includes people who look like them and people who don’t look like them. You can download our posters free from the link below.

More on …


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


EPSRC supports this blog through research grant EP/W033615/1.

The original version of this article was funded by the Institute of Coding.

Ada Lovelace: Visionary

Cover of Issue 20 of CS4FN, celebrating Ada Lovelace

By Paul Curzon, Queen Mary University of London

It is 1843, Queen Victoria is on the British throne. The industrial revolution has transformed the country. Steam, cogs and iron rule. The first computers won’t be successfully built for a hundred years. Through the noise and grime one woman sees the future. A digital future that is only just being realised.

Ada Lovelace is often said to be the first programmer. She wrote programs for a designed, but yet to be built, computer called the Analytical Engine. She was something much more important than a programmer, though. She was the first truly visionary person to see the real potential of computers. She saw they would one day be creative.

Charles Babbage had come up with the idea of the Analytical Engine – how to make a machine that could do calculations so we wouldn’t need to do them by hand. It would be another century before his ideas could be realised and the first computer was actually built. As he tried to get the money to build the computer, he needed someone to help write the programs to control it – the instructions that would tell it how to do calculations. That’s where Ada came in. They worked together to try to realise their joint dream, working out how to program as they went.

Ada also wrote “The Analytical Engine has no pretensions to originate anything.” So how does that fit with her belief that computers could be creative? Read on and see if you can unscramble the paradox.

Ada was a mathematician with a creative flair. While Charles had come up with the innovative idea of the Analytical Engine itself, he didn’t see beyond his original conception of the computer as a calculator. She saw that computers could do much more than that.

The key innovation behind her idea was that the numbers could stand for more than just quantities in calculations. They could represent anything – music for example. Today when we talk of things being digital – digital music, digital cameras, digital television, all we really mean is that a song, a picture, a film can all be stored as long strings of numbers. All we need is to agree a code of what the numbers mean – a note, a colour, a line. Once that is decided we can write computer programs to manipulate them, to store them, to transmit them over networks. Out of that idea comes the whole of our digital world.
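Here is a toy Python sketch of that insight, using a made-up code in which the numbers 1 to 7 stand for the notes of a scale. Agree the code, and a tune becomes just a list of numbers a machine can store and manipulate:

```python
# A toy illustration of Lovelace's insight: agree a code and numbers can
# stand for anything. This code (1-7 = notes of a scale) is invented
# purely for illustration.

NOTE_CODE = {1: "C", 2: "D", 3: "E", 4: "F", 5: "G", 6: "A", 7: "B"}

tune = [3, 2, 1, 2, 3, 3, 3]                 # "Mary Had a Little Lamb"
print(" ".join(NOTE_CODE[n] for n in tune))  # E D C D E E E

# The machine can manipulate the tune (here, transposing it up one
# note) without knowing anything about music at all:
transposed = [n + 1 for n in tune]
print(" ".join(NOTE_CODE[n] for n in transposed))  # F E D E F F F
```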

Ada saw even further though. She combined maths with a creative flair and so she realised that not only could they store and play music they could also potentially create it – they could be composers. She foresaw the whole idea of machines being creative. She wasn’t just the first programmer, she was the first truly creative programmer.

This article was originally published at the CS4FN website, along with lots of other articles about Ada Lovelace. We also have a special Ada Lovelace-themed issue of the CS4FN magazine which you can download as a PDF.

See also: The very first computers and Ada Lovelace Day (2nd Tuesday of October). Help yourself to our Women in Computing posters PDF, or sign up to get FREE copies posted to your school (UK-based only, please).

 

The very first computers

Victorian engineer Charles Babbage designed, though never built, the first mechanical computer. The first computers had actually existed for a long time before he had his idea, though. The British superiority at sea, and ultimately the Empire, was already dependent on them. They were used to calculate the books of numbers that British sailors relied on to navigate the globe. The original meaning of the word computer was actually a person who did such calculations. The first computers were humans.

Babbage became interested in the idea of creating a mechanical computer in part because of computing work he did himself, calculating accurate versions of the numbers needed for a special book: ‘The Nautical Almanac’. It was a book of astronomical tables, the result of an idea of the Astronomer Royal, Nevil Maskelyne. It was the earliest way ships had to reliably work out their longitudinal (i.e., east-west) position at sea. Without such tables, to cross the Atlantic you just set off and kept going until you hit land, just as Columbus did. The Nautical Almanac gave a way to work out how far west you were, all the time.

Maskelyne’s idea was based on the fact that the angle from the moon to a person on the Earth and back to a star was the same at the same time wherever that person was looking from (as long as they could see both the star and moon at once). This angle was called the lunar distance.

The lunar distance could be used to work out where you were because as time passed its value changed but in a predictable way based on Newton’s Laws of motion applied to the planets. For a given place, Greenwich say, you could calculate what that lunar distance would be for different stars at any time in the future. This is essentially what the Almanac recorded.

Now, the time changes as you move east or west: dawn gradually arrives later the further west you go, for example, because as the Earth rotates the sun comes into view at different times around the planet. That is why we have different time zones. The time in the USA is hours behind that in Britain, which itself is behind that in China. Now suppose you know your local time, which you can check regularly from the position of the sun or moon, and you know the lunar distance. You can look up in the Almanac the time in Greenwich when that lunar distance occurs, and that gives you the current time in Greenwich. The greater the difference between that time and your local time, the further west (or east) you are. It is because Greenwich was used as the fixed point for working out the lunar distances that we now use Greenwich Mean Time as UK time. The time in Greenwich was the one that mattered!
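The final step, from time difference to position, is simple arithmetic: the Earth turns 360 degrees in 24 hours, so each hour of difference between Greenwich time and local time corresponds to 15 degrees of longitude. A tiny Python sketch (our own illustration, with made-up numbers):

```python
# Longitude from a time difference: 360 degrees / 24 hours = 15 degrees
# of longitude per hour. Negative here means west of Greenwich.

def longitude(greenwich_time_h, local_time_h):
    """Degrees of longitude from the Greenwich/local time difference."""
    return (local_time_h - greenwich_time_h) * 15.0

# It is local noon (12:00), but the Almanac lookup says it is 15:00
# in Greenwich: you are 45 degrees west, out in the Atlantic.
print(longitude(15.0, 12.0))   # -45.0
```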

This was all wonderful. Sailors just had to take astronomical readings, do some fairly simple calculations and a look-up in the Almanac, to work out where they were. However, there was a big snag: it relied on all those numbers in the tables having been accurately calculated in advance. That took some serious computing power. Maskelyne therefore employed teams of human ‘computers’ across the country, paying them to do the calculations for him. These men and women were the first industrial computers.

Before pocket calculators were invented in the 1970s, the easiest way to do calculations, whether big multiplications, divisions, powers or square roots, was to use logarithms. The logarithm of a number is, roughly speaking, the number of times you can divide it by 10 before you get to 1. Complicated calculations can be turned into simple ones using logarithms. The equivalent of the pocket calculator was therefore a book containing a table of logarithms. Log tables were the basis of all other calculations, including maritime ones. Babbage himself became a human computer, doing calculations for the Nautical Almanac. He calculated the most accurate book of log tables then available for the British Admiralty.
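The trick that makes log tables work is that log(a × b) = log(a) + log(b): look up two logarithms, add them, then look the answer back up in reverse. A small Python sketch of what a human computer was doing by hand, with the maths library standing in for the printed tables:

```python
# Multiplication via logarithms, as done with log tables:
# log10(a * b) = log10(a) + log10(b).

import math

a, b = 345.0, 1278.0
log_sum = math.log10(a) + math.log10(b)   # two table look-ups, one addition
product = 10 ** log_sum                   # one reverse ("antilog") look-up
print(product)    # ~440910.0, i.e. 345 * 1278 (bar tiny rounding error)
print(a * b)      # 440910.0, the exact answer, as a check
```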

The mechanical computer came about because Babbage was also interested in finding the most profitable ways to mechanise work in factories. He realised a machine could do more than weave cloth: it might also do calculations. More to the point, such a machine would be able to do them with a guaranteed accuracy, unlike people. He therefore spent his life designing and then trying to build such a machine. It was a revolutionary idea, and while his design worked, the level of precision engineering needed was beyond what could be done at the time. It was another hundred years before the first electronic computer was invented – again to replace human computers working in the national interest… but this time at Bletchley Park, doing the calculations needed to crack the German military codes and so win World War II.

Related Magazines …

Cover of Issue 20 of CS4FN, celebrating Ada Lovelace

EPSRC supports this blog through research grant EP/W033615/1. 

Florence Nightingale: rebel with a cause

Florence Nightingale, the most famous female Victorian after Queen Victoria, is known for her commitment to nursing, especially in the Crimean War. She rebelled against convention to become a nurse at a time when nursing was seen as a lowly job, not suitable for ‘ladies’. She broke convention in another less well-known, but much more significant way too. She was a mathematician – the first woman to be elected a member of the Royal Statistical Society. She also pioneered the use of pictures to present the statistical data that she collected about causes of war deaths and issues of sanitation and health. What she did was an early version of the current Big Data revolution in computer science.

Soldiers were dying in vast numbers in the field hospital she worked in, not directly from their original wounds but from the poor conditions. But how do you persuade people of something that (at least then) was so unintuitive? Even she originally got the cause of the deaths wrong, thinking they were due to poor nutrition rather than the hospital conditions, as her statistics later showed. Politicians, the people with power to take action, were incapable of understanding statistical reports full of numbers then (and probably now). She needed a way to present the information so that the facts would jump out to anyone. Only then could she turn her numbers into life-saving action. Her solution was to use pictures, often presenting her statistics as books of pie charts and circular histograms.
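For a modern flavour of what she was doing, here is a short Python sketch of a circular histogram (a ‘coxcomb’ or polar-area diagram) using the matplotlib library, with invented numbers purely for illustration. In a polar-area diagram it is each wedge’s area, not its radius, that represents the value:

```python
# A minimal polar-area ("coxcomb") diagram in the spirit of
# Nightingale's charts. The death counts below are made up.

import math
import matplotlib.pyplot as plt

months = ["Apr", "May", "Jun", "Jul", "Aug", "Sep"]
deaths = [120, 340, 510, 620, 480, 290]          # hypothetical counts

angles = [i * 2 * math.pi / len(months) for i in range(len(months))]
# Area encodes the value, so each wedge's radius is the square root
# of its count.
radii = [math.sqrt(d) for d in deaths]

ax = plt.subplot(projection="polar")
ax.bar(angles, radii, width=2 * math.pi / len(months), align="edge")
ax.set_xticks(angles)
ax.set_xticklabels(months)
ax.set_yticks([])                                # hide radial tick labels
plt.title("Deaths by month (invented data)")
plt.show()
```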

Whilst she didn’t invent them, Florence Nightingale certainly was responsible for demonstrating how effective they could be in promoting change, and so subsequently popularising their use. She undoubtedly saved more lives with her statistics than from her solitary rounds at night by lamplight.

Big Data is now a big thing. It is the idea that if you collect lots of data about something (which computers now make easy) then you (and computers themselves) can look for patterns in it, and so gain knowledge and, for people, ultimately wisdom. Florence Nightingale certainly did that. Data visualisation is now an important area of computer science. As computers allow us to collect and store ever more data, it becomes harder and harder for people to make any sense of it all – to pick out the important nuggets of information that matter. Raw numbers are little use if you can’t actually turn them into knowledge, or better still wisdom. Machine learning programs can number-crunch the data and make decisions from it, but it’s hard to know where those decisions came from. That often matters if we are to be persuaded. For humans, the right kind of picture for the right kind of data can do just that, as Florence Nightingale showed.

‘The Lady of the Lamp’: more than a nurse, but also a remarkable statistician and pioneer of a field of computer science…a Lady who made a difference by rebelling with a cause.



EPSRC supports this blog through research grant EP/W033615/1. 

Who invented Morse code?

by Paul Curzon, Queen Mary University of London

Morse code tapper: www.istock.com 877875

Who invented Morse code? Silly question, surely! Samuel Morse, of course. He is one of the most celebrated inventors on the planet as a result. Morse code helped revolutionise global communications. It was part of the reason the telegraph made fast, world-wide communication a practical reality. Morse did invent a code to use for the telegraph, but not Morse code. His code was, by comparison, a poor, inflexible solution. He was a great businessman, focussed on making his dream a reality, but perhaps not so good at computational thinking! The code that bears his name was largely invented by his partner Alfred Vail.

Samuel Morse was originally a painter. However, his life changed when his wife died suddenly. He was away doing a portrait commission at the time. On hearing of his wife’s illness he rushed home, but the message, delivered by a horse rider, had taken too long to reach him: she died and was buried before he got there. As a result, he dedicated his life to giving the world a better way of communicating. Several different people were working on the idea of sending messages by electricity over wires, but no one had really come up with a usable, practical system. The physics had largely been sorted out; the engineering was still lacking.

Morse came up with a basic version of an electrical telegraph system and demonstrated it. Alfred Vail saw the demonstration and persuaded Morse to take him on as a partner. Vail’s father ran a famous ironworks, and so Vail had worked as a machinist. He improved Morse’s system enormously, including making the tapping machine used to send messages.

He wasn’t just good at engineering though. He was good at computational thinking too, so he also worked on the code used for sending messages. Having a machine that can send taps down a wire is no use unless you can also invent a simple, easy-to-use algorithm that turns words into those taps, and back again once they arrive. Morse came up with a code based on words, not letters. It was a variation of the system already used by semaphore operators. It involved a code book: essentially a list of words. Each word in the book was given a number. A second code turned numbers into taps – into dots and dashes. The trouble with this system is that it is not very general. If the word you want to send isn’t in the code book, you are stuffed! To cover every possibility it would have to be the size of a dictionary, with every word numbered. But that would make it very slow to use. Vail came up with a version where the dots and dashes represented letters instead of numbers, allowing any message to be sent letter by letter.

He also realised that some letters are more common than others. He therefore included the results of what we now call “frequency analysis” to make the system faster, working out the order of letters based on how common they are. He found a simple way to do it: he went to his local newspaper offices! To print a page of text, printing presses used metal letters called movable type. Each page was built up out of individual metal letters slotted into place. Vail realised that the more common a letter was, the more often it appeared on any page, and the more metal versions of it the newspaper office would therefore need if they weren’t to keep running out of the right letters before the page was done. He therefore counted how many of each “movable type” letter the newspaper printers had in their trays. He gave the letters that were most common the shortest codes. So E, for example, is just a single dot, as it is the most common letter in American English. T, which is also common, is a single dash. It is this kind of attention to detail that made Morse code so successful. Vail was really good at computational thinking!
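Vail’s trick is easy to replay in modern code. The Python sketch below counts letter frequencies in a sample text and hands out dot-dash codes in order of length, shortest first, to the commonest letters. The generated codes are mechanical stand-ins for illustration; the real Morse alphabet was crafted by hand:

```python
# Frequency analysis, Vail-style: commonest letters get shortest codes.

from collections import Counter
from itertools import product

sample = "it is this kind of attention to detail that made morse code succeed"
counts = Counter(c for c in sample if c.isalpha())

def codes():
    """Yield dot-dash code words in order of length: . - .. .- -. -- ..."""
    length = 1
    while True:
        for combo in product(".-", repeat=length):
            yield "".join(combo)
        length += 1

code_gen = codes()
assignment = {}
for letter, _ in counts.most_common():   # most frequent letter first
    assignment[letter] = next(code_gen)

print(assignment)   # e.g. the commonest letter gets '.', the next '-'...
```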

Morse and Vail worked really well as a team, though Morse then took all the credit, because the original idea to solve the problem had been his, and their agreement meant the main rights were with Morse. They almost certainly worked together to some extent on everything to do with the telegraph. It was the small details that meant their version of the telegraph was the one that took over the world, though, and that was largely down to Vail. Morse may be the famous one, but the invention of the telegraph needed them both, working together.

More on …