Hidden Figures – NASA’s brilliant calculators #BlackHistoryMonth


by Paul Curzon, Queen Mary University of London

Full Moon with a blue filter
Full Moon image by PIRO from Pixabay

NASA Langley was the birthplace of the U.S. space program, and where astronauts like Neil Armstrong learned to land on the Moon. Everyone knows the names of astronauts, but behind the scenes a group of African-American women were vital to the space program: Katherine Johnson, Mary Jackson and Dorothy Vaughan. Before electronic computers were invented, ‘computers’ were just people who did calculations, and that’s where these women started out, as part of a segregated team of mathematicians. Dorothy Vaughan became the first African-American woman to supervise staff there, and helped make the transition from human to electronic computers by teaching herself and her staff to program in the early programming language FORTRAN.

FORTRAN code on a punched card, from Wikipedia.

The women switched from being the computers to programming them. These hidden women helped put the first American, John Glenn, in orbit, and over many years worked on calculations like the trajectories of spacecraft and their launch windows (the small period of time when a rocket must be launched if it is to get to its target). These complex calculations had to be correct. If they got them wrong, the mistakes could ruin a mission, putting the lives of the astronauts at risk. Get them right, as they did, and the result was a giant leap for humankind.

See the film ‘Hidden Figures’ for more of their story (trailer below).

This story was originally published on the CS4FN website and was also published in issue 23, The Women Are (Still) Here, on p21 (see ‘Related magazine’ below).


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


Related Magazine …

This blog is funded through EPSRC grant EP/W033615/1.

Writing together: Clarence ‘Skip’ Ellis #BlackHistoryMonth

Small photo of Clarence 'Skip' Ellis

by Paul Curzon, Queen Mary University of London


Back in 1956, Clarence Ellis started his career at the very bottom of the computer industry. At the age of 15 he was given a job as a “computer operator” … because he was the only applicant. He was also told that under no circumstances should he touch the computer! It’s lucky for all of us that he got the job, though. He went on to develop ideas that have made computers easier for everyone to use. Working at a computer was once a lonely endeavour: one person, on one computer, doing one job. Clarence Ellis changed that. He pioneered ways for people to use computers together effectively.

The graveyard shift

The company Clarence first worked for had a new computer. Just like all computers back then, it was the size of a room. He worked the graveyard shift and his duties were more those of a nightwatchman than a computer operator. It could have been a dead-end job, but it gave him lots of spare time and, more importantly, access to all the computer’s manuals … so he read them … over and over again. He didn’t need to touch the computer to learn how to use it!

Saving the day

His studying paid dividends. Only a few months after he started, the company had a potential disaster on its hands: they ran out of punch cards. Back then punch cards were used to store both data and programs. They used patterns of holes and non-holes to store numbers as binary in a way a computer could read. Without punch cards the computer could not work!

It had to though, because the payroll program had to run before the night was out. If it didn’t, no-one would be paid that month. Because he had studied the manuals in detail, more so than anyone else, Clarence was the only person who could work out how to reuse old punch cards. The problem was that the computer used a system called ‘parity checking’ to spot mistakes. In its simplest form, parity checking of a punch card involves adding an extra binary digit (an extra hole or non-hole) on the end of each number. This is done in a way that ensures that the number of holes is even. If there is an even number of holes already, the extra digit is left as a non-hole. If, on the other hand, there is an odd number of holes, a hole is punched as the extra digit. That extra binary digit isn’t part of the number. It’s just there so the computer can check whether the number has been corrupted. If a hole was accidentally (or otherwise) turned into a non-hole, or vice versa, this would show up: there would now be an odd number of holes. Special circuitry in the computer would spot this and spit out the card, rejecting it. Clarence knew how to switch that circuitry off. That meant they could change the numbers on the cards by adding new holes, without the cards being rejected.
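In modern terms that parity scheme is easy to sketch. The snippet below is a simplified illustration (not the original machine’s circuitry): it treats one number on a card as a list of 1s (holes) and 0s (non-holes).

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s (holes) is even."""
    return bits + [sum(bits) % 2]

def parity_ok(bits_with_parity):
    """The check the card reader performs: an odd count means corruption."""
    return sum(bits_with_parity) % 2 == 0

number = [1, 0, 1, 1]        # four data holes (an odd count of 1s)
card = add_parity(number)    # parity bit makes the count even
assert parity_ok(card)

card[0] = 0                  # one hole corrupted into a non-hole
assert not parity_ok(card)   # the circuitry would spit this card out
```

Switching the parity circuitry off, as Clarence did, simply means skipping the `parity_ok` check, so re-punched cards are accepted even though their hole counts no longer balance.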

After that success he was allowed to become a real operator and was relied on to troubleshoot whenever there were problems. His career was up and running.

Clicking icons

He later worked at Xerox PARC, a massively influential research centre. He was part of the team that invented graphical user interfaces (GUIs). With GUIs, Xerox PARC completely transformed the way we use computers. Instead of typing obscure and hard-to-remember commands, they introduced the now standard ideas of windows, icons, dragging and dropping, using a mouse, and more. Clarence himself has been credited with inventing the idea of clicking on an icon to run a program.

Writing Together

As if that wasn’t enough of an impact, he went on to help make groupware a reality: software that supports people working together. His focus was on software that let people write a document together. With Simon Gibbs he developed a crucial algorithm called Operational Transformation. It allows people to edit the same document at the same time without it becoming hopelessly muddled. This is actually very challenging. You have to ensure that two (or more) people can change the text at exactly the same time, and even at the same place, without each ending up with a different version of the document.

The actual document sits on a server computer, which must make sure that its copy is always the same as the ones everyone is individually editing. When people type changes into their local copy, the master is sent messages informing it of the actions they performed. The trouble is that the order those messages arrive in can change what happens. Clarence’s operational transformation algorithm solved this by transforming the commands from each person into ones that work consistently whatever order they are applied in. It is the transformed operation that is applied to the master. That master version is then the version everyone sees as their local copy. Ultimately, everyone sees the same version. This algorithm is at the core of programs like Google Docs, which have made collaborative editing of documents commonplace.
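Ellis and Gibbs’ real algorithm handles deletions, many users and much more, but the core trick can be sketched in a few lines. In this toy version (all the details are made up for illustration) each operation inserts one character, and transforming each user’s operation against the other’s makes both orders of application give the same document:

```python
def transform(op_a, op_b):
    """Adjust insert op_a so it still means the same thing after op_b
    has been applied. Ops are (position, character, site_id); the site
    id breaks the tie when two people insert at exactly the same spot."""
    pos_a, ch_a, site_a = op_a
    pos_b, ch_b, site_b = op_b
    if pos_b < pos_a or (pos_b == pos_a and site_b < site_a):
        return (pos_a + 1, ch_a, site_a)   # op_b landed earlier: shift right
    return op_a

def apply(doc, op):
    pos, ch, _ = op
    return doc[:pos] + ch + doc[pos:]

doc = "ham"
alice = (0, 's', 1)   # Alice types 's' at the front: "sham"
bob = (3, 's', 2)     # at the same moment Bob types 's' at the end: "hams"

# Apply Alice's edit first, then Bob's transformed against Alice's...
one_order = apply(apply(doc, alice), transform(bob, alice))
# ...or Bob's first, then Alice's transformed against Bob's:
other_order = apply(apply(doc, bob), transform(alice, bob))
assert one_order == other_order == "shams"
```

Without the transform step, applying the raw operations in different orders would leave Alice’s and Bob’s copies disagreeing, which is exactly the muddle the algorithm prevents.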

Clarence Ellis started his career with a lonely job. By the end of his career he had helped ensure that writing on a computer at least no longer needs to be a lonely affair.


This article was originally published on the CS4FN website. One of the aims of our Diversity in Computing posters (see below) is to help a classroom of young people see the range of computer scientists which includes people who look like them and people who don’t look like them. You can download our posters free from the link below.


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


This blog is funded through EPSRC grant EP/W033615/1.

CS4FN Advent – Day 25: Merry Christmas! Today’s post is about the ‘wood computer’

Today is the final post in our CS4FN Christmas Computing Advent Calendar – it’s been a lot of fun rummaging in the CS4FN back catalogue, and also finding out about some new things to write about.

Each day we published a blog post about computing with the theme suggested by the picture on the advent calendar’s ‘door’. Our first picture was a woolly jumper so the accompanying post was about the links between knitting and coding, the door with a picture of a ‘pair of mittens’ on led to a post about pair programming and gestural gloves, a patterned bauble to an article about printed circuit boards, and so on. It was fun coming up with ideas and links and we hope it was fun to read too.

We hope you enjoyed the series of posts (scroll to the end to see them all) and that you have a very Merry Christmas. Don’t forget that if you’re awake and reading this at the time it’s published (6.30am Christmas Day) and it’s not cloudy, you may be able to see Father Christmas passing overhead at 6.48am. He’s just behind the International Space Station…

And on to today’s post which is accompanied by a picture of a Christmas Tree, so it’ll be a fairly botanically-themed post. The suggestion for this post came from Prof Ursula Martin of Oxford University, who told us about the ‘wood computer’.

It’s a Christmas tree!

The Wood Computer

by Jo Brodie, QMUL.

If you come across an unfamiliar tree while out enjoying a nice walk, then, other than asking someone “do you know what this tree is?”, working out what it is would usually involve some sort of key: a set of questions that help you distinguish between the different possibilities. You can see an example of the sorts of features you might want to consider in the Woodland Trust’s page on “How to identify trees“.

Depending on the time of year you might consider its leaves – do they have stalks or not, do they sit opposite from each other on a twig or are they diagonally placed etc. You can work your way through leaf colour, shape, number of lobes on the leaf and also answer questions about the bark and other features of your tree. Eventually you narrow things down to a handful of possibilities.

What happens if the tree is cut up into timber and your job is to check you’re buying the right wood for your project? If you’re not a botanist the job is a little harder: you’d need to consider things like the pattern of the grain, the hardness, the colour and any scent from the tree’s oils.

Historically, one way of working out which piece of timber was in front of you was to use a ‘wood computer’ or wood identification kit. This was prepared (programmed!) from a series of index cards with various wood features printed on all the cards – there might be over 60 different features.

Every card had the same set of features on it and a hole punched next to every feature. You can see an example of a ‘blank’ card below, which has a row of regularly placed holes around the edge. This one happens to be being used as a library card rather than a wood computer (though if we consider what books are made of…).

Image of an edge-notched card (actually being used as a library card though), from Wikipedia.

I bet you can imagine inserting a thin knitting needle into any of those holes and lifting that card up – in fact that’s exactly how you’d use the wood computer. In the tweet below you can see several cards that made up the wood computer.

One card was for one tree or type of wood, and the programmer would notch the hole next to each feature that particularly defined that type. For example, you’d notch ‘has apples’ for the apple tree card but leave it as an intact hole on the pear tree card. If a particular type of timber had fine-grained wood they’d notch the hole next to “fine-grained”. The cards were known, not too surprisingly, as edge-notched cards.

You can see what one looks like here with some notches cut into it. You might have spotted how knitting needles can help you in telling different woods apart.

Holes and notches

Edge-notched card overlaid on black background, with two rows of holes. On the top a hole in the first row is notched, on the right hand side two holes are notched. Image from Wikipedia.

Each card would end up with a slightly different pattern of notched holes, giving you a deck of cards that are each slightly different from the others.

Example ‘wood computer’. At the end of your search (to find out which tree your piece of wood came from) you are left with two cards for fine-grained wood. If your sample has a strong scent then it’s likely it’s the tree in the card on the right (though you could arrive at the same conclusion by using the differences in colour too). The card at the top is the blank un-notched card.

How it works

Your wood computer is basically a stack of cards, all lined up, plus that knitting needle. You pick a feature that your tree or piece of wood has, put your needle through that hole, and lift. All of the cards that don’t have that feature notched will have an un-notched hole and will stay hanging from your knitting needle. All of the cards for woods that do have that feature have now been sorted from your pile and are sitting on the table.

You can repeat the process several times to whittle (sorry!) your cards down by choosing a different feature to sort them on.
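The whole process is easy to simulate. In this sketch (the trees and features are invented for illustration) each card is a name plus the set of features notched on its edge, and the `needle` function plays the part of the knitting needle:

```python
# Each card: tree name plus the set of features notched on its edge.
cards = [
    ("Apple", {"has fruit", "fine-grained"}),
    ("Pear",  {"has fruit"}),
    ("Cedar", {"fine-grained", "strong scent"}),
    ("Oak",   {"coarse-grained"}),
]

def needle(cards, feature):
    """Put the needle through one hole and lift: notched cards drop to
    the table (selected); cards with an intact hole stay hanging."""
    dropped = [c for c in cards if feature in c[1]]
    hanging = [c for c in cards if feature not in c[1]]
    return dropped, hanging

# Two lifts whittle the deck down to a single card:
dropped, _ = needle(cards, "fine-grained")
dropped, _ = needle(dropped, "strong scent")
assert [name for name, _ in dropped] == ["Cedar"]
```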

The advantage of the cards is that they are incredibly low tech, requiring no electricity or phone signal and they’re very easy to use without needing specialist botanical knowledge.

You can see a diagram of one on page 8 of the 20 page PDF “Indian Standard: Key for identification of commercial timbers”, from 1974.

Teachers: we have a classroom sorting activity that uses the same principles as the wood computer. Download our Punched Card Searching PDF from our activity page.

The creation of this post was funded by UKRI, through grant EP/K040251/2 held by Professor Ursula Martin, and forms part of a broader project on the development and impact of computing.

Previous Advent Calendar posts

CS4FN Advent – Day 23: Bonus material – see “Santa’s sleigh” flying overhead (23 December 2021) – this was an extra post so that people could get ready to see “Father Christmas” passing overhead on Christmas Day at 6:48am.

CS4FN Advent – Day 25: Merry Christmas! Today’s post is about the ‘wood computer’ (25 December 2021) – this post


CS4FN Advent – Day 22: stars and celestial navigation

Every day from the 1st to the 25th of December this blog will publish a Christmas Computing post, as part of our CS4FN Christmas Computing Advent Calendar. On the front of the calendar for each day is a festive cartoon which suggests the post’s theme – today’s is a star, so today’s post is about finding your way: navigation.

Follow that star…


In modern cities looking up at the night sky is perhaps not as dramatic as it might have been in the past, or in a place with less light pollution. For centuries people have used stars and the patterns they form to help them find their way.

GPS Orbital Navigator Satellite (DRAGONSat), photograph by NASA.

There are many ways our explorations of space have led to new technologies, though satellites have perhaps had the most obvious effect on our daily lives. Early uses were just for communication, allowing live news reports from the other side of the world via networks that span the globe. More recently GPS (the Global Positioning System) has led to new applications, and now we generally just use our phones or satnav to point us in the right direction.


The very first computers

by Paul Curzon, QMUL. This post was first published on the CS4FN website.

Victorian engineer Charles Babbage designed, though never built, the first mechanical computer. Computers had actually existed for a long time before he had his idea, though. British superiority at sea, and ultimately the Empire, was already dependent on them: they were used to calculate the books of numbers that British sailors relied on to navigate the globe. The original meaning of the word ‘computer’ was a person who did these calculations. The first computers were humans.

An American almanac from 1816. Image by Dave Esons from Pixabay

Babbage became interested in the idea of creating a mechanical computer in part because of computing work he did himself, calculating accurate versions of numbers needed for a special book: ‘The Nautical Almanac’. It was a book of astronomical tables, the result of an idea of Astronomer Royal, Nevil Maskelyne. It was the earliest way ships had to reliably work out their longitudinal (i.e., east-west) position at sea. Without them, to cross the Atlantic, you just set off and kept going until you hit land, just as Columbus did. The Nautical Almanac gave a way to work out how far west you were all the time.

Maskelyne’s idea was based on the fact that the angle between the moon and a given star, as measured by a person on the Earth, was the same at the same moment wherever that person was looking from (as long as they could see both the star and the moon at once). This angle was called the lunar distance.

The lunar distance could be used to work out where you were because as time passed its value changed but in a predictable way based on Newton’s Laws of motion applied to the planets. For a given place, Greenwich say, you could calculate what that lunar distance would be for different stars at any time in the future. This is essentially what the Almanac recorded.

Moon image by PollyDot from Pixabay

Now, the time changes as you move east or west: dawn arrives later the further west you go, for example, because as the Earth rotates the sun comes into view at different times around the planet. That is why we have different time zones. The time in the USA is hours behind that in Britain, which itself is behind that in China. Now suppose you know your local time, which you can check regularly from the position of the sun or moon, and you know the lunar distance. You can look up in the Almanac the time in Greenwich at which that lunar distance occurs, and that gives you the current time in Greenwich. The greater the difference between that time and your local time, the further west (or east) you are. It is because Greenwich was used as the fixed point for working out the lunar distances that we now use Greenwich Mean Time as UK time. The time in Greenwich was the one that mattered!
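The arithmetic behind this is simple: the Earth turns 360 degrees in 24 hours, so every hour of difference between Greenwich time and local time corresponds to 15 degrees of longitude. As a sketch:

```python
# The Earth turns 360 degrees in 24 hours, i.e. 15 degrees per hour,
# so each hour of difference from Greenwich time is 15 degrees of longitude.
def longitude(local_time_hours, greenwich_time_hours):
    """Negative result = degrees west of Greenwich, positive = east."""
    return 15 * (local_time_hours - greenwich_time_hours)

# It is local noon, but the Almanac lookup says it is already 3pm in
# Greenwich: the ship must be 45 degrees west of Greenwich.
assert longitude(12, 15) == -45
```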

This was all wonderful. Sailors just had to take astronomical readings, do some fairly simple calculations and a look-up in the Almanac to work out where they were. However, there was a big snag: it relied on all those numbers in the tables having been accurately calculated in advance. That took some serious computing power. Maskelyne therefore employed teams of human ‘computers’ across the country, paying them to do the calculations for him. These men and women were the first industrial computers.

Book of logarithms, image by sandid from Pixabay

Before pocket calculators were invented in the 1970s, the easiest way to do calculations, whether big multiplications, divisions, powers or square roots, was to use logarithms (not to be confused with algorithms). The logarithm of a number is, roughly, the number of times you can divide it by 10 before you get to 1. Complicated calculations can be turned into simple ones using logarithms: multiplying two numbers, for example, becomes just adding their logarithms. Therefore the equivalent of the pocket calculator was a book containing a table of logarithms. Log tables were the basis of all other calculations, including maritime ones. Babbage himself became a human computer, doing calculations for the Nautical Almanac. He calculated the most accurate book of log tables then available for the British Admiralty.
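The trick that made log tables so useful is that adding logarithms multiplies the numbers. In this sketch, Python’s `math.log10` stands in for looking a number up in the printed tables, and raising 10 to a power stands in for the reverse (“antilog”) look-up:

```python
import math

# Multiplying with log tables: look up the two logs, add them,
# then look the answer up backwards (an "antilog").
a, b = 347, 29
log_sum = math.log10(a) + math.log10(b)
product = 10 ** log_sum

# Adding logs really did do the multiplication:
assert round(product) == 347 * 29
```

A big multiplication has been replaced by one addition plus three table look-ups, which is exactly why every navigator carried a book of logs.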

The mechanical computer came about because Babbage was also interested in finding the most profitable ways to mechanise work in factories. He realised a machine could do more than weave cloth: it might also do calculations. More to the point, such a machine would be able to do them with a guaranteed accuracy, unlike people. He therefore spent his life designing and then trying to build such a machine. It was a revolutionary idea, and while his design worked, the level of precision engineering needed was beyond what could then be done. It was another hundred years before the first electronic computers were invented – again to replace human computers working in the national interest… but this time at Bletchley Park, doing the calculations needed to crack the German military codes and so win World War II.


The creation of this post was funded by UKRI, through grant EP/K040251/2 held by Professor Ursula Martin, and forms part of a broader project on the development and impact of computing.


Previous Advent Calendar posts

CS4FN Advent – Day 1 – Woolly jumpers, knitting and coding (1 December 2021)

CS4FN Advent – Day 2 – Pairs: mittens, gloves, pair programming, magic tricks (2 December 2021)

CS4FN Advent – Day 3 – woolly hat: warming versus cooling (3 December 2021)

CS4FN Advent – Day 4 – Ice skate: detecting neutrinos at the South Pole, figure-skating motion capture, Frozen and a puzzle (4 December 2021)

CS4FN Advent – Day 5 – snowman: analog hydraulic computers (aka water computers), digital compression, and a puzzle (5 December 2021)

CS4FN Advent – Day 6 – patterned bauble: tracing patterns in computing – printed circuit boards, spotting links and a puzzle for tourists (6 December 2021)

CS4FN Advent – Day 7 – Computing for the birds: dawn chorus, birds as data carriers and a Google April Fool (plus a puzzle!) (7 December 2021)

CS4FN Advent – Day 8: gifts, and wrapping – Tim Berners-Lee, black boxes and another computing puzzle (8 December 2021)

CS4FN Advent – Day 9: gingerbread man – computing and ‘food’ (cookies, spam!), and a puzzle (9 December 2021)

CS4FN Advent – Day 10: Holly, Ivy and Alexa – chatbots and the useful skill of file management. Plus win at noughts and crosses (10 December 2021)

CS4FN Advent – Day 11: the proof of the pudding… mathematical proof (11 December 2021)

CS4FN Advent – Day 12: Computer Memory – Molecules and Memristors (12 December 2021)

CS4FN Advent – Day 13: snowflakes – six-sided symmetry, hexahexaflexagons and finite state machines in computing (13 December 2021)

CS4FN Advent – Day 14 – Why is your internet so slow + a festive kriss-kross puzzle (14 December 2021)

CS4FN Advent – Day 15 – a candle: optical fibre, optical illusions (15 December 2021)

CS4FN Advent – Day 16: candy cane or walking aid: designing for everyone, human computer interaction (16 December 2021)

CS4FN Advent – Day 17: reindeer and pocket switching (17 December 2021)

CS4FN Advent – Day 18: cracker or hacker? Cyber security (18 December 2021)

CS4FN Advent – Day 19: jingle bells or warning bells? Avoiding computer scams (19 December 2021)

CS4FN Advent – Day 20: where’s it @? Gift tags and internet addresses (20 December 2021)

CS4FN Advent – Day 21: wreaths and rope memory – weave your own space age computer (21 December 2021)

CS4FN Advent – Day 22: stars and celestial navigation (22 December 2021) – this post

CS4FN Advent – Day 5 – snowman: analog hydraulic computers (aka water computers), digital compression, and a puzzle

This post is behind the 5th ‘door’ of the CS4FN Christmas Computing Advent Calendar – we’re publishing a computing-themed (and sometimes festive-themed) post every day until Christmas Day. Today’s picture is a snowman, and what’s a snowman made of but frozen water?

1. You can make a computer out of water!

In 1936 Vladimir Lukyanov got creative with some pipes and pumps and built a computer, called a water (or hydraulic) integrator, which could store water temporarily in some parts and pump it to others. The movement of the water, and where it ended up, gave him a physical representation of the answer to some Very Hard Sums (equations and calculations that are easier now thanks to much faster computers).

A simple and effective way of using water to show a mathematical relationship popped up on QI and the video below demonstrates Pythagoras’ Theorem rather nicely.

In 1939 Lukyanov published an article about his analog hydraulic computer in the journal of the ‘Otdeleniye Tekhnicheskikh Nauk’ (‘Отделение технических наук’ in Russian, meaning Section for Technical Scientific Works, although these days we’d probably say Department of Engineering Sciences), and in 1955 this was translated by the Massachusetts Institute of Technology (MIT) for the US army’s “Arctic Construction and Frost Effects Laboratory”. You can see a copy of his translated ‘Hydraulic Apparatus for Engineering Computations‘ at the Internet Archive.

In a rather pleasing coincidence for this blog post (that you might think was by design rather than just good fortune) this device was actually put to work by the US Army to study the freezing and thawing not of snowmen but of soil (ie, the ground). It’s particularly useful if you’re building and maintaining a military airfield (or even just roads) to know how well the concrete runway will survive changes in weather (and how well your aircraft’s wheels will survive after meeting it).

For a modern take on the ‘hydrodynamic calculating machine’ aka water computer see this video from science communicator Steve Mould in which he creates a computer that can do some simple additions.


2. The puzzle of digital compression

Our snowman’s been sitting around for a while and his ice has probably become a bit compacted, so he might be taking up less space (or he might have melted). Compression is a technique computer scientists use to make big data files smaller.

Big files take a long time to transfer from one place to another. The more data the longer it takes, and the more memory is needed to store the information. Compressing the files saves space. Data on computers is stored as long sequences of characters – ultimately as binary 1s and 0s. The idea with compression is that we use an algorithm to change the way the information is represented so that fewer characters are needed to store exactly the same information.

That involves using special codes: each common word or phrase is replaced by a shorter sequence of symbols. A long file can be made much shorter if it has lots of similar sequences, just as the message below has been shortened. A second algorithm can then be used to get the original back. We’ve turned the idea into a puzzle that involves matching patterns from the code book. Can you work out what the original message was? (Answer tomorrow.)

The code: NG1 AMH5 IBEC2 84F6JKO 7JDLC93 (clue: Spooky apparitions are about to appear on Christmas Eve)

The code book (match the letter or number to the word it codes for).
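This kind of substitution compression is easy to sketch. The codebook below is made up for illustration (it is not the codebook from the puzzle image):

```python
# A made-up codebook: each symbol stands in for a whole word.
codebook = {"1": "merry", "2": "christmas", "3": "and", "4": "a", "5": "happy"}

def compress(text, codebook):
    """Replace each known word with its short symbol."""
    reverse = {word: symbol for symbol, word in codebook.items()}
    return " ".join(reverse.get(word, word) for word in text.split())

def decompress(message, codebook):
    """Run the codebook the other way to get the original text back."""
    return " ".join(codebook.get(symbol, symbol) for symbol in message.split())

message = compress("merry christmas and a merry christmas", codebook)
assert message == "1 2 3 4 1 2"                       # much shorter!
assert decompress(message, codebook) == "merry christmas and a merry christmas"
```

Notice that repeated phrases (“merry christmas”) compress especially well, which is exactly why files with lots of similar sequences shrink the most.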

3. Answer to yesterday’s puzzle

The creation of this post was funded by UKRI, through grant EP/K040251/2 held by Professor Ursula Martin, and forms part of a broader project on the development and impact of computing.

Letters from the Victorian Smog: Braille: binary, bits & bytes

by Paul Curzon, Queen Mary University of London

Reading Braille image by Myriams-Fotos from Pixabay

We take for granted that computers use binary to represent numbers, letters, or more complicated things like music and pictures… any kind of information. That was something Ada Lovelace realised very early on. Binary wasn’t invented for computers though. Its first modern use as a way to represent letters was actually invented in the first half of the 19th century, and it is still used today: Braille.

Braille is named after its inventor, Louis Braille. He was born six years before Ada, though they probably never met as he lived in France. He was blinded as a child in an accident, and in 1824, when he was only 15, he invented the first version of Braille as a way for blind people to read. What he came up with was a representation for letters that a blind person could read by touch.

Choosing a representation for the job is one of the most important parts of computational thinking. It really just means deciding how information is going to be recorded. Binary gives ways of representing any kind of information in a form that is easy for computers to process. The idea is just that you create codes for things made up of only two different characters: 1 and 0. For example, you might decide that the binary for the letter ‘p’ was 01110000. For the letter ‘c’, on the other hand, you might use the code 01100011. The capital letters ‘P’ and ‘C’ would have completely different codes again. This is a good representation for computers to use as the 1s and 0s can themselves be represented by high and low voltages in electrical circuits, or switches being on or off.
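Those example codes aren’t arbitrary, in fact: they are the standard ASCII/Unicode codes for those very letters, as a couple of lines of Python show:

```python
# The 8-bit codes in the text are the standard ASCII codes for 'p' and 'c':
assert format(ord('p'), '08b') == '01110000'
assert format(ord('c'), '08b') == '01100011'

# The capital letters get completely different codes again:
assert format(ord('P'), '08b') == '01010000'
assert format(ord('C'), '08b') == '01000011'
```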

The first representation Louis Braille chose wasn’t great though. It had dots, dashes and blanks – a three symbol code rather than the two of binary. It was hard to tell the difference between the dots and dashes by touch, so in 1837 he changed the representation – switching to a code of dots and blanks.

He had invented the first modern form of writing based on binary.

Braille works in the same way as modern binary representations for letters. It uses collections of raised dots (1s) and no dots (0s) to represent them. Each gives a bit of information in computer science terms. To make the bits easier to touch they’re grouped into pairs. To represent all the letters of the alphabet (and more) you just need 3 pairs as that gives 64 distinct patterns. Modern Braille actually has an extra row of dots giving 256 dot/no dot combinations in the 8 positions so that many other special characters can be represented. Representing characters using 8 bits in this way is exactly the equivalent of the computer byte.

Modern computers use a standardised code, called Unicode. It gives an agreed code for referring to the characters in pretty well every language ever invented including Klingon! There is also a Unicode representation for Braille using a different code to Braille itself. It is used to allow letters to be displayed as Braille on computers! Because all computers using Unicode agree on the representations of all the different alphabets, characters and symbols they use, they can more easily work together. Agreeing the code means that it is easy to move data from one program to another.
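You can play with that Unicode representation directly: Unicode’s Braille Patterns block starts at code point U+2800, and each of the eight dot positions corresponds to one bit of the offset from it. A small sketch:

```python
# Unicode's Braille Patterns block starts at U+2800; dot 1 maps to
# bit 0, dot 2 to bit 1, and so on up to dot 8 and bit 7.
def braille(dots):
    """Build the Unicode Braille character for a set of raised dots."""
    mask = 0
    for d in dots:
        mask |= 1 << (d - 1)
    return chr(0x2800 + mask)

assert braille([1]) == "⠁"      # dot 1 alone is Braille 'a'
assert braille([1, 2]) == "⠃"   # dots 1 and 2 make Braille 'b'
```

Eight bits of dot pattern, one character: the byte-sized representation the article describes, agreed across every Unicode computer.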

The 1830s were an exciting time to be a computer scientist! This was around the time Charles Babbage met Ada Lovelace and they started to work together on the analytical engine. The ideas that formed the foundation of computer science must have been in the air, or at least in the Victorian smog.

**********************************

Further reading

This post was first published on CS4FN and also appears on page 7 of Issue 20 of the CS4FN magazine. You can download a free PDF copy of the magazine here as well as all of our previous magazines and booklets, at our free downloads site.

The RNIB has guidance for sighted people who might be producing Braille texts for blind people, about how to use Braille on a computer and get it ready for correct printing.

This History of Braille article also references an earlier ‘Night Writing’ system developed by Charles Barbier to allow French soldiers in the 1800s to read military messages without using a lamp (which gave away their position, putting them at risk). Barbier’s system inspired Braille to create his.

A different way of representing letters is Morse Code which is a series of audible short and long sounds that was used to communicate messages very rapidly via telegraphy.

Find out about Abraham Louis Breguet’s ‘Tactful Watch‘ that let people work out what time it was by feel, instead of rudely looking at their watch while in company.

Ada Lovelace: Visionary

Cover of Issue 20 of CS4FN, celebrating Ada Lovelace

By Paul Curzon, Queen Mary University of London

It is 1843, Queen Victoria is on the British throne. The industrial revolution has transformed the country. Steam, cogs and iron rule. The first computers won’t be successfully built for a hundred years. Through the noise and grime one woman sees the future. A digital future that is only just being realised.

Ada Lovelace is often said to be the first programmer. She wrote programs for a designed, but yet to be built, computer called the Analytical Engine. She was something much more important than a programmer, though. She was the first truly visionary person to see the real potential of computers. She saw they would one day be creative.

Charles Babbage had come up with the idea of the Analytical Engine – how to make a machine that could do calculations so we wouldn’t need to do them by hand. It would be another century before his ideas could be realised and the first computer was actually built. As he tried to raise the money and build the computer, he needed someone to help write the programs to control it – the instructions that would tell it how to do calculations. That’s where Ada came in. They worked together to try to realise their joint dream, working out between them how to program.

Ada also wrote “The Analytical Engine has no pretensions to originate anything.” So how does that fit with her belief that computers could be creative? Read on and see if you can unscramble the paradox.

Ada was a mathematician with a creative flair. While Charles had come up with the innovative idea of the Analytical Engine itself, he didn’t see beyond his original idea of the computer as a calculator. She saw that computers could do much more than that.

The key innovation behind her idea was that the numbers could stand for more than just quantities in calculations. They could represent anything – music for example. Today when we talk of things being digital – digital music, digital cameras, digital television, all we really mean is that a song, a picture, a film can all be stored as long strings of numbers. All we need is to agree a code of what the numbers mean – a note, a colour, a line. Once that is decided we can write computer programs to manipulate them, to store them, to transmit them over networks. Out of that idea comes the whole of our digital world.
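Ada’s insight can be sketched in a few lines. This is a toy illustration, not any real music format: the note names and numbers below are an invented “agreed code”, but the principle – decide what the numbers mean, then any program can store, move and recover the data – is exactly the one described above.

```python
# A toy "agreed code": number each note of the scale. A tune then
# becomes just a list of numbers that any program can store or transmit.
NOTE_CODE = {'C': 0, 'D': 1, 'E': 2, 'F': 3, 'G': 4, 'A': 5, 'B': 6}
DECODE = {number: name for name, number in NOTE_CODE.items()}

tune = ['E', 'D', 'C', 'D', 'E', 'E', 'E']   # the opening of a simple tune
as_numbers = [NOTE_CODE[n] for n in tune]    # what gets stored "digitally"
back_again = [DECODE[n] for n in as_numbers] # any program agreeing the code recovers it

print(as_numbers)  # [2, 1, 0, 1, 2, 2, 2]
```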

Ada saw even further though. She combined maths with a creative flair and so realised that not only could computers store and play music, they could also potentially create it – they could be composers. She foresaw the whole idea of machines being creative. She wasn’t just the first programmer, she was the first truly creative programmer.

This article was originally published at the CS4FN website, along with lots of other articles about Ada Lovelace. We also have a special Ada Lovelace-themed issue of the CS4FN magazine which you can download as a PDF (click picture below).

See also: The very first computers and Ada Lovelace Day (2nd Tuesday of October). Help yourself to our Women in Computing posters PDF (or sign up to get FREE copies posted to your school (UK-based only, please).


Emoticons and Emotions

Emoticons are a simple and easily understandable way to express emotions in writing using letters and punctuation without any special pictures, but why might Japanese emoticons be better than western ones? And can we really trust expressions to tell us about emotions anyway?

African woman smiling 
Image by Tri Le from Pixabay

The trouble with early online message board posts, email and text messages was that it was always more difficult to express subtleties, including intended emotions, than when talking to someone face to face. Jokes were often taken as serious and flame wars were the result. So when, in 1982, Carnegie Mellon Professor Scott Fahlman suggested the use of the smiley :-) to indicate a joke in message board posts, a step forward in global peace was probably made. Since posts more often than not seemed to be intended as jokes, he also suggested that a sad face :-( would be more useful, explicitly marking anything that wasn’t a joke.

He wasn’t actually the first to use punctuation characters to indicate emotions though. The earliest recorded use appears to be in a 1648 poem, ‘To Fortune’, by the English poet Robert Herrick.

Tumble me down, and I will sit
Upon my ruins, (smiling yet:)

Whether this was intentional is disputed, as punctuation wasn’t used consistently then. Perhaps the poet intended it, perhaps it was just a coincidental printing error, or perhaps it was a joke inserted by the printers. Whatever the truth, it is certainly an appropriate use (why not write your own emoticon poem?).

You might think that everyone uses the same emoticons you are familiar with, but different cultures use them in different ways. Westerners follow Fahlman’s suggestion, putting them on their side. In Japan, by contrast, they sit the right way up and, crucially, the emotion is all in the eyes rather than the mouth, which is represented by an underscore. In this style, happiness can be given by (^_^), and T or ;, as an indication of crying, can be used for sadness: (T_T) or (;_;). In South Korea the Korean alphabet is used, so a different set of letters is available (though the symbols are the right way up, as in the Japanese version).

Automatically understanding people’s emotions, whether by analysing text, faces or other aspects that can be captured, is an important area of research called sentiment analysis. It matters, amongst other things, to marketeers and advertisers wanting to work out whether people like their products, or what issues matter most to people in elections, so it is big business. Anyone who truly cracks it will be rich.

So, in reality, is the Western version or the Eastern version more accurate: are emotions better detected in the shape of the mouth or the eyes? With a smile at least, it turns out that the eyes really give away whether someone is happy, not the mouth. When people put on a fake smile their mouth curves just as with a natural smile. The difference between fake and genuine smiles, the one that really shows whether the person is happy, is in the eyes. A genuine smile is called a Duchenne smile after Duchenne de Boulogne, who in 1862 showed that when people find something genuinely funny the smile affects the muscles around their eyes. It causes a tell-tale crow’s-foot pattern in the skin at the sides of the eyes. Some people can fake a Duchenne smile too, though, so even that is not totally reliable.

As emoticons hint, because emotions are indicated in the eyes as much as in the mouth, sentiment analysis of emotions based on faces needs to focus on the whole face, not just the mouth. However, all may not be what it seems, as other research shows that most of the time people do not actually smile at all when genuinely happy. Just like emoticons, facial expressions are a way we tell other people what we want them to think our emotions are, not necessarily our actual emotions. Expressions are not a window into our souls, but a pragmatic way to communicate important information. They probably evolved for the same reason emoticons were invented: to avoid pointless fights. Researchers trying to create software that works out what we really feel may have their work cut out if their life’s work is to make them genuinely happy.

     ( O . O )
         0

– Paul Curzon, Queen Mary University of London, Summer 2021

The computer vs the casino: Wearable tech cheating

What happened when a legend of computer science took on the Las Vegas casinos? The answer, surprisingly, was the birth of wearable computing.

There have always been people looking to beat the system, to get that little bit extra of the odds going their way and clean up at the casino. Over the years maths and technology have both been used, from a hidden mechanical arm up the sleeve allowing a player to swap cards, to the more cerebral card counting, in which a player keeps a running total of the cards played so they can estimate when high-value cards will be dealt. One popular game to try to beat was roulette.

A spin of the wheel

Roulette, whose name comes from the French for ‘little wheel’, involves a dish containing a circular rotating part marked into red and black numbers. A simple version of the game was developed by the French mathematician Pascal, and it evolved over the centuries into a popular betting game. The central disc is spun and, as it rotates, a small ball is thrown into the dish. Players bet on the number at which the ball will eventually stop. The game is based on probability, but like most casino games there is a house advantage: the probabilities mean that the casino will tend to win more money than it loses.

Gamblers tried to work out betting strategies to win, but the random nature of where the ball stops thwarted them. In fact, the pattern of numbers produced by multiple roulette spins was so random that mathematicians and scientists used them as a random-number generator. Methods using them are even called Monte Carlo methods, after the famous casino town. They are ways to calculate difficult mathematical functions by taking thousands of random samples of their value at different random places.
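The classic textbook example of a Monte Carlo method is estimating pi: throw random points into a square and count the fraction that land inside a quarter circle. A minimal sketch in Python (today we would use a software random-number generator rather than a roulette wheel):

```python
import random

# Monte Carlo estimate of pi: scatter random points in the unit square
# and count the fraction landing inside the quarter circle of radius 1.
# That fraction approaches (area of quarter circle) / (area of square) = pi/4.
def estimate_pi(samples, seed=0):
    rng = random.Random(seed)  # seeded so the estimate is repeatable
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples

print(estimate_pi(100_000))  # close to 3.14159...
```

More samples give a better estimate, but slowly: halving the error takes roughly four times as many points.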

A mathematical system of betting wasn’t going to work to beat the game, but there was one possible weakness to be exploited: the person who ran the game and threw the ball into the wheel, the croupier.

No more bets please

There is a natural human instinct to spin the wheel and throw the ball in a consistent pattern. Each croupier who has played thousands of games has a slight bias in the speed and force with which they spin the wheel and throw the ball in. If you could just see where the wheel was when the spin started and the ball went in, you could use the short time before betting was suspended to make a rough guess at the area where the ball was more likely to land, giving you an edge. This is called ‘clocking the wheel’, but it requires great skill. You have to watch many games with the same croupier to gain even a tiny chance of working out where their ball will go. This isn’t cheating in the same way as physically tampering with the wheel using weights and magnets (which is illegal); it is the skill of the gambler’s observation that gives the edge. Casinos became aware of it and frequently changed the croupier on each game, so that players couldn’t watch long enough to work out the pattern. But if there were some technological way to work this out quickly, perhaps the game could be beaten.

Blackjack and back room

Enter Ed Thorp: in the 1950s, a graduate student in physics at MIT. Along with his interest in physics he had a love of gambling. Using his access to one of the world’s few room-filling IBM computers at the university, he was able to run the probabilities in card games, and used the results to write a scientific paper on a method to win at Blackjack. This paper brought him to the attention of Claude Shannon, the famous and rather eccentric father of information theory. Shannon loved to invent things: the flame-throwing trumpet, the insult machine and other weird and wonderful devices filled the basement workshop of his home. It was there that he and Ed decided to try to take on the casinos at roulette, and built arguably the first wearable computer.

Sounds like a win

The device comprised a pressure switch hidden in a shoe. When the ball was spun and passed a fixed point on the wheel, the wearer pressed the switch. A computer timer, strapped to the wrist, started and was used to track the progress of the ball as it passed around the wheel, using technology in place of human skill to clock the wheel. A series of musical tones told the person using the device where the ball would stop, each tone representing a separate part of the wheel. They tested the device in secret and found that using it gave them a 44% increased chance of correctly predicting the winning numbers. They decided to try it for real … and it worked! However, the fine wires that connected the computer to the earpiece kept breaking, so they gave up after winning only a few dollars. The device, though very simple and built for a single purpose, is in the computing museum at MIT. The inventors eventually published the details in a 1998 scientific paper called “The Invention of the First Wearable Computer”.
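We don’t know the exact workings of Thorp and Shannon’s device, but the core idea of turning an elapsed time into one of eight wheel sectors, each with its own tone, can be sketched. All the numbers here are hypothetical, and the assumption of constant wheel speed is a gross simplification of what any real predictor would need:

```python
# Hypothetical sketch of "clocking the wheel" by machine: time one
# revolution, then map where the ball should be at any later moment
# into one of 8 sectors - one musical tone per sector for the wearer.
NUM_SECTORS = 8

def predict_sector(revolution_time, elapsed_time):
    """Which eighth of the wheel the ball is passing after elapsed_time,
    assuming (unrealistically) constant speed measured from one timed lap."""
    fraction = (elapsed_time % revolution_time) / revolution_time
    return int(fraction * NUM_SECTORS)  # 0..7, i.e. which tone to play

# e.g. if a lap takes 2.0s, then after 2.5s the ball is a quarter-way
# round its second lap, which is sector 2 of 8:
print(predict_sector(2.0, 2.5))  # 2
```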

The long arm of the law reaches out

Others followed with similar systems built into shoes, developing more computers and software to help cheat at Blackjack too, but by the mid-1980s the casino authorities had become wise to this way of winning, and new laws were introduced to prevent the use of technology to give unfair advantages in casino games. It definitely is cheating now. If you look at the rules for casinos today they specifically exclude the use of mobile phones at the table, for example, just in case your phone is running some clever app to scam the casino.

From its rather strange beginning, wearable computing has spun out into new areas and applications, and quite where it will go next is anybody’s bet.

– Peter W. McOwan, Queen Mary University of London, Autumn 2018

The very first computers

A head with numbers circling round and the globe in the middle

Victorian engineer Charles Babbage designed, though never built, the first mechanical computer. The first computers had actually existed for a long time before he had his idea, though. British superiority at sea, and ultimately the Empire, already depended on them. They were used to calculate the books of numbers that British sailors relied on to navigate the globe. The original meaning of the word computer was a person who did these calculations. The first computers were humans.

Babbage became interested in the idea of creating a mechanical computer in part because of computing work he did himself, calculating accurate versions of numbers needed for a special book: ‘The Nautical Almanac’. It was a book of astronomical tables, the result of an idea of Astronomer Royal, Nevil Maskelyne. It was the earliest way ships had to reliably work out their longitudinal (i.e., east-west) position at sea. Without them, to cross the Atlantic, you just set off and kept going until you hit land, just as Columbus did. The Nautical Almanac gave a way to work out how far west you were all the time.

Maskelyne’s idea was based on the fact that the angle from the moon to a person on the Earth and back to a star was the same at the same time wherever that person was looking from (as long as they could see both the star and moon at once). This angle was called the lunar distance.

The lunar distance could be used to work out where you were because as time passed its value changed but in a predictable way based on Newton’s Laws of motion applied to the planets. For a given place, Greenwich say, you could calculate what that lunar distance would be for different stars at any time in the future. This is essentially what the Almanac recorded.

Now, the time changes as you move east or west: dawn arrives later the further west you go, for example, because as the Earth rotates the sun comes into view at different times around the planet. That is why we have different time zones. The time in the USA is hours behind that in Britain, which itself is behind that in China. Now suppose you know your local time, which you can check regularly from the position of the sun or moon, and you know the lunar distance. You can look up in the Almanac the time in Greenwich at which that lunar distance occurs, and that gives you the current time in Greenwich. The greater the difference between that time and your local time, the further west (or east) you are. It is because Greenwich was used as the fixed point for working out the lunar distances that we now use Greenwich Mean Time as UK time. The time in Greenwich was the one that mattered!
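The arithmetic behind this is pleasingly simple: the Earth turns 360 degrees in 24 hours, so each hour of time difference corresponds to 15 degrees of longitude. A minimal sketch (the function name and example times are ours, for illustration):

```python
# The Earth rotates 360 degrees in 24 hours: 15 degrees per hour.
# So the gap between Greenwich time and local time gives your longitude.
def longitude_degrees(greenwich_time_hours, local_time_hours):
    """Negative result = west of Greenwich, positive = east."""
    return (local_time_hours - greenwich_time_hours) * 15

# A ship whose local noon comes when the Almanac says it is 16:00 in
# Greenwich is 4 hours behind, so 60 degrees west of Greenwich:
print(longitude_degrees(16.0, 12.0))  # -60.0
```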

This was all wonderful. Sailors just had to take astronomical readings, do some fairly simple calculations and a look-up in the Almanac to work out where they were. However, there was a big snag: it relied on all those numbers in the tables having been accurately calculated in advance. That took some serious computing power. Maskelyne therefore employed teams of human ‘computers’ across the country, paying them to do the calculations for him. These men and women were the first industrial computers.

Before pocket calculators were invented in the 1970s, the easiest way to do calculations, whether big multiplications, divisions, powers or square roots, was to use logarithms. The logarithm of a number is (roughly speaking) the number of times you can divide it by 10 before you get to 1. Complicated calculations can be turned into simple ones using logarithms. The equivalent of the pocket calculator was therefore a book containing a table of logarithms. Log tables were the basis of all other calculations, including maritime ones. Babbage himself became a human computer, doing calculations for the Nautical Almanac. He calculated the most accurate book of log tables then available for the British Admiralty.
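The trick that makes log tables so powerful is that logarithms turn multiplication into addition: log(a × b) = log(a) + log(b). A quick sketch, with Python’s `math.log10` standing in for the printed table:

```python
import math

# Logarithms turn multiplication into addition: log(a*b) = log(a) + log(b).
# With a log table, a long multiplication becomes: look up two logs,
# add them, then look up the answer backwards (the "anti-log").
a, b = 5972.0, 7348.0
log_sum = math.log10(a) + math.log10(b)  # one addition replaces the multiplication
product = 10 ** log_sum                  # the anti-log look-up

print(round(product))  # 43882256, the same as 5972 * 7348
```

A human computer did exactly these steps with a printed book: two look-ups, one addition, one reverse look-up, instead of long multiplication by hand.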

The mechanical computer came about because Babbage was also interested in finding the most profitable ways to mechanise work in factories. He realised a machine could do more than weave cloth: it might also do calculations. More to the point, such a machine would be able to do them with a guaranteed accuracy, unlike people. He therefore spent his life designing and then trying to build such a machine. It was a revolutionary idea and, while his design worked, the level of precision engineering needed was beyond what could then be done. It was another hundred years before the first electronic computer was invented – again to replace human computers working in the national interest… but this time at Bletchley Park, doing the calculations needed to crack the German military codes and so win the Second World War.

More on …