The Decline and Fall of Ada: Who’s popular now?

Audience image by Pexels from Pixabay

Ada (the language) is not the big player on the programming block these days. In 1997 the DoD1 cancelled its rule that you had to use Ada when working for it. Developers in commerce had always found Ada hard to work with and often preferred other languages. There are hundreds of other languages used in industry and by researchers. How many can you name?

Here are some fun clues about different languages. Can you work out their names?
(Answers at the end.)

  1. A big snake that will squash you dead.
  2. A famous Victorian woman who worked with Babbage.
  3. A, B, __
  4. A, B, __ (ouch)
  5. A precious, but misspelled, thing inside a shell.
  6. A tiny person chatting.
  7. A beautiful Indonesian island.
  8. A French mathematician and inventor famous for triangles.

(You can try an online version of our quiz here)

Today, the most popular programming languages are… well, we don’t know, because it depends on when you are reading this! What is fashionable, what is new, is always changing. Plus it’s hard to agree what ‘the most popular’ means for languages (and pop stars!). Is it the language with the most lines of code in use today? The favourite language of developers? The language everyone is learning? In July 2015 one particular website rated programming languages using features such as the number of skilled software engineers who can use the language, the number of courses to learn it and the number of search engine queries about it, and came up with this order:

  • 1) Java
  • 2) C
  • 3) C++
  • 4) C#
  • 5) Python

Where is Ada? 30th out of 100s! The same website had ranked Ada (the language) 3rd in 1985! What a fall from grace.

But have no fear, Ada still survives and lives on in millions of lines of code in avionics2, radar, space, shipboard, train, subway, nuclear reactor and DoD systems. Plus Ada is perhaps making a comeback. Ada 2012 is just being finalised, heralded by some as the next generation of engineering software with its emphasis on safety, security and reliability. So Ada, meet Ada: it looks like you will be remembered and used for a long time still.

GitHub is a place where lots of programmers now develop and save their code. It encourages programmers to share their work: a kind of modern-day, crowdsourced ‘mass of shared facts’, though coders would probably not say they did this just to ‘amuse their idle hours’. Popular languages on this platform are JavaScript, Java, Python, CSS, PHP, Ruby and C++. Ada doesn’t really feature, well not yet.

Jane Waite, Queen Mary University of London

  1. United States Department of Defense ↩︎
  2. Avionics (aviation electronics) includes all the electronics and software needed to fly aircraft safely. ↩︎

Related Magazine …

This article was originally published on page 19 of issue 20 of the CS4FN magazine. You can download a copy at the link below, and all of our previous magazine issues (free) here.




Answers to the quiz…

Answers: 1) Python 2) Ada 3) C 4) C# (C sharp) 5) Perl 6) Smalltalk 7) Java 8) Pascal

Victorian volunteers needed – the start of citizen science

What was Ada Lovelace thinking about when she wrote:
“If amateurs of either sex would amuse their idle hours with experiments on this subject, and would keep an accurate journal of their daily observations, we should have in a few years a mass of registered facts to compare with the observation of the scientific”.

Yes, crowdsourcing science experiments! Now we call it Citizen Science. She had just read a book by Baron von Reichenbach on magnetism in which he had suggested a whole host of experiments, such as moving magnets up and down a person’s body, showing people magnets in the dark, and asking people holding heavy and light magnets whether they felt any sensations. She could see that he had some great ideas, but she was not convinced by his examples alone.

Ada was not the only Victorian to ask the general public for help collecting data. Charles Darwin, the Origin of Species man, wrote to gardeners, diplomats, army officers and scientists across the world asking for information about the plants they grew and the animals (including people) they saw. This all helped him build up the concrete evidence that natural selection was the way evolution works. People even sent him gifts of live animals in the post. A Danish gentleman sent him a parcel of live barnacles. When they did not arrive on time, Darwin, desperate to dissect the specimens, panicked and got ready to offer a reward in the Times newspaper. Luckily they arrived intact, fresh and not too smelly!

Today we might take part in the RSPB’s Big Garden Birdwatch1, contribute to a blog, ‘favourite’ or ‘like’ a post on social media or vote for our favourite performer in a talent show. We participate, and ‘amuse our idle hours’, sometimes in the pursuit of science, sometimes not. Public research is a big new topic, with governments and companies looking to use people power. Innovations such as shared mapping systems ask users to upload details about a place, add photographs and rectify mistakes. Wikipedia is sourced by volunteers, with other volunteers checking accuracy. Galaxy Zoo volunteers even found a whole new planet that orbits four stars!

What would Ada be asking us to research? Test your own DNA and send in the results? Measure air quality and keep a record on a central database? Build your own ‘find a barnacle’ app? But rather than writing a journal or sending a parcel of barnacles, you would log it online, click a link or design your own survey. Ada’s computers are in on the act again.

Why not find a Citizen Science project on something you are interested in? Sometimes called public science or science outreach, such projects might be run by local universities, museums, your council or charities, or through crowdsourced internet projects such as www.zooniverse.org. Share what you do with others and spread Ada’s word to be a modern-day volunteer.

Jane Waite, Queen Mary University of London

  1. 23-25 January 2026: RSPB Big Garden Birdwatch – “Spend an hour watching the birds in your patch, between 23 and 25 January, and record all the birds that land.” You can also get your school involved in the Big Schools’ Birdwatch 2026. If you’re reading this after 25 January 2026 make a note in your diary to remind you to check next year! ↩︎


Related Magazine …

This article was originally published on page 13 of issue 20 of the CS4FN magazine. You can download a copy at the link below, and all of our previous magazine issues (free) here.




The Social Machine of Maths

In school we learn about the maths that others have invented: results that great mathematicians like Euclid, Pythagoras, Newton or Leibniz worked out. We follow algorithms they devised for getting results. Ada Lovelace was actually taught by one of the great mathematicians, Augustus De Morgan, who invented important laws, ‘De Morgan’s laws’, that are a fundamental basis for the logical reasoning computer scientists now use. Real maths is of course about discovering new results, not just using old ones, and the way that is done is changing.

We tend to think of maths as something done by individual geniuses: an isolated creative activity, to produce a proof that other mathematicians then check. Perhaps the greatest such feat of recent years was Andrew Wiles’ proof of Fermat’s Last Theorem. It was a proof that had evaded the best mathematicians for hundreds of years. Wiles locked himself away for 7 years to finally come up with a proof. Mathematics is now at a remarkable turning point. Computer science is changing the way maths is done. New technology is radically extending the power and limits of individuals. “Crowdsourcing” pulls together diverse experts to solve problems; computers that manipulate symbols can tackle huge routine calculations; and computers, using programs designed to verify hardware, check proofs that are just too long and complicated for any human to understand. Yet these techniques are currently used in stand-alone fashion, lacking integration with each other or with human creativity or fallibility.

‘Social machines’ are a whole new paradigm for viewing a combination of people and computers as a single problem-solving entity. The idea was identified by Tim Berners-Lee, inventor of the world-wide web. A project led by Ursula Martin at the University of Oxford explored how to make this a reality, creating a mathematics social machine – a combination of people, computers, and archives to create and apply mathematics. The idea is to change the way people do mathematics, so transforming the reach, pace, and impact of mathematics research. The first step involves social science rather than maths or computing though – studying what working mathematicians really do when working on new maths, and how they work together when doing crowdsourced maths. Once that is understood it will then be possible to develop tools to help them work as part of such a social machine.

The world-changing mathematics results of the future may be made by social machines rather than solo geniuses. Teamwork, with both humans and computers, is the future.

– Ursula Martin, University of Oxford
and Paul Curzon, Queen Mary University of London


Related Magazine …



The history of computational devices: automata, core rope memory (used by NASA in the Moon landings), Charles Babbage’s Analytical Engine (never built) and Difference Engine made of cog wheels and levers, mercury delay lines, standardising the size of machine parts, Mary Coombs and the Lyons tea shop computer, computers made of marbles, the I Ching and binary, Ada Lovelace and music, a computer made of custard, a way of sorting wood samples with index cards and how to work out your own programming origin story.




This blog is funded by EPSRC on research agreement EP/W033615/1.


Balls, beams and quantum computers – performing calculations with patterns of light

Photo credit: Galton Box by Klaus-Dieter Keller, Public Domain, via Wikimedia Commons, via the Wikipedia page for the Galton board

Have you played the seaside arcade game where shiny metal balls drop down to ping, ping off little metal pegs and settle in one of a series of channels? After you have fired lots of balls, do you notice a pattern as the silver spheres collect in the channels? A smooth glistening curve of tiny balls forms a dome: a bell curve. High scores are harder to get than lower ones. Francis Galton pops up again*, but this time as a fellow Victorian trend setter for future computer design.

Francis Galton invented this special combination of row after row of offset pins and narrow receiving channels to demonstrate a statistical theory called the normal distribution: the bell curve. Balls are more likely to bounce their way to the centre, distributing themselves in an elegant sweep down to the left and right edges of the board. But instead of ball bearings, Galton used beans, so it was called the bean machine. The point here though is that the machine does a computation – it computes the bell curve.
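
You can see the computation for yourself without a physical board. Here is a minimal Python sketch (our own toy example, nothing of Galton’s): each virtual ball bounces left or right at every row of pins, and counting where the balls land traces out the bell curve.

    import random
    from collections import Counter

    def galton_board(num_balls=1000, num_rows=12):
        """Drop balls through rows of pins. At each pin a ball bounces
        right (1) or left (0) with equal chance, so the channel it lands
        in is just how many times it bounced right."""
        counts = Counter()
        for _ in range(num_balls):
            channel = sum(random.randint(0, 1) for _ in range(num_rows))
            counts[channel] += 1
        return counts

    # Print a rough text histogram: the middle channels collect the most
    # balls, tracing out the bell curve the machine computes.
    counts = galton_board()
    for channel in range(0, 12 + 1):
        print(f"{channel:2d} | {'*' * (counts[channel] // 5)}")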

Skip forward 100 years and ‘Boson Samplers’, based on Galton’s bean machine, are being used to drive forward the next big thing in computer design, quantum computers.

Instead of beans or silver balls, computer scientists fire photons, particles of light, through minuscule channels on optical chips. These tiny bundles of energy bounce and collide to create a unique pattern, a distribution, though one that a normal digital computer would find hard to calculate. By setting it up in different ways, the patterns that result can correspond to different computations. It is computing answers to the different calculations set for it.

Through developing these specialised quantum circuits scientists are bouncing beams of light forwards on the path that will hopefully lead to conventional digital technology being replaced with the next generation of supercomputers.

Jane Waite, Queen Mary University of London


*Francis Galton appears earlier in Issue 20, you can read more about him on page 15 of the PDF. Although a brilliant mathematician he held views about people that are unacceptable today. In 2020 University College London (UCL) changed the name of its Galton Lecture Theatre, which had been named previously in his honour, to Lecture Theatre 115.

EPSRC supports this blog through research grant EP/W033615/1.

Understanding matters of the heart – creating accurate computer models of human organs

Colourful depiction of a human heart
Heart image by Gordon Johnson from Pixabay

Ada Lovelace, the ‘first programmer’, thought the possibilities of computer science stretched far wider than anyone else of her time realised. For example, she mused that one day we might be able to create mathematical models of the human nervous system, essentially describing how electrical signals move around the body. University of Oxford’s Blanca Rodriguez is interested in matters of the heart. She’s a bioengineer creating accurate computer models of human organs.

How do you model a heart? Well, you first have to create a 3D model of its structure. You start with MRI scans. They give you a series of pictures of slices through the heart. To turn that into a 3D model takes some serious computer science: image processing that works out, from the pictures, what is tissue and what isn’t. Next you do something called mesh generation. That involves breaking up the model into smaller parts. What you get is more than just a picture of the surface of the organ: it is an accurate model of its internal structure.

So far so good, but it’s still just the structure. The heart is a working, beating thing, not just a sculpture. To understand it you need to see how it works. Blanca and her team are interested in simulating the electrical activity in the heart – how electrical pulses move through it. To do this they create models of the way individual cells propagate an electrical signal. Once you have this you can combine it with the model of the heart’s structure to give one of how it works. You essentially have a lot of equations. Solving the equations gives a simulation of how electrical signals propagate from cell to cell.
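
We don’t have the team’s actual equations, but a toy ‘excitable medium’ gives a feel for what such a simulation does. In this rough Python sketch (entirely our own illustration, not Blanca’s model), each cell in a grid is resting, excited or recovering, and excitement spreads step by step to resting neighbours, a little like a wave of electrical activity crossing tissue.

    # A toy excitable medium: each cell is RESTING, EXCITED or RECOVERING.
    # An excited cell passes its signal to resting neighbours on the next
    # step, then needs a step to recover before it can fire again.
    RESTING, EXCITED, RECOVERING = 0, 1, 2
    SIZE = 20

    def step(grid):
        new = [[RESTING] * SIZE for _ in range(SIZE)]
        for y in range(SIZE):
            for x in range(SIZE):
                cell = grid[y][x]
                if cell == EXCITED:
                    new[y][x] = RECOVERING      # fired last step, now refractory
                elif cell == RECOVERING:
                    new[y][x] = RESTING         # recovered, ready to fire again
                else:
                    # A resting cell fires if any neighbour is excited.
                    neighbours = [grid[(y + dy) % SIZE][(x + dx) % SIZE]
                                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                  if (dy, dx) != (0, 0)]
                    if EXCITED in neighbours:
                        new[y][x] = EXCITED
        return new

    grid = [[RESTING] * SIZE for _ in range(SIZE)]
    grid[10][10] = EXCITED                      # a single 'pacemaker' stimulus
    for t in range(5):
        grid = step(grid)
        print(f"step {t}:", sum(row.count(EXCITED) for row in grid), "cells excited")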

The models Blanca’s team have created are based on a healthy rabbit heart. Now they have it, they can simulate it working and see if it corresponds to the results from lab experiments. If it does, then that suggests their understanding of how cells work together is correct. When the results don’t match, that is still good as it gives new questions to research. It would mean something about their initial understanding was wrong, so it would drive new work to fix the problem and so improve the models.

Once the models have been validated in this way – shown to be an accurate description of the way a rabbit’s heart works – they can be used to explore things you just can’t do with experiments: what happens when changes are made to the structure of the virtual heart, or how drugs change the way it works, for example. That can lead to new drugs.

They can also use it to explore how the human heart works. For example, early work has looked at the heart’s response to an electric shock. Essentially the heart reboots! That’s why when someone’s heart stops in hospital, the emergency team give it a big electric shock to get it going again. The model predicts in detail what actually happens to the heart when that is done. One of the surprising things is it suggests that how well an electric shock works depends on the particular structure of the person’s heart! That might mean treatment could be more effective if tailored for the person.

Computer modelling is changing the way science is done. It doesn’t replace experiments. Instead clinical work, modelling and experiments combine to give us a much deeper understanding of the way the world, our own hearts included, works.

Paul Curzon, Queen Mary University of London


The charity Cardiac Risk in the Young raises awareness of cardiac electrical rhythm abnormalities and supports testing (electrocardiograms and echocardiograms) for all young people aged 14-35.

EPSRC supports this blog through research grant EP/W033615/1.

A storm in a bell jar

Ada Lovelace was close friends with John Crosse, and knew his father Andrew: the ‘real Frankenstein’. Andrew Crosse apparently created insect life from electricity, stone and water…

Andrew Crosse was a ‘gentleman scientist’ doing science for his own amusement including work improving giant versions of the first batteries called ‘voltaic piles’. He was given the nickname ‘the thunder and lightning man’ because of the way he used the batteries to do giant discharges of electricity with bangs as loud as canons.

He hit the headlines when he appeared to create life from electricity, Frankenstein-like. This was an unexpected result of his experiments using electricity to make crystals. He was passing a current through water containing dissolved limestone over a period of weeks. In one experiment, about a month in, a perfect insect appeared apparently from nowhere, and soon after started to move. More and more insects then appeared over time. He mentioned it to friends, which led to a story in a local paper. It was then picked up nationally. Some of the stories said he had created the insects, and this led to outrage and death threats over his apparent blasphemy of trying to take the position of God.

(Does this start to sound like a modern social networking storm, trolls and all?) In fact he appears to have believed, and others agreed, that the mineral samples he was using must have been contaminated with tiny insect eggs that just naturally hatched. Scientific results are only accepted if they can be replicated. Others, who took care to avoid contamination, couldn’t get the same result. The secret of creating life had not been found.

While Mary Shelley, who wrote Frankenstein, did know Crosse, sadly perhaps, for the story’s sake, he can’t have been the inspiration for Frankenstein as has been suggested, given she wrote it decades earlier!

Paul Curzon, Queen Mary University of London (from the archive)






This blog is funded by EPSRC on research agreement EP/W033615/1.


EPSRC supported this article through research grants (EP/K040251/2, held by Professor Ursula Martin, as well as grant EP/W033615/1). 

Pass the screwdriver, Igor

Mary Shelley, Frankenstein’s monster and artificial life

Shortly after Ada Lovelace was born, so long before she made predictions about future “creative machines”, Mary Shelley, a friend of her father (Lord Byron), was writing a novel. In her book, Frankenstein, inanimate flesh is brought to life. Perhaps Shelley foresaw what is actually to come, what computer scientists might one day create: artificial life.

Life it may not be, but engineers are now doing pretty well in creating humanoid machines that can do their own thing. Could a machine ever be considered alive? The 21st century is undoubtedly going to be the age of the robot. Maybe it’s time to start thinking about the consequences in case they gain a sense of self.

Frankenstein was obsessed with creating life. In Mary Shelley’s story, he succeeded, though his creation was treated as a “Monster” struggling to cope with the gift of life it was given. Many science fiction books and films have toyed with these themes: the film Blade Runner, for example, explored similar ideas about how intelligent life is created; androids that believe they are human, and the consequences for the creatures concerned.

Is creating intelligent life fiction? Not totally. Several groups of computer scientists are exploring what it means to create non-biological life, and how it might be done. Some are looking at robot life, working at the level of insect life-forms, for example. Others are looking at creating intelligent life within cyberspace.

For 70 years or more scientists have tried to create artificial intelligences. They have had a great deal of success in specific areas such as computer vision and chess playing programs. They are not really intelligent in the way humans are, though they are edging closer. However none of these programs really cuts it as creating “life”. Life is something more than intelligence.

A small band of computer scientists have been trying a different approach that they believe will ultimately lead to the creation of new life forms: life forms that could one day even claim to be conscious (and who would we be to disagree with them if they think they are?). These scientists believe life can’t be engineered in a piecemeal way, but that the whole being has to be created as a coherent whole. Their approach is to build the basic building blocks and let life emerge from them.

Sodarace creatures racing over a bumpy terrain
A sodarace in action
by CS4FN

The outline of the idea could be seen in the game Sodarace, where you could build your own creatures that move around a virtual world, and even let them evolve. One approach to building a creature, such as a spider, would be to try to work out mathematical equations about how each leg moves and program those equations. The alternative, artificial-life way, as used in Sodarace, is instead to program up the laws of physics, such as gravity and friction, and how masses, springs and muscles behave according to those laws. Then you just put these basic bits together in a way that corresponds to a spider. With this approach you don’t have to work out in advance every eventuality (what if it comes to a wall? Or a cliff? Or bumpy ground?) and write code to deal with it. Instead natural behaviour emerges.
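
Here is a minimal Python sketch of that idea (our own toy example, not Sodarace’s actual code): we program only the general laws, gravity, friction and how a spring pulls its two masses back towards a rest length, and then watch what a particular arrangement of masses and springs does.

    # Only the laws are programmed: gravity, friction, a floor, and how a
    # spring pulls or pushes its two masses back towards its rest length.
    GRAVITY = 0.5
    FRICTION = 0.98
    FLOOR = 100

    class Mass:
        def __init__(self, x, y):
            self.x, self.y = x, y
            self.vx, self.vy = 0.0, 0.0

    class Spring:
        def __init__(self, a, b, rest_length, stiffness=0.1):
            self.a, self.b = a, b
            self.rest_length, self.stiffness = rest_length, stiffness

    def step(masses, springs):
        for s in springs:
            dx, dy = s.b.x - s.a.x, s.b.y - s.a.y
            length = (dx * dx + dy * dy) ** 0.5 or 1e-9
            force = s.stiffness * (length - s.rest_length)
            fx, fy = force * dx / length, force * dy / length
            s.a.vx += fx; s.a.vy += fy    # pull the masses together...
            s.b.vx -= fx; s.b.vy -= fy    # ...or push them apart
        for m in masses:
            m.vy += GRAVITY               # gravity pulls every mass down
            m.vx *= FRICTION; m.vy *= FRICTION
            m.x += m.vx; m.y += m.vy
            if m.y > FLOOR:               # a simple solid floor
                m.y, m.vy = FLOOR, 0.0

    # Two masses joined by one spring: drop them and behaviour just emerges.
    a, b = Mass(0, 0), Mass(30, 0)
    masses, springs = [a, b], [Spring(a, b, rest_length=20)]
    for _ in range(200):
        step(masses, springs)
    print(round(b.x - a.x))   # roughly 20: the spring has settled near its rest length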

The artificial life community believe that not just life-like movement but life-like intelligence can emerge in a similar way. Rather than programming the behaviour of muscles, you program the behaviour of neurones and then build brains out of them. That, it turns out, has been the key to the machine learning programs that are storming the world of Artificial Intelligence, turning it into an everyday tool. However, if aiming for artificial life, you would keep going and combine it with the basic biochemistry of an immune system, do a similar thing with a reproductive system, and so on.
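
Again as a rough Python sketch (a textbook toy, not any particular machine learning system): a single artificial neurone just adds up weighted inputs and ‘fires’ if the total passes a threshold. Real systems wire up millions of these and learn the weights automatically; here the weights are picked by hand so the neurone behaves like an AND gate.

    def neurone(inputs, weights, threshold):
        # Add up the weighted inputs and fire (1) if the total passes the threshold.
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total > threshold else 0

    # With these hand-picked weights the neurone only fires when both inputs are on.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", neurone([a, b], weights=[1.0, 1.0], threshold=1.5))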

Want to know more? A wonderful early book is Steve Grand’s “Creation”, on how he created what at the time was claimed to be “the nearest thing to artificial life yet”… It started life as the game “Creatures”.

Then have a go at creating artificial life yourself (but be nice to it).

Paul Curzon and Peter W McOwan, Queen Mary University of London





This blog is funded by EPSRC on research agreement EP/W033615/1.


EPSRC supported this article through research grants (EP/K040251/2, held by Professor Ursula Martin, as well as grant EP/W033615/1). 

Ada Lovelace in her own words

A jumble of letters
Image by CS4FN

Charles Babbage invented wonderful computing machines. But he was not very good at explaining things. That’s where Ada Lovelace came in. She is famous for writing a paper in 1843 explaining how Charles Babbage’s Analytical Engine worked – including a big table of formulas which is often described as “the first computer program”.

Charles Babbage invented his mechanical computers to save everyone from the hard work of doing big mathematical calculations by hand. He only managed to build a few tiny working models of his first machine, his Difference Engine. It was finally built to Babbage’s designs in the 1990s and you can see it in the London Science Museum. It has 8,000 mechanical parts, and is the size of a small car, but when the operator turns the big handle on the side it works perfectly, and prints out correct answers.

Babbage invented, but never built, a more ambitious machine, his Analytical Engine. In modern language, this was a general purpose computer, so it could have calculated anything a modern computer can – just a lot more slowly. It was entirely mechanical, but it had all the elements we recognize today – like memory, CPU, and loops.

Lovelace’s paper explains all the geeky details of how numbers are moved from memory to the CPU and back, and the way the machine would be programmed using punched cards.

But she doesn’t stop there – in quaint Victorian language she tells us about the challenges familiar to every programmer today! She understands how complicated programming is:

“There are frequently several distinct sets of effects going on simultaneously; all in a manner independent of each other, and yet to a greater or less degree exercising a mutual influence.”

the difficulty of getting things right:

“To adjust each to every other, and indeed even to perceive and trace them out with perfect correctness and success, entails difficulties whose nature partakes to a certain extent of those involved in every question where conditions are very numerous and inter-complicated.”

and the challenge of making things go faster:

“One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation.”

She explains how computing is about patterns:

“it weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves”.

and inventing new ideas

“We might even invent laws … in an arbitrary manner, and set the engine to work upon them, and thus deduce numerical results which we might not otherwise have thought of obtaining”.

and being creative. If we knew the laws for composing music:

“the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”

Alan Turing famously asked if a machine can think – Ada Lovelace got there first:

“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”

Wow, pretty amazing, for someone born 200 years ago.

Ursula Martin, University of Oxford (From the archive)






This blog is funded by EPSRC on research agreement EP/W033615/1.


EPSRC supported this article through research grants (EP/K040251/2, held by Professor Ursula Martin, as well as grant EP/W033615/1). 

Dickens knitting in code

Charles Dickens is famous for his novels highlighting Victorian social injustice. Despite what people say, art and science really do mix, and Dickens certainly knew some computer science. In his classic novel about the French Revolution, A Tale of Two Cities, one of his characters relies on some computer science based knitting.

Dickens actually moved in the same social circles as Charles Babbage, the Victorian inventor of the first computer (which he designed but unfortunately never managed to build), and Ada Lovelace, the mathematician who worked with him on those first computers. They went to the same dinner parties and Dickens will have seen Babbage demonstrate his prototype machines. An engineer in Dickens’ novel Little Dorrit is even believed to be partly based on Babbage. Dickens was probably the last non-family member to visit Ada before she died. She asked him to read to her, choosing a passage from his book Dombey and Son in which the son, Paul Dombey, dies. Like Ada, Paul Dombey had suffered from illness all his life.

So Charles Dickens had lots of opportunity to learn about algorithms! His novel ‘A Tale of Two Cities’ is all about the French Revolution, but lurking in the shadows is some computer science. One of the characters, a revolutionary called Madame Defarge, takes on the responsibility of keeping a register of all those people who are to be executed once the revolution comes to pass: the aristocrats and “enemies of the people”. Of course in the actual French Revolution lots of aristocrats were guillotined precisely for being enemies of the new state.

Now Madame Defarge could have just tried to memorize the names on her ‘register’ as she supposedly has a great memory, but the revolutionaries wanted a physical record. That raises the problem, though, of how to keep it secret, and that is where the computer science comes in. Madame Defarge knits all the time and so she decides to store the names in her knitting.

“Knitted, in her own stitches and her own symbols, it will always be as plain to her as the sun. Confide in Madame Defarge. It would be easier for the weakest poltroon that lives, to erase himself from existence, than to erase one letter of his name or crimes from the knitted register of Madame Defarge.”

Computer scientists call this Steganography: hiding information or messages in plain sight, so that no one suspects they are there at all. Modern forms of steganography include hiding messages in the digital representation of pictures and in the silences of a Skype conversation.

Madame Defarge didn’t of course just knit French words in the pattern like a Victorian scarf version of a T-shirt message. It wouldn’t have been very secret if anyone looking at the resulting scarf could read the names. So how to do it? In fact, knitting has been used as a form of steganography for real. One way was for a person to take a ball of wool and mark messages down it in Morse code dots and dashes. The wool was then knitted into a jumper or scarf. The message is hidden! To read it you unpick it all and read the Morse code back off the wool.
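
A quick Python sketch of the wool trick (our own illustration, with just enough of the Morse alphabet for one example): the message becomes a list of dot and dash marks to ink down the ball of wool before knitting; unpicking the wool later reveals them again.

    # Write the message down the ball of wool as Morse marks, knit it up,
    # and the message travels in plain sight. Unpicking and reading the
    # marks back decodes it.
    MORSE = {'A': '.-', 'D': '-..', 'E': '.', 'F': '..-.', 'G': '--.',
             'M': '--', 'R': '.-.'}   # just enough letters for the example

    def mark_wool(message):
        # One group of marks per letter, with a gap between letters.
        return ' '.join(MORSE[letter] for letter in message)

    print(mark_wool("DEFARGE"))   # -.. . ..-. .- .-. --. .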

The names were “Knitted, in her own stitches and her own symbols”

That wouldn’t have worked for Madame Defarge though. She wanted to add the names to the register in plain view of the person as they watched and without them knowing what she was doing. She therefore needed the knitting patterns themselves to hold the code. It was possible because she was both a fast knitter and sat knitting constantly so it raised no suspicion. The names were therefore, as Dickens writes “Knitted, in her own stitches and her own symbols”

She used a ‘cipher’ and that brings in another area of computer science: encryption. A cipher is just an algorithm – a set of rules to follow – that converts symbols in one alphabet (letters) into different symbols. In Madame Defarge’s case the new symbols were not written but knitted sequences of stitches. Only if you know the algorithm, and a secret ‘key’ that was used in the encryption, can you convert the knitted sequences back into the original message.

In fact both steganography and encryption date back thousands of years (computer science predates computers!), though Charles Dickens may have been the first to use knitting to do it in a novel. The Ancient Greeks used steganography. In the most famous case a message was written on a slave’s shaved head. They then let the hair grow back. The Romans knew about cryptographic algorithms too, and one of the most famous ciphers is called the Caesar cipher because Julius Caesar used it when writing letters… even in Roman times people were worried about spies reading their equivalent of emails.

Dickens didn’t actually describe the code that Madame Defarge was using so we can only guess… but why not see that as an opportunity and (if you can knit) invent a way yourself? If you can’t knit then learn to knit first and then invent one! Somehow you need a series of stitches to represent each letter of the alphabet. In doing so you are doing algorithmic thinking with knitting. You are knitting your way to being a computer scientist.
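
To get you started, here is one made-up scheme in Python that combines the article’s two ideas: a Caesar-style secret key shifts each letter, and the shifted letter is then knitted as five stitches, using knit (K) and purl (P) as the two symbols of a binary code. Everything here – the key, the five-stitch blocks, the name being encoded – is our own invention for illustration; only someone who knows both the scheme and the key can read the names back.

    import string

    KEY = 3   # the secret key: how far to shift each letter, Caesar-style

    def letter_to_stitches(letter):
        shifted = (string.ascii_uppercase.index(letter) + KEY) % 26
        bits = format(shifted, '05b')              # 5 bits is enough for 26 letters
        return bits.replace('0', 'K').replace('1', 'P')

    def knit_name(name):
        return ' '.join(letter_to_stitches(l) for l in name.upper() if l.isalpha())

    print(knit_name("Evremonde"))   # a pattern only a fellow revolutionary could decode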

Paul Curzon, Queen Mary University of London (From the archive)






This blog is funded by EPSRC on research agreement EP/W033615/1.


EPSRC supported this article through research grants (EP/K040251/2, held by Professor Ursula Martin, as well as grant EP/W033615/1). 

Letters from the Victorian Smog: Braille: binary, bits & bytes

We take for granted that computers use binary: to represent numbers, letters, or more complicated things like music and pictures… any kind of information. That was something Ada Lovelace realised very early on. Binary wasn’t invented for computers though. Its first modern use as a way to represent letters actually dates from the first half of the 19th century, and it is still used today: Braille.

Braille is named after its inventor, Louis Braille. He was born 6 years before Ada though they probably never met as he lived in France. He was blinded as a child in an accident and invented the first version of Braille when he was only 15 in 1824 as a way for blind people to read. What he came up with was a representation for letters that a blind person could read by touch.

Choosing a representation for the job is one of the most important parts of computational thinking. It really just means deciding how information is going to be recorded. Binary gives ways of representing any kind of information that is easy for computers to process. The idea is just that you create codes to represent things made up of only two different characters: 1 and 0. For example, you might decide that the binary for the letter ‘p’ was: 01110000. For the letter ‘c’ on the other hand you might use the code, 01100011. The capital letters, ‘P’ and ‘C’ would have completely different codes again. This is a good representation for computers to use as the 1’s and 0’s can themselves be represented by high and low voltages in electrical circuits, or switches being on or off.
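
Those example codes are in fact the standard ASCII/Unicode codes for ‘p’ and ‘c’. A couple of lines of Python will show the 8-bit pattern for any character, and turn a pattern back into its character:

    def to_bits(character):
        # Look up the character's standard code and show it as 8 binary digits.
        return format(ord(character), '08b')

    print(to_bits('p'))                   # 01110000
    print(to_bits('c'))                   # 01100011
    print(to_bits('P'), to_bits('C'))     # the capitals get different codes again
    print(chr(int('01110000', 2)))        # and back from bits to the letter: p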

He was inspired by an earlier ‘Night Writing’ system developed by Charles Barbier to allow French soldiers in the 1800s to read military messages without using a lamp (which gave away their position, putting them at risk).

The first representation Louis Braille chose wasn’t great though. It had dots, dashes and blanks – a three symbol code rather than the two of binary. It was hard to tell the difference between the dots and dashes by touch, so in 1837 he changed the representation – switching to a code of dots and blanks.

He had invented the first modern
form of writing based on binary.

Braille works in the same way as modern binary representations for letters. It uses collections of raised dots (1s) and no dots (0s) to represent them. Each gives a bit of information in computer science terms. To make the bits easier to touch they’re grouped into pairs. To represent all the letters of the alphabet (and more) you just need 3 pairs as that gives 64 distinct patterns. Modern Braille actually has an extra row of dots giving 256 dot/no dot combinations in the 8 positions so that many other special characters can be represented. Representing characters using 8 bits in this way is exactly the equivalent of the computer byte.

Modern computers use a standardised code, called Unicode. It gives an agreed code for referring to the characters in pretty well every language ever invented including Klingon! There is also a Unicode representation for Braille using a different code to Braille itself. It is used to allow letters to be displayed as Braille on computers! Because all computers using Unicode agree on the representations of all the different alphabets, characters and symbols they use, they can more easily work together. Agreeing the code means that it is easy to move data from one program to another.
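
As a sketch of how that works (using the standard Unicode Braille Patterns block, where each of the 8 dot positions is one bit added to the base code 0x2800; the dot numbers below are, as far as we know, the usual ones for the first few Braille letters):

    # Each of the 8 dot positions is one bit; the character code is
    # 0x2800 plus those bits.
    BRAILLE_DOTS = {'a': [1], 'b': [1, 2], 'c': [1, 4], 'd': [1, 4, 5], 'e': [1, 5]}

    def braille_char(dots):
        code = 0x2800                 # start of the Unicode Braille Patterns block
        for dot in dots:
            code |= 1 << (dot - 1)    # dot n sets bit n-1
        return chr(code)

    for letter, dots in BRAILLE_DOTS.items():
        print(letter, braille_char(dots))   # e.g. a ⠁   b ⠃   c ⠉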

The 1830s were an exciting time to be a computer scientist! This was around the time Charles Babbage met Ada Lovelace and they started to work together on the analytical engine. The ideas that formed the foundation of computer science must have been in the air, or at least in the Victorian smog.

Paul Curzon and Jo Brodie, Queen Mary University of London





This blog is funded by EPSRC on research agreement EP/W033615/1.
