We have explained how core rope memory stored the program of the Apollo guidance computer, the software that got us to the Moon. A team from the University of Washington came up with a fun craft activity so you can make your own core memory. It may not fly you to the Moon, but it is a neat way to store information in a bracelet. Find their activity pages here [EXTERNAL].
The activity involves threading 8 beads onto a string, with gaps between them, to form a storage space for bytes of data. Each byte is 8 binary bits (eight pieces of information, each a 1 or a 0). Each bead represents the position of one bit in your core rope memory. You then take other threads and weave them through the beads. Each thread stores one byte of actual data: pass the thread through a bead when you want that bead to read 1, or over it when you want that bead to read 0.
Each thread weaving past or through 8 beads can then encode the information for one letter. By adding lots of threads you can store a word or even a sentence on each core rope memory string (perhaps your name, or some secret message).
Using a binary encoding for each letter (so capital letter A would be the 8 bits 01000001, if you’re following this binary-to-letters conversion table), you put that letter’s thread through or over each of the 8 beads to ‘spell’ out the letter in binary.
My name is Jo so a core rope memory encoding my name would have only three threads (one to hold the 8 beads and two to spell my name). The second thread would go over, through, over, over, through, over, through, over to spell the capital letter J (01001010). The third thread would go over, through, through, over, through, through, through, through to spell lowercase o (01101111).
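If you want to check an encoding before you start threading, here is a minimal Python sketch (ours, not part of the original activity, and assuming the standard ASCII codes for letters) that turns a name into weaving instructions:

```python
# Turn each letter of a name into 8 weaving moves:
# 'through' a bead for a 1 bit, 'over' it for a 0 bit (ASCII codes).
def weaving_instructions(name):
    for letter in name:
        bits = format(ord(letter), "08b")   # e.g. 'J' -> '01001010'
        moves = ", ".join("through" if b == "1" else "over" for b in bits)
        print(f"{letter} ({bits}): {moves}")

weaving_instructions("Jo")
```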
Let’s hope you have a slightly longer name, so you can have more fun creating your own personalised core rope memory!
Weaving, in the form of the Jacquard loom with its swappable punch cards controlling the loom’s patterns, inspired Charles Babbage. He intended to use the same kind of punch card to store programs in his Analytical Engine, which, had it been built, would have been the first computer. However, weaving had a much more direct use in computing history: weaving helped get us to the Moon.
In the 1960s, NASA’s Apollo moon mission needed really dependable computers. It was vital that the programs wouldn’t be corrupted in space. The problem was solved using core rope memory.
Core rope memory was made of small ‘eyelets’ or beads of ferrite, a magnetic ceramic material that can be magnetised, and copper wire woven through some of the eyelets but not others. The ring-shaped magnets were known as magnetic cores. An electrical current passing through the wires made the whole thing work.
Representing binary
Both data and programs in computers are stored as binary: 1s and 0s. Those 1s and 0s can be represented by physical things in the world in lots of different ways. NASA used weaving. A wire that passed through an eyelet would be read as a binary 1 when the current was on; if it passed around the eyelet, it would be read as a 0. This meant that a computer program, made up of sequences of 1s and 0s, could be permanently stored by the pattern that was woven. This gave read-only memory. Related techniques were used to create memory that the computer could change too, as the guidance computer needed both.
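To see the reading side of the same idea, here is a small Python sketch (our illustration, assuming the same 8-bit character codes as the bracelet activity above) that decodes a woven pattern back into text:

```python
# Read a woven pattern back into text: each group of 8
# through/over moves is one 8-bit (ASCII) character.
def read_weave(moves):
    bits = "".join("1" if m == "through" else "0" for m in moves)
    return "".join(chr(int(bits[i:i+8], 2)) for i in range(0, len(bits), 8))

pattern = ["over", "through", "over", "over",
           "through", "over", "through", "over",          # 'J'
           "over", "through", "through", "over",
           "through", "through", "through", "through"]    # 'o'
print(read_weave(pattern))   # -> Jo
```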
The memory was woven for NASA by women who were skilled textile workers. They worked in pairs: one would use a special hollow needle to thread the copper wire through a magnetic core, and then the other would thread it back through a different one.
The program was first developed on a computer (the sort that took up a whole room back then) and then translated into instructions for a machine which told the weavers the correct positions for the wire threads. It was very difficult to undo a mistake so a great deal of care was taken to get things right the first time, especially as it could take up to two months to complete one block of memory. Some of the rope weavers were overseen by Margaret Hamilton, one of the women who developed the software used on board the spacecraft, and who went on to lead the Apollo software team.
The world’s first portable computer?
Several of these pre-programmed core rope memory units were combined and installed in the guidance computers of the Apollo mission spacecraft that had to fly astronauts safely to the Moon and back. NASA needed on-board guidance systems to control the spacecraft independently of Mission Control back on Earth. They needed something that didn’t take up too much room or weigh too much, that could survive the shaking and juddering of take-off and background radiation: core rope memory fitted the bill perfectly.
It packed a lot of information (well, not by modern standards: the guidance computer contained only around 70 kilobytes of memory) into a small space and was very robust, as it could only break if a wire came loose or one of the ferrite eyelets was damaged (which didn’t happen). To be safe though, the guidance computer’s electronics were sealed from the atmosphere for extra protection. They survived and worked well, guiding the Lunar Modules safely onto the Moon.
One small step for man perhaps, but the Moon landings were certainly a giant leap for computing.
Jo Brodie and Paul Curzon, Queen Mary University of London
Charles Babbage invented wonderful computing machines. But he was not very good at explaining things. That’s where Ada Lovelace came in. She is famous for writing a paper in 1843 explaining how Charles Babbage’s Analytical Engine worked – including a big table of formulas which is often described as “the first computer program”.
Charles Babbage invented his mechanical computers to save everyone from the hard work of doing big mathematical calculations by hand. He only managed to build a few tiny working models of his first machine, his Difference Engine. It was finally built to Babbage’s designs in the 1990s and you can see it in the London Science Museum. It has 8,000 mechanical parts, and is the size of a small car, but when the operator turns the big handle on the side it works perfectly, and prints out correct answers.
Babbage invented, but never built, a more ambitious machine, his Analytical Engine. In modern language, this was a general purpose computer, so it could have calculated anything a modern computer can – just a lot more slowly. It was entirely mechanical, but it had all the elements we recognize today – like memory, CPU, and loops.
Lovelace’s paper explains all the geeky details of how numbers are moved from memory to the CPU and back, and the way the machine would be programmed using punched cards.
But she doesn’t stop there – in quaint Victorian language she tells us about the challenges familiar to every programmer today! She understands how complicated programming is:
“There are frequently several distinct sets of effects going on simultaneously; all in a manner independent of each other, and yet to a greater or less degree exercising a mutual influence.”
the difficulty of getting things right:
“To adjust each to every other, and indeed even to perceive and trace them out with perfect correctness and success, entails difficulties whose nature partakes to a certain extent of those involved in every question where conditions are very numerous and inter-complicated.”
and the challenge of making things go faster:
“One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation.”
She explains how computing is about patterns:
“it weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves”.
and inventing new ideas:
“We might even invent laws … in an arbitrary manner, and set the engine to work upon them, and thus deduce numerical results which we might not otherwise have thought of obtaining”.
and being creative. If we knew the laws for composing music:
“the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”
Alan Turing famously asked if a machine can think – Ada Lovelace got there first:
“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”
Wow, pretty amazing, for someone born 200 years ago.
Ursula Martin, University of Oxford (From the archive)
Charles Dickens is famous for his novels highlighting Victorian social injustice. Despite what people say, art and science really do mix, and Dickens certainly knew some computer science. In his classic novel about the French Revolution, A Tale of Two Cities, one of his characters relies on some computer-science-based knitting.
Dickens actually moved in the same social circles as Charles Babbage, the Victorian inventor of the first computer (which he designed but unfortunately never managed to build), and Ada Lovelace, the mathematician who worked with him on those first computers. They went to the same dinner parties and Dickens would have seen Babbage demonstrate his prototype machines. An engineer in Dickens’ novel Little Dorrit is even believed to be partly based on Babbage. Dickens was probably the last non-family member to visit Ada before she died. She asked him to read to her, choosing a passage from his book Dombey and Son in which the son, Paul Dombey, dies. Like Ada, Paul Dombey had suffered from illness all his life.
So Charles Dickens had lots of opportunity to learn about algorithms! His novel A Tale of Two Cities is all about the French Revolution, but lurking in the shadows is some computer science. One of the characters, a revolutionary called Madame Defarge, takes on the responsibility of keeping a register of all those people who are to be executed once the revolution comes to pass: the aristocrats and “enemies of the people”. Of course in the actual French Revolution lots of aristocrats were guillotined precisely for being enemies of the new state.
Now Madame Defarge could have just tried to memorize the names on her ‘register’ as she supposedly has a great memory, but the revolutionaries wanted a physical record. That raises the problem, though, of how to keep it secret, and that is where the computer science comes in. Madame Defarge knits all the time and so she decides to store the names in her knitting.
“Knitted, in her own stitches and her own symbols, it will always be as plain to her as the sun. Confide in Madame Defarge. It would be easier for the weakest poltroon that lives, to erase himself from existence, than to erase one letter of his name or crimes from the knitted register of Madame Defarge.”
Computer scientists call this steganography: hiding information or messages in plain sight, so that no one suspects they are there at all. Modern forms of steganography include hiding messages in the digital representation of pictures and in the silences of a Skype conversation.
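As a flavour of the modern digital version, here is a minimal Python sketch (our illustration, not a real steganography tool): it hides a message in the least significant bits of an image’s pixel values, where the change is far too small for the eye to notice. The pixel list here is just a stand-in for real image data:

```python
# Hide a message in the least significant bit (LSB) of each pixel value.
# A real tool would read and write image files; here pixels are plain bytes.
def hide(pixels, message):
    bits = "".join(format(ord(c), "08b") for c in message)
    return [(p & ~1) | int(b) for p, b in zip(pixels, bits)] + pixels[len(bits):]

def reveal(pixels, length):
    bits = "".join(str(p & 1) for p in pixels[:length * 8])
    return "".join(chr(int(bits[i:i+8], 2)) for i in range(0, len(bits), 8))

pixels = list(range(200))        # stand-in for real pixel data
secret = hide(pixels, "Defarge")
print(reveal(secret, 7))         # -> Defarge
```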
Madame Defarge didn’t of course just knit French words into the pattern like a Victorian scarf version of a T-shirt message. It wouldn’t have been very secret if anyone looking at the resulting scarf could read the names. So how to do it? In fact, knitting has been used as a form of steganography for real. One way was for a person to take a ball of wool and mark messages down it in Morse code dots and dashes. The wool was then knitted into a jumper or scarf. The message is hidden! To read it you unpick it all and read the Morse code back off the wool.
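If you fancy trying the wool trick, a tiny Python sketch (ours, covering just the letters needed here) converts a message into the dots and dashes you would mark along the wool:

```python
# International Morse code for a handful of letters (extend as needed).
MORSE = {"a": ".-", "d": "-..", "e": ".", "f": "..-.", "g": "--.", "r": ".-."}

def to_morse(message):
    return " ".join(MORSE[c] for c in message.lower())

print(to_morse("Defarge"))   # -> -.. . ..-. .- .-. --. .
```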
That wouldn’t have worked for Madame Defarge though. She wanted to add the names to the register in plain view of the person as they watched, and without them knowing what she was doing. She therefore needed the knitting patterns themselves to hold the code. It was possible because she was both a fast knitter and sat knitting constantly, so it raised no suspicion. The names were therefore, as Dickens writes, “Knitted, in her own stitches and her own symbols”.
She used a ‘cipher’ and that brings in another area of computer science: encryption. A cipher is just an algorithm – a set of rules to follow – that converts symbols in one alphabet (letters) into different symbols. In Madame Defarge’s case the new symbols were not written but knitted sequences of stitches. Only if you know the algorithm, and a secret ‘key’ that was used in the encryption, can you convert the knitted sequences back into the original message.
In fact both steganography and encryption date back thousands of years (computer science predates computers!), though Charles Dickens may have been the first to use knitting to do it in a novel. The Ancient Greeks used steganography: in the most famous case a message was written on a slave’s shaved head, and they then let the hair grow back. The Romans knew about cryptographic algorithms too, and one of the most famous ciphers is called the Caesar cipher as Julius Caesar used it when writing letters… even in Roman times people were worried about spies reading their equivalent of emails.
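The Caesar cipher simply shifts each letter a fixed number of places along the alphabet, with the size of the shift acting as the secret key. A minimal Python sketch (ours):

```python
# Caesar cipher: shift each letter 'key' places along the alphabet.
# Decryption is just shifting back the other way.
def caesar(message, key):
    out = []
    for c in message:
        if c.isalpha():
            base = ord("A") if c.isupper() else ord("a")
            out.append(chr((ord(c) - base + key) % 26 + base))
        else:
            out.append(c)
    return "".join(out)

secret = caesar("Attack at dawn", 3)
print(secret)               # -> Dwwdfn dw gdzq
print(caesar(secret, -3))   # -> Attack at dawn
```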
Dickens didn’t actually describe the code that Madame Defarge was using, so we can only guess… but why not see that as an opportunity and (if you can knit) invent a way yourself? If you can’t knit then learn to knit first and then invent one! Somehow you need a series of stitches to represent each letter of the alphabet. In doing so you are doing algorithmic thinking with knitting. You are knitting your way to being a computer scientist.
Paul Curzon, Queen Mary University of London (From the archive)
Avengers: Age of Ultron is the latest film about robots or artificial intelligences (AI) trying to take over the world. AI is becoming ever present in our lives, at least in the form of software tools that demonstrate elements of human-like intelligence. The AIs in our mobile phones apply and adapt their rules to learn to serve us better, for example. But fears of AI’s potential negative impact on humanity remain, as seen in its projection into characters like Ultron, a super-intelligence accidentally created by the Avengers.
But what relation do the evil AIs of the movies have to scientific reality? Could an AI take over the world? How would it do it? And why would it want to? AI movie villains need to consider the whodunit staples of motive and opportunity.
Motive? What motive?
Let’s look at the motive. Few would say intelligence in itself unswervingly leads to a desire to rule the world. In movies, AIs are often driven by self-preservation: a realisation that fearful humans might shut them down. But would we give our AI tools cause to feel threatened? They provide benefits for us, and there seems little reason to create a sense of self-awareness in a system that searches the web for the nearest Italian restaurant, for example.
Another popular motive for AIs’ evilness is their zealous application of logic. In Ultron’s case, the goal of protecting the earth can only be accomplished by wiping out humanity. This destruction by logic is reminiscent of the notion that a computer would select a stopped clock over one that is two seconds slow, as the stopped clock is right twice a day whereas the slow one is never right. Ultron’s plot motivation, based on brittle logic combined with indifference to life, seems at odds with today’s AI systems that reason mathematically with uncertainty and are built to work safely with users.
Opportunity Knocks
When we consider an AI’s opportunity to rule the world we are on somewhat firmer ground. The famous Turing Test of machine intelligence was set up to measure a particular skill – the ability to conduct a believable conversation. The premise is that if you can’t tell the difference between the AI’s skill and a human’s, the AI has passed the test and should be considered as intelligent as humans.
So what would a Turing Test for the ‘skill’ of world domination look like? To explore that we need to compare antisocial AI behaviours with the attributes expected of a human world dominator. World dominators need to control important parts of our lives, say our access to money or our ability to buy a house. AI does that already – lending decisions are frequently made by an AI sifting through mountains of information to decide your creditworthiness. AIs now trade on the stock market too.
An overlord would give orders and expect them to be followed. Anyone who has stood helplessly at a shop’s self-service till as it makes repeated bagging related demands of them already knows what it feels like to be bossed about by AIs.
Kill Bill?
Finally, no megalomaniac Hollywood robot would be complete without at least some desire to kill us. Today military robots can identify targets without human intervention. It is currently a human controller who gives permission to attack, but it’s not a stretch to say that the potential to kill automatically already exists in these AIs; we would only need to change the computer code to allow it.
These examples arguably show AI in control of limited but significant parts of life on earth, but to truly dominate the world, movie style, these individual AIs would need to start working together to create a synchronised AI army – that bossy self-service till talking to your health monitor and refusing to sell you beer, then both ganging up with a credit-scoring system that will only raise your credit limit if you buy a pair of trainers with a built-in GPS tracker, eat only the kale from your smart fridge, and let the shoe data show you completed the required five-mile run.
It’s a worrying picture but fortunately I think it’s an unlikely one. Engineers worldwide are developing the Internet of Things, networks connecting all manner of devices together to create new services. These are pieces of a jigsaw that would need to join together and form a big picture for total world domination. It’s an unlikely situation – too much has to fall into place and work together. It’s a lot like the infamous plot-hole in Independence Day – where an Apple Mac and an alien spaceship’s software inexplicably have cross-platform compatibility. [See video below for a possible answer!]
Our earthly AI systems are written in a range of computer languages, hold different data in different ways, and use different, non-compatible rule sets and learning techniques. Unless we design them to be compatible, there is no reason why two safely designed AI systems, developed by separate companies for separate services, would spontaneously blend to share capabilities and form some greater common goal without human intervention.
So could AIs, and the robot bodies containing them, pass the test and take over the world? Only if we humans let them, and help them a lot. Why would we?
Theatre producers, radio directors and film-makers have been trying to create realistic versions of natural sounds for years. Special effects teams break frozen celery stalks to mimic breaking bones, smack coconut shells on hard-packed sand to hear horses gallop, and rustle cellophane for crackling fire. Famously, in the first Star Wars movie the Wookie sounds are each made up of up to six animal clips combined, including a walrus! Sometimes the special effects people even record the real thing and play it at the right time (not a good idea for the breaking bones though!). The person using props to create sounds for radio and film is called a Foley artist, named after the work of Jack Donovan Foley in the 1920s. Now the Foley artist is drawing on digital technology to get the job done.
Designing sounds
Sound designers have a hard job finding the right sounds. So how about creating sound automatically using algorithms? Synthetic sound! Research into sound creation is a hot topic, not just for special effects but also to help understand how people hear and for use in many other sound based systems. We can create simple sounds fairly easily using musical instruments and synthesisers, but creating sounds from nature, animal sounds and speech is much more complicated.
The approaches used to recognize sounds can be the basis of generating sounds too. You can either try to hand-craft a set of rules that describe what makes the sound sound the way it does, or you can write algorithms that work it out for themselves.
Paying patterns attention
One method, developed as a way to automatically generate synthetic sound, is based on looking for patterns in the sounds. Computer scientists often create mathematical models to better understand things, as well as to recognize and generate computer versions of them. The idea is to look at (or here, listen to) lots of examples of the thing being studied. As patterns become obvious, the researchers also start to identify elements that don’t have much impact. Those features are ignored so the focus stays on the most important parts. In doing this they build up a general model, or view, that describes all possible examples. This skill of ignoring unimportant detail is called abstraction, and creating a general view, a model of something, is called generalisation: both important parts of computational thinking. The result is a hand-crafted model for generating that sound.
That’s pretty difficult to do by hand though, so instead computer scientists write algorithms to do it for them. Now, rather than a person trying to work out what is, or is not, important, training algorithms work it out using statistical rules. The more data they see, the stronger the pattern that emerges, which is why these approaches are often referred to as ‘Big Data’: they rely on number-crunching vast data sets. The learnt pattern is then matched against new data, to spot examples, or used as the basis for creating new examples that match the pattern.
The rain in train(ing)
Number crunching based on Big Data isn’t the only way though; sometimes general patterns can be identified from knowledge of the thing being investigated. For example, rain isn’t one sound but is made up of lots of rain drops all doing a similar thing. Natural sounds often have that kind of property. So knowledge of a phenomenon can be used to create a basic model to build a generator around. This is an approach Richard Turner, now at Cambridge University, has pioneered, analysing the statistical properties of natural sounds. By creating a basic model and then gradually tweaking it to match the sound-quality of lots of different natural sounds, his algorithms can learn what natural sounds are like in general. Then, given a specific natural ‘training’ sound, they can generate synthetic versions of that sound by choosing settings that match its features. You could give them a recorded sample of real rain, for example. The sound processing algorithms then apply a bunch of maths that pulls out the important features of that particular sound based on the statistical models. With the critical features identified, and plugged in to the general model, a new sound of any length can be generated that still matches the statistical pattern of, and so sounds like, the original. Using the model you can create lots of different versions of rain that all still sound like rain, lots of different campfires, lots of different streams, and so on.
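Richard’s models are far more sophisticated, but you can get a feel for statistics-matching synthesis in a few lines of Python (a toy sketch using numpy, with random noise standing in for a real mono recording): keep the magnitude spectrum of the original, randomise the phases, and you get a new sound with the same broad frequency ‘texture’:

```python
import numpy as np

# Toy texture synthesis: keep the magnitude spectrum of a recording,
# randomise the phases, and invert back to a waveform. Real systems
# match far richer statistics than this.
def resynthesise(samples, rng):
    spectrum = np.fft.rfft(samples)                    # frequency content
    phases = rng.uniform(0.0, 2 * np.pi, len(spectrum))
    shuffled = np.abs(spectrum) * np.exp(1j * phases)  # same magnitudes, new phases
    return np.fft.irfft(shuffled, n=len(samples))

rng = np.random.default_rng(0)
rain = rng.normal(size=44100)        # stand-in for one second of recorded rain
new_rain = resynthesise(rain, rng)   # different every time, same 'texture'
```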
For now, the celery stalks are still in use, as are the walrus clips, but it may not be long before film studios completely replace their Foley bag of tricks with computerised solutions like Richard’s. One Wookie for 3 minutes and a dawn chorus for 5, please.
Become a Foley Artist with Sonic Pi
You can have a go at being a Foley artist yourself. Sonic Pi is a free live-coding synth for music creation that is powerful enough for professional musicians, yet intended to get beginners into live coding: combining programming with composing to make live music.
It was designed for use with a Raspberry Pi computer, which is a cheap way to get started, though it works with other computers too. It’s also a great, fun way to start learning to program.
Play with anything, and everything, you find around the house, junk or otherwise. See what sounds it makes. Record it, and then see what it makes you think of out of context. Build up your own library of sounds, labelling them with things they sound like. Take clips of films, mute the sound and create your own soundscape for them. Store the sound clips and then manipulate them in Sonic Pi, and see if you can use them as the basis of different sounds.
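Sonic Pi has its own live-coding language, but if you want to experiment with stored clips programmatically first, here is a minimal Python sketch (ours, using only the standard library, and assuming a 16-bit mono recording saved as clip.wav) that reverses a clip:

```python
import array
import wave

# Reverse a 16-bit mono WAV clip: read the samples, flip their order,
# and write them back out as a new file.
with wave.open("clip.wav", "rb") as f:
    params = f.getparams()
    samples = array.array("h", f.readframes(f.getnframes()))

samples.reverse()

with wave.open("reversed.wav", "wb") as f:
    f.setparams(params)
    f.writeframes(samples.tobytes())
```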
Listen to the example sound clips made with Sonic Pi on their website, then start adapting them to create your own sounds, your own music. What is the most ‘natural sound’ you can find or create using Sonic Pi?
Jane Waite and Paul Curzon, Queen Mary University of London.
This issue explores the work of scientists and engineers who are using computers to understand, identify and recreate wild sounds, especially those of birds. We see how sophisticated algorithms that allow machines to learn can help recognize birds even when they can’t be seen, so helping conservation efforts. We see how computer models help biologists understand animal behaviour, and we look at how electronic and computer-generated sounds, having changed music, are now set to change the soundscapes of films. Making electronic sounds is also a great, fun way to become a computer scientist and learn to program.
Subscribe to be notified whenever we publish a new post to the CS4FN blog.
This blog is funded by EPSRC on research agreement EP/W033615/1.
Can a robot get cancer? Silly question. Our bodies are made of cells. Robots aren’t. Cells are the basic building blocks of life and come in lots of different forms from long thin nerve cells that allow us to sense the world, to round blood cells that carry oxygen around our bodies. Cancer occurs when cells go rogue and start reproducing in an uncontrolled way. A computer can’t get cancer, but you can allow virtual diseases to attack virtual cells inside a computer. Doing that may just help find cures. That is what Jasmin Fisher, who leads a research group at Microsoft Research in Cambridge, has devoted her career to.
Becoming a medic isn’t the only way to help save lives!
Computational Modelling is changing the way the sciences are done. It is the idea that you can run experiments on virtual versions of the things you are investigating. A computer model is essentially just a program that simulates the phenomena of interest. For example, by writing a program that simulates the laws of Physics, you can use it to run virtual Physics experiments about the motion of the planets, say. If your virtual planets follow the paths real planets do, then you have evidence the laws are right. If they don’t, your laws (or the models) need to change. You can also make predictions, such as when an eclipse will happen. If you are right, it suggests the laws you coded are good descriptions of reality. If wrong, back to the drawing board.
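As a toy example of the idea, here is a minimal Python sketch (our illustration) that codes Newton’s law of gravity and ‘runs’ a planet around a star. Comparing the simulated path with the real one is the virtual experiment:

```python
import math

# Virtual physics experiment: step a planet forward under Newtonian
# gravity, in units chosen so the gravitational constant times the
# star's mass equals 1.
x, y = 1.0, 0.0       # starting position
vx, vy = 0.0, 1.0     # starting velocity (gives a circular orbit)
dt = 0.001            # time step

for step in range(10000):
    r = math.hypot(x, y)
    ax, ay = -x / r**3, -y / r**3       # inverse-square-law acceleration
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt

print(f"after {10000 * dt:.0f} time units the planet is at ({x:.3f}, {y:.3f})")
```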
Jasmin has been pioneering this idea with the stuff of life and death. She focusses on modelling cells and the specific ways that we think cancer attacks them. It gives a way of exploring what is going on at the level of the molecules inside cells, and so how well new medicines might, or might not, work. Experiments can be done quickly and easily on the programmed models by running simulations. That means the real experiments, taking up expensive lab time, can focus on things that are most likely to be successful. Jasmin’s work has helped researchers design more effective actual experiments because they start with a better understanding of what is going on. One of the most important questions she is studying is how cells end up becoming what they are, and how this differs between normal cells and cancer cells. Understand this and we will be much closer to understanding how to stop cancer.
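Jasmin’s executable models are far more detailed than this, but you can get the flavour from a toy Boolean network sketch in Python (ours, with completely made-up genes and rules): each gene is either on or off, rules update them in steps, and the simulation runs until the cell settles into a stable state, its ‘fate’:

```python
# Toy Boolean network: three hypothetical genes, each ON (True) or OFF (False).
# The update rules are invented for illustration, not taken from real biology.
def step(state):
    a, b, c = state["A"], state["B"], state["C"]
    return {"A": a and not c,   # gene C represses A
            "B": a,             # gene A activates B
            "C": b}             # gene B activates C

state = {"A": True, "B": False, "C": False}
while True:
    new = step(state)
    if new == state:            # no further change: a stable state, the 'fate'
        break
    state = new
print(state)                    # in this toy model all three genes settle OFF
```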
[This article includes a free papercraft activity with a paper robot that expresses ’emotions’.]
If humans are ever to get to like and live with robots we need to understand each other. One of the ways that people let others know how they are feeling is through the expressions on their faces. A smile or a frown on someone’s face tells us something about how they are feeling and how they are likely to react. We can also tell something of a person’s emotions from their eyes and eyebrows. Some scientists think it might be possible for robots to express feelings this way too, but understanding how a robot can usefully express its ‘emotions’ (what its internal computer program is processing and planning to do next), is still in its infancy. A group of researchers in Poland, at Wroclaw University of Technology, have come up with a clever new design for a robot head that could help a computer show its feelings. It’s inspired by the Teenage Mutant Ninja Turtles cartoon and movie series.
Their turtle-inspired robotic head is called EMYS, which stands for EMotive headY System and is cleverly also the name of a European pond turtle, Emys orbicularis. Taking his inspiration from cartoons, the project’s principal ‘head’ designer Jan Kedzierski created a mechanical marvel that can convey a whole range of different emotions by tilting a pair of movable discs, one of which contains highly flexible eyes and eyebrows.
Eye see
The lower disc imitates the movements of the human lower jaw, while the upper disk can mimic raising the eyebrows and wrinkling the forehead. There are eyelids and eyebrows linked to each eye. Have a look at your face in the mirror, then try pulling some expressions like sadness and anger. In particular look at what these do to your eyes. In the robot, as in humans, the eyelids can move to cover the eye. This helps in the expression of emotions like sadness or anger, as your mirror experiment probably showed.
Pop eye
But then things get freaky and fun. Following the best traditions of cartoons, when EMYS is ‘surprised’ the robot’s eyes can shoot out to a distance of more than 10 centimetres! This well-known ‘eyes out on stalks’ cartoon technique, which deliberately over-exaggerates how people’s eyes widen and stare when they are startled, is something we instinctively understand even though our eyes don’t really do this. It makes use of the fact that cartoons take the real world to extremes, and audiences understand and are entertained by this sort of comical exaggeration. In fact it’s been shown that people are faster at recognising cartoons of people than recognising the unexaggerated original.
High tech head builder
The mechanical internals of EMYS consist of lightweight aluminium, while the covering external elements, such as the eyes and discs, are made of lightweight plastic using 3D rapid prototyping technology. This technology allows a design on the computer to be ‘printed’ in plastic in three dimensions. The design in the computer is first converted into a stack of thin slices. Each slice of the design, from the bottom up, individually oozes out of a printer and on to the slice underneath, so layer-by-layer the design in the computer becomes a plastic reality, ready for use.
Facing the future
A ‘gesture generator’ computer program controls the way the head behaves. Expressions like ‘sad’ and ‘surprised’ are broken down into a series of simple commands to the high-speed motors, moving the various lightweight parts of the face. In this way EMYS can behave in an amazingly fluid way – its eyes can ‘blink’, its neck can turn to follow a person’s face or look around. EMYS can even shake or nod its head. EMYS is being used on the Polish group’s social robot FLASH (FLexible Autonomous Social Helper) and also with other robot bodies as part of the LIREC project (www.lirec.eu [archived]). This big project explores the question of how robot companions could interact with humans, and helps find ways for robots to usefully show their ‘emotions’.
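We don’t have the EMYS source code, but in outline a gesture generator can be pictured as a table mapping each emotion to a sequence of motor commands. A simplified Python sketch (all motor names and actions here are ours, purely hypothetical):

```python
# Hypothetical gesture generator: break an expression down into
# simple commands for the head's motors (names invented for illustration).
GESTURES = {
    "sad":       [("eyelids", "half_close"), ("upper_disc", "tilt_down")],
    "surprised": [("eyelids", "open_wide"), ("eyes", "extend"),
                  ("upper_disc", "raise")],
}

def perform(emotion, send_command):
    for motor, action in GESTURES[emotion]:
        send_command(motor, action)   # each command drives one motor

# Stand-in for the real motor interface: just print what would happen.
perform("surprised", lambda motor, action: print(f"{motor} -> {action}"))
```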
Do try this at home
You can program a paper version of an EMYS-like robot. Download and follow the instructions on the Emotion Machine in the printable version below and build your own EMYS.
Print, cut out and make your own emotional robot. The strips of paper at the top (‘sliders’) containing the expressions and letters are slotted into the grooves on the robot’s face, and happy or annoyed faces can be created by moving the sliders.
By selecting a series of different commands in the Emotion Engine boxes, the expression on EMYS’s face will change. How many different expressions can you create? What are the instructions you need to send to the face for a particular expression? What emotion do you think that expression looks like – how would you name it? What would you expect the robot to be ‘feeling’ if it pulled that face?
Activity sheet by CS4FN
Go further
Why not draw your own sliders, with different eye shapes, mouth shapes and so on. Explore and experiment! That’s what computer scientists do.
We take for granted that computers use binary to represent numbers, letters, or more complicated things like music and pictures… any kind of information. That was something Ada Lovelace realised very early on. Binary wasn’t invented for computers though. The first modern use of binary as a way to represent letters actually dates from the first half of the 19th century, and it is still in use today: Braille.
Braille is named after its inventor, Louis Braille. He was born six years before Ada, though they probably never met as he lived in France. Blinded as a child in an accident, he invented the first version of Braille in 1824, when he was only 15, as a way for blind people to read. What he came up with was a representation for letters that a blind person could read by touch.
Choosing a representation for the job is one of the most important parts of computational thinking. It really just means deciding how information is going to be recorded. Binary gives ways of representing any kind of information in a form that is easy for computers to process. The idea is just that you create codes to represent things using only two different characters: 1 and 0. For example, you might decide that the binary for the letter ‘p’ is 01110000, and for the letter ‘c’, 01100011. The capital letters ‘P’ and ‘C’ would have completely different codes again. This is a good representation for computers to use as the 1s and 0s can themselves be represented by high and low voltages in electrical circuits, or switches being on or off.
Braille was inspired by an earlier ‘Night Writing’ system developed by Charles Barbier to allow French soldiers in the 1800s to read military messages without using a lamp (which would give away their position, putting them at risk).
The first representation Louis Braille chose wasn’t great though. It had dots, dashes and blanks – a three-symbol code rather than the two symbols of binary. It was hard to tell the difference between the dots and dashes by touch, so in 1837 he changed the representation, switching to a code of dots and blanks.
He had invented the first modern form of writing based on binary.
Braille works in the same way as modern binary representations for letters. It uses collections of raised dots (1s) and no dots (0s) to represent them. Each gives a bit of information in computer science terms. To make the bits easier to touch they’re grouped into pairs. To represent all the letters of the alphabet (and more) you just need 3 pairs as that gives 64 distinct patterns. Modern Braille actually has an extra row of dots giving 256 dot/no dot combinations in the 8 positions so that many other special characters can be represented. Representing characters using 8 bits in this way is exactly the equivalent of the computer byte.
Modern computers use a standardised code called Unicode. It gives an agreed code for referring to the characters in pretty well every language ever invented (even Klingon has an unofficial assignment). There is also a Unicode representation for Braille, using different numeric codes from Braille’s own letter codes. It is used to allow letters to be displayed as Braille on computers! Because all computers using Unicode agree on the representations of all the different alphabets, characters and symbols they use, they can more easily work together. Agreeing the code means that it is easy to move data from one program to another.
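You can play with the Unicode Braille block yourself: the 256 patterns start at code point U+2800, with one bit for each dot position. A small Python sketch (ours, covering just the first few letters of the standard Braille alphabet):

```python
# Unicode Braille: code point U+2800 plus one bit per raised dot
# (dot 1 = bit 0, dot 2 = bit 1, ... dot 8 = bit 7).
LETTER_DOTS = {"a": [1], "b": [1, 2], "c": [1, 4], "d": [1, 4, 5], "e": [1, 5]}

def braille(letter):
    mask = sum(1 << (dot - 1) for dot in LETTER_DOTS[letter])
    return chr(0x2800 + mask)

print(" ".join(braille(c) for c in "abcde"))   # -> ⠁ ⠃ ⠉ ⠙ ⠑
```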
The 1830s were an exciting time to be a computer scientist! This was around the time Charles Babbage met Ada Lovelace and they started to work together on the Analytical Engine. The ideas that formed the foundation of computer science must have been in the air, or at least in the Victorian smog.
Paul Curzon and Jo Brodie, Queen Mary University of London
Being creative isn’t just for the fun of it. It can be serious too. Marketing people are paid vast amounts to come up with slogans for new products, and in the political world, a good, memorable soundbite can turn the tide over who wins and loses an election. Coming up with great slogans that people will remember for years needs both a mastery of language and a creative streak too. Algorithms are now getting in on the act, and if anyone can create a program as good as the best humans, they will soon be richer than the richest marketing executive. Polona Tomašič and her colleagues from the Jožef Stefan Institute in Slovenia are one group exploring the use of algorithms to create slogans. Their approach is based on the way evolution works – genetic algorithms. Only the fittest slogans survive!
A mastery of language
To generate a slogan, you give their program a short description of the slogan’s topic – a new chocolate bar, perhaps. It then uses existing language databases and programs to give it the necessary understanding of language.
First, it uses a database of common grammatical links between pairs of words, generated from Wikipedia pages. Then skeletons of slogans are extracted from an Internet list of famous (so successful) slogans. These skeletons don’t include the actual words, just the grammatical relationships between the words. They provide general outlines that successful slogans follow.
From the passage given, the program pulls out keywords that can be used within the slogans (beans, flavour, hot, milk, …). It generates a set of fairly random slogans from those words to get started. It does this just by slotting keywords into the skeletons along with random filler words in a way that matches the grammatical links of the skeletons.
Breeding Slogans
New baby slogans are now produced by mating pairs of initial slogans (the parents). This is done by swapping bits into the baby from each parent. Both whole sections and individual words are swapped in. Mutation is allowed too. For example, adjectives are added in appropriate places. Words are also swapped for words with a related meaning. The resulting children join the new population of slogans. Grammar is corrected using a grammar checker.
Culling Slogans
Slogans are now culled. Any that are the same as existing ones go immediately. The slogans are then rated to see which are fittest. This uses simple properties like their length, the number of keywords used, and how common the words used are. More complex tests are based on how related the meanings of the words are, and how commonly pairs of words appear together in real sentences. Together these combine to give a single score for the slogan. The best are kept to breed in the next generation; the worst are discarded (they die!), though a random selection of weaker slogans is also allowed to survive. The result is a new set of slogans that are slightly better than the previous set.
The program breeds and culls slogans like this for thousands, even millions, of generations, gradually improving them, until it finally chooses its best. The slogans produced are not yet world-beating on their own, and vary in quality as judged by humans. For chocolate, one run came up with slogans like “The healthy banana” and “The favourite oven”, for example. It finally settled on “The HOT chocolate” which is pretty good.
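The team’s system uses grammatical skeletons and language statistics for breeding and fitness, but the genetic-algorithm skeleton itself is simple. A minimal Python sketch (ours, with a toy fitness function that just rewards keywords):

```python
import random

# Toy genetic algorithm for slogans. The real system breeds grammatical
# skeletons and scores with language statistics; this just recombines words.
KEYWORDS = {"chocolate", "hot", "smooth", "treat"}
WORDS = sorted(KEYWORDS) + ["the", "a", "every", "day", "favourite", "banana"]

def random_slogan():
    return [random.choice(WORDS) for _ in range(4)]

def fitness(slogan):
    return sum(word in KEYWORDS for word in slogan)

def breed(mum, dad):
    cut = random.randrange(1, len(mum))          # crossover point
    child = mum[:cut] + dad[cut:]
    if random.random() < 0.2:                    # occasional mutation
        child[random.randrange(len(child))] = random.choice(WORDS)
    return child

population = [random_slogan() for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # the fittest survive...
    population = parents + [breed(random.choice(parents), random.choice(parents))
                            for _ in range(20)]  # ...and breed the next generation

print(" ".join(max(population, key=fitness)))
```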
More work is needed on the program, especially its fitness function – the way it decides what is a good slogan and what isn’t. As it stands, this sort of program isn’t likely to replace anyone’s marketing department. It could help with brainstorming sessions though, sparking new ideas but leaving humans to make the final choice. Supporting human creativity rather than replacing it is probably just as worthwhile a goal, after all.