Making sense of squishiness – 3D modelling the natural world

by Paul Curzon, Queen Mary University of London

Look out the window at the human-made world. It’s full of hard, geometric shapes – our buildings, the roads, our cars. They are made of solid things like tarmac, brick and metal that are designed to be rigid and stay that way. The natural world is nothing like that though. Things bend, stretch and squish in response to the forces around them. That provides a whole bunch of fascinating problems for computer scientists like Lourdes Agapito of Queen Mary, University of London to solve.

Computer scientists interested in creating 3-dimensional models of the world have so far mainly concentrated on modelling the hard things. Why? Because they are easier! You can see the results in computer-animated films like Toy Story, and in the 3D worlds like Second Life that your avatar inhabits. Even the soft things there tend to be modelled as rigid.

Lourdes works in this general area creating 3D computer models, but she wants to solve the problem of creating them automatically, just from the flat images in videos, and she is specifically interested in things that deform – the squishy things.

Look out the window and watch the world go by. As you watch a woman walk past you have no problem knowing that you are looking at the same person as you were a second ago – even if she becomes partially hidden as she walks behind the post box and turns to post a letter. The sun goes behind a cloud and the scene is suddenly darker. It starts to rain and she opens an umbrella. You can still recognise her as the same object. Your brain is pulling some amazing tricks to make this seem so mundane. Essentially it is creating a model of the world – identifying all the 3-dimensional objects that you see and tracking them over time. If we can do it, why can’t a computer?

Unlike hard surfaces, deformable ones don’t look the same from one still to the next. You have to worry not just about changes in lighting, about objects being partially hidden, and about them appearing different from different angles: the object itself will be a different shape from one still to the next. That makes it far harder to work out which bits of one image are actually the same as the ones in the next. Lourdes has taken on a seriously hard problem.

Existing vision systems that create 3D objects have made things easier for themselves by using existing models. If a computer already has a model of a cube to compare what it sees with, then spotting a cube in the image stream is much easier than working it out from scratch. That doesn’t really generalise to deformable objects though because they vary too much. Another approach, used by the film industry, is to put highly visible markers on objects so that those markers can be tracked. That doesn’t help if you just want to point a camera out the window at whatever passes by though.

Software from Lourdes’ team creates a model of the human face as it deforms: a looping animation of a man’s face making different expressions next to a cartoon version which copies him, with red dots on his features mapped to red dots on the cartoon face.

Lourdes’ aim is to be able to point a camera at a deformable object and have a computer vision system create a 3D model simply by analysing the images. No markers, no existing models of what might be there, not even previous films to train it with: just the video itself. So far her team have created a system that can do this in some situations, such as with faces as a person changes their expression. Their next goal is to make the system work for a whole person as they are filmed doing arbitrary things.

It’s the technical challenge that inspires Lourdes the most, though once the problems of deformable objects are solved there are of course applications. One immediately obvious area is in operating theatres. Keyhole surgery is now very common. It involves a surgeon operating remotely, seeing what they are doing by looking at flat video images from a fibre optic probe inside the body of the person being operated on. The image is flat, but the inside of the person that the surgeon is trying to make cuts in is 3-dimensional. It would be far less error-prone if the surgeon was looking at an accurate 3D model built from the video feed rather than just a flat picture. Of course, the inside of your body is made of exactly the kind of squishy, deformable surfaces that Lourdes is interested in. Get the computer science right and technologies like this will save lives.

At the same time as tackling seriously hard if squishy computer science problems, Lourdes is also a mother of three. A major reason she can fit it all in, as she points out, is that she has a very supportive partner who shares in the childcare. Without him it would be impossible to balance all the work involved in leading a top European research team. It’s also important to get away from work sometimes. Running regularly helps Lourdes cope with the pressures and as we write she is about to run her first half marathon.

Lourdes may or may not be the person who turns her team’s solutions into the applications that in the future save lives in operating theatres, spot suspicious behaviour in CCTV footage or allow film-makers to quickly animate the actions of actors. Whoever does create the applications, we still need people like Lourdes who are just excited about solving the fundamental problems in the first place.


This article was originally published on the CS4FN website in around 2011. You can read more about Women in Computing here.


This blog is funded through EPSRC grant EP/W033615/1.

Recognising (and addressing) bias in facial recognition tech – the Gender Shades Audit #BlackHistoryMonth ^JB

The five shades used for skin tone emojis

Some people have a neurological condition called face blindness (also known as ‘prosopagnosia’) which means that they are unable to recognise people, even those they know well – this can include their own face in the mirror! They only know who someone is once that person starts to speak; until then they can’t be sure who it is. They can certainly detect faces, but they might struggle to classify them in terms of gender or ethnicity. In general, though, most people have an exceptionally good ability to detect and recognise faces – so good, in fact, that we even detect faces when they’re not actually there. This is called pareidolia. Perhaps you see a surprised face in the picture of USB sockets below.

A unit containing four sockets, 2 USB and 2 for a microphone and speakers.
Happy, though surprised, sockets

What if facial recognition technology isn’t as good at recognising faces as it has sometimes been claimed to be? If the technology is being used in the criminal justice system, and gets the identification wrong, this can cause serious problems for people (see Robert Williams’ story in “Facing up to the problems of recognising faces“).

In 2018 Joy Buolamwini and Timnit Gebru shared the results of research they’d done, testing three different commercial facial recognition systems. They found that these systems were much more likely to wrongly classify darker-skinned female faces compared to lighter- or darker-skinned male faces. In other words, the systems were not reliable.

“The findings raise questions about how today’s neural networks, which … (look for) patterns in huge data sets, are trained and evaluated.”

Study finds gender and skin-type bias in commercial artificial-intelligence systems
(11 February 2018) MIT News

The Gender Shades Audit

Facial recognition systems are trained to detect, classify and even recognise faces using a bank of photographs of people. Joy and Timnit examined two banks of images used to train facial recognition systems and found that around 80 per cent of the photos used were of people with lighter-coloured skin.

If the photographs aren’t fairly balanced in terms of having a range of people of different gender and ethnicity, then the resulting technologies will inherit that bias too. Effectively, the systems here were being trained to recognise light-skinned people.

The Pilot Parliaments Benchmark

They decided to create their own set of images, ensuring that these covered a wide range of skin tones and had an equal mix of men and women (‘gender parity’). They did this by selecting photographs of members of parliaments known to have a reasonably equal mix of men and women, choosing parliaments from countries with predominantly darker-skinned people (Rwanda, Senegal and South Africa) and from countries with predominantly lighter-skinned people (Iceland, Finland and Sweden).

They labelled all the photos according to gender (they did have to make some assumptions based on name and appearance if pronouns weren’t available) and used the Fitzpatrick scale (see Different shades, below) to classify skin tones. The result was a set of photographs labelled as dark male, dark female, light male, light female with a roughly equal mix across all four categories – this time, 53 per cent of the people were light-skinned (male and female).

A composite image showing the range of skin tone classifications with the Fitzpatrick scale on top and the skin tone emojis below.

Different shades

The Fitzpatrick skin tone scale (top) is used by dermatologists (skin specialists) as a way of classifying how someone’s skin responds to ultraviolet light. There are six points on the scale with 1 being the lightest skin and 6 being the darkest. People whose skin tone has a lower Fitzpatrick score are more likely to burn in the sun and not tan, and are also at greater risk of melanoma (skin cancer). People with higher scores have darker skin which is less likely to burn and they have a lower risk of skin cancer. 

Below it is a variation of the Fitzpatrick scale, with five points, which is used to create the skin tone emojis that you’ll find on most messaging apps in addition to the ‘default’ yellow. 

Testing three face recognition systems

Joy and Timnit tested the three commercial face recognition systems against their new database of photographs – a fair test covering the wide range of faces that a recognition system might come across – and this is where they found that the systems were less able to correctly classify particular groups of people. The systems were very good at identifying lighter-skinned men and darker-skinned men, but were less able to correctly identify darker-skinned women, and women overall.
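
The heart of such an audit can be expressed in a few lines of code. Here is a minimal sketch in Python (the handful of records is invented purely for illustration – the real benchmark contained over a thousand faces): it simply computes the misclassification rate separately for each group, which is exactly the comparison that exposed the bias.

```python
# Toy audit: compare a classifier's error rate across groups.
# These few records are invented purely for illustration.
results = [
    ("lighter male", True), ("lighter male", True), ("lighter male", True),
    ("lighter female", True), ("lighter female", False),
    ("darker male", True), ("darker male", True),
    ("darker female", False), ("darker female", True), ("darker female", False),
]

def error_rates(results):
    """Return the fraction of faces misclassified, per group."""
    totals, errors = {}, {}
    for group, correct in results:
        totals[group] = totals.get(group, 0) + 1
        errors[group] = errors.get(group, 0) + (0 if correct else 1)
    return {g: errors[g] / totals[g] for g in totals}

for group, rate in error_rates(results).items():
    print(f"{group}: {rate:.0%} misclassified")
```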

These tools, trained on sets of data that had a bias built into them, inherited those biases and this affected how well they worked. Joy and Timnit published the results of their research and it was picked up and discussed in the news as people began to realise the extent of the problem, and what this might mean for the ways in which facial recognition tech is used. 

“An audit of commercial facial-analysis tools found that dark-skinned faces are misclassified at a much higher rate than are faces from any other group. Four years on, the study is shaping research, regulation and commercial practices.”

The unseen Black faces of AI algorithms (19 October 2022) Nature

There is some good news though. The three companies made changes to improve their facial recognition systems, and several US cities have already banned the use of this tech in criminal investigations, with more calling for a ban too. People around the world are becoming more aware of the limitations of this type of technology, and the harms to which it may (perhaps unintentionally) be put, and are calling for better regulation of these systems.

Further reading

Study finds gender and skin-type bias in commercial artificial-intelligence systems (11 February 2018) MIT News
Facial recognition software is biased towards white men, researcher finds (11 February 2018) The Verge
Go read this special Nature issue on racism in science (21 October 2022) The Verge

More technical articles

Joy Buolamwini and Timnit Gebru (2018) Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research 81:1-15.
The unseen Black faces of AI algorithms (19 October 2022) Nature News & Views


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


This blog is funded through EPSRC grant EP/W033615/1.

Hidden Figures – NASA’s brilliant calculators #BlackHistoryMonth ^JB

Full Moon and silhouetted tree tops

by Paul Curzon, Queen Mary University of London

Full Moon with a blue filter
Full Moon image by PIRO from Pixabay

NASA Langley was the birthplace of the U.S. space program, where astronauts like Neil Armstrong learned to land on the Moon. Everyone knows the names of astronauts, but behind the scenes a group of African-American women were vital to the space program: Katherine Johnson, Mary Jackson and Dorothy Vaughan. Before electronic computers were invented, ‘computers’ were just people who did calculations, and that is where these three started out, as part of a segregated team of mathematicians. Dorothy Vaughan became the first African-American woman to supervise staff there and helped make the transition from human to electronic computers by teaching herself and her staff how to program in the early programming language FORTRAN.

FORTRAN code on a punched card, from Wikipedia.

The women switched from being the computers to programming them. These hidden women helped put the first American, John Glenn, in orbit, and over many years worked on calculations like the trajectories of spacecraft and their launch windows (the small period of time when a rocket must be launched if it is to get to its target). These complex calculations had to be correct. If they got them wrong, the mistakes could ruin a mission, putting the lives of the astronauts at risk. Get them right, as they did, and the result was a giant leap for humankind.

See the film ‘Hidden Figures’ for more of their story (trailer below).

This story was originally published on the CS4FN website and was also published in issue 23, The Women Are (Still) Here, on p21 (see ‘Related magazine’ below).


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


Related Magazine …

This blog is funded through EPSRC grant EP/W033615/1.

Gladys West: Where’s my satellite? Where’s my child? #BlackHistoryMonth

Satellite image of the Earth at night

by Paul Curzon, Queen Mary University of London

Satellites are critical to much modern technology, and especially GPS, which allows our smartphones, laptops and cars to work out their exact position on the surface of the Earth. This is central to all mobile technology, wearable or not, that relies on knowing where you are, from plotting a route to your nearest Indian restaurant to telling you where a person you might want to meet is. Many, many people were involved in creating GPS, but it was only in Black History Month of 2017 that the critical part Gladys West played became widely known.

Work hard, go far

As a child Gladys worked with her family in the fields of their farm in rural Virginia. That wasn’t the life she wanted, so she worked hard through school, leaving as the top student. She won a scholarship to university, and then landed a job as a mathematician at a US navy base.

There she solved the maths problems behind the positioning of satellites. She worked closely with the programmers to write the code to do calculations based on her maths. Nine times out of ten the results that came back weren’t exactly right, so much of her time was spent working out what was going wrong with the programs, as it was vital that the results were very accurate.

Seasat and Geosat

Her work on the Seasat satellite won her a commendation. It was a revolutionary satellite designed to remotely monitor the oceans. It collected data about things like temperature, wind speed and wind direction at the sea’s surface and the heights of waves, as well as sensing data about sea ice. This kind of remote sensing has since had a massive impact on our understanding of climate change. Gladys specifically worked on the satellite’s altimeter: a radar-based sensor that allowed Seasat to measure its precise distance from the surface of the ocean below. She continued this work on later remote sensing satellites too, including Geosat, an Earth observation satellite.

Gladys West and Sam Smith look over data from the Global Positioning System,
which Gladys helped develop. Photo credit US Navy, 1985, via Wikipedia.

GPS

Knowing the positions of satellites is the foundation for GPS. The way GPS works is that our mobile receivers pick up a timed signal from several different satellites. Calculating where we are can only be done if you first know very precisely where those satellites were when they sent the signal. That is what Gladys’ work provided.
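
As a rough illustration of that calculation, here is a toy, 2-dimensional sketch in Python (not real GPS code: real receivers work in 3 dimensions and must also solve for their own clock error, which is why they need at least four satellites). Given the satellites’ known positions and the distances worked out from the signals’ travel times, simple algebra recovers the receiver’s position:

```python
import math

def locate(sats, dists):
    """Toy 2D GPS: find (x, y) from three known satellite positions
    and the distances to them (measured from signal travel times)."""
    (x1, y1), (x2, y2), (x3, y3) = sats
    r1, r2, r3 = dists
    # Subtracting the circle equations pairwise leaves two linear
    # equations a*x + b*y = e and c*x + d*y = f, easily solved.
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    e = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    c, d = 2 * (x3 - x1), 2 * (y3 - y1)
    f = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - c * e) / det)

# Invented positions: if the satellites weren't where we thought,
# the answer would be wrong - which is why Gladys' work mattered.
sats = [(0.0, 20.0), (15.0, 25.0), (-10.0, 18.0)]
true_position = (5.0, 0.0)
dists = [math.dist(true_position, s) for s in sats]
print(locate(sats, dists))  # recovers (5.0, 0.0)
```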

GPS Watches

You can now buy, for example, GPS watches, allowing you to wear a watch that watches where you are. They can also be used by people with dementia, who have severe memory problems, allowing their carers to find them if they go out on their own but are then confused about where they are. They also allow parents to know where their kids are all the time. Do you think that’s a good use?

Since so much technology now relies on knowing exactly where we are, Gladys’ work has had a massive impact on all our lives.

This article was originally published on the CS4FN website and a copy can also be found on page 14 of Issue 25 of CS4FN, “Technology worn out (and about)“, on wearable computing, which can be downloaded as a PDF, along with all our other free material, here: https://cs4fndownloads.wordpress.com/  

This article is also republished during Black History Month and is part of our Diversity in Computing series, celebrating the different people working in computer science (Gladys West’s page).


This blog is funded through EPSRC grant EP/W033615/1.

The Mummy in an AI world: Jane Webb’s future

by Paul Curzon, Queen Mary University of London

The sarcophagus of a mummy
Image by albertr from Pixabay

Inspired by Mary Shelley’s Frankenstein, the 17-year-old orphan Jane Webb secured her future by writing the first ever Mummy story. The 22nd century world in which her novel was set is perhaps the most amazing thing about the three-volume book though.

On the death of her father, Jane realised she needed to find a way to support herself, and did so by publishing her novel “The Mummy!” in 1827. In contrast to their modern version as stars of horror films, Webb’s Mummy, a reanimation of Cheops, was actually there to help those doing good and punish those that were evil. Napoleon had, at the turn of the century, invaded Egypt, taking with him scholars intent on understanding Ancient Egyptian society. Europe was fascinated with Ancient Egypt and awash with Egyptian artefacts and stories around them. In London, the Egyptian Hall had been built in Piccadilly in 1812 to display Egyptian artefacts, and in 1821 it displayed a replica of the tomb of Seti I. The code of the Rosetta Stone was cracked in 1822, leading to the decipherment of hieroglyphics. The time was therefore ripe for someone to come up with the idea of a Mummy story.

The novel was not, however, set in her own times but in a 22nd century future that she imagined, and that future was perhaps more amazing than the idea of a mummy coming to life. Her version of the future was full of technological inventions supporting humanity, as well as social predictions, many of which have come to fruition, such as space travel and the idea that women might wear trousers as the height of fashion (making her a feminist hero). The machines she described in the book led to her meeting her future husband, John Loudon. As a writer about farming and gardening, he was so impressed by the idea of a mechanical milking machine included in the book that he asked to meet her. They married soon after (and she became Jane Loudon).

The skilled artificial intelligences she wrote into her future society are perhaps the most amazing of her ideas, in that she was the first person to really envision in fiction a world where AIs and robots were embedded in society, just doing good as standard. To put this in the context of other predictions: Ada Lovelace’s notes suggesting that machines of the future would be able to compose music came 20 years later.

Jane Webb’s future was also full of cunning computational contraptions: there were steam-powered robot surgeons, foreseeing the modern robots that are able to do operations (and with their steady hands are better at, for example, eye surgery than a human). She also described Artificial Intelligences replacing lawyers. Her machines were fed their legal brief, giving them instructions about the case, through tubes. Whilst robots may not yet have fully replaced barristers and judges, artificial intelligence programs are already used, for example, to decide the length of sentences of those convicted in some places, and many see it now only being a matter of time before lawyers are spending their time working with Artificial Intelligence programs as standard. Jane’s world also includes a version of the Internet, at a time before electric telegraph existed and when telegraph messages were sent by semaphore between networks of towers.

The book ultimately secured her future as required, and whilst we do not yet have any real reanimated mummies wandering around doing good deeds, Jane Webb did envision lots of useful inventions, many now a reality, and certainly had pretty good ideas about how future computer technology would pan out in society… despite computers, never mind artificial intelligences, still being well over a century away.


More on …

Related Magazines …


EPSRC supported this article through research grants (EP/K040251/2, held by Professor Ursula Martin, as well as grant EP/W033615/1).

A storm in a bell jar

by Paul Curzon, Queen Mary University of London

(from the archive)

lightning
Image by FelixMittermeier from Pixabay 

Ada Lovelace was close friends with John Crosse, and knew his father Andrew: the ‘real Frankenstein’. Andrew Crosse apparently created insect life from electricity, stone and water…

Andrew Crosse was a ‘gentleman scientist’ doing science for his own amusement, including work improving giant versions of the first batteries, called ‘voltaic piles’. He was given the nickname ‘the thunder and lightning man’ because of the way he used the batteries to do giant discharges of electricity, with bangs as loud as cannons.

He hit the headlines when he appeared to create life from electricity, Frankenstein-like. This was an unexpected result of his experiments using electricity to make crystals. He was passing a current through water containing dissolved limestone over a period of weeks. In one experiment, about a month in, a perfect insect appeared, apparently from nowhere, and soon after started to move. More and more insects then appeared over time. He mentioned it to friends, which led to a story in a local paper. It was then picked up nationally. Some of the stories said he had created the insects, and this led to outrage and death threats over his apparent blasphemy in trying to take the position of God.

(Does this start to sound like a modern social networking storm, trolls and all?) In fact he appears to have believed, and others agreed, that the mineral samples he was using must have been contaminated with tiny insect eggs that just naturally hatched. Scientific results are only accepted if they can be replicated. Others, who took care to avoid contamination, couldn’t get the same result. The secret of creating life had not been found.

While Mary Shelley, who wrote Frankenstein, did know Crosse, he can’t (sadly, perhaps, for the story’s sake) have been the inspiration for Frankenstein as has been suggested, given that she wrote it decades earlier!


More on …

Related Magazines …


EPSRC supported this article through research grants (EP/K040251/2, held by Professor Ursula Martin, as well as grant EP/W033615/1).

Pass the screwdriver, Igor

Mary Shelley, Frankenstein’s monster and artificial life

by Paul Curzon and Peter W McOwan, Queen Mary University of London

(Updated from the archive)

Frankenstein's Monster
Image by sethJreid from Pixabay

Shortly after Ada Lovelace was born – so long before she made her predictions about future “creative machines” – Mary Shelley, a friend of her father (Lord Byron), was writing a novel. In her book, Frankenstein, inanimate flesh is brought to life. Perhaps Shelley foresaw what is actually to come, what computer scientists might one day create: artificial life.

Life it may not be, but engineers are now doing pretty well in creating humanoid machines that can do their own thing. Could a machine ever be considered alive? The 21st century is undoubtedly going to be the age of the robot. Maybe it’s time to start thinking about the consequences in case they gain a sense of self.

Frankenstein was obsessed with creating life. In Mary Shelley’s story, he succeeded, though his creation was treated as a “Monster” struggling to cope with the gift of life it was given. Many science fiction books and films have toyed with these themes: the film Blade Runner, for example, explored similar ideas about how intelligent life is created; androids that believe they are human, and the consequences for the creatures concerned.

Is creating intelligent life fiction? Not totally. Several groups of computer scientists are exploring what it means to create non-biological life, and how it might be done. Some are looking at robot life, working at the level of insect life-forms, for example. Others are looking at creating intelligent life within cyberspace.

For 70 years or more, scientists have tried to create artificial intelligences. They have had a great deal of success in specific areas such as computer vision and chess-playing programs. These programs are not really intelligent in the way humans are, though they are edging closer. However, none of them really cuts it as creating “life”. Life is something more than intelligence.

A small band of computer scientists have been trying a different approach that they believe will ultimately lead to the creation of new life forms: life forms that could one day even claim to be conscious (and who would we be to disagree with them if they think they are?). These scientists believe life can’t be engineered in a piecemeal way: the whole being has to be created as a coherent whole. Their approach is to build the basic building blocks and let life emerge from them.

A sodarace in action

The outline of the idea could be seen in the game Sodarace, where you could build your own creatures that move around a virtual world, and even let them evolve. One approach to building a creature such as a spider would be to try to work out mathematical equations for how each leg moves and program those equations. The alternative, artificial life way, as used in Sodarace, is instead to program up the laws of physics, such as gravity and friction, and how masses, springs and muscles behave according to those laws. Then you just put these basic bits together in a way that corresponds to a spider. With this approach you don’t have to work out every eventuality in advance (what if it comes to a wall? Or a cliff? Or bumpy ground?) and write code to deal with it. Instead natural behaviour emerges.
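
Here is a minimal sketch of that approach in Python (our own toy code under simple assumptions, not Sodarace’s actual implementation). Nothing in it says how a creature should move: we only program gravity, a floor and Hooke’s-law springs connecting point masses, and whatever behaviour you see emerges from how the bits are wired together.

```python
import math

GRAVITY, DT = 9.8, 0.01   # gravity strength and simulation time step

class Mass:
    def __init__(self, x, y):
        self.x, self.y, self.vx, self.vy = x, y, 0.0, 0.0

class Spring:
    def __init__(self, a, b, stiffness=80.0):
        self.a, self.b, self.k = a, b, stiffness
        self.rest = math.dist((a.x, a.y), (b.x, b.y))  # natural length

def step(masses, springs):
    """One tick of the world: springs pull, gravity drags, the floor pushes."""
    for s in springs:
        dx, dy = s.b.x - s.a.x, s.b.y - s.a.y
        length = math.hypot(dx, dy)
        force = s.k * (length - s.rest)        # Hooke's law
        fx, fy = force * dx / length, force * dy / length
        s.a.vx += fx * DT; s.a.vy += fy * DT   # equal and opposite forces
        s.b.vx -= fx * DT; s.b.vy -= fy * DT
    for m in masses:
        m.vy -= GRAVITY * DT                   # gravity
        m.x += m.vx * DT; m.y += m.vy * DT
        if m.y < 0.0:                          # a crude floor
            m.y, m.vy = 0.0, 0.0

# A 'creature': three masses joined in a triangle, dropped from a height.
masses = [Mass(0.0, 2.0), Mass(1.0, 2.0), Mass(0.5, 3.0)]
springs = [Spring(masses[0], masses[1]), Spring(masses[1], masses[2]),
           Spring(masses[0], masses[2])]
for _ in range(200):
    step(masses, springs)
print([(round(m.x, 2), round(m.y, 2)) for m in masses])
```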

The artificial life community believe that not just life-like movement but life-like intelligence can emerge in a similar way. Rather than programming the behaviour of muscles, you program the behaviour of neurones and then build brains out of them. That, it turns out, has been the key to the machine learning programs that are storming the world of Artificial Intelligence, turning it into an everyday tool. However, if aiming for artificial life, you would keep going: combine it with the basic biochemistry of an immune system, do a similar thing with a reproductive system, and so on.
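
A single artificial neurone is itself just a few lines of code. Here is a hedged sketch in Python (real systems use smoother maths and weights learned from data, but this is the basic unit everything else is built from):

```python
# One neurone: weigh up the inputs and 'fire' if the total passes a threshold.
def neurone(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With these hand-picked weights it behaves like an AND gate:
# it fires only if both inputs fire.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neurone([a, b], [1.0, 1.0], threshold=2.0))
```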

Want to know more? A wonderful early book is Steve Grand’s “Creation”, on how he created what at the time was claimed to be “the nearest thing to artificial life yet”… It started life as the game “Creatures”.

Then have a go at creating artificial life yourself (but be nice to it).


More on …

Related Magazines …


EPSRC supported this article through research grants (EP/K040251/2, held by Professor Ursula Martin, as well as grant EP/W033615/1).

Making core rope memory

A coloured bead version of core rope memory with J encoded on its 8 beads (01001010)

by Jo Brodie, Queen Mary University of London

We have explained how core rope memory was used as the computer memory storing the Apollo guidance computer program that got us to the Moon. A team from the University of Washington came up with a fun craft activity to make your own core memory. It may not fly you to the Moon, but it is a neat way to store information in a bracelet. Find their activity pages here [EXTERNAL].

What it involves is threading 8 beads onto a string, with a gap between them, to form a storage space for bytes of data. Each byte is 8 binary bits (eight pieces of information, each a 1 or a 0). Each bead represents the position of one bit in your core rope memory. You then take other threads and weave them through the beads. Each thread will store one byte of actual data. Pass the thread through a bead when you want that bead to read 1, or over it when you want that bead to read 0.

Each thread weaving past or through 8 beads can then encode the information for one letter. By adding lots of threads you can store a word or even a sentence on each core rope memory string (perhaps your name, or some secret message).

Using a binary encoding for each letter (so capital letter A would be the 8 bits 01000001 if you’re following this conversion from binary to letters table) you put that letter’s thread through or over each of the 8 beads to ‘spell’ out the letter in binary.

My name is Jo so a core rope memory encoding my name would have only three threads (one to hold the 8 beads and two to spell my name). The second thread would go over, through, over, over, through, over, through, over to spell the capital letter J (01001010). The third thread would go over, through, through, over, through, through, through, through to spell lowercase o (01101111).
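
If you would like to check a weaving plan before threading any beads, here is a small Python helper (our own addition, not part of the University of Washington activity) that prints the over/through instructions for each letter of a word:

```python
# Turn a word into bead-weaving instructions: a 1 bit means the thread
# goes THROUGH the bead, a 0 bit means it goes OVER it.
def weaving_instructions(word):
    for letter in word:
        bits = format(ord(letter), "08b")   # the letter's 8-bit binary code
        moves = ", ".join("through" if b == "1" else "over" for b in bits)
        print(f"{letter} ({bits}): {moves}")

weaving_instructions("Jo")
# J (01001010): over, through, over, over, through, over, through, over
# o (01101111): over, through, through, over, through, through, through, through
```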

Let’s hope you have a slightly longer name so you can have more fun creating your own personalised core rope memory!


More on …

Related Magazines …


This article was funded by UKRI, through Professor Ursula Martin’s grant EP/K040251/2 and grant EP/W033615/1.

Core rope memory

by Jo Brodie and Paul Curzon, Queen Mary University of London

A view of the Earth from the Moon
Image by WikiImages from Pixabay

Weaving, in the form of the Jacquard loom with its swappable punch cards controlling the loom’s patterns, inspired Charles Babbage. He intended to use the same kind of punch card to store programs in his Analytical Engine which, had it been built, would have been the first computer. However, weaving had a much more direct use in computing history. Weaving helped get us to the Moon.

In the 1960s, NASA’s Apollo moon mission needed really dependable computers. It was vital that the programs wouldn’t be corrupted in space. The problem was solved using core rope memory.

Core rope memory was made of small ‘eyelets’ or beads of ferrite, a material that can be magnetised, and copper wire which was woven through some of the eyelets but not others. The ring-shaped magnets were known as magnetic cores. An electrical current passing through the wires made the whole thing work.

Representing binary

Both data and programs in computers are stored as binary: 1s and 0s. Those 1s and 0s can be represented by physical things in the world in lots of different ways. NASA used weaving. A wire that passed through an eyelet would be read as a binary 1 when the current was on but if it passed around the eyelet then it would be read as 0. This meant that a computer program, made up of sequences of 1s and 0s, could be permanently stored by the pattern that was woven. This gave read-only memory. Related techniques were used to create memory that the computer could change too, as the guidance computer needed both.

The memory was woven for NASA by women who were skilled textile workers. They worked in pairs: one used a special hollow needle to thread the copper wire through one magnetic core, then the other person would thread it back through a different one.

The program was first developed on a computer (the sort that took up a whole room back then) and then translated into instructions for a machine which told the weavers the correct positions for the wire threads. It was very difficult to undo a mistake so a great deal of care was taken to get things right the first time, especially as it could take up to two months to complete one block of memory. Some of the rope weavers were overseen by Margaret Hamilton, one of the women who developed the software used on board the spacecraft, and who went on to lead the Apollo software team.

The world’s first portable computer?

Several of these pre-programmed core rope memory units were combined and installed in the guidance computers of the Apollo mission spacecraft that had to fly astronauts safely to the Moon and back. NASA needed on-board guidance systems to control the spacecraft independently of Mission Control back on Earth. They needed something that didn’t take up too much room or weigh too much, that could survive the shaking and juddering of take-off and background radiation: core rope memory fitted the bill perfectly.

It packed a lot of information (well, not by modern standards! The guidance computer contained only around 70 kilobytes of memory) into a small space and was very robust as it could only break if a wire came loose or one of the ferrite eyelets was damaged (which didn’t happen). To make sure though, the guidance computer’s electronics were sealed from the atmosphere for extra protection. They survived and worked well, guiding the Landing Modules safely onto the Moon.

One small step for man perhaps, but the Moon landings were certainly a giant leap for computing.


More on …

Related Magazines …


This article was funded by UKRI, through Professor Ursula Martin’s grant EP/K040251/2 and grant EP/W033615/1.

Ada Lovelace in her own words

by Ursula Martin, University of Oxford

(From the archive)

A jumble of letters

Charles Babbage invented wonderful computing machines. But he was not very good at explaining things. That’s where Ada Lovelace came in. She is famous for writing a paper in 1843 explaining how Charles Babbage’s Analytical Engine worked – including a big table of formulas which is often described as “the first computer program”.
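
That table is usually described as setting out the steps for calculating Bernoulli numbers. As a very rough modern sketch of the same calculation – in Python, using a standard recurrence, certainly not Lovelace’s notation or her exact method – it might look like this:

```python
# Compute Bernoulli numbers B_0..B_n exactly, via the standard recurrence
# sum_{j=0}^{m} C(m+1, j) * B_j = 0, rearranged to give B_m.
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    B = [Fraction(1)]                       # B_0 = 1
    for m in range(1, n + 1):
        total = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-total / (m + 1))
    return B

for i, b in enumerate(bernoulli_numbers(8)):
    print(f"B_{i} = {b}")
# B_1 = -1/2, B_2 = 1/6, B_4 = -1/30 ... (odd ones beyond B_1 are 0)
```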

Charles Babbage invented his mechanical computers to save everyone from the hard work of doing big mathematical calculations by hand. He only managed to build a few tiny working models of his first machine, his difference engine. It was finally built to Babbage’s designs in the 1990s and you can see it in the London Science Museum. It has 8,000 mechanical parts and is the size of a small car, but when the operator turns the big handle on the side it works perfectly, and prints out correct answers.

Babbage invented, but never built, a more ambitious machine, his Analytical Engine. In modern language, this was a general purpose computer, so it could have calculated anything a modern computer can – just a lot more slowly. It was entirely mechanical, but it had all the elements we recognize today – like memory, CPU, and loops.

Lovelace’s paper explains all the geeky details of how numbers are moved from memory to the CPU and back, and the way the machine would be programmed using punched cards.

But she doesn’t stop there – in quaint Victorian language she tells us about the challenges familiar to every programmer today! She understands how complicated programming is:

“There are frequently several distinct sets of effects going on simultaneously; all in a manner independent of each other, and yet to a greater or less degree exercising a mutual influence.”

the difficulty of getting things right:

“To adjust each to every other, and indeed even to perceive and trace them out with perfect correctness and success, entails difficulties whose nature partakes to a certain extent of those involved in every question where conditions are very numerous and inter-complicated.”

and the challenge of making things go faster:

“One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation.”

She explains how computing is about patterns:

“it weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves”.

and inventing new ideas:

“We might even invent laws … in an arbitrary manner, and set the engine to work upon them, and thus deduce numerical results which we might not otherwise have thought of obtaining”.

and being creative. If we knew the laws for composing music:

“the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”

Alan Turing famously asked if a machine can think – Ada Lovelace got there first:

“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”

Wow, pretty amazing, for someone born 200 years ago.


More on …

Related Magazines …


EPSRC supported this article through research grants (EP/K040251/2, held by Professor Ursula Martin, as well as grant EP/W033615/1).