Hidden Figures – NASA’s brilliant calculators #BlackHistoryMonth ^JB


by Paul Curzon, Queen Mary University of London

Full Moon with a blue filter
Full Moon image by PIRO from Pixabay

NASA Langley was the birthplace of the U.S. space program, where astronauts like Neil Armstrong learned to land on the moon. Everyone knows the names of astronauts, but behind the scenes a group of African-American women were vital to the space program: Katherine Johnson, Mary Jackson and Dorothy Vaughan. Before electronic computers were invented, ‘computers’ were just people who did calculations, and that’s where these women started out, as part of a segregated team of mathematicians. Dorothy Vaughan became the first African-American woman to supervise staff there and helped make the transition from human to electronic computers by teaching herself and her staff how to program in the early programming language FORTRAN.

FORTRAN code on a punched card, from Wikipedia.

The women switched from being the computers to programming them. These hidden women helped put the first American, John Glenn, in orbit, and over many years worked on calculations like the trajectories of spacecraft and their launch windows (the small period of time when a rocket must be launched if it is to get to its target). These complex calculations had to be correct. If they got them wrong, the mistakes could ruin a mission, putting the lives of the astronauts at risk. Get them right, as they did, and the result was a giant leap for humankind.

See the film ‘Hidden Figures’ for more of their story (trailer below).

This story was originally published on the CS4FN website and was also published in issue 23, The Women Are (Still) Here, on p21 (see ‘Related magazine’ below).


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


Related Magazine …

This blog is funded through EPSRC grant EP/W033615/1.

Making core rope memory

A coloured bead version of core rope memory with J encoded on its 8 beads (01001010)

by Jo Brodie, Queen Mary University of London

We have explained how core rope memory was used as the computer memory storing the Apollo guidance computer program that got us to the moon. A team from the University of Washington came up with a fun craft activity to make your own core memory. It may not fly you to the moon, but is a neat way to store information in a bracelet. Find their activity pages here [EXTERNAL].

What it involves is threading 8 beads onto a string, with a gap between them, to form a storage space for bytes of data. Each byte is 8 binary bits (eight pieces of information, each a 1 or a 0). Each bead represents the position of one bit in your core rope memory. You then take other threads and weave them through the beads. Each thread will store another byte of actual data. Pass the thread through a bead when you want that bead to read 1, or over it when you want that bead to read 0.

Each thread weaving past or through 8 beads can then encode the information for one letter. By adding lots of threads you can store a word or even a sentence on each core rope memory string (perhaps your name, or some secret message).

Using a binary encoding for each letter (so capital letter A would be the 8 bits 01000001 if you’re following this conversion from binary to letters table) you put that letter’s thread through or over each of the 8 beads to ‘spell’ out the letter in binary.

My name is Jo so a core rope memory encoding my name would have only three threads (one to hold the 8 beads and two to spell my name). The second thread would go over, through, over, over, through, over, through, over to spell the capital letter J (01001010). The third thread would go over, through, through, over, through, through, through, through to spell lowercase o (01101111).
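
If you want to check your weaving plan before you start threading, here is a small Python sketch (ours, not part of the University of Washington activity) that turns each letter of a name into its 8 bits and the matching ‘through’ or ‘over’ bead instructions:

```python
# A minimal sketch: convert each character to 8 bits, then map
# 1 -> "through" (thread goes through the bead) and 0 -> "over".

def bead_instructions(text):
    """Return (character, bits, weave) for each character of text."""
    instructions = []
    for character in text:
        bits = format(ord(character), "08b")  # e.g. 'J' (code 74) -> '01001010'
        weave = ["through" if bit == "1" else "over" for bit in bits]
        instructions.append((character, bits, weave))
    return instructions

for character, bits, weave in bead_instructions("Jo"):
    print(character, bits, ", ".join(weave))

# J 01001010 over, through, over, over, through, over, through, over
# o 01101111 over, through, through, over, through, through, through, through
```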

Let’s hope you have a slightly longer name so you can have more fun creating your own personalised core rope memory!


More on …

Related Magazines …


This article was funded by UKRI, through Professor Ursula Martin’s grant EP/K040251/2 and grant EP/W033615/1.

Core rope memory

by Jo Brodie and Paul Curzon, Queen Mary University of London

A view of the Earth from the Moon
Image by WikiImages from Pixabay

Weaving, in the form of the Jacquard loom with its swappable punch cards controlling the loom’s patterns, inspired Charles Babbage. He intended to use the same kind of punch card to store programs in his Analytical Engine, which, had it been built, would have been the first computer. However, weaving had a much more direct use in computing history. Weaving helped get us to the Moon.

In the 1960s, NASA’s Apollo moon mission needed really dependable computers. It was vital that the programs wouldn’t be corrupted in space. The problem was solved using core rope memory.

Core rope memory was made of small ‘eyelets’ or beads of a metal called ferrite, which can be magnetised, and copper wire, which was woven through some of the eyelets but not others. The ring-shaped magnets were known as magnetic cores. An electrical current passing through the wires made the whole thing work.

Representing binary

Both data and programs in computers are stored as binary: 1s and 0s. Those 1s and 0s can be represented by physical things in the world in lots of different ways. NASA used weaving. A wire that passed through an eyelet would be read as a binary 1 when the current was on but if it passed around the eyelet then it would be read as 0. This meant that a computer program, made up of sequences of 1s and 0s, could be permanently stored by the pattern that was woven. This gave read-only memory. Related techniques were used to create memory that the computer could change too, as the guidance computer needed both.
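
As a rough illustration of that idea (a toy model in Python, not the Apollo computer’s real word size or wiring), you can think of each stored value as being defined entirely by which bit positions the sense wire threads through:

```python
# Toy model of read-only core rope memory: a stored value is fixed by
# the weave, i.e. by the set of bit positions the wire passes *through*.

WORD_LENGTH = 8  # bits per stored value in this toy example

def read_word(threaded_positions):
    """Read one stored value: through an eyelet = 1, around it = 0."""
    bits = ["1" if position in threaded_positions else "0"
            for position in range(WORD_LENGTH)]
    return "".join(bits)

# The weave is fixed at manufacture time, so changing the stored value
# would mean physically re-weaving the wire: read-only memory.
print(read_word({1, 4, 6}))  # -> 01001010
```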

The memory was woven for NASA by women who were skilled textile workers. They worked in pairs using a special hollow needle to thread the copper wire through one magnetic core and then the other person would thread it back through a different one.

The program was first developed on a computer (the sort that took up a whole room back then) and then translated into instructions for a machine which told the weavers the correct positions for the wire threads. It was very difficult to undo a mistake so a great deal of care was taken to get things right the first time, especially as it could take up to two months to complete one block of memory. Some of the rope weavers were overseen by Margaret Hamilton, one of the women who developed the software used on board the spacecraft, and who went on to lead the Apollo software team.

The world’s first portable computer?

Several of these pre-programmed core rope memory units were combined and installed in the guidance computers of the Apollo mission spacecraft that had to fly astronauts safely to the Moon and back. NASA needed on-board guidance systems to control the spacecraft independently of Mission Control back on Earth. They needed something that didn’t take up too much room or weigh too much, that could survive the shaking and juddering of take-off and background radiation: core rope memory fitted the bill perfectly.

It packed a lot of information (well, not by modern standards! The guidance computer contained only around 70 kilobytes of memory) into a small space and was very robust as it could only break if a wire came loose or one of the ferrite eyelets was damaged (which didn’t happen). To make sure, though, the guidance computer’s electronics were sealed from the atmosphere for extra protection. They survived and worked well, guiding the Lunar Modules safely onto the Moon.

One small step for man perhaps, but the Moon landings were certainly a giant leap for computing.


More on …

Related Magazines …


This article was funded by UKRI, through Professor Ursula Martin’s grant EP/K040251/2 and grant EP/W033615/1.

“The thundering engines vibrate throughout your body”

Computer scientist Jason Cordes tells us what it was like to work for NASA on the International Space Station during the time of Space Shuttle launches.

(From the archive)

The space shuttle lifting off
A space shuttle launch.
Image by WikiImages from Pixabay

Working for a space agency is brilliant. When I was younger, I often looked up at the stars and wondered what was out there. I visited Johnson Space Center in Houston, Texas and told myself that I wanted to work there someday. After completing my college degree in computer science, I had the great fortune to be asked to work at NASA’s Johnson Space Center as well as Kennedy Space Center.

Johnson Space Center is the home of the Mission Control Center (MCC). This is where NASA engineers direct in-orbit flights and track the position of the International Space Station (ISS) and the Space Shuttle when it is in orbit. Kennedy Space Center, situated at Cape Canaveral, Florida, is where the Space Shuttle and most other space-bound vehicles are launched. Once they achieve orbit, control is handed over to Johnson Space Center in Houston, which is why when you hear astronauts calling Earth, they talk to “Houston”.

Space City

Houston is a very busy city and you get that feeling when you are at Johnson. There are people everywhere and the Space Center looks like a small city unto itself. While I was there I worked on the computer control system for the International Space Station. The part I worked on was a series of laptop-based displays designed to give astronauts on the station a real-time view of the state of everything, from oxygen levels to the location of the robotic arm.

The interesting thing about developing this type of software is realising that the program is basically sending and receiving telemetry (essentially a long list of numbers) to and from the hardware, where the hardware is the space station itself. Once you think of it like that, the sheer simplicity of what is being done is really surprising. I certainly expected something more complex. All of the telemetry comes in over a wire and the software has to keep track of which telemetry belongs to which component, since different components all broadcast over the same wire. Essentially the program routes the data based on which component it comes from and acts as an interpreter, taking the numbers that the space station is feeding it and converting them into a graphical format that the astronauts can understand. The coolest part of working in Houston was interacting with astronauts and getting their feedback on how the software should work. It’s like working with celebrities.
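
As a concrete (and heavily simplified) illustration of that routing-and-interpreting idea, here is a small Python sketch. The component ids, scaling factors and labels are invented for illustration; they are not taken from the real ISS software:

```python
# Raw telemetry arrives as (component id, number) pairs on one shared
# channel. The display software routes each number to its component and
# converts it into something human-readable. All ids/labels here are made up.

CONVERTERS = {
    0x01: ("Oxygen level", lambda raw: f"{raw / 10:.1f} %"),
    0x02: ("Cabin temperature", lambda raw: f"{raw / 100:.2f} C"),
    0x03: ("Robotic arm angle", lambda raw: f"{raw / 10:.1f} deg"),
}

def route_and_convert(telemetry_stream):
    """Route each raw reading to its component and format it for display."""
    for component_id, raw_value in telemetry_stream:
        label, convert = CONVERTERS.get(component_id, ("Unknown component", str))
        print(f"{label}: {convert(raw_value)}")

# Different components all 'broadcast' mixed together on the same wire:
route_and_convert([(0x01, 209), (0x03, 457), (0x02, 2151)])
# Oxygen level: 20.9 %
# Robotic arm angle: 45.7 deg
# Cabin temperature: 21.51 C
```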

Wild times

While at Kennedy Space Center, I was tasked with working on the Shuttle Launch Control System for the next generation of shuttles. The software is very similar to that used to control the ISS. The thing I remember most about working there was the environment.

Kennedy Space Center is about as opposite as you can get from the big city feeling at Johnson. It’s situated on what is essentially swampland on the eastern coast of Florida. The main gates to Johnson are right on major streets within Houston, but at Kennedy the gate is on a major highway, and even then, travel to the actual buildings of the Space Center is a leisurely 30-minute drive through orange groves and trees, past causeways and creeks. Along the way you might spot an eagle’s nest in one of the trees, or a manatee poking its head from the waters. Kennedy is in the middle of a wildlife preserve with alligators, manatees, raccoons and every other kind of critter you can imagine. In fact, I was prevented from going home one evening by a gator that decided to warm itself up by my car.

The coolest thing about working at NASA, and specifically Kennedy Space Center, was being able to watch shuttle launches from less than 10 miles away. It’s an incredible experience. The thundering engines vibrate throughout your body like being next to the speakers at an entirely too loud rock concert. Night launches were the most amazing, with the fire from the engines lighting up the sky. It is awe-inspiring to watch this machine and realise that you are the one who wrote the computer program that set it in motion. I’ve worked at a few development firms, but few of them gave me as much of a thrill from seeing my work in action as this did.


More on …

Related Magazines …


This cs4fn blog is funded by EPSRC, through grant EP/W033615/1.

If the Beagle had landed…

by Peter W McOwan and Paul Curzon, Queen Mary University of London

(Updated from the archive)

A replica of Beagle 2 in the Science Museum with solar panels deployed.
Image by user:geni from Wikimedia CC BY-SA 4.0

Part of the reason the Apollo Moon landings were manned was that the astronauts were there to deal with things if they went wrong: landing on a planet or moon’s surface is perfectly possible to do automatically as long as things go to plan. It is when something unexpected happens that things always get tricky.

Beagle 2 is a good example. It was a British-built space probe that was sent to explore Mars in 2003. Named after biologist Charles Darwin’s famous ship, it sadly never made it. It was due to land on Christmas Day that year, but something went wrong and it vanished without a trace. Beagle 2’s disappearance was perhaps the inspiration behind the Guinevere One space probe in the 2005 Doctor Who episode ‘The Christmas Invasion’, but Beagle 2 was unlikely to have been stolen by the Sycorax.

Had Beagle 2 made it, the first thing we would have heard was its radio call sign, which was some digital music specially composed by the Britpop group Blur. It wasn’t the only part of the ill-fated Beagle 2 mission that had an artistic twist. The famous British artist Damien Hirst (the man who had previously pickled halved calves in formaldehyde tanks) had designed one of his famous spot paintings – rows of differently coloured spots – to be used as an instrument calibration chart. It would have been the first art on Mars, but instead it appeared to have become the first art all over Mars! However, if you shoot for the stars you have to expect things to fail sometimes. You learn and try again.

There was a twist to the story too: eleven years later, in 2015, Beagle 2 was spotted by NASA’s Mars Reconnaissance Orbiter. Using sophisticated image reconstruction programs working with a series of different images, scientists created a picture of it that allowed them to work out some of what had happened. It had landed successfully on Mars, but apparently its solar panels had then failed to fully open. One appeared to be blocking its communications antenna, meaning it had no way to talk to Earth, and no way to repair itself either. It may well have collected data, but just couldn’t tell us about it (or play us some Blur). The data it collected (if it did) may be there, though, waiting for the day when it can be passed back to Earth.

While it may not have succeeded in helping us find out more about Mars, Beagle 2 has presumably become the first Martian Art Gallery, though, displaying the one and only work of art on the planet: a spot picture by Damien Hirst.


More on …

  • Computer Science in Space
  • Read the full story of the trade-offs between human and machine control in Apollo in: Digital Apollo, David A Mindell, The MIT Press, 2011.

Related Magazines …


This cs4fn blog is funded by EPSRC, through grant EP/W033615/1.

Fencing the moon

by Paul Curzon, Queen Mary University of London

Lunar module in landing configuration. Probes below each foot tell when the Lunar Module has almost landed.
Lunar module Eagle from the Apollo 11 moon landing getting ready to land (taken from the command module)
Image by NASA (public domain)

The Apollo lunar modules that landed on the moon were guided by a complex mixture of computer program control and human control. Neil Armstrong and the other astronauts essentially operated a semi-automatic autopilot, switching pre-programmed routines on and off. One of the many problems the astronauts had to deal with was that the engines had to be shut down before the craft actually landed. Too soon, and they would land too heavily with a crunch; too late, and they could kick up the surface and the dust might cause the lunar module to explode. But how to know when?

They had ground sensing radar, but would it be accurate enough? They needed to know when they were only feet above the surface. The solution was a cunning contraption: essentially a sensor button on the end of a long stick. One of these sensors dangled below each foot of the lunar module (see image). When they touched the surface the button pressed in, a light came on in the control panel and the astronaut knew to switch the engines off. Essentially, this sensor is the same as an epee: a fencing sword. In a fencing match the sword registers a hit on the opponent when the button on its tip is pressed against their body. A wire running down the sword and out behind the fencer switches on a light on the scoreboard, telling the referee who made the hit. So the Lunar Module effectively had a fencing bout with the moon…and won.
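
For the programmers among you, the logic really is that simple. Here is a toy Python sketch of the contact-light idea (the real Apollo guidance computer worked very differently; this just illustrates the principle):

```python
# Toy sketch: if any landing probe touches the surface, the contact
# light comes on and the astronaut knows to shut down the engine.

def contact_light(probe_contacts):
    """Return True (light on) as soon as any probe reports contact."""
    return any(probe_contacts)

print(contact_light([False, False, False]))  # still descending: light off
print(contact_light([False, True, False]))   # one probe touched: light on
```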


More on …

Related Magazines …


This cs4fn blog is funded by EPSRC, through grant EP/W033615/1.