Fran Allen: Smart Translation

Computers don’t speak English, or Urdu or Cantonese for that matter. They have their own special languages that human programmers have to learn if they want to create new applications. Even those programming languages aren’t the language computers really speak. They only understand 1s and 0s. The programmers have to employ translators to convert what they say into Computerese (actually binary): just as if I wanted to speak with someone from Poland, I’d need a Polish translator. Computer translators aren’t called translators though. They are called ‘compilers’, and just as it might be a Pole who translated for me into Polish, compilers are special programs that can take text written in a programming language and convert it into binary.

The development of good compilers has been one of the most important advances from the early years of computing, and Fran Allen, one of the star researchers of computer giant IBM, was awarded the Turing Award, the computer science equivalent of a Nobel Prize, for her contribution. Not bad given she only joined IBM to pay off her student debt from university.

Fran was a pioneer with her groundbreaking work on ‘optimizing compilers’. Translating human languages isn’t just about taking one word at a time and substituting it with the corresponding word in the new language. You get gibberish that way. The same goes for computer languages.

Things written in programming languages are not just any old text. They are instructions. You actually translate chunks of instructions together in one go. You also add a lot of detail to the program in the translation, filling in every little step.

Suppose a Japanese tourist used an interpreter to ask me for directions from Leeds to Sheffield. I might explain it as:

“Follow the M1 South from Junction 43 to Junction 33”.

If the Japanese translator explained it as a compiler would, they might actually say (in Japanese):

“Take the M1 South from Junction 43 as far as Junction 42, then follow the M1 South from Junction 42 as far as Junction 41, then follow … from Junction 34 as far as Junction 33”.

Computers actually need all the minute detail to follow the instructions.
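To get a feel for the ‘filling in every little step’ part, here is a rough sketch in Python (purely illustrative, using the driving analogy rather than real machine instructions) of how one high-level instruction might be expanded into junction-by-junction detail:

def expand(motorway, start, end):
    """Expand one high-level instruction into one step per junction."""
    step = 1 if end > start else -1
    detailed = []
    junction = start
    while junction != end:
        detailed.append(
            f"Follow the {motorway} from Junction {junction} "
            f"to Junction {junction + step}"
        )
        junction += step
    return detailed

for line in expand("M1 South", 43, 33):
    print(line)
# Prints "Follow the M1 South from Junction 43 to Junction 42",
# then "... from Junction 42 to Junction 41", and so on,
# all the way down to "... from Junction 34 to Junction 33".

A real compiler does the same sort of expansion, but from the statements of a programming language down to the machine’s own instructions.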

The most important thing about computer instructions (i.e., programs) is usually how fast following them gets the job done. Imagine I was on the information desk at Heathrow Airport and the tourist wanted to get to Sheffield. I’ve never done that journey. I do know how to get from Heathrow to Leeds as I’ve done it a lot. I’ve also gone from Leeds to Sheffield a lot, so I know that journey too. So the easiest way for me to give instructions for getting from London to Sheffield, without much thought, and be sure they get the tourist there, might be to say:

Go from Heathrow to Leeds:

  1. Take the M4 West to Junction 4B
  2. Take the M25 clockwise to Junction 21
  3. Take the M1 North to Leeds at Junction 43

Then go from Leeds to Sheffield:

  1. Take the M1 South to Sheffield at Junction 33

That is easy to write and is perhaps made up of instructions I’ve written before. Programmers reuse instructions like this a lot – it both saves their time and reduces the chances of introducing mistakes into the instructions. It isn’t the optimal way to do the journey, of course: you pass the turn-off for Sheffield on the way up. An optimizing compiler is an intelligent compiler. It looks for inefficiency like this and converts the instructions into a shorter, faster set. The Japanese translator, if acting like an optimizing compiler, would remove the redundant instructions from the ones I gave and simplify them (before converting them into all the junction-by-junction detailed steps) to:

  1. Take the M4 West to Junction 4B
  2. Take the M25 clockwise to Junction 21
  3. Take the M1 North to Sheffield Junction 33

Much faster! Much more intelligent! Happier tourists!
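In code, the ‘spot the redundancy’ step might look something like this rough Python sketch (a toy illustration of the idea, not Fran Allen’s actual techniques, and the junction numbers are only for illustration). It merges consecutive legs on the same motorway into one direct leg, so the pointless trip up past Sheffield and back down again disappears before the detailed junction-by-junction expansion is done:

def optimise(route):
    """Merge consecutive legs on the same road into one direct leg.

    Each leg is a tuple (road, start_junction, end_junction), so
    ('M1', 21, 43) followed by ('M1', 43, 33) becomes ('M1', 21, 33).
    """
    optimised = []
    for road, start, end in route:
        if optimised and optimised[-1][0] == road:
            # Same road as the previous leg: go straight from the earlier
            # starting junction to this leg's final junction instead.
            prev_road, prev_start, _ = optimised[-1]
            optimised[-1] = (prev_road, prev_start, end)
        else:
            optimised.append((road, start, end))
    return optimised

journey = [
    ("M4", 3, 4),     # Heathrow onto the M4 West
    ("M25", 15, 21),  # round the M25 clockwise
    ("M1", 21, 43),   # up the M1 North to Leeds, passing Sheffield at 33
    ("M1", 43, 33),   # back down the M1 South to Sheffield
]

print(optimise(journey))
# [('M4', 3, 4), ('M25', 15, 21), ('M1', 21, 33)]

Real optimizing compilers apply many transformations like this to programs, such as removing calculations whose results are never used, or lifting work out of loops so it is only done once.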

Next time you take the speed of your computer for granted, remember it is not fast just because the hardware is quick, but because, thanks to people like Fran Allen, the compilers don’t just do what the programmers tell them to do. They are far smarter than that.

Paul Curzon, Queen Mary University of London (Updated from the archive)

More on …

Related Magazines …


EPSRC supports this blog through research grant EP/W033615/1. 

A gendered timeline of technology

(Updated from previous versions, July 2025)

Women have played a gigantic role in the history of computing. Their ideas form the backbone of modern technology, though that has not always been obvious. Here is a gendered timeline of technology innovation to help offset that.

825 Muslim scholar Al-Khwarizmi kicks it all off with a book on algorithms – recipes for how to do computation – pulling together the work of Indian mathematicians. Of course, back then it’s people who do all the computation, as electronic computers won’t exist for another millennium.

1587 Mary, Queen of Scots loses her head because the English Queen, Elizabeth I, has a crack team of spies that are better at computer science than Mary’s are. They’ve read the Arab mathematician Al-Kindi’s book on the science of cryptography so they can read all Mary’s messages. More

1650 Maria Cunitz publishes Urania Propitia, an updated book of astronomical tables based on the ones by Johannes Kepler. She gives an improved algorithm over his for calculating the positions of the planets in the sky. That, and her care as a human computer, make it the most accurate to date. More.

1757 Nicole-Reine Lepaute works as a human computer in a team of three calculating the date Halley’s comet will return, to greater accuracy (within a month) than Halley himself managed (his prediction was out by over a year).

1784 Mary Edwards is paid as a human computer helping compile The Nautical Almanac, a book of data used to help sailors work out their position (longitude) at sea. She had been doing the work in her husband’s name for about 10 years prior to this.

1787 Caroline Herschel becomes the first woman to be paid to work as an astronomer (by King George III) as a result of finding new comets and nebulae. She goes on to spend two years creating the most comprehensive catalogue of stars compiled to that point. This involves acting as a human computer, doing vast amounts of computation to calculate positions.

1818 Mary Shelley writes the first science fiction novel on artificial life, Frankenstein. More

1827 Jane Webb publishes the first ever Egyptian mummy novel. Set in the future, it predicts robot surgeons, AI lawyers and a version of the Internet. More

1842 Ada Lovelace and Charles Babbage work on the Analytical Engine. Lovelace shows that the machine could be programmed to calculate a series of numbers called the Bernoulli numbers, if Babbage can just get the machine built. He can’t. It’s still Babbage who gets most of the credit for the next hundred-plus years. Ada predicts that one day computers will compose music. A century or so later she is proved right. More

1854 George Boole publishes his work on a logical system that remains obscure until the 1930s, when Claude Shannon discovers that Boolean logic can be electrically applied to create digital circuits.

1856 Statistician (and nurse) Florence Nightingale returns from the Crimean War and launches the subject of data visualisation to convince politicians that soldiers are dying in hospital because of poor sanitation. More

1912 Thomas Edison claims “woman is now centuries, ages, even epochs behind man”, the year after Marie Curie wins the second of her two Nobel prizes.

1927 Metropolis, a silent science fiction film, is released. Male scientists kidnap a woman and create a robotic version of her to trick people and destroy the world. The robotic Maria dances nude to ‘mesmerise’ the workers. The underlying assumptions are bleak: women with power should be replaced with docile robots, bodies are more important than brains, and working class men are at the whim of beautiful gyrating women. Could the future be more offensive?

1931 Mary Clem starts work as a human computer at Iowa State College. She invents the zero check as a way of checking for errors in algorithms human computers (the only kind at the time) are following.

1941 Hedy Lamarr, better known as a blockbuster Hollywood actress, co-invents frequency hopping: communicating by constantly jumping from one frequency to another. This idea underlies much of today’s mobile technology. More

1943 Thomas Watson, the CEO of IBM, announces that he thinks: “there is a world market for maybe 5 computers”. It’s hard to believe just how wrong he was!

1946 The Electronic Numerical Integrator and Computer (ENIAC) is the world’s first general-purpose electronic computer. Its main six programmers, all highly skilled mathematicians, are women. They are seen as the more suitable programmers because the work is considered too repetitive for men, and as a result it is labelled ‘sub-professional’. Once men realise that it is interesting and fun, programming is reclassified as ‘professional’, the salaries become higher, and men become dominant in the field.

1947 Grace Murray Hopper and her associates are hard at work on an early computer, the Harvard Mark II, when a moth causes a relay to malfunction. Hopper (later made an admiral) refers to this as ‘debugging’ the circuit and tapes the bug into her logbook. After this, computer malfunctions are referred to as ‘bugs’. Her achievements don’t stop there: she goes on to develop the first compiler and one of the pioneering programming languages. More

1949 A Popular Mechanics magazine article predicts that the computers of the future might weigh “as little as” 1.5 tonnes each. That’s over 10,000 iPhones!

1958 Daphne Oram, a pioneer of electronic music, co-founds the BBC Radiophonic Workshop, responsible for the soundscapes behind hundreds of TV and radio programmes. She suggests the idea of spatial sound, where sounds are placed at specific positions. More

1966 A paper is published on ELIZA, the first chatbot, which people treat as human when it plays the role of a psychotherapist. It starts an unfortunately long line of female chatbots. It is named after a character from the play Pygmalion, about a working-class woman taught to speak in a posh voice. The Greek myth of Pygmalion is about a male sculptor falling in love with a statue he made. Hmm… Joseph Weizenbaum later agrees the choice was wrong as it stereotyped women.

1967 The original series of TV show Star Trek includes an episode where mad ruler Harry Mudd runs a planet full of identical female androids who are ‘fully functional’ at physical pleasure to tend to his whims. But that’s not the end of the pleasure bots in this timeline…

1969 Margaret Hamilton is in charge of the team developing the in-flight software for the Apollo missions, including the Apollo 11 Moon landing. More.

1969 Dina St Johnston founds the UK’s first independent software house. It is a massive success, writing software for lots of big organisations including the BBC and British Rail. More.

1972 Karen Spärck Jones publishes a paper describing a new way to pick out the most important documents when doing searches. Twenty years later, once the web is up and running, the idea comes of age. It’s now used by most search engines to rank their results.

1972 Ira Levin’s book ‘The Stepford Wives’ is published. A group of suburban husbands kill their successful wives and create look-alike robots to serve as docile housewives. It’s made into a film in 1975. Sounds like those men were feeling a bit threatened.

1979 The US Department of Defense introduces a new programming language called Ada, named after Ada Lovelace.

1982 The film Blade Runner is released. Both men and women are robots but oddly there are no male robots modelled as ‘basic pleasure units’. Can’t you guys think of anything else?

1984 Technology anthropologist Lucy Suchman draws on social sciences research to overturn the current computer science thinking on how best to design interactive gadgets that are easy to use. She goes on to win the Benjamin Franklin Medal, one of the oldest and most prestigious science awards in the world.

1985 In the film Weird Science, two teenage supergeeks hack into the government’s mainframe and instead of using their knowledge and skills to do something really cool…they create the perfect woman. Yawn. Not again.

1985 Sophie Wilson designs the instruction set for the first ARM RISC chip, creating a chip that is both faster and uses less energy than traditional designs: just what you need for mobile gadgets. This chip family goes on to power 95% of all smartphones. More

1988 Ingrid Daubechies comes up with a practical way to use ‘wavelets’, mathematical tools that are wave-like when drawn. This opens up powerful new ways to store images in far less memory, make images sharper, and much, much more. More

1995 Angelina Jolie stars as the hacker Acid Burn in the film Hackers, proving once and for all that women can play the part of the technologically competent in films.

1995 Ming Lin co-invents algorithms for tracking moving objects and detecting collisions based on the idea of bounding them with boxes. They are used widely in games and computer-aided design software.

2004 A new version of The Stepford Wives is released starring Nicole Kidman. It flops at the box office and is panned by reviewers. Finally! Let’s hope they don’t attempt to remake this movie again.

2005 The president of Harvard University, Lawrence Summers, says that women have less “innate” or “natural” ability than men in science. This ridiculous remark causes uproar and Summers leaves his position in the wake of a no-confidence vote from Harvard faculty.

2006 Fran Allen is the first woman to win the Turing Award, which is considered the Nobel Prize of computer science, for work dating back to the 1950s. Allen says that she hopes that her award gives more “opportunities for women in science, computing and engineering”. More

2006 Torchwood’s technical expert Toshiko Sato (Torchwood is the organisation protecting the Earth from alien invasion in the BBC’s cult TV series) is not only a woman but also a quiet, highly intelligent computer genius. Fiction catches up with reality at last.

2006 Jeannette Wing promotes the idea of computational thinking as the key problem-solving skill set of computer scientists. It is now taught in schools across the world.

2008 Barbara Liskov wins the Turing Award for her work in the design of programming languages and object-oriented programming. This happens 40 years after she becomes the first woman in the US to be awarded a PhD in computer science. More

2009 Wendy Hall is made a Dame Commander of the Order of the British Empire for her pioneering work on hypermedia and web science. More

2011 Kimberly Bryant, an electrical engineer and computer scientist, founds Black Girls Code to encourage and support more African-American girls to learn to code. Thousands of girls have been trained. More

2012 Shafi Goldwasser wins the Turing Award. She co-invented zero knowledge proofs: a way to show that a claim being made is true without giving away any more information. This is important in cryptography to ensure people are honest without giving up privacy. More

2015 Sameena Shah’s AI-driven fake news detection and verification system goes live, giving Reuters an advantage of several years over competitors. More

2016 Hidden Figures, the film about Katherine Johnson, Dorothy Vaughan and Mary Jackson, the female African-American mathematicians and programmers who worked for NASA supporting the space programme, is released. More

2018 Gladys West is inducted into the US Air Force Hall of Fame for her central role in the development of satellite remote sensing and GPS. Her work directly helps us all. More

2025 Ursula Martin is made a Dame Commander of the Order of the British Empire for services to Computer Science. She was the first female Professor of Computer Science in the UK, focussing on theoretical computer science, formal methods and, later, maths as a social enterprise. She was also the first true expert to examine the papers of Ada Lovelace. More.

It is of course important to remember that men occasionally helped too! The best computer science and innovation arise when the best people of whatever gender, culture, sexuality, ethnicity and background, disabled or otherwise, work together.

Paul Curzon, Queen Mary University of London

More on …

Related Magazines …


EPSRC supports this blog through research grant EP/W033615/1. 

Operational Transformation

Algorithms for writing together

How do online word processing programs manage to allow two or more people to change the same document at the same time without getting in a complete muddle? One of the really key ideas that makes collaborative writing possible was developed by computer scientists Clarence Ellis and Simon Gibbs. They called their idea ‘operational transformation’.

Let’s look at a simple example to illustrate the problem. Suppose Alice and Bob share a document that starts:

"MEETING AT 10AM"

First of all, one computer, called the ‘server’, holds the actual ‘master’ document. If the network goes down or computers crash, then it’s that ‘master’ copy that everyone sees as the definitive version.

Both Alice and Bob’s computers can connect to that server and get copies to view on their own machines. They can both read the document without a problem – they both see the same thing. But what happens if they both start to change it at once? That’s when things can get mixed up.

Let’s suppose Alice notices that the time in the document should be PM, not AM. She puts her cursor at position 14 and replaces the letter there with a P. As far as the copy she is looking at is concerned, that is where the faulty A is. Her computer sends a command to the server to change the master version accordingly, saying:

CHANGE the character at POSITION 14 to P.

The new version will at some point later be sent to everyone viewing. However, suppose that at the same time as Alice was making her change, Bob notices that the meeting is at 1, not 10. He moves his cursor to position 13, which is over the 0 in the version he is looking at, and deletes it. A command is sent to the server computer:

DELETE the character at POSITION 13.

Now, if the server receives the instructions in that order (Alice’s change first, then Bob’s delete) then all is OK. The document ends up as both Bob and Alice intended. When they are sent the updated version, it will have both their changes done correctly:

"MEETING AT 1PM"

However, as both Bob and Alice are editing at the same time, their commands could arrive at the server in either order. If the delete command arrives first then the document ends up in a muddle, as first the 13th position is deleted, giving:

"MEETING AT 1AM"

Then, when Alice’s command is processed, the 14th character is changed to a P as it asks. Unfortunately, the 14th character is now the M, because the deleted character has gone. We end up with:

"MEETING AT 1AP"

Somehow the program has to avoid this happening. That is where the operational transformation algorithm comes in. It changes each instruction, as needed, to take other delete or insert instructions into account. Before the server follows them, the instructions are transformed so that they give the right result whatever order they arrived in.

So in the above example, if the delete is done first, then any other instruction that arrives and applies to the same initial version of the document is changed to take account of the way the positions have shifted due to the already applied deletion. The server would therefore apply the new, transformed instructions:

STARTING FROM "MEETING AT 10AM"
DELETE the character at POSITION 13.
CHANGE the character at POSITION (14-1) to P.
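Here is a minimal sketch in Python of that position-shifting idea, for just these two kinds of command (real operational transformation algorithms, like Ellis and Gibbs’s, also handle inserts, many users and many more cases):

def transform(command, already_applied):
    """Adjust a command so it still does what its author intended,
    given that another command has already been applied."""
    kind, position = command
    other_kind, other_position = already_applied
    if other_kind == "DELETE" and position > other_position:
        # A character before ours was removed, so our target moved left by one.
        return (kind, position - 1)
    return command

def apply_command(document, command, new_character=None):
    """Carry out a command on the document (positions count from 1)."""
    kind, position = command
    index = position - 1
    if kind == "DELETE":
        return document[:index] + document[index + 1:]
    if kind == "CHANGE":
        return document[:index] + new_character + document[index + 1:]
    return document

document = "MEETING AT 10AM"
bob = ("DELETE", 13)     # Bob deletes the '0'
alice = ("CHANGE", 14)   # Alice changes the 'A' to a 'P'

# Bob's command arrives first; Alice's is transformed before being applied.
document = apply_command(document, bob)
document = apply_command(document, transform(alice, bob), "P")
print(document)          # MEETING AT 1PM

If the commands arrive the other way round, Alice’s change is applied as-is and Bob’s delete needs no adjustment (a change doesn’t shift any positions), so the document still ends up as "MEETING AT 1PM" – which is exactly the point.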

Without operational transformation, two people trying to write a document together would just be frustrating chaos. Online editing would have to be done the old way: taking it in turns, or one person making suggestions for the other to carry out. With the algorithm, thanks to Clarence Ellis and Simon Gibbs, people anywhere in the world can work on one document together. Group writing has changed forever.

Paul Curzon, Queen Mary University of London


This article was originally published on the CS4FN website.

More on …


EPSRC supports this blog through research grant EP/W033615/1.

The original version of this article was funded by the Institute of Coding.

Engineering a cloak of invisibility: manipulating light with metamaterials

by Akram Alomainy and Paul Curzon, QMUL

You pull a cloak around you and disappear! Reality or science fiction? Harry Potter’s invisibility cloak is surely Hogwarts’ magic that science can’t match. Even in Harry Potter’s world it takes powerful magic and complicated spells to make it work. It turns out even that kind of magic can be done with a combination of materials science and computer science. Professor Susumu Tachi of the University of Tokyo has developed a cloak made of thousands of tiny beads. Cameras video what is behind you and a computer system then projects the appropriate image onto the front of the cloak. The beads are made of a special retro-reflective material. This is vital to give the image a natural feel – normal screens give too flat a look, losing the impression of seeing through the person. Now you see me, now you don’t, at the flick of a switch.

But could an invisibility cloak, without tiny screens on it, ever be a reality? It sounds impossible, especially if you understand how light behaves. It bounces off the things around us, travelling in straight lines. You see them when that reflected light eventually reaches your eyes. I can see the red toy over there because red light bounced from it to me. For it to be invisible, no light from it must reach my eyes, while at the same time light from everything else around should. How could that be possible? Akram Alomainy of Queen Mary University of London tells us more.

Well, maybe things aren’t quite that simple… halls of mirrors, rainbows, polar bears and desert mirages all suggest some odd things can happen with light! They show that manipulating light is possible and that we may even be able to bend it in a way that alters the way things look – even humans.

Light fantastic

Have you ever wondered how the hall of mirrors in a fun fair distorts your reflection? Some make us look short and fat while others make us tall and slim! It’s all about controlling the behaviour of light. The light rays still travel in straight lines, but the mirrors deceive the eye. The light seems to arrive from a different place to reality because the mirrors are curved, not flat, making the light bounce at odd angles.

A rainbow is an object we see that isn’t really there. Rainbows occur because white light doesn’t actually exist. It is just coloured light all mixed up. When it passes through some surfaces it separates back into individual colours. The colour of an object you see depends on which colours pass through or get reflected, and which get absorbed. The light is white when it hits the raindrops, but then comes out as the whole spectrum of colours. The colours head off at slightly different angles, which is why they appear in the different rainbow positions.

What about polar bears? Did you know that they have black skin and semi-transparent hair? You see them as white because of the way the hollow hairs reflect sunlight.

So what does this have to do with invisibility? Well, it suggests that with light all is not as it seems. Perhaps we can manipulate it to do anything we want.

Water! Water!

Now for the clincher – mirages! They show that invisibility cloaks ought to be a possibility. Light from the sun travels in a straight line through the sky. That means we see everything as it is. Except not quite. In places like deserts, where the temperature is very high at noon, apparently weird things happen to the light. The difference in temperature, and thus in density, between the higher layers of air and those closer to the ground can be quite large. That temperature difference makes light coming from the sky change direction as it passes through each layer. It bends rather than just travelling in a straight line to us. It is that image of the sky that looks like a pool of water – the mirage. Our brains assume the light travelled in a straight line, so they misinterpret its location. Now, to make something invisible we just need to make light bend round it. That invisibility cloak is a possibility if we can just engineer what mirages do – bend light!

Nano-machines

That is the basic idea and it is an area of science called ‘transformation optics’ that makes it possible. The science tells us about the properties that each point of an object must have to make light waves travel in any particular way we wish through it. To make it happen engineers must then create special materials with those properties. These materials are known as metamaterials. Their properties are controlled using electromagnetism, which is where the electronic engineers come in! You can think of them as being made of vast numbers of tiny electrical machines built into big human-scale structures. Each tiny machine is able to control how light passes through it, even bending light in a way no natural material could. If the machines are small enough – ‘nanotechnology’ as small as the wavelength of light – and their properties can be controlled really precisely to match the science’s prediction, then we can make light passing through them do anything we want. For invisibility, the aim is to control those properties so the light bends as it passes through a metamaterial cloak. If the light comes out the other side of the cloak unchanged and travelling in the same direction as it entered, while avoiding objects in the middle, then those objects will be invisible.

Now you see it…

Simple cloaking devices that work this way have already been created but they are still very limited. One of the major challenges is the range of light they can work with. At the moment it’s possible to make a cloak that bends a single colour frequency, but not all light. As Yang Hao, a professor working in this area at Queen Mary, notes: “The obstacle engineers face is the complex manufacturing techniques needed to build devices that can bend light across the whole visible light spectrum. However, with the progress being made in nanotechnologies this could become a possibility in the near future”.

Perhaps we should leave the last word to J.K. Rowling: “A suspicious object like that, it was clearly full of Dark Magic.” So while we should appreciate the significance of such an invention we should perhaps be careful about the negative consequences!


More on …

Related Magazines…


EPSRC supports this blog through research grant EP/W033615/1.

Subscribe to be notified whenever we publish a new post to the CS4FN blog.