Reclaim your name

by Jo Brodie and Paul Curzon, Queen Mary University of London

Canadian Passport
Image by tookapic from Pixabay

In June 2021 the Canadian government announced that Indigenous people would be allowed to use their ancestral family names on government-issued identity and travel documents. This meant that, for the first time, they could use the names that are part of their heritage and culture rather than the westernised names that are often used instead. Because of computers, it wasn’t quite as easy as that though …

Some Indigenous people take on a Western name to make things easier: to simplify official forms, to save having to spell the name out, even to avoid teasing. If it is a real choice then perhaps that is fine, though surely we should be able to make it easy for people to use their actual names. For many it was certainly not a choice: their Indigenous names were taken from them. From the 19th century, hundreds of thousands of Indigenous children in Canada were sent to Western schools and made to take on Western names as part of an attempt to force them to “assimilate” into Western society. Some were even beaten if they did not use their new name. Because their family names had been “officially” changed, they and their descendants had to use these new names on official documents. Names matter. Your name is your identity, and in some cultures family names are also sacred. Being able to use them matters.

The change to allow ancestral names to be used was part of a reconciliation process to correct this injustice. After the announcement, Ta7talíya Nahanee, an Indigenous woman from the Squamish community in Vancouver, was delighted to learn that she would be able to use her real name on her official documents, rather than ‘Michelle’, which she had previously used.

Unfortunately, she was frustrated to learn that travel documents could still only include the Latin alphabet (ABCDEFG etc) with French accents (À, Á, È, É etc). That excluded her name (pronounced Ta-taliya, the 7 is silent) as it contains a number and the letter í. Why? Because the computer said so!

Modern machine-readable passports have a specific area, called the Machine Readable Zone, which can be read by a computer scanner at immigration. It permits only a very limited set of characters. Names which don’t fit need to be “transliterated”, so Å would be written as AA, Ü as UE and the German letter ß (which looks like a B but sounds like a double S) is transliterated as SS. Names are completely rewritten to fit, so Müller becomes MUELLER, Gößmann becomes GOESSMANN, and Hämäläinen becomes HAEMAELAEINEN. If you’ve spent your life having your name adapted to fit someone else’s system, this is another reminder of that.
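Transliteration rules like these work as a simple lookup table. Here is a toy sketch with a tiny illustrative subset of rules (real passport systems follow the full ICAO machine-readable travel document tables, which this does not attempt to reproduce):

```python
# Toy sketch of MRZ-style transliteration. Only a handful of rules are
# included; anything without a rule is rejected, which is exactly the
# problem names like Ta7talíya run into.
ALLOWED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
SUBSTITUTIONS = {
    "Å": "AA", "Ä": "AE", "Ö": "OE", "Ü": "UE", "ß": "SS",
    "å": "AA", "ä": "AE", "ö": "OE", "ü": "UE",
}

def transliterate(name):
    out = ""
    for ch in name:
        if ch in SUBSTITUTIONS:
            out += SUBSTITUTIONS[ch]      # rewrite using the table
        elif ch.upper() in ALLOWED:
            out += ch.upper()             # already a permitted letter
        elif ch == " ":
            out += " "
        else:
            raise ValueError(f"no rule for character {ch!r}")
    return out

print(transliterate("Müller"))            # MUELLER
print(transliterate("Hämäläinen"))        # HAEMAELAEINEN
try:
    transliterate("Ta7talíya")
except ValueError as e:
    print(e)                              # no rule for character '7'
```

The names that sail through are the ones the table was designed around; everyone else gets rewritten, or rejected outright.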

While there are very sensible reasons for ensuring that a passport from one part of the world can be read by computers anywhere else, this choice of characters highlights that, in order to make things work, everyone else has been made to fall in line with the English-speaking population, another example of an unintentional bias. It isn’t, after all, remotely beyond our ability to design a system that meets the needs of everyone, it just needs the will. Designing computer systems isn’t just about machines. It’s about designing them for people.

More on …

Related Magazines …

EPSRC supports this blog through research grant EP/W033615/1. 

Al-Jazari: the father of robotics

by Paul Curzon, Queen Mary University of London

Al Jazari's hand washing automaton
Image from Wikipedia

Science fiction films are full of humanoid robots acting as servants, workers, friends or colleagues. The first were created during the Islamic Golden Age, a thousand years ago. 

Robots and automata have been the subject of science fiction for over a century, but their history in myth goes back millennia, and so does the actual building of lifelike animated machines. The Ancient Greeks and Egyptians built automata: animal or human-like contraptions that seemed to come to life. These early automata were illusions with no practical use, though, beyond entertaining or amazing people. 

It was the great inventor of mechanical gadgets, Ismail Al-Jazari, working in the Islamic Golden Age of science, engineering and art in the 12th century, who first built robot-like machines with actual purposes. Powered by water, his automata acted as servants doing specific tasks. One was a humanoid automaton that served during the ritual purification of hand washing before saying prayers. It poured water into a basin from a jug and then handed over a towel, mirror and comb, using a toilet-style flushing mechanism to deliver the water from a tank. Other inventions included a waitress automaton that served drinks and a boat carrying robotic musicians that played instruments. The band may even have been programmable. 

We know about Al-Jazari’s machines because he not only created mechanical gadgets and automata, he also wrote a book about them: The Book of Knowledge of Ingenious Mechanical Devices. It’s possible that it inspired Leonardo Da Vinci who, in addition to being a famous painter of the Italian Renaissance, was a prolific inventor of machines. 

Such “robots” were not everyday machines. The hand washing automaton was made for the King. Al-Jazari’s book, however, didn’t just describe the machines, it explained how to build them: possibly the first textbook to cover automata. If you weren’t a king, then perhaps you could at least have a go at making your own servants. 

More on …

Related Magazines …

EPSRC supports this blog through research grant EP/W033615/1. 

A PC Success

by Paul Curzon, Queen Mary University of London

An outline of a head showing the brain and spinal column on a digital background of binary and circuitry

Image by Gerd Altmann from Pixabay

We have moved on to smartphones, tablets and smartwatches, but for 30 years the desktop computer ruled, and originally not just any desktop computer, the IBM PC. A key person behind its success was African American computer scientist, Mark Dean.

IBM is synonymous with computers. It became the computing industry powerhouse as a result of building large, room-sized computers for businesses. The original model of how computers would be used followed IBM president Thomas J Watson’s supposed quote that “there is a world market for about five computers.” They produced gigantic computers that those who needed computing time could dial into. That prediction was very quickly shown to be wrong, though, as computer sales boomed.

Becoming more personal

Mark Dean was the first African American
to receive IBM’s highest honour.

By the end of the 1970s the computing world was starting to change. Small, but powerful, mini-computers had taken off and some companies were pushing the idea of computers for the desktop. IBM was at risk of being badly left behind… until they suddenly roared back into the lead with the IBM personal computer and almost overnight became the world leaders once more, revolutionising the way computers were seen, sold and used. Their predictions were still a little off: initial sales of the IBM PC were 8 times higher than expected! Within a few years they were selling many hundreds of thousands a year and making billions of dollars. Soon every office desk had one and PC had become an everyday word used to mean computer.

Get on the bus

So who was behind this remarkable success? One of the design team who created the IBM PC was Mark Dean. As a consequence of his work on the PC, he became the first African American to be made an IBM Fellow (IBM’s highest honour). One of his important contributions was in leading the development of the PC’s bus. Despite the name, a computer bus is more like a road than a vehicle, so its other name of data highway is perhaps better. It is the way the computer chip communicates with the outside world. A computer on its own is not really that useful to have on your desktop: it needs a screen, keyboard and so on. A computer bus is a bit like your nervous system, which sends messages from your brain around your body. Just as your brain interacts with the world, receiving messages from your senses and taking action by sending messages to your muscles, all via your nervous system, a computer chip sends signals to its peripherals using the bus. Those peripherals include things like the mouse, keyboard, printers, monitors, modems, external memory devices and more: the equivalents of its ways of sensing the world and interacting with it.

The bus is in essence just a set of connections into the chip, wires out with different allocated uses, together with a set of rules about how they are used. All peripherals then follow the same set of rules to communicate with the computer. This means you can easily swap peripherals in and out (unlike your body!). Later versions of the PC bus that Mark designed ultimately became an industry standard for desktop computers.
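The key idea, that anything following the shared rules can be plugged in at any connection point, can be sketched in a few lines of code. This is purely an illustration of the principle (a real bus works with electrical signals and timing rules, not Python objects):

```python
# Toy sketch of the bus idea: every peripheral obeys the same small
# set of rules (here, a single 'receive' method), so the computer can
# talk to whatever is plugged in, and devices can be swapped freely.

class Peripheral:
    def receive(self, data):
        raise NotImplementedError

class Printer(Peripheral):
    def receive(self, data):
        return f"printing: {data}"

class Monitor(Peripheral):
    def receive(self, data):
        return f"displaying: {data}"

class Bus:
    def __init__(self):
        self.slots = {}               # address -> plugged-in peripheral

    def plug_in(self, address, device):
        self.slots[address] = device

    def send(self, address, data):
        # The chip doesn't care what is at the address, only that it
        # follows the shared rules.
        return self.slots[address].receive(data)

bus = Bus()
bus.plug_in(1, Printer())
print(bus.send(1, "hello"))           # printing: hello
bus.plug_in(1, Monitor())             # swap the peripheral in slot 1
print(bus.send(1, "hello"))           # displaying: hello
```

Because the `Bus` only ever relies on the shared `receive` rule, swapping a printer for a monitor needs no other changes, which is exactly what a standard bus buys you.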

Mark can fairly be called a key member of that PC development team, given he was responsible for a third of the patents behind the PC. He didn’t stop there though. He has continued to be awarded patents, most recently related to artificial neural networks inspired by neuroscience. He has moved on from making computer equivalents of the nervous system to computer equivalents of the brain itself.

More on …

Related Magazines …

EPSRC supports this blog through research grant EP/W033615/1. 

In space no one can hear you …

by Paul Curzon, Queen Mary University of London

Red arrows aircraft flying close to the ground.
Image by Bruno Albino from Pixabay 

Johanna Lucht could do maths before she learned language. Why? Because she was born deaf and there was little support for deaf people where she lived. Despite, or perhaps because of, that she became a computer scientist and works for NASA. 

Being deaf can be very, very disabling if you don’t get the right help. As a child, Johanna had no one to help her communicate apart from her mother, who tried to teach her sign language from a book. Throughout most of her primary school years she couldn’t have any real conversations with anyone, never mind learn. She got the lifeline she needed when the school finally took on an interpreter, Keith Wann. She quickly learned American Sign Language working with him. Learning your first language is crucial to learning other things, and suddenly she was able to learn in school like other children. She caught up remarkably quickly, showing that an intelligent girl had been locked inside that silent, shy child. More than anything, though, from Keith she learned never to give up. 

Her early ability in maths, now her favourite subject, came to the fore as she excelled at science and technology. By this point her family had moved from Germany where she grew up to Alaska where there was much more support, an active deaf community for her to join and lots more opportunities that she started to take. She signed up for a special summer school on computing specifically for deaf people at the University of Washington, learning the programming skills that became the foundation for her future career at NASA. At only 17 she even returned to help teach the course. From there, she signed up to do Computer Science at university and applied for an internship at NASA. To her shock and delight she was given a place. 

Hitting the ground running 

A big problem for pilots, especially of fighter aircraft, is “controlled flight into terrain”: a technical-sounding phrase that just means flying the plane into the ground for no good reason other than the sheer difficulty of flying a fighter aircraft as low as possible over hazardous terrain. The solution is a ground collision avoidance system: basically, a computer warns the pilots when hazardous terrain is coming up and when they are too close for comfort and should take evasive action. Johanna helped work on the interface design, the part that pilots see and interact with. To be of any use in such high-pressure situations this communication has to be slick and very clear. 

She impressed those she was working with so much that she was offered a full-time job and so became an engineer at NASA Armstrong working with a team designing, testing and integrating new research technology into experimental aircraft. She had to run tests with other technicians, the first problem being how to communicate effectively with the rest of the team. She succeeded twice as fast as her bosses expected, taking only a couple of days before the team were all working well together. Her experience from the challenges she had faced as a child was now providing her with the skills to do brilliantly in a job where teamwork and communication skills are vital. 

Mission control 

Eventually, she gained a place in Mission Control. There, slick comms are vital too. The engineers have to monitor the flight including all the communication as it happens, and be able to react to any developing situation. Johanna worked with an interpreter who listened directly to all the flight communications, signing it all for her to see on a second monitor. Working with interpreters in a situation like this is in itself a difficult task and Johanna had to make sure not only that they could communicate effectively but that the interpreter knew all the technical language that might come up in the flight. Johanna had plenty of experience dealing with issues like that though, and they worked together well, with the result that in April 2017 Johanna became the first deaf person to work in NASA mission control on a live mission … where of course she did not just survive the job, she excelled. 

As Johanna has pointed out it is not deafness itself that disables people, but the world deaf people live in that does. When in a world that wasn’t set up for deaf people, she struggled, but as soon as she started to get the basic help she needed that all changed. Change the environment to one that does not put up obstacles and deaf people can excel like anyone else. In space no one can hear anyone scream or for that matter speak. We don’t let it stop our space missions though. We just invent appropriate technology and make the problems go away. 

More on …

Read more about Johanna Lucht:

Related Magazines …

EPSRC supports this blog through research grant EP/W033615/1. 

The last piece of the continental drift puzzle

by Paul Curzon, Queen Mary University of London

Image by Gerd Altmann from Pixabay 

A computer helped provide the final piece in the puzzle of how the continents formed and moved around. It gave a convincing demonstration that the Americas, Europe and Africa had once been one giant continent, Pangea, the pieces of which had drifted apart.

Plate tectonics is the science behind how the different continents are both moving apart and crashing together in different parts of the world, driven by the motion of molten rock below the Earth’s crust. It created the continents and mountain ranges, is causing oceans to expand and to shrink, and leads to earthquakes in places like California. The Earth’s hard outer shell is made up of a series of plates that sit above hotter molten rock, and those plates slowly move around (up to 10cm a year) as, for example, rock pushes up between the gaps and solidifies, or is pushed down under an adjacent plate. The continents as we see them sit on top of these plates.

The idea of continental drift had existed in different forms since the early 19th century. The idea was partly driven by an observation that on maps, South America and Africa seemed almost like two jigsaw pieces that fit together. On its own an observation like this isn’t enough as it could just be a coincidence, not least because the fit is not exact. Good science needs to combine theory with observation, predictions that prove correct with data that provides the evidence, but also clear mechanisms that explain what is going on. All of this came together to show that continental drift and ultimately plate tectonics describe what is really going on.

Very many people gathered the evidence, made the predictions and built the theories over many decades. For example, different people came up with a variety of models of what was happening, but in the 19th and early 20th centuries there just wasn’t enough data available to test them. One theory was that the continents themselves were floating through the layer of rock below, a bit like icebergs floating in the ocean. Eventually, as evidence was gathered, this and other suggestions for how continents were moving failed to stand up to the data. It wasn’t until the 1960s that the full story was tied down. The main reason that it took so long was that it needed new developments in both science and technology, most notably understanding of radioactivity, magnetism and, not least, ways to survey the ocean beds as developed during World War II to hunt for submarines. Science is a team game, always building on the advances of others, despite the way individuals are singled out.

By the early 1960s there was lots of strong evidence, but sometimes it is not just a mass of evidence that is needed to persuade scientists en masse that a theory is correct, but compelling evidence that is hard to ignore. It turned out that was ultimately provided by a computer program.

Geophysicist, Edward Bullard, and his team in Cambridge were responsible for this last step. He had previously filled in early pieces of the puzzle working at the National Physical Laboratory on how the magnetism in the Earth’s core worked like a dynamo. He used their computer (one of the earliest) to do simulations to demonstrate this. This understanding led to studies of the magnetism in rock. This showed there were stripes where the magnetism in rock was in opposite directions. This was a result of rock solidifying either in different places or at different times and freezing the magnetic direction of the Earth at that time and place. Mapping of this “fossil” magnetism could be used to explore the ideas of continental drift. One such prediction suggested the patterns should be identical on either side of undersea ridges where new rock was being formed and pushing the plates apart. When checked they were exactly symmetrical as predicted.

Image: reconstruction of Bullard’s map by Jacques Kornprobst
from Wikipedia, CC BY-SA 4.0
(redrawn after Bullard, E., Everett, J.E. and Smith, A.G., 1965. The fit of the continents around the Atlantic. Phil. Trans. Royal Soc., A 258, 1088, 41-51)

In the 1960s, Bullard organised a meeting at the Royal Society to review all the evidence about continental drift. There was plenty of evidence to see that continental drift was fact. However, he unveiled a special map at the meeting showing how the continents on either side of the Atlantic really did fit together. It turned out to be the clincher.

The early suggestion that Africa and South America fit together had a flaw: the shapes are similar, but they do not fit exactly. With the advent of undersea mapping it was realised that the coastline shown on maps is not the right thing to be looking at. Those shapes depend on the current level of the sea, which rises and falls; as it does so, the apparent shape of the continents changes. In terms of geophysics, the real edge of a continent is much lower: where the continental shelf ends and the sea floor plummets. Bullard therefore based the shape of the continents on a line about a kilometre below sea level, which was by then known accurately because of that undersea mapping.

Maps like this had been created before but they hadn’t been quite as convincing. After all a human just drawing shapes as matching because they thought they did could introduce bias. More objective evidence was needed.

We see the Earth as flat on maps, but it is of course a sphere, and maps distort shapes to make things fit on the flat surface. What matters for continents is whether the shapes fit when placed and then moved around on the surface of a sphere, not on a flat piece of paper. This was done using some 18th century maths by Leonhard Euler. At school we learn Euclidean Geometry – the geometry of lines and shapes on a flat surface. The maths is different on a sphere though leading to what is called Spherical Geometry. For example, on a flat surface a straight line disappears in both directions to infinity. On a sphere a straight line disappearing in one direction can of course meet itself in the other. Similarly, we are taught that the angles of a triangle on a flat surface add up to 180 degrees, but the angles of a triangle drawn on a sphere add up to more than 180 degrees… Euler, usefully for Bullard’s team, had worked out theorems for how to move shapes around on a sphere.

This maths of spherical geometry and specifically Euler’s theorems form the basis of an algorithm that the team coded as a program. The program then created a plot following the maths. It showed the continents moved together in a picture (see above). As it was computer created, based on solid maths, it had a much greater claim to be objective, but on top of that it did also just look so convincing. The shapes of the continents based on that submerged continental line fit near perfectly all the way from the tip of South America to the northern-most point of North America. The plot became known as the ‘Bullard Fit’ and went down in history as the evidence that sealed the case.
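Euler’s rotation theorem says that any rigid movement of a shape across the surface of a sphere amounts to a single rotation about some axis through the sphere’s centre. We don’t know exactly how Bullard’s team coded it, but the core calculation can be sketched with Rodrigues’ rotation formula, a standard modern way of rotating a point about an axis:

```python
import math

def rotate(point, axis, angle):
    """Rotate a 3D point on the unit sphere about a unit-length axis by
    angle (radians), using Rodrigues' rotation formula:
    v' = v*cos(a) + (k x v)*sin(a) + k*(k.v)*(1 - cos(a))."""
    px, py, pz = point
    ax, ay, az = axis
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    dot = ax * px + ay * py + az * pz               # k . v
    cross = (ay * pz - az * py,                      # k x v
             az * px - ax * pz,
             ax * py - ay * px)
    return tuple(
        p * cos_a + c * sin_a + a * dot * (1 - cos_a)
        for p, c, a in zip((px, py, pz), cross, (ax, ay, az))
    )

# Rotate the point on the equator at longitude 0 by 90 degrees about
# the North pole axis: it ends up on the equator at longitude 90.
moved = rotate((1, 0, 0), (0, 0, 1), math.pi / 2)
print(tuple(round(c, 6) for c in moved))  # (0.0, 1.0, 0.0)
```

Sliding a whole continental outline across the globe is then just applying the same rotation to every point of the outline, with the program searching for the axis and angle that make the two coastlines fit best.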

The story of continental drift is an early example of how computers have helped change the way science is done. Computer models and simulations can provide more objective ways to test ideas, and computers can also visualise data in ways that help see patterns and stories emerge in ways that are both easy to understand and very convincing. Now computer modelling is a standard approach used to test theories. Back then the use of computers was much more novel, but science provided a key early use. Bullard and his team deserve credit not just for helping seal the idea of continental drift as fact, but also providing a new piece to the puzzle of how to use computers to do convincing science.

More on …

  • Read the book Science: a history by John Gribbin, one of the best books on the full history of science, including plate tectonics.

Related Magazines …

EPSRC supports this blog through research grant EP/W033615/1. 

Digital lollipop: no calories, just electronics!

by Jane Waite, Queen Mary University of London

Can a computer create a taste in your mouth? Imagine scrolling down a list of flavours and then savouring your sweet choice from a digital lollipop. Not keen on that flavour? Just click and choose a different one, and another, and another. No calories, just the taste.

Nimesha Ranasinghe, a researcher at the National University of Singapore is developing a Tongue Mounted Digital Taste Interface, or digital lollipop. It sends tiny electrical signals to the very tip of your tongue to stimulate your taste buds and create a virtual taste!

One of UNESCO’s 2014 ’10 best innovations in the world’, the prototype doesn’t quite look like a lollipop (yet). There are two parts to this sweet sensation, the wearable tongue interface and the control system. The bit you put in your mouth, the tongue interface, has two small silver electrodes. You touch them to the tip of your tongue to get the taste hit. The control system creates a tiny electrical current and a minuscule temperature change, creating a taste as it activates your taste buds.

The prototype lollipop can create sour, salty, bitter, sweet, minty, and spicy sensations but it’s not just a bit of food fun. What if you had to avoid sweet foods or had a limited sense of taste? Perhaps the lollipop can help people with food addictions, just like the e-cigarette has helped those trying to give up smoking?
Perhaps the lollipop can help people with food addictions

But eating is more than just a flavour on your tongue. It is a multi-modal experience: you see the red of a ripe strawberry, hear the crunch of a carrot, feel sticky salt on chippy fingers, smell the Sunday roast, anticipate that satisfied snooze afterwards. How might computers simulate all that? Does it start with a digital lollipop? We will have to wait and see, hear, taste, smell, touch and feel!

Taste over the Internet

The Singapore team are exploring how to send tastes over the Internet. They have suggested rules for sending ‘taste’ messages between computers, called the Taste Over Internet Protocol, including a messaging format called TasteXML. They’ve also outlined the design for a mobile phone with electrodes to deliver the flavour! Sweet or salty, anyone?

This article was originally published on the CS4FN website and also appears on page 14 of Issue 19 of the CS4FN magazine “Touch it, feel it, hear it” which you can download as a PDF below, along with all of our other free material here.

Related Magazine …

This blog is funded through EPSRC grant EP/W033615/1.

The tale of the mote and the petrel

by Paul Curzon, Queen Mary University of London
(Updated from the archive)

Dust lit up by streaks of golden light
Image by Gosia K. from Pixabay

Biology and computer science can meet in some unexpected, not to mention inhospitable, places. Who would have thought that the chemical soup in the nests of Petrels studied by field biologists might help in the development of futuristic dust-sized computers, for example?

Just Keep Doubling

One of the most successful predictions in Computer Science was made by Gordon Moore, co-founder of Intel. Back in 1965 he suggested that the number of transistors that can be squeezed onto an integrated circuit – the hardware computer processors are made of – doubled every few years: computers get ever more powerful and ever smaller. In the 60 or so years since Moore’s paper it has remained an amazingly accurate prediction. Will it continue to hold though or are we reaching some fundamental limit? Researchers at chip makers are confident that Moore’s Law can be relied on for the foreseeable future. The challenge will be met by the material scientists, the physicists and the chemists. Computer scientists must then be ready for the Law’s challenge too: delivering the software advances so that its trends are translated into changes in our everyday lives. It will lead to ever more complex systems on a single chip and so ever smaller computers that will truly disappear into the environment.
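The power of Moore’s Law is that doubling compounds exponentially. A few lines of arithmetic show the scale (the 64-transistor starting point and the exact two-year doubling period are assumptions for the sake of illustration, not historical data):

```python
# A sketch of Moore's Law arithmetic: start from a hypothetical 1965
# chip with 64 transistors and double the count every two years.
transistors = 64
for year in range(1967, 2026, 2):
    transistors *= 2
    if year in (1985, 2005, 2025):
        print(year, f"{transistors:,}")
# 1985 65,536
# 2005 67,108,864
# 2025 68,719,476,736
```

Thirty doublings take you from a few dozen transistors to tens of billions, which is roughly the scale of today’s largest chips: that is why a prediction made in 1965 still shapes the industry.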

Dusting computers

Motes are one technology developed on the back of this trend. The aim is to create dust-sized computers. For example, the world’s smallest computer as of 2015 was the Michigan Micro Mote. It was only a few millimetres big but was a fully working computer system able to power itself, sense the world, process the data it collects and communicate that data to other computers. In 2018 IBM announced a computer with sides a millimetre long. Rising to the challenge, the Michigan team soon announced a new mote with sides a third of a millimetre! The shrinking of motes is not likely to stop!

Scatter motes around the environment and they form unobservable webs of intelligent sensors. Scatter them on a battlefield to detect troop movements or on or near roads to monitor traffic flow or pollution. Mix them in concrete and monitor the state of a bridge. Embed them in the home to support the elderly or in toys to interact with the kids. They are a technology that drives the idea of the Internet of Things where everyday objects become smart computers.

Battery technology is the only
big problem that remains.

What barriers must be overcome to make dust-sized motes a ubiquitous reality? Much of the area of a computer is taken up by its connections to the outside world – all those pins allowing things to be plugged in. They can now be replaced by wireless communications. Computers contain multiple chips each housing separate processors. It is not the transistors that are the problem but the packaging – the chip casings are both bulky and expensive. Now we have “multicore” chips: large numbers of processors on a single small chip courtesy of Moore’s Law. This gives computer scientists significant challenges over how to develop software to run on such complicated hardware and use the resources well. Power can come from solar panels that allow motes to recharge constantly, even from indoor light. Even then, though, they still need batteries to store the energy. Battery technology is the only big problem that remains.

Enter the Petrels

Giant petrel flying over ice and rock
Image by Eduardo Ruiz from Pixabay

But how do you test a device like that? Enter the Petrels. Intel’s approach is not to test futuristic technology on average users but to look for extreme ones who believe a technology will deliver them massive benefits. In the case of Motes, their early extreme users were field biologists who want to keep tabs on birds in extremely harsh field conditions. Not only is it physically difficult for humans to observe sea birds’ nests on inhospitable cliffs but human presence disturbs the birds. The solution: scatter motes in the nests to detect heat, humidity and the like from which the state and behaviour of the birds can be deduced. A nest is an extremely harsh environment for a computer though, both physically and chemically. A whole bunch of significant problems, overlooked by normal lab testing, must be overcome. The challenge of deploying Motes in such a harsh environment led to major improvements in the technology.

Moore’s Law is with us for a while yet, and with the efforts of material scientists, physicists, chemists, computer scientists and even field biologists and the sea birds they study it will continue to revolutionise our lives.

More on …

Related Magazines …

EPSRC supports this blog through research grant EP/W033615/1. 

Fran Allen: Smart Translation

Cars making light patterns at night

Image by Светлана from Pixabay

by Paul Curzon, Queen Mary University of London
(Updated from the archive)

Computers don’t speak English, or Urdu or Cantonese for that matter. They have their own special languages that human programmers have to learn if they want to create new applications. Even those programming languages aren’t the language computers really speak. They only understand 1s and 0s. The programmers have to employ translators to convert what they say into Computerese (actually binary): just as if I wanted to speak with someone from Poland, I’d need a Polish translator. Computer translators aren’t called translators though. They are called ‘compilers’, and just as it might be a Pole who translated for me into Polish, compilers are special programs that can take text written in a programming language and convert it into binary.

The development of good compilers has been one of the most important advances from the early years of computing, and Fran Allen, one of the star researchers of computer giant IBM, was awarded the Turing Award for her contribution. It is the computer science equivalent of a Nobel Prize. Not bad given she only joined IBM to clear her student debts from university.

Fran was a pioneer with her groundbreaking work on ‘optimizing compilers’. Translating human languages isn’t just about taking a word at a time and substituting each for the word in the new language. You get gibberish that way. The same goes for computer languages.

Things written in programming languages are not just any old text. They are instructions. You actually translate chunks of instructions together in one go. You also add a lot of detail to the program in the translation, filling in every little step.

Suppose a Japanese tourist used an interpreter to ask me for directions on how to get to Sheffield from Leeds. I might explain it as:

“Follow the M1 South from Junction 43 to Junction 33”.

If the Japanese translator explained it as a compiler would, they might actually say (in Japanese):

“Take the M1 South from Junction 43 as far as Junction 42, then follow the M1 South from Junction 42 as far as Junction 41, then follow … from Junction 34 as far as Junction 33”.

Computers actually need all the minute detail to follow the instructions.
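
That junction-by-junction filling-in can be sketched in a few lines of code. This is just an illustration in Python, pretending motorway junctions are numbered consecutively (real M1 junctions are not quite so tidy):

```python
# A toy 'compiler' expanding one high-level journey instruction into
# the junction-by-junction detail a computer would need to follow it.
def expand(road, start, end):
    step = 1 if end > start else -1   # which way the junction numbers run
    return [
        f"Take the {road} from Junction {j} as far as Junction {j + step}"
        for j in range(start, end, step)
    ]

for line in expand("M1 South", 43, 33):
    print(line)
# Prints ten steps, starting:
#   Take the M1 South from Junction 43 as far as Junction 42
```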

The most important thing about computer instructions (i.e., programs) is usually how fast following them gets the job done. Imagine I was on the information desk at Heathrow airport and the tourist wanted to get to Sheffield. I’ve never done that journey. I do know how to get from Heathrow to Leeds as I’ve done it a lot. I’ve also gone from Leeds to Sheffield a lot, so I know that journey too. So the easiest way for me to give instructions for getting from London to Sheffield, without much thought, and be sure they get the tourist there, might be to say:

Go from Heathrow to Leeds:

  1. Take the M4 West to Junction 4B
  2. Take the M25 clockwise to Junction 21
  3. Take the M1 North to Leeds at Junction 43

Then go from Leeds to Sheffield:

  1. Take the M1 South to Sheffield at Junction 33

That is easy to write, and perhaps made up of instructions I’ve written before. Programmers reuse instructions like this a lot – it both saves their time and reduces the chances of introducing mistakes into the instructions. It isn’t the optimum way to do the journey of course: you pass the turn-off for Sheffield on the way up. An optimizing compiler is an intelligent compiler. It looks for inefficiency like this and converts the program into a shorter and faster set of instructions. The Japanese translator, if acting like an optimizing compiler, would remove the redundant instructions from the ones I gave and simplify them (before converting them to all the junction-by-junction detailed steps) to:

  1. Take the M4 West to Junction 4B
  2. Take the M25 clockwise to Junction 21
  3. Take the M1 North to Sheffield at Junction 33

Much faster! Much more intelligent! Happier tourists!
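
One way to picture the optimisation is as a pass over the route that cuts out loops. The Python sketch below (place names made up for illustration) is only an analogy for the far more sophisticated analyses Fran Allen pioneered:

```python
def optimise(route):
    """Cut out detours: if the route revisits a place it has already
    passed through, everything in between was wasted travel."""
    optimised = []
    for place in route:
        if place in optimised:
            # We are back somewhere we have already been: drop the loop.
            optimised = optimised[:optimised.index(place) + 1]
        else:
            optimised.append(place)
    return optimised

# Heathrow to Sheffield via Leeds: we pass Junction 33 going North,
# then double back to it.
route = ["Heathrow", "M25 J21", "M1 J33", "Leeds (M1 J43)", "M1 J33"]
print(optimise(route))   # ['Heathrow', 'M25 J21', 'M1 J33']
```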

Next time you take the speed of your computer for granted, remember it is not fast just because the hardware is quick, but because, thanks to people like Fran Allen, the compilers don’t just do what the programmers tell them to do. They are far smarter than that.


A gendered timeline of technology

by Paul Curzon, Queen Mary University of London

(Updated from previous versions)

Women have played a gigantic role in the history of computing. Their ideas form the backbone to modern technology, though that has not always been obvious. Here is a gendered timeline of technology innovation to offset that.

825 Muslim scholar Al-Khwarizmi kicks it all off with a book on algorithms – recipes on how to do computation pulling together work of Indian mathematicians. Of course back then it’s people who do all the computation, as electronic computers won’t exist for another millennium.

A pocket watch in the sand
Image by annca from Pixabay 

1587 Mary, Queen of Scots loses her head because the English Queen, Elizabeth I, has a crack team of spies that are better at computer science than Mary’s are. They’ve read the Arab mathematician Al-Kindi’s book on the science of cryptography so they can read all Mary’s messages. More

1818 Mary Shelley writes the first science fiction novel on artificial life, Frankenstein. More

1827 Jane Webb publishes the first ever Egyptian Mummy novel. Set in the future, it predicts robot surgeons, AI lawyers and a version of the Internet. More

1842 Ada Lovelace and Charles Babbage work on the analytical engine. Lovelace shows that the machine could be programmed to calculate a series of numbers called Bernoulli numbers, if Babbage can just get the machine built. He can’t. It’s still Babbage who gets most of the credit for the next hundred-plus years. More

1854 George Boole publishes his work on a logical system that remains obscure until the 1930s, when Claude Shannon discovers that Boolean logic can be electrically applied to create digital circuits.

1856 Statistician (and nurse) Florence Nightingale returns from the Crimean War and launches the subject of data visualisation to convince politicians that soldiers are dying in hospital because of poor sanitation. More

1912 Thomas Edison claims “woman is now centuries, ages, even epochs behind man”, the year after Marie Curie wins the second of her two Nobel prizes.

1927 Metropolis, a silent science fiction film, is released. Male scientists kidnap a woman and create a robotic version of her to trick people and destroy the world. The robotic Maria dances nude to ‘mesmerise’ the workers. The underlying assumptions are bleak: women with power should be replaced with docile robots, bodies are more important than brains, and working class men are at the whim of beautiful gyrating women. Could the future be more offensive?

1931 Mary Clem starts work as a human computer at Iowa State College. She invents the zero check as a way of checking for errors in the algorithms human computers (the only kind at the time) are following.

1941 Hedy Lamarr, better known as a blockbuster Hollywood actress, co-invents frequency hopping: communicating by constantly jumping from one frequency to another. This idea underlies much of today’s mobile technology. More

1943 Thomas Watson, the CEO of IBM, announces that he thinks: “there is a world market for maybe 5 computers”. It’s hard to believe just how wrong he was!

1945 Grace Murray Hopper and her associates are hard at work on an early computer called Mark I when a moth causes the circuit to malfunction. Hopper (later made an admiral) refers to this as ‘debugging’ the circuit. She tapes the bug to her logbook. After this, computer malfunctions are referred to as ‘bugs’. Her achievements didn’t stop there: she develops the first compiler and one of the pioneering programming languages. More

1946 The Electronic Numerical Integrator and Computer (ENIAC) is the world’s first general purpose electronic computer. The main six programmers, all highly skilled mathematicians, were women. Programming was thought too repetitive a job for men, so it was labelled ‘sub-professional’ work and given to women, despite the skill it took. Once men realised that it was interesting and fun, programming was re-classed as ‘professional’, the salaries became higher, and men became dominant in the field.

1949 A Popular Mechanics magazine article predicts that the computers of the future might weigh “as little as” 1.5 tonnes each. That’s over 10,000 iPhones!

1958 Daphne Oram, a pioneer of electronic music, co-founds the BBC Radiophonic Workshop, responsible for the soundscapes behind hundreds of TV and radio programmes. She suggests the idea of spatial sound, where sounds are placed in specific positions. More

1967 The original series of TV show Star Trek includes an episode where mad ruler Harry Mudd runs a planet full of identical female androids who are ‘fully functional’ at physical pleasure to tend to his whims. But that’s not the end of the pleasure bots in this timeline…

1972 Karen Spärck Jones publishes a paper describing a new way to pick out the most important documents when doing searches. Twenty years later, once the web is up and running, the idea comes of age. It’s now used by most search engines to rank their results.

1972 Ira Levin’s book ‘The Stepford Wives’ is published. A group of suburban husbands kill their successful wives and create look-alike robots to serve as docile housewives. It’s made into a film in 1975. Sounds like those men were feeling a bit threatened.

1979 The US Department of Defense introduces a new programming language called Ada, after Ada Lovelace.

1982 The film Blade Runner is released. Both men and women are robots but oddly there are no male robots modelled as ‘basic pleasure units’. Can’t you guys think of anything else?

1984 Technology anthropologist Lucy Suchman draws on social sciences research to overturn the then-current computer science thinking on how best to design interactive gadgets that are easy to use. She goes on to win the Benjamin Franklin Medal, one of the oldest and most prestigious science awards in the world.

1985 In the film Weird Science, two teenage supergeeks hack into the government’s mainframe and instead of using their knowledge and skills to do something really cool…they create the perfect woman. Yawn. Not again.

1985 Sophie Wilson designs the instruction set for the first ARM RISC chip, creating a chip that is both faster and uses less energy than traditional designs: just what you need for mobile gadgets. This chip family goes on to power 95% of all smartphones. More

1988 Ingrid Daubechies comes up with a practical way to use ‘wavelets’, mathematical tools that when drawn are wave-like. This opens up powerful new ways to store images in far less memory, make images sharper, and much, much more. More

1995 Angelina Jolie stars as the hacker Acid Burn in the film Hackers, proving once and for all that women can play the part of the technologically competent in films.

1995 Ming Lin co-invents algorithms for tracking moving objects and detecting collisions based on the idea of bounding them with boxes. They are used widely in games and computer-aided design software.

2004 A new version of The Stepford Wives is released starring Nicole Kidman. It flops at the box office and is panned by reviewers. Finally! Let’s hope they don’t attempt to remake this movie again.

2005 The president of Harvard University, Lawrence Summers, says that women have less “innate” or “natural” ability than men in science. This ridiculous remark causes uproar and Summers leaves his position in the wake of a no-confidence vote from Harvard faculty.

2006 Fran Allen is the first woman to win the Turing Award, which is considered the Nobel Prize of computer science, for work dating back to the 1950s. Allen says that she hopes that her award gives more “opportunities for women in science, computing and engineering”. More

2006 Torchwood’s technical expert Toshiko Sato (Torchwood is the organisation protecting the Earth from alien invasion in the BBC’s cult TV series) is not only a woman but also a quiet, highly intelligent computer genius. Fiction catches up with reality at last.

2006 Jeannette Wing promotes the idea of computational thinking as the key problem solving skill set of computer scientists. It is now taught in schools across the world.

2008 Barbara Liskov wins the Turing Award for her work in the design of programming languages and object-oriented programming. This happens 40 years after she becomes the first woman in the US to be awarded a PhD in computer science. More

2009 Wendy Hall is made a Dame Commander of the British Empire for her pioneering work on hypermedia and web science. More

2011 Kimberly Bryant, an electrical engineer and computer scientist, founds Black Girls Code to encourage and support more African-American girls to learn to code. Thousands of girls have been trained. More

2012 Shafi Goldwasser wins the Turing Award. She co-invented zero knowledge proofs: a way to show that a claim being made is true without giving away any more information. This is important in cryptography to ensure people are honest without giving up privacy. More

2012 Ursula Martin is awarded a CBE for services to Computer Science. She was the first female Professor of Computer Science in the UK, focusing on theoretical computer science and formal methods.

2015 Sameena Shah’s AI-driven fake news detection and verification system goes live, giving Reuters an advantage of several years over competitors. More

2016 Hidden Figures, the film about Katherine Johnson, Dorothy Vaughan and Mary Jackson, the female African-American mathematicians and programmers who worked for NASA supporting the space programme, is released. More

2018 Gladys West is inducted into the US Air Force Hall of Fame for her central role in the development of satellite remote sensing and GPS. Her work directly helps us all. More

It is of course important to remember that men occasionally helped too! The best computer science and innovation arise when the best people of whatever gender, culture, sexuality, ethnicity and background, disabled or otherwise, work together.


Operational Transformation

Algorithms for writing together

by Paul Curzon, Queen Mary University of London

How do online word processing programs manage to allow two or more people to change the same document at the same time without getting in a complete muddle? One of the really key ideas that makes collaborative writing possible was developed by computer scientists Clarence Ellis and Simon Gibbs. They called their idea ‘operational transformation’.

Let’s look at a simple example to illustrate the problem. Suppose Alice and Bob share a document that starts:

  Meeting at 10AM

First of all, one computer, called the ‘server’, holds the actual ‘master’ document. If the network goes down or computers crash, then it’s that ‘master’ copy that everyone sees as the definitive version.

Both Alice and Bob’s computers can connect to that server and get copies to view on their own machines. They can both read the document without problem – they both see the same thing. But what happens if they both start to change it at once? That’s when things can get mixed up.

Let’s suppose Alice notices that the time in the document should be PM not AM. She puts her cursor at position 14 and replaces the letter there with P. As far as the copy she is looking at is concerned, that is where the faulty A is. Her computer sends a command to the server to change the master version accordingly, saying

CHANGE the character at POSITION 14 to P.

The new version at some point later will be sent to everyone viewing. However, suppose that at the same time as Alice was making her change, Bob notices that the meeting is at 1 not 10. He moves his cursor to position 13, so over the 0 in the version he is looking at, and deletes it. A command is sent to the server computer:

DELETE the character at POSITION 13.

Now if the server receives the instructions in that order then all is OK. The document ends up as both Bob and Alice intended. When they are sent the updated version, it will have done both their changes correctly:

  Meeting at 1PM

However, as both Bob and Alice are editing at the same time, their commands could arrive at the server in either order. If the delete command arrives first then the document ends up in a muddle, as first the character at the 13th position is deleted, giving:

  Meeting at 1AM

Then, when Alice’s command is processed, the 14th character is changed to a P as it asks. Unfortunately, the 14th character is now the M, because the deleted character has gone. We end up with:

  Meeting at 1AP

Somehow the program has to avoid this happening. That is where the operational transformation algorithm comes in. It changes each instruction, as needed, to take other delete or insert instructions into account. Before the server follows the instructions, they are changed so that they give the right result whatever order they arrived in.

So, in the above example, if the delete is done first, then any other instructions that arrive that apply to the same initial version of the document are changed to take account of the way positions have moved due to the already-applied deletion. We would therefore apply the instructions:

DELETE the character at POSITION 13.
CHANGE the character at POSITION (14-1) to P.

Without Operational Transformation two people trying to write a document together would just be frustrating chaos. Online editing would have to be done the old way of taking it in turns, or one person making suggestions for the other to carry out. With the algorithm, thanks to Clarence Ellis and Simon Gibbs, people who are anywhere in the world can work on one document together. Group writing has changed forever.
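
The transformation can be sketched in a few lines of Python. This minimal sketch assumes the document reads ‘Meeting at 10AM’ (which is consistent with the positions in the example) and only handles the two kinds of command used above; the full Ellis and Gibbs algorithm also deals with insertions and other complications:

```python
# Commands are tuples: ('delete', position) or ('change', position, char).
# Positions are 1-based, as in the article.

def transform(op, applied):
    """Adjust op, written against the original document, so it still
    means the same thing after 'applied' has already been done."""
    if applied[0] == 'delete' and op[1] > applied[1]:
        # A character before op's position has gone: shift left by one.
        return (op[0], op[1] - 1) + op[2:]
    return op

def apply_op(doc, op):
    i = op[1] - 1                      # convert to a 0-based index
    if op[0] == 'delete':
        return doc[:i] + doc[i + 1:]
    else:                              # 'change'
        return doc[:i] + op[2] + doc[i + 1:]

doc = "Meeting at 10AM"
bob = ('delete', 13)                   # Bob removes the '0'
alice = ('change', 14, 'P')            # Alice turns the 'A' into a 'P'

# Bob's delete arrives first, so Alice's change must be transformed:
doc = apply_op(doc, bob)
doc = apply_op(doc, transform(alice, bob))
print(doc)                             # Meeting at 1PM
```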

This article was originally published on the CS4FN website.

This blog is funded through EPSRC grant EP/W033615/1.

The original version of this article was funded by the Institute of Coding.