The Gender Shades audit

by Jo Brodie, Queen Mary University of London

Face recognition technology is widely used, for example at passport controls and by police forces. But what if it isn’t as good at recognising faces as has been claimed? Joy Buolamwini and Timnit Gebru tested three different commercial systems and found that they were much more likely to wrongly classify darker-skinned female faces than lighter- or darker-skinned male faces. The systems were not reliable.

Face recognition systems are trained to detect, classify and even recognise faces based on a bank of photographs of people. Joy and Timnit examined two banks of images used to train such systems and found that around 80 per cent of the photos used were of people with lighter-coloured skin. If the photographs aren’t fairly balanced in terms of including people of different genders and ethnicities then the resulting technologies will inherit that bias too. The systems examined were, in effect, being trained to recognise light-skinned people.

The Pilot Parliaments Benchmark

Joy and Timnit decided to create their own set of images and wanted to ensure that these covered a wide range of skin tones and had an equal mix of men and women (‘gender parity’). They did this using photographs of members of parliament, choosing parliaments known to have a reasonably equal mix of men and women. They selected parliaments both from countries with mainly darker-skinned people (Rwanda, Senegal and South Africa) and from countries with mainly lighter-skinned people (Iceland, Finland and Sweden).

They labelled all the photos according to gender (making some assumptions based on name and appearance where pronouns weren’t available) and used a special scale called the Fitzpatrick scale to classify skin tones (see Different Shades below). The result was a set of photographs labelled as dark male, dark female, light male, light female, with a roughly equal mix across all four categories: this time only 53 per cent of the people were light-skinned (male and female).

Testing times

Joy and Timnit tested the three commercial face recognition systems against their new database of photographs (a fair test of the wide range of faces a recognition system might come across) and this is where they found that the systems were less able to correctly identify particular groups of people. The systems were very good at spotting lighter-skinned men and darker-skinned men, but were less able to correctly identify women, and darker-skinned women worst of all. The tools, trained on sets of data that had a bias built into them, inherited those biases, and this affected how well they worked.
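
At its heart, an audit like this is just ‘disaggregated accuracy’: instead of one overall score, you work out how often the system gets it right separately for each group, then compare. The little Python sketch below shows the idea; the group results and numbers are invented for illustration, not taken from the real study.

results = [
    ("lighter male", True), ("lighter male", True),
    ("lighter female", True), ("lighter female", False),
    ("darker male", True), ("darker male", True),
    ("darker female", False), ("darker female", True),
]

def accuracy_by_group(results):
    # Fraction of photos classified correctly, per group.
    totals, correct = {}, {}
    for group, was_correct in results:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + was_correct
    return {group: correct[group] / totals[group] for group in totals}

for group, accuracy in accuracy_by_group(results).items():
    print(f"{group}: {accuracy:.0%}")

A big gap between the groups, as Joy and Timnit found for darker-skinned women, is the warning sign, even when the overall average looks impressive.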

As a result of Joy and Timnit’s research there is now much more recognition of the problem, and what this might mean for the ways in which face recognition technology is used. There is some good news, though. The three companies made changes to improve their systems and several US cities have already banned the use of this technology in criminal investigations, with more likely to follow. People worldwide are more aware of the limitations of face recognition programs and the harms to which they may be (perhaps unintentionally) put, with calls for better regulation.

Different Shades
The Fitzpatrick skin tone scale is used by skin specialists to classify how someone’s skin responds to ultraviolet light. There are six points on the scale, with 1 being the lightest skin and 6 the darkest. People whose skin tone has a lower Fitzpatrick score are more likely to burn in the sun and are at greater risk of skin cancer. People with higher scores have darker skin, which is less likely to burn and carries a lower risk of skin cancer. A variation of the Fitzpatrick scale, with five points, is used to create the skin tone emojis that you’ll find on most messaging apps, in addition to the ‘default’ yellow.


Black in Data

Lightbulb in a black circle surrounded by circles of colour representing data

Image based on combining big data and lightbulb images by Gerd Altmann from Pixabay

Careers do not have to be decided from day one. You can end up in a good place in a roundabout way. That is what happened to Sadiqah Musa, and now she is helping make the paths easier for others to follow.

Sadiqah went to university at QMUL expecting to become an environmental scientist. Her first job was as a geophysicist analysing seismic data: a job she thought she would love and do forever. Unfortunately, she wasn’t happy, not least about the lack of job security. The work was all about data, though, which was the part she did still enjoy, and Data Analyst was by then a sought-after computer science role. She retrained and started a whole new exciting career. She currently works at the Guardian newspaper, where she met Devina Nembhard … who was the first Black woman she had ever worked with in her career.

Together they decided that was just wrong, and set out to change it. They created “Black in Data” to support people of colour in the industry: mentoring them, training them in the computer science skills they might be short of, like programming and databases, and helping them thrive. More than that, they also confront the industry itself, trying to take down the barriers that block diversity in the first place.

Paul Curzon, Queen Mary University of London


Operational Transformation

Algorithms for writing together

How do online word processing programs manage to let two or more people change the same document at the same time without getting into a complete muddle? One of the key ideas that makes collaborative writing possible was developed by computer scientists Clarence Ellis and Simon Gibbs. They called their idea ‘operational transformation’.

Let’s look at a simple example to illustrate the problem. Suppose Alice and Bob share a document that starts:

"MEETING AT 10AM"

First of all, one computer, called the ‘server’, holds the actual ‘master’ document. If the network goes down or computers crash then it’s that ‘master’ copy that everyone treats as the definitive version.

Both Alice and Bob’s computers can connect to that server and get copies to view on their own machines. They can both read the document without problem – they both see the same thing. But what happens if they both start to change it at once? That’s when things can get mixed up.

Let’s suppose Alice notices that the time in the document should be PM not AM. She puts her cursor at position 14 and replaces the letter there with P. As far as the copy she is looking at is concerned, that is where the faulty A is. Her computer sends a command to the server to change the master version accordingly, saying

CHANGE the character at POSITION 14 to P.

The new version at some point later will be sent to everyone viewing. However, suppose that at the same time as Alice was making her change, Bob notices that the meeting is at 1 not 10. He moves his cursor to position 13, so over the 0 in the version he is looking at, and deletes it. A command is sent to the server computer:

DELETE the character at POSITION 13.

Now if the server receives the instructions in that order then all is ok. The document ends up as both Bob and Alice intended. When they are sent the updated version it will have done both their changes correctly:

"MEETING AT 1PM"

However, as both Bob and Alice are editing at the same time, their commands could arrive at the server in either order. If the delete command arrives first then the document ends up in a muddle, as first the 13th position is deleted, giving:

"MEETING AT 1AM"

Then, when Alice’s command is processed, the 14th character is changed to a P as it asks. Unfortunately, the 14th character is now the M, because the deleted character has gone. We end up with

"MEETING AT 1AP"

Somehow the program has to avoid this happening. That is where the operational transformation algorithm comes in. It changes each instruction, as needed, to take other delete or insert instructions into account. Before the server follows an instruction, the instruction is transformed into one that gives the right result whatever order the commands arrived in.

So, in the above example, if the delete is done first, then any other instruction that applies to the same initial version of the document is changed to take account of how positions shifted because of the already-applied deletion. The instructions as actually applied become:

STARTING FROM "MEETING AT 10AM"
DELETE the character at POSITION 13.
CHANGE the character at POSITION (14-1) to P.
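
Here is a minimal sketch of that transformation step in Python, using the same 1-indexed positions as the example (real operational transformation algorithms also handle inserts, longer edits and many trickier cases):

def transform_position(pos, applied_delete_pos):
    # A delete earlier in the text shifts later positions left by one.
    if applied_delete_pos < pos:
        return pos - 1
    return pos

doc = list("MEETING AT 10AM")

# Bob's command arrives first and is applied as-is:
bob_delete = 13
del doc[bob_delete - 1]                        # "MEETING AT 1AM"

# Alice's command was made against the ORIGINAL text, so it is
# transformed against Bob's already-applied delete before use:
alice_position = transform_position(14, bob_delete)   # 14 becomes 13
doc[alice_position - 1] = "P"

print("".join(doc))                            # "MEETING AT 1PM"

Whatever order the two commands arrive in, transforming the later one against the one already applied gives the same final document, which is exactly the consistency a collaborative editor needs.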

Without Operational Transformation two people trying to write a document together would just be frustrating chaos. Online editing would have to be done the old way of taking it in turns, or one person making suggestions for the other to carry out. With the algorithm, thanks to Clarence Ellis and Simon Gibbs, people who are anywhere in the world can work on one document together. Group writing has changed forever.

Paul Curzon, Queen Mary University of London


This article was originally published on the CS4FN website.


The original version of this article was funded by the Institute of Coding.


Recognising (and addressing) bias in facial recognition tech #BlackHistoryMonth

By Jo Brodie and Paul Curzon, Queen Mary University of London

A unit containing four sockets: 2 USB and 2 for a microphone and speakers.
Happy, though surprised, sockets. Photo taken by Jo Brodie in 2016 at Gladesmore School in London.

Some people have a neurological condition called face blindness (also known as ‘prosopagnosia’) which means that they are unable to recognise people, even those they know well – this can even include their own face in the mirror! They only know who someone is once that person starts to speak. They can certainly detect faces, but they might struggle to classify them in terms of gender or ethnicity. In general, though, most people actually have an exceptionally good ability to detect and recognise faces, so good in fact that we even detect faces when they’re not actually there. This is called pareidolia: perhaps you see a surprised face in the picture of USB sockets above.

How about computers? There is a lot of hype about face recognition technology as a simple solution to help police forces prevent crime, spot terrorists and catch criminals. What could be bad about being able to pick out wanted people automatically from CCTV images and so catch them quickly?

What if facial recognition technology isn’t as good at recognising faces as it has sometimes been claimed to be, though? If the technology is being used in the criminal justice system, and gets the identification wrong, this can cause serious problems for people (see Robert Williams’ story in “Facing up to the problems of recognising faces“).

“An audit of commercial facial-analysis tools
found that dark-skinned faces are misclassified
at a much higher rate than are faces from any
other group. Four years on, the study is shaping
research, regulation and commercial practices.”

The unseen Black faces of AI algorithms
(19 October 2022) Nature

In 2018 Joy Buolamwini and Timnit Gebru shared the results of research they’d done, testing three different commercial facial recognition systems. They found that these systems were much more likely to wrongly classify darker-skinned female faces compared to lighter- or darker-skinned male faces. In other words, the systems were not reliable. (Read more about their research in “The gender shades audit“).

“The findings raise questions about
how today’s neural networks, which …
(look for) patterns in huge data sets,
are trained and evaluated.”

Study finds gender and skin-type bias
in commercial artificial-intelligence systems
(11 February 2018) MIT News

Their work has shown that face recognition systems do have biases and so are currently not at all fit for purpose. There is some good news though. The three companies whose products they studied made changes to improve their facial recognition systems, and several US cities have already banned the use of this tech in criminal investigations. More cities are considering bans too, and in Europe the EU is moving closer to banning the use of live face recognition technology in public places. Others, however, are still rolling it out. It is important not simply to believe the hype about new technology, but to make sure we understand its limitations and risks.

More on …

Further reading

• Joy Buolamwini and Timnit Gebru (2018) Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Proceedings of Machine Learning Research 81:1–15. [EXTERNAL]
• The unseen Black faces of AI algorithms (19 October 2022) Nature News & Views. [EXTERNAL]


See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing



Devices that work for everyone

Cartoon of the invisible man - only the clothes are visible

Invisible man Image by OpenClipart-Vectors from Pixabay

In 2009 Desi Cryer, who is Black, shared a light-hearted video with a serious message. He’d bought a new computer with a face tracking camera… which didn’t track his face, at all. It did track his White colleague Wanda’s face though. In the video he asked her to go in front of the camera and move from side to side, and the camera obediently tracked her face – wherever she moved, the camera followed. When Desi moved back in front of the camera it stopped tracking again. He wondered if the computer might be racist…

Another video, this time from 2017, showed a dark-skinned man failing to get a soap dispenser to give him some soap. Nothing happened when he put his hand underneath the sensor, but as soon as his lighter-skinned friend put his hand under it – out popped some soap! The only way the first man could get any soap dispensed was to put a white tissue on his hand first. He wondered if the soap dispenser might be racist…

What’s going on?

Probably no-one set out to maliciously design a racist device, but designers need to check that their products work with a range of different people before putting them on the market. This can save the company embarrassment as well as creating something that more people want to buy.

Sensors working overtime

Both devices use a sensor that is activated (or, in these cases, isn’t) by a signal. Soap dispensers shine a beam of light which bounces off a hand placed below, and a sensor next to the light responds to the reflected light. Paler skin reflects more light than darker skin, so it triggers the sensor more easily. If the device was only tested on White people then the sensor won’t have been adjusted for the full range of skin tones, and so won’t respond appropriately. Similarly, cameras have historically been designed for White skin tones, meaning darker tones are not picked up as well.
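
As a toy illustration of what can go wrong, imagine the dispenser simply compares the amount of reflected light with a fixed threshold chosen during testing. All the numbers in this Python sketch are invented:

REFLECTION_THRESHOLD = 0.6   # "calibrated" only on paler, more reflective skin

def should_dispense(reflected_fraction):
    # Trigger the pump if enough of the emitted light bounces back.
    return reflected_fraction >= REFLECTION_THRESHOLD

print(should_dispense(0.8))   # paler skin: True, soap comes out
print(should_dispense(0.4))   # darker skin: False, nothing happens
print(should_dispense(0.9))   # white tissue: True, the workaround works

Testing with a wider range of hands would have shown that a single high threshold was the wrong design, and that a lower threshold or a more sensitive sensor was needed.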

Things can be improved!

It’s a good idea, when designing something that will be used by lots of different people, to make sure that it will work correctly with everyone. Having a diverse design team and, importantly, making sure that everyone feels empowered to contribute is a good way to start. Another is to test the design with different target audiences early in the design process so that changes can be made before it’s too late. How a company responds to feedback when they’ve made an oversight is also important. In the case of the computer company they acknowledged the problem and went to work to improve the camera’s sensitivity. 

A problem with pulse oximeters

During the coronavirus pandemic many people bought a ‘pulse oximeter’, a device which clips painlessly onto a finger and measures how much oxygen is circulating in your blood (and your pulse). If the oxygen reading became too low people were advised to go to hospital. Oximeters shine red and infrared light from the top clip through the finger and the light is absorbed differently depending on how much oxygen is present in the blood. A sensor on the lower clip measures how much light has got through, but the reading can be affected by skin colour (and coloured nail polish). People were concerned that pulse oximeters would overestimate the oxygen reading for someone with darker skin (that is, tell them they had more oxygen than they actually had) and that the devices might not detect a drop in oxygen quickly enough to warn them.
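
For the curious, one common approach turns the two light readings into a ‘ratio of ratios’ and then applies a calibration curve. The Python sketch below uses the rough textbook approximation SpO2 ≈ 110 − 25R; real devices use empirically measured, device-specific curves, and all the readings here are invented:

def estimate_spo2(red_pulsing, red_steady, ir_pulsing, ir_steady):
    # Compare how strongly the pulsing blood absorbs red versus
    # infrared light: oxygen-rich blood absorbs less red light.
    r = (red_pulsing / red_steady) / (ir_pulsing / ir_steady)
    return 110.0 - 25.0 * r   # rough approximation, not a real device's curve

print(estimate_spo2(1.0, 50.0, 1.5, 50.0))   # about 93 (made-up readings)

If darker skin (or nail polish) absorbs one wavelength more than the other, the ratio shifts and the formula can report more oxygen than is really there: exactly the overestimation people were worried about.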

In response the UK Government announced in August 2022 that it would investigate this bias in a range of medical devices to ensure that future devices work effectively for everyone.

– Jo Brodie, Queen Mary University of London


Magazines …

Front cover of CS4FN issue 29 - Diversity in Computing



Facing up to ALL faces

The problems of recognising faces

Wire frame face
Image by Gerd Altmann from Pixabay

How face recognition technology caused the wrong Black man to be arrested.

The police were waiting for Robert Williams when he returned home from work in Detroit, Michigan. They arrested him for robbery in front of his wife and terrified daughters aged two and five and took him to a detention centre where he was kept overnight. During his interview an officer showed him two grainy CCTV photos of a suspect alongside a photo of Williams from his driving licence. All the photos showed a large Black man, but that’s where the similarity ended – it wasn’t Williams on CCTV but a completely different man. Williams held up the photos to his face and said “I hope you don’t think all Black people look alike”, the officer replied that “the computer must have got it wrong.”

Williams’ problems began several months before his arrest, when video clips and images of the robbery from the CCTV camera were run through face recognition software used by the Detroit Police Department. The system has access to the photos from everyone’s driving licence and compares faces until it finds a potential match: in this case it falsely identified Robert Williams. No system is ever perfect, but studies have shown that face recognition technology is often better at correctly matching lighter-skinned faces than darker-skinned ones.

Check the signature

The way face recognition works is not actually by comparing pictures but by comparing data. When a picture of a face is added to the system, lots of measurements are essentially taken, such as how far apart the eyes are, or what shape the nose is. This gives a signature for each face made up of all the numbers, and that signature is added to the database. When looking for a match from, say, a CCTV image, the signature of the new image is first determined. Then algorithms look for the signature in the database “nearest” to the new one. How well this works depends on the particular features chosen, amongst many other things. If the features chosen are a poor way to distinguish particular groups of people then there will be lots of bad matches.

But how does it decide what is “nearest” anyway, given that in essence it is just comparing groups of numbers? Many algorithms are based on machine learning. The system might be trained on lots of faces and told which match and which don’t, allowing it to look for patterns that are good ways to predict matches. If, however, it is trained on mainly light-skinned faces it is likely to be bad at spotting matches for faces of other ethnic backgrounds. It may actually decide that “all black people look alike”.
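
To make the idea of a numeric signature concrete, here is a minimal Python sketch. Everything in it is illustrative: real systems use hundreds of machine-learned features rather than three hand-picked measurements, and the licence IDs and numbers are invented.

import math

# Each face is boiled down to a short list of measurements: its "signature".
database = {
    "licence_001": [0.52, 0.31, 0.77],
    "licence_002": [0.49, 0.35, 0.70],
    "licence_003": [0.90, 0.12, 0.40],
}

def distance(a, b):
    # Euclidean distance: one simple way to measure how "near" two signatures are.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_match(query):
    # Return the ID whose stored signature is nearest to the query's.
    return min(database, key=lambda name: distance(database[name], query))

cctv_signature = [0.50, 0.33, 0.72]
print(nearest_match(cctv_signature))   # licence_002

Notice that nearest_match always returns something: the nearest signature, however far away it is. That is one reason a “match” can only ever be a pointer towards a possible suspect, never proof of identity.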

Biasing the investigation

However, face recognition is only part of the story. A potential match is only a pointer towards someone who might be a suspect and it’s certainly not a ‘case closed’ conclusion – there’s still work to be done to check and confirm. But as Williams’ lawyer, Victoria Burton-Harris, pointed out, once the computer had suggested Williams as a suspect it “framed and informed everything that officers did subsequently”. The man in the CCTV image wore a red baseball cap. It was for a team that Williams didn’t support (he’s not even a baseball fan) but no-one asked him about it. They also didn’t ask if he was in the area at the time (he wasn’t) or had an alibi (he did). Instead the investigators asked a security guard at the shop where the theft took place to look at some photos of possible suspects and he picked Williams from the line-up of images. Unfortunately the guard hadn’t been on duty on the day of the theft and had only seen the CCTV footage.

Robert Williams spent 30 hours in custody for a crime he didn’t commit after his face was mistakenly selected from a database. He was eventually released and the case dropped but his arrest is still on record along with his ‘mugshot’, fingerprints and a DNA sample. In other words he was wrongly picked from one database and has now been unfairly added to another. The experience for his whole family has been very traumatic and sadly his children’s first encounter with the police has been a distressing rather than a helpful one.

Remove the links

The American Civil Liberties Union (ACLU) has filed a lawsuit against the Detroit Police Department on Williams’ behalf for his wrongful arrest. It is not known how many people have been arrested because of face recognition technology, but given how widely it is used it’s likely that others have been misidentified too. The ACLU and Williams have asked for a public apology, for his police record to be cleared and for his images to be removed from any face recognition database. They have also asked that the Detroit Police Department stop using face recognition in their investigations. If Robert Williams had lived in New Hampshire he’d never have been arrested, as there is a law there which prevents face recognition software from being linked with driving licence databases.

In June 2020 Amazon, Microsoft and IBM denied the police any further access to their face recognition technology and IBM has also said that it will no longer work in this area because of concerns about racial profiling (targeting a person based on assumptions about their race instead of their individual actions) and violation of privacy and human rights. Campaigners are asking for a new law that protects people if this technology is used in future. But the ACLU and Robert Williams are asking for people to just stop using it – “I don’t want my daughters’ faces to be part of some government database. I don’t want cops showing up at their door because they were recorded at a protest the government didn’t like.”

Technology is only as good as the data and the algorithms it is based on. However, that isn’t the whole story. Even if very accurate, it is only as good as the way it is used. If as a society we want to protect people from bad things happening, perhaps some technologies should not be used at all.

– Jo Brodie and Paul Curzon, Queen Mary University of London



An earlier version of this article was originally published on the Teaching London Computing website where you can find references and further reading.



Hidden Figures: NASA’s brilliant calculators

Full Moon with a blue filter
Full Moon image by PIRO from Pixabay

NASA Langley was the birthplace of the U.S. space program, and where astronauts like Neil Armstrong learned to land on the Moon. Everyone knows the names of astronauts, but behind the scenes a group of African-American women were vital to the space program: Katherine Johnson, Mary Jackson and Dorothy Vaughan. Before electronic computers were invented, ‘computers’ were just people who did calculations, and that’s where these women started out, as part of a segregated team of mathematicians. Dorothy Vaughan became the first African-American woman to supervise staff there and helped make the transition from human to electronic computers by teaching herself and her staff how to program in the early programming language FORTRAN.

The women switched from being the computers to programming them. These hidden women helped put the first American, John Glenn, in orbit, and over many years worked on calculations like the trajectories of spacecraft and their launch windows (the small period of time when a rocket must be launched if it is to get to its target). These complex calculations had to be correct. If they got them wrong, the mistakes could ruin a mission, putting the lives of the astronauts at risk. Get them right, as they did, and the result was a giant leap for humankind.

See the film ‘Hidden Figures’ for more of their story.

– Paul Curzon, Queen Mary University of London

from the archive


Magazines …

Front cover of CS4FN issue 29 - Diversity in Computing


Writing together: Clarence ‘Skip’ Ellis

Poster of Skip Ellis showing people working on a shared document
Poster by Richard Butterworth for CS4FN

Back in 1956, Clarence Ellis started his career at the very bottom of the computer industry. He was given a job, at the age of 15, as a “computer operator” … because he was the only applicant. He was also told that under no circumstances should he touch the computer! It’s lucky for all of us that he got the job, though! He went on to develop ideas that have made computers easier for everyone to use. Working at a computer was once a lonely endeavour: one person, on one computer, doing one job. Clarence Ellis changed that. He pioneered ways for people to use computers together effectively.

The graveyard shift

The company Clarence first worked for had a new computer. Just like all computers back then, it was the size of a room. He worked the graveyard shift and his duties were more those of a nightwatchman than a computer operator. It could have been a dead-end job, but it gave him lots of spare time and, more importantly, access to all the computer’s manuals … so he read them … over and over again. He didn’t need to touch the computer to learn how to use it!

Saving the day

His studying paid dividends. Only a few months after he started, the company had a potential disaster on its hands. They ran out of punch cards. Back then punch cards were used to store both data and programs. They used patterns of holes and non-holes as a way to store numbers as binary in a way a computer could read. Without punch cards the computer could not work!

It had to though, because the payroll program had to run before the night was out. If it didn’t, then no-one would be paid that month. Because he had studied the manuals in detail, more so than anyone else, Clarence was the only person who could work out how to reuse old punch cards.

The problem was that the computer used a system called ‘parity checking’ to spot mistakes. In its simplest form, parity checking of a punch card involves adding an extra binary digit (an extra hole or no-hole) on the end of each number, chosen so that the total number of holes is even. If there is an even number of holes already, the extra digit is left as a non-hole. If, on the other hand, there is an odd number of holes, a hole is punched as the extra digit. That extra binary digit isn’t part of the number: it’s just there so the computer can check whether the number has been corrupted. If a hole was accidentally (or otherwise) turned into a non-hole, or vice versa, this would show up, because there would now be an odd number of holes. Special circuitry in the computer would spot this and spit out the card, rejecting it. Clarence knew how to switch that circuitry off. That meant they could change the numbers on the cards by adding new holes without the cards being rejected.
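
Even parity is simple enough to sketch in a few lines of Python, treating holes as 1s and non-holes as 0s (a minimal illustration of the idea, not the card format IBM actually used):

def add_parity(bits):
    # Append an extra bit chosen so the total number of 1s is even.
    return bits + [sum(bits) % 2]

def check_parity(bits):
    # A valid card has an even number of 1s (holes) overall.
    return sum(bits) % 2 == 0

number = [1, 0, 1, 1]          # data bits with an odd number of 1s
card = add_parity(number)      # becomes [1, 0, 1, 1, 1]
print(check_parity(card))      # True: the card is accepted

card[2] = 0                    # a hole is corrupted into a non-hole
print(check_parity(card))      # False: the card would be spat out

With the checking circuitry switched off, cards whose hole counts no longer added up were accepted rather than rejected, so the old cards could be re-punched with new values.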

After that success he was allowed to become a real operator and was relied on to troubleshoot whenever there were problems. His career was up and running.

Clicking icons

He later worked at Xerox PARC, a massively influential research centre. He was part of the team that invented graphical user interfaces (GUIs). With GUIs, Xerox PARC completely transformed the way we used computers. Instead of typing obscure and hard to remember commands, they introduced the now standard ideas of windows, icons, dragging and dropping, using a mouse, and more. Clarence himself has been credited with inventing the idea of clicking on an icon to run a program.

Writing Together

As if that wasn’t enough of an impact, he went on to help make groupware a reality: software that supports people working together. His focus was on software that let people write a document together. With Simon Gibbs he developed a crucial algorithm called Operational Transformation. It allows people to edit the same document at the same time without it becoming hopelessly muddled. This is actually very challenging. You have to ensure that two (or more) people can change the text at exactly the same time, and even at the same place, without each ending up with a different version of the document.

The actual document sits on a server computer. It must make sure that its copy is always the same as the ones everyone is individually editing. When people type changes into their local copy, the master is sent messages informing it of the actions they performed. The trouble is that the order in which those messages arrive can change what happens. Clarence’s operational transformation algorithm solved this by changing the commands from each person into ones that work consistently whatever order they are applied in. It is the transformed operation that is applied to the master, and that master version is then the version everyone sees as their local copy. Ultimately everyone sees the same document. This algorithm is at the core of programs like Google Docs, which have made collaborative editing of documents commonplace.

Clarence Ellis started his career with a lonely job. By the end of his career he had helped ensure that writing on a computer at least no longer needs to be a lonely affair.

– Paul Curzon, Queen Mary University of London


This article was originally published on the CS4FN website. One of the aims of our Diversity in Computing posters (see below) is to help a classroom of young people see the range of computer scientists which includes people who look like them and people who don’t look like them. You can download our posters free from the link below.


Magazines …

Front cover of CS4FN issue 29 - Diversity in Computing

See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing



The original version of this article was funded by the Institute of Coding.

Mark Dean: An Inspiration

This article is an edited version of one of the 2006 winning essays from the Queen Mary University of London, Department of Computer Science, first year essay competition.

An icon of computer connected to peripherals
Image by OpenClipart-Vectors from Pixabay

May I ask you a question? When you think of the computer what names ring a bell? Bill Gates? Or for those more in touch with the history behind computers maybe Charles Babbage is a familiar name? May I ask you another question please? Do you know who Dr Mark Dean is? No, well you should. Do not worry yourself though, you are definitely not alone. I did not know of him either.

Allow me to enlighten you…

Mark Dean is in my opinion a very creative and inspirational black computer scientist. He is a vice-president at IBM and holds 3 of IBM’s first 9 patents on the personal computer. He has over 30 patents pending. He won the Black Engineer of the Year President’s Award and was made an IBM Fellow in 1995. An IBM Fellow is IBM’s highest technical honor. Only 50 of IBM’s employees are Fellows and Mark Dean was the first black one. Prior to joining IBM in 1980 he earned degrees in Electrical Engineering before going back to school to gain a PhD in the field from Stanford University. He was born in 1957 in Jefferson City, Tennessee and was one of the first black students to attend Jefferson City High School. He was an exceptional student and enjoyed athletics. Early manifestations of his desire to create were shown when he and his father built a tractor from scratch when he was just a boy.

Upon joining IBM, Mark Dean and a partner led the team that developed the industry standard architecture (ISA) system bus, which allowed devices like the keyboard and printer to be connected to the motherboard, making computers a part of our lives. It was that which earned him a spot in the National Inventors Hall of Fame. While at IBM he has held numerous positions in computer system hardware architecture and design. He was responsible for IBM’s research laboratory in Austin, Texas, where he focused on developing high performance microprocessors, software, systems and circuits. It was here that he made history by leading the team that built a gigahertz chip which did a billion calculations per second. In 2004, he was chosen as one of the 50 most important Blacks in Research Science.

I think that such a man should be well recognized in computer science, especially by black computer science students because from what I can see we are rare. We as a minority need an inspirational figure like Mark Dean. He inspires me, and I wanted to share that with you. Before this small article it is very probable you had no knowledge of this man. So if there comes a time where you are asked about important names in the field of computers, I hope Dr Mark Dean springs to mind and rings a bell for you to hear loud and clear.

– Dean Miller, Queen Mary University of London

This article was originally published on the CS4FN website.


Magazines …

Front cover of CS4FN issue 29 - Diversity in Computing

One of the aims of our Diversity in Computing posters (see below) is to help a classroom of young people see the range of computer scientists which includes people who look like them and people who don’t look like them. You can download our posters free from the link below. Isabel Wagner has also created some free posters to download about inspiring computer scientists and Mark Dean is one of them.

See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing




This blog is funded by EPSRC on research agreement EP/W033615/1.
