What the real Pros say

by Paul Curzon, Queen Mary University of London

Originally published in the CS4FN “Women Are Here” special

Rebecca Stewart (image by Thomas Bonte, CC BY 2.0)

Some (female) computer scientists and electronic engineers were asked what they most liked about their job and the subject. Each quote is given with the job its author held at the time of the quote. Many have moved on or upwards since.

Here is what the real Pros think …

Building software that protects billions of people from online abuse … I find it tremendously rewarding… Every code change I make is a puzzle: exciting to solve and exhilarating to crack; I love doing this all day, every day.

Despoina Magka, Software engineer, Facebook

Taking on new challenges and overcoming my limitations with every program I write, every bug I fix, and every application I create. It has and continues to inspire me to grow, both professionally and personally.

Kavin Narasimhan, Researcher, University of Surrey

Because computer science skills are useful in nearly every part of our lives, I get to work with biologists, mathematicians, artists, designers, educators and lately a whole colony of naked mole-rats! I love the diversity.

Julie Freeman, artist and PhD student, QMUL

The flexibility of working from any place at any time. It offers many opportunities to collaborate with, and learn from, brilliant people from all over the world.

Greta Yorsh, Lecturer, QMUL; former software engineer, ARM
Shauna, Kavin and Greta

Possibilities! When you try to do something that seems crazy or impossible and it works, it opens up new possibilities… I enjoy being surrounded by creative people.

Justyna Petke, Researcher, UCL

That we get to study the deep characteristics of the human mind and yet we are so close to advances in technology and get to use them in our research.

Mehrnoosh Sadrzadeh, Senior Lecturer, QMUL

I get the opportunity to understand what both business people and technologists are thinking about, their ideas and their priorities and I have the opportunity to bring these ideas to fruition. I feel very special being able to do this! I also like that it is a creative subject – elegant coding can be so beautiful!

Jill Hamilton, Vice President, Morgan Stanley

You never know what research area the solution to your problem will come from, so every conversation is valuable.

Vanessa Pope, PhD student, QMUL

I get to ask questions about people, and set about answering them in an empirical way. Computer science can lead you in a variety of unexpected directions.

Shauna Concannon, Researcher, QMUL

It is fascinating to be able to provide simpler solutions to challenging requirements faced by the business.

Emanuela Lins, Vice President, Morgan Stanley

I think the best thing is how you can apply it to so many different topics. If you are interested in biology, music, literature, sport or just about anything else you can think of, then there’s a problem that you can tackle using computer science or electronic engineering…I like writing code, but I enjoy making things even more.

Becky Stewart, Lecturer, QMUL

… you get to be both a thinker and a creator. You get to think logically and mathematically, be creative in the way you write and design systems and you can be artistic in the way you display things to users. …you’re always learning something new.

Yasaman Sepanj, Associate, Morgan Stanley

Creating the initial ideas, forming the game, making the story… Being part of the creative process and having a hands-on approach.

Nana Louise Nielsen, Senior Game Designer, Sumo Digital

Working with customers to solve their problems. The best feeling in the world is when you leave … knowing you’ve just made a huge difference.

Hannah Parker, IT Consultant, IBM

It changes so often… I am not always sure what the day will be like.

Madleina Scheidegger, Software Engineer, Google.

I enjoy being able to work from home.

Megan Beynon, Software Engineer, IBM

I love to see our plans come together with another service going live and the first positive user feedback coming in.

Kerstin Kleese van Dam, Head of Data Management, CCLRC

…a good, experienced team around me focused on delivering results.

Anita King, Senior Project Manager, Metropolitan Police Service

I get to work with literally every single department in the organisation.

Jemima Rellie, Head of Digital Programme, Tate


EPSRC supports this blog through research grant EP/W033615/1. 

Susan Kare: Icon Draw

by Jo Brodie, Queen Mary University of London

(from the archive)

A pixel drawing of a dustbin icon

Pick up any computer or smart gadget and you’ll find small, colourful pictures on the screen. These ‘icons’ tell you which app is which. You can just touch them or click on them to open the app. It’s quick and easy, but it wasn’t always like that.

Up until the 1980s, if you wanted to run a program you had to type a written command to tell the device what to do. This made things slow and hard. You had to remember all the different commands to type. It meant that only people who felt quite confident with computers were able to play with them.

Computer scientists wanted everyone to be able to join in (they wanted to sell more computers too!) so they developed a visual, picture-based way of letting people tell their computers what to do, instead of typing in commands. It’s called a ‘Graphical User Interface’ or GUI.

An artist, Susan Kare, was asked to design some very simple pictures – icons – that would make using computers easier. If people wanted to delete a file they would click on an icon with her drawing of a little dustbin. If people wanted to edit a letter they were writing they could click on the icon showing a pair of scissors to cut out a bit of text. She originally designed them on squared paper, with each square representing a pixel on the screen. Over the years the pictures have become more sophisticated (and sometimes more confusing) but in the early days they were both simple and clear thanks to Susan’s skill.

Try our pixel puzzles which use the same idea. Then invent your own icons or pixel puzzles. Can you come up with your own easily recognisable pictures using as few lines as possible?
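Here is a minimal sketch of the squared-paper idea in Python (our own made-up bitmap, not one of Susan Kare’s actual designs): the icon is stored as a grid of 0s and 1s, one character per pixel, and printed row by row.

```python
# A tiny icon on "squared paper": each string is a row of the grid and
# each character one pixel. 1 = filled square, 0 = empty square.
# (A made-up 7x7 dustbin, just for illustration.)
ICON = [
    "0011100",
    "1111111",
    "0100010",
    "0101010",
    "0101010",
    "0101010",
    "0111110",
]

def draw(icon):
    """Print the bitmap, one character per pixel."""
    for row in icon:
        print("".join("#" if pixel == "1" else "." for pixel in row))

draw(ICON)
```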



EPSRC supports this blog through research grant EP/W033615/1. 

Celebrating Jean Bartik – she was one of six women who programmed the ‘ENIAC’, a computer from the 1940s

Four of the 42 panels that made up ENIAC.

by Jo Brodie, Queen Mary University of London.

Jean Bartik (born Betty Jean Jennings) was one of six women who programmed “ENIAC” (the Electronic Numerical Integrator and Computer), one of the earliest electronic programmable computers. The work she and her colleagues did in the 1940s had a huge impact on computer science, yet their contribution went largely unrecognised for 40 years.

Jean Bartik – born 27 December 1924; died on this day, 23 March 2011

Born in Missouri, USA in December 1924 to a family of teachers, Betty (as she was then known) showed promise in mathematics, graduating from her high school in the summer of 1941, aged 16, with the highest marks in maths ever seen at her school. She began her degree in Maths and English at her local teachers’ college (which is now Northwest Missouri State University), but everything changed dramatically a few months in when the US became involved in the Second World War. The men (teachers and students) were called up for war service, leaving a dwindling department, and her studies were paused, resuming only in 1943 when retired professors were brought in to teach; she graduated in January 1945, the only person in her year to graduate in Maths.

Although her family encouraged her to become a local maths teacher she decided to seek more distant adventures. The University of Pennsylvania in Philadelphia (~1,000 miles away) had put out a call for people with maths skills to help with the war effort; she applied and was accepted. Along with over 80 other women she was employed to calculate accurate trajectories of bullets and bombs (ballistics) for the military, using advanced maths including differential calculus. She and her colleagues were ‘human computers’ (people who did calculations, before the word meant what it does today), creating range tables: columns of information that told the US Army where to point their guns to be sure of hitting their targets. This was complex work that had to take account of weather conditions as well as more obvious things like distance and the size of the gun barrel.

Even with 80-100 women working on every possible combination of gun size and angle, it still took over a week to generate one data table, so the US Army was obviously keen to speed things up as much as possible. They had already given funding in 1943 to John Mauchly (a physicist) and John Presper Eckert (an electrical engineer) to build a programmable electronic calculator – ENIAC – which would automate the calculations and give them a huge speed advantage. By 1945 the enormous new machine, which took up a room (as computers tended to do in those days), consisted of almost 18,000 vacuum tubes, weighed 30 tonnes and was held together with several million soldered joints. It was programmed physically, by plugging cables into plugboards (like old-fashioned telephone exchanges) and setting banks of switches by hand, with punched cards used to feed data in and get results out.

From the now 100 women working as human computers in the department, six were selected to become the machine’s operators – an exceptional role. There were no manuals available and ‘programming’, as we know it today, didn’t yet exist – it was much more physical. Not only did the ‘ENIAC six’ have to correctly wire each cable, they had to fully understand the machine’s underlying blueprints and electronic circuits to make it work as expected. Repairs could involve crawling into the machine to fix a broken wire or vacuum tube.

Two of the ENIAC programmers preparing the computer for Demonstration Day in February 1946. “U.S. Army Photo” from the archives of the ARL Technical Library. Left: Betty Jennings (later Bartik); right: Frances Bilas (Spence) – via Wikipedia.

World War 2 actually ended in September 1945, before ENIAC was brought into full service, but being programmable (which meant rewiring the cables) it would soon be put to other uses. Jean really enjoyed her time working on ENIAC, saying later that she’d “never since been in as exciting an environment. We knew we were pushing back frontiers”, but she was working at a time when men’s jobs and achievements were given more credit than women’s.

In February 1946 ENIAC was unveiled to the press, with its (male) inventors demonstrating its impressive calculating speeds and how much time could be saved compared with people performing the calculations on mechanical desk calculators. While Jean and some of the other women were in attendance (and appear in press photographs of the time), the women were not introduced, their work wasn’t celebrated, they were not always correctly identified in the photographs, and they were not even invited to the celebratory dinner after the event. As Jean said in a later interview (see the second video (YouTube) below): “We were sort of horrified!”

In December 1946 she married William Bartik (an engineer) and over the next few years was instrumental in the programming and development of other early computers. She also taught others how to program them (an early computer science teacher!). She often worked with her husband too, following him to different cities for work. However, when he took on a new role in 1951, the company’s policy was that wives were not allowed to work in the same place. Frustrated, Jean left computing for a while and also took a career break to raise her family.

In the late 1960s she returned to the field of computer science and for several years she blended her background in Maths and English, writing technical reports on the newer ‘minicomputers’ (still quite large compared to modern computers, but you could fit more of them in a room). However, the company she worked for was sold off and she was made redundant in 1985, at the age of 60. Unable to find another job in the industry – which she put down to age discrimination – she spent the rest of her career working in real estate (selling property or land). She died, aged 86, on 23 March 2011.

Jean’s contribution to computer science remained largely unknown to the wider world until 1986, when Kathy Kleiman (an author, law professor and programmer) decided to find out who the women in these photographs were and rediscovered the pioneering work of the ENIAC six.

Vimeo trailer for Kathy Kleiman’s book and documentary
YouTube video from the Computer History Museum

The ENIAC six women were Kathleen McNulty Mauchly Antonelli, Jean Jennings Bartik, Frances (Betty) Snyder Holberton, Marlyn Wescoff Meltzer, Frances Bilas Spence, and Ruth Lichterman Teitelbaum.

Further reading

Jean Bartik (Wikipedia)
ENIAC (Wikipedia)
The ENIAC Programmers Project – Kathy Kleiman’s project which uncovered the women’s role
Betty Jean Jennings Bartik (biography by the University of St Andrews)


Adapted (text added) version of Woman at a computer image by Chen from Pixabay

This blog is funded through EPSRC grant EP/W033615/1.

What’s that bird? Ask your phone – birdsong-recognition apps


by Dan Stowell, Queen Mary University of London

Could your smartphone automatically tell you what species of bird is singing outside your window? If so how?

Mobile phones contain microphones to pick up your voice. That means they should be able to pick up the sound of birds singing too, right? And maybe even decide which bird is which?

Smartphone apps exist that promise to do just this. They record a sound, analyse it, and tell you which species of bird they think it is most likely to be. But a smartphone doesn’t have the sophisticated brain that we have, evolved over millions of years to understand the world around us. A smartphone has to be programmed by someone to do everything it does. So if you had to program an app to recognise bird sounds, how would you do it? Computer scientists have devised two very different ways to do this kind of decision making, and researchers use them for all sorts of applications, from diagnosing medical problems to recognising suspicious behaviour in CCTV images. Both are used by birdsong-recognition apps you can already buy.

Robin image by Darren Coleshill from Pixabay
The sound of the European robin (Erithacus rubecula) better known as robin redbreast, from Wikipedia.

Write down all the rules

If you ask a birdwatcher how to identify a blackbird’s sound, they will tell you specific rules. “It’s high-pitched, not low-pitched.” “It lasts a few seconds and then there’s a silent gap before it does it again.” “It’s twittery and complex, not just a single note.” So if we wrote down all those rules in a recipe for the machine to follow, each rule a little program that could say “Yes, I’m true for that sound”, an app combining them could decide when a sound matches all the rules and when it doesn’t.

Young blackbird in Oxfordshire, from Wikipedia
The sound of a European blackbird (Turdus merula) singing merrily in Finland, from Wikipedia (song 1).

This is called an ‘expert system’ approach. One difficulty is that it can take a lot of time and effort to actually write down enough rules for enough birds: there are hundreds of bird species in the UK alone! Each would need lots of rules to be hand-crafted. It also needs lots of input from bird experts to get the rules exactly right. Even then it’s not always possible for people to put into words what makes a sound special. Could you write down exactly what makes you recognise your friends’ voices, and what makes them different from everyone else’s? Probably not! However, this approach can be good because you know exactly what reasons the computer is using when it makes decisions.
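Here is a minimal sketch of the expert-system idea in Python, using made-up blackbird rules. The features (pitch_hz, duration_s, complexity) are hypothetical measurements we assume have already been extracted from a recording:

```python
# Hypothetical features we assume have been measured from a recording.
recording = {"pitch_hz": 2500, "duration_s": 3.2, "complexity": 0.9}

# Each rule is a little program that answers yes or no for a sound.
def is_high_pitched(sound):
    return sound["pitch_hz"] > 2000          # "high-pitched, not low-pitched"

def right_length(sound):
    return 1.0 < sound["duration_s"] < 5.0   # "lasts a few seconds"

def is_twittery(sound):
    return sound["complexity"] > 0.7         # "twittery and complex"

BLACKBIRD_RULES = [is_high_pitched, right_length, is_twittery]

def matches(sound, rules):
    # The sound counts as a match only if every rule says yes.
    return all(rule(sound) for rule in rules)

print(matches(recording, BLACKBIRD_RULES))   # -> True
```

A real app would need a rule set like this for every species it wanted to recognise.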

This is very different from the other approach which is…

Show it lots of examples

A lot of modern systems use the idea of ‘machine learning’, which means that instead of writing rules down, we create a system that can somehow ‘learn’ what the correct answer should be. We just give it lots of different examples to learn from, telling it what each one is. Once it has seen enough examples to get it right often enough, we let it loose on things we don’t know in advance. This approach is inspired by how the brain works. We know that brains are good at learning, so why not do what they do!
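A sketch of the learning approach, assuming each recording has already been turned into a handful of numbers (the same hypothetical features as before) and that the scikit-learn library is available; real systems learn from thousands of examples and much richer features:

```python
from sklearn.ensemble import RandomForestClassifier

# Each example is a list of numbers describing one recording
# (hypothetical features: pitch in Hz, duration in s, complexity),
# labelled with the bird it actually was.
X = [
    [2500, 3.2, 0.9],  # blackbird
    [2700, 2.8, 0.8],  # blackbird
    [4100, 1.5, 0.6],  # robin
    [4300, 1.2, 0.5],  # robin
]
y = ["blackbird", "blackbird", "robin", "robin"]

model = RandomForestClassifier().fit(X, y)  # 'learn' from the examples

# Let it loose on a recording it has never heard before.
print(model.predict([[2600, 3.0, 0.85]]))   # -> ['blackbird']
```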

One difficulty with this is that you can’t always be sure how the machine comes up with its decisions. Often the software is a ‘black box’ that gives you an answer but doesn’t tell you what justifies that answer. Is it really listening to the same aspects of the sound as we do? How would we know?

On the other hand, perhaps that’s the great thing about this approach: a computer might be able to give you the right answer without you having to tell it exactly how to do that!

It means we don’t need to write down a ‘recipe’ for every sound we want to detect. If it can learn from examples, and get the answer right when it hears new examples, isn’t that all we need?

Which way is best?

There are hundreds of bird species that you might hear in the UK alone, and many more in tropical countries. Human experts take many years to learn which sound means which bird. It’s a difficult thing to do!

So which approach should your smartphone use if you want it to help identify birds around you? You can find phone apps that use one approach or another. It’s very hard to measure exactly which approach is best, because the conditions change so much. Which one works best when there’s noisy traffic in the background? Which one works best when lots of birds sing together? Which one works best if the bird is singing in a different ‘dialect’ from the examples we used when we created the system?

One way to answer the question is to provide phone apps to people and to see which apps they find most useful. So companies and researchers are creating apps using the ways they hope will work best. The market may well then make the decision. How would you decide?


This article was originally published on the CS4FN website and can also be found on pages 10 and 11 of Issue 21 of the CS4FN magazine ‘Computing sounds wild’. You can download a free PDF copy of the magazine (below), or any of our other free material at our downloads site.




This blog is funded through EPSRC grant EP/W033615/1.

Spot the difference – modelling how humans see the world


by Paul Curzon, Milan Verma and Hamit Soyel, Queen Mary University of London

Try our spot the difference puzzles set by an Artificial Intelligence …

NOTE: this page contains slowly flashing images.

A human eye with iris picked out in bright blue and overlaid with a digital drawing.
Machine eye image by intographics from Pixabay

Queen Mary researcher Milan Verma used his AI program, which modelled the way human brains see the world (our ‘vision system’), to change the details of some pictures in places where the program predicted changes should be easy to spot. Other pictures were changed in places where the AI predicted we would struggle to see the change, even when big areas were altered.

The images flash back and forth between the original and the changed version. How quickly can you see the difference between the two versions?

Spot the Difference: Challenge 1

As this image flashes, something changes. This one is predicted by our AI to be quite hard to see. How quickly can you see the difference between the two versions?

A slowly flashing image with one part that appears or disappears and a black screen in the middle.
Challenge 1 – can you see which part of the image is visible or obscured?

Once you spot it (or give up), see both the answer (linked below) and the ‘saliency maps’, which show where the model predicts your attention will be drawn to, and away from.

You can also try our second challenge that is predicted to be easier.

Spot the Difference: Challenge 2

As this image flashes, something changes. This one is predicted by our AI to be easier to see. How quickly can you see the difference between the two versions?

A slowly flashing image with one part that appears or disappears and a black screen in the middle.

Answers!

Once you’ve tried the two challenges above, head over to our answer page to see how you did.
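If you’d like to experiment with saliency yourself, here is a sketch using OpenCV’s spectral-residual saliency detector (it needs the opencv-contrib-python package, and "scene.jpg" is a hypothetical input file). It is a much simpler model than Milan’s, but it produces the same kind of output: a map of which parts of an image are predicted to attract attention.

```python
import cv2  # needs the opencv-contrib-python package

# Load any picture and compute its saliency map: brighter regions are
# the ones this (simple) model predicts will attract attention.
image = cv2.imread("scene.jpg")  # hypothetical input file
detector = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, saliency_map = detector.computeSaliency(image)

# The map comes back as floats between 0 and 1; rescale and save it.
cv2.imwrite("saliency.png", (saliency_map * 255).astype("uint8"))
```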

Further reading

This article was originally published on the CS4FN website.


This blog is funded through EPSRC grant EP/W033615/1.

Inspiring Wendy Hall


by Paul Curzon, Queen Mary University of London

This article is inspired by a keynote talk Wendy Hall gave at the ITiCSE conference in Madrid, 2008.

What inspires researchers to dedicate their lives to study one area? In the case of computer scientist Dame Wendy Hall it was a TV programme called Hyperland starring former Dr Who Tom Baker and writer Douglas Adams of Hitchhiker’s Guide to the Galaxy fame that inspired her to become one of the most influential researchers of her area.

Woman's manicured hand pointing a remote control at a large screen television on the opposite wall in a spacious modern room with white minimal furnishing.
Remote control TV image by mohamed_hassan from Pixabay

Dame Wendy is a pioneer and visionary in the area of web science, and many of her ideas have started to appear in the next generation web: the ‘great web that is yet to come’ (as Douglas Adams might put it), otherwise known as the semantic web. She has stacked up a whole bunch of accolades for her work. She is a Professor at the University of Southampton, a former President of the British Computer Society and now the first non-US President of the most influential body in computer science, the Association for Computing Machinery. She is also a Fellow of the Royal Academy of Engineering, and this year she topped it all, gaining her most impressive-sounding title for sure by being made a Dame Commander of the Order of the British Empire.

So how did that TV programme set her going?

Douglas Adams and Tom Baker acted out a vision of the future, a vision of how TV was going to change. At the time the web didn’t exist and TV was just something you sat in front of and passively watched. The future they imagined was interactive TV. TV that was personal. TV that did more than just entertain but served all your information needs.

In the programme Douglas Adams was watching TV, vegetating in front of it… and then Tom Baker appeared on Douglas’s screen. He started asking him questions… and then he stepped out of the TV screen. He introduced himself as a software agent, someone who had all the information ever put into digital format at his fingertips. More than that, he was Douglas’s personal agent. He would use that information to answer any questions Douglas had: not just bring back documents (Google-style) that had something to do with the question and leave you to work out what to do with them, but actually answer the question. He was an agent that was servant and friend, an agent whose character could even be changed to fit his master’s mood.

Wendy was inspired…so inspired that she decided she was going to make that improbable vision a reality. Reality hasn’t quite caught up yet, but she is getting there.

Most people who think about it at all believe that Tim Berners-Lee invented the idea of the web and of hypertext, the links that connect web pages together. He was the one who kick-started it into being a global reality, making it happen, but actually lots of people had been working on the same ideas in research labs round the world for years before, Wendy included, with her Microcosm hypermedia system. Tim’s version of hypermedia – interactive information – was a simple one, simple enough to get the idea off the ground. Its time is coming to an end now, though.

What is coming next? The semantic web: and it will be much more powerful. It is a version of the web much closer to that TV programme, a version where the web’s data is not just linked to other data, but where words, images, pictures and videos are all tagged with meaning: tags that the software agents of the future can use to understand them.
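To give a feel for what ‘tagged with meaning’ looks like, here is a sketch that builds a schema.org description of an event, the kind of machine-readable tag now embedded in web pages for software agents to read (the event details here are invented):

```python
import json

# A description of an event tagged with meaning, using the schema.org
# vocabulary. A software agent reading this knows it is an Event with a
# start date and a location, not just words on a page. (Details invented.)
event = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Orchestra Concert",
    "startDate": "2025-03-23T19:30",
    "location": {"@type": "Place", "name": "Town Hall"},
}

# Printed as JSON-LD, the format used to embed such tags in web pages.
print(json.dumps(event, indent=2))
```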

The structure is now there for it to happen. What is needed is for people to start to use it, to write their web pages that way, to actually make it everyday reality. Then the web programmers will be able to start innovating with new ideas, new applications that use it, and the web scientists like Wendy will be able to study it: to work out what works for people, what doesn’t and why.

Then maybe it’s your turn to be inspired and drive the next leap forward.

This article was originally published on the CS4FN website.


Adapted (text added) version of Woman at a computer image by Chen from Pixabay




This blog is funded through EPSRC grant EP/W033615/1.

Barbara Liskov: Byzantine birthdays

by Paul Curzon, Queen Mary University of London

(from the archive, originally in our special issue “The women are here”)

The scroll of a cello
Image by HeungSoon from Pixabay

You may not think of computers as argumentative, but some of them do actually bicker quite a lot, and for good reason. Many hi-tech systems we depend on rely on the right computers getting their way, and the problems involved are surprisingly tricky to get right. Barbara Liskov’s contributions to this fiendishly difficult problem of making sure the right computers win their arguments helped her scoop the world’s top computing prize in 2009.

The ACM Turing Award, which Barbara won, is the computing equivalent of a Nobel Prize. She was awarded it for more than 40 years’ work. Her early work on programming languages has been used in every significant new language that has been invented since. More recently she has moved on to problems about ‘distributed systems’: computer systems involving lots of separate computers that have to work together. That’s where the arguing comes in.

Barbara has been working on an area of distributed computing called ‘Byzantine fault tolerance’. It’s all about providing a good service despite the fact that just about anything can go wrong to the computers or the network connecting them. It’s so important because those computers could be doing anything from running an ecommerce site over the Internet to keeping an airliner in the air.

Happy birthday

Here’s a thought experiment to show you how tricky distributed computing problems can be. Alice is a cellist in an orchestra, and it turns out that the birthday of the conductor, Cassie, falls on the day they are playing a concert. Jo, the conductor’s partner, has asked Alice to arrange for the orchestra to play Happy Birthday mid-concert as a surprise. The rest of the orchestra were enthusiastic. The only trouble was that they didn’t have the chance to agree which break to do it in. In fact, no one is sure they are definitely doing it at all, and now they are all sitting in their places ready to start.

For it to work they all have to start precisely together or it will just sound horrible. If anyone starts on their own they will just look and sound silly. Can it be done, or should they all just forget the whole thing?

Alice decides to go ahead. She can tell the others it’s on by whispering to those next to her and telling them to pass the message on. As long as her message gets to enough people across the different sections it will be ok. Won’t it?

Actually no.

The problem is: how will she know enough people did get the message? It has to be passed when people aren’t playing, so some could presumably not get it in time. How will she know? If the whispers didn’t get through and she starts to play, she will be the embarrassed one.

Are you in?

That seems an easy thing to solve though – when each person gets the message they just send one back saying, “I’m in”. If she gets enough back, she knows it’s on, doesn’t she? Ahh! There the problem lies. She knows, but no one else can be sure she knows. If any are in doubt they won’t play and it will still go horribly wrong. How does she let everyone know that enough people are willing to go for it? Alice is in the same situation she was at the start! She doesn’t know it will happen for sure and neither does anyone else.

She can start whispering messages again saying that enough people have agreed but that won’t help in the end either. How does she know all the new messages get through?

Change the problem

A computer scientist might have a solution for Alice – change the problem. Following this advice, she starts by whispering to everyone that she will stand up and conduct at an appointed time. Are they in? Now all she needs to be sure of is that when she stands up, enough people have agreed to play so that she won’t look silly. The others send back their message saying ‘I’m in’, but no one else needs to know in advance whether the song is definitely going ahead. If she doesn’t stand up they won’t play. If she does, they go ahead.

Delivering a good service

General knowledge: it’s called Byzantine fault tolerance after some imaginary generals in the ancient days of the Eastern Roman Empire, whose capital was Byzantium. The original problem was about how two generals could know to attack a city at the same time.

Byzantine fault tolerance is about designing this kind of system: one that involves lots of ‘agents’ (people or computers) that have to come to an agreement about what they know and will do. The aim is for them to work together to deliver a service. That service might be for an orchestra to play Happy Birthday, but is more likely to be something like taking airline bookings over the Internet, or even deciding on action to take to keep the airliner they are flying in the air. The separate computers have to agree as a group even when some could stop working, make mistakes due to software bugs or even behave maliciously due to a virus at any point. Can a way be engineered that allows the system as a whole to still behave correctly and deliver that service?

This is the problem Barbara Liskov has been working on with Miguel Castro at MIT. Of course they weren’t thinking of birthdays and orchestras. They were interested in providing the service of looking after documents so they can be accessed anytime, anywhere. A simple computer does this with a file system. It keeps track of the names of your documents and where it has stored them in its memory. When you open a document it uses the records it has kept to go and fetch it from wherever it was put. With this kind of file system, though, if something goes wrong with your machine you could lose your documents forever.

Spread it around

A way to guard against this is to create a file system that distributes copies of each file to different computers around the Internet. When you make changes, those changes are also sent to all the other computers with copies. Then if the copy on one machine is corrupted, perhaps by a hacker or just by a disk crash, the file system as a whole can still give you the correct document. That is where the computers need to start arguing. When you ask for your document back how do all the computers with (possibly different) copies decide which is the correct, uncorrupted version? That sounds easy, but as the orchestra example shows, as soon as you create a situation where the different agents (whether human or computer) are distributed, and worse you can’t trust anyone or anything for sure, there are lots of subtle ways it can all go horribly wrong.

The way Barbara and Miguel’s solution to this problem works is similar to what Alice was doing. One computer acts as what is called the ‘primary’ (Alice played this role); it is where the request from the client (Jo) goes. The primary sends out a request for the document to all the backup machines, just like Alice’s whispered messages. All the backups reply with their copy of the document. As soon as more than some predetermined number come back with the same document, that is taken to be the good copy.
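As a toy illustration of that last step (our own sketch, not Castro and Liskov’s actual protocol, which has several more phases and defences), here is how a client might pick the good copy from the replies, assuming at most f machines can be faulty:

```python
from collections import Counter

def choose_document(replies, f):
    """Pick the copy that enough machines agree on.

    replies: the copies of the document sent back by the backups.
    f: the most machines we are prepared to believe could be faulty.
    A copy returned by at least f + 1 machines must have come from at
    least one honest machine, so it can be trusted.
    """
    copy, votes = Counter(replies).most_common(1)[0]
    return copy if votes >= f + 1 else None  # None: no quorum yet

# Three backups agree; one reply is corrupted. We tolerate f = 1 fault.
replies = ["Dear Jo...", "Dear Jo...", "Dear Jo...", "D3ar J0..."]
print(choose_document(replies, f=1))  # -> "Dear Jo..."
```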

Not so simple

Of course the detail of Barbara and Miguel’s method is a lot trickier than that. They’ve had to figure out how to cope if something goes wrong with the primary (Alice herself) to ensure that the client still gets their document. Their version also works without any synchronisation to make things happen in lockstep (suppose Alice is at the back so can’t stand up and conduct to keep things in time). There are lots of other details in Barbara and Miguel’s version too. Messages are timestamped, for example, so that the recipients can tell if a message is new or just a copy of an old one.

Practically perfect

The particularly special thing about Barbara and Miguel’s way of providing fault tolerance, though, is that it doesn’t take an excessively long time. Various people had come up with solutions before, but they were so slow no one could really use them. The new method is so fast it’s almost as if you weren’t bothering about fault tolerance at all. Better still, the fact that it doesn’t need synchronisation – no conducting – means it also works when the replicated services are on the Internet where the computers act independently and there is no way to keep them in lockstep.

Barbara’s work might never actually help with an orchestral surprise like Alice’s. However, because of it, future computers will be in a better position to shrug off their own kind turning rogue due to hackers, cosmic rays or even just plain old breakdowns. Not a bad reason to have a Byzantine argument.



EPSRC supports this blog through research grant EP/W033615/1. 

Marissa Mayer: Lemons Linking 41 Shades of Blue – A/B Testing

Closeup of a slice of lemon in negative filter textured background

by Paul Curzon, Queen Mary University of London

Google, one of the most powerful companies in the world, is famous for being founded by Larry Page and Sergey Brin, but a key person – the 20th person they employed – was engineer, programmer and believer in detail Marissa Mayer. Her attention to detail made a gigantic difference to Google’s success. She was involved in most of their successful products, from the search engine to Gmail to AdWords, and if she wasn’t convinced about a new feature or change, then it didn’t happen. When a designer suggested a new shade of blue for the links of ads, for example, she had to be persuaded. But how could she be sure she made the right decisions? She used a centuries-old idea from medicine, first used to help cure scurvy in sailors, and applied it to software design: the randomised controlled trial.

Randomised controlled trials revolutionised medicine. They could revolutionise many other aspects of our lives too, from education to prison reform, if they were used more. Computer scientists realised that, and more trials are now run on software than on medicines. It’s part of the Big Data revolution, and it is the way to avoid relying on hunches, using scientific method instead to find out what the right answer really is.

But what if …?

The problem with the way we do most things is the “what-if”. We make decisions, but never know what would have happened if we had taken the other choice. If things go well we pat ourselves on the back and tell ourselves we were right. But things might have gone even better had we only made the other decision. We will never know. However good or bad it seems, there is actually no way of knowing whether our decision was the right one, if all we do is make it. We then delude ourselves, and so keep doing bad things, over and over. That’s why illness was treated by getting leeches to suck blood for centuries!

Controlled trials overcome this. The big idea boils down to making sure you do both alternatives! Not only do you make the change, you also leave things alone too! That sounds impossible, but it’s simple. Split your population (patients, users, prisoners, students, …) into two groups at random. Apply the change to one group, but leave the other group alone. Then at the end of a suitable period, long enough so you can see any difference, compare the results. You see not only the result of making the change, but also what would have happened if you didn’t. Only then, with hard data about the consequences of both possibilities, do you take the decision.
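The random split at the heart of a controlled trial is simple to code. A minimal sketch, assuming the population is just a list of user IDs:

```python
import random

def split_into_groups(population):
    """Randomly split a population into a trial group and a control group."""
    people = list(population)
    random.shuffle(people)  # the random split is what makes the trial fair
    half = len(people) // 2
    return people[:half], people[half:]

# Hypothetical user IDs: one group gets the change, the other is left alone.
treatment, control = split_into_groups(range(1000))
```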

The first medical trial like this involved sailors who were ill with scurvy – a disease that killed more wartime sailors than enemy action in the 18th century. Scottish Navy surgeon James Lind waited until his ship had been at sea long enough for many sailors to get scurvy. He then split a dozen of them into six pairs: one pair had oranges and lemons on top of the normal food, and the others were given different alternatives like cider or vinegar instead. Within a week, the two eating fruit had virtually recovered. More to the point, there was no difference in any of the others, apart from an improvement in the pair given cider. Eating fruit was clearly the right decision to cure scurvy. All new drugs are now tested in trials like this to find out whether they really do make patients better. Because you know what happens to those not given the new treatment, you know any improvement wouldn’t have happened anyway.

A bunch of lemons turned blue
Lemons image by Richard John from Pixabay – colour changed to blue

So how do computer scientists use this sort of trial? The way Marissa Mayer’s team did it is a classic example. One of Google’s designers suggested they use a slightly different shade of blue for the links on ads in Google’s mail program. Rather than take his word that it was an improvement, they ran a trial. They created a version of the program with multiple possible colours for the links, each a different shade of blue. They then split all the users of the program into groups, gave each group a different shade of blue for their links, and tracked the results. One particular shade led to more clicks on the ads than any other. That was the shade Marissa chose (and it wasn’t the shade the designer had suggested!)
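A sketch of how the results of such a trial might be tallied (the shade names and counts here are invented): track views and clicks per shade, then compare click-through rates.

```python
# Invented trial results: how many users saw each shade of blue and how
# many of them clicked an ad link.
results = {
    "blue_A": {"views": 100_000, "clicks": 3_100},
    "blue_B": {"views": 100_000, "clicks": 3_450},
    "blue_C": {"views": 100_000, "clicks": 3_200},
}

def click_rate(stats):
    return stats["clicks"] / stats["views"]

best = max(results, key=lambda shade: click_rate(results[shade]))
print(best, f"{click_rate(results[best]):.2%}")  # -> blue_B 3.45%
```

A real analysis would also run a statistical test to check that the winning shade’s lead is bigger than could happen by chance.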

Software trials like this are called A/B testing. They have become the mainstay of hi-tech companies wanting an edge, and they lead to a new way of developing software. Rather than getting a perfect product at the outset, you quickly get something basic that works well enough. Then you set to work running trials on lots of small details, making what are called ‘marginal gains’, as soon as possible. One small detail may not make a big difference, but when you pile them up, each known to be a definite improvement because of its trial, your software improves very quickly. Trials can give better results than intelligent design!

Does it make a difference? Well, that one decision of Marissa’s team about the shade of blue supposedly made Google $200 million a year, as a result of more people clicking on ads. Google now run tens of thousands of trials like this each year. Add up the benefits of lots and lots of small improvements and you get one of the most powerful companies on the planet.

Little Gains in Life

The idea of developing software through marginal gains is actually based on the process used by nature: evolution by natural selection. Each species of animal seems perfectly designed for its environment, not because it was designed, but because only the fittest individuals survive to have babies. Any small improvement in a baby that gives it a better chance to survive means the genes responsible for that improvement are passed on. Over many generations the marginal gains add up to give creatures perfectly adapted to their environment.


This article was originally published on the CS4FN website and a copy can also be found on page 22-23 of our free magazine celebrating women in computer science, which you can download as a PDF below along with all of our free material.




Adapted (text added) version of Woman at a computer image by Chen from Pixabay

This blog is funded through EPSRC grant EP/W033615/1.

Opinions, Opinions, Opinions

by Paul Curzon, Queen Mary University of London

Based on a talk by Jiayu Song at QMUL, March 2023

Multicoloured speech bubbles with a colourful cross-hairs target in the centre

Social media is full of people’s opinions, whether about politics, movies, things they bought, celebrities or just something in the news. However, sometimes there is just too much of it. Sometimes, you just want an overview without having to read all the separate comments yourself. That is where programs that can summarise text come in. The idea is that they take lots of separate opinions about a topic and automatically give you a summary. It is not an easy problem, however, and while systems exist, researchers continue to look for better ways.

That is what Queen Mary PhD student Jiayu Song is working on with her supervisor, Professor Maria Liakata. Some sources of opinions are easier to work with than others. Reviews, for example, whether of movies, restaurants or gadgets, tend to be more structured, so more alike in the way they are written. Social media posts, on the other hand, are unlikely to have any common structure. What is written is much more ‘noisy’, and that makes different opinions harder to summarise. Jiayu is particularly interested in summarising these noisy social media posts, so has set herself the harder of the two problems.

What does distance of meaning mean?

Think of the posts to be summarised as points scattered on a piece of paper. Her work is based on the idea that there is a hypothetical point (so a hypothetical social media post) in the middle of those other points (a kind of average point), and the task is to find that point, and so the summary post. If they were points on paper we could use geometry to find a central point that minimises the total distance to all of them. For written text we first need to decide what we actually mean by ‘distance’, as it is no longer something we can measure with a ruler! For text we want some idea of distance in meaning: we want a post that is as close as possible to those it is summarising, but by “close” here we mean close in meaning. What does distance of meaning mean? ‘King’ and ‘Queen’, for example, might be the same distance apart in meaning as ‘boy’ and ‘girl’, whereas ‘tree’ is further away from all of them.

King and Queen for example might be
the same distance apart as boy and girl in meaning

Jiayu’s approach is based on finding a middle point for posts using a novel (for this purpose) way of determining distance called the Wasserstein distance. It gives a way of calculating distances between probability distributions. Imagine you collected the marks people scored in a test and plotted a graph of how many got each mark. That would give a distribution of marks (likely a hump-like curve known as a normal distribution). This could be used to estimate the distribution of marks you would get from a different class. If we did that for lots of different classes, each would have a slightly different distribution (so a slightly different curve when plotted). A summary of the different distributions would be a curve as similar (so as “close”) as possible to all of them, and so a better predictor of what new classes might score.
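The (one-dimensional) Wasserstein distance is easy to experiment with, as SciPy computes it directly. A sketch using made-up mark distributions like those above:

```python
from scipy.stats import wasserstein_distance

# Invented marks from three classes sitting the same test.
class_a = [55, 60, 62, 65, 70, 71, 75]
class_b = [54, 59, 63, 66, 69, 72, 74]
class_c = [30, 35, 40, 45, 50, 55, 60]

# The Wasserstein distance measures how much 'work' it would take to
# morph one distribution into the other.
print(wasserstein_distance(class_a, class_b))  # small: similar classes
print(wasserstein_distance(class_a, class_c))  # larger: quite different
```

Jiayu’s system works not with marks but with distributions of meanings, in many dimensions, but the notion of distance is the same.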

From distance to distribution

You could do a similar thing to find the distribution of words in a book, counting how often each word arises and then plotting a curve of how common the different words are. That distribution gives the probability of different words appearing, so could be used to predict how likely a given word is in some new book. For summarising, though, it’s not words that are of interest but the meanings of words or phrases, as we want to summarise the meaning whatever words were actually used. If the same thing is expressed using different words, then it should count as the same thing: “The Queen of the UK died today.” and “Today, the British monarch passed away.” both express the same meaning. And it is not the distance between individual word meanings we want, but between distributions of those meanings. Jiayu’s method is therefore first based on extracting the meanings of the words and working out the distribution of those meanings in the posts. It turns out to be useful to create two separate representations: one of these distributions of meanings, and another representing the syntax (the structure of the words actually used), which helps put together the actual final written summary.

Once that encoding stage has been done, turning the texts to be summarised into distributions, Jiayu’s system uses the Wasserstein distance to calculate a new distribution of meanings that represents the central point of all those being summarised. Even given a way to calculate distances there are different versions of what is meant by the “central” point, and Jiayu uses a version that helps with the next stage. That stage uses a neural-network-based system, like those used in machine learning more generally, to convert the summary distribution back into readable text. That summary is the final output of the program.

Does it work?

She has run experiments comparing the summaries from her approach to those from existing systems. To do this she took three existing datasets of opinions: one from Twitter combining opinions about politics and covid, a second containing posts from Reddit about covid, and a final one of reviews posted on Amazon. A panel of three experts then individually compared the summaries from Jiayu’s system with those from two existing summarising systems. The experts were also given “gold standard” summaries written by humans to judge all the summaries against. They had to rate which system produced the best and worst summary for each of a long series of summaries produced from the datasets. The experts’ ratings suggested that Jiayu’s system preserved meaning better than the others, though it did less well in other categories such as how fluent the output was. Jiayu also found a difference when rating the more structured Amazon reviews compared to the noisier social media posts: in the two cases a different approach was needed to decode the generated summary back into actual text, based on the extra syntax representation created.

Systems like Jiayu’s, once perfected, could have lots of uses: they could help journalists quickly get on top of opinions being posted about a breaking story, help politicians judge the mood of the people about their policies, or just help the rest of us decide which movie to watch or whether to buy some new gadget.

Perhaps you have an opinion on whether that would be useful or not?



EPSRC supports this blog through research grant EP/W033615/1.