Celebrating Jean Bartik: 1940s programmer

Two of the ENIAC programmers preparing the computer for Demonstration Day in February 1946. “U.S. Army Photo” from the archives of the ARL Technical Library. Left: Betty Jennings (later Bartik); right: Frances Bilas (later Spence). Image via Wikipedia, public domain.

Jean Bartik (born Betty Jean Jennings) was one of six women who programmed “ENIAC” (the Electronic Numerical Integrator and Computer), one of the earliest electronic programmable computers. The work she and her colleagues did in the 1940s had a huge impact on computer science; however, their contribution went largely unrecognised for 40 years.

Jean Bartik – born 27 December 1924; died on this day, 23 March 2011

Born in Missouri, USA, in December 1924 to a family of teachers, Betty (as she was then known) showed promise in mathematics, graduating from her high school in the summer of 1941, aged 16, with the highest marks in maths ever seen at her school. She began her degree in Maths and English at her local teachers’ college (now Northwest Missouri State University) but everything changed dramatically a few months in when the US became involved in the Second World War. The men (teachers and students) were called up for war service, leaving a dwindling department, and her studies were paused, resuming only in 1943 when retired professors were brought in to teach. She graduated in January 1945, the only person in her year to graduate in Maths.

Although her family encouraged her to become a local maths teacher she decided to seek more distant adventures. The University of Pennsylvania in Philadelphia (~1,000 miles away) had put out a call for people with maths skills to help with the war effort; she applied and was accepted. Along with over 80 other women she was employed to calculate, using advanced maths including differential calculus, accurate trajectories of bullets and bombs (ballistics) for the military. She and her colleagues were ‘human computers’ (people who did calculations, before the word meant what it does today), creating range tables: columns of information that told the US Army where to point their guns to be sure of hitting their targets. This was complex work that had to take account of weather conditions as well as more obvious things like distance and the size of the gun barrel.

Even with 80-100 women working on every possible combination of gun size and angle it still took over a week to generate one data table, so the US Army was obviously keen to speed things up as much as possible. In 1943 it had given funding to John Mauchly (a physicist) and John Presper Eckert (an electrical engineer) to build a programmable electronic calculator – ENIAC – which would automate the calculations and give a huge speed advantage. By 1945 the enormous new machine, which took up a room (as computers tended to do in those days), consisted of several thousand vacuum tubes, weighed 30 tonnes and was held together by several million soldered joints. It was programmed with punched cards: holes punched at different positions in each card allowed a current to pass (or not pass, if no hole was present) through a particular set of cables connected through a plugboard (like an old-fashioned telephone exchange).

From the 100 or so women now working as human computers in the department, six were selected to become the machine’s operators – an exceptional role. There were no manuals available and ‘programming’, as we know it today, didn’t yet exist – it was much more physical. Not only did the ‘ENIAC six’ have to correctly wire each cable, they had to fully understand the machine’s underlying blueprints and electronic circuits to make it work as expected. Repairs could involve crawling into the machine to fix a broken wire or vacuum tube.

World War 2 ended in September 1945, before ENIAC was brought into full service, but being programmable (which meant rewiring the cables) it would soon be put to other uses. Jean really enjoyed her time working on ENIAC and said later that she’d “never since been in as exciting an environment. We knew we were pushing back frontiers”, but she was working at a time when men’s jobs and achievements were given more credit than women’s.

In February 1946 ENIAC was unveiled to the press, with its (male) inventors demonstrating its impressive calculating speeds and how much time could be saved compared with people performing the calculations on mechanical desk calculators. While Jean and some of the other women were in attendance (and appear in press photographs of the time), the women were not introduced, their work wasn’t celebrated, they were not always correctly identified in the photographs, and they were not even invited to the celebratory dinner after the event. As Jean said in a later interview: “We were sort of horrified!”

In December 1946 she married William Bartik (an engineer) and over the next few years was instrumental in the programming and development of other early computers. She also taught others how to program them (an early computer science teacher!). She often worked with her husband too, following him to different cities for work. However, her husband took on a new role in 1951 and the company’s policy was that wives were not allowed to work in the same place. Frustrated, Jean left computing for a while and also took a career break to raise her family.

In the late 1960s she returned to the field of computer science and for several years she blended her background in Maths and English, writing technical reports on the newer ‘minicomputers’ (still quite large compared to modern computers, but you could fit more of them in a room). However, the company she worked for was sold off and she was made redundant in 1985 at the age of 60. She couldn’t find another job in the industry, which she put down to age discrimination, and she spent her remaining career working in real estate (selling property or land). She died, aged 86, on 23 March 2011.

Jean’s contribution to computer science remained largely unknown to the wider world until 1986, when Kathy Kleiman (an author, law professor and programmer) decided to find out who the women in these photographs were and rediscovered the pioneering work of the ENIAC six.

The ENIAC six women were Kathleen McNulty Mauchly Antonelli, Jean Jennings Bartik, Frances (Betty) Snyder Holberton, Marlyn Wescoff Meltzer, Frances Bilas Spence, and Ruth Lichterman Teitelbaum.

Jo Brodie, Queen Mary University of London.

EPSRC supports this blog through research grant EP/W033615/1.

What’s that bird? Ask your phone – birdsong-recognition apps

Could your smartphone automatically tell you what species of bird is singing outside your window? If so how?

Mobile phones contain microphones to pick up your voice. That means they should be able to pick up the sound of birds singing too, right? And maybe even decide which bird is which?

Smartphone apps exist that promise to do just this. They record a sound, analyse it, and tell you which species of bird they think it is most likely to be. But a smartphone doesn’t have the sophisticated brain that we have, evolved over millions of years to understand the world around us. A smartphone has to be programmed by someone to do everything it does. So if you had to program an app to recognise bird sounds, how would you do it? Computer scientists have devised two very different ways to do this kind of decision making, and they are used by researchers for all sorts of applications, from diagnosing medical problems to recognising suspicious behaviour in CCTV images. Both ways are used by birdsong-recognition apps you can already buy.

The sound of the European robin (Erithacus rubecula), better known as robin redbreast. Recorded by Vladimir Yu. Arkhipov, Arkhivov. CC BY-SA 3.0 via Wikimedia.

Write down all the rules

Blackbird singing. Image by Ian Lindsay from Pixabay.

If you ask a birdwatcher how to identify a blackbird’s sound, they will tell you specific rules. “It’s high-pitched, not low-pitched.” “It lasts a few seconds and then there’s a silent gap before it does it again.” “It’s twittery and complex, not just a single note.” So if we wrote down all those rules in a recipe for the machine to follow, each rule a little program that could say “Yes, I’m true for that sound”, an app combining them could decide when a sound matches all the rules and when it doesn’t.

This is called an ‘expert system’ approach. One difficulty is that it can take a lot of time and effort to actually write down enough rules for enough birds: there are hundreds of bird species in the UK alone! Each would need lots of rules to be hand crafted. It also needs lots of input from bird experts to get the rules exactly right. Even then it’s not always possible for people to put into words what makes a sound special. Could you write down exactly what makes you recognise your friends’ voices, and what makes them different from everyone else’s? Probably not! However, this approach can be good because you know exactly what reasons the computer is using when it makes decisions.
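To make that concrete, here is a minimal sketch (not taken from any real app) of what hand-written rules might look like in Python. It assumes we already have simple measurements of a recorded sound – its pitch, how long it lasts and how many distinct notes it contains – and every threshold is invented purely for illustration:

```python
# A toy 'expert system' for one bird: each rule is a little program that
# can say "Yes, I'm true for that sound". A sound only counts as a
# blackbird if every rule agrees. All thresholds here are made up.

def is_high_pitched(sound):
    return sound["pitch_hz"] > 2000          # high-pitched, not low-pitched

def lasts_a_few_seconds(sound):
    return 1.0 < sound["duration_s"] < 5.0   # a phrase, then a silent gap

def is_twittery(sound):
    return sound["distinct_notes"] > 5       # complex, not a single note

BLACKBIRD_RULES = [is_high_pitched, lasts_a_few_seconds, is_twittery]

def matches_blackbird(sound):
    """True only if the sound passes every hand-written rule."""
    return all(rule(sound) for rule in BLACKBIRD_RULES)

recording = {"pitch_hz": 2500, "duration_s": 3.2, "distinct_notes": 9}
print(matches_blackbird(recording))  # True
```

Every threshold above would need an expert birdwatcher to get right, for every species, which is exactly why building these systems takes so much effort.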

The sound of a European blackbird (Turdus merula) singing merrily in Finland, from Wikipedia (song 1). Public domain via Wikimedia.

This is very different from the other approach which is…

Show it lots of examples

A lot of modern systems use the idea of ‘machine learning’, which means that instead of writing rules down, we create a system that can somehow ‘learn’ what the correct answer should be. We just give it lots of different examples to learn from, telling it what each one is. Once it has seen enough examples to get it right often enough, we let it loose on things we don’t know in advance. This approach is inspired by how the brain works. We know that brains are good at learning, so why not do what they do!
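Here is one way to picture the idea, as a minimal sketch rather than how any particular app really works. It uses the simplest possible ‘learning’ rule, nearest neighbour: store labelled examples, then classify a new sound as whatever example it most resembles. The features and numbers are invented; real systems learn from rich descriptions of thousands of recordings:

```python
# Toy 'learning from examples': each example sound is described by two
# invented features (pitch in kHz, notes per second) plus the right answer.
# A new sound gets the label of the nearest known example (1-nearest-neighbour).
import math

examples = [
    ((2.5, 4.0), "blackbird"),
    ((2.7, 3.5), "blackbird"),
    ((4.0, 8.0), "robin"),
    ((4.2, 7.5), "robin"),
]

def classify(features):
    """Return the label of the closest labelled example."""
    _, label = min(examples, key=lambda ex: math.dist(ex[0], features))
    return label

print(classify((3.9, 7.8)))  # 'robin' - it sits nearest the robin examples
```

Notice that no one wrote down a rule about pitch or twitteriness: given enough good examples, the system draws its own boundaries.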

One difficulty with this is that you can’t always be sure how the machine comes up with its decisions. Often the software is a ‘black box’ that gives you an answer but doesn’t tell you what justifies that answer. Is it really listening to the same aspects of the sound as we do? How would we know?

On the other hand, perhaps that’s the great thing about this approach: a computer might be able to give you the right answer without you having to tell it exactly how to do that!

It means we don’t need to write down a ‘recipe’ for every sound we want to detect. If it can learn from examples, and get the answer right when it hears new examples, isn’t that all we need?

Which way is best?

There are hundreds of bird species that you might hear in the UK alone, and many more in tropical countries. Human experts take many years to learn which sound means which bird. It’s a difficult thing to do!

So which approach should your smartphone use if you want it to help identify birds around you? You can find phone apps that use one approach or another. It’s very hard to measure exactly which approach is best, because the conditions change so much. Which one works best when there’s noisy traffic in the background? Which one works best when lots of birds sing together? Which one works best if the bird is singing in a different ‘dialect’ from the examples we used when we created the system?

One way to answer the question is to provide phone apps to people and to see which apps they find most useful. So companies and researchers are creating apps using the ways they hope will work best. The market may well then make the decision. How would you decide?

Dan Stowell, Queen Mary University of London

This blog is funded by EPSRC on research agreement EP/W033615/1.


Inspiring Wendy Hall

This article is inspired by a keynote talk Wendy Hall gave at the ITiCSE conference in Madrid, 2008.

What inspires researchers to dedicate their lives to the study of one area? In the case of computer scientist Dame Wendy Hall it was a TV programme called Hyperland, starring former Dr Who Tom Baker and writer Douglas Adams of Hitchhiker’s Guide to the Galaxy fame, that set her on the path to becoming one of the most influential researchers in her field.

A pioneer and visionary in the area of web science, Dame Wendy has seen many of her ideas start to appear in the next generation web: the ‘great web that is yet to come’ (as Douglas Adams might put it), otherwise known as the semantic web. She has stacked up a whole bunch of accolades for her work. She is a Professor at the University of Southampton, a former President of the British Computer Society and now the first non-US President of the most influential body in computer science, the Association for Computing Machinery. She is also a Fellow of the Royal Academy of Engineering, and this year she topped it all, gaining her most impressive-sounding title for sure, by being made a Dame Commander of the British Empire.

So how did that TV programme set her going?

Douglas Adams and Tom Baker acted out a vision of the future, a vision of how TV was going to change. At the time the web didn’t exist and TV was just something you sat in front of and passively watched. The future they imagined was interactive TV. TV that was personal. TV that did more than just entertain, serving all your information needs.

In the programme Douglas Adams was watching TV, vegetating in front of it…and then Tom Baker appeared on Douglas’s screen. He started asking him questions…and then he stepped out of the TV screen. He introduced himself as a software agent, someone who had all the information ever put into digital format at his fingertips. More than that he was Douglas’s personal agent. He would use that information to answer any questions Douglas had. Not just to bring back documents (Google-style) that had something to do with the question and leave you to work out what to do with it all, but actually answer the question. He was an agent that was servant and friend, an agent whose character could even be changed to fit his master’s mood.

Wendy was inspired…so inspired that she decided she was going to make that improbable vision a reality. Reality hasn’t quite caught up yet, but she is getting there.

Most people who think about it at all believe that Tim Berners-Lee invented the idea of the web and of hypertext, the links that connect web pages together. He was the one who kick-started it into being a global reality, but in fact lots of people had been working on the same ideas in research labs around the world for years before, Wendy included, with her Microcosm hypermedia system. Tim’s version of hypermedia – interactive information – was a simple one, simple enough to get the idea off the ground. Its time is coming to an end now, though.

What is coming next? The semantic web: and it will be much more powerful. It is a version of the web much closer to that TV programme, a version where the web’s data is not just linked to other data but where words, images, pictures and videos are all tagged with meaning: tags that the software agents of the future will be able to use to understand them.

The structure is now there for it to happen. What is needed is for people to start to use it, to write their web pages that way, to actually make it everyday reality. Then the web programmers will be able to start innovating with new ideas, new applications that use it, and the web scientists like Wendy will be able to study it: to work out what works for people, what doesn’t and why.

Then maybe it’s your turn to be inspired and drive the next leap forward.

Paul Curzon, Queen Mary University of London


This blog is funded by EPSRC on research agreement EP/W033615/1.


Marissa Mayer: Lemons Linking 41 Shades of Blue – A/B Testing

A bunch of lemons turned blue. Lemons image by Richard John from Pixabay, colour changed to blue.

Google, one of the most powerful companies in the world, is famous for being founded by Larry Page and Sergey Brin, but a key person, the 20th person employed, was engineer, programmer and believer in detail, Marissa Mayer. Her attention to detail made a gigantic difference to Google’s success. She was involved in most of their successful products, from the search engine to Gmail to AdWords, and if she wasn’t convinced about a new feature or change, then it didn’t happen. When a designer suggested a new shade of blue for the links of ads, for example, she had to be persuaded. But how could she be sure she made the right decisions? She used a centuries-old idea from medicine, first used to help cure scurvy in sailors, and applied it to software design: the randomized controlled trial.

Randomized controlled trials revolutionized medicine. They could revolutionize many other aspects of our lives too, from education to prison reform, if they were used more. Computer scientists realized that, and more trials are now run on software than on medicines. It’s part of the Big Data revolution: a way to avoid relying on hunches and instead rely on scientific method to find out what the right answer really is.

But what if …?

The problem with the way we do most things is “what if”. We make decisions, but never know what would have happened if we had taken the other choice. If things go well we pat ourselves on the back and tell ourselves we were right. But things might have gone even better had we only made the other decision. We will never know. However good or bad it seems, there is no way of knowing whether our decision was actually the right one if all we do is make it. We then delude ourselves, and so keep doing bad things over and over. That’s why, for centuries, illness was treated by getting leeches to suck blood!

Controlled trials overcome this. The big idea boils down to making sure you do both alternatives! Not only do you make the change, you also leave things alone too! That sounds impossible, but it’s simple. Split your population (patients, users, prisoners, students, …) into two groups at random. Apply the change to one group, but leave the other group alone. Then at the end of a suitable period, long enough so you can see any difference, compare the results. You see not only the result of making the change, but also what would have happened if you didn’t. Only then, with hard data about the consequences of both possibilities, do you take the decision.

The first medical trial like this involved sailors who were ill with scurvy – a disease that killed more wartime sailors than enemy action in the 18th century. Scottish Navy surgeon James Lind waited until his ship had been at sea long enough for many sailors to get scurvy. He then split a dozen of them into six pairs: one pair had oranges and lemons on top of the normal food, and the others were given different alternatives like cider or vinegar instead. Within a week, the two eating fruit had virtually recovered. More to the point, there was no difference in any of the others apart from an improvement in the pair given cider. Eating fruit was clearly the right decision to cure scurvy. All new drugs are now tested in trials like this to find out if they really do make patients better or not. Because you know what happens to those not given the new treatment, you know any improvement wouldn’t have happened anyway.

So how do computer scientists use this sort of trial? The way Marissa Mayer’s team did it is a classic example. One of Google’s designers was suggesting they use a slightly different shade of blue for the links on ads in Google’s mail program. Rather than take his word that it was an improvement, they ran a trial. They created a version of the program with multiple possible colours for the links, each a different shade of blue. They then split all the users of the program into groups and gave each group a different shade of blue for their links, tracking the results. One particular shade led to more clicks on the ads than any other. That was the shade Marissa chose (and it wasn’t the shade the designer had suggested!).

Software trials like this are called A/B Testing. They have become the mainstay of hi-tech companies wanting an edge. It actually leads to a new way of developing software. Rather than get a perfect product at the outset you get something basic that works well enough quickly. Then you set to work running trials on lots of small details, making what are called ‘marginal gains’, as soon as possible. One small detail may not make a big difference, but when you pile them up, each known to be a definite improvement because of the trial, then very quickly your software improves. Trials can give better results than intelligent design!
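The mechanics are simple enough to sketch in a few lines of Python. This is a toy simulation, not Google’s system: the ‘true’ click rates are invented so the demo has something to measure, and in real life they are exactly what you don’t know:

```python
# Toy A/B test: randomly split simulated users into two groups, show each
# group a different link shade, count the clicks, then compare the rates.
import random

random.seed(42)
TRUE_CLICK_RATE = {"shade_A": 0.030, "shade_B": 0.033}  # unknown in reality!

shown = {"shade_A": 0, "shade_B": 0}
clicks = {"shade_A": 0, "shade_B": 0}

for user in range(100_000):
    group = random.choice(["shade_A", "shade_B"])   # the random split
    shown[group] += 1
    if random.random() < TRUE_CLICK_RATE[group]:    # did they click the ad?
        clicks[group] += 1

for group in shown:
    print(f"{group}: {clicks[group]} clicks from {shown[group]} users "
          f"({clicks[group] / shown[group]:.2%})")
```

With enough users the measured rates home in on the true ones, so the better shade wins on evidence rather than on anyone’s hunch, and the same harness can be reused for the next marginal gain, and the next.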

Does it make a difference? Well, that one decision by Marissa’s team about a shade of blue supposedly made Google $200 million a year, as a result of more people clicking on ads. Google now runs tens of thousands of trials like this each year. Add up the benefits of lots and lots of small improvements and you get one of the most powerful companies on the planet.

Little Gains in Life

The idea of developing software through marginal gains is actually based on the process used by nature: evolution by natural selection. Each species of animal seems perfectly designed for its environment, not because they were designed, but because only the fittest individuals survive to have babies. Any small improvement in a baby that gives it a better ability to survive means the genes responsible for that improvement are passed on. Over many generations the marginal gains add up to give creatures perfectly adapted to their environment.

Paul Curzon, Queen Mary University of London


This blog is funded by EPSRC on research agreement EP/W033615/1.


Joyce Wheeler: The Life of a Star

Exploding star. Image by Dieter from Pixabay.

The first computers transformed the way research is done. One of the very first computers, EDSAC (Electronic Delay Storage Automatic Calculator), contributed to the work of three Nobel prize winners: in Physics, Chemistry and Medicine. Astronomer Joyce Wheeler was one of the first researchers to make use of the potential of computers to aid the study of other subjects in this way. She was a Cambridge PhD student in 1954, investigating the nuclear reactions that keep stars burning. This involved doing lots of calculations to work out the changing behaviour and composition of a star.

Joyce had seen EDSAC on a visit to the university before starting her PhD, and learnt to program it from its basic programming manual so that she could get it to do the calculations she needed. She would program by day and let EDSAC number-crunch using her programs every Friday night, leaving her to work on the results in the morning and then start the programming for the following week’s run. EDSAC not only let her do, accurately, calculations that would otherwise have been impossible, it also meant she could run calculations over and over, tweaking what was done, refining the accuracy of the results, and checking the equations quickly with sample numbers. As a result EDSAC helped her to estimate the age of stars.

Paul Curzon, Queen Mary University of London




This blog is funded by EPSRC on research agreement EP/W033615/1.


The Devil is in the Detail: Lessons from Animal Welfare? (Temple Grandin)

What can Computer Scientists learn from a remarkable woman and the improvements she made to animal welfare and the meat processing industry?

Temple Grandin is an animal scientist – an animal welfare specialist and a remarkable innovator on top. She has extraordinary abilities that allow her to understand animals in ways others can’t, and as a result her work has reduced the suffering of countless farm animals. She has designed equipment, for example, to restrain animals, making it easier to give them shots because, in contrast to the equipment it replaces, it does not cause the animals distress as they enter. By being able to see the detail that an animal perceives she is able to design to overcome the problems. Paradoxically perhaps for someone who cares so much about animals, she works with slaughterhouses – meat processing factories like those of McDonald’s.

Her aim, given people do eat meat, is to ensure the animals are treated humanely throughout the whole process, from rearing to death. Her work has been close to miraculous in the changes she has brought about to ensure that farm animals do not suffer. She is good for business too. If cattle are spooked by something as they enter the processing factory (also known as a ‘plant’), whether by the glint of metal or a deep shadow, the plant’s efficiency drops. Fewer animals are processed per hour and that is a big problem for managers.

As a result of her work she has turned round plants, both in welfare terms and in terms of rescuing plants that might otherwise have been shut down. Suddenly plants she audits are treating their livestock humanely.

See the Bigger Picture

Where do Temple’s extraordinary abilities come from? In fact she was originally labelled as being mentally disabled. She is actually autistic. As a result her brain doesn’t quite work the way most people’s do. Because of these brain differences, autistic people often have difficulties socialising with others. They can find it very hard to understand the nuances of human-human communication that the rest of us take for granted. This is in part because autistic people perceive the world differently. A non-autistic person misses vast amounts of the detail in front of their eyes; instead just a bigger picture of what they are seeing is passed to their conscious selves. An autistic person doesn’t have that sub-conscious ability to filter out detail, but instead perceives every small thing all at once. That is why autistic people can sometimes be overwhelmed by their surroundings, finding the world too much to cope with. They think in terms of a series of pictures full of detail, not abstractly in words.

Temple Grandin argues that that is what makes her special when it comes to understanding farm animals. In some ways they see the world very much like she does. Just as a cow does, she notices the shadows and the glint of metal, the bright patch on the floor from the overhead lights or the jacket laid over the fence that is spooking it. The plant managers and animal handlers don’t even register them never mind see them as a problem.

Who ya gonna call?

Because of this ability to quickly spot the problems everyone else has missed, Temple gained a reputation for being the person to call when a problem seemed intractable. She has also turned it into a career as an animal welfare auditor, checking processing plants to ensure their standards are sufficiently high. This is where she has helped force through the biggest improvements, and it all boils down to checklists.


Tick that box

Checking that lists of guidelines are being adhered to is a common way to audit quality in many areas of life. Checklists are used in a computer science context as checks for usability (for example, that a new version of some application is easy to use) and accessibility (could a blind person, or for that matter someone who is autistic, successfully use a website, say). Checklists tend to be very long. After all, it must be the case that the more you are checking, the higher the quality of the result, mustn’t it? Surprisingly that turns out not always to be true! That is why Temple Grandin has been so successful. Rather than have a checklist with hundreds of things to check, she boiled her own set of questions down to just 10.

Traditional animal welfare audits have checklist questions such as “Is the flooring slippery?” and “Is the electric prod used as little as possible?”. Even apart from the number of questions to work through, this kind of checklist can be very hard to follow, not least because of its vagueness.

Ouch!

Temple’s checklist includes questions like: “Do all animals remain unconscious after being stunned?” and “Do no more than 3% of animals vocalise during handling or stunning?” (a “Moo” in this situation means “Ouch”). They are precise, with little room for dispute – it isn’t left to the inspector’s judgement. That also means everyone knows the target they are working towards. The fact that there are only 10 also means it is easy for everyone involved to know them all well. Perhaps most importantly, they do not focus on the state of the factory, or the way things are done. Instead, they focus on the end results – that animals are humanely treated. The point is that one item covers a multitude of sins that could be causing it. If too many animals are crying out in pain then you have to fix ALL the causes, even if it is something new that no-one thought of putting on a checklist before.

Temple’s 10 point approach to checklists can apply to more than just animal welfare of course. The principles behind it could just as well apply to other areas like usability and accessibility of websites.

Some usability evaluation techniques do follow similar principles. Cognitive Walkthrough, a method of auditing that systems are easy to use on first encounter, has some of the features of this kind of approach. The original version involved a longish set of questions that an expert was to ask themselves about a system under evaluation. After early trials the developers of the method (Cathleen Wharton, John Rieman, Clayton Lewis and Peter Polson) quickly realised this wasn’t very practical and replaced it with a 4-question version. It has since even been replaced by a 3-question walkthrough. One of the questions, to be asked of each step in achieving a task, is: “Will a user know what to try and do at this point?” This has some of the flavour of the Grandin approach – it is about the end result, not about some specific thing going wrong.

Let’s look at accessibility. Currently, where web designers think about it at all (UK law requires them to), the long checklist approach tends to be followed. Typical items to check are things like “Ensure that all information conveyed with colour is also available without colour”. Automatic systems are often used to do audits. That is good in one sense, as the criteria then have to be very precise for a mere computer to make the decision. On the other hand it encourages items in the checklist to be just things a computer can check. It also encourages the long-list-of-fine-detail approach that Temple rejected. Worse, it can lead to people conforming to the checklist without deeply understanding what the point actually is. A classic example is a web designer adding as the last item on a web page “If you are partially sighted click here”. As far as an automatic checker is concerned they may have done everything right – even providing alternative facilities that are clearly available (if you can see them). A partially sighted person, however, would only get to that instruction on the screen after they had struggled through the rest of the page. The designer got the right idea but missed the point.

Temple Grandin’s approach would suggest instead having checklists that ask about the outcomes of using the page: “Do 97% of partially-sighted people successfully complete their objective in using the site?”, for example. That is why ‘user testing’ is so important, at least as one of the evaluation approaches you follow. User testing involves people from a wide variety of backgrounds actually trying out your prototype software or web pages before they are released. It allows you to focus on the big picture. Of course, if you are trying to ensure a web page is accessible, your users must include people with different kinds of disabilities.


The Big Picture

One of Temple Grandin’s main messages is that the big advantage that arises as a result of her autism is that she thinks in concrete pictures not in abstract words. Whilst thinking verbally is good in some situations it seems to make us treat small things as though they were just as important as the big issues.

So whatever you are doing, whether looking after animals or designing accessible websites, don’t get lost in the detail. Focus on the point of it all.

Paul Curzon, Queen Mary University of London




EPSRC supports this blog through research grant EP/W033615/1.

100,000 frames – quick draw: how computers help animators create

Ben Stephenson of the University of Calgary gives us a guide to the basics of animation.

Animation isn’t a new field – artists have been creating animations for over a hundred years. While the technology used to create those animations has changed immensely during that time, modern computer generated imagery continues to employ some of the same techniques that were used to create the first animations.

The hard work of hand drawing

During the early days of animation, moving images were created by rapidly showing a sequence of still images. Each still image, referred to as a frame, was hand drawn by an artist. By making small changes in each new frame, characters were created that appeared to be walking, jumping and talking, or doing anything else that the artist could imagine.

In order for the animation to appear smooth, the frames need to be displayed quickly – typically at around 24 frames each second. This meant that one minute of animation required artists to draw over 1,400 frames, and that the first feature-length animated film, a 70-minute Argentinean film called The Apostle, required over 100,000 frames to create.

Creating a 90-minute movie, the typical feature length for most animated films, took almost 130,000 hand drawn frames. Despite these daunting numbers, many feature length animated movies have been created using hand-drawn images.
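The arithmetic behind those daunting numbers is simple: frames per second, times seconds per minute, times minutes of film. A quick check in Python:

```python
# How many hand-drawn frames does an animation need?
FPS = 24  # frames shown per second for smooth movement

def frames_needed(minutes):
    return FPS * 60 * minutes

print(frames_needed(1))   # 1440   - one minute of animation
print(frames_needed(70))  # 100800 - The Apostle
print(frames_needed(90))  # 129600 - a typical feature film
```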

Drawing with data

Today, many animations are created with the assistance of computers. Rather than simply drawing thousands of images of one character using a computer drawing program, artists can create one mathematical model to represent that character, from which all of his or her appearances in individual frames are generated. Artists manipulate the model, changing things like the position of the character’s limbs (so that the character can be made to walk, run or jump) and aspects of the character’s face (so that it can talk and express emotions). Furthermore, since the models only exist as data on a computer they aren’t confined by the physical realities that people are. As such, artists also have the flexibility to do physically impossible things such as shrinking, bending or stretching parts of a character. Remember Elastigirl, the stretchy mum in The Incredibles? All made of maths.

Once all of the mathematical models have been positioned correctly, the computer is used to generate an image of the models from a specific angle. Just like the hand-drawn frames of the past, this computer-generated image becomes one frame in the movie. Then the mathematical models representing the characters are modified slightly, and another frame is generated. This process is repeated to generate all of the frames for the movie.
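That loop (pose the model, render a frame, nudge the model, repeat) can be sketched in miniature. Here the whole ‘model’ is a single number, an arm angle say, slid between two key poses; real systems do the same for thousands of parameters at once:

```python
# Toy version of the animation loop: one second of film at 24 frames per
# second, sliding a single model parameter between two key poses.
FRAMES = 24
start_pose, end_pose = 0.0, 90.0  # arm swings from 0 to 90 degrees

for frame in range(FRAMES + 1):
    t = frame / FRAMES                                 # progress, 0.0 to 1.0
    angle = start_pose + t * (end_pose - start_pose)   # nudge the model
    print(f"frame {frame:2d}: render arm at {angle:5.1f} degrees")
```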

The more things change

You might have noticed that, despite the use of computers, the process of generating and displaying the animation remains remarkably similar to the process used to create the first animations over 100 years ago. The animation still consists of a collection of still images. The illusion of smooth movement is still achieved by rapidly displaying a sequence of frames, where each frame in the sequence differs only slightly from the previous one.

The key difference is simply that now the images may be generated by a computer, saving artists from hand drawing over 100,000 copies of the same character. Hand-drawn animation is still alive in the films of Studio Ghibli and Disney’s recent The Princess and the Frog, but we wonder if the animators of hand-drawn features might be tempted to look over at their fellow artists who use computers and shake an envious fist. A cramped fist, too, probably.


Related Magazine …


EPSRC supports this blog through research grant EP/W033615/1.

Understanding matters of the heart

Colourful depiction of a human heart. Image by Gordon Johnson from Pixabay.

Creating accurate computer models of human organs

Ada Lovelace, the ‘first programmer’, saw possibilities for computer science far wider than anyone else of her time. For example, she mused that one day we might be able to create mathematical models of the human nervous system, essentially describing how electrical signals move around the body. The University of Oxford’s Blanca Rodriguez is interested in matters of the heart. She’s a bioengineer creating accurate computer models of human organs.

How do you model a heart? Well, you first have to create a 3D model of its structure. You start with MRI scans. They give you a series of pictures of slices through the heart. To turn that into a 3D model takes some serious computer science: image processing that works out, from the pictures, what is tissue and what isn’t. Next you do something called mesh generation, which involves breaking the model up into smaller parts. What you get is more than just a picture of the surface of the organ: it is an accurate model of its internal structure.

So far so good, but it’s still just the structure. The heart is a working, beating thing, not just a sculpture. To understand it you need to see how it works. Blanca and her team are interested in simulating the electrical activity in the heart – how electrical pulses move through it. To do this they create models of the way individual cells propagate an electrical signal. Once you have this you can combine it with the model of the heart’s structure to give a model of how the whole organ works. You essentially have a lot of equations, and solving them gives a simulation of how electrical signals propagate from cell to cell.
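The real models involve detailed differential equations for every cell, but the cell-to-cell idea can be shown with a much cruder sketch: a line of cells, each either resting, excited or recovering. An excited cell passes the signal to resting neighbours and then needs time to recover before it can fire again. This is a deliberately drastic simplification, nothing like the equations Blanca’s team solve, but it shows how local rules produce a travelling wave:

```python
# A drastically simplified 1D 'tissue' of excitable cells.
RESTING, EXCITED, RECOVERING = ".", "*", "o"

cells = [RESTING] * 20
cells[0] = EXCITED  # stimulate one end, like a pacemaker signal

for step in range(12):
    print("".join(cells))          # watch the wave travel rightwards
    nxt = list(cells)
    for i, state in enumerate(cells):
        if state == EXCITED:
            nxt[i] = RECOVERING    # a cell that fired must recover
            for j in (i - 1, i + 1):
                if 0 <= j < len(cells) and cells[j] == RESTING:
                    nxt[j] = EXCITED   # pass the signal to neighbours
        elif state == RECOVERING:
            nxt[i] = RESTING       # ready to fire again
    cells = nxt
```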

The models Blanca’s team have created are based on a healthy rabbit heart. Now they have them, they can simulate the heart working and see if the results correspond to those from lab experiments. If they do, that suggests their understanding of how cells work together is correct. When the results don’t match, that is still good, as it gives new questions to research: it would mean something about their initial understanding was wrong, driving new work to fix the problem and so the models.

Once the models have been validated in this way – shown to be an accurate description of the way a rabbit’s heart works – they can be used to explore things you just can’t do with experiments: what happens when changes are made to the structure of the virtual heart, for example, or how drugs change the way it works. That can lead to new drugs.

They can also use it to explore how the human heart works. For example, early work has looked at the heart’s response to an electric shock. Essentially the heart reboots! That’s why when someone’s heart stops in hospital, the emergency team give it a big electric shock to get it going again. The model predicts in detail what actually happens to the heart when that is done. One of the surprising things is it suggests that how well an electric shock works depends on the particular structure of the person’s heart! That might mean treatment could be more effective if tailored for the person.

Computer modelling is changing the way science is done. It doesn’t replace experiments. Instead, clinical work, modelling and experiments combine to give us a much deeper understanding of the way the world – and that includes our own hearts – works.

Paul Curzon, Queen Mary University of London


The charity Cardiac Risk in the Young raises awareness of cardiac electrical rhythm abnormalities and supports testing (electrocardiograms and echocardiograms) for all young people aged 14-35.



This blog is funded by EPSRC on research agreement EP/W033615/1.


The Dark History of Algorithms

An Arabic pattern on a crescent moon. Image by Mohammad Shahriyar from Pixabay.

Zin Derfoufi, a Computer Science student at Queen Mary, delves into some of the dark secrets of algorithms past.

Algorithms are used throughout modern life for the benefit of mankind, whether as instructions in special programs to help disabled people, computer instructions in the cars we drive, or the specific steps in any calculation. The technologies they are employed in have helped save lives and make our world more comfortable to live in. However, beneath all this lies a deep, dark, secret history of algorithms, plagued with schemes, lies and deceit.

Algorithms have played a critical role in some of history’s worst and most brutal plots, even causing the downfall and rise of nations and monarchs. Ever since humans have been sent on secret missions, plotted to overthrow rulers or tried to keep the secrets of a civilisation unknown, nations and civilisations have been using encrypted messages, and so have been using algorithms. Such messages carry sensitive information recorded in such a way that it can only make sense to the sender and recipient, whilst appearing to be gibberish to anyone else. There are a whole variety of encryption methods that can be used, and many people have created new ones for their own use: a risky business unless you are very good at it.

One example is the ‘Caesar cipher’, named after Julius Caesar who used it to send secret messages to his generals. The algorithm was one where each letter was replaced by the letter three places down the alphabet, so A became D, B became E, etc. Of course, the recipient must know the algorithm (the sequence to use) to regenerate the original letters of the text, otherwise the message would be useless to them too. That is why a simple algorithm of “move on 3 places in the alphabet” was used: it is easy for a general to remember. The Caesar cipher is the simplest kind of substitution cipher, where each letter is swapped for another. If any shuffling of the alphabet is allowed, not just a shift, there are around 400,000,000,000,000,000,000,000,000 distinct arrangements of the letters that could have been used! With that many possibilities it sounds secure. As you can imagine, this would cause any ambitious codebreaker many sleepless nights and even make them go bonkers!!! It seemed so futile to try to break such codes that people began to think encrypted messages were divine!
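Here is a minimal sketch of the Caesar cipher in Python. Shifting by 3 encrypts; shifting by -3 undoes it. The final line also checks that claim about the number of possible substitution alphabets: it is 26 factorial:

```python
# The Caesar cipher: replace each letter by the one 'shift' places along
# the alphabet, wrapping round from Z back to A.
import math
import string

ALPHABET = string.ascii_uppercase

def caesar(text, shift):
    result = []
    for ch in text.upper():
        if ch in ALPHABET:
            result.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        else:
            result.append(ch)  # leave spaces and punctuation alone
    return "".join(result)

secret = caesar("ATTACK AT DAWN", 3)
print(secret)              # DWWDFN DW GDZQ
print(caesar(secret, -3))  # ATTACK AT DAWN

# A general substitution cipher can use any shuffling of the alphabet:
print(math.factorial(26))  # 403291461126605635584000000 possible keys
```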

But then something significant happened. In the 9th century a Muslim Arab scholar changed the face of cryptography forever. His name was Abu Yusuf Ya’qub ibn Ishaq Al-Kindi, better known to the West as Alkindus. Born in Kufa (Iraq), he went to study in the famous Dar al-Hikmah (House of Wisdom) in Baghdad, the centre of learning of its time, which produced the likes of Al-Khwarizmi, the father of algebra, from whose name the word algorithm originates; the three Banu Musa brothers; and many more scholars who have shaped the fields of engineering, mathematics, physics, medicine, astrology, philosophy and every other major field of learning in some shape or form.

Al-Kindi introduced the technique of code breaking that would later be known as ‘frequency analysis’ in his book entitled ‘A Manuscript on Deciphering Cryptographic Messages’, where he wrote:

“One way to solve an encrypted message, if we know its language, is to find a different plaintext of the same language long enough to fill one sheet or so, and then we count the occurrences of each letter. We call the most frequently occurring letter the ‘first’, the next most occurring one the ‘second’, the following most occurring the ‘third’, and so on, until we account for all the different letters in the plaintext sample.

“Then we look at the cipher text we want to solve and we also classify its symbols. We find the most occurring symbol and change it to the form of the ‘first’ letter of the plaintext sample, the next most common symbol is changed to the form of the ‘second’ letter, and so on, until we account for all symbols of the cryptogram we want to solve”.

So basically, to decrypt a message all we have to do is find out how frequent each letter is in both the sample text of the original language and the encrypted message, and match the two up. Obviously common sense and a degree of judgement have to be used where letters have a similar frequency. Although it was a lengthy process, it was certainly the most efficient method of its time and, most importantly, the most effective.
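Al-Kindi’s recipe translates into code almost line for line. Below is a minimal sketch: count the symbols in the ciphertext, then guess that its commonest symbol stands for the commonest letter of English, its second commonest for the second, and so on. The frequency ordering used is a commonly quoted one for English, and, as the article says, on real messages the raw guess still needs human judgement to finish off:

```python
# Frequency analysis in miniature: map the ciphertext's most common
# symbol to the language's most common letter, and so on down the list.
from collections import Counter

ENGLISH_ORDER = "ETAOINSHRDLCUMWFGYPBVKJXQZ"  # most frequent first

def frequency_guess(ciphertext):
    counts = Counter(ch for ch in ciphertext.upper() if ch.isalpha())
    cipher_order = [symbol for symbol, _ in counts.most_common()]
    mapping = dict(zip(cipher_order, ENGLISH_ORDER))
    return "".join(mapping.get(ch, ch) for ch in ciphertext.upper())

# On a short message the first guess is rough; with a page or more of
# ciphertext the common letters settle into place.
print(frequency_guess("WKH UHVXOW LV RQOB D ILUVW JXHVV"))
```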

Since decryption became possible, many plots have been foiled, changing the course of history. One example is how Mary Queen of Scots, a Catholic, plotted with loyal Catholics to overthrow her cousin Queen Elizabeth I, a Protestant, and establish a Catholic country. The details of the plots, carried in encrypted messages, were intercepted and decoded, and on Saturday 15 October 1586 Mary was put on trial for treason. Her life depended on whether one of her letters could be decrypted or not. In the end, she was found guilty and publicly beheaded for high treason. Walsingham, Elizabeth’s spymaster, knew of Al-Kindi’s approach.

A more recent example of cryptography, cryptanalysis and espionage was their use throughout World War I to decipher messages intercepted from enemies. The British managed to decipher a message sent by Arthur Zimmermann, the then German Foreign Minister, to the Mexicans, calling for an alliance between them and the Japanese to make sure America stayed out of the war, attacking the Americans if they did interfere. Once the British showed this to the Americans, President Woodrow Wilson took his nation to war. Just imagine what the world might have been like if America hadn’t joined.

Today encryption is a major part of our lives in the form of Internet security and banking. Learn the art and science of encryption and decryption and who knows: maybe some day you might succeed in devising a new uncrackable cipher, or cracking an existing banking one! Either way would be a path to riches! So if you thought that algorithms were a bore… it just got a whole lot more interesting.

Zin Derfoufi, Queen Mary University of London



This blog is funded by EPSRC on research agreement EP/W033615/1.


Cognitive crash dummies

The world is heading for catastrophe. We’re hooked on power-hungry devices: our mobile phones and iPods, our Playstations and laptops. Wherever you turn people are using gadgets, and those gadgets are guzzling energy – energy that we desperately need to save. We are all doomed, doomed… unless of course a hero rides in on a white charger to save us from ourselves.

Don’t worry, the cognitive crash dummies are coming!

Actually the saviours may be people like professor of human-computer interaction Bonnie John and her then grad student Annie Lu Luo: people who design cognitive crash dummies. While working at Carnegie Mellon University it was their job to figure out ways of deciding how well gadgets are designed.

If you’re designing a bridge you don’t want to have to build it before finding out if it stays up in an earthquake. If you’re designing a car, you don’t want to find out it isn’t safe by having people die in crashes. Engineers use models – sometimes physical ones, sometimes mathematical ones – that show in advance what will happen. How big an earthquake can the bridge cope with? The mathematical model tells you. How slow must the car go to avoid killing the baby in the back? A crash test dummy will show you.

Even when safety isn’t the issue, engineers want models that can predict how well their designs perform. So what about designers of computer gadgets? Do they have any models to do predictions with? As it happens, they do. Their models are called ‘human behavioural models’, but think of them as ‘cognitive crash dummies’. They are mathematical models of the way people behave, and the idea is you can use them to predict how easy computer interfaces are to use.

There are lots of different kinds of human behavioural model. One such ‘cognitive crash dummy’ is called ‘GOMS’. When designers want to predict which of a few suggested interfaces will be the quickest to use, they can use GOMS to do it.

Send in the GOMS

Suppose you are designing a new phone interface. There are loads of little decisions you’ll have to make that affect how easy the phone is to use. You can fit a certain number of buttons on the phone or touch screen, but what should you make the buttons do? How big should they be? Should you use gestures? You can use menus, but how many levels of menus should a user have to navigate before they actually get to the thing they are trying to do? More to the point, with the different variations you have thought up, how quickly will the person be able to do things like send a text message or reply to a missed call? These are questions GOMS answers.

To do a GOMS prediction you first think up a task you want to know about – sending a text message perhaps. You then write a list of all the steps that are needed to do it. Not just the button presses, but hand movements from one button to another, thinking time, time for the machine to react, and so on. In GOMS, your imaginary user already knows how to do the task, so you don’t have to worry about time spent fiddling around or making mistakes. That means that once you’ve listed all your separate actions, GOMS can work out how long the task will take just by adding up the times for all the separate actions. Those basic times have been worked out from lots and lots of experiments on a wide range of devices. They have shown, on average, how long it takes to press a button and how long users are likely to think about it first.
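To give a flavour of the adding up, here is a minimal keystroke-level sketch of a GOMS-style prediction. The operator times below are of the same order as the averages quoted in the classic GOMS literature, but treat them as illustrative placeholders rather than the definitive values:

```python
# A GOMS-style (keystroke-level) prediction: list the steps in a task,
# give each step an average time, and add them all up.
SECONDS = {
    "K": 0.28,  # press a key or button
    "P": 1.10,  # point at a target on the screen
    "H": 0.40,  # move a hand between keyboard and pointing device
    "M": 1.35,  # mentally prepare for the next step
}

def predict_time(steps):
    """Total predicted time, in seconds, for a sequence of operators."""
    return sum(SECONDS[step] for step in steps)

# e.g. replying to a missed call: think, point at the notification, tap,
# think, point at 'call back', tap.
reply_to_missed_call = ["M", "P", "K", "M", "P", "K"]
print(f"{predict_time(reply_to_missed_call):.2f} seconds")  # 5.46
```

Comparing two candidate designs is then just a matter of writing out each one’s list of steps and seeing which total is smaller.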

GOMS in 60 seconds?

GOMS has been around since the 1980s, but wasn’t being used much by industrial designers. The problem is that it is very frustrating and time-consuming to work out all those steps for all the different tasks for a new gadget. Bonnie John’s team developed a tool called CogTool to help. You make a mock-up of your phone design in it, and tell it which buttons to press to do each task. CogTool then works out where the other actions, like hand movements and thinking time, are needed and makes the predictions.

Bonnie John came up with an easier way to figure out how much human time and effort a new design uses, but what about the device itself? How about predicting which interface design uses less energy? That is where Annie Lu Luo came in. She had the great idea that you could take a GOMS list of actions and, instead of linking actions to times, work out how much energy the device uses for each action instead. By using GOMS together with a tool like CogTool, a designer can find out whether their design is the most energy efficient too.
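In code, Annie Lu Luo’s insight keeps the same shape as the time prediction: the same list of actions, just costed in energy rather than seconds. The millijoule figures below are invented placeholders purely to show the idea, not measured values for any real phone:

```python
# Same GOMS-style action list, but each action now costs energy (mJ).
# All of these numbers are hypothetical.
MILLIJOULES = {
    "K": 5.0,   # a button press
    "P": 40.0,  # screen and sensors busy while pointing
    "H": 2.0,   # moving a hand costs the device almost nothing
    "M": 30.0,  # the screen stays lit while the user thinks
}

def predict_energy(steps):
    """Total predicted energy, in millijoules, for a sequence of actions."""
    return sum(MILLIJOULES[step] for step in steps)

reply_to_missed_call = ["M", "P", "K", "M", "P", "K"]
print(f"{predict_energy(reply_to_missed_call):.0f} mJ")  # 150
```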

So it turns out you don’t need a white knight to save your battery, just Annie Lu Luo and her version of GOMS. Mobile phone makers saw the benefit, of course. That’s why Annie walked straight into a great job on finishing university.

Paul Curzon, Queen Mary University of London




This blog is funded by EPSRC on research agreement EP/W033615/1.
