Pac-Man and Games for Girls

by Paul Curzon, Queen Mary University of London

In the beginning video games were for boys…and then came Pac-Man.

Pac-man eating dots
Image by OpenClipart-Vectors from Pixabay

Before mobile games, game consoles and PC-based games, video games first took off in arcades. Arcade games were very big, earning 39 billion dollars at their peak in the 1980s. Games were loaded into bespoke coin-operated arcade machines. For a game to do well, someone had to buy the machines, whether for actual gaming arcades or for bars, cafes, colleges, shopping malls, … Then someone had to play them. Originally boys played arcade games the most, and so games were targeted at them. Most games focused on shooting things, like Asteroids and Space Invaders, or had some link to sports, following the original arcade game Pong. Girls were largely ignored by the designers… But then came Pac-Man.

Pac-Man, created by a team led by Toru Iwatani, is a maze game where the player controls the Pac-Man character as it moves around a maze, eating dots while being chased by the ghosts: Blinky, Pinky, Inky, and Clyde. Special power pellets around the maze, when eaten, allow Pac-Man to chase the ghosts for a while instead of being chased.

Pac-Man ultimately made around 19 billion dollars in today’s money, making it the biggest money-making video arcade game of all time. How did it do it? It was the first game that was played by more females than males. It showed that girls would enjoy playing games if only the right kind of games were developed. Suddenly, and rather ironically given its name, there was a reason for the manufacturers to take notice of girls, not just boys.

A Pac-man like ghost
Image by OpenClipart-Vectors from Pixabay

It revolutionised games in many ways, showing the potential of different kinds of features to give it this much broader appeal. Most obviously Pac-Man did this by turning the tide away from shoot-’em-up space games and sports games to action games where characters were the star of the game, and that was one of its inventor Toru Iwatani’s key aims. To play you control Pac-Man rather than just a gun, blaster, tennis racket or golf club. It paved the way for Donkey Kong, Super Mario, and the rest (so if you love Mario and all his friends, then thank Pac-Man). Ultimately, it forged the path for the whole idea of avatars in games too.

It was the first game to use power-ups where, by collecting certain objects, the character gains extra powers for a short time. The ghosts were also characters controlled by simple AI – they didn’t just behave randomly or follow some fixed algorithm controlling their path, but reacted to what the player did, and each had its own personality in the way it behaved.
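
To get a feel for how simple rules can give ghosts personalities, here is a toy sketch in Python. It is not the original arcade code, and the real rules were more intricate (Inky’s target, for example, also depends on Blinky’s position), but it shows the idea: each ghost uses a different rule to pick the square it heads for, reacting to where Pac-Man is and which way he is facing.

# A toy sketch of ghost "personalities" (not the original arcade code).
# Each ghost picks a target square using its own rule, reacting to the player.

def blinky_target(pacman, pacman_dir, ghost):
    # Chaser: aim directly at Pac-Man's current square.
    return pacman

def pinky_target(pacman, pacman_dir, ghost):
    # Ambusher: aim a few squares ahead of where Pac-Man is heading.
    (px, py), (dx, dy) = pacman, pacman_dir
    return (px + 4 * dx, py + 4 * dy)

def clyde_target(pacman, pacman_dir, ghost):
    # Shy ghost: chase when far from Pac-Man, retreat to a corner when close.
    (px, py), (gx, gy) = pacman, ghost
    far = abs(px - gx) + abs(py - gy) > 8
    return pacman if far else (0, 0)   # (0, 0) stands in for Clyde's home corner

ghosts = {"Blinky": blinky_target, "Pinky": pinky_target, "Clyde": clyde_target}

pacman_pos, pacman_dir, ghost_pos = (10, 7), (1, 0), (3, 3)
for name, rule in ghosts.items():
    print(name, "heads for", rule(pacman_pos, pacman_dir, ghost_pos))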

Because of its success, maze and character-based adventure games became popular among manufacturers, but more importantly designers became more adventurous and creative about what a video game could be. It was also a first big step on the long road to women being fully accepted to work in the games industry. Not bad for a character based on a combination of a pizza and the Japanese symbol for “mouth”.

T. V. Raman and his virtual guide dogs

by Daniel Gill, Queen Mary University of London

Guide dog silhouette with binary superimposed
Image by PC modifying dog from Clker-Free-Vector-Images from Pixabay

It’s 1989, a year with lots of milestones in Computer Science. In March, Tim Berners-Lee puts down in writing the idea of an “information management system”, later to become the world wide web. In July, Nintendo releases the Game Boy in North America selling 118 million units worldwide over its 14-year production.

Come autumn, a 24-year-old arrives in Ithaca, US, home of Cornell University. He would be able to feel the cool September air as it blows off Cayuga Lake, smell the aromas from Ithaca’s 190 species of trees, and listen to a range of genres in the city’s live music scene. However, he couldn’t take in the natural beauty of the city in its entirety as he started his PhD … because he was blind. That did not stop him going on to have a gigantic impact on the lives of blind and partially sighted people worldwide.

T. V. Raman was born in Pune, India, in 1965. He had been partially sighted from birth, but at the age of 14 he became blind due to a disease called glaucoma. Throughout his life, however, he has not let this stop him.

While he was partially sighted, he was able to read and write – but as his sight worsened, and with the help of his brother, mentors, and aides, he was still able to continue learning from textbooks, and solve problems which were read to him. At the height of its popularity, in the early 1980s, he also learned how to solve a specially customised Rubik’s cube, and could do so in about 30 seconds.

Raman soon developed an interest in mathematics, and around 1983 started studying for a Maths degree at the University of Pune. On finishing in 1987, he studied for a Masters degree at the Indian Institute of Technology Bombay, this time in Computer Science and Maths. It was with the help of student volunteers that he was able to learn from textbooks, while assistance with programming was provided by an able volunteer.

Today people with no vision often use a screen reader to hear what is on a screen. Not everyone is lucky enough to have as much help as Raman did, and screen readers play the part of all those human volunteers who helped him. Raman himself played a big part in their development.

Modern screen readers allow you to navigate the screen part-by-part, with important information and content read to you. Many of these systems are built into operating systems, such as the Narrator in Windows (which uses a huge number of keyboard shortcuts), and Google TalkBack for Android devices (where rubbing the screen, vibration, and audio hints are used). These simpler screen readers might already be installed on your system – if so have a go with them!

While Raman was learning programming, such screen readers were still in their infancy. It was only in the 1980s that a team at IBM developed a screen reader for the command-line interface of the IBM DOS (which Raman would later use), and it would be many years before screen readers were available for the much more challenging graphical user interfaces we’re so accustomed to today.

It was at Cornell University where Raman settled on his career-long research interest: accessibility. He originally intended to do an Applied Mathematics PhD, but then discovered the need for ways to use speech technology to read complicated documents, especially those with embedded mathematics. For his dissertation, he therefore developed the Audio System for Technical Readings (ASTER) to solve the problem.

What he realised was that when looking at information visually our eyes are active taking in information from different places but the display is passive. With an audio interface this is reversed with the ear passive and the display actively choosing the order of information presented. This makes it impossible to get a high level view first and then dive into particular detail. This is a big problem when ‘reading’ maths by listening to it. His system solved the problem using audio formatting which allows the listener to browse the structure of information first.
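
As a rough illustration of the idea (a toy sketch only, not ASTER itself, which was far more sophisticated), imagine a maths expression stored as a tree. Audio formatting speaks the high-level shape of the expression first, then lets the listener drill into whichever part they choose.

# A toy illustration of "audio formatting" (not ASTER itself): speak the
# high-level shape of an expression first, then let the listener drill into
# whichever part they choose.

expression = ("fraction",
              ("sum", "x squared", "1"),           # numerator
              ("square root", ("sum", "x", "3")))  # denominator

def summarise(node):
    # One short spoken phrase giving just the shape of this part.
    if isinstance(node, str):
        return node
    kind, *parts = node
    plural = "part" if len(parts) == 1 else "parts"
    return f"a {kind} with {len(parts)} {plural}"

def speak(text):
    print("SPEAK:", text)   # stand-in for a real speech synthesiser

speak(summarise(expression))                      # the overview first...
speak("part 1 is " + summarise(expression[1]))    # ...then browse the numerator
speak("part 2 is " + summarise(expression[2]))    # ...or the denominator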

He named this program after his first guide dog, Aster, which he obtained, alongside a talking computer, in early 1990. Both supported him throughout his PhD. For this work, he received the ACM Doctoral Dissertation Award, a prestigious yearly worldwide award celebrating the best PhD dissertation in computer science and related fields.

Following on from this work, he developed a program called Emacspeak, an audio desktop, which, unlike a screen reader, takes existing programs and makes them work with audio outputs. It makes use of Emacs, a family of text editors (think notepad, but with lots more features), as well as a programming language called Lisp. Raman has continued to develop Emacspeak ever since and the program is often bundled within Linux operating system installations. Like ASTER, versions of this program are dedicated to his guide dogs.

Following his PhD, Raman worked briefly with Adobe Systems and IBM, but, since 2005, has worked with Google on auditory user interfaces, accessibility, and usability. In 2014, alongside Google colleagues, he published a paper on a new application called JustSpeak, a system for navigating the Android operating system with voice commands. He has also gone back to his roots, integrating mathematical speech into ChromeVox, the screen reader built into Chromebook devices.

Despite growing up in a time of limited access to computers for blind and visually impaired people, Raman was able, with the help of his brother and student volunteers, to learn how to program, solve a Rubik’s cube, and solve complex maths problems. With early screen readers he was also able to build tools for fellow blind and visually impaired people, and then benefit himself from his own tools to achieve even more.

Guide dogs can transform the lives of blind and partially sighted people by allowing them to do things in the physical world that they otherwise could not do. T. V. Raman’s tools provide a similar transformation in the digital world, changing lives for the better.

EPSRC supports this blog through research grant EP/W033615/1.

Designing for autistic people

by Daniel Gill and Paul Curzon, Queen Mary University of London

What should you be thinking about when designing for a specific group with specific needs, such as autistic people? Queen Mary students were set this task and on the whole did well. The lessons though are useful when designing any technology, whether apps or gadgets.

A futuristic but complicated interface with lots of features: feature bloat?
Image by Tung Lam from Pixabay

The Interactive Systems Design module at QMUL includes a term-long realistic team interaction design project with the teaching team acting as clients. The topic changes each year but is always open-ended and aimed at helping some specific group of people. The idea is to give experience designing for a clear user group not just for anyone. A key requirement is always that the design, above all, must be very easy to use, without help. It should be intuitively obvious how to use it. At the end of the module, each team pitches their design in a short presentation as well as a client report.

This year the aim was to create something to support autistic people. What their design does, and how, was left to the teams to decide from their early research and prototyping. They had to identify a need themselves. As a consequence, the teams came up with a wide range of applications and tools to support autistic people in very different ways.

How do you come up with an idea for a design? It should be based on research. The teams had to follow a specific (if simplified) process. The first step was to find out as much as they could about the user group and other stakeholders being designed for: here autistic people and, if appropriate, their carers. The key thing is to identify their unmet goals and needs. There are lots of ways to do this: from book research (charities, for example, often provide good background information) and informally talking to people from the stakeholder group, to more rigorous methods of formal interviews, focus groups and even ethnography (where you embed yourself in a community).

Many of the QMUL teams came up with designs that clearly supported autistic people, but some projects were only quite loosely linked with autism. While the needs of autistic people were considered in the concept and design, they did not fully focus on supporting autistic people. More feedback directly from autistic people, both at the start and throughout the process, would likely have made the applications much more suitable. (That of course is quite hard in this kind of student role-playing scenario, though some groups were able to do so.) That though is a key idea the module is aiming to teach – how important it is to involve users and their concerns closely throughout the design process, both in coming up with designs and evaluating them. Old-fashioned waterfall models from software engineering, where designs are only tested with users at the end, are just not good enough.

From the research, the teams were then required to create design personas. These are detailed, realistic but fictional people with names, families, and lives. The more realistic the character the better (computer scientists need to be good at fiction too!). Personas are intended to represent the people being designed for in a concrete and tangible way throughout the design process. They help to ensure the designers do design for real people, not some abstract, intangible person that shape-shifts to fit the needs of their ideas. Doing the latter can lead to concepts being pushed forward just because the designer is excited by their ideas rather than because they are actually useful. Throughout the design the team refer back to them – does this idea work for Mo and the things he is trying to do?

An important issue in good persona design is how to handle stereotypes. The QMUL groups avoided stereotypes of autistic people. One group went further, though: they included the positive traits that their autistic persona had, not just negative ones. They didn’t see their users in a simplistic way. Thinking about positive attributes is really, really important when designing for neurodivergent people, but also for those with physical disabilities too, to help make them a realistic person. That group’s persona was therefore outstanding. Alan Cooper, who came up with the idea of design personas, argued that stereotypes (such as a nurse persona being female) were good in that they could give people a quick and solid idea of the person. However, this is a very debatable view. It seems to go against the whole idea of personas. Most likely you miss the richness of real people and end up designing for a fictional person that doesn’t represent that group of people at all. The aim of personas is to help the designers see the world from the perspective of their users, so here of autistic people. A stereotype can only diminish that.

Multicolour jigsaw ribbon
Image by Oberholster Venita from Pixabay

Another core lesson of the module is the importance of avoiding feature bloat. Lots of software and gadgets are far harder to use than need be because they are packed with features: features that are hardly ever, possibly never, used. What could have been simple-to-use apps, focusing on some key tasks, are instead turned into ‘do everything’ apps. A really good video call app instead becomes a file store, a messaging place, chat rooms, a phone booth, a calendar, a movie player, and more. Suddenly it’s much harder to make video calls. Because there are so many features and so many modes, all needing their own controls, the important things the design was supposed to help you do become hard to do (think of a TV remote control – the more features, the more buttons, until the important ones are lost). That undermines the aim that good design should make key tasks intuitively easy. The difficulty when designing such systems is balancing the desire to put as many helpful features as possible into a single application against the complexity that this adds. That can be bad for neurotypical people, who may find the result hard to use. For neurodivergent people it can be much worse – they can find themselves overwhelmed. When presented with such a system, if they can use it at all, they might have to develop their own strategies to overcome the information overload caused. For example, they might need to learn the interface bit-by-bit. For something being designed specifically for neurodiverse people, that should never happen. Some of the applications of the QMUL teams were too complicated like this. This seems to be one of the hardest things for designers to learn, as adding ideas and adding features seems like a good thing, but it is vitally important not to make this mistake when designing for autistic people.

Perhaps one of the most important points that arose from the designs was that many of the applications presented were designed to help autistic people change to fit into the world. While this would certainly be beneficial, it is important to realise that such systems are only necessary because the world is generally not welcoming for autistic people. It is much better if technology is designed to change the world instead. 

EPSRC supports this blog through research grant EP/W033615/1.

Neurodiversity and what it takes to be a good programmer

by Paul Curzon, Queen Mary University of London

People often suggest neurodiverse people make good computer scientists. For example, one of the most famous autistic people, Temple Grandin, an academic at Colorado State University and animal welfare expert, has suggested programming is one of the jobs autistic people are potentially naturally good at (along with other computer science linked jobs) and that “Half the people in Silicon Valley probably have autism.” So what makes a good computer scientist? And why might people suggest neurodiverse people are good at it?

A multicoloured jigsaw pattern ribbon
Image by Oberholster Venita from Pixabay

What makes a good programmer? Is it knowledge, skills or is it the type of person you are? It is actually all three though it’s important to realise that all three can be improved. No one is born a computer scientist. You may not have the knowledge, the skills or be the right kind of person now, but you can improve them all.

To be a good programmer, you need to develop specialist knowledge such as knowing what the available language constructs are, knowing what those constructs do, knowing when to use them over others, and so on. You also need to develop particular technical skills like an ability to decompose problems into sub-problems, to formulate solutions in a particular restricted notation (the programming language), to generalise solutions, and so on. However, you also need to become the right kind of person to do well as a programmer. 

Thinking about what kind of person makes a good programmer, to help my students work on those attributes and so become better programmers, I made a list of the attributes I associate with good student programmers. My list includes: attention to detail, an ability to think clearly and logically, being creative, having good spatial visualising skills, being a hard worker, being resilient to things going wrong and so determined, being organised, being able to meet deadlines, enjoying problem solving, being good at pattern matching, thinking analytically and being open to learning new and different ways of doing things.

More recently, when taking part in a workshop about neurodiversity I was struck by a similar list we were given. Part of the idea behind ‘neurodiversity’ is that everyone is different and everyone has strengths and weaknesses. If you think of ‘disability’ you tend to think of apparent weaknesses. Those ‘weaknesses’ are often there because the world we have created has turned them into weaknesses. For example, being in a wheelchair makes it hard to travel because we have built a world full of steps, kerbs and cobbles, doors that are hard to manipulate, high counters and so on. If we were to remove all those obstacles, a wheelchair would not have to reduce your ability to get around. Thinking about neurodiversity, the suggestion is to think about the strengths that come with it too, not just the difficulties you might encounter because of the way we’ve made the world.

The list of strengths of neurodiverse people given at the workshop was: attention to detail, focussed interest, problem-solving, creative, visualising, pattern recognition. Looking further you find both those positives reinforced and new positives. For example, one support website gives the positives of being an autistic person as: attention to detail, deep focus, observation skills, ability to absorb and retain facts, visual skills, expertise, a methodological approach, taking novel approaches, creativity, tenacity and resilience, accepting of difference and integrity. Thinking logically is also often picked out as a trait that neurodiverse people are often good at. The similarity of these lists to my list of what kind of person my students should aim to turn themselves into is very clear. Autistic people can start with a very solid basis to build on. If my list is right, then their personal positives may help neurodiverse people to quickly become good programmers.

Here are a few of those positives others have picked out that neurodiverse people may have and how they relate to programming:

Attention to detail: This is important in programming because a program is all about detail, both in the syntax (not missing brackets or semicolons) but more importantly in not missing situations that could occur, so cases the program must cover. A program must deal with every possibility that might arise, not just some. The way it deals with them also matters in the detail. Poor programs might just announce a mistake was made and shut down. A good program, for example, will explain the mistake and give the user a way to correct it. Detail like that matters. Attention to detail is also important in debugging as bugs are just details gone wrong.
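
For example, here is an illustrative sketch (just a made-up fragment, not anyone’s real program) of a few lines that try to cover every case a user might type, explaining each mistake rather than just giving up:

# An illustrative sketch: cover every case the user might type and, when they
# get it wrong, explain the mistake and how to fix it rather than just stopping.

def read_age(text):
    text = text.strip()
    if not text:
        return None, "You didn't type anything. Please enter your age as a number."
    if not text.isdigit():
        return None, f"'{text}' isn't a whole number. Please type your age in digits, like 15."
    age = int(text)
    if age > 130:
        return None, f"{age} seems too big to be an age. Please check and try again."
    return age, None

for attempt in ["", "fifteen", "200", "15"]:
    age, error = read_age(attempt)
    print(repr(attempt), "->", age if error is None else error)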

Resilience and determination: Programming is like being on an emotional roller coaster. Getting a program right is full of highs and lows. You think it is working and then the last test you run shows a deep flaw. Back to the drawing board. As a novice learning it is even worse. The learning curve is steep as programming is a complex skill. That means there are lots of lows and seemingly insurmountable highs. At the start it can seem impossible to get a program to even compile never mind run. You have to keep going. You have to be determined. You have to be resilient to take all the knocks.

Focussed interest: Writing a program takes time and you have to focus. Stop and come back later and it will be so much harder to continue and to avoid making mistakes. Decomposition is a way to break the overall task into smaller subtasks, so methods to code, and that helps, once you have the skill. However, even then being able to maintain your focus to finish each method, so each subtask, makes the programming job much easier.

Pattern recognition: Human expertise in anything ultimately comes down to pattern matching new situations against old. It is the way our brains work. Expert chess players pattern match situations to tell them what to do, and so do firefighters in a burning building. So do expert programmers. Initially programming is about learning the meaning of programming constructs and how to use them, problem solving every step of the way. That is why the learning curve is so steep. As you gain experience though it becomes more about pattern matching: realising what a particular program needs at this point and how it is similar to something you have seen before, then matching it to one of many standard template solutions. Then you just whip out the template and adapt it to fit. Spot that something is essentially a search task and you whip out a search algorithm to solve it. Need to process a 2-dimensional array – you just need the rectangular for loop solution. Once you can do that kind of pattern matching, programming becomes much, much simpler.
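
For example, the ‘rectangular for loop’ template for a 2-dimensional array looks the same whatever job it is doing. An experienced programmer just recognises the situation and adapts the loop body, as in this small Python sketch:

# The standard "rectangular for loop" template: visit every row, and every
# column within each row. Recognise the pattern, then adapt the body to the job.

grid = [[3, 1, 4],
        [1, 5, 9],
        [2, 6, 5]]

total = 0
largest = grid[0][0]
for row in range(len(grid)):            # every row...
    for col in range(len(grid[row])):   # ...and every column in that row
        value = grid[row][col]
        total += value                  # one adaptation: add everything up
        if value > largest:             # another: track the biggest entry
            largest = value

print("total:", total, "largest:", largest)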

Creativity and doing things in novel ways: Writing a program is an act of creation, so just like arts and crafts it involves creativity. Just writing a program is one kind of creativity; coming up with an idea for a program, or spotting a need no one else has noticed so you can write a program that fills that need, requires great creativity of a slightly different kind. So does coming up with a novel solution once you have a novel problem. Developing new algorithms is all about thinking up a novel way of solving a problem and that of course takes creativity. Designing interfaces that are aesthetically pleasing but make a task easier to do takes creativity. If you can think about a problem in a different way to everyone else, then likely you will come up with different solutions no one else thought of.

Problem solving and analytical minds: Programming is problem solving on steroids. Being able to think analytically is an important part of problem solving and is especially powerful if combined with creativity (see above). You need to be able to analyse a problem, come up with creative solutions and be able to analyse what is the best way of solving it from those creative solutions. Being analytical helps with solid testing too.

Visual thinking: Research suggests those with good visual, spatial thinking skills make good programmers. The reasons are not clear, but good programs are all about clear structure, so it may be that the ability to easily see the structure of programs and manipulate them in your head is part of it. That is part of the idea of block-based programming languages like Scratch and why they are used as a way into programming for young children. The structure of the program is made visual. Some paradigms of programming are also naturally more visual. In particular, object-oriented programming sees programs as objects that send messages to each other and that is something that can naturally be visualised. As programs become bigger, that ability to still visualise the separate parts and how they work as a whole is a big advantage.

A methodological approach: Novice programmers just tinker and hack programs together. Expert programmers design them. Many people never seem to get beyond the hacking stage, struggling with the idea of following a method to design first, yet it is vital if you are to engineer serious programs that work. That doesn’t mean that programming is just following methods; tinkering can be part of the problem solving and coming up with creative ideas, but it should be used within a rigorous methodology, not instead of one. As much time, if not more, is spent by good programming teams testing programs as writing them, and that takes rigorous methods to do well too. Software engineering is all about following rigorous methods precisely because it is the only way to develop real programs that work and are fit for purpose. A vast amount of software is written but never used because it is useless. Rigorous methods give a way to avoid many of the problems.

Logical thinking: Being able to think clearly and logically is core to programming. It combines some of the things above, like attention to detail and thinking clearly and methodologically. Writing programs is essentially the practice of applied logic, as logic underpins the semantics (ie meaning) of programming languages and so programs. You have to think logically when writing programs, when testing them and when debugging them. Computers are machines built from logic and you need to think in part like a computer to get it to do what you want. A key part of good programming is doing code walkthroughs – as a team stepping through what a program does line by line. That requires clear logical thinking to think in the steps the computer performs.
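
For example, a walkthrough of even a tiny loop means tracing, step by logical step, exactly what the computer will do:

# A code walkthrough in miniature: step through the loop line by line,
# tracking each variable, exactly as a review team would do on paper.

values = [4, 7, 2]
total = 0               # trace: total starts at 0
for v in values:        # v takes the value 4, then 7, then 2
    total = total + v   # total becomes 4, then 11, then 13
print(total)            # so this prints 13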

I could go on, but will leave it there. The positives that neurodiverse people might have are very strongly positives for becoming a good programmer. That is why some of the best students I’ve had the privilege to teach have been neurodiverse students.

Different people, neurodiverse or otherwise, will start with different positives, different weaknesses. People start in different places for different reasons, some ahead, some behind. I liked doing puzzles as a child, so spent my childhood devouring logic and algorithmic puzzles. That meant when I first tried to learn to program, I found it fairly easy. I had built important skills and knowledge and had become a good logical thinker and problem solver just for fun. I learnt to program for fun. That meant it was as if I’d started way beyond the starting line in the race to become a programmer. Many neurodiverse people do the same, if for different reasons.

Other skills I’ve needed as a computer scientist I have had to work hard on, developing strategies to overcome my weaknesses. I am a shy introvert. However, I need to both network and give presentations as a computer scientist (and ultimately now I give weekly lectures to hundreds at a time as an academic computer scientist). For that I had to practise, learn theory about good presentation and, perhaps most importantly, given how paralysing my shyness was, devise a strategy to overcome that natural weakness. I did find a strategy – I developed an act. I have a fake extrovert persona that I act out. I act being that other person in these situations where I can’t otherwise cope, so it is not me you see giving presentations and lectures but my fake persona. Weaknesses can be overcome, even if they mean you start far behind the starting line. Of course, some weaknesses, and the ways we’ve built the world, mean we may not be able to overcome the problems, and not everyone wants to be a computer scientist anyway, so not everyone has the desire to. What matters is finding the future that matches your positives and interests, and where you can overcome the weaknesses if you set your mind to it.

Programming is not about inborn talent, though (nothing is). We all have strengths and weaknesses and we can all become better by practising and finding strategies that work for us, building upon our strengths and working on our weaknesses, especially when we have the help of a great teacher (or have help to change the way the world works so the weaknesses vanish).

My list above gives some of the key personal characteristics you need to work on improving (however good at them you are or are not right now) if you do want to be a good programmer. Anyone can become better at programming than they are if they have that desire. What matters is that you want to learn, are willing to put the practice in, can develop strategies to overcome your initial weaknesses, and you don’t give up. Neurodiverse people often have a head start on those personal attributes for becoming good at new things too.

EPSRC supports this blog through research grant EP/W033615/1.

Equality, diversity and inclusion in the R Project: collaborative community coding & curating with Dr Heather Turner

You might not think of a programming language like Python or Scratch as being an ‘ecosystem’ but each language has its own community of people who create and improve its code, flush out the bugs, introduce new features, document any changes and write the ‘how to’ guides for new users. 

The logo for the R project.

R is one such programming language. It’s named with the first initial shared by its two co-inventors (Ross Ihaka and Robert Gentleman) and is used by around two million people around the world. People working in all sorts of jobs and industries (for example finance, academic research, government, data journalism) use R to analyse their data. The software has useful tools to help people see patterns in their data and to make sense of that information.

It’s also open source which means that anyone can use it and help to improve it, a bit like Wikipedia where anyone can edit an article or write a new one. That’s generally a good thing because it means everyone can contribute but it can also bring problems. Imagine writing an essay about an event at your school and sharing it with your class. Then imagine your classmates adding paragraphs of their own about the event, or even about different events. Your essay could soon become rather messy and you’d need to re-order things, take bits out and make sure people hadn’t repeated something that someone had already said (but in a slightly different way). 

When changes are made to software people also want to keep a note not just of the ‘words’ added (the code) but also to make a note of who added what and when. Keeping good records, also known as documentation, helps keep things tidy and gives the community confidence that the software is being properly looked after.

Code and documentation can easily become a bit chaotic when created by different people in the community so there needs to be a core group of people keeping things in order. Fortunately there is – the ‘R Core Team’, but these days its membership doesn’t really reflect the community of R users around the world. R was first used in universities, particularly by more privileged statistics professors from European countries and North America (the Global North), and so R’s development tended to be more in line with their academic interests. R needs input and ideas from a more diverse group of active developers and decision-makers, in academia and beyond, to ensure that the voices of minoritised groups are included. It also needs the voices of younger people, particularly as many of the current core group are approaching retirement age.

Dr Heather Turner from the University of Warwick is helping to increase the diversity of those who develop and maintain the R programming language and she’s been given funding by the EPSRC* to work on this. Her project is a nice example of someone bringing together two different areas in her work. She is mixing software development (tech skills) with community management (people skills) to support a range of colleagues who use R and might want to contribute to developing it in future, but perhaps don’t feel confident to do so yet.

Development can involve things like fixing bugs, helping to improve the behaviour or efficiency of programs, or translating error messages that currently appear on-screen in English into other languages. Heather and her colleagues are working with the R community to create a more welcoming environment for ‘newbies’ that encourages participation, particularly from people who are in the community but are not represented, or are under-represented, in the core group. She’s also working collaboratively with other community organisations such as R-Ladies, LatinR and RainbowR. Another task she’s involved in is producing an easier-to-follow ‘How to develop R’ guide.

There are also people who work in universities but who aren’t academics (they don’t teach or do research but do other important jobs that help keep things running well) and some of them use R too and can contribute to its development. However, their contributions have been less likely to get the proper recognition or career rewards compared with those made by academics, which is a little unfair. That’s largely because of the way the academic system is set up.

Generally it’s academics who apply for funding to do new research; they do the research and then publish papers in academic journals on the research that they’ve done, and these publications are evidence of their work. But the important work that support staff do in maintaining the software isn’t classified as new research, so it doesn’t generally make it into the journals, and their contribution can get left out. They also don’t necessarily get the same career support or mentoring for their development work. This can make people feel a bit sidelined or discouraged.

Logo for the Society of Research Software Engineering

To try and fix this and to make things fairer the Society of Research Software Engineering was created to champion a new type of job in computing – the Research Software Engineer (RSE). These are people whose job is to develop and maintain (engineer) the software that is used by academic researchers (sometimes in R, sometimes in other languages). The society wants to raise awareness of the role and to build a community around it. You can find out what’s needed to become an RSE below. 

Heather is in a great position to help here too, as she has a foot in each camp – she’s both an Academic and a Research Software Engineer. She’s helping to establish RSEs as an important role in universities while also expanding the diversity of people involved in developing R further, for its long-term sustainability.

Further reading

*Find out more about Heather’s EPSRC-funded Fellowship: “Sustainability and EDI (Equality, Diversity, and Inclusion) in the R Project” https://gtr.ukri.org/projects?ref=EP%2FV052128%2F1 and https://society-rse.org/getting-to-know-your-2021-rse-fellows-heather-turner/ 

Find out more about the job of the Research Software Engineer and the Society of Research Software Engineering https://society-rse.org/about/ 

Example job packs / adverts

Below are some examples of RSE jobs (these vacancies have now closed but you can read about what they were looking for and see if it might interest you in the future).

Note that these documents are written for quite a technical audience – the people who’d apply for the jobs will have studied computer science for many years and will be familiar with how computing skills can be applied to different subjects.

1. The Science and Technology Facilities Council (STFC) wanted four Research Software Engineers (who’d be working either in Warrington or Oxford) on a chemistry-related project (‘computational chemistry’ – “a branch of chemistry that uses computer simulation to assist in solving chemical problems”) 

2. The University of Cambridge was looking for a Research Software Engineer to work in the area of climate science – “Computational modelling is at the core of climate science, where complex models of earth systems are a routine part of the scientific process, but this comes with challenges…”

3. University College London (UCL) wanted a Research Software Engineer to work in the area of neuroscience (studying how the brain works, in this case by analysing the data from scientists using advanced microscopy).


EPSRC supports this blog through research grant EP/W033615/1.

Protecting your fridge

by Jo Brodie and Paul Curzon, Queen Mary University of London

Ever been spammed by your fridge? It has happened, but Queen Mary’s Gokop Goteng and Hadeel Alrubayyi aim to make it less likely…

Image by Gerd Altmann from Pixabay

Gokop has a longstanding interest in improving computing networks and did his PhD on cloud computing (at the time known as grid computing), exploring how computing could be treated more like gas and electricity utilities where you only pay for what you use. His current research is about improving the safety and efficiency of the cloud in handling the vast amounts of data, or ‘Big Data’, used in providing Internet services. Recently he has turned his attention to the Internet of Things.

The Internet of Things is a network of connected devices, some of which you might have in your home or school, such as smart fridges, baby monitors, door locks, lighting and heating that can be switched on and off with a smartphone. These devices contain a small computer that can receive and send data when connected to the Internet, which is how your smartphone controls them. However, this connectivity brings new problems: any device that’s connected to the Internet has the potential to be hacked, which can be very harmful. For example, in 2013 a domestic fridge was hacked and included in a ‘botnet’ of devices which sent thousands of spam emails before it was shut down (can you imagine getting spam email from your fridge?!)

A domestic fridge was hacked
and included in a ‘botnet’ of devices
which sent thousands of spam emails
before it was shut down.

The computers in these devices don’t usually have much processing power: they’re smart, but not that smart. This is perfectly fine for normal use, but it becomes a problem when they also have to run software to keep out hackers while getting on with the actual job they are supposed to be doing, like running a fridge. It’s important to prevent devices from being infected with malware (bad programs that hackers use to, for example, take over a computer) and work done by Gokop and others has helped develop better malware-detecting security algorithms which take account of the smaller processing capacity of these devices.

One approach he has been exploring with PhD student Hadeel Alrubayyi is to draw inspiration from the human immune system: building artificial immune systems to detect malware. Your immune system is very versatile and able to quickly defend you against new bugs that you haven’t encountered before. It protects you from new illnesses, not just illnesses you have previously fought off. How? Using special blood cells, such as T-Cells, which are able to detect and attack rogue cells invading the body. They can spot patterns that tell the difference between the person’s own healthy cells and rogue or foreign cells. Hadeel and Gokop have shown that applying similar techniques to Internet of Things software can outperform other techniques for spotting new malware, detecting more problems while needing less computing resources.
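
To get the flavour of the idea, here is a deliberately tiny sketch of the general ‘negative selection’ approach often used in artificial immune systems. It is not the actual algorithm Hadeel and Gokop developed, and the bit patterns and names are invented purely for illustration.

# A tiny sketch of the general "negative selection" idea behind artificial
# immune systems -- NOT the actual Queen Mary algorithms. Random detectors that
# match normal ("self") behaviour are thrown away, like T-cells trained not to
# attack the body; the survivors then flag anything they do match.

import random
random.seed(1)

NORMAL = ["00110", "00111", "00100"]   # known-good patterns of device behaviour

def matches(detector, pattern, threshold=3):
    # A detector "matches" if it agrees with the pattern in enough positions.
    return sum(d == p for d, p in zip(detector, pattern)) >= threshold

detectors = set()
while len(detectors) < 5:
    candidate = "".join(random.choice("01") for _ in range(5))
    if not any(matches(candidate, normal) for normal in NORMAL):
        detectors.add(candidate)       # keep only detectors that ignore "self"

for observed in ["00110", "11011"]:    # one normal pattern, one odd one
    alarm = any(matches(d, observed) for d in detectors)
    print(observed, "-> suspicious!" if alarm else "-> looks normal")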

Gokop is also using his skills in cloud computing and data science to enhance student employability and explore how Queen Mary can be a better place for everyone to do well. Whether a person, organisation or smart fridge Gokop aims to help you reach your full potential!

EPSRC supports this blog through research grant EP/W033615/1. 

The gender shades audit

by Jo Brodie, Queen Mary University of London

Face recognition technology is used widely, such as at passport controls and by police forces. What if it isn’t as good at recognising faces as it has been claimed to be? Joy Buolamwini and Timnit Gebru tested three different commercial systems and found that they were much more likely to wrongly classify darker skinned female faces compared to lighter or darker skinned male faces. The systems were not reliable.

Different skin tone cosmetics
Image by Stefan Schweihofer from Pixabay

Face recognition systems are trained to detect, classify and even recognise faces based on a bank of photographs of people. Joy and Timnit examined two banks of images used to train the systems and found that around 80 percent of the photos used were of people with lighter coloured skin. If the photographs aren’t fairly balanced in terms of having a range of people of different gender and ethnicity then the resulting technologies will inherit that bias too. The systems examined were being trained to recognise light skinned people.

The pilot parliaments benchmark

Joy and Timnit decided to create their own set of images and wanted to ensure that these covered a wide range of skin tones and had an equal mix of men and women (‘gender parity’). They did this using photographs of members of parliaments around the world which are known to have a reasonably equal mix of men and women. They selected parliaments both from countries with mainly darker skinned people (Rwanda, Senegal and South Africa) and from countries with mainly lighter skinned people (Iceland, Finland and Sweden).

They labelled all the photos according to gender (they had to make some assumptions based on name and appearance if pronouns weren’t available) and used a special scale called the Fitzpatrick scale to classify skin tones (see Different Shades below). The result was a set of photographs labelled as dark male, dark female, light male, light female, with a roughly equal mix across all four categories: this time, 53 per cent of the people were light skinned (male and female).

Testing times

Joy and Timnit tested the three commercial face recognition systems against their new database of photographs (a fair test of a wide range of faces that a recognition system might come across) and this is where they found that the systems were less able to correctly identify particular groups of people. The systems were very good at spotting lighter skinned men, and darker skinned men, but were less able to correctly identify darker skinned women, and women overall. The tools, trained on sets of data that had a bias built into them, inherited those biases and this affected how well they worked.
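
The heart of the audit method is easy to see in a few lines of Python. The numbers below are made up purely for illustration, not Joy and Timnit’s real results: measure accuracy overall, then break it down group by group, and the hidden unfairness appears.

# The audit in miniature: don't just measure overall accuracy, break the
# results down by group. These numbers are invented for illustration only.

results = [
    # (group, was the face classified correctly?)
    ("lighter male", True), ("lighter male", True), ("lighter male", True),
    ("lighter female", True), ("lighter female", True), ("lighter female", False),
    ("darker male", True), ("darker male", True), ("darker male", False),
    ("darker female", True), ("darker female", False), ("darker female", False),
]

overall = sum(ok for _, ok in results) / len(results)
print(f"overall accuracy: {overall:.0%}")            # looks fine on the surface...

by_group = {}
for group, ok in results:
    by_group.setdefault(group, []).append(ok)
for group, oks in by_group.items():
    print(f"{group}: {sum(oks) / len(oks):.0%}")     # ...but hides who it fails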

As a result of Joy and Timnit’s research there is now much more recognition of the problem, and what this might mean for the ways in which face recognition technology is used. There is some good news, though. The three companies made changes to improve their systems and several US cities have already banned the use of this technology in criminal investigations, with more likely to follow. People worldwide are more aware of the limitations of face recognition programs and the harms to which they may be (perhaps unintentionally) put, with calls for better regulation.

Different Shades
The Fitzpatrick skin tone scale is used by skin specialists to classify how someone’s skin responds to ultraviolet light. There are six points on the scale with 1 being the lightest skin and 6 being the darkest. People whose skin tone has a lower Fitzpatrick score are more likely to burn in the sun and are at greater risk of skin cancer. People with higher scores have darker skin which is less likely to burn and have a lower risk of skin cancer. A variation of the Fitzpatrick scale, with five points, is used to create the skin tone emojis that you’ll find on most messaging apps in addition to the ‘default’ yellow.

EPSRC supports this blog through research grant EP/W033615/1. 

Collecting mini-beasts and pocket monsters

by Paul Curzon, Queen Mary University of London

A Pokemon creature in the grass
Image by Ramadhan Notonegoro from Pixabay

Satoshi Tajiri created one of the biggest money-making media franchises of all time. It all started with his love of nature and, in particular, mini-beasts. It also eventually took gamers back into the fresh air.

As a child, Satoshi Tajiri loved finding and collecting minibeasts, so spent lots of time outside, exploring nature. But, as Japan became more and more built up, his insect searching haunts disappeared. As the natural world disappeared he was drawn instead inside to video game arcades and those games became a new obsession. He became a super-fan of games and even created a game fanzine called Game Freak where he shared tips on playing different games. It wasn’t just something he sold to friends either: one issue sold 10,000 copies. An artist, Ken Sugimori, who started as a reader of the magazine, ultimately joined Satoshi, illustrating the magazine for him.

Rather than just writing about games, they wanted to create better ones themselves, so morphed Game Freak into a computer game company, ultimately turning it into one of the most successful ever. The cause of that success was their game Pokemon, designed by Satoshi with characters drawn by Ken. It took the idea of that first obsession, collecting minibeasts, and put it into a fun game with a difference.

It wasn’t about killing things, but moving around a game world searching for, taming and collecting monsters. The really creative idea, though, came from the idea of trading. There were two versions of the game and you couldn’t find all the creatures in your own version. To get a full set you had to talk to other people and trade from your collection. It was designed to be a social game from the outset.

It has been suggested that Satoshi is neurodiverse. Whether he is or not, autistic people (as well as everyone else) found that Pokemon was a great way to make friends, something autistic people often find difficult. Pokemon also became more than just a game, turning into a massive media franchise, with trading cards to collect, an animated series and a live action film. It also later sparked a second game craze when Pokemon Go was released. It combined the original idea with augmented reality, taking all those gamers back outside for real, searching for (virtual) beasts in the real world.

 

EPSRC supports this blog through research grant EP/W033615/1. 

“Tlahcuilo”, a visual composer

by Rafael Pérez y Pérez of the Universidad Autónoma Metropolitana, México

A design by Tlahcuilo of circles made of dots

A main goal of computational creativity research is to help us better understand how this essential human characteristic, creativity, works. Creativity is a very complex phenomenon that we are only just beginning to understand: we need to employ all the tools that we have available to fully comprehend it. Computers are a powerful tool that can help us generate that knowledge and reflect on it. By building computer models of the processes we think are behind creativity, we can start to probe how creativity really works.

When you hear someone claiming that a computer agent, whether program, robot or gadget, is creative, the first question you should ask is: what have we learned? What does studying this agent help us to realise or discover about creativity that we did not know before? If you do not get a satisfactory answer, I would hardly call it a computer model of creativity. As well as being able to generate novel, and interesting or useful, things, a creative agent ought to fulfil other criteria: using its knowledge, creating knowledge and evaluating its own work.

Be knowledgeable!

Truly creative agents should draw on their own knowledge to build the things, such as art, that they create. They should use a knowledge-base, not just create things randomly. We aren’t, for example, interested in programs that arbitrarily pick a picture from the web, randomly apply a filter to it and then claim they have generated art.

Create knowledge!

A design by Tlahcuilo of circles made of dots

A creative agent must be able to interpret its own creations in order to generate novel knowledge, and that knowledge should help it produce more original pieces. For example, a program that generates story plots must be able to read its own stories and learn from them, as well as from stories developed by others.

Evaluate it!

To deserve to be called creative, an agent also ought to be able to tell whether the things it has created are good or bad. It should be able to evaluate its work, as well as that produced by similar agents. Its evaluation should also influence the way the generation process works. We don’t want joke creation programs that churn out thousands of ‘jokes’ leaving a human to decide which are actually funny. A creative agent ought to be able to do that itself!

Design me a design

At the moment few, if any, systems fulfil all these criteria. Nevertheless, I suggest they should be the main goals of those doing research in computational creativity. Over the past 20 years I’ve been studying computer models of creativity, aiming to do exactly that. My main research has focused on story generation, but with my team I’ve also developed programs that aim to create novel visual designs. This is the kind of thing someone developing new fabric, wallpaper or tiling patterns might do, for example. With Iván Guerrero and María González I developed a program called TLAHCUILO. It composes visual patterns based on photographs or an empty canvas. It spots geometrical patterns, like repeated shapes, in the picture and then uses them as the basis of a new abstract pattern.

The word “tlahcuilo” refers to painters and writers
in ancient México responsible for preserving
the knowledge and traditions of their people.

To build the system’s knowledge-base, we created a tool that human designers can use to do the same creative task. TLAHCUILO analyses the steps they follow as they develop a composition and registers what it has learnt in its knowledge base. For example, it might note the way the human designer adds elements to make the pattern symmetrical or to add balance. Once these approaches are in its knowledge base it can use them itself in its own compositions. This is a little like the way an apprentice to a craftsman might work, watching the Master at work, gradually building the experience to do it themselves. Our agent similarly builds on this experience to produce its own original outputs. It can also add its own pieces of work to its knowledge-base. Finally, it is able to assess the quality of its designs. It aims to meet the criteria set out above.
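
As a very rough sketch of that apprentice idea only (nothing like the real TLAHCUILO, just the flavour, with invented actions and a toy scoring rule), imagine a knowledge base of composition actions learned by watching designers, which the agent then reuses on a new canvas and scores for itself:

# A very rough sketch of the apprentice idea only -- nothing like the real
# TLAHCUILO. Composition actions observed from human designers sit in a
# knowledge base; the agent reuses them on a new canvas and scores the results.

def mirror_horizontally(points):
    # Learned action: add a mirror image to make the design symmetrical.
    return points | {(-x, y) for (x, y) in points}

def repeat_upwards(points):
    # Learned action: repeat the existing elements higher up the canvas.
    return points | {(x, y + 2) for (x, y) in points}

knowledge_base = [mirror_horizontally, repeat_upwards]

def balance_score(points):
    # Crude self-evaluation: a balanced design has its weight near the centre.
    return -abs(sum(x for x, _ in points)) - abs(sum(y for _, y in points))

canvas = {(1, 0), (2, 1)}   # a few starting elements on the canvas
best = max((action(canvas) for action in knowledge_base), key=balance_score)
print(sorted(best))         # the composition the agent judges most balanced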

Design me a plot

A design by Tlahcuilo based on a fruit stall image

One of TLAHCUILO’s most interesting characteristics is that it uses the same model of creativity that we used to implement MEXICA, our story plot generator (see CS4FN Issue 18). This allows us to compare in detail the differences and similarities between an agent that produces short-stories and an agent that produces visual compositions. We hope this will allow us to generalise our understanding.

Creativity research is a fascinating field. We hope to learn not just how to build creative agents but more importantly to understand what it takes to be a creative human.

EPSRC supports this blog through research grant EP/W033615/1. 

Follow those ants

by Paul Curzon, Queen Mary University of London

Ants climbing on a mushroom obstacle course
Image by Puckel from Pixabay

Ant colonies are really good at adapting to changing situations: far better than humans. Sameena Shah wondered if Artificial Intelligence agents might do better by learning their intelligent behaviour from ants rather than us. She has suggested we could learn from the ants too.

Inspired by staring at ants adapting to new routes to food in the mud as a child, and then later as an adult when ants raided her milk powder, Sameena Shah studied for her PhD how a classic problem in computer science, that of finding the shortest path between points in a network, is solved by ant colonies. For ants this involves finding the shortest paths between food and the nest: something they are very good at. When foraging ants find a source of food they leave a pheromone (i.e., scent) trail as they return, a bit like Hansel and Gretel leaving a trail of breadcrumbs. Other ants follow existing trails to find the food as directly as possible, leaving their own trails as they do. Ants mostly follow the trail containing most pheromone, though not always. Because shorter paths are followed more quickly, there and back, they gain more pheromone than longer ones, so yet more ants follow them. This further reinforces the shortest trail as the one to follow.

There are lots of variations on the way ants actually behave. These variations are being explored by computer scientists as ways for AI agents to work together to solve problems. Sameena devised a new algorithm called EigenAnt to investigate such ant colony-based problem solving. If the above ant algorithm is used, then it turns out longer trails do not disappear even when a shorter path is found, particularly if it is found after a long delay. The original best path has a very strong trail so that it continues to be followed even after a new one is found. Computer-based algorithms add a step whereby all trails fade away at the same rate so that only ones still being followed stay around. This is better but still not perfect. Sameena’s EigenAnt algorithm instead removes pheromone trails selectively. Her software ants select paths using probabilities based on the strength of the trail. Any existing trail could be chosen but stronger trails are more likely to be. When a software ant chooses a trail, it adds its own pheromones but also removes some of the existing pheromone from the trail in a way that depends on the probability of the path being chosen in the first place. This mirrors what real ants do, as studies have shown they leave less pheromone on some trails than others.
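
Here is a simplified simulation of the basic idea: the standard ‘fade evenly’ variant described above, not Sameena’s EigenAnt update rule, with path names and numbers invented for illustration. Ants choose between two paths with probability proportional to pheromone, and the shorter path, being reinforced faster, wins out.

# A simplified ant-colony simulation: the "fade evenly" variant described above,
# not the EigenAnt update rule. Ants pick a path with probability proportional
# to its pheromone; shorter trips deposit more pheromone per journey, so the
# colony converges on the shortest route.

import random
random.seed(42)

lengths = {"short": 2.0, "long": 5.0}
pheromone = {"short": 1.0, "long": 1.0}   # both paths start equally attractive

def choose_path():
    r = random.uniform(0, sum(pheromone.values()))
    for path, level in pheromone.items():
        r -= level
        if r <= 0:
            return path
    return path

for ant in range(200):
    path = choose_path()
    for p in pheromone:                      # every trail fades a little...
        pheromone[p] *= 0.99
    pheromone[path] += 1.0 / lengths[path]   # ...and the used one is topped up

print({p: round(level, 2) for p, level in pheromone.items()})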

Sameena proved mathematical properties of her algorithm as well as running simulations of it. This showed that EigenAnt does find the shortest path and never settles on something less than the best. Better still, it also adapts to changing situations. If a new shorter path arises then the software ants switch to it!

Sameena won the award
for the best PhD in India

There are all sorts of computer science uses for this kind of algorithm, such as in ever-changing computer networks, where we always want to route data via the current quickest route. Sameena, however, has also suggested we humans could learn from this rather remarkable adaptability of ants. We are very bad at adapting to new situations, often getting stuck on poor solutions because of our initial biases. The more successful a particular life path has been for us the more likely we will keep following it, behaving in the same way, even when the situation changes. Sameena found this out when she took her dream job as a Hedge Fund manager. It didn’t go well. Since then, after changing tack, she has been phenomenally successful, first developing AIs for news providers, and then more recently for a bank. As she says: don’t worry if your current career path doesn’t lead to success, there are many other paths to follow. Be willing to adapt and you will likely find something better. We need to nurture lots of possible life paths, not just blindly focus on one.

EPSRC supports this blog through research grant EP/W033615/1.