Oh no! Not again…

What a mess. There’s flour all over the kitchen floor. A fortnight ago I opened the cupboard to get sugar for my hot chocolate. As I pulled out the sugar, it knocked against the bag of flour which was too close to the edge… Luckily the bag didn’t burst and I cleared it up quickly before anyone found out. Now it’s two weeks later and exactly the same thing just happened to my brother. This time the bag did burst and it went everywhere. Now he’s in big trouble for being so clumsy!

Flour cascading everywhere
Image by Anastasiya Komarova from Pixabay (cropped)

In safety-critical industries, such as healthcare and aviation, it is really important that there is a culture of reporting incidents, including near misses. It turns out to matter, too, that the mechanisms for reporting issues are well designed, and that there is a no-blame culture, so that people are encouraged to report incidents, and to do so accurately and without ambiguity.

Was the flour incident my brother’s fault? Should he have been more careful? He didn’t choose to put the sugar in a high cupboard with the flour. Maybe it was my fault? I didn’t choose to put the sugar there either. But I didn’t tell anyone about the first time it happened. I didn’t move the sugar to a lower cupboard so it was easier to reach either. So maybe it was my fault after all? I knew it was a problem, and I didn’t do anything about it. Perhaps thinking about blame is the wrong thing to do!

Now think about your local hospital.

James is a nurse, working in intensive care. Penny is really ill and is being given insulin by a machine that pumps it directly into her vein. The insulin is causing a side effect though – a drop in blood potassium level – and that is life threatening. They don’t have time to set up a second pump, so the doctor decides to stop the insulin for a while and to give a dose of potassium through a second tube controlled by the same pump. James sets up the bag of potassium and carefully programs the pump to deliver it, then turns his attention to his next task. A few minutes later, he glances at the pump again and realises that he forgot to release the clamp on the tube from the bag of potassium. Penny is still receiving insulin, not the potassium she urgently needs. He quickly releases the clamp, and the potassium starts to flow. An hour later, Penny’s blood potassium levels are pretty much back to normal: she’s still ill, but out of danger. Phew! Good job he noticed in time and no-one else knows about the mistake!

Two weeks later, James’ colleague, Julia, is on duty. She makes a similar mistake treating a different patient, Peter. Except that she doesn’t notice her mistake until the bag of insulin has emptied. Because it took so long to spot, Peter needs emergency treatment. It’s touch-and-go for a while, but luckily he recovers.

Julia reports the incident through the hospital’s incident reporting system, so at least it can be prevented from happening again. She is wracked with guilt for making the mistake, but also hopes fervently that she won’t be blamed and punished for what happened.

Don’t miss the near misses

Why did it happen? There are a whole bunch of problems that are nothing to do with Julia or James. Why wasn’t it standard practice always to have a second pump set up for critically ill patients in case such emergency treatment is needed? Why can’t the pump detect which bag the fluid is being pumped from? Why isn’t it really obvious whether the clamp is open or closed? Why can’t the pump detect that? If the first incident – a ‘near miss’ – had been reported, perhaps some of these problems might have been spotted and fixed. How many other times has it happened but not been reported?

What can we learn from this? One thing is that there are lots of ways of setting up and using systems, and some may well make them safer. Another is that reporting “near misses” is really important. They are a valuable source of learning that can alert other people to mistakes they might make and lead to a search for ways of making the system safer, perhaps by redesigning the equipment or changing the way it is used, for example – but only if people tell others about the incidents. Reporting near-misses can help prevent the same thing happening again.

The above was just a story, but it’s based on an account of a real incident… one that has been reported so it might just save lives in the future.

Report it!

Both the mechanisms used for reporting and the culture around it can make a big difference to whether incidents are reported at all. And even when incidents are reported, the reporting systems and culture can help or hinder the learning that results.

Chrystie Myketiak at Queen Mary analysed actual incident reports for the kind of language used by those writing them. She found that the people doing the reporting used different strategies in the way they wrote the reports, depending on the kind of incident. In situations where there was no obvious implication that a person had made a mistake (such as when sterilisation equipment had failed to work) they used one kind of language. Where those involved were likely to be seen as responsible, and so blamed (when a wrong number had been entered into a medical device, for example), they used a different kind of language.

In the latter, where “user errors” might have been involved, those doing the reporting were more likely to write in a way that hid the identity of any person involved, for example saying “The pump was programmed” or referring to “he” or “she” rather than a named person. They were also more likely to write in a way that added ambiguity. For example, in the user-error reports it was often unclear whether the person making the report was the one involved, or whether someone else, such as a witness or someone not involved at all, was writing it.

Writing in these ways, and doing so only in the reports where someone was likely to be blamed, suggests that those completing the reports were aware that their words might be misinterpreted by those who read them. The fact that people might be blamed hung over the reporting.

The result of adding what Chrystie called “precise ambiguity” might be that important information was inadvertently concealed, making it harder to understand why an incident happened and so to work out how best to avoid it. As a result, patient safety might not improve even though the incident was reported. This shows one reason why a strong culture of no-fault reporting is needed if a system is to be made as safe as possible. In the airline industry, which is incredibly safe, there is a clear system of no-fault reporting, with pilots, for example, being praised for reporting near misses rather than being punished for any mistake that led to the near miss.

This work was part of the EPSRC funded CHI+MED research project led by Ann Blandford at UCL looking at design for patient safety. In separate work on the project, Alexis Lewis, at Swansea University, explored how best to design the actual incident reporting forms as part of her PhD. A variety of forms are used in hospitals across the UK and she examined more than 20 different ones. Many had features that would make it harder than necessary for nurses and doctors to report incidents accurately, even if they wanted to report openly so that hospital staff would learn as much as possible from the incidents that did happen. Some forms failed to ask about important facts and many didn’t encourage feedback. It often wasn’t clear how much detail, or even what, should be reported. She used the results to design a new reporting form that avoided these problems and that could be built into a system that encourages the reporting of incidents. Ultimately her work led to changes to the reporting form and process used within at least one health board she was working with.

People make mistakes, but safety does not come from blaming those who make them. That just discourages a learning culture. To really improve safety you need to praise those who report near misses, as well as ensuring that the forms and mechanisms they must use to do so help them provide the information needed.

Updated from the archive, written by the CHI+MED team.

More on …

Magazines …


Subscribe to be notified whenever we publish a new post to the CS4FN blog.



EPSRC supports this blog through research grant EP/W033615/1. 

Tanaka Atsuko: an electric dress

Wearable computing is now increasingly common whether wearing smart watches or clothes that light up. The pioneer of the latter was Japanese artist, Tanaka Atsuko, with her 1950s art work, Electric Dress. It was anything but light though, weighing 50-60kg, clothing her head to foot in a mixture of fluorescent and normal light bulbs.

Light reflecting from strip bulbs in a light bulb
Image by wal_172619 from Pixabay

She was a member of the influential Gutai (meaning concrete, as opposed to abstract) Art Association and Zero Society of Japanese artists, who pioneered highly experimental performance and conceptual art that often included the artist’s actual body. The Electric Dress was an example of this, and she experimented with combining art and electronics in other work too.

Atsuko had studied dress-making as well as art, and did dress-making as a hobby, so fashion was perhaps a natural way for her to express her artistic ideas, but Electric Dress was much more than just fashion as a medium for art. She had the idea for the dress while surrounded by the fluorescent lights of Osaka city centre. She set about designing and making the dress and ultimately walked around the gallery wearing it when it was exhibited at the 2nd Gutai Art Exhibition in Tokyo. Once switched on, it flashed its lights randomly, bathing her in multicoloured light. Wearing it was potentially dangerous: it was incredibly hot and the light was dazzling. There was also a risk of electrocution if anything went wrong! She is quoted as saying after wearing it: “I had the fleeting thought: Is this how a death-row inmate would feel?”

It wasn’t the first time electric lights had been worn: as early as 1884 you could hire women, wearing lights on their heads powered by batteries hidden in their clothes, to light up a cocktail party, for example. However, Tanaka Atsuko’s was certainly the most extreme and influential version of a light dress, and it shows how art and artists can inspire new ideas in technology. Up to then, what constituted wearable computing was more about watch-like gadgets than about adding electronics or computing to clothes.

Now, of course, with LEDs, conductive thread that can be sewn into clothes, and special micro-controllers, an electric dress is much easier to make, and with programming skill you can control the lights in all sorts of creative ways. One example is a dress created for a BBC educational special of Strictly Come Dancing, promoting the BBC micro:bit and showing what it was capable of with creativity. Worn by professional dancer Karen Hauer in a special dance to show it off, the micro:bit’s accelerometer controlled the way the LEDs covering the dress, in place of sequins, lit up in patterns. The faster she spun while dancing, the more furious the flashing of the lights.
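The core idea can be sketched in ordinary Python (hypothetical function and parameter names – the real dress would have used the micro:bit’s own accelerometer and display APIs): read how strongly the dancer is accelerating, and shorten the gap between flashes accordingly.

```python
def flash_interval_ms(accel_milli_g, slowest=500, fastest=50):
    """Map accelerometer strength (in milli-g) to an LED flash interval:
    the harder the spin, the shorter the wait between flashes."""
    reading = max(0, min(accel_milli_g, 2000))  # clamp to a working range
    # Scale linearly from the slowest interval down to the fastest.
    return int(slowest - (slowest - fastest) * reading / 2000)

print(flash_interval_ms(0))     # standing still: a leisurely 500 ms
print(flash_interval_ms(2000))  # spinning hard: a furious 50 ms
```

On a real micro:bit, the loop would repeatedly read the accelerometer, call a function like this, and sleep for the returned interval between changes of LED pattern.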

Now you can easily buy kits to create your own computer-controlled clothes, with online guides to get you started, so if you are interested in fashion and computer science why not start experimenting? Unlike Tanaka Atsuko you won’t have to put your life at risk for your art. Wearable computing, overlapping with soft robotics, is now a major research area, so it could be the start of a great research career.

by Paul Curzon, Queen Mary University of London


Avoiding loneliness with StudyBuddy

A girl in a corner of a red space head on knees
Lonely Image by Foundry Co from Pixabay

University has always been a place where you make great friends for life. Social media means everyone can easily make as many online friends as they like, and ever more students go to university, meaning more potential friends to make. So surely things now are better than ever. And yet many students suffer from loneliness while at university. We somehow seem to have ever greater disconnection the more connections we make. Klara Brodahl realised there was an unmet need here that no one was addressing well and decided to try to solve it for the final year project of her computer science degree. Her solution was StudyBuddy, and with the support of an angel investor she has now set up a startup company and is rolling it out for real.

A loneliness epidemic

In the digital age, university students face an unexpected challenge: loneliness. Although they’re more “connected” than ever through social media and virtual interactions, the quality of these connections is often shallow. A 2023 study, for example, found that 92% of students in the UK feel lonely at some point during their university life. This “loneliness epidemic” has profound effects, contributing to issues like anxiety, depression and struggles with their degree programme.

During her own university years, Klara Brodahl had experienced first hand the challenge of forming meaningful friendships in an environment where everyone seemed socially engaged online but wasn’t always connected in real life. She soon discovered that it wasn’t just her: it was a struggle shared by students across the country. Inspired by this, she set out to write a program that would fill the void in students’ lives and bridge the gap between studying and social life.

Combatting loneliness in the real world

She came up with StudyBuddy: a mobile app designed to combat student loneliness by supporting genuine, in-person connections between university students, not just virtual ones. Her aim was that it would help students meet, study, and connect in real time and in shared spaces. 

She realised that technology does have the potential to strengthen social bonds, but how it’s designed and used makes all the difference. The social neuroscientist John Cacioppo has pointed out that using social media primarily as a destination in its own right often leaves people feeling distant and dissatisfied. However, when technology is designed to serve as a bridge to offline human engagement, it can reduce loneliness and improve well-being. StudyBuddy embodies this approach by encouraging students to connect in person rather than trying to replace meeting face-to-face.

Study together in the real world

Part of making this work is in having reasons to meet for real. Klara realised that the need to study, and the fact that doing this in groups rather than alone can help everyone do better, could provide the excuse for this. StudyBuddy, therefore, integrates study goals with social interaction, allowing friendships to form around shared academic interests—an ideal icebreaker for those who feel nervous in traditional social settings.

The app uses location-based technology to connect students for co-study sessions, making in-person meetings easy and natural. Through a live map, students can see where others are checked in nearby at study spots like libraries, cafes, or student common areas. They can join existing study groups or start their own. The app uses university ID verification to help ensure connections are built on a trusted network.
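Under the hood, the “nearby” part of any such app comes down to a distance calculation between check-in coordinates. A minimal sketch of the idea (hypothetical names and data layout, not StudyBuddy’s actual code), using the standard haversine formula for distance between two points on the Earth:

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (latitude, longitude) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth's radius is roughly 6371 km

def nearby_checkins(me, checkins, radius_km=1.0):
    """Filter other students' check-ins to those within radius_km of my position."""
    return [c for c in checkins
            if distance_km(me[0], me[1], c["lat"], c["lon"]) <= radius_km]
```

A live map would then simply plot whatever this filter returns, refreshed as students check in and out.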

From idea to startup company

Klara didn’t originally plan for StudyBuddy to become a real company. Like many graduates, she thought starting a business was something to perhaps try later, once she had some professional experience from a more ‘normal’ graduate job. However, when the graduate scheme she won a place on after graduating was unexpectedly delayed, she found herself with time on her hands. Rather than do nothing she decided to keep working on the app as a side project. It was at this point that StudyBuddy caught the attention of an angel investor, whose enthusiasm for the app gave Klara the confidence to keep going.

When her graduate scheme finally began, she was therefore already deeply invested in StudyBuddy. Trying to manage both roles, she quickly realised she preferred the challenge and creativity of her startup work over the graduate scheme. And when it became impossible to balance both, she took a leap of faith, quitting her graduate job to focus on StudyBuddy full-time: a decision that has since paid off. She gained early positive feedback, ran a pilot at Queen Mary University of London, and won early funding from investors willing to back what was essentially still an idea, rather than a product with a known market. As a result StudyBuddy has gradually turned into a mission-driven platform, providing students with a safe, real-world way to connect.

Making a difference

StudyBuddy has the potential to transform the university experience by reducing loneliness and fostering authentic, in-person friendships. By rethinking what engagement in the digital age means, the app also serves as a model for how technology can promote meaningful social interaction more generally. Klara has shown that with thoughtful design, technology can be a powerful tool for bridging digital and physical divides, creating a campus environment where students thrive both academically and socially. Her experience also shows how the secret to being a great entrepreneur is to be able to see a human need that no one else has seen or solved well. Then, if you can come up with a creative solution that really solves that need, your ideas can become reality and really make a difference to people’s lives.

– Klara Brodahl, StudyBuddy and Paul Curzon, Queen Mary University of London


Byte Queens

Women have made vital contributions to computer science ever since Ada Lovelace, almost 200 years ago, debugged the first algorithm for an actual computer (an algorithm written by Charles Babbage) – more on CS4FN’s Women Portal. Despite this, women make up only a fraction (25%) of the STEM workforce: only about a fifth of senior tech roles and only a fifth of computer science students are women. The problem starts early: research by the National Centre for Computing Education suggests that female students’ intention to study computing drops off between the ages of 8 and 13. Ilenia Maietta, a computer science student at Queen Mary, talks about her experiences of studying in a male-dominated field and how she is helping to build a network for other women in tech.

Ilenia’s love for science hasn’t wavered since childhood and she is now studying for a master’s degree in computer science – but back in sixth form, the decision was between computer science and chemistry:

“I have always loved science, and growing up my dream was to become a scientist in a lab. However, in year 12, I dreaded doing the practical experiments and all the preparation and calculations needed in chemistry. At the same time, I was working on my computer science programming project, and I was enjoying it a lot more. I thought about myself 10 years in the future and asked myself ‘Where do I see myself enjoying my work more? In a lab, handling chemicals, or in an office, programming?’ I fortunately have a cousin who is a biologist, and her partner is a software engineer. I asked them about their day-to-day work, their teams, the projects they worked on, and I realised I would not enjoy working in a science lab. At the same time I realised I could definitely see myself as a computer scientist, so maybe child me knew she wanted to be a scientist, just a different kind.”

The low numbers of female students in computer science classrooms can have the knock-on effect of making girls feel like they don’t belong. Faulty stereotypes that women don’t belong in computer science, together with the behaviour of male peers, have continued to have an impact on Ilenia’s education:

“Ever since I moved to the UK, I have been studying STEM subjects. My school was a STEM school and it was male-dominated. At GCSEs, I was the only girl in my computer science class, and at A-levels only one of two. Most of the time it does not affect me whatsoever, but there were times it was (and is) incredibly frustrating because I am not taken seriously or treated differently because I am a woman, especially when I am equally knowledgeable or skilled. It is also equally annoying when guys start explaining to me something I know well, when they clearly do not (i.e. mansplaining): on a few occasions I have had men explain to me – badly and incorrectly – what my degree was to me, how to write code or explain tech concepts they clearly knew nothing about. 80% of the time it makes no difference, but that 20% of the time feels heavy.”

Many students choose computer science because of the huge variety of topics that you can go on to study. This was the case for Ilenia, especially being able to apply her new-found knowledge to lots of different projects:

“Definitely getting to explore different languages and trying new projects: building a variety of them, all different from each other has been fun. I really enjoyed learning about web development, especially last semester when I got to explore React.js: I then used it to make my own portfolio website! Also the variety of topics: I am learning about so many aspects of technology that I didn’t know about, and I think that is the fun part.”

“I worked on [the portfolio website] after I learnt about React.js and Next.js, and it was the very first time I built a big project by myself, not because I was assigned it. It is not yet complete, but I’m loving it. I also loved working on my EPQ [A-Level research project] when I was in school: I was researching how AI can be used in digital forensics, and I enjoyed writing up my research.”

Like many university students, Ilenia has had her fair share of challenges. She discussed the biggest of them all: imposter syndrome, as well as how she overcame it. 

“I know [imposter syndrome is] very common at university, where we wonder if we fit in, if we can do our degree well. When I am struggling with a topic, but I am seeing others around me appear to understand it much faster, or I hear about these amazing projects other people are working on, I sometimes feel out of place, questioning if I can actually make it in tech. But at the end of the day, I know we all have different strengths and interests, so because I am not building games in my spare time, or I take longer to figure out something does not mean I am less worthy of being where I am: I got to where I am right now by working hard and achieving my goals, and anything I accomplish is an improvement from the previous step.”

Alongside her degree, Ilenia also supports a small organisation called Byte Queens, which aims to connect girls and women in technology with community support.

“I am one of the awardees for the Amazon Future Engineer Award by the Royal Academy of Engineering and Amazon, and one of my friends, Aurelia Brzezowska, in the programme started a community for girls and women in technology to help and support each other, called Byte Queens. She has a great vision for Byte Queens, and I asked her if there was anything I could do to help, because I love seeing girls going into technology. If I can do anything to remove any barriers for them, I will do it immediately. I am now the content manager, so I manage all the content that Byte Queens releases as I have experience in working with social media. Our aim is to create a network of girls and women who love tech and want to go into it, and support each other to grow, to get opportunities, to upskill. At the Academy of Engineering we have something similar provided for us, but we wanted this for every girl in tech. We are going to have mentoring programs with women who have a career in tech, help with applications, CVs, etc. Once we have grown enough we will run events, hackathons and workshops. It would be amazing if any girl or woman studying computer science or a technology related degree could join our community and share their experiences with other women!”

For women and girls looking to excel in computer science, Ilenia has this advice:

“I would say don’t doubt yourself: you got to where you are because you worked for it, and you deserve it. Do the best you can in that moment (our best doesn’t always look the same at different times of our lives), but also take care of yourself: you can’t achieve much if you are not taking care of yourself properly, just like you can’t do much with your laptop if you don’t charge it. And finally, take space: our generation has the possibility to reframe so much wrongdoing of the past generations, so don’t be afraid to make yourself, your knowledge, your skills heard and valued. Any opportunities you get, any goals you achieve are because you did it and worked for it, so take the space and recognition you deserve.”

Ilenia also highlighted the importance of taking opportunities to grow professionally and personally throughout her degree, “taking time to experiment with careers, hobbies, sports to discover what I like and who I want to become” mattered enormously. Following her degree, she wants to work in software development or cyber security. Once the stress of coursework and exams is gone, Ilenia intends to “try living in different countries for some time too”, though she thinks that “London is a special place for me, so I know I will always come back.”

Ilenia encourages all women in tech who are looking for a community and support, to join the Byte Queens community and share with others: “the more, the merrier!”

– Ilenia Maietta and Daniel Gill, Queen Mary University of London

Visit the Byte Queens website for more details. Interested women can apply here.


Double or nothing: an extra copy of your software, just in case

Ariane 5 on the launchpad
Ariane 5 on the launch pad. Photo Credit: (NASA/Chris Gunn) Public Domain via Wikimedia Commons.

If you spent billions of dollars on a gadget you’d probably like it to last more than a minute before it blows up. That’s what happened to a European Space Agency rocket. How do you make sure the worst doesn’t happen to you? How do you make machines reliable?

A powerful way to improve reliability is to use redundancy: double things up. A plane with four engines can keep flying if one fails. Worried about a flat tyre? You carry a spare in the boot. These situations are about making physical parts reliable. Most machines are a combination of hardware and software though. What about software redundancy?

You can have spare copies of software too. Rather than a single version of a program you can have several copies running on different machines. If one program goes wrong another can take over. It would be nice if it was that simple, but software is different to hardware. Two identical programs will fail in the same way at the same time: they are both following the same instructions so if one goes wrong the other will too. That was vividly shown by the maiden flight of the Ariane 5 rocket. Less than 40 seconds from launch things went wrong. The problem was to do with a big number that needed 64 bits of storage space to hold it. The program’s instructions moved it to a storage place with only 16 bits. With not enough space, the number was mangled to fit. That led to calculations by its guidance system going wrong. The rocket veered off course and exploded. The program was duplicated, but both versions were the same so both agreed on the same wrong answers. Seven billion dollars went up in smoke.
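A hedged sketch of that kind of mangling in Python (Ariane 5’s software was actually written in Ada, where the failed conversion raised an unhandled exception, but the effect of squeezing a big number into 16 bits is the point):

```python
def to_int16(value):
    """Force a number into 16 bits of signed storage, wrapping round
    (mangling) anything too big to fit instead of keeping its true value."""
    v = int(value) & 0xFFFF              # keep only the low 16 bits
    return v - 0x10000 if v >= 0x8000 else v

print(to_int16(30_000))  # fits in 16 bits, so survives intact: 30000
print(to_int16(40_000))  # too big for 16 bits: mangled to -25536
```

A guidance system fed -25536 when the true value is 40000 will, unsurprisingly, steer the rocket badly wrong.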

Can you get round this? One solution is to get different teams to write programs to do the same thing. The separate teams may make mistakes, but surely they won’t all get the same thing wrong! Run them on different machines and let them vote on what to do. Then as long as more than half agree on the right answer, the system as a whole will do the right thing. That’s the theory anyway. Unfortunately in practice it doesn’t always work. Nancy Leveson, an expert in software safety from MIT, ran an experiment in which different programmers were independently asked to write the same program. She found that they wrote code that gave the same wrong answers. Even if it had used independently written redundant code, it’s still possible Ariane 5 would have exploded.
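The voting step of this scheme (often called N-version programming) is easy to sketch; here is a minimal, hypothetical Python version:

```python
from collections import Counter

def majority_vote(answers):
    """Return the answer given by more than half of the redundant versions,
    or raise an error if no answer has a clear majority."""
    winner, count = Counter(answers).most_common(1)[0]
    if count * 2 <= len(answers):
        raise RuntimeError("no majority: the redundant versions disagree")
    return winner

# Three independently written versions compute the same thing; one is buggy.
print(majority_vote([42, 42, 41]))  # the two correct versions outvote the bug
```

As Leveson’s experiment showed, this only helps if the versions fail independently: if two of the three share the same mistaken answer, the vote confidently picks the wrong one.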

Redundancy is a big help but it can’t guarantee software works correctly. When designing systems to be highly reliable you have to assume things will still go wrong. You must still have ways to check for problems and to deal with them so that a mistake (whether by human or machine) won’t turn into a disaster.

Paul Curzon, Queen Mary University of London



Joyce Weisbecker: was a teenager the first indie games developer?


by Paul Curzon, Queen Mary University of London

Video games were once considered to be only of interest to boys, and the early games industry was dominated by men. Despite that, a teenage girl, Joyce Weisbecker, was one of the pioneers of commercial game development.

Originally, video games were seen as toys for boys. Gradually it was realised that there was a market for female game players too, if only suitably interesting games were developed, so the games companies eventually started to tailor games for them. That also meant, very late in the day, they started to employ women as games programmers. Now it is a totally normal thing to do. However, women were also there from the start, designing games. The first female commercial games programmer (and possibly the first independent games developer of all) was Joyce Weisbecker. Working as an independent contractor, she wrote her first games for sale in 1976 for the RCA Studio II games console that was released in January 1977.

RCA Studio II video games console
Image by WikimediaImages from Pixabay

Joyce was only a teenager when she started to learn to program computers and wrote her first games. She learnt on a computer that her engineer father designed and built at home called FRED (Flexible Recreational and Educational Device). He worked for RCA (originally the Radio Corporation of America), one of the major electronics, radio, TV and record companies of the 20th century. The company diversified their business into computers and Joyce’s father designed them for RCA (as well as at home for a hobby). He also invented a programming language called CHIP-8 that was used to program the RCA computers. This all meant Joyce was in a position to learn CHIP-8 and then to write programs for RCA computers including their new RCA Studio II games console before the machine was released, as a post-high school summer job.

The code for two games that she wrote in 1976, called Snake Race and Jackpot, was included in the manual for an RCA microcomputer called the COSMAC VIP, and she also wrote more programs for it the following year. These computers came in kit form for the buyer to build themselves. Her programs were example programs included for the owner to type in and then play once they had built the machine. Including them meant their new computer could do something immediately.

She also wrote the first game that she was paid for that summer of 1976. It was for the RCA Studio II games console, and it earned her $250 – well over $1000 in today’s money, so worth having for a teenager who would soon be going to college. It was a quiz program, called TV School House I. It pitted two people against each other, answering questions on topics such as maths, history and geography, with two levels of difficulty. Questions were read from question booklets, and whoever typed in the number of the multiple-choice answer fastest got the points for that question, with more points the faster they were. There is currently a craze for apps that augment physical games, and this was a very early version of the genre.

Speedway screen from Wikimedia

She quickly followed it with racing and chase games, Speedway and Tag, though as screens were still very limited then, the graphics of all these games were very, very simple – e.g. racing rectangles around a blocky, rectangular racing track.

Unfortunately, the RCA games console itself was a commercial failure as it couldn’t compete with consoles like the Atari 2600, so RCA soon ended production. Joyce, meanwhile, retired from the games industry, still a teenager, ultimately becoming a radar signal processing engineer.

While games like Pong had come much earlier, the Atari 2600, which is credited with launching the first video game boom, was released in 1977, and its version of Space Invaders, one of the most influential video games of all time, was released in 1980. Joyce really was at the forefront of commercial games design. As a result her papers related to games programming, including letters and program listings, are now archived in the Strong National Museum of Play in New York.

Magazines …

Front cover of CS4FN issue 29 - Diversity in Computing

Subscribe to be notified whenever we publish a new post to the CS4FN blog.



This blog is funded through EPSRC grant EP/W033615/1.

Mary Ann Horton and the invention of email attachments

Mary Ann Horton was transitioning to female at the time she made one of her biggest contributions to our lives: a simple computer science idea with a big impact – a program that allowed binary email attachments.

Now we take the idea of sending each other multimedia files – images, video, sound clips, programs, etc – for granted, whether by email or social networks. Back in the 1970s, before even the web had been invented, people were able to communicate by email, but it was all text. Email programs worked on the basis that people were just sending words, or more specifically streams of characters, to each other. An email message was just a long sequence of characters sent over the Internet. Everything in computers is coded as binary, 1s and 0s, but text has a special representation. Each character has its own code of 1s and 0s, which can also be thought of as a binary number, but which is displayed as the character by programs that process it. Today, computers use a universally accepted code called Unicode, but originally most adopted a standard code called ASCII. All these codes are just allocations of patterns of 1s and 0s to each character. In ASCII, ‘a’ is represented by 1100001, or the number 97, whereas ‘A’ is 1000001, or the number 65, for example. These codes are only 7 bits long, and as computers store data in bytes of 8 bits at a time, this means that not all patterns of binary (so not all representable numbers) correspond to one of the displayable characters that email programs expected messages to contain.
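
You can explore these character codes for yourself. A quick sketch in Python, whose built-in ord and chr functions convert between characters and their code numbers:

```python
# Looking up character codes in Python: ord gives the number for a
# character, chr turns a number back into its character.
print(ord("a"), bin(ord("a")))  # 97 0b1100001
print(ord("A"), bin(ord("A")))  # 65 0b1000001
print(chr(97), chr(65))         # a A
```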

That is fine if all you have done is use programs like text editors that output characters, so you are guaranteed to be sending printable characters. The problem was that other kinds of data, whether images or runnable programs, are not stored as sequences of characters. They are more general binary files, meaning the data is a long sequence of byte-sized patterns of 1s and 0s, and what those 1s and 0s mean depends on the kind of data and the representation used. If email programs were given such data to send, pass on or receive, they would reject it, or more likely mangle it, as not corresponding to characters they could display. The files didn’t even have to be in non-character formats, as at the time some computer systems used a completely different code for characters. This meant text emails could also be mangled just because they passed through a computer using a different character code.

Mary Ann realised that this was all too restrictive for what people would be needing computers to do. Email needed to be more flexible. However, she saw that there was a really easy solution. She wrote a program, called uuencode, that could take any binary file and convert it to one that was slightly longer but contained only characters. A second program she wrote, called uudecode, converted these files of characters back to the original binary file, to be saved by the receiving email program exactly as it was on the source machine.

All the uuencode program did was take 3 bytes (24 bits) of the binary file at a time and split them into groups of 6 bits, each effectively representing a number from 0 to 63. Adding 32 to each of these numbers put them in the range 32 to 95, and those are the numbers, so binary patterns, of printable characters that the email programs expected. Each three bytes were now 4 printable characters. These could be added to the text of an email, though with a special start and end sequence included to identify it as something to decode. uudecode just did this conversion backwards, turning each group of 4 characters back into the original three bytes of binary.
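
Here is a minimal sketch of that core conversion in Python (the function names are our own, and the real uuencode format also adds a length character to each line and ‘begin’/‘end’ lines around the data – this just shows the 3-bytes-to-4-characters trick):

```python
# A sketch of the idea behind uuencode/uudecode.

def encode_3_bytes(chunk: bytes) -> str:
    """Turn up to 3 bytes into 4 printable characters."""
    chunk = chunk.ljust(3, b"\0")  # pad a final short chunk with zero bytes
    n = (chunk[0] << 16) | (chunk[1] << 8) | chunk[2]  # 24 bits as one number
    # Split into four 6-bit groups (values 0..63), then add 32 so every
    # group lands on a printable character code (32..95).
    return "".join(chr(((n >> shift) & 0x3F) + 32) for shift in (18, 12, 6, 0))

def decode_4_chars(text: str) -> bytes:
    """The reverse: 4 characters back into the original 3 bytes."""
    n = 0
    for c in text:
        n = (n << 6) | ((ord(c) - 32) & 0x3F)  # rebuild the 24-bit number
    return bytes([(n >> 16) & 0xFF, (n >> 8) & 0xFF, n & 0xFF])

data = bytes([0b10110100, 0b11111111, 0b00000001])  # any binary data at all
encoded = encode_3_bytes(data)  # four characters safe to put in an email
assert decode_4_chars(encoded) == data  # and back again, unchanged
```

Chaining this over a whole file, three bytes at a time, gives a text-only version about a third longer than the original, which is why attachments made emails noticeably bigger.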

Email attachments had been born, and ever since, communication programs, whether email, chat or social media, have allowed binary files, and so multimedia, to be shared in similar ways. By seeing a coming problem, inventing a simple way to solve it and then writing the programs, Mary Ann Horton had made computers far more flexible and useful.

Paul Curzon, Queen Mary University of London

Related Magazines …

cs4fn issue 14 cover




Pac-Man and Games for Girls

In the beginning video games were designed for boys…and then came Pac-Man.

Pac-man eating dots
Image by OpenClipart-Vectors from Pixabay

Before mobile games, games consoles and PC-based games, video games first took off in arcades. Arcade games were very big business, earning 39 billion dollars at their peak in the 1980s. Games were loaded into bespoke coin-operated arcade machines. For a game to do well, someone had to buy the machines, whether actual gaming arcades or bars, cafes, colleges, shopping malls, … Then someone had to play them. Originally boys played arcade games the most, and so games were targeted at them. Most games had a focus on shooting things, like Asteroids and Space Invaders, or had some link to sports, based on the original arcade game Pong. Girls were largely ignored by the designers… But then came Pac-Man.

Pac-Man, created by a team led by Toru Iwatani, is a maze game where the player controls the Pac-Man character as it moves around a maze, eating dots while being chased by the ghosts: Blinky, Pinky, Inky and Clyde. Special power pellets around the maze, when eaten, allow Pac-Man to chase the ghosts for a while instead of being chased.

Pac-Man ultimately made around 19 billion dollars in today’s money, making it the biggest money-making video arcade game of all time. How did it do it? It was the first game that was played by more females than males. It showed that girls would enjoy playing games if only the right kind of games were developed. Suddenly, and rather ironically given its name, there was a reason for the manufacturers to take notice of girls, not just boys.

A Pac-man like ghost
Image by OpenClipart-Vectors from Pixabay

It revolutionised games in many ways, showing the potential of different kinds of features to give a game this much broader appeal. Most obviously, Pac-Man did this by turning the tide away from shoot-’em-up space games and sports games towards action games where characters were the star of the game, and that was one of its inventor Toru Iwatani’s key aims. To play, you control Pac-Man rather than just a gun, blaster, tennis racket or golf club. It paved the way for Donkey Kong, Super Mario, and the rest (so if you love Mario and all his friends, then thank Pac-Man). Ultimately, it forged the path for the whole idea of avatars in games too.

It was the first game to use power-ups where, by collecting certain objects, the character gains extra powers for a short time. The ghosts were also characters controlled by simple AI – they didn’t just behave randomly or follow some fixed algorithm controlling their path, but reacted to what the player did, and each had its own personality in the way it behaved.
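
To get a feel for how different targeting rules give ghosts different personalities, here is a toy sketch in Python. It is our own simplified invention, not the real Pac-Man code – the function names and the one-square-at-a-time movement are illustrative assumptions – but a chaser aiming at the player and an ambusher aiming a few squares ahead of the player are in the spirit of how Blinky and Pinky are usually described:

```python
# A toy sketch of ghosts with different "personalities" (illustrative
# only, not the real Pac-Man algorithms).

def chase_step(ghost, target):
    """Move the ghost one square, closing the bigger gap first."""
    gx, gy = ghost
    tx, ty = target
    if abs(tx - gx) >= abs(ty - gy):
        gx += (tx > gx) - (tx < gx)  # step towards the target horizontally
    else:
        gy += (ty > gy) - (ty < gy)  # or vertically
    return (gx, gy)

def chaser_target(pacman, direction):
    """A Blinky-like ghost heads straight for the player."""
    return pacman

def ambusher_target(pacman, direction):
    """A Pinky-like ghost aims a few squares ahead of the player instead."""
    px, py = pacman
    dx, dy = direction
    return (px + 4 * dx, py + 4 * dy)

pacman, heading = (10, 10), (1, 0)  # player position and direction of travel
chaser = chase_step((0, 0), chaser_target(pacman, heading))
ambusher = chase_step((20, 10), ambusher_target(pacman, heading))
```

Because each ghost computes a different target from the same player position, they close in from different sides rather than bunching up in a single line behind the player.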

Because of its success, maze and character-based adventure games became popular among manufacturers, but more importantly designers became more adventurous and creative about what a video game could be. It was also the first big step on the long road to women being fully accepted to work in the games industry. Not bad for a character based on a combination of a pizza and the Japanese symbol for “mouth”.

– Paul Curzon, Queen Mary University of London

Magazines …

Front cover of CS4FN issue 29 - Diversity in Computing




Collaborative community coding & curating

Equality, diversity and inclusion in the R Project

You might not think of a programming language like Python or Scratch as being an ‘ecosystem’ but each language has its own community of people who create and improve its code (compilers, library code,…), flush out the bugs, introduce new features, document any changes and write the ‘how to’ guides for new users. 

R is one such programming language. It’s named after its two co-inventors (Ross Ihaka and Robert Gentleman) and is used by around two million people around the world. People working in all sorts of jobs and industries (for example finance, academic research, government, data journalists) use R to analyse their data. The software has useful tools to help people see patterns in their data and to make sense of that information. 

It’s also open source which means that anyone can use it and help to improve it, a bit like Wikipedia where anyone can edit an article or write a new one. That’s generally a good thing because it means everyone can contribute but it can also bring problems. Imagine writing an essay about an event at your school and sharing it with your class. Then imagine your classmates adding paragraphs of their own about the event, or even about different events. Your essay could soon become rather messy and you’d need to re-order things, take bits out and make sure people hadn’t repeated something that someone had already said (but in a slightly different way). 

When changes are made to software people also want to keep a note not just of the ‘words’ added (the code) but also to make a note of who added what and when. Keeping good records, also known as documentation, helps keep things tidy and gives the community confidence that the software is being properly looked after.

Code and documentation can easily become a bit chaotic when created by different people in the community, so there needs to be a core group of people keeping things in order. Fortunately there is – the ‘R Core Team’ – but these days its membership doesn’t really reflect the community of R users around the world. R was first used in universities, particularly by more privileged statistics professors from European countries and North America (the Global North), and so R’s development tended to be more in line with their academic interests. R needs input and ideas from a more diverse group of active developers and decision-makers, in academia and beyond, to ensure that the voices of minoritised groups are included, as well as the voices of younger people, particularly as many of the current core group are approaching retirement age.

Dr Heather Turner from the University of Warwick is helping to increase the diversity of those who develop and maintain the R programming language, and she’s been given funding by the EPSRC* to work on this. Her project is a nice example of someone bringing together two different areas in her work. She is mixing software development (tech skills) with community management (people skills) to support a range of colleagues who use R and might want to contribute to developing it in future, but perhaps don’t feel confident to do so yet.

Development can involve things like fixing bugs, helping to improve the behaviour or efficiency of programs, or translating error messages that currently appear on screen in English into different languages. Heather and her colleagues are working with the R community to create a more welcoming environment for ‘newbies’ that encourages participation, particularly from people who are in the community but are not currently represented, or are under-represented, in the core group, and she’s working collaboratively with other community organisations such as R-Ladies, LatinR and RainbowR. Another task she’s involved in is producing an easier-to-follow ‘How to develop R’ guide.

There are also people who work in universities but who aren’t academics (they don’t teach or do research but do other important jobs that help keep things running well), and some of them use R too and can contribute to its development. However, their contributions have been less likely to get proper recognition or career rewards compared with those made by academics, which is a little unfair. That’s largely because of the way the academic system is set up.

Generally it’s academics who apply for funding to do new research; they do the research and then publish papers in academic journals on the research they’ve done, and these publications are evidence of their work. But the important work that support staff do in maintaining the software isn’t classified as new research, so it doesn’t generally make it into the journals, and their contribution can get left out. They also don’t necessarily get the same career support or mentoring for their development work. This can make people feel a bit sidelined or discouraged.

To try and fix this, and to make things fairer, the Society of Research Software Engineering was created to champion a new type of job in computing – the Research Software Engineer (RSE). These are people whose job is to develop and maintain (engineer) the software that is used by academic researchers (sometimes in R, sometimes in other languages). The society wants to raise awareness of the role and to build a community around it. You can find out what’s needed to become an RSE below.

Heather is in a great position to help here too, as she has a foot in each camp – she’s both an academic and a Research Software Engineer. She’s helping to establish RSEs as an important role in universities while also further expanding the diversity of people involved in developing R, for its long-term sustainability.


Related careers

QMUL

Below is an example of a Research Software Engineer role which was advertised at QMUL in April 2024 – you can read the original advert and see a copy of the job description / person specification information which is archived at the “Jobs in Computer Science” website. This advert was looking for an RSE to support a research project “at the intersection of Natural Language Processing (NLP) and multi-modal Machine Learning, with applications in mental health.”

QMUL also has a team of Research Software Engineers and you can read about what they’re working on and their career here (there are also RSEs attached to different projects across the university, as above).

Archived job adverts from elsewhere

Below are some examples of RSE jobs (these particular vacancies have now closed but you can read about what they were looking for and see if that sort of thing might interest you in the future). The links will take you to a page with the original job advert + any Job Description (JD – what the person would actually be doing) and might also include a Person Specification (PS – the type of person they’re looking for in terms of skills, qualifications and experience) – collectively these are often known as ‘job packs’.

Note that these documents are written for quite a technical audience – the people who’d apply for the jobs will have studied computer science for many years and will be familiar with how computing skills can be applied to different subjects.

1. The Science and Technology Facilities Council (STFC) wanted four Research Software Engineers (who’d be working either in Warrington or Oxford) on a chemistry-related project (‘computational chemistry’ – “a branch of chemistry that uses computer simulation to assist in solving chemical problems”) 

2. The University of Cambridge was looking for a Research Software Engineer to work in the area of climate science – “Computational modelling is at the core of climate science, where complex models of earth systems are a routine part of the scientific process, but this comes with challenges…”

3. University College London (UCL) wanted a Research Software Engineer to work in the area of neuroscience (studying how the brain works, in this case by analysing the data from scientists using advanced microscopy).





Cartoons, comics and computer games – Ada Lovelace’s graphic novel

by Jane Waite, Queen Mary University of London

In 2009, for Ada Lovelace Day, a comic strip about Ada and Babbage was created: not quite 100% historically accurate, but certainly in the spirit of Lovelace’s love of science and mathematics. Her thrilling adventures in Victorian London have now become a graphic novel.

Image by Andrew Martin from Pixabay

In her own time, Ada was captured as a demure and beautiful young woman in portraits and sketches that were shared in books about her father. Ada would have sat for hours to have her portrait drawn, but she would have known about quickly drawn cartoons too. Newspapers and magazines such as Punch contained satirical cartoons of the day. They were very influential in the 1840s. Faraday was drawn in Punch, but Babbage and Lovelace didn’t make it then. Now, though, they are crime-busting mathematical superheroes in their very own alternate history of computing comic book.

Books, films, even a musical have been created about Ada Lovelace, but as we write the circle has not quite been closed. There are no computer games about Ada. But maybe you could change that.


Further reading

The Thrilling Adventures of Lovelace and Babbage: The (Mostly) True Story of the First Computer by Sydney Padua.


This article was first published on the original CS4FN website and a copy can be found on page 17 of issue 20 of the CS4FN magazine, which celebrates the work of Ada Lovelace. You can also read some of our other posts about Ada Lovelace and she features as one of our Women in Computing poster set.



