Protecting your fridge

by Jo Brodie and Paul Curzon, Queen Mary University of London

Ever been spammed by your fridge? It has happened, but Queen Mary’s Gokop Goteng and Hadeel Alrubayyi aim to make it less likely…

Image by Gerd Altmann from Pixabay

Gokop has a longstanding interest in improving computing networks and did his PhD on cloud computing (at the time known as grid computing), exploring how computing could be treated more like gas and electricity utilities where you only pay for what you use. His current research is about improving the safety and efficiency of the cloud in handling the vast amounts of data, or ‘Big Data’, used in providing Internet services. Recently he has turned his attention to the Internet of Things.

The Internet of Things is a network of connected devices, some of which you might have in your home or school, such as smart fridges, baby monitors, door locks, lighting and heating that can be switched on and off with a smartphone. These devices contain a small computer that can send and receive data when connected to the Internet, which is how your smartphone controls them. This connectivity brings new problems, though: any device that is connected to the Internet can potentially be hacked, which can be very harmful. For example, in 2013 a domestic fridge was hacked and included in a ‘botnet’ of devices which sent thousands of spam emails before it was shut down (can you imagine getting spam email from your fridge?!).

A domestic fridge was hacked
and included in a ‘botnet’ of devices
which sent thousands of spam emails
before it was shut down.

The computers in these devices don’t usually have much processing power: they’re smart, but not that smart. That is perfectly fine for normal use, but it becomes a problem when they must also run security software to keep out hackers while getting on with their actual job, like running a fridge. It’s important to prevent devices from being infected with malware (bad programs that hackers use to, for example, take over a computer), and work done by Gokop and others has helped develop better malware-detecting security algorithms that take account of the smaller processing capacity of these devices.

One approach he has been exploring with PhD student Hadeel Alrubayyi is to draw inspiration from the human immune system: building artificial immune systems to detect malware. Your immune system is very versatile and can quickly defend you against new bugs you haven’t encountered before. It protects you from new illnesses, not just illnesses you have previously fought off. How? Using special blood cells, such as T-cells, which detect and attack rogue cells invading the body. They spot patterns that tell the difference between a person’s own healthy cells and rogue or foreign cells. Hadeel and Gokop have shown that applying similar techniques to Internet of Things software can outperform other techniques for spotting new malware, detecting more problems while needing less computing power.
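
One classic artificial immune system idea is ‘negative selection’: generate lots of random detectors and throw away any that match normal behaviour, so the survivors can only ever match the abnormal, just as T-cells that would attack the body’s own cells are eliminated. The Python sketch below is a minimal toy illustration of that idea, with invented pattern sizes, thresholds and data; it is not the actual algorithm Hadeel and Gokop developed.

```python
import random

# Toy negative-selection sketch: "detectors" play the role of T-cells.
# Each behaviour sample is a short bit string (imagine features of a
# smart device's network traffic). All sizes here are invented.

PATTERN_LEN = 8     # length of each behaviour pattern
MATCH_BITS = 6      # bits that must agree for a detector to "match"
NUM_DETECTORS = 50

def matches(detector, pattern):
    """A detector matches a pattern if enough bits agree."""
    return sum(d == p for d, p in zip(detector, pattern)) >= MATCH_BITS

def train(self_patterns):
    """Keep only random detectors that match NO normal ('self')
    behaviour, just as T-cells that react to the body's own healthy
    cells are eliminated during their development."""
    detectors = []
    while len(detectors) < NUM_DETECTORS:
        candidate = [random.randint(0, 1) for _ in range(PATTERN_LEN)]
        if not any(matches(candidate, s) for s in self_patterns):
            detectors.append(candidate)
    return detectors

def is_suspicious(detectors, pattern):
    """Anything a surviving detector matches is, by construction, not
    normal behaviour, even if this malware has never been seen before."""
    return any(matches(d, pattern) for d in detectors)

normal = [[0, 0, 1, 0, 1, 1, 0, 1], [0, 0, 1, 1, 1, 1, 0, 1]]
detectors = train(normal)
print(is_suspicious(detectors, [1, 1, 0, 1, 0, 0, 1, 0]))  # likely True
print(is_suspicious(detectors, normal[0]))                 # always False
```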

Gokop is also using his skills in cloud computing and data science to enhance student employability and explore how Queen Mary can be a better place for everyone to do well. Whether a person, organisation or smart fridge Gokop aims to help you reach your full potential!


The gender shades audit

by Jo Brodie, Queen Mary University of London

Face recognition technology is used widely, such as at passport controls and by police forces. What if it isn’t as good at recognising faces as it has been claimed to be? Joy Buolamwini and Timnit Gebru tested three different commercial systems and found that they were much more likely to wrongly classify darker skinned female faces compared to lighter or darker skinned male faces. The systems were not reliable.

Different skin tone cosmetics
Image by Stefan Schweihofer from Pixabay

Face recognition systems are trained to detect, classify and even recognise faces based on a bank of photographs of people. Joy and Timnit examined two banks of images used to train the systems and found that around 80 percent of the photos used were of people with lighter coloured skin. If the photographs aren’t fairly balanced in terms of including people of different genders and ethnicities, then the resulting technologies will inherit that bias too. In effect, the systems examined were being trained mainly to recognise lighter skinned people.

The Pilot Parliaments Benchmark

Joy and Timnit decided to create their own set of images and wanted to ensure that these covered a wide range of skin tones and had an equal mix of men and women (‘gender parity’). They did this using photographs of members of parliaments around the world which are known to have a reasonably equal mix of men and women. They selected parliaments both from countries with mainly darker skinned people (Rwanda, Senegal and South Africa) and from countries with mainly lighter skinned people (Iceland, Finland and Sweden).

They labelled all the photos according to gender (they had to make some assumptions based on name and appearance if pronouns weren’t available) and used a special scale called the Fitzpatrick scale to classify skin tones (see Different Shades below). The result was a set of photographs labelled as dark male, dark female, light male, light female, with a roughly equal mix across all four categories: this time, 53 percent of the people were light skinned (male and female).

Testing times

Joy and Timnit tested the three commercial face recognition systems against their new database of photographs (a fair test covering the wide range of faces a recognition system might come across) and this is where they found that the systems were less able to correctly identify particular groups of people. The systems were very good at spotting lighter skinned and darker skinned men, but were less able to correctly identify darker skinned women, and women overall. The tools, trained on sets of data with a bias built into them, inherited those biases, and this affected how well they worked.
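
The heart of such an audit is simple to state: never trust a single overall score, compute the error rate for each group separately. Here is a tiny Python sketch of that idea, using made-up results rather than Joy and Timnit’s real data:

```python
# Toy bias audit with invented numbers: one overall accuracy score can
# hide very different accuracies for different groups of people.
results = [
    ("lighter male", True), ("lighter male", True), ("lighter male", True),
    ("darker male", True), ("darker male", True), ("darker male", True),
    ("lighter female", True), ("lighter female", False),
    ("darker female", False), ("darker female", False),
]

groups = {}
for group, correct in results:
    groups.setdefault(group, []).append(correct)

overall = sum(correct for _, correct in results) / len(results)
print(f"overall: {overall:.0%}")          # one rosy-looking number...
for group, outcomes in groups.items():
    acc = sum(outcomes) / len(outcomes)
    print(f"{group}: {acc:.0%}")          # ...that hides the real story
```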

As a result of Joy and Timnit’s research there is now much more recognition of the problem, and what this might mean for the ways in which face recognition technology is used. There is some good news, though. The three companies made changes to improve their systems and several US cities have already banned the use of this technology in criminal investigations, with more likely to follow. People worldwide are more aware of the limitations of face recognition programs and the harms to which they may be (perhaps unintentionally) put, with calls for better regulation.

Different Shades
The Fitzpatrick skin tone scale is used by skin specialists to classify how someone’s skin responds to ultraviolet light. There are six points on the scale, with 1 being the lightest skin and 6 being the darkest. People whose skin tone has a lower Fitzpatrick score are more likely to burn in the sun and are at greater risk of skin cancer. People with higher scores have darker skin, which is less likely to burn and has a lower risk of skin cancer. A variation of the Fitzpatrick scale, with five points, is used to create the skin tone emojis that you’ll find on most messaging apps in addition to the ‘default’ yellow.


Collecting mini-beasts and pocket monsters

by Paul Curzon, Queen Mary University of London

A Pokemon creature in the grass
Image by Ramadhan Notonegoro from Pixabay

Satoshi Tajiri created one of the biggest money-making media franchises of all time. It all started with his love of nature and, in particular, mini-beasts. It also eventually took gamers back into the fresh air.

As a child, Satoshi Tajiri loved finding and collecting minibeasts, so he spent lots of time outside, exploring nature. But, as Japan became more and more built up, his insect-searching haunts disappeared. As the natural world around him vanished, he was drawn instead inside to video game arcades, and those games became a new obsession. He became a super-fan of games and even created a game fanzine called Game Freak in which he shared tips on playing different games. It wasn’t just something he sold to friends either: one issue sold 10,000 copies. An artist, Ken Sugimori, who started as a reader of the magazine, ultimately joined Satoshi, illustrating the magazine for him.

Rather than just writing about games, they wanted to create better ones themselves, so morphed Game Freak into a computer game company, ultimately turning it into one of the most successful ever. The cause of that success was their game Pokemon, designed by Satoshi with characters drawn by Ken. It took the idea of that first obsession, collecting minibeasts, and put it into a fun game with a difference.

It wasn’t about killing things, but moving around a game world searching for, taming and collecting monsters. The really creative idea, though, came from the idea of trading. There were two versions of the game and you couldn’t find all the creatures in your own version. To get a full set you had to talk to other people and trade from your collection. It was designed to be a social game from the outset.

It has been suggested that Satoshi is neurodiverse. Whether he is or not, autistic people (as well as everyone else) found that Pokemon was a great way to make friends, something autistic people often find difficult. Pokemon also became more than just a game, turning into a massive media franchise, with trading cards to collect, an animated series and a live action film. It later sparked a second game craze when Pokemon Go was released, combining the original idea with augmented reality and taking all those gamers back outside for real, searching for (virtual) beasts in the real world.


Follow those ants

by Paul Curzon, Queen Mary University of London

Ants climbing on a mushroom obstacle course
Image by Puckel from Pixabay

Ant colonies are really good at adapting to changing situations: far better than humans. Sameena Shah wondered if Artificial Intelligence agents might do better by learning their intelligent behaviour from ants rather than us. She has suggested we could learn from the ants too.

Inspired by watching ants adapt to new routes to food in the mud as a child, and then later, as an adult, by ants raiding her milk powder, Sameena Shah studied for her PhD how ant colonies solve a classic problem in computer science: finding the shortest path between points in a network. For ants this means finding the shortest paths between food and the nest, something they are very good at. When foraging ants find a source of food they leave a pheromone (i.e., scent) trail as they return, a bit like Hansel and Gretel leaving a trail of breadcrumbs. Other ants follow existing trails to find the food as directly as possible, leaving their own trails as they do. Ants mostly follow the trail containing the most pheromone, though not always. Because shorter paths are completed more quickly, there and back, they gain more pheromone than longer ones, so yet more ants follow them. This further reinforces the shortest trail as the one to follow.

There are lots of variations on the way ants actually behave, and computer scientists are exploring these variations as ways for AI agents to work together to solve problems. Sameena devised a new algorithm called EigenAnt to investigate such ant colony-based problem solving. If the basic ant algorithm above is used, it turns out that longer trails do not disappear even when a shorter path is found, particularly if it is found after a long delay. The original best path has such a strong trail that it continues to be followed even after a better one appears. Computer-based algorithms add a step whereby all trails fade away at the same rate, so that only ones still being followed stay around. This is better, but still not perfect. Sameena’s EigenAnt algorithm instead removes pheromone trails selectively. Her software ants select paths using probabilities based on the strength of the trail: any existing trail could be chosen, but stronger trails are more likely to be. When a software ant chooses a trail, it adds its own pheromone but also removes some of the existing pheromone from the trail, in a way that depends on the probability of the path being chosen in the first place. This mirrors what real ants do, as studies have shown they leave less pheromone on some trails than others.
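
To get a feel for how this works, here is a loose Python sketch of ant-style path choice with selective pheromone removal. The constants, path lengths and exact update rule are invented for illustration: they are not the published EigenAnt equations, just the flavour of the idea.

```python
import random

# Loose sketch of selective pheromone update (invented constants and
# update rule, not Sameena's published EigenAnt equations).
path_lengths = [5.0, 3.0, 8.0]   # three candidate paths; path 1 is shortest
pheromone = [1.0, 1.0, 1.0]      # initial trail strengths
ADD, REMOVE = 0.2, 0.1

for step in range(2000):
    total = sum(pheromone)
    probs = [t / total for t in pheromone]   # stronger trail, likelier pick
    i = random.choices(range(3), weights=probs)[0]
    pheromone[i] += ADD / path_lengths[i]    # shorter path, bigger deposit
    pheromone[i] -= REMOVE * probs[i] * pheromone[i]   # selective removal

    if step == 1000:            # the world changes part way through...
        path_lengths[1] = 10.0  # the old shortest path gets longer
        path_lengths[2] = 2.0   # and a new shortest path appears

best = max(range(3), key=lambda i: pheromone[i])
print("strongest trail is path", best)   # expect path 2 after the change
```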

Sameena proved mathematical properties of her algorithm as well as running simulations of it. This showed that EigenAnt does find the shortest path and never settles on something less than the best. Better still, it also adapts to changing situations. If a new shorter path arises then the software ants switch to it!

Sameena won the award
for the best PhD in India

There are all sorts of computer science uses for this kind of algorithm, such as in ever-changing computer networks, where we always want to route data via the current quickest route. Sameena, however, has also suggested we humans could learn from this rather remarkable adaptability of ants. We are very bad at adapting to new situations, often getting stuck on poor solutions because of our initial biases. The more successful a particular life path has been for us, the more likely we are to keep following it, behaving in the same way even when the situation changes. Sameena found this out when she took her dream job as a hedge fund manager. It didn’t go well. Since then, after changing tack, she has been phenomenally successful, first developing AIs for news providers, and more recently for a bank. As she says: don’t worry if your current career path doesn’t lead to success, there are many other paths to follow. Be willing to adapt and you will likely find something better. We need to nurture lots of possible life paths, not just blindly focus on one.


Black in Data

by Paul Curzon, Queen Mary University of London

Careers do not have to be decided on from day one. You can end up in a good place in a roundabout way. That is what happened to Sadiqah Musa, and now she is helping make the paths easier for others to follow.

Lightbulb in a black circle surrounded by circles of colour representing data

Image based on ones by Gerd Altmann from Pixabay

Sadiqah went to university at QMUL expecting to become an environmental scientist. Her first job was as a geophysicist analysing seismic data: a job she thought she would love and do forever. Unfortunately, she wasn’t happy, not least because of the lack of job security. The job was all about data, though, and that was a part she still enjoyed, and the computer science job of data analyst was by then a sought-after role. So she retrained and started a whole new exciting career. She currently works at the Guardian newspaper, where she met Devina Nembhard … the first Black woman she had ever worked with in her entire career.

Together they decided that was just wrong, and set out to change it. They created “Black in Data” to support people of colour in the industry: mentoring them, training them in the computer science skills they might be short of, like programming and databases, and helping them thrive. More than that, they also challenge the industry to take down the barriers that block diversity in the first place.


Reclaim your name

by Jo Brodie and Paul Curzon, Queen Mary University of London

Canadian Passport
Image by tookapic from Pixabay

In June 2021 the Canadian government announced that Indigenous people would be allowed to use their ancestral family names on government-issued identity and travel documents. This meant that, for the first time, they could use the names that are part of their heritage and culture rather than the westernised names that are often used instead. Because of computers, it wasn’t quite as easy as that though …

Some Indigenous people take on a Western name to make things easier: to simplify official forms, to save having to spell their name out, even to avoid teasing. If it is a real choice then perhaps that is fine, though surely we should be able to make it easy for people to use their actual names. For many it was certainly not a choice: their Indigenous names were taken from them. From the 19th century, hundreds of thousands of Indigenous children in Canada were sent to Western schools and made to take on Western names, as part of an attempt to force them to “assimilate” into Western society. Some were even beaten if they did not use their new name. Because their family names had been “officially” changed, they and their descendants had to use these new names on official documents. Names matter: your name is your identity, and in some cultures family names are also sacred. Being able to use them matters.

The change to allow ancestral names to be used was part of a reconciliation process to correct this injustice. After the announcement, Ta7talíya Nahanee, an Indigenous woman from the Squamish community in Vancouver, was delighted to learn that she would be able to use her real name on her official documents, rather than ‘Michelle’ which she had previously used.

Unfortunately, she was frustrated to learn that travel documents could still only include the Latin alphabet (ABCDEFG etc) with French accents (À, Á, È, É etc). That excluded her name (pronounced Ta-taliya, the 7 is silent) as it contains a number and the letter í. Why? Because the computer said so!

Modern machine-readable passports have a specific area, called the Machine Readable Zone which can be read by a computer scanner at immigration. It has a very limited number of permitted characters. Names which don’t fit need to be “transliterated”, so Å would be written as AA, Ü as UE and the German letter ß (which looks like a B but sounds like a double S) is transliterated as SS. Names are completely rewritten to fit, so Müller becomes MUELLER, Gößmann becomes GOESSMANN, and Hämäläinen becomes HAEMAELAEINEN. If you’ve spent your life having your name adapted to fit someone else’s system this is another reminder of that.
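
To see the rule in action, here is a rough Python sketch of that kind of transliteration. It covers just the examples above; the real rules, set out in the international passport standard ICAO Doc 9303, are more involved.

```python
import unicodedata

# Rough sketch of MRZ-style transliteration, covering only the examples
# above (the real rules are in ICAO Doc 9303 and are more involved).
SPECIAL = {"Å": "AA", "Ä": "AE", "Ö": "OE", "Ü": "UE"}

def to_mrz(name):
    out = []
    for ch in name.upper():          # note: "ß".upper() is already "SS"
        if ch in SPECIAL:
            out.append(SPECIAL[ch])
        elif ch.isalpha():
            # Strip accents: É becomes E, í becomes I, and so on.
            base = unicodedata.normalize("NFD", ch)[0]
            out.append(base if "A" <= base <= "Z" else "")
        elif ch == " ":
            out.append("<")          # '<' is the MRZ's only filler character
        # Anything else, like the 7 in Ta7talíya, simply has no slot.
    return "".join(out)

print(to_mrz("Müller"))        # MUELLER
print(to_mrz("Hämäläinen"))    # HAEMAELAEINEN
print(to_mrz("Ta7talíya"))     # TATALIYA: the name no longer fits
```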

While there are very sensible reasons for ensuring that a passport from one part of the world can be read by computers anywhere else, this choice of characters highlights that, in order to make things work, everyone else has been made to fall in line with the English-speaking population, another example of an unintentional bias. It isn’t, after all, remotely beyond our ability to design a system that meets the needs of everyone, it just needs the will. Designing computer systems isn’t just about machines. It’s about designing them for people.


Al-Jazari: the father of robotics

by Paul Curzon, Queen Mary University of London

Al Jazari's hand washing automaton
Image by user:Grenavitar, Public domain, via Wikimedia Commons

Science fiction films are full of humanoid robots acting as servants, workers, friends or colleagues. The first were created during the Islamic Golden Age, a thousand years ago. 

Robots and automata have been the subject of science fiction for over a century, and their history in myth goes back millennia, but so does the actual building of lifelike animated machines. The Ancient Greeks and Egyptians built automata: animal or human-like contraptions that seemed to come to life. These early automata were illusions with no practical use, though, beyond entertainment or simply amazing people.

It was the great inventor of mechanical gadgets, Ismail Al-Jazari, working in the Islamic Golden Age of science, engineering and art in the 12th century, who first built robot-like machines with actual purposes. Powered by water, his automata acted as servants doing specific tasks. One was a humanoid automaton that acted as a servant during the ritual purification of hand washing before saying prayers. It poured water into a basin from a jug and then handed over a towel, mirror and comb, using a toilet-style flushing mechanism to deliver the water from a tank. Other inventions included a waitress automaton that served drinks and robotic musicians that played instruments from a boat. The musical automaton may even have been programmable.

We know about Al-Jazari’s machines because he not only created mechanical gadgets and automata, he also wrote a book about them: The Book of Knowledge of Ingenious Mechanical Devices. It’s possible that it inspired Leonardo Da Vinci who, in addition to being a famous painter of the Italian Renaissance, was a prolific inventor of machines. 

Such “robots” were not everyday machines: the hand washing automaton was made for the King. Al-Jazari’s book, however, didn’t just describe the machines, it explained how to build them, making it possibly the first textbook to cover automata. If you weren’t a king, then perhaps you could, at least, have a go at making your own servants.


A PC Success

by Paul Curzon, Queen Mary University of London

An outline of a head showing the brain and spinal column on a digital background of binary and circuitry

Image by Gerd Altmann from Pixabay

We have moved on to smartphones, tablets and smartwatches, but for 30 years the desktop computer ruled, and originally not just any desktop computer: the IBM PC. A key person behind its success was African American computer scientist Mark Dean.

IBM is synonymous with computers. It became the computing industry powerhouse as a result of building large, room-sized computers for businesses. The original model of how computers would be used followed a quote supposedly from IBM’s president, Thomas J Watson, that “there is a world market for about five computers.” IBM produced gigantic computers that could be dialled into by those who needed computing time. That prediction was very quickly shown to be wrong, though, as computer sales boomed.

Becoming more personal

Mark Dean was the first African American
to receive IBM’s highest honour.

By the end of the 1970s the computing world was starting to change. Small but powerful mini-computers had taken off, and some companies were pushing the idea of computers for the desktop. IBM was at risk of being badly left behind… until it suddenly roared back into the lead with the IBM personal computer, almost overnight becoming the world leader once more and revolutionising the way computers were seen, sold and used. Its predictions were still a little off, though: initial sales of the IBM PC were eight times higher than expected! Within a few years IBM was selling many hundreds of thousands a year and making billions of dollars. Soon every office desk had one, and PC had become an everyday word meaning computer.

Get on the bus

So who was behind this remarkable success? One of the design team who created the IBM PC was Mark Dean. As a consequence of his work on the PC, he became the first African American to be made an IBM Fellow (IBM’s highest honour). One of his important contributions was in leading the development of the PC’s bus. Despite the name, a computer bus is more like a road than a vehicle, so its other name of data highway is perhaps better. It is the way the computer chip communicates with the outside world. A computer on its own is not really that useful to have on your desktop: it needs a screen, a keyboard and so on.

A computer bus is a bit like the nervous system used to send messages from your brain around your body. Just as your brain interacts with the world, receiving messages from your senses and taking action by sending messages to your muscles, all via your nervous system, a computer chip sends signals to its peripherals using the bus. Those peripherals include things like the mouse, keyboard, printers, monitors, modems and external memory devices: the equivalents of its ways of sensing the world and interacting with it. The bus is in essence just a set of wires connecting into the chip, each with an allocated use, together with a set of rules about how they are used. All peripherals follow the same set of rules to communicate with the computer, which means you can easily swap peripherals in and out (unlike your body!). Later versions of the PC bus that Mark designed ultimately became an industry standard for desktop computers.
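
That shared set of rules is the essence of a bus, and it can be sketched as a software analogy (just an analogy, nothing like real bus hardware): any device that follows the rules can be plugged in or swapped out.

```python
# Toy software analogy of a bus: one shared set of rules (the 'receive'
# method) that every peripheral must follow to be usable.
class Peripheral:
    def receive(self, data):
        raise NotImplementedError

class Printer(Peripheral):
    def receive(self, data):
        print("printing:", data)

class Monitor(Peripheral):
    def receive(self, data):
        print("displaying:", data)

class Bus:
    def __init__(self):
        self.devices = {}
    def plug_in(self, address, device):
        self.devices[address] = device   # swap peripherals in and out freely
    def send(self, address, data):
        self.devices[address].receive(data)

bus = Bus()
bus.plug_in(1, Printer())
bus.plug_in(2, Monitor())
bus.send(1, "hello")   # printing: hello
bus.send(2, "hello")   # displaying: hello
```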

Mark can fairly be called a key member of that PC development team, given he was responsible for a third of the patents behind the PC. He didn’t stop there though. He has continued to be awarded patents, most recently related to artificial neural networks inspired by neuroscience. He has moved on from making computer equivalents of the nervous system to computer equivalents of the brain itself.


In space no one can hear you …

Red Arrows aircraft flying close to the ground.
Image by Bruno Albino from Pixabay 

Johanna Lucht could do maths before she learned language. Why? Because she was born deaf and there was little support for deaf people where she lived. Despite, or perhaps because of, that she became a computer scientist and works for NASA. 

Being deaf can be very, very disabling if you don’t get the right help. As a child, Johanna had no one to help her communicate apart from her mother, who tried to teach her sign language from a book. Throughout most of her primary school years she couldn’t have any real conversations with anyone, never mind learn. She got the lifeline she needed when her school finally took on an interpreter, Keith Wann. Working with him, she quickly learned American Sign Language. Learning your first language is crucial to learning other things, and suddenly she was able to learn in school like other children. She caught up remarkably quickly, showing that an intelligent girl had been locked inside that silent, shy child. More than anything, though, from Keith she learned never to give up.

Her early ability in maths, now her favourite subject, came to the fore as she excelled at science and technology. By this point her family had moved from Germany, where she grew up, to Alaska, where there was much more support, an active deaf community for her to join and many more opportunities, which she started to take. She signed up for a special summer school on computing specifically for deaf people at the University of Washington, learning the programming skills that became the foundation for her future career at NASA. At only 17 she even returned to help teach the course. From there, she went on to study Computer Science at university and applied for an internship at NASA. To her shock and delight she was given a place.

Hitting the ground running 

A big problem for pilots, especially of fighter aircraft, is “controlled flight into terrain”: a technical-sounding phrase that just means flying the plane into the ground, simply because flying a fighter aircraft as low as possible over hazardous terrain is so difficult. The solution is a ground collision avoidance system: the pilots need a computer to warn them when hazardous terrain is coming up and they are too close for comfort, so they can take evasive action. Johanna helped work on the interface design: the part that pilots see and interact with. To be of any use in such high-pressure situations this communication has to be slick and very clear.
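
At its core, such a system makes a simple-sounding check: project where the aircraft will be over the next few seconds and compare that with the terrain ahead. The Python sketch below is a toy version built on invented assumptions (pre-sampled terrain heights and a constant descent rate); the real system is far more sophisticated.

```python
# Toy ground-collision check, not the real system: project the flight
# path a few seconds ahead and warn if it dips below terrain plus a margin.
def collision_warning(altitude_m, descent_rate_ms, terrain_ahead_m,
                      margin_m=100, horizon_s=10):
    """terrain_ahead_m[t] is the ground elevation the aircraft will be
    over t seconds from now (a deliberately crude, pre-sampled model)."""
    for t in range(1, horizon_s + 1):
        predicted_altitude = altitude_m - descent_rate_ms * t
        terrain = terrain_ahead_m[min(t, len(terrain_ahead_m) - 1)]
        if predicted_altitude < terrain + margin_m:
            return f"PULL UP: conflict in {t} s"
    return "clear"

# Gentle descent towards rising ground: expect a warning within seconds.
print(collision_warning(500, 5, [0, 50, 120, 260, 380, 450, 500]))
```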

She impressed those she was working with so much that she was offered a full-time job, becoming an engineer at NASA Armstrong, working with a team designing, testing and integrating new research technology into experimental aircraft. She had to run tests with other technicians, the first problem being how to communicate effectively with the rest of the team. She succeeded twice as fast as her bosses expected, taking only a couple of days before the team were all working well together. The challenges she had faced as a child had given her the skills to do brilliantly in a job where teamwork and communication skills are vital.

Mission control 

Eventually, she gained a place in Mission Control. There, slick comms are vital too. The engineers have to monitor the flight, including all the communication, as it happens, and be able to react to any developing situation. Johanna worked with an interpreter who listened directly to all the flight communications, signing it all for her to see on a second monitor. Working with interpreters in a situation like this is itself a difficult task, and Johanna had to make sure not only that they could communicate effectively but that the interpreter knew all the technical language that might come up in the flight. Johanna had plenty of experience dealing with issues like that, though, and they worked together well, with the result that in April 2017 she became the first deaf person to work in NASA mission control on a live mission … where of course she did not just survive the job, she excelled.

As Johanna has pointed out, it is not deafness itself that disables people, but the world deaf people live in. When in a world that wasn’t set up for deaf people, she struggled; but as soon as she started to get the basic help she needed, that all changed. Change the environment to one that does not put up obstacles and deaf people can excel like anyone else. In space no one can hear anyone scream, or for that matter speak. We don’t let it stop our space missions though. We just invent appropriate technology and make the problems go away.

– Paul Curzon, Queen Mary University of London



This page is funded by EPSRC on research agreement EP/W033615/1.
