Bitten blue

by Paul Curzon, Queen Mary University of London

A mosquito biting into flesh
Image by Pete from Pixabay

For some reason biting flies home in on some people while leaving others (even those walking next to them) alone. What is going on, what does it have to do with the colour blue, and how is computer science helping?

There are lots of reasons biting flies are attracted to some people more than others. Smell is one reason, possibly made worse if you use smelly soap, as it can make you smell like an attractive flower! Another is the colour blue! It turns out many biting flies are attracted to people who wear blue! It sounds bizarre, but it is the reason fly traps are coloured blue – to make them more effective. But why would a fly like blue? Scientists have been investigating. One theory was that blue objects look like shade to a fly: once the fly is there, biting you is just a fortunate bonus (for the fly).

One area of Computer Science is known as biologically-inspired computing. The idea is that evolution, over millennia of trial and error, has come up with lots of great ways to solve problems, and human designers can learn from them. By making computer systems copy the way animals solve those problems we can create better designs. One of the most successful versions of this is the neural network: a way of creating intelligent machines by copying the way animals' brains are built from neurones. It has ultimately led to the chatbots that can write almost as well as humans and the game-playing machines that can beat us at even the most complex games.

Another use of biologically-inspired computing is as a way of doing Science. By modelling the natural world with computer simulations we can better understand how it works. This computational modelling approach is revolutionising the way lots of Science is done. Aberystwyth University’s Roger Santer applied this idea to biting flies. His team created a computer model of the vision system of different kinds of biting flies to explore how they see the world, testing different theories about what was going on. The models were built from neural networks, trained to see like a fly rather than to be able to write or play games.

What the Aberystwyth team found was that to these kinds of flies, because of the way their vision systems work, areas of blue look just like a tasty meal: like the animals they like to bite. The neural networks could tell leaves from animals, but they often decided, incorrectly, that blue objects were animals. They could also correctly tell the difference between shade and non-shade, but never mistook blue objects for shade. If their model accurately captures the way these flies really see, then it suggests that the flies are not attracted to blue because it looks like shade, but because it looks like an animal!
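To get a flavour of this kind of modelling, though it is nothing like the detail of the real fly-vision models, here is a toy Python sketch: a tiny neural network is trained on made-up colour examples standing in for 'leaves' and 'animals', and is then probed with a blue colour it never saw during training, to find out which category it lumps blue into.

    # Toy sketch only: a tiny neural network trained on made-up colour data,
    # just to illustrate the idea of probing what a trained model confuses.
    # The real fly models use fly photoreceptor responses, not simple RGB values.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    # Invented RGB examples: greenish colours labelled 'leaf', brownish ones 'animal'
    leaves = rng.normal([0.2, 0.6, 0.2], 0.05, size=(50, 3))
    animals = rng.normal([0.5, 0.4, 0.3], 0.05, size=(50, 3))
    X = np.vstack([leaves, animals])
    y = ["leaf"] * 50 + ["animal"] * 50

    model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    model.fit(X, y)

    # Probe the trained model with a blue colour it never saw in training
    print(model.predict([[0.1, 0.2, 0.8]]))  # which category does 'blue' fall into?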

The lesson therefore is, if you don’t want to look like a meat feast then do not wear blue when there are biting flies about!

EPSRC supports this blog through research grant EP/W033615/1. 

Free CS4FN magazine issue 29 arriving in schools now, on Diversity in Computing

Diversity in Computing – CS4FN magazine issue 29

Schoolteachers, school librarians and home educators who subscribe* to the FREE Computer Science For Fun magazine will be receiving their free print copies this week (some have already landed!). We are still sending a few out but you shouldn’t have too long to wait.

Everyone can also download the magazine as a FREE PDF or read the articles online, along with lots of other articles that we couldn’t fit into the magazine!

*Around 21,000 print copies of the CS4FN magazine are sent (free) to subscribing UK schools (including homeschoolers). You can sign up to receive a copy or class set of the next issue here https://bit.ly/subscribecs4fn

Issue 29 – Diversity in Computing

The latest (29th) issue of the CS4FN (Computer Science For Fun) magazine is all about Diversity in Computing, with a focus on Black computer scientists.

The magazine contains… (deep breath)… Kimberly Bryant, Gokop Goteng & Hadeel Alrubayyi, bias in facial recognition (wrong man arrested), Joy Buolamwini & Timnit Gebru’s gender shades audit, Mark Dean (the first African American to receive IBM’s highest honour), Johanna Lucht, Clarence Ellis, Freddie Figgers, Satoshi Tajiri, Al-Jazari, machine-readable passports can discriminate against Indigenous people’s names in Canada (and elsewhere), Sadiqah Musa & Devina Nembhard, Christopher Strachey and Sameena Shah. Phew 🙂

We also have a larger Diversity portal with sections for LGBTQ+, Jewish, Women and Disabled computer scientists: https://cs4fn.blog/diversity/

CS4FN is a magazine and blog from the Computer Science department at Queen Mary University of London (QMUL). We share information about computer science research in an engaging way and produce a 20-page A4 magazine every year, usually on a themed topic. There's an accompanying page on our blog with additional articles, all free to use in classrooms or for general interest reading. The blog and magazine articles are aimed at ages 13+, and we also have mini magazines ('A Bit of CS4FN') and other booklets for younger readers (free to download).


Related Magazine …

Front cover of CS4FN issue 29 – Diversity in Computing

See more in 'Celebrating Diversity in Computing'

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


EPSRC supports this blog through research grant EP/W033615/1.

Creating great game worlds

by Wateen Aliady, Queen Mary University of London

A minecraft world
Image by allinonemovie from Pixabay

Are you a PUBG or Fortnite addict? Maybe you enjoy playing Minecraft? Have you thought how these games are created? Could you create a game yourself? It is all done using something called a “Game Engine”.

Games and films are similar in that both take creativity and effort to make. Every movie is created by a talented director who oversees everything involved in making the film. Game creators instead use a special set of tools that similarly allow them to make compelling video game worlds, stories and characters. These tools are called game engines and they bring your creative ideas to life! They are now even used to help make films too. So, whether you're playing a game or watching a movie, get ready to be amazed as game creators and movie directors, the masterminds behind these incredible works, deliver captivating experiences that will leave you speechless.

Imagine a group of talented people working together to create a great video game. Miracles happen when a team's mission becomes one. Every member of the team has a certain role, and when they work together, amazing things can happen. A key member of the group is the graphics whiz. They make everything look stunning by creating beautiful scenery and characters with lots of detail. Then we have the physics guru, who makes sure objects move realistically, the way they would in real life. They make things fall, bounce and hit each other accurately. For example, they ensure the soccer ball in the game behaves like a real soccer ball when you kick it. Next, there is the sound expert, who adds all the sounds to the game. The game engine takes on all these roles: the experience and skill of all those people is built into it, so one person driving it can create a stunning, detailed backdrop, with physics that just works, integrated sound and much more.
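At its heart, a game engine does this by running a loop that calls each of these specialist jobs in turn, once per frame. Here is a minimal, hypothetical Python sketch of that structure (the names and numbers are invented; real engines like Unity are vastly more sophisticated):

    # A minimal, hypothetical sketch of the loop at the heart of a game engine.
    # The subsystem names are invented; real engines are far more sophisticated.
    import time

    class Physics:
        def update(self, world, dt):
            # move objects: here, a ball falling under gravity until it reaches the ground
            world["ball_height"] = max(0.0, world["ball_height"] - 9.8 * dt)

    class Graphics:
        def render(self, world):
            print(f"drawing ball at height {world['ball_height']:.2f}")

    class Audio:
        def play(self, world):
            if world["ball_height"] == 0.0:
                print("*bounce sound*")  # play a sound while the ball is on the ground

    def game_loop(frames=5, dt=0.1):
        world = {"ball_height": 2.0}
        physics, graphics, audio = Physics(), Graphics(), Audio()
        for _ in range(frames):          # one loop iteration per frame
            physics.update(world, dt)    # the physics guru's job
            graphics.render(world)       # the graphics whiz's job
            audio.play(world)            # the sound expert's job
            time.sleep(dt)

    game_loop()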

Game creators use game engines to make all kinds of games. They have been used to create popular games like Minecraft and Fortnite. When you play a game, you enter a completely different world. You can visit epic places with beautiful views and secrets to discover. You can go on big adventures, solve tricky problems, and be immersed in thrilling fights. Game engines allow game developers to make fun and engaging games that people of all ages enjoy playing by looking after all the detail, leaving the developer to focus on the overall experience.

Anyone can learn to use a game engine, even powerful industry-standard ones like Unity, which was used to create Pokemon Go, Monument Valley and Call of Duty: Mobile. Game engines could help you to create your own novel and creative games. These amazing tools can help you create characters and scenes, and add fun features like animation and music. You can turn your ideas into fun games that you and your friends can play together. You might create a new video game that becomes massively popular and is loved by people all around the world. All it takes is the motivation and a willingness to put in the time to learn the skills of driving a game engine and to develop your creativity. Interested? Then get started. You can do anything you want in a game world, so use your imagination and let the game engine help you make amazing games!

EPSRC supports this blog through research grant EP/W033615/1. 

Hallucinating chatbots

Why can’t you trust what an AI says?

by Paul Curzon, Queen Mary University of London

postcards of cuba in a rack
Image by Victoria_Regen from Pixabay

Chatbots that can answer questions and write things for you are in the news at the moment. These Artificial Intelligence (AI) programs are now very good at writing about all sorts of things, from composing songs and stories to answering exam questions. They write very convincingly in a human-like way. However, they often get things wrong. Apparently, they make "facts" up or, as some have described it, "hallucinate". Why should a computer lie or hallucinate? What is going on? Writing postcards will help us see.

Write a postcard

We can get an idea of what is going on if we go back to one of the very first computer programs that generated writing. It was written in the 1950s by Christopher Strachey, a school teacher turned early programmer. He wrote a love letter writing program, but we will look at a similar idea: a postcard writing program.

Postcards typically have lots of similar sentences, like "Wish you were here" or "The weather is lovely", "We went to the beach" or "I had my face painted with butterflies". Another time you might write things like: "The weather is beautiful", "We went to the funfair" or "I had my face painted with rainbows". Christopher Strachey's idea was to write a program with template sentences that could be filled in with different words: "The weather is …", "We went to the …", "I had my face painted with …". The program picks some sentence templates at random, and then picks words at random to go in their slots. In this way, applied to postcard writing, it can write millions of unique postcards. It might generate something like the following, for example (where I've bolded the words it filled in):

Dear Gran,

I'm on holiday in Skegness. I've had a wonderful time. The weather is sunny. We went to the beach. I had my face painted with rainbows. I've eaten lots of strawberry ice cream. Wish you were here!

Lots of love from Mo

but the next time you ask it to, it will generate something completely different.
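In a programming language like Python, that template-filling idea only takes a few lines. Here is a sketch of it (my own toy version, not Strachey's actual program):

    # A sketch of the template-filling idea (not Strachey's actual program).
    import random

    templates = ["The weather is {adjective}.",
                 "We went to the {place}.",
                 "I had my face painted with {design}.",
                 "Wish you were here!"]

    words = {"adjective": ["lovely", "beautiful", "sunny"],
             "place": ["beach", "funfair", "pier"],
             "design": ["butterflies", "rainbows", "daisies"]}

    def postcard(to, sender, num_sentences=3):
        # pick some templates at random, then fill each gap with a random word
        sentences = random.sample(templates, num_sentences)
        body = " ".join(s.format(**{k: random.choice(v) for k, v in words.items()})
                        for s in sentences)
        return f"Dear {to},\n\n{body}\n\nLots of love from {sender}"

    print(postcard("Gran", "Mo"))

Run it again and you get a different postcard, because the templates and the filler words are chosen at random each time.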

Do it yourself

You can do the same thing yourself. Write lots of sentences on strips of card, leaving gaps for words. Give each gap a number label and note whether it is an adjective (like ‘lovely’ or ‘beautiful’) or a noun (like ‘beach’ or ‘funfair’, ‘butterflies’ or ‘rainbows’). You could also have gaps for verbs or adverbs too. Now create separate piles of cards to fit in each gap. Write the number that labels the gap on one side and different possible words of the right kind for that gap on the other side of the cards. Then keep them in numbered piles.

To generate a postcard (the algorithm or steps for you to follow), shuffle the sentence strips and pick three or four at random. Put them on the table in front of you to spell out a message. Next, go to the numbered pile for each gap in turn, shuffle the cards in that pile and then take one at random. Place it in the gap to complete the sentence. Do this for each gap until you have generated a new postcard message. Add who it is to and from at the start and end. You have just followed the steps (the algorithm) that our simple AI program is following.

Making things up

When you write a postcard by following the steps of our AI algorithm, you create sentences for the postcard partly at random. It is not totally random though, because of the templates and because you chose words to write on cards for each pile that make sense there. The words and sentences are about things you could have done – they are possible – but that does not mean you did do them!

The AI makes things up that are untrue but sound convincing because, even though it is choosing words at random, they are appropriate words and it is fitting them into sentences about things that do happen on holiday. People talk of chatbots 'hallucinating' or 'dreaming' or 'lying', but actually, as here, they are always just making the whole thing up, just as we are when following our postcard algorithm. They are just a little more sophisticated in the way they invent their reality!

Our way of generating postcards is far simpler than modern AIs, but it highlights some of the features of how AIs are built. There are two basic parts to our AI. The template sentences ensure that what is produced is grammatical. They provide a simple 'language model': rules for creating correct sentences in English that sound like something a human would write. It doesn't write like Yoda:

“Truly wonderful, the beach is.”

though it could with different templates.

The second part is the sets of cards that fit the gaps. They have to fit the holes left in the templates – only nouns in the noun gaps, adjectives in the adjective gaps – and they also have to fit the meaning of the sentence they complete.

Given a set of template sentences about what you might do on holiday, the cards provide data to train the AI to say appropriate things. The cards for the face painting noun slot need to be things that might be painted on your face. By providing different cards you would change the possible sentences. The more cards, the more variety in the sentences it writes.

AIs also have a language model: the rules of the language and which words go sensibly in which places in a sentence. However, they are also trained on data that gives the possibilities of what is actually written. Rather than a person writing templates and thinking up words, it is based on training data such as social media posts or other writing on the Internet, and what is learnt from this data is the likelihood of which words come next, rather than just filling in holes in a template. The language model used by AIs is also actually just based on the likelihood of words appearing in sentences (not actual grammar rules).

What’s the chances of that?

So, the chatbots are based on the likelihood of words appearing, and that is based on statistics. What do we mean by that? We can add a simple version of it to our Postcard AI, but first we would need to collect data. How often is each face paint design chosen at seaside resorts? How often do people go to funfairs when on holiday? We need statistics about these things.

As it stands any word we add to the stack of cards is just as likely to be used. If we add the card maggots to the face painting pile (perhaps because the face painter does gruesome designs at Halloween) then the chatbot could write

“I had my face painted with maggots”.

and that is just as likely as it writing

“I had my face painted with butterflies”.

If the word maggots is not written on a card, it will never write it. Either it is possible or it isn't. We could make the chatbot write things that are more realistic, however, by adding more cards for words about things that are more popular. So, if out of every 100 people having their face painted, almost a third (30 people) choose to have butterflies painted on their face, then we create 30 cards out of the 100 in the pack with the word BUTTERFLY on them (instead of just 1 card). If 5 in 100 people choose the rainbow pattern then we add five RAINBOW cards, and so on. Perhaps we would still have one maggot card, as every so often someone who likes grossing people out picks it even on holiday. Then, over all the many postcards written this way by our algorithm, the claims will statistically match the reality of what humans would write overall if they did it themselves.

As a result, when you draw a card for a sentence you are now more likely to get a sentence that is true for you. However, it is still more likely to be wrong about you personally than right (you may have had your face painted with butterflies, but 70 of the 100 cards still say something else). It is still being chosen by chance, and it is only the overall statistics for all people who have their face painted that match reality, not the individual case of what is actually true for you.
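In a program you don't need to physically duplicate cards: giving a word more cards is just a weighted random choice. A minimal sketch, using the made-up counts from above (Python's random.choices does the weighting for us):

    # Weighted random choice: equivalent to putting more copies of a card in the pile.
    import random

    designs = ["butterflies", "rainbows", "maggots", "daisies"]
    cards_in_pile = [30, 5, 1, 10]   # invented counts out of the 100-card pack

    # Pick a design with probability proportional to its number of cards
    print(random.choices(designs, weights=cards_in_pile, k=1)[0])

The weights play the role of the extra copies of each card, so butterflies come up far more often than maggots.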

Make it personal

How could we make it more likely to be right about you? You need to personalise it. Collect and give it (i.e. train it on) more information about you personally. Perhaps you usually have a daisy painted on your face because you like daisies (you personally choose a daisy pattern 70% of the time). Sometimes you have rainbows (20% of the time). You might then, on a whim, choose each of 10 other designs, including the butterfly, maybe 1 time in a hundred. So you make a pile of 70 DAISY cards, 20 RAINBOW cards and 1 card for each of the other designs. Now its choices, statistically at least, will match yours. You have trained it about yourself, so it now has a model of you.

You can similarly teach it more about yourself generally, and so about your likely activities, by adding more cards about the things you enjoy – if you usually choose chocolate or vanilla ice cream then add lots of cards for CHOCOLATE and lots for VANILLA, and so on. The more cards the postcard generator has of a word, the more likely it is to use that word. By giving it more information about yourself, it is more likely to be able to get things about you right. However, it is of course still making it up, so while it is being realistic, on any given occasion it may or may not match what you actually did.

Perfect personalisation

You could go a step further and train it on what you actually did do while on this holiday, so that the only cards in the packs are for things you really did on this holiday. (You ate hotdogs and ice cream and chips and … so there are cards for HOTDOG, ICE CREAM, CHIPS …). You had one vanilla ice cream, two chocolate and one strawberry, so have that number of each ice cream card. If it knows everything about you then it will be able to write a postcard that is true! That is why the companies behind AIs want to collect every detail of your life. The more they know about you, the more they get things right about you, and so the better they can predict what you will do in future too.

Probabilities from the Internet

Modern chatbots work by choosing words at random based on how likely they are, in a similar way to our personalised postcard writer. They pick the most likely words to write next based on the probabilities of those words coming next in the data they have been trained on. Their training data is often conversations from the Internet. The more likely a word is to come next in all that training data, the more likely the chatbot is to use that word next. However, that doesn't make the sentence it comes up with definitely true, any more than with our postcard AI.
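To get a feel for this, here is a toy sketch that counts which word follows which in a tiny made-up training text, then picks the next word in proportion to those counts. Real chatbots learn from vastly more text and look at far more than just the previous word, but the principle of picking the next word by likelihood is the same:

    # Toy next-word predictor: counts which word follows which in some training text,
    # then samples the next word in proportion to those counts.
    # Real chatbots use far more context than a single previous word.
    import random
    from collections import defaultdict

    training_text = ("the weather is lovely . the weather is sunny . "
                     "we went to the beach . we went to the funfair .")

    counts = defaultdict(lambda: defaultdict(int))
    tokens = training_text.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1

    def next_word(prev):
        options = counts[prev]
        return random.choices(list(options), weights=list(options.values()), k=1)[0]

    print(next_word("weather"))   # always 'is' in this tiny training text
    print(next_word("the"))       # 'weather', 'beach' or 'funfair', chosen by likelihood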

You can personalise the modern AIs too, by giving them more accurate information about yourself, and then they are more likely to get what they write about you right. There is still always a chance of them picking the wrong words if those words are there as a possibility, though, as they are still choosing, to some extent, at random.

Never trust a chatbot

Artificial Intelligences that generate writing do not hallucinate just some of the time. They hallucinate all of the time, just with a big probability of getting it right. They make everything up. When they get things right it is just because the statistics of the data they were trained on made those words the most likely ones to be picked to follow what went before. Just as the Internet is full of false things, an Artificial Intelligence can get things wrong too.

If you use them for anything that matters, always double check that they are telling you the truth.

EPSRC supports this blog through research grant EP/W033615/1. 

Protecting your fridge

by Jo Brodie and Paul Curzon, Queen Mary University of London

Ever been spammed by your fridge? It has happened, but Queen Mary’s Gokop Goteng and Hadeel Alrubayyi aim to make it less likely…

Image by Gerd Altmann from Pixabay

Gokop has a longstanding interest in improving computing networks and did his PhD on cloud computing (at the time known as grid computing), exploring how computing could be treated more like gas and electricity utilities where you only pay for what you use. His current research is about improving the safety and efficiency of the cloud in handling the vast amounts of data, or ‘Big Data’, used in providing Internet services. Recently he has turned his attention to the Internet of Things.

The Internet of Things is a network of connected devices, some of which you might have in your home or school, such as smart fridges, baby monitors, door locks, lighting and heating that can be switched on and off with a smartphone. These devices contain a small computer that can receive and send data when connected to the Internet, which is how your smartphone controls them. However, this brings new problems: any device that's connected to the Internet has the potential to be hacked, which can be very harmful. For example, in 2013 a domestic fridge was hacked and included in a 'botnet' of devices which sent thousands of spam emails before it was shut down (can you imagine getting spam email from your fridge?!).

A domestic fridge was hacked
and included in a ‘botnet’ of devices
which sent thousands of spam emails
before it was shut down.

The computers in these devices don't usually have much processing power: they're smart, but not that smart. This is perfectly fine for normal use, but it becomes a problem when they have to run software to keep out hackers while getting on with the actual job they are supposed to be doing, like running a fridge. It's important to prevent devices from being infected with malware (bad programs that hackers use to, for example, take over a computer), and work done by Gokop and others has helped develop better malware-detecting security algorithms which take account of the smaller processing capacity of these devices.

One approach he has been exploring with PhD student Hadeel Alrubayyi is to draw inspiration from the human immune system: building artificial immune systems to detect malware. Your immune system is very versatile and able to quickly defend you against new bugs that you haven’t encountered before. It protects you from new illnesses, not just illnesses you have previously fought off. How? Using special blood cells, such as T-Cells, which are able to detect and attack rogue cells invading the body. They can spot patterns that tell the difference between the person’s own healthy cells and rogue or foreign cells. Hadeel and Gokop have shown that applying similar techniques to Internet of Things software can outperform other techniques for spotting new malware, detecting more problems while needing less computing resources.
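One classic artificial immune system technique, which gives a flavour of the T-cell idea (it is not necessarily the exact algorithm Hadeel and Gokop developed), is 'negative selection': generate lots of random detectors, throw away any that match normal 'self' behaviour, and flag anything a surviving detector matches as suspicious. A minimal sketch, with made-up bit patterns standing in for device behaviour:

    # Negative selection sketch: detectors that match normal behaviour are discarded;
    # whatever a surviving detector matches is flagged as possible malware activity.
    # A classic artificial immune system idea, not the researchers' exact algorithm.
    import random

    def matches(detector, pattern, threshold=3):
        # crude similarity: the number of positions where the bits agree
        return sum(d == p for d, p in zip(detector, pattern)) >= threshold

    def random_pattern(length=4):
        return tuple(random.randint(0, 1) for _ in range(length))

    # 'Self': bit patterns summarising the device's normal behaviour (made up here)
    normal_behaviour = [(0, 0, 1, 0), (0, 1, 1, 0), (0, 0, 1, 1)]

    # Keep only detectors that do NOT match anything normal
    detectors = []
    while len(detectors) < 20:
        d = random_pattern()
        if not any(matches(d, s) for s in normal_behaviour):
            detectors.append(d)

    def looks_like_malware(observed):
        return any(matches(d, observed) for d in detectors)

    print(looks_like_malware((0, 0, 1, 0)))  # a normal pattern: False
    print(looks_like_malware((1, 1, 0, 1)))  # a very different pattern: almost certainly True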

Gokop is also using his skills in cloud computing and data science to enhance student employability and explore how Queen Mary can be a better place for everyone to do well. Whether a person, organisation or smart fridge Gokop aims to help you reach your full potential!

EPSRC supports this blog through research grant EP/W033615/1. 

The gender shades audit

by Jo Brodie, Queen Mary University of London

Face recognition technology is used widely, such as at passport controls and by police forces. What if it isn’t as good at recognising faces as it has been claimed to be? Joy Buolamwini and Timnit Gebru tested three different commercial systems and found that they were much more likely to wrongly classify darker skinned female faces compared to lighter or darker skinned male faces. The systems were not reliable.

Different skin tone cosmetics
Image by Stefan Schweihofer from Pixabay

Face recognition systems are trained to detect, classify and even recognise faces based on a bank of photographs of people. Joy and Timnit examined two banks of images used to train the systems and found that around 80 percent of the photos used were of people with lighter coloured skin. If the photographs aren't fairly balanced in terms of having a range of people of different gender and ethnicity then the resulting technologies will inherit that bias too. The systems examined were, in effect, being trained mainly to recognise lighter skinned people.

The pilot parliaments benchmark

Joy and Timnit decided to create their own set of images and wanted to ensure that these covered a wide range of skin tones and had an equal mix of men and women (‘gender parity’). They did this using photographs of members of parliaments around the world which are known to have a reasonably equal mix of men and women. They selected parliaments both from countries with mainly darker skinned people (Rwanda, Senegal and South Africa) and from countries with mainly lighter skinned people (Iceland, Finland and Sweden).

They labelled all the photos according to gender (they had to make some assumptions based on name and appearance if pronouns weren’t available) and used a special scale called the Fitzpatrick scale to classify skin tones (see Different Shades below). The result was a set of photographs labelled as dark male, dark female, light male, light female, with a roughly equal mix across all four categories: this time, 53 per cent of the people were light skinned (male and female).

Testing times

Joy and Timnit tested the three commercial face recognition systems against their new database of photographs (a fair test of the wide range of faces that a recognition system might come across) and this is where they found that the systems were less able to correctly identify particular groups of people. The systems were very good at spotting lighter skinned men and darker skinned men, but were less able to correctly identify darker skinned women, and women overall. The tools, trained on sets of data that had a bias built into them, inherited those biases and this affected how well they worked.
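At its heart, an audit like this measures accuracy separately for each group, rather than averaged over everyone, which is where an overall figure can hide a problem. A sketch of that calculation with invented example data (not Joy and Timnit's actual results):

    # Compute accuracy per group rather than overall (invented example data).
    from collections import defaultdict

    # (group, correct?) for each test photo: True if the system got it right
    results = [("lighter male", True), ("lighter male", True),
               ("darker female", False), ("darker female", True),
               ("lighter female", True), ("darker male", True)]

    totals, correct = defaultdict(int), defaultdict(int)
    for group, ok in results:
        totals[group] += 1
        correct[group] += ok

    for group in totals:
        print(f"{group}: {100 * correct[group] / totals[group]:.0f}% correct")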

As a result of Joy and Timnit’s research there is now much more recognition of the problem, and what this might mean for the ways in which face recognition technology is used. There is some good news, though. The three companies made changes to improve their systems and several US cities have already banned the use of this technology in criminal investigations, with more likely to follow. People worldwide are more aware of the limitations of face recognition programs and the harms to which they may be (perhaps unintentionally) put, with calls for better regulation.

Different Shades
The Fitzpatrick skin tone scale is used by skin specialists to classify how someone’s skin responds to ultraviolet light. There are six points on the scale with 1 being the lightest skin and 6 being the darkest. People whose skin tone has a lower Fitzpatrick score are more likely to burn in the sun and are at greater risk of skin cancer. People with higher scores have darker skin which is less likely to burn and have a lower risk of skin cancer. A variation of the Fitzpatrick scale, with five points, is used to create the skin tone emojis that you’ll find on most messaging apps in addition to the ‘default’ yellow.

EPSRC supports this blog through research grant EP/W033615/1. 

Playing the weighting game

by Paul Curzon, Queen Mary University of London

In the spotlight - bright lights dazzling in circles and stars
Image by Gerd Altmann from Pixabay

Imagine having a reality TV show where yet again Simon Cowell is looking for talent. This time it’s talent with a difference though, not stars to entertain us but ones with the raw ability to help find webpages. Yes, this time the budding stars are all words. Word Idol is here!

The format is simple. Each week Simon’s aim is to find talented words to create a new group: a group with star quality, a group with meaning. Like any talent competition, there are thousands of entries. Every word in every webpage out there wants to take part. They all have to be judged, but what do the specialist judges look for?

OK, we’re getting carried away. Simon Cowell may not be interested but there is big money in the idea. It’s a talent show that is happening all the time. The aim is to judge the words in each new webpage as it appears so that search engines can find it if ever someone goes looking. The real star of this show isn’t Simon Cowell but a Cambridge professor, Karen Spärck Jones. She came up with the way to judge words.

Karen worked out that to do this kind of judging a computer needs a thesaurus: a book of words. It just lists groups of words that mean the same thing. A computer, Karen realised, could use one to understand what words mean.

There is big money in the idea!

The fact that there are so many ways to say the same thing in human languages makes it really hard for a computer to understand what we write. That is where a thesaurus comes in. If you ask a computer to search for web pages about whales, for example, it helps to know that a page that talks about orcas is about whales too. Worse still, most words have more than one meaning, a fact that keeps crossword lovers in business.

Take the following example: “Leona is the new big star of the music business.”

The word 'star' here obviously means a celebrity, but how do you know? It could also mean a sun or a shape. The fact that it's with the word 'music' helps you to work out which meaning is right, even if you have no idea who or what Leona is. As Karen realised, a computer can also work out the intended meanings of words from the other words used with them. A thesaurus tells it what the critical groupings are, but what Karen wanted was a way for a computer to work the thesaurus out for itself – and now she had one.

Her early approach was to write a program that takes lots and lots of documents and makes lists of the words that keep appearing close together. If 'music' appears with 'star' a lot, then that is a new meaning. After building up a big collection of such lists of linked words, the program can then use it to decide which pages are talking about the same thing and so which ones to suggest when a search is done. So Karen had found the first way to judge whether a word has the right 'talent' to go in a group. The more often words appear together, the higher the score or 'weighting' they should be given. Simple!
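A sketch of that first scoring idea: count how often two words appear in the same document (a simplification of 'close together'), so frequent pairs get high scores:

    # Count how often pairs of words appear in the same document (a simplification).
    from itertools import combinations
    from collections import Counter

    documents = ["leona is the new big star of the music business",
                 "the star shone in the night sky",
                 "music fans loved the star of the show"]

    pair_counts = Counter()
    for doc in documents:
        words = set(doc.split())                 # each word counted once per document
        for pair in combinations(sorted(words), 2):
            pair_counts[pair] += 1

    print(pair_counts[("music", "star")])        # together in 2 of the 3 documents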

The only trouble is it doesn't really work. That is where Karen's big insight came in. She realised that if two words appear together in a lot of different documents then, surprisingly perhaps, putting them together in a group isn't actually that useful for finding documents! Do a search and they will just tell you that lots of web pages match. What you really want is to be told of the few web pages that contain the meaning you are looking for, not lots and lots that don't.

The important word groupings are actually only in a small number of web pages. That suggests they give a very focused meaning. Word groups like that help you narrow down the search. So Karen now had a better way to judge word talent. Give high marks for pairs that do appear together but in as few web pages as possible. Rather than a talent show, it is more like a giant game of the quiz show Pointless where you win if you pick the words few other people did.
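A standard way to turn that insight into a score is to weight a word (or word grouping) by log(N/df), where N is the total number of documents and df is the number that contain it: the fewer documents it appears in, the bigger its weight. Here is a sketch for single words (one common variant of the formula, not necessarily exactly what any given search engine uses):

    # Inverse document frequency: words found in few documents get high weights.
    # One common variant of the formula; real search engines add many refinements.
    import math

    documents = ["leona is the new big star of the music business",
                 "the star shone in the night sky",
                 "music fans loved the star of the show"]

    def idf(word):
        n_docs = len(documents)
        df = sum(word in doc.split() for doc in documents)   # documents containing it
        return math.log(n_docs / df) if df else 0.0

    print(f"idf('the')   = {idf('the'):.2f}")    # in every document: weight 0
    print(f"idf('music') = {idf('music'):.2f}")  # in fewer documents: higher weight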

That idea was the big breakthrough and led to what is now called IDF weighting. It is the way to judge words, and is so good that it's now used by pretty much every search engine out there. Playing the IDF weighting game may not make great TV, but thanks to Karen it really does make for a great web.

EPSRC supports this blog through research grant EP/W033615/1. 

Collecting mini-beasts and pocket monsters

by Paul Curzon, Queen Mary University of London

A Pokemon creature in the grass
Image by Ramadhan Notonegoro from Pixabay

Satoshi Tajiri created one of the biggest money-making media franchises of all time. It all started with his love of nature and, in particular, mini-beasts. It also eventually took gamers back into the fresh air.

As a child, Satoshi Tajiri loved finding and collecting minibeasts, so spent lots of time outside, exploring nature. But, as Japan became more and more built up, his insect-searching haunts disappeared. As the natural world disappeared he was instead drawn inside to video game arcades, and those games became a new obsession. He became a super-fan of games and even created a game fanzine called Game Freak where he shared tips on playing different games. It wasn't just something he sold to friends either: one issue sold 10,000 copies. An artist, Ken Sugimori, who started as a reader of the magazine, ultimately joined Satoshi, illustrating the magazine for him.

Rather than just writing about games, they wanted to create better ones themselves, so morphed Game Freak into a computer game company, ultimately turning it into one of the most successful ever. The cause of that success was their game Pokemon, designed by Satoshi with characters drawn by Ken. It took the idea of that first obsession, collecting minibeasts, and put it into a fun game with a difference.

It wasn’t about killing things, but moving around a game world searching for, taming and collecting monsters. The really creative idea, though, came from the idea of trading. There were two versions of the game and you couldn’t find all the creatures in your own version. To get a full set you had to talk to other people and trade from your collection. It was designed to be a social game from the outset.

It has been suggested that Satoshi is neuro-diverse. Whether he is or not, autistic people (as well as everyone else) found that Pokemon was a great way to make friends, something autistic people often find difficult. Pokemon also became more than just a game, turning into a massive media franchise, with trading cards to collect, an animated series and a live action film. It also later sparked a second game craze when Pokemon Go was released. It combined the original idea with augmented reality, taking all those gamers back outside for real, searching for (virtual) beasts in the real world.

 

EPSRC supports this blog through research grant EP/W033615/1. 

“Tlahcuilo”, a visual composer

by Rafael Pérez y Pérez of the Universidad Autónoma Metropolitana, México

A design by Tlahcuilo of circles made of dots
A design by Tlahcuilo

A main goal of computational creativity research is to help us better understand how this essential human characteristic, creativity, works. Creativity is a very complex phenomenon that we are only beginning to understand: we need to employ all the tools we have available to fully comprehend it. Computers are a powerful tool that can help us generate that knowledge and reflect on it. By building computer models of the processes we think are behind creativity, we can start to probe how creativity really works.

When you hear someone claiming that a computer agent, whether program, robot or gadget, is creative, the first question you should ask is: what have we learned? What does studying this agent help us to realise or discover about creativity that we did not know before? If you do not get a satisfactory answer, I would hardly call it a computer model of creativity. As well as being able to generate novel, and interesting or useful, things, a creative agent ought to fulfil other criteria: using its knowledge, creating knowledge and evaluating its own work.

Be knowledgeable!

Truly creative agents should draw on their own knowledge to build the things, such as art, that they create. They should use a knowledge-base, not just create things randomly. We aren’t, for example, interested in programs that arbitrarily pick a picture from the web, randomly apply a filter to it and then claim they have generated art.

Create knowledge!

A design by Tlahcuilo of circles made of dots
A design by Tlahcuilo

A creative agent must be able to interpret its own creations in order to generate novel knowledge, and that knowledge should help it produce more original pieces. For example, a program that generates story plots must be able to read its own stories and learn from them, as well as from stories developed by others.

Evaluate it!

To deserve to be called creative, an agent also ought to be able to tell whether the things it has created are good or bad. It should be able to evaluate its work, as well as that produced by similar agents. Its evaluation should also influence the way the generation process works. We don't want joke creation programs that churn out thousands of 'jokes', leaving a human to decide which are actually funny. A creative agent ought to be able to do that itself!

Design me a design

At the moment few, if any, systems fulfil all these criteria. Nevertheless, I suggest they should be the main goals of those doing research in computational creativity. Over the past 20 years I've been studying computer models of creativity, aiming to do exactly that. My main research has focused on story generation, but with my team I've also developed programs that aim to create novel visual designs. This is the kind of thing someone developing new fabric, wallpaper or tiling patterns might do, for example. With Iván Guerrero and María González I developed a program called TLAHCUILO. It composes visual patterns based on photographs or an empty canvas. It finds geometrical patterns, like repeated shapes, in the picture and then uses them as the basis of a new abstract pattern.

The word “tlahcuilo” refers to painters and writers
in ancient México responsible for preserving
the knowledge and traditions of their people.

To build the system’s knowledge-base, we created a tool that human designers can use to do the same creative task. TLAHCUILO analyses the steps they follow as they develop a composition and registers what it has learnt in its knowledge base. For example, it might note the way the human designer adds elements to make the pattern symmetrical or to add balance. Once these approaches are in its knowledge base it can use them itself in its own compositions. This is a little like the way an apprentice to a craftsman might work, watching the Master at work, gradually building the experience to do it themselves. Our agent similarly builds on this experience to produce its own original outputs. It can also add its own pieces of work to its knowledge-base. Finally, it is able to assess the quality of its designs. It aims to meet the criteria set out above.

Design me a plot

A design by Tlahcuilo based on a fruit stall image
A design by Tlahcuilo

One of TLAHCUILO’s most interesting characteristics is that it uses the same model of creativity that we used to implement MEXICA, our story plot generator (see CS4FN Issue 18). This allows us to compare in detail the differences and similarities between an agent that produces short-stories and an agent that produces visual compositions. We hope this will allow us to generalise our understanding.

Creativity research is a fascinating field. We hope to learn not just how to build creative agents but more importantly to understand what it takes to be a creative human.


EPSRC supports this blog through research grant EP/W033615/1. 

Follow those ants

by Paul Curzon, Queen Mary University of London

Ants climbing on a mushroom obstacle course
Image by Puckel from Pixabay

Ant colonies are really good at adapting to changing situations: far better than humans. Sameena Shah wondered if Artificial Intelligence agents might do better by learning their intelligent behaviour from ants rather than us. She has suggested we could learn from the ants too.

Inspired by staring at ants adapting to new routes to food in the mud as a child, and then later, as an adult, when ants raided her milk powder, Sameena Shah studied for her PhD how a classic problem in computer science, that of finding the shortest path between points in a network, is solved by ant colonies. For ants this involves finding the shortest paths between food and the nest: something they are very good at. When foraging ants find a source of food they leave a pheromone (i.e. scent) trail as they return, a bit like Hansel and Gretel leaving a trail of breadcrumbs. Other ants follow existing trails to find the food as directly as possible, leaving their own trails as they do. Ants mostly follow the trail containing the most pheromone, though not always. Because shorter paths are followed more quickly, there and back, they gain more pheromone than longer ones, so yet more ants follow them. This further reinforces the shortest trail as the one to follow.

There are lots of variations on the way ants actually behave. These variations are being explored by computer scientists as ways for AI agents to work together to solve problems. Sameena devised a new algorithm called EigenAnt to investigate such ant colony-based problem solving. If the above ant algorithm is used, then it turns out longer trails do not disappear even when a shorter path is found, particularly if it is found after a long delay. The original best path has a very strong trail so that it continues to be followed even after a new one is found. Computer-based algorithms add a step whereby all trails fade away at the same rate so that only ones still being followed stay around. This is better but still not perfect. Sameena’s EigenAnt algorithm instead removes pheromone trails selectively. Her software ants select paths using probabilities based on the strength of the trail. Any existing trail could be chosen but stronger trails are more likely to be. When a software ant chooses a trail, it adds its own pheromones but also removes some of the existing pheromone from the trail in a way that depends on the probability of the path being chosen in the first place. This mirrors what real ants do, as studies have shown they leave less pheromone on some trails than others.
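A much simplified simulation of the basic pheromone idea, with the evaporation step added (this is not the EigenAnt update rule, just the classic scheme it improves on), shows why the shorter path ends up dominating:

    # Simplified ant colony simulation: two paths, pheromone deposit and evaporation.
    # This illustrates the basic idea only, not the EigenAnt update rule.
    import random

    pheromone = {"short": 1.0, "long": 1.0}
    length = {"short": 1.0, "long": 2.0}

    for _ in range(1000):                      # send 1000 ants, one after another
        # choose a path with probability proportional to its pheromone
        path = random.choices(list(pheromone), weights=list(pheromone.values()))[0]
        for p in pheromone:                    # all trails fade a little (evaporation)
            pheromone[p] *= 0.99
        pheromone[path] += 1.0 / length[path]  # shorter paths get more pheromone per trip

    print(pheromone)   # the short path should end up with far more pheromone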

Sameena proved mathematical properties of her algorithm as well as running simulations of it. This showed that EigenAnt does find the shortest path and never settles on something less than the best. Better still, it also adapts to changing situations. If a new shorter path arises then the software ants switch to it!

Sameena won the award
for the best PhD in India

There are all sorts of computer science uses for this kind of algorithm, such as in ever-changing computer networks, where we always want to route data via the current quickest route. Sameena, however, has also suggested we humans could learn from this rather remarkable adaptability of ants. We are very bad at adapting to new situations, often getting stuck on poor solutions because of our initial biases. The more successful a particular life path has been for us, the more likely we are to keep following it, behaving in the same way, even when the situation changes. Sameena found this out when she took her dream job as a Hedge Fund manager. It didn't go well. Since then, after changing tack, she has been phenomenally successful, first developing AIs for news providers, and then more recently for a bank. As she says: don't worry if your current career path doesn't lead to success, there are many other paths to follow. Be willing to adapt and you will likely find something better. We need to nurture lots of possible life paths, not just blindly focus on one.

EPSRC supports this blog through research grant EP/W033615/1.