“Tlahcuilo”, a visual composer

by Rafael Pérez y Pérez of the Universidad Autónoma Metropolitana, México

A design by Tlahcuilo of circles made of dots

A main goal of computational creativity research is to help us better understand how this essential human characteristic, creativity, works. Creativity is a very complex phenomenon that we are only just beginning to understand: we need to employ all the tools we have available to comprehend it fully. Computers are a powerful tool that can help us generate that knowledge and reflect on it. By building computer models of the processes we think lie behind creativity, we can start to probe how creativity really works.

When you hear someone claiming that a computer agent, whether program, robot or gadget, is creative, the first question you should ask is: what have we learned? What does studying this agent help us to realise or discover about creativity that we did not know before? If you do not get a satisfactory answer, I would hardly call it a computer model of creativity. As well as being able to generate novel things that are interesting or useful, a creative agent ought to fulfil other criteria: using its knowledge, creating new knowledge and evaluating its own work.

Be knowledgeable!

Truly creative agents should draw on their own knowledge to build the things, such as art, that they create. They should use a knowledge-base, not just create things randomly. We aren’t, for example, interested in programs that arbitrarily pick a picture from the web, randomly apply a filter to it and then claim they have generated art.

Create knowledge!

A design by Tlahcuilo of circles made of dots

A creative agent must be able to interpret its own creations in order to generate novel knowledge, and that knowledge should help it produce more original pieces. For example, a program that generates story plots must be able to read its own stories and learn from them, as well as from stories developed by others.

Evaluate it!

To deserve to be called creative, an agent also ought to be able to tell whether the things it has created are good or bad. It should be able to evaluate its work, as well as that produced by similar agents. Its evaluation should also influence the way the generation process works. We don’t want joke creation programs that churn out thousands of ‘jokes’, leaving a human to decide which are actually funny. A creative agent ought to be able to do that itself!

Design me a design

At the moment few, if any, systems fulfil all these criteria. Nevertheless, I suggest they should be the main goals of those doing research in computational creativity. Over the past 20 years I’ve been studying computer models of creativity, aiming to do exactly that. My main research has focused on story generation, but with my team I’ve also developed programs that aim to create novel visual designs. This is the kind of thing someone developing new fabric, wallpaper or tiling patterns might do, for example. With Iván Guerrero and María González I developed a program called TLAHCUILO. It composes visual patterns based on photographs or an empty canvas. It identifies geometrical patterns, like repeated shapes, in the picture and then uses them as the basis of a new abstract pattern.

The word “tlahcuilo” refers to painters and writers
in ancient México responsible for preserving
the knowledge and traditions of their people.

To build the system’s knowledge base, we created a tool that human designers can use to do the same creative task. TLAHCUILO analyses the steps they follow as they develop a composition and registers what it has learnt in its knowledge base. For example, it might note the way the human designer adds elements to make the pattern symmetrical or to add balance. Once these approaches are in its knowledge base it can use them in its own compositions. This is a little like the way an apprentice to a craftsman might work: watching the master at work, gradually building up the experience to do the job themselves. Our agent similarly builds on this experience to produce its own original outputs. It can also add its own pieces of work to its knowledge base. Finally, it is able to assess the quality of its designs. In this way it aims to meet the criteria set out above.
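
To make the apprenticeship idea concrete, here is a minimal Python sketch (an illustration only, not TLAHCUILO’s actual design, and the step names are made up): the agent tallies the design steps it sees human designers take, then reuses the one it has seen most often.

    from collections import Counter

    knowledge_base = Counter()

    def observe(step):
        # Record a design step seen while watching a human designer.
        knowledge_base[step] += 1

    # Watching a designer at work (hypothetical step names):
    for step in ["add_symmetry", "balance_colours", "add_symmetry"]:
        observe(step)

    def compose_step():
        # Reuse the technique the agent has seen most often.
        return knowledge_base.most_common(1)[0][0]

    print(compose_step())  # -> add_symmetry

A real system would, of course, store much richer information than a tally, but the loop of observe, record and reuse is the same.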

Design me a plot

A design by Tlahcuilo based on a fruit stall image

One of TLAHCUILO’s most interesting characteristics is that it uses the same model of creativity that we used to implement MEXICA, our story plot generator (see CS4FN Issue 18). This allows us to compare in detail the differences and similarities between an agent that produces short stories and an agent that produces visual compositions. We hope this will help us generalise our understanding of creativity.

Creativity research is a fascinating field. We hope to learn not just how to build creative agents but more importantly to understand what it takes to be a creative human.


Understanding Parties

Three glasses of lemonade in a huddle as if talking

Image by Susanne Jutzeler, Schweiz 🇨🇭 💕Thanks for Likes from Pixabay

by Paul Curzon, Queen Mary University of London

(First appeared in Issue 23 of the CS4FN magazine “The women are (still) here”)

The stereotype of a computer scientist is someone who doesn’t understand people. For many, though, how people behave is exactly what they are experts in. Kavin Narasimhan is one of them. As a student at QMUL, she studied how people move and form groups at parties, creating realistic computer models of what is going on.

We humans are very good at subtle behaviour, and do much of it without even realising it. One example is the way we stand when we form small groups to talk. We naturally adjust our positions and the way we face each other so we can see and hear clearly, while not making others feel uncomfortable by getting too close. The positions we take as we stand to talk are fairly universal. If we understand what is going on, we can create computational models that behave the same way. Most previous models simulated the way we adjust positions as others arrive or leave by assuming everyone tries to both face, and keep the same distance from, the midpoint of the group. However, there is no evidence that that is what we actually do. There are several alternatives. Rather than pointing ourselves at some invisible centre point, we could be subconsciously maximising our view of the people around us. We could be adjusting our positions and the direction we face based only on the positions of the people next to us, or instead based on the positions of everyone in the group.
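
To give a flavour of what such rules look like inside a simulation, here is a minimal Python sketch (an illustration, not Kavin’s actual model) of two competing rules for which way a character should face:

    import math

    def face_centroid(me, others):
        # Old assumption: face the invisible centre point of the group.
        pts = others + [me]
        cx = sum(x for x, y in pts) / len(pts)
        cy = sum(y for x, y in pts) / len(pts)
        return math.atan2(cy - me[1], cx - me[0])

    def face_best_view(me, others):
        # Alternative: face the average direction of everyone else,
        # a crude stand-in for "maximise your view of the others".
        bearings = [math.atan2(y - me[1], x - me[0]) for x, y in others]
        # Average directions as unit vectors so angles near 180
        # degrees do not cancel out.
        sx = sum(math.cos(b) for b in bearings)
        sy = sum(math.sin(b) for b in bearings)
        return math.atan2(sy, sx)

    group = [(0.0, 0.0), (1.0, 0.2), (0.4, 1.1)]
    me = (1.2, 1.0)
    print(face_centroid(me, group), face_best_view(me, group))

Running rules like these for every character, step after step, produces the simulated parties that can then be compared with real ones.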

Kavin videoed real parties where lots of people formed small groups, to find out the precise details of how we position and reposition ourselves. This gave her a bird’s eye view of the positions people actually took. She also created simulations with virtual 2D characters that move around, forming groups then moving on to join other groups. This allowed her to try out different rules for how the characters behaved, and to compare them with the real party situations.

She found that her alternative rules were more realistic than rules based on facing a central point. For example, the latter generate regular shapes like triangular and square formations, but the positions real humans take are less regular. They are better modelled by assuming people focus on getting the best view of others. The simulations showed this was also a more accurate way to predict the sizes of groups that formed, how long they lasted, and how they were spread across the room. Kavin’s rules therefore appear to give a realistic way to describe how we form groups.

Being able to create models like this has all sorts of applications. They are useful for controlling the precise movement of avatars, whether in virtual worlds or teleconferencing. They can be used to control how computer-generated (CGI) characters in films behave, without needing to copy the movements from actors first. They can make the characters in computer games more realistic as they react to whatever movements the real people, and each other, make. In the future we are likely to interact more and more with robots in everyday life, and it will be important that they follow appropriate rules too, so as not to seem alien.

So you shouldn’t assume computer scientists don’t understand people. Many understand them far better than the average person. That is how they are able to create avatars, robots and CGI characters that behave exactly like real people. Virtual parties are set to be that little bit more realistic.


The Hive at Kew

Art meets bees, science and electronics

by Paul Curzon, Queen Mary University of London

(from the archive)

a boy lying in the middle of the Hive at Kew Gardens.

Combine an understanding of science with electronics skills and the creativity of an artist, and you can get inspiring, memorable and fascinating experiences. That is what the Hive, an art installation at Kew Gardens in London, does. It is a massive sculpture linked to a subtle sound and light experience, surrounded by a wildflower meadow, but based on the work of scientists studying bees.

The Hive is a giant aluminium structure that represents a bee hive. Once inside, you see it is covered with LED lights that flicker on and off, apparently randomly. They aren’t random though: they are controlled by a real bee hive elsewhere in the gardens. Each pulse of a light represents bees communicating in that real hive, where the artist Wolfgang Buttress placed accelerometers. These are simple sensors, like those in phones or a BBC micro:bit, that sense movement. The sensitive ones in the bee hive pick up the vibrations caused by bees communicating with each other. The signals generated are used to control the lights in the sculpture.
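
The control idea is simple enough to sketch in a few lines of Python (a toy version built on our own assumptions, not the installation’s real code; read_vibration is a made-up stand-in for the sensor): read a vibration level, and pulse a light whenever it is strong enough.

    import random, time

    def read_vibration():
        # Hypothetical stand-in for an accelerometer in the real hive:
        # here we just invent a noisy vibration level between 0 and 1.
        return random.random()

    def set_led(on):
        print("LED on" if on else "LED off")

    THRESHOLD = 0.8  # assumed: only strong vibrations trigger a pulse

    for _ in range(10):
        set_led(read_vibration() > THRESHOLD)
        time.sleep(0.1)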

A new way to communicate

This is where the science comes in. The work was inspired by Martin Bencsik’s team at Nottingham Trent University, who in 2011 discovered a new kind of communication between bees using vibrations. Before bees swarm, when a large part of the colony splits off to create a new hive, they make a specific kind of vibration as they prepare to leave. The scientists discovered this using the same setup Wolfgang Buttress later copied: accelerometers placed in bee hives to help them understand bee behaviour. Monitoring hives like this could help scientists understand the current decline of bees, not least because large numbers of bees die when they swarm to search for a new nest.

Hear the vibrations through your teeth

Good vibrations

The Kew Hive has one last experience to surprise you. You can hear vibrations too. In the base of the Hive you can listen to the soundtrack through your teeth. Cover your ears and place a small coffee-stirrer-style stick between your teeth, then put the other end of the stick into a slot. Suddenly you can hear the sounds of the bees and music. Vibrations pass down the stick, through your teeth and the bones of your jaw, to be picked up in a different way by your ears.

A clever use of simple electronics has taught scientists something new and created an amazing work of art.



Hoverflies: comin’ to get ya

by Peter W McOwan and Paul Curzon, Queen Mary University of London

(from the archive)

A hoverfly on a blade of grass

By understanding the way hoverflies mate, computer scientists found a way to sneak up on humans, and with it a way to make computer games harder.

When hoverflies get the hots for each other they make some interesting moves. Biologists had noticed that as one hoverfly moves towards a second to try and mate, the approaching fly doesn’t go in a straight line. It makes a strange curved flight. Peter and his student Andrew Anderson thought this was an interesting observation and started to look into why it might happen. They came up with a cunning idea: the hoverfly was trying to sneak up on its prospective mate unseen.

The route the approaching fly takes matches the movements of the prospective mate in such a way that, to the mate, the fly in the distance looks like it’s far away and ‘probably’ stationary.

Tracking the motion of a hoverfly and its sightlines

How does it do this? Imagine you are walking across a field with a single tree in it, and a friend is trying to sneak up on you. Your friend starts at the tree and moves in such a way that they are always in direct line of sight between your current position and the tree. As they move towards you they are always silhouetted against the tree. Their motion towards you is mimicking the stationary tree’s apparent motion as you walk past it… and that’s just what the hoverfly does when approaching a mate. It’s a stealth technique called ‘active motion camouflage’.
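
Here is a toy Python sketch of that geometry (an illustration, not the researchers’ model): the pursuer always sits on the line between the landmark and the target, sliding a little further along it at each step.

    def camouflage_path(target_path, landmark=(0.0, 0.0), steps_to_close=20):
        # Positions for a pursuer that always lies on the landmark-to-target
        # line, moving a fraction closer at each time step.
        lx, ly = landmark
        path = []
        for i, (tx, ty) in enumerate(target_path):
            t = min(1.0, i / steps_to_close)  # 0 = at landmark, 1 = caught up
            path.append((lx + t * (tx - lx), ly + t * (ty - ly)))
        return path

    # The target walks across the 'field'; the pursuer stays silhouetted
    # against the 'tree' at the origin the whole time.
    target = [(5.0, 1.0 + 0.5 * i) for i in range(21)]
    for p, q in zip(camouflage_path(target), target):
        print(f"pursuer ({p[0]:.2f}, {p[1]:.2f})   target ({q[0]:.2f}, {q[1]:.2f})")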

By building a computer model of the mating flies, the team were able to show that this complex behaviour can actually be done with only a small amount of ‘brain power’. They went on to show that humans are also fooled by active motion camouflage. They did this by creating a computer game where you had to dodge missiles. Some of those missiles used active motion camouflage. The missiles using the fly trick were the most difficult to spot.

It just goes to show: there is such a thing as a useful computer bug.



Ant Art

by Paul Curzon, Queen Mary University of London

(from the archive)

The close up head of an ant staring at you
Image by Virvoreanu Laurentiu from Pixabay 

There are many ways Artificial Intelligences might create art. Breeding a colony of virtual ants is one of the most creative.

Photogrowth from the University of Coimbra does exactly that. The basic idea is to take an image and paint an abstract version of it. Normally you would paint with brush strokes. In ant paintings you paint with the trails of hundreds of ants as they crawl over the picture, depositing ink rather than the normal chemical trails ants use to guide other ants to food. The colours in the original image act as food for the ants, which absorb energy from its bright parts. They then use up energy as they move around. They die if they don’t find enough food, but reproduce if they have lots. The results are highly novel swirl-filled pictures.

The program uses vector graphics rather than pixel-based approaches. In pixel graphics, an image is divided into a grid of squares and each allocated a colour. That means when you zoom in to an area, you just see larger squares, not more detail. With vector graphics, the exact position of the line followed is recorded. That line is just mapped on to the particular grid of the display when you view it. The more pixels in the display, the more detailed the trail is drawn. That means you can zoom in to the pictures and just see ever more detail of the ant trails that make them up.
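
The difference is easy to see in a few lines of Python (a minimal sketch of the idea, not Photogrowth’s code): the trail is stored as exact coordinates and only mapped to a pixel grid when it is displayed.

    # An ant trail stored as exact positions on a unit canvas.
    trail = [(0.100, 0.200), (0.123, 0.251), (0.170, 0.312)]

    def to_pixels(trail, width, height):
        # Map the exact trail onto whatever pixel grid the display has.
        return [(round(x * width), round(y * height)) for x, y in trail]

    print(to_pixels(trail, 100, 100))    # a coarse display
    print(to_pixels(trail, 4000, 4000))  # a fine display recovers more detail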

You become a breeder of a species of ant
that produce trails, and so images,
you will find pleasing

Because the virtual ants wander around at random, each time you run the program you get a different image. However, there are lots of ways to control how the ants move around their world. Exploring them by hand would only ever uncover a small fraction of the possibilities. Photogrowth therefore uses a genetic algorithm. Rather than set all the options of ant behaviour for each image, you help design a fitness function for the algorithm. You do this by adjusting the importance of different aspects, like the thickness of the trail left and the extent to which the ants try to cover the whole canvas. In effect you become a breeder of a species of ant that produces trails, and so images, you will find pleasing. Once you’ve chosen the fitness function, the program evolves a colony of ants based on it, and they then paint you a picture with their trails.
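
As a rough sketch of what a user-weighted fitness function might look like in Python (a guess at the general shape; Photogrowth’s real measures are more sophisticated):

    def make_fitness(w_thickness, w_coverage):
        # The user chooses how much each property matters; the genetic
        # algorithm then breeds ants that score highly on this measure.
        def fitness(painting):
            # 'painting' is assumed to report two measured properties,
            # each scaled between 0 and 1.
            return (w_thickness * painting["trail_thickness"]
                    + w_coverage * painting["canvas_coverage"])
        return fitness

    fit = make_fitness(w_thickness=0.3, w_coverage=0.7)
    print(fit({"trail_thickness": 0.5, "canvas_coverage": 0.9}))  # 0.78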

The result is a painting painted by ants bred purely to create images that please you.



Standup Robots

‘How do robots eat pizza?’… ‘One byte at a time’. Computational Humour is real, but it’s not jokes about computers, it’s computers telling their own jokes.

Robot performing
Image from istockphoto

Computers can create art, stories, slogans and even magic tricks. But can computers perform themselves? Can robots invent their own jokes? Can they tell jokes?

Combining Artificial Intelligence, computational linguistics and humour studies (yes, you can study how to be funny!), a team of Scottish researchers made an early attempt at computerised standup comedy! They came up with Standup (System To Augment Non-Speakers’ Dialogue Using Puns): a program that generates riddles for kids with language difficulties. Standup has a dictionary and joke-building mechanism, but it does not perform: it just creates the jokes. You will have to judge for yourself whether the puns are funny. What makes a pun funny? It is about a word having two meanings at exactly the same time in a sentence. It is also about generating an expectation that you then break: an idea at the core of creativity too.

A research team at Virginia Tech in the US created a system that started to learn about funny pictures. Having defined a ‘funniness score’, they created a computational model for humorous scenes and trained it to predict funniness, perhaps with an eye to spotting pictures worth posting on social media, or not.

But are there funny robots out there? Yes! RoboThespian, programmed by researchers at Queen Mary University of London, and Data, created by researchers at Carnegie Mellon University, are both robots programmed to do stand-up comedy. Data has a bank of jokes and responds to audience reaction. His developers don’t actually know what he will do when he performs, as he is learning all the time. At his first public gig he got the crowd laughing, but his timing was poor. You can see his performance online, in a TED Talk.

RoboThespian did a gig at the London Barbican alongside human comedians. The performance was a live experiment to understand whether the robot could ‘work the audience’ as well as a human comedian. They found that even relatively small changes in the timing of delivery make a big difference to audience response.

What have these all got in common? Artificial Intelligence, machine learning and studies to understand what humour actually is are being combined to make something that is funny. Comedy is perhaps the pinnacle of creativity. It’s certainly not easy for a human to write even one joke, so think how hard it is to distil that skill into algorithms and train a computer to create loads of them.

You have to laugh!


– Jane Waite, Queen Mary University of London, Summer 2017


Smart bags

In our stress-filled world, with ever-increasing levels of anxiety, it would be nice if technology could sometimes reduce stress rather than just add to it. That is the problem QMUL’s Christine Farion set out to solve for her PhD. She wanted to do something stylish too, so she created a new kind of bag: a smart bag.

Christine realised that one thing that causes anxiety for a lot of people is forgetting everyday things. It is very common for us to forget keys, train tickets, passports and other everyday things we need for the day. Sometimes it’s just irritating. At other times it can ruin the day. Even when we don’t forget things, we waste time unpacking and repacking bags to make sure we really do have the things we need. Of course, the moment we unpack a bag to check, we increase the chance that something won’t be put back!

Electronic bags

Christine wondered if a smart bag could help. Over the space of several years, she built ten different prototypes using basic electronic kits, allowing her to explore lots of options. Her basic design has coloured lights on the outside of the bag, and a small scanner inside. To use the bag, you attach electronic tags to the things you don’t want to forget. They are like the tags shops use to keep track of stock and prevent shoplifting. Some are embedded into things like key fobs, while others can be stuck directly onto an object. Then, when you pack your bag, you scan the objects with the reader as you put them in, and the lights show you they are definitely there. The different coloured lights allow you to create clear links – natural mappings – between the lights and the objects. For her own bag, Christine linked the blue light to a blue key fob with her keys, and the yellow light to her yellow hayfever tablet box.
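
The core behaviour is simple to sketch in Python (a reconstruction of the idea as described, not Christine’s actual firmware; the tag names are made up): each tag ID maps to a coloured light that switches on when the tag is scanned into the bag.

    TAG_TO_LIGHT = {
        "tag-keys": "blue",        # blue key fob -> blue light
        "tag-hayfever": "yellow",  # yellow tablet box -> yellow light
    }

    packed = set()

    def on_scan(tag_id):
        # Called whenever the reader inside the bag sees a tag.
        light = TAG_TO_LIGHT.get(tag_id)
        if light:
            packed.add(tag_id)
            print(light, "light ON")  # that item is definitely in the bag

    on_scan("tag-keys")
    on_scan("tag-hayfever")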

In the wild

One of the strongest things about her work was that she tested her bags extensively ‘in the wild’. She gave them to people who used them as part of their normal everyday life, asking them to report back what did and didn’t work. This all fed into the designs of subsequent bags and allowed her to learn what really mattered to make this kind of bag work for the people using it. One of the key things she discovered was that the technology needed to be completely simple to use. If it wasn’t both obvious how to use and quick and simple to do, it wouldn’t be used.

Christine also used the bags herself, keeping a detailed diary of incidents related to the bags and their design. This is called ‘autoethnography’. She even used one bag as her own main bag for a year and a half, building it completely into her life, fixing problems as they arose. She took it to work, shopping, to coffee shops … wherever she went.

Suspicious?

When she showed people her prototype bags, one common worry was that the electronics would look suspicious and be a problem when travelling. She set out to find out, taking her bag on journeys around the country, on trains and even to airports, travelling overseas on several occasions. There were no problems at all.

Fashion matters

As a bag is a personal item we carry around with us, it becomes part of our identity. She found that appropriate styling is, therefore, essential in this kind of wearable technology. There is no point making a smart bag that doesn’t fit the look people want to carry around. This is a problem with a lot of today’s medical technology, for example. Objects that help with medical conditions, like diabetic monitors or drug pumps, and even things as simple and useful as hearing aids or glasses, may ‘solve’ a problem but can lead to stigma if they look ugly. Fashion, on the other hand, does the opposite: it is all about being cool. Christine showed that by combining the design of the technology with an understanding of fashion, her bags were seen as cool. Rather than designing just a single functional smart bag, you ideally need a range of bags if the idea is to work for everyone.

Now, why don’t I have my glasses with me?

– Paul Curzon, Queen Mary University of London, Autumn 2018


Sick tattoos

Image by Anand Kumar from Pixabay

Researchers at MIT and Harvard have new skin in the game when it comes to monitoring people’s bodily health. They have developed a new wearable technology in the form of colour- and shape-changing tattoos. The tattoos work using bio-sensitive inks that change colour, fade away or appear under different coloured illumination, depending on your body chemistry. They could, for example, change their colour, or their shape as parts fade away, in response to your blood glucose levels.

This kind of constantly on, constantly working body monitoring ensures that there is nothing to fall off, get broken or run out of power. That’s important in chronic conditions like diabetes where monitoring and controlling blood glucose levels is crucial to the person’s health. The project, called Dermal Abyss, brings together scientists and artists in a new way to create a data interface on your skin.

There are still lots of questions to answer, like how long will the tattoos last and would people be happy displaying their health status to anyone who catches a glimpse of their body art? How would you feel having your body stats displayed on your tats? It’s a future question for researchers to draw out the answer to.

– Peter W. McOwan, Queen Mary University of London, Autumn 2018

Studying Comedy with Computers

by Vanessa Pope, Queen Mary University of London

Smart speakers like Alexa might know a joke or two, but machines aren’t very good at sounding funny yet. Comedians, on the other hand, are experts at sounding both funny and exciting, even when they’ve told the same joke hundreds of times. Maybe speech technology could learn a thing or two from comedians… that is what my research is about.

Image by Rob Slaven from Pixabay 

To test a joke, stand-up comedians tell it to lots of different audiences and see how they react. If no-one laughs, they might change the words of the joke or the way they tell it. If we can learn how they make their adjustments, maybe technology can borrow their tricks. How much do comedians change as they write a new show? Does a comedian say the same joke the same way at every performance? The first step is to find out.

So I recorded many performances of the same comedian’s live show and looked for the parts that match from one show to the next. It was much faster to write a program to find the same jokes in different shows than to find them all myself. My code goes through all the words and sounds a comedian said in one live show and looks for matching chunks in their other shows. Words need to be in exactly the same order to be a match: “Why did the chicken cross the road” is very different to “Why did the road cross the chicken”! The process of looking through a sequence to find a match is called “subsequence matching”, because you’re looking through one sequence (the whole set of words and sounds in a show) for a smaller sequence (the “sub” in “subsequence”). If a subsequence (little sequence) is found in lots of shows, it means the comedian says that joke the same way at every show. Subsequence matching is a brand new way to study comedy and other types of speech that are repeated, like school lessons or a favourite campfire story.
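
Here is a toy version of the idea in Python (a sketch with heavy simplifications, not the actual research code): it finds every run of at least four words that two show transcripts share in exactly the same order.

    def shared_chunks(show_a, show_b, min_len=4):
        a, b = show_a.split(), show_b.split()
        found = set()
        for i in range(len(a)):
            for j in range(len(b)):
                # Only start counting at the beginning of a matching run.
                if i and j and a[i - 1] == b[j - 1]:
                    continue
                k = 0
                while (i + k < len(a) and j + k < len(b)
                       and a[i + k] == b[j + k]):
                    k += 1
                if k >= min_len:
                    found.add(" ".join(a[i:i + k]))
        return found

    show1 = "why did the chicken cross the road to get to the other side"
    show2 = "so why did the chicken cross the road anyway"
    print(shared_chunks(show1, show2))
    # -> {'why did the chicken cross the road'}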

By comparing how comedians told the same jokes in lots of different shows, I found patterns in the way they told them. Although comedy can sound very improvised, a big chunk of comedians’ speech (around 40%) was exactly the same in different shows. Sounds like “ummm” and “errr” might seem like mistakes but these hesitation sounds were part of some matches, so we know that they weren’t actually mistakes. Maybe “umm”s help comedians sound like they’re making up their jokes on the spot.

Varying how long pauses are could be an important part of making speech sound lively, too. A comedian told a joke more slowly and evenly when they were recorded on their own than when they had an audience. Comedians work very hard to prepare their jokes so they are funny to lots of different people. Computers might, therefore, be able to borrow the way comedians test their jokes and change them. For example, one comedian kept only five of their original jokes in their final show! New jokes were added little by little around the old jokes, rather than being added in big chunks.

If you want to run an experiment at home, try recording yourself telling the same joke to a few different people. How much practice did you need before you could say the joke all in one go? What did you change, including little sounds like “umm”? What didn’t you change? How did the person you were telling the joke to change how you told it?

There’s lots more to learn from comedians and actors, like whether they change their voice and movement to keep different people’s attention. This research is the first to use computers to study how performers repeat and adjust what they say, but it is hopefully just the beginning.

Now, have you heard the one about the …

For more information about Vanessa’s work visit https://vanessapope.co.uk/ [EXTERNAL]