Sophie Wilson: Where would feeding cows take you?

Chip design that changed the world

by Paul Curzon, Queen Mary University of London

(Updated from the archive)

cows grazing
Image by Christian B. from Pixabay 

Some people’s innovations are so amazing it is hard to know where to start. Sophie Wilson is like that. She helped kick-start the original 1980s BBC micro computer craze, then went on to help design the chips in virtually every smartphone ever made. Her more recent innovations form the backbone that keeps broadband infrastructure going. The money her innovations have made easily runs into tens of billions of dollars, and the companies she helped succeed are worth hundreds of billions. It all started with her feeding cows!

While still a student, Sophie spent a summer designing a system that could automatically feed cows. It was powered by a microprocessor called the MOS 6502: one of the first really cheap chips. As a result Sophie gained experience both in programming with the 6502’s instruction set and in embedded computers: the idea that computers can disappear into other everyday objects. After university she quickly got a job as a lead designer at Acorn Computers and extended their version of the BASIC language, adding, for example, a way to name procedures so that it was easier to write large programs by breaking them up into smaller, manageable parts.

Acorn needed a new version of their microcomputer, based on the 6502 processor, to bid for a contract with the BBC for a project to inspire people about the fun of coding. Her boss challenged her to design it and get it working, all in only a week. He also told her someone else in the team had already said they could do it. Taking up the challenge, she built the hardware in a few days, soldering while watching the Royal Wedding of Charles and Diana on TV. With a day to go there were still bugs in the software, so she worked through the night debugging. She succeeded, and within a week of taking up the challenge it worked! As a result Acorn won the contract from the BBC, the BBC micro was born and a whole generation was subsequently inspired to code. Many computer scientists still remember the BBC micro fondly 30 years later.

That would be an amazing lifetime achievement for anyone. Sophie went on to even greater things. Acorn morphed into the company ARM on the back of more of her innovations, ultimately by returning to the idea of embedded computers. The Acorn team realised that embedded computers were the future, and as ARM they have done more than anyone to make embedded computing a ubiquitous reality.

They set about designing a new chip based on the idea of Reduced Instruction Set Computing (RISC). The trend up to that point was to add ever more complex instructions to the set of programming instructions that computer architectures supported. The result was bloated systems that were hungry for power. The idea behind RISC chips was to do the opposite: design a chip with a small but powerful instruction set. Sophie’s colleague Steve Furber set to work designing the chip’s architecture – essentially the hardware. Sophie herself designed the instructions it had to support – its lowest-level programming language. The problem was to come up with the right set of instructions so that each could be executed really, really quickly – getting as much work done in as few clock cycles as possible. Those instructions also had to be versatile enough that, sequenced together, they could do more complicated things quickly too.

Other teams from big companies had been struggling to do this well despite all their clout, money and powerful mainframes to work on the problem. Sophie did it in her head. She wrote a simulator for it in her BBC BASIC running on the BBC Micro. The resulting architecture and its descendants took over the world, with ARM’s RISC chips running 95% of all smartphones. If you have a smartphone you are probably using an ARM chip. They are also used in game controllers and tablets, drones, televisions, smart cars and homes, smartwatches and fitness trackers.

All these applications, and embedded computers generally, need chips that combine speed with low energy needs. That is what RISC delivered, allowing the revolution to start.

If you want to thank anyone for your personal mobile devices, not to mention the way our cars, homes, streets and work are now full of helpful gadgets, start by thanking Sophie…and she’s not finished yet!




This blog is funded by UKRI, through grant EP/W033615/1.

The Hive at Kew

Art meets bees, science and electronics

by Paul Curzon, Queen Mary University of London

(from the archive)

a boy lying in the middle of the Hive at Kew Gardens.

Combine an understanding of science with electronics skills and the creativity of an artist and you can get inspiring, memorable and fascinating experiences. That is what the Hive, an art installation at Kew Gardens in London, does. It is a massive sculpture linked to a subtle sound and light experience, surrounded by a wildflower meadow, and based on the work of scientists studying bees.

The Hive is a giant aluminium structure that represents a bee hive. Once inside you see it is covered with LED lights that flicker on and off apparently randomly. They aren’t random though: they are controlled by a real bee hive elsewhere in the gardens. Each pulse of a light represents bees communicating in that real hive, where the artist Wolfgang Buttress placed accelerometers. These are simple sensors, like those in phones or a BBC micro:bit, that sense movement. The sensitive ones in the bee hive pick up vibrations caused by bees communicating with each other. The signals generated are used to control the lights in the sculpture.

A new way to communicate

This is where the science comes in. The work was inspired by Martin Bencsik’s team at Nottingham Trent University who in 2011 discovered a new kind of communication between bees using vibrations. Before bees are about to swarm, when a large part of the colony splits off to create a new hive, they make a specific kind of vibration as they prepare to leave. The scientists discovered this using the set-up copied by Wolfgang Buttress: accelerometers in bee hives, used to help them understand bee behaviour. Monitoring hives like this could help scientists understand the current decline of bees, not least because large numbers of bees die when they swarm to search for a new nest.

Hear the vibrations through your teeth

Good vibrations

The Kew Hive has one last experience to surprise you. You can hear vibrations too. In the base of the Hive you can listen to the soundtrack through your teeth. Cover your ears and place a small coffee-stirrer-style stick between your teeth, and put the other end of the stick into a slot. Suddenly you can hear the sounds of the bees and music. Vibrations pass down the stick, through your teeth and the bones of your jaw, to be picked up in a different way by your ears.

A clever use of simple electronics has taught scientists something new and created an amazing work of art.



EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Hoverflies: comin’ to get ya

by Peter W McOwan and Paul Curzon, Queen Mary University of London

(from the archive)

A hoverfly on a blade of grass

By understanding the way hoverflies mate, computer scientists found a way to sneak up on humans, giving a way to make games harder.

When hoverflies get the hots for each other they make some interesting moves. Biologists had noticed that as one hoverfly moves towards a second to try and mate, the approaching fly doesn’t go in a straight line. It makes a strange curved flight. Peter and his student Andrew Anderson thought this was an interesting observation and started to look at why it might be. They came up with a cunning idea. The hoverfly was trying to sneak up on its prospective mate unseen.

The route the approaching fly takes matches the movements of the prospective mate in such a way that, to the mate, the fly in the distance looks like it’s far away and ‘probably’ stationary.

Tracking the motion of a hoverfly and its sightlines

How does it do this? Imagine you are walking across a field with a single tree in it, and a friend is trying to sneak up on you. Your friend starts at the tree and moves in such a way that they are always in direct line of sight between your current position and the tree. As they move towards you they are always silhouetted against the tree. Their motion towards you is mimicking the stationary tree’s apparent motion as you walk past it… and that’s just what the hoverfly does when approaching a mate. It’s a stealth technique called ‘active motion camouflage’.

By building a computer model of the mating flies, the team were able to show that this complex behaviour can actually be done with only a small amount of ‘brain power’. They went on to show that humans are also fooled by active motion camouflage. They did this by creating a computer game where you had to dodge missiles. Some of those missiles used active motion camouflage. The missiles using the fly trick were the most difficult to spot.
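The trick only takes a few lines of code to capture. Below is a minimal Python sketch (an illustration, not the researchers’ actual model): a “shadower” repeatedly repositions itself on the sightline between a fixed reference point (the “tree”) and the moving target, creeping a little closer each step.

```python
# A minimal sketch of active motion camouflage (an illustration, not the
# researchers' actual model). The shadower keeps itself on the sightline
# between a fixed reference point (the "tree") and the moving target,
# creeping a little closer each step, so from the target's viewpoint it
# seems to sit motionless against the reference point.

def camouflaged_step(shadower, target, reference, closing_rate=0.05):
    """Return the shadower's next position on the reference-target sightline."""
    rx, ry = reference
    dx, dy = target[0] - rx, target[1] - ry
    # How far along the sightline the shadower currently is (0 = at reference).
    f = ((shadower[0] - rx) * dx + (shadower[1] - ry) * dy) / (dx * dx + dy * dy)
    f = min(1.0, f + closing_rate)  # creep towards the target (1 = caught up)
    return (rx + f * dx, ry + f * dy)

reference = (0.0, 0.0)   # the fixed point the shadower hides against
shadower = reference     # the shadower starts at the reference point
for step in range(20):
    target = (10.0, float(step))  # the target walks past in a straight line
    shadower = camouflaged_step(shadower, target, reference)
```

At every step the shadower sits exactly on the line from the reference point to the target, so to the target it stays silhouetted against the “tree” even as it closes in.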

It just goes to show: there is such a thing as a useful computer bug.



Ant Art

by Paul Curzon, Queen Mary University of London

(from the archive)

The close up head of an ant staring at you
Image by Virvoreanu Laurentiu from Pixabay 

There are many ways Artificial Intelligences might create art. Breeding a colony of virtual ants is one of the most creative.

Photogrowth from the University of Coimbra does exactly that. The basic idea is to take an image and paint an abstract version of it. Normally you would paint with brush strokes. In ant paintings you paint with the trails of hundreds of ants as they crawl over the picture, depositing ink rather than the normal chemical trails ants use to guide other ants to food. The colours in the original image act as food for the ants, which absorb energy from its bright parts. They then use up energy as they move around. They die if they don’t find enough food, but reproduce if they have lots. The results are highly novel swirl-filled pictures.

The program uses vector graphics rather than pixel-based approaches. In pixel graphics, an image is divided into a grid of squares and each allocated a colour. That means when you zoom in to an area, you just see larger squares, not more detail. With vector graphics, the exact position of the line followed is recorded. That line is just mapped on to the particular grid of the display when you view it. The more pixels in the display, the more detailed the trail is drawn. That means you can zoom in to the pictures and just see ever more detail of the ant trails that make them up.
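The difference is easy to see in code. This toy sketch (an illustration, not Photogrowth’s actual code) stores a trail as exact coordinates and only maps them onto a pixel grid at display time; rendering the same trail on a finer grid reveals more detail.

```python
# Why vector trails stay sharp: store the trail as exact coordinates and
# only map them onto a pixel grid when displaying. Zooming in just maps
# the same points onto a finer grid. (A toy illustration, not real code
# from Photogrowth.)

trail = [(0.0, 0.0), (0.35, 0.6), (0.8, 0.45), (1.0, 1.0)]  # ant path, 0..1

def rasterise(points, grid_size):
    """Map exact coordinates onto a grid_size x grid_size pixel grid."""
    return [(round(x * (grid_size - 1)), round(y * (grid_size - 1)))
            for x, y in points]

coarse = rasterise(trail, 10)    # a 10x10 display: detail is lost
fine = rasterise(trail, 1000)    # a 1000x1000 display: more detail appears
```

A pixel image would have thrown the exact coordinates away at the coarse resolution; the vector trail keeps them, so the fine rendering recovers detail the coarse one could not show.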

You become a breeder of a species of ant that produces trails, and so images, you will find pleasing

Because the virtual ants wander around at random, each time you run the program you will get a different image. However, there are lots of ways to control how ants can move around their world. Exploring the possibilities by hand would only ever uncover a small fraction of the possibilities. Photogrowth therefore uses a genetic algorithm. Rather than set all the options of ant behaviour for each image, you help design a fitness function for the algorithm. You do this by adjusting the importance of different aspects like the thickness of trail left and the extent the ants will try and cover the whole canvas. In effect you become a breeder of a species of ant that produce trails, and so images, you will find pleasing. Once you’ve chosen the fitness function, the program evolves a colony of ants based on it, and they then paint you a picture with their trails.
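A fitness function built this way boils down to a weighted sum of measured properties of a painting. Here is a hedged sketch of the idea (the property names, measures and weights are made up for illustration; Photogrowth’s real options will differ):

```python
# A sketch of a user-weighted fitness function for breeding ant paintings.
# The property names, measures and weights here are made up for
# illustration; Photogrowth's real options will differ.

def fitness(trail_thickness, canvas_coverage, weights):
    """Combine measured properties of one colony's painting into a score."""
    return (weights["thickness"] * trail_thickness
            + weights["coverage"] * canvas_coverage)

# The "breeder" cares a lot about covering the canvas, less about thick trails.
my_taste = {"thickness": 0.2, "coverage": 0.8}

# Two candidate colonies, each property measured on a 0..1 scale.
wispy_but_thorough = fitness(0.1, 0.9, my_taste)  # thin trails, full canvas
bold_but_patchy = fitness(0.9, 0.3, my_taste)     # thick trails, patchy canvas

# The genetic algorithm keeps higher-scoring colonies to breed from,
# so this breeder's colonies evolve towards covering the whole canvas.
```

By adjusting the weights you change which colonies survive, which is exactly what makes you the breeder rather than the painter.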

The result is a painting painted by ants bred purely to create images that please you.



Diamond Dogs: Bowie’s algorithmic creativity

by Paul Curzon, Queen Mary University of London

(Updated from the archive)

Bowie black and white portrait
Image by Cristian Ferronato from Pixabay

Rock star David Bowie co-wrote a program that generated lyric ideas. It gave him inspiration for some of his most famous songs. It generated sentences at random based on something called the ‘cut-up’ technique: an algorithm for writing lyrics that he was already doing by hand. You take sentences from completely different places, cut them into bits and combine them in new ways. The randomness in the algorithm creates strange combinations of ideas and he would use ones that caught his attention, sometimes building whole songs around the ideas they expressed.

Tools for creativity

Rather than being an algorithm that is creative in itself, it is perhaps more a tool to help people (or perhaps other algorithms) be more creative. Both kinds of algorithm are of course useful. It does help highlight an issue with any “creative algorithm”, whether creating new art, music or writing. If the algorithm produces lots of output and a human then chooses the ones to keep (and show others), then where is the creativity? In the algorithm or in the person? That selection process of knowing what to keep and what to discard (or keep working on) seems to be a key part of creativity. Any truly creative program should therefore include a module to do such vetting of its work!

All that aside, an algorithm is certainly part of the reason Bowie’s song lyrics were often so surreal and intriguing!


Write a cut-up technique program

Why not try to write your own cut-up technique program to produce lyrics? You will likely need to use the string-processing libraries of whatever language you choose. You could feed it things like the text of webpages or news reports. If you don’t program yet, do it by hand: cut up magazines, shuffle the part-sentences, then glue them back together.
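If you do fancy programming it, here is one possible minimal version in Python (a sketch of the general technique, not Bowie’s actual program):

```python
import random

# A minimal cut-up lyric generator: a sketch of the general technique,
# not Bowie's actual program. Feed it any texts you like: news reports,
# webpages, old diaries...

def cut_up(texts, fragments_per_line=2, lines=4):
    """Cut sentences into halves and recombine the fragments at random."""
    fragments = []
    for text in texts:
        for sentence in text.split("."):
            words = sentence.split()
            if len(words) >= 4:           # ignore tiny scraps
                mid = len(words) // 2     # cut each sentence roughly in half
                fragments.append(" ".join(words[:mid]))
                fragments.append(" ".join(words[mid:]))
    lyric = []
    for _ in range(lines):
        chosen = random.sample(fragments, fragments_per_line)
        lyric.append(" ".join(chosen))
    return "\n".join(lyric)

sources = [
    "The city sleeps under a cold electric sky. Nobody remembers the old songs now.",
    "Scientists report the comet will return in eighty years. Markets rose sharply this morning.",
]
print(cut_up(sources))
```

Each run stitches together different fragments, so, like Bowie, you read through the output looking for the strange combinations worth keeping.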



The algorithm that could not speak its name

by Paul Curzon, Queen Mary University of London

(Updated from the archive)

Image by PIRO4D from Pixabay 

The first program that was actually creative was probably written by Christopher Strachey, in 1952. It wrote love letters…possibly gay ones.

The letters themselves weren’t particularly special. They wouldn’t make your heart skip a beat if they were written to you, though they are kind of quaint. They actually have the feel of someone learning English doing their best but struggling for the right words! It’s the way the algorithm works that was special. It would be simple to write a program that ‘wrote’ love letters thought up in advance and pre-programmed by the programmer. Strachey’s program could do much more than that, though – it could write letters he never envisaged. It did this using a few simple rules that, despite their simplicity, gave it the power to write a vast number of different letters. It was based on lists of different kinds of words chosen to be suitable for love letters. There was a list of nouns (like ‘affection’, ‘ardour’, …), a list of adjectives (like ‘tender’, ‘loving’, …), and so on.

It then just chose words from the appropriate list at random and plugged them into place in template sentences, a bit like slotting the last pieces into a jigsaw. It only used a few kinds of sentences as its basic rules, such as: “You are my < adjective > < noun >”. That rule could generate, for example, “You are my tender affection.” or “You are my loving heart.”, substituting in different combinations of its adjectives and nouns. It then combined several similar rules about different kinds of sentences to give a different love letter every time.

Strachey knew Alan Turing, who was a key figure in the creation of the first computers, and they may have worked on the ideas behind the program together. As both were gay it is entirely possible that the program was actually written to generate gay love letters. Oddly, the one word the program never uses is the word ‘love’ – a sentiment that at the time gay people just could not openly express. It was a love letter algorithm that just could not speak its name!

You can try out Strachey’s program [EXTERNAL], and the Twitter bot loveletter_txt is based on it [EXTERNAL]. Better still, why not write your own version? It’s not too hard.

Here is one of the offerings from my attempt to write a love letter writing program:

Beloved Little Cabbage,

I cling to your anxious fervour. I want to hold you forever. You are my fondest craving. You are my fondest enthusiasm. My affection lovingly yearns for your loveable passion.

Yours, keenly Q

The template I used was:

salutation1 + ” ” + salutation2 + “,”

“I ” + verb + ” your ” + adjective + ” ” + noun + “.”

“You are my ” + noun + “.”

“I want ” + verb + ” you forever.”

“I ” + verb + ” your ” + adjective + ” ” + noun + “.”

“My ” + noun1 + ” ” + adverb + ” ” + verb  + ” your ” + adjective + ” ” + noun2 + “.”

“Yours, ” + adverb + ” Q”

Here the characters in double quotes stay the same, whereas the parts not in quotes are variables: placeholders for a word chosen at random from the associated word list.

Experiment with different templates and different word lists and create your own unique version. If you can’t program yet, you can do it on paper by writing out the template and putting the different word lists on different coloured small post-it notes. Number them and use dice to choose one at random.
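If you would like a starting point in code, here is a minimal Python version of the template above (the word lists are just examples to swap for your own):

```python
import random

# A minimal Python version of the template above, in the style of
# Strachey's generator. The word lists are just examples: swap in your own.

salutations1 = ["Beloved", "Darling", "Dearest"]
salutations2 = ["Little Cabbage", "Sweetheart", "Duck"]
nouns = ["affection", "ardour", "craving", "enthusiasm", "passion"]
adjectives = ["tender", "loving", "anxious", "fondest", "loveable"]
verbs = ["cling to", "yearn for", "treasure", "adore"]         # after "I ..."
verbs3 = ["clings to", "yearns for", "treasures", "adores"]    # after "My ..."
adverbs = ["keenly", "lovingly", "wistfully"]

pick = random.choice  # choose one word from a list at random

def love_letter():
    """Fill the template with randomly chosen words."""
    return "\n".join([
        pick(salutations1) + " " + pick(salutations2) + ",",
        "I " + pick(verbs) + " your " + pick(adjectives) + " " + pick(nouns) + ".",
        "You are my " + pick(adjectives) + " " + pick(nouns) + ".",
        "My " + pick(nouns) + " " + pick(adverbs) + " " + pick(verbs3)
            + " your " + pick(adjectives) + " " + pick(nouns) + ".",
        "Yours, " + pick(adverbs) + " Q",
    ])

print(love_letter())
```

Run it a few times: because every slot is filled at random, each letter is different, just as with Strachey’s original.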

Of course, you don’t have to write love poems. Perhaps, you could use the same idea for a post card writing program this summer holiday…



Only the fittest slogans survive!

by Paul Curzon, Queen Mary University of London

Assembly line image by OpenClipart-Vectors from Pixabay


Being creative isn’t just for the fun of it. It can be serious too. Marketing people are paid vast amounts to come up with slogans for new products, and in the political world, a good, memorable soundbite can turn the tide over who wins and loses an election. Coming up with great slogans that people will remember for years needs both a mastery of language and a creative streak too. Algorithms are now getting in on the act, and if anyone can create a program as good as the best humans, they will soon be richer than the richest marketing executive. Polona Tomašič and her colleagues from the Jožef Stefan Institute in Slovenia are one group exploring the use of algorithms to create slogans. Their approach is based on the way evolution works – genetic algorithms. Only the fittest slogans survive!

A mastery of language
To generate a slogan, you give their program a short description on the slogan’s topic – a new chocolate bar perhaps. It then uses existing language databases and programs to give it the necessary understanding of language.

First, it uses a database of common grammatical links between pairs of words generated from wikipedia pages. Then skeletons of slogans are extracted from an Internet list of famous (so successful) slogans. These skeletons don’t include the actual words, just the grammatical relationships between the words. They provide general outlines that successful slogans follow.

From the passage given, the program pulls out keywords that can be used within the slogans (beans, flavour, hot, milk, …). It generates a set of fairly random slogans from those words to get started. It does this just by slotting keywords into the skeletons along with random filler words in a way that matches the grammatical links of the skeletons.

Breeding Slogans
New baby slogans are now produced by mating pairs of initial slogans (the parents). This is done by swapping bits into the baby from each parent. Both whole sections and individual words are swapped in. Mutation is allowed too. For example, adjectives are added in appropriate places. Words are also swapped for words with a related meaning. The resulting children join the new population of slogans. Grammar is corrected using a grammar checker.

Culling Slogans
Slogans are now culled. Any that are the same as existing ones go immediately. The slogans are then rated to see which are fittest. This uses simple properties like their length, the number of keywords used, and how common the words used are. More complex tests used are based on how related the meanings of the words are, and how commonly pairs of words appear together in real sentences. Together these combine to give a single score for the slogan. The best are kept to breed in the next generation, the worst are discarded (they die!), though a random selection of weaker slogans are also allowed to survive. The result is a new set of slogans that are slightly better than the previous set.
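The breed-and-cull loop itself is simple to sketch. The toy Python version below follows the same shape but is far simpler than the researchers’ system: here “fitness” just counts how many distinct keywords a slogan uses, and all the words are made up for illustration.

```python
import random

# A toy genetic algorithm with the breed-mutate-cull shape described
# above, far simpler than the real system: here "fitness" just counts
# how many distinct keywords a slogan uses.

KEYWORDS = {"hot", "chocolate", "flavour", "milk"}
FILLERS = ["the", "my", "pure", "simply", "taste", "moment", "joy"]
WORDS = FILLERS + sorted(KEYWORDS)

def random_slogan():
    """A fairly random four-word slogan to seed the population."""
    return [random.choice(WORDS) for _ in range(4)]

def fitness(slogan):
    """Score a slogan: how many distinct keywords does it use?"""
    return len(set(slogan) & KEYWORDS)

def breed(mum, dad):
    # The child takes each word position from one parent or the other...
    child = [random.choice(pair) for pair in zip(mum, dad)]
    if random.random() < 0.2:  # ...with an occasional mutation
        child[random.randrange(len(child))] = random.choice(WORDS)
    return child

population = [random_slogan() for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                  # cull: the weakest die
    children = [breed(random.choice(survivors), random.choice(survivors))
                for _ in range(20)]
    population = survivors + children            # the next generation

best = max(population, key=fitness)
print(" ".join(best), "- fitness", fitness(best))
```

The real system’s fitness function is far richer, scoring grammar, word meanings and how words co-occur in real sentences, but the generational loop is the same.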

Many generations later…
The program breeds and culls slogans like this for thousands, even millions, of generations, gradually improving them, until it finally chooses its best. The slogans produced are not yet world-beating on their own, and vary in quality as judged by humans. For chocolate, one run came up with slogans like “The healthy banana” and “The favourite oven”, for example. It finally settled on “The HOT chocolate”, which is pretty good.

Hot chocolate image by Sabrina Ripke from Pixabay

More work is needed on the program, especially its fitness function – the way it decides what is a good slogan and what isn’t. As it stands, this sort of program isn’t likely to replace anyone’s marketing department. It could help with brainstorming sessions though, sparking new ideas but leaving humans to make the final choice. Supporting human creativity rather than replacing it is probably just as rewarding for the program, after all.

This article was originally published on CS4FN and also appears on p13 of Creative Computing, issue 22 of the CS4FN magazine. You can download a free PDF copy of the magazine as well as all of our previous booklets and magazines from the CS4FN downloads site.

Stopping sounds getting left behind: the Bela computer (from @BelaPlatform)

By Jo Brodie and Paul Curzon, Queen Mary University of London

Computer-based musical instruments are wonderfully flexible and becoming ever more popular. They have had one disadvantage, though: the sound could drag behind the musician in a way that made some digital instruments seem unplayable. Thanks to a new computer called Bela, that problem may now be a thing of the past.


A Bela computer surrounded by transistors, resistors, sensors, integrated circuits, buttons & switches. Credit: Andrew McPherson

If you pluck a guitar string or thwack a drum the sound you hear is instantaneous. Well, nearly. There’s a tiny delay. The sound still has to leave the instrument and travel to your ear. The vibration of the string or drum skin pushes the air back and forth, and vibrating air is all a sound is. Your ear receives the sound as soon as that vibrating air gets to you. Then your brain has to recognise it as a sound (and tell you what kind of sound it is, which direction it came from, which instrument produced it and so on!). The time it takes for the sound and then your brain to do all that is measured in tens of milliseconds – thousandths of a second. It is called ‘latency‘, not because the delay makes it ‘late’ (though it does!), but from the Latin word latens, meaning hidden or concealed, because the time between the signal being created and it being received is hidden from us.

Digital instruments take slightly longer than physical instruments, however, because electronic circuitry and computer processing are involved. It’s not just the sound going through air to ear: a digital signal first whizzes through a circuit, or is processed by a computer, to generate the sound, which then goes through air to ear.

Your ear (actually your brain) will detect two sounds as being separate if there’s a gap of around 30 milliseconds between them. Drop that gap down to around 10 milliseconds between the sounds and you’ll hear them as a single sound. If that circuit-whizzing adds 10-20 milliseconds then you’re going to notice that the instrument is lagging behind you, making it feel unplayable. Reducing a digital instrument’s latency is therefore a very important part of improving the experience for the musician.
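One large, easy-to-calculate part of that delay is the audio buffer: a computer processes sound in fixed-size chunks, and a chunk must be filled before it can be heard. A quick sketch (with illustrative figures, not Bela’s exact settings):

```python
# One large part of digital latency is the audio buffer: the computer
# processes sound in fixed-size chunks, and a chunk must be filled before
# it can be heard. (Figures here are illustrative, not Bela's exact settings.)

def buffer_latency_ms(buffer_size_frames, sample_rate_hz):
    """Time to fill one audio buffer, in milliseconds."""
    return 1000.0 * buffer_size_frames / sample_rate_hz

# A typical desktop setup: a 512-frame buffer at 44.1 kHz...
desktop = buffer_latency_ms(512, 44100)      # about 11.6 ms: noticeable lag

# ...whereas a low-latency system uses tiny buffers at the same rate.
low_latency = buffer_latency_ms(16, 44100)   # under half a millisecond
```

Shrinking the buffer cuts latency, but it only works if the computer is guaranteed to refill the buffer in time, every time, which is exactly the problem Bela’s designers set out to solve.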

In 2014 Andrew McPherson and colleagues at Queen Mary University of London aimed to solve this problem. They developed Bela, a tiny computer, similar in size to a Raspberry Pi or Arduino, that can be used in a variety of digital instruments but which is special because it has an ultra-low latency of only around 2 milliseconds – super fast.

How does it do it? A computer can seem to run slowly if it is trying to do lots of things at the same time (e.g. lots of apps running or too many windows open at once). That is when the experience for the user can be a bit glitchy. Bela works by prioritising the audio signal above ALL other activities to ensure that, no matter what else the computer is doing, the gap between input (pressing a key) and output (hearing a sound) is barely noticeable. The small size of Bela also makes it completely portable and so easy to use in musical performances without needing the performer to be tethered to a large computer.

There is definitely a demand for such a computer amongst musicians. Andrew and the team wanted to make Bela available, so began fundraising through Kickstarter to create more kits. Their fundraiser reached £5,000 within four hours and within a month they’d raised £54,000, so production could begin and they launched a company, Augmented Instruments Ltd, to sell the Bela hardware kits.

Bela allows musicians to stop worrying about the sounds getting left behind. Instead, they can just get on with playing and creating amazing sounds.

See Bela in action on YouTube. Follow them on Twitter.

Featured image credit: Andrew McPherson.


Ada Lovelace: Visionary

Cover of Issue 20 of CS4FN, celebrating Ada Lovelace

By Paul Curzon, Queen Mary University of London

It is 1843, Queen Victoria is on the British throne. The industrial revolution has transformed the country. Steam, cogs and iron rule. The first computers won’t be successfully built for a hundred years. Through the noise and grime one woman sees the future. A digital future that is only just being realised.

Ada Lovelace is often said to be the first programmer. She wrote programs for a designed, but yet to be built, computer called the Analytical Engine. She was something much more important than a programmer, though. She was the first truly visionary person to see the real potential of computers. She saw they would one day be creative.

Charles Babbage had come up with the idea of the Analytical Engine – a machine that could do calculations so we wouldn’t need to do them by hand. It would be another century before his ideas could be realised and the first computer was actually built. As he tried to raise the money and build the computer, he needed someone to help write the programs to control it – the instructions that would tell it how to do calculations. That’s where Ada came in. They worked together to try to realise their shared dream, working out how to program as they went.

Ada also wrote “The Analytical Engine has no pretensions to originate anything.” So how does that fit with her belief that computers could be creative? Read on and see if you can unscramble the paradox.

Ada was a mathematician with creative flair. While Charles had come up with the innovative idea of the Analytical Engine itself, he didn’t see beyond his original idea of the computer as a calculator. She saw that computers could do much more than that.

The key innovation behind her idea was that the numbers could stand for more than just quantities in calculations. They could represent anything – music for example. Today when we talk of things being digital – digital music, digital cameras, digital television, all we really mean is that a song, a picture, a film can all be stored as long strings of numbers. All we need is to agree a code of what the numbers mean – a note, a colour, a line. Once that is decided we can write computer programs to manipulate them, to store them, to transmit them over networks. Out of that idea comes the whole of our digital world.

Ada saw even further, though. Combining maths with that creative flair, she realised that not only could computers store and play music, they could also potentially create it – they could be composers. She foresaw the whole idea of machines being creative. She wasn’t just the first programmer; she was the first truly creative programmer.

This article was originally published at the CS4FN website, along with lots of other articles about Ada Lovelace. We also have a special Ada Lovelace-themed issue of the CS4FN magazine which you can download as a PDF (click picture below).

See also: The very first computers and Ada Lovelace Day (2nd Tuesday of October). Help yourself to our Women in Computing posters PDF, or sign up to get FREE copies posted to your school (UK-based only, please).


Standup Robots

‘How do robots eat pizza?’… ‘One byte at a time’. Computational Humour is real, but it’s not jokes about computers, it’s computers telling their own jokes.

Robot performing
Image from istockphoto

Computers can create art, stories, slogans and even magic tricks. But can computers perform themselves? Can robots invent their own jokes? Can they tell jokes?

Combining Artificial Intelligence, computational linguistics and humour studies (yes, you can study how to be funny!) a team of Scottish researchers made an early attempt at computerised standup comedy! They came up with Standup (System To Augment Non-speakers’ Dialogue Using Puns): a program that generates riddles for kids with language difficulties. Standup has a dictionary and joke-building mechanism, but does not perform, it just creates the jokes. You will have to judge for yourself whether the puns are funny. You can download the software from here. What makes a pun funny? It is about a word having two meanings at exactly the same time in a sentence. It is also about generating an expectation that you then break: a key idea at the core of creativity too.

A research team at Virginia Tech in the US created a system that started to learn about funny pictures. Having defined a ‘funniness score’ they created a computational model for humorous scenes, and trained it to predict funniness, perhaps with an eye to spotting pics for social media posting, or not.

But are there funny robots out there? Yes! RoboThespian programmed by researchers at Queen Mary University of London, and Data, created by researchers at Carnegie Mellon University are both robots programmed to do stand-up comedy. Data has a bank of jokes and responds to audience reaction. His developers don’t actually know what he will do when he performs, as he is learning all the time. At his first public gig, he got the crowd laughing, but his timing was poor. You can see his performance online, in a TED Talk.

RoboThespian did a gig at the London Barbican alongside human comedians. The performance was a live experiment to understand whether the robot could ‘work the audience’ as well as a human comedian. They found that even relatively small changes in the timing of delivery make a big difference to audience response.

What have these all got in common? Artificial Intelligence, machine learning and studies to understand what humour actually is are being combined to make something that is funny. Comedy is perhaps the pinnacle of creativity. It’s certainly not easy for a human to write even one joke, so think how hard it is to distil that skill into algorithms and train a computer to create loads of them.

You have to laugh!

Watch RoboThespian [EXTERNAL]

– Jane Waite, Queen Mary University of London, Summer 2017

Download Issue 22 of the cs4fn magazine “Creative Computing” here

Lots more computing jokes on our Teaching London Computing site