Is ChatGPT’s “CS4FN” article good enough?

(Or how to write for CS4FN)

by Paul Curzon, Queen Mary University of London

Follow the news and it is clear that the chatbots are about to take over journalism, novel writing, script writing, writing research papers… just about all kinds of writing. So how about writing for the CS4FN magazine? Are they good enough yet? Are we about to lose our jobs? Jo asked ChatGPT to write a CS4FN article to find out. Read its efforts before reading on…

As editor I not only write articles but also vet and tweak others’ when necessary to fit the magazine style. So I’ve looked at ChatGPT’s offering as I would one coming from a person…

ChatGPT’s essay writing has been compared to that of a good but not brilliant student. Writing CS4FN articles is a task we have set students in the past, in part to give them experience of how you must write in different styles for different purposes. Different audience? Different writing. Only a small number come close to what I am after. They generally have one or more issues. A common problem when students write for CS4FN is, sadly, a lack of good grammar and punctuation throughout, beyond just typos (basic but vital English skills seem to be severely lacking these days, even with spell checkers and grammar checkers to help). Other common problems include a lack of structure, no hook at the start, over-formal writing (so the wrong style), no real fun element at all and/or being devoid of stories about people, and an obsession with a few subjects (like machine learning!) rather than finding something new to write about. The results are then often vanilla articles on that topic, just churning out looked-up facts rather than finding some new, interesting angle.

How did the chatbot do? It seems to have made most of the same mistakes. At least ChatGPT’s spelling and grammar are basically good, so that is a start: it is a good primary school student then! Beyond that it has behaved as the weaker students do… and missed the point. It has just written a pretty bog-standard factual article explaining the topic it chose, and of course, given a free choice, it chose… machine learning! Fine, if it had a novel twist, but there are no interesting angles added to bring the topic alive. Nor did it describe the contributions of a person. In fact, no people are mentioned at all. It also uses a pretty formal style of writing (“In conclusion…”). Just like humans (especially academics) it used too much jargon and didn’t even explain all the jargon it did use (even after being prompted to write for a younger audience). If I were editing, I’d get rid of the formality and unexplained jargon for starters. Just like the students who can actually write but don’t yet get the subtleties, it hasn’t grasped that it should have adapted its style, even when prompted.

It knows about structure and can construct an essay with a start, a middle and an end, as it has put in an introduction and a conclusion. What it hasn’t done, though, is add any kind of “grab”. There is nothing at the start to really capture the attention. There is no strange link, no intriguing question, no surprising statement, no interesting person… nothing to really grab you (though Jo saved it by adding her own grab to the start: that she had asked an AI to write it). It hasn’t added any twist at the end, or included anything surprising. In fact, there is no fun element at all. Our articles can be serious rather than fun, but then the grab has to be about the seriousness: linked to bad effects for society, for example.

ChatGPT has also written a very abstract essay. There is little in the way of context or concrete examples. It says, for example, “rules … couldn’t handle complex situations”. Give me an example of a complex situation so I know what you are talking about! There are no similes or metaphors to help explain. It throws in some application areas for context, like game playing and healthcare, but doesn’t explain them at all (it doesn’t say what kind of breakthrough has been made in game playing, for example). In fact, it doesn’t seem to be writing in a “semantic wave” style that makes for good explanations at all. That is where you explain something by linking the abstract technical thing you are explaining to some everyday context or concrete example, unpacking then repacking the concepts. Explaining machine learning? Then illustrate your points with an example, such as how machine learning might use the movies you watch to predict your voting habits, and explain how the example illustrates the abstract concepts, pointing out the patterns it might spot, for instance.

There are several different kinds of CS4FN article. Overall, CS4FN is about public engagement with research. That gives us ways in to explain core computer science though (like what machine learning is). We try to make sure the reader learns something core, if by stealth, in the middle of longer articles. We also write about people and especially diversity, sometimes about careers or popular culture, or about the history of computation. So, context is central to our articles. Sometimes we write about general topics but always with some interesting link, or game or puzzle or … something. For a really, really good article that I instantly love, I am looking for some real creativity – something very different, whether that is an intriguing link, a new topic, or just a not very well known and surprising fact. ChatGPT did not do any of that at all.

Was ChatGPT’s article good enough? No. At best I might use some of what it wrote in the middle of some other article but in that case I would be doing all the work to make it a CS4FN article.

ChatGPT hasn’t written a CS4FN article
in any sense other than in writing about computing.

Was it trained on material from CS4FN to allow it to pick up what CS4FN was? We originally assumed so – our material has been freely accessible on the web for 20 years and the web is supposedly the chatbots’ training ground. If so, I would have expected it to do much better at getting the style right. I’m left thinking that actually, when it is asked to write articles or essays without more guidance, it just always writes about machine learning! (Just as I always used to write science fiction stories for every story my English teacher set, to his exasperation!) We assumed, because it wrote about a computing topic, that it did understand the brief, but perhaps it is all a chimera. Perhaps it didn’t actually understand the brief even to the level of knowing it was being asked to write about computing, and just hit lucky. Who knows? It is a black box. We could investigate more, but this is a simple example of why we need Artificial Intelligences that can justify their decisions!

Of course we could work harder to train it up, as I would a human member of our team. With more of the right prompting we could perhaps get it there. Given time, the chatbots will get far better anyway. Even without that they can clearly now do good basic factual writing, so yes, lots of writing jobs are undoubtedly at risk (and that includes a wide range of other jobs, like lawyers, teachers, even programmers) if we as a society decide to let them take over. We may find the world turns much more vanilla as a result, though, with writing becoming bland and boring without the human spark, and without us noticing till it is lost (just as modern supermarket tomatoes so often taste bland, having lost the intense taste they once had!)… unless the chatbots gain some real creativity.

The basic problem of new technology is that it wreaks changes irrespective of the human cost (when we allow it to, but we so often do, giddy with the new toys). That is fine if, as a society, we have strong ways to support those affected. That might involve major support for retraining and education into the new jobs created. Alternatively, if fewer jobs are created than destroyed, which may be the way we are going, and jobs become ever scarcer, then we need strong social support systems and no stigma attached to not having a job. Currently that is not looking likely; instead the changes of recent times have just increased, not reduced, inequality, with small numbers getting very, very rich but many others getting far poorer as the jobs left pay less and less.

Perhaps it’s not the malevolent Artificial Intelligences of science fiction taking over that is the real threat to humanity. Corporations act like living entities these days, working to ensure their own survival whatever the cost, and we largely let them. Perhaps it is the tech companies, and their brand of alien, self-serving corporation as ‘intelligent life’ acting as societal disrupter, that we need to worry about. Things happen (like technology releases) because the corporation wants them to, but at the moment that isn’t always the same as what is best for people long term. We could be heading for a wonderful utopian world where people do not need to work and instead spend their time doing fulfilling things. It increasingly looks like we have a very dystopian future to look forward to instead – if we let the Artificial Intelligences do too many things, taking over jobs just because they can, so that corporations can do things more cheaply and make fabulous wealth for the few.

Am I about to lose my job writing articles for CS4FN? I don’t think so. Why do I write for CS4FN? I love writing this kind of stuff. It is my hobby as much as anything. So I do it for my own personal pleasure as well as for the good I hope it does, whether inspiring and educating people or just throwing up things to think about. Even if the chatbots were good enough, I wouldn’t stop writing. It is great to have a hobby that may also be useful to others. And why would I stop doing something I do for fun, just because a machine could do it for me? But that is just lucky for me. Others who do it for a living won’t be so lucky.

We really have to stop and think about what we want as humans. Why do we do creative things? Why do we work? Why do we do anything? Replacing us with machines is all well and good, but only if the future for all people is actually better as a result, not just a few.

This blog is funded through EPSRC grant EP/W033615/1.

A machine wrote this post – OpenAI’s ChatGPT

Blog post by OpenAI’s ChatGPT
Introduction by Jo Brodie

ChatGPT image by Alexandra_Koch from Pixabay

1. Jo wrote this…

ChatGPT is an AI chatbot which can have a conversation with you using everyday (natural) language. It has access to the huge amounts of digital information found on the internet, and its ability to use language to write good answers in response to questions has been tested and improved by giving it feedback. It learns and refines its output.

You don’t need to program it; you can just ask it a question (giving it a prompt) and, after a moment’s ‘thinking’, it will produce an answer. Lots of people have been testing ChatGPT by asking it questions that we already know the answer to, to see how well it does. It’s actually quite impressive! If you think it’s made a mistake or misunderstood your prompt, you can reword your question or give it more information to help it improve its answer.

Can ChatGPT write an article that’s suitable for our magazine or website?

I gave ChatGPT the following prompt: “Please write a 200 to 500 word article suitable for the CS4FN magazine” and moments later it generated a completely new article about machine learning. That told me it knew (or at least could find out) that CS4FN has something to do with computer science. I’m not surprised that it picked machine learning as a topic, given that it is a machine that can learn. I didn’t tell it what to write about; it came up with that idea itself.

The article it gave me contained some technical words and not all of them were explained, so I asked it to rewrite the article for a younger audience (who might not have come across some of the words yet).

The result is below – what do you think?

2. ChatGPT wrote this…

From Logic to Learning: The Incredible Advancements of Artificial Intelligence

Artificial Intelligence (AI) has come a long way since it was first invented in the 1950s. At the beginning, AI was based on logic and rules. Experts in a specific field would give the computer a set of instructions to help it make decisions. But, these rules were limited and couldn’t handle complex situations.

Today, AI has seen a huge shift towards machine learning. This means that instead of following rules, AI learns from data. This shift happened because computers got stronger, and more data became available. The breakthroughs in machine learning algorithms helped too.

Machine learning has two categories: supervised learning and unsupervised learning. In supervised learning, the computer learns from labeled data. This means that the correct answer is given to the computer for each input. Once the computer has learned enough, it can predict the answer for new data. Supervised learning is useful for applications like image and speech recognition.

In unsupervised learning, the computer learns from unlabeled data. This means that there is no correct answer provided. The goal of unsupervised learning is to find patterns in the data. This type of learning is useful in applications like anomaly detection and clustering.

Deep learning is a special type of machine learning that uses neural networks. Neural networks are inspired by the human brain and have layers of connected nodes. Each node applies a mathematical function to its inputs and sends the result to the next layer of nodes. Deep learning has revolutionized AI, leading to breakthroughs in areas like natural language processing, computer vision, and game playing.

Even though machine learning has achieved so much, there are still challenges to overcome. One challenge is the lack of transparency in machine learning models. Unlike rule-based systems, it can be hard to understand how a machine learning model made a decision. This can be a problem, especially in areas like healthcare where the decision-making process needs to be trustworthy and explainable.

Another challenge is the need for a lot of data to train machine learning models. While data is easy to find in some areas, like social media, it can be hard to come by in other areas like medical imaging. This lack of data can make machine learning less effective in some applications.

In conclusion, AI has come a long way since the 1950s. We’ve gone from rule-based systems to machine learning, which has led to breakthroughs in many areas. Although there are still challenges to overcome, the future of AI is exciting!

The first attempt is below…

The Evolution of Artificial Intelligence: From Rule-Based to Machine Learning

Artificial Intelligence (AI) has come a long way since its inception in the 1950s. Initially, AI was developed based on rule-based systems, where experts in a specific domain would provide rules to the system to help it make decisions. While these systems worked well in some areas, they were limited in their ability to handle complex problems and adapt to new situations.

In recent years, AI has seen a significant shift towards machine learning, where systems learn from data rather than relying on pre-defined rules. This shift has been driven by advances in computer hardware, the availability of large amounts of data, and breakthroughs in machine learning algorithms.

Machine learning has two main categories: supervised learning and unsupervised learning. In supervised learning, the system is trained on labeled data, where the correct answer is provided for each input. The system uses this labeled data to learn a function that maps inputs to outputs. Once trained, the system can make predictions on new, unseen data. This approach has been successful in a wide range of applications, such as image and speech recognition.

In unsupervised learning, the system is trained on unlabeled data, where no correct answer is provided. The goal of unsupervised learning is to discover patterns and structure in the data. This approach has been useful in areas such as anomaly detection and clustering.

Deep learning is a subset of machine learning that uses neural networks to learn from data. Neural networks are inspired by the structure of the human brain and consist of layers of interconnected nodes. Each node applies a mathematical function to its inputs and passes the result to the next layer of nodes. Deep learning has revolutionized the field of AI, enabling breakthroughs in areas such as natural language processing, computer vision, and game playing.

Despite the many successes of machine learning, there are still limitations and challenges to overcome. One challenge is the lack of interpretability of machine learning models. Unlike rule-based systems, it can be difficult to understand how a machine learning model arrived at a particular decision. This lack of transparency can be problematic in areas such as healthcare, where the decision-making process needs to be explainable and trustworthy.

Another challenge is the need for large amounts of data to train machine learning models. While data is abundant in some domains, such as social media, it can be scarce in others, such as medical imaging. This scarcity of data can limit the effectiveness of machine learning approaches in certain applications.

In conclusion, the evolution of AI from rule-based systems to machine learning has enabled breakthroughs in many areas and holds tremendous potential for the future. However, there are still challenges to overcome, particularly in the areas of interpretability and data scarcity. Addressing these challenges will be critical to realizing the full potential of AI in the years to come.

This blog is funded through EPSRC grant EP/W033615/1.

Sophie Wilson: Where would feeding cows take you?

Chip design that changed the world

by Paul Curzon, Queen Mary University of London

(Updated from the archive)

Cows grazing. Image by Christian B. from Pixabay

Some people’s innovations are so amazing it is hard to know where to start. Sophie Wilson is like that. She helped kick-start the original 80s BBC Micro craze, then went on to help design the chips in virtually every smartphone ever made. Her more recent innovations are the backbone that is keeping broadband infrastructure going. The amount of money her innovations have made easily runs into tens of billions of dollars, and the companies she helped succeed make hundreds of billions. It all started with her feeding cows!

While still a student, Sophie spent a summer designing a system that could automatically feed cows. It was powered by a microprocessor called the MOS 6502: one of the first really cheap chips. As a result Sophie gained experience both in programming with the 6502’s instruction set and in embedded computers: the idea that computers can disappear into other everyday objects. After university she quickly got a job as a lead designer at Acorn Computers and extended their version of the BASIC language, adding, for example, a way to name procedures so that it was easier to write large programs by breaking them up into smaller, manageable parts.

Acorn needed a new version of their microcomputer, based on the 6502 processor, to bid for a contract with the BBC for a project to inspire people about the fun of coding. Her boss challenged her to design it and get it working, all in only a week. He also told her someone else in the team had already said they could do it. Taking up the challenge, she built the hardware in a few days, soldering while watching the Royal Wedding of Charles and Diana on TV. With a day to go there were still bugs in the software, so she worked through the night debugging. She succeeded, and within the week of her taking up the challenge it worked! As a result Acorn won the contract from the BBC, the BBC Micro was born and a whole generation were subsequently inspired to code. Many computer scientists still remember the BBC Micro fondly 30 years later.

That would be an amazing lifetime achievement for anyone. Sophie went on to even greater things. Acorn morphed into the company ARM on the back of more of her innovations. Ultimately this was about returning to the idea of embedded computers. The Acorn team realised that embedded computers were the future, and as ARM they have done more than anyone to make embedded computing a ubiquitous reality.

They set about designing a new chip based on the idea of Reduced Instruction Set Computing (RISC). The trend up to that point had been to add ever more complex instructions to the set of programming instructions that computer architectures supported. The result was bloated systems that were hungry for power. The idea behind RISC chips was to do the opposite and design a chip with a small but powerful instruction set. Sophie’s colleague Steve Furber set to work designing the chip’s architecture – essentially the hardware. Sophie herself designed the instructions it had to support – its lowest-level programming language. The problem was to come up with the right set of instructions so that each could be executed really, really quickly – getting as much work done in as few clock cycles as possible. Those instructions also had to be versatile enough that, sequenced together, they could do more complicated things quickly too. Other teams from big companies had been struggling to do this well despite all their clout, money and powerful mainframes to work on the problem. Sophie did it in her head. She then wrote a simulator for it in BBC BASIC running on the BBC Micro.

The resulting architecture and its descendants took over the world, with ARM’s RISC chips running 95% of all smartphones. If you have a smartphone you are probably using an ARM chip. They are also used in game controllers and tablets, drones, televisions, smart cars and homes, smartwatches and fitness trackers. All these applications, and embedded computers generally, need chips that combine speed with low energy needs. That is what RISC delivered, allowing the revolution to start.
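To get a feel for what designing an instruction set involves, here is a minimal sketch in Python of a toy RISC-style machine. The instructions and registers are invented purely for illustration – nothing like the real ARM set – but they show the RISC spirit: a handful of simple instructions, each doing one small thing fast, that combine to do bigger jobs.

# A toy RISC-style simulator. Hypothetical instructions for
# illustration only: LOADI puts a constant in a register, ADD adds
# two registers, BNZ branches if a register is not zero.

def run(program):
    regs = [0] * 4          # four registers, r0..r3
    pc = 0                  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOADI":   # LOADI rd, value
            regs[args[0]] = args[1]
        elif op == "ADD":   # ADD rd, ra, rb
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "BNZ":   # BNZ ra, target
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return regs

# Sum 5+4+3+2+1 using only these tiny instructions.
program = [
    ("LOADI", 0, 5),   # r0 = 5 (counter)
    ("LOADI", 1, 0),   # r1 = 0 (running total)
    ("LOADI", 2, -1),  # r2 = -1 (for counting down)
    ("ADD", 1, 1, 0),  # r1 = r1 + r0
    ("ADD", 0, 0, 2),  # r0 = r0 - 1
    ("BNZ", 0, 3),     # loop back to the first ADD while r0 != 0
]
print(run(program))    # [0, 15, -1, 0]: the total, 15, is in r1

Even adding up a few numbers becomes a loop of tiny steps: the art is choosing instructions simple enough to execute fast, yet versatile enough that short sequences of them can do anything.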

If you want to thank anyone for your personal mobile devices, not to mention the way our cars, homes, streets and work are now full of helpful gadgets, start by thanking Sophie…and she’s not finished yet!




This blog is funded by UKRI, through grant EP/W033615/1.

The Hive at Kew

Art meets bees, science and electronics

by Paul Curzon, Queen Mary University of London

(from the archive)

A boy lying in the middle of the Hive at Kew Gardens.

Combine an understanding of science with electronics skills and the creativity of an artist and you can get inspiring, memorable and fascinating experiences. That is what the Hive, an art installation at Kew Gardens in London, does. It is a massive sculpture linked to a subtle sound and light experience, surrounded by a wildflower meadow, but based on the work of scientists studying bees.

The Hive is a giant aluminium structure that represents a bee hive. Once inside you see it is covered with LED lights that flicker on and off, apparently at random. They aren’t random though: they are controlled by a real bee hive elsewhere in the gardens. Each pulse of a light represents bees communicating in that real hive, where the artist Wolfgang Buttress placed accelerometers. These are simple sensors, like those in phones or a BBC micro:bit, that sense movement. The sensitive ones in the bee hive pick up vibrations caused by bees communicating with each other. The signals generated are used to control the lights in the sculpture.
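In code, the core idea is tiny. Here is a minimal sketch in Python, with made-up names and numbers (the installation’s real hardware and software are of course far more sophisticated): turn a stream of vibration readings into on/off light pulses.

# A sketch only: map accelerometer readings to LED pulses.
# The threshold and readings are invented for illustration.

def vibration_to_pulses(readings, threshold=0.5):
    # One on/off value per accelerometer reading.
    return [reading > threshold for reading in readings]

# A pretend stream of vibration strengths picked up in the hive.
readings = [0.1, 0.7, 0.2, 0.9, 0.95, 0.3]
for on in vibration_to_pulses(readings):
    print("LED on" if on else "LED off")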

A new way to communicate

This is where the science comes in. The work was inspired by Martin Bencsik’s team at Nottingham Trent University who, in 2011, discovered a new kind of communication between bees using vibrations. Before bees swarm, when a large part of the colony splits off to create a new hive, they make a specific kind of vibration as they prepare to leave. The scientists discovered this using the set-up copied by Wolfgang Buttress: accelerometers in bee hives helping them understand bee behaviour. Monitoring hives like this could help scientists understand the current decline of bees, not least because large numbers of bees die when they swarm to search for a new nest.

Hear the vibrations through your teeth

Good vibrations

The Kew Hive has one last experience to surprise you: you can hear vibrations too. In the base of the Hive you can listen to the soundtrack through your teeth. Cover your ears, place a small coffee-stirrer-style stick between your teeth, and put the other end of the stick into a slot. Suddenly you can hear the sounds of the bees and music. Vibrations pass down the stick, through your teeth and the bone of your jaw, to be picked up in a different way by your ears.

A clever use of simple electronics has taught scientists something new and created an amazing work of art.


Related magazines: cs4fn issue 4

EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Hoverflies: comin’ to get ya

by Peter W McOwan and Paul Curzon, Queen Mary University of London

(from the archive)

A hoverfly on a blade of grass

By understanding the way hoverflies mate, computer scientists found a way to sneak up on humans – and with it a way to make games harder.

When hoverflies get the hots for each other they make some interesting moves. Biologists had noticed that as one hoverfly moves towards a second to try and mate, the approaching fly doesn’t go in a straight line. It makes a strange curved flight. Peter and his student Andrew Anderson thought this was an interesting observation and started to look at why that might be. They came up with a cunning idea: the hoverfly was trying to sneak up on its prospective mate unseen.

The route the approaching fly takes matches the movements of the prospective mate in such a way that, to the mate, the fly in the distance looks like it’s far away and ‘probably’ stationary.

Tracking the motion of a hoverfly and its sightlines

How does it do this? Imagine you are walking across a field with a single tree in it, and a friend is trying to sneak up on you. Your friend starts at the tree and moves in such a way that they are always in direct line of sight between your current position and the tree. As they move towards you they are always silhouetted against the tree. Their motion towards you is mimicking the stationary tree’s apparent motion as you walk past it… and that’s just what the hoverfly does when approaching a mate. It’s a stealth technique called ‘active motion camouflage’.
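The trick is surprisingly simple to code. Here is a minimal sketch in Python of the tree-and-field version above (the names and the creeping rate are invented for illustration): the shadower always sits on the line from the landmark to the target’s current position, creeping a little further along it each step.

# Active motion camouflage, sketched: stay on the landmark-to-target
# sight line while closing in. Positions are (x, y) pairs.

def camouflage_path(landmark, target_path, rate=0.15):
    lx, ly = landmark
    f = 0.0                      # fraction of the way from landmark to target
    path = []
    for (tx, ty) in target_path:
        f = min(1.0, f + rate)   # creep along the sight line
        path.append((lx + f * (tx - lx), ly + f * (ty - ly)))
    return path

# The target walks across the field, past the landmark at (0, 0).
target_path = [(x, 10.0) for x in range(-5, 6)]
for (sx, sy) in camouflage_path((0.0, 0.0), target_path):
    print(f"({sx:5.2f}, {sy:5.2f})")

Because the shadower stays silhouetted against the landmark, from the target’s point of view it looks like part of the stationary background even as it closes in.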

By building a computer model of the mating flies, the team were able to show that this complex behaviour can actually be done with only a small amount of ‘brain power’. They went on to show that humans are also fooled by active motion camouflage. They did this by creating a computer game where you had to dodge missiles. Some of those missiles used active motion camouflage. The missiles using the fly trick were the most difficult to spot.

It just goes to show: there is such a thing as a useful computer bug.


Related magazines: cs4fn issue 4

EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Ant Art

by Paul Curzon, Queen Mary University of London

(from the archive)

The close-up head of an ant staring at you. Image by Virvoreanu Laurentiu from Pixabay

There are many ways Artificial Intelligences might create art. Breeding a colony of virtual ants is one of the most creative.

Photogrowth from the University of Coimbra does exactly that. The basic idea is to take an image and paint an abstract version of it. Normally you would paint with brush strokes. In ant paintings you paint with the trails of hundreds of ants as they crawl over the picture, depositing ink rather than the normal chemical trails ants use to guide other ants to food. The colours in the original image act as food for the ants, which absorb energy from its bright parts. They then use up energy as they move around. They die if they don’t find enough food, but reproduce if they have lots. The results are highly novel swirl-filled pictures.

The program uses vector graphics rather than pixel-based approaches. In pixel graphics, an image is divided into a grid of squares and each allocated a colour. That means when you zoom in to an area, you just see larger squares, not more detail. With vector graphics, the exact position of the line followed is recorded. That line is just mapped on to the particular grid of the display when you view it. The more pixels in the display, the more detailed the trail is drawn. That means you can zoom in to the pictures and just see ever more detail of the ant trails that make them up.
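Here is a tiny Python sketch of the difference (invented names, for illustration only): the trail is stored as exact fractional positions, and only converted to pixels for whatever display is being used.

# A trail stored as exact points (fractions of the canvas, 0 to 1),
# mapped to pixel coordinates only when displayed.

trail = [(0.12, 0.30), (0.40, 0.55), (0.75, 0.60)]

def to_pixels(points, width, height):
    return [(round(x * width), round(y * height)) for (x, y) in points]

print(to_pixels(trail, 100, 100))      # a coarse display
print(to_pixels(trail, 4000, 4000))    # zoomed in: same trail, more detail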

You become a breeder of a species of ant
that produce trails, and so images,
you will find pleasing

Because the virtual ants wander around at random, each time you run the program you will get a different image. However, there are lots of ways to control how the ants move around their world. Exploring the possibilities by hand would only ever uncover a small fraction of them. Photogrowth therefore uses a genetic algorithm. Rather than setting all the options of ant behaviour for each image, you help design a fitness function for the algorithm. You do this by adjusting the importance of different aspects, like the thickness of the trail left and the extent to which the ants try to cover the whole canvas. In effect you become a breeder of a species of ant that produces trails, and so images, you will find pleasing. Once you’ve chosen the fitness function, the program evolves a colony of ants based on it, and they then paint you a picture with their trails.
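Here is a minimal Python sketch of that breeding process. The behaviour parameters and weights are invented for illustration – Photogrowth’s real options and fitness measures are much richer – but the evolutionary loop has the same shape: score, keep the fittest, breed and mutate.

import random

# An "ant species" is just a set of behaviour parameters in 0..1.
PARAMS = ["trail_thickness", "coverage", "wiggliness"]

def random_species():
    return {p: random.random() for p in PARAMS}

def fitness(species, weights):
    # A weighted sum: you, the breeder, choose how much each aspect matters.
    return sum(weights[p] * species[p] for p in PARAMS)

def breed(mum, dad):
    child = {p: random.choice((mum[p], dad[p])) for p in PARAMS}
    p = random.choice(PARAMS)    # a small random mutation
    child[p] = min(1.0, max(0.0, child[p] + random.uniform(-0.1, 0.1)))
    return child

def evolve(weights, size=20, generations=50):
    population = [random_species() for _ in range(size)]
    for _ in range(generations):
        population.sort(key=lambda s: fitness(s, weights), reverse=True)
        survivors = population[: size // 2]          # the fittest survive
        population = survivors + [breed(*random.sample(survivors, 2))
                                  for _ in range(size - len(survivors))]
    return max(population, key=lambda s: fitness(s, weights))

# You like thick trails that cover the canvas; wiggliness matters less.
print(evolve({"trail_thickness": 1.0, "coverage": 0.8, "wiggliness": 0.2}))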

The result is a painting painted by ants bred purely to create images that please you.


Related magazines: cs4fn issues 18 and 4

EPSRC supports this blog through research grant EP/W033615/1, and through EP/K040251/2 held by Professor Ursula Martin. 

Diamond Dogs: Bowie’s algorithmic creativity

by Paul Curzon, Queen Mary University of London

(Updated from the archive)

Bowie black-and-white portrait. Image by Cristian Ferronato from Pixabay

Rock star David Bowie co-wrote a program that generated lyric ideas. It gave him inspiration for some of his most famous songs. It generated sentences at random based on something called the ‘cut-up’ technique: an algorithm for writing lyrics that he was already applying by hand. You take sentences from completely different places, cut them into bits and combine them in new ways. The randomness in the algorithm creates strange combinations of ideas, and he would use the ones that caught his attention, sometimes building whole songs around the ideas they expressed.

Tools for creativity

Rather than being an algorithm that is creative in itself, it is perhaps more a tool to help people (or perhaps other algorithms) be more creative. Both kinds of algorithm are of course useful. It does highlight an issue with any “creative algorithm”, though, whether it is creating new art, music or writing. If the algorithm produces lots of output and a human then chooses the ones to keep (and show others), then where is the creativity: in the algorithm or in the person? That selection process, knowing what to keep and what to discard (or keep working on), seems to be a key part of creativity. Any truly creative program should therefore include a module to do such vetting of its own work!

All that aside, an algorithm is certainly part of the reason Bowie’s song lyrics were often so surreal and intriguing!


Write a cut-up technique program

Why not try to write your own cut-up technique program to produce lyrics? You will likely need the string-processing libraries of whatever language you choose. You could feed it things like the text of webpages or news reports. If you don’t program yet, do it by hand: cut up magazines and shuffle the part-sentences before gluing them back together.
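Here is a minimal sketch in Python, assuming plain sentences as input (Bowie’s actual program was more elaborate): chop the text into short phrases, shuffle them, and glue pairs back together as lyric-like lines.

import random

def cut_up(texts, phrase_len=3, lines=4):
    words = " ".join(texts).split()
    # cut the text into short phrases...
    phrases = [" ".join(words[i:i + phrase_len])
               for i in range(0, len(words), phrase_len)]
    random.shuffle(phrases)      # ...shuffle them...
    # ...and glue pairs of phrases back together as lines
    return "\n".join(" ".join(phrases[2 * i: 2 * i + 2])
                     for i in range(min(lines, len(phrases) // 2)))

sources = [
    "The city sleeps under a neon rain and nobody calls my name.",
    "We danced on the edge of a silver future that never arrived.",
]
print(cut_up(sources))

Feed it bigger and stranger sources and keep only the lines that catch your attention – that last selection step is where your own creativity comes in.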




This blog is funded by UKRI, through grant EP/W033615/1.

The algorithm that could not speak its name

by Paul Curzon, Queen Mary University of London

(Updated from the archive)

Image by PIRO4D from Pixabay 

The first program that was actually creative was probably written by Christopher Strachey, in 1952. It wrote love letters…possibly gay ones.

The letters themselves weren’t particularly special. They wouldn’t make your heart skip a beat if they were written to you, though they are kind of quaint. They actually have the feel of someone learning English, doing their best but struggling to find the right words! It’s the way the algorithm worked that was special. It would be simple to write a program that ‘wrote’ love letters thought up in advance and pre-programmed by the programmer. Strachey’s program could do much more than that: it could write letters he never envisaged. It did this using a few simple rules that, despite their simplicity, gave it the power to write a vast number of different letters. It was based on lists of different kinds of words chosen to be suitable for love letters. There was a list of nouns (like ‘affection’, ‘ardour’, …), a list of adjectives (like ‘tender’, ‘loving’, …), and so on.

It then just chose words from the appropriate list at random and plugged them into place in template sentences, a bit like slotting the last pieces into a jigsaw. It only used a few kinds of sentences as its basic rules, such as: “You are my <adjective> <noun>”. That rule could generate, for example, “You are my tender affection.” or “You are my loving heart.”, substituting in different combinations of its adjectives and nouns. It then combined several similar rules about different kinds of sentences to give a different love letter every time.

Strachey knew Alan Turing, who was a key figure in the creation of the first computers, and they may have worked on the ideas behind the program together. As both were gay it is entirely possible that the program was actually written to generate gay love letters. Oddly, the one word the program never uses is the word ‘love’ – a sentiment that at the time gay people just could not openly express. It was a love letter algorithm that just could not speak its name!

You can try out Strachey’s program [EXTERNAL], and the Twitter bot loveletter_txt [EXTERNAL] is based on it. Better still, why not write your own version? It’s not too hard.

Here is one of the offerings from my attempt to write a love letter writing program:

Beloved Little Cabbage,

I cling to your anxious fervour. I want to hold you forever. You are my fondest craving. You are my fondest enthusiasm. My affection lovingly yearns for your loveable passion.

Yours, keenly Q

The template I used was:

salutation1 + ” ” + salutation2 + “,”

“I ” + verb + ” your ” + adjective + ” ” + noun + “.”

“You are my ” + noun + “.”

“I want ” + verb + ” you forever.”

“I ” + verb + ” your ” + adjective + ” ” + noun + “.”

“My ” + noun1 + “ ” + adverb + “ ” + verb + “ your ” + adjective + “ ” + noun2 + “.”

“Yours, ” + adverb + ” Q”

Here the characters in double quotes stay the same, whereas the parts not in quotes are variables: placeholders for a word chosen from the associated word list.
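Here is a sketch of that template in Python. The word lists are my own stand-ins (I have also added a separate list of third-person verbs, like ‘yearns for’, and simplified the “I want … you forever” line, to keep the grammar working); each run picks words at random, just as Strachey’s program did.

import random

words = {
    "salutation1": ["Beloved", "Darling", "Dearest"],
    "salutation2": ["Little Cabbage", "Sweetheart", "Moppet"],
    "noun": ["affection", "ardour", "craving", "enthusiasm", "passion"],
    "adjective": ["tender", "loving", "anxious", "fondest", "loveable"],
    "verb": ["cling to", "yearn for", "long for"],       # after "I ..."
    "verb_s": ["clings to", "yearns for", "longs for"],  # after "My <noun> ..."
    "adverb": ["keenly", "lovingly", "breathlessly"],
}

def pick(kind):
    return random.choice(words[kind])

def love_letter():
    return "\n".join([
        pick("salutation1") + " " + pick("salutation2") + ",",
        "",
        "I " + pick("verb") + " your " + pick("adjective") + " " + pick("noun")
            + ". I want to hold you forever. You are my " + pick("noun")
            + ". My " + pick("noun") + " " + pick("adverb") + " " + pick("verb_s")
            + " your " + pick("adjective") + " " + pick("noun") + ".",
        "",
        "Yours, " + pick("adverb") + " Q",
    ])

print(love_letter())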

Experiment with different templates and different word lists and create your own unique version. If you can’t program yet, you can do it on paper by writing out the template and putting the different word lists on different coloured small post-it notes. Number them and use dice to choose one at random.

Of course, you don’t have to write love poems. Perhaps, you could use the same idea for a post card writing program this summer holiday…




This blog is funded by UKRI, through grant EP/W033615/1.

Only the fittest slogans survive!

by Paul Curzon, Queen Mary University of London

(From the archive)

Being creative isn’t just for the fun of it. It can be serious too. Marketing people are paid vast amounts to come up with slogans for new products, and in the political world a good, memorable soundbite can turn the tide over who wins and loses an election. Coming up with great slogans that people will remember for years needs both a mastery of language and a creative streak. Algorithms are now getting in on the act, and if anyone can create a program as good as the best humans, they will soon be richer than the richest marketing executive. Polona Tomašič and her colleagues from the Jožef Stefan Institute in Slovenia are one group exploring the use of algorithms to create slogans. Their approach is based on the way evolution works – genetic algorithms. Only the fittest slogans survive!

A mastery of language

To generate a slogan, you give their program a short description of the slogan’s topic – a new chocolate bar, perhaps. It then uses existing language databases and programs to give it the necessary understanding of language.

First, it uses a database of common grammatical links between pairs of words, generated from Wikipedia pages. Then skeletons of slogans are extracted from an Internet list of famous (so successful) slogans. These skeletons don’t include the actual words, just the grammatical relationships between them. They provide general outlines that successful slogans follow.

From the passage given, the program pulls out keywords that can be used within the slogans (beans, flavour, hot, milk, …). It generates a set of fairly random slogans from those words to get started. It does this just by slotting keywords into the skeletons, along with random filler words, in a way that matches the grammatical links of the skeletons.

Breeding Slogans

New baby slogans are now produced by mating pairs of initial slogans (the parents). This is done by swapping bits into the baby from each parent. Both whole sections and individual words are swapped in. Mutation is allowed too. For example, adjectives are added in appropriate places. Words are also swapped for words with a related meaning. The resulting children join the new population of slogans. Grammar is corrected using a grammar checker.

Culling Slogans

Slogans are now culled. Any that are the same as existing ones go immediately. The slogans are then rated to see which are fittest. This uses simple properties like their length, the number of keywords used, and how common the words used are. More complex tests are based on how related the meanings of the words are, and how commonly pairs of words appear together in real sentences. Together these combine to give a single score for the slogan. The best are kept to breed in the next generation; the worst are discarded (they die!), though a random selection of weaker slogans is also allowed to survive. The result is a new set of slogans that is slightly better than the previous set.

Many generations later…

The program breeds and culls slogans like this for thousands, even millions, of generations, gradually improving them, until it finally chooses its best. The slogans produced are not yet world-beating on their own, and vary in quality as judged by humans. For chocolate, one run came up with slogans like “The healthy banana” and “The favourite oven”, for example. It finally settled on “The HOT chocolate”, which is pretty good.
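Putting the pieces together, here is a minimal Python sketch of the breed-and-cull loop. The keywords, synonyms and fitness measures are invented for illustration – the real system uses grammatical skeletons and large language databases – but the evolutionary cycle is the same.

import random

KEYWORDS = {"chocolate", "hot", "smooth", "treat"}
SYNONYMS = {"tasty": "delicious", "nice": "lovely", "hot": "HOT"}

def fitness(slogan):
    words = slogan.lower().split()
    keyword_score = sum(w in KEYWORDS for w in words)
    length_penalty = abs(len(words) - 4)   # prefer short, punchy slogans
    return keyword_score - 0.5 * length_penalty

def breed(mum, dad):
    m, d = mum.split(), dad.split()
    cut = random.randint(1, min(len(m), len(d)) - 1)
    child = m[:cut] + d[cut:]              # a chunk from each parent
    if random.random() < 0.3:              # occasional mutation
        i = random.randrange(len(child))
        child[i] = SYNONYMS.get(child[i], child[i])
    return " ".join(child)

population = [
    "the hot chocolate treat",
    "a nice tasty bar",
    "smooth chocolate for you",
    "the favourite tasty oven",
]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:2]             # cull the weakest
    population = survivors + [breed(*random.sample(survivors, 2))
                              for _ in range(2)]
print(max(population, key=fitness))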

More work is needed on the program, especially its fitness function – the way it decides what is a good slogan and what isn’t. As it stands, this sort of program isn’t likely to replace anyone’s marketing department. It could help with brainstorming sessions though, sparking new ideas but leaving humans to make the final choice. Supporting human creativity rather than replacing it is probably just as rewarding for the program, after all.



This article was originally published on CS4FN and also appears on p13 of Creative Computing, issue 22 of the CS4FN magazine. You can download a free PDF copy of the magazine as well as all of our previous booklets and magazines from the CS4FN downloads site.