Alexander Graham Bell: It’s good to talk

An antique phone

Image: a modified version of one by Christine Sponchia from Pixabay

by Peter W McOwan, Queen Mary University of London

(From the archive)

The famous inventor of the telephone, Alexander Graham Bell, was born in 1847 in Edinburgh, Scotland. His story is a fascinating one, showing that, like all great inventions, a combination of talent, timing, drive and a few fortunate mistakes is what’s needed to develop a technology that can change the world.

A talented Scot

As a child the young Alexander Graham Bell, Aleck, as he was known to his family, showed remarkable talents. He had the ability to look at the world in a different way, and come up with creative solutions to problems. Aged 14, Bell designed a device to remove the husks from wheat by combining a nailbrush and paddle into a rotary-brushing wheel.

Family talk

The Bell family had a talent with voices. Aleck’s grandfather had made a name for himself as a notable, but often unemployed, actor. Aleck’s mother was deaf, but rather than use her ear trumpet to talk to her like everyone else did, the young Alexander came up with the cunning idea that speaking to her in low, booming tones very close to her forehead would allow her to hear his voice through the vibrations it made. This special bond with his mother gave him a lifelong interest in the education of deaf people, which, combined with his inventive genius and some odd twists of fate, was to change the world.

A visit to London, and a talking dog

While visiting London with his father, Aleck was fascinated by a demonstration of Sir Charles Wheatstone’s “speaking machine”, a mechanical contraption that made human-like noises. On returning to Edinburgh, their father challenged Aleck and his older brother to come up with a machine of their own. After some hard work and scrounging bits from around the place they built a machine with a mouth, throat, nose, movable tongue and bellows for lungs, and it worked. It made human-like sounds. Delighted by his success, Aleck went a step further and massaged the mouth of his Skye terrier so that the dog’s growls were heard as words. Pretty wruff on the poor dog.

Speaking of teaching

By the time he was 16, Bell was teaching music and elocution at a boys’ boarding school. He was still fascinated by trying to help those with speech problems improve their quality of life, and was very successful in this, later publishing two well-respected books called ‘The Practical Elocutionist’ and ‘Stammering and Other Impediments of Speech’. Alexander and his brother toured the country giving demonstrations of their techniques to improve people’s speech. He also started his studies at the University of London, where a mistake in reading German was to change his life and lay the foundations for the telecommunications revolution.

A ‘silly’ language mistake that changed the world

At university, Bell became fascinated by the ideas of the German physicist Hermann von Helmholtz. Von Helmholtz had produced a book, ‘On the Sensations of Tone’, in which he said that vowel sounds, a, e, i, o and u, could be produced using electrical tuning forks and resonators. However, Bell couldn’t read German very well, and mistakenly believed that von Helmholtz had written that vowel sounds could be transmitted over a wire. This misunderstanding changed history. As Bell later stated, “It gave me confidence. If I had been able to read German, I might never have begun my experiments in electricity.”

Tragedy and Travel

Things were going well for young Bell’s career when tragedy struck. He and both his brothers contracted tuberculosis, a common disease at the time. His two brothers died and, at the age of 23 and still suffering from the disease, Bell left Britain to convalesce in Ontario, Canada, and then moved to Boston to work in a school for deaf mutes.

The time for more than dots and dashes

His dreams of transmitting voices over a wire were still spinning round in his creative head. It just needed some new ideas to spark him off again. Samuel Morse had developed Morse code and the electric telegraph, which allowed single messages, in the form of long and short electrical pulses, dots and dashes, to be transmitted rapidly along a wire over huge distances. Bell saw a similarity between the idea of sending multiple messages at once and the multiple notes in a musical chord: this “harmonic telegraph” could be a way to send voices.

Chance encounter

Again chance played its role in telecommunications history. At the electrical machine shop of Charles Williams, Bell ran into young Thomas Watson, a skilled electrical machinist able to build the devices that Bell was devising. The two teamed up and started to work toward making Bell’s dream a reality. To make it work they needed to invent two things: something to measure a voice at one end, and another device to reproduce the voice at the other – what we would call today the microphone and the speaker.

The speaker accident

June 2, 1875 was a landmark day for team Bell and Watson. Working in their laboratory they were trying to free a reed, a small flat piece of metal, which they had wound too tightly to the pole of an electromagnet. In trying to free it Watson produced a ‘twang’. Bell heard the twang and came running. It was a sound similar to the sounds in human speech; this was the solution to producing an electronic voice, a discovery that must have come as a relief for all the dogs in the Boston area.

The mercury microphone

Bell had also discovered that a wire vibrated by his voice while partially dipped in a conducting liquid, like mercury or battery acid, could be made to produce a changing electrical current. They now had a device that could transform the voice into an electrical signal. All that was needed was to put the two inventions together.

The first ‘emergency’ phone call (allegedly)

On March 10, 1876, Bell and Watson set out to test their new system. The story goes that Bell knocked over a container with battery acid, which they were using as the conducting liquid in the ‘microphone’. Spilled acid tends to be nasty and Bell shouted out “Mr. Watson, come here. I want you!” Watson, working in the next room, heard Bell’s cry for help through the wire. The first phone call had been made, and Watson quickly went through to answer it. The telephone was invented, and Bell was only 29 years old.

The world listens

The telephone was finally introduced to the world at the Centennial Exhibition in Philadelphia in 1876. Bell quoted Hamlet over the phone line from the main building 100 yards away, causing the surprised Brazilian Emperor Dom Pedro to exclaim, “My God, it talks”, and talk it did. From there on, the rest, as they say, is history. The telephone spread throughout the world, changing the way people lived their lives, though it was not without its social problems. In many upper-class homes it was considered vulgar, and many people considered it intrusive (just like some people’s view of mobile phones today!), but eventually it became indispensable.

Can’t keep a good idea down

Inventor Elisha Gray also independently designed his own version of the telephone. In fact, both he and Bell rushed their designs to the US patent office within hours of each other, but Alexander Graham Bell patented his telephone first. With massive amounts of money to be made, Elisha Gray and Alexander Graham Bell entered into a famous legal battle over who had invented the telephone first, and Bell had to fight many legal battles over his lifetime as others claimed they had invented the technology first. Bell won every case, partly, many claimed, because he was such a good communicator and had such a convincing speaking voice. As is often the way, few people now remember the other inventors. In fact, it is now recognised that the Italian Antonio Meucci had invented a method of electronic voice communication earlier, though he did not have the funds to patent it.

Fame and Fortune under Forty

Bell became rich and famous, and he was only in his mid-thirties. The Bell Telephone Company was set up, and later went on to become AT&T, one of America’s foremost telecommunications giants.

Read Terry Pratchett’s brilliant book ‘Going Postal’ for a fun fantasy about inventing and making money from communication technology on Discworld.



EPSRC supports this blog through research grant EP/W033615/1. 

Manufacturing Magic

Cover of The Twelve Magicians of Osiris – eyes, lightning between hands, camel, pyramids

by Howard Williams, Queen Mary University of London

(From the archive)

Can computers lend a creative hand to the production of new magic tricks? That’s a question our team, led by Peter McOwan at Queen Mary, wrestled with.

The idea that computers can help with creative endeavours like music and drawing is nothing new – turn the radio on and the song you are listening to will have been produced with the help of a computer somewhere along the way, whether it’s a synthesiser sound, or the editing of the arrangement, and some music is created purely inside software. Researchers have been toiling away for years, trying to build computer systems that actually write the music too! Some of the compositions produced in this way are surprisingly good! Inspired by this work, we decided to explore whether computers could create magic.

The project to build creative software to help produce new magic tricks started with a magical jigsaw that could be rearranged in certain ways to make objects on its surface disappear. Pretty cool, but what part did the computer play? A jigsaw is made up of different pieces, each with four sides – the number of different ways all these pieces can be put together is very large; for a human to sit down and try out all the different configurations would take many hours (perhaps thousands, if not millions!). Whizzing through lots of different combinations is something a computer is very good at. When there are simply too many different combinations for even a computer to try out exhaustively, programmers have to take a different approach.

Evolve a jigsaw

A genetic algorithm is a program that mimics the biological process of natural selection. We used one to intelligently search through all the interesting combinations that the jigsaw might be made up from. A population of jigsaws is created, and is then ‘evolved’ via a process that evaluates how good each combination is in each generation, gradually weeding out the combinations that wouldn’t make good jigsaws. At the end of the process you hope to be left with a winner: a jigsaw that matches all the criteria that you are hoping for. In this particular case, we hoped to find a jigsaw that could be built in two different ways, but each with a different number of the same object in the picture, so that you could appear to make an object disappear and reappear again as you made and remade it. The idea is based on a very old trick popularised by Sam Loyd, but our aim was to create a new version that a human couldn’t, realistically, have come up with without a lot of free time on their hands!

To understand what role the computer played, we need to explore the Genetic Algorithm mechanism it used to find the best combinations. How did the computer know which combinations were good or bad? This is something creative humans are great at – generating ideas, and discarding the ones they don’t like in favour of ones they do. This creative process gradually leads to new works of art, be they music, painting, or magic tricks. We tackled this problem by first running some experiments with real people to find out what kind of things would make the jigsaw seem more ‘magical’ to a spectator. We also did experiments to find out what would influence a magician performing the trick. This information was then fed into the algorithm that searched for good jigsaw combinations, giving the computer a mechanism for evaluating the jigsaws, similar to the ones a human might use when trying to design a similar trick.
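To give a flavour of how such a search works, here is a minimal Python sketch of a genetic algorithm loop. Everything in it is illustrative: the ‘jigsaw’ is just a list of numbers, and the fitness function is a stand-in for the real scoring, which in the project came from the experiments with spectators and magicians.

```python
import random

# A toy "jigsaw" is just a list of piece choices; real pieces and a real
# fitness test (based on what spectators and magicians found 'magical')
# would replace these stand-ins.
PIECE_CHOICES = list(range(8))   # hypothetical: 8 variants per position
JIGSAW_SIZE = 9                  # hypothetical: a 3x3 jigsaw

def random_jigsaw():
    return [random.choice(PIECE_CHOICES) for _ in range(JIGSAW_SIZE)]

def fitness(jigsaw):
    # Stand-in scoring function: a real one would check whether the two
    # ways of assembling the pieces show different numbers of objects,
    # and how 'magical' people rated similar designs.
    return -abs(sum(jigsaw) - 20)

def evolve(generations=200, population_size=50, mutation_rate=0.1):
    population = [random_jigsaw() for _ in range(population_size)]
    for _ in range(generations):
        # Keep the better half of the population ('weeding out')...
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        # ...and refill it with mutated copies of the survivors.
        children = []
        for parent in survivors:
            child = parent[:]
            for i in range(len(child)):
                if random.random() < mutation_rate:
                    child[i] = random.choice(PIECE_CHOICES)
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

print(evolve())
```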

More tricks

We went on to use these computational techniques to create other new tricks, including a card trick, a mind reading trick on a mobile phone, and a trick that relies on images and words to predict a spectator’s thought processes. You can find out more, including downloading the jigsaw, at www.Qmagicworld.wordpress.com

Is it creative, though?

There is a lot of debate about whether this kind of ‘artificial intelligence’ software is really creative in the way humans are, or in fact creative in any way at all. After all, how would the computer know what to look out for if the researchers hadn’t configured the algorithms in specific ways? Does a computer even understand the outputs that it creates? The fact is, though, that these systems do produce novel things – new music, new magic tricks – and sometimes in surprising and pleasing ways no one had previously thought of.

Are they creative (and even intelligent)? Or are they just automatons bound by the imaginations of their creators? What do you think?



EPSRC supports this blog through research grant EP/W033615/1. 

Solving problems you care about

Two microbit computers; one is plugged in to a USB cable.

by Patricia Charlton and Stefan Poslad, Queen Mary University of London

The best technology helps people solve real problems. To be a creative innovator you need not only to be able to create a solution that works but also to spot a need in the first place and be able to come up with creative solutions. Over the summer a group of sixth formers on internships at Queen Mary had a go at doing this. Ultimately their aim was to build something from a programmable gadget such as a BBC micro:bit or Raspberry Pi. They therefore had to learn about the different possible gadgets they could use, how to program them and how to control the on-board sensors available. They were then given the design challenge of creating a device to solve a community problem.

Street in London with two red buses going in opposite directions.
Red London buses image by Albrecht Fietz from Pixabay

Hearing the bus is here

Tai Kirby wanted to help visually impaired people. He knew that it’s hard for someone with poor sight to tell when a bus is arriving. In busy cities like London this problem is even worse, as buses for different destinations often arrive at once. His solution was a prototype that announces when a specific bus is arriving, letting the person know which is which. He wrote it in Python and it used a Raspberry Pi linked to low energy Bluetooth devices.
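As a rough illustration of how a prototype like this might hang together (not Tai’s actual code), here is a Python sketch for a Raspberry Pi that scans for Bluetooth Low Energy beacons and speaks an announcement when a known one is seen. It assumes the bleak library for BLE scanning and pyttsx3 for text-to-speech, and the beacon names and announcements are made up.

```python
import asyncio
import pyttsx3                    # assumed: offline text-to-speech library
from bleak import BleakScanner    # assumed: cross-platform BLE scanning library

# Hypothetical mapping from Bluetooth beacon names to spoken announcements.
BEACON_TO_ANNOUNCEMENT = {
    "bus-route-25": "The number 25 bus is arriving",
    "bus-route-86": "The number 86 bus is arriving",
}

speech = pyttsx3.init()

async def watch_for_buses():
    while True:
        # Look for nearby BLE devices for a few seconds at a time.
        devices = await BleakScanner.discover(timeout=5.0)
        for device in devices:
            announcement = BEACON_TO_ANNOUNCEMENT.get(device.name or "")
            if announcement:
                speech.say(announcement)
                speech.runAndWait()

asyncio.run(watch_for_buses())
```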

The fun spell

Filsan Hassan decided to find a fun way to help young kids learn to spell. She created a gadget that associated different sounds with different letters of the alphabet, turning spelling words into a fun, musical experience. It needed two micro:bits and a screen communicating with each other using a radio link. One micro:bit controlled the screen while the other ran the main program that allowed children to choose a word, play a linked game and spell the word using a scrolling alphabet program she created. A big problem was how to make sure the combination of gadgets had a stable power supply. This needed a special circuit to get enough power to the screen without frying the micro:bit and sadly we lost some micro:bits along the way: all part of the fun!
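A micro:bit MicroPython sketch along these lines might look something like the following: scroll through the alphabet with button A, choose a letter with button B, play a sound for it and send it over the radio to the micro:bit driving the screen. This is a simplified illustration under those assumptions, not Filsan’s actual program.

```python
# MicroPython for the 'main' micro:bit in a two-micro:bit spelling gadget.
from microbit import display, button_a, button_b, sleep
import radio
import music    # plays a note for each chosen letter

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
NOTES = ["c4", "d4", "e4", "f4", "g4", "a4", "b4"]

radio.on()
index = 0

while True:
    display.show(ALPHABET[index])
    if button_a.was_pressed():                   # move to the next letter
        index = (index + 1) % len(ALPHABET)
    if button_b.was_pressed():                   # choose this letter
        music.play(NOTES[index % len(NOTES)])    # each letter gets a sound
        radio.send(ALPHABET[index])              # tell the screen micro:bit
    sleep(100)
```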

Two microbit computers; one is plugged in to a USB cable.
Microbit programming image by JohnnyAndren from Pixabay

Remote robot

Jesus Esquivel Roman developed a small remote-controlled robot using a buggy kit. There are lots of applications for this kind of thing, from games to mine-clearing robots. The big challenge he had to overcome was how to do the navigation using a compass sensor. The problem was that the batteries and motor interfered with the calibration of the compass. He also designed a mechanism that used the accelerometer of a second micro:bit allowing the vehicle to be controlled by tilting the remote control.
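A simplified MicroPython sketch of the tilt-control idea (not Jesus’s actual program) shows how the remote-control micro:bit might turn accelerometer readings into drive commands and send them over the radio to the buggy; the thresholds and command names are made up.

```python
# MicroPython for a micro:bit used as a tilt 'remote control'.
from microbit import accelerometer, sleep
import radio

radio.on()

while True:
    x = accelerometer.get_x()   # left/right tilt
    y = accelerometer.get_y()   # forward/back tilt
    if y < -300:
        radio.send("forward")
    elif y > 300:
        radio.send("back")
    elif x < -300:
        radio.send("left")
    elif x > 300:
        radio.send("right")
    else:
        radio.send("stop")
    sleep(100)
```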

Memory for patterns

Finally, Venet Kukran was interested in helping people improve their memory and thinking skills. He invented a pattern memory game using a BBC micro:bit and implemented it in MicroPython. The game generates patterns that the player has to match and then replicate to score points. The program generates new patterns each time so every game is different. The more you play, the more complex the patterns you have to remember become.
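Here is a simplified MicroPython sketch in the spirit of such a game (not Venet’s actual code): it shows a random sequence of arrows that gets longer each round, then checks whether the player repeats it using buttons A and B.

```python
# MicroPython pattern-memory game sketch for a micro:bit.
from microbit import display, button_a, button_b, sleep, Image
import random

ARROWS = {"A": Image.ARROW_W, "B": Image.ARROW_E}
score = 0

while True:
    # The pattern gets one step longer each round.
    pattern = [random.choice("AB") for _ in range(score + 2)]
    for step in pattern:                 # show the pattern...
        display.show(ARROWS[step])
        sleep(600)
        display.clear()
        sleep(200)
    button_a.was_pressed()               # clear any stray presses
    button_b.was_pressed()
    correct = True
    for step in pattern:                 # ...then read the player's answer
        pressed = None
        while pressed is None:
            if button_a.was_pressed():
                pressed = "A"
            elif button_b.was_pressed():
                pressed = "B"
            sleep(10)
        if pressed != step:
            correct = False
            break
    if correct:
        score += 1
        display.show(Image.HAPPY)
    else:
        score = 0
        display.show(Image.SAD)
    sleep(1000)
    display.clear()
```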

As they found, you have to be very creative to be an innovator: both to come up with real issues that need a solution and to overcome the problems you are bound to encounter in your solutions.


This article was originally published on the CS4FN website and a copy can also be found in issue 22 of the magazine called Creative Computing. You can download that as a PDF by clicking on the picture below and you can also download all of our free material, including back issues of the CS4FN magazine and other booklets, at our downloads site: https://cs4fndownloads.wordpress.com




EPSRC supports this blog through research grant EP/W033615/1.

Sameena Shah: News you can trust

Woman reading news at a cafe table.
Image by Jean Luc (Jarrick) from Pixabay

by Paul Curzon, Queen Mary University of London

Having reliable news always matters to us: whether when disasters strike, knowing for sure what our politicians really said, or just finding out what our favourite celebrity is really up to. Nowadays social networks like Twitter and Facebook are a place to find breaking news, though telling fact from fake news is getting ever harder. How do you know where to look, and when you find something, how do you know that juicy story isn’t just made up?

One way to be sure of stories is to use trusted news providers, like the BBC, but how do they make sure their stories are real? A lot of fake news is created by Artificial Intelligence bots, and Artificial Intelligence is also part of the solution to beat them.

Sameena Shah realised this early on. An expert in Artificial Intelligence, she led a research team at news provider Thomson Reuters. They provide trusted information for news organisations worldwide. To help ensure we all have fast, reliable news, Sameena’s team created an Artificial Intelligence program to automatically discover news from the mass of social networking information that is constantly being generated. It combines programs that process and understand language to work out the meaning of people’s posts – ‘natural language processing’ – with machine learning programs that look for patterns in all the data to work out what is really news and most importantly what is fake. She both thought up the idea for the system and led the development team. As it was able to automatically detect fake news, when news organisations were struggling with how much was being generated, it gave Thomson Reuters a head-start of several years over other trusted news companies.
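The details of Thomson Reuters’ system aren’t described here, but a toy Python sketch can illustrate the general recipe of combining language processing with machine learning: turn the words of posts into numbers, then train a model on labelled examples to spot patterns. It assumes the scikit-learn library, and the posts and labels are entirely made up.

```python
# A toy illustration of the general idea (not Thomson Reuters' system):
# convert the words of social media posts into numbers, then let a
# machine learning model find patterns that separate news from fakes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Flooding closes main road into the city centre",
    "Celebrity spotted riding a unicorn through town",
    "Council confirms new bus route starting Monday",
    "Miracle pill lets you live to 200, doctors stunned",
]
labels = ["news", "fake", "news", "fake"]   # made-up training labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Classify a new, unseen post.
print(model.predict(["Storm warning issued for the coast tonight"]))
```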

Sameena’s ideas, and her work putting them into practice, have helped make sure we all know what’s really happening.

(This is an updated version of an article that first appeared in Issue 23 of the CS4FN magazine “The women are (still) here”)



EPSRC supports this blog through research grant EP/W033615/1. 

Object-oriented pizza at the end of the universe

An eclipse halo, looking like a blank pizza with a spark of life triggering it to make itself.
Image by ipicgr from Pixabay

by Paul Curzon, Queen Mary University of London

(Based on a section from Computing Without Computers, a free book by Paul to help struggling students understand programming concepts).

Object-oriented programming is a popular kind of programming. To understand what it is all about it can help to think about cooking a meal (Hitchhiker’s Guide to the Galaxy style) where the meal cooks itself.

People talk about programs being like recipes to follow. This can help because both programs and recipes are sets of instructions. If you follow the instructions precisely in the right order, it should lead to the intended result (without you needing any thought of how to do it yourself).

That is only one way of thinking about what a program is, though. The recipe metaphor corresponds to a style of programming called procedural programming. Another completely different way of thinking about programs (a different paradigm) is object-oriented programming. So what is that about if not recipes?

In object-oriented programming, programmers think of a program not as a series of recipes (so not sets of instructions to be followed that do distinct tasks) but as a series of objects that send messages to each other to get things done. Different objects also have different behaviours – different actions they can perform. What do we mean by that? That is where The Hitchhiker’s Guide to the Galaxy may help.

In the book “The Restaurant at the End of the Universe” by Douglas Adams, part of the Hitchhiker’s Guide to the Galaxy series, genetically modified animals are bred to delight in being your meal. They take great personal pride in being perfectly fattened and might suggest their leg as being particularly tasty, for example.

We can take this idea a little further. Imagine a genetically engineered future in which animals and vegetables are bred to have such intelligence (if you can call it that) that they are able to cook themselves. Each duck can roast itself to death or alternatively fry itself perfectly. Now, when a request comes in for duck and mushroom pizza, messages go to the mushrooms, the ducks and so on, and they get to work preparing themselves as requested by the pizza base, which, once created and topped, promptly bakes itself in a hot oven. This is roughly how an object-oriented programmer sees a program. It is just a collection of objects come to life. Each different kind of object is programmed with instructions about all the operations that it can perform on itself (its behaviours). If such an operation is required, a request goes to the object itself to do it.

Compare these genetically modified beings to a program that could, say, control a factory making food. In the procedural programming version we write a program (or recipe) for duck and mushroom pizza that sets out the sequence of instructions to follow. The computer, acting as a chef, works down the instructions in turn. The programmer splits the instructions into separate sets to do different tasks: for making pizza dough, adding all the toppings, and so on. Specific instructions say when the computer chef should start following new instructions and return to previous tasks to continue with old ones.

Now, following the genetically-modified food idea instead, the program is thought of as separate objects: one for the pizza base, one for the duck, one for each mushroom. The programmer has to think in terms of what objects exist and what their properties and behaviours are. She writes instructions (the program) to give each group of objects their specific behaviours (so a duck has instructions for how to roast itself, how to tear itself into pieces, and how to add its pieces on to the pizza base; a mushroom has instructions for how to wash itself, slice itself, and so on). Part of the behaviours the programmer writes are instructions to send messages to other objects to get things done: the pizza base object tells the mushroom objects and the duck object to get their act together, prepare themselves and jump on top, for example.
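A minimal Python sketch can make the contrast concrete. The class and method names below are just for illustration: each kind of object knows how to prepare itself, and the ‘program’ is mostly objects sending messages to one another.

```python
# Each kind of object carries its own behaviours (methods).
class Duck:
    def roast(self):
        print("Duck: roasting myself")

    def tear_into_pieces(self):
        print("Duck: tearing myself into pieces")
        return ["duck piece", "duck piece"]

class Mushroom:
    def wash(self):
        print("Mushroom: washing myself")

    def slice(self):
        print("Mushroom: slicing myself")
        return ["mushroom slice"]

class PizzaBase:
    def __init__(self):
        self.toppings = []

    def add_topping(self, topping):
        self.toppings.append(topping)

    def bake(self):
        print("Pizza base: baking myself with", len(self.toppings), "toppings")

def make_duck_and_mushroom_pizza():
    base = PizzaBase()
    duck = Duck()
    mushrooms = [Mushroom(), Mushroom()]
    # The 'recipe' is just a series of messages to the other objects,
    # each of which knows how to prepare itself.
    duck.roast()
    for piece in duck.tear_into_pieces():
        base.add_topping(piece)
    for mushroom in mushrooms:
        mushroom.wash()
        for sliced in mushroom.slice():
            base.add_topping(sliced)
    base.bake()

make_duck_and_mushroom_pizza()
```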

This is a completely different way to think of a program based on a completely different way of decomposing it. Instead of breaking the task into subtasks of things to do, you break it into objects, separate entities that send messages to each other to get things done. Which is best depends on what the program does, but for many kinds of tasks the object-oriented approach is a much more natural way to think about the problem and so write the program.

So ducks that cook themselves may never happen in the real universe (I hope), but they could exist in the programs of future kitchens run by computers if the programmers use object-oriented programming.



EPSRC supports this blog through research grant EP/W033615/1. 

Stretching your keyboard – getting more out of QWERTY

A screenshot of an iPhone's on-screen keyboard layout which is known as QWERTY because of the positioning of the letters in the alphabet on the first line.

by Jo Brodie, Queen Mary University of London

If you’ve ever sent a text on a phone or written an essay on a computer you’ve most likely come across the ‘QWERTY’ keyboard layout. It looks like this on a smartphone.

A screenshot of an iPhone's on-screen keyboard layout which is known as QWERTY because of the positioning of the letters in the alphabet on the first line.
A smartphone’s on-screen keyboard layout, called QWERTY after the first six letters on the top line.

This layout has been around in one form or another since the 1870s and was first used in old mechanical typewriters where pressing a letter on the keyboard caused a hinged metal arm with that same letter embossed at the end to swing into place, thwacking a ribbon coated with ink, to make an impression on the paper. It was quite loud!

Typewriter gif showing a mechanical typewriter in use as the typist presses a key on the keyboard and the corresponding letter is raised to hit the page.
Mechanical typewriter gif from Tenor. The person is typing one of the number keys which has an 8 and an asterisk (*) on it. That causes one of the hinged metal arms to bounce up and hit the page. Each arm has two letters or symbols on it, one above the other, and the Shift key physically moves the arm so the upper (case) letter strikes the page.

The QWERTY keyboard isn’t just used by English speakers but can easily be used by anyone whose language is based on the same A,B,C Latin alphabet (so French, Spanish, German etc). All the letters that an English-speaker needs are right there in front of them on the keyboard and with QWERTY… WYSIWYG (What You See Is What You Get).  There’s a one-to-one mapping of key to letter: if you tap the A key you get a letter A appearing on screen, click the M key and an M appears. (To get a lowercase letter you just tap the key but to make it uppercase you need to tap two keys; the up arrow (‘shift’) key plus the letter).

A French or Spanish speaking person could also buy an adapted keyboard that includes letters like É and Ñ, or they can just use a combination of keys to make those letters appear on screen (see Key Combinations below). But what about writers of other languages which don’t use the Latin alphabet? The QWERTY keyboard, by itself, isn’t much use for them so it potentially excludes a huge number of people from using it.

In the English language the letter A never alters its shape depending on which letter goes before or comes after it. (There are 39 lower case letter ‘a’s and 3 upper case ‘A’s in this paragraph and, apart from the difference in case, they all look exactly the same.) That’s not the case for other languages such as Arabic or Hindi where letters can change shape depending on the adjacent letters. With some languages the letters might even change vertical position, instead of being all on the same line as in English.

Early attempts to make writing in other languages easier assumed that non-English alphabets could be adapted to fit into the dominant QWERTY keyboard, with letters that are used less frequently being ignored and other letters being simplified to suit. That isn’t very satisfactory and speakers of other languages were concerned that their own language might become simplified or standardised to fit in with Western technology, a form of ‘digital colonialism’.

But in the 1940s other solutions emerged. The design for one Chinese typewriter avoided QWERTY’s ‘one key equals one letter’ (which couldn’t work for languages like Chinese or Japanese which use thousands of characters – impossible to fit onto one keyboard, see picture at the end!).

Rather than using the keys to print one letter, the user typed a key to begin the process of finding a character. A range of options would be displayed and the user would select another key from among them, with the options narrowing until they arrived at the character they wanted. Luckily this early ‘retrieval system’ of typing actually only took a few keystrokes to bring up the right character, otherwise it would have taken ages.

This is a way of using a keyboard to type words rather than letters, saving time by only displaying possible options. It’s also an early example of ‘autocomplete’ now used on many devices to speed things up by displaying the most likely word for the user to tap, which saves them typing it.

For example, in English the letter Q is almost* always followed by the letter U to produce words like QUAIL, QUICK or QUOTE. There are only a handful of letters that can follow QU – the letter Z wouldn’t be any use, but most of the vowels would be. You might be shown A, E, I or O and if you selected A then you’ve further restricted what the word could be (QUACK, QUARTZ, QUARTET etc).
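A few lines of Python can illustrate the narrowing-down idea: given what has been typed so far, only offer the letters that could sensibly come next. The word list here is just a tiny example.

```python
# Tiny 'retrieval' demo: each key press narrows the possible words,
# so only sensible next letters are offered.
WORDS = ["quack", "quail", "quartet", "quartz", "quick", "quiet", "quote"]

def possible_next_letters(prefix):
    matches = [w for w in WORDS if w.startswith(prefix)]
    return sorted({w[len(prefix)] for w in matches if len(w) > len(prefix)})

print(possible_next_letters("qu"))    # ['a', 'i', 'o']
print(possible_next_letters("qua"))   # ['c', 'i', 'r']
```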

In fact one modern typing system, designed for typists with physical disabilities, also uses this concept of ‘retrieval’, relying on a combination of letter frequency (how often a letter is used in the English language) and probabilistic predictions (about how likely a particular letter is to come next in an English word). Dasher is a computer program that lets someone write text without using a keyboard; instead a mouse, joystick, touchscreen or gaze-tracker (a device that tracks the person’s eye position) can be used.

Letters (lowercase first, then uppercase) and punctuation marks are presented on-screen in alphabetical order from top to bottom on the right-hand side. The user ‘drives’ through the word by first pushing the cursor towards the first letter, then the next possible set of letters appears to choose from, and so on until each word is completed. You can see it in action in the video below.

The Dasher software interface
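At the heart of systems like Dasher is a prediction of which letter is likely to come next. Dasher itself uses a far more sophisticated language model, but the core idea can be sketched in a few lines of Python by counting which letters follow which in some sample text.

```python
# Rough sketch of next-letter prediction: count letter pairs in sample
# text, then use the counts to rank likely next letters.
from collections import Counter, defaultdict

sample_text = "the quick brown fox jumps over the lazy dog and then the queen quietly quits"

follows = defaultdict(Counter)
for a, b in zip(sample_text, sample_text[1:]):
    if a.isalpha() and b.isalpha():
        follows[a][b] += 1

def next_letter_probabilities(letter):
    counts = follows[letter]
    total = sum(counts.values())
    return {nxt: count / total for nxt, count in counts.most_common()}

print(next_letter_probabilities("q"))   # 'u' should dominate
```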

Key combinations

The use of software to expand the usefulness of QWERTY keyboards is now commonplace with programs pre-installed onto devices which run in the background. These IMEs or Input Method Editors can convert a set of keystrokes into a character that’s not available on the keyboard itself. For example, while I can type SHIFT+8 to display the asterisk (*) symbol that sits on the 8 key there’s no degree symbol (as in 30°C) on my keyboard. On a Windows computer I can create it using the numeric keypad on the right of some keyboards, holding down the ALT key while typing the sequence 0176. While I’m typing the numbers nothing appears but once I complete the sequence and release the ALT key the ° appears on the screen.

English language keyboard image by john forcier from Pixabay, showing the numeric keypad highlighted in yellow with the two Alt keys and the ‘num lock’ key highlighted in pink. Num lock (‘numeric lock’) needs to be switched on for the keypad to work; then use the Alt key plus a combination of numbers on the numeric keypad to produce a range of additional ‘alt code‘ characters.
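Under the hood, an Alt code is just a number standing for a character in a character table. Assuming a Python interpreter is to hand, you can see the same mapping directly: position 176 in the Windows-1252 table (and Unicode code point U+00B0) is the degree sign.

```python
# Alt+0176 on Windows produces the character at position 176 of the
# Windows-1252 character table: the degree sign.
print(chr(176))                       # ° (Unicode code point U+00B0)
print(bytes([176]).decode("cp1252"))  # ° via the Windows-1252 table
print("30" + chr(176) + "C")          # 30°C
```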

When Japanese speakers type they use the main ‘ABC’ letters on the keyboard, but the principle is the same – a combination of keys produces a sequence of letters that the IME converts to the correct character. Or perhaps they could use Google Japan’s April Fool solution from 2010, below!

Google Japan’s 2010 April Fool joke with a “Japanese keyboard” set out as a drumkit for easy reach of all keys…

*QWERTY is a ‘word’ which starts with a Q that’s not followed by a U of course…


Further reading

The ‘retrieval system’ of typing mentioned above, which lets the user get to the word or characters more quickly, is similar to the general problem solving strategy called ‘Divide and Conquer’. You can read more about that and other search algorithms in our free booklet ‘Searching to Speak‘ (PDF) which explores how the design of an algorithm could allow someone with locked-in syndrome to communicate. Locked-in syndrome is a condition resulting from a stroke where a person is totally paralysed. They can see, hear and think but cannot speak. How could a person with Locked-in syndrome write a book? How might they do it if they knew some computational thinking?


EPSRC supports this blog through research grant EP/W033615/1.

Is ChatGPT’s “CS4FN” article good enough?

(Or how to write for CS4FN)

by Paul Curzon, Queen Mary University of London

Follow the news and it is clear that the chatbots are about to take over journalism, novel writing, script writing, writing research papers, … just about all kinds of writing. So how about writing for the CS4FN magazine. Are they good enough yet? Are we about to lose our jobs? Jo asked ChatGPT to write a CS4FN article to find out. Read its efforts before reading on…

As editor I not only write articles but also vet them and tweak them when necessary to fit the magazine style. So I’ve looked at ChatGPT’s offering as I would one coming from a person …

ChatGPT’s essay writing has been compared to that of a good but not brilliant student. Writing CS4FN articles is a task we have set students in the past: in part to give them experience of how you must write in different styles for different purposes. Different audience? Different writing. Only a small number come close to what I am after. They generally have one or more issues. A common problem when students write for CS4FN is sadly a lack of good grammar and punctuation throughout, beyond just typos (basic but vital English skills seem to be severely lacking these days, even with spell checking and grammar checking tools to help). Other common problems include a lack of structure, no hook at the start, over-formal writing (so the wrong style), no real fun element at all and/or being devoid of stories about people, and an obsession with a few subjects (like machine learning!) rather than finding something new to write about. They are also then often vanilla articles about that topic, just churning out looked-up facts rather than finding some new, interesting angle.

How did the chatbot do? It seems to have made most of the same mistakes. At least ChatGPT’s spelling and grammar are basically good, so that is a start: it is a good primary school student then! Beyond that it has behaved like the weaker students do… and missed the point. It has actually just written a pretty bog-standard factual article explaining the topic it chose, and of course, given a free choice, it chose … machine learning! Fine, if it had a novel twist, but there are no interesting angles added to the topic to bring it alive. Nor did it describe the contributions of a person. In fact, no people are mentioned at all. It also uses a pretty formal style of writing (“In conclusion…”). Just like humans (especially academics) it used too much jargon and didn’t even explain all the jargon it did use (even after being prompted to write for a younger audience). If I were editing, I’d get rid of the formality and unexplained jargon for starters. Just like the students who can actually write but don’t yet get the subtleties, it hasn’t grasped the fact that it should have adapted its style, even when prompted.

It knows about structure and can construct an essay with a start, a middle and an end, as it has put in an introduction and a conclusion. What it hasn’t done, though, is add any kind of “grab”. There is nothing at the start to really capture the attention. There is no strange link, no intriguing question, no surprising statement, no interesting person… nothing to really grab you (though Jo saved it by adding a grab to the start: that she had asked an AI to write it). It hasn’t added any twist at the end, or included anything surprising. In fact, there is no fun element at all. Our articles can be serious rather than fun, but then the grab has to be about the seriousness: linked to bad effects for society, for example.

ChatGPT has also written a very abstract essay. There is little in the way of context or concrete examples. It says, for example, “rules … couldn’t handle complex situations”. Give me an example of a complex situation so I know what you are talking about! There are no similes or metaphors to help explain. It throws in some application areas for context, like game playing and healthcare, but doesn’t explain them at all (it doesn’t say what kind of breakthrough has been made in game playing, for example). In fact, it doesn’t seem to be writing in a “semantic wave” style that makes for good explanations at all. That is where you explain something by linking the abstract technical thing you are explaining to some everyday context or concrete example, unpacking then repacking the concepts. Explaining machine learning? Then illustrate your points with an example, such as how machine learning might use movies to predict your voting habits, and explain how the example illustrates the abstract concepts, for instance by pointing out the patterns it might spot.

There are several different kinds of CS4FN article. Overall, CS4FN is about public engagement with research. That gives us ways in to explain core computer science though (like what machine learning is). We try to make sure the reader learns something core, if by stealth, in the middle of longer articles. We also write about people and especially diversity, sometimes about careers or popular culture, or about the history of computation. So, context is central to our articles. Sometimes we write about general topics but always with some interesting link, or game or puzzle or … something. For a really, really good article that I instantly love, I am looking for some real creativity – something very different, whether that is an intriguing link, a new topic, or just a not very well known and surprising fact. ChatGPT did not do any of that at all.

Was ChatGPT’s article good enough? No. At best I might use some of what it wrote in the middle of some other article but in that case I would be doing all the work to make it a CS4FN article.

ChatGPT hasn’t written a CS4FN article
in any sense other than in writing about computing.

Was it trained on material from CS4FN to allow it to pick up what CS4FN was? We originally assumed so – our material has been freely accessible on the web for 20 years and the web is supposedly the chatbots’ training ground. If so, I would have expected it to do much better at getting the style right. I’m left thinking that actually, when it is asked to write articles or essays without more guidance that it understands, it just always writes about machine learning! (Just like I always used to write science fiction stories for every story my English teacher set, to his exasperation!) We assumed, because it wrote about a computing topic, that it did understand, but perhaps it is all a chimera. Perhaps it didn’t actually understand the brief even to the level of knowing it was being asked to write about computing and just got lucky. Who knows? It is a black box. We could investigate more, but this is a simple example of why we need Artificial Intelligences that can justify their decisions!

Of course, we could work harder to train it up as I would a human member of our team. With more of the right prompting we could perhaps get it there. Given time the chatbots will get far better anyway. Even without that, they clearly can now do good basic factual writing, so, yes, lots of writing jobs are undoubtedly now at risk (and that includes a wide range of jobs, like lawyers, teachers, and even programmers and the like too) if we as a society decide to let them. We may find the world turns much more vanilla as a result, though, with writing becoming much more bland and boring without the human spark, and without us noticing till it is lost (just like modern supermarket tomatoes so often taste bland, having lost the intense taste they once had!) … unless the chatbots gain some real creativity.

The basic problem with new technology is that it wreaks changes irrespective of the human cost (when we allow it to, but we so often do, giddy with the new toys). That is fine if, as a society, we have strong ways to support those affected. That might involve major support for retraining and education into the new jobs created. Alternatively, if fewer jobs are created than destroyed, which is the way we may be going, where jobs become ever scarcer, then we need strong social support systems and no stigma attached to not having a job. However, currently that is not looking likely, and instead the changes of recent times have just increased, not reduced, inequality, with small numbers getting very, very rich but many others getting far poorer as the jobs left pay less and less.

Perhaps it’s not malevolent Artificial Intelligences of science fiction taking over that is the real threat to humanity. Corporations act like living entities these days, working to ensure their own survival whatever the cost, and we largely let them. Perhaps it is the tech companies and their brand of alien self-serving corporation as ‘intelligent life’ acting as societal disrupters that we need to worry about. Things happen (like technology releases) because the corporation wants them to but at the moment that isn’t always the same as what is best for people long term. We could be heading for a wonderful utopian world where people do not need to work and instead spend their time doing fulfilling things. It increasingly looks like instead we have a very dystopian future to look forward to – if we let the Artificial Intelligences do too many things, taking over jobs, just because they can so that corporations can do things more cheaply, so make more fabulous wealth for the few.

Am I about to lose my job writing articles for CS4FN? I don’t think so. Why do I write CS4FN? I love writing this kind of stuff. It is my hobby as much as anything, so I do it for my own personal pleasure as well as for the good I hope it does, whether inspiring and educating people, or just throwing up things to think about. Even if the chatbots were good enough, I wouldn’t stop writing. It is great to have a hobby that may also be useful to others. And why would I stop doing something I do for fun, just because a machine could do it for me? But that is just lucky for me. Others who do it for a living won’t be so lucky.

We really have to stop and think about what we want as humans. Why do we do creative things? Why do we work? Why do we do anything? Replacing us with machines is all well and good, but only if the future for all people is actually better as a result, not just a few.



EPSRC supports this blog through research grant EP/W033615/1.