The paranoid program

by Paul Curzon, Queen Mary University of London

One of the greatest characters in Douglas Adams’ Hitchhiker’s Guide to the Galaxy (science fiction radio series, books and film) was Marvin the Paranoid Android. Marvin wasn’t actually paranoid, though. Rather, he was very, very depressed. This was because, as he often noted, he had ‘a brain the size of a planet’ but was constantly given trivial and uninteresting jobs to do. Marvin was fiction. One of the first real computer programs able to converse with humans, PARRY, did aim to behave in a paranoid way, however.

PARRY was in part inspired by the earlier ELIZA program. Both were early attempts to write what we would now call chatbots: programs that could have conversations with humans. This area of Natural Language Processing is now a major research area. Modern chatbot programs rely on machine learning to learn rules from real conversations that tell them what to say in different situations. Early programs relied on rules hand-written by the programmer. ELIZA, written by Joseph Weizenbaum, was the most successful early program to do this and fooled people into thinking they were conversing with a human. One set of rules ELIZA could use, called DOCTOR, allowed it to behave like a therapist of the kind popular at the time, who just echoed back things their patient said. Weizenbaum’s aim was not actually to fool people as such, but to show how trivial human-computer conversation was: a relatively simple approach, in which the program looked for trigger words and used them to choose pre-programmed responses, could lead to realistic-seeming conversation.

PARRY was more serious in its aim. It was written by Kenneth Colby, a psychiatrist at Stanford, in the early 1970s. He was trying to simulate the behaviour of a person suffering from paranoid schizophrenia, a condition whose symptoms include the person believing that others have hostile intentions towards them: innocent things other people say are seen as hostile even when there was no such intention.

PARRY was based on a simple model of how those with the condition were thought to behave. Writing programs that simulate something being studied is one of the ways computer science has added to the way we do science. If you fully understand a phenomenon, and have embodied that understanding in a model that describes it, then you should be able to write a program that simulates that phenomenon. Once you have written the program you can test it against reality to see if it does behave the same way. If there are differences, this suggests the model, and so your understanding, is not yet fully accurate. The model needs improving to deal with the differences. PARRY was an attempt to do this in the area of psychiatry. Schizophrenia is not in itself well-defined: there is no objective test to diagnose it. Psychiatrists come to a conclusion about it just by observing patients, based on their experience. Could a program display convincing behaviours?

It was tested using a variation of the Turing Test: Alan Turing’s suggestion of a way to tell if a program could be considered intelligent or not. He suggested having humans and programs chat to a panel of judges via a computer interface. If the judges cannot reliably tell them apart, he suggested, you should accept the programs as intelligent. With PARRY, rather than testing whether the program was intelligent, the aim was to find out if it could be distinguished from real people with the condition. A series of psychiatrists were therefore allowed to chat with a series of runs of the program, as well as with actual people diagnosed with paranoid schizophrenia. All conversations were through a computer. The psychiatrists were not told in advance which were which. Other psychiatrists were later allowed to read the transcripts of those conversations. All were asked to pick out the people and the programs. The result was that they could only correctly tell which was a human and which was PARRY about half the time. As that is about as good as tossing a coin, it suggests the model of behaviour was convincing.

As ELIZA was simulating a mental health doctor and PARRY a patient, someone had the idea of letting them talk to each other. ELIZA (as the DOCTOR) was given the chance to chat with PARRY several times. You can read one of the conversations between them here. Do they seem believably human? Personally, I think PARRY comes across as the more convincingly human of the two, paranoid or not!


Activity for you to do…

If you can program, why not have a go at writing your own chatbot? If you can’t, writing a simple chatbot is quite a good project to learn with, as long as you start simple with fixed conversations. As you make it more complex, it can, like ELIZA and PARRY, be based on looking for keywords in the things the other person types, together with template responses, as well as some fixed starter questions also used to change the subject. It is easier if you stick to a single area of interest (make it football mad, for example): “What’s your favourite team?” … “Liverpool” … “I like Liverpool because of Klopp, but I support Arsenal.” … ”What do you think of Arsenal?” …
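To see how little is needed to get started, here is one way the football-mad bot described above might begin life in Python. It is only a sketch of the keyword-trigger idea: the rules, fall-back lines and word matching are deliberately as simple as possible, and everything beyond the quoted lines is made up for illustration.

```python
import random

# Keyword-trigger rules in the spirit of ELIZA and PARRY: look for a
# trigger word in what the other person typed, then pick a template reply.
RULES = {
    "team": ["What's your favourite team?"],
    "liverpool": ["I like Liverpool because of Klopp, but I support Arsenal."],
    "arsenal": ["What do you think of Arsenal?"],
}

# Fixed fall-back lines, also used to change the subject.
FALLBACKS = ["Tell me about a match you watched recently.",
             "Who is the best player in the world right now?"]

def reply(user_input: str) -> str:
    # Strip punctuation we care about and split into words.
    words = user_input.lower().replace("?", "").replace(".", "").split()
    for trigger, responses in RULES.items():
        if trigger in words:
            return random.choice(responses)
    # No trigger matched: change the subject.
    return random.choice(FALLBACKS)

print(reply("I support Liverpool"))  # prints the Klopp line
```

A real chatbot would need many more rules, and tricks like echoing parts of the input back (as DOCTOR did), but the match-a-keyword, pick-a-template loop stays the same.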

Alternatively, perhaps you could write a chatbot to bring Marvin to life, depressed about everything he is asked to do, if that is not too depressingly simple, should you have a brain the size of a planet.


More on …

Related Magazines …

Issue 16 cover: Clean up your language

This blog is funded through EPSRC grant EP/W033615/1.

Hidden Figures: NASA’s brilliant calculators

Full Moon with a blue filter
Full Moon image by PIRO from Pixabay

NASA Langley was the birthplace of the U.S. space program where astronauts like Neil Armstrong learned to land on the moon. Everyone knows the names of astronauts, but behind the scenes a group of African-American women were vital to the space program: Katherine Johnson, Mary Jackson and Dorothy Vaughan. Before electronic computers were invented ‘computers’ were just people who did calculations and that’s where they started out, as part of a segregated team of mathematicians. Dorothy Vaughan became the first African-American woman to supervise staff there and helped make the transition from human to electronic computers by teaching herself and her staff how to program in the early programming language, FORTRAN.

The women switched from being the computers to programming them. These hidden women helped put the first American, John Glenn, in orbit, and over many years worked on calculations like the trajectories of spacecraft and their launch windows (the small period of time when a rocket must be launched if it is to get to its target). These complex calculations had to be correct. If they got them wrong, the mistakes could ruin a mission, putting the lives of the astronauts at risk. Get them right, as they did, and the result was a giant leap for humankind.

See the film ‘Hidden Figures’ for more of their story.

– Paul Curzon, Queen Mary University of London

from the archive

More on …

Magazines …

Front cover of CS4FN issue 29 - Diversity in Computing

Subscribe to be notified whenever we publish a new post to the CS4FN blog.



Freddie Figgers – the abandoned baby who became a runaway telecom tech star

Freddie Figgers. Image by Freddie Figgers, CC BY 4.0 , via Wikimedia Commons

As a baby, born in the US in 1989, Freddie Figgers was abandoned by his biological parents but he was brought up with love and kindness by two much older adoptive parents who kindled his early enthusiasm for fixing things and inspired his work in smart health. He now runs the first Black-owned telecommunications company in the US.

When Freddie was 9 his father bought him an old (broken) computer from a charity shop to play around with. He’d previously enjoyed tinkering with his father’s collection of radios and alarm clocks, and when he opened up the computer he could see which of its components and soldering links were broken. He spotted that he could replace these with the same kinds of components from one of his dad’s old radios and, after several attempts, his computer was working again – Freddie was hooked, and he started to learn how to code.

When he was 12 he attended an after-school club and set to work fixing the school’s broken computers. His skill impressed the club’s leader, who also happened to be the local Mayor, and soon Freddie was being paid several dollars an hour to repair even more computers for the Mayor’s office (in the city of Quincy, Florida) and her staff. A few years later Quincy needed a new system to ensure that everyone’s water pressure was correct. A company offered to create software to monitor the water pressure gauges and said it would cost 600,000 dollars. Freddie, now 15 and still working with the Mayor, offered to create a low-cost program of his own and he saved the city thousands in doing so.

He was soon offered other contracts and used the money coming in to set up his own computing business. He heard about an insurance company in another US city whose offices had been badly damaged by a tornado and lost all of their customers’ records. That gave him the idea to set up a cloud computing service (which means that the data are stored in different places and if one is damaged the data can easily be recovered from the others).

His father, now quite elderly, had dementia and regularly wandered off and got lost. Freddie found an ingenious way to help him by rigging up one of his dad’s shoes with a GPS detector and two-way communication connected to his computer – he could talk to his dad through the shoe! If his dad was missing Freddie could talk to him, find out where he was and go and get him. Freddie later sold his shoe tracker for over 2 million dollars.

Living in a rural area he knew that mobile phone coverage and access to the internet were not as good as in larger cities. Big telecommunications companies are not keen to invest their money and equipment in areas with much smaller populations, so instead Freddie decided to set up his own. It took him quite a few applications to the FCC (the US Federal Communications Commission, which regulates internet and phone providers) but eventually, at 21, he was both the youngest and the first Black person in the US to own a telecoms company.

Most telecoms companies just provide a network service but his company also creates affordable smart phones which have ‘multi-user profiles’ (meaning that phones can be shared by several people in a family, each with their own profile). The death of his mother’s uncle, from a diabetic coma, also inspired him to create a networked blood glucose (sugar) meter that can link up wirelessly to any mobile phone. This not only lets someone share their blood glucose measurements with their healthcare team, but also with close family members who can help keep them safe while their glucose levels are too high.

Freddie has created many tools to help people in different ways through his work in health and communications – and he’s helping the next generation too. He’s created a ‘Hidden Figgers’ scholarship to encourage young people in the US to take up tech careers, so perhaps we’ll see a few more fantastic folk like Freddie Figgers in the future.

– Jo Brodie and Paul Curzon, Queen Mary University of London

More on …


This article was originally published on our sister website at Teaching London Computing (which has lots of free resources for computing teachers). It hasn’t yet been published in an issue of CS4FN but you can download all of our free magazines here.

See more in ‘Celebrating Diversity in Computing’

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing

Further reading

See also

A ‘shoe tech’ device for people who have no sense of direction – read about it in ‘Follow that Shoe’ on the last page of the wearable technology issue of CS4FN (‘Technology Worn Out (And About)’, issue 25).

Right to Repair – a European movement to make it easier for people to repair their devices, or even just change the battery in a smartphone themselves. See also the London-based Restart Project which is arguing for the same in the UK.



Hiding in Skype

Steganography in a video app

Computer Science isn’t just about using language, sometimes it’s about losing it. Sometimes people want to send messages so no one even knows they exist and a great place to lose language is inside a conversation.

Cryptography is the science of making messages unreadable. Spymasters have used it for a thousand years or more. Now it’s a part of everyday life. It’s used by the banks every time you use a cash point and by online shops when you buy something over the Internet. It’s used by businesses that don’t want their industrial secrets revealed and by celebrities who want to be sure that tabloid hackers can’t read their texts.

Cryptography stops messages being read, but sometimes just knowing that people are having a conversation can reveal more than they want, even if you don’t know what was said. Knowing a football star is exchanging hundreds of texts with his team mate’s girlfriend suggests something is going on, for example. Similarly, CIA chief David Petraeus, whose downfall made international news, might have kept his secret and his job if the emails from his lover had been hidden. David Bowie kept his 2013 comeback single ‘Where are we now?’ a surprise until the moment it was released. It might not have made him the front page news it did if a music journalist had just tracked who had been talking to whom amongst the musicians involved in the months before.

That’s where steganography comes in – the science of hiding messages so no one even knows they exist. Invisible ink is one form of steganography, used, for example, by the French resistance in World War II. More bizarre forms have been used over the years though – an Ancient Greek slave had a message tattooed on his shaven head warning of Persian invasion plans. Once his hair had grown back, he delivered it with no one along the way any the wiser.

Digital communication opens up new ways to hide messages. Computers store information using a code of 0s and 1s: bits. Steganography is then about finding places to hide those bits. A team of Polish researchers led by Wojciech Mazurczyk have now found a way to hide them in a video app (Skype) conversation.

Skype was one of the early popular video call applications, eventually replaced by Microsoft Teams. When you use Skype to make a phone call, the program converts the sounds you make to a long series of bits. They are sent over the Internet and converted back to sound at the other end. At the same time more sounds as bits stream back from the person you are talking to. Data transmitted over the Internet isn’t sent all in one go, though. It’s broken into packets: a bit like taking your conversation and tweeting it one line at a time.

Why? Imagine you run a crack team of commandos who have to reach a target in enemy territory to blow it up – a stately home where all the enemy’s Generals are having a party perhaps. If all the commandos travel together in one army truck and something goes wrong along the way probably no one will make it – a disaster. If on the other hand they each travel separately, rendezvousing once there, the mission is much more likely to be successful. If a few are killed on the way it doesn’t matter as the rest can still complete the mission.

The same applies to a video call. Each packet contains a little bit of the full conversation and each makes its own way to the destination across the Internet. On arriving there, they reform into the full message. To allow this to happen, each packet includes some extra data that says, for example, what conversation it is part of, how big it is and also where it fits in the sequence. If some don’t make it then the rest of the conversation can still be put back together without them. As long as too much isn’t missing, no one will notice.

Skype does something special with its packets. The size of the packets changes depending on how much data needs to be transmitted. If the person is talking, each packet carries a lot of information. If the person is listening then what is being transmitted is mainly silence. Skype then sends shorter packets. The Polish team realised they could exploit this for steganography. Their program, SkyDe, intercepts Skype packets looking for short ones. Any found are replaced with packets holding the data from the covert message. At the destination another copy of SkyDe intercepts them and extracts the hidden message and passes it on to the intended recipient. As far as Skype is concerned some packets just never arrive.
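The swap-the-silence-packets idea can be sketched in a few lines of Python. This is only a toy simulation: the packet format, the length threshold and the "covert"/"voice" tags are all invented for illustration, whereas the real SkyDe works on encrypted Skype packets and identifies silence purely by packet length, with no explicit tags.

```python
SILENCE_THRESHOLD = 4  # payloads this short are treated as silence

def hide(packets, secret: bytes):
    """Sender side: replace silence packets with chunks of the secret."""
    remaining = bytearray(secret)
    stego = []
    for payload in packets:
        if len(payload) <= SILENCE_THRESHOLD and remaining:
            chunk = bytes(remaining[:SILENCE_THRESHOLD])
            del remaining[:SILENCE_THRESHOLD]
            stego.append(("covert", chunk))   # silence packet hijacked
        else:
            stego.append(("voice", payload))  # passed through untouched
    return stego

def extract(stego):
    """Receiver side: collect covert chunks; voice packets carry on."""
    secret = b"".join(chunk for kind, chunk in stego if kind == "covert")
    voice = [payload for kind, payload in stego if kind == "voice"]
    return secret, voice

# A pretend call: long payloads are speech, short ones silence.
call = [b"loud speech here", b"..", b"more speech", b".", b"...."]
secret, voice = extract(hide(call, b"HI SPY"))
print(secret)  # b'HI SPY'
```

To Skype itself, the hijacked packets simply look lost – exactly the high-level behaviour described above.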

There are several properties that matter for a good steganographic technique. One is its bandwidth: how much data can be sent using the method. Because Skype calls contain a lot of silence SkyDe has a high bandwidth: there are lots of opportunities to hide messages. A second important property is obviously undetectability. The Polish team’s experiments have shown that SkyDe messages are very hard to detect. As only packets that contain silence are used and so lost, the people having the conversation won’t notice and the Skype receiver itself can’t easily tell because what is happening is no different to a typical unreliable network. Packets go missing all the time. Because both the Skype data and the hidden messages are encrypted, someone observing the packets travelling over the network won’t see a difference – they are all just random patterns of bits. Skype calls are now common so there are also lots of natural opportunities for sending messages this way – no one is going to get suspicious that lots of calls are suddenly being made.

All in all SkyDe provides an elegant new form of steganography. Invisible ink is so last century (and tattooing messages on your head so very last millennium). Now the sound of silence is all you need to have a hidden conversation.

Paul Curzon, Queen Mary University of London


Related Magazines …

A version of this article was originally published on the CS4FN website and a copy also appears on pages 10-11 of Issue 16 of the magazine.

You can also download PDF copies of all of our free magazines.


The heart of an Arabic programming language

‘Hello World’, in Arabic

So far almost all computer languages have been written in English, but that doesn’t need to be the case. Computers don’t care. Computer scientist Ramsey Nasser developed the first programming language that uses Arabic script. His computer language is called قلب. In English, it’s pronounced “Qalb”, after the Arabic word for heart. As long as a computer understands what to do with the instructions it’s given, they can be in any form, from numbers to letters to images.

Paul Curzon, Queen Mary University of London

More on …


Related Magazines …

You can also download PDF copies of all of our free magazines.



Escape from Egypt

The humble escape character

Egyptian hieroglyphs from Luxor
Hieroglyphs at Luxor. Image by Alexander Paukner from Pixabay 

The escape character is a rather small and humble thing, often ignored and easily misunderstood, but vital in programming languages. It is used simply to say that the symbols that follow should be treated differently. The n in \n is no longer just an n but a newline character, for example. It is the escape character \ that makes the change. The escape character has a long history, dating back at least to Ancient Egypt and probably earlier.

The Ancient Egyptians famously used a language of pictures to write: hieroglyphs. How to read the language was lost for thousands of years, and it proved to be fiendishly difficult to decipher. The key to doing this turned out to be the Rosetta Stone, discovered when Napoleon invaded Egypt. It contained the same text in three different languages: the Hieroglyphic script, Greek and also an Egyptian script called Demotic.

A whole series of scholars ultimately contributed, but the final decipherment was done by Jean-François Champollion. Part of the difficulty in decipherment, even with a Greek translation of the Rosetta Stone text available, was that it wasn’t, as commonly thought, just a language where symbolic pictures represented words (a picture of the sun meaning sun, for example). Instead, it combined several different systems of writing that used the same symbols, so those symbols could be read in different ways. The first way was as alphabetic letters that stood for consonants (like b, d and p in our alphabet); words could be spelled out in this alphabet. The second was phonetic: a symbol could stand for a group of such sounds. Finally, a picture could stand not for a sound but for a meaning. A picture of a duck could mean a particular sound or it could mean a duck!

A cartouche for Cleopatra.
A cartouche for Cleopatra. Image by CS4FN

Part of the reason it took so long to decipher the language was that it was assumed that all the symbols were pictures of the things they represented. It was only when eventually scholars started to treat some as though they represented sounds that progress was made. Even more progress was made when it was realised the same symbol meant different things and might be read in a different way, even in the same phrase.

However, if the same symbol meant different things in different places of a passage, how on earth could even Egyptian readers tell? How might you indicate a particular group of characters had a special meaning?

One way the Egyptians used specifically for names is called a cartouche: they enclosed the sequence of symbols that represented a name in an oval-like box, like the one shown for Cleopatra. This was one of the first keys to unlocking the language as the name of pharaoh Ptolemy appeared several times in the Greek of the Rosetta Stone. Once someone had the idea that the cartouches might be names, the symbols used to spell out Ptolemy a letter at a time could be guessed at.

Putting things in boxes works for a pictorial language, but it isn’t so convenient as a more general way of indicating different uses of particular symbols or sequences of them. The Ancient Egyptians therefore had a much simpler way too. The normal reading of a symbol was as a sound. A symbol that was instead to be treated as a picture of the word it represented was followed by a line. (So, despite all the assumptions of the translators, and the general perception of them, a hieroglyph read as a picture was the exception, not the norm!)

The Egyptian hieroglyph for aleph. Image by CS4FN

For example, the hieroglyph that is a picture of the Egyptian eagle stands for a single consonant sound, aleph. We would pronounce it ‘ah’ and it can be seen in the cartouche for Cleopatra that sounds out her name. However, add the line after the picture of the eagle (as shown) and it just means what it looks like: the Egyptian eagle.

The Egyptian hieroglyph for the Egyptian Eagle. Image by CS4FN

Cartouches actually included the line at the end too, itself indicating their special meaning, as can be seen on the Cleopatra cartouche above.

The Egyptian line hieroglyph is what we would now call an escape character: its purpose is to say that the symbol it is paired with is not treated normally, but in a special way.

Computer Scientists use escape characters in a variety of ways in programming languages, as well as in markup languages like HTML. Different languages use a different symbol as the escape character, though \ is popular (and very reminiscent of the Egyptian line!). One place escapes are used is to represent special characters in strings (sequences of characters like words or sentences) so they can be manipulated or printed. If I want my program to print a word like “not” then I must pass an appropriate string to the print command. I just put the three characters in quotation marks to show I mean the characters n then o then t. Simple.

However, the string “\no\t” does not similarly mean five characters \, n, o, \ and t. It still represents three characters, but this time \n, o and \t. \ is an escape character saying that the n and the t symbols that follow it are not really representing the n or t characters but instead stand for a newline (\n : which jumps to the next line) and a tab character (\t : which adds some space). “\no\t” therefore means newline o tab.

This begs the question: what if you actually want to print a \ character? If you try to use it as it is, it just turns whatever comes after it into something else and disappears. The solution is simple: you escape it by preceding it with a \. \\ means a single \ character! So “n\\t” means n, followed by an actual \ character, followed by a t. The normal meaning of \ is to escape what follows; its special meaning, when it is itself escaped, is just to be a normal character!

Other characters’ meanings are inverted like this too, where the more natural meaning is the one you only get with an escape character. For example, what if you want a program to print a quotation, so need quotation marks? Quotation marks already have another meaning: they show where a string starts and ends. So if you want a string consisting of the five characters “, n, o, t and ” you might try to write “”not”” but that doesn’t work, as the initial “” already makes a string, just one with no characters in it. The string has ended before you got to the n. Escape characters to the rescue. You need ” to mean something other than its “normal” meaning of starting or ending a string, so just escape it inside the string and write “\”not\””.
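You can try all of these escapes for real in Python, whose string literals use the same \-notation as the examples above:

```python
# The escape examples above, as Python string literals.

s = "\no\t"           # three characters: newline, 'o', tab
assert len(s) == 3

backslash = "n\\t"    # three characters: 'n', a real backslash, 't'
assert backslash[1] == "\\"

quoted = "\"not\""    # five characters: quote, n, o, t, quote
assert quoted == '"not"' and len(quoted) == 5

print(repr(s))        # shows the escapes: '\no\t'
```

Printing `repr(s)` rather than `s` makes Python show the escape sequences themselves instead of an actual newline and tab.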

Once you get used to it, escaping characters is actually very simple, but is easy to find confusing when first learning to program. It is not surprising those trying to decipher hieroglyphs struggled so much as escapes were only one of the problems they had to contend with.

Paul Curzon, Queen Mary University of London


More on …

Related Magazines …



The Mummy in an AI world: Jane Webb’s future

The sarcophagus of a mummy
Image by albertr from Pixabay

Inspired by Mary Shelley’s Frankenstein, 17-year-old Victorian orphan Jane Webb secured her future by writing the first ever Mummy story. The 22nd-century world in which her novel was set is perhaps the most amazing thing about the three-volume book, though.

On the death of her father, Jane realised she needed to find a way to support herself and did so by publishing her novel “The Mummy!” in 1827. In contrast to their modern version as stars of horror films, Webb’s Mummy, a reanimation of Cheops, was actually there to help those doing good and punish those who were evil. Napoleon had, through the start of the century, invaded Egypt, taking with him scholars intent on understanding Ancient Egyptian society. Europe was fascinated with Ancient Egypt and awash with Egyptian artefacts and stories around them. In London, the Egyptian Hall had been built in Piccadilly in 1812 to display Egyptian artefacts, and in 1821 it displayed a replica of the tomb of Seti I. The Rosetta Stone that led to the decipherment of hieroglyphics was cracked in 1822. The time was therefore ripe for someone to come up with the idea of a Mummy story.

The novel was not, however, set in Victorian times but in a 22nd-century future that she imagined, and that future was perhaps more amazing than the idea of a mummy coming to life. Her version of the future was full of technological inventions supporting humanity, as well as social predictions, many of which have come to fruition, such as space travel and the idea that women might wear trousers as the height of fashion (making her a feminist hero). The machines she described in the book led to her meeting her future husband, John Loudon. A writer about farming and gardening, he was so impressed by the idea of a mechanical milking machine included in the book that he asked to meet her. They married soon after (and she became Jane Loudon).

The skilled artificial intelligences she wrote into her future society are perhaps the most amazing of her ideas, in that she was the first person to really envision in fiction a world where AIs and robots were embedded in society, just doing good as standard. To put this into the context of other predictions, Ada Lovelace wrote her notes suggesting machines of the future would be able to compose music 20 years later.

Jane Webb’s future was also full of cunning computational contraptions: there were steam-powered robot surgeons, foreseeing the modern robots that are able to do operations (and, with their steady hands, are better than a human at, for example, eye surgery). She also described Artificial Intelligences replacing lawyers. Her machines were fed their legal brief, giving them instructions about the case, through tubes. Whilst robots may not yet have fully replaced barristers and judges, artificial intelligence programs are already used in some places, for example, to decide the length of sentences of those convicted, and many see it now only being a matter of time before lawyers are spending their time working with Artificial Intelligence programs as standard. Jane’s world also included a version of the Internet, at a time before the electric telegraph existed, when telegraph messages were sent by semaphore between networks of towers.

The book ultimately secured her future as required, and whilst we do not yet have any real reanimated mummies wandering around doing good deeds, Jane Webb did envision lots of useful inventions, many now a reality, and certainly had pretty good ideas about how future computer technology would pan out in society… despite computers, never mind artificial intelligences, still being well over a century away.

– Paul Curzon, Queen Mary University of London

More on …

Related Magazines …


EPSRC supported this article through research grant EP/K040251/2, held by Professor Ursula Martin, as well as grant EP/W033615/1.

    A storm in a bell jar

    lightning
    Image by FelixMittermeier from Pixabay 

    Ada Lovelace was close friends with John Crosse, and knew his father Andrew: the ‘real Frankenstein’. Andrew Crosse apparently created insect life from electricity, stone and water…

    Andrew Crosse was a ‘gentleman scientist’ doing science for his own amusement including work improving giant versions of the first batteries called ‘voltaic piles’. He was given the nickname ‘the thunder and lightning man’ because of the way he used the batteries to do giant discharges of electricity with bangs as loud as canons.

    He hit the headlines when he appeared to create life from electricity, Frankenstein-like. This was an unexpected result of his experiments using electricity to make crystals. He was passing a current through water containing dissolved limestone over a period of weeks. In one experiment, about a month in, a perfect insect appeared apparently from no-where, and soon after starting to move. More and more insects then appeared over time. He mentioned it to friends, which led to a story in a local paper. It was then picked up nationally. Some of the stories said he had created the insects, and this led to outrage and death threats over his apparent blasphemy of trying to take the position of God.

    (Does this start to sound like a modern social networking storm, trolls and all?) In fact he appears to have believed, and others agreed, that the mineral samples he was using must have been contaminated with tiny insect eggs that simply hatched naturally. Scientific results are only accepted if they can be replicated, and others who took care to avoid contamination couldn’t get the same result. The secret of creating life had not been found.

    While Mary Shelley, who wrote Frankenstein, did know Crosse, sadly for the story’s sake he can’t have been the inspiration for the novel, as has been suggested: she wrote it decades before his experiments!

    – Paul Curzon, Queen Mary University of London (from the archive)



    Pass the screwdriver, Igor

    Mary Shelley, Frankenstein’s monster and artificial life

    Frankenstein's Monster
    Image by sethJreid from Pixabay

    Shortly after Ada Lovelace was born, so long before she made predictions about future “creative machines”, Mary Shelley, a friend of her father (Lord Byron), was writing a novel. In her book, Frankenstein, inanimate flesh is brought to life. Perhaps Shelley foresaw what is actually to come, what computer scientists might one day create: artificial life.

    Life it may not be, but engineers are now doing pretty well in creating humanoid machines that can do their own thing. Could a machine ever be considered alive? The 21st century is undoubtedly going to be the age of the robot. Maybe it’s time to start thinking about the consequences in case they gain a sense of self.

    Frankenstein was obsessed with creating life. In Mary Shelley’s story, he succeeded, though his creation was treated as a “Monster” struggling to cope with the gift of life it was given. Many science fiction books and films have toyed with these themes: the film Blade Runner, for example, explored similar ideas about how intelligent life is created; androids that believe they are human, and the consequences for the creatures concerned.

    Is creating intelligent life fiction? Not totally. Several groups of computer scientists are exploring what it means to create non-biological life, and how it might be done. Some are looking at robot life, working at the level of insect life-forms, for example. Others are looking at creating intelligent life within cyberspace.

    For 70 years or more scientists have tried to create artificial intelligences. They have had a great deal of success in specific areas such as computer vision and chess-playing programs. These are not really intelligent in the way humans are, though they are edging closer. However, none of these programs really cuts it as creating “life”. Life is something more than intelligence.

    A small band of computer scientists have been trying a different approach that they believe will ultimately lead to the creation of new life forms: life forms that could one day even claim to be conscious (and who would we be to disagree with them if they think they are?). These scientists believe life can’t be engineered in a piecemeal way, but that the whole being has to be created as a coherent whole. Their approach is to build the basic building blocks and let life emerge from them.

    Sodarace creatures racing over a bumpy terrain
    A sodarace in action
    by CS4FN

    The outline of the idea could be seen in the game Sodarace, where you could build your own creatures that move around a virtual world, and even let them evolve. One approach to building a creature such as a spider would be to try to work out mathematical equations for how each leg moves and program those equations. The alternative, artificial-life, way used in Sodarace is instead to program the laws of physics, such as gravity and friction, and how masses, springs and muscles behave according to those laws. Then you just put these basic bits together in a way that corresponds to a spider. With this approach you don’t have to work out in advance every eventuality (what if it comes to a wall? Or a cliff? Or bumpy ground?) and write code to deal with it. Instead natural behaviour emerges.
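    You can get a feel for this approach in a few lines of code. The sketch below (in Python, and purely illustrative – it is not Sodarace’s actual code, and the masses, stiffness and friction numbers are made up) programs only the general laws: gravity, friction, springs, and muscles that rhythmically change their rest length. Nothing in it says how a creature should move; whatever behaviour you see emerges from wiring masses and muscles together.

    ```python
    import math

    GRAVITY = 9.8      # pulls every mass downwards
    FRICTION = 0.9     # damps horizontal speed when touching the ground
    DT = 0.01          # simulation time step in seconds

    class Mass:
        def __init__(self, x, y):
            self.x, self.y = x, y
            self.vx, self.vy = 0.0, 0.0

    class Muscle:
        """A spring whose rest length oscillates, so it contracts and relaxes."""
        def __init__(self, a, b, rest, stiffness=50.0, amplitude=0.3, freq=1.0):
            self.a, self.b = a, b
            self.rest, self.k = rest, stiffness
            self.amp, self.freq = amplitude, freq

        def apply(self, t):
            # Hooke's law: force proportional to stretch beyond the current rest length
            rest = self.rest * (1 + self.amp * math.sin(2 * math.pi * self.freq * t))
            dx, dy = self.b.x - self.a.x, self.b.y - self.a.y
            length = math.hypot(dx, dy) or 1e-9
            f = self.k * (length - rest) / length
            self.a.vx += f * dx * DT; self.a.vy += f * dy * DT
            self.b.vx -= f * dx * DT; self.b.vy -= f * dy * DT

    def step(masses, muscles, t):
        for m in muscles:
            m.apply(t)
        for m in masses:
            m.vy -= GRAVITY * DT              # gravity acts on every mass
            m.x += m.vx * DT; m.y += m.vy * DT
            if m.y < 0:                       # hit the ground: bounce a little and rub
                m.y, m.vy = 0.0, -0.5 * m.vy
                m.vx *= FRICTION

    # A minimal two-mass "creature": one muscle joining two feet on the ground.
    a, b = Mass(0.0, 0.0), Mass(1.0, 0.0)
    creature, leg = [a, b], [Muscle(a, b, rest=1.0)]
    t = 0.0
    for _ in range(1000):                     # ten simulated seconds
        step(creature, leg, t)
        t += DT
    ```

    To build a spider instead of this two-mass blob, you would not change the physics at all: you would just add more masses and muscles in a spider-shaped arrangement and let the same `step` function run.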

    The artificial life community believe that not just life-like movement, but life-like intelligence, can emerge in a similar way. Rather than programming the behaviour of muscles you program the behaviour of neurones and then build brains out of them. That, it turns out, has been the key to the machine learning programs now storming the world of Artificial Intelligence, turning it into an everyday tool. If aiming for artificial life, however, you would keep going: combine it with the basic biochemistry of an immune system, do a similar thing with a reproductive system, and so on.
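    A single artificial neurone really is that simple. The sketch below (a toy illustration in Python, with made-up weights, not any particular system’s model) just adds up weighted inputs and “fires” if the total passes a threshold; brains, artificial or otherwise, are built from huge numbers of such units wired together.

    ```python
    def neurone(inputs, weights, threshold):
        """Fire (return 1) if the weighted sum of the inputs passes the threshold."""
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # With these weights and threshold, the neurone only fires when
    # both binary inputs are on: it behaves like an AND gate.
    print(neurone([1, 1], [0.6, 0.6], 1.0))  # → 1
    print(neurone([1, 0], [0.6, 0.6], 1.0))  # → 0
    ```

    Nothing intelligent happens in one neurone; the interesting behaviour only emerges once many of them are connected and their weights are tuned, which is exactly what machine learning automates.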

    Want to know more? A wonderful early book is Steve Grand’s “Creation”, about how he created what at the time was claimed to be “the nearest thing to artificial life yet”. It started life as the game “Creatures”.

    Then have a go at creating artificial life yourself (but be nice to it).

    – Paul Curzon and Peter W McOwan, Queen Mary University of London


    Ada Lovelace in her own words

    A jumble of letters
    Image by Paul Curzon

    Charles Babbage invented wonderful computing machines. But he was not very good at explaining things. That’s where Ada Lovelace came in. She is famous for writing a paper in 1843 explaining how Charles Babbage’s Analytical Engine worked – including a big table of formulas which is often described as “the first computer program”.

    Charles Babbage invented his mechanical computers to save everyone from the hard work of doing big mathematical calculations by hand. He only managed to build a few tiny working models of his first machine, his difference engine. It was finally built to Babbage’s designs in the 1990s and you can see it in the London Science Museum. It has 8,000 mechanical parts and is the size of a small car, but when the operator turns the big handle on the side it works perfectly, and prints out correct answers.

    Babbage invented, but never built, a more ambitious machine, his Analytical Engine. In modern language, this was a general purpose computer, so it could have calculated anything a modern computer can – just a lot more slowly. It was entirely mechanical, but it had all the elements we recognize today – like memory, CPU, and loops.

    Lovelace’s paper explains all the geeky details of how numbers are moved from memory to the CPU and back, and the way the machine would be programmed using punched cards.
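    The famous table in her paper showed, operation by operation, how the Engine could compute the Bernoulli numbers. A modern sketch of the same calculation (in Python, using the standard recurrence for these numbers rather than her exact sequence of Engine operations, which used a slightly different convention) fits in a few lines:

    ```python
    from fractions import Fraction
    from math import comb

    def bernoulli(n):
        """Bernoulli numbers B_0..B_n, computed exactly via the standard recurrence."""
        B = [Fraction(0)] * (n + 1)
        B[0] = Fraction(1)
        for m in range(1, n + 1):
            # B_m = -1/(m+1) * sum over j<m of C(m+1, j) * B_j
            B[m] = -Fraction(1, m + 1) * sum(comb(m + 1, j) * B[j] for j in range(m))
        return B

    print(bernoulli(8))
    # B_1 = -1/2, B_2 = 1/6, and the odd-numbered ones after B_1 are all zero
    ```

    What took a few lines here took Lovelace a large table of individual mill operations, each moving numbers between the Engine’s memory and its CPU by hand, as it were, on paper.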

    But she doesn’t stop there – in quaint Victorian language she tells us about the challenges familiar to every programmer today! She understands how complicated programming is:

    “There are frequently several distinct sets of effects going on simultaneously; all in a manner independent of each other, and yet to a greater or less degree exercising a mutual influence.”

    the difficulty of getting things right:

    “To adjust each to every other, and indeed even to perceive and trace them out with perfect correctness and success, entails difficulties whose nature partakes to a certain extent of those involved in every question where conditions are very numerous and inter-complicated.”

    and the challenge of making things go faster:

    “One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation.”

    She explains how computing is about patterns:

    “it weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves”.

    and inventing new ideas:

    “We might even invent laws … in an arbitrary manner, and set the engine to work upon them, and thus deduce numerical results which we might not otherwise have thought of obtaining”.

    and being creative. If we knew the laws for composing music:

    “the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”

    Alan Turing famously asked if a machine can think – Ada Lovelace got there first:

    “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”

    Wow, pretty amazing for someone born over 200 years ago.

    – Ursula Martin, University of Oxford (From the archive)

