Stretching your keyboard – getting more out of QWERTY

by Jo Brodie, Queen Mary University of London

If you’ve ever sent a text on a phone or written an essay on a computer you’ve most likely come across the ‘QWERTY’ keyboard layout. It looks like this on a smartphone.

A screenshot of an iPhone's on-screen keyboard layout which is known as QWERTY because of the positioning of the letters in the alphabet on the first line.
A smartphone’s on-screen keyboard layout, called QWERTY after the first six letters on the top line.

This layout has been around in one form or another since the 1870s and was first used in old mechanical typewriters, where pressing a letter on the keyboard caused a hinged metal arm, with that same letter embossed at the end, to swing into place, thwacking an ink-coated ribbon to make an impression on the paper. It was quite loud!

Typewriter gif showing a mechanical typewriter in use as the typist presses a key on the keyboard and the corresponding letter is raised to hit the page.
Mechanical typewriter gif from Tenor. The person is typing one of the number keys which has an 8 and an asterisk (*) on it. That causes one of the hinged metal arms to bounce up and hit the page. Each arm has two letters or symbols on it, one above the other, and the Shift key physically moves the arm so the upper (case) letter strikes the page.

The QWERTY keyboard isn’t just used by English speakers but can easily be used by anyone whose language is based on the same A, B, C Latin alphabet (so French, Spanish, German etc). All the letters that an English speaker needs are right there in front of them on the keyboard and with QWERTY… WYSIWYG (What You See Is What You Get). There’s a one-to-one mapping of key to letter: tap the A key and a letter A appears on screen; tap the M key and an M appears. (To get a lowercase letter you just tap the key, but to make it uppercase you need to tap two keys: the up arrow (‘shift’) key plus the letter.)

A French- or Spanish-speaking person could also buy an adapted keyboard that includes letters like É and Ñ, or they can just use a combination of keys to make those letters appear on screen (see Key combinations below). But what about writers of other languages which don’t use the Latin alphabet? The QWERTY keyboard, by itself, isn’t much use for them, so it potentially excludes a huge number of people from using it.

In the English language the letter A never alters its shape depending on which letter goes before or comes after it. (There are 39 lower case letter ‘a’s and 3 upper case ‘A’s in this paragraph and, apart from the difference in case, they all look exactly the same.) That’s not the case for other languages such as Arabic or Hindi where letters can change shape depending on the adjacent letters. With some languages the letters might even change vertical position, instead of being all on the same line as in English.

Early attempts to make writing in other languages easier assumed that non-English alphabets could be adapted to fit into the dominant QWERTY keyboard, with letters that are used less frequently being ignored and other letters being simplified to suit. That isn’t very satisfactory and speakers of other languages were concerned that their own language might become simplified or standardised to fit in with Western technology, a form of ‘digital colonialism’.

But in the 1940s other solutions emerged. The design for one Chinese typewriter avoided QWERTY’s ‘one key equals one letter’ approach (which couldn’t work for languages like Chinese or Japanese, which use thousands of characters – impossible to fit onto one keyboard, see picture at the end!).

Rather than using the keys to print one letter each, the user typed a key to begin the process of finding a character. A range of options would be displayed and the user would select another key from among them, the options narrowing until they arrived at the character they wanted. Luckily this early ‘retrieval system’ of typing only took a few keystrokes to bring up the right character, otherwise it would have taken ages.

This is a way of using a keyboard to type words rather than letters, saving time by only displaying possible options. It’s also an early example of ‘autocomplete’ now used on many devices to speed things up by displaying the most likely word for the user to tap, which saves them typing it.

For example, in English the letter Q is almost* always followed by the letter U, producing words like QUAIL, QUICK or QUOTE. Only a handful of letters can follow QU – the letter Z wouldn’t be any use but most of the vowels would be. You might be shown A, E, I or O, and if you selected A then you’ve further restricted what the word could be (QUACK, QUARTZ, QUARTET etc).
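That narrowing process is easy to sketch in code. Here is a minimal Python illustration of prefix-based narrowing – the tiny word list is a made-up example, not how any real autocomplete stores its dictionary:

```python
# A minimal sketch of prefix-based 'autocomplete': each letter typed
# narrows the list of possible words, just as QU narrows things down
# to QUACK, QUARTZ, QUOTE and so on. The word list is an invented example.
WORDS = ["QUACK", "QUARTZ", "QUARTET", "QUAIL", "QUICK", "QUOTE", "QUEEN"]

def complete(prefix):
    """Return the words still possible given what has been typed so far."""
    return [w for w in WORDS if w.startswith(prefix)]

print(complete("QU"))   # all seven words remain possible
print(complete("QUA"))  # narrowed to QUACK, QUARTZ, QUARTET, QUAIL
```

Each extra letter shrinks the candidate list, which is exactly why a few keystrokes were enough in the Chinese typewriter’s retrieval system.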

In fact one modern typing system, designed for typists with physical disabilities, also uses this concept of ‘retrieval’, relying on a combination of letter frequency (how often a letter is used in the English language) and probabilistic predictions (about how likely a particular letter is to come next in an English word). Dasher is a computer program that lets someone write text without using a keyboard; instead, a mouse, joystick, touchscreen or a gaze-tracker (a device that tracks the person’s eye position) can be used.

Letters are presented on-screen in alphabetic order from top to bottom on the right-hand side (lowercase first, then uppercase, followed by punctuation marks). The user ‘drives’ through the word by pushing the cursor towards the first letter; the next possible set of letters then appears to choose from, and so on until each word is completed. You can see it in action in the video below.
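The kind of letter prediction Dasher relies on can be sketched with simple bigram counts: look at a sample of text, count which letters follow which, and rank the most likely next letters. The sample text here is invented and real Dasher uses a far more sophisticated language model, so treat this as an illustrative toy only:

```python
# Toy next-letter prediction: count, in a small sample text, which
# letters follow which, then rank the likely candidates. Real systems
# use much larger corpora and smarter models.
from collections import Counter, defaultdict

corpus = "the quick brown fox jumps over the lazy dog then the queen quietly quit"

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    if a.isalpha() and b.isalpha():   # ignore spaces between words
        follows[a][b] += 1

def likely_next(letter, n=3):
    """Most probable next letters after `letter`, best first."""
    return [c for c, _ in follows[letter].most_common(n)]

print(likely_next("q"))  # in this sample, only 'u' ever follows 'q'
```

A program like Dasher uses exactly this kind of ranking to decide which letters to make the biggest, easiest targets on screen.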

The Dasher software interface

Key combinations

The use of software to expand the usefulness of QWERTY keyboards is now commonplace, with programs pre-installed onto devices which run in the background. These IMEs, or Input Method Editors, can convert a set of keystrokes into a character that’s not available on the keyboard itself. For example, while I can type SHIFT+8 to display the asterisk (*) symbol that sits on the 8 key, there’s no degree symbol (as in 30°C) on my keyboard. On a Windows computer I can create it using the numeric keypad on the right of some keyboards, holding down the ALT key while typing the sequence 0176. While I’m typing the numbers nothing appears, but once I complete the sequence and release the ALT key the ° appears on the screen.

English language keyboard image by john forcier from Pixabay, showing the numeric keypad highlighted in yellow with the two Alt keys and the ‘num lock’ key highlighted in pink. Num lock (‘numeric lock’) needs to be switched on for the keypad to work; then hold down the Alt key while typing a combination of numbers on the numeric keypad to produce a range of additional ‘alt code’ characters.
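Behind the Alt-code trick, the typed number is simply a character code: 0176 is the degree sign in the Windows-1252 character set, and for this character that matches Unicode code point U+00B0 (176 in decimal). A couple of lines of Python confirm the mapping:

```python
# The Alt code 0176 corresponds to character code 176, which is the
# degree sign both in Windows-1252 and in Unicode (U+00B0).
print(chr(176))   # the character at code point 176
print(ord("°"))   # the code point of the degree sign
```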

When Japanese speakers type they use the main ‘ABC’ letters on the keyboard, but the principle is the same – a combination of keys produces a sequence of letters that the IME converts to the correct character. Or perhaps they could use Google Japan’s April Fool solution from 2010, below!

Google Japan’s 2010 April Fool joke with a “Japanese keyboard” set out as a drumkit for easy reach of all keys…

*QWERTY is a ‘word’ which starts with a Q that’s not followed by a U of course…

Further reading

The ‘retrieval system’ of typing mentioned above, which lets the user get to the word or characters more quickly, is similar to the general problem-solving strategy called ‘Divide and Conquer’. You can read more about that and other search algorithms in our free booklet ‘Searching to Speak‘ (PDF) which explores how the design of an algorithm could allow someone with locked-in syndrome to communicate. Locked-in syndrome is a condition resulting from a stroke where a person is totally paralysed. They can see, hear and think but cannot speak. How could a person with locked-in syndrome write a book? How might they do it if they knew some computational thinking?


This blog is funded through EPSRC grant EP/W033615/1.

Joyce Wheeler: The Life of a Star

Exploding star

by Paul Curzon, Queen Mary University of London

The first computers transformed the way research is done. One of the very first computers, EDSAC*, contributed to the work of three Nobel prize winners: in Physics, Chemistry and Medicine. Astronomer Joyce Wheeler was an early researcher to make use of the potential of computers to aid the study of other subjects in this way. She was a Cambridge PhD student in 1954 investigating the nuclear reactions that keep stars burning. This involved doing lots of calculations to work out the changing behaviour and composition of the star.

Exploding star
Star image by Dieter from Pixabay

Joyce had seen EDSAC on a visit to the university before starting her PhD, and learnt to program it from its basic programming manual so that she could get it to do the calculations she needed. She would program by day and let EDSAC number crunch using her programs every Friday night, leaving her to work on the results in the morning, and then start the programming for the following week’s run. EDSAC not only allowed her to do calculations accurately that would otherwise have been impossible, it also meant she could run calculations over and over, tweaking what was done, refining the accuracy of the results, and checking the equations quickly with sample numbers. As a result EDSAC helped her to estimate the age of stars.

*Electronic Delay Storage Automatic Calculator

EDSAC Monitoring Desk, image from Wikipedia

This article was originally published on the CS4FN website and also appears on page 17 of Issue 23 of the CS4FN magazine, The Women are (still) Here. You can download a free copy of the magazine as a PDF below, along with all of our other free material.



Related Magazine …


This blog is funded through EPSRC grant EP/W033615/1.

Alan Turing’s life

by Jonathan Black, Paul Curzon and Peter W. McOwan, Queen Mary University of London

From the archive

Alan Turing smiling

Alan Turing was born in London on 23 June 1912. His parents were both from successful, well-to-do families, which in the early part of the 20th century in England meant that his childhood was pretty stuffy. He didn’t see his parents much, wasn’t encouraged to be creative, and certainly wasn’t encouraged in his interest in science. But even early in his life, science was what he loved to do. He kept up his interest while he was away at boarding school, even though his teachers thought it was beneath well-bred students. When he was 16 he met a boy called Christopher Morcom who was also very interested in science. Christopher became Alan’s best friend, and probably his first big crush. When Christopher died suddenly a couple of years later, Alan partly helped deal with his grief with science, by studying whether the mind was made of matter, and where – if anywhere – the mind went when someone died.

The Turing machine

After he finished school, Alan went to the University of Cambridge to study mathematics, which brought him closer to questions about logic and calculation (and mind). After he graduated he stayed at Cambridge as a fellow, and started working on a problem that had been giving mathematicians headaches: whether it was possible to determine in advance if a particular mathematical proposition was provable. Alan solved it (the answer was no), but it was the way he solved it that helped change the world. He imagined a machine that could move symbols around on a paper tape to calculate answers. It would be like a mind, said Alan, only mechanical. You could give it a set of instructions to follow, the machine would move the symbols around and you would have your answer. This imaginary machine came to be called a Turing machine, and it forms the basis of how modern computers work.

Code-breaking at Bletchley Park

By the time the Second World War came round, Alan was a successful mathematician who’d spent time working with the greatest minds in his field. The British government needed mathematicians to help them crack the German codes so they could read their secret communiqués. Alan had been helping them on and off already, but when war broke out he moved to the British code-breaking headquarters at Bletchley Park to work full-time. Based on work by Polish mathematicians, he helped crack one of the Germans’ most baffling codes, called the Enigma, by designing a machine (based on an earlier version by the Poles again!) that could help break Enigma messages as long as you could guess a small bit of the text (see box). With the help of British intelligence that guesswork was possible, so Alan and his team began regularly deciphering messages from ships and U-boats. As the war went on the codes got harder, but Alan and his colleagues at Bletchley designed even more impressive machines. They brought in telephone engineers to help marry Alan’s ideas about logic and statistics with electronic circuitry. That combination was about to produce the modern world.

Building a brain

The problem was that the engineers and code-breakers were still having to make a new machine for every job they wanted it to do. But Alan still had his idea for the Turing machine, which could do any calculation as long as you gave it different instructions. By the end of the war Alan was ready to have a go at building a Turing machine in real life. If it all went to plan, it would be the first modern electronic computer, but Alan thought of it as “building a brain”. Others were interested in building a brain, though, and soon there were teams elsewhere in the UK and the USA in the race too. Eventually a group in Manchester made Alan’s ideas a reality.

Troubled times

Not long after, he went to work at Manchester himself. He started thinking about new and different questions, like whether machines could be intelligent, and how plants and animals get their shape. But before he had much of a chance to explore these interests, Alan was arrested. In the 1950s, gay sex was illegal in the UK, and the police had discovered Alan’s relationship with a man. Alan didn’t hide his sexuality from his friends, and at his trial Alan never denied that he had relationships with men. He simply said that he didn’t see what was wrong with it. He was convicted, and forced to take hormone injections for a year as a form of chemical castration.

Although he had had a very rough period in his life, he kept living as well as possible, becoming closer to his friends, going on holiday and continuing his work in biology and physics. Then, in June 1954, his cleaner found him dead in his bed, with a half-eaten, cyanide-laced apple beside him.

Alan’s suicide was a tragic, unjust end to a life that made so much of the future possible.

More on …

Related Magazines …

cs4fn issue 14 cover

This blog is funded through EPSRC grant EP/W033615/1.

The paranoid program

by Paul Curzon, Queen Mary University of London

One of the greatest characters in Douglas Adams’ science fiction radio series, books and film The Hitchhiker’s Guide to the Galaxy was Marvin the Paranoid Android. Marvin wasn’t actually paranoid though. Rather, he was very, very depressed. This was because, as he often noted, he had ‘a brain the size of a planet’ but was constantly given trivial and uninteresting jobs to do. Marvin was fiction. One of the first real computer programs to be able to converse with humans, PARRY, did aim to behave in a paranoid way, however.

PARRY was in part inspired by the earlier ELIZA program. Both were early attempts to write what we would now call chatbots: programs that could have conversations with humans. This area of Natural Language Processing is now a major research area. Modern chatbot programs rely on machine learning to learn rules from real conversations that tell them what to say in different situations. Early programs relied on rules hand-written by the programmer. ELIZA, written by Joseph Weizenbaum, was the most successful early program to do this and fooled people into thinking they were conversing with a human. One set of rules, called DOCTOR, that ELIZA could use, allowed it to behave like a therapist of the kind popular at the time who just echoed back things their patient said. Weizenbaum’s aim was not actually to fool people, as such, but to show how trivial human-computer conversation was: a relatively simple approach, where the program looked for trigger words and used them to choose pre-programmed responses, could lead to realistic-seeming conversation.

PARRY was more serious in its aim. It was written by Kenneth Colby, a psychiatrist at Stanford, in the early 1970s. He was trying to simulate the behaviour of a person suffering from paranoid schizophrenia, a condition whose symptoms include the person believing that others have hostile intentions towards them. Innocent things other people say are seen as hostile even when there was no such intention.

PARRY was based on a simple model of how those with the condition were thought to behave. Writing programs that simulate something being studied is one of the ways computer science has added to the way we do science. If you fully understand a phenomenon, and have embodied that understanding in a model that describes it, then you should be able to write a program that simulates that phenomenon. Once you have written a program you can test it against reality to see if it does behave the same way. If there are differences then this suggests the model, and so your understanding, is not yet fully accurate. The model needs improving to deal with the differences. PARRY was an attempt to do this in the area of psychiatry. Schizophrenia is not in itself well-defined: there is no objective test to diagnose it. Psychiatrists come to a conclusion about it just by observing patients, based on their experience. Could a program display convincing behaviours?

It was tested by doing a variation of the Turing Test: Alan Turing’s suggestion of a way to tell if a program could be considered intelligent or not. He suggested having humans and programs chat to a panel of judges via a computer interface. If the judges cannot accurately tell them apart then he suggested you should accept the programs as intelligent. With PARRY, rather than testing whether the program was intelligent, the aim was to find out if it could be distinguished from real people with the condition. A series of psychiatrists were therefore allowed to chat with a series of runs of the program as well as with actual people diagnosed with paranoid schizophrenia. All conversations were through a computer. The psychiatrists were not told in advance which were which. Other psychiatrists were later allowed to read the transcripts of those conversations. All were asked to pick out the people and the programs. The result was they could only correctly tell which was a human and which was PARRY about half the time. As that was about as good as tossing a coin to decide, it suggests the model of behaviour was convincing.

As ELIZA was simulating a mental health doctor and PARRY a patient someone had the idea of letting them talk to each other. ELIZA (as the DOCTOR) was given the chance to chat with PARRY several times. You can read one of the conversations between them here. Do they seem believably human? Personally, I think PARRY comes across more convincingly human-like, paranoid or not!


Activity for you to do…

If you can program, why not have a go at writing your own chatbot? If you can’t, writing a simple chatbot is quite a good project to learn with, as long as you start simple with fixed conversations. As you make it more complex it can, like ELIZA and PARRY, be based on looking for keywords in the things the other person types, together with template responses, as well as some fixed starter questions, also used to change the subject. It is easier if you stick to a single area of interest (make it football mad, for example): “What’s your favourite team?” … “Liverpool” … “I like Liverpool because of Klopp, but I support Arsenal.” … ”What do you think of Arsenal?” …
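As a starting point, the keyword-and-template approach can be sketched in a few lines of Python. The triggers and canned replies here are just invented examples to show the structure:

```python
# A minimal keyword-triggered chatbot in the spirit of ELIZA and PARRY:
# scan the input for a trigger word and return a canned response,
# falling back to a subject-changing question if nothing matches.
import random

RULES = {
    "liverpool": "I like Liverpool, but I support Arsenal.",
    "arsenal": "Arsenal are my team! Who do you support?",
    "referee": "Don't get me started on referees...",
}
FALLBACKS = ["What's your favourite team?", "Did you see the match last night?"]

def reply(message):
    words = message.lower().split()
    for trigger, response in RULES.items():
        if trigger in words:
            return response
    return random.choice(FALLBACKS)  # no trigger found: change the subject

print(reply("I think Liverpool played well"))
```

Adding more rules, or templates that echo back part of what the user typed, quickly makes the conversation feel more lifelike, which is essentially what DOCTOR did.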

Alternatively, perhaps you could write a chatbot to bring Marvin to life, depressed about everything he is asked to do, if that is not too depressingly simple, should you have a brain the size of a planet.


More on …

Related Magazines …

Issue 16 cover clean up your language

This blog is funded through EPSRC grant EP/W033615/1.

Hidden Figures: NASA’s brilliant calculators #BlackHistoryMonth

Full Moon and silhouetted tree tops

by Paul Curzon, Queen Mary University of London

Full Moon with a blue filter
Full Moon image by PIRO from Pixabay

NASA Langley was the birthplace of the U.S. space program where astronauts like Neil Armstrong learned to land on the moon. Everyone knows the names of astronauts, but behind the scenes a group of African-American women were vital to the space program: Katherine Johnson, Mary Jackson and Dorothy Vaughan. Before electronic computers were invented ‘computers’ were just people who did calculations and that’s where they started out, as part of a segregated team of mathematicians. Dorothy Vaughan became the first African-American woman to supervise staff there and helped make the transition from human to electronic computers by teaching herself and her staff how to program in the early programming language, FORTRAN.

FORTRAN code on a punched card, from Wikipedia.

The women switched from being the computers to programming them. These hidden women helped put the first American, John Glenn, in orbit, and over many years worked on calculations like the trajectories of spacecraft and their launch windows (the small period of time when a rocket must be launched if it is to get to its target). These complex calculations had to be correct. If they got them wrong, the mistakes could ruin a mission, putting the lives of the astronauts at risk. Get them right, as they did, and the result was a giant leap for humankind.

See the film ‘Hidden Figures’ for more of their story (trailer below).

This story was originally published on the CS4FN website and was also published in issue 23, The Women Are (Still) Here, on p21 (see ‘Related magazine’ below).

More on …


See more in ‘Celebrating Diversity in Computing

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing


Related Magazine …

This blog is funded through EPSRC grant EP/W033615/1.

Freddie Figgers – the abandoned baby who became a runaway telecom tech star

by Jo Brodie and Paul Curzon, Queen Mary University of London

As a baby, born in the US in 1989, Freddie Figgers was abandoned by his biological parents but he was brought up with love and kindness by two much older adoptive parents who kindled his early enthusiasm for fixing things and inspired his work in smart health. He now runs the first Black-owned telecommunications company in the US.

Freddie Figgers in 2016

When Freddie was 9 his father bought him an old (broken) computer from a charity shop to play around with. He’d previously enjoyed tinkering with his father’s collection of radios and alarm clocks and when he opened up the computer could see which of its components and soldering links were broken. He spotted that he could replace these with the same kinds of components from one of his dad’s old radios and, after several attempts, soon his computer was working again – Freddie was hooked, and he started to learn how to code.

When he was 12 he attended an after-school club and set to work fixing the school’s broken computers. His skill impressed the club’s leader, who also happened to be the local Mayor, and soon Freddie was being paid several dollars an hour to repair even more computers for the Mayor’s office (in the city of Quincy, Florida) and her staff. A few years later Quincy needed a new system to ensure that everyone’s water pressure was correct. A company offered to create software to monitor the water pressure gauges and said it would cost 600,000 dollars. Freddie, now 15 and still working with the Mayor, offered to create a low-cost program of his own and he saved the city thousands in doing so.

He was soon offered other contracts and used the money coming in to set up his own computing business. He heard about an insurance company in another US city whose offices had been badly damaged by a tornado and lost all of their customers’ records. That gave him the idea to set up a cloud computing service (which means that the data are stored in different places and if one is damaged the data can easily be recovered from the others).

His father, now quite elderly, had dementia and regularly wandered off and got lost. Freddie found an ingenious way to help him by rigging up one of his dad’s shoes with a GPS detector and two-way communication connected to his computer – he could talk to his dad through the shoe! If his dad was missing Freddie could talk to him, find out where he was and go and get him. Freddie later sold his shoe tracker for over 2 million dollars.

Living in a rural area he knew that mobile phone coverage and access to the internet were not as good as in larger cities. Big telecommunications companies are not keen to invest their money and equipment in areas with much smaller populations, so instead Freddie decided to set up his own. It took him quite a few applications to the FCC (the US’s Federal Communications Commission, which regulates internet and phone providers) but eventually, at 21, he was both the youngest and the first Black person in the US to own a telecoms company.

Most telecoms companies just provide a network service but his company also creates affordable smart phones which have ‘multi-user profiles’ (meaning that phones can be shared by several people in a family, each with their own profile). The death of his mother’s uncle, from a diabetic coma, also inspired him to create a networked blood glucose (sugar) meter that can link up wirelessly to any mobile phone. This not only lets someone share their blood glucose measurements with their healthcare team, but also with close family members who can help keep them safe while their glucose levels are too high.

Freddie has created many tools to help people in different ways through his work in health and communications – he’s even helping the next generation too. He’s also created a ‘Hidden Figgers’ scholarship to encourage young people in the US to take up tech careers, so perhaps we’ll see a few more fantastic folk like Freddie Figgers in the future.

More on …


This article was originally published on our sister website at Teaching London Computing (which has lots of free resources for computing teachers). It hasn’t yet been published in an issue of CS4FN but you can download all of our free magazines here.

See more in ‘Celebrating Diversity in Computing

We have free posters to download and some information about the different people who’ve helped make modern computing what it is today.

Screenshot showing the vibrant blue posters on the left and the muted sepia-toned posters on the right

Or click here: Celebrating diversity in computing

Further reading

See also

A ‘shoe tech’ device for people who have no sense of direction – read about it in ‘Follow that Shoe’ on the last page of the wearable technology issue of CS4FN (‘Technology Worn Out (And About)’, issue 25).

Right to Repair – a European movement to make it easier for people to repair their devices, or even just change the battery in a smartphone themselves. See also the London-based Restart Project which is arguing for the same in the UK.


This blog is funded through EPSRC grant EP/W033615/1.

Hiding in Skype: cryptography and steganography

Magic book with sparkly green and purple colours

by Paul Curzon, Queen Mary University of London

Computer Science isn’t just about using language, sometimes it’s about losing it. Sometimes people want to send messages so no one even knows they exist and a great place to lose language is inside a conversation.

Cryptography is the science of making messages unreadable. Spymasters have used it for a thousand years or more. Now it’s a part of everyday life. It’s used by the banks every time you use a cash point and by online shops when you buy something over the Internet. It’s used by businesses that don’t want their industrial secrets revealed and by celebrities who want to be sure that tabloid hackers can’t read their texts.

Cryptography stops messages being read, but sometimes just knowing that people are having a conversation can reveal more than they want, even if you don’t know what was said. Knowing a football star is exchanging hundreds of texts with his team mate’s girlfriend suggests something is going on, for example. Similarly, CIA chief David Petraeus, whose downfall made international news, might have kept his secret and his job if the emails from his lover had been hidden. David Bowie kept his 2013 comeback single ‘Where are we now?’ a surprise until the moment it was released. It might not have made him the front page news it did if a music journalist had just tracked who had been talking to who amongst the musicians involved in the months before.

That’s where steganography comes in – the science of hiding messages so no one even knows they exist. Invisible ink is one form of steganography used, for example, by the French resistance in World War II. More bizarre forms have been used over the years though – an Ancient Greek slave had a message tattooed on his shaven head warning of Persian invasion plans. Once his hair had grown back he delivered it with no one along the way any the wiser.

Digital communication opens up new ways to hide messages. Computers store information using a code of 0s and 1s: bits. Steganography is then about finding places to hide those bits. A team of Polish researchers led by Wojciech Mazurczyk have now found a way to hide them in a Skype conversation.

When you use Skype to make a phone call, the program converts the sounds you make to a long series of bits. They are sent over the Internet and converted back to sound at the other end. At the same time more sounds as bits stream back from the person you are talking to. Data transmitted over the Internet isn’t sent all in one go, though. It’s broken into packets: a bit like taking your conversation and tweeting it one line at a time.

Why? Imagine you run a crack team of commandos who have to reach a target in enemy territory to blow it up – a stately home where all the enemy’s Generals are having a party perhaps. If all the commandos travel together in one army truck and something goes wrong along the way probably no one will make it – a disaster. If on the other hand they each travel separately, rendezvousing once there, the mission is much more likely to be successful. If a few are killed on the way it doesn’t matter as the rest can still complete the mission.

The same applies to a Skype call. Each packet contains a little bit of the full conversation and each makes its own way to the destination across the Internet. On arriving there, they reform into the full message. To allow this to happen, each packet includes some extra data that says, for example, what conversation it is part of, how big it is and also where it fits in the sequence. If some don’t make it then the rest of the conversation can still be put back together without them. As long as too much isn’t missing, no one will notice.

Skype does something special with its packets. The size of the packets changes depending on how much data needs to be transmitted. If the person is talking each packet carries a lot of information. If the person is listening then what is being transmitted is mainly silence. Skype then sends shorter packets. The Polish team realised they could exploit this for steganography. Their program, SkyDe, intercepts Skype packets looking for short ones. Any found are replaced with packets holding the data from the covert message. At the destination another copy of SkyDe intercepts them and extracts the hidden message and passes it on to the intended recipient. As far as Skype is concerned some packets just never arrive.

There are several properties that matter for a good steganographic technique. One is its bandwidth: how much data can be sent using the method. Because Skype calls contain a lot of silence, SkyDe has a high bandwidth: there are lots of opportunities to hide messages. A second important property is, obviously, undetectability. The Polish team’s experiments have shown that SkyDe messages are very hard to detect. Because only packets that contain silence are hijacked (and so lost), the people having the conversation won’t notice, and the Skype receiver itself can’t easily tell, because what is happening is no different to a typical unreliable network: packets go missing all the time. Because both the Skype data and the hidden messages are encrypted, someone observing the packets travelling over the network won’t see a difference – they are all just random patterns of bits. Skype calls are now common, so there are also lots of natural opportunities for sending messages this way – no one is going to get suspicious that lots of calls are suddenly being made.

All in all SkyDe provides an elegant new form of steganography. Invisible ink is so last century (and tattooing messages on your head so very last millennium). Now the sound of silence is all you need to have a hidden conversation.

A version of this article was originally published on the CS4FN website and a copy also appears on pages 10-11 of Issue 16 of the magazine (see Related magazines below).

You can also download PDF copies of all of our free magazines.


Related Magazines …


This blog is funded through EPSRC grant EP/W033615/1.

The heart of an Arabic programming language

A colourful repeating geometric pattern

‘Hello World’, in Arabic

by Paul Curzon, Queen Mary University of London

So far almost all programming languages have been based on English, but that doesn’t need to be the case. Computers don’t care. Computer scientist Ramsey Nasser developed the first programming language that uses Arabic script. His language is called قلب, pronounced “Qalb”, after the Arabic word for heart. As long as a computer understands what to do with the instructions it’s given, they can be in any form, from numbers to letters to images.
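You can see the point that “computers don’t care” even in an everyday language like Python, which allows identifiers written in many scripts, Arabic included. This is plain Python, not Qalb, and the function name is just my own illustration:

```python
# A function whose name is an Arabic word for "hello" - valid Python 3,
# since identifiers may use non-Latin scripts.
def مرحبا():
    return "Hello World"

print(مرحبا())  # prints: Hello World
```

The symbols a language is written in are a design choice made by its creators, not something the machine itself requires.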

A version of this article was originally published on the CS4FN website and a copy also appears on page 2 of Issue 16 of the magazine (see Related magazines below).



Related Magazines …



Escape from Egypt

The humble escape character

by Paul Curzon, Queen Mary University of London

Egyptian hieroglyphs from Luxor
Hieroglyphs at Luxor. Image by Alexander Paukner from Pixabay 

The escape character is a rather small and humble thing, often ignored and easily misunderstood, but vital in programming languages. It is simply used to say that the symbols following it should be treated differently. The n in \n is no longer just an n but a newline character, for example. It is the escape character \ that makes the change. The escape character has a long history, dating back to at least Ancient Egypt and probably earlier.

The Ancient Egyptians famously used a language of pictures to write: hieroglyphs. How to read the language was lost for thousands of years, and it proved to be fiendishly difficult to decipher. The key to doing this turned out to be the Rosetta Stone, discovered when Napoleon invaded Egypt. It contained the same text in three different languages: the Hieroglyphic script, Greek and also an Egyptian script called Demotic.

A whole series of scholars ultimately contributed, but the final decipherment was done by Jean-François Champollion. Part of the difficulty, even with a Greek translation of the Rosetta Stone text available, was that it wasn’t, as commonly thought, just a language where symbolic pictures represented words (a picture of the sun meaning sun, for example). Instead, it combined several different systems of writing using the same symbols, and those symbols could be read in different ways. The first way was as alphabetic letters that stood for consonants (like b, d and p in our alphabet); words could be spelled out in this alphabet. The second was phonetic, where symbols stood for groups of such sounds. Finally, a picture could stand not for a sound but for a meaning. A picture of a duck could mean a particular sound or it could mean a duck!

Part of the reason it took so long to decipher the language was that it was assumed that all the symbols were pictures of the things they represented. It was only when eventually scholars started to treat some as though they represented sounds that progress was made. Even more progress was made when it was realised the same symbol meant different things and might be read in a different way, even in the same phrase.

However, if the same symbol meant different things in different places of a passage, how on earth could even Egyptian readers tell? How might you indicate a particular group of characters had a special meaning?

A cartouche for Cleopatra
A cartouche for Cleopatra (from Wikipedia)

One way the Egyptians used specifically for names is called a cartouche: they enclosed the sequence of symbols that represented a name in an oval-like box, like the one shown for Cleopatra. This was one of the first keys to unlocking the language as the name of pharaoh Ptolemy appeared several times in the Greek of the Rosetta Stone. Once someone had the idea that the cartouches might be names, the symbols used to spell out Ptolemy a letter at a time could be guessed at.

The Egyptian hieroglyph for aleph (an egyptian eagle)
The Egyptian hieroglyph for aleph

Putting things in boxes works for a pictorial language, but it isn’t as convenient as a more general way of indicating different uses of particular symbols or sequences of them. The Ancient Egyptians therefore had a much simpler way too. The normal reading of a symbol was as a sound. A symbol that was to be treated as a picture of the word it represented was followed by a line (so despite all the assumptions of the translators, and the general perception of hieroglyphs, a hieroglyph read as a picture is the exception, not the norm!).

The Egyptian hieroglyph for an Egyptian eagle (an Egyptian eagle followed by a line).
The Egyptian hieroglyph for the Egyptian Eagle

For example, the hieroglyph that is a picture of the Egyptian eagle stands for a single consonant sound, aleph. We would pronounce it ‘ah’ and it can be seen in the cartouche for Cleopatra that sounds out her name. However, add the line after the picture of the eagle (as shown) and it just means what it looks like: the Egyptian eagle.

Cartouches actually included the line at the end too, in itself indicating their special meaning, as can be seen in the Cleopatra cartouche above.

The Egyptian line hieroglyph is what we would now call an escape character: its purpose is to say that the symbol it is paired with is not treated normally, but in a special way.
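The Egyptian scheme can be modelled in a few lines of Python. This is a toy of my own invention, not real Egyptology: an ASCII letter stands in for a glyph, and a following | plays the role of the Egyptian line, switching the glyph from its sound value to its picture meaning.

```python
# Toy "hieroglyph" reader: a glyph is normally read as a sound, but a
# glyph followed by '|' (our stand-in for the Egyptian line) is read
# as the thing it pictures.
SOUND = {"A": "ah"}        # 'A' stands in for the eagle glyph's sound, aleph
PICTURE = {"A": "eagle"}   # the same glyph read as what it depicts

def read(glyphs):
    out, i = [], 0
    while i < len(glyphs):
        g = glyphs[i]
        if i + 1 < len(glyphs) and glyphs[i + 1] == "|":
            out.append(PICTURE[g])   # escaped: picture meaning
            i += 2
        else:
            out.append(SOUND[g])     # default: sound value
            i += 1
    return out

print(read("AA|"))  # ['ah', 'eagle']
```

The same symbol appears twice, but the escape mark makes the second occurrence mean something completely different, which is exactly the job \ does in a modern programming language.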

Computer scientists use escape characters in a variety of ways in programming languages, as well as in markup languages like HTML. Different languages use different symbols as the escape character, though \ is popular (and very reminiscent of the Egyptian line!). One place escapes are used is to represent special characters in strings (sequences of characters like words or sentences) so they can be manipulated or printed. If I want my program to print a word like “not” then I must pass an appropriate string to the print command. I just put the three characters in quotation marks to show I mean the characters n then o then t. Simple.

However, the string “\no\t” does not similarly mean five characters \, n, o, \ and t. It still represents three characters, but this time \n, o and \t. \ is an escape character saying that the n and the t symbols that follow it are not really representing the n or t characters but instead stand for a newline (\n : which jumps to the next line) and a tab character (\t : which adds some space). “\no\t” therefore means newline o tab.
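You can check this in a language like Python, which uses exactly these escape sequences:

```python
s = "\no\t"      # four symbols typed, but only three characters stored
print(len(s))    # 3 - a newline, an o and a tab, not 5 characters
print("not")     # prints: not
print(s)         # prints a blank line, then o followed by a tab
```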

This raises the question: what if you actually want to print a \ character? If you try to use it as it is, it just turns whatever comes after it into something else and disappears. The solution is simple. You escape it by preceding it with a \. \\ means a single \ character! So “n\\t” means n, followed by an actual \ character, followed by a t. The normal meaning of \ is to escape what follows; its special meaning, when it is itself escaped, is just to be a normal character!

Other characters’ meanings are inverted like this too, where the more natural meaning is the one you only get with an escape character. What if, for example, you want a program to print a quotation, so need quotation marks in your string? Quotation marks already have another meaning: they are used to show where a string starts and ends. So if you want a string consisting of the five characters “, n, o, t and ” you might try to write “”not”” but that doesn’t work, as the initial “” already makes a string, just one with no characters in it. The string has ended before you got to the n. Escape characters to the rescue. You need ” to mean something other than its “normal” meaning of starting or ending a string, so just escape it inside the string and write “\”not\””.
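In Python, for instance, the escaped forms behave like this:

```python
print("n\\t")     # prints: n\t - a real backslash, not a tab
print(len("\\"))  # 1 - two symbols in the source, one character stored
print("\"not\"")  # prints: "not" - escaped quote marks inside a string
```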

Once you get used to it, escaping characters is actually very simple, but it is easy to find confusing when first learning to program. It is not surprising that those trying to decipher hieroglyphs struggled so much, as escapes were only one of the problems they had to contend with.



Related Magazines …



The Mummy in an AI world: Jane Webb’s future

by Paul Curzon, Queen Mary University of London

The sarcophagus of a mummy
Image by albertr from Pixabay

Inspired by Mary Shelley’s Frankenstein, the 17-year-old Victorian orphan Jane Webb secured her future by writing the first ever Mummy story. Perhaps the most amazing thing about the three-volume book, though, is the 22nd-century world in which it is set.

On the death of her father, Jane realised she needed to find a way to support herself, and did so by publishing her novel “The Mummy!” in 1827. In contrast to their modern role as stars of horror films, Webb’s Mummy, a reanimation of Cheops, was actually there to help those doing good and punish those who were evil. Napoleon had invaded Egypt at the start of the century, taking with him scholars intent on understanding Ancient Egyptian society. Europe was fascinated with Ancient Egypt and awash with Egyptian artefacts and stories around them. In London, the Egyptian Hall had been built in Piccadilly in 1812 to display Egyptian artefacts, and in 1821 it displayed a replica of the tomb of Seti I. The Rosetta Stone, which led to the decipherment of hieroglyphics, was cracked in 1822. The time was therefore ripe for someone to come up with the idea of a Mummy story.

The novel was not, however, set in Victorian times but in a 22nd-century future that she imagined, and that future was perhaps more amazing than the idea of a mummy coming to life. Her version of the future was full of technological inventions supporting humanity, as well as social predictions, many of which have come to fruition, such as space travel and the idea that women might wear trousers as the height of fashion (making her a feminist hero). The machines she described in the book led to her meeting her future husband, John Loudon. A writer about farming and gardening, he was so impressed by the idea of a mechanical milking machine included in the book that he asked to meet her. They married soon after (and she became Jane Loudon).

The skilled artificial intelligences she wrote into her future society are perhaps the most amazing of her ideas, in that she was the first person to really envision in fiction a world where AIs and robots were embedded in society, just doing good as standard. To put this in the context of other predictions, Ada Lovelace wrote her notes suggesting machines of the future would be able to compose music 20 years later.

Jane Webb’s future was also full of cunning computational contraptions: there were steam-powered robot surgeons, foreseeing the modern robots that are able to do operations (and with their steady hands are better than a human at, for example, eye surgery). She also described Artificial Intelligences replacing lawyers: her machines were fed their legal brief, giving them instructions about the case, through tubes. Whilst robots may not yet have fully replaced barristers and judges, artificial intelligence programs are already used, for example, to decide the length of sentences of those convicted in some places, and many now see it as only a matter of time before lawyers are spending their time working with Artificial Intelligence programs as standard. Jane’s world also included a version of the Internet, at a time before the electric telegraph existed, when telegraph messages were sent by semaphore between networks of towers.

The book ultimately secured her future as required, and whilst we do not yet have any real reanimated mummies wandering around doing good deeds, Jane Webb did envision lots of useful inventions, many of which are now a reality, and certainly had pretty good ideas about how future computer technology would pan out in society… despite computers, never mind artificial intelligences, still being well over a century away.



Related Magazines …


EPSRC supported this article through research grant EP/K040251/2, held by Professor Ursula Martin, as well as grant EP/W033615/1.