The heart of an Arabic programming language

A colourful repeating geometric pattern

‘Hello World’, in Arabic

by Paul Curzon, Queen Mary University of London

So far almost all computer languages have been written in English, but that doesn’t need to be the case. Computers don’t care. Computer scientist Ramsey Nasser developed the first programming language that uses Arabic script. His computer language is called قلب. In English, it’s pronounced “Qalb”, after the Arabic word for heart. As long as a computer understands what to do with the instructions it’s given, they can be in any form, from numbers to letters to images.

A version of this article was originally published on the CS4FN website and a copy also appears on page 2 of Issue 16 of the magazine (see Related magazines below).

You can also download PDF copies of all of our free magazines.


Related Magazines …


This blog is funded through EPSRC grant EP/W033615/1.

Escape from Egypt

The humble escape character

by Paul Curzon, Queen Mary University of London

Egyptian hieroglyphs from Luxor
Hieroglyphs at Luxor. Image by Alexander Paukner from Pixabay 

The escape character is a rather small and humble thing, often ignored and easily misunderstood, but vital in programming languages. It is used simply to say that the symbols following it should be treated differently. The n in \n is no longer just an n but a newline character, for example. It is the escape character \ that makes the change. The escape character has a long history, dating back at least to Ancient Egypt and probably earlier.

The Ancient Egyptians famously used a language of pictures to write: hieroglyphs. How to read the language was lost for thousands of years, and it proved to be fiendishly difficult to decipher. The key to doing this turned out to be the Rosetta Stone, discovered when Napoleon invaded Egypt. It contained the same text in three different languages: the Hieroglyphic script, Greek and also an Egyptian script called Demotic.

A whole series of scholars ultimately contributed, but the final decipherment was done by Jean-François Champollion. Part of the difficulty in decipherment, even with a Greek translation of the Rosetta Stone text available, was that it wasn’t, as commonly thought, just a language where symbolic pictures represented words (a picture of the sun, meaning sun, for example). Instead, it combined several different systems of writing using the same symbols. Those symbols could be read in different ways. The first was as alphabetic letters that stood for consonants (like b, d and p in our alphabet). Words could be spelled out in this alphabet. The second was phonetic, where symbols could stand for groups of such sounds. Finally, a picture could stand not for a sound but for a meaning. A picture of a duck could mean a particular sound or it could mean a duck!

Part of the reason it took so long to decipher the language was that it was assumed that all the symbols were pictures of the things they represented. It was only when eventually scholars started to treat some as though they represented sounds that progress was made. Even more progress was made when it was realised the same symbol meant different things and might be read in a different way, even in the same phrase.

However, if the same symbol meant different things in different places of a passage, how on earth could even Egyptian readers tell? How might you indicate a particular group of characters had a special meaning?

A cartouche for Cleopatra
A cartouche for Cleopatra (from Wikipedia)

One way the Egyptians used specifically for names is called a cartouche: they enclosed the sequence of symbols that represented a name in an oval-like box, like the one shown for Cleopatra. This was one of the first keys to unlocking the language as the name of pharaoh Ptolemy appeared several times in the Greek of the Rosetta Stone. Once someone had the idea that the cartouches might be names, the symbols used to spell out Ptolemy a letter at a time could be guessed at.

The Egyptian hieroglyph for aleph (an egyptian eagle)
The Egyptian hieroglyph for aleph

Putting things in boxes works for a pictorial language, but it isn’t so convenient as a more general way of indicating different uses of particular symbols or sequences of them. The Ancient Egyptians therefore had a much simpler way too. The normal reading of a symbol was as a sound. A symbol that was to be treated as a picture of the word it represented was followed by a line (so, despite all the assumptions of the translators and the general perception of them, a hieroglyph used as a picture was the exception, not the norm!).

The Egyptian hieroglyph for an Egyptian eagle (an Egyptian eagle followed by a line).
The Egyptian hieroglyph for the Egyptian Eagle

For example, the hieroglyph that is a picture of the Egyptian eagle stands for a single consonant sound, aleph. We would pronounce it ‘ah’ and it can be seen in the cartouche for Cleopatra that sounds out her name. However, add the line after the picture of the eagle (as shown) and it just means what it looks like: the Egyptian eagle.

Cartouches actually included the line at the end too, indicating in itself their special meaning, as can be seen on the Cleopatra cartouche above.

The Egyptian line hieroglyph is what we would now call an escape character: its purpose is to say that the symbol it is paired with is not treated normally, but in a special way.

Computer Scientists use escape characters in a variety of ways in programming languages as well as in markup languages like HTML. Different languages use a different symbol as the escape character, though \ is popular (and very reminiscent of the Egyptian line!). One place escapes are used is to represent special characters in strings (sequences of characters like words or sentences) so they can be manipulated or printed. If I want my program to print a word like “not” then I must pass an appropriate string to the print command. I just put the three characters in quotation marks to show I mean the characters n then o then t. Simple.

However, the string “\no\t” does not similarly mean five characters \, n, o, \ and t. It still represents three characters, but this time \n, o and \t. \ is an escape character saying that the n and the t symbols that follow it are not really representing the n or t characters but instead stand for a newline (\n : which jumps to the next line) and a tab character (\t : which adds some space). “\no\t” therefore means newline o tab.

This begs the question: what if you actually want to print a \ character? If you try to use it as it is, it just turns whatever comes after it into something else and disappears. The solution is simple. You escape it by preceding it with a \. \\ means a single \ character! So “n\\t” means n, followed by an actual \ character, followed by t. The normal meaning of \ is to escape what follows; its special meaning when it is itself escaped is just to be a normal character! Other characters’ meanings are inverted like this too, where the more natural meaning is the one you only get with an escape character. For example, what if you want a program to print a quotation, so need to print quotation marks? But quotation marks are used to show where a string starts and ends: they already have another meaning. So if you want a string consisting of the five characters “, n, o, t and ” you might try to write “”not””, but that doesn’t work, as the initial “” already makes a string, just one with no characters in it. The string has ended before you got to the n. Escape characters to the rescue. You need ” to mean something other than its “normal” meaning of starting or ending a string, so just escape it inside the string and write “\”not\””.
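If you want to experiment, you can try all of this out in a language like Python, whose string escape sequences behave just as described (most C-family languages work the same way):

```python
# "\no\t" looks like five characters but is really three:
# a newline, the letter o, and a tab.
s = "\no\t"
print(len(s))          # -> 3

# Escaping the backslash itself: "n\\t" is n, one real \ character, t.
backslash = "n\\t"
print(len(backslash))  # -> 3

# Escaping quotation marks so they can appear inside a string:
quoted = "\"not\""
print(quoted)          # -> "not"
print(len(quoted))     # -> 5
```

Try removing one of the backslashes and see what the program prints instead.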

Once you get used to it, escaping characters is actually very simple, but it is easy to find confusing when first learning to program. It is not surprising those trying to decipher hieroglyphs struggled so much, as escapes were only one of the problems they had to contend with.



Chocoholic Subtraction – make an edible calculating Turing machine

Chocoholic Subtraction

by Paul Curzon, Queen Mary University of London

A Turing machine can be used to do any computation, as long as you get its program right. Let’s create a program to do something simple to see how to do it. Our program will subtract two numbers.

A delicious uneven tower of broken chocolate bars with dark chocolate, white chocolate and milk chocolate.
Image by Enotovyj from Pixabay

The first thing we need to do is to choose a code for what the patterns of chocolates mean. To encode the two numbers we want to subtract we will use sequences of dark chocolates separated by milk chocolates, one sequence for each number. The more dark chocolates before the next milk chocolate, the higher the number. For example, if we started with the pattern laid out as below, then it would mean we wanted to compute 4 – 3. Why? Because there is a group of four dark chocolates and then, after some milk chocolates, a group of three more.

M M M D D D D M M D D D M M M M …

(M = Milk chocolate, D = Dark chocolate)
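If you are learning to program, you could even write a little code to lay out a pattern like this for you. Here is one possible sketch in Python (the function name and the amounts of milk-chocolate padding are just choices for this example, matching the layout above):

```python
def encode(first, second, lead=3, sep=2, trail=4):
    """Lay out two numbers as a line of chocolates:
    M = milk chocolate, D = dark chocolate."""
    return "M" * lead + "D" * first + "M" * sep + "D" * second + "M" * trail

# The 4 - 3 pattern above, written without the spaces:
print(encode(4, 3))  # -> MMMDDDDMMDDDMMMM
```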

Coloured flat circular lollipops with swirly spiral pattern
Image by Denis Doukhan from Pixabay

Here is a program that does the subtraction if you follow it when the pattern is laid out like that. It works for any two numbers where the first is the bigger. The answer is given by the final pattern. Try it yourself! Begin with a red lolly and follow the table below. Start at the M on the very left of the pattern above.


Instructions table for Chocolate Turing Machine

From the above starting pattern our subtraction program would leave a new pattern:

M M M D M M M M M M M M M M M …

There is now just a single sequence of dark chocolates with only one chocolate in it. The answer is 1!

Try lining up some chocolates and following the instructions yourself to see how it works.
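You can also get a computer to follow rules like these for you. Below is a small Turing machine simulator in Python. Note that the rule table in it is not the magazine’s table (which is in the image above): it is a hypothetical set of rules I made up for the same unary subtraction idea, and it assumes a slightly different layout: exactly one milk chocolate between the two groups, at least one milk chocolate before the first group, and the first number strictly bigger than the second.

```python
RIGHT, LEFT = 1, -1

# (lolly, chocolate) -> (replacement chocolate, new lolly, move)
RULES = {
    ("find1", "M"): ("M", "find1", RIGHT),   # skip M's to reach group 1
    ("find1", "D"): ("D", "skip1", RIGHT),   # entered group 1
    ("skip1", "D"): ("D", "skip1", RIGHT),   # walk along group 1
    ("skip1", "M"): ("M", "check2", RIGHT),  # step over the single separator
    ("check2", "D"): ("D", "skip2", RIGHT),  # group 2 not empty: carry on
    # ("check2", "M") is missing on purpose: no rule means HALT (group 2 empty)
    ("skip2", "D"): ("D", "skip2", RIGHT),   # walk along group 2
    ("skip2", "M"): ("M", "back2", LEFT),    # went past the end: step back
    ("back2", "D"): ("M", "left2", LEFT),    # eat group 2's rightmost D
    ("left2", "D"): ("D", "left2", LEFT),    # walk back through group 2
    ("left2", "M"): ("M", "left1", LEFT),    # crossed the separator
    ("left1", "D"): ("D", "left1", LEFT),    # walk back through group 1
    ("left1", "M"): ("M", "at1", RIGHT),     # passed group 1's left edge
    ("at1", "D"): ("M", "find1", RIGHT),     # eat group 1's leftmost D; repeat
}

def run(tape, state="find1", pos=0, limit=100_000):
    tape = list(tape)
    for _ in range(limit):
        if pos == len(tape):
            tape.append("M")        # the table of chocolates is endless
        if pos < 0:
            tape.insert(0, "M")
            pos = 0
        key = (state, tape[pos])
        if key not in RULES:        # no rule for this situation: halt
            return "".join(tape)
        write, state, move = RULES[key]
        tape[pos] = write
        pos += move
    raise RuntimeError("ran too long")

# 4 - 3, laid out with a single M between the groups:
print(run("MMMDDDDMDDDMMMM").count("D"))  # -> 1
```

Each cycle eats one dark chocolate from the right end of the second group and one from the left end of the first group; the machine halts when it looks past the separator and finds no dark chocolate left, so the surviving dark chocolates are the answer.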

This article was originally published on the CS4FN website and a copy can also be found on page 11 of Issue 14 of CS4FN, “Alan Turing – the genius who gave us the future”, which can be downloaded as a PDF, along with all our other free material, here: https://cs4fndownloads.wordpress.com/




Chocolate Turing Machines – edible computing

by Paul Curzon, Queen Mary University of London

Could you make the most powerful computer ever created…out of chocolates? It’s actually quite easy. You just have to have enough chocolates (and some lollies). It is one of computer science’s most important achievements.

Imagine you are in a sweet factory. Think big – think Charlie and the Chocolate Factory. A long table stretches off into the distance as far as you can see. On the table is a long line of chocolates. Some are milk chocolate, some dark chocolate. You stand in front of the table looking at the very last chocolate (and drooling). You can eat the chocolates in this factory, but only if you follow the rules of the day. (There are always rules!)

The chocolate eating rules of the day tell you when you can move up and down the table and when you can eat the chocolate in front of you. Whenever you eat a chocolate you have to replace it with another from a bag that is refilled as needed (presumably by Oompa-Loompas).

You also hold a single lolly. Its colour tells you what to do (as dictated by the rules of the day, of course). For example, the rules might say holding an orange one means you move left, whereas a red one means you move right. Sometimes the rules will also tell you to swap the lolly for a new one.

The rules of the day have to have a particular form. They first require you to note what lolly you are holding. You then check the chocolate on the table in front of you, eat it and replace it with a new one. You pick up a lolly of the colour you are told. You finally move left, move right or finish completely. A typical rule might be:

If you hold an orange lolly and a dark chocolate is on the table in front of you, then eat the chocolate and replace it with a milk one. Swap the lolly for a pink one. Finally, move one place to the left.

A shorthand for this might be: if ORANGE, DARK then MILK, PINK, LEFT.

You wouldn’t just have one instruction like this to follow but a whole collection with one for each situation you could possibly be in. With three colours of lollies, for example, there are six possible situations to account for: three for each of the two types of chocolate.
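If you wanted to write the rules of the day down in code, one natural representation is a lookup table from (lolly, chocolate) pairs to actions. This Python sketch (the names are just illustrative) stores the example rule above and counts the situations a complete table must cover:

```python
# The example rule from the text, as a table entry:
# if ORANGE, DARK then MILK, PINK, LEFT
rules = {
    ("ORANGE", "DARK"): ("MILK", "PINK", "LEFT"),
    # ... five more entries would complete the table ...
}

lollies = ["ORANGE", "RED", "PINK"]
chocolates = ["MILK", "DARK"]

# Every (lolly, chocolate) combination is a situation needing a rule:
situations = [(lolly, choc) for lolly in lollies for choc in chocolates]
print(len(situations))  # -> 6
```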

As you follow the rules you gradually change the pattern of chocolates on the table. The trick to making this useful is to make up a code that gives different patterns of chocolates different meanings. For example, a series of five dark chocolates surrounded by milk ones might represent the number 5.

See Chocoholic Subtraction for a set of rules that subtracts numbers for you as a result of shovelling chocolates into your face.

Our chocolate machine is actually a computer as powerful as any that could possibly exist. The only catch is that you must have an infinitely long table!

By powerful we don’t mean fast, but just that it can compute anything that any other computer could. By setting out the table with different patterns at the start, it turns out you can compute anything that it is possible to compute, just by eating chocolates and following the rules. The rules themselves are the machine’s program.

This is one of the most famous results in computer science. We’ve described a chocoholic’s version of what is known as a Turing machine because Alan Turing came up with the idea. The computer is the combination of the table, chocolates and lollies. The rules of the day are its program, the table of chocolates is its memory, and the lollies are what is known as its ‘control state’. When you eat chocolate following the rules, you are executing the program.

Sadly Turing’s version didn’t use chocolates – his genius only went so far! His machine had 1s and 0s on a tape instead of chocolates on a table. He also had symbols instead of lollies. The idea is the same though. The most amazing thing was that Alan Turing worked out that this machine was as powerful as computers could be before any actual computer existed. It was a mathematical thought experiment.

So, next time you are scoffing chocolates at random, remember that you could have been doing some useful computation at the same time as making yourself sick.

This article was originally published on the CS4FN website and a copy can also be found on page 10-11 of Issue 14 of CS4FN, “Alan Turing – the genius who gave us the future”, which can be downloaded as a PDF, along with all our other free material, here: https://cs4fndownloads.wordpress.com/



Microwave health check – using wearable tech to monitor elite athletes’ health

Microwave health check

by Tina Chowdhury, Institute of Bioengineering, School of Engineering and Materials Science, Queen Mary University of London

Black and white photo of someone sweating after exertion
Image by un-perfekt from Pixabay

Microwaves aren’t just useful for cooking your dinner. Passing through your ears they might help check your health in future, especially if you are an elite athlete. Bioengineer Tina Chowdhury tells us about her multidisciplinary team’s work with the National Physical Laboratory (NPL).

Lots of wearable gadgets work out things about us by sensing our bodies. They can tell who you are just by tapping into your biometric data, like fingerprints, features of your face or the patterns in your eyes. They can even do some of this remotely, without you even knowing you’ve been identified. Smart watches and fitness trackers tell you how fast you are running, how fit you are and whether you are healthy, how many calories you have burned and how well you are sleeping (or not sleeping). They also work out things about your heart, like how well it beats. This is done using optical sensor technology: shining light at your skin and measuring how much is scattered by the blood flowing through it.

Microwave Sensors

With PhD student Wesleigh Dawsmith and electronic engineer Rob Donnan, a specialist in microwaves and antennas, we are working on a different kind of sensor to check the health of elite athletes. Instead of using visible light we use invisible microwaves, the kind of radiation that gives microwave ovens their name. Microwave-based wearables have the potential to provide real-time information about how our bodies are coping when under stress, such as when we are exercising: health checks without having to go to hospital. The technology measures how much of the microwaves are absorbed through the ear lobe, using a microwave antenna and wireless circuitry. The amount absorbed is linked to becoming dehydrated as we sweat and overheat during exercise. We can also use the microwave sensor to track important biomarkers like glucose, sodium, chloride and lactate, which can be a sign of dehydration and give warnings of illnesses like diabetes. The sensor sounds an alarm telling the person that they need medication, or are getting dehydrated so need to drink some water.

Making it work

We are working with Richard Dudley at the NPL to turn these ideas into a wearable, microwave-based dehydration tracker. The company has spent eight years working on HydraSenseNPL, a device that clips onto the ear lobe, measuring microwaves with a flexible antenna earphone.

Blue and yellow sine wave patterns representing light
Image by Gerd Altmann from Pixabay

A big question is whether the ear device will become practical to actually wear while doing exercise, for example keeping a good enough contact with the skin. Another is whether it can be made fashionable, perhaps being worn as jewellery. Another issue is that the system is designed for athletes, but most people are not professional athletes doing strenuous exercise. Will the technology work for people just living their normal day-to-day life too? In that everyday situation, sensing microwave dynamics in the ear lobe may not turn out to be as good as an all-in-one solution that tracks your biometrics for the entire day. The long term aim is to develop health wearables that bring together lots of different smart sensors, all packaged into a small space like a watch, that can help people in all situations, sending them real-time alerts about their health.

This article was originally published on the CS4FN website and a copy can also be found on page 8 of Issue 25 of CS4FN, “Technology worn out (and about)“, on wearable computing, which can be downloaded as a PDF, along with all our other free material, here: https://cs4fndownloads.wordpress.com/  





Microwave Racing – making everyday devices easier to use

Microwave Racing

by Dom Furniss and Paul Curzon, 2015

When you go shopping for a new gadget like a smartphone, or perhaps a microwave, are you mostly wowed by its sleek looks? Do you drool over its long list of extra functionality? Do you then never use those extra functions because you don’t know how? Rather than just drooling, why not go to the races to help find a device you will actually use, because it is easy to use!

An image of a microwave (cartoon), all in grey with dials and a button.
Microwave image by Paul from Pixabay

On your marks, get set… microwave

Take an everyday gadget like a microwave. They have been around a while, so manufacturers have had a long time to improve their designs and make them easy to use. You wouldn’t expect there to be problems, would you? There are lots of ways a gadget can be harder to use than necessary: more button presses maybe, lots of menus to get lost in, more special key sequences to forget, easy opportunities to make mistakes, no obvious feedback to tell you what it’s doing… Just trying to do simple things with each alternative is one way to check out how easy they are to use. How simple is it to cook some peas with your microwave? Could it be even simpler? Dom Furniss, a researcher at UCL, decided to video some microwave racing as a fun way to find out…

Everyday devices still cause people problems even when they are trying to do really simple things. What is clear from Microwave racing is that some really are easier to use than others. Does it matter? Perhaps not if it’s just an odd minute wasted here or there cooking dinner or if actually, despite your drooling in the shop, you don’t really care that you never use any of those ‘advanced’ features because you can never remember how to.


Better design helps avoid mistakes

Would it matter to you more though if the device in question was a medical device that keeps a patient alive, but where a mistake could kill? There are lots of such gadgets: infusion pumps, for example. They are the machines you are hooked up to in a hospital via tubes. They pump life-saving drugs, nutrient-rich solutions or extra fluids to keep you hydrated directly into your body. If the nurse makes a mistake setting the rate or volume then it could make you worse rather than better. Surely then you want the device to help the nurse get it right.

Making safer medical devices is what the research project, called CHI+MED, that Dom works* on is actually about. While the consequences are completely different, the core task in setting an infusion pump is actually very similar to setting a microwave – “set a number for the volume of drug and another for the rate to infuse it and hit start” versus “set a number for the power and another for the cooking time, then hit start”. The same types of design solutions (both good and bad) crop up in both cases. Nurses have to set such gadgets day in day out. In an intensive care unit, they will be using several at a time with each patient. Do you really want to waste lots of minutes of such a nurse’s time day in, day out? Do you want a nurse to easily be able to make mistakes in doing so?


User feedback

What the microwave racing video shows is that the designers of gadgets can make them trivially simple to use. They can also make them very hard to use if they focus more on the looks and functions of the thing than ease of use. Manufacturers of devices are only likely to take ease of use seriously if the people doing the buying make it clear that we care. Mostly we give the impression that we want features so that is what we get. Microwave racing may not be the best way to do it (follow the links below to explore more about actual ways professionals evaluate devices), but next time you are out looking for a new gadget check how easy it is to use before you buy … especially if the gadget is an infusion pump and you happen to be the person placing orders for a hospital!



*CHI+MED finished in 2015 and this issue of CS4FN was one of the project’s outputs.

The original version of this article was published on the CS4FN website and on page 16 of Issue 17 of CS4FN, “Machines making medicine safer“, which is free to download as a PDF, along with all of our other free material, here: https://cs4fndownloads.wordpress.com/


This blog post is funded through EPSRC grant EP/W033615/1: Paul Curzon is one of the EPSRC’s ICT Public Engagement Champions.


Can a computer tell a good story?

Cartoon image depicting a Mexica (Aztec) warrior such as a Jaguar Knight

A tale by Rafael Pérez y Pérez

of the Universidad Autónoma Metropolitana, México

(from the CS4FN archive)

What’s your favourite story? Perhaps it’s from a brilliant book you’ve read: a classic like Pride and Prejudice or maybe Twilight, His Dark Materials or a Percy Jackson story? Maybe it’s a creepy tale you heard round a campfire, or a favourite bedtime story from when you were a toddler? Could your favourite story have actually been written by a machine?

Stories are important to people everywhere, whatever the culture. They aren’t just for entertainment though. For millennia, people have used storytelling to pass on their ancestral wisdom. Religions use stories to explain things like how God created the world. Aesop used fables to teach moral lessons. Tales can even be used to teach computing! I even wrote a short story called ‘A Godlike Heart‘ about a kidnapped princess to help my students understand things like bits.

It’s clear that stories are important for humans. That’s why scientists like me are studying how we create them. I use computers to help. Why? Because they give a way to model human experiences as programs and that includes storytelling. You can’t open up a human’s brain as they create a story to see how it’s done. You can analyse in detail what happens inside a computer while it is generating one, though. This kind of ‘computational modelling’ gives a way to explore what is and isn’t going on when humans do it.

So, how to create a program that writes a story? A first step is to look at theories of how humans do it. I started with a book by Open University Professor Mike Sharples. He suggests it’s a continuous cycle between engagement and reflection. During engagement a storyteller links sequences of actions without thinking too much (a bit like daydreaming). During reflection they check what they have written so far, and if needed modify it. In doing so they create rules that limit what they can do during future rounds of engagement. According to him, stories emerge from a constant interplay between engagement and reflection.

What knowledge would you need to write a story about the last football World Cup?

With this in mind I wrote a program called MEXICA that generates stories about the ancient inhabitants of Mexico City (they are often wrongly called the Aztecs – their real name is the Mexicas). MEXICA simulates these engagement-reflection cycles. However, to write a program like this you need to solve lots of problems. For instance, what type of knowledge does the program need to create a story? It’s more complicated than you might think. What knowledge would you need to write a story about the last football World Cup? You would need facts about Brazilian culture, the teams that played, the game’s rules… Similarly, to write a story about the Mexicas you need to know about the ancient cultures of Mexico, their religion, their traditions, and so on. Figuring out the amount and type of knowledge that a system needs to generate a story is a key problem a computer scientist trying to develop a computerised storyteller needs to solve. Whatever the story, you need to know something about human emotions. MEXICA uses its knowledge of them to keep track of the emotional links between the characters, using those links to decide which actions might sensibly follow.

By now you are probably wondering what MEXICA’s stories look like. Here’s an example:

“Jaguar Knight made fun of and laughed at Trader. This situation made Trader really angry! Trader thoroughly observed Jaguar Knight. Then, Trader took a dagger, jumped towards Jaguar Knight and attacked Jaguar Knight. Jaguar Knight’s state of mind was very volatile and without thinking about it Jaguar Knight charged against Trader. In a fast movement, Trader wounded Jaguar Knight. An intense haemorrhage aroused which weakened Jaguar Knight. Trader knew that Jaguar Knight could die and that Trader had to do something about it. Trader went in search of some medical plants and cured Jaguar Knight. As a result, Jaguar Knight was very grateful towards Trader. Jaguar Knight was emotionally tied to Trader but Jaguar Knight could not accept Trader’s behaviour. What could Jaguar Knight do? Trader thought that Trader overreacted; so, Trader got angry with Trader. In this way, Trader – after consulting a Shaman – decided to exile Trader.”

As you can see it isn’t able to write stories as well as a human yet! The way it phrases things is a bit odd, like “Trader got angry with Trader” rather than “Trader got angry with himself”. It’s missing another area of knowledge: how to write English naturally! Even so, the narratives it produces are interesting and tell us something about what does and doesn’t make a good story. And that’s the point. Programs like MEXICA help us better understand the processes and knowledge needed to generate novel, interesting tales. If one day we create a program that can write stories as well as the best writers we will know we really do understand how humans do it. Your own favourite story might not be written by a machine, but in the future, you might find your grandchildren’s favourite ones were!

If you like to write stories, then why not learn to program too? Then you could try writing a storytelling program yourself. Could you improve on MEXICA?

More on …

Natural Language Processing [PORTAL]

A Godlike Heart


Patterns for Sharing – making algorithms generalisable

Patterns for Sharing

by Paul Curzon and Jane Waite, Queen Mary University of London

A white screen with 8 black arrows emanating from a smaller rectangle drawn in marker pen, representing how one idea can be used in multiple ways
Image adapted from original by Gerd Altmann from Pixabay

Computer Scientists like to share: share in a way that means less work for all. Why make people work if you can help them avoid it with some computational thinking? Don’t make them do the same thing over and over: write a program and a computer can do it in future. Invent an algorithm and everyone can use it whenever that problem crops up for them. The same idea applies to inclusive design: making sure designs can be used by anyone, impairments or not. Why make people reinvent the same things over and over? Let others build on your experience of designing accessible things in the past. That is where the idea of Design Patterns and a team called DePIC come in.

The DePIC research team are a group of people from Queen Mary University of London, Goldsmiths and Bath Universities with a mission to solve problems that involve the senses, and they are drawing on their inner desire to share! The team unlock situations where individuals with sensory impairments are disadvantaged in their use of computers. For example, if you are blind how can you ‘see’ a graph on a screen, and so work with others on it or the data it represents? DePIC want to make things easier for those with sensory impairments, whether at home, at leisure or at work: they want to level the playing field so that everyone can take part in our amazing technological world. Why shouldn’t a blind musician feel a sound wave rather than be restricted because they can’t see it (see ‘Blind driver filches funky feely sound machine!’)? DePIC, it turns out, is all about generalisation.

Generalise it!

Generalisation is the computational thinking idea that once you’ve solved a problem, with a bit of tweaking you can use the solution for lots of other similar problems too. Written some software to put names and scores in order for a high score table? Generalise the algorithm so it can sort anything into order: names and addresses, tracks in a music collection, or whatever. Generalisation is a powerful computational thinking idea and it doesn’t just apply to algorithms, it applies to design too. That is the way the DePIC team are working.

DePIC actually stands for Design Patterns for Inclusive Collaboration. Design Patterns are a kind of generalisation: design ideas that work can be used again and again. A Design Pattern describes the problem it solves, including the context it works in, and the way it can be solved. For example, when using computers people often need to find something of interest amongst information on a screen. It might, for example, be to find the point where a graph reaches its highest point, find numbers in a spreadsheet of figures that are unusually low, or locate the hour hand on a watch to tell the time. But what if you aren’t in a position to see the screen?

Anyone can work with information using whatever sense is convenient.

Make good sense

One solution to all these problems is to use sound. You can play a sound and then distort it when the cursor is at the point of interest. The design pattern for this would make clear what features of the sound would work well, its pitch say, and how it should be changed. Experiments are run to find out what works best. Inclusive design patterns make clear how different senses can be used to solve the same problem. For example, another solution is to use touch and mark the point with a distinctive feel, like an increase in resistance (see the 18th century ‘Tactful Watch’!).
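The sound idea can be sketched as a tiny program. This is a made-up example (the function name, frequencies and accent factor are all assumptions, not from the DePIC project): each data value under the cursor is mapped to a pitch, and the pitch is bent sharply when the cursor lands on the point of interest, here the graph's peak.

```python
# Sonification sketch: map the value under the cursor to a pitch (Hz),
# and distort the pitch when the cursor reaches the point of interest.
def pitch_for(values, cursor, base_hz=220.0, step_hz=20.0, accent=1.5):
    pitch = base_hz + step_hz * values[cursor]
    if values[cursor] == max(values):   # point of interest: the peak
        pitch *= accent                 # bend the sound so it stands out
    return pitch

data = [1, 3, 7, 4, 2]
print(pitch_for(data, 0))  # ordinary point -> 240.0 Hz
print(pitch_for(data, 2))  # the peak -> accented 540.0 Hz
```

A blind user sweeping the cursor along the graph would hear the pitch rise and fall with the data, then jump distinctively at the highest point.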

The idea is that designers can then use these patterns in their own designs knowing they work. The patterns help them design inclusively rather than ignoring other senses. Suddenly anyone can work on that screen of information, using whatever senses are most convenient for them at the time. And it all boils down to computer scientists wanting to share.

 


This article was originally published on the CS4FN website and a copy can also be found on page 9 of Issue 19 of the CS4FN magazine “Touch it, feel it, hear it”, which you can download free as a PDF along with all of our other free material here.

 

Your own electrical sea: sensing your movements

by Paul Curzon, Queen Mary University of London

A silhouetted man holding up an umbrella as a lightning storm rages around him against a slate grey sky. He is holding a briefcase.
A man sheltering from an electrical storm Image by Gerd Altmann from Pixabay

You can’t see them, but there are waves of electricity flowing around you right now. Electricity leaks out of power lines, lights, computers and every other gadget nearby. Soon a computer may be able to track your movements by following the ripples you make in your own electromagnetic sea. Scientists at Microsoft Research in the US have figured out a way to sense the position of someone’s body by using it as an antenna.

Why would you want a computer to do this? So that you could control it just by moving your body. This is already possible with systems like the Xbox Kinect, but that works by tracking you with a camera, so you have to stay in front of it or it loses you. A system that uses your body as an electric antenna could follow you throughout a room, or even a whole building.

First you need an instrument that can sense the changes you make in your own electrical field as you move around. In the future, the researchers would like this to be a little gadget you could carry in your pocket, but the technology isn’t quite small enough yet. For this experiment, they used a wireless data sensor that’s about twice the size of a mobile phone. The volunteers wore it in a little backpack. All the electrical data it picked up were transmitted to a computer that would run the calculations to figure out how the user was moving.

Get moving

In their first experiment, the researchers wanted to find out whether their gadget could sense what movements their volunteers made. To do this, they had the volunteers take their sensing devices home and use them in two different rooms: the kitchen and the living room. Those two rooms are usually different from one another in interesting ways. Living rooms are usually big open spaces with only a few small appliances in them. Kitchens, though, are often small, and cram lots of big electrical appliances into the same room. The electrical sensors would really have to work hard to make sense of the readings through all that interference.

Once the experiment was ready to go, each volunteer ran through a series of twelve movements. Their exercises included waving, bending over, stepping to the right or left, and even a bit of kicking and punching. The sensor would collect the electrical readings and then send them to a laptop. What happened after that was a bit of artificial intelligence. The researchers used the first few rounds of movements to train the computer to recognise the electrical signatures of each movement. Later on, it was the computer’s job to match up the readings it got through the sensor to the gestures it already knew. That’s a technique called machine learning.
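The matching step can be illustrated with a toy version of the idea. The researchers' actual machine learning method isn't described here, so this sketch uses the simplest possible classifier, one-nearest-neighbour: label a new electrical reading with the gesture of the closest training example. The signature numbers are invented for illustration.

```python
# Toy gesture recognition: learn from labelled electrical readings,
# then label a new reading by its nearest training example
# (a 1-nearest-neighbour classifier - the real system is more elaborate).
def nearest_gesture(training, reading):
    def distance(a, b):
        # squared Euclidean distance between two signatures
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda ex: distance(ex[0], reading))[1]

# Training phase: each electrical signature paired with its gesture.
training = [
    ([0.9, 0.1, 0.2], "wave"),
    ([0.1, 0.8, 0.3], "kick"),
    ([0.2, 0.2, 0.9], "punch"),
]

# Recognition phase: a noisy new reading is matched to the closest one.
print(nearest_gesture(training, [0.85, 0.15, 0.25]))  # -> wave
```

The training rounds build up the list of labelled signatures; after that, recognising a gesture is just a search for the best match, even when the new reading doesn't fit any signature exactly.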

One of the surprising things that made the sensor’s job tougher was that electrical appliances change what they are doing more often than you think. Maybe a refrigerator switches its cooling on and off, or a computer starts up its hard disk. Each of these changes means a change in the electrical waves flowing through the room, and the computer had to recognise each gesture through the changing noise.

Where’d you go?

The next step for the system was to see if it could recognise which room someone was standing in when they performed the movements. There were now eight locations to keep straight – two locations in one large room and six more scattered throughout the house. It was up to the system to learn the electrical signature for each room, as well as the signature for each movement. That’s pretty tough work. But it worked well – really well. The system was able to guess the room almost 100% of the time. What’s more, they found that the location tracking even worked on the data from the first experiment, when they were only supposed to be looking at movements. But the electrical signatures of each room were built into that data too, and the system was expert enough to pick them out.

Putting it all together

In the future the researchers are hoping that their gadgets will become small enough to carry around with you wherever you are in a building. This could allow you to control computers within your house, or switch things on and off just by making certain movements. The fact that the system can sense your location might mean that you could use the same gestures to do different things. Maybe in the living room a punch would turn on the television, but in the kitchen it would start the microwave. Whatever the case, it’s a great way to use the invisible flow of energy all around us.

 


This article was originally published on CS4FN and can also be found on pages 14-15 of CS4FN Issue 15, Does your computer understand you?, which you can download as a PDF. All of our free material can be downloaded here: https://cs4fndownloads.wordpress.com/

 

Playing Bridge, but not as we know it – the sound of the Human Harp

by Paul Curzon, Queen Mary University of London

Looking upwards at the curve of a bright white suspension bridge gleaming in the sunshine with a blue sky behind it
Elizabeth Quay Bridge in Australia

Clifton, Forth and Brooklyn are all famous suspension bridges where, through a feat of engineering greatness, the roadway hangs from cables slung from sturdy towers. The Human Harp project created by Di Mainstone, Artist in Residence at Queen Mary, involves attaching digital sensors to bridge cables, connected by lines to the performer’s clothing. As the bridge vibrates to traffic and people, and the performer moves, the angle and length of the lines are measured and different sounds produced. In effect human and bridge become one augmented instrument, making music mutually. Find out more at www.humanharp.org

 

This article was originally published on CS4FN and a copy can also be found (on page 17) in Issue 17 of CS4FN, Machines making medicine safer, which you can download as a PDF.

All of our free material can be downloaded here: https://cs4fndownloads.wordpress.com