Microwave Racing

Making everyday devices easier to use

An image of a microwave (cartoon), all in grey with dials and a button.
Microwave image by Paul from Pixabay

When you go shopping for a new gadget, like a smartphone or perhaps a microwave, are you mostly wowed by its sleek looks? Do you drool over its long list of extra functionality? Do you then not use those extra functions because you don’t know how? Rather than just drooling, why not go to the races to help find a device you will actually use, because it is easy to use!

On your marks, get set… microwave

Take an everyday gadget like a microwave. They have been around a while, so manufacturers have had a long time to improve their designs and so make them easy to use. You wouldn’t expect there to be problems, would you! There are lots of ways a gadget can be harder to use than necessary – more button presses maybe, lots of menus to get lost in, more special key sequences to forget, easy opportunities to make mistakes, no obvious feedback to tell you what it’s doing… Just trying to do simple things with each alternative is one way to check out how easy they are to use. How simple is it to cook some peas with your microwave? Could it be even simpler? Dom Furniss, a researcher at UCL, decided to video some microwave racing as a fun way to find out…

Watch the Microwave Racing video here.

Everyday devices still cause people problems even when they are trying to do really simple things. What is clear from microwave racing is that some really are easier to use than others. Does it matter? Perhaps not, if it’s just an odd minute wasted here or there cooking dinner, or if actually, despite your drooling in the shop, you don’t really care that you never use any of those ‘advanced’ features because you can never remember how to.

Better design helps avoid mistakes

Would it matter to you more though if the device in question was a medical device that keeps a patient alive, but where a mistake could kill? There are lots of such gadgets: infusion pumps, for example. They are the machines you are hooked up to in a hospital via tubes. They pump life-saving drugs, nutrient-rich solutions or extra fluids to keep you hydrated directly into your body. If the nurse makes a mistake setting the rate or volume then it could make you worse rather than better. Surely then you want the device to help the nurse get it right.

Making safer medical devices is what CHI+MED, the research project that Dom worked on, is actually about. While the consequences are completely different, the core task in setting an infusion pump is actually very similar to setting a microwave – “set a number for the volume of drug and another for the rate to infuse it and hit start” versus “set a number for the power and another for the cooking time, then hit start”. The same types of design solutions (both good and bad) crop up in both cases. Nurses have to set such gadgets day in, day out. In an intensive care unit, they will be using several at a time with each patient. Do you really want to waste lots of minutes of such a nurse’s time day in, day out? Do you want a nurse to easily be able to make mistakes in doing so?

User feedback

What the microwave racing video shows is that the designers of gadgets can make them trivially simple to use. They can also make them very hard to use if they focus more on the looks and functions of the thing than ease of use. Manufacturers of devices are only likely to take ease of use seriously if we, the people doing the buying, make it clear that we care. Mostly we give the impression that we want features, so that is what we get. Microwave racing may not be the best way to do it (follow the links below to explore more about the ways professionals actually evaluate devices), but next time you are out looking for a new gadget check how easy it is to use before you buy… especially if the gadget is an infusion pump and you happen to be the person placing orders for a hospital!

Dom Furniss and Paul Curzon, 2015

Watch …

Magazines …

*The CHI+MED project ended in 2015 and this issue of CS4FN was one of the project’s outputs.

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Can a computer tell a good story?

A tale by Rafael Pérez y Pérez

What’s your favourite story? Perhaps it’s from a brilliant book you’ve read: a classic like Pride and Prejudice or maybe Twilight, His Dark Materials or a Percy Jackson story? Maybe it’s a creepy tale you heard round a campfire, or a favourite bedtime story from when you were a toddler? Could your favourite story have actually been written by a machine?

Stories are important to people everywhere, whatever the culture. They aren’t just for entertainment though. For millennia, people have used storytelling to pass on their ancestral wisdom. Religions use stories to explain things like how God created the world. Aesop used fables to teach moral lessons. Tales can even be used to teach computing! I even wrote a short story called ‘A Godlike Heart’ about a kidnapped princess to help my students understand things like bits.

It’s clear that stories are important for humans. That’s why scientists like me are studying how we create them. I use computers to help. Why? Because they give a way to model human experiences as programs and that includes storytelling. You can’t open up a human’s brain as they create a story to see how it’s done. You can analyse in detail what happens inside a computer while it is generating one, though. This kind of ‘computational modelling’ gives a way to explore what is and isn’t going on when humans do it.

So, how to create a program that writes a story? A first step is to look at theories of how humans do it. I started with a book by Open University Professor Mike Sharples. He suggests it’s a continuous cycle between engagement and reflection. During engagement a storyteller links sequences of actions without thinking too much (a bit like daydreaming). During reflection they check what they have written so far, and if needed modify it. In doing so they create rules that limit what they can do during future rounds of engagement. According to him, stories emerge from a constant interplay between engagement and reflection.
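MEXICA itself is far richer, but the shape of that engagement–reflection cycle can be caricatured in a few lines of Python. Everything below – the list of actions and the single “don’t repeat an action” rule – is invented purely to illustrate the idea, and is not how MEXICA actually works:

```python
import random

# Toy caricature of Sharples' engagement-reflection cycle.
# All story content and the one constraint rule are invented
# for illustration - this is not MEXICA's real knowledge.

ACTIONS = ["quarrel", "fight", "wound", "cure", "forgive", "exile"]

def engage(rng, n=3):
    """Engagement: chain actions together without thinking too much."""
    return [rng.choice(ACTIONS) for _ in range(n)]

def reflect(draft, story, constraints):
    """Reflection: check the draft against the rules so far, keep what
    fits, and add new rules that limit future rounds of engagement."""
    for action in draft:
        if action not in constraints:   # rule: never repeat an action
            story.append(action)
            constraints.add(action)
    return story, constraints

rng = random.Random()
story, constraints = [], set()
for _ in range(4):                      # a few cycles of each phase
    draft = engage(rng)
    story, constraints = reflect(draft, story, constraints)

# 'story' now holds a sequence of actions with no repeats.
```

The interplay is the point: engagement generates freely, while reflection both edits the draft and tightens the rules that constrain what engagement can do next.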

What knowledge would you need to write a story about the last football World Cup?

With this in mind I wrote a program called MEXICA that generates stories about the ancient inhabitants of Mexico City (they are often wrongly called the Aztecs – their real name is the Mexicas). MEXICA simulates these engagement-reflection cycles. However, to write a program like this you need to solve lots of problems. For instance, what type of knowledge does the program need to create a story? It’s more complicated than you might think. What knowledge would you need to write a story about the last football World Cup? You would need facts about Brazilian culture, the teams that played, the game’s rules… Similarly, to write a story about the Mexicas you need to know about the ancient cultures of Mexico, their religion, their traditions, and so on. Figuring out the amount and type of knowledge that a system needs to generate a story is a key problem a computer scientist trying to develop a computerised storyteller needs to solve. Whatever the story, you need to know something about human emotions. MEXICA uses its knowledge of them to keep track of the emotional links between the characters, using them to decide which sensible actions might then follow.

By now you are probably wondering what MEXICA’s stories look like. Here’s an example:

“Jaguar Knight made fun of and laughed at Trader. This situation made Trader really angry! Trader thoroughly observed Jaguar Knight. Then, Trader took a dagger, jumped towards Jaguar Knight and attacked Jaguar Knight. Jaguar Knight’s state of mind was very volatile and without thinking about it Jaguar Knight charged against Trader. In a fast movement, Trader wounded Jaguar Knight. An intense haemorrhage aroused which weakened Jaguar Knight. Trader knew that Jaguar Knight could die and that Trader had to do something about it. Trader went in search of some medical plants and cured Jaguar Knight. As a result, Jaguar Knight was very grateful towards Trader. Jaguar Knight was emotionally tied to Trader but Jaguar Knight could not accept Trader’s behaviour. What could Jaguar Knight do? Trader thought that Trader overreacted; so, Trader got angry with Trader. In this way, Trader – after consulting a Shaman – decided to exile Trader.”

As you can see, it isn’t able to write stories as well as a human yet! The way it phrases things is a bit odd, like “Trader got angry with Trader” rather than “Trader got angry with himself”. It’s missing another area of knowledge: how to write English naturally! Even so, the narratives it produces are interesting and tell us something about what does and doesn’t make a good story. And that’s the point. Programs like MEXICA help us better understand the processes and knowledge needed to generate novel, interesting tales. If one day we create a program that can write stories as well as the best writers we will know we really do understand how humans do it. Your own favourite story might not have been written by a machine, but in the future, you might find your grandchildren’s favourite ones were!

If you like to write stories, then why not learn to program too? Then you could try writing a storytelling program yourself. Could you improve on MEXICA?

Rafael Pérez y Pérez, Universidad Autónoma Metropolitana, México

from the CS4FN archive

More on …

Related Magazines …



Patterns for Sharing

Making algorithms generalisable

A white screen with 8 black arrows emanating from a smaller rectangle drawn in marker pen, representing how one idea can be used in multiple ways
Image adapted from original by Gerd Altmann from Pixabay

Computer Scientists like to share: share in a way that means less work for all. Why make people work if you can help them avoid it with some computational thinking? Don’t make them do the same thing over and over – write a program and a computer can do it in future. Invent an algorithm and everyone can use it whenever that problem crops up for them. The same idea applies to inclusive design: making sure designs can be used by anyone, impairments or not. Why make people reinvent the same things over and over? Let others build on your experience of designing accessible things in the past. That is where the idea of Design Patterns and a team called DePIC come in.

The DePIC research team are a group of people from Queen Mary University of London, Goldsmiths and Bath Universities with a mission to solve problems that involve the senses, and they are drawing on their inner desire to share! The team unlock situations where individuals with sensory impairments are disadvantaged in their use of computers. For example, if you are blind, how can you ‘see’ a graph on a screen, and so work with others on it or the data it represents? DePIC want to make things easier for those with sensory impairments, whether at home, at leisure or at work: they want to level the playing field so that everyone can take part in our amazing technological world. Why shouldn’t a blind musician be able to feel a sound wave rather than be restricted because they can’t see it (see ‘Blind driver filches funky feely sound machine!’)? DePIC, it turns out, is all about generalisation.

Generalise it!

Generalisation is the computational thinking idea that once you’ve solved a problem, with a bit of tweaking you can use the solution for lots of other similar problems too. Written some software to put names and scores in order for a high score table? Generalise the algorithm so it can sort anything into order: names and addresses, tracks in a music collection, or whatever. Generalisation is a powerful computational thinking idea and it doesn’t just apply to algorithms, it applies to design too. That is the way the DePIC team are working.
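As a sketch of generalising a sort, here is how it might look in Python (the data and the function name are made up for illustration): one sort routine, where the caller just says which field to order by.

```python
# One generalised sort routine: the 'by' parameter names the field to
# order by, so the same algorithm handles scores, names or anything else.

def sort_records(records, by):
    """Return the records sorted by the field named 'by'."""
    return sorted(records, key=lambda record: record[by])

high_scores = [
    {"name": "Ada", "score": 9200},
    {"name": "Grace", "score": 10400},
    {"name": "Alan", "score": 8700},
]

# The same routine gives two different orderings:
by_score = sort_records(high_scores, by="score")  # lowest score first
by_name = sort_records(high_scores, by="name")    # alphabetical
```

The sorting algorithm itself never changes; only the rule for comparing records does. That is the generalisation.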

DePIC actually stands for Design Patterns for Inclusive Collaboration. Design Patterns are a kind of generalisation: design ideas that work can be used again and again. A Design Pattern describes the problem it solves, including the context it works in, and the way it can be solved. For example, when using computers people often need to find something of interest amongst information on a screen. It might, for example, be to find the point where a graph reaches its highest point, find numbers in a spreadsheet of figures that are unusually low, or locate the hour hand on a watch to tell the time. But what if you aren’t in a position to see the screen?

Anyone can work with information
using whatever sense is convenient.

Make good sense

One solution to all these problems is to use sound. You can play a sound and then distort it when the cursor is at the point of interest. The design pattern for this would make clear what features of the sound would work well, its pitch say, and how it should be changed. Experiments are run to find out what works best. Inclusive design patterns make clear how different senses can be used to solve the same problem. For example, another solution is to use touch and mark the point with a distinctive feel like an increase in resistance (see the 18th century ‘Tactful Watch’!).

The idea is that designers can then use these patterns in their own designs knowing they work. The patterns help them design inclusively rather than ignoring other senses. Suddenly anyone can work on that screen of information, using whatever senses are most convenient for them at the time. And it all boils down to computer scientists wanting to share.

Paul Curzon and Jane Waite, Queen Mary University of London

More on …

Related Magazines


Your own electrical sea

Sensing your movements

You can’t see them, but there are waves of electricity flowing around you right now. Electricity leaks out of power lines, lights, computers and every other gadget nearby. Soon a computer may be able to track your movements by following the ripples you make in your own electromagnetic sea. Scientists at Microsoft Research in the US have figured out a way to sense the position of someone’s body by using it as an antenna.

Why would you want a computer to do this? So that you could control it just by moving your body. This is already possible with systems like the Xbox Kinect, but that works by tracking you with a camera, so you have to stay in front of it or it loses you. A system that uses your body as an electric antenna could follow you throughout a room, or even a whole building.

First you need an instrument that can sense the changes you make in your own electrical field as you move around. In the future, the researchers would like this to be a little gadget you could carry in your pocket, but the technology isn’t quite small enough yet. For this experiment, they used a wireless data sensor that’s about twice the size of a mobile phone. The volunteers wore it in a little backpack. All the electrical data it picked up were transmitted to a computer that would run the calculations to figure out how the user was moving.

Get moving

In their first experiment, the researchers wanted to find out whether their gadget could sense what movements their volunteers made. To do this, they had the volunteers take their sensing devices home and use them in two different rooms: the kitchen and the living room. Those two rooms are usually different from one another in interesting ways. Living rooms are usually big open spaces with only a few small appliances in them. Kitchens, though, are often small, and cram lots of big electrical appliances into the same room. The electrical sensors would really have to work hard to make sense of anything through the interference.

Once the experiment was ready to go, each volunteer ran through a series of twelve movements. Their exercises included waving, bending over, stepping to the right or left, and even a bit of kicking and punching. The sensor would collect the electrical readings and then send them to a laptop. What happened after that was a bit of artificial intelligence. The researchers used the first few rounds of movements to train the computer to recognise the electrical signatures of each movement. Later on, it was the computer’s job to match up the readings it got through the sensor to the gestures it already knew. That’s a technique called machine learning.
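The researchers’ actual system was far more sophisticated, but the core train-then-match idea can be sketched with a toy nearest-neighbour classifier in Python. All the numbers and labels here are invented for illustration; real electrical readings are much richer:

```python
import math

# Toy sketch of the train-then-match idea: store labelled example
# readings, then classify a new reading as the gesture whose stored
# example it is closest to. (All data invented for illustration.)

def distance(a, b):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train(examples):
    """examples: list of (reading, gesture_label) pairs to remember."""
    return list(examples)

def classify(model, reading):
    """Return the label of the nearest remembered reading."""
    return min(model, key=lambda ex: distance(ex[0], reading))[1]

training_data = [
    ((0.9, 0.1, 0.2), "wave"),
    ((0.1, 0.8, 0.7), "kick"),
    ((0.5, 0.5, 0.1), "bend"),
]

model = train(training_data)
guess = classify(model, (0.85, 0.15, 0.25))  # nearest to the "wave" example
```

The real system had to cope with readings that drift as appliances switch on and off, which is why much more robust machine learning techniques were needed than this toy matcher.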

One of the surprising things that made the sensor’s job tougher was that electrical appliances change what they are doing more often than you think. Maybe a refrigerator switches its cooling on and off, or a computer starts up its hard disk. Each of these changes means a change in the electrical waves flowing through the room, and the computer had to recognise each gesture through the changing noise.

Where’d you go?

The next step for the system was to see if it could recognise which room someone was standing in when they performed the movements. There were now eight locations to keep straight – two locations in one large room and six more scattered throughout the house. It was up to the system to learn the electrical signature for each room, as well as the signature for each movement. That’s pretty tough work. But it worked well – really well. The system was able to guess the room almost 100% of the time. What’s more, they found that the location tracking even worked on the data from the first experiment, when they were only supposed to be looking at movements. But the electrical signatures of each room were built into that data too, and the system was expert enough to pick them out.

Putting it all together

In the future the researchers are hoping that their gadgets will become small enough to carry around with you wherever you are in a building. This could allow you to control computers within your house, or switch things on and off just by making certain movements. The fact that the system can sense your location might mean that you could use the same gestures to do different things. Maybe in the living room a punch would turn on the television, but in the kitchen it would start the microwave. Whatever the case, it’s a great way to use the invisible flow of energy all around us.

Paul Curzon, Queen Mary University of London

Linked Magazines


Playing Bridge, but not as we know it – the sound of the Human Harp

Looking upwards at the curve of a bright white suspension bridge gleaming in the sunshine with a blue sky behind it
Elizabeth Quay Bridge in Australia.
Image by Sam Wilson, CC BY-SA 4.0, via Wikimedia Commons

Clifton, Forth and Brooklyn are all famous suspension bridges where, through a feat of engineering greatness, the roadway hangs from cables slung from sturdy towers. The Human Harp project created by Di Mainstone, Artist in Residence at Queen Mary, involves attaching digital sensors to bridge cables, connected by lines to the performer’s clothing. As the bridge vibrates to traffic and people, and the performer moves, the angle and length of the lines are measured and different sounds produced. In effect human and bridge become one augmented instrument, making music mutually. Find out more at www.humanharp.org


Paul Curzon, Queen Mary University of London

Watch …

More on …

Magazines

This article was originally published on CS4FN and a copy can also be found (on page 17) in Issue 17 of CS4FN, Machines making medicine safer, which you can download as a PDF.

All of our free material can be downloaded here: https://cs4fndownloads.wordpress.com

 


Strictly Judging Objects

Strictly Come Dancing has been popular for a long time. It’s not just the celebs, the pros, or the dancing that make it must-see TV – it’s having the right mix of personalities in the judges too. Craig Revel Horwood has made a career of being rude, if always fair. By contrast, Darcey Bussell was always far more supportive. Bruno Tonioli is supportive but always excited. Len Goodman was similarly supportive but calm. Shirley Ballas aims for supportive but strict, while Motsi Mabuse was always supportive and enthusiastic. It’s often the tension between the judges that makes good TV (remember those looks that Darcey always gave Craig, never mind when they started actually arguing). However, if you believe Doctor Who, the future of judging will be robots like AnneDroid in the space-age version of The Weakest Link… so let’s look at the Bot future. How might you go about designing computer judges, and how might objects help?

Write the code

We need to write a program. We will use a pseudocode (a mix of code and English) here rather than any particular programming language to make things easier to follow.

The first thing to realise is we don’t want to have to program each judge separately. That would mean describing the rules for every new judge from scratch each time they swap. We want to do as little as possible to describe each new one. Judges have a lot in common so we want to pull out those common patterns and code them up just once.

What makes a judge?

First let’s describe a basic judge. We will create a plan, a bit like an architect’s plan of a building. Programmers call these a ‘class’. The thing to realise about classes is that a class for a Judge is NOT the code of any actual working judge, just the code of how to create one: a blueprint. This blueprint can be used to build as many individual judges as you need.

What’s the X-factor that makes a judge a judge? First we need to decide on some basic properties or attributes of judges. We can make a list of them, and what the possibilities for each are. The things common to all judges are that they have names and personalities, and that they make judgements on people. Let’s simply say a judge’s personality can be either supportive or rude, and their judgements are just marks out of 10 for whoever they last watched.

Name : String
CharacterTrait : SUPPORTIVE, RUDE
Judgement : 1, 2, 3, 4, 5, 6, 7, 8, 9, 10

We have just created some new ‘types’ in programming terminology. A type is just a grouping of values. The type CharacterTrait has two possible values (SUPPORTIVE or RUDE), whereas the type Judgement has 10 possible values. We also used one common, existing type: String. Strings are just sequences of letters, numbers or symbols, so we are saying something of type Name is any such sequence. (This allows both for futuristic judges and for hip hop and rap judges: perhaps one day, in retirement, C-3PO will become a judge, and maybe the likes of 50 Cent will too.)

Let’s start to describe Judges as people with a name, personality and capable of thinking of a mark.

DESCRIPTION OF A Judge:
    Name name
    CharacterTrait personality
    Judgement mark

This says that each judge will be described by three variables, one called name, one called personality and one called mark. This kind of variable is called an instance variable – because each judge we create from this blueprint will have their own copy, or instance, of the instance variables that describes that judge. So we might originally have created a Len judge and so on, but many series later find we need a Motsi one and then an Anton one. Each new judge needs their own copies of the variables that describe them.

All we are saying here in the class (the blueprint) is whenever we create a Judge it will have a name, a personal character (it will be either RUDE or SUPPORTIVE) and a potential mark.

For any given judge we will always refer to their name using variable name and their character trait using variable personality. Each new judge will also have a current judgement, which we will refer to as mark: a number between 1 and 10. Notice we use the types we created, Name, CharacterTrait and Judgement, to specify the possible values each of these variables can hold.

Best behaviour

We are now able to say in our judge blueprint, our class, whether a judge is rude or supportive, but we haven’t actually said what that means. We need to set out the actual behaviours associated with being rude and supportive. We will do this here in a fairly simple way, just to illustrate. Let’s assume that the personality shows in the things they say when they give their judgement. A rude judge will say “It was a disaster” unless they are awarding a mark above 8/10. For high marks they will grudgingly say “You were ok I suppose”. We translate this into commands of how to give a judgement.

IF (personality IS RUDE) AND (mark <= 8)
THEN SAY “It was a disaster”

IF (personality IS RUDE) AND (mark > 8)
THEN SAY “You were ok I suppose”

It would be easy for us to give them lots more things to choose to say in a similar way: it’s just more rules. We can do a similar thing for a supportive judge. They will say “You were stunning” if they award more than 5 out of 10 and otherwise say “You tried hard”.

TO GiveJudgement:
    IF (personality IS RUDE) AND (mark <= 8)
    THEN SAY “It was a disaster”     

    IF (personality IS RUDE) AND (mark > 8)
    THEN SAY “You were ok I suppose”

    IF (personality IS SUPPORTIVE) AND (mark > 5)
    THEN SAY “You were stunning”

    IF (personality IS SUPPORTIVE) AND (mark <= 5)
    THEN SAY “You tried hard”

A ten from Len

The other thing that judges do is actually come up with their judgement, their mark.  For real judges it would be based on rules about what they saw – a lack of pointed toes pulls it down, lots of wiggle at the right times pushes it up… We will assume, to keep it simple here, that they actually just think of a random number – essentially throw a 10 sided dice under the desk with numbers 1-10 on!

TO MakeJudgement:
    mark = RANDOM (1 TO 10)

Finally, judges can reveal their mark.

TO RevealMark:
    SAY mark

Notice this doesn’t mean they say the word “mark”. mark is a variable so this means say whatever is currently stored in that judge’s mark.

Putting that all together to make our full judge class we get:

DESCRIPTION OF A Judge:
    Name name
    CharacterTrait personality
    Judgement mark

    TO GiveJudgement:
        IF (personality IS RUDE) AND (mark <= 8)
        THEN SAY “It was a disaster”

        IF (personality IS RUDE) AND (mark > 8)
        THEN SAY “You were ok I suppose”

        IF (personality IS SUPPORTIVE) AND (mark > 5)
        THEN SAY “You were stunning”

        IF (personality IS SUPPORTIVE) AND (mark <= 5)
        THEN SAY “You tried hard”

    TO MakeJudgement:
        mark = RANDOM (1 TO 10)

    TO RevealMark:
        SAY mark

What is a class?

So what is a class? A class says how to build an object. It defines properties or attributes (like name, personality and current mark) but it also defines behaviours: it can speak, it can make a judgement and it can reveal the current mark. These behaviours are defined by methods – mini-programs that specify how any Judge should behave. Our class says that each Judge will have its own set of the methods that use that Judge’s own instance variables to store its properties and decide what to do.

So a class is a blueprint that tells us how to make particular things: objects. It defines their instance variables – what the state of each judge consists of and some rules to apply in particular situations. We have so far made a class definition for making Judges. It is important to realise that we haven’t made any actual objects (no actual judge) so far though – defining a class does not in itself give you any actual objects – no actual Judges (no Bruno, no Motsi, …) exist to judge anything yet. We need to write specific commands to create them as we will see.

We can store away our blueprint and just pull it out to make use of it when we need to create some actual judges (eg once a year in the summer when this year’s judges are announced).

Kind words for our contestants?

Suppose Strictly is starting up (as it is as I write, but let’s suppose all the judges will be androids this year) so we want to create some judges, starting with a rude judge, called Craig Devil Droidwood. We can use our class as the blueprint to do so. We need to say what its personality is (Judges just think of a mark when they actually see an act so we don’t have to give a mark now.)

CraigDevilDroidwood IS A NEW Judge 
                    WITH name “Craig Devil Droidwood”
                    AND personality RUDE

This creates a new judge called Craig Devil Droidwood and makes it RUDE. We have instantiated the class to give a Judge object. We store this object in a variable called CraigDevilDroidwood.

For a supportive judge that we decide to call Len Goodroid we would just say (instantiating the class in a different way):

LenGoodroid IS A NEW Judge 
            WITH name “Len Goodroid”
            AND personality SUPPORTIVE

Another supportive judge DarC3PO BussL would be created with

DarC3POBussL IS A NEW Judge 
             WITH name “DarC3PO BussL”
             AND personality SUPPORTIVE

Whereas in the class we are describing a blueprint to use to create a Judge, here we are actually using that blueprint and making different Judges from it. So this way we can quickly and easily make new judge clones without copying out all the description again. These commands executed at the start of our program (and TV programme) actually create the objects. They create instances of class Judge, which just means they create actual virtual judges with their own name and personality. They also each have their own copy of the rules for the behaviour of judges.

Execute them

Once actual judges are created, they can execute commands to start the judging. First the program tells them to make judgements using their judgement method. We execute the MakeJudgement method associated with each separate judge object in turn. Each has the same instructions, but those instructions work on the particular judge’s own instance variables, so do different things.

EXECUTE MakeJudgement OF CraigDevilDroidwood
EXECUTE MakeJudgement OF DarC3POBussL
EXECUTE MakeJudgement OF LenGoodroid

Then the program has commands telling them to say what they think,

EXECUTE GiveJudgement OF CraigDevilDroidwood
EXECUTE GiveJudgement OF DarC3POBussL
EXECUTE GiveJudgement OF LenGoodroid

and finally give their mark.

EXECUTE RevealMark OF CraigDevilDroidwood
EXECUTE RevealMark OF DarC3POBussL
EXECUTE RevealMark OF LenGoodroid

In our actual program this would sit in a loop so our program might be something like:

CraigDevilDroidwood IS A NEW Judge 
                    WITH name “Craig Devil Droidwood”
                    AND personality RUDE
DarC3POBussL        IS A NEW Judge 
                    WITH name “DarC3PO BussL”
                    AND personality SUPPORTIVE
LenGoodroid         IS A NEW Judge 
                    WITH name “Len Goodroid”
                    AND personality SUPPORTIVE

FOR EACH contestant DO THE FOLLOWING
    EXECUTE MakeJudgement OF CraigDevilDroidwood
    EXECUTE MakeJudgement OF DarC3POBussL
    EXECUTE MakeJudgement OF LenGoodroid

    EXECUTE GiveJudgement OF CraigDevilDroidwood
    EXECUTE GiveJudgement OF DarC3POBussL
    EXECUTE GiveJudgement OF LenGoodroid

    EXECUTE RevealMark OF CraigDevilDroidwood
    EXECUTE RevealMark OF DarC3POBussL
    EXECUTE RevealMark OF LenGoodroid

So we can now create judges to our heart’s content, fixing their personalities and putting the words in their mouths, based on our single description of what a Judge is. Of course, our behaviours so far are simple. We really want to add more kinds of personality, like strict judges (Shirley) and excited ones (Bruno). Ideally we want to be able to do different combinations, making perhaps excited rude judges as well as excited supportive ones. That really just takes more rules.
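In Python the sketch above might come out something like this. It is only a rough sketch: the method names are adapted to Python style, and the mark range, comments and contestant name are our own inventions, not part of any real program.

```python
import random

class Judge:
    """A robot talent-show judge (a sketch: the details are invented)."""

    # One stock comment per personality (a real judge would pick from a database)
    COMMENTS = {
        "RUDE": "That was a disaster, darling!",
        "SUPPORTIVE": "Wonderful! You have come so far!",
    }

    def __init__(self, name, personality):
        self.name = name                  # each judge object gets its own copy
        self.personality = personality

    def make_judgement(self):
        self.mark = random.randint(1, 10)  # decide a mark (randomly, for now)

    def give_judgement(self):
        print(f"{self.name}: {self.COMMENTS[self.personality]}")

    def reveal_mark(self):
        print(f"{self.name} scores it... {self.mark}!")

# Create the judge objects from the single blueprint
judges = [
    Judge("Craig Devil Droidwood", "RUDE"),
    Judge("DarC3PO BussL", "SUPPORTIVE"),
    Judge("Len Goodroid", "SUPPORTIVE"),
]

for contestant in ["Ann Widde-Tron 3000"]:  # an invented contestant
    for judge in judges:
        judge.make_judgement()
    for judge in judges:
        judge.give_judgement()
    for judge in judges:
        judge.reveal_mark()
```

Notice that the loop bodies match the pseudocode exactly: the same method is executed for each judge in turn, but each call works on that judge’s own instance variables.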

A classless society?

Computer Scientists are lazy beings – if they can find a way to do something that involves less work, they do it, allowing them to stay in bed longer. The idea we have been using to save work here is just that of describing classes of things and their properties and behaviour. Scientists have been doing a similar thing for a long time:

Birds have feathers (a property) and lay eggs (a behaviour).

Spiders have eight legs (a property) and make silk (a behaviour).

We can say something is a particular instance of a class of thing and that tells us a lot about it without having to spell it all out each time, even for fictional things: e.g. Hedwig is a bird (so feathers and eggs); Charlotte is a spider (so legs and silk). The class captures the common patterns behind the things we are describing. The difference when Computer Scientists write them is that, because they are programs, they can then come alive!
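That scientist-style classification can itself be written as classes. Here is a tiny hedged sketch in Python (the class and attribute names are our own choices for illustration):

```python
class Bird:
    has_feathers = True          # a property shared by all birds

    def lay_egg(self):           # a behaviour shared by all birds
        return "egg laid"

class Spider:
    legs = 8                     # a property shared by all spiders

    def make_silk(self):         # a behaviour shared by all spiders
        return "silk spun"

# Saying "Hedwig is a bird" tells us a lot straight away:
hedwig = Bird()
charlotte = Spider()
print(hedwig.has_feathers)   # True
print(charlotte.legs)        # 8
```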

All change

We have specified what it means to be a robotic judge and we’ve only had to specify the basics of Judgeness once to do it. That means that if we decide to change anything in the basic judge (like giving them a better way to come up with a mark than doing it randomly or having them choose things to say from a big database of supportive or rude comments) changing it in the plan will apply to all the judges of whatever kind. That is one of the most powerful reasons for programming in this way.

We could create robot performers in a similar way (after all don’t all the winners seem to merge into one in the long run?). We would then also have to write some instructions about how to work out who won – does the audience have a vote? How many get knocked out each week? … and so on.

Of course we’ve not written a full program, even for judges, just sketched a program in pseudocode. The next step is to convert this into a program in an object-oriented programming language like Python or Java. Is that hard? Why not give it a try and judge for yourself?

Paul Curzon and Peter W. McOwan, Queen Mary University of London

Revised from TLC blog and the cs4fn archive.

More on …

Related Magazines …


Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This blog is funded by EPSRC on research agreement EP/W033615/1.

QMUL CS4FN EPSRC logos

Cryptography: You are what you know

A path through the forest at dawn in the fog
Image from PIXABAY

“Carter headed into the trees, his hat pulled low. Up ahead was a dark figure, standing in the shadow of a tree. As he drew close, Carter gave the agreed code phrase confirming he was the new agent: “Could I borrow a match?” The dark figure stepped away from the tree, but rather than completing the exchange as Carter expected, he pulled a silenced gun. Before Carter could react, he heard the quiet spit of the gun and felt an excruciating pain in his chest. A moment later he was dead. Felix put the gun away, and quickly dragged the body into the bushes out of sight. He then went back to waiting. Soon another figure approached, but from the other direction. This time it was Felix who gave the pass phrase, which he now knew. “Could I borrow a match?” The new figure confidently responded, “Doesn’t everyone use a lighter these days?” Felix hadn’t known what he would say, but was happy to assume this was Carter’s real contact. He was in. “Hello. I’m Carter.” …

The trouble with using spy novel style passphrases to prove who you are is you still have to trust the other person. If they might have nefarious intentions, you want to prove who you are without giving anything else away. You certainly don’t want them to be able to take the information you give and use it to pretend to be you. Unfortunately, the above story is pretty much how passwords work, and why attacks like phishing, where someone sends emails pretending to be from your bank, are such a problem.

This is why phishing works

The story outlines the essential problem faced by all authentication systems trying to prove who someone is, or that they possess some secret information: in proving it, you give up the secret to anyone there to hear. Security protocols need ways for one agent to prove to another who they are, such that no one listening in can masquerade as them in future. Creating a secure authentication system is harder than you might think! To do it well takes serious skill. What you don’t do is just send a password!

A simple improvement is sometimes used by banks. Rather than ask you for a whole account number, they ask you for a random selection of its digits: perhaps the third, fifth and eighth digits one time, but a completely different set the next. Though an eavesdropper learns some of the secret, they can’t masquerade as you: they will be asked for different digits when they try. Take this idea to an extreme and you get the “Zero Knowledge Proof”, where none of the secret is given up: possibly one of the cleverest ideas of computer science.
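The digit-challenge idea can be sketched in a few lines of Python. This is only an illustration: the account number and the choice of three digits are invented, not any real bank’s protocol.

```python
import random

def make_challenge(secret_length, num_digits=3):
    """The bank picks a few random distinct positions (1-indexed) to ask for."""
    return sorted(random.sample(range(1, secret_length + 1), num_digits))

def respond(secret, positions):
    """The customer answers with just the requested digits."""
    return [secret[p - 1] for p in positions]

def verify(secret, positions, answers):
    """The bank checks the answers against the secret it holds."""
    return respond(secret, positions) == answers

account = "83412957"                      # an invented account number
challenge = make_challenge(len(account))  # e.g. [3, 5, 8] - different each time
answers = respond(account, challenge)
print(challenge, answers, verify(account, challenge, answers))
```

An eavesdropper who records one exchange learns only those few digits; the next challenge will almost certainly ask for different positions, so the recording is little use for masquerading.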

Paul Curzon, Queen Mary University of London

All of our material is free to download from: https://cs4fndownloads.wordpress.com

Cryptography: Shafi Goldwasser and the Zero Knowledge Proof

Shafi Goldwasser is one of the greatest living computer scientists, having won the Turing Award in 2012 (equivalent to a Nobel Prize). Her work helped turn cryptography from a dark art into a science. If you’ve ever used a credit card through a web browser, for example, her work was helping you stay secure. Her greatest achievement, with Silvio Micali and Charles Rackoff, is the “Zero knowledge proof”.

Zero knowledge proofs deal with the problem that, to be really secure, security protocols often need to prove that some statement is true without giving anything else away (see “You are what you know“). A specific case is where an agent (software or human) wants to prove they know some secret, without actually giving the secret up.

Satisfy me this

There are three properties a zero knowledge proof must satisfy. Suppose Peggy is trying to convince Victor that some statement about a secret is true. Firstly, if Peggy’s statement is true then Victor must be convinced of this at the end. Secondly, if it is not actually true, there must only be a tiny chance that Peggy can convince Victor that it is true. Finally, Victor must not be able to cheat in any way that means he learns more about the secret beyond the truth of the statement. Shafi and colleagues not only came up with the idea, but showed that such proofs, unlikely as they seem, were possible.

Biosecurity break-in

Imagine the following situation (based on a scenario by Jean-Jacques Quisquater). A top secret biosecurity laboratory is protected so only authorised people can get in and out. The lab is at the end of a corridor that splits into two branches, each leading to a door at one end of the lab. These two doors are the only ways in or out. The rest of the room is totally sealed (see diagram).

Now, Peggy claims she knows how to get in, and has told Victor she can steal a sample of the secret biotoxin held there if he pays her a million dollars. Victor wants to be sure she can get in, before paying. She wants to prove her claim is true, but without giving anything more away, and certainly not by showing him how she does it, or giving him the toxin. She doesn’t even want him to have any hard evidence he could use to convince others that she can get in, as then he could use it against her. How does she do it?

“I can get in”

A floor plan of a top secret lab
Plan of top secret lab
Image by CS4FN

She needs a Zero knowledge proof of her claim “I can get in”! Here is one way. Victor waits in the foyer, unable to see the corridor. Peggy goes to the fork, and chooses a branch to go down then waits at the door. Victor then goes to the fork, unable to see where she is but able to see both exit routes. He then chooses an exit corridor at random and tells Peggy to appear there. Peggy does, passing through the lab if need be.

If they do this enough times, with Victor choosing at random which side she should appear from, then he can be strongly certain that she really does know how to get in. After all, passing through the lab is the only way to appear on the other side. More to the point, he still cannot get in himself, and even if he videoed everything he saw, he would have no way to convince anyone else that Peggy can get in: a video showing her appearing from the correct corridor would be easy to fake. Peggy has shown she can get into the room, but without giving up the secret of how, or giving Victor a way to prove to anyone else that she can do it.
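The corridor protocol is easy to simulate. Here is a toy Python sketch (all names and details are our own): an honest Peggy who can pass through the lab always emerges from whichever side Victor names, while a cheat who merely guessed which branch to stand in gets caught, on average, every other round.

```python
import random

def run_rounds(knows_secret, rounds=20):
    """Return True if the prover survives every round of the corridor test."""
    for _ in range(rounds):
        peggy_side = random.choice(["left", "right"])   # branch Peggy walks down
        victor_side = random.choice(["left", "right"])  # side Victor calls out
        if knows_secret:
            continue  # she can cross through the lab, so she always appears correctly
        if peggy_side != victor_side:
            return False  # stuck on the wrong side: caught cheating
    return True

print(run_rounds(True))   # an honest prover always survives
# A cheat survives 20 rounds only with probability (1/2)**20 - about one in a million
print(run_rounds(False))
```

Twenty rounds already push a cheat’s chances below one in a million, which is why Victor can be “strongly certain” without ever learning the secret.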

So, strange as it seems, it is possible to prove you know a secret without giving anything more away about the secret. Thanks to Shafi and her co-researchers the idea is now a core part of computer security.

Paul Curzon, Queen Mary University of London


Dressing it up

Why it might be good for robots to wear clothes

Even though most robots still walk around naked, the Swedish Institute of Computer Science (SICS) in Stockholm explored how to produce fashion-conscious robots.

The applied computer scientists there were looking for ways to make the robots of today easier for us to get along with. As part of the LIREC project to build the first robot friends for humans they examined how our views of simple robots change when we can clothe and customise them. Does this make the robots more believable? Do people want to interact more with a fashionable robot?

How do you want it?

These days most electronic gadgets allow the human user to customise them. For example, on a phone you can change the background wallpaper or colour scheme, the ringtone or how the menus work. The ability of the owner to change the so-called ‘look and feel’ of software is called end-user programming. It’s essentially up to you how your phone looks and what it does.

Dinosaurs waking and sleeping

The Swedish team began by taking current off-the-shelf robots and adding dress-up elements to them. Enter Pleo, a toy dinosaur ‘pet’ able to learn as you play with it. Now add in that fashion twist. What happens when you can play dress up with the dinosaur? Pleo’s costumes change its behaviour, kind of like what happens when you customise your phone. For example, if you give Pleo a special watchdog necklace the robot remains active and ‘on guard’. Change the costume from necklace to pyjamas, and the robot slowly switches into ‘sleep’ mode. The costumes or accessories you choose communicate electronically with the robot’s program, and its behaviour follows suit in a way you can decide. The team explored whether this changed the way people played with them.
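At heart the accessory idea is a lookup from the costume the robot detects to a behaviour mode. A minimal Python sketch of that mapping (the tag names and modes here are our own inventions, not Pleo’s actual protocol):

```python
# Map each electronically-tagged accessory to a behaviour mode (invented names)
COSTUME_MODES = {
    "watchdog_necklace": "on_guard",
    "pyjamas": "sleep",
}

def behaviour_for(costume, default="playful"):
    """Pick the robot's behaviour mode from whichever costume it detects."""
    return COSTUME_MODES.get(costume, default)

print(behaviour_for("pyjamas"))            # sleep
print(behaviour_for("watchdog_necklace"))  # on_guard
print(behaviour_for(None))                 # playful - no costume detected
```

Adding a new accessory then just means adding a new entry to the table: end-user programming by dressing up.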

Clean sweeps

In another experiment the researchers played dress up with a robot vacuum cleaner. The cleaner rolls around the house sweeping the floor, and has proven a hit with many consumers. It bleeps happily as its on-board computer works out the best path to bust your carpet dust. The SICS team gave the vacuum a special series of stick-on patches, which could add to its basic programming. They found that choosing the right patch could change the way humans perceive the robot’s actions. Different patches can make humans think the robot is curious, aggressive or nervous. There’s even a shyness patch that makes the robot hide under the sofa.

What’s real?

If humans are to live in a world populated by robots there to help them, the robots need to be able to play by our rules. Humans have whole parts of their brains given over to predicting how other humans will react. For example, we can empathise with others because we know that other beings have thoughts like us, and we can imagine what they think. This often spills over into anthropomorphism, where we give human characteristics to non-human animals or non-living things. Classic examples are where people believe their car has a particular personality, or think their computer is being deliberately annoying – they are just machines but our brains tend to attach motives to the behaviours we see.

Real-er robots?

Robots can produce very complex behaviours depending on the situations they are in and the ways we have interacted with them, which creates the illusion that they have some sort of ‘personality’ or motives in the way they are acting. This can help robots seem more natural and able to fit in with the social world around us. It can also improve the ways they provide us with assistance because they seem that bit more believable. Projects like the SICS’s ‘actDresses’ one help us by providing new ways that human users can customise the actions of their robots in a very natural way, in their case by getting the robots to dress for the part.

Peter W McOwan and the CS4FN team, Queen Mary University of London (Updated from the archive)


The naked robot

A naked robot holding a flower
Image by bamenny from Pixabay 

Why are so many film robots naked? We take it for granted that robots don’t wear clothes, and why should they? They are machines, not humans, after all. On the other hand, the quest to create artificial intelligence involves trying to create machines that share the special ingredients of humanity. One of the things that is certainly special about humans in comparison to other animals is the way we like to clothe and decorate our bodies. Perhaps we should think some more about why we do it but the robots don’t!

Shame or showoff?

The creation story in the Christian Bible suggests humans were thrown out of the Garden of Eden when Adam and Eve felt the need to cover up – when they developed shame. Humans usually wear more than just the bare minimum though, so wearing clothing can’t be all about shame. Nor is it just about practicalities like keeping warm. Turn up at an interview covering your body with the wrong sort of clothes and you won’t get the job. Go to a fancy dress party in the clothes that got you the job and you will probably feel really uncomfortable the moment you see that everyone else is wearing costumes. Clothes are about decorating our bodies as much as covering them.

Our urge to decorate our bodies certainly seems to be a deeply rooted part of what makes us human. After all, anthropologists consider finds like ancient beads as the earliest indications of humanity evolving from apehood. It is taken as evidence that there really was someone ‘in there’ back then. Body painting is used as another sign of our emerging humanity. We still paint our bodies millennia later too. Don’t think we’re only talking about children getting their faces painted – grownups do it too, as the vast make-up industry and the popularity of tattoos show. We put shiny metal and stones around our necks and on our hands too.

The fashion urge

Whatever is going on in our heads, clearly the robots are missing something. Even in the movies the intelligent ones rarely feel the need to decorate their bodies. R2D2? C3PO? Wall-E? The exceptions are the ones created specifically to pass themselves off as human, like in Blade Runner.

You can of course easily program a robot to ‘want’ to decorate itself, or to refuse to leave its bedroom unless it has managed to drape some cloth over its body and shiny wire round its neck, but if it was just following a programmed rule would that be the same as when a human wears clothes? Would it be evidence of ‘someone in there’? Presumably not!

We do it because of an inner need to conform more than an inner need to wear a particular thing. That is what fashion is really all about. Perhaps programming an urge to copy others would be a start. In Wall-E, the robot shows early signs of this as he tries to copy what he sees the humans doing in the old films he watches. At one point he even uses a hubcap as a prop hat for a dance. Human decoration may have started as a part of rituals too.

Where to now?

Is this need to decorate our bodies something special, something linked to what makes us human? Should we be working on what might lead to robots doing something similar of their own accord? When archaeologists are hunting through the rubble in thousands of years’ time, will there be something other than beads that would confirm their robot equivalent to self-awareness? If robots do start to decorate and cover up their bodies because they want to rather than because it was what some God-like programmer coded them to do, surely something special will have happened. Perhaps that will be the point when the machines have to leave their Garden of Eden too.

Paul Curzon, Queen Mary University of London (from the archive)
